Re: [2.4] page->buffers vanished in journal_try_to_free_buffers()

From: Andrew Morton
Date: Thu Jun 17 2004 - 22:12:37 EST


Marcelo Tosatti <marcelo.tosatti@xxxxxxxxxxxx> wrote:
>
> > tmp (page->buffers) above is NULL. b_this_page is at offset 0x28
> > (the accessed address in the oops). This means that page->buffers
> > was set to NULL by some other routine, which results in the oops.
> >
> > I read the page allocation code
> > (ext3_readpage->block_read_full_page->create_empty_buffers->create_buffers),
> > and it appears that the allocation path cannot leave page->buffers
> > set to NULL. However, I am having difficulty reproducing this and
> > cannot debug further. Can page->buffers be set to NULL somewhere
> > else? Perhaps kswapd and some other thread are racing on the free?
>
> Steve,
>
> Hum, I'm starting to believe we might have an issue here.
>
> Searching lkml archives I find other similar oopses at the same place
> (trying to access 00000028, tmp->b_this_page), as you said.
>
> However, I wonder what other kernel codepath could remove the page's
> buffers under us; the page MUST be locked here. In the backtrace above
> the page is locked by shrink_cache(), and with the page locked, we
> guarantee the VM freeing routines (shrink_cache) won't try to mess
> with the page.
>
> Can you reproduce the oopsen?
>
> Stephen, Andrew, do you have any idea how the buffers could have vanished
> under us with the page locked? That should not be possible.
>
> I don't see how this "page->buffers = NULL" could be caused by a
> hardware problem, which is usually a one- or two-bit flip.
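
For reference, the walk in question looks roughly like this (a sketch
paraphrased from 2.4's fs/jbd/transaction.c, not the verbatim source).
With ->b_this_page at offset 0x28, a NULL page->buffers makes the very
first dereference touch address 00000028:

	struct buffer_head *bh, *tmp;

	J_ASSERT(PageLocked(page));	/* caller holds the page lock */

	bh = page->buffers;		/* NULL here ...              */
	tmp = bh;
	do {
		struct buffer_head *p = tmp;

		tmp = tmp->b_this_page;	/* ... makes this read        */
					/* *(0 + 0x28)                */
		/* per-buffer journal work on p elided */
	} while (tmp != bh);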

It's a bit odd. The page is definitely locked, and definitely had a
non-NULL ->buffers a few tens of instructions beforehand.

Is this an SMP machine?

One possibility is that we died on the second pass around the loop:
page->buffers points at a buffer_head which has a NULL ->b_this_page. But
I cannot suggest how ->b_this_page could have been zapped.
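
Spelling that out on the same sketch (again paraphrased, not verbatim):

	bh = page->buffers;		/* case 1: NULL here ->        */
	tmp = bh;			/*   pass 1 faults below       */
	do {
		struct buffer_head *p = tmp;
					/* case 2: bh->b_this_page was */
					/*   zapped -> pass 2 faults   */
		tmp = tmp->b_this_page;	/* both fault here, at 0x28    */
	} while (tmp != bh);

Both cases fault on the same load at the same offset, so the oops
address alone cannot tell us which pointer was zapped.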
