Re: SLOB lockup (was: Re: [tip:core/locking] lockdep: annotate reclaim context (__GFP_NOFS), fix SLOB)
From: Nick Piggin
Date: Mon Mar 16 2009 - 06:00:51 EST
On Monday 16 March 2009 01:56:00 Matt Mackall wrote:
> On Sun, 2009-03-15 at 20:06 +1100, Nick Piggin wrote:
> > On Sunday 15 March 2009 17:48:18 Ingo Molnar wrote:
> > > > Cc: Nick Piggin <npiggin@xxxxxxx>
> > > > Cc: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
> > > > LKML-Reference: <20090128135457.350751756@xxxxxxxxx>
> > > > Signed-off-by: Ingo Molnar <mingo@xxxxxxx>
> > >
> > > and with this fixed, and with SLOB now being tested in -tip, the
> > > new lockdep assert attached below (followed by a real lockup)
> > > pops up.
> > >
> > > Seems like a genuine SLOB bug, probably present upstream as
> > > well.
> >
> > Hmmf. debugobjects calls back into the slab allocator from the page
> > allocator. The following patch would improve SLOB, but I think it
> > would be a good idea to avoid a dependency in that direction. Can
> > debugobjects defer this freeing?
>
> Yeah. I don't think any of the allocators are designed with recursion in
> mind. That the others aren't (visibly) failing here is blind luck.
>
> Nick, not really sure what your patch is accomplishing. It narrows the
> lock window, but it doesn't eliminate it. But I think we can take the
> page allocator case out from under the lock entirely, no?
Oh, that is exactly what it was trying to accomplish, but it wasn't
tested (it was just for illustration).
I think Thomas's deferred freeing work should be a good way to fix this
problem, but of course reducing locking in SLOB doesn't hurt in the
slightest either ;)
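For anyone following along, "deferred freeing" here roughly means: the
debug hook, which can run from inside the page allocator with allocator
locks already held, never frees its tracking object directly; it just
queues it, and the real free happens later from a context that holds no
allocator locks. A purely illustrative userspace sketch of the pattern
(my own toy example, nothing to do with Thomas's actual patches; the
names defer_free, drain_deferred and struct deferred_obj are made up):

/*
 * Illustration only, not kernel code.  The list node is embedded in
 * the object, so queueing needs no allocation at all.
 */
#include <stdlib.h>
#include <stdio.h>

struct deferred_obj {
	struct deferred_obj *next;	/* embedded: queueing allocates nothing */
	char payload[32];
};

static struct deferred_obj *deferred_list;

/* Dangerous context (allocator locks may be held): queue only. */
static void defer_free(struct deferred_obj *obj)
{
	if (!obj)
		return;
	obj->next = deferred_list;
	deferred_list = obj;
}

/* Safe context, no allocator locks held: do the real freeing. */
static void drain_deferred(void)
{
	while (deferred_list) {
		struct deferred_obj *obj = deferred_list;

		deferred_list = obj->next;
		free(obj);
	}
}

int main(void)
{
	defer_free(calloc(1, sizeof(struct deferred_obj)));
	defer_free(calloc(1, sizeof(struct deferred_obj)));
	drain_deferred();
	printf("deferred objects freed\n");
	return 0;
}

In the kernel the drain would of course have to run from somewhere that
is known not to hold allocator locks (and the list would need its own
locking), but that is the shape of it.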
> diff -r 8e0f1cee0a71 mm/slob.c
> --- a/mm/slob.c Sat Jan 24 15:41:13 2009 -0600
> +++ b/mm/slob.c Sun Mar 15 09:50:42 2009 -0500
> @@ -387,8 +387,6 @@
>  	sp = (struct slob_page *)virt_to_page(block);
>  	units = SLOB_UNITS(size);
> 
> -	spin_lock_irqsave(&slob_lock, flags);
> -
>  	if (sp->units + units == SLOB_UNITS(PAGE_SIZE)) {
>  		/* Go directly to page allocator. Do not pass slob allocator */
>  		if (slob_page_free(sp))
This doesn't work, because you have to hold the lock over the test;
otherwise another thread can concurrently meddle with sp->units.
For that matter, my previous patch was buggy too: aside from the obvious
problem Ingo pointed out, I also unlocked before removing the page from
the freelist.
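To spell out the ordering I mean, here is a standalone userspace analogy
(toy names only: fake_page, fake_lock, PAGE_UNITS; a pthread mutex
stands in for slob_lock, and this is not the kernel code itself):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_UNITS 64			/* stands in for SLOB_UNITS(PAGE_SIZE) */

struct fake_page {
	int units;			/* free units, like sp->units */
	bool on_freelist;
};

static pthread_mutex_t fake_lock = PTHREAD_MUTEX_INITIALIZER;	/* "slob_lock" */

static void fake_free(struct fake_page *p, int units)
{
	pthread_mutex_lock(&fake_lock);

	if (p->units + units == PAGE_UNITS) {
		/*
		 * Last fragment: the test and the freelist removal must
		 * both happen under the lock, otherwise a concurrent
		 * freer can change p->units or the freelist under us.
		 */
		if (p->on_freelist)
			p->on_freelist = false;
		pthread_mutex_unlock(&fake_lock);
		/* Only now call out to the "page allocator". */
		printf("page empty, hand it back to the page allocator\n");
		return;
	}

	p->units += units;		/* page still partially in use */
	pthread_mutex_unlock(&fake_lock);
}

int main(void)
{
	struct fake_page p = { .units = PAGE_UNITS - 8, .on_freelist = true };

	fake_free(&p, 8);		/* takes the free-the-whole-page path */
	return 0;
}

The only point is that everything touching p->units and the freelist
stays inside the critical section, and the unlock happens just before
the call out to the page allocator, which is what the patch below does
to slob_free().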
The patch below should be pretty close to correct ;)
--
Don't hold the SLOB lock while freeing the page back to the page
allocator. Reduces lock hold time.
Signed-off-by: Nick Piggin <npiggin@xxxxxxx>
---
mm/slob.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c
+++ linux-2.6/mm/slob.c
@@ -393,10 +393,11 @@ static void slob_free(void *block, int s
 		/* Go directly to page allocator. Do not pass slob allocator */
 		if (slob_page_free(sp))
 			clear_slob_page_free(sp);
+		spin_unlock_irqrestore(&slob_lock, flags);
 		clear_slob_page(sp);
 		free_slob_page(sp);
 		free_page((unsigned long)b);
-		goto out;
+		return;
 	}
 
 	if (!slob_page_free(sp)) {
--