Re: [RFC][PATCH 0/7] re-shrink 'struct page' when SLUB is on.
From: Andrew Morton
Date: Wed Dec 18 2013 - 19:41:15 EST
On Wed, 18 Dec 2013 16:24:15 -0800 Dave Hansen <dave@xxxxxxxx> wrote:
> On 12/17/2013 07:17 AM, Christoph Lameter wrote:
> > On Mon, 16 Dec 2013, Dave Hansen wrote:
> >
> >> I'll do some testing and see if I can coax out any delta from the
> >> optimization myself. Christoph went to a lot of trouble to put this
> >> together, so I assumed that he had a really good reason, although the
> >> changelogs don't really mention any.
> >
> > The cmpxchg on the struct page avoids disabling interrupts etc and
> > therefore simplifies the code significantly.
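
For reference, the two paths being compared look something like this -- a
paraphrased sketch of mm/slub.c's cmpxchg_double_slab() from around this
time, not the exact code.  The fast path updates page->freelist and
page->counters with a single cmpxchg_double(); the fallback disables
interrupts and takes the per-page slab lock:

static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
		void *freelist_old, unsigned long counters_old,
		void *freelist_new, unsigned long counters_new)
{
#if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
    defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
	if (s->flags & __CMPXCHG_DOUBLE) {
		/* fast path: one double-word cmpxchg, no irq disabling */
		if (cmpxchg_double(&page->freelist, &page->counters,
				   freelist_old, counters_old,
				   freelist_new, counters_new))
			return true;
	} else
#endif
	{
		unsigned long flags;

		/* fallback: interrupts off plus the per-page bit spinlock */
		local_irq_save(flags);
		slab_lock(page);
		if (page->freelist == freelist_old &&
		    page->counters == counters_old) {
			page->freelist = freelist_new;
			page->counters = counters_new;
			slab_unlock(page);
			local_irq_restore(flags);
			return true;
		}
		slab_unlock(page);
		local_irq_restore(flags);
	}
	return false;
}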
> >
> >> I honestly can't imagine that a cmpxchg16 is going to be *THAT* much
> >> cheaper than a per-page spinlock. The contended case of the cmpxchg is
> >> way more expensive than spinlock contention for sure.
> >
> > Make sure slub does not set __CMPXCHG_DOUBLE in the kmem_cache flags
> > and it will fall back to spinlocks if you want to do a comparison. Most
> > non x86 arches will use that fallback code.
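
The flag is decided once per cache at creation time, so forcing it off
there is enough to get the fallback everywhere.  Roughly (paraphrased from
kmem_cache_open(); the exact condition may differ):

#if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
    defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
	if (system_has_cmpxchg_double() && (s->flags & SLAB_DEBUG_FLAGS) == 0)
		/* Enable fast mode */
		s->flags |= __CMPXCHG_DOUBLE;
#endif

Commenting out the s->flags |= __CMPXCHG_DOUBLE line is enough to force
the spinlock path on every cache.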
>
>
> I did four tests.  The first workload allocs a bunch of stuff, then
> frees it all; I ran it with both the cmpxchg-enabled 64-byte 'struct
> page' and the 48-byte one that is supposed to fall back to a spinlock.
> I confirmed the 'struct page' size in both cases by looking at dmesg.
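
A minimal module sketch of that kind of workload (not the actual test
harness; NR_OBJS and OBJ_SIZE are made up) would be:

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/ktime.h>

#define NR_OBJS		(1 << 20)	/* hypothetical sizes */
#define OBJ_SIZE	256

static int __init slub_bench_init(void)
{
	void **objs;
	ktime_t start;
	int i;

	objs = vmalloc(NR_OBJS * sizeof(void *));
	if (!objs)
		return -ENOMEM;

	/* alloc a pile of objects, then free them all, and time it */
	start = ktime_get();
	for (i = 0; i < NR_OBJS; i++)
		objs[i] = kmalloc(OBJ_SIZE, GFP_KERNEL);
	for (i = 0; i < NR_OBJS; i++)
		kfree(objs[i]);
	pr_info("alloc+free took %lld ns\n",
		ktime_to_ns(ktime_sub(ktime_get(), start)));

	vfree(objs);
	return 0;
}

static void __exit slub_bench_exit(void) { }

module_init(slub_bench_init);
module_exit(slub_bench_exit);
MODULE_LICENSE("GPL");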
>
> Essentially, I see no worthwhile benefit from using the double-cmpxchg
> over the spinlock. In fact, the increased cache footprint makes it
> *substantially* worse when doing a tight loop.
>
> Unless somebody can find some holes in this, I think we have no choice
> but to unset the HAVE_ALIGNED_STRUCT_PAGE config option and revert to
> the non-cmpxchg code, at least for now.
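
For reference, CONFIG_HAVE_ALIGNED_STRUCT_PAGE is what forces the extra
alignment in the first place: with it set, include/linux/mm_types.h aligns
the structure so that cmpxchg_double() can be used on the freelist/counters
pair (sketch from memory, not the exact text):

struct page {
	/* ... fields ... */
}
/*
 * Force double-word alignment so atomic double-word ops (cmpxchg_double)
 * work on the freelist/counters pair.  SLUB makes use of this.
 */
#ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
	__aligned(2 * sizeof(unsigned long))
#endif
;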
>
So your scary patch series which shrinks struct page while retaining
the cmpxchg_double() might reclaim most of this loss?