Re: [RFC][PATCH 2/3] mm: slab: move around slab ->freelist for cmpxchg

From: Dave Hansen
Date: Thu Dec 12 2013 - 14:40:26 EST


On 12/12/2013 09:46 AM, Christoph Lameter wrote:
> On Wed, 11 Dec 2013, Dave Hansen wrote:
>> The write-argument to cmpxchg_double() must be 16-byte aligned.
>> We used to align 'struct page' itself in order to guarantee this,
>> but that wastes 8-bytes per page. Instead, we take 8-bytes
>> internal to the page before page->counters and move freelist
>> between there and the existing 8-bytes after counters. That way,
>> no matter how 'struct page' itself is aligned, we can ensure that
>> we have a 16-byte area with which to do this cmpxchg.
>
> Well this adds additional branching to the fast paths.

I don't think it inherently *HAS* to. The branching here really comes
from swapping the _order_ of the arguments to the cmpxchg() since their
order in memory changes. Essentially, we do:

| flags | freelist | counters | |
| flags | | counters | freelist |

I did this so I wouldn't have to make a helper for ->counters. But, if
we also move counters around, we can do:

| flags | counters | freelist | |
| flags | | counters | freelist |

I believe we can do that all with plain pointer arithmetic and masks so
that it won't cost any branches.
