Re: [RFC PATCH 3/3] sched/fair: Use different cachelines for readers and writers of load_avg
From: Peter Zijlstra
Date: Mon Nov 30 2015 - 17:29:48 EST
On Mon, Nov 30, 2015 at 02:13:32PM -0500, Waiman Long wrote:
> >This would only work if the structure itself is allocated with cacheline
> >alignment, and looking at sched_create_group(), we use a plain kzalloc()
> >for this, which doesn't guarantee any sort of alignment beyond machine
> >word size IIRC.
>
> With a RHEL 6 derived .config file, the size of the task_group structure was
> 460 bytes on a 32-bit x86 kernel. Adding a ____cacheline_aligned tag
> increased the size to 512 bytes, so it did make the structure a multiple of
> the cacheline size. With both slub and slab, the task_group pointers
> returned by kzalloc() in sched_create_group() were all multiples of 0x200,
> so they were properly aligned for the ____cacheline_aligned tag to work.
Not sure we should rely on sl*b doing the right thing here.
KMALLOC_MIN_ALIGN is explicitly set to sizeof(long long). If you want
explicit alignment, you should use KMEM_CACHE().
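
For illustration, a minimal (untested) sketch of what that would look like;
the task_group_cache name and the init/alloc helpers below are only for the
example, not from the patch:

    #include <linux/slab.h>
    #include <linux/init.h>

    /*
     * Dedicated cache for task_group allocations. KMEM_CACHE() passes
     * __alignof__(struct task_group) to kmem_cache_create(), so a
     * ____cacheline_aligned annotation on the structure is honoured for
     * every allocation, instead of relying on kmalloc() alignment.
     */
    static struct kmem_cache *task_group_cache __read_mostly;

    void __init task_group_cache_init(void)
    {
    	task_group_cache = KMEM_CACHE(task_group, 0);
    }

    struct task_group *sched_alloc_task_group(void)
    {
    	/* zeroed, cacheline-aligned replacement for the plain kzalloc() */
    	return kmem_cache_zalloc(task_group_cache, GFP_KERNEL);
    }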