Re: [PATCH 1/2] mm/slub: Introduce two counters for the partial objects
From: Pekka Enberg
Date: Fri Aug 07 2020 - 03:26:18 EST
On Thu, Aug 6, 2020 at 3:42 PM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> On 7/2/20 10:32 AM, Xunlei Pang wrote:
> > The node list_lock in count_partial() is held for a long time while
> > iterating over large partial page lists, which can cause a
> > thundering herd effect on list_lock contention, e.g. it causes
> > business response-time jitter when "/proc/slabinfo" is read in our
> > production environments.
> >
> > This patch introduces two counters to maintain the actual number
> > of partial objects dynamically instead of iterating the partial
> > page lists with list_lock held.
> >
> > The new kmem_cache_node counters are: pfree_objects, ptotal_objects.
> > The main operations on them are under list_lock in the slow path, so
> > the performance impact is minimal.
> >
> > Co-developed-by: Wen Yang <wenyang@xxxxxxxxxxxxxxxxx>
> > Signed-off-by: Xunlei Pang <xlpang@xxxxxxxxxxxxxxxxx>
>
> This or similar issues seem to get reported every few months now; the
> last time was here [1], AFAIK. The solution then was to just stop
> counting at some point.
>
> Shall we perhaps add these counters under CONFIG_SLUB_DEBUG then and be done
> with it? If anyone needs the extreme performance and builds without
> CONFIG_SLUB_DEBUG, I'd assume they also don't have userspace programs reading
> /proc/slabinfo periodically anyway?
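
For reference, the scheme the patch describes boils down to roughly
the following sketch (based on the description above, not the actual
diff; partial_counters_add() is a hypothetical helper, and the free
and alloc slow paths would need matching pfree_objects adjustments):

/*
 * Sketch: kmem_cache_node grows two counters that are adjusted
 * whenever the composition of the partial list changes, always
 * under list_lock.
 */
struct kmem_cache_node {
	spinlock_t list_lock;
	unsigned long nr_partial;
	struct list_head partial;
	unsigned long pfree_objects;	/* free objects on partial lists */
	unsigned long ptotal_objects;	/* total objects on partial lists */
	/* ... */
};

/* Called with n->list_lock held when a page is added to the partial
 * list; the inverse runs on removal. */
static void partial_counters_add(struct kmem_cache_node *n,
				 struct page *page)
{
	n->pfree_objects += page->objects - page->inuse;
	n->ptotal_objects += page->objects;
}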
I think we can just default to the counters. After all, if I
understood correctly, we're talking about a period of up to 100 ms
with IRQs disabled when count_partial() is called. As this is
triggerable from user space, that's a performance bug whichever way
you look at it.
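
For comparison, the stall comes from count_partial(), which walks
every page on a node's partial list with list_lock held and IRQs off;
with the counters, the same information becomes a constant-time read.
A simplified sketch (count_partial() is close to the mainline code,
count_partial_free() is a hypothetical illustration):

static unsigned long count_partial(struct kmem_cache_node *n,
				   int (*get_count)(struct page *))
{
	struct page *page;
	unsigned long flags;
	unsigned long x = 0;

	/* O(number of partial pages), all with IRQs disabled -- the
	 * source of the ~100 ms stalls mentioned above. */
	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(page, &n->partial, slab_list)
		x += get_count(page);
	spin_unlock_irqrestore(&n->list_lock, flags);
	return x;
}

/* With the counters, an O(1) lock-free read is enough; slabinfo is
 * statistical anyway, so a racy read is acceptable. */
static unsigned long count_partial_free(struct kmem_cache_node *n)
{
	return READ_ONCE(n->pfree_objects);
}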
Whoever needs to eliminate these counters from the fast path can wrap
them in a CONFIG_MAKE_SLABINFO_EXTREMELY_SLOW option.
So for this patch, with the updated information about the severity of
the problem and the hackbench numbers:
Acked-by: Pekka Enberg <penberg@xxxxxxxxxx>
Christoph, others, any objections?
- Pekka