Re: [PATCH] mm: slowly shrink slabs with a relatively small number of objects
From: Rik van Riel
Date: Fri Aug 31 2018 - 21:27:11 EST
On Fri, 2018-08-31 at 14:31 -0700, Roman Gushchin wrote:
> On Fri, Aug 31, 2018 at 05:15:39PM -0400, Rik van Riel wrote:
> > On Fri, 2018-08-31 at 13:34 -0700, Roman Gushchin wrote:
> >
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index fa2c150ab7b9..c910cf6bf606 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -476,6 +476,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> > >          delta = freeable >> priority;
> > >          delta *= 4;
> > >          do_div(delta, shrinker->seeks);
> > > +
> > > +        if (delta == 0 && freeable > 0)
> > > +                delta = min(freeable, batch_size);
> > > +
> > >          total_scan += delta;
> > >          if (total_scan < 0) {
> > >                  pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
> >
> > I agree that we need to shrink slabs with fewer than
> > 4096 objects, but do we want to put more pressure on
> > a slab the moment it drops below 4096 than we applied
> > when it had just over 4096 objects on it?
> >
> > With this patch, a slab with 5000 objects on it will
> > get 1 item scanned, while a slab with 4000 objects on
> > it will see shrinker->batch or SHRINK_BATCH objects
> > scanned every time.
> >
> > I don't know if this would cause any issues, just
> > something to ponder.
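To put rough numbers on that, assuming the default reclaim
priority (DEF_PRIORITY == 12), shrinker->seeks == DEFAULT_SEEKS (2)
and batch_size == SHRINK_BATCH (128), the two cases work out
roughly as:

        freeable = 5000:  delta = (5000 >> 12) * 4 / 2 = 2    (a couple of objects per pass)
        freeable = 4000:  delta = (4000 >> 12) * 4 / 2 = 0    -> bumped to min(4000, 128) = 128

so the scan pressure jumps the moment freeable drops below
1 << priority.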
>
> Hm, fair enough. So, basically we can always do
>
> delta = max(delta, min(freeable, batch_size));
>
> Does it look better?
Yeah, that looks fine to me.
That will lead to small cgroups having their small
caches reclaimed relatively more quickly than large
caches get reclaimed, but small caches should also
be faster to refill once they are needed again, so
it is probably fine.
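For reference, a minimal sketch of how the suggested form could
slot into the same do_shrink_slab() hunk; since delta is an
unsigned long long while freeable and batch_size are long, the
kernel's type-checked max() would likely need to be spelled
max_t() (or carry an explicit cast), so treat the exact spelling
as an assumption rather than the final patch:

        delta = freeable >> priority;
        delta *= 4;
        do_div(delta, shrinker->seeks);

        /*
         * Always scan at least min(freeable, batch_size) objects, so
         * small caches (where freeable >> priority is 0) still see some
         * pressure, without a sudden jump once freeable drops below
         * 1 << priority.
         */
        delta = max_t(unsigned long long, delta, min(freeable, batch_size));

        total_scan += delta;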
--
All Rights Reversed.