Re: [PATCH 2/2] mm/slub: don't use reserved highatomic pageblock for optimistic try
From: Michal Hocko
Date: Mon Aug 28 2017 - 09:08:38 EST
On Mon 28-08-17 13:29:29, Vlastimil Babka wrote:
> On 08/28/2017 03:11 AM, js1304@xxxxxxxxx wrote:
> > From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> >
> > High-order atomic allocations are hard to satisfy since we cannot
> > reclaim anything in this context. So we reserve a pageblock for
> > this kind of request.
> >
> > In SLUB, we try to allocate a page of higher order than actually
> > needed in order to get the best performance. If this optimistic
> > attempt is made with GFP_ATOMIC, alloc_flags will include
> > ALLOC_HARDER and the pageblock reserved for high-order atomic
> > allocations can be used. Moreover, if the request succeeds, it can
> > reserve a MIGRATE_HIGHATOMIC pageblock to prepare for further
> > requests. Using a MIGRATE_HIGHATOMIC pageblock here is bad for
> > fragmentation management, because unreserving the pageblock
> > unconditionally sets its migratetype to the request's migratetype
> > without considering the migratetype of the pages already used in
> > the pageblock.
> >
> > This is not what we intend, so fix it by unconditionally setting
> > __GFP_NOMEMALLOC so that ALLOC_HARDER is not set.
>
> I wonder if it would be more robust to strip GFP_ATOMIC from alloc_gfp.
> E.g. __GFP_NOMEMALLOC does seem to prevent ALLOC_HARDER, but not
> ALLOC_HIGH. Or maybe we should adjust the __GFP_NOMEMALLOC
> implementation and document it more thoroughly? CC Michal Hocko
Yeah, __GFP_NOMEMALLOC is rather inconsistent. AFAIK it was added to
override __GFP_MEMALLOC resp. PF_MEMALLOC. In this particular case I
would agree that dropping __GFP_HIGH and __GFP_ATOMIC would be more
precise. I am not sure we want to touch the existing semantics of
__GFP_NOMEMALLOC, though. That would require auditing all the existing
users (and something tells me that quite a few of them will turn out to
be incorrect...).
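To illustrate what I mean (completely untested, just a sketch of that
alternative), the optimistic attempt in allocate_slab() could simply
clear those flags along with __GFP_DIRECT_RECLAIM:

	alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
	if (oo_order(oo) > oo_order(s->min)) {
		/*
		 * This is an opportunistic attempt at a larger order than
		 * strictly needed, so do not reclaim, do not dip into the
		 * atomic reserves and do not touch the highatomic
		 * pageblocks. Falling back to the minimum order is fine.
		 */
		alloc_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_HIGH |
			       __GFP_ATOMIC);
	}

That would also sidestep the ALLOC_HIGH question, because ALLOC_HIGH is
derived directly from __GFP_HIGH and __GFP_NOMEMALLOC alone does not
prevent it.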
> Also, were these 2 patches done via code inspection, or did you notice
> suboptimal behavior which got fixed? Thanks.
The patch description is not very clear to me either, but I guess that
Joonsoo sees too many larger-order pages backing slab objects when the
system is not under heavy memory pressure, and that this increases
internal fragmentation?
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> > ---
> > mm/slub.c | 6 ++----
> > 1 file changed, 2 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index e1e442c..fd8dd89 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1579,10 +1579,8 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
> > */
> > alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
> > if (oo_order(oo) > oo_order(s->min)) {
> > - if (alloc_gfp & __GFP_DIRECT_RECLAIM) {
> > - alloc_gfp |= __GFP_NOMEMALLOC;
> > - alloc_gfp &= ~__GFP_DIRECT_RECLAIM;
> > - }
> > + alloc_gfp |= __GFP_NOMEMALLOC;
> > + alloc_gfp &= ~__GFP_DIRECT_RECLAIM;
> > }
> >
> > page = alloc_slab_page(s, alloc_gfp, node, oo);
> >
--
Michal Hocko
SUSE Labs