Re: [RFC PATCH 0/2] mm: fix OOMs for binding workloads to movable zone only node
From: Feng Tang
Date: Fri Nov 06 2020 - 04:09:09 EST
On Fri, Nov 06, 2020 at 09:10:26AM +0100, Michal Hocko wrote:
> > > > The incoming parameter nodemask is NULL, and the function will first try the
> > > > cpuset nodemask (1 here), but the zoneidx granted is only 2, which makes the
> > > > ac's preferred zone NULL. So it goes into __alloc_pages_slowpath(), which
> > > > will first set the nodemask back to NULL, and this time it gets a preferred
> > > > zone: zone DMA32 from node 0. The following get_page_from_freelist() will
> > > > allocate one page from that zone.
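(For readers following along: condensed from mm/page_alloc.c around v5.9,
so details may differ between kernel versions, prepare_alloc_pages()
substitutes the cpuset mask on the fast path, and __alloc_pages_nodemask()
restores the caller's NULL nodemask before calling the slow path:)

	/* prepare_alloc_pages(): the fast path uses the cpuset's mems */
	if (cpusets_enabled()) {
		*alloc_mask |= __GFP_HARDWALL;
		if (!in_interrupt() && !ac->nodemask)
			ac->nodemask = &cpuset_current_mems_allowed;
		else
			*alloc_flags |= ALLOC_CPUSET;
	}

	/*
	 * Back in __alloc_pages_nodemask(), after the first attempt fails:
	 * restore the original nodemask, which may have been replaced with
	 * &cpuset_current_mems_allowed above. It is NULL again here, so
	 * first_zones_zonelist() in the slow path can pick zone DMA32 of
	 * node 0.
	 */
	ac.nodemask = nodemask;
	page = __alloc_pages_slowpath(alloc_mask, order, &ac);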
> > >
> > > I do not follow. Both hot and slow paths of the allocator set
> > > ALLOC_CPUSET or emulate it by mems_allowed when cpusets are enabled
> > > IIRC. This is later enforced in get_page_from_freelist(). There are some
> > > exceptions when the allocating process can run away from its cpusets -
> > > e.g. IRQs, OOM victims and a few other cases, but definitely not a random
> > > allocation. There might be some subtle details that have changed, or that
> > > I have forgotten.
> >
> > Yes, I was confused too. IIUC, the key check inside get_page_from_freelist()
> > is:
> >
> > 	if (cpusets_enabled() &&
> > 		(alloc_flags & ALLOC_CPUSET) &&
> > 		!__cpuset_zone_allowed(zone, gfp_mask))
> >
> > In our case (where a kernel page got allocated), the first 2 conditions are
> > true, and the only place __cpuset_zone_allowed() can return true is the
> > check against the parent cpuset's nodemask:
> >
> > 	cs = nearest_hardwall_ancestor(task_cs(current));
> > 	allowed = node_isset(node, cs->mems_allowed);
> >
> > This will override the ALLOC_CPUSET check.
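For completeness, __cpuset_zone_allowed() is a thin wrapper around
__cpuset_node_allowed(), which looks roughly like this (condensed from
kernel/cgroup/cpuset.c around v5.9, with the locking trimmed):

	bool __cpuset_node_allowed(int node, gfp_t gfp_mask)
	{
		struct cpuset *cs;

		if (in_interrupt())
			return true;
		if (node_isset(node, current->mems_allowed))
			return true;
		/* OOM victims may allocate anywhere */
		if (unlikely(tsk_is_oom_victim(current)))
			return true;
		/* a hardwall request stops here */
		if (gfp_mask & __GFP_HARDWALL)
			return false;
		/* let a dying task have memory */
		if (current->flags & PF_EXITING)
			return true;

		/* !hardwall and node outside mems_allowed: scan up cpusets */
		cs = nearest_hardwall_ancestor(task_cs(current));
		return node_isset(node, cs->mems_allowed);
	}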
>
> Yes, and this is OK because that is the defined hierarchical semantics of
> cpusets, which applies to any !hardwalled allocation. Cpusets are quite
> non-intuitive. Re-reading the previous discussion, I realize that by trying
> not to go into those details I might have misled you. Let me try again and
> clarify that now.
>
> I was talking in the context of the patch you are proposing, and that is a
> clear violation of the cpuset isolation, especially for hardwalled setups,
> because it allows spilling over to other nodes. That shouldn't be possible
> except for a few exceptions which shouldn't generate a lot of allocations
> (e.g. an OOM victim exiting, IRQ context).
I agree my patch is pretty hacky. As said in the cover letter, I wanted to
bring up this use case and get suggestions on how to support it.
> What I was not talking about, and should have been more clear about, is
> that without hardwalled or exclusive nodes the isolation is best effort
> only for most kernel allocation requests (or more specifically those
> without __GFP_HARDWALL). Your patch doesn't distinguish between those and
> other non-movable allocations, and effectively allows a runaway even for
> hardwalled allocations which are not movable. Those can be controlled by
> userspace very easily.
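(For reference, nearest_hardwall_ancestor() keeps walking up until it finds
a mem_exclusive or mem_hardwall cpuset, so without either flag set, a
!__GFP_HARDWALL allocation is effectively checked against an ancestor's,
possibly the root's, mems_allowed. Condensed from kernel/cgroup/cpuset.c:)

	/* the nearest mem_exclusive/mem_hardwall ancestor, else the root */
	static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
	{
		while (!(is_mem_exclusive(cs) || is_mem_hardwall(cs)) &&
		       parent_cs(cs))
			cs = parent_cs(cs);
		return cs;
	}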
You are right, there are quite a few types of page allocation failures here.
The callstack in patch 2/2 is a GFP_HIGHUSER allocation from pipe_write(),
and there are more types of kernel allocation requests which will get
blocked by different checks. My RFC patch just took the easiest one-for-all
hack to let them all bypass it.
Do we need to tackle them case by case?
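For reference, GFP_HIGHUSER is a hardwalled request without __GFP_MOVABLE
(per include/linux/gfp.h), so on x86_64 gfp_zone() caps it at ZONE_NORMAL
and it can never be served from a movable-only node:

	#define GFP_USER	(__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
	#define GFP_HIGHUSER	(GFP_USER | __GFP_HIGHMEM)
	#define GFP_HIGHUSER_MOVABLE	(GFP_HIGHUSER | __GFP_MOVABLE)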
> I hope this clarifies it a bit more and sorry if I mislead you.
Yes, it does, and many thanks for the clarification!
- Feng
> --
> Michal Hocko
> SUSE Labs