Re: [PATCH] mm/vmscan: respect cpuset policy during page demotion

From: Yang Shi
Date: Fri Oct 28 2022 - 13:24:10 EST


On Thu, Oct 27, 2022 at 10:55 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>
> Feng Tang <feng.tang@xxxxxxxxx> writes:
>
> > On Thu, Oct 27, 2022 at 10:55:58AM -0700, Yang Shi wrote:
> >> On Thu, Oct 27, 2022 at 12:12 AM Feng Tang <feng.tang@xxxxxxxxx> wrote:
> >> >
> >> > On Thu, Oct 27, 2022 at 01:57:52AM +0800, Yang Shi wrote:
> >> > > On Wed, Oct 26, 2022 at 8:59 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> >> > [...]
> >> > > > > > This all can get quite expensive, so the primary question is: does the
> >> > > > > > existing behavior generate any real issues, or is this more of a
> >> > > > > > correctness exercise? I mean it certainly is not great to demote to an
> >> > > > > > incompatible NUMA node, but are there any reasonable configurations where
> >> > > > > > the demotion target node is explicitly excluded from memory
> >> > > > > > policy/cpuset?
> >> > > > >
> >> > > > > We haven't got a customer report on this, but quite a few customers
> >> > > > > use cpuset to bind specific memory nodes to a Docker container (you've
> >> > > > > helped us solve an OOM issue in such cases), so I think it's practical
> >> > > > > to respect the cpuset semantics as much as we can.
> >> > > >
> >> > > > Yes, it is definitely better to respect cpusets and all local memory
> >> > > > policies. There is no dispute there. The thing is whether this is really
> >> > > > worth it. How often would cpusets (or policies in general) go actively
> >> > > > against demotion nodes (i.e. exclude those nodes from their allowed node
> >> > > > mask)?
> >> > > >
> >> > > > I can imagine workloads which wouldn't like to get their memory demoted
> >> > > > for some reason, but wouldn't it be more practical to tell that
> >> > > > explicitly (e.g. via prctl) rather than to configure cpusets/memory
> >> > > > policies for this?
> >> > > >
> >> > > > > Your concern about the cost makes sense! Some raw ideas are:
> >> > > > > * if shrink_folio_list() is called by kswapd, the folios come from
> >> > > > > the same per-memcg lruvec, so only one check is enough
> >> > > > > * if not from kswapd, e.g. called from madvise or DAMON code, we can
> >> > > > > cache the memcg, and if the next folio's memcg is the same as the
> >> > > > > cached one, we reuse its result. And due to locality, the real
> >> > > > > check is rarely performed.
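
IIUC, the caching idea would look roughly like the sketch below. This is only
an illustration, not existing code; demotion_allowed() and target_nid stand in
for whatever the real nodemask check ends up being:

        struct mem_cgroup *last_memcg = NULL;
        bool last_allowed = false;
        struct folio *folio;

        list_for_each_entry(folio, folio_list, lru) {
                struct mem_cgroup *memcg = folio_memcg(folio);

                /* Only redo the real check when the memcg changes. */
                if (memcg != last_memcg) {
                        last_allowed = demotion_allowed(memcg, target_nid);
                        last_memcg = memcg;
                }

                /* last_allowed then feeds into the demote-or-not decision. */
        }

With kswapd all folios on the list come from one per-memcg lruvec, so the real
check would effectively run once per batch.
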
> >> > > >
> >> > > > memcg is not the expensive part of the thing. You need to get from page
> >> > > > -> all vmas::vm_policy -> mm -> task::mempolicy
> >> > >
> >> > > Yeah, on the same page with Michal. Figuring out the mempolicy from a page
> >> > > seems quite expensive, and the correctness can't be guaranteed since the
> >> > > mempolicy could be set per-thread, and going from mm to task (mm->owner)
> >> > > depends on CONFIG_MEMCG, so it doesn't work for !CONFIG_MEMCG.
> >> >
> >> > Yes, you are right. Our "working" pseudo code for mem policy looks like
> >> > what Michal mentioned; it can't work for all cases, but tries to
> >> > enforce it whenever possible:
> >> >
> >> > static bool __check_mpol_demotion(struct folio *folio, struct vm_area_struct *vma,
> >> >                                   unsigned long addr, void *arg)
> >> > {
> >> >         bool *skip_demotion = arg;
> >> >         struct mempolicy *mpol;
> >> >         int nid, dnid;
> >> >         bool ret = true;
> >> >
> >> >         mpol = __get_vma_policy(vma, addr);
> >> >         if (!mpol) {
> >> >                 struct task_struct *task;
> >> >
> >> >                 if (vma->vm_mm)
> >> >                         task = vma->vm_mm->owner;
> >>
> >> But this task may not be the task you want IIUC. For example, the
> >> process has two threads, A and B, with different mempolicies. vmscan
> >> is trying to demote a page belonging to thread A, but the task may
> >> point to thread B, so you actually get the wrong mempolicy.
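
To make the two-thread case concrete, a minimal userspace sketch (node numbers
are arbitrary; build with -pthread and link with -lnuma):

#include <numaif.h>
#include <pthread.h>

/* Each thread sets its own task mempolicy; both threads share one mm. */
static void *worker(void *arg)
{
        unsigned long nodemask = 1UL << (unsigned long)arg;

        /* Per-thread policy: one thread binds to node 0, the other to node 2. */
        set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask));
        /* ... allocate and touch memory here ... */
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, worker, (void *)0UL);
        pthread_create(&b, NULL, worker, (void *)2UL);
        /*
         * mm->owner points at one task (typically the group leader, which
         * never called set_mempolicy here), so deriving the policy from
         * vma->vm_mm->owner can pick up the wrong thread's mempolicy, or
         * no thread-specific mempolicy at all.
         */
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}
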
> >
> > Yes, this is a valid concern! We don't have a good solution for this.
> > For the memory policy, we may only handle the per-VMA policy for now,
> > whose cost is relatively low, as a best-effort try.
>
> Yes. The solution isn't perfect, especially for multi-threaded
> processes with thread-specific memory policies. But the proposed code
> above can support the most common case at least, that is, running the
> workload with `numactl`.

It's not only multiple threads; it may also be broken for shared pages.
When you do the rmap walk, you may get multiple contradicting mempolicies,
so which one would you like to obey?
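
To illustrate, a callback in the style of __check_mpol_demotion() above could
detect the conflict during the rmap walk, but could not resolve it (rough
sketch only, all names made up):

struct mpol_walk_state {
        struct mempolicy *first;        /* policy from the first mapping seen */
        bool seen_one;
        bool conflict;                  /* a later mapping disagreed */
};

static bool __check_mpol_conflict(struct folio *folio, struct vm_area_struct *vma,
                                  unsigned long addr, void *arg)
{
        struct mpol_walk_state *state = arg;
        struct mempolicy *mpol = __get_vma_policy(vma, addr);

        if (!state->seen_one) {
                state->first = mpol;
                state->seen_one = true;
        } else if (!mpol_equal(mpol, state->first)) {
                /* Contradicting policies: which one should demotion obey? */
                state->conflict = true;
                return false;           /* stop the rmap walk */
        }

        return true;
}

Whichever policy we pick in that case (the first one seen, the strictest one,
or bailing out on any conflict) is an arbitrary choice.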

TBH I'm not sure whether such a half-baked solution is worth it or not,
at least at this moment. The cost is not cheap, and the gain may not
justify it IMHO.

>
> Best Regards,
> Huang, Ying