Re: [RFC PATCH 0/6] mm/memcontrol: Make memcg limits tier-aware

From: Joshua Hahn

Date: Tue Mar 24 2026 - 10:58:27 EST


On Tue, 24 Mar 2026 16:00:34 +0530 Donet Tom <donettom@xxxxxxxxxxxxx> wrote:

> Hi Joshua
>
> On 2/24/26 4:08 AM, Joshua Hahn wrote:
> > Memory cgroups provide an interface that allows multiple workloads on a
> > host to co-exist, and establish both weak and strong memory isolation
> > guarantees. For large servers and small embedded systems alike, memcgs
> > provide an effective way to provide a baseline quality of service for
> > protected workloads.
> >
> > This works, because for the most part, all memory is equal (except for
> > zram / zswap). Restricting a cgroup's memory footprint restricts how
> > much it can hurt other workloads competing for memory. Likewise, setting
> > memory.low or memory.min limits can provide weak and strong guarantees
> > to the performance of a cgroup.
> >
> > However, on systems with tiered memory (e.g. CXL / compressed memory),
> > the quality-of-service guarantees that memcg limits enforce become less
> > effective, as memcg has no awareness of the physical location of its
> > charged memory. In other words, a workload that is well-behaved within
> > its memcg limits may still be hurting the performance of other
> > well-behaving workloads on the system by hogging more than its
> > "fair share" of toptier memory.
> >
> > Introduce tier-aware memcg limits, which scale memory.low/high to
> > reflect the ratio of toptier:total memory the cgroup has access to.
> >
> > Take the following scenario as an example:
> > On a host with 3:1 toptier:lowtier, say 150G toptier and 50G lowtier,
> > setting a cgroup's limits to:
> > memory.min: 15G
> > memory.low: 20G
> > memory.high: 40G
> > memory.max: 50G
> >
> > will be enforced at the toptier as:
> > memory.min: 15G
> > memory.toptier_low: 15G (20 * 150/200)
> > memory.toptier_high: 30G (40 * 150/200)
> > memory.max: 50G
>
>

Hello Donet,

Thank you for reviewing the series! I hope you are doing well.

> Currently, the high and low thresholds are adjusted based on the ratio
> of top-tier to total memory. One concern I see is that if the working
> set size exceeds the top-tier high threshold, it could lead to frequent
> demotions and promotions. Instead, would it make sense to introduce a
> tunable knob to configure the top-tier high threshold?

Yes, this is true. It is a concern that I share, and I think that
adding a tunable knob could be helpful. The other side of the question is
whether users already face too many tunables, with min / low / high /
max. I'm hoping to reach a consensus on this at LSFMMBPF; I hope we can
talk about it there!

The other way to approach this is to throttle promotions and demotions
when workloads are thrashing. Personally I prefer this approach, although
it isn't mutually exclusive with adding more knobs.

> Another concern is that if the lower-tier memory size is very large, the
> cgroup may end up getting only a small portion of higher-tier memory.

I think the issue you mentioned above is a bigger problem.

If the lower tier memory is large and the toptier memory is small, then it
makes toptier memory an even more constrained resource, so splitting it
fairly among the cgroups becomes an even bigger issue. Remember, we're
limiting workloads' toptier memory usage because other workloads need it
too; if we let one cgroup use more toptier memory, it has to come
from another cgroup's share.

Thanks again. Please let me know if you have any other concerns; I'm
excited to talk about this more as well!

Joshua