Re: [PATCH] memcg: add hierarchical effective limits for v2

From: Shakeel Butt
Date: Thu Feb 06 2025 - 14:09:33 EST


On Thu, Feb 06, 2025 at 04:57:39PM +0100, Michal Koutný wrote:
> Hello Shakeel.
>
> On Wed, Feb 05, 2025 at 02:20:29PM -0800, Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
> > Memcg-v1 exposes hierarchical_[memory|memsw]_limit counters in its
> > memory.stat file which applications can use to get their effective limit
> > which is the minimum of limits of itself and all of its ancestors.
>
> I was a fan of the same idea too [1]. The referenced series also tackles
> change notifications (to make this complete for apps that really want to
> scale based on the actual limit). I stopped liking it when I realized
> there can be hierarchies where the effective value cannot be effectively
> :) determined [2].
>
> > This is pretty useful in environments where cgroup namespace is used
> > and the application does not have access to the full view of the
> > cgroup hierarchy. Let's expose effective limits for memcg v2 as well.
>
> Also, the case for exposing this was never strongly built.
> Why isn't PSI enough in your case?
>

Hi Michal,

Oh, I totally forgot about your series. My use case is not about workloads
dynamically learning how much they can expand and adjusting themselves, but
rather about knowing statically, upfront, what resources they have been
given. More concretely, these are workloads which used to occupy a whole
machine; they ran inside containers, but without limits. At startup, these
workloads would look at machine-level metrics to see how many resources
were available.

Now these workloads are being moved to a multi-tenant environment, but the
machine is still partitioned statically between the workloads. So these
workloads need to know upfront how many resources are allocated to them,
and the way the cgroup hierarchy is set up, that information lives a bit
higher up in the tree than they can see.

I hope this clarifies the motivation behind this change, i.e. the target is
not dynamic load balancing but rather upfront static knowledge.
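
To illustrate (a rough sketch, not part of the patch; it assumes cgroup2 is
mounted at /sys/fs/cgroup and is for illustration only): this is roughly
what an application has to do today to reproduce the v1-style hierarchical
limit, i.e. take the minimum of memory.max over its own cgroup and all of
its ancestors. Inside a cgroup namespace the walk stops at the namespace
root, so limits set on ancestors outside the namespace are simply not
visible, which is why we want the kernel to report the effective value.

#!/usr/bin/env python3
# Rough sketch: compute the effective memory limit the way memcg-v1's
# hierarchical_memory_limit does, i.e. the minimum of memory.max over
# the cgroup and all of its ancestors. Assumes cgroup2 is mounted at
# /sys/fs/cgroup; for illustration only.

import os

ROOT = "/sys/fs/cgroup"

def own_cgroup():
    # On pure cgroup v2, /proc/self/cgroup has a single "0::<path>" line.
    with open("/proc/self/cgroup") as f:
        for line in f:
            if line.startswith("0::"):
                return line.strip()[3:]
    return "/"

def read_max(path):
    # memory.max is "max" (no limit) or a byte count; the root cgroup
    # does not have a memory.max file at all.
    try:
        with open(os.path.join(path, "memory.max")) as f:
            val = f.read().strip()
    except FileNotFoundError:
        return None
    return None if val == "max" else int(val)

def effective_limit():
    limit = None
    path = os.path.join(ROOT, own_cgroup().lstrip("/"))
    # Walk from our own cgroup up to the mount root, taking the minimum.
    # Inside a cgroup namespace this walk stops at the namespace root,
    # so limits set on ancestors outside the namespace stay invisible.
    while True:
        cur = read_max(path)
        if cur is not None:
            limit = cur if limit is None else min(limit, cur)
        if os.path.realpath(path) == ROOT:
            break
        path = os.path.dirname(path)
    return limit            # None means no limit anywhere up the tree

if __name__ == "__main__":
    lim = effective_limit()
    print("effective memory limit:",
          "unlimited" if lim is None else lim)

With effective limits exposed in v2, the application could instead read a
single value from its own (namespaced) cgroup directory rather than
attempting this walk.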

thanks,
Shakeel