Re: [PATCH] mm/memcontrol: Add the drop_cache interface for cgroup v2

From: Yafang Shao
Date: Mon Sep 21 2020 - 06:56:36 EST


On Mon, Sep 21, 2020 at 4:12 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Mon 21-09-20 16:02:55, zangchunxin@xxxxxxxxxxxxx wrote:
> > From: Chunxin Zang <zangchunxin@xxxxxxxxxxxxx>
> >
> > In cgroup v1, we have the 'force_empty' interface. This is very
> > useful for userspace to actively release memory. But cgroup v2
> > does not have it.
> >
> > This patch reuses cgroup v1's function, but gives the interface a
> > new name, because I think 'drop_cache' is easier to understand :)
>
> This should really explain a usecase. Global drop_caches is a terrible
> interface and it has caused many problems in the past. People have
> learned to use it as a remedy to any problem they might see and cause
> other problems without realizing that. This is the reason why we even
> log each attempt to drop caches.
>
> I would rather not repeat the same mistake on the memcg level unless
> there is a very strong reason for it.
>

I think we'd better add a comment along these lines above
mem_cgroup_force_empty() to explain why we don't want to expose this
interface in cgroup v2; otherwise people will keep sending this
proposal without any strong reason.
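
Something like the sketch below, perhaps (assuming the current
mem_cgroup_force_empty() signature in mm/memcontrol.c; the exact
wording is of course up to the maintainers):

/*
 * Note for cgroup v2: this is deliberately only exposed as
 * memory.force_empty on the v1 hierarchy.  A drop_caches-style knob
 * tends to be used as a remedy for any problem people might see and
 * ends up causing other problems instead, which is why even the
 * global drop_caches logs every attempt.  Please do not wire this up
 * as a new cgroup v2 interface unless there is a very strong reason
 * for it.
 */
static int mem_cgroup_force_empty(struct mem_cgroup *memcg)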


> > Signed-off-by: Chunxin Zang <zangchunxin@xxxxxxxxxxxxx>
> > ---
> > Documentation/admin-guide/cgroup-v2.rst | 11 +++++++++++
> > mm/memcontrol.c | 5 +++++
> > 2 files changed, 16 insertions(+)
> >
> > diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> > index ce3e05e41724..fbff959c8116 100644
> > --- a/Documentation/admin-guide/cgroup-v2.rst
> > +++ b/Documentation/admin-guide/cgroup-v2.rst
> > @@ -1181,6 +1181,17 @@ PAGE_SIZE multiple when read back.
> > high limit is used and monitored properly, this limit's
> > utility is limited to providing the final safety net.
> >
> > + memory.drop_cache
> > + A write-only single value file which exists on non-root
> > + cgroups.
> > +
> > + Provides a mechanism for users to actively trigger memory
> > + reclaim. The cgroup is reclaimed and as many pages as
> > + possible are freed.
> > +
> > + It will break the low boundary, because it keeps reclaiming
> > + memory until usage drops to a certain level.
> > +
> > memory.oom.group
> > A read-write single value file which exists on non-root
> > cgroups. The default value is "0".
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 0b38b6ad547d..98646484efff 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -6226,6 +6226,11 @@ static struct cftype memory_files[] = {
> > .write = memory_max_write,
> > },
> > {
> > + .name = "drop_cache",
> > + .flags = CFTYPE_NOT_ON_ROOT,
> > + .write = mem_cgroup_force_empty_write,
> > + },
> > + {
> > .name = "events",
> > .flags = CFTYPE_NOT_ON_ROOT,
> > .file_offset = offsetof(struct mem_cgroup, events_file),
> > --
> > 2.11.0
>
> --
> Michal Hocko
> SUSE Labs
>


--
Thanks
Yafang