Re: [PATCH] mm: memcontrol: reclaim and OOM kill when shrinking memory.max below usage

From: Michal Hocko
Date: Wed Mar 16 2016 - 04:44:09 EST


On Tue 15-03-16 22:18:48, Johannes Weiner wrote:
> On Fri, Mar 11, 2016 at 12:19:31PM +0300, Vladimir Davydov wrote:
> > On Fri, Mar 11, 2016 at 09:18:25AM +0100, Michal Hocko wrote:
> > > On Thu 10-03-16 15:50:14, Johannes Weiner wrote:
> > ...
> > > > @@ -5037,9 +5040,36 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
> > > >  	if (err)
> > > >  		return err;
> > > >
> > > > -	err = mem_cgroup_resize_limit(memcg, max);
> > > > -	if (err)
> > > > -		return err;
> > > > +	xchg(&memcg->memory.limit, max);
> > > > +
> > > > +	for (;;) {
> > > > +		unsigned long nr_pages = page_counter_read(&memcg->memory);
> > > > +
> > > > +		if (nr_pages <= max)
> > > > +			break;
> > > > +
> > > > +		if (signal_pending(current)) {
> > >
> > > Didn't you want fatal_signal_pending here? At least the changelog
> > > suggests that.
> >
> > I suppose the user might want to interrupt the write by hitting CTRL-C.
>
> Yeah. This is the same thing we do for the current limit setting loop.

Yes, we do, but there the operation is canceled without any change.
Now, re-reading the changelog, I realize I had misread the "we run out
of OOM victims and there's only unreclaimable memory left, or the task
writing to memory.max is killed" part and took it to mean that the
task writing to memory.max gets OOM killed.
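
For reference, the quote above is trimmed right at the signal check.
From what the changelog describes (abort on a pending signal, a bounded
number of reclaim passes, then OOM kills), I'd expect the rest of the
loop to continue roughly along these lines. This is only my sketch
against the existing memcg internals (drain_all_stock,
try_to_free_mem_cgroup_pages, mem_cgroup_out_of_memory), not the quoted
patch itself:

		if (signal_pending(current)) {
			err = -EINTR;
			break;
		}

		/* flush the per-cpu charge caches once before reclaiming */
		if (!drained) {
			drain_all_stock(memcg);
			drained = true;
			continue;
		}

		/* a bounded number of direct reclaim attempts */
		if (nr_reclaims) {
			if (!try_to_free_mem_cgroup_pages(memcg, nr_pages - max,
							  GFP_KERNEL, true))
				nr_reclaims--;
			continue;
		}

		/*
		 * Reclaim was not enough: OOM kill until usage fits the
		 * new max or we run out of victims.
		 */
		mem_cgroup_events(memcg, MEMCG_OOM, 1);
		if (!mem_cgroup_out_of_memory(memcg, GFP_KERNEL, 0))
			break;
	}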

> > Come to think of it, shouldn't we restore the old limit and return EBUSY
> > if we failed to reclaim enough memory?
>
> I suspect it's very rare that it would fail. But even in that case
> it's probably better to at least not allow new charges past what the
> user requested, even if we can't push the level back far enough.

I guess you are right. This guarantee is indeed useful.
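
The guarantee follows from setting the limit before reclaiming: once
memcg->memory.limit has been lowered, every new charge is checked
against it in page_counter_try_charge(). A trimmed-down sketch to
illustrate the check (the real code in mm/page_counter.c also walks up
the counter hierarchy, which I've omitted here):

	bool page_counter_try_charge(struct page_counter *counter,
				     unsigned long nr_pages,
				     struct page_counter **fail)
	{
		long new = atomic_long_add_return(nr_pages, &counter->count);

		if (new > counter->limit) {	/* limit already lowered */
			/* back out and report which counter failed */
			atomic_long_sub(nr_pages, &counter->count);
			*fail = counter;
			return false;
		}
		return true;
	}

So even if reclaim cannot push usage back below the new max, the group
at least cannot grow any further past it.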
--
Michal Hocko
SUSE Labs