Re: [patch] mm, memcg: add oom killer delay

From: David Rientjes
Date: Mon Jun 03 2013 - 14:18:22 EST


On Sat, 1 Jun 2013, Michal Hocko wrote:

> > Users obviously don't have the ability to attach processes to the root
> > memcg. They are constrained to their own subtree of memcgs.
>
> OK, I assume those groups are generally untrusted, right? So you cannot
> let them register their oom handler even via an admin interface. This
> makes it a bit complicated because it makes much harder demands on the
> handler itself as it has to run under restricted environment.
>

That's the point of the patch. We want to allow users to register their
own oom handler in a subtree (they may attach it to their own subtree root
and wait on memory.oom_control of a child memcg with a limit less than
that root) but not insist on an absolutely perfect implementation that can
never fail when you run on many, many servers. Userspace implementations
do fail sometimes; we just accept that.
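For reference, the registration described above is just the usual cgroup v1
eventfd dance, roughly like the sketch below. The memcg path layout is only
an example, and register_oom_handler() is an illustrative name, not anything
in the patch:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Build the "<eventfd fd> <memory.oom_control fd>" line that cgroup v1's
 * cgroup.event_control file expects. */
static int event_control_line(char *buf, size_t len, int efd, int ocfd)
{
	return snprintf(buf, len, "%d %d", efd, ocfd);
}

/*
 * Register an eventfd on <memcg>/memory.oom_control and return the
 * eventfd, or -1 on error.  The handler then read()s the eventfd; that
 * read blocks until the kernel signals an oom in the memcg.  Assumes a
 * cgroup v1 memory controller mount, e.g. memcg =
 * "/sys/fs/cgroup/memory/mygroup/child".
 */
int register_oom_handler(const char *memcg)
{
	char path[4096], line[64];
	int efd, ocfd, ecfd;

	efd = eventfd(0, 0);
	if (efd < 0)
		return -1;

	snprintf(path, sizeof(path), "%s/memory.oom_control", memcg);
	ocfd = open(path, O_RDONLY);
	snprintf(path, sizeof(path), "%s/cgroup.event_control", memcg);
	ecfd = open(path, O_WRONLY);
	if (ocfd < 0 || ecfd < 0)
		goto fail;

	event_control_line(line, sizeof(line), efd, ocfd);
	if (write(ecfd, line, strlen(line)) < 0)
		goto fail;

	close(ocfd);
	close(ecfd);
	return efd;
fail:
	close(efd);
	if (ocfd >= 0)
		close(ocfd);
	if (ecfd >= 0)
		close(ecfd);
	return -1;
}
```

Everything from the eventfd() call onward is exactly what any memcg oom
handler must do today; the question in this thread is only what happens
after the notification fires.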

> I still do not see why you cannot simply read tasks file into a
> preallocated buffer. This would be few pages even for thousands of pids.
> You do not have to track processes as they come and go.
>

What do you suggest when you read the "tasks" file and it returns -ENOMEM
because the kernel's kmalloc() fails, since the userspace oom handler's
memcg is itself oom? Obviously it's not a situation we want to get into,
but unless you know that handler's exact memory usage across multiple
versions, nothing else is sharing that memcg, and the implementation is
perfect, you can't guarantee it. We need to address real world problems
that occur in practice.
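To make the failure mode concrete: even with a buffer that was preallocated
and mlock()ed before the oom, the read() itself can fail, because the
kernel-side seq_file allocation is charged to the oom memcg. A rough sketch
(the tasks path is an example, and read_tasks()/count_pids() are
illustrative names):

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Count newline-terminated pids in a "tasks" buffer. */
static int count_pids(const char *s)
{
	int n = 0;

	for (; *s; s++)
		if (*s == '\n')
			n++;
	return n;
}

static char buf[1 << 16];	/* preallocated long before the oom */

/*
 * Read <memcg>/tasks, e.g. "/sys/fs/cgroup/memory/mygroup/tasks", into
 * the preallocated buffer.  Returns the pid count, or -1 on error.
 */
int read_tasks(const char *tasks_path)
{
	ssize_t n;
	int fd;

	fd = open(tasks_path, O_RDONLY);
	if (fd < 0)
		return -1;
	mlock(buf, sizeof(buf));	/* pins only *our* pages */

	n = read(fd, buf, sizeof(buf) - 1);
	close(fd);
	if (n < 0) {
		/* With errno == ENOMEM this is the kernel's allocation
		 * failing inside the read, which no amount of userspace
		 * preallocation can prevent. */
		return -1;
	}
	buf[n] = '\0';
	return count_pids(buf);
}
```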

> As I said before. oom_delay_millisecs is actually really easy to be done
> from userspace. If you really need a safety break then you can register
> such a handler as a fallback. I am not familiar with eventfd internals
> much but I guess that multiple handlers are possible. The fallback might
> be enforced by the admin (when a new group is created) or by the
> container itself. Would something like this work for your use case?
>

You're suggesting another userspace process that solely waits for a set
duration and then reenables the oom killer? It faces all the same
problems as the true userspace oom handler: it needs its own perfect
implementation and it runs under its own memcg constraints.

> > If that user is constrained to his or her own subtree, as previously
> > stated, there's also no way to login and rectify the situation at that
> > point and requires admin intervention or a reboot.
>
> Yes, insisting on the same subtree makes the life much harder for oom
> handlers. I totally agree with you on that. I just feel that introducing
> a new knob to workaround user "inability" to write a proper handler
> (what ever that means) is not justified.
>

It's not necessarily harder if you assign the userspace oom handlers to
the root of your subtree with access to more memory than the children.
There is no "inability" to write a proper handler, but when you have
dozens of individual users implementing their own userspace handlers with
memcg limits that change over time, you might find it hard to achieve
perfection every time. If we had perfection, we wouldn't have to worry
about oom in the first place. We can't just let these gazillion memcgs
spin forever when their handlers get stuck, either. That's why we've
used this solution for years as a failsafe. Disabling the oom killer
entirely, even for a memcg, is ridiculous, and without a grace period
oom handlers themselves just don't work.

> > Then why does "cat tasks" stall when my memcg is totally depleted of all
> > memory?
>
> if you run it like this then cat obviously needs some charged
> allocations. If you had a proper handler which mlocks its buffer for the
> read syscall then you shouldn't require any allocation at the oom time.
> This shouldn't be that hard to do without too much memory overhead. As I
> said we are talking about few (dozens) of pages per handler.
>

I'm talking about the memory the kernel allocates when reading the "tasks"
file, not userspace. This can, and will, return -ENOMEM.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/