Re: can't oom-kill zap the victim's memory?

From: Linus Torvalds
Date: Tue Oct 06 2015 - 04:49:26 EST


On Tue, Oct 6, 2015 at 8:55 AM, Eric W. Biederman <ebiederm@xxxxxxxxxxxx> wrote:
>
> Not to take away from your point about very small allocations. However
> assuming allocations larger than a page will always succeed is down
> right dangerous.

We've required retrying for *at least* order-1 allocations, exactly
because things like fork() etc have wanted them, and:

- as you say, you can be unlucky even with reasonable amounts of free memory

- the page-out code is approximate and doesn't guarantee that you get
buddy coalescing

- just failing after a couple of loops has been known to result in
fork() and similar friends returning -EAGAIN and breaking user space.

Really. Stop this idiocy. We have gone through this before. It's a disaster.

The basic fact remains: kernel allocations are so important that
rather than fail, you should kill user space. Only kernel allocations
that *explicitly* know that they have fallback code should fail, and
they should just do the __GFP_NORETRY.

So the rule ends up being that we retry the memory-freeing loop for
small allocations (where "small" is something like "order 2 or less").

So really. If you find some particular case that is painful because it
wants an order-1 or order-2 allocation, then you do this:

- do the allocation with __GFP_NORETRY

- have a fallback that uses vmalloc, or that just makes the
buffer even smaller.

But by default we will continue to make small orders retry. As
mentioned, we have tried the alternatives. It doesn't work.

Linus