Re: Is it possible to give the user the option to cancel forkbombs?

From: Andi Kleen
Date: Sat Nov 17 2007 - 08:36:23 EST


Alan Cox <alan@xxxxxxxxxxxxxxxxxxx> writes:

>> > I would like to see something done about this, with Ubuntu as popular as
>> > it is, even as a server in some cases. Is there a way that in the
>> > future, one could simply download a package or click a box or something
>> > and have a limit set, like the links suggest? That would make things
>> > just "that much" more convenient for system administrators (and might
>> > help them/us to remember to set these limits, too...).
>>
>> If you don't know which limits to set and need a package for them, your
>> job title should not be system administrator.
>
> That's a very arrogant viewpoint. I don't have to be a TV engineer to use
> my television.
>
> Distributions should be providing sensible defaults out of the box. The
> kernel already provides them the mechanisms.

If you mean ulimits -- the current ulimits are not very useful imho. Or
at least not for handling the general resource consumption problem. They
work in some limited circumstances for well-known special-purpose
workloads, but not beyond that.

I don't know why people here keep claiming they are (have they ever
tried to use them on a general desktop?).

The equation for limiting resources looks something like this:

  MAX_PROCESSES_PER_UID * (MAX_MEM_PER_PROCESS + MAX_FD_PER_PROCESS + ...)
        = TOTAL_RESOURCES_PER_UID

To get an effective limit you have to set either MAX_PROCESSES or MAX_MEM
to unusably low numbers, which breaks a lot of applications. And you cannot
generally predict in advance whether the workload will need a high MAX_MEM,
a high MAX_PROCESSES, or a combination of both.
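
To make the trade-off concrete, here is a minimal sketch using the
standard setrlimit(2) interface. The numbers are made up, but they show
how the two knobs multiply:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit nproc = { .rlim_cur = 100, .rlim_max = 100 };
        struct rlimit mem   = { .rlim_cur = 256UL << 20,  /* 256 MB */
                                .rlim_max = 256UL << 20 };

        /* RLIMIT_NPROC counts processes per real uid;
         * RLIMIT_AS caps address space per process.
         * Worst case for this uid: 100 * 256 MB = 25 GB. */
        if (setrlimit(RLIMIT_NPROC, &nproc) || setrlimit(RLIMIT_AS, &mem))
                perror("setrlimit");
        return 0;
}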

That is why distributions usually don't set them -- or at least not in a
setup that protects the system fully, because that is not possible.

e.g. SUSE supports an optional default limit that sets the maximum memory
per process to below the available memory, but the user can still
easily circumvent that by starting multiple processes.
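
A minimal sketch of that circumvention, with illustrative sizes -- each
child stays under the per-process cap, but the aggregate is the cap
times the child count:

#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

#define CAP   (128UL << 20)   /* stays below the per-process limit */
#define KIDS  32              /* 32 * 128 MB = 4 GB in total */

int main(void)
{
        for (int i = 0; i < KIDS; i++) {
                if (fork() == 0) {
                        char *p = malloc(CAP);
                        if (p)
                                memset(p, 1, CAP);  /* force real pages */
                        sleep(60);                  /* hold the memory */
                        _exit(0);
                }
        }
        while (wait(NULL) > 0)
                ;
        return 0;
}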

And I'm not really talking about a general bean counter for all kernel
objects here, just the "ordinary easy resources" like processes and virtual
memory.

Pretty much all the per-process limits would need to be per-uid to be really
useful in general. I'm hoping we'll get some of that out of the recent
container work. e.g. if there were a "max mem per uid" limit you could
actually set it to a sensible value. Even better than max mem per uid would
probably be "max total memory used as a fraction of the system" or something
like that -- that would also handle things like memory hotplug well. It
would probably also need to be split into swap space and real memory, at
least as long as the Linux swap algorithms are so slow.
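
Purely as illustration: with a per-uid memory controller of the kind
being developed in the container work, a login manager might set such a
limit roughly like this. The mount point, group layout, and file names
here are assumptions following the container patches, not a settled API:

#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

static int put(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fprintf(f, "%s", val);
        return fclose(f);
}

int main(void)
{
        char pid[32];

        /* one group per uid, capped at 512 MB */
        mkdir("/cgroup/uid-1000", 0755);
        put("/cgroup/uid-1000/memory.limit_in_bytes", "512M");

        /* move the caller (e.g. the login process) into the group;
         * children inherit the group membership */
        snprintf(pid, sizeof(pid), "%d", getpid());
        return put("/cgroup/uid-1000/tasks", pid);
}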

Regarding the fork bomb problem:

the uid cgroup scheduler that went in recently should already help a little,
although to be really effective against fork bombs for a desktop user you
would probably need multiple cgroups per uid (so that, e.g., the window
manager is also protected against other processes running under the same
uid, and you can still kill the nasty processes from it).
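
As a hypothetical sketch (again assuming a cgroup hierarchy mounted at
/cgroup and the cpu.shares interface from the group scheduler patches),
a session manager could carve out a protected group for the window
manager like this:

#include <stdio.h>
#include <sys/stat.h>

static void put(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (f) {
                fprintf(f, "%s", val);
                fclose(f);
        }
}

int main(void)
{
        /* two groups under the same uid: the window manager gets
         * twice the CPU share of the rest of the session */
        mkdir("/cgroup/uid-1000/wm", 0755);
        mkdir("/cgroup/uid-1000/apps", 0755);
        put("/cgroup/uid-1000/wm/cpu.shares", "2048");
        put("/cgroup/uid-1000/apps/cpu.shares", "1024");
        /* the session manager would then write the window manager's
         * pid into wm/tasks and everything else into apps/tasks; a
         * fork bomb under apps/ can then only starve apps/ */
        return 0;
}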

That is why I wasn't too keen on hardcoding the uid in the scheduler.

-Andi
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/