> Has anyone thought of a way to possibly put a limit on the amount of
> resources to allow any one process.
Yes, not much works against the fork bomb.
> For example the following code is a
> common fork bomb used by people to cause systems to crash.
>
> #include <stdlib.h>
> #include <unistd.h>
> int main(void)
> { int *root; while (1) { root = malloc(10000);  /* leak a little memory... */
>   fork(); } }                                   /* ...and double the processes */
>
> When run on
> this system it crashes the kernel within seconds, and even after it's
> killed I have waited an hour and the hard drive was still spinning :).
The problem is that you haven't killed it, just one instance of the
process.  Each time the fork() is done a new process is created, which
also fork()s infinitely (well, on an Alpha it only takes about 5 minutes
;).  If you leave even one of them unkilled you're still in the same
situation; you have to kill them from the lowest up, and faster than
they can fork().
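
If you can still get a shell in edgeways, one trick is to go after the
whole process group instead of chasing individual pids: every child of
the bomb inherits its process group, so SIGSTOP the group first (a
stopped process can't fork()) and then SIGKILL it.  A rough sketch,
assuming you've already dug the group id out of ps (the pgid argument
here is whatever you found):

#include <sys/types.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    pid_t pgid;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pgid>\n", argv[0]);
        return 1;
    }
    pgid = (pid_t)atoi(argv[1]);

    /* Freeze the whole group first, so none of them can fork()... */
    if (kill(-pgid, SIGSTOP) == -1) { perror("SIGSTOP"); return 1; }
    /* ...then kill them all while they're stopped. */
    if (kill(-pgid, SIGKILL) == -1) { perror("SIGKILL"); return 1; }
    return 0;
}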
> There has
> to be a way to prevent something like this from causing such a problem.
>
Well no, but you can limit the damage it can do with setrlimit(2),
which is easiest to adjust with the bash builtin ulimit (man bash, and
search for ulimit).  Basically, if you put 'ulimit -Hu 20' in
/etc/profile, then with that program you'll never have more than 20
forks of the bomb (probably 19, plus the shell ;), and only 190000
bytes malloced for no reason (assuming the 19 processes).  It sets the
hard limit on a user's processes at 20 (uid 0 can raise that, but
normal users can't).
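
The same limit can also be set from inside a program before it runs
anything untrusted; a minimal sketch, assuming a system that supports
RLIMIT_NPROC (Linux and most BSDs do):

#include <sys/resource.h>
#include <stdio.h>

int main(void)
{
    struct rlimit rl;

    /* Same effect as 'ulimit -Hu 20': cap the number of processes
       this user may have.  Once the hard limit is lowered, only
       uid 0 can raise it again. */
    rl.rlim_cur = 20;
    rl.rlim_max = 20;
    if (setrlimit(RLIMIT_NPROC, &rl) == -1) {
        perror("setrlimit");
        return 1;
    }

    /* From here on, any fork() past the 20th process fails with
       EAGAIN instead of taking the machine down. */
    return 0;
}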
Bryn