On 18 Jul 2002, Robert Love wrote:
> On Thu, 2002-07-18 at 11:56, Richard B. Johnson wrote:
>
> > What should have happened is each of the tasks need only about
> > 4k until they actually access something. Since they can't possibly
> > access everything at once, we need to fault in pages as needed,
> > not all at once. This is what 'overcommit' is, and it is necessary.
>
> Then do not enable strict overcommit, Dick.
>
> > If you have 'fixed' something so that no RAM ever has to be paged
> > you have a badly broken system.
>
> That is not the intention of Alan's or my work at all.
>
> Robert Love
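
The behaviour described in the quoted paragraph is ordinary demand paging:
malloc() hands out address space, and physical pages are faulted in only
when they are actually touched. Here is a minimal sketch of that, assuming
a Linux box with procfs mounted; it prints VmRSS before and after touching
a small part of a large allocation:

/*
 * Minimal sketch: reserve a large block, touch only a little of it,
 * and watch VmRSS.  The untouched pages never consume RAM.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void print_rss(const char *when)
{
	char line[128];
	FILE *f = fopen("/proc/self/status", "r");

	if (!f)
		return;
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "VmRSS:", 6))
			printf("%-32s %s", when, line);
	fclose(f);
}

int main(void)
{
	size_t size = 256UL * 1024 * 1024;	/* 256 MB of address space */
	char *p = malloc(size);
	size_t i;

	if (!p)
		return 1;
	print_rss("after malloc, nothing touched:");

	/* Fault in only the first megabyte, one write per page. */
	for (i = 0; i < 1024 * 1024; i += 4096)
		p[i] = 1;
	print_rss("after touching 1 MB:");

	free(p);
	return 0;
}

Run it and VmRSS barely moves after the malloc(); it only grows with the
pages actually written.
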
Okay then. When would it be useful? I read that it would be useful
in embedded systems, but everything that will ever run on an embedded
system is known at compile time, or is uploaded by something written
by an intelligent developer, so I don't think it's useful there. I
'do' embedded systems and have never encountered an OOM condition.
I also read about some 'home users' not knowing how to set up
their systems. I don't think a single CPU cycle should be wasted to
protect them; well, maybe ten, but that's it.
I keep seeing the same thing about protecting root against fork and
malloc bombs, and I get rather "malloc()" about it. All distributions
I have seen so far come with `gcc` and `make`. The kiddies can
crap all over their kernels to their hearts' content. I don't think
Linux should be reduced to the lowest common denominator.
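
For what it's worth, the same fault-in behaviour is why a malloc bomb is
cheap in the first place: every malloc() succeeds because nothing is
committed until the pages are written. A minimal sketch, assuming the
usual optimistic overcommit policy; it never writes to the memory, so it
only ties up address space:

/*
 * Sketch of a "malloc bomb" that stays harmless: it reserves address
 * space but never writes to it, so (apart from allocator bookkeeping)
 * almost no RAM is committed.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	size_t chunk = 16UL * 1024 * 1024;	/* 16 MB per allocation */
	long chunks = 0;

	/* Cap at 1024 chunks (16 GB) so this terminates on 64-bit too. */
	while (chunks < 1024 && malloc(chunk))
		chunks++;

	printf("reserved %ld MB of address space while barely touching it\n",
	       chunks * 16);
	return 0;
}

With the strict accounting Alan and Robert are working on turned on (the
vm.overcommit_memory knob, as far as I understand it), those malloc()
calls start failing once the commit limit is hit, which is exactly the
protection being argued over.
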
Cheers,
Dick Johnson
Penguin : Linux version 2.4.18 on an i686 machine (797.90 BogoMips).
Windows-2000/Professional isn't.