Re: vfork: out of memory, when there's plenty of swap free

Gerard Roudier (groudier@club-internet.fr)
Sun, 14 Mar 1999 23:26:34 +0100 (MET)


On Sun, 14 Mar 1999, Andrea Arcangeli wrote:

> On Sun, 14 Mar 1999, Linus Torvalds wrote:
>
> >
> >
> >On Sun, 14 Mar 1999, Andrea Arcangeli wrote:
> >>
> >> On Sat, 13 Mar 1999, Linus Torvalds wrote:
> >>
> >> >It might certainly be an option to allocate inodes in bigger chunks at a
> >> >time. That would at least lessen the problem.
> >>
> >> In my opinion, that would make the problem go away completely.
> >
> >For inodes, yes.
> >
> >For other things, no.
>
> Ah yes, of course; I was talking only about inodes and hadn't thought
> about other pieces of code. Excuse me if that wasn't clear from my
> previous email.
>
> >And it makes allocating inode pages a riskier thing - although for inodes
> >we almost always do have the option of just trying to free up something
> >else, so there a "soft" allocator may well be completely acceptable.
>
> Agreed.
>
> >just too likely to fail. Going to 2 or 4 is probably fine (just 2 pages
> >is enough to guarantee that there is no 2-page fragmentation, which is all
> >that forking cares about, and is safer if you worry about making sure you
> >can sometimes get new inodes).
>
> Hmm, yes agreed. The main issue is to get 2 contiguous pages.
>
> >Want to try it and see? It should be trivial to change the inode code to
> >allocate two or four pages at a time (two constants need to change).
>
> Ok sure! ;) I'll try that very soon and I'll let you know.
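
(For concreteness, the change under discussion might look roughly like
the sketch below. It is loosely modelled on 2.2's fs/inode.c, as if it
lived there next to inode_unused and inodes_stat, but the constant
names and the bookkeeping are illustrative, not the real code.)

    #include <linux/fs.h>
    #include <linux/mm.h>

    /* The "two constants" in question; order 1 means 2^1 = 2 pages,
     * i.e. 8k per allocation on 4k-page machines. */
    #define INODE_ALLOC_ORDER 1
    #define INODES_PER_CHUNK \
            (((unsigned long) PAGE_SIZE << INODE_ALLOC_ORDER) \
             / sizeof(struct inode))

    static int grow_inodes_chunked(void)
    {
            struct inode *inode;
            int i;

            /* One order-1 allocation instead of two order-0 ones:
             * inode memory then never fragments free memory at sub-8k
             * granularity, which is all that fork's 8k kernel-stack
             * allocation cares about. */
            inode = (struct inode *)
                    __get_free_pages(GFP_KERNEL, INODE_ALLOC_ORDER);
            if (!inode)
                    return -ENOMEM;

            for (i = 0; i < INODES_PER_CHUNK; i++)
                    list_add(&inode[i].i_list, &inode_unused);

            inodes_stat.nr_inodes += INODES_PER_CHUNK;
            return 0;
    }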

Last time I looked into the inode code, it seemed that it never frees
memory. So, except on systems whose inode table grows to occupy a large
part of memory, this change will be no better than a placebo, and
placebos are known, for the moment, to work only on humans.
Btw, if the inode code did free memory on demand (8k chunks on 4K-page
systems), then it would simply behave as a pool of 8k chunks that keeps
fork from failing.
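
To make that analogy concrete: such an 8k-chunk pool amounts to
something like the sketch below, where every name is invented for
illustration (and locking is omitted for brevity).

    #include <linux/mm.h>

    #define NR_RESERVED_CHUNKS 8    /* arbitrary pool size */

    static unsigned long reserve[NR_RESERVED_CHUNKS];
    static int nr_reserved;

    /* Refill the pool while memory is still plentiful. */
    static void refill_reserve(void)
    {
            while (nr_reserved < NR_RESERVED_CHUNKS) {
                    unsigned long chunk = __get_free_pages(GFP_KERNEL, 1);

                    if (!chunk)
                            break;
                    reserve[nr_reserved++] = chunk;
            }
    }

    /* fork() would fall back on the pool whenever its own order-1
     * allocation fails. */
    static unsigned long alloc_kernel_stack_8k(void)
    {
            unsigned long stack = __get_free_pages(GFP_KERNEL, 1);

            if (!stack && nr_reserved)
                    stack = reserve[--nr_reserved];
            return stack;
    }

Whether the 8k chunks sit in a shrinkable inode cache or in an explicit
pool like this one, the effect is the same: a stash of order-1 areas
that fork can drain when the buddy allocator comes up empty.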

If we applied that 8k allocation granularity to every part of the kernel
that allocates memory, we would end up with a system that uses 8k pages
virtualized on top of the 4k hardware pages. ;-)

Since you seem to be a kernel champion, instead of implementing minute
patches you could, for example, try to implement the following:

1) Implement the virtualized kernel stack and fix any driver you are
using that DMAs to or from the stack (see the sketch after this list).
I can help you with that, but I doubt you need the help.

2) Compare the behaviour of the original kernel with the modified kernel.
Btw, I doubt you will see significant performance differences at this
step.

3) Now that you have got rid of the 8k-allocation fork problem, you can
also get rid of everything that tried to work around it and simplify
some parts of the kernel.

4) Compare the behaviour of the original kernel with the modified kernel,
and observe which one behaves better overall.
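
For point 1, here is a minimal sketch of what "virtualized" would mean,
assuming 2.2-style 8k task areas; the function names are invented:

    #include <linux/mm.h>      /* vmalloc()/vfree() */
    #include <linux/sched.h>

    /* With vmalloc() the two stack pages only need to be *virtually*
     * contiguous, so fork no longer depends on the buddy allocator
     * finding 8k of physically contiguous memory. */
    static struct task_struct *alloc_task_struct_virt(void)
    {
            /* In 2.2 the task_struct sits at the bottom of the 8k
             * kernel-stack area, so one allocation covers both. */
            return (struct task_struct *) vmalloc(2 * PAGE_SIZE);
    }

    static void free_task_struct_virt(struct task_struct *tsk)
    {
            vfree(tsk);
    }

The two known catches: a driver that hands a stack buffer to
virt_to_bus() for DMA breaks, because vmalloc'd pages are not
physically contiguous (hence the driver audit in point 1), and on i386
the current macro recovers the task pointer by masking %esp with
~8191UL, which assumes an 8k-aligned, directly mapped stack, so that
would need reworking as well.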

Regards,
Gérard.
