*Bzzt* This will mean that we'll free pages on a per-4k-page
basis too, returning to the current failing behaviour...
We'll need something a little more radical, or at least halfway
radical: counting the contiguous buddies (of, e.g., 3 pages)
and allocating from the smaller free areas only, giving the larger
areas a chance to grow.
But this is so close to a zone allocator that we might as well
do a lot of fine-tuning right now and start 2.3 real fast...
> However, I'd prefer to still try out some other ways of handling this. For
> example, "__get_free_pages()" currently only re-tries once. It shouldn't
> be hard to make it re-try a few more times, and it might well be enough to
> make the problem go away.
	retry = (gfp & GFP_WAIT ? 3 : 1);
	do {
		stuff();
	} while (retry--);
> 2.1.x is not going to be usable on 4MB machines. I didn't even have to
That's a real shame, IMHO. But with the new VM enhancements we've
planned, 2.3 might actually be usable on small boxes again...
Rik.
+-------------------------------------------------------------------+
| Linux memory management tour guide. H.H.vanRiel@phys.uu.nl |
| Scouting Vries cubscout leader. http://www.phys.uu.nl/~riel/ |
+-------------------------------------------------------------------+