Re: [RFC][PATCH 0/4] VM deadlock prevention -v4

From: Peter Zijlstra
Date: Sat Aug 12 2006 - 14:53:08 EST

On Sat, 2006-08-12 at 20:16 +0200, Indan Zupancic wrote:
> On Sat, August 12, 2006 19:33, Peter Zijlstra said:
> > Simpler yes, but also more complete; the old patches had serious issues
> > with the alternative allocation scheme.
> It sure is more complete, and looks nicer, but the price is IMHO too high.
> I'm curious what those serious issues are, and if they can't be fixed.
> > As for why SROG, because trying to stick all the semantics needed for
> > all skb operations into the old approach was nasty, I had it almost
> > complete but it was horror (and more code than the SROG approach).
> What was missing or wrong in the old approach? Can't you use the new
> approach, but use alloc_pages() instead of SROG?
> Sorry if I bug you so, but I'm also trying to increase my knowledge here. ;-)

I'm almost sorry I threw that code out; you'd understand instantly.

Lemme see what I can do to explain; what I need/want is:
 - a single allocation group per packet - that is, when I free a packet
   and all its associated objects, I get my memory back;
 - to not waste too much space managing the various objects.

The skb operations want to allocate multiple sk_buffs for the same data
(clones). They also want to be able to break the COW or realloc the data.
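For reference, this is roughly the call pattern that has to keep working
(a sketch, not code from the patches; these are just the regular skbuff
helpers):

	struct sk_buff *clone;

	/* a second sk_buff head referencing the same packet data */
	clone = skb_clone(skb, GFP_ATOMIC);

	/* later, someone needs a private copy: break the COW by
	 * reallocating the data for this head */
	if (clone && skb_cloned(clone))
		pskb_expand_head(clone, 0, 0, GFP_ATOMIC);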

The trivial approach would be one page (or higher-order page) per
object, and that would work quite well, except that it wastes a _lot_
of memory.
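(For scale: an sk_buff head is on the order of a couple hundred bytes,
so giving each one its own 4 KiB page throws away well over 90% of that
page, per object, per packet.)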

So I tried manual packing (parts of which you have seen in previous
attempts). This gets hard when you want to do unlimited clones and COW
breaks: to do either you need to link several pages together.

So needing a list of pages and wanting packing gave me SROG. The biggest
wart is having to deal with higher-order pages. Explicitly coding in
knowledge of the object you're packing just makes the code bigger - such
is the power of abstraction.
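To make the idea concrete, here is a minimal user-space sketch of the
"pack objects into a private chain of pages, free them all at once"
scheme. It is an illustration only - the names are made up, it is not
the SROG code, and it punts on the higher-order-page wart mentioned
above:

	#include <stdlib.h>

	#define GROUP_PAGE_SIZE 4096

	struct group_page {
		struct group_page *next;
		size_t used;
		unsigned char data[GROUP_PAGE_SIZE];
	};

	struct alloc_group {
		struct group_page *pages;	/* chain of backing pages */
	};

	/* Carve an object out of the group, linking in a new page when needed. */
	static void *group_alloc(struct alloc_group *g, size_t size)
	{
		struct group_page *p = g->pages;
		void *obj;

		/* keep objects pointer-aligned */
		size = (size + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
		if (size > GROUP_PAGE_SIZE)
			return NULL;	/* the real code has to handle this case */

		if (!p || p->used + size > GROUP_PAGE_SIZE) {
			p = malloc(sizeof(*p));
			if (!p)
				return NULL;
			p->used = 0;
			p->next = g->pages;
			g->pages = p;
		}

		obj = p->data + p->used;
		p->used += size;
		return obj;
	}

	/* Freeing the group gives back everything allocated for the packet. */
	static void group_free(struct alloc_group *g)
	{
		struct group_page *p = g->pages, *next;

		while (p) {
			next = p->next;
			free(p);
			p = next;
		}
		g->pages = NULL;
	}

All sk_buff heads, data areas and clones for one packet would come from
one such group, so a single group_free() gives the memory back in one go.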
