Re: [00/17] Large Blocksize Support V3
From: William Lee Irwin III
Date: Sat Apr 28 2007 - 15:20:58 EST
On Sat, 28 Apr 2007 07:09:07 -0700 William Lee Irwin III <wli@xxxxxxxxxxxxxx> wrote:
>> The gang allocation affair may also want to make the calls into
>> the page allocator batched. For instance, grab enough compound pages to
>> build the gang under the lock, since we're going to blow the per-cpu
>> lists with so many pages, then break the compound pages up outside the
>> zone->lock.
On Sat, Apr 28, 2007 at 11:26:40AM -0700, Andrew Morton wrote:
> Sure, but...
> Allocating a single order-3 (say) page _is_ a form of batching
Sorry, I should clarify here. If we fall back, we may still want to
get all the pages together. For instance, if we can't get an order 3,
grab an order 2, then if a second order 2 doesn't pan out, an order
1, and so on, until as many pages as requested are allocated or an
allocation failure occurs.
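The fallback policy above can be sketched in plain C. This is a
userspace simulation with a mock allocator; gang_gather() and
mock_alloc_order() are illustrative names, not kernel interfaces:

```c
#include <assert.h>

/* Mock allocator: pretends allocations of order <= max_ok succeed.
 * In the real thing this would be a higher-order page allocation. */
static int mock_alloc_order(int order, int max_ok)
{
	return order <= max_ok;
}

/*
 * Gather pages toward 'want' total pages: try the largest order that
 * fits the remaining request, and step the order down on each failure
 * until even order 0 fails. Returns the number of pages gathered.
 */
static unsigned long gang_gather(unsigned long want, int start_order,
				 int max_ok)
{
	unsigned long got = 0;
	int order = start_order;

	while (got < want && order >= 0) {
		unsigned long chunk = 1UL << order;

		if (chunk > want - got || !mock_alloc_order(order, max_ok)) {
			order--;	/* fall back to the next lower order */
			continue;
		}
		got += chunk;		/* break the block up outside the lock */
	}
	return got;
}
```

So a request for 16 pages is satisfied by two order-3 blocks when those
are available, by four order-2 blocks when order 3 fails, and so on.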
Also, passing the results around linked together into a list, vs.
e.g. filling an array, has the advantage of O(1) splice operations
under the lock, though arrays can catch up for the most part if their
elements are allowed to vary in the orders of the pages they refer to.
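For illustration, the splice advantage looks something like this
(userspace sketch; struct gang, struct node, and gang_splice() are
made-up names, with struct node standing in for chained pages):

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *next; };

/* A batch of results kept as a singly-linked list with a tail pointer. */
struct gang {
	struct node *head, *tail;
	size_t count;
};

/*
 * Splice 'batch' onto the end of 'dst' in O(1): two pointer stores and
 * a count update, regardless of batch size. An array-based result
 * would have to copy every element under the lock instead.
 */
static void gang_splice(struct gang *dst, struct gang *batch)
{
	if (!batch->head)
		return;
	if (dst->tail)
		dst->tail->next = batch->head;
	else
		dst->head = batch->head;
	dst->tail = batch->tail;
	dst->count += batch->count;
	batch->head = batch->tail = NULL;
	batch->count = 0;
}
```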
On Sat, Apr 28, 2007 at 11:26:40AM -0700, Andrew Morton wrote:
> We don't want compound pages here: just higher-order ones
> Higher-order allocations bypass the per-cpu lists
Sorry again. I conflated the two, and should have taken the use of
non-compound higher-order pages as a given.
On Sat, 28 Apr 2007 07:09:07 -0700 William Lee Irwin III <wli@xxxxxxxxxxxxxx> wrote:
>> I think it'd be good to have some corresponding tactics for freeing as
>> well.
On Sat, Apr 28, 2007 at 11:26:40AM -0700, Andrew Morton wrote:
> hm, hadn't thought about that - would need to peek at contiguous pages in
> the pagecache and see if we can gang-free them as higher-order pages.
> The place to do that is perhaps inside the per-cpu magazines: it's more
> general. Dunno if it would net advantageous though.
What I was hoping for was an interface to hand back groups of pages at
a time, one which would do contiguity detection where advantageous and
otherwise just assemble the pages into something that can be slung
around more quickly under the lock; essentially doing small bits of
the buddy system's work for it outside the lock. Arrays make more
sense here, as it's relatively easy to do contiguity detection by
heapifying them and dequeueing in order in preparation for work under
the lock.
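The heapify-and-dequeue adjacency detection might look roughly like
this in userspace C, with qsort() standing in for the heap and bare
pfns standing in for struct page pointers (longest_run() is an
illustrative name, not an existing kernel function):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

static int cmp_pfn(const void *a, const void *b)
{
	unsigned long x = *(const unsigned long *)a;
	unsigned long y = *(const unsigned long *)b;

	return (x > y) - (x < y);
}

/*
 * Sort the page frame numbers, then scan the sorted sequence for
 * contiguous runs that could be handed back as higher-order blocks.
 * Returns the length of the longest contiguous run found.
 */
static size_t longest_run(unsigned long *pfns, size_t n)
{
	size_t best, cur, i;

	if (n == 0)
		return 0;
	qsort(pfns, n, sizeof(*pfns), cmp_pfn);
	best = cur = 1;
	for (i = 1; i < n; i++) {
		cur = (pfns[i] == pfns[i - 1] + 1) ? cur + 1 : 1;
		if (cur > best)
			best = cur;
	}
	return best;
}
```

All of the sorting happens outside the zone->lock; only the resulting
ordered, pre-grouped pages need to be touched under it.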
There is an issue in that reclaim is not organized in such a fashion
as to issue calls to such freeing functions. An implicit effect of
this sort could be achieved by maintaining the pcp lists as an
array-based deque, either via duelling heap arrays with reversed
comparators (if an appropriate deque structure for sets as small as
the pcp arrays can't be dredged up) or via an auxiliary adjacency
detection structure.
I'm skeptical, however, that the contiguity gains would compensate for
the CPU time required to do such work on the pcp lists. I think,
rather, that it would be better to arrange for users likely to free
contiguous batches to call an interface for batched freeing directly,
provided reclaim in such a manner makes sense from the standpoint of
IO. Gang freeing in general could do adjacency detection without
disturbing the characteristics of the pcp lists, though it, too, may
not be productive without some specific notion of whether contiguity
is likely. For instance, quicklist_trim() could readily use gang
freeing, but it's not likely to have much in the way of contiguity.
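As a rough illustration of doing the buddy system's work for it
outside the lock: a contiguous run of pages can be pre-decomposed into
buddy-aligned blocks before being handed back (run_to_blocks() is a
made-up name, and the order cap of 10 merely stands in for MAX_ORDER-1):

```c
#include <assert.h>

/*
 * Decompose a contiguous run of 'len' pages starting at 'pfn' into
 * buddy-aligned blocks: at each step take the largest order permitted
 * by both the starting pfn's alignment and the remaining length, which
 * is exactly the shape the buddy allocator would accept back.
 * Records each block's order and returns the number of blocks.
 */
static int run_to_blocks(unsigned long pfn, unsigned long len,
			 int *orders, int max_blocks)
{
	int n = 0;

	while (len && n < max_blocks) {
		int order = 0;

		/* Grow the block while the start stays aligned and it fits. */
		while (order < 10 &&
		       !(pfn & (1UL << order)) &&
		       (2UL << order) <= len)
			order++;
		orders[n++] = order;
		pfn += 1UL << order;
		len -= 1UL << order;
	}
	return n;
}
```

A run of 5 pages at pfn 2 decomposes into blocks of order 1, 1, 0,
while 8 pages at pfn 0 come back as a single order-3 block.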
These sorts of algorithmic concerns are probably not quite as pressing
as the general notion of trying to establish some sort of contiguity,
so I'm by no means insistent on any of this.
-- wli