Re: Directories > 2GB

From: Steve Lord
Date: Wed Oct 11 2006 - 12:53:24 EST


David Chinner wrote:
> On Tue, Oct 10, 2006 at 10:19:04AM +0100, Christoph Hellwig wrote:
> > On Mon, Oct 09, 2006 at 09:15:28PM -0500, Steve Lord wrote:
> > > Hi Dave,
> > >
> > > My recollection is that it used to default to on; it was disabled
> > > because it needs to map the buffer into a single contiguous chunk
> > > of kernel memory.  This was placing a lot of pressure on the memory
> > > remapping code, so we made it not default to on, as reworking the
> > > code to deal with non-contiguous memory was looking like a major
> > > effort.
> > Exactly.  The code works, but it tends to go OOM pretty fast, at
> > least when the dir blocksize is bigger than the page size.  I should
> > give the code a spin on my ppc box with 64k pages to see if it works
> > better there.
>
> The pagebuf code doesn't use high-order allocations anymore; it uses
> scatter lists and remapping to allow physically discontiguous pages
> in a multi-page buffer.  That is, the pages are sourced via
> find_or_create_page() from the address space of the backing device,
> and then mapped via vmap() to provide a virtually contiguous mapping
> of the multi-page buffer.
>
> So I don't think this problem exists anymore...

I was not referring to high-order allocations here, but to the overhead
of doing address space remapping every time a directory is accessed.
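
To make the concern concrete, here is a rough sketch of the kind of path
you describe, sourcing pages with find_or_create_page() and stitching
them together with vmap().  This is illustrative only, not the actual
pagebuf code; dirblock_map() and its arguments are made up for the
example:

/*
 * Illustrative sketch only, not the actual XFS pagebuf code: build a
 * directory-block-sized buffer from physically discontiguous page cache
 * pages and make it virtually contiguous with vmap().  The function
 * name dirblock_map() is invented for this example.
 */
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/vmalloc.h>

static void *dirblock_map(struct address_space *mapping, pgoff_t first,
			  unsigned int nr_pages, struct page **pages)
{
	unsigned int i;

	/* Source each backing page from the block device's address space. */
	for (i = 0; i < nr_pages; i++) {
		pages[i] = find_or_create_page(mapping, first + i, GFP_NOFS);
		if (!pages[i])
			goto out_release;
	}

	/*
	 * Stitch the discontiguous pages into one virtually contiguous
	 * kernel mapping.  The caller undoes this with vunmap() when the
	 * buffer is torn down.
	 */
	return vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);

out_release:
	while (i--) {
		unlock_page(pages[i]);
		page_cache_release(pages[i]);
	}
	return NULL;
}

It is that vmap()/vunmap() cycle on every multi-page directory buffer,
with the kernel page table setup and teardown it implies, that I am
worried about, not the page allocations themselves.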

Steve
