Re: [PATCH] Avoiding fragmentation through different allocator

From: Grant Grundler
Date: Mon Jan 24 2005 - 14:57:31 EST


On Mon, Jan 24, 2005 at 10:29:52AM -0200, Marcelo Tosatti wrote:
> Grant Grundler and James Bottomley have been working on this area,
> they might want to add some comments to this discussion.
>
> It seems HP (Grant et al) has pursued using big pages on IA64 (64K)
> for this purpose.

Marcelo,
That might have been Alex Williamson...but the reason for 64K pages
is to reduce TLB thrashing, not to speed up IO.

On HP ZX1 boxes, SG performance is slightly better (at most +5%) when
going through the IOMMU than when bypassing it. The IOMMU can coalesce
DMA pages perfectly, but doing so carries a small CPU and DMA cost.

Otherwise, I totally agree with James. IO devices do scatter-gather
pretty well and IO subsystems are tuned for page-size chunks or
smaller anyway.
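
To make the coalescing point concrete, here's a minimal driver-side
sketch using the generic DMA API (dma_map_sg() and friends). The
function name and error handling are illustrative only, and it assumes
a flat (unchained) scatterlist array:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/scatterlist.h>

/* Map nents CPU-side scatterlist entries for DMA. When an IOMMU is
 * present it may coalesce physically discontiguous pages, so the
 * number of DMA segments returned can be smaller than nents; the
 * device must be programmed with the returned count, not nents.
 */
static int example_map_sg(struct device *dev, struct scatterlist *sg,
			  int nents)
{
	int i, n_dma;

	n_dma = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);
	if (n_dma == 0)
		return -ENOMEM;		/* mapping failed */

	for (i = 0; i < n_dma; i++) {
		dma_addr_t addr = sg_dma_address(&sg[i]);
		unsigned int len = sg_dma_len(&sg[i]);

		/* hand addr/len to the device's SG descriptors here */
		dev_dbg(dev, "seg %d: bus 0x%llx len %u\n",
			i, (unsigned long long)addr, len);
	}

	/* unmap with the original nents, not the coalesced count */
	dma_unmap_sg(dev, sg, nents, DMA_TO_DEVICE);
	return 0;
}

The key detail is that the device only ever sees n_dma segments; with
a coalescing IOMMU that can be far fewer than the number of pages the
CPU side handed in.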

...
> > I could keep digging, but I think the bottom line is that having large
> > pages generally available rather than a fixed setting is desirable.
>
> Definitely, yes. Thanks for the pointers.

Big pages are good for the CPU TLB and that's where most of the
research has been done. I think IO devices have learned to cope
with the fact that a lot less has been (or can be, for many
workloads) done to coalesce IO pages.
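
As a rough present-day illustration of the TLB-reach point (the
MAP_HUGETLB flag and the 2MB page size below are assumptions about
the platform, not anything from this thread):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define HUGE_SZ (2UL * 1024 * 1024)

/* Ask the kernel for one 2MB huge page; needs hugetlbfs support and
 * reserved huge pages (e.g. echo 16 > /proc/sys/vm/nr_hugepages).
 */
int main(void)
{
	void *p = mmap(NULL, HUGE_SZ, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}
	printf("2MB huge page mapped at %p\n", p);
	munmap(p, HUGE_SZ);
	return 0;
}

Mapping the same 2MB range with 4KB pages would take 512 TLB entries
instead of one, which is the whole point of the big-page work for CPUs.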

grant