Re: RFT: updatedb "morning after" problem [was: Re: -mm merge plans for 2.6.23]

From: Rene Herman
Date: Sun Jul 29 2007 - 10:10:20 EST


On 07/29/2007 03:12 PM, Alan Cox wrote:

> > What are the tradeoffs here? What wants small chunks? Also, as far as
> > I'm aware Linux does not do things like up the granularity when it
> > notices it's swapping in heavily? That sounds sort of promising...

> Small chunks mean you get better efficiency of memory use; large chunks
> mean you may well page in a lot more than you needed to each time (and
> cause more paging in turn). Your disk would prefer you fed it big linear
> I/Os: 512KB would probably be my first guess at tuning a large box under
> load for paging chunk size.
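(Not from the thread, but a quick way to put numbers on this tradeoff: Linux exposes the swap-in chunk size through the vm.page-cluster sysctl, which stores an exponent n so that 2^n pages are read per swap-in. The sketch below assumes 4 KiB pages.)

```python
# Sketch, assuming 4 KiB pages: how vm.page-cluster maps to the amount of
# data read from swap on each fault.
PAGE_SIZE = 4096  # assumed page size in bytes

def swap_chunk_bytes(page_cluster: int) -> int:
    """Bytes read from swap per fault for a given vm.page-cluster value."""
    return (1 << page_cluster) * PAGE_SIZE

for n in range(8):
    print(f"page-cluster={n}: {swap_chunk_bytes(n) // 1024} KiB per swap-in")
```

The default of 3 gives 32 KiB per swap-in; Alan's 512KB guess would correspond to page-cluster=7, i.e. 128 pages per fault.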

That probably kills my momentary hope that I was looking at yet another good use of large soft pages, seeing as how 512K would be going overboard a bit, right? :-/

> More radically, if anyone wants to do real researchy type work - how about
> log structured swap with a cleaner?

Right over my head. Why does log-structure help anything?
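(A toy sketch of my own, not code from the thread, of what "log structured with a cleaner" generally means: every page write is appended at the head of a log, so writeout is always one sequential stream; rewriting a page leaves a stale copy behind, and a cleaner pass later reclaims those dead slots. The class and names below are purely illustrative.)

```python
# Hypothetical illustration of a log-structured store with a cleaner.
class LogSwap:
    def __init__(self, slots: int):
        self.log = []       # append-only list of (page_id, data) entries
        self.latest = {}    # page_id -> index of the live copy in the log
        self.slots = slots  # threshold at which the cleaner runs

    def write(self, page_id, data):
        # Sequential append: the disk only ever sees a linear write stream.
        self.latest[page_id] = len(self.log)
        self.log.append((page_id, data))
        if len(self.log) > self.slots:
            self.clean()

    def clean(self):
        # Cleaner: copy live entries forward, dropping stale copies.
        live = [(pid, d) for i, (pid, d) in enumerate(self.log)
                if self.latest.get(pid) == i]
        self.log = live
        self.latest = {pid: i for i, (pid, _) in enumerate(self.log)}

    def read(self, page_id):
        return self.log[self.latest[page_id]][1]
```

The point is that rewriting a page never seeks back to its old slot; the old copy simply becomes garbage for the cleaner, the same trick log-structured filesystems use to turn scattered writes into big linear I/Os.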

Rene.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/