Re: large files unnecessary trashing filesystem cache?

From: Avi Kivity
Date: Wed Oct 19 2005 - 08:47:44 EST


Bodo Eggert wrote:

> I guess the solution would be using random cache eviction rather than
> a FIFO. I never took a look at the cache mechanism, so I may very well
> be wrong here.


Instead of random cache eviction, you can make pages that were read in contiguously age faster than pages that were read in individually.

The motivation is that the cost of reading 64K is almost the same as reading 4K (most of the cost is the seek), while the benefit of evicting 64K is 16 times that of evicting 4K. Over time, the kernel would favor keeping expensive random-access pages over cheap streaming pages.
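A minimal user-space sketch of the idea, assuming a simple credit-based aging clock (all names and numbers here are invented for illustration, not actual kernel code): pages brought in as part of a contiguous batch start with less age credit than pages read in individually, so the streamed pages are evicted first.

```c
/* Sketch only: each cached page gets an "age credit" when it is read
 * in; an aging pass decrements every resident page's credit and evicts
 * pages that hit zero. Batch-read (streamed) pages start with less
 * credit because refetching them costs roughly one seek for the whole
 * batch, while a random-access page costs a seek all by itself. */

#define SINGLE_READ_CREDIT 16  /* random-access page: expensive to refetch */
#define BATCH_READ_CREDIT   4  /* streamed page: cheap to refetch in bulk  */

struct page {
    int credit;   /* remaining age credit; evict when it reaches 0 */
    int resident; /* still in the cache? */
};

static void page_add(struct page *p, int from_batch)
{
    p->credit = from_batch ? BATCH_READ_CREDIT : SINGLE_READ_CREDIT;
    p->resident = 1;
}

/* One tick of the aging clock over an array of pages. */
static void age_pass(struct page *pages, int n)
{
    int i;

    for (i = 0; i < n; i++) {
        if (pages[i].resident && --pages[i].credit == 0)
            pages[i].resident = 0; /* evicted */
    }
}
```

With these numbers, a 16-page streamed batch is gone after four aging passes while a singly-read page survives for sixteen, which is the 16:1 cost ratio from the paragraph above.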

In a way, this is already implemented for inodes, which are aged more slowly than data pages.

