> c) Currently we keep the stuff for the first class around the page
> cache and the rest in the buffer cache. A large part of our problems comes
> from the fact that we need to detect migration of a block from one class to
> another, and scanning the whole buffer cache is way too slow.
We don't scan the whole buffer cache. We only do a fast query on the
buffer hash.
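
Just to be concrete, this is the kind of check I mean (a sketch only: the
function name and the details around it are mine, the real building blocks
are get_hash_table()/brelse() from fs/buffer.c):

#include <linux/fs.h>
#include <linux/locks.h>

/* Sketch: look up a possibly-stale alias of (dev, block) in the buffer
 * hash and throw it away.  Cost is one hash bucket probe, not a scan of
 * the whole buffer cache. */
static void discard_stale_alias(kdev_t dev, int block, int size)
{
        struct buffer_head *old_bh;

        old_bh = get_hash_table(dev, block, size);
        if (old_bh) {
                mark_buffer_clean(old_bh);      /* don't write back stale data */
                wait_on_buffer(old_bh);         /* let in-flight I/O finish */
                brelse(old_bh);
        }
}
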
And I still can't see how you can find the stale buffer in a per-object
queue, since the object itself can be destroyed after the low-level truncate.
You can't even know which inode Y is using a block X without reading all
the inode metadata while block X still belongs to inode Y (i.e. before the
truncate).
Right now you only know that the stale buffer can't come from inode
metadata, because we have the race-prone, slow trick of looping in polling
mode inside ext2_truncate until we are sure bforget will run in hard mode.
And even if you had that information, you would have to do a linear search
in the list, which is slower than a hash lookup anyway (or you could start
slowly unmapping all the other stale buffers even though you are not
interested in them, so you may block more than necessary); see the sketch
below. Doing a per-object hash is not an option IMHO.
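
To show why, here is a purely hypothetical per-object queue (no such
structures exist in the tree, the names are made up): finding one stale
block means walking the whole list, while the buffer hash gives you a
single bucket probe via get_hash_table(dev, block, size).

#include <linux/list.h>

/* Hypothetical per-object queue of buffers, only to illustrate the idea
 * being argued against. */
struct obj_buffer {
        unsigned long blocknr;
        struct list_head obj_entry;     /* linked into object->buffers */
};

struct object {
        struct list_head buffers;
};

/* O(n) over every buffer attached to the object, versus one hash bucket
 * probe in the buffer hash. */
static struct obj_buffer *
find_stale_in_object_queue(struct object *obj, unsigned long block)
{
        struct list_head *p;

        for (p = obj->buffers.next; p != &obj->buffers; p = p->next) {
                struct obj_buffer *b =
                        list_entry(p, struct obj_buffer, obj_entry);
                if (b->blocknr == block)
                        return b;
        }
        return NULL;
}
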
> d) Moving the second class into the page cache will cause problems
> with bigmem stuff. Besides, I have reasons of my own for keeping those
I can't see these bigmem issues. The buffer and page-cache memory is not
in bigmem anyway. And you can use bigmem _wherever_ you want, as long as
you remember to fix all the involved code to kmap before reading from or
writing to potentially-bigmem memory. The bigmem issue looks like a red
herring to me.
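
Fixing the involved code means not much more than this (a sketch; I use
the kmap()/kunmap() spelling of the highmem interface, the bigmem patch's
calling convention differs a bit, so take the exact calls as illustrative):

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/string.h>

/* Copy data into a page that may sit in bigmem: map it into a kernel
 * virtual address first, touch it, then drop the temporary mapping.
 * For normal (non-bigmem) pages the map is just the direct mapping. */
static void copy_into_maybe_bigmem_page(struct page *page,
                                        const void *src, size_t len)
{
        char *vaddr = kmap(page);

        memcpy(vaddr, src, len);
        kunmap(page);
}
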
Andrea