Re: [PATCH 0/3] Fadvise: Directory level page cache cleaning support

From: Li Wang
Date: Thu Jan 02 2014 - 07:44:27 EST

Do we really need to clean the dcache/icache at the current stage?
That would introduce more code: so far, iput() puts unreferenced
inodes onto the superblock LRU list, and we have no handy API for
freeing only the inodes under a specific directory. We would need to
modify iput() to recognize our situation and collect those inodes
onto our own list rather than the superblock LRU list. Maybe we
should stay with the current approach for now, since it is simple
and captures the major benefit, and leave the dcache/icache cleaning
for the future?

On 2013/12/31 5:33, Dave Hansen wrote:
On 12/30/2013 11:40 AM, Andreas Dilger wrote:
On Dec 30, 2013, at 12:18, Dave Hansen <dave.hansen@xxxxxxxxx> wrote:
Why is this necessary to do in the kernel? Why not leave it to
userspace to walk the filesystem(s)?

I would suspect that trying to do it in userspace would be quite bad. It would require traversing the whole directory tree to issue cache flushes for each subdirectory, but userspace has no way of knowing when to stop the traversal. That would turn the "cache flush" into "cache pollution" and cause a lot of disk IO for subdirectories that were not in cache to begin with.
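[For concreteness, the userspace alternative being argued against here
might look roughly like the sketch below (Python for brevity;
os.posix_fadvise is the binding for the posix_fadvise(2) call this
patch series extends). Note that it has to open and advise every file
under the tree, cached or not, which is exactly the pollution problem
described above. The function name is illustrative, not from the
patch.]

```python
import os

def dontneed_tree(root):
    """Walk `root` and ask the kernel to drop the page cache for every
    regular file underneath it.  The walk itself touches every dentry
    and inode in the tree, pulling cold ones into the caches."""
    advised = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                fd = os.open(path, os.O_RDONLY)
            except OSError:
                continue  # file vanished or unreadable; skip it
            try:
                # offset=0, length=0 means "the whole file",
                # per posix_fadvise(2)
                os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
                advised += 1
            finally:
                os.close(fd)
    return advised
```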

That makes sense for dentries at least and is a pretty good reason.
Probably good enough to include some text in the patch description.
;) Perhaps: "We need this interface because we have no way of
determining what is in the dcache from userspace, and we do not want
userspace to pollute the dcache going and looking for page cache to evict."

One other thing that bothers me: POSIX_FADV_DONTNEED on a directory
seems like it should do something with the _directory_. It should undo
the kernel's caching that happens as a result of readdir().
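[For reference, issuing the hint on a directory fd is already accepted
by current kernels, but without any directory-level semantics; the
patch series would define some. A minimal sketch of the userspace call
(the directory-wide page-cache drop is the patch's proposal, not
current kernel behavior; the function name is illustrative):]

```python
import os

def fadvise_dir_dontneed(path):
    """Issue POSIX_FADV_DONTNEED on a directory file descriptor.

    With the proposed patch this would drop the page cache of files
    under `path`; on a current kernel the call simply succeeds with
    no directory-wide effect.  Returns True if the advice call
    succeeded."""
    fd = os.open(path, os.O_RDONLY | os.O_DIRECTORY)
    try:
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
```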

Should this also be trying to drop the dentry/inode entries like
"echo 2 > .../drop_caches" does?