Re: [PATCH v2 1/3] 9p: Cache negative dentries for lookup performance

From: Remi Pommarel

Date: Sat Feb 21 2026 - 15:55:05 EST


On Thu, Feb 12, 2026 at 10:16:13AM +0100, Remi Pommarel wrote:
> On Wed, Feb 11, 2026 at 04:49:19PM +0100, Christian Schoenebeck wrote:
> > On Wednesday, 21 January 2026 20:56:08 CET Remi Pommarel wrote:
> > > Not caching negative dentries can result in poor performance for
> > > workloads that repeatedly look up non-existent paths. Each such
> > > lookup triggers a full 9P transaction with the server, adding
> > > unnecessary overhead.
> > >
> > > A typical example is source compilation, where multiple cc1 processes
> > > are spawned and repeatedly search for the same missing header files
> > > over and over again.
> > >
> > > This change enables caching of negative dentries, so that lookups for
> > > known non-existent paths do not require a full 9P transaction. The
> > > cached negative dentries are retained for a configurable duration
> > > (expressed in milliseconds), as specified by the ndentry_timeout
> > > field in struct v9fs_session_info. If set to -1, negative dentries
> > > are cached indefinitely.
> > >
> > > This optimization reduces lookup overhead and improves performance for
> > > workloads involving frequent access to non-existent paths.
> > >
> > > Signed-off-by: Remi Pommarel <repk@xxxxxxxxxxxx>
> > > ---
> > > fs/9p/fid.c | 11 +++--
> > > fs/9p/v9fs.c | 1 +
> > > fs/9p/v9fs.h | 2 +
> > > fs/9p/v9fs_vfs.h | 15 ++++++
> > > fs/9p/vfs_dentry.c | 105 ++++++++++++++++++++++++++++++++++------
> > > fs/9p/vfs_inode.c | 7 +--
> > > fs/9p/vfs_super.c | 1 +
> > > include/net/9p/client.h | 2 +
> > > 8 files changed, 122 insertions(+), 22 deletions(-)
> > >
> > > diff --git a/fs/9p/fid.c b/fs/9p/fid.c
> > > index f84412290a30..76242d450aa7 100644
> > > --- a/fs/9p/fid.c
> > > +++ b/fs/9p/fid.c
> > > @@ -20,7 +20,9 @@
> > >
> > > static inline void __add_fid(struct dentry *dentry, struct p9_fid *fid)
> > > {
> > > - hlist_add_head(&fid->dlist, (struct hlist_head *)&dentry->d_fsdata);
> > > + struct v9fs_dentry *v9fs_dentry = to_v9fs_dentry(dentry);
> > > +
> > > + hlist_add_head(&fid->dlist, &v9fs_dentry->head);
> > > }
> > >
> > >
> > > @@ -112,6 +114,7 @@ void v9fs_open_fid_add(struct inode *inode, struct p9_fid **pfid)
> > >
> > > static struct p9_fid *v9fs_fid_find(struct dentry *dentry, kuid_t uid, int any)
> > > {
> > > + struct v9fs_dentry *v9fs_dentry = to_v9fs_dentry(dentry);
> > > struct p9_fid *fid, *ret;
> > >
> > > p9_debug(P9_DEBUG_VFS, " dentry: %pd (%p) uid %d any %d\n",
> > > @@ -119,11 +122,9 @@ static struct p9_fid *v9fs_fid_find(struct dentry *dentry, kuid_t uid, int any)
> > > any);
> > > ret = NULL;
> > > /* we'll recheck under lock if there's anything to look in */
> > > - if (dentry->d_fsdata) {
> > > - struct hlist_head *h = (struct hlist_head *)&dentry->d_fsdata;
> > > -
> > > + if (!hlist_empty(&v9fs_dentry->head)) {
> > > spin_lock(&dentry->d_lock);
> > > - hlist_for_each_entry(fid, h, dlist) {
> > > + hlist_for_each_entry(fid, &v9fs_dentry->head, dlist) {
> > > if (any || uid_eq(fid->uid, uid)) {
> > > ret = fid;
> > > p9_fid_get(ret);
> > > diff --git a/fs/9p/v9fs.c b/fs/9p/v9fs.c
> > > index 057487efaaeb..1da7ab186478 100644
> > > --- a/fs/9p/v9fs.c
> > > +++ b/fs/9p/v9fs.c
> > > @@ -422,6 +422,7 @@ static void v9fs_apply_options(struct v9fs_session_info *v9ses,
> > > v9ses->cache = ctx->session_opts.cache;
> > > v9ses->uid = ctx->session_opts.uid;
> > > v9ses->session_lock_timeout = ctx->session_opts.session_lock_timeout;
> > > + v9ses->ndentry_timeout = ctx->session_opts.ndentry_timeout;
> > > }
> > >
> > > /**
> > > diff --git a/fs/9p/v9fs.h b/fs/9p/v9fs.h
> > > index 6a12445d3858..99d1a0ff3368 100644
> > > --- a/fs/9p/v9fs.h
> > > +++ b/fs/9p/v9fs.h
> > > @@ -91,6 +91,7 @@ enum p9_cache_bits {
> > > * @debug: debug level
> > > * @afid: authentication handle
> > > * @cache: cache mode of type &p9_cache_bits
> > > + * @ndentry_timeout: Negative dentry lookup cache retention time in ms
> > > * @cachetag: the tag of the cache associated with this session
> > > * @fscache: session cookie associated with FS-Cache
> > > * @uname: string user name to mount hierarchy as
> > > @@ -116,6 +117,7 @@ struct v9fs_session_info {
> > > unsigned short debug;
> > > unsigned int afid;
> > > unsigned int cache;
> > > + unsigned int ndentry_timeout;
> >
> > Why not (signed) long?
>
> I first thought 40+ days of cache retention was enough, but that is just
> a useless limitation; I will change it to signed long.

Well, now that I think about it, this is supposed to be set from a mount
option. However, with the new mount API in use, there is currently no
support for fsparam_long or fsparam_s64.

While it could be implemented using a custom __fsparam, is the effort
truly justified here? Also, in that case, wouldn't a long long be a bit
more consistent across 32-bit and 64-bit platforms, given that long is
only 32 bits on the former?

Thanks,

--
Remi