Re: [GIT PULL] fs/9p patches for 6.9 merge window

From: Eric Van Hensbergen
Date: Wed Apr 10 2024 - 14:57:48 EST


April 10, 2024 at 12:20 PM, "Eric Van Hensbergen" <eric.vanhensbergen@xxxxxxxxx> wrote:
> April 8, 2024 at 9:14 AM, "Oleg Nesterov" <oleg@xxxxxxxxxx> wrote:
> > Hello,
> > the commit 724a08450f74 ("fs/9p: simplify iget to remove unnecessary paths")
> > from this PR breaks my setup.
> >
>
> Thanks for the bisect and detailed reproduction instructions. I am looking at this now and it seems to be related to another problem reported by Kent Overstreet where he was seeing symlink loop reports that were disrupting his testing environment. Once I'm able to reproduce, I'll try and get a patch out later today, if not I may revert the commit to keep from disrupting folks' testing environments.
>

Okay, I think I understand this one, though if I'm right it's unfortunately not obviously related to Kent's report. I believe I've reproduced the problem: fundamentally, it comes from the fact that you are exporting two mount points together. We are getting an inode number collision that used to be hidden by the "always create a new inode on lookup" inefficiency in v9fs_vfs_lookup. You could probably verify that for me by stat'ing the /home directory and the / directory on the server side of your setup. When I created two partitions and mounted one inside the other, "home" and "root" both had inode 2 and I got -ELOOP when trying to access them.
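
In case it helps, a quick way to check that on the server (assuming GNU coreutils stat; the paths are just your two exported mount points) is:

  stat -c '%d %i %n' / /home

If the two lines show the same inode number on different device numbers, it's the same collision I'm hitting here.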

The underlying cause in the patch series is that I was trying to keep inodes around instead of constantly creating and deleting them, and I simplified the inode lookup to be based purely on inode number (rather than also checking times, type, i_generation, etc.), so collisions are much more likely to bite. If qemu detects that this is a possibility it usually prints something like:
qemu-system-aarch64: warning: 9p: Multiple devices detected in same VirtFS export, which might lead to file ID collisions and severe misbehaviours on guest! You should either use a separate export for each device shared from host or use virtfs option 'multidevs=remap'!
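
To make it concrete, here is roughly what the simplified lookup boils down to (an illustration only, not the actual fs/9p code -- the function name is made up):

#include <linux/fs.h>
#include <net/9p/9p.h>

/*
 * Key the inode cache on nothing but the server-supplied qid.path.
 * Two objects from different exported devices that share an inode
 * number (e.g. both filesystem roots being inode 2) then hash to the
 * same cached inode, and the path walk trips over itself with -ELOOP.
 */
static struct inode *iget_by_qid_path_only(struct super_block *sb,
					   struct p9_qid *qid)
{
	/* cache hit purely on the number; no type/version/mode check */
	return iget_locked(sb, (unsigned long)qid->path);
}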

I can confirm that multidevs=remap in qemu does appear to avoid the problem, but that doesn't change the fact that we now have a broader regression in something that used to work. It'd be useful to know whether multidevs=remap also deals with your problem, @Kent; while I think it's the same underlying problem, I don't think it necessarily has the same solution.
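
For anyone who wants the workaround in the meantime, the option goes on the local 9p export in qemu (the path and mount_tag here are just placeholders for whatever your setup uses):

  -virtfs local,path=/path/to/export,mount_tag=host0,security_model=mapped-xattr,multidevs=remap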

I think I have to give up on relying on qid.path/inode-number as unique and maintain our own client-side view of inode numbers -- but that will cause problems for several parts of the code that currently need to look up an inode by a unique identifier from the server (which today is conveyed by qids).
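
Purely for illustration (none of this exists in the code today; the names are made up), a client-side view could be as simple as handing out our own inode numbers instead of trusting qid.path:

#include <linux/idr.h>
#include <linux/gfp.h>

/* hypothetical per-client allocator for locally-unique inode numbers */
static DEFINE_IDA(v9fs_local_ino_ida);

static int v9fs_alloc_local_ino(unsigned long *ino)
{
	/* start at 2 so we never hand out reserved inode numbers 0/1 */
	int id = ida_alloc_min(&v9fs_local_ino_ida, 2, GFP_KERNEL);

	if (id < 0)
		return id;
	*ino = id;
	return 0;
}

The part this glosses over is exactly the pain point above: once ino no longer equals qid.path, we need some other way to get from a server qid back to the cached inode.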

Now that I understand the problem, I think I can work through a partial revert that goes back to a more thorough match in vfs_lookup (one that examines several other fields from the server beyond qid.path) -- but maybe I can do it in a way that still avoids keeping duplicate inode structs around in memory.
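
As a rough sketch of what that stricter match could look like (the callback name and exact set of checks are illustrative, not the final patch):

#include <linux/fs.h>
#include <net/9p/9p.h>
#include "v9fs.h"	/* fs/9p local header for struct v9fs_inode / V9FS_I() */

/*
 * Refuse to reuse a cached inode unless the full qid matches, not just
 * the path/inode number.
 */
static int v9fs_qid_matches(struct inode *inode, void *data)
{
	struct v9fs_inode *v9inode = V9FS_I(inode);
	struct p9_qid *qid = data;

	if (v9inode->qid.path != qid->path)
		return 0;
	if (v9inode->qid.type != qid->type)
		return 0;
	if (v9inode->qid.version != qid->version)
		return 0;
	return 1;
}

Something like that would slot in as the test callback to iget5_locked(), with qid.path still used as the hash value.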

This didn't show up in my regression tests because they always export a single file system, where inode numbers are unique.

Thanks for your patience while I work through this - now that I can reproduce the problem, I'm fairly certain I can get a patch together this week so we can test whether it solves the regressions.

-eric