Re: [PATCH] vfs: Speed up deactivate_super for non-modular filesystems

From: Eric W. Biederman
Date: Tue May 15 2012 - 20:35:49 EST


Nick Piggin <npiggin@xxxxxxxxx> writes:

> On 9 May 2012 21:02, Eric W. Biederman <ebiederm@xxxxxxxxxxxx> wrote:
>> Nick Piggin <npiggin@xxxxxxxxx> writes:
>>
>>> On 8 May 2012 11:07, Eric W. Biederman <ebiederm@xxxxxxxxxxxx> wrote:
>>>> "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx> writes:
>>>>
>>>>> On Mon, May 07, 2012 at 11:17:06PM +0100, Al Viro wrote:
>>>>>> On Mon, May 07, 2012 at 02:51:08PM -0700, Eric W. Biederman wrote:
>>>>>>
>>>>>> > /proc and similar non-modular filesystems do not need a rcu_barrier
>>>>>> > in deactivate_locked_super.  Being non-modular there is no danger
>>>>>> > of the rcu callback running after the module is unloaded.
>>>>>>
>>>>>> There's more than just a module unload there, though - actual freeing
>>>>>> struct super_block also happens past that rcu_barrier()...
>>>>
>>>> Al.  I have not closely audited the entire code path but at a quick
>>>> sample I see no evidence that anything depends on inode->i_sb being
>>>> rcu safe.  Do you know of any such location?
>>>>
>>>> It has only been a year and a half since Nick added this code which
>>>> isn't very much time to have grown strange dependencies like that.
>>>
>>> No, it has always depended on this.
>>>
>>> Look at ncp_compare_dentry(), for example.
>>
>> Interesting. The logic in ncp_compare_dentry is broken.
>>
>> Accessing i_sb->s_fs_info for parameters does seem reasonable.
>> Unfortunately ncp_put_super frees server directly.
>>
>> Meaning if we are depending on only rcu protections a badly timed
>> ncp_compare_dentry will oops the kernel.
>>
>> I am going to go out on a limb and guess that every other filesystem
>> with a similar dependency follows the same pattern and is likely
>> broken as well.
>
> But ncp_put_super should be called after the rcu_barrier(), no?
>
> How is it broken?

The interesting hunk of code from deactivate_locked_super is:
> cleancache_invalidate_fs(s);
> fs->kill_sb(s);
^^^^^^^^^^^^^^ This is where ncp_put_super() is called.
>
> /* caches are now gone, we can safely kill the shrinker now */
> unregister_shrinker(&s->s_shrink);
>
> /*
> * We need to call rcu_barrier so all the delayed rcu free
> * inodes are flushed before we release the fs module.
> */
> rcu_barrier();
> put_filesystem(fs);
> put_super(s);

Which guarantees ncp_put_super() happens before the rcu_barrier.
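
To make the concern concrete, here is roughly the shape of the pattern
under discussion.  This is a simplified sketch, not the actual ncpfs
code; struct example_server and its field are made-up stand-ins:

        /* Sketch of an rcu-walk d_compare reaching per-superblock data
         * through s_fs_info.  Nothing here takes a reference. */
        static int example_compare_dentry(const struct dentry *dentry,
                                          unsigned int len, const char *str,
                                          const struct qstr *name)
        {
                /* The interesting step: if what s_fs_info points at has
                 * already been freed by ->put_super(), and rcu were our
                 * only protection, this dereference would oops. */
                struct example_server *server = dentry->d_sb->s_fs_info;

                if (server->short_names && len > 12)
                        return 1;               /* cannot match */
                if (len != name->len)
                        return 1;
                return memcmp(str, name->name, len) != 0;
        }

Since ->kill_sb() (and with it ncp_put_super()) runs before the
rcu_barrier() above, that barrier does nothing to protect this access;
as argued below, what actually keeps the superblock alive during an rcu
path walk is the mount reference count plus the vfsmount_lock.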

>> Taking that observation farther we have a mount reference count, that
>> pins the super block.  So at first glance the super block looks safe
>> without any rcu protections.
>
> Well yes, that's what I'm getting at. But I don't think it's quite complete...
>
>>
>> I'm not certain what pins the inodes. Let's see:
>>
>> mnt->mnt_root has the root dentry of the dentry tree, and that
>> dentry count is protected by the vfsmount_lock.
>
> If the mount is already detached from the namespace when we start
> to do a path walk, AFAIKS it can be freed up from underneath us at
> that point.
>
> This would require cycling vfsmount_lock for write in such path. It's
> better than rcu_barrier probably, but not terribly nice.

Where do you see the possibility of a mount detached from a namespace
causing problems? Simply having any count on a mount ensures we cycle
the vfsmount_lock in mntput_no_expire.


Or if you want to see what I am seeing:

The rcu_path_walk starts at one of ".", "/", or file->f_path, all of
which hold a reference on a struct vfsmount.

We perform an rcu_path_walk with the following locking:
br_read_lock(vfsmount_lock);
rcu_read_lock();

We can transition to another vfs mount via follow_mount_rcu
which consults the mount hash table which can only be modified
under the br_write_lock(vfsmount_lock);

We can also transition to another vfs mount via follow_up_rcu
which simply goes to mnt->mnt_parent. Where our starting vfsmount
holds a reference to the target vfsmount.
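
Roughly, that step looks like this (a simplified sketch, not the exact
fs/namei.c code, and with the field layout simplified; in this era some
of these fields actually live in struct mount rather than struct
vfsmount):

        /* Sketch: step from a mounted filesystem up to the mount it
         * sits on.  No reference is taken here; the parent stays pinned
         * because a child mount holds a reference on its parent. */
        static int example_follow_up_rcu(struct path *path)
        {
                struct vfsmount *parent = path->mnt->mnt_parent;

                if (parent == path->mnt)        /* already at the top */
                        return 0;
                path->dentry = path->mnt->mnt_mountpoint;
                path->mnt = parent;
                return 1;
        }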

When we complete the rcu_path_walk we do:
rcu_read_unlock()
br_read_unlock(vfsmount_lock)

mntput_no_expire, which decrements the mount count, takes and releases
br_write_lock(vfsmount_lock) before it drops the final mount reference.
This means it is impossible for the final mntput on a mount to complete
while we are in the middle of an rcu path walk.

Once we have taken and released br_write_lock(vfsmount_lock)
in mntput_no_expire we call mntfree. mntfree calls
deactivate_super, and deactivate_super calls deactivate_locked_super.

Which is a long-winded way of saying that we always call
deactivate_locked_super after the final mount reference has been dropped.
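
A paraphrased sketch of that ordering (not the exact fs/namespace.c
code: the fast path for non-final mntputs is omitted and
drop_last_reference() is a made-up helper):

        static void example_mntput_no_expire(struct vfsmount *mnt)
        {
                br_write_lock(vfsmount_lock);   /* excludes every rcu-walk,
                                                   which holds the read side */
                if (!drop_last_reference(mnt)) {
                        br_write_unlock(vfsmount_lock);
                        return;
                }
                br_write_unlock(vfsmount_lock);

                /* Only now, when no rcu-walk can still see the mount: */
                mntfree(mnt);   /* -> deactivate_super()
                                 *    -> deactivate_locked_super() */
        }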

I don't see how a mount can possibly be freed while we are in
the middle of an rcu path walk. Not while we hold
br_read_lock(vfsmount_lock) and the final mntput takes
br_write_lock(vfsmount_lock).


>> Documentation/filesystems/vfs.txt seems to duplicate this reasoning
>> of why the superblock is safe.  Because we hold a real reference to it
>> from the vfsmount.
>
> rcu walk does not hold a reference to the vfsmount, however. It can
> go away. This is why functions which can be called from rcu-walk
> must go through synchronize_rcu() before they go away, also before
> the superblock goes away.

Not at all.

The rcu walk itself does not hold a reference to the vfsmount, but
something else does, and to drop the final reference on a vfsmount we
must hold the vfsmount_lock for write. The rcu walk holds the
vfsmount_lock for read, which prevents the final mntput from grabbing
the vfsmount_lock for write.

We need to wait an rcu grace period before freeing dentries and inodes
because rcu is the only protection we have for them. For vfsmounts and
the superblock we have a lock-protected reference count.
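
That is the asymmetry the rcu_barrier() exists for: inodes are freed
through call_rcu(), so the pending callbacks must be flushed before the
filesystem module can be unloaded, while the vfsmount and superblock
are torn down only once a locked reference count drops to zero.
Roughly (a sketch of the pattern, not the exact fs/inode.c code;
example_inode_cachep is a stand-in):

        /* Sketch of the rcu-deferred free used for inodes: the real
         * kmem_cache_free() runs from an rcu callback some time later,
         * which is what the rcu_barrier() in deactivate_locked_super()
         * waits for. */
        static void example_i_callback(struct rcu_head *head)
        {
                struct inode *inode = container_of(head, struct inode, i_rcu);

                kmem_cache_free(example_inode_cachep, inode);
        }

        static void example_destroy_inode(struct inode *inode)
        {
                call_rcu(&inode->i_rcu, example_i_callback);
        }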

> The other way we could change the rule is to require barrier only for
> those filesystems which access superblock or other info from rcu-walk.
> I would prefer not to have such a rule, but it could be pragmatic.

I don't see that we need to change a rule.

Eric