Re: More parallel atomic_open/d_splice_alias fun with NFS and possibly more FSes.

From: Oleg Drokin
Date: Tue Jul 05 2016 - 11:21:49 EST



On Jul 5, 2016, at 9:51 AM, Al Viro wrote:

> On Tue, Jul 05, 2016 at 01:31:10PM +0100, Al Viro wrote:
>> On Tue, Jul 05, 2016 at 02:22:48AM -0400, Oleg Drokin wrote:
>>
>>>> + if (!(open_flags & O_CREAT) && !d_unhashed(dentry)) {
>>
>> s/d_unhashed/d_in_lookup/ in that.
>>
>>> So we come racing here from multiple threads (say 3 or more - we have seen this
>>> in the older crash reports, so totally possible)
>>>
>>>> + d_drop(dentry);
>>>
>>> One lucky thread does this first, before the others perform the !d_unhashed check above.
>>> This keeps the other threads from entering here.
>>>
>>> And we are back to the original problem of multiple threads trying to instantiate
>>> same dentry as before.
>>
>> Yep. See above - it should've been using d_in_lookup() in the first place,
>> through the entire nfs_atomic_open(). Same in the Lustre part of fixes,
>> obviously.
>
> See current #for-linus for hopefully fixed variants (both lustre and nfs)

The first patch of the series:
> @@ -416,9 +416,9 @@ static int ll_lookup_it_finish(struct ptlrpc_request *request,
> ...
> - if (d_unhashed(*de)) {
> + if (d_in_lookup(*de)) {
> struct dentry *alias;
>
> alias = ll_splice_alias(inode, *de);

This breaks Lustre: we can now progress further in this function without
calling into ll_splice_alias(), and that is the only place where we do the
ll_d_init() that later code depends on, so we crash violently the next time
we call e.g. d_lustre_revalidate() further down that path.
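The control flow I mean, roughly (my sketch of ll_lookup_it_finish() after the patch, not verbatim Lustre code):

```c
if (d_in_lookup(*de)) {		/* was: d_unhashed(*de) */
	struct dentry *alias;

	alias = ll_splice_alias(inode, *de);	/* the only path doing ll_d_init() */
	...
}
/*
 * If we reach this point without having gone through ll_splice_alias(),
 * the per-dentry data that ll_d_init() sets up was never allocated, so
 * e.g. d_lustre_revalidate() below dereferences it and crashes.
 */
d_lustre_revalidate(*de);
```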

Also, I still wonder: what's to stop d_alloc_parallel() from returning
a hashed dentry with d_in_lookup() still true?
There certainly seems to be a sizable gap between hashing the dentry and
dropping the PAR bit in there, which I imagine might allow __d_lookup_rcu()
to pick the dentry up in between.
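To spell out the ordering I'm worried about (this is the question I'm asking, not a claim about what the dcache actually does):

```c
/*
 *   instantiating thread             rcu-walk thread
 *   --------------------             ---------------
 *   hash the dentry
 *                                    __d_lookup_rcu() finds it;
 *                                    d_in_lookup() still true?
 *   clear DCACHE_PAR_LOOKUP
 *
 * If hashing and clearing DCACHE_PAR_LOOKUP are not atomic with
 * respect to lockless lookups, a hashed-but-still-in-lookup dentry
 * could be observed in the gap, and then d_alloc_parallel() could
 * hand it back in that state.
 */
```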