Re: [PATCH] btrfs: optimize barrier usage for RMW atomics
From: Nikolay Borisov
Date: Thu Jan 30 2020 - 03:18:41 EST
On 30.01.20 at 1:55, Qu Wenruo wrote:
>
>
> On 2020/1/30 3:25 PM, Davidlohr Bueso wrote:
>> On Wed, 29 Jan 2020, David Sterba wrote:
>>
>>> On Wed, Jan 29, 2020 at 10:03:24AM -0800, Davidlohr Bueso wrote:
>>>> Use smp_mb__after_atomic() instead of smp_mb() and avoid the
>>>> unnecessary barrier for non LL/SC architectures, such as x86.
>>>
>>> So that's conflicting advice from what we got when discussing which
>>> barriers to use in 6282675e6708ec78518cc0e9ad1f1f73d7c5c53d, and the
>>> memory is still fresh. My first idea was to take the
>>> smp_mb__after_atomic() and __before_atomic() variants, but after
>>> discussion with various people the plain smp_wmb()/smp_rmb() were
>>> suggested and used in the end.
>>
>> So the patch you mention deals with test_bit(), which is outside the
>> scope of smp_mb__{before,after}_atomic() as it is not an RMW operation.
>> atomic_inc() and set_bit(), however, are meant to use these barriers.
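
For reference, the pattern in question looks roughly like this (a sketch
with made-up names, not the actual btrfs call sites):

	/*
	 * set_bit() and atomic_inc() are non-value-returning RMW atomics,
	 * so by themselves they imply no ordering. The
	 * smp_mb__{before,after}_atomic() helpers supply it, and they
	 * compile down to a no-op on architectures such as x86, where the
	 * locked RMW instruction is already a full barrier.
	 */
	WRITE_ONCE(s->data, 1);
	smp_mb__before_atomic();	/* order the store before the RMW */
	set_bit(MY_FLAG, &s->flags);

	atomic_inc(&s->refs);
	smp_mb__after_atomic();		/* order the RMW before later accesses */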
>
> Exactly!
> I'm still not convinced we need a full barrier for test_bit(), and I
> see no reason to use any barrier for test_bit() itself.
> A memory barrier is only needed between two or more memory accesses, so
> it should sit between set/clear_bit() and the other accesses, not
> around test_bit().
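
To make that concrete, the usual pairing looks something like this
(again a sketch with hypothetical names; the smp_wmb()/smp_rmb() pair is
what the commit mentioned above ended up using):

	/* writer: publish the data, then the flag */
	WRITE_ONCE(s->result, val);
	smp_wmb();			/* order the data store ... */
	set_bit(DONE_BIT, &s->flags);	/* ... before setting the flag */

	/* reader: check the flag, then consume the data */
	if (test_bit(DONE_BIT, &s->flags)) {
		smp_rmb();		/* pairs with the writer's smp_wmb() */
		val = READ_ONCE(s->result);
	}

The barriers sit between the bit operation and the surrounding accesses;
test_bit() itself carries no ordering and needs none.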
>
>>
>>>
>>> I can dig up the email threads and excerpts from IRC conversations;
>>> maybe Nik has them at hand too. We do want to get rid of all
>>> unnecessary and uncommented barriers in btrfs code, so I appreciate
>>> your patch.
>>
>> Yeah, I struggled with the number of undocumented barriers and decided
>> not to go down that rabbit hole. This patch is only an equivalent of
>> what is currently there. When possible, getting rid of barriers is of
>> course better.
>
> BTW, is there any convincing method for verifying memory barrier usage?
>
> I find it really hard to convince others, or even myself, when memory
> barriers are involved.
Yes there is: the LKMM. You can write a litmus test; check out
tools/memory-model in the kernel tree.
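
For example, the message-passing pattern above turns into a litmus test
like the one below (close to the MP tests shipped in
tools/memory-model/litmus-tests/). herd7 reports that the "exists"
outcome never happens, i.e. the flag can never be observed set without
the data being visible:

	C MP+wmb+rmb

	{}

	P0(int *buf, int *flag)
	{
		WRITE_ONCE(*buf, 1);
		smp_wmb();
		WRITE_ONCE(*flag, 1);
	}

	P1(int *buf, int *flag)
	{
		int r0;
		int r1;

		r0 = READ_ONCE(*flag);
		smp_rmb();
		r1 = READ_ONCE(*buf);
	}

	exists (1:r0=1 /\ 1:r1=0)

Run it with something like:

	herd7 -conf linux-kernel.cfg MP+wmb+rmb.litmus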
>
> Thanks,
> Qu
>
>>
>> Thanks,
>> Davidlohr