Re: [PATCH] mm,unmap: avoid flushing TLB in batch if PTE is inaccessible
From: Nadav Amit
Date: Wed Apr 12 2023 - 13:01:36 EST
> On Apr 11, 2023, at 6:50 PM, Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>
> Nadav Amit <namit@xxxxxxxxxx> writes:
>
>>> On Apr 10, 2023, at 6:31 PM, Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>>>
>>> Hi, Amit,
>>>
>>> Thank you very much for review!
>>>
>>> Nadav Amit <namit@xxxxxxxxxx> writes:
>>>
>>>>> On Apr 10, 2023, at 12:52 AM, Huang Ying <ying.huang@xxxxxxxxx> wrote:
>>>>>
>>>>> 0Day/LKP reported a performance regression for commit
>>>>> 7e12beb8ca2a ("migrate_pages: batch flushing TLB"). In the commit, the
>>>>> TLB flushing during page migration is batched. So, in
>>>>> try_to_migrate_one(), ptep_clear_flush() is replaced with
>>>>> set_tlb_ubc_flush_pending(). Further investigation found that
>>>>> ptep_clear_flush() already avoids the TLB flush if the PTE is
>>>>> inaccessible. In fact, we can apply the same optimization to the
>>>>> batched TLB flushing to improve its performance.
>>>>>
>>>>> So in this patch, we check pte_accessible() before
>>>>> set_tlb_ubc_flush_pending() in try_to_unmap/migrate_one(). Tests
>>>>> show that the benchmark score of the anon-cow-rand-mt test case of
>>>>> the vm-scalability test suite improves by up to 2.1% with this
>>>>> patch on an Intel server machine, and the number of TLB flush IPIs
>>>>> drops by up to 44.3%.
>>>>
>>>> LGTM.
>>>
>>> Thanks!
>>>
>>>> I know it’s meaningless for x86 (but perhaps ARM would use this infra
>>>> too): do we need smp_mb__after_atomic() after ptep_get_and_clear() and
>>>> before pte_accessible()?
>>>
>>> Why do we need the memory barrier? IIUC, the PTL is locked, so the
>>> PTE value will not be changed under us. Anything else?
>>
>> I was thinking about the ordering with respect to
>> atomic_read(&mm->tlb_flush_pending), which is not protected by the PTL.
>> I guess you can correctly argue that because of other control-flow
>> dependencies, the barrier is not necessary.
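[ For context: on x86, pte_accessible() is where
  atomic_read(&mm->tlb_flush_pending) comes in, since a non-present
  (PROT_NONE) PTE only counts as accessible while a TLB flush is
  pending.  Paraphrased from arch/x86/include/asm/pgtable.h, details
  elided: ]

	static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
	{
		if (pte_flags(a) & _PAGE_PRESENT)
			return true;

		if ((pte_flags(a) & _PAGE_PROTNONE) &&
		    atomic_read(&mm->tlb_flush_pending))
			return true;

		return false;
	}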
>
> For the ordering between ptep_get_and_clear() and
> atomic_read(&mm->tlb_flush_pending), I think the PTL already provides
> the necessary protection. The code path that writes
> mm->tlb_flush_pending is,
>
> tlb_gather_mmu
> inc_tlb_flush_pending a)
> lock PTL
> change PTE b)
> unlock PTL
> tlb_finish_mmu
> dec_tlb_flush_pending c)
>
> While the code path of try_to_unmap/migrate_one() is,
>
> lock PTL
> read and change PTE d)
> read mm->tlb_flush_pending e)
> unlock PTL
>
> Even if e) occurs before d), they cannot occur at the same time as b).
> Am I missing anything?

You didn’t miss anything. I went over the comment on
inc_tlb_flush_pending(), and you do follow the scheme it describes.
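
In other words, a rough sketch of the two sides (not actual kernel
code; modify_pte() is a stand-in for the real PTE update, and the
labels match a)-e) above):

	/* mprotect()/change_protection() side (tlb_gather_mmu path) */
	atomic_inc(&mm->tlb_flush_pending);		/* a) */
	spin_lock(ptl);
	modify_pte();					/* b) e.g. make it PROT_NONE */
	spin_unlock(ptl);				/* release: publishes a) and b) */
	flush_tlb_range(vma, start, end);
	atomic_dec(&mm->tlb_flush_pending);		/* c) */

	/* try_to_unmap_one()/try_to_migrate_one() side */
	spin_lock(ptl);					/* acquire */
	pteval = ptep_get_and_clear(mm, address, pte);	/* d) */
	if (pte_accessible(mm, pteval))			/* e) may read tlb_flush_pending */
		set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
	spin_unlock(ptl);

If d)/e) run after b) has released the PTL, the PTL acquire also makes
a) visible, so pte_accessible() cannot miss a pending flush.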