Re: [PATCH 0/4] hugetlbfs: use hinode_rwsem for pmd sharing synchronization
From: Mike Kravetz
Date: Thu Nov 05 2020 - 16:59:53 EST
On 11/2/20 4:28 PM, Mike Kravetz wrote:
> The RFC series reverted all patches where i_mmap_rwsem was used for
> pmd sharing synchronization, and then added code to use hinode_rwsem.
> This series ends up with the same code in the end, but is structured
> as follows:
>
> - Revert the page fault/truncation race patch which depends on i_mmap_rwsem
> always being taken in page fault path.
> - Add hinode_rwsem to hugetlbfs specific inode and supporting routines.
> - Convert code from using i_mmap_rwsem for pmd sharing synchronization
> to using hinode_rwsem.
> - Add code to more robustly deal with page fault/truncation races.
>
> My hope is that this will be easier to review.
My apologies. Please do not spend any time on this patch series; it
certainly is not something to be sent to stable.

I have created a patch to address the BUG and propose it for stable [1].
After further thought, the approach in this series also has lock ordering
issues. Here is a simple summary of the problem.
With pmd sharing, the pte pointer returned by huge_pte_alloc is not
guaranteed to remain valid. This is because another thread could have
made a call to huge_pmd_unshare and 'unshared' the pmd page containing
the pte pointer.
The current i_mmap_rwsem locking and the inode locking proposed in this
series acquire the semaphore in read mode before calling huge_pte_alloc.
Code continues to hold the semaphore until finished with the pte pointer.
Callers of huge_pmd_unshare hold the semaphore in write mode. Thus, the
semaphore prevents the race.
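Roughly, the scheme looks like this (a simplified pseudocode sketch of
the idea, not the exact kernel code; 'sem' stands for i_mmap_rwsem
today, or the proposed hinode_rwsem):

```c
/* Page fault path: hold the semaphore in read mode for as long as
 * the pte pointer is in use. */
	down_read(&sem);
	ptep = huge_pte_alloc(mm, haddr, sz);	/* may map a shared pmd */
	/* ... find/allocate and lock page, update *ptep under ptl ... */
	up_read(&sem);

/* huge_pmd_unshare callers: hold the semaphore in write mode, so an
 * unshare cannot race with a fault still using the pte pointer. */
	down_write(&sem);
	huge_pmd_unshare(mm, &addr, ptep);
	up_write(&sem);
```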
The problem with this type of approach is lock ordering. The semaphore is
acquired in read mode during page fault processing. The first thing this
code does is 'allocate' a pte with huge_pte_alloc. It will then find or
allocate a page and lock it. Finally, it will lock the page table to update
the pte. Two instances where we may need to take the semaphore in write
mode are page migration and memory failure. In these cases, the first thing
we need to do is lock the page. Only after locking the page can we locate
the semaphore, which needs to be acquired in write mode. Hence we end up
with a classic ABBA deadlock scenario.
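The two conflicting lock orders, sketched as pseudocode (names are
illustrative; 'sem' is whichever semaphore guards pmd sharing):

```c
/* Page fault path:            semaphore (A) before page lock (B) */
	down_read(&sem);	/* A: pmd-sharing semaphore */
	lock_page(page);	/* B: page lock             */
	spin_lock(ptl);		/*    page table lock       */

/* Migration / memory failure: page lock (B) before semaphore (A) */
	lock_page(page);	/* B: must lock the page first to
				 *    find the mapping...        */
	down_write(&sem);	/* A: ...only then take semaphore */
```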
I'm starting to think that adding more synchronization is not the best way
to approach this issue. Rather, we should always validate pte pointers after
acquiring the page table lock. At the lowest level, pmd sharing is
synchronized by the page table lock. We already do some validation of the
pte after acquiring the page table lock. For example, checking if
huge_pte_none() is still true. Before even checking for none, we would
need to look up the pte again (huge_pte_offset) and compare it to the pte
we previously acquired. If they are not the same, we would need to back
out and retry.
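In the fault path, that would look something like the following (a rough
sketch of the idea, not a tested patch; the 'backout' and 'out_unlock'
labels are placeholders):

```c
	ptep = huge_pte_alloc(mm, haddr, huge_page_size(h));
	/* ... */
	ptl = huge_pte_lock(h, mm, ptep);
	/*
	 * A concurrent huge_pmd_unshare() may have dropped the pmd
	 * page that 'ptep' points into.  Re-lookup and compare; if
	 * the pointers differ, back out and retry the fault.
	 */
	if (ptep != huge_pte_offset(mm, haddr, huge_page_size(h))) {
		spin_unlock(ptl);
		goto backout;
	}
	if (!huge_pte_none(huge_ptep_get(ptep)))
		goto out_unlock;
```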
Unless someone has another suggestion, I'll start exploring this approach.
[1] https://lore.kernel.org/linux-mm/20201105195058.78401-1-mike.kravetz@xxxxxxxxxx/
--
Mike Kravetz