Re: [PATCH v1 1/2] mm: let pte_lockptr() consume a pte_t pointer

From: Qi Zheng
Date: Tue Jul 30 2024 - 05:10:24 EST


Hi David,

On 2024/7/30 16:40, David Hildenbrand wrote:
>>> ... also because I still want to understand why the PTL of the PMD
>>> table is required at all. What if we lock it first and somebody else
>>> wants to lock it after us while we already ripped it out? Sure, there
>>> must be some reason for the lock, I just don't understand it yet :/.
>>
>> For pmd lock, I think this is needed to clear the pmd entry
>> (pmdp_collapse_flush()). For pte lock, there should be the following
>> two reasons:


> Thanks for the details.
>
> Is my current understanding correct that removing *empty* page tables
> is currently only possible if we can guarantee that nothing will try
> modifying the page tables after we drop the PTE page table lock, but
> that we would be fine with someone walking an empty page table while
> we remove it, as long as the access is read-only?
>
> In retract_page_tables() I thought that would be guaranteed, as we
> prevent refaults from the page cache and exclude any uffd + anon
> handling using the big hammer (if any could be active, disallow
> zapping the page table).
>
> What I am still not quite getting is what happens if someone grabs the
> PTE page table lock after we released it -- and how that really
> protects us here.


> I assume it's the
>
> spin_lock(ptl);
> if (likely(pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
>         ...
>
> handling in __pte_offset_map_lock() that guarantees that.

Yes.



> That indeed means that pte_offset_map_nolock() requires similar checks
> after obtaining the PTL (for the cases where we are not holding the
> PMD table lock and cannot be sure that we are the one ripping out that
> table right now).

Yes, if the PTE page will be freed concurrently, then the pmd entry
needs to be rechecked after acquiring the PTL. So I made
pte_offset_map_nolock() return pmdval in my patch[1], and then we
can do the following:

start_pte = pte_offset_map_nolock(mm, pmd, &pmdval, addr, &ptl);
spin_lock(ptl);
if (likely(pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
...


[1] https://lore.kernel.org/lkml/7f5233f9f612c7f58abf218852fb1042d764940b.1719570849.git.zhengqi.arch@xxxxxxxxxxxxx/



>> 1. release it after clearing pmd entry, then we can capture the
>>    changed pmd in pte_offset_map_lock() etc. after holding this pte
>>    lock.
>>    (This is also what I did in my patchset)

> Yes, I get it now.


>> 2. As mentioned in the comments, we may be running concurrently with
>>    userfaultfd_ioctl(), but we do not hold the mmap read lock (or the
>>    vma read lock), so VM_UFFD_WP may be set. Therefore, we need to
>>    hold the pte lock to check whether a new pte entry has been
>>    inserted.
>>    (See commit[1] for more details)


> Yes, I see we tried to document that magic, and it is still absolutely
> confusing :)
>
> But at least now it's clearer to me why retract_page_tables() uses
> slightly different locking than collapse_pte_mapped_thp().
>
> Maybe we should look into letting collapse_pte_mapped_thp() use a
> similar approach as retract_page_tables(), to at least make it more
> consistent. That topic is also touched on in commit a98460494b16.

Agree, more consistency is better.

Thanks,
Qi