Re: [PATCH v2 12/26] userfaultfd: wp: apply _PAGE_UFFD_WP bit

From: Peter Xu
Date: Fri Feb 22 2019 - 02:31:51 EST


On Thu, Feb 21, 2019 at 12:44:02PM -0500, Jerome Glisse wrote:
> On Tue, Feb 12, 2019 at 10:56:18AM +0800, Peter Xu wrote:
> > Firstly, introduce two new flags MM_CP_UFFD_WP[_RESOLVE] for
> > change_protection() when used with uffd-wp and make sure the two new
> > flags are exclusively used. Then,
> >
> > - For MM_CP_UFFD_WP: apply the _PAGE_UFFD_WP bit and remove _PAGE_RW
> > when a range of memory is write protected by uffd
> >
> > - For MM_CP_UFFD_WP_RESOLVE: remove the _PAGE_UFFD_WP bit and recover
> > _PAGE_RW when write protection is resolved from userspace
> >
> > And use this new interface in mwriteprotect_range() to replace the old
> > MM_CP_DIRTY_ACCT.
> >
> > Do this change for both PTEs and huge PMDs. Then we can start to
> > identify which PTE/PMD is write protected for general reasons (e.g.,
> > COW or soft dirty tracking), and which is write protected by
> > userfaultfd-wp.
> >
> > Since we should keep the _PAGE_UFFD_WP when doing pte_modify(), add it
> > into _PAGE_CHG_MASK as well. Meanwhile, since we have this new bit, we
> > can be even more strict when detecting uffd-wp page faults in either
> > do_wp_page() or wp_huge_pmd().
> >
> > Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
>
> Few comments but still:
>
> Reviewed-by: Jérôme Glisse <jglisse@xxxxxxxxxx>

Thanks!

>
> > ---
> > arch/x86/include/asm/pgtable_types.h | 2 +-
> > include/linux/mm.h | 5 +++++
> > mm/huge_memory.c | 14 +++++++++++++-
> > mm/memory.c | 4 ++--
> > mm/mprotect.c | 12 ++++++++++++
> > mm/userfaultfd.c | 8 ++++++--
> > 6 files changed, 39 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> > index 8cebcff91e57..dd9c6295d610 100644
> > --- a/arch/x86/include/asm/pgtable_types.h
> > +++ b/arch/x86/include/asm/pgtable_types.h
> > @@ -133,7 +133,7 @@
> > */
> > #define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \
> > _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \
> > - _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
> > + _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_UFFD_WP)
> > #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
>
> This chunk needs to be in the earlier arch specific patch.

Indeed. I'll move it over.
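
For context (mostly a note to myself), the reason the bit has to live in
_PAGE_CHG_MASK is that pte_modify() only preserves bits inside that mask
when it rewrites the protection bits.  A simplified sketch of what the
x86 helper does (illustrative only, not the exact arch code):

/*
 * Simplified sketch of x86 pte_modify(): anything outside
 * _PAGE_CHG_MASK is dropped when the protection bits are replaced,
 * so _PAGE_UFFD_WP must be part of the mask to survive mprotect()
 * and friends.
 */
static inline pte_t pte_modify_sketch(pte_t pte, pgprot_t newprot)
{
	pteval_t val = pte_val(pte);

	val &= _PAGE_CHG_MASK;				/* keep pfn + sticky bits */
	val |= pgprot_val(newprot) & ~_PAGE_CHG_MASK;	/* take the new prot bits */

	return __pte(val);
}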

>
> [...]
>
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 8d65b0f041f9..817335b443c2 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
>
> [...]
>
> > @@ -2198,6 +2208,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> > entry = pte_mkold(entry);
> > if (soft_dirty)
> > entry = pte_mksoft_dirty(entry);
> > + if (uffd_wp)
> > + entry = pte_mkuffd_wp(entry);
> > }
> > pte = pte_offset_map(&_pmd, addr);
> > BUG_ON(!pte_none(*pte));
>
> Reading that code, I thought it would be nice if we could define a
> pte mask that we can OR in instead of all those if () entry |= ... but
> that is just a minor optimization and has no bearing on the present
> patch. Just wanted to say that out loud.

(I agree; though I'll just concentrate on the series for now)
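
Just to jot the idea down for later, something along these lines could
work for __split_huge_pmd_locked() -- an x86-flavoured sketch only,
since generic code shouldn't really OR raw bits into a pte, and the
variable names here are made up:

/*
 * Build the extra software bits once before the per-pte loop instead
 * of branching for every pte.  Illustrative only.
 */
pteval_t extra_bits = 0;

if (soft_dirty)
	extra_bits |= _PAGE_SOFT_DIRTY;
if (uffd_wp)
	extra_bits |= _PAGE_UFFD_WP;

for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
	pte_t entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));

	entry = __pte(pte_val(entry) | extra_bits);
	/* ... rest of the per-pte setup stays as today ... */
}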

>
>
> > diff --git a/mm/mprotect.c b/mm/mprotect.c
> > index a6ba448c8565..9d4433044c21 100644
> > --- a/mm/mprotect.c
> > +++ b/mm/mprotect.c
> > @@ -46,6 +46,8 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> > int target_node = NUMA_NO_NODE;
> > bool dirty_accountable = cp_flags & MM_CP_DIRTY_ACCT;
> > bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
> > + bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
> > + bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
> >
> > /*
> > * Can be called with only the mmap_sem for reading by
> > @@ -117,6 +119,14 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> > if (preserve_write)
> > ptent = pte_mk_savedwrite(ptent);
> >
> > + if (uffd_wp) {
> > + ptent = pte_wrprotect(ptent);
> > + ptent = pte_mkuffd_wp(ptent);
> > + } else if (uffd_wp_resolve) {
> > + ptent = pte_mkwrite(ptent);
> > + ptent = pte_clear_uffd_wp(ptent);
> > + }
> > +
> > /* Avoid taking write faults for known dirty pages */
> > if (dirty_accountable && pte_dirty(ptent) &&
> > (pte_soft_dirty(ptent) ||
> > @@ -301,6 +311,8 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
> > {
> > unsigned long pages;
> >
> > + BUG_ON((cp_flags & MM_CP_UFFD_WP_ALL) == MM_CP_UFFD_WP_ALL);
>
> Don't you want to abort and return here if both flags are set ?

Here I would slightly prefer BUG_ON() because current code (any
userspace syscall) cannot trigger this without changing the kernel:
the only kernel user of these two flags so far is
mwriteprotect_range(), and it only ever passes one of them.  The check
only matters once new kernel code (or a new driver) starts using the
flags, where it serves to catch programming errors.  For that case
IMHO BUG_ON() is more straightforward.

Thanks,

--
Peter Xu