Re: [PATCH 1/4] mm: Trial do_wp_page() simplification
From: Peter Xu
Date: Fri Sep 18 2020 - 16:41:00 EST
On Fri, Sep 18, 2020 at 02:32:40PM -0300, Jason Gunthorpe wrote:
> On Fri, Sep 18, 2020 at 12:40:32PM -0400, Peter Xu wrote:
>
> > Firstly, in the draft patch mm->has_pinned is introduced, and it's written to 1
> > once FOLL_PIN is used. It's never reset after being set.
>
> Worth thinking about also adding FOLL_LONGTERM here, at least as long
> as it is not a counter. That further limits the impact.
But theoretically we should also trigger COW here even for pages with PIN &&
!LONGTERM, right? Assuming that FOLL_PIN is already a corner case.
> > One issue is when we charge for cgroup we probably can't do that onto the new
> > mm/task, since copy_namespaces() is called after copy_mm(). I don't know
> > enough about cgroup, I thought the child will inherit the parent's, but I'm not
> > sure. Or, can we change that order of copy_namespaces() && copy_mm()? I don't
> > see a problem so far but I'd like to ask first..
>
> Know nothing about cgroups, but I would have guessed that the page
> table allocations would want to be in the cgroup too, is the struct
> page a different bucket?
Good question... I believe this kind of accounting has always been done against
"current" via alloc_page(). But frankly I'm not sure I understand it correctly,
because afaict "current" is the parent during fork(), while it would seem to
make more sense to account it to the child process. I suspect I'm missing
something important, but I can't tell what..
>
> > The other thing is on how to fail. E.g., when COW failed due to either
> > charging of cgroup or ENOMEM, ideally we should fail fork() too. Though that
> > might need more changes - current patch silently kept the shared page for
> > simplicity.
>
> I didn't notice anything tricky here.. Something a bit gross but
> simple seemed workable:
>
> @@ -852,7 +852,7 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> continue;
> }
> entry.val = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
> - vma, addr, rss);
> + vma, addr, rss, &err);
> if (entry.val)
> break;
> progress += 8;
> @@ -865,6 +865,9 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> pte_unmap_unlock(orig_dst_pte, dst_ptl);
> cond_resched();
>
> + if (err)
> + return err;
> +
> if (entry.val) {
> if (add_swap_count_continuation(entry, GFP_KERNEL) < 0)
> return -ENOMEM;
>
> It is not really any different from add_swap_count_continuation()
> failure, which already works..
Yes, it's not pretty, but I do plan to use something like this to avoid
touching all the return paths in copy_one_pte(). I also think the answer to
the last question below matters here.
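For reference, the pattern in that diff (threading an `err` out-parameter
through the copy loop and only checking it after the pte lock is dropped) can
be sketched in isolation like this. These are toy userspace types standing in
for the real mm structures, not the actual kernel code:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a pte; illustration only. */
struct toy_pte { int val; };

/*
 * Mimics the proposed copy_one_pte() signature change: the return value
 * stays reserved for the swap-entry case (0 = none), and allocation
 * failure is reported through *err, so existing callers barely change.
 */
static int copy_one(struct toy_pte *dst, const struct toy_pte *src,
		    int should_fail, int *err)
{
	if (should_fail) {
		*err = -12;	/* -ENOMEM */
		return 0;
	}
	dst->val = src->val;
	return 0;		/* no swap entry to continue */
}

/* Mimics the copy_pte_range() loop from the diff above. */
static int copy_range(struct toy_pte *dst, const struct toy_pte *src,
		      size_t n, size_t fail_at)
{
	int err = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		int entry = copy_one(&dst[i], &src[i], i == fail_at, &err);
		if (entry || err)
			break;
	}
	/* As in the diff: check err after leaving the locked section. */
	if (err)
		return err;
	return 0;
}
```

The appeal is exactly what Jason notes: the failure path rejoins the loop at
the same point where add_swap_count_continuation() failure already exits.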
> > diff --git a/mm/gup.c b/mm/gup.c
> > index e5739a1974d5..cab10cefefe4 100644
> > +++ b/mm/gup.c
> > @@ -1255,6 +1255,17 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
> > BUG_ON(*locked != 1);
> > }
> >
> > + /*
> > + * Mark the mm struct if there's any page pinning attempt. We're
> > + * aggressive on this bit since even if the pinned pages were unpinned
> > + * later on, we'll still keep this bit set for this address space just
> > + * to make everything easy.
> > + *
> > + * TODO: Ideally we can use mm->pinned_vm but only until it's stable.
> > + */
> > + if (flags & FOLL_PIN)
> > + WRITE_ONCE(mm->has_pinned, 1);
>
> This should probably be its own commit, here is a stab at a commit
> message:
>
> Reduce the chance of false positive from page_maybe_dma_pinned() by
> keeping track if the mm_struct has ever been used with
> pin_user_pages(). mm_structs that have never been passed to
> pin_user_pages() cannot have a positive page_maybe_dma_pinned() by
> definition. This allows cases that might drive up the page ref_count
> to avoid any penalty from handling dma_pinned pages.
>
> Due to complexities with unpinning, this trivial version is a permanent
> sticky bit; future work will be needed to make this a counter.
Thanks for writing this. I'll use this commit message once the patch is split,
when I post a formal series. Before that, I hope it's fine to keep everything
in a single patch for simplicity, since I'd like to keep the discussion within
this thread.
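Just to make the intended semantics concrete, the sticky bit can be modeled in
isolation like so. Toy userspace types below, not the real mm code; the real
page_maybe_dma_pinned() works on page refcounts biased by
GUP_PIN_COUNTING_BIAS per pin:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the proposed mm->has_pinned sticky bit: set once on the
 * first pin, never cleared. A maybe-pinned check can then return false
 * for any mm that never pinned anything, no matter how high the page
 * refcount climbed for unrelated reasons (reducing false positives).
 */
struct toy_mm { int has_pinned; };
struct toy_page { int refcount; };

#define TOY_PIN_BIAS 1024	/* stands in for GUP_PIN_COUNTING_BIAS */

static void toy_pin(struct toy_mm *mm, struct toy_page *page)
{
	mm->has_pinned = 1;	/* sticky: never reset, even after unpin */
	page->refcount += TOY_PIN_BIAS;
}

static bool toy_maybe_dma_pinned(const struct toy_mm *mm,
				 const struct toy_page *page)
{
	if (!mm->has_pinned)
		return false;	/* this mm never pinned: cannot be pinned */
	return page->refcount >= TOY_PIN_BIAS;
}
```

The false-positive case the bit eliminates is a page whose refcount is high
purely from ordinary get_page() users in an mm that never called
pin_user_pages().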
>
> > +/*
> > + * Do early COW for the page and the pte.  Return true if the page was
> > + * duplicated successfully, false otherwise.
> > + */
> > +static bool duplicate_page(struct mm_struct *mm, struct vm_area_struct *vma,
>
> Suggest calling 'vma' 'new' here for consistency
OK.
>
> > + unsigned long address, struct page *page,
> > + pte_t *newpte)
> > +{
> > + struct page *new_page;
> > + pte_t entry;
> > +
> > + new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
> > + if (!new_page)
> > + return false;
> > +
> > + copy_user_highpage(new_page, page, address, vma);
> > + if (mem_cgroup_charge(new_page, mm, GFP_KERNEL)) {
> > + put_page(new_page);
> > + return false;
> > + }
> > + cgroup_throttle_swaprate(new_page, GFP_KERNEL);
> > + __SetPageUptodate(new_page);
>
> It looks like these GFP flags can't be GFP_KERNEL; this is called
> inside pte_alloc_map_lock(), which holds a spinlock.
>
> One thought is to lift this logic out to around
> add_swap_count_continuation()? Would need some serious rework to be
> able to store the dst_pte though.
What would happen if we simply used GFP_ATOMIC? Would that be too many pages
to allocate in bulk for an atomic context? IMHO slowness would be fine, but I
don't know the internals of page allocation well; I'm also not sure whether
__GFP_KSWAPD_RECLAIM means we might kick kswapd, and whether we could deadlock
if kswapd tried to take the same spinlock somewhere while we were waiting for
it. It would be good to go this (easy) way, considering this is a very rarely
triggered path, so we could still keep copy_one_pte() simple. Otherwise I seem
to have no choice but to move the page copy logic out of copy_one_pte(), as
you suggested.
--
Peter Xu