Re: [PATCH v3] mm/gup: Allow real explicit breaking of COW
From: Linus Torvalds
Date: Thu Aug 20 2020 - 18:01:26 EST
On Thu, Aug 20, 2020 at 2:54 PM Peter Xu <peterx@xxxxxxxxxx> wrote:
>
> I kind of prefer the new suggestion to remove code rather than adding new
> code. I definitely don't know enough about its side effects, especially
> performance-wise on either ksm or swap, but... IIUC the worst case is we'll get
> some perf report later on, and it also seems not hard to revert the patch later
> if we want.
Well, would you be willing to try this patch out?
After you apply that patch, you should be able to remove the
should_force_cow_break() games entirely, because a write to the page
should always break COW towards the writer if there are any GUP users
around (put another way: away from the GUP).
However, to make the test meaningful, it really should do some swap testing too.
Linus
From f41082844ea82ad1278e167fe6e973fa4efc974a Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Date: Tue, 11 Aug 2020 14:23:04 -0700
Subject: [PATCH] Trial do_wp_page() simplification
How about we just make sure we're the only possible valid user of the
page before we bother to reuse it?
Simplify, simplify, simplify.
And get rid of the nasty serialization on the page lock at the same time.
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
---
mm/memory.c | 58 +++++++++++++++--------------------------------------
1 file changed, 16 insertions(+), 42 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 602f4283122f..a43004dd2ff6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2927,50 +2927,24 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
* not dirty accountable.
*/
if (PageAnon(vmf->page)) {
- int total_map_swapcount;
- if (PageKsm(vmf->page) && (PageSwapCache(vmf->page) ||
- page_count(vmf->page) != 1))
+ struct page *page = vmf->page;
+
+ if (page_count(page) != 1)
+ goto copy;
+ if (!trylock_page(page))
+ goto copy;
+ if (page_mapcount(page) != 1 && page_count(page) != 1) {
+ unlock_page(page);
goto copy;
- if (!trylock_page(vmf->page)) {
- get_page(vmf->page);
- pte_unmap_unlock(vmf->pte, vmf->ptl);
- lock_page(vmf->page);
- vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
- vmf->address, &vmf->ptl);
- if (!pte_same(*vmf->pte, vmf->orig_pte)) {
- update_mmu_tlb(vma, vmf->address, vmf->pte);
- unlock_page(vmf->page);
- pte_unmap_unlock(vmf->pte, vmf->ptl);
- put_page(vmf->page);
- return 0;
- }
- put_page(vmf->page);
- }
- if (PageKsm(vmf->page)) {
- bool reused = reuse_ksm_page(vmf->page, vmf->vma,
- vmf->address);
- unlock_page(vmf->page);
- if (!reused)
- goto copy;
- wp_page_reuse(vmf);
- return VM_FAULT_WRITE;
- }
- if (reuse_swap_page(vmf->page, &total_map_swapcount)) {
- if (total_map_swapcount == 1) {
- /*
- * The page is all ours. Move it to
- * our anon_vma so the rmap code will
- * not search our parent or siblings.
- * Protected against the rmap code by
- * the page lock.
- */
- page_move_anon_rmap(vmf->page, vma);
- }
- unlock_page(vmf->page);
- wp_page_reuse(vmf);
- return VM_FAULT_WRITE;
}
- unlock_page(vmf->page);
+ /*
+ * Ok, we've got the only map reference, and the only
+ * page count reference, and the page is locked,
+ * it's dark out, and we're wearing sunglasses. Hit it.
+ */
+ wp_page_reuse(vmf);
+ unlock_page(page);
+ return VM_FAULT_WRITE;
} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
(VM_WRITE|VM_SHARED))) {
return wp_page_shared(vmf);
--
2.28.0.218.gc12ef3d349