[RFC 1/6] mm: keep dirty bit on KSM page

From: Minchan Kim
Date: Wed Jun 03 2015 - 02:16:31 EST


I encountered a segfault in a test program while testing MADV_FREE
with KSM. Investigation revealed the following sequence:

1. A KSM page is mapped into the page tables of processes A and B with
!pte_dirty (the merge path marks the page PG_dirty if the pte was
dirty, then cleans the pte).

2. MADV_FREE in process A removes the page from the swap cache (if it
was there) and clears *PG_dirty* to indicate the page can be discarded
instead of swapped out.

3. The KSM page's status is now !pte_dirty in both A and B, and
!PageDirty.

4. The VM judges the page freeable and discards it.

5. Process B segfaults even though it never called MADV_FREE.

Clearing PG_dirty after an anonymous page is removed from the swap
cache was never an integrity problem for a private page (i.e., a
normal anon page, not a KSM page); the worst case was an unnecessary
writeout that could have been avoided when the same data was already
on swap.

With the introduction of MADV_FREE, however, it leads to the problem
above, so fix it by keeping the page table's dirty bit when the page
is replaced with a KSM page.

Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
---
mm/ksm.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index bc7be0ee2080..9c07346e57f2 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -901,9 +901,8 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
set_pte_at(mm, addr, ptep, entry);
goto out_unlock;
}
- if (pte_dirty(entry))
- set_page_dirty(page);
- entry = pte_mkclean(pte_wrprotect(entry));
+
+ entry = pte_wrprotect(entry);
set_pte_at_notify(mm, addr, ptep, entry);
}
*orig_pte = *ptep;
@@ -932,11 +931,13 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
struct mm_struct *mm = vma->vm_mm;
pmd_t *pmd;
pte_t *ptep;
+ pte_t entry;
spinlock_t *ptl;
unsigned long addr;
int err = -EFAULT;
unsigned long mmun_start; /* For mmu_notifiers */
unsigned long mmun_end; /* For mmu_notifiers */
+ bool dirty;

addr = page_address_in_vma(page, vma);
if (addr == -EFAULT)
@@ -956,12 +957,22 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
goto out_mn;
}

+ dirty = pte_dirty(*ptep);
get_page(kpage);
page_add_anon_rmap(kpage, vma, addr);

flush_cache_page(vma, addr, pte_pfn(*ptep));
ptep_clear_flush_notify(vma, addr, ptep);
- set_pte_at_notify(mm, addr, ptep, mk_pte(kpage, vma->vm_page_prot));
+
+ entry = mk_pte(kpage, vma->vm_page_prot);
+ /*
+ * Keep a dirty bit to prevent a KSM page sudden freeing
+ * by MADV_FREE.
+ */
+ if (dirty)
+ entry = pte_mkdirty(entry);
+
+ set_pte_at_notify(mm, addr, ptep, entry);

page_remove_rmap(page);
if (!page_mapped(page))
--
1.9.1
