[PATCH] mm: softdirty: write protect PTEs created for read faults after VM_SOFTDIRTY cleared
From: Peter Feiner
Date: Wed Aug 20 2014 - 17:46:43 EST
In readable+writable+shared VMAs, PTEs created for read faults have
their write bit set. If the read fault happens after VM_SOFTDIRTY is
cleared, then the PTE's soft-dirty bit will remain clear after
subsequent writes: because the PTE is already writable, those writes
never fault, so the kernel never gets a chance to set the soft-dirty
bit.
Here's a simple code snippet to demonstrate the bug:
char *m = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE,
               MAP_ANONYMOUS | MAP_SHARED, -1, 0);

system("echo 4 > /proc/$PPID/clear_refs"); /* clear VM_SOFTDIRTY */
assert(*m == '\0');     /* new PTE allows write access */
assert(!soft_dirty(m));
*m = 'x';               /* should dirty the page */
assert(soft_dirty(m));  /* fails */
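
For reference, soft_dirty() above is not a libc call; a minimal sketch of
such a helper (an assumption for illustration, not part of this patch) can
read the page's soft-dirty flag, bit 55 of the corresponding entry in
/proc/self/pagemap:

#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Hypothetical helper: returns the soft-dirty bit of the page backing addr. */
static int soft_dirty(void *addr)
{
	uint64_t entry = 0;
	/* pagemap holds one 64-bit entry per virtual page. */
	off_t off = ((uintptr_t)addr / getpagesize()) * sizeof(entry);
	int fd = open("/proc/self/pagemap", O_RDONLY);

	assert(fd >= 0);
	assert(pread(fd, &entry, sizeof(entry), off) == sizeof(entry));
	close(fd);
	return (entry >> 55) & 1;	/* bit 55 is the soft-dirty bit */
}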
With this patch, new PTEs created for read faults are write protected
if the VMA has VM_SOFTDIRTY clear, so the first subsequent write takes
a fault and the PTE is marked soft-dirty.
Signed-off-by: Peter Feiner <pfeiner@xxxxxxxxxx>
---
mm/memory.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c
index ab3537b..282a959 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2755,6 +2755,8 @@ void do_set_pte(struct vm_area_struct *vma, unsigned long address,
entry = maybe_mkwrite(pte_mkdirty(entry), vma);
else if (pte_file(*pte) && pte_file_soft_dirty(*pte))
entry = pte_mksoft_dirty(entry);
+ else if (!(vma->vm_flags & VM_SOFTDIRTY))
+ entry = pte_wrprotect(entry);
if (anon) {
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
page_add_new_anon_rmap(page, vma, address);
--
2.1.0.rc2.206.gedb03e5