On 23/01/2024 13:06, David Hildenbrand wrote:
On 23.01.24 13:25, Ryan Roberts wrote:
On 22/01/2024 19:41, David Hildenbrand wrote:
Let's ignore these bits: they are irrelevant for fork, and will likely
be irrelevant for upcoming users such as page unmapping.
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
mm/memory.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index f563aec85b2a8..341b2be845b6e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -953,24 +953,30 @@ static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
set_ptes(dst_vma->vm_mm, addr, dst_pte, pte, nr);
}
+static inline pte_t __pte_batch_clear_ignored(pte_t pte)
+{
+ return pte_clear_soft_dirty(pte_mkclean(pte_mkold(pte)));
+}
+
/*
* Detect a PTE batch: consecutive (present) PTEs that map consecutive
* pages of the same folio.
*
* All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN.
nit: last char should be a comma (,) not a full stop (.)
+ * the accessed bit, dirty bit and soft-dirty bit.
*/
static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
pte_t *start_ptep, pte_t pte, int max_nr)
{
unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
const pte_t *end_ptep = start_ptep + max_nr;
- pte_t expected_pte = pte_next_pfn(pte);
+ pte_t expected_pte = __pte_batch_clear_ignored(pte_next_pfn(pte));
pte_t *ptep = start_ptep + 1;
VM_WARN_ON_FOLIO(!pte_present(pte), folio);
while (ptep != end_ptep) {
- pte = ptep_get(ptep);
+ pte = __pte_batch_clear_ignored(ptep_get(ptep));
if (!pte_same(pte, expected_pte))
break;
I think you'll lose dirty information in the child for private mappings? If the
first pte in a batch is clean, but a subsequent pte is dirty, you will end up
setting all the ptes in the batch as clean in the child. The previous behavior
would preserve the dirty bit per-pte for private mappings.
In my version (v3) that did arbitrary batching, I had some fun and games
tracking dirty, write and uffd_wp:
https://lore.kernel.org/linux-arm-kernel/20231204105440.61448-2-ryan.roberts@xxxxxxx/
Also, I think you will currently set soft-dirty on either all or none of the
pages in the batch, depending on the value of the first. I previously convinced
myself that the state was unimportant, so I always cleared it in the child to
provide consistency.
Good points regarding dirty and soft-dirty. I wanted to avoid passing flags to
folio_pte_batch(), but maybe that's just what we need to not change behavior.
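If flags do turn out to be what we need, I guess it would look something like
this (completely untested, and the FPB_IGNORE_* names are just made up to
illustrate the shape):

typedef int __bitwise fpb_t;

#define FPB_IGNORE_DIRTY		((__force fpb_t)BIT(0))
#define FPB_IGNORE_SOFT_DIRTY		((__force fpb_t)BIT(1))

static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
{
	/* Only clear the bits the caller explicitly asked us to ignore. */
	if (flags & FPB_IGNORE_DIRTY)
		pte = pte_mkclean(pte);
	if (flags & FPB_IGNORE_SOFT_DIRTY)
		pte = pte_clear_soft_dirty(pte);
	/* The accessed bit is always irrelevant for batching. */
	return pte_mkold(pte);
}

Callers that genuinely don't care about any of these bits (e.g. a future unmap
user) could pass both flags; fork() could then decide per-mapping which bits it
can safely ignore.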
I think you could do without the enforce_uffd_wp flag and just always enforce
uffd-wp, so that's one simplification vs mine. Then you just need an any_dirty
flag, following the same pattern as your any_writable, and set dirty on the
whole batch in the child if any pte was dirty in the parent.
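Something like this, perhaps (again untested, ignoring the flags question for a
moment, and assuming the any_writable parameter from your later patch):

static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
		pte_t *start_ptep, pte_t pte, int max_nr, bool *any_writable,
		bool *any_dirty)
{
	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
	const pte_t *end_ptep = start_ptep + max_nr;
	pte_t expected_pte = __pte_batch_clear_ignored(pte_next_pfn(pte));
	pte_t *ptep = start_ptep + 1;
	bool writable, dirty;

	if (any_writable)
		*any_writable = false;
	if (any_dirty)
		*any_dirty = false;

	VM_WARN_ON_FOLIO(!pte_present(pte), folio);

	while (ptep != end_ptep) {
		pte = ptep_get(ptep);
		/* Latch the per-pte state before we clear the ignored bits. */
		writable = pte_write(pte);
		dirty = pte_dirty(pte);
		pte = __pte_batch_clear_ignored(pte);

		if (!pte_same(pte, expected_pte))
			break;

		/*
		 * Stop once we leave the folio; in corner cases the next
		 * pfn might belong to a different folio.
		 */
		if (pte_pfn(pte) >= folio_end_pfn)
			break;

		if (any_writable)
			*any_writable |= writable;
		if (any_dirty)
			*any_dirty |= dirty;

		expected_pte = pte_next_pfn(expected_pte);
		ptep++;
	}

	return ptep - start_ptep;
}

The first pte's writable/dirty state doesn't need to be accumulated here, since
the caller already holds the first pte; that's also why the loop starts at
start_ptep + 1.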
Although now I'm wondering if there is a race here... What happens if a page in
the parent becomes dirty after you have checked it but before you write-protect
it? Isn't that already a problem with the current non-batched version? Why do we
even need to preserve dirty in the child for private mappings?