Re: [PATCH v5 4/8] mm: Add write-protect and clean utilities for address space ranges

From: Peter Zijlstra
Date: Thu Oct 10 2019 - 09:06:06 EST


On Thu, Oct 10, 2019 at 02:43:10PM +0200, Thomas Hellström (VMware) wrote:

> +/**
> + * struct wp_walk - Private struct for pagetable walk callbacks
> + * @range: Range for mmu notifiers
> + * @tlbflush_start: Address of first modified pte
> + * @tlbflush_end: Address of last modified pte + 1
> + * @total: Total number of modified ptes
> + */
> +struct wp_walk {
> +	struct mmu_notifier_range range;
> +	unsigned long tlbflush_start;
> +	unsigned long tlbflush_end;
> +	unsigned long total;
> +};
> +
> +/**
> + * wp_pte - Write-protect a pte
> + * @pte: Pointer to the pte
> + * @addr: The virtual page address
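> + * @end: End of the virtual page address range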
> + * @walk: pagetable walk callback argument
> + *
> + * The function write-protects a pte and records the range in
> + * virtual address space of touched ptes for efficient range TLB flushes.
> + */
> +static int wp_pte(pte_t *pte, unsigned long addr, unsigned long end,
> +		  struct mm_walk *walk)
> +{
> +	struct wp_walk *wpwalk = walk->private;
> +	pte_t ptent = *pte;
> +
> +	if (pte_write(ptent)) {
> +		pte_t old_pte = ptep_modify_prot_start(walk->vma, addr, pte);
> +
> +		ptent = pte_wrprotect(old_pte);
> +		ptep_modify_prot_commit(walk->vma, addr, pte, old_pte, ptent);
> +		wpwalk->total++;
> +		wpwalk->tlbflush_start = min(wpwalk->tlbflush_start, addr);
> +		wpwalk->tlbflush_end = max(wpwalk->tlbflush_end,
> +					   addr + PAGE_SIZE);
> +	}
> +
> +	return 0;
> +}

> +/*
> + * wp_clean_pre_vma - The pagewalk pre_vma callback.
> + *
> + * The pre_vma callback performs the cache flush, stages the tlb flush
> + * and calls the necessary mmu notifiers.
> + */
> +static int wp_clean_pre_vma(unsigned long start, unsigned long end,
> +			    struct mm_walk *walk)
> +{
> +	struct wp_walk *wpwalk = walk->private;
> +
> +	wpwalk->tlbflush_start = end;
> +	wpwalk->tlbflush_end = start;
> +
> +	mmu_notifier_range_init(&wpwalk->range, MMU_NOTIFY_PROTECTION_PAGE, 0,
> +				walk->vma, walk->mm, start, end);
> +	mmu_notifier_invalidate_range_start(&wpwalk->range);
> +	flush_cache_range(walk->vma, start, end);
> +
> +	/*
> +	 * We're not using tlb_gather_mmu() since typically
> +	 * only a small subrange of PTEs are affected, whereas
> +	 * tlb_gather_mmu() records the full range.
> +	 */
> +	inc_tlb_flush_pending(walk->mm);
> +
> +	return 0;
> +}
> +
> +/*
> + * wp_clean_post_vma - The pagewalk post_vma callback.
> + *
> + * The post_vma callback performs the tlb flush and calls necessary mmu
> + * notifiers.
> + */
> +static void wp_clean_post_vma(struct mm_walk *walk)
> +{
> +	struct wp_walk *wpwalk = walk->private;
> +
> +	if (wpwalk->tlbflush_end > wpwalk->tlbflush_start)
> +		flush_tlb_range(walk->vma, wpwalk->tlbflush_start,
> +				wpwalk->tlbflush_end);
> +
> +	mmu_notifier_invalidate_range_end(&wpwalk->range);
> +	dec_tlb_flush_pending(walk->mm);
> +}

> +/**
> + * wp_shared_mapping_range - Write-protect all ptes in an address space range
> + * @mapping: The address_space we want to write protect
> + * @first_index: The first page offset in the range
> + * @nr: Number of incremental page offsets to cover
> + *
> + * Note: This function currently skips transhuge page-table entries, since
> + * it's intended for dirty-tracking on the PTE level. It will warn on
> + * encountering transhuge write-enabled entries, though, and can easily be
> + * extended to handle them as well.
> + *
> + * Return: The number of ptes actually write-protected. Note that
> + * already write-protected ptes are not counted.
> + */
> +unsigned long wp_shared_mapping_range(struct address_space *mapping,
> +				      pgoff_t first_index, pgoff_t nr)
> +{
> +	struct wp_walk wpwalk = { .total = 0 };
> +
> +	i_mmap_lock_read(mapping);
> +	WARN_ON(walk_page_mapping(mapping, first_index, nr, &wp_walk_ops,
> +				  &wpwalk));
> +	i_mmap_unlock_read(mapping);
> +
> +	return wpwalk.total;
> +}
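
As I read the kerneldoc, a caller doing PTE-level dirty tracking would
use this roughly as below ('mapping', 'first' and 'nr' are made-up
names, not from the patch):

	/*
	 * Write-protect the ptes backing page offsets [first, first + nr)
	 * of a shared mapping, so the next CPU write faults and can be
	 * dirty-tracked.
	 */
	unsigned long nr_wp = wp_shared_mapping_range(mapping, first, nr);

	/* nr_wp ptes were writable and are now write-protected. */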

Note that i_mmap_lock_read() is a read lock, which means this function
can run concurrently with itself. What happens if someone does two
concurrent wp_shared_mapping_range() calls on the same mapping?

The thing is, because of pte_wrprotect() the iteration that starts last
will see a smaller pte_write() range; if it then completes first and
does flush_tlb_range(), it will only flush a partial range.
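
Concretely, consider an interleaving like this (hypothetical addresses
A and B within the walked range, two concurrent walkers):

	walker 1			walker 2
	--------			--------
	wp_pte(A): wrprotects A
					wp_pte(A): !pte_write(), skipped
					wp_pte(B): wrprotects B
					flush_tlb_range(B, B + PAGE_SIZE)
					returns; its caller assumes A..B
					is write-protected, but a stale
					writable TLB entry for A can
					still exist
	flush_tlb_range(A, A + PAGE_SIZE)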

This is exactly the situation {inc,dec}_tlb_flush_pending() exists for,
but you're not using mm_tlb_flush_nested() to detect it and do a bigger
flush, along the lines of the sketch below.
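
Untested sketch, reusing the wp_walk fields quoted above; the nested
case falls back to flushing the whole notifier range instead of only
the ptes this walker itself touched:

	static void wp_clean_post_vma(struct mm_walk *walk)
	{
		struct wp_walk *wpwalk = walk->private;

		if (mm_tlb_flush_nested(walk->mm)) {
			/*
			 * A concurrent walker may have write-protected
			 * ptes in our range before we got to them, so
			 * flush the whole range rather than only what
			 * we modified ourselves.
			 */
			flush_tlb_range(walk->vma, wpwalk->range.start,
					wpwalk->range.end);
		} else if (wpwalk->tlbflush_end > wpwalk->tlbflush_start) {
			flush_tlb_range(walk->vma, wpwalk->tlbflush_start,
					wpwalk->tlbflush_end);
		}

		mmu_notifier_invalidate_range_end(&wpwalk->range);
		dec_tlb_flush_pending(walk->mm);
	}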

Or if you don't need that, then I'm missing why.