Re: [PATCH, RFC 2/2] Implement sharing/unsharing of PMDs for FS/DAX

From: Larry Bassel
Date: Fri May 10 2019 - 12:18:01 EST


On 09 May 19 09:49, Matthew Wilcox wrote:
> On Thu, May 09, 2019 at 09:05:33AM -0700, Larry Bassel wrote:
> > This is based on (but somewhat different from) what hugetlbfs
> > does to share/unshare page tables.
>
> Wow, that worked out far more cleanly than I was expecting to see.

Yes, I was pleasantly surprised. As I've mentioned already, I
think this is at least partially due to the nature of DAX.

>
> > @@ -4763,6 +4763,19 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
> >  				unsigned long *start, unsigned long *end)
> >  {
> >  }
> > +
> > +unsigned long page_table_shareable(struct vm_area_struct *svma,
> > +				struct vm_area_struct *vma,
> > +				unsigned long addr, pgoff_t idx)
> > +{
> > +	return 0;
> > +}
> > +
> > +bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
> > +{
> > +	return false;
> > +}
>
> I don't think you need these stubs, since the only caller of them is
> also gated by MAY_SHARE_FSDAX_PMD ... right?

These are also called in mm/hugetlb.c, but those calls are gated by
CONFIG_ARCH_WANT_HUGE_PMD_SHARE, and the non-stub definitions sit under
that same option. So if it is not set (though it is set by default), one
wouldn't get FS/DAX sharing even if MAY_SHARE_FSDAX_PMD is set. I don't
think that is what we want (perhaps the real question is how these two
config options should interact). Removing the stubs and compiling the
real functions unconditionally would fix this, and I will make that change.

Maybe these two functions should be moved into mm/memory.c as well.
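
For reference, the logic in these two helpers is small and self-contained.
Roughly (a sketch modeled on the hugetlbfs versions, ignoring linkage and
exact placement, so not necessarily line-for-line what is in this patch):

unsigned long page_table_shareable(struct vm_area_struct *svma,
				struct vm_area_struct *vma,
				unsigned long addr, pgoff_t idx)
{
	unsigned long saddr = ((idx - svma->vm_pgoff) << PAGE_SHIFT) +
				svma->vm_start;
	unsigned long sbase = saddr & PUD_MASK;
	unsigned long s_end = sbase + PUD_SIZE;

	/* Allow the VMAs to share even if only one of them is mlocked. */
	unsigned long vm_flags = vma->vm_flags & VM_LOCKED_CLEAR_MASK;
	unsigned long svm_flags = svma->vm_flags & VM_LOCKED_CLEAR_MASK;

	/*
	 * The virtual addresses must line up at the same offset within a
	 * PUD-sized region, the permissions must match, and the candidate
	 * VMA must span the whole page-table page.
	 */
	if (pmd_index(addr) != pmd_index(saddr) ||
	    vm_flags != svm_flags ||
	    sbase < svma->vm_start || svma->vm_end < s_end)
		return 0;

	return saddr;
}

bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
{
	unsigned long base = addr & PUD_MASK;
	unsigned long end = base + PUD_SIZE;

	/* Shared mapping covering a full, aligned PUD range? */
	if (vma->vm_flags & VM_MAYSHARE &&
	    vma->vm_start <= base && end <= vma->vm_end)
		return true;
	return false;
}

Nothing there depends on hugetlbfs itself, or on anything arch-specific
beyond PUD_SIZE/PUD_MASK, which is part of why moving them out (and
dropping the stubs) seems reasonable to me.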

>
> > +	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
> > +		if (svma == vma)
> > +			continue;
> > +
> > +		saddr = page_table_shareable(svma, vma, addr, idx);
> > +		if (saddr) {
> > +			spmd = huge_pmd_offset(svma->vm_mm, saddr,
> > +					       vma_mmu_pagesize(svma));
> > +			if (spmd) {
> > +				get_page(virt_to_page(spmd));
> > +				break;
> > +			}
> > +		}
> > +	}
>
> I'd be tempted to reduce the indentation here:
>
>	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
>		if (svma == vma)
>			continue;
>
>		saddr = page_table_shareable(svma, vma, addr, idx);
>		if (!saddr)
>			continue;
>
>		spmd = huge_pmd_offset(svma->vm_mm, saddr,
>				       vma_mmu_pagesize(svma));
>		if (spmd)
>			break;
>	}
>
>
> > +	if (!spmd)
> > +		goto out;
>
> ... and move the get_page() down to here, so we don't split the
> "when we find it" logic between inside and outside the loop.
>
>	get_page(virt_to_page(spmd));
>
> > +
> > +	ptl = pmd_lockptr(mm, spmd);
> > +	spin_lock(ptl);
> > +
> > +	if (pud_none(*pud)) {
> > +		pud_populate(mm, pud,
> > +				(pmd_t *)((unsigned long)spmd & PAGE_MASK));
> > +		mm_inc_nr_pmds(mm);
> > +	} else {
> > +		put_page(virt_to_page(spmd));
> > +	}
> > +	spin_unlock(ptl);
> > +out:
> > +	pmd = pmd_alloc(mm, pud, addr);
> > +	i_mmap_unlock_write(mapping);
>
> I would swap these two lines. There's no need to hold the i_mmap_lock
> while allocating this PMD, is there? I mean, we don't for the !may_share
> case.
>

These were done in the style of the existing functions in mm/hugetlb.c,
and I was trying to change as little as necessary in my copy of them. I
agree these are good suggestions. One could argue that if these changes
are made here, they should also be made in mm/hugetlb.c, though that is
perhaps beyond the scope of getting FS/DAX PMD sharing implemented --
your thoughts?
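
Concretely, with both of your suggestions applied, the code after your
reworked loop would look roughly like this (untested sketch; it assumes
spmd was initialized to NULL before the loop, as in the patch):

	if (!spmd)
		goto out;

	/* Take a reference on the page table we are about to share. */
	get_page(virt_to_page(spmd));

	ptl = pmd_lockptr(mm, spmd);
	spin_lock(ptl);
	if (pud_none(*pud)) {
		pud_populate(mm, pud,
				(pmd_t *)((unsigned long)spmd & PAGE_MASK));
		mm_inc_nr_pmds(mm);
	} else {
		/* Someone else populated the PUD first; drop our reference. */
		put_page(virt_to_page(spmd));
	}
	spin_unlock(ptl);
out:
	/* No need to hold i_mmap_rwsem across the allocation. */
	i_mmap_unlock_write(mapping);
	pmd = pmd_alloc(mm, pud, addr);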

Thanks for the review, I'll wait a few more days for other comments
and then send out a v2.

Larry