Re: [PATCH v4 5/6] dax: fix missing writeprotect the pte entry

From: Muchun Song
Date: Tue Mar 15 2022 - 03:53:25 EST


On Tue, Mar 15, 2022 at 4:50 AM Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
>
> On Fri, Mar 11, 2022 at 1:06 AM Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
> >
> > On Thu, Mar 10, 2022 at 8:59 AM Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
> > >
> > > On Wed, Mar 2, 2022 at 12:30 AM Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
> > > >
> > > > Currently dax_mapping_entry_mkclean() fails to clean and write protect
> > > > the pte entry within a DAX PMD entry during an *sync operation. This
> > > > can result in data loss in the following sequence:
> > > >
> > > > 1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
> > > > making the pmd entry dirty and writeable.
> > > > 2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
> > > > write to the same file, dirtying PMD radix tree entry (already
> > > > done in 1)) and making the pte entry dirty and writeable.
> > > > 3) fsync, flushing out PMD data and cleaning the radix tree entry. We
> > > > currently fail to mark the pte entry as clean and write protected
> > > > since the vma of process B is not covered in dax_entry_mkclean().
> > > > 4) process B writes to the pte. These don't cause any page faults since
> > > > the pte entry is dirty and writeable. The radix tree entry remains
> > > > clean.
> > > > 5) fsync, which fails to flush the dirty PMD data because the radix tree
> > > > entry was clean.
> > > > 6) crash - dirty data that should have been fsync'd as part of 5) could
> > > > still have been in the processor cache, and is lost.
> > >
> > > Excellent description.
> > >
> > > >
> > > > Use pfn_mkclean_range() to clean the pfns, fixing this issue.
> > >
> > > So the original motivation for CONFIG_FS_DAX_LIMITED was for archs
> > > that do not have spare PTE bits to indicate pmd_devmap(). So this fix
> > > can only work in the CONFIG_FS_DAX_LIMITED=n case and in that case it
> > > seems you can use the current page_mkclean_one(), right?
> >
> > I don't know the history of CONFIG_FS_DAX_LIMITED.
> > page_mkclean_one() needs a struct page associated with
> > the pfn; do struct pages exist with CONFIG_FS_DAX_LIMITED
> > and !FS_DAX_PMD?
>
> CONFIG_FS_DAX_LIMITED was created to preserve some DAX use for S390
> which does not have CONFIG_ARCH_HAS_PTE_DEVMAP. Without PTE_DEVMAP
> then get_user_pages() for DAX mappings fails.
>
> To your question, no, there are no pages at all in the
> CONFIG_FS_DAX_LIMITED=y case. So page_mkclean_one() could only be
> deployed for PMD mappings, but I think it is reasonable to just
> disable PMD mappings for the CONFIG_FS_DAX_LIMITED=y case.
>
> Going forward the hope is to remove the ARCH_HAS_PTE_DEVMAP
> requirement for DAX, and use PTE_SPECIAL for the S390 case. However,
> that still wants to have 'struct page' availability as an across the
> board requirement.

Got it. Thanks for your patient explanation.

>
> > If yes, I think you are right. But I don't see where this
> > is guaranteed. I am not familiar with the DAX code, so what
> > am I missing here?
>
> Perhaps I missed a 'struct page' dependency? I thought the bug you are
> fixing only triggers in the presence of PMDs. The

Right.

> CONFIG_FS_DAX_LIMITED=y case can still use the current "page-less"
> mkclean path for PTEs.

But I think introducing pfn_mkclean_range() makes the code
simpler and easier to maintain here, since it handles both PTE
and PMD mappings. Also, page_vma_mapped_walk() has been able to
work on bare PFNs since commit [1], which is exactly the case
here, so we need no extra code to handle the page-less case.
What do you think?

[1] https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=b786e44a4dbfe64476e7120ec7990b89a37be37d