RE: [PATCH v5 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared

From: Justin He (Arm Technology China)
Date: Thu Sep 19 2019 - 21:14:03 EST


Hi Catalin

> -----Original Message-----
> From: Catalin Marinas <catalin.marinas@xxxxxxx>
> Sent: 20 September 2019 0:42
> To: Justin He (Arm Technology China) <Justin.He@xxxxxxx>
> Cc: Will Deacon <will@xxxxxxxxxx>; Mark Rutland
> <Mark.Rutland@xxxxxxx>; James Morse <James.Morse@xxxxxxx>; Marc
> Zyngier <maz@xxxxxxxxxx>; Matthew Wilcox <willy@xxxxxxxxxxxxx>; Kirill A.
> Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>; linux-arm-
> kernel@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; linux-
> mm@xxxxxxxxx; Suzuki Poulose <Suzuki.Poulose@xxxxxxx>; Punit
> Agrawal <punitagrawal@xxxxxxxxx>; Anshuman Khandual
> <Anshuman.Khandual@xxxxxxx>; Alex Van Brunt
> <avanbrunt@xxxxxxxxxx>; Robin Murphy <Robin.Murphy@xxxxxxx>;
> Thomas Gleixner <tglx@xxxxxxxxxxxxx>; Andrew Morton <akpm@linux-
> foundation.org>; Jérôme Glisse <jglisse@xxxxxxxxxx>; Ralph Campbell
> <rcampbell@xxxxxxxxxx>; hejianet@xxxxxxxxx; Kaly Xin (Arm Technology
> China) <Kaly.Xin@xxxxxxx>
> Subject: Re: [PATCH v5 3/3] mm: fix double page fault on arm64 if PTE_AF
> is cleared
>
> On Fri, Sep 20, 2019 at 12:12:04AM +0800, Jia He wrote:
> > @@ -2152,7 +2163,29 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> >  	 */
> >  	if (unlikely(!src)) {
> >  		void *kaddr = kmap_atomic(dst);
> > -		void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > +		void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> > +		pte_t entry;
> > +
> > +		/* On architectures with software "accessed" bits, we would
> > +		 * take a double page fault, so mark it accessed here.
> > +		 */
> > +		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > +			spin_lock(vmf->ptl);
> > +			if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > +				entry = pte_mkyoung(vmf->orig_pte);
> > +				if (ptep_set_access_flags(vma, addr,
> > +							  vmf->pte, entry, 0))
> > +					update_mmu_cache(vma, addr, vmf->pte);
> > +			} else {
> > +				/* Other thread has already handled the fault
> > +				 * and we don't need to do anything. If it's
> > +				 * not the case, the fault will be triggered
> > +				 * again on the same address.
> > +				 */
> > +				return -1;
> > +			}
> > +			spin_unlock(vmf->ptl);
>
> Returning with the spinlock held doesn't normally go very well ;).
Yes, my bad. Will fix ASAP.

--
Cheers,
Justin (Jia He)
