Re: [PATCH] RISC-V: mm: Support huge page in vmalloc_fault()

From: Dylan Jhong
Date: Wed Mar 01 2023 - 06:17:45 EST


On Fri, Feb 24, 2023 at 01:47:20PM +0100, Alexandre Ghiti wrote:
> Hi Dylan,
>
> On 2/24/23 11:40, Dylan Jhong wrote:
> > RISC-V supports ioremap() with huge page (pud/pmd) mapping, but
> > vmalloc_fault() assumes that the vmalloc range is limited to pte
> > mappings. Add huge page support to complete the vmalloc_fault()
> > function.
> >
> > Fixes: 310f541a027b ("riscv: Enable HAVE_ARCH_HUGE_VMAP for 64BIT")
> >
> > Signed-off-by: Dylan Jhong <dylan@xxxxxxxxxxxxx>
> > ---
> > arch/riscv/mm/fault.c | 5 +++++
> > 1 file changed, 5 insertions(+)
> >
> > diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> > index eb0774d9c03b..4b9953b47d81 100644
> > --- a/arch/riscv/mm/fault.c
> > +++ b/arch/riscv/mm/fault.c
> > @@ -143,6 +143,8 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
> > no_context(regs, addr);
> > return;
> > }
> > + if (pud_leaf(*pud_k))
> > + goto flush_tlb;
> > /*
> > * Since the vmalloc area is global, it is unnecessary
> > @@ -153,6 +155,8 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
> > no_context(regs, addr);
> > return;
> > }
> > + if (pmd_leaf(*pmd_k))
> > + goto flush_tlb;
> > /*
> > * Make sure the actual PTE exists as well to
> > @@ -172,6 +176,7 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
> > * ordering constraint, not a cache flush; it is
> > * necessary even after writing invalid entries.
> > */
> > +flush_tlb:
> > local_flush_tlb_page(addr);
> > }
>
>
> This looks good to me, you can add:
>
> Reviewed-by: Alexandre Ghiti <alexghiti@xxxxxxxxxxxx>
>
> One question: how did you encounter this bug?
>
> Thanks,
>
> Alex
>
Hi Alex,

> One question: how did you encounter this bug?
This bug is caused by the combination of out-of-order execution and ioremap().
Speculative execution may access the VA returned by ioremap() before the mapping
has actually been created, caching an invalid translation for that VA in the
TLB. When the CPU later really accesses the VA after ioremap() completes, it
takes a page fault because the TLB still holds the stale entry for that VA.

We rely on vmalloc_fault() in the page fault handler to issue sfence.vma and
invalidate the stale TLB entry [1]. But since vmalloc_fault() did not support
huge pages, it tried to resolve a pte entry from what was actually a pmd/pud
huge-page mapping, and we ran into nested page faults inside vmalloc_fault().
That is why I sent this patch.

ref:
[1]: https://patchwork.kernel.org/project/linux-riscv/patch/20210412000531.12249-1-liu@xxxxxxxxxx/