Re: [PATCH] ARM: add BUILD_BUG_ON to check if fixmap range spans multiple pmds
From: Ard Biesheuvel
Date: Tue Oct 26 2021 - 07:26:20 EST
On Tue, 26 Oct 2021 at 13:16, Russell King (Oracle)
<linux@xxxxxxxxxxxxxxx> wrote:
>
> On Tue, Oct 26, 2021 at 12:56:08PM +0200, Ard Biesheuvel wrote:
> > On Tue, 26 Oct 2021 at 12:55, Russell King (Oracle)
> > <linux@xxxxxxxxxxxxxxx> wrote:
> > >
> > > On Tue, Oct 26, 2021 at 06:38:16PM +0800, Quanyang Wang wrote:
> > > > Hi Ard,
> > > >
> > > > On 10/26/21 6:12 PM, Ard Biesheuvel wrote:
> > > > > On Tue, 26 Oct 2021 at 11:53, Quanyang Wang <quanyang.wang@xxxxxxxxxxxxx> wrote:
...
> > > > But the ptep is calculated as "kmap_pte - idx", which assumes that all
> > > > the ptes are laid out next to each other with no gaps. But for ARM, the
> > > > ptes for the range 0xffe00000~0xfff00000 are not next to the ptes for
> > > > the range 0xffc80000~0xffdfffff.
> > > >
> > > > When the idx is larger than 256, the virtual address falls in the
> > > > 0xffdxxxxx range, and accessing that address will crash since its
> > > > pteval isn't set correctly.
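
(For context, the generic map path in mm/highmem.c does roughly the
following; a simplified sketch, with the error checking omitted:

	idx   = arch_kmap_local_map_idx(kmap_local_idx_push(), pfn);
	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
	/* higher fixmap indices map to lower virtual addresses, so the
	 * PTE for slot idx is assumed to live at kmap_pte - idx */
	arch_kmap_local_set_pte(&init_mm, vaddr, kmap_pte - idx,
				pfn_pte(pfn, prot));

i.e. the pointer arithmetic silently assumes one contiguous PTE array
covering the whole fixmap region.)
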
> > >
> > > Thanks for the explanation.
> > >
> > > Sadly, this does seem to be correct. Even if the PTE tables are
> > > located next to each other in memory, they _still_ won't be a
> > > contiguous array of entries due to being interleaved with the Linux
> > > PTE table and the hardware PTE table.
> > >
> > > Since the address range 0xffe00000-0xfff00000 is already half of one
> > > PTE table containing 512 contiguous entries, we are limited to 256
> > > fixmap PTEs maximum. If we have more than that we will start trampling
> > > over memory below the PTE table _and_ we will start corrupting Linux
> > > PTE entries in the 0xfff00000-0xffffffff range.
> > >
> > > I suspect this hasn't been seen because of a general lack of ARM
> > > systems with more than 4 CPUs.
> > >
> >
> > But doesn't that make it a kmap_local regression? Or do you think this
> > issue existed before that as well?
>
> It definitely is a bug in tglx's kmap_local code, which assumes all
> PTEs in the fixmap region are contiguously arranged.
>
> Looking back further, when local kmaps were handled in arch code, this
> bug did /not/ exist. We used to get the PTE entry to update via:
>
> unsigned long vaddr = __fix_to_virt(idx);
> pte_t *ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
>
> which later became:
>
> pte_t *ptep = virt_to_kpte(vaddr);
>
> Both of which walk the page tables.
>
> So in summary a regression caused by converting ARM to kmap_local.
>
> I think we could fix it by providing our own arch_kmap_local_set_pte()
> which ignores the ptep argument, and instead walks the page tables
> using the vaddr argument.
>
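The override itself would be simple enough, something like this
(untested sketch, reusing the existing virt_to_kpte() helper):

	/* ignore the precomputed ptep and re-walk the kernel page
	 * tables based on the virtual address instead */
	#define arch_kmap_local_set_pte(mm, vaddr, ptep, ptev)	\
		set_pte_at(mm, vaddr, virt_to_kpte(vaddr), ptev)
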
Removing all occurrences of 'kmap_pte - idx' and replacing them with
virt_to_kpte() seems to do the trick. Unfortunately, 'kmap_pte - idx'
occurs in other places as well, not only on the map path, so I doubt
that overriding arch_kmap_local_set_pte() will be sufficient.
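
I.e., in the generic code, something along these lines (sketch only):

	/* walk init_mm's page tables for the fixmap slot instead of
	 * assuming the fixmap PTEs form one contiguous array */
	unsigned long vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
	pte_t *ptep = virt_to_kpte(vaddr);

with ptep used wherever 'kmap_pte - idx' appears today, not only on
the map path.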