Re: [PATCH] x86/mm: Handle physical-virtual alignment mismatch in phys_p4d_init()

From: Kirill A. Shutemov
Date: Fri Jun 21 2019 - 06:54:52 EST


On Fri, Jun 21, 2019 at 05:02:49PM +0800, Baoquan He wrote:
> Hi Kirill,
>
> On 06/20/19 at 02:22pm, Kirill A. Shutemov wrote:
> > Kyle has reported that the kernel sometimes crashes when it boots in
> > 5-level paging mode with KASLR enabled:
>
> This is a great finding, thanks for the fix. I once modified the code
> to accommodate PMD-level randomization, and this phys_p4d_init() part
> was included. Not sure why I missed it when I later switched to
> PUD-level randomization for 5-level.
>
> https://github.com/baoquan-he/linux/commit/dc91f5292bf1f55666c9139b6621d830b5b38aa5
>
> Have some concerns, please check.
>
> > [ 0.000000] WARNING: CPU: 0 PID: 0 at arch/x86/mm/init_64.c:87 phys_p4d_init+0x1d4/0x1ea
> ......
> > Kyle bisected the issue to commit b569c1843498 ("x86/mm/KASLR: Reduce
> > randomization granularity for 5-level paging to 1GB")
> >
> > The commit relaxes KASLR alignment requirements and it can lead to
> > mismatch bentween 'i' and 'p4d_index(vaddr)' inside the loop in
> ^ between
> > phys_p4d_init(). The mismatch, in turn, leads to clearing the wrong
> > p4d entry and eventually to the oops.
> >
> > The fix is to make phys_p4d_init() walk virtual address space, not
> > physical one.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> > Reported-and-tested-by: Kyle Pelton <kyle.d.pelton@xxxxxxxxx>
> > Fixes: b569c1843498 ("x86/mm/KASLR: Reduce randomization granularity for 5-level paging to 1GB")
> > Cc: Baoquan He <bhe@xxxxxxxxxx>
> > ---
> > arch/x86/mm/init_64.c | 39 ++++++++++++++++-----------------------
> > 1 file changed, 16 insertions(+), 23 deletions(-)
> >
> > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > index 693aaf28d5fe..4628ac9105a2 100644
> > --- a/arch/x86/mm/init_64.c
> > +++ b/arch/x86/mm/init_64.c
> > @@ -671,41 +671,34 @@ static unsigned long __meminit
> > phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
> > unsigned long page_size_mask, bool init)
> > {
> > - unsigned long paddr_next, paddr_last = paddr_end;
> > - unsigned long vaddr = (unsigned long)__va(paddr);
> > - int i = p4d_index(vaddr);
> > + unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next, paddr_last;
> > +
> > + paddr_last = paddr_end;
> > + vaddr = (unsigned long)__va(paddr);
> > + vaddr_end = (unsigned long)__va(paddr_end);
> > + vaddr_start = vaddr;
>
> Variable vaddr_start is not used in this patch, redundant?

Yep. I'll drop it in v2.
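
For the record, the drift is easy to demonstrate outside the kernel.
Below is a userspace sketch (not kernel code; simplified constants, with
page_offset standing in for the KASLR'd direct-mapping base, 1GB-aligned
but deliberately not P4D-aligned). It mirrors the pre-patch loop
structure, minus the clearing logic: the physically-stepped counter 'i'
falls out of sync with p4d_index(vaddr) as soon as the region does not
start on a physical P4D boundary.

#include <stdio.h>
#include <stdint.h>

#define P4D_SHIFT	39
#define PTRS_PER_P4D	512
#define P4D_SIZE	(1ULL << P4D_SHIFT)
#define P4D_MASK	(~(P4D_SIZE - 1))
#define PUD_SIZE	(1ULL << 30)		/* 1GB KASLR step */

/* 1GB-aligned but not 512GB-aligned, like a KASLR'd page_offset_base */
static const uint64_t page_offset = PUD_SIZE;

#define __va(paddr)	((paddr) + page_offset)
#define p4d_index(va)	(((va) >> P4D_SHIFT) & (PTRS_PER_P4D - 1))

int main(void)
{
	/* Memory region starting 1GB below a physical P4D boundary */
	uint64_t paddr = P4D_SIZE - PUD_SIZE;
	uint64_t paddr_end = paddr + 2 * P4D_SIZE;
	int i = p4d_index(__va(paddr));

	for (; i < PTRS_PER_P4D && paddr < paddr_end; i++) {
		uint64_t vaddr = __va(paddr);

		printf("i=%d p4d_index(vaddr)=%llu%s\n", i,
		       (unsigned long long)p4d_index(vaddr),
		       i == (int)p4d_index(vaddr) ? "" : "  <- mismatch");

		/* physical stepping, as in the pre-patch loop */
		paddr = (paddr & P4D_MASK) + P4D_SIZE;
	}
	return 0;
}

On this input it reports a mismatch from the second iteration onwards:
once 'i' and p4d_index(vaddr) disagree, the loop's bookkeeping is off
and it can end up clearing the wrong p4d entry, as described in the
commit message.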

> > if (!pgtable_l5_enabled())
> > return phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
> > page_size_mask, init);
> >
> > - for (; i < PTRS_PER_P4D; i++, paddr = paddr_next) {
> > - p4d_t *p4d;
> > + for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> > + p4d_t *p4d = p4d_page + p4d_index(vaddr);
> > pud_t *pud;
> >
> > - vaddr = (unsigned long)__va(paddr);
> > - p4d = p4d_page + p4d_index(vaddr);
> > - paddr_next = (paddr & P4D_MASK) + P4D_SIZE;
> > + vaddr_next = (vaddr & P4D_MASK) + P4D_SIZE;
> >
>
> The code block below zeroes p4d entries which are not covered by the
> current memory range, if they haven't been mapped already. It's removed
> by this patch; could you also mention that in the log, and explain why
> it doesn't matter now?
>
> If it doesn't matter, should we also remove the similar code in
> phys_pud_init/phys_pmd_init/phys_pte_init? Maybe a prep patch to do the
> cleanup?

It only matters for the levels that contain page table entries that can
point to pages, not to page tables. There are no p4d or pgd huge pages
on x86, so clearing entries at those levels would only leak page tables
without any benefit.

We might have this check on all levels under a p?d_large() condition and
not touch page tables at all.
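
For illustration, a sketch of what that could look like at the PUD
level. It keeps the existing e820 checks as they are and only adds a
pud_large() guard; untested, meant to show the shape of the idea, not a
real patch:

	if (paddr >= paddr_end) {
		/*
		 * Only clear leaf (huge page) entries that are not
		 * backed by RAM; never zap an entry pointing to a
		 * lower-level page table.
		 */
		if (!after_bootmem && pud_large(*pud) &&
		    !e820__mapped_any(paddr & PUD_MASK, paddr_next,
				      E820_TYPE_RAM) &&
		    !e820__mapped_any(paddr & PUD_MASK, paddr_next,
				      E820_TYPE_RESERVED_KERN))
			set_pud_init(pud, __pud(0), init);
		continue;
	}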

BTW, it all becomes rather risky this late in the release cycle. Maybe
we should revert the original patch and try again later with a more
comprehensive solution?

--
Kirill A. Shutemov