Re: Crash on armv7-a using KASAN

From: Ard Biesheuvel
Date: Tue Oct 15 2024 - 13:28:26 EST


On Tue, 15 Oct 2024 at 18:27, Mark Rutland <mark.rutland@xxxxxxx> wrote:
>
> On Tue, Oct 15, 2024 at 06:07:00PM +0200, Ard Biesheuvel wrote:
> > On Tue, 15 Oct 2024 at 17:26, Mark Rutland <mark.rutland@xxxxxxx> wrote:
> > > Looking some more, I don't see how VMAP_STACK guarantees that the
> > > old/active stack is mapped in the new mm when switching from the old mm
> > > to the new mm (which happens before __switch_to()).
> > >
> > > Either I'm missing something, or we have a latent bug. Maybe we have
> > > some explicit copying/prefaulting elsewhere I'm missing?
> >
> > We bump the vmalloc_seq counter for that. Given that the top-level
> > page table can only gain entries covering the kernel space, this
> > should be sufficient for the old task's stack to be mapped in the new
> > task's page tables.
>
> Ah, yep -- I had missed that. Thanks for the pointer!
>
> From a superficial look, it sounds like it should be possible to extend
> that to also handle the KASAN shadow of the vmalloc area (which
> __check_vmalloc_seq() currently doesn't copy), but I'm not sure of
> exactly when we initialise the shadow for a vmalloc allocation relative
> to updating vmalloc_seq.
>

Indeed. It appears that both check_vmalloc_seq() and
arch_sync_kernel_mappings() need to take the vmalloc shadow into
account specifically. And we may also need to retain the dummy read
from the stack's shadow in __switch_to() - I am pretty sure I added
that for a reason.