Re: [kernel-hardening] Re: [PATCH 12/13] x86/mm/64: Enable vmapped stacks

From: Rik van Riel
Date: Thu Jun 16 2016 - 09:11:45 EST


On Wed, 2016-06-15 at 22:33 -0700, Andy Lutomirski wrote:
>
> > > +++ b/arch/x86/mm/tlb.c
> > > @@ -77,10 +77,25 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> > >          unsigned cpu = smp_processor_id();
> > >
> > >          if (likely(prev != next)) {
> > > +                if (IS_ENABLED(CONFIG_VMAP_STACK)) {
> > > +                        /*
> > > +                         * If our current stack is in vmalloc space and isn't
> > > +                         * mapped in the new pgd, we'll double-fault.  Forcibly
> > > +                         * map it.
> > > +                         */
> > > +                        unsigned int stack_pgd_index =
> > > +                                pgd_index(current_stack_pointer());
> >
> > The stack pointer is still the previous task's at this point, so
> > current_stack_pointer() returns that, not the next task's, which I
> > guess was the intention. Things may happen to work if both tasks are
> > on the same pgd, but at least the boot cpu init_task_struct is
> > special.
> This is intentional.  When switching processes, we first switch the
> mm and then switch the task.  We need to make sure that the prev
> stack is mapped in the new mm, or we'll double-fault and die after
> switching the mm while still trying to execute on the old stack.
>
> The change to switch_to makes sure that the new stack is mapped.
>

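For anyone following along without the full patch in front of them,
here is a minimal sketch (mine, not the actual hunk; the helper name
is made up and locking/corner cases are ignored) of how the prev
stack's pgd entry can be forced into the mm we are switching to, by
copying it from the reference kernel page tables in init_mm:

  /*
   * Sketch only: ensure the pgd slot covering the current (i.e. prev
   * task's) vmalloc'ed stack is populated in the next mm, so we can
   * keep running on that stack after loading next->pgd into CR3.
   */
  static void sync_current_stack_to(struct mm_struct *next)  /* illustrative name */
  {
          unsigned long sp = current_stack_pointer();
          unsigned int index = pgd_index(sp);
          pgd_t *pgd = next->pgd + index;

          /* Kernel-space pgd entries are shared; copy from init_mm. */
          if (pgd_none(*pgd))
                  set_pgd(pgd, init_mm.pgd[index]);
  }

With that in place the old stack stays reachable across the CR3
switch, and the corresponding change in switch_to covers the new
task's stack.
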
On a tangential HARDENED_USERCOPY note: by not allowing
copy_to/from_user access to vmalloc memory by default, with the
exception of the stack, a task will only be able to copy_to/from_user
from its own stack, not from another task's stack, at least via the
kernel virtual address the kernel uses to access that stack.

This can be accomplished by simply not adding any vmalloc checking
code to the current HARDENED_USERCOPY patch set :)
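
To make that concrete, a check along these lines (purely illustrative;
the helper name and exact bounds handling are mine, not part of the
HARDENED_USERCOPY series) would accept only the current task's stack
among vmalloc-backed addresses:

  /*
   * Illustrative only: with no vmalloc whitelisting, the one
   * vmalloc-backed region a usercopy bounds check could still accept
   * is the calling task's own stack.
   */
  static bool usercopy_on_own_stack(const void *ptr, unsigned long n)
  {
          unsigned long addr  = (unsigned long)ptr;
          unsigned long stack = (unsigned long)task_stack_page(current);

          if (n > THREAD_SIZE)
                  return false;
          /* Object must lie entirely within current's stack pages. */
          return addr >= stack && addr <= stack + THREAD_SIZE - n;
  }

Any other task's stack, or any other vmalloc allocation, simply never
passes, because nothing whitelists vmalloc addresses in the first
place.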

--
All rights reversed
