Re: [GIT pull] x86/asm for 5.1

From: Linus Torvalds
Date: Mon Mar 11 2019 - 13:41:55 EST

On Mon, Mar 11, 2019 at 9:55 AM Kees Cook <keescook@xxxxxxxxxxxx> wrote:
> Clarification, just so I get the design considerations adjusted
> correctly... I did this with a global because of the observation that
> once CPU setup is done, the pin mask is the same for all CPUs.

Yes. Once we're booted, it's all the same.

It's only during bootup that it has this issue.

> However, yes, I see the point about the chosen implementation
> resulting in a potential timing problem, etc. What about enabling the
> pin mask once CPU init is finished? The goal is to protect those bits
> during runtime (and to stay out of the way at init time).

Yes, that would fix things too, although I suspect you'll find that it
gets hairy for hotplug CPU's (including suspend/resume), because new
CPU's keep booting "forever".

One - and perhaps better - option is to really force the %cr4 bits for
all CPU's *very* early. We actually have some of that logic for %cr4
already, because %cr4 absolutely needs to have things like PAE and
PGE enabled very early, before it even enables paging, because those
bits are already set in the base kernel page tables that it loads.

At that point, the SMEP/SMAP/UMIP bits would *really* be set globally
after the first CPU boots.

Actually, I think it might be less important to initialize %cr4 itself
early, and more important to initialize 'cpu_tlbstate.cr4', but right
now that actually gets initialized from the early %cr4 value (see
'cr4_init_shadow()').
So what might be workable is to make 'cr4_init_shadow()' itself just
do something like

this_cpu_write(cpu_tlbstate.cr4, __read_cr4() | cr4_pin);

but I did *not* check whether there might be some other random
accesses to %cr4 in various legacy paths..

The "use a percpu pinning mask" seemed to be the simplest solution,
but the above might work fairly easily too.