Re: [PATCH] x86/boot/64: Make level2_kernel_pgt pages invalid outside kernel area.

From: Steve Wahl
Date: Tue Sep 10 2019 - 10:28:42 EST


On Mon, Sep 09, 2019 at 11:14:14AM +0300, Kirill A. Shutemov wrote:
> On Fri, Sep 06, 2019 at 04:29:50PM -0500, Steve Wahl wrote:
> > ...
> > The answer is to invalidate the pages of this table outside the
> > address range occupied by the kernel before the page table is
> > activated. This patch has been validated to fix this problem on our
> > hardware.
>
> If the goal is to avoid *any* mapping of the reserved region to stop
> speculation, I don't think this patch will do the job. We still (likely)
> have the same memory mapped as part of the identity mapping. And it
> happens at least in two places: here and before on decompression stage.

I imagine you are likely correct; ideally, none of the reserved pages
would be mapped in these spaces.

I've been reading the code to try to understand what you say above.
For identity mappings in the kernel, I see level2_ident_pgt mapping
the first 1G. And I see early_dynamic_pgts being set up with an
identity mapping of the kernel that seems to be pretty well restricted
to the range _text through _end.

Within the decompression code, I see an identity mapping of the first
4G set up within the 32-bit code. I believe a boot can bypass that
and enter directly at the startup_64 entry point. (I don't know how
common that path is, but I don't have a way to test it without
figuring out how to force it.)

From a pragmatic standpoint, the guy who can verify this for me is on
vacation, but I believe our BIOS will never place the halt-causing
ranges below 4 GiB, which explains why this patch works for our
hardware. (We do have reserved regions below 4 GiB, just not the
ones that cause a hardware halt when accessed.)

In case it helps you picture the situation, our hardware takes a small
portion of RAM from the end of each NUMA node (or possibly from pairs
or quads of NUMA nodes; I'm not entirely clear on this at the moment)
for its own purposes. Here's a section of our e820 table:

[ 0.000000] BIOS-e820: [mem 0x000000007c000000-0x000000008fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000002f7fffffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000002f80000000-0x000000303fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000003040000000-0x0000005f7bffffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000005f7c000000-0x000000603fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000006040000000-0x0000008f7bffffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000008f7c000000-0x000000903fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000009040000000-0x000000bf7bffffff] usable
[ 0.000000] BIOS-e820: [mem 0x000000bf7c000000-0x000000c03fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x000000c040000000-0x000000ef7bffffff] usable
[ 0.000000] BIOS-e820: [mem 0x000000ef7c000000-0x000000f03fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x000000f040000000-0x0000011f7bffffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000011f7c000000-0x000001203fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000012040000000-0x0000014f7bffffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000014f7c000000-0x000001503fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000015040000000-0x0000017f7bffffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000017f7c000000-0x000001803fffffff] reserved

Our problem occurs when KASLR (or kexec) places the kernel close
enough to the end of one of the usable regions that the 1 GiB of 1:1
mapped space includes a portion of the following reserved region, and
speculation touches the reserved area.

--> Steve Wahl
--
Steve Wahl, Hewlett Packard Enterprise