[PATCH 3.16 120/131] x86/mm/pat: Make set_memory_np() L1TF safe

From: Ben Hutchings
Date: Sat Sep 29 2018 - 17:58:59 EST


3.16.59-rc1 review patch. If anyone has any objections, please let me know.

------------------

From: Andi Kleen <ak@xxxxxxxxxxxxxxx>

commit 958f79b9ee55dfaf00c8106ed1c22a2919e0028b upstream.

set_memory_np() is used to mark kernel mappings not present, but it has
its own open-coded mechanism which does not have the L1TF protection of
inverting the address bits.

Replace the open-coded PTE manipulation with the L1TF-protecting low-level
PTE routines.
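
The protection in question lives in the low-level helpers
(pfn_pte()/pfn_pmd()/pfn_pud()): for a not-present entry they invert the
physical address bits, so whatever the CPU forwards speculatively under
L1TF points at unpopulated high physical addresses rather than
attacker-chosen memory. A minimal userspace sketch of that idea, with
simplified constants and the kernel's all-zero-entry special case
omitted, not the kernel's exact code:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT    12
    #define _PAGE_PRESENT (1ULL << 0)
    #define PTE_PFN_MASK  0x000ffffffffff000ULL    /* address bits 12..51 */

    /* A not-present entry gets its address bits inverted; the open-coded
     * "cpa->pfn | _PAGE_PSE | massage_pgprot(...)" construction removed
     * by this patch skipped exactly this step. */
    static uint64_t mk_entry(uint64_t pfn, uint64_t prot)
    {
            uint64_t phys = pfn << PAGE_SHIFT;

            if (!(prot & _PAGE_PRESENT))
                    phys ^= PTE_PFN_MASK;    /* invert the address bits */

            return (phys & PTE_PFN_MASK) | prot;
    }

    int main(void)
    {
            printf("present:     %#018llx\n",
                   (unsigned long long)mk_entry(0x1234, _PAGE_PRESENT));
            printf("not present: %#018llx\n",
                   (unsigned long long)mk_entry(0x1234, 0));
            return 0;
    }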

Passes the CPA self test.
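
For anyone wanting to rerun that verification: the self test referred to
is the change_page_attr() test in arch/x86/mm/pageattr-test.c, built when
the kernel is configured with:

    CONFIG_CPA_DEBUG=y

With that option the test reruns periodically at runtime, so a PTE
construction that breaks the page tables shows up in dmesg.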

Signed-off-by: Andi Kleen <ak@xxxxxxxxxxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
[bwh: Backported to 3.16:
 - cpa->pfn is actually a physical address here and needs to be shifted
   to produce a PFN
 - Adjust context]
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
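A note on the backport quirk flagged above, since the diff below differs
from upstream: in 3.16, cpa->pfn carries a physical address rather than a
page frame number, hence the added ">> PAGE_SHIFT" before pfn_pmd() and
pfn_pud() and the unchanged byte-sized "+= PMD_SIZE" / "+= PUD_SIZE"
increments (upstream advances a real PFN by PMD_SIZE >> PAGE_SHIFT). A
sketch of the arithmetic, illustrative only:

    #include <assert.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12
    #define PMD_SIZE   (1ULL << 21)    /* 2 MiB on x86-64 */

    int main(void)
    {
            uint64_t phys = 0x1234000ULL;          /* what 3.16's cpa->pfn holds */
            uint64_t pfn  = phys >> PAGE_SHIFT;    /* what pfn_pmd() expects */

            assert(pfn == 0x1234);

            /* Advancing the physical address by PMD_SIZE bytes matches
             * upstream's advance of a real PFN by PMD_SIZE >> PAGE_SHIFT. */
            phys += PMD_SIZE;
            assert(phys >> PAGE_SHIFT == pfn + (PMD_SIZE >> PAGE_SHIFT));
            return 0;
    }
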
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -952,7 +952,8 @@ static int populate_pmd(struct cpa_data

 		pmd = pmd_offset(pud, start);
 
-		set_pmd(pmd, __pmd(cpa->pfn | _PAGE_PSE | massage_pgprot(pgprot)));
+		set_pmd(pmd, pmd_mkhuge(pfn_pmd(cpa->pfn >> PAGE_SHIFT,
+				canon_pgprot(pgprot))));
 
 		start += PMD_SIZE;
 		cpa->pfn += PMD_SIZE;
@@ -1022,7 +1023,8 @@ static int populate_pud(struct cpa_data
 	 * Map everything starting from the Gb boundary, possibly with 1G pages
 	 */
 	while (end - start >= PUD_SIZE) {
-		set_pud(pud, __pud(cpa->pfn | _PAGE_PSE | massage_pgprot(pgprot)));
+		set_pud(pud, pud_mkhuge(pfn_pud(cpa->pfn >> PAGE_SHIFT,
+				canon_pgprot(pgprot))));
 
 		start += PUD_SIZE;
 		cpa->pfn += PUD_SIZE;