[RFC PATCH 1/4] x86/mm/cpa: restore global bit when page is present
From: Aaron Lu
Date: Mon Aug 08 2022 - 10:57:46 EST
For configs that don't have PTI enabled, or CPUs that don't need
meltdown mitigation, the current kernel can lose the GLOBAL bit after
a page goes through a cycle of present -> not present -> present.
It happens like this (__vunmap() does this in vm_remove_mappings()):
original page protection: 0x8000000000000163 (NX/G/D/A/RW/P)
set_memory_np(page, 1): 0x8000000000000062 (NX/D/A/RW) loses G and P
set_memory_p(page, 1): 0x8000000000000063 (NX/D/A/RW/P) restores P only
In the end, this page's protection no longer has the Global bit set,
and this would create a problem for the merge small mappings feature.
For this reason, restore the Global bit on systems that do not have
PTI enabled when the page is present.
(pgprot_clear_protnone_bits() deserves a better name if this patch is
acceptable, but first I would like some feedback on whether this is
the right way to solve the problem, so I didn't bother with the name
yet.)
Signed-off-by: Aaron Lu <aaron.lu@xxxxxxxxx>
---
arch/x86/mm/pat/set_memory.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 1abd5438f126..33657a54670a 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -758,6 +758,8 @@ static pgprot_t pgprot_clear_protnone_bits(pgprot_t prot)
*/
if (!(pgprot_val(prot) & _PAGE_PRESENT))
pgprot_val(prot) &= ~_PAGE_GLOBAL;
+ else
+ pgprot_val(prot) |= _PAGE_GLOBAL & __default_kernel_pte_mask;
return prot;
}
--
2.37.1