[PATCH v3 5/5] x86/mm: add WARN_ON_ONCE() for wrong large page mapping
From: Bin Yang
Date: Mon Aug 20 2018 - 21:16:46 EST
If there is a large page which contains an area that requires a
different mapping than the one the large page provides, then
something went wrong _before_ this code was called. Warn once
here to catch the case where the existing mapping is already wrong.
Inspired-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Bin Yang <bin.yang@xxxxxxxxx>
arch/x86/mm/pageattr.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index fd90c5b..91a250c 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -625,6 +625,7 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 	psize = page_level_size(level);
 	pmask = page_level_mask(level);
+	addr = address & pmask;
 
 	/*
 	 * Calculate the number of pages, which fit into this large
 	 * page starting at address:
 	 */
@@ -636,6 +637,12 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 		cpa->numpages = numpages;
+
+	/*
+	 * The old pgprot should not have any protection bit. Otherwise,
+	 * the existing mapping is wrong already.
+	 */
+	WARN_ON_ONCE(needs_static_protections(old_prot, addr, psize, old_pfn));
 
 	/*
 	 * We are safe now. Check whether the new pgprot is the same:
 	 * Convert protection attributes to 4k-format, as cpa->mask* are set
 	 * up accordingly.
@@ -690,7 +697,6 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 	 * would anyway result in a split after doing all the check work
 	 * for nothing.
 	 */
-	addr = address & pmask;
 	if (address != addr || cpa->numpages != numpages)