[PATCH] x86/i386: Check PSE bit before using PAGE_KERNEL_LARGE.

From: Konrad Rzeszutek Wilk
Date: Thu May 31 2012 - 15:48:12 EST


During bootup, any i386 machine built with CONFIG_NUMA=y would
unconditionally go down this path:

setup_arch
 \- initmem_init
    \- x86_numa_init (with dummy_init as callback)
       \- init_alloc_remap
          \- set_pmd_pfn (with PAGE_PSE)

without checking whether the CPU actually supports PSE. This patch
adds that check and lets init_alloc_remap() work correctly on such
CPUs by falling back to 4K PTEs.
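
For reference, the capability that cpu_has_pse reflects is the PSE
feature flag reported by CPUID leaf 1, EDX bit 3. A minimal userspace
sketch (illustrative only, not part of this patch) that reads the same
bit with GCC's <cpuid.h>:

/* pse_check.c - print whether the CPU reports PSE (CPUID.01H:EDX[3]),
 * the same capability the kernel's cpu_has_pse macro reflects.
 * Illustrative sketch only.
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
		fprintf(stderr, "CPUID leaf 1 not supported\n");
		return 1;
	}
	/* bit 3 of EDX: PSE (large page support) */
	printf("PSE supported: %s\n", (edx & (1u << 3)) ? "yes" : "no");
	return 0;
}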

CC: stable@xxxxxxxxxx
Tested-by: William Dauchy <wdauchy@xxxxxxxxx>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
---
arch/x86/mm/pgtable_32.c | 25 ++++++++++++++++++++++++-
1 files changed, 24 insertions(+), 1 deletions(-)

diff --git a/arch/x86/mm/pgtable_32.c b/arch/x86/mm/pgtable_32.c
index a69bcb8..9fd4abc 100644
--- a/arch/x86/mm/pgtable_32.c
+++ b/arch/x86/mm/pgtable_32.c
@@ -86,7 +86,30 @@ void set_pmd_pfn(unsigned long vaddr, unsigned long pfn, pgprot_t flags)
}
pud = pud_offset(pgd, vaddr);
pmd = pmd_offset(pud, vaddr);
- set_pmd(pmd, pfn_pmd(pfn, flags));
+
+ if ((!cpu_has_pse) && (pgprot_val(flags) & _PAGE_PSE)) {
+ pte_t *pte;
+ int i;
+
+ pgprot_val(flags) &= ~_PAGE_PSE;
+
+ /*
+ * This runs _after_ the initial memory is mapped, so the
+ * PTE pages are already allocated - but we check just in case.
+ */
+ if (pmd_none(*pmd)) {
+ printk(KERN_WARNING "set_pmd_pfn: pmd_none\n");
+ return;
+ }
+
+ pte = (pte_t *)pmd_page_vaddr(*pmd);
+ for (i = 0; i < PTRS_PER_PTE; i++) {
+ set_pte(pte, pfn_pte(pfn + i, flags));
+ pte++;
+ }
+ } else
+ set_pmd(pmd, pfn_pmd(pfn, flags));
+
/*
* It's enough to flush this one mapping.
* (PGE mappings get flushed as well)
--
1.7.7.6
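
To illustrate what the fallback does, here is a toy userspace model
(hypothetical names: toy_pte, fill_pte_range; not kernel code) of
splitting one large-page mapping into PTRS_PER_PTE small-page entries,
each covering pfn + i, in the spirit of the new loop in set_pmd_pfn():

/* toy_split.c - toy model of the PTE fallback: instead of one
 * large-page (PSE) entry covering PTRS_PER_PTE small pages, fill
 * PTRS_PER_PTE individual entries with consecutive page frame numbers.
 * The real kernel code uses pte_t, pfn_pte() and set_pte().
 */
#include <stdio.h>

#define PTRS_PER_PTE	1024	/* entries per page table on non-PAE i386 */
#define PAGE_SHIFT	12	/* 4K pages */

struct toy_pte { unsigned long val; };

static void fill_pte_range(struct toy_pte *pte, unsigned long pfn,
			   unsigned long flags)
{
	int i;

	/* one entry per 4K page; a single PSE entry would have covered
	 * all PTRS_PER_PTE * 4K = 4M in one go */
	for (i = 0; i < PTRS_PER_PTE; i++)
		pte[i].val = ((pfn + i) << PAGE_SHIFT) | flags;
}

int main(void)
{
	static struct toy_pte table[PTRS_PER_PTE];

	/* arbitrary starting pfn and flag bits, just for the printout */
	fill_pte_range(table, 0x1000, 0x063);
	printf("first: %#lx last: %#lx\n",
	       table[0].val, table[PTRS_PER_PTE - 1].val);
	return 0;
}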
