[PATCH 4.14 09/71] arm64: Enforce BBM for huge IO/VMAP mappings

From: Greg Kroah-Hartman
Date: Thu Jan 16 2020 - 18:36:04 EST


From: Will Deacon <will.deacon@xxxxxxx>

commit 15122ee2c515a253b0c66a3e618bc7ebe35105eb upstream.

ioremap_page_range doesn't honour break-before-make and attempts to put
down huge mappings (using p*d_set_huge) over the top of pre-existing
table entries. This leads to us leaking page table memory and also gives
rise to TLB conflicts and spurious aborts, which have been seen in
practice on Cortex-A75.

Until this has been resolved, refuse to put block mappings when the
existing entry is found to be present.
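
For reference, a break-before-make update would first tear down the old
entry and invalidate the TLB for the range it covered before writing the
replacement. A minimal sketch of that sequence for the PMD case is shown
below; it is illustrative only and not part of this patch. The helper name
pmd_set_huge_bbm() and the extra 'addr' argument are assumptions made for
the sketch, and it deliberately does not free a pre-existing table entry,
which is the unresolved part alluded to above.

/*
 * Illustrative sketch only (not part of this patch): what a
 * break-before-make update of a PMD block mapping could look like.
 * Assumes the caller passes the virtual address covered by the entry
 * so the TLB can be invalidated, and does not handle freeing a
 * pre-existing table that the old entry may have pointed to.
 */
static void pmd_set_huge_bbm(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot,
			     unsigned long addr)
{
	pgprot_t sect_prot = __pgprot(PMD_TYPE_SECT |
					pgprot_val(mk_sect_prot(prot)));

	if (pmd_present(READ_ONCE(*pmdp))) {
		/* Break: tear down the old entry... */
		pmd_clear(pmdp);
		/* ...and invalidate any stale TLB entries for the range. */
		flush_tlb_kernel_range(addr, addr + PMD_SIZE);
	}

	/* Make: install the new block mapping. */
	set_pmd(pmdp, pfn_pmd(__phys_to_pfn(phys), sect_prot));
}

The patch instead just refuses the request and returns 0, since blindly
clearing a table entry here would also leak the page-table page it
pointed to.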

Fixes: 324420bf91f60 ("arm64: add support for ioremap() block mappings")
Reported-by: Hanjun Guo <hanjun.guo@xxxxxxxxxx>
Reported-by: Lei Li <lious.lilei@xxxxxxxxxxxxx>
Acked-by: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
Signed-off-by: Will Deacon <will.deacon@xxxxxxx>
Signed-off-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Signed-off-by: Ben Hutchings <ben.hutchings@xxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/arm64/mm/mmu.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -917,6 +917,11 @@ int pud_set_huge(pud_t *pudp, phys_addr_
 {
 	pgprot_t sect_prot = __pgprot(PUD_TYPE_SECT |
 					pgprot_val(mk_sect_prot(prot)));
+
+	/* ioremap_page_range doesn't honour BBM */
+	if (pud_present(READ_ONCE(*pudp)))
+		return 0;
+
 	BUG_ON(phys & ~PUD_MASK);
 	set_pud(pudp, pfn_pud(__phys_to_pfn(phys), sect_prot));
 	return 1;
@@ -926,6 +931,11 @@ int pmd_set_huge(pmd_t *pmdp, phys_addr_
 {
 	pgprot_t sect_prot = __pgprot(PMD_TYPE_SECT |
 					pgprot_val(mk_sect_prot(prot)));
+
+	/* ioremap_page_range doesn't honour BBM */
+	if (pmd_present(READ_ONCE(*pmdp)))
+		return 0;
+
 	BUG_ON(phys & ~PMD_MASK);
 	set_pmd(pmdp, pfn_pmd(__phys_to_pfn(phys), sect_prot));
 	return 1;