[PATCH v2] LoongArch: Clear invalid TLB when setting a huge page PTE entry

From: Bibo Mao
Date: Wed Sep 06 2023 - 22:09:45 EST


On LoongArch machines where hardware page table walking is not
supported, if a hugetlb PTE entry is invalid, an invalid TLB entry
with normal page size is filled for it, and the access then triggers
a page fault exception from hardware.

During page fault handling, the current thread may migrate to
another CPU and set the huge page PTE entry there. If the thread
then migrates back to the old CPU, the stale TLB entry with normal
page size still exists, and it conflicts with the new huge page
entry. So the invalid TLB entry needs to be flushed when setting a
huge page PTE entry, as done in the set_huge_pte_at() and
set_pmd_at() functions.
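
A sketch of the race described above (CPU numbering is purely
illustrative):

CPU0                                  CPU1
----                                  ----
access hugetlb area, huge PTE none
TLB filled with a normal page size
  entry for invalid_pte_table
page fault; thread migrates to CPU1
                                      page fault handler sets the
                                      huge page PTE entry
thread migrates back to CPU0
access hits the stale normal page
  TLB entry, which conflicts with
  the new huge page PTE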

Reported-by: kernel test robot <lkp@xxxxxxxxx>
Closes: https://lore.kernel.org/oe-kbuild-all/202309062224.jKf5JY7H-lkp@xxxxxxxxx/
Signed-off-by: Bibo Mao <maobibo@xxxxxxxxxxx>
---
Changes in v2:
Put the set_huge_pte_at() function in hugetlbpage.c to remove a
compile warning.

---
 arch/loongarch/include/asm/hugetlb.h |  4 ++++
 arch/loongarch/mm/hugetlbpage.c      | 18 ++++++++++++++++++
 arch/loongarch/mm/pgtable.c          |  8 +++++++-
3 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/arch/loongarch/include/asm/hugetlb.h b/arch/loongarch/include/asm/hugetlb.h
index aa44b3fe43dd..62cd7528a07f 100644
--- a/arch/loongarch/include/asm/hugetlb.h
+++ b/arch/loongarch/include/asm/hugetlb.h
@@ -59,6 +59,10 @@ static inline int huge_pte_none(pte_t pte)
 	return !val || (val == (unsigned long)invalid_pte_table);
}

+#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
+extern void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+			    pte_t *ptep, pte_t pte);
+
#define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
 static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 					     unsigned long addr,
diff --git a/arch/loongarch/mm/hugetlbpage.c b/arch/loongarch/mm/hugetlbpage.c
index ba138117b124..cc31c090d4ba 100644
--- a/arch/loongarch/mm/hugetlbpage.c
+++ b/arch/loongarch/mm/hugetlbpage.c
@@ -85,3 +85,21 @@ uint64_t pmd_to_entrylo(unsigned long pmd_val)

 	return val;
}
+
+void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+		     pte_t *ptep, pte_t pte)
+{
+	/*
+	 * If the huge PTE entry is none, a TLB entry with normal page size
+	 * is filled for machines without hardware page table walking.
+	 *
+	 * A thread may migrate to another CPU after the page fault happens
+	 * and migrate back after the huge page PTE is set; the stale TLB
+	 * entries for invalid_pte_table then need to be flushed.
+	 */
+	if (!cpu_has_ptw && huge_pte_none(*ptep))
+		flush_tlb_mm(mm);
+
+	set_pte_at(mm, addr, ptep, pte);
+}
+
diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
index b14343e211b6..dfae34484f43 100644
--- a/arch/loongarch/mm/pgtable.c
+++ b/arch/loongarch/mm/pgtable.c
@@ -116,8 +116,14 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot)
 void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 		pmd_t *pmdp, pmd_t pmd)
 {
+	/*
+	 * Similar to set_huge_pte_at(): flush the stale TLB entry with
+	 * normal page size if hardware page table walking is unsupported.
+	 */
+	if (!cpu_has_ptw && pmd_none(*pmdp))
+		flush_tlb_mm(mm);
+
 	*pmdp = pmd;
-	flush_tlb_all();
}

void __init pagetable_init(void)

base-commit: 744a759492b5c57ff24a6e8aabe47b17ad8ee964
--
2.27.0