[PATCH 2/3] mm/huge_memory.c: update tlb entry if pmd is changed

From: Bibo Mao
Date: Wed Jun 24 2020 - 05:26:54 EST


When set_pmd_at() is called in do_huge_pmd_anonymous_page(), a new TLB
entry can be added by software on the MIPS platform.

Add update_mmu_cache_pmd() where the pmd entry is set. Since
update_mmu_cache_pmd() is defined as a no-op on all architectures
except arc and mips, this patch has no effect on other platforms.

Signed-off-by: Bibo Mao <maobibo@xxxxxxxxxxx>
---
mm/huge_memory.c | 2 ++
1 file changed, 2 insertions(+)
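
Note: for context, a rough sketch of why the added call is free on most
architectures while MIPS needs it; the exact definitions and file
locations below are from memory and may differ between kernel versions.

/* Most architectures: update_mmu_cache_pmd() expands to nothing, so
 * the call added by this patch compiles away (sketch only). */
#define update_mmu_cache_pmd(vma, address, pmd) do { } while (0)

/* MIPS (software-managed TLB): preload the new huge-page mapping so
 * the first access does not immediately take a TLB refill exception.
 * Roughly as defined in arch/mips/include/asm/pgtable.h. */
static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
	unsigned long address, pmd_t *pmdp)
{
	pte_t pte = *(pte_t *)pmdp;

	__update_tlb(vma, address, pte);
}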

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0f9187b..8b4ccf7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -643,6 +643,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
lru_cache_add_active_or_unevictable(page, vma);
pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
+ update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
mm_inc_nr_ptes(vma->vm_mm);
spin_unlock(vmf->ptl);
@@ -756,6 +757,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
} else {
set_huge_zero_page(pgtable, vma->vm_mm, vma,
haddr, vmf->pmd, zero_page);
+ update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
spin_unlock(vmf->ptl);
set = true;
}
--
1.8.3.1