[PATCH v2 2/5] mm/hugetlb: use PTE page lock to protect CONT-PTE entries
From: Baolin Wang
Date: Tue Aug 23 2022 - 03:50:46 EST
Since the pte entries of a CONT-PTE hugetlb cannot span multiple
PTE page table pages, we can switch to the PTE page lock (the split
page table lock), which is much finer grained than the
mm->page_table_lock.
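
For review illustration only (not part of the patch): a minimal
user-space sketch of the pointer arithmetic used below, assuming
arm64-like parameters that are not taken from this patch (4K pages,
8-byte ptes, 512 entries per PTE page table page, 16 contiguous
entries per CONT-PTE hugetlb). It shows that all pte entries of one
CONT-PTE hugetlb round down to the same PTE page table page, so they
all share a single split page table lock.

#include <stdio.h>

#define PTE_SIZE	8UL	/* assumed sizeof(pte_t) */
#define PTRS_PER_PTE	512UL	/* assumed entries per PTE page table page */
#define CONT_PTES	16UL	/* assumed PTEs per CONT-PTE hugetlb */

int main(void)
{
	/* Same mask as in the patch: round a pte pointer down to its page */
	unsigned long mask = ~(PTRS_PER_PTE * PTE_SIZE - 1);
	/* Hypothetical, CONT_PTES-aligned pte address inside one PTE page */
	unsigned long pte = 0xffff000012340000UL + 32 * PTE_SIZE;
	unsigned long first = pte & mask;
	unsigned long i;

	for (i = 0; i < CONT_PTES; i++) {
		unsigned long page = (pte + i * PTE_SIZE) & mask;

		/* Every entry resolves to the same page, hence same ptlock */
		printf("pte[%2lu] -> PTE page %#lx%s\n", i, page,
		       page == first ? "" : " (different page!)");
	}
	return 0;
}

Because a CONT-PTE block is naturally aligned to 16 entries, which
divides PTRS_PER_PTE, it can never straddle a PTE page table page
boundary.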
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
---
include/linux/hugetlb.h | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d491138..4b172a7 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -892,9 +892,23 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
 static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
 					   struct mm_struct *mm, pte_t *pte)
 {
-	if (huge_page_size(h) == PMD_SIZE)
-		return pmd_lockptr(mm, (pmd_t *) pte);
 	VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
+
+	if (huge_page_size(h) == PMD_SIZE) {
+		return pmd_lockptr(mm, (pmd_t *) pte);
+	} else if (huge_page_size(h) < PMD_SIZE) {
+		unsigned long mask = ~(PTRS_PER_PTE * sizeof(pte_t) - 1);
+		struct page *page =
+			virt_to_page((void *)((unsigned long)pte & mask));
+
+		/*
+		 * For a CONT-PTE size hugetlb, the contiguous pte entries
+		 * cannot span multiple PTE page table pages, so the PTE
+		 * page lock gives us a fine grained lock.
+		 */
+		return ptlock_ptr(page);
+	}
+
 	return &mm->page_table_lock;
 }
--
1.8.3.1