Re: [PATCH mm-new v2 3/3] mm/khugepaged: merge PTE scanning logic into a new helper

From: Lance Yang

Date: Tue Oct 07 2025 - 04:32:19 EST

On 2025/10/7 14:28, Dev Jain wrote:

On 06/10/25 8:13 pm, Lance Yang wrote:
+static inline int thp_collapse_check_pte(pte_t pte, struct vm_area_struct *vma,
+        unsigned long addr, struct collapse_control *cc,
+        struct folio **foliop, int *none_or_zero, int *unmapped,
+        int *shared, int *scan_result)

Nit: I'd prefer the cc parameter to go last.

Yep, got it.
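For v3, something like this (untested sketch against this patch, everything else unchanged):

```diff
-static inline int thp_collapse_check_pte(pte_t pte, struct vm_area_struct *vma,
-        unsigned long addr, struct collapse_control *cc,
-        struct folio **foliop, int *none_or_zero, int *unmapped,
-        int *shared, int *scan_result)
+static inline int thp_collapse_check_pte(pte_t pte, struct vm_area_struct *vma,
+        unsigned long addr, struct folio **foliop, int *none_or_zero,
+        int *unmapped, int *shared, int *scan_result,
+        struct collapse_control *cc)
```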


+{
+    struct folio *folio = NULL;
+    struct page *page = NULL;
+
+    if (pte_none(pte) || is_zero_pfn(pte_pfn(pte))) {
+        (*none_or_zero)++;
+        if (!userfaultfd_armed(vma) &&
+            (!cc->is_khugepaged ||
+             *none_or_zero <= khugepaged_max_ptes_none)) {
+            return PTE_CHECK_CONTINUE;
+        } else {
+            *scan_result = SCAN_EXCEED_NONE_PTE;
+            count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
+            return PTE_CHECK_FAIL;
+        }
+    } else if (!pte_present(pte)) {
+        if (!unmapped) {
+            *scan_result = SCAN_PTE_NON_PRESENT;
+            return PTE_CHECK_FAIL;
+        }
+
+        if (non_swap_entry(pte_to_swp_entry(pte))) {
+            *scan_result = SCAN_PTE_NON_PRESENT;
+            return PTE_CHECK_FAIL;
+        }
+
+        (*unmapped)++;
+        if (!cc->is_khugepaged ||
+            *unmapped <= khugepaged_max_ptes_swap) {
+            /*
+             * Always be strict with uffd-wp enabled swap
+             * entries. Please see comment below for
+             * pte_uffd_wp().
+             */
+            if (pte_swp_uffd_wp(pte)) {
+                *scan_result = SCAN_PTE_UFFD_WP;
+                return PTE_CHECK_FAIL;
+            }
+            return PTE_CHECK_CONTINUE;
+        } else {
+            *scan_result = SCAN_EXCEED_SWAP_PTE;
+            count_vm_event(THP_SCAN_EXCEED_SWAP_PTE);
+            return PTE_CHECK_FAIL;
+        }
+    } else if (pte_uffd_wp(pte)) {
+        /*
+         * Don't collapse the page if any of the small PTEs are
+         * armed with uffd write protection. Here we can also mark
+         * the new huge pmd as write protected if any of the small
+         * ones is marked but that could bring unknown userfault
+         * messages that falls outside of the registered range.
+         * So, just be simple.
+         */
+        *scan_result = SCAN_PTE_UFFD_WP;
+        return PTE_CHECK_FAIL;
+    }
+
+    page = vm_normal_page(vma, addr, pte);

You should use vm_normal_folio() here and drop struct page altogether - this was also
noted during the review of the mTHP collapse patchset.

Right, I missed that vm_normal_folio() was the way to go here :)
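Will switch to something like this in v3 (untested sketch; the rest of the helper would use the returned folio directly):

```diff
 {
     struct folio *folio = NULL;
-    struct page *page = NULL;
 ...
-    page = vm_normal_page(vma, addr, pte);
+    folio = vm_normal_folio(vma, addr, pte);
```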

Thanks for the pointer!
Lance