On Wed, Mar 26, 2025 at 11:38:11AM +0800, Baolin Wang wrote:
@@ -118,16 +120,31 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
walk->action = ACTION_AGAIN;
return 0;
}
- for (; addr != end; ptep++, addr += PAGE_SIZE) {
+ for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
pte_t pte = ptep_get(ptep);
+ step = 1;
/* We need to do cache lookup too for pte markers */
if (pte_none_mostly(pte))
__mincore_unmapped_range(addr, addr + PAGE_SIZE,
vma, vec);
- else if (pte_present(pte))
- *vec = 1;
- else { /* pte is a swap entry */
+ else if (pte_present(pte)) {
+ if (pte_batch_hint(ptep, pte) > 1) {
AFAIU, you will only batch if CONT_PTE is set, but that is only true on arm64,
so we lose the ability to batch on e.g. x86 when we have contiguous
entries, right?
So why not have folio_pte_batch() take care of it directly, without involving
pte_batch_hint() here?