One issue is that folio_maybe_dma_pinned() is unreliable as soon as your page is mapped more than 1024 times.
One might argue that we also want to exclude pages that are mapped that often. That might possibly work.
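For context, folio_maybe_dma_pinned() is only a heuristic: each FOLL_PIN pin adds GUP_PIN_COUNTING_BIAS (1024) to the folio's reference count, so once a folio has roughly 1024 additional ordinary references (for example because it is mapped into more than 1024 page tables) it can look pinned even when it is not. The sketch below shows the kind of check being discussed for the NUMA-hinting path; skip_numa_hint_folio() is a made-up illustrative helper, not an existing kernel function, and this is not the actual patch under review.

#include <linux/mm.h>

/* Illustrative only: should this folio be skipped when applying PROT_NONE
 * for NUMA hinting? */
static bool skip_numa_hint_folio(struct folio *folio)
{
	/*
	 * Heuristic: each FOLL_PIN reference adds GUP_PIN_COUNTING_BIAS
	 * (1024) to the refcount, so this reports false positives once the
	 * folio has ~1024 extra references, e.g. when it is mapped more
	 * than 1024 times.
	 */
	if (folio_maybe_dma_pinned(folio))
		return true;

	/*
	 * The suggestion above: a folio mapped that often is arguably a
	 * poor migration candidate anyway, so excluding it as well keeps
	 * the heuristic's false positives from mattering.
	 */
	if (folio_mapcount(folio) > GUP_PIN_COUNTING_BIAS)
		return true;

	return false;
}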
Staring at patch #2, are we still missing something similar for THPs?

Yes.
Why is that MMU notifier thingy and touching KVM code required?

Because the NUMA balancing code first sends .invalidate_range_start() with event type MMU_NOTIFY_PROTECTION_VMA to KVM in change_pmd_range(), unconditionally, before it goes down into change_pte_range() and change_huge_pmd() to check each page's count and apply PROT_NONE.
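Roughly, the path being described is the one in mm/mprotect.c: change_pmd_range() publishes an invalidation for the whole range before any per-page filtering happens. The sketch below is heavily abbreviated and not verbatim kernel code (the real function also handles PMD splitting, TLB batching, and its exact signatures vary between kernel versions); it only shows where the notifier fires relative to the per-PTE checks.

/* Abbreviated sketch of change_pmd_range() in mm/mprotect.c (not verbatim). */
static void change_pmd_range_sketch(struct vm_area_struct *vma, pud_t *pud,
				    unsigned long addr, unsigned long end,
				    pgprot_t newprot, unsigned long cp_flags)
{
	struct mmu_notifier_range range = { .start = 0 };
	pmd_t *pmd = pmd_offset(pud, addr);
	unsigned long next;

	do {
		next = pmd_addr_end(addr, end);

		/*
		 * Notify secondary MMUs (KVM among them) for the *whole*
		 * range up front, tagged MMU_NOTIFY_PROTECTION_VMA.  At this
		 * point nothing has looked at individual pages yet, so the
		 * notification covers pages that may never become PROT_NONE.
		 */
		if (!range.start) {
			mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA,
						0, vma->vm_mm, addr, end);
			mmu_notifier_invalidate_range_start(&range);
		}

		/* Only now do the per-page checks and PROT_NONE updates. */
		if (pmd_trans_huge(*pmd))
			change_huge_pmd(vma, pmd, addr, newprot, cp_flags);
		else
			change_pte_range(vma, pmd, addr, next, newprot, cp_flags);
	} while (pmd++, addr = next, addr != end);

	if (range.start)
		mmu_notifier_invalidate_range_end(&range);
}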
Current KVM then unmaps all notified pages from the secondary MMU in .invalidate_range_start(), which can include pages that are ultimately not set to PROT_NONE in the primary MMU.
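On the KVM side the generic handler does not consult range->event: every notified range is zapped from the secondary MMU and the guest re-faults the pages in later. The sketch below is a simplified illustration of that behaviour, not verbatim virt/kvm/kvm_main.c, and the kvm_hva_range_is_pinned()/kvm_zap_hva_range() helpers are made up purely to show where a hypothetical "skip NUMA-hinting invalidations for pinned ranges" check would sit.

/* Simplified illustration of KVM's notifier callback (not verbatim). */
static int kvm_invalidate_range_start_sketch(struct mmu_notifier *mn,
				const struct mmu_notifier_range *range)
{
	struct kvm *kvm = container_of(mn, struct kvm, mmu_notifier);

	/*
	 * Hypothetical check, not present today (both helpers below are
	 * made up): if this invalidation is only NUMA-hinting protection
	 * and KVM knew the affected guest pages are pinned, so they cannot
	 * be migrated anyway, it could skip the zap entirely.
	 */
	if (range->event == MMU_NOTIFY_PROTECTION_VMA &&
	    kvm_hva_range_is_pinned(kvm, range->start, range->end))
		return 0;

	/*
	 * What actually happens: drop the secondary-MMU mappings for the
	 * whole HVA range; the guest takes EPT/NPT faults to map them back.
	 */
	kvm_zap_hva_range(kvm, range->start, range->end);
	return 0;
}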
For VMs with pass-through devices, even though all guest pages are pinned, KVM still periodically unmaps pages in response to the .invalidate_range_start() notifications from automatic NUMA balancing, which is a waste.
Ah, okay I see, thanks. That's indeed unfortunate.

Should we instead disable NUMA hinting for such VMAs (for example, from QEMU/the hypervisor, which knows that any NUMA hinting activity on these ranges would be a complete waste of time)? I recall that John H. once mentioned that there are similar issues with GPU memory: NUMA hinting is actually counter-productive there and they end up disabling it.