On Fri, Jan 10, 2025 at 09:15:53PM +0000, Ankit Agrawal wrote:
This patch solves the problems where it is possible for the kernel to
have VMAs pointing at cachable memory without causing
pfn_is_map_memory() to be true, eg DAX memremap cases and CXL/pre-CXL
devices. This memory is now properly marked as cachable in KVM.
Does this only result in worse performance, or does it also affect
correctness? I suspect performance is the problem, correct?
Correctness. Things like atomics don't work on non-cachable mappings.
Hah! This needs to be highlighted in the patch description. And maybe
this even warrants a Fixes: tag?
Understood. I'll put that in the patch description.
You likely assume we never end up with a COW VM_PFNMAP -- I think that's
possible when doing a MAP_PRIVATE /dev/mem mapping on systems that allow
mapping /dev/mem. Maybe one could just reject such cases (if the KVM PFN
lookup code doesn't already reject them, which IIRC might be the case).
At least VFIO enforces SHARED or it won't create the VMA.
drivers/vfio/pci/vfio_pci_core.c: if ((vma->vm_flags & VM_SHARED) == 0)
That makes a lot of sense for VFIO.
So, I suppose we don't need to check this? Especially if we only extend
the changes to the following case:
- type is VM_PFNMAP &&
- user mapping is cacheable (MT_NORMAL or MT_NORMAL_TAGGED) &&
- The suggested VM_FORCE_CACHED is set.
Do we really want another weirdly defined VMA flag? I'd really like to
avoid this. Why should VFIO set the flag when the questions seem to be
around things like MTE that have nothing to do with VFIO?