Re: [PATCH v3 1/1] KVM: arm64: Allow cacheable stage 2 mapping using VMA flags

From: David Hildenbrand
Date: Tue Mar 18 2025 - 15:35:55 EST


On 18.03.25 20:27, Catalin Marinas wrote:
> On Tue, Mar 18, 2025 at 09:55:27AM -0300, Jason Gunthorpe wrote:
>> On Tue, Mar 18, 2025 at 09:39:30AM +0000, Marc Zyngier wrote:
>>> The memslot must also be created with a new flag ((2c) in the taxonomy
>>> above) that carries the "Please map VM_PFNMAP VMAs as cacheable". This
>>> flag is only allowed if (1) is valid.
>>>
>>> This results in the following behaviours:
>>>
>>> - If the VMM creates the memslot with the cacheable attribute without
>>>   (1) being advertised, we fail.
>>>
>>> - If the VMM creates the memslot without the cacheable attribute, we
>>>   map as NC, as it is today.
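As a rough sketch of the check described above (the flag, capability and
helper names below are placeholders invented for illustration, not the
names used in the series), the ordering would be: reject the flag when (1)
is not advertised, and keep Normal-NC when the flag is absent:

/* Sketch only: flag, capability and helper names are placeholders. */

#include <linux/errno.h>
#include <linux/types.h>

#define KVM_MEM_CACHEABLE_PFNMAP        (1u << 4)       /* placeholder bit */

/* Placeholder for "(1) is valid/advertised" on this host. */
extern bool kvm_supports_cacheable_pfnmap(void);

static int validate_cacheable_pfnmap_flag(u32 flags)
{
        if (!(flags & KVM_MEM_CACHEABLE_PFNMAP))
                return 0;       /* no flag: VM_PFNMAP VMAs stay Normal-NC, as today */

        if (!kvm_supports_cacheable_pfnmap())
                return -EINVAL; /* flag set without (1) advertised: fail */

        return 0;               /* flag accepted: such VMAs may be mapped cacheable */
}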

>> Is that OK though?
>>
>> Now we have the MM page tables mapping this memory as cacheable, but KVM
>> and the guest are accessing it as non-cached.
>>
>> I don't think we should allow this.
>>
>> I thought ARM tried hard to avoid creating such mismatches? This is
>> why the pgprot flags were used to drive this, not an opt-in flag: to
>> prevent userspace from forcing a mismatch.

> We have the vma->vm_page_prot when the memslot is added, so we could use
> this instead of additional KVM flags.
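As an illustration of what deriving this from vma->vm_page_prot could look
like on arm64 (a sketch only; the helper itself is hypothetical, while
PTE_ATTRINDX_MASK and MT_NORMAL are the existing arm64 definitions),
something along these lines could be consulted when the memslot is added:

/* Sketch: hypothetical helper built on existing arm64 definitions. */

#include <linux/bitfield.h>
#include <linux/mm.h>
#include <asm/memory.h>
#include <asm/pgtable-hwdef.h>

/*
 * True if the VMA's page protection selects Normal Cacheable memory,
 * i.e. the attribute index encoded in vm_page_prot is MT_NORMAL
 * (MT_NORMAL_TAGGED ignored here for brevity).
 */
static bool vma_prot_is_cacheable(struct vm_area_struct *vma)
{
        u64 attridx = FIELD_GET(PTE_ATTRINDX_MASK,
                                pgprot_val(vma->vm_page_prot));

        return attridx == MT_NORMAL;
}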

I thought we try to avoid messing with the VMA when adding memslots, because KVM_CAP_SYNC_MMU allows user space to change the VMAs afterwards without changing the memslot?

include/uapi/linux/kvm.h:#define KVM_CAP_SYNC_MMU 16 /* Changes to host mmap are reflected in guest */
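
To illustrate the concern, the userspace sequence below (real KVM ioctls;
the device path, addresses and lack of error handling are purely
illustrative) registers a memslot while the range is covered by one VMA and
then replaces that VMA with MAP_FIXED without touching the memslot. With
KVM_CAP_SYNC_MMU, KVM follows the new mapping via MMU notifiers, so
anything derived from the original VMA at memslot-creation time can go
stale:

/* Userspace sketch; error handling omitted, names illustrative. */

#include <fcntl.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#define SLOT_SIZE       (2UL << 20)

int main(void)
{
        int kvm = open("/dev/kvm", O_RDWR);
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);

        /* VMA #1: say, a cacheable PFNMAP mapping of a device region. */
        int dev = open("/dev/some_device", O_RDWR);     /* hypothetical device */
        void *mem = mmap(NULL, SLOT_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, dev, 0);

        struct kvm_userspace_memory_region region = {
                .slot = 0,
                .guest_phys_addr = 0x100000000ULL,
                .memory_size = SLOT_SIZE,
                .userspace_addr = (unsigned long)mem,
        };
        ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);
        /* Any VMA inspection by KVM happens here, once. */

        /* VMA #2: replace the mapping in place; the memslot is unchanged. */
        mmap(mem, SLOT_SIZE, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

        return 0;
}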

--
Cheers,

David / dhildenb