Re: [PATCH v3 1/1] KVM: arm64: Allow cacheable stage 2 mapping using VMA flags
From: Catalin Marinas
Date: Tue Mar 18 2025 - 15:29:21 EST
On Tue, Mar 18, 2025 at 09:55:27AM -0300, Jason Gunthorpe wrote:
> On Tue, Mar 18, 2025 at 09:39:30AM +0000, Marc Zyngier wrote:
> > The memslot must also be created with a new flag ((2c) in the taxonomy
> > above) that carries the "Please map VM_PFNMAP VMAs as cacheable". This
> > flag is only allowed if (1) is valid.
> >
> > This results in the following behaviours:
> >
> > - If the VMM creates the memslot with the cacheable attribute without
> > (1) being advertised, we fail.
> >
> > - If the VMM creates the memslot without the cacheable attribute, we
> > map as NC, as it is today.
>
> Is that OK though?
>
> Now we have the MM page tables mapping this memory as cachable but KVM
> and the guest is accessing it as non-cached.
I don't think we should allow this.
> I thought ARM tried hard to avoid creating such mismatches? This is
> why the pgprot flags were used to drive this, not an opt-in flag. To
> prevent userspace from forcing a mismatch.
We have the vma->vm_page_prot when the memslot is added, so we could use
this instead of additional KVM flags. If it's Normal Cacheable and the
platform does not support FWB, reject it. If the prot bits say
cacheable, it means that the driver was OK with such a mapping. We'd
also need some extra checks for !MTE or MTE_PERM.
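Something like the below at memslot creation time (entirely untested;
kvm_vma_is_cacheable() is a name I'm making up here, and the attribute
decoding assumes the usual arm64 MAIR indices):

	static bool kvm_vma_is_cacheable(struct vm_area_struct *vma)
	{
		/*
		 * Decode the MAIR index the driver encoded in the
		 * stage 1 attributes of vm_page_prot.
		 */
		switch (FIELD_GET(PTE_ATTRINDX_MASK,
				  pgprot_val(vma->vm_page_prot))) {
		case MT_NORMAL:
		case MT_NORMAL_TAGGED:
			return true;
		default:
			return false;
		}
	}

	/* in kvm_arch_prepare_memory_region(), once the vma is known: */
	if ((vma->vm_flags & VM_PFNMAP) && kvm_vma_is_cacheable(vma) &&
	    !cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
		return -EINVAL;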
As an additional safety measure, we could re-check this in
user_mem_abort() in case the driver played with the vm_page_prot field
in the meantime (e.g. in its .fault() callback).
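In the fault path that could be as simple as the below (again untested,
reusing the made-up helper above; 'prot' stands for whatever stage 2
permissions user_mem_abort() is building up):

	/* in user_mem_abort(), after the vma lookup under the mmap lock: */
	if ((vma->vm_flags & VM_PFNMAP) && !kvm_vma_is_cacheable(vma))
		prot |= KVM_PGTABLE_PROT_DEVICE;
	/* otherwise keep the default Normal WB stage 2 attributes */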
I'm not particularly keen on poking at vm_page_prot, but we probably
need to do this anyway to avoid mismatched aliases, since we can't fully
trust the VMM. The alternative is a VM_* flag that says "cacheable
everywhere", which would let us avoid the low-level attribute checks.
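That is, roughly (hypothetical flag name, in the same spirit as
VM_ALLOW_ANY_UNCACHED):

	/*
	 * Hypothetical flag: the driver asserts that the whole vma can
	 * safely be mapped cacheable by any user, so KVM wouldn't need
	 * to decode vm_page_prot at all.
	 */
	if (vma->vm_flags & VM_PFNMAP_CACHEABLE) {
		if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
			return -EINVAL;
		/* stage 2 can then use Normal WB for this range */
	}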
> > What this doesn't do is *automatically* decide for the VMM what
> > attributes to use. The VMM must know what it is doing, and only
> > provide the memslot flag when appropriate. Doing otherwise may eat
> > your data and/or take the machine down (cacheable mapping on a device
> > can be great fun).
>
> Again, this is why we followed the VMA flags. The thing creating the
> VMA already made this safety determination when it set pgprot
> cachable. We should not allow KVM to randomly make any PGPROT
> cachable!
Can this check be moved to kvm_arch_prepare_memory_region(), with maybe
an additional check in user_mem_abort()?
Thinking some more about a KVM capability that the VMM can check, I'm
not sure what it could do with it. The VMM simply maps something from a
device and cannot probe the cacheability - that's a property of the
device that is not usually exposed to the user by the driver. The VMM
just passes the vma on to KVM. As with the Normal NC case, we tried to
avoid building device knowledge into the VMM (and ended up with
VM_ALLOW_ANY_UNCACHED, since the VFIO driver did not allow such a user
mapping and probably wasn't entirely safe either).
I assume that with a cacheable pfn mapping, the whole range covered by
the vma is entirely safe to map as cacheable in user space.
--
Catalin