Re: [PATCH v3 1/1] KVM: arm64: Allow cacheable stage 2 mapping using VMA flags
From: Oliver Upton
Date: Tue Mar 18 2025 - 15:31:11 EST
On Tue, Mar 18, 2025 at 09:55:27AM -0300, Jason Gunthorpe wrote:
> On Tue, Mar 18, 2025 at 09:39:30AM +0000, Marc Zyngier wrote:
>
> > The memslot must also be created with a new flag ((2c) in the taxonomy
> > above) that carries the "Please map VM_PFNMAP VMAs as cacheable"
> > semantics. This flag is only allowed if (1) is valid.
> >
> > This results in the following behaviours:
> >
> > - If the VMM creates the memslot with the cacheable attribute without
> > (1) being advertised, we fail.
> >
> > - If the VMM creates the memslot without the cacheable attribute, we
> > map as NC, as it is today.
>
> Is that OK though?
>
> Now we have the MM page tables mapping this memory as cacheable, but
> KVM and the guest are accessing it as non-cached.
>
> I thought ARM tried hard to avoid creating such mismatches? This is
> why the pgprot flags were used to drive this, not an opt-in flag. To
> prevent userspace from forcing a mismatch.
It's far more problematic the other way around, e.g. when the host knows
that something needs a Device-* attribute but the guest has mapped it
cacheable. The endpoint for that PA could, for example, fall over when
lines pulled in by the guest are written back, which of course can't
always be traced back to the offending VM.
OTOH, if the host knows that a PA is cacheable and the guest does
something non-cacheable, you 'just' have to deal with the usual
mismatched attributes problem as laid out in the ARM ARM.
> > What this doesn't do is *automatically* decide for the VMM what
> > attributes to use. The VMM must know what it is doing, and only
> > provide the memslot flag when appropriate. Doing otherwise may eat
> > your data and/or take the machine down (cacheable mapping on a device
> > can be great fun).
>
> Again, this is why we followed the VMA flags. The thing creating the
> VMA already made this safety determination when it set pgprot
> cacheable. We should not allow KVM to randomly make any PGPROT
> cacheable!
That doesn't seem to be the suggestion.
Userspace should state its intent on the memslot, i.e. the sort of
mapping it wants KVM to create, and a memslot flag saying "I allow
cacheable mappings" seems to fit the bill.
Then we have:
- Memslot creation fails for any PFNMAP slot with the flag set &&
!FEAT_FWB
- Stage-2 faults fail (exit to userspace) if the above conditions are
not met
- Stage-2 faults serviced w/ a cacheable mapping if the precondition is
satisfied and pgprot on the VMA is cacheable
- Stage-2 faults serviced w/ a non-cacheable mapping if flag is not
set
Seems workable + would prevent KVM from being excessively permissive?
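
Purely as an illustration of the decision table above (the flag name,
bit position and helpers are made up here, not actual or proposed KVM
uAPI), it boils down to something like:

	/*
	 * Toy model of the checks above; not kernel code, and the flag
	 * name is a placeholder rather than a real KVM uAPI definition.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define KVM_MEM_CACHEABLE_PFNMAP	(1u << 4)	/* hypothetical */

	enum s2_action {
		S2_EXIT_TO_USERSPACE,
		S2_MAP_NONCACHEABLE,
		S2_MAP_CACHEABLE,
	};

	/* Memslot creation: refuse a cacheable PFNMAP slot without FEAT_FWB. */
	static bool memslot_create_ok(unsigned int flags, bool pfnmap, bool has_fwb)
	{
		if ((flags & KVM_MEM_CACHEABLE_PFNMAP) && pfnmap && !has_fwb)
			return false;
		return true;
	}

	/* Stage-2 fault on a PFNMAP slot: pick the mapping, or punt to userspace. */
	static enum s2_action s2_fault(unsigned int flags, bool vma_cacheable,
				       bool has_fwb)
	{
		if (!(flags & KVM_MEM_CACHEABLE_PFNMAP))
			return S2_MAP_NONCACHEABLE;	/* today's behaviour */

		if (!has_fwb || !vma_cacheable)
			return S2_EXIT_TO_USERSPACE;	/* flag set, preconditions unmet */

		return S2_MAP_CACHEABLE;
	}

	int main(void)
	{
		/* Creating a cacheable PFNMAP slot without FEAT_FWB must fail. */
		printf("create ok: %d\n", memslot_create_ok(KVM_MEM_CACHEABLE_PFNMAP, true, false));
		/* Opted in, FWB present, cacheable pgprot on the VMA: cacheable S2. */
		printf("fault: %d\n", s2_fault(KVM_MEM_CACHEABLE_PFNMAP, true, true));
		/* No flag: non-cacheable, exactly as today, whatever the VMA says. */
		printf("fault: %d\n", s2_fault(0, true, true));
		return 0;
	}

On the VMM side this would presumably just be another bit in the flags
field of struct kvm_userspace_memory_region at KVM_SET_USER_MEMORY_REGION
time, set only for the slot it knows is backed by cacheable memory.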
Thanks,
Oliver