Re: [PATCH v1 1/2] KVM: arm64: determine memory type from VMA
From: Jason Gunthorpe
Date: Tue Oct 10 2023 - 11:05:13 EST
On Tue, Oct 10, 2023 at 03:25:22PM +0100, Catalin Marinas wrote:
> On Thu, Oct 05, 2023 at 01:54:58PM -0300, Jason Gunthorpe wrote:
> > On Thu, Oct 05, 2023 at 05:15:37PM +0100, Catalin Marinas wrote:
> > > On Thu, Sep 07, 2023 at 11:14:58AM -0700, ankita@xxxxxxxxxx wrote:
> > > > From: Ankit Agrawal <ankita@xxxxxxxxxx>
> > > > Currently KVM determines if a VMA is pointing at IO memory by checking
> > > > pfn_is_map_memory(). However, the MM already gives us a way to tell what
> > > > kind of memory it is by inspecting the VMA.
> > >
> > > Well, it doesn't. It tells us what attributes the user mapped that
> > > memory with, not whether it's I/O memory or standard RAM.
> >
> > There is VM_IO which is intended to be used for address space with
> > side effects.
> >
> > And there is VM_PFNMAP which is intended to be used for address space
> > without struct page (IO or not)
> >
> > And finally we have the pgprot bits, which define the cachability.
> >
> > Do you have a definition of IO memory that those three things don't
> > cover?
> >
> > I would propose that, for KVM's purpose, IO memory is marked with
> > VM_IO or a non-cachable pgprot
> >
> > And "standard RAM" is defined by a cachable pgprot. Linux never makes
> > something that is VM_IO cachable.
>
> I think we can safely set a stage 2 Normal NC for a vma with pgprot
> other than MT_NORMAL or MT_NORMAL_TAGGED. But the other way around is
> not that simple. Just because the VMM was allowed to map it as cacheable
> does not mean that it supports all the CPU features. One example is MTE
> where we can only guarantee that the RAM given to the OS at boot
> supports tagged accesses.
Is there a use case to supply the VMM with cachable memory that is not
full featured? At least the vfio cases I am aware of do not actually
want to do this and would probably like the arch to prevent these
corner cases upfront.
> I've seen something similar in the past with
> LSE atomics (or was it exclusives?) not being propagated. These don't
> make the memory safe for a guest to use as general purpose RAM.
At least from a mm perspective, I think it is important that cachable
struct pages are all the same and all interchangeable. If the arch
cannot provide this it should not allow the pgmap/memremap to succeed
at all. Otherwise drivers using these new APIs are never going to work
fully right.
That leaves open the question of what to do with a cachable VM_PFNMAP,
but if the arch can deal with the memremap issue then it seems like it
can use the same logic when inspecting the VMA contents?
> I was thinking more about keeping things simpler and avoid any lack of
> coherence between the VMM and the VM, in case the latter maps it as
> Normal NC. But if the VMM doesn't touch it, the initial cache
> maintenance by the KVM would do.
I see, in general we usually don't have use cases for the VMM touching
the vfio memory as it is hard to make that secure. I would like it if
that memory could be a FD instead of a VMA. Maybe someday.
> See above on the characteristics of the memory. If some of them are not
> supported, it's probably fine (atomics not working) but others like MTE
> accesses could either cause external aborts or access random addresses
> from elsewhere. Currently we rely on pfn_is_map_memory() for this but we
> need a way to tell that other ranges outside the initial RAM supports
> all features.
Indeed, I did not realize this about arm. See above on my note that at
the mm level we should not have different types of cachable struct
pages. So I think the right thing to do is fix that and still rely on
the struct page test here in kvm.
> IOW, is any of this memory (mapped as cacheable in the
> VMM) special purpose with only a subset of the CPU features supported?
At least the use cases I am interested in have the memory be
functionally indistinguishable from boot time DDR.
Cachable memory that does not function the same as DDR is something
else, maybe that is your "cachable IO memory" concept. I don't have a
use case for it. If we do decide to create it then it likely should be
tagged as MEMORY_DEVICE_PCI_P2PDMA in the mm side and not get into any
code paths that assume DDR.
Jason