Re: [PATCH v1 0/4] virt: vmgenid: Add devicetree bindings support
From: David Woodhouse
Date: Wed Mar 20 2024 - 12:56:08 EST
On Wed, 2024-03-20 at 11:15 -0500, Rob Herring wrote:
> On Wed, Mar 20, 2024 at 01:50:43PM +0000, David Woodhouse wrote:
> > On Tue, 2024-03-19 at 16:24 +0100, Krzysztof Kozlowski wrote:
> > > On 19/03/2024 15:32, Sudan Landge wrote:
> > > > This small series of patches aims to add devicetree bindings support for
> > > > the Virtual Machine Generation ID (vmgenid) driver.
> > > >
> > > > The Virtual Machine Generation ID driver was introduced in commit af6b54e2b5ba
> > > > ("virt: vmgenid: notify RNG of VM fork and supply generation ID") as an
> > > > ACPI-only device.
> > > > We would like to extend vmgenid to support devicetree bindings because:
> > > > 1. A device should not be defined as an ACPI- or DT-only device.
>
> This (and the binding patch) tells me nothing about what the "Virtual
> Machine Generation ID driver" is, and isn't really justification for
> "why".
It's a reference to a memory area which the OS can use to tell whether
it's been snapshotted and restored (or 'forked'). A future submission
should have a reference to something like
https://www.qemu.org/docs/master/specs/vmgenid.html or the Microsoft
doc which is linked from there.
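To give a rough idea of the semantics (an illustrative sketch only,
not the real driver, which lives in drivers/virt/vmgenid.c): the
region holds a 16-byte generation ID which changes whenever the VM is
restored from a snapshot, and the guest's only job is to notice that
change and, for example, re-seed its RNG:

/*
 * Illustrative sketch only -- the point is just the semantics: a
 * 16-byte ID that changes when the VM is snapshotted/restored.
 */
#include <linux/random.h>
#include <linux/string.h>
#include <linux/types.h>

#define VMGENID_SIZE	16

static u8 old_id[VMGENID_SIZE];

/* Called whenever the platform signals that the ID may have changed. */
static void vmgenid_check(const u8 *id)
{
        if (memcmp(old_id, id, VMGENID_SIZE)) {
                memcpy(old_id, id, VMGENID_SIZE);
                /* Tell the RNG that the world has been forked. */
                add_vmfork_randomness(old_id, VMGENID_SIZE);
        }
}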
> DT/ACPI is for discovering what hardware folks failed to make
> discoverable. But here, both sides are software. Can't the software
> folks do better?
We are. Using device-tree *is* better. :)
> This is just the latest in $hypervisor bindings[1][2][3]. The value add
> must be hypervisors because every SoC vendor seems to be creating their
> own with their own interfaces.
The VMGenId one is cross-platform; we don't *want* to reinvent the
wheel there. We just want to discover that same memory area with
precisely the same semantics, but through the device-tree instead of
being forced to shoe-horn the whole of the ACPI horridness into a
platform which doesn't need it. (Or make it the BAR of a newly-invented
PCI device and have to add PCI to a microVM platform which doesn't
otherwise need it, etc.)
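On the driver side, that discovery really is as boring as it sounds.
Roughly (the compatible string and resource handling here are my
guesses, not lifted from the series):

/*
 * Rough sketch of the DT probe path -- not the actual patch.
 * Discovery is just "map the region described by the node"; no
 * emulated bus or config space required.
 */
#include <linux/err.h>
#include <linux/io.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static int vmgenid_dt_probe(struct platform_device *pdev)
{
        void __iomem *base;

        /* The real driver may well prefer memremap(): it's RAM, not MMIO. */
        base = devm_platform_ioremap_resource(pdev, 0);
        if (IS_ERR(base))
                return PTR_ERR(base);

        /* ... read the 16-byte ID and wire up the notification IRQ ... */
        return 0;
}

static const struct of_device_id vmgenid_of_ids[] = {
        { .compatible = "microsoft,vmgenid" },	/* name assumed */
        { }
};
MODULE_DEVICE_TABLE(of, vmgenid_of_ids);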
> I assume you have other calls into the hypervisor and notifications from
> the hypervisor? Are you going to add DT nodes for each one? I'd be more
> comfortable with DT describing THE communication channel with the
> hypervisor than what sounds like a singular function.
This isn't hypervisor-specific. There is a memory region with certain
semantics which may exist on all kinds of platforms, and we're just
allowing the guest to discover where it is. I don't see how it fits
into the model you're describing above.
> Otherwise, what's the next binding?
You meant that last as a rhetorical question, but I'll answer it
anyway. The thing I'm actually working on this week is a mechanism to
expose clock synchronisation (since it's kind of pointless for *all* of
the guests running on a host to run NTP/PTP/PPS/whatever to calibrate
the *same* underlying oscillator).
As far as the *discoverability* is concerned, it's fundamentally the
same thing — just a memory region with certain defined semantics, and
probably an interrupt for when the contents change.
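(Purely hypothetically, since no binding or driver for it exists yet
and every name below is made up, the guest side would again look like
"map a region, handle an interrupt when the host says it changed":)

/*
 * Entirely hypothetical sketch -- the same shape as vmgenid: a shared
 * region, plus an interrupt meaning "the contents changed, re-read them".
 */
#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct clocksync_dev {
        void *shared;			/* mapped host-provided region */
        struct work_struct update_work;	/* re-reads the calibration data */
};

static irqreturn_t clocksync_irq(int irq, void *data)
{
        struct clocksync_dev *cs = data;

        schedule_work(&cs->update_work);
        return IRQ_HANDLED;
}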
There *isn't* an ACPI specification for that one already; I was
thinking of describing it *only* in DT, and if someone wants it on a
platform which is afflicted with ACPI, they can just do it in a PRP0001
device.
As with vmgenid, there's really very little benefit to wrapping a whole
bunch of pointless emulated "hardware discoverability" around it, when
it can just be described by ACPI/DT directly. That's what they're
*for*.