Introduce a paravirtualization interface for KVM/arm64 that lets a guest
discover whether a given VCPU is currently running or not.
The PV lock structure of the guest is allocated by user space.
A hypercall interface is provided for the guest to interrogate the
hypervisor's support for this interface and the location of the shared
memory structures.
Signed-off-by: Zengruan Ye <yezengruan@xxxxxxxxxx>
---
Documentation/virt/kvm/arm/pvlock.rst | 63 +++++++++++++++++++++++++
Documentation/virt/kvm/devices/vcpu.txt | 14 ++++++
2 files changed, 77 insertions(+)
create mode 100644 Documentation/virt/kvm/arm/pvlock.rst
diff --git a/Documentation/virt/kvm/arm/pvlock.rst b/Documentation/virt/kvm/arm/pvlock.rst
new file mode 100644
index 000000000000..58b3b8ee7537
--- /dev/null
+++ b/Documentation/virt/kvm/arm/pvlock.rst
@@ -0,0 +1,63 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Paravirtualized lock support for arm64
+======================================
+
+KVM/arm64 provides hypervisor service calls that a paravirtualized guest can
+use to discover whether a VCPU is currently running or not.
+
+Two new SMCCC compatible hypercalls are defined:
+
+* PV_LOCK_FEATURES: 0xC6000020
+* PV_LOCK_PREEMPTED: 0xC6000021
+
+The existence of the PV_LOCK hypercall should be probed using the SMCCC 1.1
+ARCH_FEATURES mechanism before calling it.
+
+PV_LOCK_FEATURES
+ ============= ======== ==========
+ Function ID: (uint32) 0xC6000020
+ PV_call_id: (uint32) The function to query for support.
+ Return value: (int64) NOT_SUPPORTED (-1) or SUCCESS (0) if the relevant
+ PV-lock feature is supported by the hypervisor.
+ ============= ======== ==========
+
+PV_LOCK_PREEMPTED
+ ============= ======== ==========
+ Function ID: (uint32) 0xC6000021
+ Return value: (int64) IPA of the PV lock data structure for this
+ VCPU. On failure:
+ NOT_SUPPORTED (-1)
+ ============= ======== ==========
+
+The IPA returned by PV_LOCK_PREEMPTED should be mapped by the guest as normal
+memory with inner and outer write back caching attributes, in the inner
+shareable domain.
+
+PV_LOCK_PREEMPTED returns the structure for the calling VCPU.
+
+PV lock state
+-------------
+
+The structure pointed to by the PV_LOCK_PREEMPTED hypercall is as follows:
+
++-----------+-------------+-------------+---------------------------------+
+| Field     | Byte Length | Byte Offset | Description                     |
++===========+=============+=============+=================================+
+| preempted | 8           | 0           | Indicates whether the VCPU that |
+|           |             |             | owns this structure has been    |
+|           |             |             | preempted. Non-zero values mean |
+|           |             |             | the VCPU has been preempted;    |
+|           |             |             | zero means it is not preempted. |
++-----------+-------------+-------------+---------------------------------+
+
+The preempted field will be updated to 0 by the hypervisor prior to scheduling
+a VCPU. When the VCPU is scheduled out, the preempted field will be updated
+to 1 by the hypervisor.
+
+The structure will be present within a reserved region of the normal memory
+given to the guest. The guest should not attempt to write into this memory.
+There is a structure per VCPU of the guest.
+
+For the user space interface see Documentation/virt/kvm/devices/vcpu.txt
+section "4. GROUP: KVM_ARM_VCPU_PVLOCK_CTRL".
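For illustration only (not part of the diff above or below): a rough guest-side
sketch of how the PV_LOCK_FEATURES/PV_LOCK_PREEMPTED calls documented in
pvlock.rst might be probed and consumed, modelled on the existing arm64 PV
steal-time code. The PV_LOCK_* constants are just the function IDs from the
tables above, and struct pvlock_vcpu_state is a placeholder name; the real
definitions belong to the rest of this series.

/*
 * Guest-side sketch: probe the PV lock interface over SMCCC 1.1 and map
 * the per-VCPU shared state returned by PV_LOCK_PREEMPTED.
 */
#include <linux/arm-smccc.h>
#include <linux/compiler.h>
#include <linux/io.h>
#include <linux/types.h>

#define PV_LOCK_FEATURES	0xC6000020
#define PV_LOCK_PREEMPTED	0xC6000021

/* Per-VCPU shared state; layout as in the "PV lock state" table above. */
struct pvlock_vcpu_state {
	__le64 preempted;	/* non-zero: the owning VCPU is scheduled out */
	/* bytes up to the 64-byte aligned base are assumed reserved */
};

static bool pv_lock_available(void)
{
	struct arm_smccc_res res;

	/* Probe via the SMCCC 1.1 ARCH_FEATURES mechanism first. */
	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
			     PV_LOCK_FEATURES, &res);
	if (res.a0 != SMCCC_RET_SUCCESS)
		return false;

	/* Then ask PV_LOCK_FEATURES whether PV_LOCK_PREEMPTED is supported. */
	arm_smccc_1_1_invoke(PV_LOCK_FEATURES, PV_LOCK_PREEMPTED, &res);
	return res.a0 == SMCCC_RET_SUCCESS;
}

static struct pvlock_vcpu_state *pv_lock_map_state(void)
{
	struct arm_smccc_res res;

	/* PV_LOCK_PREEMPTED returns the IPA of this VCPU's structure. */
	arm_smccc_1_1_invoke(PV_LOCK_PREEMPTED, &res);
	if (res.a0 == SMCCC_RET_NOT_SUPPORTED)
		return NULL;

	/* Map as normal, cacheable (write-back) memory, as required above. */
	return memremap(res.a0, sizeof(struct pvlock_vcpu_state), MEMREMAP_WB);
}

static bool pv_vcpu_is_preempted(struct pvlock_vcpu_state *st)
{
	/* The guest only reads this field; the hypervisor writes it. */
	return !!le64_to_cpu(READ_ONCE(st->preempted));
}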
diff --git a/Documentation/virt/kvm/devices/vcpu.txt b/Documentation/virt/kvm/devices/vcpu.txt
index 6f3bd64a05b0..c10a5945075b 100644
--- a/Documentation/virt/kvm/devices/vcpu.txt
+++ b/Documentation/virt/kvm/devices/vcpu.txt
@@ -74,3 +74,17 @@ Specifies the base address of the stolen time structure for this VCPU. The
base address must be 64 byte aligned and exist within a valid guest memory
region. See Documentation/virt/kvm/arm/pvtime.txt for more information
including the layout of the stolen time structure.
+
+4. GROUP: KVM_ARM_VCPU_PVLOCK_CTRL
+Architectures: ARM64
+
+4.1 ATTRIBUTE: KVM_ARM_VCPU_PVLOCK_IPA
+Parameters: 64-bit base address
+Returns: -ENXIO:  PV lock not implemented
+         -EEXIST: Base address already set for this VCPU
+         -EINVAL: Base address not 64 byte aligned
+
+Specifies the base address of the PV lock structure for this VCPU. The
+base address must be 64 byte aligned and exist within a valid guest memory
+region. See Documentation/virt/kvm/arm/pvlock.rst for more information
+including the layout of the pv lock structure.
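
For illustration only (not part of the diff above): a minimal VMM-side sketch
of programming KVM_ARM_VCPU_PVLOCK_IPA through the standard per-VCPU
device-attribute ioctls, in the same way KVM_ARM_VCPU_PVTIME_IPA is set today.
vcpu_fd and the chosen IPA are placeholders, and the KVM_ARM_VCPU_PVLOCK_*
constants are assumed to be exported by the UAPI headers added elsewhere in
this series.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Returns 0 on success, -1 with errno set otherwise. */
static int set_pvlock_ipa(int vcpu_fd, uint64_t ipa)
{
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VCPU_PVLOCK_CTRL,
		.attr  = KVM_ARM_VCPU_PVLOCK_IPA,
		/* addr points at the 64-bit guest physical base address */
		.addr  = (uint64_t)(uintptr_t)&ipa,
	};

	/* errno == ENXIO means this kernel does not implement PV lock. */
	if (ioctl(vcpu_fd, KVM_HAS_DEVICE_ATTR, &attr))
		return -1;

	/* ipa must be 64-byte aligned and inside a registered memslot. */
	return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}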