Re: [PATCH v4] KVM: Add KVM_GET_REG_LIST ioctl for LoongArch

From: Bibo Mao

Date: Wed Feb 04 2026 - 21:02:47 EST




On 2026/2/5 9:50 AM, liushuyu wrote:
Hi Bibo,

Hi Zixing,

Thanks for doing this.

On 2026/2/4 7:36 PM, Zixing Liu wrote:
This ioctl can be used by userspace applications to determine which
(special) registers can be get/set in a meaningful way.

This can be very useful for cross-platform VMMs so that they do not have
to hardcode register indices for each supported architecture.

Signed-off-by: Zixing Liu <liushuyu@xxxxxxx>
---
  Documentation/virt/kvm/api.rst |  2 +-
  arch/loongarch/kvm/vcpu.c      | 87 ++++++++++++++++++++++++++++++++++
  2 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 01a3abef8abb..f46dd8be282f 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -3603,7 +3603,7 @@ VCPU matching underlying host.
  ---------------------
    :Capability: basic
-:Architectures: arm64, mips, riscv, x86 (if KVM_CAP_ONE_REG)
+:Architectures: arm64, loongarch, mips, riscv, x86 (if KVM_CAP_ONE_REG)
  :Type: vcpu ioctl
  :Parameters: struct kvm_reg_list (in/out)
  :Returns: 0 on success; -1 on error
diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 656b954c1134..bd855ee20ee2 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -3,6 +3,7 @@
   * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
   */
  +#include "asm/kvm_host.h"
Better to put this after #include <asm/fpu.h>, keeping alphabetical order.
  #include <linux/kvm_host.h>
  #include <asm/fpu.h>
  #include <asm/lbt.h>
@@ -14,6 +15,8 @@
  #define CREATE_TRACE_POINTS
  #include "trace.h"
  +#define NUM_LBT_REGS 6
+
  const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
      KVM_GENERIC_VCPU_STATS(),
      STATS_DESC_COUNTER(VCPU, int_exits),
@@ -1186,6 +1189,72 @@ static int kvm_loongarch_vcpu_set_attr(struct kvm_vcpu *vcpu,
      return ret;
  }
  +static int kvm_loongarch_walk_csrs(struct kvm_vcpu *vcpu, u64 __user *uindices)
+{
+    unsigned int i, count;
+
+    for (i = 0, count = 0; i < CSR_MAX_NUMS; i++) {
+        if (!(get_gcsr_flag(i) & (SW_GCSR | HW_GCSR)))
+            continue;
+        if (i >= LOONGARCH_CSR_PERFCTRL0 && i <= LOONGARCH_CSR_PERFCNTR3) {
+            /* Skip PMU CSRs if not supported by the guest */
+            if (!kvm_guest_has_pmu(&vcpu->arch))
+                continue;
+        }
This is workable. gcsr_flag could be changed into a structure, with a new
element "int required_features" added. However it does not matter now; it
can be done later.

CSR registers related to the msgint feature can be handled with this
method as well.

How about the debug/watch CSR registers, can they be skipped also? The
same goes for the MERR CSR registers (LOONGARCH_CSR_MERR*).

The difference in the CSR register list can be checked against
kvm_loongarch_get_csr() in the QEMU VMM, see
https://gitlab.com/qemu-project/qemu/-/blob/master/target/loongarch/kvm/kvm.c?ref_type=heads

Do you think that, for KVM guests, the only CSRs that need to be saved
are the ones listed at
https://gitlab.com/qemu-project/qemu/-/blob/master/target/loongarch/kvm/kvm.c?ref_type=heads#L375-544?
Yes, I think so.

Then the concern about embedding a big list will become valid again.
There is no obvious error with a bigger list at present; however,
considering performance and unexpected compatibility issues, smaller is
better.

With new features or new hardware, the register list will become bigger.

Regards
Bibo Mao

What do you think?


Thanks,

Zixing

+        const u64 reg = KVM_IOC_CSRID(i);
+        if (uindices && put_user(reg, uindices++))
+            return -EFAULT;
+        count++;
+    }
+
+    return count;
+}
+
+static unsigned long kvm_loongarch_num_regs(struct kvm_vcpu *vcpu)
+{
+    /* +1 for the KVM_REG_LOONGARCH_COUNTER register */
+    unsigned long res =
+        kvm_loongarch_walk_csrs(vcpu, NULL) + KVM_MAX_CPUCFG_REGS + 1;
+
+    if (kvm_guest_has_lbt(&vcpu->arch))
+        res += NUM_LBT_REGS;
+
+    return res;
+}
+
+static int kvm_loongarch_copy_reg_indices(struct kvm_vcpu *vcpu,
+                      u64 __user *uindices)
+{
+    u64 reg;
+    int i;
+
+    i = kvm_loongarch_walk_csrs(vcpu, uindices);
+    if (i < 0)
+        return i;
+    uindices += i;
+
+    for (i = 0; i < KVM_MAX_CPUCFG_REGS; i++) {
+        reg = KVM_IOC_CPUCFG(i);
+        if (put_user(reg, uindices++))
+            return -EFAULT;
+    }
+
+    reg = KVM_REG_LOONGARCH_COUNTER;
+    if (put_user(reg, uindices++))
+        return -EFAULT;
+
+    if (!kvm_guest_has_lbt(&vcpu->arch))
+        return 0;
+
+    for (i = 1; i <= NUM_LBT_REGS; i++) {
+        reg = (KVM_REG_LOONGARCH_LBT | KVM_REG_SIZE_U64 | i);
+        if (put_user(reg, uindices++))
+            return -EFAULT;
+    }
+
+    return 0;
+}
+
  long kvm_arch_vcpu_ioctl(struct file *filp,
               unsigned int ioctl, unsigned long arg)
  {
@@ -1251,6 +1320,24 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
          r = kvm_loongarch_vcpu_set_attr(vcpu, &attr);
          break;
      }
+    case KVM_GET_REG_LIST: {
+        struct kvm_reg_list __user *user_list = argp;
+        struct kvm_reg_list reg_list;
+        unsigned n;
+
+        r = -EFAULT;
+        if (copy_from_user(&reg_list, user_list, sizeof(reg_list)))
+            break;
+        n = reg_list.n;
+        reg_list.n = kvm_loongarch_num_regs(vcpu);
+        if (copy_to_user(user_list, &reg_list, sizeof(reg_list)))
+            break;
+        r = -E2BIG;
+        if (n < reg_list.n)
+            break;
+        r = kvm_loongarch_copy_reg_indices(vcpu, user_list->reg);
+        break;
+    }
      default:
          r = -ENOIOCTLCMD;
          break;