Re: [PATCH 6.12 8/8] KVM: arm64: Eagerly switch ZCR_EL{1,2}
From: Marc Zyngier
Date: Wed Mar 19 2025 - 05:16:43 EST
On Wed, 19 Mar 2025 00:26:14 +0000,
Gavin Shan <gshan@xxxxxxxxxx> wrote:
>
> Hi Mark,
>
> On 3/14/25 10:35 AM, Mark Brown wrote:
> > From: Mark Rutland <mark.rutland@xxxxxxx>
> >
> > [ Upstream commit 59419f10045bc955d2229819c7cf7a8b0b9c5b59 ]
> >
> > In non-protected KVM modes, while the guest FPSIMD/SVE/SME state is live on the
> > CPU, the host's active SVE VL may differ from the guest's maximum SVE VL:
> >
> > * For VHE hosts, when a VM uses NV, ZCR_EL2 contains a value constrained
> > by the guest hypervisor, which may be less than or equal to that
> > guest's maximum VL.
> >
> > Note: in this case the value of ZCR_EL1 is immaterial due to E2H.
> >
> > * For nVHE/hVHE hosts, ZCR_EL1 contains a value written by the guest,
> > which may be less than or greater than the guest's maximum VL.
> >
> > Note: in this case hyp code traps host SVE usage and lazily restores
> > ZCR_EL2 to the host's maximum VL, which may be greater than the
> > guest's maximum VL.
> >
> > This can be the case between exiting a guest and kvm_arch_vcpu_put_fp().
> > If a softirq is taken during this period and the softirq handler tries
> > to use kernel-mode NEON, then the kernel will fail to save the guest's
> > FPSIMD/SVE state, and will pend a SIGKILL for the current thread.
> >
> > This happens because kvm_arch_vcpu_ctxsync_fp() binds the guest's live
> > FPSIMD/SVE state with the guest's maximum SVE VL, and
> > fpsimd_save_user_state() verifies that the live SVE VL is as expected
> > before attempting to save the register state:
> >
> > | if (WARN_ON(sve_get_vl() != vl)) {
> > | force_signal_inject(SIGKILL, SI_KERNEL, 0, 0);
> > | return;
> > | }
> >
> > Fix this and make this a bit easier to reason about by always eagerly
> > switching ZCR_EL{1,2} at hyp during guest<->host transitions. With this
> > happening, there's no need to trap host SVE usage, and the nVHE/hVHE
> > __deactivate_cptr_traps() logic can be simplified to enable host access
> > to all present FPSIMD/SVE/SME features.
> >
> > In protected nVHE/hVHE modes, the host's state is always saved/restored
> > by hyp, and the guest's state is saved prior to exit to the host, so
> > from the host's PoV the guest never has live FPSIMD/SVE/SME state, and
> > the host's ZCR_EL1 is never clobbered by hyp.
> >
> > Fixes: 8c8010d69c132273 ("KVM: arm64: Save/restore SVE state for nVHE")
> > Fixes: 2e3cf82063a00ea0 ("KVM: arm64: nv: Ensure correct VL is loaded before saving SVE state")
> > Signed-off-by: Mark Rutland <mark.rutland@xxxxxxx>
> > Reviewed-by: Mark Brown <broonie@xxxxxxxxxx>
> > Tested-by: Mark Brown <broonie@xxxxxxxxxx>
> > Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> > Cc: Fuad Tabba <tabba@xxxxxxxxxx>
> > Cc: Marc Zyngier <maz@xxxxxxxxxx>
> > Cc: Oliver Upton <oliver.upton@xxxxxxxxx>
> > Cc: Will Deacon <will@xxxxxxxxxx>
> > Reviewed-by: Oliver Upton <oliver.upton@xxxxxxxxx>
> > Link: https://lore.kernel.org/r/20250210195226.1215254-9-mark.rutland@xxxxxxx
> > Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
> > Signed-off-by: Mark Brown <broonie@xxxxxxxxxx>
> > ---
> > arch/arm64/kvm/fpsimd.c | 30 -----------------
> > arch/arm64/kvm/hyp/entry.S | 5 +++
> > arch/arm64/kvm/hyp/include/hyp/switch.h | 59 +++++++++++++++++++++++++++++++++
> > arch/arm64/kvm/hyp/nvhe/hyp-main.c | 13 ++++----
> > arch/arm64/kvm/hyp/nvhe/switch.c | 33 +++++++++++++++---
> > arch/arm64/kvm/hyp/vhe/switch.c | 4 +++
> > 6 files changed, 103 insertions(+), 41 deletions(-)
> >
>
> [...]
>
> > diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> > index 4e757a77322c9efc59cdff501745f7c80d452358..1c8e2ad32e8c396fc4b11d5fec2e86728f2829d9 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> > @@ -5,6 +5,7 @@
> > */
> > #include <hyp/adjust_pc.h>
> > +#include <hyp/switch.h>
> > #include <asm/pgtable-types.h>
> > #include <asm/kvm_asm.h>
> > @@ -176,8 +177,12 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
> > sync_hyp_vcpu(hyp_vcpu);
> > pkvm_put_hyp_vcpu(hyp_vcpu);
> > } else {
> > + struct kvm_vcpu *vcpu = kern_hyp_va(host_vcpu);
> > +
> > /* The host is fully trusted, run its vCPU directly. */
> > - ret = __kvm_vcpu_run(host_vcpu);
> > + fpsimd_lazy_switch_to_guest(vcpu);
> > + ret = __kvm_vcpu_run(vcpu);
> > + fpsimd_lazy_switch_to_host(vcpu);
> > }
> >
>
> @host_vcpu should already be the hypervisor's linear mapping address in v6.12.
> It looks incorrect to assume it's a kernel linear mapping address and convert
> it (@host_vcpu) to the hypervisor's linear address again, if I'm not missing
> something.
host_vcpu is passed as a parameter to the hypercall, and is definitely
a kernel address.
However, at this stage, we have *already* converted it to a HYP VA:
https://web.git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/kvm/hyp/nvhe/hyp-main.c?h=linux-6.12.y#n147
The result is that this change is turning a perfectly valid HYP VA
into... something. Odds are that the masking/patching will not mess up
the address, but this is completely buggy anyway. In general,
kern_hyp_va() is not an idempotent operation.
Thanks for noticing that something was wrong.
Broonie, can you please look into this?
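In case it helps, the obvious shape for the corrected hunk (an untested
sketch, assuming the 6.12.y entry code linked above has already applied
kern_hyp_va() to host_vcpu) would be to drop the second conversion and
use host_vcpu directly:

```c
	} else {
		/* The host is fully trusted, run its vCPU directly. */
		fpsimd_lazy_switch_to_guest(host_vcpu);
		ret = __kvm_vcpu_run(host_vcpu);
		fpsimd_lazy_switch_to_host(host_vcpu);
	}
```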
Greg, it may be more prudent to unstage this series from 6.12-stable
until we know for sure this is the only problem.
M.
--
Without deviation from the norm, progress is not possible.