Re: [PATCH v7 7/7] KVM: arm64: Normalize cache configuration

From: Marc Zyngier

Date: Thu Apr 09 2026 - 11:48:52 EST


On Thu, 09 Apr 2026 15:51:59 +0100,
David Woodhouse <dwmw2@xxxxxxxxxxxxx> wrote:
>
> On Thu, 2026-04-09 at 14:36 +0100, Marc Zyngier wrote:
> > On Thu, 09 Apr 2026 13:25:24 +0100,
> > David Woodhouse <dwmw2@xxxxxxxxxxxxx> wrote:
> > >
> > > On Thu, 12 Jan 2023 at 11:38:52 +0900, Akihiko Odaki wrote:
> > > > Before this change, the cache configuration of the physical CPU was
> > > > exposed to vcpus. This is problematic because the cache configuration a
> > > > vcpu sees varies when it migrates between vcpus with different cache
> > > > configurations.
> > > >
> > > > Fabricate cache configuration from the sanitized value, which holds the
> > > > CTR_EL0 value the userspace sees regardless of which physical CPU it
> > > > resides on.
> > > >
> > > > CLIDR_EL1 and CCSIDR_EL1 are now writable from the userspace so that
> > > > the VMM can restore the values saved with the old kernel.
> > >
> > > (commit 7af0c2534f4c5)
> > >
> > > How does the VMM set the values that the old kernel would have set?
> >
> > By reading them at the source?
>
> Yes, but the VMM in EL0 *can't* read the source, can it?

Again, this is designed for migration, and snapshotting the source
values gives you the expected result.

>
> > > Let's say we're deploying a kernel with this change for the first time,
> > > and we need to ensure that we provide a consistent environment to
> > > guests, which can be live migrated back to an older host.
> >
> > We have never guaranteed host downgrade. It almost never works.
>
> Huh? Host downgrade absolutely *does* work; KVM on Arm isn't a toy.
> We'd never be able to ship a new kernel if we didn't know we could roll
> it *back* if we needed to.

You have clearly been lucky so far, because I'd never make such a claim.

>
> I *know* that you know perfectly well that we actively test host
> downgrades prior to every rollout of a new kernel. So I'm a little
> confused about what you're trying to say here, Marc.

I said exactly this: we make no effort to ensure that a guest started
on one version of the kernel can be migrated to an older version -- the
state space might be different.

>
> > > So for new launches, we need to provide the values that the old kernel
> > > *would* have provided to the guest. A new launch isn't a migration;
> > > there are no "values saved with the old kernel".
> >
> > And you can provide these values.
>
> If I know them, sure. But I don't, because:
>
> > > Userspace can't read the CLIDR_EL1 and CCSIDR_EL1 registers directly,
> > > and AFAICT not everything we need to reconstitute them is in sysfs. How
> > > is this supposed to work?
> > >
> > > Shouldn't this change have been made as a capability that the VMM can
> > > explicitly opt in or out of? Environments that don't do cross-CPU
> > > migration absolutely don't care about, and actively don't *want*, the
> > > sanitisation that this commit inflicted on us, surely?
> >
> > I don't think a capability buys you anything. You want to expose
> > something to the guest? Make it so. You are in the favourable
> > situation to completely own the HW and the VMM.
>
> The values are writable now, but userspace can't easily see *what*
> they would have been under the previous kernel. So having the
> capability to ask KVM to set them to those values seems like it might
> be useful.

Well, it's only a patch away.

M.

--
Without deviation from the norm, progress is not possible.