Re: [Patch v4 07/13] perf/x86: Add constraint for guest perf metrics event
From: Mingwei Zhang
Date: Tue Oct 03 2023 - 18:03:34 EST
On Mon, Oct 2, 2023 at 5:56 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Mon, Oct 02, 2023, Peter Zijlstra wrote:
> > On Mon, Oct 02, 2023 at 08:56:50AM -0700, Sean Christopherson wrote:
> > > > > worse it's not a choice based in technical reality.
> > >
> > > The technical reality is that context switching the PMU between host and guest
> > > requires reading and writing far too many MSRs for KVM to be able to context
> > > switch at every VM-Enter and every VM-Exit. And PMIs skidding past VM-Exit adds
> > > another layer of complexity to deal with.
> >
> > I'm not sure what you're suggesting here. It will have to save/restore
> > all those MSRs anyway. Suppose it switches between vCPUs.
>
> The "when" is what's important. If KVM took a literal interpretation of
> "exclude guest" for pass-through MSRs, then KVM would context switch all those
> MSRs twice for every VM-Exit=>VM-Enter roundtrip, even when the VM-Exit isn't a
> reschedule IRQ to schedule in a different task (or vCPU). The overhead to save
> all the host/guest MSRs and load all of the guest/host MSRs *twice* for every
> VM-Exit would be a non-starter. E.g. simple VM-Exits are completely handled in
> <1500 cycles, and "fastpath" exits are something like half that. Switching all
> the MSRs is likely 1000+ cycles, if not double that.
Hi Sean,
Sorry, I don't mean to interrupt the conversation, but this point is
slightly confusing to me.
I remember that when we did AMX, we added a gigantic 8KB buffer to the FPU
context switch, and that works well in Linux today. Why can't we do the
same for the PMU, i.e. context switch all of the counters, event selectors
and global state there?
At the VM boundary, all we need to touch is the global ctrl, right? We stop
all counters when we exit from the guest and restore the guest's value of
global ctrl when we enter it. The actual PMU context switch would be
deferred to roughly the same point where we switch the FPU (xsave) state,
i.e. when switching task_struct and/or returning to userspace.
Please kindly correct me if this is flawed; a rough sketch of what I mean
is below.
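To be clear, the kvm_pmu_* helpers and the pmu_guest_global_ctrl field in
this sketch are made up purely for illustration; this is not existing KVM
or perf code:

static void kvm_pmu_on_vmexit(struct kvm_vcpu *vcpu)
{
	/* Only stop the guest counters here; leave everything else alone. */
	rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, vcpu->arch.pmu_guest_global_ctrl);
	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
}

static void kvm_pmu_on_vmenter(struct kvm_vcpu *vcpu)
{
	/* Re-arm the guest counters with the saved global ctrl value. */
	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, vcpu->arch.pmu_guest_global_ctrl);
}

/*
 * The expensive part -- saving/restoring all counters, selectors and the
 * rest of the global state -- is deferred to the same places where the
 * xsave state is switched, i.e. task switch and/or return to userspace.
 */
static void kvm_pmu_switch_out(struct kvm_vcpu *vcpu)
{
	kvm_pmu_save_guest_state(vcpu);		/* hypothetical */
	kvm_pmu_restore_host_state();		/* hypothetical */
}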
Ah, I think I understand what you are saying... So, regarding "If KVM took
a literal interpretation of "exclude guest" for pass-through MSRs...":
perf_event.attr.exclude_guest might need a different meaning if we have a
pass-through PMU for KVM. exclude_guest=1 should not mean that the host
counters are restored at the VM-Exit boundary; doing that would be a
disaster.
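For reference, this is the attribute I mean from the userspace side; today
an event opened like this is simply expected not to count while a guest is
running:

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Minimal example: a cycles event that excludes guest mode. */
int open_host_only_cycles(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.exclude_guest = 1;	/* don't count while in guest mode */

	/* pid = 0 (this task), cpu = -1 (any), no group leader, no flags. */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}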
Thanks.
-Mingwei
>
> FWIW, the primary use case we care about is for slice-of-hardware VMs, where each
> vCPU is pinned 1:1 with a host pCPU. I suspect it's a similar story for the other
> CSPs that are trying to provide accurate PMUs to guests. If a vCPU is scheduled
> out, then yes, a bunch of context switching will need to happen. But for the
> types of VMs that are the target audience, their vCPUs will rarely be scheduled
> out.
>
> > > > > It's a choice out of lazyness, disabling host PMU is not a requirement
> > > > > for pass-through.
> > >
> > > The requirement isn't passthrough access, the requirements are that the guest's
> > > PMU has accuracy that is on par with bare metal, and that exposing a PMU to the
> > > guest doesn't have a meaningful impact on guest performance.
> >
> > Given you don't think that trapping MSR accesses is viable, what else
> > besides pass-through did you have in mind?
>
> Sorry, I didn't mean to imply that we don't want pass-through of MSRs. What I was
> trying to say is that *just* passthrough MSRs doesn't solve the problem, because
> again I thought the whole "context switch PMU state less often" approach had been
> firmly nak'd.
>
> > > > Not just a choice of laziness, but it will clearly be forced upon users
> > > > by external entities:
> > > >
> > > > "Pass ownership of the PMU to the guest and have no host PMU, or you
> > > > won't have sane guest PMU support at all. If you disagree, please open
> > > > a support ticket, which we'll ignore."
> > >
> > > We don't have sane guest PMU support today.
> >
> > Because KVM is too damn hard to use, rebooting a machine is *sooo* much
> > easier -- and I'm really not kidding here.
> >
> > Anyway, you want pass-through, but that doesn't mean host cannot use
> > PMU when vCPU thread is not running.
> >
> > > If y'all are willing to let KVM redefine exclude_guest to be KVM's outer run
> > > loop, then I'm all for exploring that option. But that idea got shot down over
> > > a year ago[*].
> >
> > I never saw that idea in that thread. You virt people keep talking like
> > I know how KVM works -- I'm not joking when I say I have no clue about
> > virt.
> >
> > Sometimes I get a little clue after y'all keep bashing me over the head,
> > but it quickly erases itself.
> >
> > > Or at least, that was my reading of things. Maybe it was just a
> > > misunderstanding because we didn't do a good job of defining the behavior.
> >
> > This might be the case. I don't particularly care where the guest
> > boundary lies -- somewhere in the vCPU thread. Once the thread is gone,
> > PMU is usable again etc..
>
> Well drat, that there would have saved a wee bit of frustration. Better late
> than never though, that's for sure.
>
> Just to double confirm: keeping guest PMU state loaded until the vCPU is scheduled
> out or KVM exits to userspace, would mean that host perf events won't be active
> for potentially large swaths of non-KVM code. Any function calls or event/exception
> handlers that occur within the context of ioctl(KVM_RUN) would run with host
> perf events disabled.
>
> Are you ok with that approach? Assuming we don't completely botch things, the
> interfaces are sane, we can come up with a clean solution for handling NMIs, etc.
>
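If I read this correctly, the boundary would look roughly like the
following. The perf_guest_pmu_load()/put() hooks are purely illustrative,
not existing perf or KVM APIs:

/* Inside ioctl(KVM_RUN); the same put/load would also happen on sched-out/in. */
static int vcpu_run_sketch(struct kvm_vcpu *vcpu)
{
	perf_guest_pmu_load(vcpu);	/* host perf events stop counting here */

	for (;;) {
		/*
		 * VM-Enter/VM-Exit and exit handling run with the guest
		 * PMU state still loaded, so host events stay inactive
		 * even while this host C code executes.
		 */
		if (vmenter_and_handle_exit(vcpu))	/* hypothetical */
			break;		/* need to return to userspace */
	}

	perf_guest_pmu_put(vcpu);	/* host perf events resume here */
	return 0;
}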
> > Re-reading parts of that linked thread, I see mention of
> > PT_MODE_HOST_GUEST -- see I knew we had something there, but I can never
> > remember all that nonsense. Worst part is that I can't find the relevant
> > perf code when I grep for that string :/
>
> The PT stuff is actually an example of what we don't want, at least not exactly.
> The concept of a hard switch between guest and host is ok, but as-is, KVM's PT
> code does a big pile of MSR reads and writes on every VM-Enter and VM-Exit.
>
> > Anyway, what I don't like is KVM silently changing all events to
> > ::exclude_guest=1. I would like all (pre-existing) ::exclude_guest=0
> > events to hard error when they run into a vCPU with pass-through on
> > (PERF_EVENT_STATE_ERROR). I would like event-creation to error out on
> > ::exclude_guest=0 events when a vCPU with pass-through exists -- with
> > minimal scope (this probably means all CPU events, but only relevant
> > vCPU events).
>
> Agreed, I am definitely against KVM silently doing anything. And the more that
> is surfaced to the user, the better.
>
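To make sure I follow, the semantics being asked for would look roughly
like this sketch (the helper names are made up, this is not existing perf
code):

/* At event creation: refuse new exclude_guest=0 events in scope. */
static int perf_check_passthrough_conflict(struct perf_event *event)
{
	if (event->attr.exclude_guest)
		return 0;

	if (passthrough_vcpu_in_scope(event))	/* hypothetical */
		return -EBUSY;

	return 0;
}

/* At schedule time: pre-existing exclude_guest=0 events hit the error state. */
static void perf_event_mark_guest_conflict(struct perf_event *event)
{
	event->state = PERF_EVENT_STATE_ERROR;
}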
> > It also means ::exclude_guest should actually work -- it often does not
> > today -- the IBS thing for example totally ignores it.
>
> Is that already in-tree, or are you talking about Manali's proposed series to
> support virtualizing IBS?
>
> > Anyway, none of this means host cannot use PMU because virt muck wants
> > it.