Re: [PATCH v1 0/5] KVM in-guest performance monitoring
From: Joerg Roedel
Date: Thu May 12 2011 - 10:24:55 EST
On Thu, May 12, 2011 at 04:31:38PM +0300, Avi Kivity wrote:
> - when the cpu gains support for virtualizing the architectural feature,
> we transparently speed the guest up, including support for live
> migrating from a deployment that emulates the feature to a deployment
> that properly virtualizes the feature, and back. Usually the
> virtualized support will beat the pants off any paravirtualization we can
> do
> - following an existing spec is a lot easier to get right than doing
> something from scratch
> - no need to meticulously document the feature
Documentation still needs to be done, but I don't think that is problematic.
> - easier testing
Testing shouldn't differ between the two variants, I think.
> - existing guest support - only need to write the host side (sometimes
> the only one available to us)
Otherwise I agree.
> Paravirtualizing does have its advantages. For the PMU, for example, we
> can have a single hypercall read and reprogram all counters, saving
> *many* exits. But I think we need to start from the architectural PMU
> and see exactly what the problems are, before we optimize it to death.
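As a rough illustration of such a batched interface (everything below is
made up for the sake of the example -- KVM_HC_PMU_SYNC and struct
pv_pmu_state are not an existing ABI), the guest could fill a shared
structure and flush it with a single hypercall instead of taking one MSR
exit per register:

#include <linux/types.h>
#include <asm/page.h>
#include <asm/kvm_para.h>

#define PV_PMU_MAX_COUNTERS	8
#define KVM_HC_PMU_SYNC		42	/* made-up hypercall number */

struct pv_pmu_state {
	u64 eventsel[PV_PMU_MAX_COUNTERS];	/* event select values    */
	u64 counter[PV_PMU_MAX_COUNTERS];	/* initial counter values */
	u64 global_ctrl;			/* global enable mask     */
};

/* one exit reprograms all counters for the incoming task */
static void pv_pmu_sync(struct pv_pmu_state *state)
{
	kvm_hypercall1(KVM_HC_PMU_SYNC, __pa(state));
}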
The problem is certainly that with the architectural PMU we add a lot of
MSR exits to the guest's context-switch path if the guest uses per-task
profiling. Depending on the workload, this can significantly distort the
profiling results.
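To illustrate where those exits come from, here is a grossly simplified
sketch (not the real perf code) of what per-task profiling has to do on
every context switch with the architectural PMU; with an emulated PMU
every one of these MSR writes traps to the host:

#include <linux/types.h>
#include <asm/msr.h>
#include <asm/msr-index.h>

static void switch_in_counters(const u64 *eventsel, const u64 *count, int n)
{
	int i;

	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);			/* exit */
	for (i = 0; i < n; i++) {
		wrmsrl(MSR_P6_EVNTSEL0 + i, eventsel[i]);	/* exit */
		wrmsrl(MSR_P6_PERFCTR0 + i, count[i]);		/* exit */
	}
	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, (1ULL << n) - 1);	/* exit */
}

That is 2*n + 2 exits per task switch before the task has executed a
single instruction, which is exactly the overhead a batched interface
would avoid.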
Joerg