Re: [PATCH v2 1/3] xen/pv: allow pmu msr accesses to cause GP

From: Juergen Gross
Date: Tue Oct 04 2022 - 11:22:49 EST


On 04.10.22 12:58, Jan Beulich wrote:
> On 04.10.2022 10:43, Juergen Gross wrote:
>> Today pmu_msr_read() and pmu_msr_write() fall back to the safe variants
>> of read/write MSR in case the MSR access isn't emulated via Xen. Allow
>> the caller to select the potentially faulting variant by passing NULL
>> for the error pointer.
>>
>> Restructure the code to make it more readable.
>>
>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>

> I think the title (and to some degree also the description) is misleading:
> The property we care about here isn't whether an MSR access would raise
> #GP (we can't control that), but whether that #GP would be recovered from.

Would you be fine with adding "fatal" or "visible" to the title?
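
For reference, here is a minimal sketch of the calling convention the
description is about. It is illustrative only: pmu_msr_emulated() and the
_sketch name are made up, and only the "NULL err selects the faulting
variant" dispatch reflects the patch; native_read_msr{,_safe}() are the
regular <asm/msr.h> accessors.

/*
 * Sketch only, not the posted patch: if the MSR isn't emulated by Xen,
 * a NULL err pointer selects the potentially faulting access, while a
 * non-NULL err pointer selects the safe variant and reports the error.
 */
static u64 pmu_msr_read_sketch(unsigned int msr, int *err)
{
	u64 val;

	if (pmu_msr_emulated(msr, &val))	/* hypothetical Xen emulation check */
		return val;

	if (err)
		return native_read_msr_safe(msr, err);	/* #GP is recovered from */

	return native_read_msr(msr);	/* a #GP is not fixed up here */
}

A caller that can tolerate (or deliberately wants to see) the #GP simply
passes a NULL err pointer and skips the error check.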


>> --- a/arch/x86/xen/pmu.c
>> +++ b/arch/x86/xen/pmu.c
>> @@ -131,6 +131,9 @@ static inline uint32_t get_fam15h_addr(u32 addr)
>>  static inline bool is_amd_pmu_msr(unsigned int msr)
>>  {
>> +	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
>> +		return false;
>> +

> I understand this and ...

>> @@ -144,6 +147,9 @@ static int is_intel_pmu_msr(u32 msr_index, int *type, int *index)
>>  {
>>  	u32 msr_index_pmc;
>>
>> +	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
>> +		return false;
>> +

> ... this matches prior behavior, but may I suggest that while moving
> these here you at least accompany them with a comment clarifying that
> they aren't really correct? We'd come closer if is_amd_pmu_msr()
> accepted AMD and Hygon, while is_intel_pmu_msr() may want to accept
> Intel and Centaur (but I understand this would be largely orthogonal,
> hence the suggestion towards comments). In the hypervisor we kind of
> also support Shanghai, but I wonder whether we wouldn't be better off
> ripping out that code as unmaintained.

Maybe the correct thing to do would be to add another patch to fix
is_*_pmu_msr() along the lines you are suggesting.
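
Something along those lines, purely as a sketch (the helper names are made
up, and whether Centaur/Zhaoxin should be treated as "Intel-like" is exactly
the open question above):

/*
 * Sketch for a possible follow-up patch, not part of this series.
 * boot_cpu_data and the X86_VENDOR_* constants come from <asm/processor.h>.
 */
static inline bool pmu_msr_vendor_amd(void)	/* hypothetical helper */
{
	return boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
	       boot_cpu_data.x86_vendor == X86_VENDOR_HYGON;
}

static inline bool pmu_msr_vendor_intel(void)	/* hypothetical helper */
{
	return boot_cpu_data.x86_vendor == X86_VENDOR_INTEL ||
	       boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR;
}

is_amd_pmu_msr() and is_intel_pmu_msr() could then use these helpers instead
of testing for X86_VENDOR_INTEL only.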


Juergen
