On 04.10.2022 10:43, Juergen Gross wrote:
> Today pmu_msr_read() and pmu_msr_write() fall back to the safe variants
> of read/write MSR in case the MSR access isn't emulated via Xen. Allow
> the caller to select the potentially faulting variant by passing NULL
> for the error pointer.
>
> Restructure the code to make it more readable.
>
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
I think the title (and to some degree also the description) is misleading:
The property we care about here isn't whether an MSR access would raise
#GP (we can't control that), but whether that #GP would be recovered from.
> @@ -131,6 +131,9 @@ static inline uint32_t get_fam15h_addr(u32 addr)
>  static inline bool is_amd_pmu_msr(unsigned int msr)
> +	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
> +		return false;
I understand this and ...
> @@ -144,6 +147,9 @@ static int is_intel_pmu_msr(u32 msr_index, int *type, int *index)
> +	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
> +		return false;
... this matches prior behavior, but may I suggest that while moving
these here you at least accompany them with a comment clarifying that
these aren't really correct? We'd come closer if is_amd_pmu_msr()
accepted AMD and Hygon, while is_intel_pmu_msr() may want to accept
Intel and Centaur (but I understand this would be largely orthogonal,
hence the suggestion towards comments). In the hypervisor we kind of
also support Shanghai, but I wonder whether we wouldn't better rip
out that code as unmaintained.