Re: [PATCH v6 1/7] KVM: x86: Deflect unknown MSR accesses to user space

From: Alexander Graf
Date: Wed Sep 16 2020 - 15:22:36 EST




On 16.09.20 19:08, Sean Christopherson wrote:

> On Wed, Sep 16, 2020 at 11:31:30AM +0200, Alexander Graf wrote:
>> On 03.09.20 21:27, Aaron Lewis wrote:
>>> @@ -412,6 +414,15 @@ struct kvm_run {
>>>  			__u64 esr_iss;
>>>  			__u64 fault_ipa;
>>>  		} arm_nisv;
>>> +		/* KVM_EXIT_X86_RDMSR / KVM_EXIT_X86_WRMSR */
>>> +		struct {
>>> +			__u8 error; /* user -> kernel */
>>> +			__u8 pad[3];

>>> __u8 pad[7] to maintain 8 byte alignment? unless we can get away
>>> with fewer bits for 'reason' and get them from 'pad'.

>> Why would we need 8-byte alignment here? I always thought natural
>> u64 alignment on x86_64 was 4 bytes?

> u64 will usually (always?) be 8-byte aligned by the compiler. "Natural"
> alignment means an object is aligned to its size. E.g. an 8-byte object
> can split a cache line if it's only aligned on a 4-byte boundary.

For some reason I always thought that x86_64 had a special hack that
allowed u64s to be "naturally" aligned on a 32-bit boundary. But I just
double-checked what you said and indeed, gcc does pad it to an actual
natural boundary.
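
Roughly the check I did, as a minimal userspace sketch (the 'reason',
'index' and 'data' fields here are assumptions based on the rest of the
patch; only 'error' and 'pad' appear in the hunk above):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the v6 layout above: 1 byte of error + 3 bytes of pad. */
struct msr_exit {
	uint8_t  error;
	uint8_t  pad[3];
	uint32_t reason;	/* assumed, from the review comment */
	uint32_t index;		/* assumed, for illustration */
	uint64_t data;		/* assumed, for illustration */
};

int main(void)
{
	/* 'index' ends at offset 12, but gcc bumps the u64 up to offset
	 * 16, i.e. it inserts 4 invisible padding bytes before 'data'. */
	printf("data at %zu, sizeof %zu\n",
	       offsetof(struct msr_exit, data),
	       sizeof(struct msr_exit));	/* data at 16, sizeof 24 */
	return 0;
}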

You never stop learning :).

In that case, it absolutely makes sense to make the padding explicit (and pull it earlier)!
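
A sketch of what I mean by pulling the padding earlier (same assumed
field set as above, not the final patch): error + pad[7] fill the first
8 bytes by hand, so every byte of the layout is visible in the ABI and
the overall size doesn't change.

#include <stddef.h>
#include <stdint.h>

struct msr_exit {
	uint8_t  error;		/* user -> kernel */
	uint8_t  pad[7];
	uint32_t reason;	/* assumed */
	uint32_t index;		/* assumed */
	uint64_t data;		/* assumed */
};

/* Same offsets and size as with pad[3]; the difference is that the
 * padding is now explicit instead of hidden in front of 'data'. */
_Static_assert(offsetof(struct msr_exit, data) == 16, "data 8-byte aligned");
_Static_assert(sizeof(struct msr_exit) == 24, "no hidden padding");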


Alex




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879