[PATCH 3/9] x86/virt: Mark flags and memory as clobbered by VMXOFF

From: Sean Christopherson
Date: Wed Dec 30 2020 - 19:28:48 EST


From: David P. Reed <dpreed@xxxxxxxxxxxx>

Explicitly tell the compiler that VMXOFF modifies flags (like all VMX
instructions), and mark memory as clobbered since VMXOFF must not be
reordered and also may have memory side effects (though the kernel
really shouldn't be accessing the root VMCS anyway).

Practically speaking, adding the clobbers is most likely a nop; the
primary motivation is to properly document VMXOFF's behavior.

For the flags clobber, both Clang and GCC automatically treat flags as
clobbered by inline asm; this is noted in commit 4b1e54786e48 ("KVM/x86:
Use assembly instruction mnemonics instead of .byte streams"), which
intentionally removed the previous clobber. But neither Clang nor GCC
documents this behavior, and there's no downside to stating the clobber
explicitly.

For the memory clobber, the RFLAGS.IF and CR4.VMXE manipulations that
immediately follow VMXOFF have compiler barriers of their own, i.e.
VMXOFF can't get reordered after clearing CR4.VMXE, which is really
what's of interest.

Cc: Randy Dunlap <rdunlap@xxxxxxxxxxxxx>
Signed-off-by: David P. Reed <dpreed@xxxxxxxxxxxx>
[sean: rewrote changelog, dropped comment adjustments]
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
---
arch/x86/include/asm/virtext.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index fda3e7747c22..2cc585467667 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -44,7 +44,8 @@ static inline int cpu_has_vmx(void)
static inline void cpu_vmxoff(void)
{
asm_volatile_goto("1: vmxoff\n\t"
- _ASM_EXTABLE(1b, %l[fault]) :::: fault);
+ _ASM_EXTABLE(1b, %l[fault])
+ ::: "cc", "memory" : fault);
fault:
cr4_clear_bits(X86_CR4_VMXE);
}
--
2.29.2.729.g45daf8777d-goog