Re: [PATCH v10 1/8] x86/vmware: Introduce VMware hypercall API

From: Alexey Makhalov
Date: Wed May 29 2024 - 20:44:59 EST

On 5/27/24 10:07 AM, Borislav Petkov wrote:
On Thu, May 23, 2024 at 12:14:39PM -0700, Alexey Makhalov wrote:
+#define VMWARE_HYPERCALL \
+ ALTERNATIVE_3("", \
+ "jmp .Lport_call%=", X86_FEATURE_HYPERVISOR, \
+ "jmp .Lvmcall%=", X86_FEATURE_VMCALL, \
+ "vmmcall\n\t" \
+ "jmp .Lend%=", X86_FEATURE_VMW_VMMCALL) \
+ "cmpb $" \
+ __stringify(CPUID_VMWARE_FEATURES_ECX_VMMCALL) \
+ ", %[mode]\n\t" \
+ "jg .Lvmcall%=\n\t" \
+ "je .Lvmmcall%=\n\t" \
+ ".Lport_call%=: movw %[port], %%dx\n\t" \
+ "inl (%%dx), %%eax\n\t" \
+ "jmp .Lend%=\n\t" \
+ ".Lvmmcall%=: vmmcall\n\t" \
+ "jmp .Lend%=\n\t" \
+ ".Lvmcall%=: vmcall\n\t" \
+ ".Lend%=:"

So applied (and with minor fixups for the proper indentation, see end of
this mail) this looks like this:

.pushsection .altinstructions,"a"
.long 661b - .
.long 6641f - .
.4byte ( 4*32+31)
.byte 663b-661b
.byte 6651f-6641f
.long 661b - .
.long 6642f - .
.4byte ( 8*32+18)
.byte 663b-661b
.byte 6652f-6642f
.long 661b - .
.long 6643f - .
.4byte ( 8*32+19)
.byte 663b-661b
.byte 6653f-6643f
.popsection
.pushsection .altinstr_replacement, "ax"
# ALT: replacement 1
6641:
jmp .Lport_call72
6651:
# ALT: replacement 2
6642:
jmp .Lvmcall72
6652:
# ALT: replacement 3
6643:
vmmcall
jmp .Lend72
6653:
.popsection
cmpb $((((1UL))) << (0)), vmware_hypercall_mode(%rip) # vmware_hypercall_mode
jg .Lvmcall72
je .Lvmmcall72
.Lport_call72:
movw $22104, %dx #
inl (%dx), %eax
jmp .Lend72
.Lvmmcall72:
vmmcall
jmp .Lend72
.Lvmcall72:
vmcall
.Lend72:

---

so AFAICT, you want three things:

1. X86_FEATURE_HYPERVISOR - that is always set when running as a guest.
For that it should do:

movw $22104, %dx #
inl (%dx), %eax

2. X86_FEATURE_VMCALL:

vmcall

3. X86_FEATURE_VMW_VMMCALL:

vmmcall

So why don't you simply do that?

vmware_set_capabilities() sets vmware_hypercall_mode *and* those feature
flags at the same time.

And you either support VMCALL or VMMCALL so the first thing should be the
fallback for some ancient crap.

IOW, your hypercall alternative should simply be:

ALTERNATIVE_2("vmcall", "vmmcall", X86_FEATURE_VMW_VMMCALL, "movw %[port], %%dx; inl (%%dx), %%eax", X86_FEATURE_HYPERVISOR);

without any more silly dance?
While most of the vmware_hypercall callers are executed after alternative patching has been applied, a small number of hypercalls run before that.
Only for them do we keep the logic of examining vmware_hypercall_mode in the default alternative code (a rough C sketch of that path follows the list below). And there are 2 constraints:
1. vmcall/vmmcall are not supported by old ESXi/Workstation/Fusion, so we have to use in/out instructions there. After the end of support for those old hypervisors, the alternative can be simplified as follows:
ALTERNATIVE("vmcall", "vmmcall", X86_FEATURE_VMW_VMMCALL);
2. SEV-ES enabled VMs must use _only_ vmcall/vmmcall, since in/out instructions cause faults there.
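
For illustration, a rough C-level sketch of what that pre-patching default path boils down to. vmware_hypercall_pre_alt() is a made-up name for this mail only; vmware_hypercall_mode, VMWARE_HYPERVISOR_MAGIC/PORT and the CPUID_VMWARE_FEATURES_ECX_* values are the ones from vmware.h. The real code keeps everything in inline asm, but the dispatch is the same:

static unsigned long vmware_hypercall_pre_alt(unsigned long cmd,
					      unsigned long in1)
{
	unsigned long out0 = VMWARE_HYPERVISOR_MAGIC;	/* %eax in, result out */

	if (vmware_hypercall_mode == CPUID_VMWARE_FEATURES_ECX_VMCALL)
		asm volatile("vmcall"			/* Intel VMX */
			     : "+a" (out0)
			     : "b" (in1), "c" (cmd), "d" (0)
			     : "cc", "memory");
	else if (vmware_hypercall_mode == CPUID_VMWARE_FEATURES_ECX_VMMCALL)
		asm volatile("vmmcall"			/* AMD SVM, incl. SEV-ES */
			     : "+a" (out0)
			     : "b" (in1), "c" (cmd), "d" (0)
			     : "cc", "memory");
	else
		asm volatile("inl (%%dx), %%eax"	/* legacy I/O port call */
			     : "+a" (out0)
			     : "b" (in1), "c" (cmd),
			       "d" (VMWARE_HYPERVISOR_PORT)
			     : "cc", "memory");

	return out0;
}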

Another approach that we discussed internally was to use
ALTERNATIVE_2("movw %[port], %%dx; inl (%%dx), %%eax", "vmcall", X86_FEATURE_VMCALL, "vmmcall", X86_FEATURE_VMW_VMMCALL)
for the vmware_hypercallX family of functions, _and_ to have a separate API,
vmware_sev_hypercallX, with the silly dance and no alternative inside, to be used only by early boot code before alternatives are applied. But that is error prone when it comes to boot-time code movements or rearrangements, as it puts an additional requirement of SEV-ES understanding/testing on VMware guest code.
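
Roughly, that rejected split would have looked like this (just a sketch from the discussion, never posted as a patch):

#define VMWARE_HYPERCALL					\
	ALTERNATIVE_2("movw %[port], %%dx\n\t"			\
		      "inl (%%dx), %%eax",			\
		      "vmcall", X86_FEATURE_VMCALL,		\
		      "vmmcall", X86_FEATURE_VMW_VMMCALL)

plus a vmware_sev_hypercallX() family that keeps the runtime vmware_hypercall_mode dispatch (as sketched earlier) for the few early boot callers.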

So, we picked the safe solution until the deprecation of port-based hypercalls mentioned above.


See also commit bac7b4e843232 ("x86/vmware: Update platform detection code for VMCALL/VMMCALL hypercalls"), where the silly dance was introduced with the VMWARE_CMD macro.
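
The dance there was a runtime switch on vmware_hypercall_mode, roughly like this (paraphrased from memory, see the commit for the exact macro):

#define VMWARE_CMD(cmd, eax, ebx, ecx, edx)			\
	switch (vmware_hypercall_mode) {			\
	case CPUID_VMWARE_FEATURES_ECX_VMCALL:			\
		VMWARE_VMCALL(cmd, eax, ebx, ecx, edx);		\
		break;						\
	case CPUID_VMWARE_FEATURES_ECX_VMMCALL:		\
		VMWARE_VMMCALL(cmd, eax, ebx, ecx, edx);	\
		break;						\
	default:						\
		VMWARE_PORT(cmd, eax, ebx, ecx, edx);		\
		break;						\
	}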


Hmmm?

---

Fixup indentation for proper .s output:

diff --git a/arch/x86/include/asm/vmware.h b/arch/x86/include/asm/vmware.h
index 5114f4c75c54..8be877d8bb7c 100644
--- a/arch/x86/include/asm/vmware.h
+++ b/arch/x86/include/asm/vmware.h
@@ -70,17 +70,18 @@ extern u8 vmware_hypercall_mode;
"jmp .Lvmcall%=", X86_FEATURE_VMCALL, \
"vmmcall\n\t" \
"jmp .Lend%=", X86_FEATURE_VMW_VMMCALL) \
- "cmpb $" \
- __stringify(CPUID_VMWARE_FEATURES_ECX_VMMCALL) \
- ", %[mode]\n\t" \
+ "\tcmpb $" __stringify(CPUID_VMWARE_FEATURES_ECX_VMMCALL) ", %[mode]\n\t" \
Noted the \t prefix before cmpb, but I will keep the original 3 lines to fit within the 80-column limit.

"jg .Lvmcall%=\n\t" \
- "je .Lvmmcall%=\n\t" \
- ".Lport_call%=: movw %[port], %%dx\n\t" \
+ "je .Lvmmcall%=\n" \
+ ".Lport_call%=:\n\t" \
+ "movw %[port], %%dx\n\t" \
Noted putting the labels on separate lines.
"inl (%%dx), %%eax\n\t" \
- "jmp .Lend%=\n\t" \
- ".Lvmmcall%=: vmmcall\n\t" \
- "jmp .Lend%=\n\t" \
- ".Lvmcall%=: vmcall\n\t" \
+ "jmp .Lend%=\n" \
+ ".Lvmmcall%=:\n\t" \
+ "vmmcall\n\t" \
+ "jmp .Lend%=\n" \
+ ".Lvmcall%=:\n\t" \
+ "vmcall\n" \
".Lend%=:"
static inline


Best regards,
--Alexey