Re: [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support

From: David Riley

Date: Tue Apr 07 2026 - 07:57:22 EST


Hi Paolo, Jon,

Thank you for your work on this!

I have tested this patchset (v2) on both Intel and AMD platforms using
Proxmox VE (based on Debian Trixie) with a Windows Server
(24H2, Build 26100.1742) guest.
The tests focused on the performance improvements when using
Virtualization-Based Security (VBS) and Hypervisor-Protected Code
Integrity (HVCI).

Environments

Intel Node:
  CPU: Intel(R) Xeon(R) Gold 6426Y
  Kernel: mainline 7.0.0-rc5 (with MBEC/GMET patches applied)
  QEMU: our downstream QEMU build based on 10.2.1, plus Jon's patches
  QEMU CPU option: -cpu host,level=30,+/-vmx-mbec,hv_relaxed,hv_reset,
    hv_runtime,hv_time,hv_spinlocks=0x1fff,hv_vapic,hv_vpindex,hv_ipi,
    hv_synic,hv_stimer,hv_apicv,hv_xmm_input,hv_stimer_direct,
    hv_frequencies,hv_reenlightenment,hv_evmcs,hv_emsr_bitmap,
    hv_tlbflush,hv_tlbflush_ext,hv_tlbflush_direct
  virtio-win: 0.1.271

AMD Node:
  CPU: AMD EPYC 7302P
  Kernel: mainline 7.0.0-rc5 (with MBEC/GMET patches applied)
  QEMU: our downstream QEMU build based on 10.2.1, plus Jon's patches
  QEMU CPU option: -cpu host,level=30,+/-gmet,hv_relaxed,hv_reset,
    hv_runtime,hv_time,hv_spinlocks=0x1fff,hv_vapic,hv_vpindex,hv_ipi,
    hv_synic,hv_stimer,hv_apicv,hv_xmm_input,hv_stimer_direct,
    hv_frequencies,hv_reenlightenment,hv_emsr_bitmap,hv_tlbflush,
    hv_tlbflush_ext,hv_tlbflush_direct
  virtio-win: 0.1.285

QEMU Patches: I manually integrated Jon Kohler's QEMU patch series [0]
(Patches 1-3) to enable feature exposure to the guest.

In the guest, the Win32_DeviceGuard output confirms MBEC/GMET
availability via the presence of value 7 in
AvailableSecurityProperties [1].
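
For reference, the guest-side check can be done from an elevated
PowerShell prompt roughly as follows (the exact Select-Object
projection is just one way to read the property; value 7 corresponds
to Mode Based Execution Control per the documentation in [1]):

Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard `
    -ClassName Win32_DeviceGuard |
    Select-Object -ExpandProperty AvailableSecurityProperties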

Methodology

I enabled VBS and Memory Integrity (HVCI) via Group Policy within
the guest. I collected VM-exit samples using perf kvm stat on the
host while running a small script inside the guest for 25 seconds,
intended to provoke high-frequency mode transitions [2] ("Load" in
the table below). I repeated the same measurements without the
script running to capture the idle state ("Idle" in the table
below).
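
For completeness, the host-side collection looked roughly like the
following; the pidof lookup is an example and assumes a single QEMU
process per node, adjust the PID selection as needed:

# Record VM-exit events for the guest's QEMU process for 25 seconds
perf kvm stat record -p "$(pidof qemu-system-x86_64)" sleep 25

# Summarize exit reasons and sample counts from perf.data.guest
perf kvm stat report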

Results

I compared the VM-exit volume by running the workload with the feature
flags (+vmx-mbec / +gmet) enabled vs. disabled. The table below shows
the average number of samples (total count of VM-exit events) across
the 3 runs for each mode (idle/load).

Platform  Mode  Avg Exits Disabled  Avg Exits Enabled  Reduction
Intel     Idle              631722             189460     70.01%
Intel     Load            10020824             775194     92.26%
AMD       Idle              249200             118042     52.63%
AMD       Load              425217             306945     27.81%

Observations

Intel MBEC:
The reduction in VM-exits is massive (over 92% under load).

AMD GMET:
There is a measurable benefit (~28% reduction under load), though it
is less pronounced than on Intel. It is worth noting, however, that
the AMD platform maintains a much lower baseline of VM-exits even
with the feature disabled, suggesting more efficient native handling
of these transitions compared to the Intel baseline in this specific
test environment.

On both architectures, I did not notice any stability issues with the
patches applied.

Let me know if there are more specific tests you're interested in.

Tested-by: David Riley <d.riley@xxxxxxxxxxx>

[0] https://lore.kernel.org/qemu-devel/20260330193428.1663253-2-jon@xxxxxxxxxxx/
[1] https://learn.microsoft.com/en-us/windows/security/hardware-security/enable-virtualization-based-protection-of-code-integrity?tabs=security
[2]

$Duration = 25
$StopWatch = [System.Diagnostics.Stopwatch]::StartNew()

# Repeatedly compile and invoke freshly created script blocks. With
# HVCI enabled, each newly generated code region must be validated by
# the secure kernel, provoking frequent mode transitions.
while ($StopWatch.Elapsed.TotalSeconds -lt $Duration) {
    for ($i = 0; $i -lt 100; $i++) {
        $Value = Get-Random -Maximum 1000
        $Code = [scriptblock]::Create("($Value * 2) / 2")
        $Code.Invoke() | Out-Null
    }
    # Short pause between bursts to keep the guest responsive
    Start-Sleep -Milliseconds 20
}