Re: [PATCH v2 2/7] x86/sev: add support for enabling RMPOPT
From: Dave Hansen
Date: Mon Mar 02 2026 - 17:34:19 EST
On 3/2/26 13:35, Ashish Kalra wrote:
> The new RMPOPT instruction sets bits in a per-CPU RMPOPT table, which
> indicates whether specific 1GB physical memory regions contain SEV-SNP
> guest memory.
Honestly, this is an implementation detail that we don't need to know
about in the kernel. It's also not even factually correct. The
instruction _might_ not set any bits, either because there is no SEV-SNP
memory or because it's being run in query mode.
The new RMPOPT instruction helps manage per-CPU RMP optimization
structures inside the CPU. It takes a 1GB-aligned physical
address and either returns the status of those optimizations or
tries to enable them.
> Per-CPU RMPOPT tables support at most 2 TB of addressable memory for
> RMP optimizations.
>
> Initialize the per-CPU RMPOPT table base to the starting physical
> address. This enables RMP optimization for up to 2 TB of system RAM on
> all CPUs.
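Just to double-check my reading of the coverage math here (spelling this
out myself, it's not stated in the changelog):

	2TB / 1GB per region = 2048 regions tracked per CPU table

so, IIUC, the base just picks where that 2TB window starts.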
The rest looks good.
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index da5275d8eda6..8e7da03abd5b 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -753,6 +753,9 @@
> #define MSR_AMD64_SEG_RMP_ENABLED_BIT 0
> #define MSR_AMD64_SEG_RMP_ENABLED BIT_ULL(MSR_AMD64_SEG_RMP_ENABLED_BIT)
> #define MSR_AMD64_RMP_SEGMENT_SHIFT(x) (((x) & GENMASK_ULL(13, 8)) >> 8)
> +#define MSR_AMD64_RMPOPT_BASE 0xc0010139
> +#define MSR_AMD64_RMPOPT_ENABLE_BIT 0
> +#define MSR_AMD64_RMPOPT_ENABLE BIT_ULL(MSR_AMD64_RMPOPT_ENABLE_BIT)
>
> #define MSR_SVSM_CAA 0xc001f000
>
> diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
> index a4f3a364fb65..405199c2f563 100644
> --- a/arch/x86/virt/svm/sev.c
> +++ b/arch/x86/virt/svm/sev.c
> @@ -500,6 +500,41 @@ static bool __init setup_rmptable(void)
> }
> }
>
> +static void __configure_rmpopt(void *val)
> +{
> + u64 rmpopt_base = ((u64)val & PUD_MASK) | MSR_AMD64_RMPOPT_ENABLE;
> +
> + wrmsrq(MSR_AMD64_RMPOPT_BASE, rmpopt_base);
> +}
> +
> +static __init void configure_and_enable_rmpopt(void)
> +{
> + phys_addr_t pa_start = ALIGN_DOWN(PFN_PHYS(min_low_pfn), PUD_SIZE);
> +
> + if (!cpu_feature_enabled(X86_FEATURE_RMPOPT)) {
> + pr_debug("RMPOPT not supported on this platform\n");
> + return;
> + }
> +
> + if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP)) {
> + pr_debug("RMPOPT optimizations not enabled as SNP support is not enabled\n");
> + return;
> + }
To be honest, I think those two are just plain noise ^^.
> + if (!(rmp_cfg & MSR_AMD64_SEG_RMP_ENABLED)) {
> + pr_info("RMPOPT optimizations not enabled, segmented RMP required\n");
> + return;
> + }
> +
> + /*
> + * Per-CPU RMPOPT tables support at most 2 TB of addressable memory for RMP optimizations.
> + *
> + * Set per-core RMPOPT base to min_low_pfn to enable RMP optimization for
> + * up to 2TB of system RAM on all CPUs.
> + */
Please at least be consistent with your comments. This is both over 80
columns *and* not even consistent between its two sentences ("2 TB" in
the first, "2TB" in the second).
> + on_each_cpu_mask(cpu_online_mask, __configure_rmpopt, (void *)pa_start, true);
> +}
What's wrong with:
	u64 rmpopt_base = pa_start | MSR_AMD64_RMPOPT_ENABLE;

	...

	for_each_online_cpu(cpu)
		wrmsrq_on_cpu(cpu, MSR_AMD64_RMPOPT_BASE, rmpopt_base);
Then there's at least no ugly casting.
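If it helps, here's roughly how I'd picture the whole function with that
change (completely untested, assuming wrmsrq_on_cpu() is available in
your tree, and with the feature/segmented-RMP checks elided):

	static __init void configure_and_enable_rmpopt(void)
	{
		/* Lowest 1GB-aligned physical address, same as in your patch: */
		phys_addr_t pa_start = ALIGN_DOWN(PFN_PHYS(min_low_pfn), PUD_SIZE);
		u64 rmpopt_base = pa_start | MSR_AMD64_RMPOPT_ENABLE;
		int cpu;

		/* feature / segmented-RMP checks go here */
		...

		/*
		 * Per-CPU RMPOPT tables cover at most 2TB, so point every
		 * CPU's table base at the start of system RAM.
		 */
		for_each_online_cpu(cpu)
			wrmsrq_on_cpu(cpu, MSR_AMD64_RMPOPT_BASE, rmpopt_base);
	}

No on_each_cpu_mask(), no cast, and the alignment is obvious right at
the declaration.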