Re: [PATCH 0/6] Add RMPOPT support.

From: Kalra, Ashish

Date: Wed Feb 18 2026 - 16:09:53 EST


Hello Dave,

On 2/18/2026 11:15 AM, Dave Hansen wrote:
> On 2/18/26 09:03, Kalra, Ashish wrote:
>>> They are known not to contain any SEV-SNP guest memory at the
>>> moment snp_rmptable_init() finishes, no?
>> Yes, but RMP checks are still performed and they affect performance.
>>
>> Testing a bit in the per-CPU RMPOPT table to avoid RMP checks
>> significantly improves performance.
>
> Sorry, Ashish, I don't think I'm explaining myself very well. Let me try
> again, please.
>
> First, my goal here is to ensure that the system as a whole has good
> performance, with minimal kernel code, and in the most common
> configurations.
>
> I would wager that the most common SEV-SNP configuration in the whole
> world is a system that has booted, enabled SEV-SNP, and has never run an
> SEV-SNP guest. If it's not *the* most common, it's certainly going to be
> common enough to care about deeply.
>
> Do you agree?

Yes.

>
> If you agree, I hope we can also agree that a "SNP enabled but never ran
> a guest" state is deserving of good performance with minimal kernel code.
>
> My assumption (which is maybe a bad one) is that there is a natural
> point when SEV-SNP is enabled on the system when the system as a whole
> can easily assert that no SEV-SNP guest has ever run. I'm assuming that
> there is *a* point where, for instance, the RMP table gets atomically
> flipped from being unprotected to being protected. At that point, its
> state *must* be known. It must also be naturally obvious that no guest
> has had a chance to run at this point.
>
> If that point can be leveraged, and the RMPOPT optimization can be
> applied at SEV-SNP enabled time, then an important SEV-SNP configuration
> would be optimized by default and with zero or little kernel code needed
> to drive it.
>
> To me, that seems like a valuable goal.
>
> Do you agree?

Currently, the RMP gets protected at the *same* point where SNP is enabled
and RMP checking starts, and that is also the point at which the RMPOPT
optimizations are enabled with this patch set.

I believe you are talking about the hardware doing this as part of SNP
enablement, but that isn't how it is implemented: it would take too long
(in CPU terms) for a single WRMSR, so that isn't supported.

And if the RMP has been allocated, it means you are going to be running SNP
guests; otherwise you wouldn't have allocated the RMP and enabled SNP in the
BIOS in the first place.

The RMPOPT feature addresses the RMP checks associated with non-SNP guests
and the hypervisor itself. In theory, a cloud provider with good memory
placement for guests can benefit even when launching/running SNP guests.

We can simplify this initial series to just use the RMPOPT feature and enable
RMP optimizations for 0 to 2TB across the system, and then do the optimizations
for larger systems as a follow-on series.

That should address your concerns: the RMPOPT optimizations would be performed
at SEV-SNP enable time, so the important SEV-SNP configuration is optimized by
default with little kernel code needed to drive it.

Thanks,
Ashish