Re: [PATCH 5/6] x86/sev: Use configfs to re-enable RMP optimizations.
From: Kalra, Ashish
Date: Tue Feb 17 2026 - 23:40:14 EST
On 2/17/2026 9:34 PM, Kalra, Ashish wrote:
> Hello Dave,
>
> On 2/17/2026 4:19 PM, Dave Hansen wrote:
>> On 2/17/26 12:11, Ashish Kalra wrote:
>>> From: Ashish Kalra <ashish.kalra@xxxxxxx>
>>>
>>> Use configfs as an interface to re-enable RMP optimizations at runtime.
>>>
>>> When SNP guests are launched, RMPUPDATE disables the corresponding
>>> RMPOPT optimizations. Therefore, an interface is required to manually
>>> re-enable RMP optimizations, as no mechanism currently exists to do so
>>> during SNP guest cleanup.
>>
>> Is this like a proof-of-concept to poke the hardware and show it works?
>> Or, is this intended to be the way that folks actually interact with
>> SEV-SNP optimization in real production scenarios?
>>
>> Shouldn't freeing SEV-SNP memory back to the system do this
>> automatically? Worst case, keep a 1-bit-per-GB bitmap of memory that's
>> been freed and schedule_work() to run in 1 or 10 or 100 seconds. That
>> should batch things up nicely enough. No?
There is also a cost associated with re-enabling the optimizations for all of
system RAM (even though it runs as a background kernel thread executing RMPOPT
on different 1GB regions in parallel, with inline cond_resched() calls),
so we don't want to run this periodically.
With SNP guests running, such a scheduled/periodic run will also conflict with
the RMPUPDATE(s) being executed to assign guest pages and mark them as private.
The hardware does handle such race conditions, where one CPU is executing
RMPOPT on a region while another is changing one of that region's pages to
assigned via RMPUPDATE, but it does so by ensuring that, after the RMPUPDATE
completes, the CPU that issued RMPOPT sees the region as un-optimized — so the
periodic re-optimization work on those regions would simply be undone.
Once 1GB hugetlb support (for guest_memfd) has been merged, however, it will be
straightforward to plumb RMPOPT re-issuing into the 1GB hugetlb cleanup path.
Thanks,
Ashish
>
> Actually, the RMPOPT implementation is going to be a multi-phase effort.
>
> In the first phase (this patch series) we enable RMPOPT globally and let RMPUPDATE(s)
> slowly switch it off over time as SNP guests spin up. Then, in phase #2, once 1GB hugetlb
> is in place, we enable re-issuing of RMPOPT during 1GB page cleanup.
>
> So automatic re-issuing of RMPOPT will be done when SNP guests are shut down, as part of
> SNP guest cleanup, once 1GB hugetlb support (for guest_memfd) has been merged.
>
> Since currently, i.e., as part of this patch series, there is no mechanism to re-issue
> RMPOPT automatically as part of SNP guest cleanup, this support exists to do it
> manually at runtime via configfs.
>
> I will describe this multi-phase RMPOPT implementation plan in the cover letter for the
> next revision of this patch series.
>
>
>>
>> I can't fathom that users don't want this to be done automatically for them.
>>
>> Is the optimization scan really expensive or something? 1GB of memory
>> should have a small number of megabytes of metadata to scan.