Re: [PATCH v2 3/7] x86/sev: add support for RMPOPT instruction

From: Andrew Cooper

Date: Wed Mar 25 2026 - 20:42:46 EST


On 25/03/2026 9:53 pm, Kalra, Ashish wrote:
> On 3/4/2026 9:56 AM, Andrew Cooper wrote:
>> It should be:
>>
>> static inline bool __rmpopt(unsigned long addr, unsigned int fn)
>> {
>>     bool res;
>>
>>     asm volatile (".byte 0xf2, 0x0f, 0x01, 0xfc"
>>                  : "=@ccc" (res)
>>                  : "a" (addr), "c" (fn));
>>
>>     return res;
>> }
>>
> The constraint of using on_each_cpu_mask() is forcing the use of:
>
> void rmpopt(void *val)

No.  You don't break your thin wrapper in order to force it into a
wrong-shaped hole.

You need something like this:

void do_rmpopt_optimise(void *val)
{
    unsigned long addr = *(unsigned long *)val;

    WARN_ON_ONCE(__rmpopt(addr, OPTIMISE));
}

to invoke the wrapper safely from the IPI.  That will at least make it
obvious when something goes wrong.

~Andrew