Re: [PATCH 2/6] x86/sev: add support for enabling RMPOPT
From: Dave Hansen
Date: Wed Feb 18 2026 - 17:56:22 EST
On 2/18/26 14:17, Kalra, Ashish wrote:
> Yes, the Venice platform has the NPS2 configuration enabled by default,
> so we have 'X' nodes per socket and we have to consider this NPSx configuration
> and optimize for those groups.
Why, though?
You keep saying: "We have NPS so we must configure sockets". But not *why*.
I suspect this is another premature optimization. Nodes are a bit too
small, so if you configure per node, the later nodes will have RMPOPT
tables that cover empty address space past the end of system memory.
Honestly, I think this is all just done wrong. It doesn't need to even
consider sockets. Sockets might even be the wrong thing to look at.
Basically, RMPOPT gives you a 2TB window of potentially "fast" memory.
The rest of memory is "slow". If you're lucky, the memory that's fast
because of RMPOPT is also in a low-distance NUMA node.
Sockets are a good thing to use, for sure. But they're not even optimal!
Just imagine what happens if you have more than 2TB in a socket: you
just turn off the per-socket optimization. If that happens, the last
node in the socket ends up with an RMPOPT table that covers its own
memory at the beginning, but probably also a nonzero amount of
off-socket memory at the end.
I'd probably just do something like this:
Given a NUMA node, walk each 1GB-aligned offset in system memory and
compute the average NUMA distance from that node to the 2TB window
starting at that offset. Pick the window with the lowest average
distance. That'll give you a more or less optimal RMPOPT window. It'll
work with NPS or regular NUMA or whatever bonkers future fancy thing
shows up.
But that's all optimization territory. Please squirrel that away to go
look at in 6 months once you get the rest of this merged.