Re: [PATCH] x86/fred: Fix early boot failures on SEV-ES/SNP guests

From: Dave Hansen

Date: Thu Feb 05 2026 - 12:20:33 EST


On 2/5/26 08:10, Dave Hansen wrote:
> Shouldn't we flip the FRED CR4 bit _last_, once all the MSRs are set up?
> Why is it backwards in the first place? Why can't it be fixed?

Ahhh, the bit gets set by CR4 pinning. It's the first thing in C code
when booting secondaries:

static void notrace __noendbr start_secondary(void *unused)
{
	cr4_init();

Since FRED is set in 'cr4_pinned_mask', cr4_init() sets the FRED bit long
before the FRED MSRs are ready. Anything else calling native_write_cr4()
will do the same thing. That's obviously not what was intended by either
the pinning code or the FRED init code.
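To make the ordering problem concrete, here is a minimal userspace sketch of
what the pinning logic does (not kernel code; the names sketch_write_cr4(),
sketch_cr4_init() and the simulated register are illustrative, as is reusing
bit 32 for CR4.FRED):

```c
#include <assert.h>

#define X86_CR4_FRED	(1UL << 32)	/* CR4.FRED lives in bit 32 */

static unsigned long cr4_pinned_mask = X86_CR4_FRED;
static unsigned long cr4_pinned_bits;	/* latched from the boot CPU */
static unsigned long fake_cr4;		/* stands in for the register */

/* Mimics native_write_cr4(): pinned bits get forced back on. */
static void sketch_write_cr4(unsigned long val)
{
	if ((val & cr4_pinned_mask) != cr4_pinned_bits)
		val = (val & ~cr4_pinned_mask) | cr4_pinned_bits;
	fake_cr4 = val;
}

/*
 * Mimics cr4_init() on a secondary: it runs before the FRED MSRs are
 * programmed, yet pinning re-enables the FRED bit anyway.
 */
static void sketch_cr4_init(void)
{
	sketch_write_cr4(0);	/* caller asked for FRED off... */
}
```

Once the boot CPU has latched FRED into the pinned bits, the secondary's
"write 0" still comes back with FRED set, which is exactly the early-boot
failure mode.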

Shouldn't we fix this properly rather than moving printk()'s around?

One idea is just to turn off all the CR-pinning logic while bringing
CPUs up. That way, nothing before:

set_cpu_online(smp_processor_id(), true);

can get tripped up by CR pinning. I've attached a completely untested
patch to do that.

The other thing would be to make pinning actually per-cpu:
'cr4_pinned_bits' could be per-cpu and we'd just keep it empty until the
CPU is actually booted and everything is fully set up.
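The per-cpu variant could look something like this userspace sketch (again
illustrative, not kernel code; SKETCH_NR_CPUS and all function names are made
up, and a real per-cpu variable would replace the plain array):

```c
#include <assert.h>

#define SKETCH_NR_CPUS	4

/* Empty until the CPU finishes bringup, so early writes enforce nothing. */
static unsigned long percpu_cr4_pinned_bits[SKETCH_NR_CPUS];

/* Mimics the pinning enforcement in native_write_cr4(). */
static unsigned long sketch_apply_pinning(int cpu, unsigned long val)
{
	return val | percpu_cr4_pinned_bits[cpu];
}

/* Called once the CPU is fully set up (FRED MSRs programmed, etc.). */
static void sketch_finish_bringup(int cpu, unsigned long pinned)
{
	percpu_cr4_pinned_bits[cpu] = pinned;
}
```

The attraction is that there is no window where a half-initialized CPU can
have bits forced on; the cost is one more per-cpu variable and a hook at the
end of bringup.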

Either way, this is looking like it'll be a bit more than one patch to
do properly.

---

b/arch/x86/kernel/cpu/common.c | 21 ++++++++++++++++++---
1 file changed, 18 insertions(+), 3 deletions(-)

diff -puN arch/x86/kernel/cpu/common.c~no-cr4-pinning-for-offline-cpus arch/x86/kernel/cpu/common.c
--- a/arch/x86/kernel/cpu/common.c~no-cr4-pinning-for-offline-cpus	2026-02-05 08:27:40.743539064 -0800
+++ b/arch/x86/kernel/cpu/common.c	2026-02-05 09:10:08.815134577 -0800
@@ -434,6 +434,21 @@ static const unsigned long cr4_pinned_ma
 static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
 static unsigned long cr4_pinned_bits __ro_after_init;
 
+static bool cr4_pinning_enabled(void)
+{
+	if (!static_branch_likely(&cr_pinning))
+		return false;
+
+	/*
+	 * Do not enforce pinning during CPU bringup. It might
+	 * turn on features that are not set up yet, like FRED.
+	 */
+	if (!cpu_online(smp_processor_id()))
+		return false;
+
+	return true;
+}
+
 void native_write_cr0(unsigned long val)
 {
 	unsigned long bits_missing = 0;
@@ -441,7 +456,7 @@ void native_write_cr0(unsigned long val)
 set_register:
 	asm volatile("mov %0,%%cr0": "+r" (val) : : "memory");
 
-	if (static_branch_likely(&cr_pinning)) {
+	if (cr4_pinning_enabled()) {
 		if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
 			bits_missing = X86_CR0_WP;
 			val |= bits_missing;
@@ -460,7 +475,7 @@ void __no_profile native_write_cr4(unsig
 set_register:
 	asm volatile("mov %0,%%cr4": "+r" (val) : : "memory");
 
-	if (static_branch_likely(&cr_pinning)) {
+	if (cr4_pinning_enabled()) {
 		if (unlikely((val & cr4_pinned_mask) != cr4_pinned_bits)) {
 			bits_changed = (val & cr4_pinned_mask) ^ cr4_pinned_bits;
 			val = (val & ~cr4_pinned_mask) | cr4_pinned_bits;
@@ -502,7 +517,7 @@ void cr4_init(void)
 
 	if (boot_cpu_has(X86_FEATURE_PCID))
 		cr4 |= X86_CR4_PCIDE;
-	if (static_branch_likely(&cr_pinning))
+	if (cr4_pinning_enabled())
 		cr4 = (cr4 & ~cr4_pinned_mask) | cr4_pinned_bits;
 
 	__write_cr4(cr4);
_