[PATCH] x86/cpu: Add comment clarifying CRn pinning
From: Peter Zijlstra
Date: Fri Mar 20 2026 - 05:26:43 EST
Since Boris wanted a nice patch to just press 'apply' on, here goes :-)
---
Subject: x86/cpu: Add comment clarifying CRn pinning
From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Date: Wed, 18 Mar 2026 23:09:39 +0100
To avoid future confusion on the purpose and design of the CRn pinning
code.
Also note that if the attacker controls the page-tables, the CRn bits
lose much of their attraction anyway.
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
arch/x86/kernel/cpu/common.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -434,6 +434,19 @@ static __always_inline void setup_lass(s
/* These bits should not change their value after CPU init is finished. */
static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP |
X86_CR4_FSGSBASE | X86_CR4_CET | X86_CR4_FRED;
+
+/*
+ * The CR pinning protects against ROP on the 'mov %reg, %CRn' instruction(s).
+ * Since you can ROP directly to these instructions (barring shadow stack),
+ * any protection must follow immediately and unconditionally after that.
+ *
+ * Specifically, the CR[04] write functions below will have the value
+ * validation controlled by the @cr_pinning static_branch which is
+ * __ro_after_init, just like the cr4_pinned_bits value.
+ *
+ * Once set, an attacker will have to defeat page-tables to get around these
+ * restrictions. Which is a much bigger ask than 'simple' ROP.
+ */
static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
static unsigned long cr4_pinned_bits __ro_after_init;