[PATCH v3 2/7] arm64/runtime-const: Use aarch64_insn_patch_text_nosync() for patching

From: K Prateek Nayak

Date: Thu Apr 02 2026 - 07:33:52 EST


The current scheme to directly patch the kernel text for runtime
constants runs into the following issue with futex adapted to using
runtime constants on arm64:

Unable to handle kernel write to read-only memory at virtual address ...

The pc points to the *p assignment in the following call chain:

  futex_init()
    runtime_const_init(shift, __futex_shift)
      __runtime_fixup_shift()
        *p = cpu_to_le32(insn);

which suggests that core_initcall() is too late to patch the kernel text
directly, unlike "d_hash_shift", which is initialized during
vfs_caches_init_early() before the kernel text protections are in place.

Use aarch64_insn_patch_text_nosync() to patch the runtime constants
instead of writing the kernel text directly, which allows
runtime_const_init() to run slightly later in the boot.

Since aarch64_insn_patch_text_nosync() calls caches_clean_inval_pou()
internally, __runtime_fixup_caches() becomes redundant and can be
dropped. Calls to runtime_const_init() are rare, so the overhead of
issuing multiple caches_clean_inval_pou() calls instead of batching
them together should be negligible in practice.

The cpu_to_le32() conversion of the instruction is no longer necessary
since it is handled later in the aarch64_insn_patch_text_nosync()
call chain:

  aarch64_insn_patch_text_nosync(addr, insn)
    aarch64_insn_write(addr, insn)
      __aarch64_insn_write(addr, cpu_to_le32(insn))

Signed-off-by: K Prateek Nayak <kprateek.nayak@xxxxxxx>
---
Changelog v2..v3:

o Reordered this to come before the introduction of
runtime_const_mask_32(). (David)

o Trimmed down the commit message to be more precise.
---
arch/arm64/include/asm/runtime-const.h | 13 +++----------
1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/runtime-const.h b/arch/arm64/include/asm/runtime-const.h
index c3dbd3ae68f6..a3106f80912b 100644
--- a/arch/arm64/include/asm/runtime-const.h
+++ b/arch/arm64/include/asm/runtime-const.h
@@ -7,6 +7,7 @@
 #endif
 
 #include <asm/cacheflush.h>
+#include <asm/text-patching.h>
 
 /* Sigh. You can still run arm64 in BE mode */
 #include <asm/byteorder.h>
@@ -50,13 +51,7 @@ static inline void __runtime_fixup_16(__le32 *p, unsigned int val)
 	u32 insn = le32_to_cpu(*p);
 	insn &= 0xffe0001f;
 	insn |= (val & 0xffff) << 5;
-	*p = cpu_to_le32(insn);
-}
-
-static inline void __runtime_fixup_caches(void *where, unsigned int insns)
-{
-	unsigned long va = (unsigned long)where;
-	caches_clean_inval_pou(va, va + 4*insns);
+	aarch64_insn_patch_text_nosync(p, insn);
 }
 
 static inline void __runtime_fixup_ptr(void *where, unsigned long val)
@@ -66,7 +61,6 @@ static inline void __runtime_fixup_ptr(void *where, unsigned long val)
 	__runtime_fixup_16(p+1, val >> 16);
 	__runtime_fixup_16(p+2, val >> 32);
 	__runtime_fixup_16(p+3, val >> 48);
-	__runtime_fixup_caches(where, 4);
 }
 
 /* Immediate value is 6 bits starting at bit #16 */
@@ -76,8 +70,7 @@ static inline void __runtime_fixup_shift(void *where, unsigned long val)
 	u32 insn = le32_to_cpu(*p);
 	insn &= 0xffc0ffff;
 	insn |= (val & 63) << 16;
-	*p = cpu_to_le32(insn);
-	__runtime_fixup_caches(where, 1);
+	aarch64_insn_patch_text_nosync(p, insn);
 }
 
 static inline void runtime_const_fixup(void (*fn)(void *, unsigned long),
--
2.34.1