[PATCH v3 3/7] arm64/runtime-const: Introduce runtime_const_mask_32()
From: K Prateek Nayak
Date: Thu Apr 02 2026 - 07:33:06 EST
Futex hash computation requires a mask operation on data that is read-only
after init and will be converted to a runtime constant in a subsequent
commit.
Introduce runtime_const_mask_32() to further optimize the mask operation
in the futex hash computation hot path. For an arbitrary 32-bit mask,
GCC generates the pattern:
movz w1, #lo16, lsl #0 // w1 = bits [15:0]
movk w1, #hi16, lsl #16 // w1 = full 32-bit value
and w0, w0, w1 // w0 = w0 & w1
which is what is implemented here (the same sequence was also suggested
by Claude). The (__mask & val) operation is intentionally placed outside
the asm block so that the compiler can further optimize it where
possible.
__runtime_fixup_ptr() already patches a "movz + movk, lsl #16" sequence;
its fixup logic is reused by __runtime_fixup_mask() to patch the same
sequence here.
Assisted-by: Claude:claude-sonnet-4-5
Signed-off-by: K Prateek Nayak <kprateek.nayak@xxxxxxx>
---
Changelog v2..v3:
o Reordered this to come after the text patching fixes for ARM64.
(David)
o Moved the "&" operation outside the inline asm block to allow for
compilers to further optimize it if possible. (David)
---
arch/arm64/include/asm/runtime-const.h | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/arch/arm64/include/asm/runtime-const.h b/arch/arm64/include/asm/runtime-const.h
index a3106f80912b..21f817eb5951 100644
--- a/arch/arm64/include/asm/runtime-const.h
+++ b/arch/arm64/include/asm/runtime-const.h
@@ -36,6 +36,17 @@
:"r" (0u+(val))); \
__ret; })
+#define runtime_const_mask_32(val, sym) ({ \
+ unsigned long __mask; \
+ asm_inline("1:\t" \
+ "movz %w0, #0xcdef\n\t" \
+ "movk %w0, #0x89ab, lsl #16\n\t" \
+ ".pushsection runtime_mask_" #sym ",\"a\"\n\t" \
+ ".long 1b - .\n\t" \
+ ".popsection" \
+ :"=r" (__mask)); \
+ (__mask & (val)); })
+
#define runtime_const_init(type, sym) do { \
extern s32 __start_runtime_##type##_##sym[]; \
extern s32 __stop_runtime_##type##_##sym[]; \
@@ -73,6 +84,14 @@ static inline void __runtime_fixup_shift(void *where, unsigned long val)
aarch64_insn_patch_text_nosync(p, insn);
}
+/* 32-bit mask is split across movz (bits [15:0]) and movk, lsl #16 (bits [31:16]) */
+static inline void __runtime_fixup_mask(void *where, unsigned long val)
+{
+ __le32 *p = lm_alias(where);
+ __runtime_fixup_16(p, val);
+ __runtime_fixup_16(p+1, val >> 16);
+}
+
static inline void runtime_const_fixup(void (*fn)(void *, unsigned long),
unsigned long val, s32 *start, s32 *end)
{
--
2.34.1