The arm64_ftr_bits entries for a register in arm64_ftr_regs[] are in
descending order as per their shift values. Validate that these feature
bits are defined correctly and do not overlap with each other. This check
protects against any inadvertent erroneous changes to the register
definitions.
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Suzuki K Poulose <suzuki.poulose@xxxxxxx>
Cc: Mark Brown <broonie@xxxxxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx
Signed-off-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
---
Applies on 5.8-rc3.
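
For illustration only (not part of the patch): a minimal userspace sketch
of the rule the new check enforces, assuming a hypothetical toy_ftr_bits
struct and validate_ftr_bits() helper in place of struct arm64_ftr_bits
and the loop added to sort_ftr_regs(). It models the two conditions from
the commit message, i.e. each field must fit inside a 64-bit register
(shift + width <= 64) and, with entries in descending shift order, must
not overlap the previous field.

#include <stdio.h>

/* Toy stand-in for the shift/width part of struct arm64_ftr_bits. */
struct toy_ftr_bits {
	unsigned int shift;
	unsigned int width;
};

/*
 * Entries are expected in descending order of shift. Each field must
 * fit inside a 64-bit register and must not overlap the previous one.
 */
static int validate_ftr_bits(const struct toy_ftr_bits *bits,
			     unsigned int num, const char *name)
{
	unsigned int i;
	int ret = 0;

	for (i = 0; i < num; i++) {
		if (bits[i].shift + bits[i].width > 64) {
			printf("%s has invalid feature at shift %u\n",
			       name, bits[i].shift);
			ret = -1;
		}
		/* Nothing to compare against for the first entry. */
		if (i && bits[i].shift + bits[i].width > bits[i - 1].shift) {
			printf("%s has feature overlap at shift %u\n",
			       name, bits[i].shift);
			ret = -1;
		}
	}
	return ret;
}

int main(void)
{
	/* Well formed: fields [63:60], [59:56], [3:0]. */
	const struct toy_ftr_bits good[] = { { 60, 4 }, { 56, 4 }, { 0, 4 } };
	/* Broken: [59:56] overlaps with [58:55]. */
	const struct toy_ftr_bits bad[]  = { { 56, 4 }, { 55, 4 } };

	validate_ftr_bits(good, 3, "GOOD_REG");	/* silent */
	validate_ftr_bits(bad, 2, "BAD_REG");	/* reports the overlap */
	return 0;
}

Running it prints only the BAD_REG overlap line, which is roughly the
class of report the WARN() calls in the patch would emit at boot.
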
Changes in V2:
- Replaced WARN_ON() with WARN() dropping the conditional block per Suzuki
Changes in V1: (https://patchwork.kernel.org/patch/11606285/)
arch/arm64/kernel/cpufeature.c | 45 +++++++++++++++++++++++++++++++---
1 file changed, 42 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9f63053a63a9..7bd7e6f936a5 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -697,11 +697,50 @@ static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
static void __init sort_ftr_regs(void)
{
-	int i;
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(arm64_ftr_regs); i++) {
+		const struct arm64_ftr_reg *ftr_reg = arm64_ftr_regs[i].reg;
+		const struct arm64_ftr_bits *ftr_bits;
+		unsigned int j = 0;
+
+		/*
+		 * Features here must be sorted in descending order with respect
+		 * to their shift values and should not overlap with each other.
+		 */
+		for (ftr_bits = ftr_reg->ftr_bits;
+		     ftr_bits->width != 0; ftr_bits++, j++) {
+			WARN((ftr_bits->shift + ftr_bits->width) > 64,
+				"%s has invalid feature at shift %d\n",
+				ftr_reg->name, ftr_bits->shift);