On 03/01/2025 01:17, Yang Shi wrote:
FEAT_BBM level 2 allow changing block size of a translation with relaxed
TLB flushing. But it may incur TLB conflict abort. We can handle the
abort in kernel, however it is hard to guarantee the recuesive TLB
conflct will never happen in the handling itself.

nit: recuesive -> recursive ?
Some implementations can handle TLB conflict gracefully without fault
handler in kernel so FEAT_BBM level 2 can be enabled on those
implementations safely.
Look up MIDR to filter out those CPUs. AmpereOne is one of them.
Suggested-by: Will Deacon <will@xxxxxxxxxx>
Signed-off-by: Yang Shi <yang@xxxxxxxxxxxxxxxxxxxxxx>
---
arch/arm64/include/asm/cpufeature.h | 19 +++++++++++++++++++
arch/arm64/kernel/cpufeature.c | 11 +++++++++++
arch/arm64/tools/cpucaps | 1 +
3 files changed, 31 insertions(+)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8b4e5a3cd24c..33ca9db42741 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -866,6 +866,25 @@ static __always_inline bool system_supports_mpam_hcr(void)
return alternative_has_cap_unlikely(ARM64_MPAM_HCR);
}
+static inline bool system_supports_bbmlv2(void)

nit: Arm language internally is starting to refer to FEAT_BBML1 / FEAT_BBML2 and
I believe this will soon make its way to the Arm ARM. So probably better to
refer to bbml2 rather than bbmlv2 throughout.

+{
+	return cpus_have_final_boot_cap(ARM64_HAS_BBMLV2);
+}
+
+static inline bool bbmlv2_available(void)
+{
+	static const struct midr_range support_bbmlv2[] = {
+		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
+		{}
+	};
+
+	if (is_midr_in_range_list(read_cpuid_id(), support_bbmlv2))
+		return true;
+
+	return false;
+}
+

This function has no need to be in the header. system_supports_bbmlv2() is what
users should use. Suggest moving to has_bbmlv2() in cpufeature.c.
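
Something along these lines is what I had in mind (untested sketch only; the
has_bbml2() and supports_bbml2 names are just suggestions, using the bbml2
naming from the nit above):

static bool has_bbml2(const struct arm64_cpu_capabilities *entry, int scope)
{
	/*
	 * Allow-list of implementations known to handle TLB conflicts
	 * gracefully, so no conflict abort handler is needed in the kernel.
	 */
	static const struct midr_range supports_bbml2[] = {
		MIDR_ALL_VERSIONS(MIDR_AMPERE1),
		MIDR_ALL_VERSIONS(MIDR_AMPERE1A),
		{}
	};

	return is_midr_in_range_list(read_cpuid_id(), supports_bbml2);
}

Then system_supports_bbml2() stays in the header as a thin wrapper around
cpus_have_final_boot_cap() and nothing else needs to be visible outside
cpufeature.c.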
int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
bool try_emulate_mrs(struct pt_regs *regs, u32 isn);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6ce71f444ed8..a60d5fa04828 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1889,6 +1889,11 @@ static bool has_lpa2(const struct arm64_cpu_capabilities *entry, int scope)
}
#endif
+static bool has_bbmlv2(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	return bbmlv2_available();
+}
+
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
#define KPTI_NG_TEMP_VA (-(1UL << PMD_SHIFT))
@@ -2990,6 +2995,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, GCS, IMP)
},
#endif
+	{
+		.desc = "BBM Level 2",
+		.capability = ARM64_HAS_BBMLV2,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
+		.matches = has_bbmlv2,
+	},

I'm wondering if this will potentially lead to problems for asymmetric
collections of CPUs (e.g. big.LITTLE)? I can imagine that little CPUs might not
support BBML2. In this case if you boot on a big CPU that does have BBML2, you
will require the feature and refuse to online the secondary little CPUs.

Perhaps this really needs to be a system feature, where it is only enabled if
all CPUs in the system support it (see the sketch below)? I'm guessing that
will make painting the linear map harder; I guess you will need to initially
set it up with PTE mappings, then repaint as block mappings if BBML2 is
determined to be supported, if that's not already what you are doing.
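
i.e. something like the below (untested; note that with a system-wide scope the
MIDR check would presumably need to run against every CPU rather than just the
boot CPU for it to mean anything):

	{
		.desc = "BBM Level 2",
		.capability = ARM64_HAS_BBML2,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_bbml2,
	},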
{},
};
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index eb17f59e543c..287bdede53f5 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -14,6 +14,7 @@ HAS_ADDRESS_AUTH_ARCH_QARMA5
HAS_ADDRESS_AUTH_IMP_DEF
HAS_AMU_EXTN
HAS_ARMv8_4_TTL
+HAS_BBMLV2
HAS_CACHE_DIC
HAS_CACHE_IDC
HAS_CNP

Thanks,
Ryan