[tip: x86/cpu] x86/cacheinfo: Consolidate AMD/Hygon leaf 0x8000001d calls

From: tip-bot2 for Ahmed S. Darwish
Date: Tue Mar 25 2025 - 05:41:19 EST


The following commit has been merged into the x86/cpu branch of tip:

Commit-ID: 77676e6802a10ffa5a0ad6367e8f6e14cbd88781
Gitweb: https://git.kernel.org/tip/77676e6802a10ffa5a0ad6367e8f6e14cbd88781
Author: Ahmed S. Darwish <darwi@xxxxxxxxxxxxx>
AuthorDate: Mon, 24 Mar 2025 14:33:06 +01:00
Committer: Ingo Molnar <mingo@xxxxxxxxxx>
CommitterDate: Tue, 25 Mar 2025 10:22:32 +01:00

x86/cacheinfo: Consolidate AMD/Hygon leaf 0x8000001d calls

While gathering CPU cache info, CPUID leaf 0x8000001d is invoked in two
separate if blocks: one for Hygon CPUs and one for AMD CPUs with topology
extensions. After each invocation, amd_init_l3_cache() is called.

Merge the two if blocks into a single condition, removing the duplicated
code. Future commits will expand this combined block, so consolidating it
now keeps the code cleaner and more maintainable.
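In effect, the consolidated vendor handling collapses to a single block of
the following shape (a simplified excerpt of the change shown in the diff
below):

  u8 cpu_vendor = boot_cpu_data.x86_vendor;

  if (cpu_vendor == X86_VENDOR_AMD || cpu_vendor == X86_VENDOR_HYGON) {
          /* AMD with topology extensions, or any Hygon CPU */
          if (boot_cpu_has(X86_FEATURE_TOPOEXT) || cpu_vendor == X86_VENDOR_HYGON)
                  cpuid_count(0x8000001d, index, &eax.full, &ebx.full, &ecx.full, &edx);
          else
                  amd_cpuid4(index, &eax, &ebx, &ecx);

          amd_init_l3_cache(id4, index);
  } else {
          /* Intel: classic CPUID leaf 4 */
          cpuid_count(4, index, &eax.full, &ebx.full, &ecx.full, &edx);
  }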

Note, while at it, remove a useless "better error?" comment that has been
in the same function since the 2005 commit e2cac78935ff ("[PATCH]
x86_64: When running cpuid4 need to run on the correct CPU").

Note, as previously done in commit aec28d852ed2 ("x86/cpuid: Standardize
on u32 in <asm/cpuid/api.h>"), standardize on the 'u32' and 'u8' types.

Signed-off-by: Ahmed S. Darwish <darwi@xxxxxxxxxxxxx>
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: H. Peter Anvin <hpa@xxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Link: https://lore.kernel.org/r/20250324133324.23458-12-darwi@xxxxxxxxxxxxx
---
arch/x86/kernel/cpu/cacheinfo.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 1b2a2bf..f1055e8 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -593,28 +593,28 @@ static void amd_init_l3_cache(struct _cpuid4_info_regs *id4, int index)
static int
cpuid4_cache_lookup_regs(int index, struct _cpuid4_info_regs *id4)
{
- union _cpuid4_leaf_eax eax;
- union _cpuid4_leaf_ebx ebx;
- union _cpuid4_leaf_ecx ecx;
- unsigned edx;
-
- if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
- if (boot_cpu_has(X86_FEATURE_TOPOEXT))
- cpuid_count(0x8000001d, index, &eax.full,
- &ebx.full, &ecx.full, &edx);
- else
+ u8 cpu_vendor = boot_cpu_data.x86_vendor;
+ union _cpuid4_leaf_eax eax;
+ union _cpuid4_leaf_ebx ebx;
+ union _cpuid4_leaf_ecx ecx;
+ u32 edx;
+
+ if (cpu_vendor == X86_VENDOR_AMD || cpu_vendor == X86_VENDOR_HYGON) {
+ if (boot_cpu_has(X86_FEATURE_TOPOEXT) || cpu_vendor == X86_VENDOR_HYGON) {
+ /* AMD with TOPOEXT, or HYGON */
+ cpuid_count(0x8000001d, index, &eax.full, &ebx.full, &ecx.full, &edx);
+ } else {
+ /* Legacy AMD fallback */
amd_cpuid4(index, &eax, &ebx, &ecx);
- amd_init_l3_cache(id4, index);
- } else if (boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
- cpuid_count(0x8000001d, index, &eax.full,
- &ebx.full, &ecx.full, &edx);
+ }
amd_init_l3_cache(id4, index);
} else {
+ /* Intel */
cpuid_count(4, index, &eax.full, &ebx.full, &ecx.full, &edx);
}

if (eax.split.type == CTYPE_NULL)
- return -EIO; /* better error ? */
+ return -EIO;

id4->eax = eax;
id4->ebx = ebx;