Re: 6.11/regression/bisected - The commit c1385c1f0ba3 caused a new possible recursive locking detected warning at computer boot.
From: Hans de Goede
Date: Sat Aug 03 2024 - 11:48:35 EST
Hi,
On 7/26/24 6:26 PM, Thomas Gleixner wrote:
> On Thu, Jul 25 2024 at 18:13, Jonathan Cameron wrote:
>> On Tue, 23 Jul 2024 18:20:06 +0100
>> Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx> wrote:
>>
>>>> This is an interesting corner case and perhaps reflects a flawed
>>>> assumption we were making: that on this path, anything that can
>>>> happen for an initially present CPU can also happen for a hotplugged
>>>> one. On the hotplugged path the lock was always held, hence the
>>>> static_key_enable() would have failed.
>
> No. The original code invoked this without cpus read locked via:
>
> acpi_processor_driver.probe()
> __acpi_processor_start()
> ....
>
> and the cpu hotplug callback finds it already set up, so it won't reach
> the static_key_enable() anymore.
>
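For anyone landing here from the lockdep splat, the distinction that
matters is that static_branch_enable()/static_key_enable() takes
cpu_hotplug_lock internally, while the *_cpuslocked() variants expect
the caller to hold it already. A simplified sketch of what
kernel/jump_label.c does (key flipping and error handling elided):

	void static_key_enable_cpuslocked(struct static_key *key)
	{
		lockdep_assert_cpus_held();	/* caller holds cpu_hotplug_lock */
		/* ... enable the key and patch the jump sites ... */
	}

	void static_key_enable(struct static_key *key)
	{
		cpus_read_lock();	/* nests if the caller already holds it */
		static_key_enable_cpuslocked(key);
		cpus_read_unlock();
	}

Reaching the plain static_branch_enable() from a path that already
holds cpu_hotplug_lock is what lockdep reports as possible recursive
locking, and that is what the patch below avoids.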
>> One bit I need to check out tomorrow is to make sure this doesn't race with the
>> workfn that is used to tear down the same static key on error.
>
> There is a simpler solution for that. See the uncompiled patch below.
I was seeing the same lockdep issue as the original reporter on my
AMD Ryzen 9 5950X machine and I can confirm that the suggested patch
(below) fixes this:
Tested-by: Hans de Goede <hdegoede@xxxxxxxxxx>
Regards,
Hans
> ---
> diff --git a/arch/x86/kernel/cpu/aperfmperf.c b/arch/x86/kernel/cpu/aperfmperf.c
> index b3fa61d45352..0b69bfbf345d 100644
> --- a/arch/x86/kernel/cpu/aperfmperf.c
> +++ b/arch/x86/kernel/cpu/aperfmperf.c
> @@ -306,7 +306,7 @@ static void freq_invariance_enable(void)
>  		WARN_ON_ONCE(1);
>  		return;
>  	}
> -	static_branch_enable(&arch_scale_freq_key);
> +	static_branch_enable_cpuslocked(&arch_scale_freq_key);
>  	register_freq_invariance_syscore_ops();
>  	pr_info("Estimated ratio of average max frequency by base frequency (times 1024): %llu\n", arch_max_freq_ratio);
>  }
> @@ -323,8 +323,10 @@ static void __init bp_init_freq_invariance(void)
>  	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
>  		return;
> 
> -	if (intel_set_max_freq_ratio())
> +	if (intel_set_max_freq_ratio()) {
> +		guard(cpus_read_lock)();
>  		freq_invariance_enable();
> +	}
>  }
> 
>  static void disable_freq_invariance_workfn(struct work_struct *work)
>
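A side note on the second hunk for readers unfamiliar with guard():
guard(cpus_read_lock)() is the scope-based lock guard defined for the
CPU hotplug lock (see <linux/cleanup.h> and <linux/cpu.h>), so the read
lock is dropped automatically when the block is left. The hunk is
roughly equivalent to the open-coded form below, meaning
freq_invariance_enable() is always entered with cpu_hotplug_lock held,
matching the new static_branch_enable_cpuslocked() call:

	if (intel_set_max_freq_ratio()) {
		cpus_read_lock();
		freq_invariance_enable();
		cpus_read_unlock();	/* guard() does this at scope exit */
	}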