Re: 6.11/regression/bisected - The commit c1385c1f0ba3 caused a new "possible recursive locking detected" warning at boot.
From: Mikhail Gavrilov
Date: Thu Jul 25 2024 - 18:30:37 EST
On Thu, Jul 25, 2024 at 10:13 PM Jonathan Cameron
<Jonathan.Cameron@xxxxxxxxxx> wrote:
>
> Hi Mikhail.
>
> So the short story, ignoring the journey (which should only be described
> with beer in hand), is that I now have an emulated test setup in QEMU
> that fakes enough of the previously missing bits to bring up this path
> and can trigger the splat you shared. With the below fix I can get to
> something approaching a running system.
>
> However, without more work the emulation isn't actually doing any frequency
> control etc., so I have no idea whether the code actually works after this
> patch.
>
> If you are in a position to test a patch, could you try the following?
>
> One bit I need to check out tomorrow is to make sure this doesn't race with the
> workfn that is used to tear down the same static key on error.
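>
> One way that window could be closed (rough, untested sketch only - the mutex
> and flag below are invented for illustration, they are not taken from the
> existing aperfmperf.c code) is to serialise the enable work against the
> teardown workfn on a private lock:
>
> static DEFINE_MUTEX(freq_invariance_lock);
> static bool freq_invariance_off;	/* set by the teardown workfn */
>
> static void enable_freq_invariance_workfn(struct work_struct *work)
> {
> 	mutex_lock(&freq_invariance_lock);
> 	/* Skip the enable entirely if teardown already ran. */
> 	if (!freq_invariance_off) {
> 		static_branch_enable(&arch_scale_freq_key);
> 		register_freq_invariance_syscore_ops();
> 	}
> 	mutex_unlock(&freq_invariance_lock);
> }
>
> The teardown workfn would take the same mutex and set freq_invariance_off
> before calling static_branch_disable(), so the two could never interleave and
> a late enable couldn't resurrect the static key after an error.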
>
> From 8f7ad4c73954aae74265a3ec50a1d56e0c56050d Mon Sep 17 00:00:00 2001
> From: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>
> Date: Thu, 25 Jul 2024 17:56:00 +0100
> Subject: [RFC PATCH] x86/aperfmperf: Push static_branch_enable(&arch_scale_freq_key) onto work queue
>
> This is to avoid a deadlock reported by lockdep.
>
> TODO: Fix up this commit message before posting to actually give
> some details and tags etc.
>
> Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@xxxxxxxxx>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>
> ---
> arch/x86/kernel/cpu/aperfmperf.c | 13 ++++++++++---
> 1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/aperfmperf.c b/arch/x86/kernel/cpu/aperfmperf.c
> index b3fa61d45352..41c729d3517c 100644
> --- a/arch/x86/kernel/cpu/aperfmperf.c
> +++ b/arch/x86/kernel/cpu/aperfmperf.c
> @@ -300,15 +300,22 @@ static void register_freq_invariance_syscore_ops(void)
> static inline void register_freq_invariance_syscore_ops(void) {}
> #endif
>
> +static void enable_freq_invariance_workfn(struct work_struct *work)
> +{
> + static_branch_enable(&arch_scale_freq_key);
> + register_freq_invariance_syscore_ops();
> + pr_info("Estimated ratio of average max frequency by base frequency (times 1024): %llu\n", arch_max_freq_ratio);
> +}
> +static DECLARE_WORK(enable_freq_invariance_work,
> + enable_freq_invariance_workfn);
> +
> static void freq_invariance_enable(void)
> {
> if (static_branch_unlikely(&arch_scale_freq_key)) {
> WARN_ON_ONCE(1);
> return;
> }
> - static_branch_enable(&arch_scale_freq_key);
> - register_freq_invariance_syscore_ops();
> - pr_info("Estimated ratio of average max frequency by base frequency (times 1024): %llu\n", arch_max_freq_ratio);
> + schedule_work(&enable_freq_invariance_work);
> }
>
> void freq_invariance_set_perf_ratio(u64 ratio, bool turbo_disabled)
> --
> 2.43.0
>
>
Jonathan, thanks a lot.
With this patch, the issue is gone.
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@xxxxxxxxx>
--
Best Regards,
Mike Gavrilov.
Attachment: 6.10.0-d67978318827-with-enable-onto-work-queue-patch.zip