Re: resctrl mount fail on v6.13-rc1

From: Ming Lei
Date: Thu Dec 05 2024 - 04:29:38 EST


On Wed, Dec 04, 2024 at 08:48:14AM -0800, Reinette Chatre wrote:
> Hi Ming,
>
> On 12/3/24 7:27 PM, Ming Lei wrote:
> > On Mon, Dec 02, 2024 at 09:02:45PM -0800, Reinette Chatre wrote:
> >>
> >>
> >> On 12/2/24 8:54 PM, Reinette Chatre wrote:
> >>>
> >>>
> >>> On 12/2/24 6:47 PM, Luck, Tony wrote:
> >>>> On Mon, Dec 02, 2024 at 02:26:48PM -0800, Reinette Chatre wrote:
> >>>>> Hi Tony,
> >>>>>
> >>>>> On 12/2/24 1:42 PM, Luck, Tony wrote:
> >>>>>> Can anyone better at decoding lockdep dumps than me make sense of this?
> >>>>>>
> >>>>>> All I did was build v6.13-rc1 with (among others)
> >>>>>>
> >>>>>> CONFIG_PROVE_LOCKING=y
> >>>>>> CONFIG_PROVE_RAW_LOCK_NESTING=y
> >>>>>> CONFIG_PROVE_RCU=y
> >>>>>>
> >>>>>> and then mount the resctrl filesystem:
> >>>>>>
> >>>>>> $ sudo mount -t resctrl resctrl /sys/fs/resctrl
> >>>>>>
> >>>>>> There are only trivial changes to the resctrl code between
> >>>>>> v6.12 (which works) and v6.13-rc1:
> >>>>>>
> >>>>>> $ git log --oneline v6.13-rc1 ^v6.12 -- arch/x86/kernel/cpu/resctrl
> >>>>>> 5a4b3fbb4849 Merge tag 'x86_cache_for_v6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
> >>>>>> 9bce6e94c4b3 x86/resctrl: Support Sub-NUMA cluster mode SNC6
> >>>>>> 29eaa7958367 x86/resctrl: Slightly clean-up mbm_config_show()
> >>>>>>
> >>>>>> So something in kernfs? Or the way resctrl uses kernfs?
> >>>>>
> >>>>> I am not seeing this but that may be because I am not testing with
> >>>>> selinux enabled. My test kernel has:
> >>>>> # CONFIG_SECURITY_SELINUX is not set
> >>>>>
> >>>>> I am also not running with any btrfs filesystems.
> >>>>>
> >>>>> Is this your usual setup in which you are seeing this for the first time? Is it
> >>>>> perhaps possible for you to bisect?
> >>>>
> >>>> Bisection says:
> >>>>
> >>>> $ git bisect bad
> >>>> f1be1788a32e8fa63416ad4518bbd1a85a825c9d is the first bad commit
> >>>> commit f1be1788a32e8fa63416ad4518bbd1a85a825c9d
> >>>> Author: Ming Lei <ming.lei@xxxxxxxxxx>
> >>>> Date: Fri Oct 25 08:37:20 2024 +0800
> >>>>
> >>>> block: model freeze & enter queue as lock for supporting lockdep
> >>>>
> >>>
> >>> Thank you very much, Tony. Since you did not respond to the question about
> >>> bisect I assumed that you would not do it. I ended up duplicating the bisect
> >>> effort after getting an environment in which I can reproduce the issue. Doing
> >>> so I was able to confirm the commit pointed to by bisect.
> >>> The commit cannot be reverted cleanly so I could not test v6.13-rc1 with it
> >>> reverted.
> >>>
> >>> Ming Lei: I'd be happy to help with testing if you do not have hardware with
> >>> which you can reproduce the issue.
> >>
> >> One datapoint that I neglected to mention: btrfs does not seem to be required. The system
> >> I tested on used an ext4 filesystem, resulting in the trace below:
> >
> > Hi Reinette and Tony,
> >
> > The warning is triggered because the two subsystems are connected with
> > &cpu_hotplug_lock.
> >
> > rdt_get_tree():
> >     cpus_read_lock();
> >     mutex_lock(&rdtgroup_mutex);
> >     ...
> >
> > blk_mq_realloc_hw_ctxs()
> >     mutex_lock(&q->sysfs_lock);
> >     ...
> >     blk_mq_alloc_and_init_hctx()
> >         blk_mq_init_hctx()
> >             cpuhp_state_add_instance_nocalls()
> >                 __cpuhp_state_add_instance()
> >                     cpus_read_lock();
> >
> > Given that cpus_read_lock() is often implied in cpuhp APIs, I feel
> > rdt_get_tree() could re-order the two locks to avoid the dependency.
>
> This is not possible for exactly the reason you provide ("cpus_read_lock() is
> often implied in cpuhp APIs").
>
> resctrl relies on hotplug state callbacks for its initialization. You can find
> the callback setup in:
>
> arch/x86/kernel/cpu/resctrl/core.c:
>
> static int __init resctrl_late_init(void)
> {
>         ...
>         state = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
>                                   "x86/resctrl/cat:online:",
>                                   resctrl_arch_online_cpu,
>                                   resctrl_arch_offline_cpu);
>         ...
> }
>
> Since the resctrl code is called by the CPU hotplug subsystem with
> cpu_hotplug_lock already held, it is not possible for resctrl to change the
> lock ordering.

OK, I see now, and thanks for the explanation.
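
If I understand correctly, the nesting below is already fixed by the hotplug
callback path (a simplified sketch from my reading of the resctrl code; the
exact callee that takes rdtgroup_mutex may differ):

        /* hotplug core, cpu_hotplug_lock already held */
        resctrl_arch_online_cpu()
            resctrl_online_cpu()
                mutex_lock(&rdtgroup_mutex);

so rdt_get_tree() has to keep taking cpus_read_lock() before rdtgroup_mutex,
and the dependency has to be broken on the block side instead.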

I will try to figure out how to move cpuhp_state_add_instance_nocalls() out of
q->sysfs_lock, and it should be fine even in the case that the queue is live.
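
Something along the lines below is what I have in mind (a completely untested
sketch just to show the idea: the helper name is made up, and error handling
plus a few details of blk_mq_init_hctx(), such as the BLK_MQ_F_STACKING check,
are omitted):

        /*
         * Register the cpuhp instances for all hctxs only after
         * blk_mq_realloc_hw_ctxs() has dropped q->sysfs_lock, so that
         * cpus_read_lock() is no longer nested inside q->sysfs_lock.
         */
        static void blk_mq_add_hw_queues_cpuhp(struct request_queue *q)
        {
                struct blk_mq_hw_ctx *hctx;
                unsigned long i;

                queue_for_each_hw_ctx(q, hctx, i) {
                        cpuhp_state_add_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
                                                         &hctx->cpuhp_online);
                        cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD,
                                                         &hctx->cpuhp_dead);
                }
        }

The callers of blk_mq_realloc_hw_ctxs() would then call such a helper once
q->sysfs_lock is released, instead of registering from blk_mq_init_hctx(),
and hctxs that are already registered (the nr_hw_queues update case) would
need to be skipped.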

Thanks,
Ming