Re: Re: [PATCH 1/2] cgroup/cpuset: Make cpuset hotplug processing synchronous

From: Valentin Schneider
Date: Wed Apr 03 2024 - 12:05:17 EST


On 03/04/24 16:54, Michal Koutný wrote:
> On Wed, Apr 03, 2024 at 04:26:38PM +0200, Valentin Schneider <vschneid@xxxxxxxxxx> wrote:
>> Also, I gave Michal's patch a try and it looks like it's introducing a
>
> Thank you.
>
>> cgroup_threadgroup_rwsem -> cpuset_mutex
>> ordering from
>> cgroup_transfer_tasks_locked()
>> `\
>>   percpu_down_write(&cgroup_threadgroup_rwsem);
>>   cgroup_migrate()
>>   `\
>>     cgroup_migrate_execute()
>>     `\
>>       ss->can_attach() // cpuset_can_attach()
>>       `\
>>         mutex_lock(&cpuset_mutex);
>>
>> which is invalid, see below.
>
> _This_ should be the right order (cpuset_mutex inside
> cgroup_threadgroup_rwsem), at least in my mental model. Thus I missed
> that cpuset_mutex must have been taken somewhere higher up in the
> hotplug code (CPU 0 in the lockdep dump, I can't easily see where from)
> :-/.
>

If I got this right...

cpuset_hotplug_update_tasks()
`\
  mutex_lock(&cpuset_mutex);
  hotplug_update_tasks_legacy()
  `\
    remove_tasks_in_empty_cpuset()
    `\
      cgroup_transfer_tasks_locked()
      `\
        percpu_down_write(&cgroup_threadgroup_rwsem);

But then that chain continues with:

cgroup_migrate()
`\
  cgroup_migrate_execute()
  `\
    ss->can_attach() // cpuset_can_attach()
    `\
      mutex_lock(&cpuset_mutex);

which doesn't look good...
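
To make the inversion concrete, here is a minimal userspace sketch
(illustration only; the pthread code and the *_analog names below are
mine, not kernel code): one path takes the cpuset lock and then the
threadgroup lock, the other takes them in the opposite order, which is
the classic ABBA pattern lockdep complains about:

/*
 * Userspace analogy only -- not kernel code. The two mutexes merely
 * mirror cpuset_mutex and cgroup_threadgroup_rwsem.
 *
 *   hotplug_path(): cpuset first, then threadgroup (A -> B)
 *   attach_path():  threadgroup first, then cpuset (B -> A)
 *
 * Run both concurrently and each thread can end up waiting on the lock
 * the other already holds.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t cpuset_mutex_analog      = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t threadgroup_rwsem_analog = PTHREAD_MUTEX_INITIALIZER;

static void *hotplug_path(void *unused)
{
	pthread_mutex_lock(&cpuset_mutex_analog);       /* ~ mutex_lock(&cpuset_mutex) */
	sleep(1);                                       /* widen the race window */
	pthread_mutex_lock(&threadgroup_rwsem_analog);  /* ~ percpu_down_write() */
	pthread_mutex_unlock(&threadgroup_rwsem_analog);
	pthread_mutex_unlock(&cpuset_mutex_analog);
	return NULL;
}

static void *attach_path(void *unused)
{
	pthread_mutex_lock(&threadgroup_rwsem_analog);  /* ~ percpu_down_write() */
	sleep(1);
	pthread_mutex_lock(&cpuset_mutex_analog);       /* ~ cpuset_can_attach() */
	pthread_mutex_unlock(&cpuset_mutex_analog);
	pthread_mutex_unlock(&threadgroup_rwsem_analog);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, hotplug_path, NULL);
	pthread_create(&b, NULL, attach_path, NULL);
	/* With the sleeps above this reliably hangs: each thread owns one
	 * lock and waits forever for the other one. */
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("no deadlock this time");
	return 0;
}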


Also, I missed that earlier, but this triggers:

cgroup_transfer_tasks_locked() ~> lockdep_assert_held(&cgroup_mutex);

[ 30.773092] WARNING: CPU: 2 PID: 24 at kernel/cgroup/cgroup-v1.c:112 cgroup_transfer_tasks_locked+0x39f/0x450
[ 30.773807] Modules linked in:
[ 30.774063] CPU: 2 PID: 24 Comm: cpuhp/2 Not tainted 6.9.0-rc1-00042-g844178b616c7-dirty #25
[ 30.774672] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
[ 30.775457] RIP: 0010:cgroup_transfer_tasks_locked+0x39f/0x450
[ 30.775891] Code: 0f 85 70 ff ff ff 0f 1f 44 00 00 e9 6d ff ff ff be ff ff ff ff 48 c7 c7 48 82 d6 82 e8 5a 6a ec 00 85 c0 0f 85 6d fd ff ff 90 <0f> 0b 90 e9 64 fd ff ff 48 8b bd e8 fe ff ff be 01 00 00 00 e8 78
[ 30.777270] RSP: 0000:ffffc900000e7c20 EFLAGS: 00010246
[ 30.777813] RAX: 0000000000000000 RBX: ffffc900000e7cb0 RCX: 0000000000000000
[ 30.778443] RDX: 0000000000000000 RSI: ffffffff82d68248 RDI: ffff888004a9a300
[ 30.779142] RBP: ffffc900000e7d50 R08: 0000000000000001 R09: 0000000000000000
[ 30.779889] R10: ffffc900000e7d70 R11: 0000000000000001 R12: ffff8880057c6040
[ 30.780420] R13: ffff88800539f800 R14: 0000000000000001 R15: 0000000000000004
[ 30.780951] FS: 0000000000000000(0000) GS:ffff88801f500000(0000) knlGS:0000000000000000
[ 30.781561] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 30.781989] CR2: 00000000f7e6fe85 CR3: 00000000064ac000 CR4: 00000000000006f0
[ 30.782558] Call Trace:
[ 30.782783] <TASK>
[ 30.782982] ? __warn+0x87/0x180
[ 30.783250] ? cgroup_transfer_tasks_locked+0x39f/0x450
[ 30.783644] ? report_bug+0x164/0x190
[ 30.783970] ? handle_bug+0x3b/0x70
[ 30.784288] ? exc_invalid_op+0x17/0x70
[ 30.784641] ? asm_exc_invalid_op+0x1a/0x20
[ 30.784992] ? cgroup_transfer_tasks_locked+0x39f/0x450
[ 30.785375] ? __lock_acquire+0xe9d/0x16d0
[ 30.785707] ? cpuset_update_active_cpus+0x15a/0xca0
[ 30.786074] ? cpuset_update_active_cpus+0x782/0xca0
[ 30.786443] cpuset_update_active_cpus+0x782/0xca0
[ 30.786816] sched_cpu_deactivate+0x1ad/0x1d0
[ 30.787148] ? __pfx_sched_cpu_deactivate+0x10/0x10
[ 30.787509] cpuhp_invoke_callback+0x16b/0x6b0
[ 30.787859] ? cpuhp_thread_fun+0x56/0x240
[ 30.788175] cpuhp_thread_fun+0x1ba/0x240
[ 30.788485] smpboot_thread_fn+0xd8/0x1d0
[ 30.788823] ? __pfx_smpboot_thread_fn+0x10/0x10
[ 30.789229] kthread+0xce/0x100
[ 30.789526] ? __pfx_kthread+0x10/0x10
[ 30.789876] ret_from_fork+0x2f/0x50
[ 30.790200] ? __pfx_kthread+0x10/0x10
[ 30.792341] ret_from_fork_asm+0x1a/0x30
[ 30.792716] </TASK>
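
FWIW the warning above is just that assertion firing. As a purely
illustrative userspace sketch of the same contract (the fake_* names and
the "held" flag are mine, not kernel code): a *_locked() helper asserts
that its caller already holds the lock, and a caller that reaches it
without taking the lock first trips the assertion, much like the hotplug
path reaching cgroup_transfer_tasks_locked() without cgroup_mutex:

/* Illustration only -- not kernel code. The boolean flag stands in for
 * the bookkeeping that lockdep_assert_held() relies on. */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t fake_cgroup_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool fake_cgroup_mutex_held;

static void fake_cgroup_lock(void)
{
	pthread_mutex_lock(&fake_cgroup_mutex);
	fake_cgroup_mutex_held = true;
}

static void fake_cgroup_unlock(void)
{
	fake_cgroup_mutex_held = false;
	pthread_mutex_unlock(&fake_cgroup_mutex);
}

/* The "_locked" convention: the caller must already hold the lock. */
static void fake_transfer_tasks_locked(void)
{
	assert(fake_cgroup_mutex_held);  /* ~ lockdep_assert_held(&cgroup_mutex) */
	/* ... migrate the tasks ... */
}

int main(void)
{
	/* A well-behaved caller: */
	fake_cgroup_lock();
	fake_transfer_tasks_locked();
	fake_cgroup_unlock();

	/* A caller that never took the lock: the assert fires here, the
	 * userspace analogue of the WARNING above. */
	fake_transfer_tasks_locked();

	puts("not reached");
	return 0;
}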

> Michal