Re: [PATCH -tip V3 0/8] workqueue: break affinity initiatively

From: Valentin Schneider
Date: Mon Jan 11 2021 - 19:26:59 EST


On 11/01/21 21:23, Peter Zijlstra wrote:
> On Mon, Jan 11, 2021 at 07:21:06PM +0000, Valentin Schneider wrote:
>> I'm less fond of the workqueue pcpu flag toggling, but it gets us what
>> we want: allow those threads to run on !active CPUs during online, but
>> move them away before !online during offline.
>>
>> Before I get ahead of myself, do we *actually* require that first part
>> for workqueue kthreads? I'm thinking (raise alarm) we could try another
>> approach of making them pcpu kthreads that don't abide by the !active &&
>> online rule.
>
> There is code that really requires percpu workqueues to be percpu. Such
> code will flush the percpu workqueue on hotplug and never hit the unbind
> scenario.
>
> Other code uses those same percpu workqueues and only uses it as a
> performance enhancer, it likes things to stay local, but if not, meh..
> And these users are what got us the weird ass semantics of workqueue.
>
> Sadly workqueue itself can't tell them apart.
>

Oh well...

FWIW now that I've unconfused myself, that does look okay.
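
To spell that distinction out for my own benefit -- entirely made-up
names below, only the workqueue calls themselves are real:

#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

/*
 * Class 1: genuinely per-CPU. The work touches per-CPU state and the
 * user flushes it from its own hotplug callback, so it never gets a
 * chance to run unbound. (INIT_WORK() setup elided.)
 */
static DEFINE_PER_CPU(struct work_struct, strict_work);

static int strict_cpu_offline(unsigned int cpu)
{
        flush_work(per_cpu_ptr(&strict_work, cpu));
        return 0;
}

/*
 * Class 2: per-CPU purely as a locality hint. If the CPU goes down
 * while the work is pending, having it run elsewhere is tolerable.
 */
static struct work_struct best_effort_work;

static void kick_best_effort(void)
{
        schedule_work_on(raw_smp_processor_id(), &best_effort_work);
}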

>> > ---
>> > include/linux/kthread.h |  3 +++
>> > kernel/kthread.c        | 25 ++++++++++++++++++++++++-
>> > kernel/sched/core.c     |  2 +-
>> > kernel/sched/sched.h    |  4 ++--
>> > kernel/smpboot.c        |  1 +
>> > kernel/workqueue.c      | 12 +++++++++---
>> > 6 files changed, 40 insertions(+), 7 deletions(-)
>> >
>> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> > index 15d2562118d1..e71f9e44789e 100644
>> > --- a/kernel/sched/core.c
>> > +++ b/kernel/sched/core.c
>> > @@ -7277,7 +7277,7 @@ static void balance_push(struct rq *rq)
>> >          * Both the cpu-hotplug and stop task are in this case and are
>> >          * required to complete the hotplug process.
>> >          */
>> > -       if (is_per_cpu_kthread(push_task) || is_migration_disabled(push_task)) {
>> > +       if (rq->idle == push_task || is_per_cpu_kthread(push_task) || is_migration_disabled(push_task)) {
>>
>> I take it the p->set_child_tid thing you were complaining about on IRC
>> is what prevents us from having the idle task seen as a pcpu kthread?
>
> Yes, so to_kthread() only tests PF_KTHREAD and then assumes
> p->set_child_tid points to struct kthread, _however_ init_task has
> PF_KTHREAD set, but a NULL ->set_child_tid.
>
> This then means the idle thread for the boot CPU will malfunction with
> to_kthread() and will certainly not have KTHREAD_IS_PER_CPU set. Per
> construction (fork_idle()) none of the other idle threads will have that
> cured either.
>
> For fun and giggles, init (pid-1) will have PF_KTHREAD set for a while
> as well, until we exec /sbin/init.
>
> Anyway, idle will fail kthread_is_per_cpu(), and hence without the
> above, we'll try and push the idle task away, which results in much
> fail.
>

Quite!
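
For anyone else following along, the helper that trips this up reads
roughly as follows today (quoted from kernel/kthread.c for context; it
isn't touched by this diff):

static inline struct kthread *to_kthread(struct task_struct *k)
{
        WARN_ON(!(k->flags & PF_KTHREAD));
        return (__force void *)k->set_child_tid;
}

/*
 * rq->idle passes the PF_KTHREAD test, but idle threads never run the
 * kthread() entry code that points ->set_child_tid at a struct kthread,
 * so nothing derived from this -- kthread_is_per_cpu() included -- can
 * be trusted for the idle task.
 */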

>> Also, shouldn't this be done before the previous set_cpus_allowed_ptr()
>> call (in the same function)?
>
> Don't see why; we need nr_cpus_allowed == 1, so best do it after, right?
>

Duh, yes.
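
(For reference, the sched side's existing notion of a per-CPU kthread is
keyed off the affinity mask -- kernel/sched/sched.h has roughly the
following, again quoted for context rather than part of the diff:)

static inline bool is_per_cpu_kthread(struct task_struct *p)
{
        if (!(p->flags & PF_KTHREAD))
                return false;

        if (p->nr_cpus_allowed != 1)
                return false;

        return true;
}

i.e. the per-CPU-ness only means anything once the preceding
set_cpus_allowed_ptr() has shrunk the mask to a single CPU, hence doing
it after.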

>> That is, if we patch
>> __set_cpus_allowed_ptr() to also use kthread_is_per_cpu().
>
> That seems wrong.
>

It is, apologies.

>> > list_add_tail(&worker->node, &pool->workers);
>> > worker->pool = pool;