Re: [PATCH 3/4] workqueue: Tag bound workers with KTHREAD_IS_PER_CPU

From: Lai Jiangshan
Date: Sat Jan 16 2021 - 01:42:53 EST


On Thu, Jan 14, 2021 at 11:35 PM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:

>
> -void kthread_set_per_cpu(struct task_struct *k, bool set)
> +void kthread_set_per_cpu(struct task_struct *k, int cpu)
> {
> struct kthread *kthread = to_kthread(k);
> if (!kthread)
> return;
>
> - if (set) {
> - WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
> - WARN_ON_ONCE(k->nr_cpus_allowed != 1);
> - set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
> - } else {
> + WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
> +
> + if (cpu < 0) {
> clear_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
> + return;
> }
> +
> + kthread->cpu = cpu;
> + set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
> }
>

I don't see the code that sets the CPU's mask on the task, now that
set_cpus_allowed_ptr() has been removed from rebind_workers().

Did I miss it somewhere?
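
To be concrete, this is roughly the call the patch removes, and I would
expect an equivalent to remain somewhere on the rebind path, otherwise
the workers keep the cpu_possible_mask affinity they were given in
unbind_workers():

	/* removed from rebind_workers() by this patch */
	WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
					  pool->attrs->cpumask) < 0);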

>
> @@ -2371,6 +2371,7 @@ static int worker_thread(void *__worker)
> /* tell the scheduler that this is a workqueue worker */
> set_pf_worker(true);
> woke_up:
> + kthread_parkme();
> raw_spin_lock_irq(&pool->lock);
>
> /* am I supposed to die? */
> @@ -2428,7 +2429,7 @@ static int worker_thread(void *__worker)
> move_linked_works(work, &worker->scheduled, NULL);
> process_scheduled_works(worker);
> }
> - } while (keep_working(pool));
> + } while (keep_working(pool) && !kthread_should_park());
>
> worker_set_flags(worker, WORKER_PREP);
> sleep:
> @@ -2440,9 +2441,12 @@ static int worker_thread(void *__worker)
> * event.
> */
> worker_enter_idle(worker);
> - __set_current_state(TASK_IDLE);
> + set_current_state(TASK_IDLE);
> raw_spin_unlock_irq(&pool->lock);
> - schedule();
> +
> + if (!kthread_should_park())
> + schedule();
> +
> goto woke_up;
> }
>
> @@ -4923,7 +4927,7 @@ static void unbind_workers(int cpu)
> raw_spin_unlock_irq(&pool->lock);
>
> for_each_pool_worker(worker, pool) {
> - kthread_set_per_cpu(worker->task, false);
> + kthread_set_per_cpu(worker->task, -1);
> WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
> }
>
> @@ -4978,9 +4982,9 @@ static void rebind_workers(struct worker_pool *pool)
> * from CPU_ONLINE, the following shouldn't fail.
> */
> for_each_pool_worker(worker, pool) {
> - WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
> - pool->attrs->cpumask) < 0);
> - kthread_set_per_cpu(worker->task, true);
> + WARN_ON_ONCE(kthread_park(worker->task) < 0);
> + kthread_set_per_cpu(worker->task, pool->cpu);
> + kthread_unpark(worker->task);

I feel nervous about using kthread_park() here and kthread_parkme() in
the worker thread. Adding kthread_should_park() to the fast path also
worries me.

How about using a new KTHREAD_XXXX flag instead of KTHREAD_IS_PER_CPU,
so that we can set and clear KTHREAD_XXXX freely, especially before
set_cpus_allowed_ptr()?

For example, we can add a new KTHREAD_ACTIVE_MASK_ONLY flag, which
means that even when

	is_per_cpu_kthread() && the_cpu_is_online() &&
	the_cpu_is_not_active() && KTHREAD_ACTIVE_MASK_ONLY

holds, we should still break the affinity.

That way we can simply set KTHREAD_ACTIVE_MASK_ONLY in unbind_workers()
and clear it here, and avoid adding new synchronization such as
kthread_park().
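
Only a rough sketch of what I have in mind (kthread_set_active_mask_only()
is a made-up helper that just sets/clears the new flag, much like
kthread_set_per_cpu() does for KTHREAD_IS_PER_CPU):

	/* unbind_workers(): mark the worker; set_cpus_allowed_ptr() and
	 * the scheduler handle the affinity as before */
	for_each_pool_worker(worker, pool) {
		kthread_set_active_mask_only(worker->task, true);
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  cpu_possible_mask) < 0);
	}

	/* rebind_workers(): restore the per-cpu affinity, no park/unpark */
	for_each_pool_worker(worker, pool) {
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  pool->attrs->cpumask) < 0);
		kthread_set_active_mask_only(worker->task, false);
	}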

> }
>
> raw_spin_lock_irq(&pool->lock);