Re: [RFC 4/6] softirq: Run per-group per-cpu ksoftirqd thread

From: Mike Galbraith
Date: Thu Jan 18 2018 - 12:01:06 EST


On Thu, 2018-01-18 at 16:12 +0000, Dmitry Safonov wrote:
>
> diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> index 2ea09896bd6e..17e1a04445fa 100644
> --- a/include/linux/interrupt.h
> +++ b/include/linux/interrupt.h
> @@ -508,11 +508,21 @@ extern void __raise_softirq_irqoff(unsigned int nr);
> extern void raise_softirq_irqoff(unsigned int nr);
> extern void raise_softirq(unsigned int nr);
>
> -DECLARE_PER_CPU(struct task_struct *, ksoftirqd);
> +extern struct task_struct *__percpu **ksoftirqd;
> +extern unsigned nr_softirq_groups;
>
> -static inline struct task_struct *this_cpu_ksoftirqd(void)
> +extern bool servicing_softirq(unsigned nr);
> +static inline bool current_is_ksoftirqd(void)
> {
> -	return this_cpu_read(ksoftirqd);
> +	unsigned i;
> +
> +	if (!ksoftirqd)
> +		return false;
> +
> +	for (i = 0; i < nr_softirq_groups; i++)
> +		if (*this_cpu_ptr(ksoftirqd[i]) == current)
> +			return true;
> +	return false;
> }

I haven't read all this, but in a quick drive-by this poked me in the
eye.  For the RT tree's fully threaded softirqs, I stole a ->flags bit
to identify softirq threads a la PF_KTHREAD (PF_KSOFTIRQD).  In
previous versions, I added a bit field to do the same; either is
quicker than rummaging through per-cpu pointers.
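
For reference, a minimal sketch of the flag approach (the bit value
below is illustrative, not a real mainline flag, and PF_KSOFTIRQD is
the RT-tree name):

/* Sketch only: pick a genuinely unused task_struct->flags bit. */
#define PF_KSOFTIRQD	0x04000000	/* I am a ksoftirqd thread */

/* Set once in the ksoftirqd thread itself, at thread startup: */
current->flags |= PF_KSOFTIRQD;

/* The check is then O(1), no per-cpu pointer walk: */
static inline bool current_is_ksoftirqd(void)
{
	return current->flags & PF_KSOFTIRQD;
}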

-Mike