Re: [PATCH RFC v1 1/2] genirq/affinity: add support for limiting managed interrupts
From: Ming Lei
Date: Thu Oct 31 2024 - 06:51:24 EST
On Thu, Oct 31, 2024 at 6:35 PM Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
>
> On Thu, Oct 31 2024 at 15:46, guanjun@xxxxxxxxxxxxxxxxx wrote:
> > #ifdef CONFIG_SMP
> >
> > +static unsigned int __read_mostly managed_irqs_per_node;
> > +static struct cpumask managed_irqs_cpumsk[MAX_NUMNODES] __cacheline_aligned_in_smp = {
> > + [0 ... MAX_NUMNODES-1] = {CPU_BITS_ALL}
> > +};
> >
> > +static void __group_prepare_affinity(struct cpumask *premask,
> > + cpumask_var_t *node_to_cpumask)
> > +{
> > + nodemask_t nodemsk = NODE_MASK_NONE;
> > + unsigned int ncpus, n;
> > +
> > + get_nodes_in_cpumask(node_to_cpumask, premask, &nodemsk);
> > +
> > + for_each_node_mask(n, nodemsk) {
> > + cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], premask);
> > + cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], node_to_cpumask[n]);
>
> How is this managed_irqs_cpumsk array protected against concurrency?
>
> > + ncpus = cpumask_weight(&managed_irqs_cpumsk[n]);
> > + if (ncpus < managed_irqs_per_node) {
> > + /* Reset node n to current node cpumask */
> > + cpumask_copy(&managed_irqs_cpumsk[n], node_to_cpumask[n]);
>
> This whole logic is incomprehensible and aside from the concurrency
> problem it's broken when CPUs are made present at run-time because these
> cpu masks are static and represent the stale state of the last
> invocation.
>
> Given the limitations of the x86 vector space, which is not going away
> anytime soon, there are only two options IMO to handle such a scenario.
>
> 1) Tell the nvme/block layer to disable queue affinity management
+1
There are other use cases, such as CPU isolation, which can benefit from
this approach too:
https://lore.kernel.org/linux-nvme/20240702104112.4123810-1-ming.lei@xxxxxxxxxx/
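
Just for illustration, a minimal sketch of what such a driver-level
opt-out could look like (the "unmanaged_affinity" knob and the helper
below are purely hypothetical, not existing nvme code); the only point
is that not passing PCI_IRQ_AFFINITY gives ordinary, user-steerable
vectors instead of kernel-managed ones:

#include <linux/module.h>
#include <linux/pci.h>

/* Hypothetical opt-out knob, for illustration only. */
static bool unmanaged_affinity;
module_param(unmanaged_affinity, bool, 0444);

static int example_setup_irqs(struct pci_dev *pdev, unsigned int nr_vecs,
			      struct irq_affinity *affd)
{
	if (unmanaged_affinity)
		/* Plain vectors: affinity is left to userspace/irqbalance. */
		return pci_alloc_irq_vectors(pdev, 1, nr_vecs,
					     PCI_IRQ_ALL_TYPES);

	/* Default: vectors are spread and pinned by the managed-irq code. */
	return pci_alloc_irq_vectors_affinity(pdev, 1, nr_vecs,
					      PCI_IRQ_ALL_TYPES |
					      PCI_IRQ_AFFINITY, affd);
}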
Thanks,
Ming