Re: [PATCH v6] x86/apic: limit irq affinity

From: Dimitri Sivanich
Date: Thu Dec 03 2009 - 12:19:57 EST


On Thu, Dec 03, 2009 at 09:07:21AM -0800, Waskiewicz Jr, Peter P wrote:
> On Thu, 3 Dec 2009, Dimitri Sivanich wrote:
>
> > On Thu, Dec 03, 2009 at 08:53:23AM -0800, Waskiewicz Jr, Peter P wrote:
> > > On Thu, 3 Dec 2009, Dimitri Sivanich wrote:
> > >
> > > > On Wed, Nov 25, 2009 at 07:40:33AM -0800, Arjan van de Ven wrote:
> > > > > On Tue, 24 Nov 2009 09:41:18 -0800
> > > > > ebiederm@xxxxxxxxxxxx (Eric W. Biederman) wrote:
> > > > > > Oii.
> > > > > >
> > > > > > I don't think it is bad to export information to applications like
> > > > > > irqbalance.
> > > > > >
> > > > > > I think it pretty horrible that one of the standard ways I have heard
> > > > > > to improve performance on 10G nics is to kill irqbalance.
> > > > >
> > > > > irqbalance does not move networking irqs; if it does there's something
> > > > > evil going on in the system. But thanks for the bugreport ;)
> > > >
> > > > It does move networking irqs.
> > > >
> > > > >
> > > > > we had that; it didn't work.
> > > > > what I'm asking for is for the kernel to expose the numa information;
> > > > > right now that is the piece that is missing.
> > > > >
> > > >
> > > > I'm wondering whether we should expose that NUMA information in the form of a node, the set of allowed cpus, or both.
> > > >
> > > > I'm guessing 'both' is the correct answer, so that apps like irqbalance can make a qualitative decision based on the node (affinity to cpus on this node is better), but an absolute decision based on allowed cpus (I cannot change affinity to anything but this set of cpus).
> > >
> > > That's exactly what my patch in the thread "irq: Add node_affinity CPU
> > > masks for smarter irqbalance hints" is doing. I've also done the
> > > irqbalance changes based on that kernel patch, and Arjan currently has
> > > that patch.
> >
> > So if I understand correctly, your patch takes care of the qualitative portion (we prefer to set affinity to these cpus, which may span more than one node), but not the restrictive portion (we cannot change affinity to anything but these cpus)?
>
> That is correct. The patch provides an interface to both the kernel
> (functions) and /proc for userspace to set a CPU mask. That is the
> preferred mask for the interrupt to be balanced on. Then irqbalance will
> make decisions on how to balance within that provided mask, if it in fact
> has been provided.

What if it's not provided? Will irqbalance then fall back to making its decisions based on the numa_node of that irq (I would hope so)?
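
To make that fallback concrete, here's a rough userspace sketch of what I mean. It assumes the kernel exported the node as /proc/irq/<N>/node, which is exactly what's being proposed here, not something you can count on today; the sysfs cpumap file it reads does already exist and uses the same hex-mask format that smp_affinity accepts:

/*
 * Rough sketch only: derive an affinity mask for an irq from its
 * NUMA node when no preferred mask has been provided.  Assumes the
 * proposed /proc/irq/<N>/node file exists.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	char path[64], mask[512];
	int irq, node = -1;
	FILE *f;

	if (argc != 2)
		return 1;
	irq = atoi(argv[1]);

	snprintf(path, sizeof(path), "/proc/irq/%d/node", irq);
	f = fopen(path, "r");
	if (!f || fscanf(f, "%d", &node) != 1 || node < 0)
		return 1;	/* no node info: balance across all cpus */
	fclose(f);

	snprintf(path, sizeof(path),
		 "/sys/devices/system/node/node%d/cpumap", node);
	f = fopen(path, "r");
	if (!f || !fgets(mask, sizeof(mask), f))
		return 1;
	fclose(f);

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f)
		return 1;
	fputs(mask, f);		/* pin the irq to its home node's cpus */
	fclose(f);
	return 0;
}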

Also, can we add the restricted mask I mentioned above to this scheme? If we cannot send an IRQ to some node, we don't want to bother attempting to change affinity to cpus on that node (hopefully kernel code will eventually enforce this restriction).
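
Kernel-side, the kind of clamp I have in mind would look roughly like this; irq_allowed_mask() is made up purely for illustration, nothing like it exists yet:

/*
 * Sketch only: clamp a requested affinity against a per-irq
 * "allowed" mask before applying it.  irq_allowed_mask() is a
 * hypothetical accessor for the restriction discussed above.
 */
static int irq_clamp_affinity(struct irq_desc *desc,
			      const struct cpumask *requested,
			      struct cpumask *effective)
{
	/* drop cpus the irq can never be routed to */
	cpumask_and(effective, requested, irq_allowed_mask(desc));

	/* refuse the request outright if nothing legal remains */
	if (cpumask_empty(effective))
		return -EINVAL;

	return 0;	/* caller applies 'effective' as usual */
}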

As a matter of fact, drivers allocating rings, buffers, and queues should optimally be made aware of the restriction as well, so those structures don't end up on nodes the irq can never target.
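
For example, something along these lines in a driver (the ring structure and function names are made up) would keep the descriptor ring on the device's home node rather than wherever the probing cpu happens to be running:

/*
 * Sketch: allocate the descriptor ring on the device's home node
 * instead of the probing cpu's node.
 */
#include <linux/slab.h>
#include <linux/device.h>

struct rx_desc {
	u64 addr;
	u32 len;
	u32 flags;
};

static struct rx_desc *alloc_rx_ring(struct device *dev, int entries)
{
	/* dev_to_node() may return -1; kzalloc_node() copes with that */
	return kzalloc_node(entries * sizeof(struct rx_desc),
			    GFP_KERNEL, dev_to_node(dev));
}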
