Re: [PATCH] irq: Add node_affinity CPU masks for smarter irqbalance hints

From: Eric Dumazet
Date: Tue Nov 24 2009 - 14:01:47 EST


Peter P Waskiewicz Jr wrote:

> That's exactly what we're doing in our 10GbE driver right now (isn't
> pushed upstream yet, still finalizing our testing). We spread to all
> NUMA nodes in a semi-intelligent fashion when allocating our rings and
> buffers. The last piece is ensuring the interrupts tied to the various
> queues all route to the NUMA nodes those CPUs belong to. irqbalance
> needs some kind of hint to make sure it does the right thing, which
> today it does not.

sk_buff allocations should be done on the node of the cpu handling rx interrupts.
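Refilling from NAPI context on the CPU that took the interrupt already gives you that for free. A rough sketch (my_rx_ring and my_rx_refill are made-up names, not taken from any real driver):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct my_rx_ring {			/* illustrative driver-private state */
	unsigned int buf_len;
	/* ... descriptors, buffer info, ... */
};

static void my_rx_refill(struct net_device *dev, struct my_rx_ring *ring)
{
	struct sk_buff *skb;

	/*
	 * Called from NAPI poll on the CPU that handled the RX interrupt,
	 * so a plain allocation is already local to that CPU's node.
	 */
	skb = netdev_alloc_skb(dev, ring->buf_len + NET_IP_ALIGN);
	if (!skb)
		return;

	skb_reserve(skb, NET_IP_ALIGN);
	/* ... map the buffer and hand it to the hardware ... */
}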

For rings, I am OK with irqbalance and driver cooperation, in case the admin
doesn't want to change the defaults.
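
The driver side of such a cooperation can stay small. A rough sketch, assuming a per-IRQ hint interface along the lines of what this patch proposes (irq_set_node_affinity() is a hypothetical name standing in for it; cpumask_of_node() and struct msix_entry are existing kernel pieces, my_adapter and my_ring are made up):

#include <linux/topology.h>
#include <linux/pci.h>

struct my_ring {
	int numa_node;			/* node this ring's memory lives on */
	/* ... descriptors, buffers, ... */
};

struct my_adapter {
	int num_queues;
	struct my_ring *ring;
	struct msix_entry *msix_entries;
	/* ... */
};

static void my_hint_irqs(struct my_adapter *adapter)
{
	int i;

	for (i = 0; i < adapter->num_queues; i++) {
		int node = adapter->ring[i].numa_node;

		/*
		 * Tell irqbalance which CPUs make sense for this vector;
		 * it is then free to pick any CPU within that node.
		 * (Hypothetical call, standing in for the proposed hint.)
		 */
		irq_set_node_affinity(adapter->msix_entries[i].vector,
				      cpumask_of_node(node));
	}
}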

>
> I don't see how this is complex though. Driver loads, allocates across
> the NUMA nodes for optimal throughput, then writes CPU masks for the
> NUMA nodes each interrupt belongs to. irqbalance comes along and looks
> at the new mask "hint," and then balances that interrupt within that
> hinted mask.

So the NUMA policy is set by the driver at load time?

An admin might choose to direct all NIC traffic to a given node, because
the machine has a mixed workload: three nodes out of four for the database
workload, one node for network I/O...

So if an admin changes smp_affinity, is your driver able to reconfigure itself
and re-allocate all its rings on the NUMA node chosen by the admin? This is
what I would call complex.
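
Roughly, something like the sketch below would be involved per queue (the my_* helpers, and the my_adapter/my_ring structures from the sketch above, are placeholders for non-trivial driver work: quiescing the hardware, freeing and re-allocating descriptors and buffers node-locally, restarting the queue):

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/topology.h>

static void my_stop_queue(struct my_adapter *adapter, int q);	/* quiesce HW + NAPI */
static void my_start_queue(struct my_adapter *adapter, int q);	/* re-arm and restart */
static void my_free_ring(struct my_ring *ring);			/* tear down old memory */
static int my_alloc_ring(struct my_ring *ring, int node);	/* kzalloc_node()/DMA on 'node' */

static int my_move_ring_to_mask(struct my_adapter *adapter, int q,
				const struct cpumask *new_mask)
{
	int node = cpu_to_node(cpumask_first(new_mask));
	struct my_ring *ring = &adapter->ring[q];

	if (node == ring->numa_node)
		return 0;			/* nothing to do */

	my_stop_queue(adapter, q);
	my_free_ring(ring);

	ring->numa_node = node;
	if (my_alloc_ring(ring, node))
		return -ENOMEM;

	my_start_queue(adapter, q);
	return 0;
}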