Re: [PATCH RFC net-next 0/3] Multi-CPU DSA support
From: Tobias Waldekranz
Date: Mon Apr 12 2021 - 08:46:18 EST
On Sun, Apr 11, 2021 at 21:50, Vladimir Oltean <olteanv@xxxxxxxxx> wrote:
> On Sun, Apr 11, 2021 at 08:01:35PM +0200, Marek Behun wrote:
>> On Sat, 10 Apr 2021 15:34:46 +0200
>> Ansuel Smith <ansuelsmth@xxxxxxxxx> wrote:
>>
>> > Hi,
>> > this is a respin of the Marek series, in the hope that this time we
>> > can finally make some progress with DSA supporting multiple CPU ports.
>> >
>> > This implementation is similar to the Marek series, but with some
>> > tweaks. It adds support for multiple CPU ports, but leaves it to the
>> > driver to decide how a CPU port is assigned to the various user ports.
>> > The driver can also express no preference, in which case CPU ports are
>> > assigned in a round-robin fashion.
>>
>> In the last couple of months I have been giving some thought to this
>> problem, and came up with one important observation: if there are
>> multiple upstream ports, it would make a lot of sense to dynamically
>> reallocate them among the user ports, based on which user ports are
>> actually in use, and at what speed.
>>
>> For example on Turris Omnia we have 2 CPU ports and 5 user ports. All
>> ports support at most 1 Gbps. Round-robin would assign:
>> CPU port 0 - Port 0
>> CPU port 1 - Port 1
>> CPU port 0 - Port 2
>> CPU port 1 - Port 3
>> CPU port 0 - Port 4
>>
>> Now suppose that the user plugs Ethernet cables only into ports 0 and 2,
>> leaving ports 1, 3 and 4 free:
>> CPU port 0 - Port 0 (plugged)
>> CPU port 1 - Port 1 (free)
>> CPU port 0 - Port 2 (plugged)
>> CPU port 1 - Port 3 (free)
>> CPU port 0 - Port 4 (free)
>>
>> We end up in a situation where ports 0 and 2 share 1 Gbps of bandwidth
>> to the CPU, while the second CPU port is not used at all.
>>
>> A mechanism for automatic reassignment of CPU ports would be ideal here.
>>
>> What do you guys think?
>
> The reason I don't think this is such a great idea is that the CPU port
> assignment is a major reconfiguration step which should at the very
> least be done while the network is down, to avoid races with the data
> path (something which this series does not appear to handle). And if
> you allow the static user-port-to-CPU-port assignment to change every
> time a link goes up or down, you basically end up forcing the network
> down through the entire switch, which I don't think you really want.
>
> So I'd be tempted to say 'tough luck' if not all of your ports are up
> and the ones that are happen to be statically assigned to the same CPU
> port. It's a compromise between flexibility and simplicity, and I would
> go for simplicity here. That's the most you can achieve with static
> assignment; just put the CPU ports in a LAG if you want better dynamic
> load balancing (for details, read on below).
I agree. Unless you only have a few really wideband flows, a LAG will
typically do a great job of balancing the load, and it will do so without
the user having to do any configuration at all. It would also perform
well in "router-on-a-stick" setups, where traffic enters and leaves
through the same port.
...
> But there is something which is even more interesting about Felix with
> the ocelot-8021q tagger. Between Marek posting his RFC and Ansuel
> posting the follow-up, things have happened, and now both Felix and the
> Marvell driver support LAG offload via the bonding and/or team drivers.
> At least for Felix, when using the ocelot-8021q tagger, it should be
> possible to put the two CPU ports in a hardware LAG, the two DSA
> masters in a software LAG, and let the bond/team upper of the DSA
> masters be the CPU port.
>
> I would like us to keep the door open for both alternatives, and to have
> a way to switch between static user-to-CPU-port assignment and LAG.
> I think that if there are multiple 'ethernet = ' phandles present in the
> device tree, DSA should populate a list of valid DSA masters, and then
> call into the driver to allow it to select which master it prefers for
> each user port. This is similar to what Ansuel added with 'port_get_preferred_cpu',
> except that I chose "DSA master" and not "CPU port" for a specific reason.
> For LAG, the DSA master would be bond0.
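Just to make sure I am reading the proposal right: the hook you have in
mind would look roughly like this? (Every name below is invented on the
spot, none of it exists today.)

/* Given the list of valid masters that the core built from the
 * 'ethernet =' phandles, return the one this driver prefers for @port.
 * In the LAG case the list would simply contain bond0.
 */
struct net_device *(*port_preferred_master)(struct dsa_switch *ds,
					    int port,
					    struct net_device **masters,
					    unsigned int num_masters);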
I do not see why we would go through the trouble of creating a
user-visible bond/team for this. As you detail below, it would mean
jumping through a lot of hoops. I am not sure there is that much we can
use from those drivers.
- We know that the CPU ports are statically up, so there is no "active
  transmit set" to manage; it always consists of all the ports.
- The LAG members are statically known at boot time via the DT, so we do
  not need (or want, in fact) any management of that from userspace.

We could just let the drivers set up the LAG internally, and then do the
load balancing in dsa_slave_xmit, or provide a generic helper that the
taggers could use.
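Roughly along these lines -- just a sketch, and every name in it
(dsa_xmit_select_cpu_port, cpu_ports[], num_cpu_ports) is made up for
illustration, nothing like it exists in the tree today:

#include <linux/skbuff.h>
#include <net/dsa.h>

/* Pick the CPU port to use for an outgoing skb. Hashing on the flow
 * keeps a single flow pinned to one CPU port (no reordering), while
 * different flows spread across all of them.
 *
 * NOTE: cpu_ports[] and num_cpu_ports are invented for this sketch,
 * they are not existing members of struct dsa_switch.
 */
static inline struct dsa_port *
dsa_xmit_select_cpu_port(struct dsa_switch *ds, struct sk_buff *skb)
{
	u32 hash = skb_get_hash(skb);

	return ds->cpu_ports[hash % ds->num_cpu_ports];
}

A tagger's xmit function could then call this instead of assuming a
single, fixed CPU port, without having to know anything about how the
driver set up the LAG.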