Re: [PATCH net-next v2 2/5] net: dsa: add out-of-band tagging protocol

From: Florian Fainelli
Date: Sat May 14 2022 - 12:33:53 EST


Hi Maxime,

On 5/14/2022 8:06 AM, Maxime Chevallier wrote:
This tagging protocol is designed for situations where the link
between the MAC and the switch is such that the destination port,
which is usually embedded in some part of the Ethernet header, is sent
out-of-band and isn't present in the Ethernet frame at all.

This can happen when the MAC and switch are tightly integrated on an
SoC, as is the case with the Qualcomm IPQ4019 for example, where the
DSA tag is inserted directly into the DMA descriptors. In that case,
the MAC driver is responsible for sending the tag to the switch using
the out-of-band medium. To do so, the MAC driver needs to know the
destination port for that skb.

This out-of-band tagging protocol uses the very beginning of the skb
headroom to store the tag. The drawback of this approach is that the
headroom isn't initialized upon allocation, so there is a chance that
the garbage data lying there at allocation time resembles a valid oob
tag. This is only problematic if we are sending/receiving traffic on
the master port, which isn't a valid DSA use-case to begin with. When
dealing with traffic to/from a slave port, the oob tag will be
initialized properly by the tagger or the MAC driver through the use
of the dsa_oob_tag_push() call.
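
If I am reading the above correctly, on transmit the MAC driver would end up doing roughly the following. Everything in this sketch except dsa_oob_tag_push(), which your changelog names, uses layouts and names I made up just to illustrate the flow:

#include <linux/errno.h>
#include <linux/skbuff.h>

/* All names and the layout below are illustrative guesses on my side,
 * not the definitions from this series.
 */
#define EXAMPLE_OOB_PROTO	0x00d5		/* hypothetical magic value */

struct example_oob_tag {
	u16 proto;		/* hypothetical: marks a valid oob tag */
	u16 dport;		/* hypothetical: destination switch port */
};

struct example_dma_desc {
	u32 dst_port;		/* hypothetical descriptor field for the port */
};

/* Transmit path: pick up the tag that the tagger stored with
 * dsa_oob_tag_push() at the very start of the headroom and translate
 * it into the DMA descriptor.
 */
static int example_mac_xmit_oob(struct sk_buff *skb,
				struct example_dma_desc *desc)
{
	struct example_oob_tag *tag = (struct example_oob_tag *)skb->head;

	if (tag->proto != EXAMPLE_OOB_PROTO)
		return -EINVAL;

	desc->dst_port = tag->dport;
	return 0;
}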

What I like about your approach is that you have aligned the way an out of band switch tag is communicated to the networking stack with the way an "in-band" switch tag would be communicated. I think this is a good way forward to provide the out of band tag, and I don't think it creates a performance problem, because the Ethernet frame is hot in the cache (dma_unmap_single()) and we already have an "expensive" read of the DMA descriptor in coherent memory anyway.

You could possibly optimize the data flow a bit, and limit the amount of sk_buff data movement, by asking your Ethernet controller to DMA the frame N bytes into the beginning of the data buffer. That way, if you have reserved, say, 2 bytes at the front of the data buffer, you can deposit the QCA tag there and you do not need to push, process the tag, then pop it; you just process and pop. You could even consider using the 2-byte stuffing that the Ethernet controller might be adding to the beginning of the Ethernet frame, to align the IP header on a 4-byte boundary, to hold the tag.
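
A rough sketch of what I mean on the receive side; every name here is mine and made up for the example, the only real kernel interfaces used are netdev_alloc_skb()/skb_reserve()/skb_put()/skb_push():

#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define EXAMPLE_TAG_LEN	2	/* hypothetical: room kept in front of the frame */

/* Allocate the rx buffer with a small gap in front of where the hardware
 * is programmed to DMA the frame; this gap could double as the 2-byte
 * stuffing used to 4-byte align the IP header.
 */
static struct sk_buff *example_rx_refill(struct net_device *dev,
					 unsigned int frame_len)
{
	struct sk_buff *skb;

	skb = netdev_alloc_skb(dev, EXAMPLE_TAG_LEN + frame_len);
	if (!skb)
		return NULL;

	skb_reserve(skb, EXAMPLE_TAG_LEN);
	return skb;
}

/* On completion, deposit the tag read out of the DMA descriptor in the
 * reserved gap, right in front of the frame, so the tagger only has to
 * process and pop it.
 */
static void example_rx_complete(struct sk_buff *skb, u16 desc_port_tag,
				unsigned int frame_len)
{
	__be16 *tag;

	skb_put(skb, frame_len);

	tag = (__be16 *)skb_push(skb, EXAMPLE_TAG_LEN);
	*tag = cpu_to_be16(desc_port_tag);
}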

If we want to have a generic out of band tagger like you propose, it seems to me that we will need to invent a synthetic DSA tagging format which is the largest common denominator of the out of band tags that we want to support. We could imagine making the representation more compact, for instance by using a u8 to store a bitmask of ports (which works for both RX and TX) and another u8 for the various packet forwarding reasons.
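
Purely as a strawman (nothing here exists today, the names are just for illustration), I am thinking of something like:

#include <linux/types.h>

/* Strawman synthetic oob tag, common to all out of band taggers. */
struct dsa_oob_synthetic_tag_example {
	u8 ports;	/* bitmask of source (rx) or destination (tx) ports */
	u8 reason;	/* packet forwarding/trapping reason */
};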

Then we would request the various Ethernet MAC drivers to marshal their proprietary tag into the synthetic DSA one on receive, and unmarshal it on transmit.
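
On an IPQ4019-style integration that would boil down to a small conversion in the MAC driver, along the lines of the sketch below; the descriptor bit layout there is invented for the sake of the example:

/* Strawman conversion helpers built on the example struct above; the
 * descriptor bit layout is made up for illustration.
 */
static void example_desc_to_synthetic(u32 rx_desc_word,
				      struct dsa_oob_synthetic_tag_example *tag)
{
	/* hypothetical: source port number in bits 2:0 of the rx descriptor */
	tag->ports = 1U << (rx_desc_word & 0x7);
	tag->reason = 0;
}

static u32 example_synthetic_to_desc(const struct dsa_oob_synthetic_tag_example *tag)
{
	/* hypothetical: destination port bitmap in bits 7:0 of the tx descriptor */
	return tag->ports;
}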

Another approach, which IMHO may help the maintainability of the code moving forward as well as ensure that all Ethernet switch tagging code lives in one place, is to teach each tagger driver how to optimize its data path to minimize the amount of data movement and checksum re-calculation. This is what I had in mind a few years ago:

https://lore.kernel.org/lkml/1438322920.20182.144.camel@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/T/

This might scale a little less well, and maybe it makes too many assumptions as to where and how the checksums are calculated on the packet contents, but at least you don't have logic processing the same type of switch tag scattered between the Ethernet MAC drivers (beyond copying/pushing) and the DSA switch taggers.

I would like to hear others' opinions on this.
--
Florian