Re: Scaling problem with a lot of AF_PACKET sockets on different interfaces
From: Daniel Borkmann
Date: Fri Jun 07 2013 - 10:33:25 EST
On 06/07/2013 04:17 PM, Vitaly V. Bursov wrote:
07.06.2013 16:05, Daniel Borkmann wrote:
[...]
Ideas are welcome :)
Probably that depends on _your scenario_ and/or BPF filter, but would it be
an alternative to have only a few packet sockets (maybe one pinned to each
CPU) and cluster/load-balance them together via packet fanout? (Where you
bind the socket to ifindex 0, so that you get traffic from all devices...) That
would at least avoid that "hot spot", and you could post-process the interface
via sockaddr_ll. But I'd agree that this will not solve the actual problem you've
observed. ;-)
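
For illustration, a minimal sketch of such a setup, assuming an arbitrary fanout
group id and CPU-based balancing: one AF_PACKET socket per CPU, each bound to
ifindex 0 (all devices) and joined into a common PACKET_FANOUT group so the
kernel load-balances packets across them.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>

static int open_fanout_socket(int fanout_group_id)
{
	struct sockaddr_ll ll;
	int fanout_arg, fd;

	fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	if (fd < 0) {
		perror("socket");
		return -1;
	}

	/* sll_ifindex = 0: receive traffic from all interfaces */
	memset(&ll, 0, sizeof(ll));
	ll.sll_family = AF_PACKET;
	ll.sll_protocol = htons(ETH_P_ALL);
	ll.sll_ifindex = 0;
	if (bind(fd, (struct sockaddr *)&ll, sizeof(ll)) < 0) {
		perror("bind");
		close(fd);
		return -1;
	}

	/* join the fanout group, here with CPU-based load balancing */
	fanout_arg = fanout_group_id | (PACKET_FANOUT_CPU << 16);
	if (setsockopt(fd, SOL_PACKET, PACKET_FANOUT,
		       &fanout_arg, sizeof(fanout_arg)) < 0) {
		perror("setsockopt(PACKET_FANOUT)");
		close(fd);
		return -1;
	}
	return fd;
}

int main(void)
{
	int i, ncpus = sysconf(_SC_NPROCESSORS_ONLN);

	for (i = 0; i < ncpus; i++) {
		/* 0x1234 is an arbitrary group id for this sketch */
		int fd = open_fanout_socket(0x1234);

		if (fd < 0)
			exit(1);
		/* each fd would normally go to a worker pinned to CPU i,
		 * which reads via recvfrom() and learns the ingress
		 * interface from struct sockaddr_ll */
	}
	return 0;
}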
I wasn't aware of the ifindex 0 thing, it can help, thanks! Of course, if it
works for me (the application is a custom DHCP server) it'll surely increase
the BPF overhead (I don't need to tap the traffic from all interfaces); there
are VLANs, bridges and bonds, so the server will likely receive the same
packets multiple times, and replies must be sent too... but it should still
be faster.
Well, as already said, if you use a fanout socket group, then you won't receive the
_exact_ same packet twice. Rather, packets are balanced by different policies among
your packet sockets in that group. What you could do is have (e.g.) a single BPF
filter (jitted) for all those sockets that lets the needed packets pass; you can then
access the interface a packet came from via sockaddr_ll and process it further
in your fast path (or drop it depending on the iface). There's also a BPF extension
(BPF_S_ANC_IFINDEX) that lets you load the ifindex of the skb into the BPF accumulator,
so you could also filter early from there for a range of ifindexes (in combination
with binding the sockets to ifindex 0). Probably that could work.
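
For illustration, a rough sketch of such an early ifindex filter, assuming a
hypothetical allowed range [MIN_IFINDEX, MAX_IFINDEX]: from user space the
ancillary load corresponding to BPF_S_ANC_IFINDEX is written as an absolute
load at SKF_AD_OFF + SKF_AD_IFINDEX, and the program is attached with
SO_ATTACH_FILTER.

#include <string.h>
#include <sys/socket.h>
#include <linux/filter.h>

#define MIN_IFINDEX 2   /* hypothetical lower bound */
#define MAX_IFINDEX 10  /* hypothetical upper bound */

static int attach_ifindex_filter(int fd)
{
	struct sock_filter code[] = {
		/* A = skb->dev->ifindex (ancillary load) */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
			 SKF_AD_OFF + SKF_AD_IFINDEX),
		/* if (A < MIN_IFINDEX) goto drop */
		BPF_JUMP(BPF_JMP | BPF_JGE | BPF_K, MIN_IFINDEX, 0, 2),
		/* if (A > MAX_IFINDEX) goto drop */
		BPF_JUMP(BPF_JMP | BPF_JGT | BPF_K, MAX_IFINDEX, 1, 0),
		/* accept: pass up to 64k bytes of the packet */
		BPF_STMT(BPF_RET | BPF_K, 0xffff),
		/* drop */
		BPF_STMT(BPF_RET | BPF_K, 0),
	};
	struct sock_fprog prog = {
		.len = sizeof(code) / sizeof(code[0]),
		.filter = code,
	};

	return setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
			  &prog, sizeof(prog));
}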