On Wed, 6 Jan 2021 10:46:43 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
> On 2021/1/5 8:42 PM, Xuan Zhuo wrote:
>> On Tue, 5 Jan 2021 17:32:19 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
>>> On 2021/1/5 5:11 PM, Xuan Zhuo wrote:
>>>> The first patch made some adjustments to xsk.

>>> Thanks a lot for the work. It's rather interesting.

>>>> The second patch itself can be used as an independent patch to solve the
>>>> problem that XDP may fail to load when the number of queues is insufficient.

>>> It would be better to send this as a separate patch. Several people have
>>> asked for this before.

>>>> The third to last patch implements support for xsk in virtio-net.
>>>>
>>>> A practical problem with virtio is that tx interrupts are not very reliable.
>>>> There will always be some missing or delayed tx interrupts, so I specially
>>>> added a timer to handle this. Of course, considering performance, the timer
>>>> only triggers when the ring of the network card is full.

>>> This is sub-optimal. We need to figure out the root cause; we haven't met
>>> such an issue before.

>>> Several questions:
>>>
>>> - is tx interrupt enabled?
>>> - can you still see the issue if you disable event index?
>>> - what backend did you use? qemu or vhost(user)?

>> Sorry, it may just be a problem with the backend I used here. I just tested
>> the latest qemu and it did not have this problem. I think I should delete
>> the timer-related code?

> Yes, please.

>>>> Regarding the issue of virtio-net supporting xsk's zero copy rx, I am also
>>>> developing it, but I found that the modification may be relatively large,
>>>> so I decided to keep this patch set separate from the xsk zero copy rx code.

>>> That's fine, but a question here.
>>>
>>> How is multiqueue being handled here? I'm asking since there is no
>>> programmable filters/directors support in the virtio spec now.
>>>
>>> Thanks

>> I don't really understand what you mean. In the case of multiple queues,
>> there is no problem.

> So consider we bind xsk to queue 4: how can you make sure the traffic will
> be directed to queue 4? One possible solution is to use filters, as
> suggested in af_xdp.rst:
>
>   ethtool -N p3p2 rx-flow-hash udp4 fn
>   ethtool -N p3p2 flow-type udp4 src-port 4242 dst-port 4242 \
>       action 16
>   ...
>
> But virtio-net doesn't have any filters that could be programmed from
> the driver.
>
> Anything I missed here?
>
> Thanks

I understand what you mean; this problem does exist, and I encountered it when
I tested qemu.

First of all, the problem only affects recv, while this patch set is for xmit.
Of course, our normal business also has recv scenarios.

My solution when developing the upper-level application is to bind xsk to all
the queues, to ensure that we receive the packets we want. And I think that in
actual use, even if the network card supports filters, we should still bind
all the queues, because we don't know which queue the traffic we care about
will arrive on.
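
Something along these lines is what I mean by binding all the queues (an
untested sketch using the xsk helpers from libbpf/libxdp; setup_queue(), the
fixed queue count and "eth0" are just placeholders, not code from this series):

/* Rough sketch: bind one AF_XDP socket to every queue of the device so that
 * packets are received no matter which queue the NIC selects. Error handling
 * and fill-ring population are trimmed for brevity.
 */
#include <stdlib.h>
#include <sys/mman.h>
#include <bpf/xsk.h>

#define NUM_FRAMES	4096
#define FRAME_SIZE	XSK_UMEM__DEFAULT_FRAME_SIZE

struct queue_ctx {
	struct xsk_umem *umem;
	struct xsk_socket *xsk;
	struct xsk_ring_prod fq;	/* fill ring */
	struct xsk_ring_cons cq;	/* completion ring */
	struct xsk_ring_cons rx;
	struct xsk_ring_prod tx;
};

static int setup_queue(const char *ifname, unsigned int qid, struct queue_ctx *q)
{
	size_t size = (size_t)NUM_FRAMES * FRAME_SIZE;
	void *area = mmap(NULL, size, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (area == MAP_FAILED)
		return -1;

	/* One UMEM per queue keeps the sketch simple; sharing one UMEM across
	 * sockets would need xsk_socket__create_shared() instead. */
	if (xsk_umem__create(&q->umem, area, size, &q->fq, &q->cq, NULL))
		return -1;

	/* Bind this socket to queue 'qid' of the interface. */
	return xsk_socket__create(&q->xsk, ifname, qid, q->umem,
				  &q->rx, &q->tx, NULL);
}

int main(void)
{
	unsigned int i, nqueues = 8;	/* e.g. taken from 'ethtool -l eth0' */
	struct queue_ctx *q = calloc(nqueues, sizeof(*q));

	if (!q)
		return 1;

	for (i = 0; i < nqueues; i++)
		if (setup_queue("eth0", i, &q[i]))
			return 1;

	/* From here: populate the fill rings and poll all rx rings; the XDP
	 * program redirects via an XSKMAP indexed by ctx->rx_queue_index. */
	return 0;
}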

Regarding the virtio-net problem, I think the core question is whether we need
to deal with it in the virtio-net driver at all; I personally think we should
extend the virtio specification to define this scenario.

When I tested it, I found that some cloud vendors' implementations do guarantee
this queue selection behavior.

Thanks!!

>>>> Xuan Zhuo (5):
>>>>   xsk: support get page for drv
>>>>   virtio-net: support XDP_TX when not more queues
>>>>   virtio-net, xsk: distinguish XDP_TX and XSK XMIT ctx
>>>>   xsk, virtio-net: prepare for support xsk
>>>>   virtio-net, xsk: virtio-net support xsk zero copy tx
>>>>
>>>>  drivers/net/virtio_net.c    | 643 +++++++++++++++++++++++++++++++++++++++-----
>>>>  include/linux/netdevice.h   |   1 +
>>>>  include/net/xdp_sock_drv.h  |  10 +
>>>>  include/net/xsk_buff_pool.h |   1 +
>>>>  net/xdp/xsk_buff_pool.c     |  10 +-
>>>>  5 files changed, 597 insertions(+), 68 deletions(-)
>>>>
>>>> --
>>>> 1.8.3.1