Randy Li wrote:
We need the queue index in the qdisc mapping rule. There is no way to fetch that.

Willem de Bruijn wrote:
In which command exactly?

Randy Li wrote:
That is for sch_multiq. Here is an example:

tc qdisc add dev tun0 root handle 1: multiq
tc filter add dev tun0 parent 1: protocol ip prio 1 u32 match ip dst 172.16.10.1 action skbedit queue_mapping 0
tc filter add dev tun0 parent 1: protocol ip prio 1 u32 match ip dst 172.16.10.20 action skbedit queue_mapping 1
tc filter add dev tun0 parent 1: protocol ip prio 1 u32 match ip dst 172.16.10.10 action skbedit queue_mapping 2
On 2024/7/31 22:12, Willem de Bruijn wrote:
If using an IFF_MULTI_QUEUE tun device, packets are automatically load balanced across the multiple queues, in tun_select_queue.

If you want more explicit queue selection than by rxhash, tun supports TUNSETSTEERINGEBPF.
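For illustration, attaching a steering program is only a few lines (a rough, untested sketch): the program must be of type BPF_PROG_TYPE_SOCKET_FILTER, tun uses its return value modulo the number of enabled queues as the queue index, and the program fd is handed over with TUNSETSTEERINGEBPF.

	/* Sketch: steer every packet to queue 0 via TUNSETSTEERINGEBPF. */
	#include <linux/bpf.h>
	#include <linux/if_tun.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static int steer_all_to_queue0(int tun_fd)
	{
		struct bpf_insn insns[] = {
			/* r0 = 0 -> always pick queue 0 */
			{ .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = 0 },
			/* return r0 */
			{ .code = BPF_JMP | BPF_EXIT },
		};
		union bpf_attr attr;
		int prog_fd;

		memset(&attr, 0, sizeof(attr));
		attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
		attr.insns     = (__u64)(unsigned long)insns;
		attr.insn_cnt  = sizeof(insns) / sizeof(insns[0]);
		attr.license   = (__u64)(unsigned long)"GPL";

		prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
		if (prog_fd < 0)
			return -1;

		/* tun reads the program fd through the pointer argument. */
		return ioctl(tun_fd, TUNSETSTEERINGEBPF, &prog_fd);
	}

A real steering program would of course compute the queue from the packet headers rather than return a constant.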
Randy Li wrote:
I know about this eBPF mechanism, but I am a newbie to eBPF as well and have not figured out how to configure it dynamically.

The purpose here is to take advantage of multiple threads. On the server side (the gateway of the tunnel's subnet), a different peer usually involves a different encryption/decryption key pair, so it would be better to handle each peer in its own thread. Otherwise the application would need to implement a dispatcher itself.

Willem de Bruijn wrote:
A thread in which context? Or do you mean queue?

Randy Li wrote:
A thread in userspace. Each thread is responsible for one queue.
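A minimal sketch of such a per-queue worker, assuming the TUNGETQUEUEINDEX ioctl proposed in this patch (the constant is not in any released uapi header, so it would have to come from the patched linux/if_tun.h):

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/if_tun.h>	/* patched header providing TUNGETQUEUEINDEX */

	static void *queue_worker(void *arg)
	{
		int queue_fd = *(int *)arg;
		unsigned int queue_index;

		if (ioctl(queue_fd, TUNGETQUEUEINDEX, &queue_index) < 0) {
			perror("TUNGETQUEUEINDEX");
			return NULL;
		}

		/* queue_index can now be matched against the skbedit
		 * queue_mapping values installed with tc above, and the
		 * thread can load the key pair for the peer routed here. */
		printf("worker serving queue %u\n", queue_index);

		/* ... read()/write() tunnel frames on queue_fd ... */
		return NULL;
	}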
On 2024/8/1 05:57, Willem de Bruijn wrote:
Lack of experience with an existing interface is insufficient reason to introduce another interface, of course.

nits:
- INDX->INDEX. It's correct in the code.
- prefix networking patches with the target tree: PATCH net-next

Randy Li wrote:
I see.

I will look into it. I wish I did not need the patch that keeps the queue index unchanged. Besides, I think I still need to know which queue is the target in eBPF.
Willem de Bruijn wrote:
See SKF_AD_QUEUE for classic BPF and __sk_buff queue_mapping for eBPF.
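As a sketch of that (the attach point, e.g. TUNATTACHFILTER or SO_ATTACH_FILTER, is just one possibility): a classic BPF filter can read the packet's queue with the SKF_AD_QUEUE ancillary load, and an eBPF program reads the same value as skb->queue_mapping.

	#include <linux/filter.h>

	/* Accept only packets steered to queue 2, drop the rest. */
	static struct sock_filter only_queue2[] = {
		/* A = skb->queue_mapping */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS, SKF_AD_OFF + SKF_AD_QUEUE),
		/* if (A != 2) jump to drop */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 2, 0, 1),
		BPF_STMT(BPF_RET | BPF_K, 0xffffffff),	/* accept whole packet */
		BPF_STMT(BPF_RET | BPF_K, 0),		/* drop */
	};

	static const struct sock_fprog only_queue2_prog = {
		.len    = sizeof(only_queue2) / sizeof(only_queue2[0]),
		.filter = only_queue2,
	};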
Willem de Bruijn wrote:
Not opposed to exposing the queue index if there is a need. Not sure yet that there is.

Also, since for an IFF_MULTI_QUEUE device the queue_id is just assigned iteratively, it can also be inferred without an explicit call.
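That is, the first fd attached with IFF_MULTI_QUEUE serves queue 0, the next queue 1, and so on, so the index can be tracked from the attach order as long as no queue is detached in between. A sketch:

	#include <fcntl.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/if.h>
	#include <linux/if_tun.h>

	#define NUM_QUEUES 3

	static int open_queues(const char *name, int fds[NUM_QUEUES])
	{
		struct ifreq ifr;

		memset(&ifr, 0, sizeof(ifr));
		strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
		ifr.ifr_flags = IFF_TUN | IFF_NO_PI | IFF_MULTI_QUEUE;

		for (int i = 0; i < NUM_QUEUES; i++) {
			fds[i] = open("/dev/net/tun", O_RDWR);
			if (fds[i] < 0 || ioctl(fds[i], TUNSETIFF, &ifr) < 0)
				return -1;
			/* fds[i] now serves queue i, matching the skbedit
			 * queue_mapping values in the tc filters above. */
		}
		return 0;
	}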
Randy Li wrote:
I am a newbie to tc(8); I verified the commands above with a tun-type, multi-threaded demo. But I do not know how to drop the unwanted ingress filter here, so queue 0 may be a little broken.

I don't think there is any ordering guarantee when creating multiple queues, unless the application uses an explicit lock itself.

Inferring the index also becomes a problem when a queue is disabled: the last queue's index is swapped into the disabled slot, so the queue index has to be fetched again and the qdisc flow rule has to be updated.

Could I submit a ***new*** PATCH which would leave a hole in the queue numbering instead of swapping? The same would apply when re-enabling the queue.
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 1d06c560c5e6..5473a0fca2e1 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -3115,6 +3115,10 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
 		if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
 			return -EPERM;
 		return open_related_ns(&net->ns, get_net_ns);
+	} else if (cmd == TUNGETQUEUEINDEX) {
+		if (tfile->detached)
+			return -EINVAL;
+		return put_user(tfile->queue_index, (unsigned int __user *)argp);
Willem de Bruijn wrote:
Unless you're certain that these fields can be read without RTNL, move this below the rtnl_lock() statement.

Randy Li wrote:
Would fix in v2.

I was trying not to hold the global lock for a long period; that is why I didn't make v2 yesterday. When I wrote this, I saw the ioctl() TUNSETQUEUE -> tun_attach() path above taking it. Is the rtnl_lock() scope a light enough lock here?