Re: [PATCH] net: tuntap: add ioctl() TUNGETQUEUEINDX to fetch queue index

From: ayaka
Date: Thu Aug 08 2024 - 00:17:12 EST




> On Aug 8, 2024, at 11:13 AM, Willem de Bruijn <willemdebruijn.kernel@xxxxxxxxx> wrote:
>
> 
>>
>>> In that case, a tc egress tc_bpf program may be able to do both.
>>> Again, by writing to __sk_buff queue_mapping. Instead of u32 +
>>> skbedit.
>>>
>>> See also
>>>
>>> "
>>> commit 74e31ca850c1cddeca03503171dd145b6ce293b6
>>> Author: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
>>> Date: Tue Feb 19 19:53:02 2019 +0100
>>>
>>> bpf: add skb->queue_mapping write access from tc clsact
>>> "
>>>
>>> But I suppose you could prefer u32 + skbedit.
>>>
>>> Either way, the pertinent point is that you want to map some flow
>>> match to a specific queue id.
>>>
>>> This is straightforward if all queues are opened and none are closed.
>>> But it is not if queues can get detached and attached dynamically.
>>> Which I guess you encounter in practice?
>>>
>>> I'm actually not sure how the current `tfile->queue_index =
>>> tun->numqueues;` works in that case. As __tun_detach will do decrement
>>> `--tun->numqueues;`. So multiple tfiles could end up with the same
>>> queue_index. Unless dynamic detach + attach is not possible.
>>
>> It is expected to work; otherwise there is a bug.
>>
>>> But it
>>> seems it is. Jason, if you're following, do you know this?
>>
>> __tun_detach() will move the last tfile in the tfiles[] array to the
>> current tfile->queue_index, and modify its queue_index:
>>
>> rcu_assign_pointer(tun->tfiles[index],
>>                    tun->tfiles[tun->numqueues - 1]);
>> ntfile = rtnl_dereference(tun->tfiles[index]);
>> ntfile->queue_index = index;
>> rcu_assign_pointer(tun->tfiles[tun->numqueues - 1], NULL);
>>
>> --tun->numqueues;
>>
>> tun_attach() will move the detached tfile to the end of the tfiles[]
>> array and enable it:
>>
>>
>> tfile->queue_index = tun->numqueues;
>> ....
>> rcu_assign_pointer(tun->tfiles[tun->numqueues], tfile);
>> tun->numqueues++;
>>
>
> Ah right. Thanks. I had forgotten about that.
>
> So I guess an application that owns all the queues could keep track of
> the queue-id to FD mapping. But it is not trivial, nor defined ABI
> behavior.
>
> Querying the queue_id as in the proposed patch might not solve the
> challenge, though. Since an FD's queue-id may change simply because
> another queue was detached. So this would have to be queried on each
> detach.
>
Yes. When I asked about the eBPF approach, I thought I would not need the queue id in those eBPF programs; that turned out to be a misunderstanding.
Do we all agree that, no matter which filter or steering method we use here, we need a way to query the queue index assigned to an fd?
Thank you, Jason. That is why I mentioned that I may need to submit another patch to bind a queue index to a flow.

I think this is a good chance to discuss that.
As I understand the design, the number of queues is fixed in hardware devices, and also in remote-processor type wireless devices (I believe those are the modem devices).
Hashing every packet to pick a queue can consume a lot of CPU time, and it is not necessary to track every packet.
Could I add another property to struct tun_file, which the steering program would return as its wanted value? It would then be the application's job to keep this new property unique.
> I suppose one underlying question is how important is the mapping of
> flows to specific queue-id's? Is it a problem if the destination queue
> for a flow changes mid-stream?
Yes, it matters; that is why I want to use this feature. Of all the open source VPNs I know, none enables this multi-queue feature or creates more than one queue.
Virtual machines use tap devices most of the time (they want to emulate a real NIC),
so this multi-queue feature has been mostly useless for VPN usage.
If the filter cannot work atomically here, unwanted packets will be delivered to the wrong thread.