On 2022/1/11 23:50, Pavel Begunkov wrote:
> On 1/11/22 13:51, Hao Xu wrote:
>> On 2021/12/21 23:35, Pavel Begunkov wrote:
>>> Instead of the net stack managing ubuf_info, allow it to be passed in
>>> from outside in a struct msghdr (in-kernel structure), so io_uring can
>>> make use of it.
>>>
>>> Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
>>> ---
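
If I read the commit message right, the idea is roughly: when the caller
hands a ubuf_info in through the in-kernel msghdr, the protocol uses that
one; otherwise it falls back to allocating and managing a notifier itself,
as MSG_ZEROCOPY does today. A self-contained sketch of that control flow
(the sketch_* names are mine, not the patch's):

struct ubuf_info;                       /* zerocopy completion notifier */

struct sketch_msghdr {
        /* ...the usual iovec/control fields... */
        struct ubuf_info *msg_ubuf;     /* NULL unless set by the caller */
};

static struct ubuf_info *
sketch_pick_uarg(struct sketch_msghdr *msg,
                 struct ubuf_info *(*net_alloc)(void))
{
        if (msg->msg_ubuf)              /* caller (io_uring) manages the notifier */
                return msg->msg_ubuf;
        return net_alloc();             /* legacy path: the net stack allocates one */
}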

>> Hi Pavel,
>> I have some confusion here since I lack network knowledge.
>> The first one is: why do we make ubuf_info visible
>> to io_uring? Why not just follow the old MSG_ZEROCOPY
>> logic?
>
> I assume you mean leaving the allocation and so on up to the socket,
> while the patchset lets io_uring manage and control ubufs. In short:
> performance and convenience.
>
> TL;DR:
> First, we want a nice and uniform API with io_uring, i.e. posting
> a CQE instead of polling an error queue/etc., and for that the network
> stack will need to know about the io_uring ctx in some way. As an
> alternative it could theoretically be registered in the socket, but that
> would quickly turn into a huge mess, considering that it's a many-to-many
> relation b/w io_uring and sockets. The fact that io_uring holds refs to
> files only complicates it further.
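
For reference, the completion path this replaces looks roughly like the
userspace sketch below (based on Documentation/networking/msg_zerocopy.rst,
not on this patchset): completions arrive as sock_extended_err entries on
the socket's error queue and have to be drained separately from the data
path.

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <linux/errqueue.h>

/* Drain MSG_ZEROCOPY completions from a socket's error queue. */
static void drain_zerocopy_completions(int fd)
{
        char control[128];
        struct msghdr msg = { 0 };

        for (;;) {
                struct cmsghdr *cm;
                struct sock_extended_err *serr;

                msg.msg_control = control;
                msg.msg_controllen = sizeof(control);

                /* MSG_ERRQUEUE reads never block; EAGAIN means the queue is empty */
                if (recvmsg(fd, &msg, MSG_ERRQUEUE) == -1) {
                        if (errno != EAGAIN)
                                perror("recvmsg(MSG_ERRQUEUE)");
                        return;
                }

                cm = CMSG_FIRSTHDR(&msg);
                if (!cm)
                        continue;
                serr = (void *)CMSG_DATA(cm);
                if (serr->ee_origin == SO_EE_ORIGIN_ZEROCOPY)
                        printf("sends %u..%u completed, buffers reusable\n",
                               serr->ee_info, serr->ee_data);
        }
}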

Makes sense to me, thanks.

> It would also limit the API. For instance, we wouldn't be able to use
> a single ubuf with several different sockets.

Is there any use case for this, i.e. multiple sockets with a single
notification?

> Another problem is performance: registration or some other trick would
> need additional synchronisation. It'd also need sync on use; say it's
> just one rcu_read, but the problem is that it only adds complexity
> and prevents other optimisations. E.g. we amortise the atomics for
> taking refs on skb setup down to ~0, based on the guarantees io_uring
> provides, and not only that: SKBFL_MANAGED_FRAGS can only work with pages
> controlled by the issuer, and so it needs some context, as currently
> provided by the ubuf. io_uring also caches ubufs, which relies on io_uring
> locking, so it removes the kmalloc/free for almost zero overhead.
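
On the last point, my understanding is that the caching works conceptually
like the free list below (my illustration, not the patchset's code):
completed notifiers are recycled under the ring's own lock, so the
submission path normally skips kmalloc/kfree entirely.

#include <stdlib.h>

/* Illustrative notifier cache: a singly-linked free list, assumed to be
 * protected by the ring's existing lock, so no extra atomics are needed. */
struct notif {
        struct notif *next;
        /* per-send completion state would live here */
};

struct notif_cache {
        struct notif *free_list;
};

static struct notif *notif_get(struct notif_cache *c)
{
        struct notif *n = c->free_list;

        if (n) {                        /* fast path: reuse a cached object */
                c->free_list = n->next;
                return n;
        }
        return calloc(1, sizeof(*n));   /* slow path: allocate a fresh one */
}

static void notif_put(struct notif_cache *c, struct notif *n)
{
        n->next = c->free_list;         /* recycle instead of freeing */
        c->free_list = n;
}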

>> The second one: my understanding of the buffer
>> lifecycle is that the kernel informs
>> userspace, via a CQE generated by the ubuf_info
>> callback, that all the buffers attached to the
>> same notifier are free to reuse once all the data
>> has been sent. So why is the flush in 13/19 needed,
>> given that it happens at submission time?
>
> Probably I wasn't clear enough. A user has to flush a notifier; only
> then is it expected to post a CQE after all buffers attached to it
> are freed. io_uring holds one ubuf ref, which is released on flush.

I see. I saw another ref inc in skb_zcopy_set(), which I previously
misunderstood and thus thought there was only one refcount. Thanks!
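
So the lifetime, if I model it the way you describe, is roughly the toy
code below (names and structure are mine, not the patchset's): the notifier
starts with the reference io_uring holds, each skb it is attached to takes
another, and the CQE can only be posted once the flush has dropped
io_uring's reference and every such skb has been freed.

#include <stdio.h>

struct toy_notifier {
        int refs;       /* starts at 1: the reference held by io_uring */
};

static void toy_put(struct toy_notifier *n)
{
        if (--n->refs == 0)     /* last reference gone -> completion */
                printf("post CQE: buffers for this notifier are reusable\n");
}

/* skb_zcopy_set()-style attach: each skb using the buffers takes a ref */
static void toy_attach_to_skb(struct toy_notifier *n) { n->refs++; }
/* the skb is freed once the data is sent: drop that skb's ref */
static void toy_skb_freed(struct toy_notifier *n)     { toy_put(n); }
/* the flush request: drops io_uring's own ref */
static void toy_flush(struct toy_notifier *n)         { toy_put(n); }

int main(void)
{
        struct toy_notifier n = { .refs = 1 };

        toy_attach_to_skb(&n);  /* zerocopy send queued */
        toy_flush(&n);          /* without this, refs can never reach zero */
        toy_skb_freed(&n);      /* data sent, skb freed -> CQE posted here */
        return 0;
}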

> I also need to add a way to flush without a send.
> Will spend some time documenting it for the next iteration.