Re: [PATCH] io_uring: reduce latency by reissueing the operation

From: Olivier Langlois
Date: Sun Jun 20 2021 - 17:32:04 EST


On Sun, 2021-06-20 at 21:55 +0100, Pavel Begunkov wrote:
> On 6/18/21 11:45 PM, Olivier Langlois wrote:
> >
>
> For io_uring part, e.g. recv is slimmer than recvmsg, doesn't
> need to copy extra.
>
> Read can be more expensive on the io_uring side because it
> may copy/alloc extra stuff. Plus additional logic on the
> io_read() part for generality.
>
> But don't expect it to be much of a difference, but never
> tested.

That is super interesting. The way I see it after your explanations is
that in the worst case there won't be any difference, but in the best
case I could see a small speed gain.

I made the switch yesterday evening. One of the metrics that I monitor
the most is my system's reaction time to incoming packets.

I will let you know if switching to recv() is beneficial in that
regard.
>
> >
>
> > > Also, not particularly about reissue stuff, but a note to myself:
> > > 59us is much, so I wonder where the overhead comes from.
> > > Definitely not the iowq queueing (i.e. putting into a list).
> > > - waking a worker?
> > > - creating a new worker? Do we manage workers sanely? e.g.
> > >   don't keep them constantly recreated and dying back.
> > > - scheduling a worker?
> >
> > creating a new worker is for sure not free but I would remove that
> > cause from the suspect list as in my scenario, it was a one-shot
> > event.
>
> Not sure what you mean, but speculating, io-wq may have not
> optimal policy for recycling worker threads leading to
> recreating/removing more than needed. Depends on bugs, use
> cases and so on.

Since I absolutely don't use the async workers feature, I was obsessed
by the fact that I was seeing an io worker created. That is the root of
why I ended up writing the patch.

My understanding of how io worker lifetimes are managed is that one
remains present once created.

In my scenario, once that single persistent io worker thread is
created, no others are ever created, so this is a one-shot cost. I was
prepared to discard the first measurement, to be as fair as possible
and not pollute the async performance results with a one-time thread
creation cost, but to my surprise the thread creation cost was not even
visible in the first measurement...

From that, and maybe this is an erroneous shortcut, I do not feel that
thread creation is the bottleneck.
>
> > First measurement was even not significantly higher than all the
> > other
> > measurements.
>
> You get a huge max for io-wq case. Obviously nothing can be
> said just because of max. We'd need latency distribution
> and probably longer runs, but I'm still curious where it's
> coming from. Just keeping an eye in general

Maybe it is scheduling...

I'll keep this mystery in the back of my mind in case I end up with a
way to find out where the time is spent...

> >