Re: [Xen-devel] [PATCH v2 09/13] xen/pvcalls: implement recvmsg
From: Stefano Stabellini
Date: Mon Jul 31 2017 - 18:26:48 EST
On Thu, 27 Jul 2017, Boris Ostrovsky wrote:
> On 07/26/2017 08:08 PM, Stefano Stabellini wrote:
> > On Wed, 26 Jul 2017, Boris Ostrovsky wrote:
> >>>> +			count++;
> >>>> +		else
> >>>> +			wait_event_interruptible(map->active.inflight_conn_req,
> >>>> +				pvcalls_front_read_todo(map));
> >>>> +	}
> >>> Should we be using PVCALLS_FRONT_MAX_SPIN here? In sendmsg it is
> >>> counting non-sleeping iterations, but here we are sleeping, so
> >>> PVCALLS_FRONT_MAX_SPIN (5000) iterations may take a while.
> >>>
> >>> In fact, shouldn't this waiting be a function of MSG_DONTWAIT?
> >> err, which it already is. But the question still stands (except for
> >> MSG_DONTWAIT).
> > The code (admittedly unintuitive) is busy-looping (non-sleeping) for
> > 5000 iterations *before* attempting to sleep. So in that regard, recvmsg
> > and sendmsg use PVCALLS_FRONT_MAX_SPIN in the same way: only for
> > non-sleeping iterations.
> >
>
> OK.
>
> Why not go directly into wait_event_interruptible()? I see you wrote in
> the commit message:
>
> If not enough data is available on the ring, rather than returning
> immediately or sleep-waiting, spin for up to 5000 cycles. This small
> optimization turns out to improve performance and latency significantly.
>
>
> Is this because of scheduling latency? I think this should be mentioned
> not just in the commit message but also as a comment in the code.
It tries to mitigate scheduling latencies on both ends (dom0 and domU)
when the ring buffer is the bottleneck (high-bandwidth connections). But
to be honest, it is mostly beneficial in the sendmsg case: in recvmsg
the spin also introduces a busy-wait under normal circumstances, i.e.
when no data is available yet. I confirmed this with a quick iperf test.
I'll remove the spin from recvmsg and keep it in sendmsg.
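
To make this concrete, the wait loop under discussion looks roughly
like this (reconstructed around the quoted fragment above; the exact
surrounding while condition and the count variable are my reading of
the code, not a verbatim copy), with the kind of comment Boris is
asking for:

	int count = 0;

	/*
	 * Spin for up to PVCALLS_FRONT_MAX_SPIN iterations before
	 * sleeping, to mitigate scheduling latencies on both ends
	 * when the ring buffer is the bottleneck. The counter only
	 * advances on non-sleeping iterations.
	 */
	while (!(flags & MSG_DONTWAIT) && !pvcalls_front_read_todo(map)) {
		if (count < PVCALLS_FRONT_MAX_SPIN)
			count++;
		else
			wait_event_interruptible(map->active.inflight_conn_req,
				pvcalls_front_read_todo(map));
	}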
>
> (I also think it's not "not enough data" but rather "no data"?)
You are right.
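
For reference, with the spin removed, the recvmsg wait should reduce to
something like this (a sketch of the planned change, not the final
patch):

	while (!(flags & MSG_DONTWAIT) && !pvcalls_front_read_todo(map))
		wait_event_interruptible(map->active.inflight_conn_req,
			pvcalls_front_read_todo(map));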