Re: [PATCH] hrtimer: Update softirq_expires_next correctly after __hrtimer_get_next_event()
From: Thomas Gleixner
Date: Thu Apr 22 2021 - 06:08:03 EST
On Thu, Apr 22 2021 at 02:08, Thomas Gleixner wrote:
> On Wed, Apr 21 2021 at 23:08, Lorenzo Colitti wrote:
> That said, even if you manage to avoid the timer hardware trainwreck
> which your system is depending on right now, the underlying problem
> remains that hrtimers are not the proper tool for high frequency
> modification and never will be.
>
> There are smarter approaches to that USB/NTB problem, but that's beyond
> the discussion at hand.
And because I was looking at it anyway, here are some hints:
Buffer size: 16k
Timeout: 300us
Let's assume small packets with a size of 256 bytes each. The packets
arrive with a period of ~299us, i.e. right before the timer fires, which
means the timer won't fire because it's pushed 300us ahead each time, and
the buffer is only sent to the wire once it's full.
With 16k / 256 bytes = 64 packets per buffer, that means the 1st packet
in the buffer is sent 63 * 299us = ~18.8ms after it was queued.
How is that supposed to be correct? I know, throughput is what matters
and correctness is overrated.
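For illustration, here is a plain userspace toy model of exactly that
arrival pattern (nothing driver specific, numbers as above), which shows
why the timer never gets a chance to fire:

#include <stdio.h>

/*
 * Toy model of the scenario above: 16k buffer, 256 byte packets,
 * packets arriving every 299us, timer re-armed to now + 300us on
 * every packet.
 */
int main(void)
{
        const int bufsize = 16 * 1024, pktsize = 256;
        const int period_us = 299, timeout_us = 300;
        int npkts = bufsize / pktsize;          /* 64 packets fill the buffer */
        int now = 0, timer_expiry = 0;

        for (int i = 0; i < npkts; i++) {
                if (i > 0) {
                        /* Does the timer fire before the next packet arrives? */
                        if (timer_expiry <= now + period_us) {
                                printf("timer fired at %dus\n", timer_expiry);
                                return 0;
                        }
                        now += period_us;       /* next packet arrives */
                }
                timer_expiry = now + timeout_us;  /* timer pushed ahead again */
        }
        /* Buffer full; the 1st packet was queued at t = 0 */
        printf("first packet waited %dus (~%.1fms)\n", now, now / 1000.0);
        return 0;
}

It prints a wait of 18837us for the 1st packet, i.e. the ~18.8ms from
above; the timer-fired branch is never taken.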
For the high volume traffic case the timer is not needed at all if
the logic is done properly:
xmit()
{
        if (!packet_fits_in_buffer())
                send_buffer_and_cancel_timer_if_armed();

        if (first_packet_in_buffer())
                buffer_deadline = now() + MAX_DELAY;

        add_packet_to_buffer();

        /* Are more packets available for xmit at the core? */
        if (!xmit_queue_empty())
                return;

        /*
         * Last packet from network core for now, check how long the
         * first packet sits in the buffer.
         */
        if (buffer_deadline <= now()) {
                send_buffer_and_cancel_timer_if_armed();
                return;
        }

        if (!timer_armed())
                arm_timer(buffer_deadline);
}
IOW, as long as the network core has packets outstanding for xmit, there
is no point in arming the timer at all.
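If the timer variant is kept, arm_timer(buffer_deadline) maps naturally
onto an hrtimer armed with an absolute expiry, so it is started at most
once per batch and never pushed forward by later packets. A rough sketch
under those assumptions (struct xmit_buf, send_buffer() and MAX_DELAY_US
are made-up names; locking and the actual buffer handling are left out):

#include <linux/hrtimer.h>
#include <linux/ktime.h>

#define MAX_DELAY_US    300     /* made up, matches the 300us timeout above */

struct xmit_buf;
static void send_buffer(struct xmit_buf *buf);  /* made-up flush helper */

/* Hypothetical per-device buffer state */
struct xmit_buf {
        struct hrtimer  timer;     /* hrtimer_init()ed with CLOCK_MONOTONIC,
                                      HRTIMER_MODE_ABS; .function set to
                                      xmit_buf_timeout() at device setup */
        ktime_t         deadline;  /* expiry for the 1st queued packet */
};

static enum hrtimer_restart xmit_buf_timeout(struct hrtimer *t)
{
        struct xmit_buf *buf = container_of(t, struct xmit_buf, timer);

        send_buffer(buf);          /* flush whatever is queued */
        return HRTIMER_NORESTART;
}

/* Called when the 1st packet goes into the empty buffer */
static void xmit_buf_set_deadline(struct xmit_buf *buf)
{
        buf->deadline = ktime_add_us(ktime_get(), MAX_DELAY_US);
}

/* Called only when the network core has no more packets pending */
static void xmit_buf_arm_or_flush(struct xmit_buf *buf)
{
        if (ktime_compare(buf->deadline, ktime_get()) <= 0) {
                /* 1st packet already waited long enough, flush now */
                hrtimer_try_to_cancel(&buf->timer);
                send_buffer(buf);
                return;
        }

        /* Absolute expiry: never pushed forward by later packets */
        if (!hrtimer_active(&buf->timer))
                hrtimer_start(&buf->timer, buf->deadline, HRTIMER_MODE_ABS);
}

The point is the HRTIMER_MODE_ABS expiry: once the deadline for the 1st
packet is set, nothing moves it, which is exactly what prevents the
~18.8ms scenario above.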
But yes, that needs more work and thought than the 'bolt and duct tape'
approach which led to the current situation.
Aside from that, it would certainly be interesting to start very simple,
with an experiment which does not use a timer in the first place:
xmit()
{
        if (!packet_fits_in_buffer())
                send_buffer();

        add_packet_to_buffer();

        /* Are more packets available for xmit at the core? */
        if (!xmit_queue_empty())
                return;

        /* Last packet from network core for now. Drain it. */
        send_buffer();
}
I wouldn't be surprised if that'd turn out to be the right thing to do
both latency-wise and throughput-wise.
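For reference, the "more packets available at the core" check is what
netdev_xmit_more() exposes to drivers nowadays, so the timer-less
experiment could be wired up in the driver's ndo_start_xmit() roughly
like this (my_dev, buffer_has_room(), buffer_add() and buffer_send() are
made-up helpers):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical driver private data and buffer helpers */
struct my_dev;
static bool buffer_has_room(struct my_dev *md, unsigned int len);
static void buffer_add(struct my_dev *md, struct sk_buff *skb);
static void buffer_send(struct my_dev *md);

static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct my_dev *md = netdev_priv(dev);

        if (!buffer_has_room(md, skb->len))
                buffer_send(md);

        buffer_add(md, skb);

        /* Is the network core still handing us packets right now? */
        if (!netdev_xmit_more())
                buffer_send(md);        /* last one for now, drain it */

        return NETDEV_TX_OK;
}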
But that'd not be convoluted enough, and of course the commit which
introduced that magic does not explain anything at all, so I can't tell
whether that was actually tried or not.
Thanks,
tglx