Re: [PATCH v2] random: use immediate per-cpu timer rather than workqueue for mixing fast pool
From: Jason A. Donenfeld
Date: Tue Sep 27 2022 - 04:23:45 EST
On Tue, Sep 27, 2022 at 07:41:52AM +0000, David Laight wrote:
> From: Jason A. Donenfeld
> > Sent: 26 September 2022 23:05
> >
> > Previously, the fast pool was dumped into the main pool periodically in
> > the fast pool's hard IRQ handler. This worked fine and caused no
> > problems until RT came around. Since RT converts spinlocks into
> > sleeping locks, problems cropped up. Rather than switching to raw
> > spinlocks, the RT developers preferred we make the transformation from
> > originally doing:
> >
> > do_some_stuff()
> > spin_lock()
> > do_some_other_stuff()
> > spin_unlock()
> >
> > to doing:
> >
> > do_some_stuff()
> > queue_work_on(some_other_stuff_worker)
> >
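A minimal C sketch of that pattern, for illustration only: the lock, the
worker, and the do_some_*() helpers are the placeholders from above rather
than the actual random.c symbols, and the INIT_WORK() call at init time is
omitted.

#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/percpu.h>
#include <linux/smp.h>

/* The pattern's placeholder steps. */
extern void do_some_stuff(void);
extern void do_some_other_stuff(void);

static DEFINE_SPINLOCK(pool_lock);		/* placeholder lock */
static DEFINE_PER_CPU(struct work_struct, mix_work);

/* Runs later in process context, where a sleeping lock is fine on RT. */
static void some_other_stuff_worker(struct work_struct *work)
{
	spin_lock(&pool_lock);
	do_some_other_stuff();
	spin_unlock(&pool_lock);
}

/* Hard IRQ path: defer the lock-taking part to a worker on this CPU. */
static void handle_irq_event(void)
{
	do_some_stuff();
	queue_work_on(raw_smp_processor_id(), system_highpri_wq,
		      this_cpu_ptr(&mix_work));
}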
> > This is an ordinary pattern done all over the kernel. However, Sherry
> > noticed a 10% performance regression in qperf TCP over a 40 Gbps
> > InfiniBand card. Quoting her message:
> >
> > > MT27500 Family [ConnectX-3] cards:
> > > Infiniband device 'mlx4_0' port 1 status:
> > > default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> > > base lid: 0x6
> > > sm lid: 0x1
> > > state: 4: ACTIVE
> > > phys state: 5: LinkUp
> > > rate: 40 Gb/sec (4X QDR)
> > > link_layer: InfiniBand
> > >
> > > Cards are configured with IP addresses on a private subnet for IPoIB
> > > performance testing.
> > > The regression identified in this bug is in TCP latency on this stack, as
> > > reported by the qperf tcp_lat metric:
> > >
> > > We have one system listening as a qperf server:
> > > [root@yourQperfServer ~]# qperf
> > >
> > > Have the other system connect to the qperf server as a client (in this
> > > case, an X7 server with a Mellanox card):
> > > [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
> >
> > Rather than incur the scheduling latency from queue_work_on(), we can
> > instead switch to running on the next timer tick, on the same core,
> > deferrably so. This also batches things a bit more -- once per jiffy --
> > which is probably okay now that mix_interrupt_randomness() can credit
> > multiple bits at once. It still puts a bit of pressure on fast_mix(),
> > but hopefully that's acceptable.
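A rough sketch of the mechanism; initialization via timer_setup() is
omitted and the names here are illustrative, with the real random.c
changes in the patch itself:

#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/percpu.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(struct timer_list, mix_timer);

/* Timer callback: runs from the tick, on the same CPU that armed it. */
static void mix_timer_fn(struct timer_list *t)
{
	/* dump the fast pool into the main pool; taking locks is safe here */
}

/* Hard IRQ path: arm this CPU's timer for the current jiffy. */
static void defer_mixing(void)
{
	struct timer_list *timer = this_cpu_ptr(&mix_timer);

	if (!timer_pending(timer)) {
		timer->expires = jiffies;	/* fire on the next tick */
		add_timer_on(timer, raw_smp_processor_id());
	}
}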
>
> I thought NOHZ systems didn't take a timer interrupt every 'jiffy'.
> If that is true, what actually happens?
The TIMER_DEFERRABLE part of this patch is a mistake; I'm going to change
that flag to 0. However, since expires == jiffies, it makes no difference
in practice. It's still undesirable, though.
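For illustration, the fix amounts to dropping the flag when the per-cpu
timer from the sketch above is initialized (the init helper here is
hypothetical):

#include <linux/timer.h>

static void mix_timer_fn(struct timer_list *t);	/* callback from the sketch above */

static void init_mix_timer(struct timer_list *timer)
{
	/*
	 * v2 used TIMER_DEFERRABLE, which a NOHZ-idle CPU may delay:
	 *
	 *	timer_setup(timer, mix_timer_fn, TIMER_DEFERRABLE);
	 *
	 * A plain timer avoids that; with expires == jiffies it then
	 * fires on the very next tick.
	 */
	timer_setup(timer, mix_timer_fn, 0);
}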
Jason