Re: [RFC 0/2] optimise local-tw task rescheduling

From: Pavel Begunkov
Date: Wed Mar 15 2023 - 12:56:45 EST


On 3/15/23 02:35, Ming Lei wrote:
> Hi Pavel
>
> On Fri, Mar 10, 2023 at 07:04:14PM +0000, Pavel Begunkov wrote:
>> io_uring extensively uses task_work, but when a task is waiting
>> for multiple CQEs it causes lots of rescheduling. This series
>> is an attempt to optimise it and be a base for future improvements.
>>
>> For some zc network tests that end up waiting for a portion of
>> the buffers, I've got a 10x decrease in the number of context
>> switches, which more than halved the CPU consumption (17% -> 8%).
>> It also helps storage cases: running fio/t/io_uring against a
>> low-performing drive, I got a 2x decrease in the number of context
>> switches for QD8 and ~4x for QD32.
>
> ublk uses io_uring_cmd_complete_in_task() (io_req_task_work_add())
> heavily, so I tried this patchset, but I don't see an obvious change
> in either IOPS or context switches when running 't/io_uring /dev/ublkb0'
> against a null ublk target (ublk add -t null -z -u 1 -q 2); IOPS
> is ~2.8M.

Hi Ming,

It's enabled for rw requests and send-zc notifications, but
io_uring_cmd_complete_in_task() is not covered. I'll be enabling
it for more cases, including passthrough.
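
For reference, the kind of workload the optimisation targets looks
roughly like the sketch below: a ring created with
IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN (the setup that
local tw requires), a batch of rw requests, and a single wait for all of
their CQEs. The file argument and queue depth here are only
illustrative, not taken from any particular test:

#include <fcntl.h>
#include <stdio.h>
#include <liburing.h>

#define QD 8		/* illustrative depth, cf. the QD8 numbers above */
#define BS 4096

int main(int argc, char **argv)
{
	static char bufs[QD][BS];
	struct io_uring_params p = { };
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	unsigned head, seen = 0;
	int fd, i, ret;

	if (argc < 2)
		return 1;

	/* local (deferred) task_work needs these setup flags */
	p.flags = IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN;
	ret = io_uring_queue_init_params(QD, &ring, &p);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* queue a batch of reads, one block each */
	for (i = 0; i < QD; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

		io_uring_prep_read(sqe, fd, bufs[i], BS, (__u64)i * BS);
	}

	/*
	 * Submit the whole batch and wait for QD CQEs in one call.
	 * Each completion arrives as a tw item; the series aims to wake
	 * the waiter only once enough of them have queued up, instead of
	 * rescheduling it for every completion.
	 */
	ret = io_uring_submit_and_wait(&ring, QD);
	if (ret < 0) {
		fprintf(stderr, "submit_and_wait: %d\n", ret);
		return 1;
	}

	io_uring_for_each_cqe(&ring, head, cqe)
		seen++;
	io_uring_cq_advance(&ring, seen);

	io_uring_queue_exit(&ring);
	return 0;
}

fio's t/io_uring drives essentially this submit-and-wait pattern at the
QD8/QD32 depths quoted above.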

> But ublk applies batch scheduling, similar to io_uring's, before
> calling io_uring_cmd_complete_in_task().

The feature doesn't tolerate tw items that produce multiple CQEs, so
it can't be applied to this batching, and the task would get stuck
waiting.
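
To put the "stuck waiting" point in concrete terms, here is a toy model
of the constraint as stated above (assumed numbers, userspace C, not
kernel code): the waiter asks for N CQEs and is only woken once N tw
items have queued, which implicitly assumes one CQE per tw item. A
batched tw item that posts several CQEs means the required number of tw
items never accumulates:

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model only: wake the waiter once at least 'wait_nr' tw items are
 * queued, assuming each tw item will post exactly one CQE.
 */
static bool waiter_would_wake(int wait_nr, int tw_items_queued)
{
	return tw_items_queued >= wait_nr;
}

int main(void)
{
	int wait_nr = 8;		/* task waits for 8 CQEs */
	int cqes_per_batched_tw = 4;	/* one batched tw item posts 4 CQEs */

	/* the same 8 completions arrive as only 2 batched tw items */
	int tw_items = wait_nr / cqes_per_batched_tw;

	printf("one CQE per tw item: wake=%d\n",
	       waiter_would_wake(wait_nr, wait_nr));
	printf("batched tw items:    wake=%d  <- never woken despite 8 CQEs\n",
	       waiter_would_wake(wait_nr, tw_items));
	return 0;
}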

btw, from a quick look it appeared that ublk's batching is there to
keep requests together rather than to improve batching. And if so,
I think we can get rid of it, rely on io_uring's batching, and let
ublk gather its requests from the tw list, which sounds cleaner.
I'll elaborate on that later.

--
Pavel Begunkov