You're mostly correct. This is exactly why an I/O scheduler may be
applicable here IMO: I/O schedulers tend to optimize for something
specific and always present tradeoffs, so users need to understand
what they are optimizing for.
Hence I'd say this functionality can definitely be made available to
an I/O scheduler, should one exist.
I guess the point is just that there can be multiple requests
available in the scheduler queue. Actually the same can be true for
other non-NVMe drivers even without a scheduler ("none"), such as
SCSI.
Another way is to use a per-task list (such as the plug list) to hold
the requests for dispatch; then every driver may see the real .last
flag, so they get a chance to optimize batch queueing. I will think
about the idea further and see whether it is really doable.
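For context, drivers see this hint via the last field of the
struct blk_mq_queue_data passed to .queue_rq(), and can defer their
doorbell write until it is set. A minimal sketch, not any particular
driver's code (sq_tail and db_reg are made-up placeholders):

	/* Sketch only: batching doorbell writes on bd->last;
	 * sq_tail and db_reg are made-up placeholders.
	 */
	static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
					const struct blk_mq_queue_data *bd)
	{
		/* ... queue bd->rq into the hardware submission queue ... */

		/* ring the doorbell only when no further request is coming */
		if (bd->last)
			writel(sq_tail, db_reg);

		return BLK_STS_OK;
	}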
How about my RFC v1 patch set [1], which allows dispatching more than
one request from the scheduler to support batched requests?
[1]
https://lore.kernel.org/patchwork/patch/1210034/
https://lore.kernel.org/patchwork/patch/1210035/
Basically, my idea is to dequeue requests one by one, and for each
dequeued request:
- try to get a budget and a driver tag; if both succeed, add the
request to a per-task list, which can be stored in a stack variable,
then continue to dequeue more requests
- if either the budget or the driver tag can't be allocated for this
request, mark the last request in the per-task list as .last, and
send the batched requests stored in the list to the LLD
- when queueing the batched requests to the LLD, if one request isn't
queued to the driver successfully, call .commit_rqs() as before, and
meanwhile add the remaining requests in the per-task list back to the
scheduler queue or hctx->dispatch.
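In rough pseudo-C the loop would look like the following; the helper
names (sched_dequeue(), get_budget(), queue_rq_lld(), ...) are
placeholders for the real blk-mq internals, not actual function names:

	/* Pseudo-C sketch of the batched dispatch idea; all helper
	 * names are placeholders, not the real blk-mq APIs.
	 */
	static void dispatch_batch(struct blk_mq_hw_ctx *hctx)
	{
		LIST_HEAD(batch);	/* per-task list on the stack */
		struct request *rq, *last;

		/* dequeue one by one while budget and driver tags last */
		while ((rq = sched_dequeue(hctx)) != NULL) {
			if (!get_budget(hctx) || !get_driver_tag(rq)) {
				sched_requeue(hctx, rq);
				break;
			}
			list_add_tail(&rq->queuelist, &batch);
		}

		if (list_empty(&batch))
			return;

		/* only the final request in the batch carries .last == true */
		last = list_last_entry(&batch, struct request, queuelist);

		while (!list_empty(&batch)) {
			rq = list_first_entry(&batch, struct request, queuelist);
			list_del_init(&rq->queuelist);

			if (queue_rq_lld(hctx, rq, rq == last) != BLK_STS_OK) {
				/* LLD is busy: flush what it accepted so
				 * far, then put this request and the rest
				 * back on the scheduler queue or
				 * hctx->dispatch
				 */
				commit_rqs(hctx);
				list_add(&rq->queuelist, &batch);
				requeue_list(hctx, &batch);
				break;
			}
		}
	}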
One issue is that this approach might degrade sequential I/O
performance if the LLD can only report a busy queue to blk-mq via the
return value of .queue_rq(), so I guess we still may need a flag,
such as BLK_MQ_F_BATCHING_SUBMISSION.
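With such a flag, the batched path would only be taken for drivers
that opt in, roughly like this (dispatch_one() stands for the
existing per-request path):

	/* Sketch: gate the batched path on the proposed opt-in flag */
	if (hctx->flags & BLK_MQ_F_BATCHING_SUBMISSION)
		dispatch_batch(hctx);	/* sketch above */
	else
		dispatch_one(hctx);	/* existing per-request dispatch */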