Re: [PATCH 3/3] io_uring: add support for zone-append

From: Matias Bjørling
Date: Fri Jun 19 2020 - 12:00:12 EST


On 19/06/2020 16.18, Jens Axboe wrote:
On 6/19/20 5:15 AM, Matias Bjørling wrote:
On 19/06/2020 11.41, javier.gonz@xxxxxxxxxxx wrote:
Jens,

Would you have time to answer a question below in this thread?

On 18.06.2020 11:11, javier.gonz@xxxxxxxxxxx wrote:
On 18.06.2020 08:47, Damien Le Moal wrote:
On 2020/06/18 17:35, javier.gonz@xxxxxxxxxxx wrote:
On 18.06.2020 07:39, Damien Le Moal wrote:
On 2020/06/18 2:27, Kanchan Joshi wrote:
From: Selvakumar S <selvakuma.s1@xxxxxxxxxxx>

Introduce three new opcodes for zone-append -

 IORING_OP_ZONE_APPEND : non-vectored, similar to IORING_OP_WRITE
 IORING_OP_ZONE_APPENDV : vectored, similar to IORING_OP_WRITEV
 IORING_OP_ZONE_APPEND_FIXED : append using fixed-buffers

Repurpose cqe->flags to return zone-relative offset.

Signed-off-by: SelvaKumar S <selvakuma.s1@xxxxxxxxxxx>
Signed-off-by: Kanchan Joshi <joshi.k@xxxxxxxxxxx>
Signed-off-by: Nitesh Shetty <nj.shetty@xxxxxxxxxxx>
Signed-off-by: Javier Gonzalez <javier.gonz@xxxxxxxxxxx>
---
fs/io_uring.c                 | 72 +++++++++++++++++++++++++++++++++++++++++--
include/uapi/linux/io_uring.h |  8 ++++-
2 files changed, 77 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 155f3d8..c14c873 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -649,6 +649,10 @@ struct io_kiocb {
 	unsigned long		fsize;
 	u64			user_data;
 	u32			result;
+#ifdef CONFIG_BLK_DEV_ZONED
+	/* zone-relative offset for append, in bytes */
+	u32			append_offset;
this can overflow. u64 is needed.
We chose to do it this way to start with because struct io_uring_cqe
only has space for u32 when we reuse the flags.

We can of course create a new cqe structure, but that will come with
larger changes to io_uring for supporting append.

Do you believe this is a better approach?
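
To make the "reuse the flags" approach concrete, a minimal userspace sketch (illustrative only, not part of the patch; zone_start_bytes is assumed to come from the application's own zone bookkeeping, e.g. a prior zone report):

#include <stdint.h>

/*
 * With this patch, cqe->flags would carry the zone-relative append offset
 * in bytes; the absolute write location is zone start plus that offset.
 */
static inline uint64_t append_location(uint64_t zone_start_bytes,
					uint32_t cqe_flags)
{
	return zone_start_bytes + cqe_flags;
}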
The problem is that zone sizes are 32 bits in the kernel, as a number of
sectors. So any device that has a zone size smaller than or equal to 2^31
512B sectors can be accepted. Using a zone-relative offset in bytes for
returning the zone append result is OK-ish, but to match the kernel-supported
range of possible zone sizes, you need 31+9 bits... 32 does not cut it.
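
Worked out as a quick sketch (not from the thread itself): the largest accepted zone size of 2^31 512B sectors is 2^40 bytes, which a u32 byte offset cannot represent:

#include <stdint.h>

#define MAX_ZONE_SECTORS	(1ULL << 31)		/* kernel zone size limit, in 512B sectors */
#define MAX_ZONE_BYTES		(MAX_ZONE_SECTORS << 9)	/* = 2^40 bytes */

/* The largest in-zone byte offset needs 40 bits, so a u32 can overflow. */
_Static_assert(MAX_ZONE_BYTES - 1 > UINT32_MAX,
	       "a u32 zone-relative byte offset cannot cover the maximum zone size");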
Agree. Our initial assumption was that u32 would cover current zone size
requirements, but if this is a no-go, we will take the longer path.
Converting to u64 will require a new version of io_uring_cqe, where we
extend it by at least 32 bits. I believe this will need a whole new
allocation and probably an ioctl().

Is this an acceptable change for you? We will of course add support for
liburing when we agree on the right way to do this.
I took a quick look at the code. No expert, but why not use the existing
user_data variable? Use the lowest 40 bits for the zone starting LBA, and
use the highest 24 bits as an index into the completion data structure?

If you want to pass the memory address (same as what fio does) for the
data structure used for completion, one may also play some tricks by
using a relative memory address to the data structure. For example, the
x86_64 architecture uses 48 address bits for its memory addresses. With
24 bits, one can allocate the completion entries in a 32MB memory range,
and then use base_address + index to get back to the completion data
structure specified in the sqe.
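
A rough sketch of that packing (the names and the 40/24 split here are illustrative only, not an existing io_uring interface):

#include <stdint.h>

#define ZONE_LBA_BITS	40
#define ZONE_LBA_MASK	((1ULL << ZONE_LBA_BITS) - 1)

/* Pack the zone starting LBA (low 40 bits) and a completion index
 * (high 24 bits; must fit in 24 bits) into sqe->user_data. */
static inline uint64_t pack_user_data(uint64_t zone_start_lba, uint32_t cpl_idx)
{
	return (zone_start_lba & ZONE_LBA_MASK) |
	       ((uint64_t)cpl_idx << ZONE_LBA_BITS);
}

/* On completion, recover both halves from cqe->user_data. */
static inline uint64_t unpack_zone_lba(uint64_t user_data)
{
	return user_data & ZONE_LBA_MASK;
}

static inline uint32_t unpack_cpl_idx(uint64_t user_data)
{
	return (uint32_t)(user_data >> ZONE_LBA_BITS);
}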
For any current request, sqe->user_data is just provided back as
cqe->user_data. This would make these requests behave differently
from everything else in that sense, which seems very confusing to me
if I were an application writer.

But generally I do agree with you, there are lots of ways to make
< 64-bit work as a tag without losing anything or having to jump
through hoops to do so. The lack of consistency introduced by having
zone append work differently is ugly, though.

Yep, agree, and extending to three cachelines is a big no-go. We could add a flag that says the kernel has changed the user_data variable. That'll make it very explicit.
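
A hypothetical sketch of that flag idea (neither the flag bit nor the helper exists in io_uring; the bit value is made up and only chosen to avoid the existing IORING_CQE_F_BUFFER):

#include <stdint.h>
#include <linux/io_uring.h>

/* Hypothetical bit: "the kernel replaced user_data with the append result". */
#define IORING_CQE_F_USER_DATA_MODIFIED	(1U << 1)

static void handle_cqe(const struct io_uring_cqe *cqe)
{
	if (cqe->flags & IORING_CQE_F_USER_DATA_MODIFIED) {
		/* kernel-filled zone append result */
		uint64_t appended_offset = cqe->user_data;
		(void)appended_offset;
	} else {
		/* normal behaviour: user_data is echoed back untouched */
		uint64_t tag = cqe->user_data;
		(void)tag;
	}
}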