Re: [PATCH v4] zram: Implement multi-page write-back

From: Yuwen Chen

Date: Mon Nov 10 2025 - 02:22:55 EST


On 10 Nov 2025 13:49:26 +0900, Sergey Senozhatsky wrote:
> As a side note:
> You almost never do sequential writes to the backing device. The
> thing is, e.g. when zram is used as swap, page faults happen randomly
> and free up (slot-free) random page-size chunks (so random bits in
> zram->bitmap become clear), which then get overwritten (zram simply
> picks the first available bit from zram->bitmap) during next writeback.
> There is nothing sequential about that, in systems with sufficiently
> large uptime and sufficiently frequent writeback/readback events
> writeback bitmap becomes sparse, which results in random IO, so your
> test tests an ideal case that almost never happens in practice.

Thank you very much for your reply.
As you mentioned, the current test data was measured with all writes being
sequential. In a real user environment there are a large number of random
writes. However, the concurrent multi-page submission implemented in this
patch still has a performance advantage on the storage device. I artificially
created the worst-case scenario (all writeback IO lands on non-contiguous
blocks) with the following code:

/* mark every backing-device block as in use ... */
for (int i = 0; i < nr_pages; i++)
	alloc_block_bdev(zram);

/* ... then free every other block, leaving a sparse bitmap */
for (int i = 0; i < nr_pages; i += 2)
	free_block_bdev(zram, i);

On a physical machine, the measured results are as follows:
before modification:
real 0m0.624s
user 0m0.000s
sys 0m0.347s

real 0m0.663s
user 0m0.001s
sys 0m0.354s

real 0m0.635s
user 0m0.000s
sys 0m0.335s

after modification (roughly a 2x reduction in wall-clock time):
real 0m0.340s
user 0m0.000s
sys 0m0.239s

real 0m0.326s
user 0m0.000s
sys 0m0.230s

real 0m0.313s
user 0m0.000s
sys 0m0.223s

The test script is as follows:
# mknod /dev/loop45 b 7 45
# losetup /dev/loop45 ./zram_writeback.img
# echo "/dev/loop45" > /sys/block/zram0/backing_dev
# echo "1024000000" > /sys/block/zram0/disksize
# dd if=/dev/random of=/dev/zram0
# time echo "page_indexes=1-100000" > /sys/block/zram0/writeback

Thank you again for your reply.