Re: [PATCHv2 1/7] zram: introduce compressed data writeback

From: zhangdongdong

Date: Wed Jan 07 2026 - 02:28:44 EST


On 1/7/26 12:28, Sergey Senozhatsky wrote:
> On (26/01/07 11:50), zhangdongdong wrote:
>> Hi Sergey,
>>
>> Thanks for the work on decompression-on-demand.
>>
>> One concern I’d like to raise is the use of a workqueue for readback
>> decompression. In our measurements, deferring decompression to a worker
>> introduces non-trivial scheduling overhead, and under memory pressure
>> the added latency can be noticeable (tens of milliseconds in some cases).
>
> The problem is those bio completions happen in atomic context, and zram
> requires both compression and decompression to be non-atomic. And we
> can't do sync read on the zram side, because those bio-s are chained.
> So the current plan is to look how system hi-prio per-cpu workqueue
> will handle this.
>
> Did you try high priority workqueue?

Hi Sergey,

Yes, we have tried high priority workqueues. In fact, our current
implementation already uses a dedicated workqueue created with
WQ_HIGHPRI and marked as UNBOUND, which handles the read/decompression
path for swap-in.

Below is a simplified snippet of the queue we are currently using:

zgroup_read_wq = alloc_workqueue("zgroup_read",
                                 WQ_HIGHPRI | WQ_UNBOUND, 0);

static int zgroup_submit_zio_async(struct zgroup_io *zio,
                                   struct zram_group *zgroup)
{
        struct zgroup_req req = {
                .zio = zio,
        };
        int status;

        if (!zgroup_io_step_chg(zio, ZIO_STARTED, ZIO_INFLIGHT)) {
                wait_for_completion(&zio->wait);
                /* read the status before dropping the reference */
                status = zio->status;
                if (status)
                        zgroup_put_io(zio);
                return status;
        }

        INIT_WORK_ONSTACK(&req.work, zgroup_submit_zio_work);
        queue_work(zgroup_read_wq, &req.work);
        flush_work(&req.work);
        destroy_work_on_stack(&req.work);

        return req.status ?: zgroup_decrypt_pages(zio);
}
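
For comparison, if I understand your suggestion correctly, switching to the
built-in system high-priority per-cpu workqueue would only change the
queueing step. A minimal sketch, assuming the same on-stack zgroup_req and
zgroup_submit_zio_work handler from the snippet above:

        /*
         * Sketch only: queue the same on-stack work on the kernel's
         * per-cpu system_highpri_wq instead of our dedicated
         * WQ_HIGHPRI | WQ_UNBOUND queue. Being per-cpu, the worker
         * runs on the submitting CPU rather than wherever the
         * scheduler places an unbound worker, which may change both
         * the latency and the cache behavior we measured.
         */
        INIT_WORK_ONSTACK(&req.work, zgroup_submit_zio_work);
        queue_work(system_highpri_wq, &req.work);
        flush_work(&req.work);
        destroy_work_on_stack(&req.work);

We can re-run our latency measurements with this variant if that would help.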

Thanks,
dongdong