Re: [RFC PATCH] zram: support asynchronous GC for lazy slot freeing
From: Barry Song
Date: Fri Apr 17 2026 - 18:00:01 EST
On Sun, Apr 12, 2026 at 7:48 PM Kairui Song <ryncsn@xxxxxxxxx> wrote:
>
> On Sun, Apr 12, 2026 at 02:04:50PM +0800, Barry Song (Xiaomi) wrote:
> > Swap freeing can be expensive when unmapping a VMA containing
> > many swap entries. This has been reported to significantly
> > delay memory reclamation during Android’s low-memory killing,
> > especially when multiple processes are terminated to free
> > memory, with slot_free() accounting for more than 80% of
> > the total cost of freeing swap entries.
> >
> > Two earlier attempts by Lei and Zhiguo added a new thread in the mm core
> > to asynchronously collect and free swap entries [1][2], but the
> > design itself is fairly complex.
> >
> > When anon folios and swap entries are mixed within a
> > process, reclaiming anon folios from killed processes
> > helps return memory to the system as quickly as possible,
> > so that newly launched applications can satisfy their
> > memory demands. It is not ideal for swap freeing to block
> > anon folio freeing. On the other hand, swap freeing can
> > still return memory to the system, although at a slower
> > rate due to memory compression.
> >
> > Therefore, in zram, we introduce a GC worker to allow anon
> > folio freeing and slot_free to run in parallel, since
> > slot_free is performed asynchronously, maximizing the rate at
> > which memory is returned to the system.
> >
> > Xueyuan’s test on RK3588 shows that unmapping a 256MB swap-filled
> > VMA becomes 3.4× faster when pinning tasks to CPU2, reducing the
> > execution time from 63,102,982 ns to 18,570,726 ns.
> >
> > A positive side effect is that async GC also slightly improves
> > do_swap_page() performance, as it no longer has to wait for
> > slot_free() to complete.
> >
> > Xueyuan’s test shows that swapping in 256MB of data (each page
> > filled with repeating patterns such as “1024 one”, “1024 two”,
> > “1024 three”, and “1024 four”) reduces execution time from
> > 1,358,133,886 ns to 1,104,315,986 ns, achieving a 1.22× speedup.
> >
> > [1] https://lore.kernel.org/all/20240805153639.1057-1-justinjiang@xxxxxxxx/
> > [2] https://lore.kernel.org/all/20250909065349.574894-1-liulei.rjpt@xxxxxxxx/
> >
> > Tested-by: Xueyuan Chen <xueyuan.chen21@xxxxxxxxx>
> > Signed-off-by: Barry Song (Xiaomi) <baohua@xxxxxxxxxx>
>
> Hi Barry
>
> This looks like an interesting idea to me.
>
> > ---
> > drivers/block/zram/zram_drv.c | 56 ++++++++++++++++++++++++++++++++++-
> > drivers/block/zram/zram_drv.h | 3 ++
> > 2 files changed, 58 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index c2afd1c34f4a..f5c07eb997a8 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -1958,6 +1958,23 @@ static ssize_t debug_stat_show(struct device *dev,
> > return ret;
> > }
> >
> > +static void gc_slots_free(struct zram *zram)
> > +{
> > + size_t num_pages = zram->disksize >> PAGE_SHIFT;
> > + unsigned long index;
> > +
> > + index = find_next_bit(zram->gc_map, num_pages, 0);
> > + while (index < num_pages) {
> > + if (slot_trylock(zram, index)) {
> > + if (test_bit(index, zram->gc_map))
> > + slot_free(zram, index);
> > + slot_unlock(zram, index);
> > + cond_resched();
> > + }
> > + index = find_next_bit(zram->gc_map, num_pages, index + 1);
> > + }
> > +}
> > +
>
> The idea looks interesting, but the implementation doesn't look
> optimal to me. find_next_bit does an O(n) lookup for every GC call,
> which is really expensive if the pending slot is at the tail.
Agreed. It’s essentially a prototype at this stage to demonstrate the
idea.
>
> Perhaps a percpu stack can be used, something like the folio batch?
I guess a major difference is that folio batching aims to reduce
lruvec lock contention. Once a CPU's batch runs out of free slots, it
drains the folios into the lruvec in one go, checking whether
consecutive folios can share the same lruvec lock. This procedure is
synchronous, inside
folio_batch_move_lru(this_cpu_ptr(fbatch), move_fn);
In our case, we might not want a synchronous procedure, so each CPU
could launch its own workqueue. I’m not sure whether this is actually
beneficial, as it might trigger the zsmalloc lock contention we are
trying to eliminate.
If we end up wanting to drain all CPUs together, that would make things
quite complex again.
So I guess a hierarchical bitmap, an XArray, or even a simple array
could work. If we cap it at 64MB, the array would be at most 128KB on a
PAGE_SIZE=4KB system.
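To make the "simple array" option concrete, here is a minimal
userspace sketch. Everything in it (the struct, pending_push(),
pending_drain(), the 64MB cap constant) is invented for illustration;
a real version would live in zram_drv.c and need locking against the
GC worker. With PAGE_SIZE = 4KB, a 64MB budget covers 16384 slots, and
storing each index as an unsigned long (8 bytes on LP64) gives the
128KB figure above:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch: a fixed-size pending-slot array capped at a
 * 64MB lazy-free budget. Not real zram code. */
#define LAZY_FREE_CAP	(64UL << 20)	/* 64MB budget */
#define SIM_PAGE_SIZE	4096UL		/* PAGE_SIZE stand-in */
#define MAX_PENDING	(LAZY_FREE_CAP / SIM_PAGE_SIZE)	/* 16384 */

struct pending_slots {
	unsigned long idx[MAX_PENDING];	/* 128KB on LP64 */
	size_t nr;
};

/* Queue a slot for lazy freeing; returns false when the budget is
 * exhausted so the caller falls back to synchronous slot_free(). */
static bool pending_push(struct pending_slots *p, unsigned long index)
{
	if (p->nr >= MAX_PENDING)
		return false;
	p->idx[p->nr++] = index;
	return true;
}

/* Drain in LIFO order: O(nr) total work with no full-bitmap scan,
 * unlike repeated find_next_bit() over the whole disksize. */
static size_t pending_drain(struct pending_slots *p,
			    void (*free_fn)(unsigned long))
{
	size_t freed = p->nr;

	while (p->nr)
		free_fn(p->idx[--p->nr]);
	return freed;
}
```

The point of the sketch is only the cost model: push and drain are
O(1) per slot, versus a bitmap scan whose cost depends on disksize.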
I am CC’ing Wenchao, who may be interested in doing further
measurements and in working on a more efficient implementation.
>
> > - slot_free(zram, index);
> > + if (!try_slot_lazy_free(zram, index))
> > + slot_free(zram, index);
>
> What is making this slot_free so costly? zs_free?
>
> > slot_unlock(zram, index);
> > }
> >
> > diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> > index 08d1774c15db..1f3ffd79fcb1 100644
> > --- a/drivers/block/zram/zram_drv.h
> > +++ b/drivers/block/zram/zram_drv.h
> > @@ -88,6 +88,7 @@ struct zram_stats {
> > atomic64_t pages_stored; /* no. of pages currently stored */
> > atomic_long_t max_used_pages; /* no. of maximum pages stored */
> > atomic64_t miss_free; /* no. of missed free */
> > + atomic64_t gc_slots; /* no. of queued for lazy free by gc */
>
> Maybe we want to track the size of the content being delayed instead
> of the number of slots? I saw there is a 30000 hard limit for that.
Yep, we definitely want size, not the number of pages, since PAGE_SIZE
is not constant.
>
> Perhaps it would make more sense to have a "buffer size"
> (e.g. 64M); that seems more intuitive to me. E.g. the zram module can
> occupy at most 64M of memory for delayed frees, so the delayed free
> won't cause significant global pressure.
>
> Also, I think this patch is batching the memory free operations, so the
> workqueue or design can also be further optimized for batching. For
> example, if zs_free is the expensive part, then maybe we should just
> clear the handle for the slot being freed and leave the handle in a
> percpu stack, then batch-free these handles. zsmalloc might make
> use of some batch optimization based on that too, something like
> kmem_cache_free_bulk() but for zsmalloc?
I’m not really sure a per-CPU approach is the right direction, since
zsmalloc already has a lot of contention we may want to eliminate. If
we introduce per-CPU workqueues or similar mechanisms, we might end up
increasing contention rather than reducing it.
A kmem_cache_free_bulk()-like approach might be a good direction to
investigate for zsmalloc. I guess Xueyuan is also thinking about it?
Right now, zsmalloc frequently takes and releases multiple locks for
each individual free.
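To illustrate why the batched-free direction could pay off, here is a
hedged userspace sketch. The lock functions and free_one_locked() are
invented stand-ins, not the real zsmalloc API; the contrast is simply
one lock round-trip per handle versus one per batch, in the spirit of
kmem_cache_free_bulk():

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the allocator lock; counts acquisitions so the two
 * strategies can be compared. */
static unsigned long lock_acquisitions;

static void pool_lock(void)   { lock_acquisitions++; }
static void pool_unlock(void) { }
static void free_one_locked(unsigned long handle) { (void)handle; }

/* Per-handle freeing: one lock round-trip per object, which is what
 * calling zs_free() once per slot amounts to. */
static void free_each(const unsigned long *h, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		pool_lock();
		free_one_locked(h[i]);
		pool_unlock();
	}
}

/* Batched freeing: one lock round-trip for the whole batch, the
 * hypothetical "bulk free for zsmalloc" shape. */
static void free_bulk(const unsigned long *h, size_t n)
{
	pool_lock();
	for (size_t i = 0; i < n; i++)
		free_one_locked(h[i]);
	pool_unlock();
}
```

For a batch of n handles this trades n lock acquisitions for one,
which is exactly the contention the percpu-stack-of-handles idea
is trying to amortize.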
>
> If zs_free is not the whole of the cost: I took a look at slot_free,
> and maybe a lot of the reads/writes of slot data can be merged.
>
> This patch currently doesn't reduce the total amount of work, but
> if the above idea works, a lot of redundant operations might be
> dropped, resulting in better performance in every case.
Yep, hopefully we can optimize for every case. Of course, that will
take a lot of time :-)
>
> Just my two cents and ideas; not sure if I got everything correct.
> Looking forward to more discussion on this :)
Thanks for your suggestions; they are always welcome. We can
discuss this further.
Best Regards
Barry