Re: [RFC PATCH v3 5/5] mm: support large folios swapin as a whole

From: Barry Song
Date: Mon Jun 10 2024 - 20:26:20 EST


On Tue, Jun 11, 2024 at 8:43 AM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
>
> On Thu, Mar 14, 2024 at 08:56:17PM GMT, Chuanhua Han wrote:
> [...]
> > >
> > > So in the common case, swap-in will pull in the same size of folio as was
> > > swapped out. Is that definitely the right policy for all folio sizes? Certainly
> > > it makes sense for "small" large folios (e.g. up to 64K IMHO). But I'm not sure
> > > it makes sense for 2M THP; as the size increases, the chances of actually needing
> > > all of the folio reduce, so chances are we are wasting IO. There are similar
> > > arguments for CoW, where we currently copy 1 page per fault - it probably makes
> > > sense to copy the whole folio up to a certain size.
> > For 2M THP, the IO overhead may not necessarily be large? :)
> > 1. If the 2M THP is stored contiguously in the swap device, the IO
> > overhead may not be very large (e.g. submitting a bio with one
> > bio_vec at a time).
> > 2. If the process really needs this 2M of data, one page fault may
> > perform much better than multiple.
> > 3. For swap devices like zram, using 2M THP might also improve
> > decompression efficiency.
> >
>
> Sorry for the late response. Do we have any performance data backing the
> above claims, particularly for the zswap/swap-on-zram cases?

No need to say sorry. You are always welcome to give comments.

This, combined with the zram modification, not only improves the
compression ratio but also reduces CPU time significantly. You may find
some data here [1]; a userspace illustration follows the table below.

granularity   orig_data_size   compr_data_size   time(us)
4KiB-zstd     1048576000       246876055         50259962
64KiB-zstd    1048576000       199763892         18330605
On mobile devices, we tested swap-in performance by running 100
iterations of swapping in 100MB of data; the results were as follows,
with swap-in speed increasing by about 45%. A sketch of this kind of
microbenchmark follows the numbers.

time consumption of swap-in (ms)
lz4    4k    45274
lz4    64k   22942

zstd   4k    85035
zstd   64k   46558
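
For reference, the test loop has roughly the following shape. This is a
hypothetical re-creation, not our exact harness; it assumes
MADV_PAGEOUT (Linux 5.4+, recent glibc headers), a 4KiB base page size,
and zram configured as the swap device.

#include <stdio.h>
#include <time.h>
#include <sys/mman.h>

#define SZ	(100UL << 20)	/* 100MB, matching the test above */
#define ITERS	100

static double now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

int main(void)
{
	char *buf = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	volatile char sink = 0;
	double total = 0;

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Non-zero, compressible contents so zram has real work to do. */
	for (size_t i = 0; i < SZ; i++)
		buf[i] = (char)(i / 64);

	for (int i = 0; i < ITERS; i++) {
		/* Push everything out to swap (zram here). */
		if (madvise(buf, SZ, MADV_PAGEOUT)) {
			perror("madvise");
			return 1;
		}

		double t0 = now_ms();

		/* Touch one byte per 4KiB page to fault it all back in. */
		for (size_t off = 0; off < SZ; off += 4096)
			sink += buf[off];
		total += now_ms() - t0;
	}
	printf("swap-in: %.0f ms over %d iterations\n", total, ITERS);
	return 0;
}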

[1] https://lore.kernel.org/linux-mm/20240327214816.31191-1-21cnbao@xxxxxxxxx/
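
Regarding Chuanhua's point (1) above, for completeness: when the large
folio's swap slots are contiguous on the device, the read side can use
a single bio spanning the whole folio. A rough sketch in the style of
mm/page_io.c (helper names such as swap_folio_sector() and
end_swap_bio_read reflect recent trees and are my assumption here, not
the exact patch):

static void swap_read_folio_bdev_async(struct folio *folio,
				       struct swap_info_struct *sis)
{
	struct bio *bio;

	/* One bio with a single bio_vec spanning folio_size() bytes. */
	bio = bio_alloc(sis->bdev, 1, REQ_OP_READ, GFP_KERNEL);
	bio->bi_iter.bi_sector = swap_folio_sector(folio);
	bio->bi_end_io = end_swap_bio_read;
	bio_add_folio_nofail(bio, folio, folio_size(folio), 0);
	submit_bio(bio);
}

So a 2M folio becomes one request down the stack instead of 512.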

Thanks
Barry