Re: [PATCH -mm -v4 3/5] mm, swap: VMA based swap readahead

From: Andrew Morton
Date: Wed Sep 13 2017 - 17:02:39 EST


On Wed, 13 Sep 2017 10:40:19 +0900 Minchan Kim <minchan@xxxxxxxxxx> wrote:

> zram users, such as low-end Android devices, have set page-cluster to
> 0 to disable swap readahead, because zram has no seek cost and
> performs IO synchronously; if we read ahead multiple pages, the swap
> fault latency becomes (4K * readahead window size). IOW, readahead is
> meaningful only if it doesn't add to the faulted page's latency.
>
> However, this patch introduces an additional knob, /sys/kernel/mm/swap/
> vma_ra_max_order, alongside page-cluster. That means existing setups
> which disabled swap readahead keep doing readahead until their owners
> become aware of the new knob and modify their scripts/code to disable
> vma_ra_max_order as well as page-cluster.
>
> I say this is a *regression* and wanted to fix it, but Huang's opinion
> is that it's not a functional regression, so userspace should fix
> itself.
> Please see the details of the discussion at
> http://lkml.kernel.org/r/%3C1505183833-4739-4-git-send-email-minchan@xxxxxxxxxx%3E

hm, tricky problem. I do agree that linking the physical and virtual
readahead schemes in the proposed fashion is unfortunate. I also agree
that breaking existing setups (a bit) is unfortunate.
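
For concreteness, the setups Minchan describes presumably look
something like the following (the disable value for the new knob is an
assumption on my part; the patch defines its exact semantics):

	# existing practice: disable physical swap readahead
	echo 0 > /proc/sys/vm/page-cluster

	# additionally needed after this patch (assumed disable value)
	echo 0 > /sys/kernel/mm/swap/vma_ra_max_order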

Would it help if, when page-cluster is written to zero, we do

	printk_once("physical readahead disabled, virtual readahead still enabled. Disable virtual readahead via /sys/kernel/mm/swap/vma_ra_max_order\n");

Or something like that. It's pretty lame, but it should help alert the
zram-readahead-disabling people to the issue?
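
A minimal sketch of how that could be wired up, assuming page-cluster
gets a dedicated sysctl handler (the handler name below is made up;
the stock "page-cluster" table entry in kernel/sysctl.c just points at
proc_dointvec_minmax):

	/* sketch: set as .proc_handler for the "page-cluster" entry */
	static int page_cluster_sysctl_handler(struct ctl_table *table,
			int write, void __user *buffer, size_t *lenp,
			loff_t *ppos)
	{
		int ret = proc_dointvec_minmax(table, write, buffer,
					       lenp, ppos);

		/* warn once when userspace disables physical readahead */
		if (!ret && write && page_cluster == 0)
			printk_once(KERN_WARNING "physical readahead disabled, virtual readahead still enabled. Disable virtual readahead via /sys/kernel/mm/swap/vma_ra_max_order\n");

		return ret;
	}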