Re: [PATCH] vfs: keep inodes with page cache off the inode shrinker LRU

From: Arnd Bergmann
Date: Mon Mar 09 2020 - 09:33:47 EST


On Sun, Mar 8, 2020 at 3:20 PM Russell King - ARM Linux admin
<linux@xxxxxxxxxxxxxxx> wrote:
> On Sun, Mar 08, 2020 at 11:58:52AM +0100, Arnd Bergmann wrote:
> > On Fri, Mar 6, 2020 at 9:36 PM Nishanth Menon <nm@xxxxxx> wrote:
> > > On 13:11-20200226, santosh.shilimkar@xxxxxxxxxx wrote:
>
> > - extend zswap to use all the available high memory for swap space
> > when highmem is disabled.
>
> I don't think that's a good idea. Running debian stable kernels on my
> 8GB laptop, I have problems when leaving firefox running long before
> even half the 16GB of swap gets consumed - the entire machine slows
> down very quickly when it starts swapping more than about 2 or so GB.
> It seems the kernel has become quite bad at selecting pages to
> evict.
>
> It gets to the point where any git operation has a battle to fight
> for RAM, despite not touching anything else other than git.
>
> The behaviour is much like firefox is locking memory into core, but
> that doesn't seem to be what's actually going on. I've never really
> got to the bottom of it though.
>
> This is with 64-bit kernel and userspace.

I agree there is something going wrong on your machine, but I
don't really see how that relates to my suggestion.

> So, I'd suggest that trading off RAM available through highmem for VM
> space available through zswap is likely a bad idea if you have a
> workload that requires 4GB of RAM on a 32-bit machine.

Aside from every workload being different, I was thinking of
these general observations:

- If we are looking at a future without highmem, then it's better to
put the extra memory to some use than to leave it unused. zswap
seems like a reasonable use (see the sketch below).

- A lot of embedded systems are configured to have no swap at all,
which can be for good or not-so-good reasons. Having some swap
space available often improves things, even if that swap space
comes out of RAM (see the zram sketch below).

- A particularly important case to optimize for is 2GB of RAM with
LPAE enabled. With CONFIG_VMSPLIT_2G and highmem, this leads
to the paradoxical -ENOMEM when the 256MB of highmem is full
while plenty of lowmem is still available. With highmem disabled,
you avoid that at the cost of losing about 12% of the RAM
(numbers spelled out below).

- With 4GB+ of RAM and CONFIG_VMSPLIT_2G or
CONFIG_VMSPLIT_3G, using gigabytes of RAM for swap
space would usually be worse than highmem, but once
we have VMSPLIT_4G_4G, it's the same situation as above
with 6% of RAM used for zswap instead of highmem.
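
To be concrete about the zswap point, here is a sketch of what
exists today, where zswap is a compressed cache in front of a real
swap device rather than a replacement for one; using the memory
that would otherwise be highmem as its backing would be the new
part. The pool size below is just an example:

  # build with CONFIG_ZSWAP=y, then e.g. on the kernel command line:
  zswap.enabled=1 zswap.compressor=lzo zswap.max_pool_percent=20

  # or flip it on at runtime:
  echo 1 > /sys/module/zswap/parameters/enabled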
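
For the no-swap systems, a RAM-backed swap device can already be
set up today with zram, which is a different mechanism from zswap
but shows that swap coming out of RAM helps even without a disk.
A minimal sketch, assuming CONFIG_ZRAM and lz4 support are built
in; size and priority are arbitrary:

  modprobe zram
  echo lz4 > /sys/block/zram0/comp_algorithm
  echo 256M > /sys/block/zram0/disksize
  mkswap /dev/zram0
  swapon -p 100 /dev/zram0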
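
And spelling out the 12% and 6% figures from the last two points,
assuming roughly the same 256MB ends up outside the direct map in
both cases:

  # 2GB RAM, CONFIG_VMSPLIT_2G, highmem disabled:
  #   256MB / 2048MB ~= 12% of the RAM is not used
  # 4GB RAM, VMSPLIT_4G_4G, the same ~256MB given to zswap:
  #   256MB / 4096MB ~= 6% of the RAM goes to zswap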

Arnd