Re: [PATCH] vfs: keep inodes with page cache off the inode shrinker LRU

From: Johannes Weiner
Date: Thu May 14 2020 - 07:27:26 EST


On Wed, May 13, 2020 at 02:15:19PM -0700, Andrew Morton wrote:
> On Tue, 12 May 2020 17:29:36 -0400 Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> >
> > ...
> >
> > Solution
> >
> > This patch fixes the aging inversion described above on
> > !CONFIG_HIGHMEM systems, without reintroducing the problems associated
> > with excessive shrinker LRU rotations, by keeping populated inodes off
> > the shrinker LRUs entirely.
> >
> > Currently, inodes are kept off the shrinker LRU as long as they have
> > an elevated i_count, indicating an active user. Unfortunately, the
> > page cache cannot simply hold an i_count reference, because unlink()
> > *should* result in the inode being dropped and its cache invalidated.
> >
> > Instead, this patch makes iput_final() consult the state of the page
> > cache and punt the LRU linking to the VM if the inode is still
> > populated; the VM in turn checks the inode state when it depopulates
> > the page cache, and adds the inode to the LRU if necessary.
> >
> > This is not unlike what we do for dirty inodes, which are moved off
> > the LRU permanently until writeback completion puts them back on (iff
> > still unused). We can reuse the same code -- inode_add_lru() -- here.
> >
> > This is also not unlike page reclaim, where the lower VM layer has to
> > negotiate state with the higher VFS layer. Follow existing precedent
> > and handle the inversion as much as possible on the VM side:
> >
> > - introduce an I_PAGES flag that the VM maintains under the i_lock, so
> > that any inode code holding that lock can check the page cache state
> > without having to lock and inspect the struct address_space
>
> Maintaining the same info in two places is a hassle. Is this
> optimization worthwhile?

Hm, maybe not. I'll try to get rid of it and check the cache / LRU
state directly instead.
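
Roughly what I have in mind -- an untested sketch, not the actual
patch. inode_has_pages() is a made-up helper name here, and the
iput_final() excerpt is abbreviated:

	/*
	 * Sketch: test the cache state directly under i_lock instead
	 * of mirroring it in an I_PAGES flag.
	 */
	static bool inode_has_pages(struct inode *inode)
	{
		return inode->i_mapping->nrpages > 0;
	}

	static void iput_final(struct inode *inode)
	{
		struct super_block *sb = inode->i_sb;
		...
		if (!drop && (sb->s_flags & SB_ACTIVE)) {
			/*
			 * Punt the LRU linking to the VM while the
			 * inode still holds cache. inode_add_lru()
			 * already filters out dirty and referenced
			 * inodes, so the VM side can call it when
			 * the last page is gone.
			 */
			if (!inode_has_pages(inode))
				inode_add_lru(inode);
			spin_unlock(&inode->i_lock);
			return;
		}
		...
	}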

> > - introduce inode_pages_set() and inode_pages_clear() to maintain the
> > inode LRU state from the VM side, then update all cache mutators to
> > use them when populating the first cache entry or clearing the last
> >
> > With this, the concept of "inodesteal" -- where the inode shrinker
> > drops page cache -- is relegated to CONFIG_HIGHMEM systems only. The
> > VM is in charge of the cache, the shrinker in charge of struct inode.
>
> How tested is this on highmem machines?

I don't have a highmem machine, but the new code is #ifdef'd out on
CONFIG_HIGHMEM, so the behavior shouldn't have changed there.
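
The VM-side hooks named above compile down to no-ops on highmem. A
rough sketch of the shape -- the hook bodies here are illustrative,
not lifted from the patch:

	#ifdef CONFIG_HIGHMEM
	static inline void inode_pages_set(struct inode *inode) { }
	static inline void inode_pages_clear(struct inode *inode) { }
	#else
	/*
	 * Called by the cache mutators when the first page is added:
	 * take the unused inode off the shrinker LRU so the shrinker
	 * can't drop its cache behind the VM's back.
	 */
	void inode_pages_set(struct inode *inode)
	{
		spin_lock(&inode->i_lock);
		inode_lru_list_del(inode);
		spin_unlock(&inode->i_lock);
	}

	/*
	 * Called when the last page is gone: if the inode is unused,
	 * it becomes eligible for the shrinker LRU again.
	 */
	void inode_pages_clear(struct inode *inode)
	{
		spin_lock(&inode->i_lock);
		if (!atomic_read(&inode->i_count))
			inode_add_lru(inode);
		spin_unlock(&inode->i_lock);
	}
	#endif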