Re: [PATCH 2/2] [RFC] fadvise: Add _VOLATILE, _ISVOLATILE, and _NONVOLATILE flags

From: Dave Chinner
Date: Tue Feb 14 2012 - 18:51:38 EST


On Mon, Feb 13, 2012 at 09:55:32PM -0800, John Stultz wrote:
> On Tue, 2012-02-14 at 16:16 +1100, Dave Chinner wrote:
> > On Thu, Feb 09, 2012 at 04:16:33PM -0800, John Stultz wrote:
> > > This patch provides new fadvise flags that can be used to mark
> > > file pages as volatile, which will allow it to be discarded if the
> > > kernel wants to reclaim memory.
> > >
> > > This is useful for userspace to allocate things like caches, and lets
> > > the kernel destructively (but safely) reclaim them when there's memory
> > > pressure.
> > .....
> > > @@ -655,6 +656,8 @@ struct address_space {
> > > spinlock_t private_lock; /* for use by the address_space */
> > > struct list_head private_list; /* ditto */
> > > struct address_space *assoc_mapping; /* ditto */
> > > + struct range_tree_node *volatile_root; /* volatile range list */
> > > + struct mutex vlist_mutex; /* protect volatile_list */
> > > } __attribute__((aligned(sizeof(long))));
> >
> > So you're adding roughly 32 bytes to every cached inode in the
> > system? This will increase the memory footprint of the inode cache
> > by 2-5% (depending on the filesystem). Almost no-one will be using
> > this functionality on most inodes that are cached in the system, so
> > that seems like a pretty bad trade-off to me...
>
> Yea. Bloating the address_space is a concern I'm aware of, but for the
> initial passes I left it there to see where folks would prefer it live.
> Pushing the mutex into a range_tree_root structure or something could
> cut this down, but I still suspect it won't be loved. Another idea would
> be to manage the mapping -> range tree separately via something like a
> hash. Do you have any preferences or suggestions here?
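[For reference, the hash-based side table John floats above might look
something like the sketch below: volatile range roots live outside
struct address_space, keyed by the mapping pointer. All names here are
hypothetical, and struct range_tree_node is the type from patch 1/2.]

#include <linux/fs.h>		/* struct address_space */
#include <linux/hash.h>
#include <linux/list.h>
#include <linux/mutex.h>

#define VRANGE_HASH_BITS	7	/* hypothetical table size */

/* one entry per mapping that actually has volatile ranges */
struct vrange_root {
	struct hlist_node	node;
	struct address_space	*mapping;	/* hash key */
	struct range_tree_node	*root;		/* volatile ranges */
	struct mutex		lock;		/* protects root */
};

static struct hlist_head vrange_hash[1 << VRANGE_HASH_BITS];
static DEFINE_MUTEX(vrange_hash_lock);	/* protects the table */

/* caller holds vrange_hash_lock */
static struct vrange_root *vrange_root_lookup(struct address_space *mapping)
{
	struct hlist_head *bkt =
		&vrange_hash[hash_ptr(mapping, VRANGE_HASH_BITS)];
	struct vrange_root *vr;
	struct hlist_node *pos;

	hlist_for_each_entry(vr, pos, bkt, node)
		if (vr->mapping == mapping)
			return vr;
	return NULL;
}

[This keeps struct address_space untouched at the cost of a lookup on
each volatile-range operation, which is arguably fine since marking
ranges is not a hot path.]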

Given that it is a single state bit per page (volatile/non-volatile)
you could just use a radix tree tag for keeping the state. Changing
the state isn't a performance critical operation, and tagging large
ranges isn't that expensive (e.g. we do that in the writeback code),
so I'm not sure the overhead of a separate tree is necessary here....
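[In concrete terms, a minimal sketch of the tag-based marking Dave
describes, in the style of tag_pages_for_writeback(). The tag itself is
hypothetical: mainline only defines the DIRTY/WRITEBACK/TOWRITE page
cache tags, so RADIX_TREE_MAX_TAGS would need to grow by one.]

#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/radix-tree.h>

#define PAGECACHE_TAG_VOLATILE	3	/* hypothetical new tag */

static void mapping_tag_volatile(struct address_space *mapping,
				 pgoff_t start, pgoff_t end)
{
	struct pagevec pvec;
	unsigned int i, nr;

	pagevec_init(&pvec, 0);
	while (start <= end &&
	       (nr = pagevec_lookup(&pvec, mapping, start,
				    min_t(pgoff_t, end - start + 1,
					  PAGEVEC_SIZE)))) {
		for (i = 0; i < nr; i++) {
			struct page *page = pvec.pages[i];

			if (page->index > end) {
				start = end + 1;	/* past range, done */
				break;
			}
			/* tag the existing page cache entry volatile */
			spin_lock_irq(&mapping->tree_lock);
			radix_tree_tag_set(&mapping->page_tree,
					   page->index,
					   PAGECACHE_TAG_VOLATILE);
			spin_unlock_irq(&mapping->tree_lock);
			start = page->index + 1;
		}
		pagevec_release(&pvec);
	}
}

[Reclaim could then find volatile pages with a gang tag lookup instead
of walking a separate range tree, and clearing the tag implements
_NONVOLATILE.]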

That doesn't help with the reclaim side of things, but I would have
thought that such functionality would be better integrated into the
VM page cache/lru scanning code than adding a shrinker that shrinks
the page cache further, on top of what the VM has already reclaimed
before calling the shrinkers. I'm not sure what is best here,
though...
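[For concreteness, the shrinker side being discussed looks roughly
like the sketch below under the 2012-era shrink() callback API.
volatile_purge_pages() and volatile_page_count() are hypothetical
helpers standing in for the patch's range-tree walk, not real kernel
functions.]

#include <linux/mm.h>
#include <linux/shrinker.h>

static int volatile_shrink(struct shrinker *shrinker,
			   struct shrink_control *sc)
{
	/* nr_to_scan == 0 is just a query for the object count */
	if (sc->nr_to_scan)
		volatile_purge_pages(sc->nr_to_scan);

	/* report how many volatile pages remain reclaimable */
	return volatile_page_count();
}

static struct shrinker volatile_shrinker = {
	.shrink	= volatile_shrink,
	.seeks	= DEFAULT_SEEKS,
};

/* at init time: */
register_shrinker(&volatile_shrinker);

[Dave's point is that this only runs after the VM has already done its
normal lru reclaim pass, whereas hooking the volatile state into that
pass directly would let the VM prefer purging volatile pages first.]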

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx