Re: [PATCH 3/3] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks
From: Ben Widawsky
Date: Wed Jul 01 2020 - 16:04:51 EST
On 20-07-01 13:03:01, David Rientjes wrote:
> On Wed, 1 Jul 2020, Dave Hansen wrote:
>
> > diff -puN include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper include/linux/swap.h
> > --- a/include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper 2020-07-01 08:22:13.650955330 -0700
> > +++ b/include/linux/swap.h 2020-07-01 08:22:13.659955330 -0700
> > @@ -12,6 +12,7 @@
> > #include <linux/fs.h>
> > #include <linux/atomic.h>
> > #include <linux/page-flags.h>
> > +#include <uapi/linux/mempolicy.h>
> > #include <asm/page.h>
> >
> > struct notifier_block;
> > @@ -374,6 +375,12 @@ extern int sysctl_min_slab_ratio;
> > #define node_reclaim_mode 0
> > #endif
> >
> > +static inline bool node_reclaim_enabled(void)
> > +{
> > + /* Is any node_reclaim_mode bit set? */
> > + return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
> > +}
> > +
> > extern void check_move_unevictable_pages(struct pagevec *pvec);
> >
> > extern int kswapd_run(int nid);
>
> If a user writes a bit that isn't a RECLAIM_* bit to vm.zone_reclaim_mode
> today, it acts as though RECLAIM_ZONE is enabled: we try to reclaim in
> zonelist order before falling back to the next zone in the page allocator.
> The sysctl doesn't enforce any max value :/ I don't know if there is any
> such user, but this would break them if there is.
>
> Should this simply be return !!node_reclaim_mode?
>
I don't think so, because I don't think anything else validates that the unused
bits remain unused.
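
For reference, the suggested variant would look something like this (just a
sketch for comparison, untested):

	static inline bool node_reclaim_enabled(void)
	{
		/* Preserve today's behavior: any non-zero value enables reclaim. */
		return !!node_reclaim_mode;
	}

That keeps the current "any non-zero value enables reclaim" semantics, whereas
the masked helper ignores bits outside RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP.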