[PATCH 3/3] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks
From: Dave Hansen
Date: Wed Jul 01 2020 - 11:30:05 EST
From: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
RECLAIM_ZONE was assumed to be unused because it was never explicitly
used in the kernel. However, there were a number of places where it
was checked implicitly by checking 'node_reclaim_mode' for a zero
value.
These zero checks are not great because it is not obvious what a zero
mode *means* in the code. Replace them with a helper which makes it
more obvious: node_reclaim_enabled().
This helper also provides a handy place to explicitly check the
RECLAIM_ZONE bit itself. Check it explicitly there to make it more
obvious where the bit can affect behavior.
This should have no functional impact.
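
For reference, node_reclaim_enabled() tests the RECLAIM_* bits which an
earlier patch in this series moves into include/uapi/linux/mempolicy.h
(hence the new include below).  Roughly, as they were defined in
mm/vmscan.c before the move:

#define RECLAIM_ZONE	(1<<0)	/* Run shrink_inactive_list on the zone */
#define RECLAIM_WRITE	(1<<1)	/* Writeout pages during reclaim */
#define RECLAIM_UNMAP	(1<<2)	/* Unmap pages during reclaim */

node_reclaim_mode is set via the vm.zone_reclaim_mode sysctl, so a mode
of zero simply means that none of these bits is set and node reclaim is
entirely off.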
Signed-off-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Ben Widawsky <ben.widawsky@xxxxxxxxx>
Cc: Alex Shi <alex.shi@xxxxxxxxxxxxxxxxx>
Cc: Daniel Wagner <dwagner@xxxxxxx>
Cc: "Tobin C. Harding" <tobin@xxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Qian Cai <cai@xxxxxx>
--
Note: This is not cc'd to stable. It does not fix any bugs.
---
 b/include/linux/swap.h |    7 +++++++
 b/mm/khugepaged.c      |    2 +-
 b/mm/page_alloc.c      |    2 +-
 3 files changed, 9 insertions(+), 2 deletions(-)
diff -puN include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper include/linux/swap.h
--- a/include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper 2020-07-01 08:22:13.650955330 -0700
+++ b/include/linux/swap.h 2020-07-01 08:22:13.659955330 -0700
@@ -12,6 +12,7 @@
 #include <linux/fs.h>
 #include <linux/atomic.h>
 #include <linux/page-flags.h>
+#include <uapi/linux/mempolicy.h>
 #include <asm/page.h>
 
 struct notifier_block;
@@ -374,6 +375,12 @@ extern int sysctl_min_slab_ratio;
 #define node_reclaim_mode 0
 #endif
 
+static inline bool node_reclaim_enabled(void)
+{
+	/* Is any node_reclaim_mode bit set? */
+	return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
+}
+
 extern void check_move_unevictable_pages(struct pagevec *pvec);
 
 extern int kswapd_run(int nid);
diff -puN mm/khugepaged.c~mm-vmscan-node_reclaim_mode_helper mm/khugepaged.c
--- a/mm/khugepaged.c~mm-vmscan-node_reclaim_mode_helper 2020-07-01 08:22:13.652955330 -0700
+++ b/mm/khugepaged.c 2020-07-01 08:22:13.660955330 -0700
@@ -709,7 +709,7 @@ static bool khugepaged_scan_abort(int ni
 	 * If node_reclaim_mode is disabled, then no extra effort is made to
 	 * allocate memory locally.
 	 */
-	if (!node_reclaim_mode)
+	if (!node_reclaim_enabled())
 		return false;
 
 	/* If there is a count for this node already, it must be acceptable */
diff -puN mm/page_alloc.c~mm-vmscan-node_reclaim_mode_helper mm/page_alloc.c
--- a/mm/page_alloc.c~mm-vmscan-node_reclaim_mode_helper 2020-07-01 08:22:13.655955330 -0700
+++ b/mm/page_alloc.c 2020-07-01 08:22:13.662955330 -0700
@@ -3733,7 +3733,7 @@ retry:
 			if (alloc_flags & ALLOC_NO_WATERMARKS)
 				goto try_this_zone;
 
-			if (node_reclaim_mode == 0 ||
+			if (!node_reclaim_enabled() ||
 			    !zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
 				continue;
 
_