[v2 PATCH 1/3] mm/mempolicy: cleanup nodemask intersection check for oom
From: Feng Tang
Date: Fri May 28 2021 - 10:07:22 EST
mempolicy_nodemask_intersects() is used in the OOM path to check whether
a task may have memory allocated on some memory nodes.

As it is only used by the OOM check, rename it to mempolicy_in_oom_domain()
to reduce confusion.

Only for the 'bind' policy is the nodemask a hard restriction on which
nodes memory may be allocated from, so do the intersection check only for
'bind' and return true for all other policies.
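
For illustration only (not part of this patch), a minimal userspace
sketch of why 'bind' is the only hard constraint, assuming libnuma's
<numaif.h> for set_mempolicy() and the MPOL_* constants (link with
-lnuma):

    #include <stdio.h>
    #include <numaif.h>   /* set_mempolicy(), MPOL_BIND, MPOL_PREFERRED */

    int main(void)
    {
            unsigned long nodemask = 1UL << 0;   /* node 0 only */

            /*
             * Hard constraint: with MPOL_BIND all further allocations
             * must come from node 0, so this task cannot be holding
             * memory on nodes outside the mask.
             */
            if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask)))
                    perror("set_mempolicy");

            /*
             * By contrast, 'preferred' is only a hint and 'interleave'
             * may have memory allocated from all nodes in the system,
             * e.g.:
             *
             *   set_mempolicy(MPOL_PREFERRED, &nodemask, 8 * sizeof(nodemask));
             */
            return 0;
    }
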
Suggested-by: Michal Hocko <mhocko@xxxxxxxx>
Signed-off-by: Feng Tang <feng.tang@xxxxxxxxx>
---
 include/linux/mempolicy.h |  2 +-
 mm/mempolicy.c            | 34 +++++++++-------------------------
 mm/oom_kill.c             |  2 +-
 3 files changed, 11 insertions(+), 27 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 5f1c74d..8773c55 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -150,7 +150,7 @@ extern int huge_node(struct vm_area_struct *vma,
 				unsigned long addr, gfp_t gfp_flags,
 				struct mempolicy **mpol, nodemask_t **nodemask);
 extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
-extern bool mempolicy_nodemask_intersects(struct task_struct *tsk,
+extern bool mempolicy_in_oom_domain(struct task_struct *tsk,
 				const nodemask_t *mask);
 extern nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d79fa29..6795a6a 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2094,16 +2094,16 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 #endif
 
 /*
- * mempolicy_nodemask_intersects
+ * mempolicy_in_oom_domain
  *
- * If tsk's mempolicy is "default" [NULL], return 'true' to indicate default
- * policy. Otherwise, check for intersection between mask and the policy
- * nodemask for 'bind' or 'interleave' policy. For 'preferred' or 'local'
- * policy, always return true since it may allocate elsewhere on fallback.
+ * If tsk's mempolicy is "bind", check for intersection between mask and
+ * the policy nodemask. Otherwise, return true for all other policies
+ * including "interleave", as a tsk with "interleave" policy may have
+ * memory allocated from all nodes in system.
  *
  * Takes task_lock(tsk) to prevent freeing of its mempolicy.
  */
-bool mempolicy_nodemask_intersects(struct task_struct *tsk,
+bool mempolicy_in_oom_domain(struct task_struct *tsk,
 					const nodemask_t *mask)
 {
 	struct mempolicy *mempolicy;
@@ -2111,29 +2111,13 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 
 	if (!mask)
 		return ret;
+
 	task_lock(tsk);
 	mempolicy = tsk->mempolicy;
-	if (!mempolicy)
-		goto out;
-
-	switch (mempolicy->mode) {
-	case MPOL_PREFERRED:
-		/*
-		 * MPOL_PREFERRED and MPOL_F_LOCAL are only preferred nodes to
-		 * allocate from, they may fallback to other nodes when oom.
-		 * Thus, it's possible for tsk to have allocated memory from
-		 * nodes in mask.
-		 */
-		break;
-	case MPOL_BIND:
-	case MPOL_INTERLEAVE:
+	if (mempolicy && mempolicy->mode == MPOL_BIND)
 		ret = nodes_intersects(mempolicy->v.nodes, *mask);
-		break;
-	default:
-		BUG();
-	}
-out:
 	task_unlock(tsk);
+
 	return ret;
 }
 
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index eefd3f5..fcc29e9 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -104,7 +104,7 @@ static bool oom_cpuset_eligible(struct task_struct *start,
 			 * mempolicy intersects current, otherwise it may be
 			 * needlessly killed.
 			 */
-			ret = mempolicy_nodemask_intersects(tsk, mask);
+			ret = mempolicy_in_oom_domain(tsk, mask);
 		} else {
 			/*
 			 * This is not a mempolicy constrained oom, so only
--
2.7.4