On 6/7/21 12:31 PM, Aaron Tomlin wrote:

At the present time, in the context of memcg OOM, even when
sysctl_oom_kill_allocating_task is enabled/set, the "allocating"
task cannot be selected as a target for the OOM killer.

This patch removes the restriction entirely.
Signed-off-by: Aaron Tomlin <atomlin@xxxxxxxxxx>
---
mm/oom_kill.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index eefd3f5fde46..3bae33e2d9c2 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -1089,9 +1089,9 @@ bool out_of_memory(struct oom_control *oc)
oc->nodemask = NULL;
check_panic_on_oom(oc);
- if (!is_memcg_oom(oc) && sysctl_oom_kill_allocating_task &&
- current->mm && !oom_unkillable_task(current) &&
- oom_cpuset_eligible(current, oc) &&
+ if (sysctl_oom_kill_allocating_task && current->mm &&
+ !oom_unkillable_task(current) &&
+ oom_cpuset_eligible(current, oc) &&
current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
get_task_struct(current);
oc->chosen = current;
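
As a side note on the knob being discussed: it is the vm.oom_kill_allocating_task
sysctl, exposed as /proc/sys/vm/oom_kill_allocating_task. A minimal reproduction
sketch of the scenario described further down in this thread, assuming the program
is started inside a memory cgroup with a sufficiently low memory limit (the cgroup
setup itself is not shown), could look like this:

/* memhog.c: touch memory until the cgroup limit is hit.
 *
 * Sketch only, not taken from the patch or the report: it assumes the task
 * already runs in a memory cgroup with a low memory limit, so the resulting
 * OOM is memcg-scoped rather than global.  Once the limit is reached, the
 * page-fault charge inside memset() triggers the memcg OOM while this task
 * is both the dominating and the allocating task.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK	(16UL << 20)	/* 16 MiB per step */

int main(void)
{
	unsigned long total = 0;

	for (;;) {
		char *p = malloc(CHUNK);

		/* malloc() rarely fails under overcommit; the memcg charge
		 * failure normally shows up as an OOM kill instead. */
		if (!p)
			break;
		memset(p, 0xa5, CHUNK);	/* fault pages in so they get charged */
		total += CHUNK;
		fprintf(stderr, "charged %lu MiB\n", total >> 20);
	}
	return 0;
}

With vm.oom_kill_allocating_task set to 1, the question in the rest of the
thread is whether this task can then be selected directly as the OOM victim.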

On Mon, Jun 7, 2021 at 9:45 AM Waiman Long <llong@xxxxxxxxxx> wrote:

To provide more context for this patch: we are actually seeing this in a
customer report where an OOM happened in a container, the dominating task
had used up most of the memory, and it happened to be the task that
triggered the OOM, with the result that no killable process could be found.

On 6/7/21 2:43 PM, Shakeel Butt wrote:

Why was there no killable process? What about the process allocating
the memory, or is this remote memcg charging?

On Mon 07-06-21 14:51:05, Waiman Long wrote:

It is because the other processes have an oom_score_adj of -1000, so they
are non-killable. Anyway, they don't consume that much memory and killing
them won't free up that much.
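
For illustration, a minimal sketch (an aside, not part of the original mail):
a process typically becomes exempt in this way when -1000, i.e.
OOM_SCORE_ADJ_MIN, is written to its /proc/<pid>/oom_score_adj, which container
runtimes commonly do for their own infrastructure processes. It assumes the
caller has the required privilege (CAP_SYS_RESOURCE or root):

/* oom_protect.c: mark the current process as OOM-unkillable. */
#include <stdio.h>

int main(void)
{
	/* -1000 is OOM_SCORE_ADJ_MIN: the OOM killer will skip this task.
	 * Lowering the value below its current setting needs CAP_SYS_RESOURCE. */
	FILE *f = fopen("/proc/self/oom_score_adj", "w");

	if (!f) {
		perror("fopen /proc/self/oom_score_adj");
		return 1;
	}
	fprintf(f, "%d\n", -1000);
	fclose(f);

	/* From here on this process (think of a container monitor) cannot be
	 * chosen as an OOM victim. */
	return 0;
}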

The other process, the one that uses most of the memory, is the one that
triggered the OOM kill in the first place, because the memory limit was hit
during a new memory allocation. Under the current logic, this process cannot
be killed at all, even with oom_kill_allocating_task set to 1, when the OOM
happens only within the memcg context rather than in a global OOM situation.
This patch allows this process to be killed under that circumstance.
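
To make the effect of the condition change concrete, here is a toy model of
the oom_kill_allocating_task shortcut. This is not kernel code; it only
mirrors the condition touched by the diff above, and it assumes the other
preconditions (current->mm, not oom_unkillable_task(), cpuset-eligible,
oom_score_adj != OOM_SCORE_ADJ_MIN) hold for the allocating task:

#include <stdbool.h>
#include <stdio.h>

static bool shortcut_fires(bool is_memcg_oom,
			   int sysctl_oom_kill_allocating_task,
			   bool patched)
{
	if (patched)	/* patched: the memcg check is gone */
		return sysctl_oom_kill_allocating_task != 0;
	/* unpatched: the shortcut is a no-op for memcg OOMs */
	return !is_memcg_oom && sysctl_oom_kill_allocating_task != 0;
}

int main(void)
{
	/* The scenario under discussion: memcg OOM with the sysctl set to 1 */
	printf("memcg OOM, unpatched: shortcut %s\n",
	       shortcut_fires(true, 1, false) ? "fires" : "does not fire");
	printf("memcg OOM, patched:   shortcut %s\n",
	       shortcut_fires(true, 1, true) ? "fires" : "does not fire");
	return 0;
}

Unpatched, the !is_memcg_oom(oc) term keeps the shortcut from ever firing for
a memcg OOM, no matter how the sysctl is set; with the patch it depends only
on the sysctl and the allocating task's own eligibility.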

On 6/7/21 3:04 PM, Michal Hocko wrote:

Do you have the oom report? I do not see why the allocating task hasn't
been chosen.

On Mon 07-06-21 15:18:38, Waiman Long wrote:

A partial OOM report below:
[ 8221.433608] memory: usage 21280kB, limit 204800kB, failcnt 49116
:
[ 8227.239769] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
[ 8227.242495] [1611298] 0 1611298 35869 635 167936 0 -1000 conmon
[ 8227.242518] [1702509] 0 1702509 35869 701 176128 0 -1000 conmon
[ 8227.242522] [1703345] 1001050000 1703294 183440 0 2125824 0 999 node
[ 8227.242706] Out of memory and no killable processes...
[ 8227.242731] node invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=999
[ 8227.242732] node cpuset=crio-b8ac7e23f7b520c0365461defb66738231918243586e287bfb9e206bb3a0227a.scope mems_allowed=0-1
So in this case, node cannot kill itself and no other processes are
available to be killed.

Michal Hocko replied:

The process is clearly listed as eligible, so the oom killer should find
it, and if it hasn't then this should be investigated. Which kernel is
this?

Do you happen to have the full report?

Waiman Long replied:

I need to ask to see if I can release the full report.