Re: [PATCH] mm, vmscan: do not loop on too_many_isolated for ever

From: Tetsuo Handa
Date: Fri Jun 30 2017 - 12:00:19 EST


Michal Hocko wrote:
> On Fri 30-06-17 09:14:22, Tetsuo Handa wrote:
> [...]
> > Ping? Ping? When are we going to apply this patch or the watchdog patch?
> > This problem occurs even under not-so-insane stress like the one shown below.
> > I can't test near-OOM situations because the test likely falls into either
> > the printk() vs. oom_lock lockup problem or this too_many_isolated() problem.
>
> So you are saying that the patch fixes this issue. Do I understand you
> correctly? And you do not see any other negative side effects with it
> applied?

I hit this problem using http://lkml.kernel.org/r/20170626130346.26314-1-mhocko@xxxxxxxxxx
on next-20170628. We won't be able to confirm that this patch fixes the issue without
introducing other negative side effects unless the patch is sent to linux-next.git.
But at least we know that even if this patch is sent to linux-next.git, we will still see
bugs like http://lkml.kernel.org/r/201703031948.CHJ81278.VOHSFFFOOLJQMt@xxxxxxxxxxxxxxxxxxx .

>
> I am sorry I didn't have much time to think about feedback from Johannes
> yet. A more robust throttling method is surely due but also not trivial.
> So I am not sure how to proceed. It is true that your last test case
> with only 10 processes fighting resembles reality much better than the
> hundreds (AFAIR) you were using previously.

Even when hundreds of processes are running, most of them are simply blocked inside
open() at down_write() (like the example from serial-20170423-2.txt.xz shown below).
The actual number of processes fighting for memory is always less than 100.

? __schedule+0x1d2/0x5a0
? schedule+0x2d/0x80
? rwsem_down_write_failed+0x1f9/0x370
? walk_component+0x43/0x270
? call_rwsem_down_write_failed+0x13/0x20
? down_write+0x24/0x40
? path_openat+0x670/0x1210
? do_filp_open+0x8c/0x100
? getname_flags+0x47/0x1e0
? do_sys_open+0x121/0x200
? do_syscall_64+0x5c/0x140
? entry_SYSCALL64_slow_path+0x25/0x25

>
> Rik, Johannes what do you think? Should we go with the simpler approach
> for now and think of a better plan longterm?

I'm not in a hurry if we can use the watchdog to check whether this problem is occurring
in the real world. I have to keep testing corner cases because the watchdog is missing.

The watchdog does not introduce negative side effects, will avoid soft lockups like
http://lkml.kernel.org/r/CAM_iQpWuPVGc2ky8M-9yukECtS+zKjiDasNymX7rMcBjBFyM_A@xxxxxxxxxxxxxx ,
will avoid console_unlock() vs. oom_lock mutex lockups caused by warn_alloc(), and
will catch similar bugs which people are failing to reproduce.
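
To be clear, the following is only a hypothetical illustration of the watchdog idea,
not the actual async watchdog patch; stuck_too_long() is a made-up helper standing in
for "this task has been waiting for memory for too long":

  /* Sketch only: headers, Kconfig wiring and thread setup omitted. */
  static int memalloc_watchdog(void *unused)
  {
          struct task_struct *g, *p;

          while (!kthread_should_stop()) {
                  schedule_timeout_interruptible(10 * HZ);
                  rcu_read_lock();
                  for_each_process_thread(g, p) {
                          /* stuck_too_long() (hypothetical) would check how
                           * long @p has been stuck in the allocation/reclaim
                           * path. */
                          if (stuck_too_long(p))
                                  sched_show_task(p);
                  }
                  rcu_read_unlock();
          }
          return 0;
  }

Since such a watchdog only prints diagnostics and never changes allocation/reclaim
behaviour, it cannot introduce the kind of side effects we are worrying about for the
too_many_isolated() change.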