Re: possible deadlock in __wake_up_common_lock

From: Qian Cai
Date: Thu Jan 03 2019 - 14:40:41 EST

On 1/3/19 11:37 AM, Mel Gorman wrote:
> On Wed, Jan 02, 2019 at 07:29:43PM +0100, Dmitry Vyukov wrote:
>>>> This wakeup_kswapd is new due to Mel's 1c30844d2dfe ("mm: reclaim small
>>>> amounts of memory when an external fragmentation event occurs") so CC Mel.
>>> New year new bugs :(
>> Old too :(
> Well, that can ruin a day! Let's see if we can knock one off the list.
>>> While I recognise there is no test case available, how often does this
>>> trigger in syzbot as it would be nice to have some confirmation any
>>> patch is really fixing the problem.
>> This info is always available over the "dashboard link" in the report:
> Noted for future reference.
>> In this case it's 1. I don't know why. Lock inversions are easier to
>> trigger in some sense as information accumulates globally. Maybe one
>> of these stacks is hard to trigger, or maybe all these stacks are
>> rarely triggered on one machine. While the info accumulates globally,
>> none of the machines actually runs for any prolonged time: they all
>> crash right away on hundreds of known bugs.
>> So good that Qian can reproduce this.
> I think this might simply be hard to reproduce. I tried for hours on two
> separate machines and failed. Nevertheless this should still fix it and
> hopefully syzbot picks this up automatically when cc'd. If I hear
> nothing, I'll send the patch unconditionally (and cc syzbot). Hopefully
> Qian can give it a whirl too.
> Thanks
> --8<--
> mm, page_alloc: Do not wake kswapd with zone lock held
> syzbot reported the following and it was confirmed by Qian Cai that a
> similar bug was visible from a different context.
> ======================================================
> WARNING: possible circular locking dependency detected
> 4.20.0+ #297 Not tainted
> ------------------------------------------------------
> syz-executor0/8529 is trying to acquire lock:
> 000000005e7fb829 (&pgdat->kswapd_wait){....}, at:
> __wake_up_common_lock+0x19e/0x330 kernel/sched/wait.c:120
> but task is already holding lock:
> 000000009bb7bae0 (&(&zone->lock)->rlock){-.-.}, at: spin_lock
> include/linux/spinlock.h:329 [inline]
> 000000009bb7bae0 (&(&zone->lock)->rlock){-.-.}, at: rmqueue_bulk
> mm/page_alloc.c:2548 [inline]
> 000000009bb7bae0 (&(&zone->lock)->rlock){-.-.}, at: __rmqueue_pcplist
> mm/page_alloc.c:3021 [inline]
> 000000009bb7bae0 (&(&zone->lock)->rlock){-.-.}, at: rmqueue_pcplist
> mm/page_alloc.c:3050 [inline]
> 000000009bb7bae0 (&(&zone->lock)->rlock){-.-.}, at: rmqueue
> mm/page_alloc.c:3072 [inline]
> 000000009bb7bae0 (&(&zone->lock)->rlock){-.-.}, at:
> get_page_from_freelist+0x1bae/0x52a0 mm/page_alloc.c:3491
> It appears to be a false positive in that the only way the lock
> ordering should be inverted is if kswapd is waking itself and the
> wakeup allocates debugging objects which should already be allocated
> if it's kswapd doing the waking. Nevertheless, the possibility exists
> and so it's best to avoid the problem.
> This patch flags a zone as needing a kswapd wakeup using the,
> surprisingly, unused zone flag field. The flag is read without the lock held to
> do the wakeup. It's possible that the flag setting context is not
> the same as the flag clearing context or for small races to occur.
> However, each race possibility is harmless and there is no visible
> degradation in fragmentation treatment.
> While zone->flags could have continued to be unused, there is potential
> for moving some existing fields into the flags field instead. Particularly
> read-mostly ones like zone->initialized and zone->contiguous.
> Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>

Tested-by: Qian Cai <cai@xxxxxx>