Re: [PATCH 0/6] stackdepot, kasan, workqueue: Avoid expanding stackdepot slabs when holding raw_spin_lock

From: Vlastimil Babka
Date: Fri Sep 10 2021 - 06:50:55 EST


On 9/7/21 22:05, Shuah Khan wrote:
> On 9/7/21 8:13 AM, Marco Elver wrote:
>> Shuah Khan reported [1]:
>>
>>   | When CONFIG_PROVE_RAW_LOCK_NESTING=y and CONFIG_KASAN are enabled,
>>   | kasan_record_aux_stack() runs into "BUG: Invalid wait context" when
>>   | it tries to allocate memory attempting to acquire spinlock in page
>>   | allocation code while holding workqueue pool raw_spinlock.
>>   |
>>   | There are several instances of this problem when block layer tries
>>   | to __queue_work(). Call trace from one of these instances is below:
>>   |
>>   |     kblockd_mod_delayed_work_on()
>>   |       mod_delayed_work_on()
>>   |         __queue_delayed_work()
>>   |           __queue_work() (rcu_read_lock, raw_spin_lock pool->lock held)
>>   |             insert_work()
>>   |               kasan_record_aux_stack()
>>   |                 kasan_save_stack()
>>   |                   stack_depot_save()
>>   |                     alloc_pages()
>>   |                       __alloc_pages()
>>   |                         get_page_from_freelist()
>>   |                           rmqueue()
>>   |                             rmqueue_pcplist()
>>   |                               local_lock_irqsave(&pagesets.lock, flags);
>>   |                               [ BUG: Invalid wait context triggered ]
>>
>> [1]
>> https://lkml.kernel.org/r/20210902200134.25603-1-skhan@xxxxxxxxxxxxxxxxxxx
>>
>> PROVE_RAW_LOCK_NESTING is pointing out that (on RT kernels) the locking
>> rules are being violated. More generally, memory is being allocated from
>> a non-preemptive context (a raw_spin_lock'd critical section) where it
>> is not allowed.
>>
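For anyone who hasn't hit this lockdep check before, the pattern it flags
boils down to something like the following minimal example (hypothetical
locks, not code from the series):

  #include <linux/spinlock.h>

  static DEFINE_RAW_SPINLOCK(outer_raw_lock); /* spins even on PREEMPT_RT */
  static DEFINE_SPINLOCK(inner_lock);         /* sleeps on PREEMPT_RT */

  static void bad_nesting_example(void)
  {
          unsigned long flags;

          raw_spin_lock_irqsave(&outer_raw_lock, flags);
          /*
           * Taking a non-raw lock here is what PROVE_RAW_LOCK_NESTING
           * reports as "BUG: Invalid wait context": on PREEMPT_RT the
           * inner lock can sleep while the raw lock forbids sleeping.
           */
          spin_lock(&inner_lock);
          spin_unlock(&inner_lock);
          raw_spin_unlock_irqrestore(&outer_raw_lock, flags);
  }

In the trace above, the outer lock is the workqueue pool->lock (a
raw_spinlock_t) and the inner one is the page allocator's pagesets.lock,
a local_lock that likewise becomes a sleeping lock on RT.
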
>> To properly fix this, we must prevent stackdepot from replenishing its
>> "stack slab" pool if memory allocations cannot be done in the current
>> context: it's a bug to use either GFP_ATOMIC or GFP_NOWAIT in certain
>> non-preemptive contexts, including raw_spin_locks (see gfp.h and
>> ab00db216c9c7).
>>
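As I read it, the heart of the fix is an opt-out from allocation, which
patch 3 exposes roughly like this (a sketch of the intent only; the
authoritative prototype is in the patch itself):

  #include <linux/gfp.h>
  #include <linux/stackdepot.h>

  /*
   * Like stack_depot_save(), but with an explicit can_alloc flag: when
   * false, stackdepot only reuses already-recorded stacks or already
   * allocated slab space, and returns 0 (no handle) instead of calling
   * into the page allocator to grow the pool.
   */
  depot_stack_handle_t __stack_depot_save(unsigned long *entries,
                                          unsigned int nr_entries,
                                          gfp_t gfp_flags, bool can_alloc);

Callers that may hold a raw_spinlock, like the workqueue path above, pass
can_alloc == false and accept a missing stack trace in the rare case where
the depot would have needed a new slab.
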
>> The only downside is that saving a stack trace may fail if: stackdepot
>> runs out of space AND the same stack trace has not been recorded before.
>> I expect this to be unlikely, and a simple experiment (boot the kernel)
>> didn't result in any failure to record a stack trace from insert_work().
>>
>> The series includes a few minor fixes to stackdepot that I noticed in
>> preparing the series. It then introduces __stack_depot_save(), which
>> exposes the option to force stackdepot to not allocate any memory.
>> Finally, KASAN is changed to use the new stackdepot interface and
>> provide kasan_record_aux_stack_noalloc(), which is then used by
>> workqueue code.
>>
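Concretely, I'd expect the workqueue side to end up as something along
these lines (a sketch, not the literal diff from patch 6):

  /* kernel/workqueue.c */
  static void insert_work(struct pool_workqueue *pwq, struct work_struct *work,
                          struct list_head *head, unsigned int extra_flags)
  {
          /*
           * pool->lock (a raw_spinlock_t) is held here, so recording the
           * aux stack for KASAN reports must not allocate; the _noalloc
           * variant makes stackdepot reuse existing slab space only.
           */
          kasan_record_aux_stack_noalloc(work);
          ...
  }

with kasan_record_aux_stack_noalloc() being a thin wrapper that eventually
calls __stack_depot_save() with can_alloc == false.
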
>> Marco Elver (6):
>>    lib/stackdepot: include gfp.h
>>    lib/stackdepot: remove unused function argument
>>    lib/stackdepot: introduce __stack_depot_save()
>>    kasan: common: provide can_alloc in kasan_save_stack()
>>    kasan: generic: introduce kasan_record_aux_stack_noalloc()
>>    workqueue, kasan: avoid alloc_pages() when recording stack
>>
>>   include/linux/kasan.h      |  2 ++
>>   include/linux/stackdepot.h |  6 +++++
>>   kernel/workqueue.c         |  2 +-
>>   lib/stackdepot.c           | 51 ++++++++++++++++++++++++++++++--------
>>   mm/kasan/common.c          |  6 ++---
>>   mm/kasan/generic.c         | 14 +++++++++--
>>   mm/kasan/kasan.h           |  2 +-
>>   7 files changed, 65 insertions(+), 18 deletions(-)
>>
>
> Thank you. Tested all 6 patches in this series on Linux 5.14. This problem
> also exists in 5.13, so the series should be marked for both the 5.14 and
> 5.13 stable releases.

I think if this problem manifests only with CONFIG_PROVE_RAW_LOCK_NESTING,
then it shouldn't be backported to stable. CONFIG_PROVE_RAW_LOCK_NESTING is
an experimental/development option for discovering early what will collide
with RT lock semantics, without needing the full RT tree.
Thus it's good to fix going forward, but not necessary to backport to stable.

> Here is my
>
> Tested-by: Shuah Khan <skhan@xxxxxxxxxxxxxxxxxxx>
>
> thanks,
> -- Shuah
>