Re: [PATCH v2] workqueue: Don't record workqueue stack holding raw_spin_lock
From: Marco Elver
Date: Mon Sep 06 2021 - 03:13:09 EST
On Thu, Sep 02, 2021 at 05:46PM -0600, Shuah Khan wrote:
[...]
> > 3. Try to not allocate memory in stackdepot. Not sure this is feasible
> > without telling stackdepot to preallocate the max slabs on boot if RT.
> >
>
> We could. I have to ask, though: how many real-world use cases do we
> need to impact for the debug code to work?
>
> > Anything else? Because I don't think any of the options are satisfying.
>
> One option to consider is a dry-run invalid-nesting check, bailing
> out of kasan_record_aux_stack() if it triggers.
Sadly, if lockdep is off, this won't work. And we need a way to
generically fix this, as otherwise we still have a bug (which may also
cause issues on RT kernels).
I propose we properly fix this and prevent stackdepot from replenishing
its "stack slab" pool if memory allocations cannot be done in the
current context. Specifically, I noticed it is technically a bug to use
either GFP_ATOMIC or GFP_NOWAIT in certain non-preemptive contexts,
including under raw_spin_locks (see gfp.h and ab00db216c9c7).
This is what kasan_record_aux_stack() does via stackdepot, and it's a
bug here regardless of whether lockdep is on or off.
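To make this concrete, the call chain at issue looks roughly as follows
(a simplified sketch abbreviated from kernel/workqueue.c, not the
literal source):

	__queue_work()
	  raw_spin_lock(&pool->lock);	/* non-preemptible, also on RT */
	  insert_work()
	    kasan_record_aux_stack(work)
	      stack_depot_save(entries, nr_entries, GFP_NOWAIT)
	        alloc_pages(GFP_NOWAIT)	/* bug: no allocations allowed
					   under a raw_spin_lock */
	  raw_spin_unlock(&pool->lock);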
I've prepared a series (see attached draft patches) that allows telling
stackdepot to not replenish its pool if alloc_pages() cannot be called
at all (i.e. where even GFP_ATOMIC/GFP_NOWAIT do not work).
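In rough terms, the interface is a stack_depot_save() variant with an
explicit "may allocate" argument; the names below are my shorthand and
may not match the attached patches exactly:

	/*
	 * Hypothetical sketch -- exact name and signature may differ in
	 * the attached draft patches:
	 */
	depot_stack_handle_t __stack_depot_save(unsigned long *entries,
						unsigned int nr_entries,
						gfp_t alloc_flags,
						bool can_alloc);

Callers that may hold a raw_spin_lock, such as the workqueue path into
kasan_record_aux_stack(), would pass can_alloc=false; a return value of
0 then means the pool was exhausted and the trace had not been recorded
before, which the caller must tolerate.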
The only downside is that saving a stack trace may fail if stackdepot
runs out of space AND the same stack trace has not been recorded before.
I expect this to be unlikely, and a simple experiment (booting the
kernel) didn't result in any failure to record a stack trace from
insert_work().
I think this is a reasonable trade-off. And considering that we're
unsure whether queuing work can be done from within an outer
raw_spin_lock'ed critical section, I don't see a better way.
If you agree, I'll send this series out for further review.
Thanks,
-- Marco