On Thu, 2 Sept 2021 at 22:01, Shuah Khan <skhan@xxxxxxxxxxxxxxxxxxx> wrote:
When CONFIG_PROVE_RAW_LOCK_NESTING=y and CONFIG_KASAN are enabled,
kasan_record_aux_stack() runs into "BUG: Invalid wait context" when
it tries to allocate memory: it attempts to acquire a spinlock in the
page allocation code while holding the workqueue pool's raw_spinlock.
Fix it by calling kasan_record_aux_stack() only when
CONFIG_PROVE_RAW_LOCK_NESTING is not enabled. Other options were
explored, such as calling kasan_record_aux_stack() after releasing the
pool lock; stubbing out the record here is the least disruptive way to
avoid nesting a spinlock inside a raw spinlock.
Fixes: e89a85d63fb2 ("workqueue: kasan: record workqueue stack")
Signed-off-by: Shuah Khan <skhan@xxxxxxxxxxxxxxxxxxx>
---
Changes since v1:
-- Instead of changing when the record happens, disable recording
when CONFIG_PROVE_RAW_LOCK_NESTING=y
kernel/workqueue.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f148eacda55a..435970ef81ae 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1328,8 +1328,16 @@ static void insert_work(struct pool_workqueue *pwq, struct work_struct *work,
{
struct worker_pool *pool = pwq->pool;
- /* record the work call stack in order to print it in KASAN reports */
+ /*
+ * Record the work call stack in order to print it in KASAN reports.
+ * Doing this when CONFIG_PROVE_RAW_LOCK_NESTING is enabled results
+ * in nesting the page allocation spinlock inside a raw spinlock.
+ *
+ * Avoid recording when CONFIG_PROVE_RAW_LOCK_NESTING is enabled.
+ */
+#if !defined(CONFIG_PROVE_RAW_LOCK_NESTING)
Just "if (!IS_ENABLED(CONFIG_PROVE_RAW_LOCK_NESTING))" should work
here, however...
... PROVE_RAW_LOCK_NESTING exists for PREEMPT_RT's benefit. I don't
think silencing the debugging tool is the solution, because the bug
still exists in a PREEMPT_RT kernel.
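(For reference, the IS_ENABLED() variant I have in mind would look
roughly like the below in insert_work() -- a sketch only, and as said
above I don't think silencing the record is the right fix:)

	/* record the work call stack in order to print it in KASAN reports */
	if (!IS_ENABLED(CONFIG_PROVE_RAW_LOCK_NESTING))
		kasan_record_aux_stack(work);
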
+Cc Sebastian for advice. I may have missed something obvious. :-)
I have a suspicion that kasan_record_aux_stack() (via
stack_depot_save()) is generally unsound on PREEMPT_RT kernels:
memory allocation is preemptible on RT, so it cannot be done within
raw-locked critical sections. Even using GFP_NOWAIT/ATOMIC (which
kasan_record_aux_stack() uses) doesn't help.
It follows that if we do not know what type of locks may be held when
calling kasan_record_aux_stack(), we have a bug on RT.
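To make the nesting concrete (hypothetical snippet; demo_raw_lock and
demo_record() are made up for illustration, the real case is
pool->lock in insert_work()):

#include <linux/kasan.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

static DEFINE_RAW_SPINLOCK(demo_raw_lock);

static void demo_record(struct work_struct *work)
{
	raw_spin_lock(&demo_raw_lock);
	/*
	 * kasan_record_aux_stack() -> stack_depot_save() may end up in the
	 * page allocator, which takes spinlock_t locks (e.g. zone->lock).
	 * On PREEMPT_RT, spinlock_t is a sleeping lock, so acquiring it
	 * inside a raw_spinlock_t section is invalid, GFP_NOWAIT or not.
	 */
	kasan_record_aux_stack(work);
	raw_spin_unlock(&demo_raw_lock);
}
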
I see 3 options:
1. Try to move kasan_record_aux_stack() to where no raw lock is held.
(Seems complicated, per the v1 attempt?)
But ideally we make kasan_record_aux_stack() more robust on RT:
2. Make kasan_record_aux_stack() a no-op on RT (and if
PROVE_RAW_LOCK_NESTING is enabled); rough sketch at the end of this
mail. Perhaps overkill?
3. Try to not allocate memory in stackdepot. Not sure this is feasible
without telling stackdepot to preallocate the max slabs on boot on RT.
Anything else? Because I don't think any of the options are satisfying.
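For 2., the shape of it would be roughly the below (sketch only,
ignoring where exactly the kasan_record_aux_stack() definition lives
and what its existing body does):

void kasan_record_aux_stack(void *addr)
{
	/* Recording may allocate in stackdepot, which is unsafe under raw locks on RT. */
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		return;

	/* ... existing implementation ... */
}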