Re: [PATCH 2/7] stacktrace,sched: Make stack_trace_save_tsk() more robust

From: Peter Zijlstra
Date: Mon Oct 25 2021 - 16:42:53 EST


On Fri, Oct 22, 2021 at 07:01:35PM +0200, Peter Zijlstra wrote:
> On Fri, Oct 22, 2021 at 05:54:31PM +0100, Mark Rutland wrote:
>
> > > Pardon my thin understanding of the scheduler, but I assume this change
> > > doesn't mean stack_trace_save_tsk() stops working for "current", right?
> > > In trying to answer this for myself, I couldn't convince myself what value
> > > current->__state has here. Is it one of TASK_(UN)INTERRUPTIBLE?
> >
> > Regardless of that, current->on_rq will be non-zero, so you're right that this
> > causes stack_trace_save_tsk() to not work for current, e.g.
> >
> > | # cat /proc/self/stack
> > | # wc /proc/self/stack
> > | 0 0 0 /proc/self/stack
> >
> > TBH, I think that (taking a step back from this issue in particular)
> > stack_trace_save_tsk() *shouldn't* work for current, and callers *should* be
> > forced to explicitly handle current separately from blocked tasks.
>
> That..

So I think I'd prefer the following approach to that (and I'm not
currently volunteering for it):

- convert all archs to ARCH_STACKWALK; this gets the semantics out of
  arch code and into the single kernel/stacktrace.c file.

- bike-shed a new/improved stack_trace_save*() API and implement it
  *once* in generic code based on arch_stack_walk() (a sketch of that
  core follows this list).

- convert users; delete the old API, etc.
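
A minimal sketch of that second step, assuming the existing
ARCH_STACKWALK contract (arch_stack_walk() feeding entries to a
stack_trace_consume_fn callback with an opaque cookie); the cookie
handling below mirrors what kernel/stacktrace.c already does today, so
treat it as illustrative rather than the proposed API:

/* Sketch only; any new API would build on this same generic core. */
struct stacktrace_cookie {
	unsigned long	*store;
	unsigned int	size;
	unsigned int	skip;
	unsigned int	len;
};

/* Invoked by arch_stack_walk() once per trace entry. */
static bool stack_trace_consume_entry(void *cookie, unsigned long addr)
{
	struct stacktrace_cookie *c = cookie;

	if (c->len >= c->size)
		return false;

	if (c->skip > 0) {
		c->skip--;
		return true;
	}

	c->store[c->len++] = addr;
	return c->len < c->size;
}

/* One generic implementation; no per-arch save functions needed. */
unsigned int stack_trace_save(unsigned long *store, unsigned int size,
			      unsigned int skipnr)
{
	struct stacktrace_cookie c = {
		.store	= store,
		.size	= size,
		.skip	= skipnr + 1,	/* skip stack_trace_save() itself */
	};

	arch_stack_walk(stack_trace_consume_entry, &c, current, NULL);
	return c.len;
}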

For now, current users of stack_trace_save_tsk() very much expect
tsk==current to work.
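
The obvious user is /proc/<pid>/stack; below is a simplified sketch of
proc_pid_stack() from fs/proc/base.c (the real function takes more
parameters, checks ptrace access, and kmalloc()s the entries array),
which hands whatever task the proc file refers to straight to
stack_trace_save_tsk(). For /proc/self/stack that task is current:

/* Simplified sketch; signature and locking of the real function differ. */
static int proc_pid_stack(struct seq_file *m, struct task_struct *task)
{
	unsigned long entries[MAX_STACK_TRACE_DEPTH];
	unsigned int nr_entries, i;

	/* @task == current when userspace reads /proc/self/stack */
	nr_entries = stack_trace_save_tsk(task, entries,
					  ARRAY_SIZE(entries), 0);

	for (i = 0; i < nr_entries; i++)
		seq_printf(m, "[<0>] %pS\n", (void *)entries[i]);

	return 0;
}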

> > So we could fix this in the stacktrace code with:
> >
> > | diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c
> > | index a1cdbf8c3ef8..327af9ff2c55 100644
> > | --- a/kernel/stacktrace.c
> > | +++ b/kernel/stacktrace.c
> > | @@ -149,7 +149,10 @@ unsigned int stack_trace_save_tsk(struct task_struct *tsk, unsigned long *store,
> > | .skip = skipnr + (current == tsk),
> > | };
> > |
> > | - task_try_func(tsk, try_arch_stack_walk_tsk, &c);
> > | + if (tsk == current)
> > | + try_arch_stack_walk_tsk(tsk, &c);
> > | + else
> > | + task_try_func(tsk, try_arch_stack_walk_tsk, &c);
> > |
> > | return c.len;
> > | }
> >
> > ... and we could rename task_try_func() to blocked_task_try_func(), and
> > later push the distinction into higher-level callers.
>
> I think I favour this fix if we have to. But that's for next week :-)

I ended up with the below delta to this patch.

--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -101,7 +101,7 @@ static bool stack_trace_consume_entry_no
}

/**
- * stack_trace_save - Save a stack trace into a storage array
+ * stack_trace_save - Save a stack trace (of current) into a storage array
* @store: Pointer to storage array
* @size: Size of the storage array
* @skipnr: Number of entries to skip at the start of the stack trace
@@ -132,7 +132,7 @@ static int try_arch_stack_walk_tsk(struc

/**
* stack_trace_save_tsk - Save a task stack trace into a storage array
- * @task: The task to examine
+ * @task: The task to examine (current allowed)
* @store: Pointer to storage array
* @size: Size of the storage array
* @skipnr: Number of entries to skip at the start of the stack trace
@@ -149,13 +149,25 @@ unsigned int stack_trace_save_tsk(struct
.skip = skipnr + (current == tsk),
};

- task_try_func(tsk, try_arch_stack_walk_tsk, &c);
+ /*
+ * If the task doesn't have a stack (e.g., a zombie), the stack is
+ * empty.
+ */
+ if (!try_get_task_stack(tsk))
+ return 0;
+
+ if (tsk == current)
+ try_arch_stack_walk_tsk(tsk, &c);
+ else
+ task_try_func(tsk, try_arch_stack_walk_tsk, &c);
+
+ put_task_stack(tsk);

return c.len;
}

/**
- * stack_trace_save_regs - Save a stack trace based on pt_regs into a storage array
+ * stack_trace_save_regs - Save a stack trace (of current) based on pt_regs into a storage array
* @regs: Pointer to pt_regs to examine
* @store: Pointer to storage array
* @size: Size of the storage array
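
For reference, stack_trace_save_tsk() as it reads with the delta folded
in (reconstructed from the hunks above; try_arch_stack_walk_tsk() and
task_try_func() come from earlier patches in this series, so treat the
exact callback signature as an assumption):

unsigned int stack_trace_save_tsk(struct task_struct *tsk,
				  unsigned long *store, unsigned int size,
				  unsigned int skipnr)
{
	struct stacktrace_cookie c = {
		.store	= store,
		.size	= size,
		/* skip this function if @tsk is current */
		.skip	= skipnr + (current == tsk),
	};

	/*
	 * If the task doesn't have a stack (e.g., a zombie), the stack is
	 * empty.
	 */
	if (!try_get_task_stack(tsk))
		return 0;

	if (tsk == current)
		try_arch_stack_walk_tsk(tsk, &c);
	else
		task_try_func(tsk, try_arch_stack_walk_tsk, &c);

	put_task_stack(tsk);

	return c.len;
}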