Re: [RFC 12/12] x86/dumpstack: Optimize save_stack_trace

From: xinhui
Date: Mon Jun 20 2016 - 03:29:54 EST



On 2016/06/20 12:55, Byungchul Park wrote:
Currently, the x86 implementation of save_stack_trace() walks the entire
stack region word by word, regardless of trace->max_entries. However, it's
unnecessary to keep walking once the caller's requirement is already
fulfilled, that is, once trace->nr_entries >= trace->max_entries.

For example, the CONFIG_LOCKDEP_CROSSRELEASE implementation frequently
calls save_stack_trace() with max_entries = 5. I measured its overhead as
the difference of sched_clock() before and after the call, on my QEMU x86
machine.

The latency improved by over 70% with trace->max_entries = 5.
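A minimal sketch of such a measurement might look like the following; the
wrapper function is hypothetical and not part of the patch:

/*
 * Hypothetical measurement sketch: time save_stack_trace() with
 * sched_clock() and print the delta.  Assumes <linux/stacktrace.h>,
 * <linux/sched.h> and <linux/printk.h>.  Not code from the posted patch.
 */
static void measure_save_stack_trace(void)
{
	unsigned long entries[5];
	struct stack_trace trace = {
		.max_entries	= 5,
		.entries	= entries,
	};
	u64 t0, t1;

	t0 = sched_clock();
	save_stack_trace(&trace);
	t1 = sched_clock();

	pr_info("save_stack_trace() took %llu ns\n",
		(unsigned long long)(t1 - t0));
}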

[snip]

+static int save_stack_end(void *data)
+{
+	struct stack_trace *trace = data;
+	return trace->nr_entries >= trace->max_entries;
+}
+
static const struct stacktrace_ops save_stack_ops = {
	.stack		= save_stack_stack,
	.address	= save_stack_address,
Then why not check the return value of ->address() instead? It returns -1
when there is no room left to store another entry (a rough sketch follows
after the hunk).

	.walk_stack	= print_context_stack,
+	.end_walk	= save_stack_end,
};
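
A rough sketch of that alternative, based on the print_context_stack() of
kernels from around that time (the signature may differ slightly across
versions; the ftrace-graph handling is omitted), rather than the posted
patch:

static inline unsigned long
print_context_stack(struct thread_info *tinfo,
		unsigned long *stack, unsigned long bp,
		const struct stacktrace_ops *ops, void *data,
		unsigned long *end, int *graph)
{
	struct stack_frame *frame = (struct stack_frame *)bp;

	while (valid_stack_ptr(tinfo, stack, sizeof(*stack), end)) {
		unsigned long addr = *stack;

		if (__kernel_text_address(addr)) {
			if ((unsigned long)stack == bp + sizeof(long)) {
				/*
				 * save_stack_address() returns -1 once
				 * trace->nr_entries == trace->max_entries,
				 * so stop the walk right here.
				 */
				if (ops->address(data, addr, 1))
					break;
				frame = frame->next_frame;
				bp = (unsigned long)frame;
			} else if (ops->address(data, addr, 0)) {
				break;
			}
		}
		stack++;
	}
	return bp;
}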

static const struct stacktrace_ops save_stack_ops_nosched = {