Currently, the x86 implementation of save_stack_trace() walks the entire
stack region word by word, regardless of trace->max_entries. However, it
is unnecessary to keep walking once the caller's requirement has been
fulfilled, that is, once trace->nr_entries >= trace->max_entries is true.
For example, the CONFIG_LOCKDEP_CROSSRELEASE implementation frequently
calls save_stack_trace() with max_entries = 5. I measured its overhead
by printing the difference of sched_clock() around the call on my QEMU
x86 machine.
With this patch, the latency improved by over 70% when
trace->max_entries = 5.
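
For reference, a minimal sketch of such a measurement, assuming a kernel
context; the helper name, buffer size, and output format here are
illustrative, not the exact harness that produced the numbers above:

#include <linux/kernel.h>
#include <linux/sched.h>	/* sched_clock() */
#include <linux/stacktrace.h>

/* Illustrative only: time a single save_stack_trace() call. */
static void measure_save_stack_trace(void)
{
	unsigned long entries[5];
	struct stack_trace trace = {
		.nr_entries	= 0,
		.max_entries	= ARRAY_SIZE(entries),	/* same limit as crossrelease */
		.entries	= entries,
		.skip		= 0,
	};
	u64 start = sched_clock();

	save_stack_trace(&trace);
	pr_info("save_stack_trace() took %llu ns\n", sched_clock() - start);
}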
+static int save_stack_end(void *data)
+{
+	struct stack_trace *trace = data;
+	return trace->nr_entries >= trace->max_entries;
+}

Then why not check the return value of ->address()? -1 indicates there
is no room left to store any pointer.
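
That alternative would need no new callback: save_stack_address()
already returns -1 once the buffer is full, so the walk loop itself
could bail out. A minimal sketch of that idea, assuming the loop in
print_context_stack() (arch/x86/kernel/dumpstack.c) is changed to honor
the callback's return value; the exact placement is illustrative:

	while (valid_stack_ptr(tinfo, stack, sizeof(*stack), end)) {
		unsigned long addr = *stack;

		if (__kernel_text_address(addr)) {
			/* save_stack_address() returns -1 when the trace is full. */
			if (ops->address(data, addr, 0))
				break;
		}
		stack++;
	}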
+
static const struct stacktrace_ops save_stack_ops = {
.stack = save_stack_stack,
.address = save_stack_address,
.walk_stack = print_context_stack,
+ .end_walk = save_stack_end,
};
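
Note that the hunk above only registers the callback; the walker change
that actually consults it is not shown in this excerpt. A minimal sketch
of what that check might look like, assuming it sits at the top of the
walk loop in print_context_stack():

	while (valid_stack_ptr(tinfo, stack, sizeof(*stack), end)) {
		unsigned long addr = *stack;

		/* Stop walking once the consumer has all the entries it wants. */
		if (ops->end_walk && ops->end_walk(data))
			break;

		if (__kernel_text_address(addr))
			ops->address(data, addr, 0);
		stack++;
	}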
static const struct stacktrace_ops save_stack_ops_nosched = {