[GIT PULL] tracing: One more small update for 4.11

From: Steven Rostedt
Date: Mon Feb 27 2017 - 10:41:49 EST



Linus,

Commit 79c6f448c8b79c ("tracing: Fix hwlat kthread migration") fixed a
bug that was caused by a race condition in initializing the hwlat
thread. When fixing this code, I realized that it should have been done
differently. Instead of doing the rewrite and sending that to stable,
I sent only the above commit, which fixes the bug and should be
backported.

This pull request contains a commit on top of that quick fix that
rewrites the code the way it should have been written in the first
place, which is why it wasn't included in the previous pull request.


Please pull the latest trace-v4.11-2 tree, which can be found at:


git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
trace-v4.11-2

Tag SHA1: baa2b8d5fc86cc547ad807c6967388285e5780eb
Head SHA1: f447c196fe7a3a92c6396f7628020cb8d564be15


Steven Rostedt (VMware) (1):
tracing: Clean up the hwlat binding code

----
kernel/trace/trace_hwlat.c | 34 +++++++++++++++++-----------------
1 file changed, 17 insertions(+), 17 deletions(-)
---------------------------
commit f447c196fe7a3a92c6396f7628020cb8d564be15
Author: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
Date: Tue Jan 31 16:48:23 2017 -0500

tracing: Clean up the hwlat binding code

Instead of initializing the affinity of the hwlat kthread in the thread
itself, simply set up the initial affinity at thread creation. This
simplifies the code.

Signed-off-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>

diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
index af344a1bf0d0..75fb54a3acb2 100644
--- a/kernel/trace/trace_hwlat.c
+++ b/kernel/trace/trace_hwlat.c
@@ -266,24 +266,13 @@ static int get_sample(void)
static struct cpumask save_cpumask;
static bool disable_migrate;

-static void move_to_next_cpu(bool initmask)
+static void move_to_next_cpu(void)
{
- static struct cpumask *current_mask;
+ struct cpumask *current_mask = &save_cpumask;
int next_cpu;

if (disable_migrate)
return;
-
- /* Just pick the first CPU on first iteration */
- if (initmask) {
- current_mask = &save_cpumask;
- get_online_cpus();
- cpumask_and(current_mask, cpu_online_mask, tracing_buffer_mask);
- put_online_cpus();
- next_cpu = cpumask_first(current_mask);
- goto set_affinity;
- }
-
/*
* If for some reason the user modifies the CPU affinity
* of this thread, than stop migrating for the duration
@@ -300,7 +289,6 @@ static void move_to_next_cpu(bool initmask)
if (next_cpu >= nr_cpu_ids)
next_cpu = cpumask_first(current_mask);

- set_affinity:
if (next_cpu >= nr_cpu_ids) /* Shouldn't happen! */
goto disable;

@@ -330,12 +318,10 @@ static void move_to_next_cpu(bool initmask)
static int kthread_fn(void *data)
{
u64 interval;
- bool initmask = true;

while (!kthread_should_stop()) {

- move_to_next_cpu(initmask);
- initmask = false;
+ move_to_next_cpu();

local_irq_disable();
get_sample();
@@ -366,13 +352,27 @@ static int kthread_fn(void *data)
*/
static int start_kthread(struct trace_array *tr)
{
+ struct cpumask *current_mask = &save_cpumask;
struct task_struct *kthread;
+ int next_cpu;
+
+ /* Just pick the first CPU on first iteration */
+ current_mask = &save_cpumask;
+ get_online_cpus();
+ cpumask_and(current_mask, cpu_online_mask, tracing_buffer_mask);
+ put_online_cpus();
+ next_cpu = cpumask_first(current_mask);

kthread = kthread_create(kthread_fn, NULL, "hwlatd");
if (IS_ERR(kthread)) {
pr_err(BANNER "could not start sampling thread\n");
return -ENOMEM;
}
+
+ cpumask_clear(current_mask);
+ cpumask_set_cpu(next_cpu, current_mask);
+ sched_setaffinity(kthread->pid, current_mask);
+
hwlat_kthread = kthread;
wake_up_process(kthread);