[RFC][PATCH 01/11] sched: Simplify INIT_PREEMPT_COUNT

From: Peter Zijlstra
Date: Tue Sep 29 2015 - 05:45:13 EST


As per commit d86ee4809d03 ("sched: optimize cond_resched()") we need
PREEMPT_ACTIVE to prevent cond_resched() from working before the
scheduler is set up.

However, keeping preemption disabled should already have the same
effect, making the PREEMPT_ACTIVE part entirely redundant.

Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
include/linux/sched.h | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -608,17 +608,14 @@ struct task_cputime_atomic {
/*
* Disable preemption until the scheduler is running.
* Reset by start_kernel()->sched_init()->init_idle().
- *
- * We include PREEMPT_ACTIVE to avoid cond_resched() from working
- * before the scheduler is active -- see should_resched().
*/
-#define INIT_PREEMPT_COUNT (PREEMPT_DISABLED + PREEMPT_ACTIVE)
+#define INIT_PREEMPT_COUNT PREEMPT_DISABLED

/**
* struct thread_group_cputimer - thread group interval timer counts
* @cputime_atomic: atomic thread group interval timers.
* @running: non-zero when there are timers running and
- * @cputime receives updates.
+ * @cputime receives updates.
*
* This structure contains the version of task_cputime, above, that is
* used for thread group CPU timer calculations.

