Re: [PATCH -rt 2/5] Thread Migration Preemption - v2

From: Peter Zijlstra
Date: Sat Jul 14 2007 - 15:08:09 EST


How about something like this?

---

Avoid busy-looping on unmigratable tasks by pushing their migration requests
onto a delayed_migration_queue, which we retry on each wakeup.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
---
kernel/sched.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)

Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -288,6 +288,7 @@ struct rq {

struct task_struct *migration_thread;
struct list_head migration_queue;
+ struct list_head delayed_migration_queue;
#endif

#ifdef CONFIG_SCHEDSTATS
@@ -5623,6 +5624,11 @@ static int migration_thread(void *data)
head = &rq->migration_queue;

if (list_empty(head)) {
+ /*
+ * we got a wakeup, give the delayed list another shot.
+ */
+ if (current->state != TASK_INTERRUPTIBLE)
+ list_splice_init(&rq->delayed_migration_queue, head);
spin_unlock_irq(&rq->lock);
schedule();
set_current_state(TASK_INTERRUPTIBLE);
@@ -5641,8 +5647,7 @@ static int migration_thread(void *data)
* wake us up.
*/
spin_lock_irq(&rq->lock);
- head = &rq->migration_queue;
- list_add(&req->list, head);
+ list_add(&req->list, &rq->delayed_migration_queue);
set_tsk_thread_flag(req->task, TIF_NEED_MIGRATE);
spin_unlock_irq(&rq->lock);
wake_up_process(req->task);
@@ -7006,6 +7011,7 @@ void __init sched_init(void)
rq->cpu = i;
rq->migration_thread = NULL;
INIT_LIST_HEAD(&rq->migration_queue);
+ INIT_LIST_HEAD(&rq->delayed_migration_queue);
#endif
atomic_set(&rq->nr_iowait, 0);


