Re: Make yield_task_fair more efficient
From: Balbir Singh
Date: Thu Feb 21 2008 - 02:45:11 EST
Ingo Molnar wrote:
> * Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx> wrote:
>
>> I disagree. The cost is only adding a field to cfs_rq [...]
>
> wrong. The cost is "only" of adding a field to cfs_rq and _updating it_,
> in the hottest paths of the scheduler:
>
> @@ -256,6 +257,7 @@ static void __enqueue_entity(struct cfs_
> */
> if (key < entity_key(cfs_rq, entry)) {
> link = &parent->rb_left;
> + rightmost = 0;
That's a single store to a local variable, and it only happens on the path where we move leftwards.
> } else {
> link = &parent->rb_right;
> leftmost = 0;
> @@ -268,6 +270,8 @@ static void __enqueue_entity(struct cfs_
> */
> if (leftmost)
> cfs_rq->rb_leftmost = &se->run_node;
> + if (rightmost)
> + cfs_rq->rb_rightmost = &se->run_node;
>
&se->run_node is already in the cache; we are simply assigning it to cfs_rq->rb_rightmost.
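To make the point concrete, here is a minimal sketch (not the actual patch) of what the yield path gains once the cached pointer exists. It assumes the rb_rightmost field added by the hunks above; the helper name below is only illustrative:

	static struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq)
	{
		/* rb_rightmost is the field proposed in the patch above */
		struct rb_node *last = cfs_rq->rb_rightmost;

		if (!last)
			return NULL;
		/* a single load instead of a walk down the tree */
		return rb_entry(last, struct sched_entity, run_node);
	}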
>> [...] For a large number of tasks - say 10000, we need to walk 14
>> levels before we reach the node (each time). [...]
>
> 10,000 yield-ing tasks is not a common workload we care about. It's not
> even a rare workload we care about. _Especially_ we dont care about it
> if it slows down every other workload (a tiny bit).
>
sched_yield() is a supported API; see also http://lkml.org/lkml/2007/9/19/351. I am trying to make sched_yield() efficient when compat_sched_yield is turned on (which it most likely will be), since people will want that behaviour (hint: please read the man page for sched_yield). There are already several applications using sched_yield(), so they all suffer.
>> [...] Doesn't matter if the data is cached, we are still spending CPU
>> time looking through pointers and walking to the right node. [...]
>
> have you actually measured how much it takes to walk the tree that deep
> on recent hardware? I have.
I have measured how much time can be saved by not doing that and it's quite a lot.
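For reference, this is the kind of walk that gets avoided. For 10,000 queued tasks the rightmost node sits roughly 14 levels down, so every yield pays that many dependent pointer dereferences. This is only a sketch of the lookup cost, not scheduler code:

	/* Sketch: finding the rightmost node without a cached pointer
	 * means starting at the root and following rb_right links all
	 * the way down, roughly log2(nr_running) dependent loads. */
	static struct rb_node *walk_to_rightmost(struct rb_root *root)
	{
		struct rb_node *node = root->rb_node;

		if (!node)
			return NULL;
		while (node->rb_right)
			node = node->rb_right;
		return node;
	}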
--
Warm Regards,
Balbir Singh
Linux Technology Center
IBM, ISTL