sch_htb: fix "too many events" situation
From: Chris Wright
Date: Wed Apr 16 2008 - 21:24:01 EST
-stable review patch. If anyone has any objections, please let us know.
---------------------
From: Martin Devera <devik@xxxxxx>
Upstream commit: 8f3ea33a5078a09eba12bfe57424507809367756
HTB is an event-driven algorithm, and part of its work is to apply
scheduled events at the proper times. It tried to defend itself from
livelock by processing only a limited number of events per dequeue.
Because of faster computers, some users have already hit this
hardcoded limit.
This patch limits processing to at most 2 jiffies (why not 1 jiffy?
because processing might stop prematurely when only a fraction of a
jiffy remains).
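For illustration only, here is a minimal user-space sketch of the same
time-budget pattern, bounding a work loop by elapsed time rather than by a
fixed iteration count. This is an analogy, not kernel code: clock_gettime()
and a millisecond budget stand in for jiffies and time_before(), and
process_one_event() is a hypothetical placeholder for dequeuing one HTB event.

	#define _POSIX_C_SOURCE 200809L
	#include <stdio.h>
	#include <time.h>

	/* Hypothetical unit of work standing in for one scheduled HTB event.
	 * Returns 0 once no events remain. */
	static int process_one_event(void)
	{
		static long remaining = 1000000;
		return remaining-- > 0;
	}

	/* Monotonic time in milliseconds, playing the role of jiffies. */
	static long long now_ms(void)
	{
		struct timespec ts;
		clock_gettime(CLOCK_MONOTONIC, &ts);
		return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
	}

	int main(void)
	{
		/* Bound the loop by elapsed time, mirroring the patch: stop
		 * after roughly a 2 ms budget instead of after 500 events. */
		long long stop_at = now_ms() + 2;
		long processed = 0;

		while (now_ms() < stop_at) {
			if (!process_one_event())
				break;
			processed++;
		}
		printf("processed %ld events within the time budget\n", processed);
		return 0;
	}

The design point is the same as in the patch below: on fast machines a fixed
event count is hit long before any meaningful amount of time has passed, so
the budget is expressed in time instead, and leftover work simply resumes on
the next pass.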
Signed-off-by: Martin Devera <devik@xxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Chris Wright <chrisw@xxxxxxxxxxxx>
---
net/sched/sch_htb.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -708,9 +708,11 @@ static void htb_charge_class(struct htb_
*/
static psched_time_t htb_do_events(struct htb_sched *q, int level)
{
- int i;
-
- for (i = 0; i < 500; i++) {
+ /* don't run for longer than 2 jiffies; 2 is used instead of
+ 1 to simplify things when jiffy is going to be incremented
+ too soon */
+ unsigned long stop_at = jiffies + 2;
+ while (time_before(jiffies, stop_at)) {
struct htb_class *cl;
long diff;
struct rb_node *p = rb_first(&q->wait_pq[level]);
@@ -728,9 +730,8 @@ static psched_time_t htb_do_events(struc
if (cl->cmode != HTB_CAN_SEND)
htb_add_to_wait_tree(q, cl, diff);
}
- if (net_ratelimit())
- printk(KERN_WARNING "htb: too many events !\n");
- return q->now + PSCHED_TICKS_PER_SEC / 10;
+ /* too much load - let's continue on next jiffie */
+ return q->now + PSCHED_TICKS_PER_SEC / HZ;
}
/* Returns class->node+prio from id-tree where classe's id is >= id. NULL
--