Re: TTY_FLIPBUF_SIZE too low causing dataloss on too fast serial lines?

Benjamin C.R. LaHaise (blah@kvack.org)
Mon, 31 May 1999 15:29:31 -0400 (EDT)


On Fri, 28 May 1999, Theodore Y. Ts'o wrote:

> Date: Thu, 27 May 1999 00:15:03 +0200
> From: Pavel Machek <pavel@bug.ucw.cz>
...

> Is there some easy way for me to make it work 'just now'? Like calling
> tty_flip_buffer_push from some kind of bh or increasing hz to 1000 or
> ...?
>
> Well, you could set tty->low_latency and then call
> tty_flip_buffer_push(), which will force it out to the line discipline
> right away. Increasing HZ to 1000 is also a possibility.
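
(For concreteness, that workaround amounts to roughly the following in a
serial driver -- a sketch only, untested, using the 2.2 tty names from the
quote above:)

        /* Sketch of the low_latency workaround (untested).  Setting
         * tty->low_latency makes tty_flip_buffer_push() hand the data
         * to the line discipline directly instead of deferring to the
         * timer bottom half.
         */
        tty->low_latency = 1;           /* once, e.g. at open time */

        /* ... then in the receive interrupt, after stashing each char: */
        tty_insert_flip_char(tty, ch, TTY_NORMAL);
        tty_flip_buffer_push(tty);      /* flush to the ldisc immediately */

(Note that the line discipline then runs in interrupt context, which is
the kludgy part.)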

UGH! Please don't kludge around the broken bottom half handling in 2.2.
I've offered the patch below and received a couple of good comments. 2.2
as-is will not run a bottom half if it was marked while bottom halves were
being run (this affects the serial driver when using ppp (SERIAL_BH ->
NET_BH), and any other driver that uses IMMEDIATE_BH to defer things),
which is pretty broken if you ask me. If the following patch isn't the
right way to do things, please make suggestions. The original patch would
rerun bhs indefinitely, whereas this version runs each bh at most once --
would it be preferable to use a counter capping the number of passes, or
to just revert to the behaviour things had in 2.0?
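
To make the ppp case concrete, the chain is roughly the following (the
handler name here is illustrative; mark_bh() is the real 2.2 interface):

        /* Illustrative sketch of the missed-bh case: the serial bottom
         * half hands received data to the line discipline, and for ppp
         * that path ends in mark_bh(NET_BH).  Stock 2.2 snapshots the
         * active bhs once per do_bottom_half(), so a NET_BH marked here
         * waits for the next invocation -- typically the next timer tick.
         */
        static void serial_bottom_half(void)
        {
                /* ... flush flip buffers through the line discipline ... */
                mark_bh(NET_BH);  /* not run until the next do_bottom_half() */
        }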

-ben

--- kernel/softirq.c.orig Sun Mar 21 10:22:00 1999
+++ kernel/softirq.c Thu May 6 21:30:47 1999
@@ -26,45 +26,50 @@
void (*bh_base[32])(void);

/*
- * This needs to make sure that only one bottom half handler
- * is ever active at a time. We do this without locking by
- * doing an atomic increment on the intr_count, and checking
- * (nonatomically) against 1. Only if it's 1 do we schedule
- * the bottom half.
- *
- * Note that the non-atomicity of the test (as opposed to the
- * actual update) means that the test may fail, and _nobody_
- * runs the handlers if there is a race that makes multiple
- * CPU's get here at the same time. That's ok, we'll run them
- * next time around.
+ * Repeatedly run over the bottom halves until there are no more, but only
+ * run each bottom half at most once. If we didn't loop, a bh triggered by
+ * another bh could be delayed a long time; if we looped indefinitely, the
+ * system would lock up completely under a heavy irq load. -bcrl
*/
-static inline void run_bottom_halves(void)
-{
- unsigned long active;
- void (**bh)(void);
-
- active = get_active_bhs();
- clear_active_bhs(active);
- bh = bh_base;
- do {
- if (active & 1)
- (*bh)();
- bh++;
- active >>= 1;
- } while (active);
-}
-
asmlinkage void do_bottom_half(void)
{
int cpu = smp_processor_id();
+ unsigned long left = ~0UL;
+ unsigned long active;
+ void (**bh)(void);

+again:
if (softirq_trylock(cpu)) {
if (hardirq_trylock(cpu)) {
__sti();
- run_bottom_halves();
+
+ while ((active = left & get_active_bhs())) {
+ left &= ~active;
+ clear_active_bhs(active);
+ bh = bh_base;
+ do {
+ if (active & 1)
+ (*bh)();
+ bh++;
+ active >>= 1;
+ } while (active);
+ }
+
__cli();
hardirq_endlock(cpu);
+ softirq_endlock(cpu);
+
+ /*
+ * Avoid the race by checking if any bottom halves
+ * are active after releasing all locks. This is a
+ * rare race, but should be inexpensive to check. -bcrl
+ */
+ rmb();
+ if (get_active_bhs() & left)
+ goto again;
+ return;
}
softirq_endlock(cpu);
}
}
+
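
(For comparison, the loop-counter alternative mentioned above would look
roughly like this in place of the `left' bookkeeping -- a sketch, not a
tested patch; the cap of 10 passes is arbitrary:)

        int passes = 10;        /* arbitrary cap on reruns, illustrative */
        unsigned long active;
        void (**bh)(void);

        /* Rerun any newly-marked bhs, but bound the number of passes so
         * a heavy irq load cannot keep this cpu looping forever.
         */
        while (passes-- && (active = get_active_bhs())) {
                clear_active_bhs(active);
                bh = bh_base;
                do {
                        if (active & 1)
                                (*bh)();
                        bh++;
                        active >>= 1;
                } while (active);
        }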
