> For example, when we take serial or networking interrupts at a high
> speed, we'd often be much better off buffering them for a while,
> and then running the bottom half handler as a tight loop. Much better
> icache behaviour, and often you can get other advantages of
> aggregation (especially for the simple cases like serial
> reception).
Abysmal latency for high-speed network IO?
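Roughly what I imagine the buffering scheme would look like (purely a
hypothetical sketch, not real driver code -- the ring, irq_handler()
and process_event() names are all made up):

	/*
	 * The interrupt handler just stuffs raw events into a ring
	 * and gets out; the bottom half later drains the whole ring
	 * in one tight loop, which keeps the icache warm across
	 * events.
	 */
	#define RING_SIZE 256			/* power of two, so masking works */

	struct event { unsigned char data; };

	static struct event ring[RING_SIZE];
	static volatile unsigned int head;	/* written by the irq handler */
	static volatile unsigned int tail;	/* written by the bottom half */

	/* Hypothetical consumer: hand the event to the protocol layer. */
	static void process_event(struct event *ev)
	{
		/* ... */
	}

	/* Runs with interrupts off: do the bare minimum. */
	static void irq_handler(unsigned char data)
	{
		unsigned int next = (head + 1) & (RING_SIZE - 1);

		if (next != tail) {		/* drop on overflow */
			ring[head].data = data;
			head = next;
		}
		/* mark the bottom half pending here */
	}

	/* Runs later, interrupts enabled: drain everything in one go. */
	static void bh_handler(void)
	{
		while (tail != head) {
			process_event(&ring[tail]);
			tail = (tail + 1) & (RING_SIZE - 1);
		}
	}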
> (Aggregation tends to be bad for latency, which is why it's hard:
> there needs to be some way for the interrupt handler to tell the
> system that "I want bh's run _now_, because I got an important
> packet or I'm close to filling up the queues").
Yes.
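And the "run it _now_" escape hatch might look something like this,
building on the ring above (bh_run_now() and bh_mark_pending() are
hypothetical placeholders for whatever the bh machinery would
actually provide):

	#define RING_HIGH_WATER ((RING_SIZE * 3) / 4)

	/* Entries queued; unsigned wraparound does the right thing. */
	static unsigned int ring_fill(void)
	{
		return (head - tail) & (RING_SIZE - 1);
	}

	/* Hypothetical hooks: run immediately vs. batch for later. */
	static void bh_run_now(void);
	static void bh_mark_pending(void);

	static void irq_handler_urgent(unsigned char data, int urgent)
	{
		unsigned int next = (head + 1) & (RING_SIZE - 1);

		if (next != tail) {
			ring[head].data = data;
			head = next;
		}

		/* The escape hatch: don't aggregate when latency matters. */
		if (urgent || ring_fill() >= RING_HIGH_WATER)
			bh_run_now();
		else
			bh_mark_pending();
	}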
> I don't know if it's really worth pursuing, though.
I think perhaps it is - with 100Mb/s networks latency is already an
issue; presumably it gets worse with gigabit Ethernet, and if we ever
put ATM routing into the kernel, possibly worse again.
Actually, I'm starting to think PCs are evolving in ways that make
reducing network latency harder and harder. There may come a time
when we simply decide that a general-purpose OS doesn't and shouldn't
make a good router, and opt for smarter network cards, or for
multibus systems with really smart network cards.
-cw