Re: NTP dumps Linux, film at 11. [Fwd/FYI]

Linus Torvalds
Wed, 2 Dec 1998 09:29:47 -0800 (PST)

On Wed, 2 Dec 1998, Alan Cox wrote:

> > On Tue, 1 Dec 1998, Theodore Y. Ts'o wrote:
> >
> > > So there are at least a few device drivers which are apparently holding
> > > interrupts off long enough so that we lose a clock interrupt. Not good,
> > > and one of the reasons why any attempts to tweak the Linux scheduler to
> > > make it a "real-time system" simply elicits a smile from me..... oh, if
> > > only it were so easy! :-)
> >
> > it's not that hard or hopeless as it seems. i wrote a tool for exactly
> > this reason, it measures driver delays pretty exactly by instrumenting
> It's also far far worse in 2.1.x because of the io lock and the limited
> locks available to the networking (notably some network device drivers
> trash your performance on a kernel built SMP=1)
> Right now 2.0.x is way way better on this particular count.

I just want to clarify. It's NOT a generic io-lock issue. It's a driver
issue.

The 2.1.x io-lock is essentially locking the _same_ region as 2.0.x used
to lock with global "cli()/sti()".

As such, 2.1.x gets _better_ interrupt latency, simply because there is
less lock contention.

HOWEVER. There are a few SCSI drivers and a few network drivers that were
written for a single-threaded setup, and the code was essentially too
broken for SMP locking. Those drivers have been forced to work - usually
by expanding the lock to cover a larger area. That's simply because 2.0.x
was so single-threaded under SMP that sometimes you didn't need the locks
because you knew that certain things couldn't happen.

So the issue is basically an issue of specific drivers (as tytso said),
not of any new locking mechanism per se. If a driver causes bad interrupt
latency, it's WAY too easy to just blame the io-lock, but let's not do
that. 2.1.x is more parallel, and needs more thought, and not all drivers
have been trivial to adjust for that.

