> I remember when you were reporting problems with the aic7xxx driver
> and cards, and I'll only mention that those reports were at least one
> major and two minor revisions ago.
On the contrary, I have kept rather anal records of my RAID performance
testing. I started using the BT958 and PCI-SC875 around 2.1.48 and began
testing the 2940UW around 2.1.77. I realize that there were problems with
the quality of the Adaptec documentation that was available to you, and I
also admit that the 2940UW had a slight performance advantage over the
BT958 and PCI-SC875. I can't test the 2940UWs any longer, having sold
them. As far as Adaptec goes, I would prefer to play with the AAA-133
three port card.
> Who ever said a controller can't live with that?
Not I. My point is more that MAX_LAT is probably not important at all.
> These registers are a guide. At worst, cards that have small buffers
> and no flow control capability would drop information if these params
> are not met, but that doesn't account for very many devices that I'm
> aware of. The SCSI sub-system in particular is immune to this problem
> (simply quit sending ACKs during a transfer cycle and once the offset
> value of outstanding REQs has been sent, the device will quit
> transferring data, then you can wait forever to get the bus if need
> be). If your bus is congested, then you are going to fail on meeting
> everything's requirements eventually. If your bus isn't congested,
> then the devices can get whatever they want.
The time I have spent studying systems whose PCI busses are running
workloads with more than just disk activity suggests that allowing long
burst transfers is almost always the sanest course of action, especially
if there is any kind of bus bridge involved. I am not saying that short
latencies won't work; they just won't have stellar performance. The
arbitration overhead can gobble up plenty of the available bandwidth, and
forcing retries on serial communications cards by spoon feeding them PCI
bandwidth is usually quite expensive.
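To put rough numbers on it (assuming something like 8 clocks of
arbitration, address phase and turnaround overhead per transaction on a
33MHz/32 bit bus - the exact figure varies from chipset to chipset):

   8 word burst:   8 / (8 + 8)  = 50% of peak, roughly  66MB/s
  64 word burst:  64 / (64 + 8) = 89% of peak, roughly 118MB/s

so spoon feeding a device short bursts can cost you close to half the bus
before any useful data moves at all.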
> So, the only time that there is any justification in saying that a
> device is a [...] is when your PCI bus is overloaded, and then it
> doesn't matter what the device is, they are all going to suffer from
> the bus congestion.
It is important in these cases not to waste the available bandwidth on
arbitration and short bursts; it may be better to force some devices to
live with MAX_LAT values that are quite high. To know for certain, you
probably have to test it in a given configuration.
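If you want to see what a given card is actually asking for, a quick hack
like the one below will dump the values (the /proc path is only an
example - use whatever bus/device/function your box shows under
/proc/bus/pci; the 0x3e/0x3f offsets and the 0.25 microsecond units come
straight from the PCI spec):

/* mingnt.c - print MIN_GNT and MAX_LAT for one PCI function.
 * Usage: mingnt /proc/bus/pci/00/0d.0   (path depends on your machine)
 */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(int argc, char **argv)
{
	unsigned char regs[2];
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s /proc/bus/pci/BB/DD.F\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* MIN_GNT lives at config offset 0x3e, MAX_LAT at 0x3f */
	if (lseek(fd, 0x3e, SEEK_SET) < 0 || read(fd, regs, 2) != 2) {
		perror("read config space");
		return 1;
	}
	printf("MIN_GNT = %u (%.2f us)  MAX_LAT = %u (%.2f us)\n",
	       regs[0], regs[0] * 0.25, regs[1], regs[1] * 0.25);
	close(fd);
	return 0;
}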
> For example, I have two different 3950U2B controllers in my machine
> right now. Each controller has two separate PCI functions. Each
> function reports MIN_GNT as 39 and MAX_LAT as 25. Multiply that times
> 4 and what do you get? Impossible to meet. Why are they so
> particular, well, each function is a separate Ultra2 wide SCSI
> controller and they operate entirely independently of each other, and
> fully in parallel, so the four channels are capable of 320MB/s of data
> transfer.
Fine, but for how long will you be able to read the disks that the
3950U2Bs are controlling at a sustained rate of 320MB/s? On an old
fashioned PCI bus (33MHz and 32 bits wide) the peak disk transfer rate is
something less than 133MB/s - can you coax the disks to sustain even half
of that?
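Spelling out the arithmetic (80MB/s being the nominal rate of one Ultra2
wide channel):

  4 channels x 80MB/s           = 320MB/s of potential SCSI traffic
  33MHz x 4 bytes per data phase = ~133MB/s peak on the PCI bus, before
                                   any arbitration or address overhead

so the controllers can ask for better than twice what the bus underneath
them will ever deliver.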
High speed devices on slow PCI busses seem to be all the more reason to
agree with Shanley: MAX_LAT is probably at best useful as a guide for the
arbiter's prioritization. A nice idea, but one that has been made
obsolete by advances in bandwidth needs.
Ed Welbon welbon@spaminator.bga.com
ln -sf /dev/null cookies