On Thu, 1 Sep 2005, Hidetoshi Seto wrote:

> static inline unsigned int
> ___ia64_inb (unsigned long port)
> {
> 	volatile unsigned char *addr = __ia64_mk_io_addr(port);
> 	unsigned char ret;
> +	unsigned long flags;
> +	read_lock_irqsave(&iochk_lock,flags);
> 	ret = *addr;
> 	__ia64_mf_a();
> +	ia64_mca_barrier(ret);
> +	read_unlock_irqrestore(&iochk_lock,flags);
> +
> 	return ret;
> }

I am extremely concerned about the performance implications of this
implementation. These changes have several deleterious effects on I/O
performance.

The first is serialization of all I/O reads and writes across the
entire system.  This will be a severe problem on systems with large
numbers of PCI buses, the very type of system that stands to gain the
most in reliability from these efforts.  At a minimum, any locking
should be done on a per-bus basis.

The second is the raw performance penalty of acquiring and releasing
a lock on every read and write.  This adds up to a substantial amount
of overhead for any I/O-intensive system, heck even for moderate I/O
levels.

The third is contention on this single global lock: I would fully
expect many dozens of processors to be performing I/O at any given
time on systems of interest, making this a heavily contended lock.
The problem will be even more severe on NUMA systems, as the lock
cacheline bounces across the memory fabric.  A per-bus lock would
again be far more appropriate.

The final problem is that these penalties are paid even by drivers
that are not IOCHK-aware, which for the time being is all of them.
A reasonable solution must impose these costs only on drivers that
are IOCHK-aware.  Reinstating the readX_check() interface is the
obvious solution here.  It is simply too heavy a performance burden
to pay when almost no drivers currently benefit from it.

Separately, I also wonder whether you have any plans to handle similar
errors experienced under device-initiated DMA, or asynchronous I/O.
It's not clear that the current patches provide sufficient
infrastructure to adequately handle those situations.

Thank you,
Brent Casavant