Re: pci_set_mwi() ... why isn't it used more?

From: Ivan Kokshaysky (ink@jurassic.park.msu.ru)
Date: Thu Jan 30 2003 - 18:34:19 EST


On Thu, Jan 30, 2003 at 10:35:25AM -0800, David Brownell wrote:
> I think the first answer is better, but it looks like 2.5.59 will
> set the pci cache line size to 16 bytes not 128 bytes in that case.

Yes, and that looks dangerous: MWI is only safe when the device
transfers whole cache lines, so with a line size programmed smaller
than the real one the device would transfer incomplete cache lines...

> Another option would be to do like SPARC64 and set the cacheline
> sizes as part of DMA enable (which is what I'd first thought of).
> And have the breakage test in the ARCH_PCI_MWI code -- something
> that sparc64 doesn't do, fwiw.

Actually, I think there is nothing wrong with being a bit more
aggressive with MWI and moving all of this into the generic
pci_set_master().
To do it safely, we need (see the sketch below):
- some kind of "broken_mwi" field in struct pci_dev for buggy devices;
  it can be set either by PCI quirks or by the driver before the
  pci_set_master() call;
- an arch-specific pci_cache_line_size() function/macro (instead of
  SMP_CACHE_BYTES) that returns either the actual CPU cache line size
  or some other safe value (including 0, which means "don't enable MWI");
- a check that the device actually supports the desired cache line size,
  i.e. read back the value we've written into the PCI_CACHE_LINE_SIZE
  register and, if it's zero (or dev->broken_mwi == 1), don't enable MWI.

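Something along these lines, as a rough sketch only (the name
pci_generic_set_mwi() is made up, and pci_cache_line_size() plus the
broken_mwi field are the not-yet-existing bits proposed above):

static void pci_generic_set_mwi(struct pci_dev *dev)
{
	u8 line, readback;
	u16 cmd;

	/* Arch supplies a safe line size in bytes; 0 means "never enable MWI". */
	line = pci_cache_line_size();
	if (!line || dev->broken_mwi)
		return;

	/* PCI_CACHE_LINE_SIZE is programmed in 32-bit dwords, not bytes. */
	pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, line / 4);
	pci_read_config_byte(dev, PCI_CACHE_LINE_SIZE, &readback);

	/* If the device ignored the write, it can't do MWI safely. */
	if (readback != line / 4)
		return;

	pci_read_config_word(dev, PCI_COMMAND, &cmd);
	cmd |= PCI_COMMAND_INVALIDATE;
	pci_write_config_word(dev, PCI_COMMAND, cmd);
}

pci_set_master() would then just call something like this after setting
the bus-master bit.
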
Thoughts?

Ivan.