Re: MMIO and gcc re-ordering issue
From: Jeremy Higdon
Date: Sat May 31 2008 - 03:57:31 EST
On Fri, May 30, 2008 at 10:21:00AM -0700, Jesse Barnes wrote:
> On Friday, May 30, 2008 2:36 am Jes Sorensen wrote:
> > James Bottomley wrote:
> > >> The only way to guarantee ordering in the above setup is to either
> > >> make writel() fully ordered or add mmiowb()s between the two
> > >> writel()s. On Altix you have to go and read from the PCI bridge to
> > >> ensure all writes to it have been flushed, which is also what mmiowb()
> > >> is doing. If writel() were to guarantee this ordering, it would make
> > >> every writel() call extremely expensive :-(
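For readers following along, the pattern Jes is describing looks roughly
like this (a sketch only; dev, CMD_REG, and GO_REG are invented for
illustration, not taken from any real driver):

	unsigned long flags;

	/*
	 * Two posted MMIO writes issued under a spinlock. Without the
	 * mmiowb(), writes from two CPUs taking the lock in turn can
	 * still arrive at the device out of lock order on an
	 * Altix-style NUMA fabric.
	 */
	spin_lock_irqsave(&dev->lock, flags);
	writel(cmd, dev->regs + CMD_REG);
	writel(go, dev->regs + GO_REG);
	mmiowb();	/* wait until the writes have reached the bridge */
	spin_unlock_irqrestore(&dev->lock, flags);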
> > >
> > > So if a read from the bridge achieves the same effect, can't we just put
> > > one after the writes within the spinlock (an unrelaxed one)? That way
A relaxed readX would be sufficient (see the sketch after the quoted text
below). It's the next-lowest-cost way (after mmiowb) of ensuring write
ordering between CPUs. A regular readX is the most expensive method (well,
we could probably come up with something worse, but we'd have to work at
it :).
> > > this whole sequence will look like a well-understood PCI posting flush
> > > rather than having to muck around with little-understood (at least by
> > > most driver writers) I/O barriers?
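To make that concrete, the read-back variant would look something like
this (again a sketch; STATUS_REG is made up, and per the point above the
relaxed form is sufficient):

	spin_lock_irqsave(&dev->lock, flags);
	writel(cmd, dev->regs + CMD_REG);
	writel(go, dev->regs + GO_REG);
	/*
	 * Read back from the same device/bridge path: the read cannot
	 * complete until the posted writes ahead of it have been
	 * flushed, so it acts as a PCI posting flush. The relaxed
	 * form avoids the extra ordering cost of a full readl().
	 */
	(void)readl_relaxed(dev->regs + STATUS_REG);
	spin_unlock_irqrestore(&dev->lock, flags);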
> >
> > Hmmm,
> >
> > I think mmiowb() does some sort of status read from the bridge; I am not
> > sure it's enough to just do a regular readl().
> >
> > I'm adding Jeremy to the list, he should know for sure.
>
> I think a read from the target host bridge is enough. What mmiowb() does,
> though, is read a *local* host bridge register, which contains a count of
> the number of PIO ops still "in flight" on their way to their target bridge.
> When it reaches 0, all PIOs have arrived at the target host bridge (they
> may still be buffered), so ordering is guaranteed.
Note that this is the main advantage over a read: there is no round trip
across the NUMA fabric.
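In other words, mmiowb() is a purely node-local poll; conceptually it is
something like the following (illustrative only; pio_status and the mask
are invented names, not the real sn2 definitions):

	static inline void sketch_mmiowb(void)
	{
		/*
		 * pio_status: hypothetical pointer to the node-local
		 * bridge register that counts PIO writes still in
		 * flight. Once the count drains to zero, every
		 * outstanding write has reached its target host
		 * bridge; no round trip across the NUMA fabric needed.
		 */
		while (*(volatile unsigned long *)pio_status &
		       PIO_WRITES_PENDING_MASK)
			cpu_relax();
	}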
jeremy