Re: [PATCH] Document Linux's memory barriers [try #2]
From: Linus Torvalds
Date: Wed Mar 08 2006 - 20:25:53 EST
On Thu, 9 Mar 2006, Paul Mackerras wrote:
>
> ... and x86 mmiowb is a no-op. It's not x86 that I think is buggy.
x86 mmiowb would have to be a real op too if there were any multi-pathed
PCI buses out there for x86, methinks.
Basically, the issue boils down to one thing: no "normal" barrier will
_ever_ show up on the bus on x86 (nor on ia64, afaik). That, together with
any situation where there are multiple paths to one physical device, means
that mmiowb() _has_ to be a special op, and no spinlocks etc will _ever_ do
the serialization you're looking for.
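To make that concrete, this is roughly the driver pattern that needs it
(struct foo_dev, FOO_CMD etc are made-up names, not any real driver):

    static void foo_kick_hw(struct foo_dev *dev, u32 cmd)
    {
        unsigned long flags;

        spin_lock_irqsave(&dev->lock, flags);

        writel(cmd, dev->mmio + FOO_CMD);   /* posted MMIO writes */
        writel(1, dev->mmio + FOO_GO);

        /*
         * Make sure the writes above reach the device before any MMIO
         * done by the next holder of dev->lock, which may be a CPU
         * whose writes take a different path through the fabric.  The
         * unlock below only orders the CPUs, not the IO.
         */
        mmiowb();

        spin_unlock_irqrestore(&dev->lock, flags);
    }

On x86 that mmiowb() costs you nothing; on something like sn2 it's the only
thing keeping writes from two CPUs from reaching the device out of order.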
Put another way: the only way to avoid mmiowb() being special is either
one of:
(a) have the bus fabric itself be synchronizing
(b) pay a huge expense on the much more critical _regular_ barriers
Now, I claim that (b) is just broken. I'd rather take the hit when I need
to, than every time.
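As a sketch of what (b) would mean in practice (flush_io_fabric() is made
up, obviously): you'd end up burying the fabric flush in every regular
barrier, and hence in every unlock, whether or not MMIO is involved.

    /* (b): every unlock pays for the flush, MMIO or not */
    static inline void unlock_variant_b(struct foo_dev *dev)
    {
        flush_io_fabric();          /* hypothetical, and expensive */
        spin_unlock(&dev->lock);
    }

    /* what we do instead: only the paths that care about MMIO pay */
    static inline void unlock_variant_mmiowb(struct foo_dev *dev)
    {
        mmiowb();                   /* no-op on x86, real op on sn2 */
        spin_unlock(&dev->lock);
    }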
Now, (a) is trivial for small cases, but scales badly unless you do some
fancy footwork. I suspect you could do some scalable multi-pathable
version, using approaches similar to the ones the cache coherency protocol
uses to resolve device conflicts (or by having a token-passing thing), but it
seems SGI's solution was fairly well thought out.
That said, when I heard of the NUMA IO issues on the SGI platform, I was
initially pretty horrified. It seems to have worked out ok, and as long as
we're talking about machines where you can concentrate on validating just
a few drivers, it seems to be a good tradeoff.
Would I want the hard-to-think-about IO ordering on a regular desktop
platform? No.
Linus