On Mon, Nov 17, 2014 at 01:11:57PM -0800, Alexander Duyck wrote:
> On 11/17/2014 12:18 PM, Paul E. McKenney wrote:
> > On Mon, Nov 17, 2014 at 09:18:13AM -0800, Alexander Duyck wrote:
> > > There are a number of situations where the mandatory barriers rmb() and
> > > wmb() are used to order memory/memory operations in the device drivers
> > > and those barriers are much heavier than they actually need to be.  For
> > > example in the case of PowerPC wmb() calls the heavy-weight sync
> > > instruction when for memory/memory operations all that is really needed is
> > > an lwsync or eieio instruction.
> >
> > Is this still the case if one of the memory operations is MMIO?  Last
> > I knew, it was not.
>
> This barrier is not meant for use in MMIO operations; for that you
> still need a full barrier, as I call out in the documentation
> section.  What the barrier does is allow for a lightweight barrier
> for accesses to coherent system memory.  So for example many device
> drivers have to perform a read of the descriptor to see if the
> device is done with it.  We need an rmb() following that check to
> prevent any other accesses.
>
> Right now on x86 that rmb() becomes an lfence instruction and is
> quite expensive, and as it turns out we don't need it since the x86
> doesn't reorder reads.  The same kind of thing applies to PowerPC,
> only in that case we use a sync when what we really need is a
> lwsync.

Would it make sense to have a memory barrier that enforced the
non-store-buffer orderings, that is, prior reads before later
reads and writes, and prior writes before later writes?  This was
discussed earlier this year (http://lwn.net/Articles/586838/,
https://lwn.net/Articles/588300/).  If I recall correctly, one of
the biggest obstacles was the name.  ;-)
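(For concreteness, the descriptor-polling pattern described above looks
roughly like the following; the structure layout and flag name are made
up purely for illustration and are not from any particular driver:)

struct my_rx_desc {		/* hypothetical descriptor in coherent DMA memory */
	u32 status;
	u32 length;
	u64 addr;
};
#define MY_RXD_DD	0x1	/* "descriptor done" bit, written by the device */

static bool my_rx_desc_done(const struct my_rx_desc *desc)
{
	/* Has the device handed this descriptor back to the CPU? */
	if (!(ACCESS_ONCE(desc->status) & MY_RXD_DD))
		return false;

	/*
	 * Order the status read above before any later reads of the
	 * rest of the descriptor or of the packet data.  Today this has
	 * to be the mandatory rmb(); the proposal is that a lighter
	 * barrier would suffice here because only coherent memory is
	 * involved (on x86 that would drop the lfence entirely).
	 */
	rmb();
	return true;
}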
> > > This commit adds a fast (and loose) version of the mandatory memory
> > > barriers rmb() and wmb().  The prefix to the name is actually based on the
> > > version of the functions that already exist in the mips and tile trees.
> > > However I thought it applicable since it gets at what we are trying to
> > > accomplish with these barriers and somewhat implies their risky nature.
> > >
> > > These new barriers are not as safe as the standard rmb() and wmb().
> > > Specifically they do not guarantee ordering between cache-enabled and
> > > cache-inhibited memories.  The primary use case for these would be to
> > > enforce ordering of memory reads/writes when accessing cache-enabled memory
> > > that is shared between the CPU and a device.
> > >
> > > It may also be noted that there is no fast_mb().  This is due to the fact
> > > that most architectures didn't seem to have a good way to do a full memory
> > > barrier quickly and so they usually resorted to an mb() for their smp_mb()
> > > call.  As such there is no point in adding a fast_mb() function if it is
> > > going to map to mb() for all architectures anyway.
> >
> > I must confess that I still don't entirely understand the motivation.
>
> The motivation is to provide finer-grained barriers.  So this
> provides an in-between that allows us to "choose the right hammer".
> In the case of PowerPC it is the difference between sync/lwsync, on
> ARM it is dsb()/dmb(), and on x86 it is lfence/barrier().

Ah, so ARM will motivate a fast_wmb(), given its instruction set.
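To spell the proposed mapping out (a condensed sketch of what the
description above implies, not the literal per-arch patches -- the exact
instructions and barrier variants may well differ):

/* Illustrative sketch only; see the individual arch patches for the real thing. */
#if defined(CONFIG_X86)
#define fast_rmb()	barrier()	/* x86 does not reorder reads to cacheable memory */
#define fast_wmb()	barrier()	/* nor writes, so a compiler barrier suffices */
#elif defined(CONFIG_PPC)
#define fast_rmb()	__asm__ __volatile__ ("lwsync" : : : "memory")
#define fast_wmb()	__asm__ __volatile__ ("lwsync" : : : "memory")	/* or eieio */
#elif defined(CONFIG_ARM)
#define fast_rmb()	dmb()		/* dmb instead of the heavier dsb */
#define fast_wmb()	dmb()
#else
#define fast_rmb()	rmb()		/* fall back to the mandatory barriers */
#define fast_wmb()	wmb()
#endif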
<snip>

> > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > index cb6d66c..f480097 100644
> > > --- a/arch/powerpc/include/asm/barrier.h
> > > +++ b/arch/powerpc/include/asm/barrier.h
> > > @@ -36,22 +36,20 @@
> > >  
> > >  #define set_mb(var, value)	do { var = value; mb(); } while (0)
> > >  
> > > -#ifdef CONFIG_SMP
> > > -
> > >  #ifdef __SUBARCH_HAS_LWSYNC
> > >  #    define SMPWMB      LWSYNC
> > >  #else
> > >  #    define SMPWMB      eieio
> > >  #endif
> > >  
> > > -#define __lwsync()	__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
> > > +#define fast_rmb()	__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
> > > +#define fast_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > >  
> > > +#ifdef CONFIG_SMP
> > >  #define smp_mb()	mb()
> > > -#define smp_rmb()	__lwsync()
> > > -#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> > > +#define smp_rmb()	fast_rmb()
> > > +#define smp_wmb()	fast_wmb()
> > >  #else
> > > -#define __lwsync()	barrier()
> > > -
> > >  #define smp_mb()	barrier()
> > >  #define smp_rmb()	barrier()
> > >  #define smp_wmb()	barrier()
> > > @@ -69,10 +67,16 @@
> > >  #define data_barrier(x)	\
> > >  	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > >  
> > > +/*
> > > + * The use of smp_rmb() in these functions are actually meant to map from
> > > + * smp_rmb()->fast_rmb()->LWSYNC.  This way if smp is disabled then
> > > + * smp_rmb()->barrier(), or if the platform doesn't support lwsync it will
> > > + * map to the more heavy-weight sync.
> > > + */
> > >  #define smp_store_release(p, v)						\
> > >  do {									\
> > >  	compiletime_assert_atomic_type(*p);				\
> > > -	__lwsync();							\
> > > +	smp_rmb();							\
> >
> > This is not good at all.  For smp_store_release(), we absolutely
> > must order prior loads and stores against the assignment on the following
> > line.  This is not something that smp_rmb() does, nor is it something
> > that smp_wmb() does.  Yes, it might happen to work now, but this could easily
> > break in the future -- plus this change is extremely misleading.
> >
> > The original __lwsync() is much more clear.
>
> The problem I had with __lwsync is that it really wasn't all that
> clear.  It was the lwsync instruction if SMP was enabled, otherwise
> it was just a barrier() call.  What I did is move the definition of
> __lwsync in the SMP case into fast_rmb(), which in turn is accessed by
> smp_rmb().  I tried to make this clear in the comment just above the
> two calls.  The resultant assembly code should be exactly the same.
>
> What I could do is have it added back as smp_lwsync() if that works
> for you.  That way there is something there to give you a hint that
> it becomes a barrier() call as soon as SMP is disabled.

An smp_lwsync() would be a great improvement!
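Something along these lines is what I would expect (an untested sketch,
just to illustrate; the real patch would of course need review):

#ifdef CONFIG_SMP
#define smp_lwsync()	__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
#else
#define smp_lwsync()	barrier()
#endif

#define smp_store_release(p, v)						\
do {									\
	compiletime_assert_atomic_type(*p);				\
	/* order prior loads AND stores before the store below */	\
	smp_lwsync();							\
	ACCESS_ONCE(*p) = (v);						\
} while (0)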
Thanx, Paul