Re: [isocpp-parallel] Proposal for new memory_order_consume definition
From: Paul E. McKenney
Date: Sat Feb 27 2016 - 18:10:45 EST
On Sat, Feb 27, 2016 at 11:16:51AM -0800, Linus Torvalds wrote:
> On Feb 27, 2016 09:06, "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
> wrote:
> >
> >
> > But we do already have something very similar with signed integer
> > overflow. If the compiler can see a way to generate faster code that
> > does not handle the overflow case, then the semantics suddenly change
> > from two's-complement arithmetic to something very strange. The standard
> > does not specify all the ways that the implementation might deduce that
> > faster code can be generated by ignoring the overflow case; it instead
> > simply says that signed integer overflow invokes undefined behavior.
> >
> > And if that is a problem, you use unsigned integers instead of signed
> > integers.
>
> Actually, in the case of the Linux kernel we just tell the compiler to
> not be an ass. We use
>
> -fno-strict-overflow
That is the one!
> or something. I forget the exact compiler flag needed for "the standard is
> a broken piece of shit and made things undefined for very bad reasons".
>
> See also the idiotic standard C alias rules. Same deal.
For which we use -fno-strict-aliasing.
> So no, standards aren't that important. When the standards screw up, the
> right answer is not to turn the other cheek.
Agreed, hence my current (perhaps quixotic and insane) attempt to get
the standard to do something useful for dependency ordering. But if
that doesn't work, yes, a fallback position is to get the relevant
compilers to provide flags to avoid problematic behavior, similar to
-fno-strict-overflow.
Thanx, Paul
> And undefined behavior is pretty much *always* a sign of "the standard is
> wrong".
>
> Linus