Re: [isocpp-parallel] Proposal for new memory_order_consume definition
From: Markus Trippelsdorf
Date: Sun Feb 28 2016 - 03:27:27 EST
On 2016.02.27 at 15:10 -0800, Paul E. McKenney via llvm-dev wrote:
> On Sat, Feb 27, 2016 at 11:16:51AM -0800, Linus Torvalds wrote:
> > On Feb 27, 2016 09:06, "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
> > wrote:
> > >
> > >
> > > But we do already have something very similar with signed integer
> > > overflow. If the compiler can see a way to generate faster code that
> > > does not handle the overflow case, then the semantics suddenly change
> > > from two's-complement arithmetic to something very strange. The
> > > standard does not specify all the ways that the implementation might
> > > deduce that faster code can be generated by ignoring the overflow
> > > case; it instead simply says that signed integer overflow invokes
> > > undefined behavior.
> > >
> > > And if that is a problem, you use unsigned integers instead of signed
> > > integers.
> >
> > Actually, in the case of the Linux kernel we just tell the compiler to
> > not be an ass. We use
> >
> > -fno-strict-overflow
>
> That is the one!
>
> > or something. I forget the exact compiler flag needed for "the standard
> > is a broken piece of shit and made things undefined for very bad
> > reasons".
> >
> > See also the idiotic standard C alias rules. Same deal.
>
> For which we use -fno-strict-aliasing.
Do not forget -fno-delete-null-pointer-checks.
So the kernel is obviously already using its own C dialect, one that is
pretty far from standard C.
All these options also have a negative impact on the performance of the
generated code.
--
Markus