Re: [isocpp-parallel] Proposal for new memory_order_consume definition
From: Michael Matz
Date: Mon Feb 29 2016 - 12:37:11 EST
Hi,
On Sun, 28 Feb 2016, Linus Torvalds wrote:
> > So the kernel obviously is already using its own C dialect, that is
> > pretty far from standard C. All these options also have a negative
> > impact on the performance of the generated code.
>
> They really don't.
They do.
> Have you ever seen code that cared about signed integer overflow?
>
> Yeah, getting it right can make the compiler generate an extra ALU
> instruction once in a blue moon, but trust me - you'll never notice.
> You *will* notice when you suddenly have a crash or a security issue
> due to bad code generation, though.
No, that's not at all the important aspect of making signed overflow
undefined. The important part is induction variables controlling
loops:
  short i;          for (i = start; i < end; i++)
vs.
  unsigned short u; for (u = start; u < end; u++)
For the former, the compiler is allowed to assume that the loop
terminates and that its iteration count is easily computable, because a
signed induction variable is assumed never to wrap around. For the
latter you get modulo arithmetic, and (if start/end are of a larger type
than u, say 'int') the loop might not terminate at all. That has direct
consequences for the vectorizability of such loops (or for the
profitability of that transformation), and hence quite important
performance implications in practice. Not for the kernel, of course.
Now we can endlessly debate how (im)practical it is to write HPC code
in C or C++, but there we are.
> The fact is, undefined compiler behavior is never a good idea. Not for
> serious projects.
Perhaps if these instances of undefined behavior hadn't been put into
the standard, people wouldn't have written HPC code in C, and if that
were so the world would sometimes be a nicer place (certainly for the
compiler). Alas, it isn't.
Ciao,
Michael.