Re: Q: why didn't GCC warn about this uninitialized variable? (was: Re: [PATCH] perf tests: initialize sa.sa_flags)
From: Ingo Molnar
Date: Thu Mar 03 2016 - 09:53:36 EST
* Jakub Jelinek <jakub@xxxxxxxxxx> wrote:
> On Thu, Mar 03, 2016 at 02:47:16PM +0100, Ingo Molnar wrote:
> > I tried to distill a testcase out of it, and the following silly hack seems to
> > trigger it:
>
> ...
>
> This is a known issue, which we don't have a solution for yet.
> The thing is, GCC has 2 uninitialized warning passes. One is done
> very early, on fairly unoptimized code; for -O and above it warns
> only about must-be-uninitialized cases in code that is executed
> unconditionally (if the containing function is executed) and doesn't
> have PHI handling code. Then there is a very late uninitialized pass,
> which also warns about maybe-uninitialized cases, has predicate-aware
> handling in it, etc.; but it warns only about the cases where the
> uninitialized uses survived through the optimizations until that phase.
> In the testcase, the conditional uninitialized uses got optimized away:
> passes, seeing that alt_idx can be initialized to, say, 2 from one branch
> and left uninitialized from the other, just optimize it into 2.
> Warning right away at the spot where the optimization pass performs this
> might not be the right thing: it could warn for dead code, and the warning
> couldn't be backed up by the predicate-aware uninit analysis, which is
> costly and can't be done in every pass that happens to optimize away
> some uninitialized stuff. Not to mention that it isn't always even that
> obvious to the optimizing pass. Say, when computing value ranges,
> the uninitialized uses should be ignored, because they can't occur on
> valid paths; so if, say, you have the value range [2, 34] from one branch
> and an uninitialized use from the other branch, the resulting value range
> will be [2, 34]. Later on, you just optimize based on this value range,
> and perhaps the uninitialized use will go away because of that.
> We could handle the uninitialized uses pessimistically, by not optimizing
> PHI <initialized_2, uninited_3(D)> into just initialized_2, etc., by
> considering uninitialized uses as VARYING ([min, max] range) rather than
> something that doesn't happen, etc., and then the late uninitialized pass
> would warn here. But then we'd trade the warning for less optimized code.
> GCC is primarily an optimizing compiler rather than a static analyzer,
> which is why GCC chooses to do what it does. Do you want us to introduce
> an -Ow mode, which would prefer warnings over generated code quality?
Yes, -Ow would be very useful, if it can 'guarantee' that no false negatives slip
through:
It could be combined with the following 'safe' runtime behavior: when built with
-Ow, all uninitialized values are initialized to 0. This should be relatively
easy to implement, as it does not depend on any optimization. After all is said
and done, there are two cases:
- a 0-initialization gets optimized out by an optimization pass. This is the
common case.
- a variable gets initialized to 0 unnecessarily (if a warning was ignored).
Having some runtime overhead for zero initialization is much preferred by many
projects.
The warning could even be generated at this late stage: i.e. it would simply
flag the remaining 0-initializations that previous passes were unable to
eliminate.
This way no nondeterministic, random, uninitialized (and, worst case, attacker
controlled) values can ever enter the program flow (from the stack) - in the
worst case (where a warning was ignored) an implicit 0 value is set, which is
still deterministic behavior.
This is one of the big plusses of managed languages - and we could bring it to C
as well.
Thanks,
Ingo