Re: [GIT PULL] scheduler fixes

From: Ingo Molnar
Date: Mon Nov 18 2019 - 03:03:46 EST



* Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:

> On Sun, Nov 17, 2019 at 2:20 AM Valentin Schneider
> <valentin.schneider@xxxxxxx> wrote:
> >
> > AFAIUI the requirement for the enum type is that it has to be an int
> > type that covers all its values, so I could see some funky
> > optimization (e.g. check the returned value is < 512 but it's assumed
> > the type for the enum is 8 bits so this becomes always true). Then
> > again we don't have any explicit check on those returned values, plus
> > they fit in 11 bits, so as you say it's most likely inconsequential
> > (and I didn't see any compile diff).
>
> Gcc can - and does - narrow enums to smaller integer types with the
> '-fshort-enums' flag.

Good point - but at least according to the GCC 9.2.1 documentation,
-fshort-enums is a non-default code generation option:

Options for Code Generation Conventions

These machine-independent options control the interface
conventions used in code generation.

Most of them have both positive and negative forms; the negative
form of -ffoo is -fno-foo. In the table below, only one of the
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
forms is listed---the one that is not the default. You can figure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
out the other form by either removing no- or adding it.

[...]

-fshort-enums

Allocate to an "enum" type only as many bytes as it needs for
the declared range of possible values. Specifically, the
"enum" type is equivalent to the smallest integer type that
has enough room.

Warning: the -fshort-enums switch causes GCC to generate code
that is not binary compatible with code generated without that
switch. Use it to conform to a non-default application binary
interface.

Unless this option is used, AFAIK GCC will treat an enum as "int" if at
least one enumeration constant is negative, and as "unsigned int"
otherwise.
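
For reference, here is a minimal, purely illustrative userspace sketch
(not kernel code) of what that default means in practice:

  #include <stdio.h>

  /* With the default ABI both of these enums occupy sizeof(int) == 4 bytes;
   * only the signedness of the underlying type differs: */
  enum small_unsigned { SU_A = 0,  SU_B = 200 };  /* -> unsigned int */
  enum small_signed   { SS_A = -1, SS_B = 100 };  /* -> int */

  int main(void)
  {
      printf("%zu %zu\n",
             sizeof(enum small_unsigned), sizeof(enum small_signed));
      /* Prints "4 4" by default; with -fshort-enums both would shrink to a
       * single byte, since their declared ranges fit in 8 bits. */
      return 0;
  }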

The only current reference to the non-standard -fshort-enums option
within the kernel source is the Hexagon arch, which (seemingly
unnecessarily) disables the option:

arch/hexagon/Makefile:KBUILD_CFLAGS += -fno-short-enums

That flag came with the original Hexagon commits, 8 years ago:

e95bf452a9e22 (Richard Kuo 2011-10-31 18:55:58 -0500 10)# Do not use single-byte enums; these will overflow.
e95bf452a9e22 (Richard Kuo 2011-10-31 18:55:58 -0500 11)KBUILD_CFLAGS += -fno-short-enums

Maybe they had a GCC build where it was on by default? Or GCC changed
this option sometime in the past? Or it's simply an unnecessary but
harmless code generation flag added out of paranoia?

Out of curiosity I searched all the historic trees; none ever made use of
the -f*short-enums option, so I don't think this is a GCC option we ever
actively utilized or ran into.

> However, in practice nobody uses that, and it can cause interop
> problems. So I think for us, enums are always at least 'int' (they can
> be bigger).

Yeah, the GCC documentation specifically warns that it breaks the ABI:
the size of structs using enums will generally change from 4 bytes to 1
or 2 bytes, and function call signatures will change incompatibly as
well.
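
To illustrate (with a made-up struct, not actual kernel code), the layout
shift would look something like this:

  enum state { STATE_IDLE, STATE_RUNNING, STATE_DONE };  /* fits in a byte */

  struct task_info {
      enum state    state;  /* 4 bytes by default, 1 byte with -fshort-enums */
      unsigned char flags;
  };

  /* sizeof(struct task_info) is typically 8 with the default ABI (4-byte
   * enum, 1-byte flags, 3 bytes of padding), but only 2 with -fshort-enums,
   * and the offset of 'flags' moves from 4 to 1, so objects built with and
   * without the switch cannot safely exchange such structs. */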

BTW., -fshort-enums looks like a bad code generation option to me, on x86
at least, because it will also use 16-bit widths, which are generally a
bad idea on x86. If it limited itself to u8 and 32-bit types it could
even be useful.

Also, I wouldn't be surprised if the kernel ABI broke if we attempted to
use -fshort-enums; I bet there's a lot of accidental reliance on
enum=int/uint.
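
A (hypothetical) example of the kind of pattern that silently assumes
sizeof(enum) == sizeof(int):

  enum mode { MODE_A, MODE_B };

  static void set_mode_raw(void *p)
  {
      *(int *)p = MODE_B;     /* writes 4 bytes ... */
  }

  static void caller(void)
  {
      enum mode m;

      set_mode_raw(&m);       /* ... into what might be a 1-byte object
                               * under -fshort-enums */
  }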

> That said, mixing enums and values that are bigger than the enumerated
> ones is just a bad idea
>
> It will, for example, cause us to miss compiler warnings (eg switch
> statements with an enum will warn if you don't handle all cases, but
> the 'all cases' is based on the actual enum range, not on the
> _possible_ invalid values).

That's true. Will check whether we can do something about improving the
affected uclamp data structures.
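
For the record, here's a small sketch of the missed warning (made-up
names, not the actual uclamp code):

  enum clamp_id { CLAMP_MIN, CLAMP_MAX };  /* only two named values */

  static int handle(enum clamp_id id)
  {
      switch (id) {
      case CLAMP_MIN:
          return 0;
      case CLAMP_MAX:
          return 1;
      }
      /* No -Wswitch warning is emitted above, because all enumerators are
       * covered; but if a caller stuffs e.g. 512 into 'id' (easy once enums
       * are mixed with plain integers), we silently fall through to here. */
      return -1;
  }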

Thanks,

Ingo