Re: [PATCH v9 04/10] x86: refcount: prevent gcc distortions

From: Ingo Molnar
Date: Thu Oct 04 2018 - 03:58:03 EST



* Nadav Amit <namit@xxxxxxxxxx> wrote:

> GCC considers the number of statements in inlined assembly blocks,
> according to new-lines and semicolons, as an indication of the cost of
> the block in time and space. This data is distorted by the kernel code,
> which puts information in alternative sections. As a result, the
> compiler may perform incorrect inlining and branch optimizations.
>
> The solution is to set an assembly macro and call it from the inlined
> assembly block. As a result, GCC considers the inline assembly block
> to be a single instruction.
>
> This patch allows inlining functions such as __get_seccomp_filter().
> Interestingly, this allows more aggressive inlining while reducing the
> kernel size.
>
> text data bss dec hex filename
> 18140970 10225412 2957312 31323694 1ddf62e ./vmlinux before
> 18140140 10225284 2957312 31322736 1ddf270 ./vmlinux after (-958)
>
> Static text symbols:
> Before: 40302
> After: 40286 (-16)
>
> Functions such as kref_get(), free_user(), fuse_file_get() now get
> inlined.

Yeah, so I kind of had your series on the back-burner (I'm sure you noticed!),
mostly because of what I complained about in a previous round of review a couple
of months ago: that the description of the series and the changelog of every
single patch in it are tiptoeing around the *real* problem and never truly
describe it:

** This is a GCC bug, plain and simple, and we are uglifying **
** and complicating kernel assembly code to work it around. **

We'd never ever consider such uglification for Clang, not even _close_.

Surely this would have warranted a passing mention? Instead the changelogs are
lovingly calling it a "distortion", as if this was no-one's fault really, and
the patch a "solution".

How about calling it a "GCC inlining bug" and a "workaround with costs"
which it is in reality, and stop whitewashing the problem?

At the same time I realize that we still need this series because GCC won't
get fixed, so as a consolation I wrote the changelog below that explains
how it really is, no holds barred.

Since the tone of the changelog is a bit ... frosty, I added this disclaimer:

[ mingo: Wrote new changelog. ]

Let me know if you want me to make it more prominent that you had absolutely
no role in writing that changelog.

I'm also somewhat annoyed at the fact that this series carries a boatload
of reviewed-by's and acked-by's, yet none of those reviewers found it
important to point out the large chasm that is gaping between description
and reality.

Thanks,

Ingo


=============>
Subject: x86/refcount: Prevent inlining-related GCC distortions
From: Nadav Amit <namit@xxxxxxxxxx>
Date: Wed, 3 Oct 2018 14:30:54 -0700

The inlining pass of GCC doesn't include an assembler, so it's not aware
of basic properties of the generated code, such as its size in bytes,
or that there are such things as discontinuous blocks of code and data
due to the newfangled linker feature called 'sections' ...

Instead GCC uses a lazy and fragile heuristic: it does a linear count of
certain syntactic and whitespace elements in the inline assembly source
code, such as a count of new-lines and semicolons (!), as a poor substitute
for "code size and complexity".

Unsurprisingly this heuristic falls over and breaks its neck with certain
common types of kernel code that use inline assembly, such as the frequent
practice of putting useful information into alternative sections.
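
To make this concrete, here is a minimal, hypothetical sketch (heavily
simplified, not the kernel's actual alternatives macros): only one
instruction lands in the hot path, but the asm() string spans many lines
of section bookkeeping, so GCC's line/semicolon count concludes the
function is expensive to inline:

  /*
   * Hypothetical, simplified example - not the kernel's real macros.
   * Only the 'lock incq' ends up in the hot path; the rest is
   * .pushsection/.popsection bookkeeping.  GCC's heuristic counts
   * the new-lines in the string and decides this is "large" code.
   */
  static inline void arch_op(unsigned long *p)
  {
          asm volatile("1: lock incq %0\n\t"
                       ".pushsection .altinstr_replacement, \"ax\"\n\t"
                       "2: nop\n\t"
                       ".popsection\n\t"
                       ".pushsection .altinstructions, \"a\"\n\t"
                       ".long 1b - ., 2b - .\n\t"
                       ".popsection"
                       : "+m" (*p));
  }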

As a result of this fresh, 20+ year old GCC bug, GCC's inlining decisions
are effectively disabled for inlined functions that make use of such asm()
blocks, because GCC thinks those sections of code are "large" - when in
reality they often result in just a very low number of generated machine
instructions.

This absolute lack of inlining prowess when GCC comes across such asm()
blocks both increases generated kernel code size and causes performance
overhead, which is particularly noticeable on paravirt kernels, which make
frequent use of these inlining facilities in an attempt to stay out of the
way when running on bare-metal hardware.

Instead of fixing the compiler we use a workaround: we define an assembly macro
and call it from the inline assembly block. As a result GCC considers the
inline assembly block to be a single instruction. (Which it often isn't, but I digress.)
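
Roughly, and with hypothetical names (GUARDED_INC is illustrative and is
not the macro this patch actually adds), the workaround looks like the
sketch below: the multi-line bookkeeping is defined once as a global
assembler macro, and each call site's asm() string shrinks to a single
line, which the heuristic then counts as one instruction:

  /*
   * Hypothetical sketch of the workaround - names are illustrative,
   * not the ones added by this patch.  The assembler macro carries
   * the multi-line bookkeeping; the operand is passed quoted so the
   * assembler treats it as a single macro argument.
   */
  asm(".macro GUARDED_INC var:req\n\t"
      "1: lock incq \\var\n\t"
      ".pushsection .altinstr_replacement, \"ax\"\n\t"
      "2: nop\n\t"
      ".popsection\n\t"
      ".pushsection .altinstructions, \"a\"\n\t"
      ".long 1b - ., 2b - .\n\t"
      ".popsection\n\t"
      ".endm");

  static inline void arch_op(unsigned long *p)
  {
          /* A single source line as far as GCC's cost heuristic goes. */
          asm volatile("GUARDED_INC \"%0\"" : "+m" (*p));
  }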

This uglifies and bloats the source code:

2 files changed, 46 insertions(+), 29 deletions(-)

Yay readability and maintainability, it's not like assembly code is hard to read
and maintain ...

This patch allows GCC to inline simple functions such as __get_seccomp_filter().

To no-one's surprise the result is that GCC makes more aggressive (read: correct)
inlining decisions in these scenarios, which reduces the kernel size and presumably
also speeds it up:

text data bss dec hex filename
18140970 10225412 2957312 31323694 1ddf62e ./vmlinux before
18140140 10225284 2957312 31322736 1ddf270 ./vmlinux after (-958)

Change in the number of static text symbols:

Before: 40302
After: 40286 (-16)

Functions such as kref_get(), free_user(), fuse_file_get() now get inlined. Hurray!

We also hope that GCC will eventually get fixed, but we are not holding
our breath for that. Yet we are optimistic: it might still happen, any decade now.

[ mingo: Wrote new changelog. ]

Tested-by: Kees Cook <keescook@xxxxxxxxxxxx>
Signed-off-by: Nadav Amit <namit@xxxxxxxxxx>
Acked-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Jan Beulich <JBeulich@xxxxxxxx>
Cc: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Link: http://lkml.kernel.org/r/20181003213100.189959-5-namit@xxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---