Re: [PATCH] arm64: make atomic helpers __always_inline

From: Arnd Bergmann
Date: Fri Jan 08 2021 - 05:27:53 EST


On Fri, Jan 8, 2021 at 10:33 AM Will Deacon <will@xxxxxxxxxx> wrote:
> On Fri, Jan 08, 2021 at 10:19:56AM +0100, Arnd Bergmann wrote:
> > From: Arnd Bergmann <arnd@xxxxxxxx>
> >
> > With UBSAN enabled and building with clang, there are occasionally
> > warnings like
> >
> > WARNING: modpost: vmlinux.o(.text+0xc533ec): Section mismatch in reference from the function arch_atomic64_or() to the variable .init.data:numa_nodes_parsed
> > The function arch_atomic64_or() references
> > the variable __initdata numa_nodes_parsed.
> > This is often because arch_atomic64_or lacks a __initdata
> > annotation or the annotation of numa_nodes_parsed is wrong.
> >
> > for functions that end up not being inlined as intended but operating
> > on __initdata variables. Mark these as __always_inline, along with
> > the corresponding asm-generic wrappers.
>
> Hmm, I don't fully grok this. Why does it matter if a non '__init' function
> is called with a pointer to some '__initdata'? Or is the reference coming
> from somewhere else? (where?).

There are (at least) three ways for the compiler (gcc or clang) to deal
with a 'static inline' function:

a) fully inline it as the __always_inline attribute does
b) not inline it at all, treating it as a regular static function
c) create a specialized version with different calling conventions
   (see the sketch right after this list)
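
As a made-up sketch of option c (the names are invented, this is not
kernel code, and whether a compiler actually does this depends on its
inlining and IPA heuristics), take:

static int global_offset = 16;

static inline int add_offset(int x, int *offset)
{
	return x + *offset;
}

int f1(int x) { return add_offset(x, &global_offset); }
int f2(int x) { return add_offset(x, &global_offset); }

Since every caller passes the same &global_offset, the compiler may,
instead of inlining, emit one out-of-line copy with the pointer folded
into the body, roughly:

static int add_offset_specialized(int x)	/* note the changed signature */
{
	return x + global_offset;	/* pointer argument is gone */
}

and have f1()/f2() call that instead.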

In this case, clang goes with option c when it notices that all
callers pass the same constant pointer. This means we have a
synthetic

static noinline void arch_atomic64_or(s64 i)
{
	/* the pointer argument has been folded into the function body */
	__lse_ll_sc_body(atomic64_or, i, (atomic64_t *)&numa_nodes_parsed);
}

which is a few bytes shorter than option b as it saves a load in the
caller. This function definition, however, violates the kernel's rules
for section references: the reference to numa_nodes_parsed now comes
from this synthetic out-of-line function, which lives in .text and is
not marked __init, so modpost flags it as a reference from regular code
to discarded .init.data.
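
To make the rule concrete, here is a reduced made-up example (invented
names, not the actual code) of what modpost objects to:

#include <linux/init.h>

static int boot_only __initdata;	/* discarded after boot */

/* not __init, so it stays in .text after .init.data is freed */
void poke_boot_only(void)
{
	boot_only = 1;	/* .text -> .init.data reference: modpost warns */
}

Marking the atomic helpers __always_inline forces option a instead, so
the reference to numa_nodes_parsed is emitted inside the __init caller,
where a reference to __initdata is allowed.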

Arnd