Re: [RFC PATCH] x86, vmlinux.lds: Add debug option to force all data sections aligned

From: Josh Poimboeuf
Date: Wed Sep 22 2021 - 14:51:47 EST


On Wed, Jul 28, 2021 at 03:21:40PM +0800, Feng Tang wrote:
> 0day has reported many strange performance changes (regressions or
> improvements) where, at first glance, there was no obvious relation
> between the culprit commit and the benchmark, which leads people to
> suspect the test itself is broken.
>
> Upon further investigation, many of these cases are caused by changes
> to the alignment of kernel text or data: since the kernel's text/data
> are all linked together, a change in one domain can affect the
> alignment of other domains.
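
To make that coupling concrete outside the kernel, here is a toy
userspace sketch (entirely made-up symbol names, nothing from the
patch) that prints the cache-line offsets of two "unrelated" globals;
grow the first one and the second one's offset typically moves too,
because the linker lays the .data contributions out back to back:

  /*
   * Toy illustration, not kernel code: two unrelated globals end up
   * back to back in .data, so changing the size of one shifts the
   * cache-line offset of the other, much like unrelated objects
   * linked into the kernel's single .data output section.
   */
  #include <stdio.h>
  #include <stdint.h>

  char module_a_buf[13] = "module A";     /* grow this to 64 ...            */
  long module_b_hot_counter = 1;          /* ... and watch this offset move */

  int main(void)
  {
          printf("module_a_buf         %% 64 = %lu\n",
                 (unsigned long)((uintptr_t)module_a_buf % 64));
          printf("module_b_hot_counter %% 64 = %lu\n",
                 (unsigned long)((uintptr_t)&module_b_hot_counter % 64));
          return 0;
  }
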
>
> To help quickly identify whether a strange performance change is caused
> by _data_ alignment, add a debug option that forces the data sections
> from all .o files to be aligned on THREAD_SIZE, so that a change in one
> domain won't affect other modules' data alignment.
>
> We have used this option to check some strange performance changes
> [1][2][3]; those changes went away once the option was enabled, which
> proved they were data-alignment related.
>
> Similarly, there is another kernel debug option for checking
> text-alignment-related performance changes:
> CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B, which forces every function's
> start address to be 64-byte aligned.
>
> This option depends on CONFIG_DYNAMIC_DEBUG==n, as the '__dyndbg'
> subsection of .data has a hard requirement of ALIGN(8), as shown in
> 'vmlinux.lds':
>
> "
> . = ALIGN(8); __start___dyndbg = .; KEEP(*(__dyndbg)) __stop___dyndbg = .;
> "
>
> It contains all the pointers to 'struct _ddebug', which
> dynamic_debug_init() walks with a simple "pointer++" loop; that loop
> breaks when this option is enabled.
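
For readers who haven't looked at that code, the pattern is a
linker-delimited table walked from a start symbol to a stop symbol.
A minimal userspace sketch of the same idea (using GNU ld's
auto-generated __start_/__stop_ symbols and a made-up section name,
not the kernel's actual __dyndbg machinery) shows why it only works
if the entries are packed back to back, with no per-object alignment
padding in between:

  /*
   * Minimal sketch of a linker-delimited table.  GNU ld generates
   * __start_<sec> and __stop_<sec> symbols for any section whose
   * name is a valid C identifier.
   */
  #include <stdio.h>

  struct entry {
          const char *name;
  };

  #define DEFINE_ENTRY(n)                                         \
          static const struct entry n                             \
          __attribute__((section("my_table"), used)) = { #n }

  DEFINE_ENTRY(foo);
  DEFINE_ENTRY(bar);

  extern const struct entry __start_my_table[];
  extern const struct entry __stop_my_table[];

  int main(void)
  {
          const struct entry *e;

          /*
           * The "pointer++" walk: only correct if the linker packs the
           * entries contiguously, i.e. no alignment padding between
           * different objects' contributions to the section.
           */
          for (e = __start_my_table; e < __stop_my_table; e++)
                  printf("%s\n", e->name);
          return 0;
  }
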
>
> [1]. https://lore.kernel.org/lkml/20200205123216.GO12867@shao2-debian/
> [2]. https://lore.kernel.org/lkml/20200305062138.GI5972@shao2-debian/
> [3]. https://lore.kernel.org/lkml/20201112140625.GA21612@xsang-OptiPlex-9020/
>
> Signed-off-by: Feng Tang <feng.tang@xxxxxxxxx>
> ---
> arch/x86/Kconfig.debug | 13 +++++++++++++
> arch/x86/kernel/vmlinux.lds.S | 7 ++++++-
> 2 files changed, 19 insertions(+), 1 deletion(-)

Hi Feng,

Thanks for the interesting LPC presentation about alignment-related
performance issues (which mentioned this patch).

https://linuxplumbersconf.org/event/11/contributions/895/

I wonder if we can look at enabling some kind of data section alignment
unconditionally instead of just making it a debug option. Have you done
any performance and binary size comparisons?

In a similar vein, I think we should re-explore permanently enabling
cacheline-sized function alignment, i.e. making something like
CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B the default. Ingo did some
research on that a while back:

https://lkml.kernel.org/r/20150519213820.GA31688@xxxxxxxxx

At the time, the main reported drawback of -falign-functions=64 was that
even small functions got aligned. But now I think that can be mitigated
with some new options like -flimit-function-alignment and/or
-falign-functions=64,X (for some carefully-chosen value of X).
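
For anyone who wants to eyeball that trade-off, here is a trivial
userspace check (not kernel code, function names made up): build it
with and without -falign-functions=64, compare the printed offsets,
and compare size(1) output for the two binaries; the padding in front
of tiny helpers is exactly the size cost reported back then:

  /*
   * Trivial check of function start alignment.  Compile with and
   * without -falign-functions=64 and compare the printed offsets.
   */
  #include <stdio.h>
  #include <stdint.h>

  static int tiny_helper(int x)
  {
          return x + 1;   /* small function: aligning it is pure padding */
  }

  static long bigger(long n)
  {
          long i, sum = 0;

          for (i = 0; i < n; i++)
                  sum += i ^ n;
          return sum;
  }

  int main(void)
  {
          printf("tiny_helper %% 64 = %lu\n",
                 (unsigned long)((uintptr_t)tiny_helper % 64));
          printf("bigger      %% 64 = %lu\n",
                 (unsigned long)((uintptr_t)bigger % 64));
          return (int)(tiny_helper(1) + bigger(3));
  }
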

--
Josh