Re: [PATCH v1] arm64/module: Optimize module load time by optimizing PLT counting
From: Will Deacon
Date: Wed Jun 17 2020 - 10:05:34 EST
Hi all,
On Wed, Jun 17, 2020 at 10:17:33AM +0200, Ard Biesheuvel wrote:
> On Tue, 16 Jun 2020 at 23:40, Will Deacon <will@xxxxxxxxxx> wrote:
> > On Fri, Jun 05, 2020 at 03:22:57PM -0700, Saravana Kannan wrote:
> > > This gives significant reduction in module load time for modules with
> > > large number of relocations with no measurable impact on modules with a
> > > small number of relocations. In my test setup with CONFIG_RANDOMIZE_BASE
> > > enabled, the load time for one module went down from 268ms to 100ms.
> > > Another module went down from 143ms to 83ms.
> >
> > Whilst I can see that's a significant relative saving, what proportion of
> > actual boot time are we talking about here? It would be interesting to
> > know if there are bigger potential savings elsewhere.
> >
>
> Also, 'some module' vs 'some other module' doesn't really say
> anything. Please explain which modules and their sizes.
I suspect these are all out-of-tree modules, but yes, some metadata such as
sizes, nr of relocs, etc. would be good to have in the commit message.
> > > diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
> > > index 65b08a74aec6..bf5118b3b828 100644
> > > --- a/arch/arm64/kernel/module-plts.c
> > > +++ b/arch/arm64/kernel/module-plts.c
> > > @@ -253,6 +253,36 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela *rela, int num,
> > > return ret;
> > > }
> > >
> > > +static bool rela_needs_dedup(Elf64_Rela *rela)
> > > +{
> > > + return ELF64_R_TYPE(rela->r_info) == R_AARCH64_JUMP26
> > > + || ELF64_R_TYPE(rela->r_info) == R_AARCH64_CALL26;
> > > +}
> >
>
> Would it help to check the section index here as well? Call/jump
> instructions within a section are never sent through a PLT entry.
(I tried hacking this in below)
> > Does this handle A53 erratum 843419 correctly? I'm worried that we skip
> > the ADRP PLTs there.
> >
>
> ADRP PLTs cannot be deduplicated, as they incorporate a relative jump
> back to the caller.
Duh yes, thanks. We can't trash the link register here.
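For the archives, the veneer shape makes that clear. A sketch of the
layout (hypothetical struct name; the real encoder is
module_emit_veneer_for_adrp(), which reuses the PLT entry slot):

	#include <stdint.h>

	struct adrp_veneer_sketch {
		uint32_t adrp;	/* adrp x<rd>, <target page>, re-resolved
				 * against the veneer's own PC */
		uint32_t add;	/* add x<rd>, x<rd>, #<low bits of target> */
		uint32_t b;	/* b <site + 4>: plain relative branch back
				 * to the instruction after the patched ADRP.
				 * The branch target bakes in one specific
				 * call site, so no two sites can share a
				 * veneer -- and the LR is never written. */
	};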
> > > +/* Group the CALL26/JUMP26 relas toward the beginning of the array. */
> > > +static int partition_dedup_relas(Elf64_Rela *rela, int numrels)
> > > +{
> > > + int i = 0, j = numrels - 1;
> > > + Elf64_Rela t;
> > > +
> > > + while (i < j) {
> > > + while (rela_needs_dedup(rela + i) && i < j)
> > > + i++;
> > > + while (!rela_needs_dedup(rela + j) && i < j)
> > > + j--;
> > > + if (i < j) {
> > > + t = *(rela + j);
> > > + *(rela + j) = *(rela + i);
> > > + *(rela + i) = t;
> > > + }
> > > + }
> >
> > This is very hard to read and I think some of the 'i < j' comparisons are
> > redundant. Would it make more sense to assign a temporary rather than
> > post-inc/decrement and recheck?
> >
>
> Agreed.
>
> Also, what's wrong with [] array indexing?
Saravana, since our stylistic objections are reasonably vague, I tried
to clean this up so you can get an idea of how I'd prefer it to look (can't
speak for Ard). I haven't tried running this, but please feel free to adapt
it. Replacement diff below.
Will
--->8
diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
index 65b08a74aec6..204290314054 100644
--- a/arch/arm64/kernel/module-plts.c
+++ b/arch/arm64/kernel/module-plts.c
@@ -253,6 +253,38 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela *rela, int num,
return ret;
}
+static bool branch_rela_needs_plt(Elf64_Sym *syms, Elf64_Rela *rela,
+ Elf64_Word dstidx)
+{
+ Elf64_Sym *s = syms + ELF64_R_SYM(rela->r_info);
+
+ if (s->st_shndx == dstidx)
+ return false;
+
+ return ELF64_R_TYPE(rela->r_info) == R_AARCH64_JUMP26 ||
+ ELF64_R_TYPE(rela->r_info) == R_AARCH64_CALL26;
+}
+
+static int partition_branch_plt_relas(Elf64_Sym *syms, Elf64_Rela *rela,
+ int numrels, Elf64_Word dstidx)
+{
+ int i = 0, j = numrels - 1;
+
+ if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+ return 0;
+
+ while (i < j) {
+ if (branch_rela_needs_plt(syms, &rela[i], dstidx))
+ i++;
+ else if (branch_rela_needs_plt(syms, &rela[j], dstidx))
+ swap(rela[i], rela[j]);
+ else
+ j--;
+ }
+
+ return i;
+}
+
int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
char *secstrings, struct module *mod)
{
@@ -290,7 +323,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
for (i = 0; i < ehdr->e_shnum; i++) {
Elf64_Rela *rels = (void *)ehdr + sechdrs[i].sh_offset;
- int numrels = sechdrs[i].sh_size / sizeof(Elf64_Rela);
+ int nents, numrels = sechdrs[i].sh_size / sizeof(Elf64_Rela);
Elf64_Shdr *dstsec = sechdrs + sechdrs[i].sh_info;
if (sechdrs[i].sh_type != SHT_RELA)
@@ -300,8 +333,14 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
if (!(dstsec->sh_flags & SHF_EXECINSTR))
continue;
- /* sort by type, symbol index and addend */
- sort(rels, numrels, sizeof(Elf64_Rela), cmp_rela, NULL);
+ /*
+ * sort branch relocations requiring a PLT by type, symbol index
+ * and addend
+ */
+ nents = partition_branch_plt_relas(syms, rels, numrels,
+ sechdrs[i].sh_info);
+ if (nents)
+ sort(rels, nents, sizeof(Elf64_Rela), cmp_rela, NULL);
if (!str_has_prefix(secstrings + dstsec->sh_name, ".init"))
core_plts += count_plts(syms, rels, numrels,
				sechdrs[i].sh_info);
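Saravana: if you want to sanity-check the partition before respinning,
a quick userspace harness along these lines should do (hypothetical
stand-ins for Elf64_Rela and the kernel's swap(); not kernel code). It
asserts the invariant we rely on: everything below the returned index
needs a PLT and everything past it doesn't. The boundary element itself
is left unchecked, which at worst costs one spare PLT slot, since
count_plts() still walks all of the relocs.

	#include <assert.h>
	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Stand-in for a relocation: just a flag saying "needs a PLT". */
	struct fake_rela {
		bool needs_plt;
	};

	/* Stand-in for the kernel's swap() helper. */
	#define swap(a, b) \
		do { struct fake_rela __t = (a); (a) = (b); (b) = __t; } while (0)

	/* Same loop structure as partition_branch_plt_relas() above. */
	static int partition(struct fake_rela *rela, int numrels)
	{
		int i = 0, j = numrels - 1;

		while (i < j) {
			if (rela[i].needs_plt)
				i++;
			else if (rela[j].needs_plt)
				swap(rela[i], rela[j]);
			else
				j--;
		}

		return i;
	}

	int main(void)
	{
		struct fake_rela r[64];

		for (int trial = 0; trial < 1000; trial++) {
			int n = 1 + rand() % 64;

			for (int k = 0; k < n; k++)
				r[k].needs_plt = rand() & 1;

			int nents = partition(r, n);

			/* Sorted prefix: candidates for deduplication. */
			for (int k = 0; k < nents; k++)
				assert(r[k].needs_plt);

			/* Tail: never needs a PLT (r[nents] unchecked). */
			for (int k = nents + 1; k < n; k++)
				assert(!r[k].needs_plt);
		}

		puts("partition invariant holds");
		return 0;
	}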