[PATCH v1 4/6] bpf, core: Add weak arch_prepare_goto()

From: Tiezhu Yang
Date: Tue Oct 15 2024 - 07:40:19 EST


The objtool program needs to analyze the control flow of each
object file generated by the compiler toolchain, so it needs to
know all the locations that a branch instruction may jump to.

In the past, objtool only worked on x86, where it can find the
relocation against the nearest instruction before the jump
instruction, which points to the goto table, because there is
only one table jump instruction even if there is more than one
computed goto in a function such as ___bpf_prog_run().
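
For reference, below is a minimal, self-contained sketch (GNU C
computed gotos, builds with gcc) of that dispatch pattern; the
opcodes and jump table are simplified stand-ins, not the kernel's
actual ones. With a single dispatch site like this, the compiler
typically emits one table jump instruction, which is what the
x86-only approach relied on:

#include <stdio.h>

enum { OP_ADD, OP_PRINT, OP_HALT };

static int run(const unsigned char *insn)
{
        /* One jump table of label addresses, indexed by opcode. */
        static const void *jumptable[] = {
                [OP_ADD]   = &&do_add,
                [OP_PRINT] = &&do_print,
                [OP_HALT]  = &&do_halt,
        };
        int acc = 0;

select_insn:
        goto *jumptable[*insn];         /* the table jump */
do_add:
        acc++;
        insn++;
        goto select_insn;
do_print:
        printf("acc = %d\n", acc);
        insn++;
        goto select_insn;
do_halt:
        return acc;
}

int main(void)
{
        const unsigned char prog[] = { OP_ADD, OP_ADD, OP_PRINT, OP_HALT };

        return run(prog) == 2 ? 0 : 1;
}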

In fact, compiler behavior differs across archs. On RISC
machines (for example LoongArch) this approach does not work:
with -fsection-anchors (often enabled at -O1 or above) the
relocation entry may actually point to the section anchor
instead of the table. Furthermore, objdump of kernel/bpf/core.o
shows that ___bpf_prog_run(), which contains more than one
computed goto, has many table jump instructions, and on
LoongArch there is no relocation that actually points to the
table for some of those table jump instructions.

For the jump tables of switch cases, a GCC patch "LoongArch: Add
support to annotate tablejump" has been merged into the upstream
mainline. It makes life much easier with the additional section
".discard.tablejump_annotate", which stores the jump info as pairs
of addresses: each pair contains the address of the jump
instruction and the address of the jump table.
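
Conceptually, an entry in that section can be read as a pair of
addresses; the struct below is illustrative only (the type and field
names are made up here, this is not the actual objtool code):

/*
 * Illustrative layout only: each ".discard.tablejump_annotate"
 * entry is a pair of addresses, typically filled in via relocations.
 */
struct tablejump_annotation {
        unsigned long jump_insn;        /* address of the table jump insn */
        unsigned long jump_table;       /* address of the jump table it indexes */
};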

For the jump tables of computed gotos, such an annotation is
indeed not so easy to implement in the compiler, especially if
there is more than one computed goto in a function.

Without help from the compiler, the address of the goto table
has to be figured out by interpreting the LoongArch machine
code. To prepare for that, add an arch_prepare_goto() hook for
the goto table; it is an empty weak definition and is only
overridden by archs that have special requirements.
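
For illustration only (the actual LoongArch override comes in a later
patch of this series), an arch could hook in by defining the macro in
one of its headers before the fallback in kernel/bpf/core.c is seen;
the asm body below is just a placeholder:

/* Hypothetical arch-side override with a placeholder body. */
#define arch_prepare_goto() \
        asm volatile("" : : : "memory")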

This is preparation for a later patch on LoongArch; this patch
has no effect on the other archs.

Suggested-by: Xi Ruoyao <xry111@xxxxxxxxxxx>
Signed-off-by: Tiezhu Yang <yangtiezhu@xxxxxxxxxxx>
---
kernel/bpf/core.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 5e77c58e0601..81e5d42619d5 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1706,6 +1706,14 @@ bool bpf_opcode_in_insntable(u8 code)
}

#ifndef CONFIG_BPF_JIT_ALWAYS_ON
+/*
+ * arch_prepare_goto() is empty by default and is only overridden
+ * by archs that have special requirements.
+ */
+#ifndef arch_prepare_goto
+#define arch_prepare_goto()
+#endif
+
/**
* ___bpf_prog_run - run eBPF program on a given context
* @regs: is the array of MAX_BPF_EXT_REG eBPF pseudo-registers
@@ -1743,6 +1751,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
#define CONT_JMP ({ insn++; goto select_insn; })

select_insn:
+ arch_prepare_goto();
goto *jumptable[insn->code];

/* Explicitly mask the register-based shift amounts with 63 or 31
--
2.42.0