Re: [PATCH bpf-next v3 1/3] bpf: Detect jumping to reserved code during check_cfg()

From: Hao Sun
Date: Thu Oct 12 2023 - 02:32:53 EST


On Wed, Oct 11, 2023 at 3:39 PM Alexei Starovoitov
<alexei.starovoitov@xxxxxxxxx> wrote:
>
> On Wed, Oct 11, 2023 at 2:01 AM Hao Sun <sunhao.th@xxxxxxxxx> wrote:
> >
> > Currently, we don't check if the branch-taken target of a jump is the reserved
> > code of a ld_imm64. Instead, such an issue is only caught in check_ld_imm().
> > The verifier gives the following log in such a case:
> >
> > func#0 @0
> > 0: R1=ctx(off=0,imm=0) R10=fp0
> > 0: (18) r4 = 0xffff888103436000 ; R4_w=map_ptr(off=0,ks=4,vs=128,imm=0)
> > 2: (18) r1 = 0x1d ; R1_w=29
> > 4: (55) if r4 != 0x0 goto pc+4 ; R4_w=map_ptr(off=0,ks=4,vs=128,imm=0)
> > 5: (1c) w1 -= w1 ; R1_w=0
> > 6: (18) r5 = 0x32 ; R5_w=50
> > 8: (56) if w5 != 0xfffffff4 goto pc-2
> > mark_precise: frame0: last_idx 8 first_idx 0 subseq_idx -1
> > mark_precise: frame0: regs=r5 stack= before 6: (18) r5 = 0x32
> > 7: R5_w=50
> > 7: BUG_ld_00
> > invalid BPF_LD_IMM insn
> >
> > Here the verifier rejects the program because it thinks the insn at 7 is an
> > invalid BPF_LD_IMM, but such an error log is not accurate, since the real
> > issue is the jump into reserved code, not an invalid insn in the program.
> > Therefore, make the verifier check the jump target during check_cfg(). For
> > the same program, the verifier now reports the following log:
> >
> > func#0 @0
> > jump to reserved code from insn 8 to 7
> >
> > Signed-off-by: Hao Sun <sunhao.th@xxxxxxxxx>
> > ---
> > kernel/bpf/verifier.c | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index eed7350e15f4..725ac0b464cf 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -14980,6 +14980,7 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
> > {
> > int *insn_stack = env->cfg.insn_stack;
> > int *insn_state = env->cfg.insn_state;
> > + struct bpf_insn *insns = env->prog->insnsi;
> >
> > if (e == FALLTHROUGH && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
> > return DONE_EXPLORING;
> > @@ -14993,6 +14994,12 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
> > return -EINVAL;
> > }
> >
> > + if (e == BRANCH && insns[w].code == 0) {
> > + verbose_linfo(env, t, "%d", t);
> > + verbose(env, "jump to reserved code from insn %d to %d\n", t, w);
> > + return -EINVAL;
> > + }
>
> I don't think we should be changing the verifier to make
> fuzzer logs more readable.
>
> Same with patch 2. The code is fine as-is.

I'm confused; the changes are not about fuzzer logs but about handling
jumps into the middle of a ld_imm64. Like jumping out of bounds, these
are similar issues and can be handled in the same place, check_cfg().
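
To make the scenario concrete, here is a minimal sketch of such a
program (not the exact reproducer from the log above, just the same
shape; the macros are the ones from include/linux/filter.h):

  #include <linux/filter.h>

  /* Insns 0-1 are a single ld_imm64, so insn 1 is its reserved second
   * half; the JMP32 at insn 2 with off=-2 targets insn 1.
   */
  static const struct bpf_insn prog[] = {
          BPF_LD_IMM64(BPF_REG_5, 0x32),                            /* insn 0-1 */
          BPF_JMP32_IMM(BPF_JNE, BPF_REG_5, (__s32)0xfffffff4, -2), /* insn 2 -> 1 */
          BPF_MOV64_IMM(BPF_REG_0, 0),                              /* insn 3 */
          BPF_EXIT_INSN(),                                          /* insn 4 */
  };

With the patch, check_cfg() rejects the branch edge 2 -> 1 up front
instead of do_check() tripping over insn 1 later.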

The current code only catches such an incorrect jump in check_ld_imm(),
which is strange, and the resulting error log ("BUG_ld_00" plus
"invalid BPF_LD_IMM insn") rather than something about a bad jump is
also strange.
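
For reference, the check that currently fires is, if I read
check_ld_imm() in kernel/bpf/verifier.c right (paraphrased): the
reserved second slot of a ld_imm64 has code == 0, so its BPF_SIZE() is
BPF_W rather than BPF_DW and we bail out with a message about the insn
rather than about the jump:

  if (BPF_SIZE(insn->code) != BPF_DW) {
          verbose(env, "invalid BPF_LD_IMM insn\n");
          return -EINVAL;
  }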

The second one is just for verifier debugging, because the only
caller of check_ld_imm() is do_check(), before which we already
have resolve_pseudo_ldimm64(), which uses opcode_in_insntable()
to check the validity of every insn code. The only way we could
still see an invalid ld_imm64 in check_ld_imm() is an error
somewhere else.
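
For context on why code == 0 identifies the reserved slot: a ld_imm64
is emitted as two struct bpf_insn, roughly (see BPF_LD_IMM64_RAW() in
include/linux/filter.h):

  /* first slot: opcode plus the low 32 bits of the immediate */
  { .code = BPF_LD | BPF_DW | BPF_IMM, .dst_reg = DST, .src_reg = SRC,
    .off = 0, .imm = (__u32)(IMM) },
  /* second, "reserved" slot: opcode 0, only the high 32 bits */
  { .code = 0, .dst_reg = 0, .src_reg = 0, .off = 0,
    .imm = ((__u64)(IMM)) >> 32 },

So an insn with code == 0 can only legitimately be that second slot,
which is what the insns[w].code == 0 test in the patch keys on, and if
check_ld_imm() still runs into one, something earlier must already have
gone wrong.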