Re: [PATCH bpf-next v9 07/11] bpf,x86: add fsession support for x86_64

From: Menglong Dong

Date: Tue Jan 13 2026 - 22:35:46 EST


On 2026/1/14 11:27 Menglong Dong <menglong.dong@xxxxxxxxx> wrote:
> On 2026/1/14 09:25 Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
> > On Sat, Jan 10, 2026 at 6:12 AM Menglong Dong <menglong8.dong@xxxxxxxxx> wrote:
> > >
[...]
> > >
> > > + if (bpf_fsession_cnt(tlinks)) {
> > > + /* clear all the session cookies' value */
> > > + for (int i = 0; i < cookie_cnt; i++)
> > > + emit_store_stack_imm64(&prog, cookie_off - 8 * i, 0);
> > > + /* clear the return value to make sure fentry always get 0 */
> > > + emit_store_stack_imm64(&prog, 8, 0);
> > > + }
> > > + func_meta = nr_regs + (((cookie_off - regs_off) / 8) << BPF_TRAMP_M_COOKIE);
> >
> > func_meta conceptually is a collection of bit fields, so using +/-
> > feels weird, use | and &, more in line with working with bits?
>
>
> It's not only bit fields. nr_args and the cookie offset are
> byte fields, and the cookie offset in particular is also used in an
> arithmetic operation. So I think it makes sense here, right?

Oh, I see what you mean now. It's OK to use "|" instead of "+"
here. I was explaining the decrement of func_meta in invoke_bpf().
That could use "&/|" too, but "-/+" makes the code much
simpler.

Thanks!
Menglong Dong

>
>
> >
> > (also you defined that BPF_TRAMP_M_NR_ARGS but you are not using it
> > consistently...)
>
>
> I'm not sure we should define it. Since we use the least significant byte
> for nr_args, its shift is always 0. If we used the macro in the inlined
> code, an unnecessary bit-shift instruction would be generated.
>
>
> I defined it here for better code readability. Maybe we can add a comment
> in the inlining of bpf_get_func_arg() instead of defining such an unused
> macro?
>
>
> Thanks!
> Menglong Dong
>
>
> >
> >
> >
> >
> > > +
> > > if (fentry->nr_links) {
> > > if (invoke_bpf(m, &prog, fentry, regs_off, run_ctx_off,
> > > - flags & BPF_TRAMP_F_RET_FENTRY_RET, image, rw_image))
> > > + flags & BPF_TRAMP_F_RET_FENTRY_RET, image, rw_image,
> > > + func_meta))
> > > return -EINVAL;
> > > }
> > >
> > > @@ -3445,9 +3467,14 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
> > > }
> > > }
> > >
> > > + /* set the "is_return" flag for fsession */
> > > + func_meta += (1 << BPF_TRAMP_M_IS_RETURN);
> > > + if (bpf_fsession_cnt(tlinks))
> > > + emit_store_stack_imm64(&prog, nregs_off, func_meta);
> > > +
> > > if (fexit->nr_links) {
> > > if (invoke_bpf(m, &prog, fexit, regs_off, run_ctx_off,
> > > - false, image, rw_image)) {
> > > + false, image, rw_image, func_meta)) {
> > > ret = -EINVAL;
> > > goto cleanup;
> > > }
> > > --
> > > 2.52.0
> > >
> >
>