Re: [PATCH] sched/tracing: append prev_state to tp args instead

From: Andrii Nakryiko
Date: Wed Apr 27 2022 - 14:18:05 EST


On Wed, Apr 27, 2022 at 3:35 AM Qais Yousef <qais.yousef@xxxxxxx> wrote:
>
> On 04/26/22 08:54, Andrii Nakryiko wrote:
> > On Tue, Apr 26, 2022 at 7:10 AM Qais Yousef <qais.yousef@xxxxxxx> wrote:
> > >
> > > On 04/26/22 14:28, Peter Zijlstra wrote:
> > > > On Fri, Apr 22, 2022 at 11:30:12AM -0700, Alexei Starovoitov wrote:
> > > > > On Fri, Apr 22, 2022 at 10:22 AM Delyan Kratunov <delyank@xxxxxx> wrote:
> > > > > >
> > > > > > On Fri, 2022-04-22 at 13:09 +0200, Peter Zijlstra wrote:
> > > > > > > And on the other hand; those users need to be fixed anyway, right?
> > > > > > > Accessing prev->__state is equally broken.
> > > > > >
> > > > > > The users that access prev->__state would most likely have to be fixed, for sure.
> > > > > >
> > > > > > However, not all users access prev->__state. `offcputime` for example just takes a
> > > > > > stack trace and associates it with the switched out task. This kind of user
> > > > > > would continue working with the proposed patch.
> > > > > >
> > > > > > > If bpf wants to ride on them, it needs to suffer the pain of doing so.
> > > > > >
> > > > > > Sure, I'm just advocating for a fairly trivial patch to avoid some of the suffering,
> > > > > > hopefully without being a burden to development. If that's not the case, then it's a
> > > > > > clear no-go.
> > > > >
> > > > >
> > > > > Namhyung just sent this patch set:
> > > > > https://patchwork.kernel.org/project/netdevbpf/patch/20220422053401.208207-3-namhyung@xxxxxxxxxx/
> > > >
> > > > That has:
> > > >
> > > > + * recently task_struct->state renamed to __state so it made an incompatible
> > > > + * change.
> > > >
> > > > git tells me:
> > > >
> > > > 2f064a59a11f ("sched: Change task_struct::state")
> > > >
> > > > is almost a year old by now. That doesn't qualify as "recently" in my
> > > > book. It should rather say 'old kernels used to call this...'.
> > > >
> > > > > to add off-cpu profiling to perf.
> > > > > It also hooks into sched_switch tracepoint.
> > > > > Notice it deals with state->__state rename just fine.
> > > >
> > > > So I don't speak BPF much; it always takes me more time to make bpf work
> > > > than to just hack up the kernel, which makes it hard to get motivated.
> > > >
> > > > However, it was not just a rename, state changed type too, which is why I
> > > > did the rename, to make sure all users would get a compile fail and
> > > > could adjust.
> > > >
> > > > If you're silently making it work by frobbing the name, you lose that.
> > > >
> > > > Specifically, task_struct::state used to be 'volatile long', while
> > > > task_struct::__state is 'unsigned int'. As such, any user must now be
> > > > very careful to use READ_ONCE(). I don't see that happening with just
> > > > frobbing the name.
> > > >
> > > > Additionally, by shrinking the field, I suppose BE systems get to keep
> > > > the pieces?
> > > >
> > > > > But it will have a hard time without this patch
> > > > > until we add all the extra CO-RE features to detect
> > > > > and automatically adjust bpf progs when tracepoint
> > > > > arguments order changed.
> > > >
> > > > Could be me, but silently making it work sounds like fail :/ There's a
> > > > reason code changes, users need to adapt, not silently pretend stuff is
> > > > as before.
> > > >
> > > > How will you know you need to fix your tool?
> > >
> > > If libbpf doesn't fail, then yeah, it's a big problem. I wonder how kprobe
> > > users, who I suppose are more prone to this kind of problem, have been coping.
> >
> > See my reply to Peter. libbpf can't know the user's intent, so it can't
> > fail this automatically in the general case. In some cases where it can,
> > it does accommodate this automatically. In other cases it provides
> > instruments for the user to handle it (bpf_core_field_size(),
> > BPF_CORE_READ_BITFIELD(), etc).
>
> My naive thinking is that the function signature has changed (there's 1 extra
> arg, not just a subtle swap of args of the same type) - so I thought that could
> be detected. But maybe it is easier said than done.

It is. We don't even have the number of arguments:

struct bpf_raw_tracepoint_args {
	__u64 args[0];
};

What the BPF program gets is just an array of u64s.

>
> I am trying to remember as I've used this before; I think you get the arg list
> as part of ctx when you attach to a function?
>
> I wonder if it'd be hard to provide a macro for the user to provide the
> signature of the function they expect; this macro could then try to
> verify/assert that the number, type and order of args are the same. Not
> bulletproof and requires opt-in, but could be useful?
>
>
> // dummy pseudo-code
>
> BPF_CORE_ASSERT_SIG(sched_switch, NR_ARGS, ARG0, ARG1, ...)
>     if (ctx->nr_args != NR_ARGS)
>         assert()
>     if (type_of(ctx->args[0]) != type_of(ARG0))
>         assert()
>     ...
>
> I'm not sure if you have any info about the type though..

What we have under discussion now is a more generic way for users to check
the signature of a function prototype, struct/union, etc. But all that will
take some time to implement and finalize. So this patch is a way to stop the
bleeding until we have that available to users.

>
> > But in the end, nothing eliminates the need to test your application
> > for correctness. Tracing programs do break on kernel changes and BPF
> > users do adapt to them. Sometimes adapting is easy (like state ->
> > __state transition), sometimes it's much more involved (like this
> > argument order change).
>
> It's not just an arg re-order, it's a new argument inserted in the middle. But
> fair enough :-)
>
> Cheers
>
> --
> Qais Yousef