Re: [PATCH v2 2/2] bpf/selftests: Add selftests for new task kfuncs
From: Kumar Kartikeya Dwivedi
Date: Mon Oct 03 2022 - 11:57:44 EST
On Mon, 3 Oct 2022 at 17:35, David Vernet <void@xxxxxxxxxxxxx> wrote:
>
> On Mon, Oct 03, 2022 at 02:32:41AM +0200, Kumar Kartikeya Dwivedi wrote:
> > > [...]
> > > +#endif /* _TASK_KFUNC_COMMON_H */
> > > diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
> > > new file mode 100644
> > > index 000000000000..4cf01bbc8a16
> > > --- /dev/null
> > > +++ b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
> > > @@ -0,0 +1,225 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
> > > +
> > > +#include <vmlinux.h>
> > > +#include <bpf/bpf_tracing.h>
> > > +#include <bpf/bpf_helpers.h>
> > > +
> > > +#include "task_kfunc_common.h"
> > > +
> > > +char _license[] SEC("license") = "GPL";
> > > +
> > > +/* Prototype for all of the program trace events below:
> > > + *
> > > + * TRACE_EVENT(task_newtask,
> > > + * TP_PROTO(struct task_struct *p, u64 clone_flags)
> > > + */
> > > +
> > > +SEC("tp_btf/task_newtask")
> > > +int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_flags)
> > > +{
> > > + struct task_struct *acquired, *stack_ptr;
> > > +
> > > + if (!is_test_kfunc_task(task))
> > > + return 0;
> > > +
> > > + /* Can't invoke bpf_task_acquire() on an untrusted, random pointer. */
> > > + stack_ptr = (struct task_struct *)0xcafef00d;
> >
> > This seems like a misleading comment. 'stack_ptr' would just be a
> > scalar, not a pointer. Maybe you should instead be testing this with
> > an actual PTR_UNTRUSTED pointer. A load of a __kptr tagged pointer
> > would be a good way to get one.
> >
> > Very soon, a lot of other pointers obtained from pointer walking are
> > going to be marked PTR_UNTRUSTED, so then we would cover those as
> > well, similar to this test.
>
> Good point and good suggestion, let me update the test to do this.
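Roughly what I had in mind, as an addition to task_kfunc_failure.c
(untested sketch; I'm assuming the map value struct and lookup helper in
task_kfunc_common.h look something like this):

/* In task_kfunc_common.h (illustrative names): */
struct __tasks_kfunc_map_value {
	struct task_struct __kptr *task;
};

/* In task_kfunc_failure.c: */
SEC("tp_btf/task_newtask")
int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_flags)
{
	struct task_struct *kptr, *acquired;
	struct __tasks_kfunc_map_value *v;

	if (!is_test_kfunc_task(task))
		return 0;

	v = tasks_kfunc_map_value_lookup(task);
	if (!v)
		return 0;

	/* A plain load of a __kptr tagged field is PTR_UNTRUSTED, so the
	 * verifier should reject passing it to bpf_task_acquire().
	 */
	kptr = v->task;
	if (!kptr)
		return 0;

	acquired = bpf_task_acquire(kptr);
	bpf_task_release(acquired);

	return 0;
}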
>
> > Also, could you include a test to make sure sleepable programs cannot
> > call bpf_task_acquire? It seems to assume the RCU read lock is held,
> > while that may not be true. If it's already not possible, maybe add a
> > WARN_ON_ONCE inside the helper to ensure future cases don't creep in.
>
> I don't _think_ it's unsafe for a sleepable program to call
> bpf_task_acquire(). My understanding is that the struct task_struct *
> parameter to bpf_task_acquire() is not PTR_UNTRUSTED, so it's safe to
> dereference directly in the kfunc. The implicit assumption here is that
> the task was either passed to the BPF program (which is calling
> bpf_task_acquire()) from the main kernel in something like a trace or
> struct_ops callback, or it was a referenced kptr that was removed from a
> map with bpf_kptr_xchg(), and is now owned by the BPF program. Given
> that the ptr type is not PTR_UNTRUSTED, it seemed correct to assume that
> the task was valid in bpf_task_acquire() regardless of whether we were
> in an RCU read region or not, but please let me know if I'm wrong about
I don't think that's correct. You can just walk arbitrary structures and
obtain a normal PTR_TO_BTF_ID that looks ok to the verifier but has no
lifetime guarantees. It's a separate pre-existing problem unrelated to
your series [0]. PTR_UNTRUSTED is not set for those cases currently.
So the argument to your bpf_task_acquire may already have been freed by
the time it is called. The issue is exacerbated in sleepable BPF
programs, where the RCU read lock is not held, so some pointer walking
cases that would otherwise be safe no longer are.
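For instance, something like the sketch below loads fine today even
though nothing pins the pointee (the walked field is arbitrary, chosen
just for illustration):

SEC("tp_btf/task_newtask")
int BPF_PROG(task_kfunc_acquire_walked, struct task_struct *task, u64 clone_flags)
{
	/* Walking task->real_parent yields a plain PTR_TO_BTF_ID without
	 * PTR_UNTRUSTED set and with no lifetime guarantee, yet the
	 * verifier currently allows passing it to the kfunc.
	 */
	struct task_struct *parent = task->real_parent;
	struct task_struct *acquired;

	acquired = bpf_task_acquire(parent);
	bpf_task_release(acquired);

	return 0;
}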
I am planning to clean this up, but I'd still prefer that we not allow
this helper in sleepable programs yet. kptr_get is ok to allow.
Once you have explicit BPF RCU read sections, sleepable programs can
take the RCU read lock, do loads, and operate on the RCU pointer safely
until they invalidate it with the outermost bpf_rcu_read_unlock. It's
needed for local kptrs as well, not just this. I plan to post this very
soon, so we should be able to fix it up in the current cycle after your
series lands.
__rcu tags in the kernel will automatically be understood by the
verifier, and for the majority of correctly annotated cases this will
work fine without breaking tracing programs.
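To sketch what I have in mind (hypothetical kfunc names, the final API
may well differ; assumes the same includes as the selftests above):

/* Not in the tree yet, illustration only. */
void bpf_rcu_read_lock(void) __ksym;
void bpf_rcu_read_unlock(void) __ksym;

SEC("lsm.s/bprm_committed_creds") /* sleepable */
int BPF_PROG(sleepable_walk, struct linux_binprm *bprm)
{
	struct task_struct *task = bpf_get_current_task_btf();
	struct task_struct *parent;

	bpf_rcu_read_lock();
	/* real_parent is __rcu tagged in the kernel, so loads of it would
	 * only be permitted inside the read section ...
	 */
	parent = task->real_parent;
	if (parent)
		bpf_printk("parent pid: %d", parent->pid);
	bpf_rcu_read_unlock();
	/* ... and parent is invalidated here. */

	return 0;
}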
[0]: https://lore.kernel.org/bpf/CAADnVQJxe1QT5bvcsrZQCLeZ6kei6WEESP5bDXf_5qcB2Bb6_Q@xxxxxxxxxxxxxx
> that. Other kfuncs I saw, such as bpf_xdp_ct_lookup(), assumed that
> the parameter passed by the BPF program (which itself was passing on a
> pointer given to it by the main kernel) was valid as well.
Yeah, but the CT API doesn't assume the validity of a random
PTR_TO_BTF_ID; it requires KF_TRUSTED_ARGS, which forces arguments to
have ref_obj_id != 0.
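If we wanted the same enforcement for the task kfuncs, the registration
would look something like this (a sketch against the kfunc flags
infrastructure; the exact flag combination is illustrative):

BTF_SET8_START(task_kfunc_btf_ids)
BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS)
BTF_ID_FLAGS(func, bpf_task_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
BTF_SET8_END(task_kfunc_btf_ids)

With KF_TRUSTED_ARGS on bpf_task_acquire, the verifier would reject the
walked-pointer case above, since the argument must have ref_obj_id != 0.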
>
> Note that the difference between bpf_task_acquire() and
> bpf_task_kptr_get() is that for bpf_task_kptr_get(), we're passing a
> pointer to a kptr which could be swapped out and invalidated at any
> moment by a bpf_kptr_xchg() elsewhere in the program. That's why it's
> necessary to enter an RCU read region before calling READ_ONCE(), and
> why we have to do a refcount_inc_not_zero() rather than just a
> refcount_inc() as we do in bpf_task_acquire().
>
Swapping out is not a problem if the object is RCU protected (which,
again, is a requirement for kptr_ref if you wish to support kptr_get;
otherwise it's not needed).
The double pointer ugliness exists to allow sleepable programs to safely
do the rcu_dereference/READ_ONCE inside the rcu_read_lock that the
kptr_get helper holds. In non-sleepable programs the RCU read lock is
already held, so technically we could just load the pointer and pass it
to refcount_inc_not_zero directly.
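i.e. the helper boils down to this pattern (a sketch, modulo the exact
names in your patch):

#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/sched.h>

struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
{
	struct task_struct *p;

	rcu_read_lock();
	/* The kptr can be swapped out and freed concurrently via
	 * bpf_kptr_xchg(), so load it once and only take a reference if
	 * the object is still live.
	 */
	p = READ_ONCE(*pp);
	if (p && !refcount_inc_not_zero(&p->rcu_users))
		p = NULL;
	rcu_read_unlock();

	return p;
}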
All of this will be gone once we have explicit BPF RCU read sections:
the kptr will be tagged __rcu, and while it will only work with those
helpers that can deal with refcount == 0, it will be safer to operate on
than the PTR_UNTRUSTED case (which is what a normal load of a kptr_ref
yields right now), and it also makes it easier to rely on data read from
the object.
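Instead of the double pointer dance, a sleepable program could then do
something like this (hypothetical; assumes the __rcu kptr tagging and
the RCU read section kfuncs sketched earlier, and the map and field
names are made up):

struct map_value {
	struct task_struct __rcu *task; /* hypothetical __rcu kptr tag */
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct map_value);
} task_map SEC(".maps");

SEC("lsm.s/file_open")
int BPF_PROG(read_stashed_task, struct file *file)
{
	struct map_value *v;
	struct task_struct *t;
	int key = 0;

	v = bpf_map_lookup_elem(&task_map, &key);
	if (!v)
		return 0;

	bpf_rcu_read_lock();
	t = v->task; /* plain load, no kptr_get round trip */
	if (t)
		/* Reading data is fine even when the refcount has hit
		 * zero, since the object is RCU protected.
		 */
		bpf_printk("stashed task pid: %d", t->pid);
	bpf_rcu_read_unlock();

	return 0;
}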