Re: [PATCH v4 bpf-next 2/3] mm/bpf: Add bpf_get_kmem_cache() kfunc

From: Namhyung Kim
Date: Fri Oct 04 2024 - 17:58:21 EST


On Fri, Oct 04, 2024 at 02:36:30PM -0700, Song Liu wrote:
> On Fri, Oct 4, 2024 at 2:25 PM Roman Gushchin <roman.gushchin@xxxxxxxxx> wrote:
> >
> > On Fri, Oct 04, 2024 at 01:10:58PM -0700, Song Liu wrote:
> > > On Wed, Oct 2, 2024 at 11:10 AM Namhyung Kim <namhyung@xxxxxxxxxx> wrote:
> > > >
> > > > The bpf_get_kmem_cache() kfunc gets slab cache information from a
> > > > virtual address, like virt_to_cache(). If the address is a pointer
> > > > to a slab object, it returns a valid kmem_cache pointer; otherwise
> > > > NULL is returned.
> > > >
> > > > It doesn't grab a reference count on the kmem_cache, so the caller is
> > > > responsible for managing the access. The intended use case for now is to
> > > > symbolize locks in slab objects from the lock contention tracepoints.
> > > >
> > > > Suggested-by: Vlastimil Babka <vbabka@xxxxxxx>
> > > > Acked-by: Roman Gushchin <roman.gushchin@xxxxxxxxx> (mm/*)
> > > > Acked-by: Vlastimil Babka <vbabka@xxxxxxx> #mm/slab
> > > > Signed-off-by: Namhyung Kim <namhyung@xxxxxxxxxx>
> > > > ---
> > > >  kernel/bpf/helpers.c | 1 +
> > > >  mm/slab_common.c     | 19 +++++++++++++++++++
> > > >  2 files changed, 20 insertions(+)
> > > >
> > > > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > > > index 4053f279ed4cc7ab..3709fb14288105c6 100644
> > > > --- a/kernel/bpf/helpers.c
> > > > +++ b/kernel/bpf/helpers.c
> > > > @@ -3090,6 +3090,7 @@ BTF_ID_FLAGS(func, bpf_iter_bits_new, KF_ITER_NEW)
> > > > BTF_ID_FLAGS(func, bpf_iter_bits_next, KF_ITER_NEXT | KF_RET_NULL)
> > > > BTF_ID_FLAGS(func, bpf_iter_bits_destroy, KF_ITER_DESTROY)
> > > > BTF_ID_FLAGS(func, bpf_copy_from_user_str, KF_SLEEPABLE)
> > > > +BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RET_NULL)
> > > > BTF_KFUNCS_END(common_btf_ids)
> > > >
> > > > static const struct btf_kfunc_id_set common_kfunc_set = {
> > > > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > > > index 7443244656150325..5484e1cd812f698e 100644
> > > > --- a/mm/slab_common.c
> > > > +++ b/mm/slab_common.c
> > > > @@ -1322,6 +1322,25 @@ size_t ksize(const void *objp)
> > > > }
> > > > EXPORT_SYMBOL(ksize);
> > > >
> > > > +#ifdef CONFIG_BPF_SYSCALL
> > > > +#include <linux/btf.h>
> > > > +
> > > > +__bpf_kfunc_start_defs();
> > > > +
> > > > +__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(u64 addr)
> > > > +{
> > > > +	struct slab *slab;
> > > > +
> > > > +	if (!virt_addr_valid(addr))
> > > > +		return NULL;
> > > > +
> > > > +	slab = virt_to_slab((void *)(long)addr);
> > > > +	return slab ? slab->slab_cache : NULL;
> > > > +}
> > >
> > > Do we need to hold a refcount to the slab_cache? Given
> > > we make this kfunc available everywhere, including
> > > sleepable contexts, I think it is necessary.
> >
> > It's a really good question.
> >
> > If the caller somehow owns the slab object, as in the example
> > provided in the series (current task), it's not necessary.
> >
> > If a user can pass a random address, you're right, we need to
> > grab the slab_cache's refcnt. But then we also can't guarantee
> > that the object still belongs to the same slab_cache; the
> > function becomes racy by definition.
>
> To be safe, we can limit the kfunc to sleepable context only. Then
> we can lock slab_mutex for virt_to_slab, and hold a refcount
> to slab_cache. We will need a KF_RELEASE kfunc to release
> the refcount later.
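
If I understand the suggestion right, the acquire side would look
roughly like this (a sketch for discussion only -- "bpf_put_kmem_cache"
is a made-up name, and the refcount handling is my assumption that it
would mirror what cache aliasing does under slab_mutex):

__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(u64 addr)
{
	struct kmem_cache *s = NULL;
	struct slab *slab;

	if (!virt_addr_valid(addr))
		return NULL;

	mutex_lock(&slab_mutex);	/* sleepable context only */
	slab = virt_to_slab((void *)(long)addr);
	if (slab && slab->slab_cache) {
		s = slab->slab_cache;
		s->refcount++;		/* pin it like an alias would */
	}
	mutex_unlock(&slab_mutex);

	return s;
}

/* KF_RELEASE counterpart */
__bpf_kfunc void bpf_put_kmem_cache(struct kmem_cache *s)
{
	kmem_cache_destroy(s);	/* drops the refcount */
}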

Then the release side needs to call kmem_cache_destroy(), which contains
an rcu_barrier(). :(

>
> IIUC, this limitation (sleepable context only) shouldn't be a problem
> for perf use case?

No, it would be called from the lock contention path including
spinlocks. :(
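
For context, the intended usage is roughly like below, attached to the
contention tracepoints (a sketch, not the actual tool code):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

extern struct kmem_cache *bpf_get_kmem_cache(u64 addr) __ksym;

SEC("tp_btf/contention_begin")
int BPF_PROG(on_contention, void *lock, unsigned int flags)
{
	struct kmem_cache *s;

	/* KF_RET_NULL: the verifier enforces the NULL check */
	s = bpf_get_kmem_cache((u64)lock);
	if (s)
		bpf_printk("lock in slab cache: %s", s->name);

	return 0;
}

char LICENSE[] SEC("license") = "GPL";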

Can we limit it to non-sleepable contexts and somehow prevent passing an
arbitrary address (or prevent saving the result pointer)?

Thanks,
Namhyung