Re: [PATCH v2 2/4] perf lock contention: Run BPF slab cache iterator
From: Andrii Nakryiko
Date: Mon Dec 09 2024 - 18:01:02 EST
On Mon, Dec 9, 2024 at 12:23 PM Arnaldo Carvalho de Melo
<acme@xxxxxxxxxx> wrote:
>
> On Mon, Dec 09, 2024 at 01:38:39PM -0300, Arnaldo Carvalho de Melo wrote:
> > On Mon, Dec 09, 2024 at 01:36:52PM -0300, Arnaldo Carvalho de Melo wrote:
> > > On Thu, Nov 07, 2024 at 10:14:57PM -0800, Namhyung Kim wrote:
> > > > Recently the kernel got the kmem_cache iterator to traverse metadata of
> > > > slab objects. This can be used to symbolize dynamic locks in a slab.
>
> > > > The new slab_caches hash map will have the pointer of the kmem_cache as
> > > > a key and save the name and an id. The id will be saved in the flags
> > > > part of the lock.
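
For illustration only, a minimal sketch of how such a map could be declared
on the BPF side; the struct layout, sizes and names below are assumptions,
not copied from the patch:

  /* sketch: hash map keyed by the kmem_cache pointer, storing id + name */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  struct slab_cache_data {
          __u32 id;
          char name[32];
  };

  struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(max_entries, 1024);
          __type(key, __u64);                     /* kmem_cache pointer */
          __type(value, struct slab_cache_data);  /* id + cache name */
  } slab_caches SEC(".maps");

The id stored there is what later gets shifted into the flags of the
contended lock, so the tool can map a lock back to its cache name.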
>
> > > Trying to fix this
> >
> > So you have that struct in tools/perf/util/bpf_skel/vmlinux/vmlinux.h,
> > but this kernel is old and doesn't have the kmem_cache iterator, so
> > building with a vmlinux.h generated from its BTF will fail.
>
> I tried passing the right offset to the iterator program, to avoid using
> a type that isn't in the vmlinux.h generated from the old kernel's BTF:
>
> +++ b/tools/perf/util/bpf_lock_contention.c
> @@ -52,7 +52,7 @@ static void check_slab_cache_iter(struct lock_contention *con)
> pr_debug("slab cache iterator is not available: %d\n", ret);
> goto out;
> } else {
> - const struct btf_member *s = __btf_type__find_member_by_name(btf, ret, "s");
> + const struct btf_member *s = __btf_type__find_unnamed_union_with_member_by_name(btf, ret, "s");
>
> if (s == NULL) {
> skel->rodata->slab_cache_iter_member_offset = -1;
> @@ -60,7 +60,9 @@ static void check_slab_cache_iter(struct lock_contention *con)
> goto out;
> }
>
> skel->rodata->slab_cache_iter_member_offset = s->offset / 8; // bits -> bytes
> + pr_debug("slab cache iterator kmem_cache pointer offset: %d\n",
> + skel->rodata->slab_cache_iter_member_offset);
> }
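
__btf_type__find_unnamed_union_with_member_by_name() is a local helper
here, so the following is only a guess at what it might do, sketched with
libbpf's public BTF accessors; the function name and structure are mine:

  #include <string.h>
  #include <bpf/btf.h>

  /*
   * Walk the members of struct type 'type_id'; when one of them is an
   * unnamed union, look inside it for a member called 'name' and, if
   * found, return the union member itself, whose ->offset (in bits,
   * relative to the outer struct) is what the caller divides by 8.
   */
  static const struct btf_member *
  find_unnamed_union_member(const struct btf *btf, int type_id, const char *name)
  {
          const struct btf_type *t = btf__type_by_id(btf, type_id);
          const struct btf_member *m;
          int i, j;

          if (t == NULL || !btf_is_struct(t))
                  return NULL;

          m = btf_members(t);
          for (i = 0; i < btf_vlen(t); i++, m++) {
                  const struct btf_type *mt;
                  const struct btf_member *um;

                  /* only unnamed members are interesting here */
                  if (btf__name_by_offset(btf, m->name_off)[0] != '\0')
                          continue;

                  mt = btf__type_by_id(btf, m->type);
                  if (mt == NULL || !btf_is_union(mt))
                          continue;

                  um = btf_members(mt);
                  for (j = 0; j < btf_vlen(mt); j++, um++) {
                          if (!strcmp(btf__name_by_offset(btf, um->name_off), name))
                                  return m;
                  }
          }
          return NULL;
  }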
>
>
> but the verifier doesn't like that:
>
> ; struct kmem_cache *s = slab_cache_iter_member_offset < 0 ? NULL : @ lock_contention.bpf.c:615
> 12: (7b) *(u64 *)(r10 -8) = r2 ; R2_w=ctx(off=8) R10=fp0 fp-8_w=ctx(off=8)
> ; if (s == NULL) @ lock_contention.bpf.c:619
> 13: (15) if r1 == 0x0 goto pc+22 ; R1=ctx()
> ; d.id = ++slab_cache_id << LCB_F_SLAB_ID_SHIFT; @ lock_contention.bpf.c:622
> 14: (18) r1 = 0xffffc14bcde3a014 ; R1_w=map_value(map=lock_con.bss,ks=4,vs=40,off=20)
> 16: (61) r3 = *(u32 *)(r1 +0) ; R1_w=map_value(map=lock_con.bss,ks=4,vs=40,off=20) R3_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
> 17: (07) r3 += 1 ; R3_w=scalar(smin=umin=1,smax=umax=0x100000000,var_off=(0x0; 0x1ffffffff))
> 18: (63) *(u32 *)(r1 +0) = r3 ; R1_w=map_value(map=lock_con.bss,ks=4,vs=40,off=20) R3_w=scalar(smin=umin=1,smax=umax=0x100000000,var_off=(0x0; 0x1ffffffff))
> 19: (67) r3 <<= 16 ; R3_w=scalar(smin=umin=0x10000,smax=umax=0x1000000000000,smax32=0x7fff0000,umax32=0xffff0000,var_off=(0x0; 0x1ffffffff0000))
> 20: (63) *(u32 *)(r10 -40) = r3 ; R3_w=scalar(smin=umin=0x10000,smax=umax=0x1000000000000,smax32=0x7fff0000,umax32=0xffff0000,var_off=(0x0; 0x1ffffffff0000)) R10=fp0 fp-40=????scalar(smin=umin=0x10000,smax=umax=0x1000000000000,smax32=0x7fff0000,umax32=0xffff0000,var_off=(0x0; 0x1ffffffff0000))
> ; bpf_probe_read_kernel_str(d.name, sizeof(d.name), s->name); @ lock_contention.bpf.c:623
> 21: (79) r3 = *(u64 *)(r2 +96)
> dereference of modified ctx ptr R2 off=8 disallowed
> processed 19 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
> -- END PROG LOAD LOG --
> libbpf: prog 'slab_cache_iter': failed to load: -EACCES
> libbpf: failed to load object 'lock_contention_bpf'
> libbpf: failed to load BPF skeleton 'lock_contention_bpf': -EACCES
> Failed to load lock-contention BPF skeleton
> lock contention BPF setup failed
> root@number:~#
>
> and additionally the type is not like the one you added to the barebones
> vmlinux.h:
>
> ⬢ [acme@toolbox perf-tools-next]$ git show d82e2e170d1c756b | grep 'struct bpf_iter__kmem_cache {' -A3
> +struct bpf_iter__kmem_cache {
> + struct kmem_cache *s;
> +} __attribute__((preserve_access_index));
> +
> ⬢ [acme@toolbox perf-tools-next]$
>
> But:
>
> ⬢ [acme@toolbox perf-tools-next]$ uname -a
> Linux toolbox 6.13.0-rc2 #1 SMP PREEMPT_DYNAMIC Mon Dec 9 12:33:35 -03 2024 x86_64 GNU/Linux
> ⬢ [acme@toolbox perf-tools-next]$ pahole bpf_iter__kmem_cache
> struct bpf_iter__kmem_cache {
> union {
> struct bpf_iter_meta * meta; /* 0 8 */
> }; /* 0 8 */
> union {
> struct kmem_cache * s; /* 8 8 */
> }; /* 8 8 */
>
> /* size: 16, cachelines: 1, members: 2 */
> /* last cacheline: 16 bytes */
> };
>
> ⬢ [acme@toolbox perf-tools-next]$
>
> Does CO-RE handle this?
>
I don't know exactly what problem you are running into, but yes, BPF
CO-RE can handle missing fields, incompatible field type changes,
field renames, etc., all without breaking the compilation. See [0]
(and the subsection after it) for "documentation" and examples.
[0] https://nakryiko.com/posts/bpf-core-reference-guide/#defining-own-co-re-relocatable-type-definitions
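
For what it's worth, a sketch of what that guide's "define your own
CO-RE-relocatable types" approach could look like for this case; the
___local type names, the program body and the buffer size are my
assumptions, and a kernel providing the kmem_cache iterator is still
needed at load/attach time, but the build no longer depends on the
generated vmlinux.h having these types:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_core_read.h>

  /* minimal local stand-ins with only the fields needed; the ___local
   * suffix is ignored by libbpf when matching against kernel BTF, and
   * preserve_access_index makes every access CO-RE-relocatable */
  struct kmem_cache___local {
          const char *name;
  } __attribute__((preserve_access_index));

  struct bpf_iter__kmem_cache___local {
          struct kmem_cache___local *s;
  } __attribute__((preserve_access_index));

  SEC("iter/kmem_cache")
  int slab_cache_iter(struct bpf_iter__kmem_cache___local *ctx)
  {
          struct kmem_cache___local *s;
          char name[32];

          /* fixed-offset ctx access, relocated at load time against the
           * running kernel's layout: the anonymous unions shown by pahole
           * are matched by field name, so 's' resolves to offset 8, and
           * there is no "modified ctx ptr" dereference for the verifier
           * to complain about */
          s = ctx->s;
          if (s == NULL)
                  return 0;

          bpf_probe_read_kernel_str(name, sizeof(name), BPF_CORE_READ(s, name));
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";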
> - Arnaldo