Re: [PATCH] bpf: Separate bpf_local_storage_lookup() fast and slow paths

From: Song Liu
Date: Mon Feb 05 2024 - 18:04:47 EST


On Wed, Jan 31, 2024 at 6:19 AM Marco Elver <elver@xxxxxxxxxx> wrote:
>
[...]
>
> Signed-off-by: Marco Elver <elver@xxxxxxxxxx>
> ---
>  include/linux/bpf_local_storage.h             | 17 ++++++++++++++++-
>  kernel/bpf/bpf_local_storage.c                | 14 ++++----------
>  .../selftests/bpf/progs/cgrp_ls_recursion.c   |  2 +-
>  .../selftests/bpf/progs/task_ls_recursion.c   |  2 +-
>  4 files changed, 22 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
> index 173ec7f43ed1..c8cecf7fff87 100644
> --- a/include/linux/bpf_local_storage.h
> +++ b/include/linux/bpf_local_storage.h
> @@ -130,9 +130,24 @@ bpf_local_storage_map_alloc(union bpf_attr *attr,
>  			    bool bpf_ma);
> 
>  struct bpf_local_storage_data *
> +bpf_local_storage_lookup_slowpath(struct bpf_local_storage *local_storage,
> +				  struct bpf_local_storage_map *smap,
> +				  bool cacheit_lockit);
> +static inline struct bpf_local_storage_data *
>  bpf_local_storage_lookup(struct bpf_local_storage *local_storage,
>  			 struct bpf_local_storage_map *smap,
> -			 bool cacheit_lockit);
> +			 bool cacheit_lockit)
> +{
> +	struct bpf_local_storage_data *sdata;
> +
> +	/* Fast path (cache hit) */
> +	sdata = rcu_dereference_check(local_storage->cache[smap->cache_idx],
> +				      bpf_rcu_lock_held());
> +	if (likely(sdata && rcu_access_pointer(sdata->smap) == smap))
> +		return sdata;

We have two changes here: 1) inlining; 2) the likely() annotation. Could you
please include in the commit log how much each of the two contributes to the
performance improvement?
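
One way to get those numbers separately (just an untested sketch on my side,
reusing the names from your patch): keep the inlined fast path in the header
but drop the likely() hint, and benchmark that variant against both the old
out-of-line lookup and the posted version. Something like:

static inline struct bpf_local_storage_data *
bpf_local_storage_lookup_nohint(struct bpf_local_storage *local_storage,
				struct bpf_local_storage_map *smap,
				bool cacheit_lockit)
{
	struct bpf_local_storage_data *sdata;

	/* Same inlined fast path as in the patch, but without likely(),
	 * so any delta vs. the posted version is the branch hint alone.
	 * (Hypothetical helper for benchmarking only, not for merging.)
	 */
	sdata = rcu_dereference_check(local_storage->cache[smap->cache_idx],
				      bpf_rcu_lock_held());
	if (sdata && rcu_access_pointer(sdata->smap) == smap)
		return sdata;

	return bpf_local_storage_lookup_slowpath(local_storage, smap,
						 cacheit_lockit);
}

Old lookup vs. this variant would show the inlining win; this variant vs. the
posted version would show what likely() adds on top.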

Thanks,
Song

> +
> +	return bpf_local_storage_lookup_slowpath(local_storage, smap, cacheit_lockit);
> +}
>
[...]