Re: [PATCH bpf v1] bpf: Fix OOB in bpf_obj_memcpy for cgroup storage
From: Paul Chaignon
Date: Thu Mar 12 2026 - 14:05:49 EST
On Thu, Mar 12, 2026 at 09:41:40AM -0700, Yonghong Song wrote:
>
>
> On 3/12/26 4:51 AM, Paul Chaignon wrote:
> > On Thu, Mar 12, 2026 at 01:25:25PM +0800, xulang wrote:
> > > From: Lang Xu <xulang@xxxxxxxxxxxxx>
> > >
> > > An out-of-bounds read occurs when copying an element from a
> > > BPF_MAP_TYPE_CGROUP_STORAGE map to another map type with the same
> > > value_size, when that size is not a multiple of 8 bytes.
> > >
> > > The issue happens when:
> > > 1. A CGROUP_STORAGE map is created with value_size not aligned to
> > > 8 bytes (e.g., 4 bytes)
> > > 2. A HASH map is created with the same value_size (e.g., 4 bytes)
> > > 3. An element of the hash map is updated with the value read from
> > > the cgroup storage map
> > >
> > > In the kernel, map value buffers are typically sized in multiples of
> > > 8 bytes. However, bpf_cgroup_storage_calculate_size() allocates
> > > storage based on the exact value_size without rounding. When
> > > copy_map_value_long() is called, it assumes the buffer size is a
> > > multiple of 8 and rounds the copy size up, leading to a 4-byte
> > > out-of-bounds read from the cgroup storage buffer.
> > >
> > > This patch fixes the issue by rounding the cgroup storage allocation
> > > size up to a multiple of 8 bytes, matching the assumption in
> > > copy_map_value_long().
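(Side note for readers following along: if I remember the helpers
right, copy_map_value_long() lands in bpf_obj_memcpy(), which, for
maps without special btf_record fields, copies round_up(value_size, 8)
bytes at long granularity. A minimal userspace sketch of that copy
pattern, imitating bpf_long_memcpy(), shows where the 4-byte overread
comes from; this is an illustration, not the kernel code itself:

/* Imitation of the kernel's long-granularity copy, for illustration. */
#include <stdlib.h>

static void long_memcpy(void *dst, const void *src, unsigned int size)
{
	const long *lsrc = src;
	long *ldst = dst;

	size /= sizeof(long);	/* size must be a multiple of 8 */
	while (size--)
		*ldst++ = *lsrc++;
}

int main(void)
{
	long dst[2];
	void *src = malloc(4);	/* like the 4-byte cgroup storage value */

	/* the caller rounds 4 up to 8, so 8 bytes are read from a
	 * 4-byte allocation: the OOB this patch fixes
	 */
	long_memcpy(dst, src, 8);
	free(src);
	return 0;
}

Running that under ASAN flags the 4-byte heap overread immediately.)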
> > I don't think this bug is specific to the CGROUP_STORAGE maps. Wouldn't
> > it affect any copy from a non-percpu map into a percpu hashmap? The
> > reproducer in [1] copies from a BPF_MAP_TYPE_CGROUP_STORAGE map to a
> > BPF_MAP_TYPE_LRU_PERCPU_HASH map, but I suspect you'd hit the same bug
> > if copying from BPF_MAP_TYPE_HASH into BPF_MAP_TYPE_PERCPU_HASH because
> > for BPF_MAP_TYPE_HASH the value size is also not rounded up to a
> > multiple of 8.
>
> The BPF_MAP_TYPE_HASH table has its value size rounded up to 8. See:
>
> 	if (percpu)
> 		htab->elem_size += sizeof(void *);
> 	else
> 		htab->elem_size += round_up(htab->map.value_size, 8);
>
> The same is true for the array map's element size.
My bad, I looked at the ->map_alloc_check() callbacks and assumed any
round_up would be reflected there :/ Given that:
Acked-by: Paul Chaignon <paul.chaignon@xxxxxxxxx>
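For the archive, the snippet Yonghong quoted is from htab_map_alloc().
If I'm reading it right, the full element size computation is roughly:

	htab->elem_size = sizeof(struct htab_elem) +
			  round_up(htab->map.key_size, 8);
	if (percpu)
		htab->elem_size += sizeof(void *);
	else
		htab->elem_size += round_up(htab->map.value_size, 8);

so keys and values both get the 8-byte rounding for non-percpu hash
maps, and IIUC the percpu value areas are likewise rounded up to 8
where they are allocated.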
I also had a look at the other map types and they all seem to either
round up to 8 or not be susceptible to the OOB copy (e.g., queue &
stack). The one I'm unsure about is BPF_MAP_TYPE_*_CGROUP_STORAGE. It
doesn't seem to round up to 8, but I'm not sure it can be used as the
source of such a copy.
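For anyone who wants to poke at that, below is the rough shape I'd
expect a reproducer along the lines of [1] to take. Untested sketch:
the program type, map definitions, and names are my guesses, not taken
from [1]. KASAN should flag the 8-byte copy out of the 4-byte storage
buffer on the map update:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
	__type(key, struct bpf_cgroup_storage_key);
	__type(value, __u32);	/* 4 bytes, not a multiple of 8 */
} cgrp_storage SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_LRU_PERCPU_HASH);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u32);	/* same 4-byte value_size */
} percpu_hash SEC(".maps");

SEC("cgroup_skb/egress")
int repro(struct __sk_buff *skb)
{
	__u32 key = 0;
	void *val = bpf_get_local_storage(&cgrp_storage, 0);

	/* the percpu update copies round_up(value_size, 8) bytes from
	 * val, reading past the 4-byte cgroup storage buffer
	 */
	bpf_map_update_elem(&percpu_hash, &key, val, BPF_ANY);
	return 1;
}

char _license[] SEC("license") = "GPL";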
On a related note, this is the sort of reproducer that would be good to
add to https://github.com/google/syzkaller/tree/master/sys/linux/test
because syzbot can learn from it and go on to find similar bugs.
>
> >
> > 1 - https://lore.kernel.org/all/14e6c70c.6c121.19c0399d948.Coremail.kaiyanm@xxxxxxxxxxx/
> >
> > > Fixes: b741f1630346 ("bpf: introduce per-cpu cgroup local storage")
> > > Reported-by: Kaiyan Mei <kaiyanm@xxxxxxxxxxx>
> > > Closes: https://lore.kernel.org/all/14e6c70c.6c121.19c0399d948.Coremail.kaiyanm@xxxxxxxxxxx/
> > > Signed-off-by: Lang Xu <xulang@xxxxxxxxxxxxx>
> > > ---
> > > kernel/bpf/local_storage.c | 7 +++----
> > > 1 file changed, 3 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
> > > index 8fca0c64f7b1..54b32ba19194 100644
> > > --- a/kernel/bpf/local_storage.c
> > > +++ b/kernel/bpf/local_storage.c
> > > @@ -487,14 +487,13 @@ static size_t bpf_cgroup_storage_calculate_size(struct bpf_map *map, u32 *pages)
> > >  {
> > >  	size_t size;
> > >  
> > > +	size = round_up(map->value_size, 8);
> > >  	if (cgroup_storage_type(map) == BPF_CGROUP_STORAGE_SHARED) {
> > > -		size = sizeof(struct bpf_storage_buffer) + map->value_size;
> > > +		size += sizeof(struct bpf_storage_buffer);
> > >  		*pages = round_up(sizeof(struct bpf_cgroup_storage) + size,
> > >  				  PAGE_SIZE) >> PAGE_SHIFT;
> > >  	} else {
> > > -		size = map->value_size;
> > > -		*pages = round_up(round_up(size, 8) * num_possible_cpus(),
> > > -				  PAGE_SIZE) >> PAGE_SHIFT;
> > > +		*pages = round_up(size * num_possible_cpus(), PAGE_SIZE) >> PAGE_SHIFT;
> > >  	}
> > >  
> > >  	return size;
> > > --
> > > 2.51.0
> > >
> > >
>