Re: [PATCH v2] bpf: hashtab: fix 32-bit overflow in memory usage calculation

From: Yafang Shao
Date: Sat Nov 08 2025 - 22:01:21 EST


On Fri, Nov 7, 2025 at 7:41 PM David Laight
<david.laight.linux@xxxxxxxxx> wrote:
>
> On Fri, 7 Nov 2025 13:03:05 +0300
> Alexei Safin <a.safin@xxxxxxx> wrote:
>
> > The intermediate product value_size * num_possible_cpus() is evaluated
> > in 32-bit arithmetic and only then promoted to 64 bits. On systems with
> > large value_size and many possible CPUs this can overflow and lead to
> > an underestimated memory usage.
> >
> > Found by Linux Verification Center (linuxtesting.org) with SVACE.
>
> That code is insane.
> The size being calculated looks like a kernel memory size.
> You really don't want to be allocating single structures that exceed 4GB.

I don't quite follow your point.
The calculation `value_size * num_possible_cpus() * num_entries` can
overflow. While the creation of a hashmap limits `value_size *
num_entries` to U32_MAX, this new formula can easily exceed that
limit. For example, on my test server with just 64 CPUs, the following
operation will trigger an overflow:

map_fd = bpf_map_create(BPF_MAP_TYPE_PERCPU_HASH, "count_map",
                        4 /* key_size */, 4 /* value_size */,
                        1 << 27 /* max_entries */, &map_opts);

--
Regards
Yafang