Re: [PATCH v2] bpf: hashtab: fix 32-bit overflow in memory usage calculation
From: Yafang Shao
Date: Sun Nov 09 2025 - 03:21:50 EST
On Sun, Nov 9, 2025 at 11:00 AM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
>
> On Fri, Nov 7, 2025 at 7:41 PM David Laight
> <david.laight.linux@xxxxxxxxx> wrote:
> >
> > On Fri, 7 Nov 2025 13:03:05 +0300
> > Alexei Safin <a.safin@xxxxxxx> wrote:
> >
> > > The intermediate product value_size * num_possible_cpus() is evaluated
> > > in 32-bit arithmetic and only then promoted to 64 bits. On systems with
> > > large value_size and many possible CPUs this can overflow and lead to
> > > an underestimated memory usage.
> > >
> > > Found by Linux Verification Center (linuxtesting.org) with SVACE.
> >
> > That code is insane.
> > The size being calculated looks like a kernel memory size.
> > You really don't want to be allocating single structures that exceed 4GB.
>
> I'm afraid I don't follow your point.
> The calculation `value_size * num_possible_cpus() * num_entries` can
> overflow. While the creation of a hashmap limits `value_size *
> num_entries` to U32_MAX, this new formula can easily exceed that
> limit. For example, on my test server with just 64 CPUs, the following
> operation will trigger an overflow:
>
> map_fd = bpf_map_create(BPF_MAP_TYPE_PERCPU_HASH, "count_map", 4, 4,
>                         1 << 27, &map_opts);
Upon reviewing the code, I see that `num_entries` is declared as u64,
so the product `value_size * num_possible_cpus() * num_entries` is
widened to 64-bit arithmetic before it can overflow. Therefore, this
change is unnecessary.
It seems that the Linux Verification Center (linuxtesting.org) needs
to be improved ;-)
--
Regards
Yafang