Re: [PATCH v4 next 0/9] Implement mul_u64_u64_div_u64_roundup()

From: Nicolas Pitre

Date: Tue Nov 04 2025 - 12:16:17 EST


On Thu, 30 Oct 2025, Andrew Morton wrote:

> Thanks, I added this to mm.git's mm-nonmm-unstable branch for some
> linux-next exposure. I have a note that [3/9] may be updated in
> response to Nicolas's comment.

This is the change I'd like to see:

----- >8
From: Nicolas Pitre <npitre@xxxxxxxxxxxx>
Subject: lib: mul_u64_u64_div_u64(): optimize quick path for small numbers

If the 128-bit product is small enough (n_hi == 0) we should branch to
div64_u64() right away. This saves one test on this quick path, which is
far more prevalent than the divide-by-0 case, and div64_u64() can deal
with a (theoretically undefined behavior) zero divisor just fine too.
The cost remains the same for regular cases.

Signed-off-by: Nicolas Pitre <npitre@xxxxxxxxxxxx>
---
diff --git a/lib/math/div64.c b/lib/math/div64.c
index 4e4e962261c3..d1e92ea24fce 100644
--- a/lib/math/div64.c
+++ b/lib/math/div64.c
@@ -247,6 +247,9 @@ u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)

n_hi = mul_u64_u64_add_u64(&n_lo, a, b, c);

+ if (!n_hi)
+ return div64_u64(n_lo, d);
+
if (unlikely(n_hi >= d)) {
/* trigger runtime exception if divisor is zero */
if (d == 0) {
@@ -259,9 +262,6 @@ u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d)
return ~0ULL;
}

- if (!n_hi)
- return div64_u64(n_lo, d);
-
/* Left align the divisor, shifting the dividend to match */
d_z_hi = __builtin_clzll(d);
if (d_z_hi) {