On 24/05/2021 17.59, Richard Fitzgerald wrote:
> sparse was producing warnings of the form:
>
>   sparse: cast truncates bits from constant value (ffff0001 becomes 1)
>
> The problem was that value_representable_in_type() compared unsigned types
> against type_min(). But type_min() is only valid for signed types because
> it is calculating the value -type_max() - 1.
... and casts that to (T), so it does produce 0 as it should. E.g. for
T==unsigned char, we have

  #define type_min(T) ((T)((T)-type_max(T)-(T)1))

which expands to

  (T)((T)-255 - (T)1)

The inner cast wraps -255 around to 1, so this is (T)(1 - 1); even if
one ignores that cast, (T)(-256) is the same thing. Either way, the
result is 0 of type unsigned char.
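As a quick compile-time sanity check of that arithmetic (my addition,
not part of the patch; needs C11 for _Static_assert):

  _Static_assert((unsigned char)((unsigned char)-255 - (unsigned char)1) == 0,
                 "type_min(unsigned char) is 0");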
> The minimum value of an
> unsigned is obviously 0, so only type_max() need be tested.
That part is true.
But type_min and type_max have been carefully created to produce values
of the appropriate type that actually represent the minimum/maximum
representable in that type, without invoking UB. If this program doesn't
produce the expected results for you, I'd be very interested in knowing
your compiler version:
#include <stdio.h>

#define is_signed_type(type) (((type)(-1)) < (type)1)
#define __type_half_max(type) \
	((type)1 << (8*sizeof(type) - 1 - is_signed_type(type)))
#define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
#define type_min(T) ((T)((T)-type_max(T)-(T)1))

int main(int argc, char *argv[])
{
	/* Print type_min/type_max of T after conversion to the
	 * printf-able type PT. */
#define p(T, PT, fmt) do { \
	PT vmin = type_min(T); \
	PT vmax = type_max(T); \
	printf("min(%s) = "fmt", max(%s) = "fmt"\n", #T, vmin, #T, vmax); \
} while (0)

	p(_Bool, int, "%d");
	p(unsigned char, int, "%d");
	p(signed char, int, "%d");
	p(unsigned int, unsigned int, "%u");
	p(unsigned long long, unsigned long long, "%llu");
	p(signed long long, signed long long, "%lld");

	return 0;
}
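For reference, on a typical target with 8-bit char, 32-bit int and
64-bit long long, I'd expect that to print:

  min(_Bool) = 0, max(_Bool) = 1
  min(unsigned char) = 0, max(unsigned char) = 255
  min(signed char) = -128, max(signed char) = 127
  min(unsigned int) = 0, max(unsigned int) = 4294967295
  min(unsigned long long) = 0, max(unsigned long long) = 18446744073709551615
  min(signed long long) = -9223372036854775808, max(signed long long) = 9223372036854775807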
> lib/test_scanf.c | 13 ++++++-------
> 1 file changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/lib/test_scanf.c b/lib/test_scanf.c
> index 8d577aec6c28..48ff5747a4da 100644
> --- a/lib/test_scanf.c
> +++ b/lib/test_scanf.c
> @@ -187,8 +187,8 @@ static const unsigned long long numbers[] __initconst = {
>  #define value_representable_in_type(T, val) \
>  (is_signed_type(T) \
>  	? ((long long)(val) >= type_min(T)) && ((long long)(val) <= type_max(T)) \
> -	: ((unsigned long long)(val) >= type_min(T)) && \
> -	  ((unsigned long long)(val) <= type_max(T)))
> +	: ((unsigned long long)(val) <= type_max(T)))
With or without this, these tests are tautological when T is "long long"
or "unsigned long long". I don't know if that is intended. But it means
the check won't, for example, exclude ~0ULL (if that is in the numbers[]
array) from being treated as fitting in a "long long".
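To make that concrete, here is a stand-alone illustration (my example,
using the same macros as the program above and pulling out just the
signed branch of the check):

  #include <stdio.h>

  #define is_signed_type(type) (((type)(-1)) < (type)1)
  #define __type_half_max(type) \
          ((type)1 << (8*sizeof(type) - 1 - is_signed_type(type)))
  #define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
  #define type_min(T) ((T)((T)-type_max(T)-(T)1))

  /* The signed branch of value_representable_in_type(). */
  #define representable_signed(T, val) \
          (((long long)(val) >= type_min(T)) && \
           ((long long)(val) <= type_max(T)))

  int main(void)
  {
          /* (long long)~0ULL is -1, which trivially satisfies both
           * bounds, so this prints 1 even though 2^64 - 1 does not
           * fit in a long long. */
          printf("%d\n", representable_signed(long long, ~0ULL));
          return 0;
  }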
Rasmus