Re: [PATCH 1/2] lib: test_scanf: Fix incorrect use of type_min() with unsigned types

From: Rasmus Villemoes
Date: Tue May 25 2021 - 06:30:51 EST


On 25/05/2021 12.10, Richard Fitzgerald wrote:
> On 25/05/2021 10:55, Rasmus Villemoes wrote:
>> On 24/05/2021 17.59, Richard Fitzgerald wrote:
>>> sparse was producing warnings of the form:
>>>
>>>   sparse: cast truncates bits from constant value (ffff0001 becomes 1)
>>>
>>> The problem was that value_representable_in_type() compared unsigned
>>> types against type_min(). But type_min() is only valid for signed
>>> types because it is calculating the value -type_max() - 1.
>
> Ok, I see I was wrong about that. It does in fact work safely. Do you
> want me to update the commit message to remove this?

Well, it was the "is only valid for signed types" I reacted to, so yes,
please reword.

>> ... and casts that to (T), so it does produce 0 as it should. E.g. for
>> T==unsigned char, we get
>>
>> #define type_min(T) ((T)((T)-type_max(T)-(T)1))
>> (T)((T)-255 - (T)1)
>> (T)(-256)
>>
>
> sparse warns about those truncating casts.

That's sad. As the comments and commit log indicate, I was very careful
to avoid gcc complaining, even with various -Wfoo that are not normally
enabled in a kernel build. I think sparse is wrong here. Cc += Luc.

>>> diff --git a/lib/test_scanf.c b/lib/test_scanf.c
>>> index 8d577aec6c28..48ff5747a4da 100644
>>> --- a/lib/test_scanf.c
>>> +++ b/lib/test_scanf.c
>>> @@ -187,8 +187,8 @@ static const unsigned long long numbers[] __initconst = {
>>>   #define value_representable_in_type(T, val)                     \
>>>   (is_signed_type(T)                                 \
>>>       ? ((long long)(val) >= type_min(T)) && ((long long)(val) <= type_max(T)) \
>>> -    : ((unsigned long long)(val) >= type_min(T)) &&                 \
>>> -      ((unsigned long long)(val) <= type_max(T)))
>>> +    : ((unsigned long long)(val) <= type_max(T)))
>>
>>
>> With or without this, these tests are tautological when T is "long long"
>> or "unsigned long long". I don't know if that is intended. But it won't,
>> say, exclude ~0ULL if that is in the numbers[] array from being treated
>> as fitting in a "long long".
>
> I don't entirely understand your comment. But the point of the test is
> to exclude values that can't be represented by a type shorter than
> long long or unsigned long long.

Right. But ~0ULL aka 0xffffffffffffffffULL is in that numbers[] array,
and that value cannot be represented in a "long long". Yet the test
still proceeds to do a test with it, AFAICT first sprintf'ing it with
"%lld", then reading it back with "%lld". The first will produce -1,
which of course does fit, and the test case passes. I was just wondering
if this is really intended.
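
To illustrate with a user-space approximation (plain snprintf/sscanf here,
not the kernel's test harness, so only a rough sketch of what that test
case ends up exercising):

#include <stdio.h>

int main(void)
{
	unsigned long long orig = ~0ULL;  /* 0xffffffffffffffff, does not fit in long long */
	char buf[32];
	long long readback = 0;

	/* The conversion to long long is implementation-defined, but -1 in practice. */
	snprintf(buf, sizeof(buf), "%lld", (long long)orig);   /* writes "-1" */
	sscanf(buf, "%lld", &readback);                        /* reads back -1 */

	/* Round trip "succeeds" even though orig is not representable in long long. */
	printf("buf=%s readback=%lld match=%d\n",
	       buf, readback, (unsigned long long)readback == orig);
	return 0;
}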

Rasmus