Re: [PATCH v3] err.h: allow IS_ERR_VALUE to handle properly more types
From: Andrzej Hajda
Date: Thu Feb 11 2016 - 02:02:41 EST
On 02/10/2016 10:01 PM, Arnd Bergmann wrote:
> On Tuesday 09 February 2016 09:42:26 Andrzej Hajda wrote:
>> +cc Rasmus Villemoes, I forgot to add him earlier.
>>
>> On 02/08/2016 01:01 PM, Arnd Bergmann wrote:
>>> On Monday 08 February 2016 09:45:55 Andrzej Hajda wrote:
>>>> On 02/05/2016 11:52 AM, Arnd Bergmann wrote:
>>>>> On Thursday 04 February 2016 10:59:31 Andrew Morton wrote:
>>>> My version produces the shortest code; Arnd's is the same as the old one.
>>>> On the other hand, Rasmus's proposal seems the most straightforward to me.
>>>> Anyway, I am not sure code length is the most important criterion here.
>>>>
>>>> By the way, the .data segment size grows almost 4 times between gcc 4.4
>>>> and 4.8 :)
>>>> Also the numbers for arm64 look interesting.
>>>>
>>>> Just for the record, below are all the proposed implementations:
>>>> #define IS_ERR_VALUE_old(x) unlikely((x) >= (unsigned long)-MAX_ERRNO)
>>>> #define IS_ERR_VALUE_andrzej(x) ((typeof(x))(-1) <= 0 \
>>>> ? unlikely((x) <= -1) \
>>>> : unlikely((x) >= (typeof(x))-MAX_ERRNO))
>>>> #define IS_ERR_VALUE_arnd(x) (unlikely((unsigned long long)(x) >= \
>>>> 	(unsigned long long)(typeof(x))-MAX_ERRNO))
>>>> #define IS_ERR_VALUE_rasmus(x) ({\
>>>> typeof(x) _x = (x);\
>>>> unlikely(_x >= (typeof(x))-MAX_ERRNO && _x <= (typeof(x))-1);\
>>>> })
>>>>
>>>>> Andrzej's version is a little shorter on ARM because, in the case of
>>>>> signed numbers, it only checks for negative values rather than checking
>>>>> for values in the [-MAX_ERRNO..-1] range. I think the original behavior
>>>>> is more logical in this case, and my version restores it.
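(For reference, that behavioural difference is easy to see in userspace; a
quick sketch of the four variants above, relying on gcc extensions (typeof,
statement expressions), with MAX_ERRNO copied from err.h and unlikely()
stubbed out since it is only a branch-prediction hint:

	#include <stdio.h>

	#define MAX_ERRNO	4095
	#define unlikely(x)	(x)

	#define IS_ERR_VALUE_old(x) unlikely((x) >= (unsigned long)-MAX_ERRNO)
	#define IS_ERR_VALUE_andrzej(x) ((typeof(x))(-1) <= 0 \
		? unlikely((x) <= -1) \
		: unlikely((x) >= (typeof(x))-MAX_ERRNO))
	#define IS_ERR_VALUE_arnd(x) (unlikely((unsigned long long)(x) >= \
		(unsigned long long)(typeof(x))-MAX_ERRNO))
	#define IS_ERR_VALUE_rasmus(x) ({ \
		typeof(x) _x = (x); \
		unlikely(_x >= (typeof(x))-MAX_ERRNO && _x <= (typeof(x))-1); \
	})

	int main(void)
	{
		int err = -5000;	/* negative, but outside [-MAX_ERRNO..-1] */

		printf("old:     %d\n", IS_ERR_VALUE_old((unsigned long)err)); /* 0 */
		printf("andrzej: %d\n", IS_ERR_VALUE_andrzej(err));            /* 1 */
		printf("arnd:    %d\n", IS_ERR_VALUE_arnd(err));               /* 0 */
		printf("rasmus:  %d\n", IS_ERR_VALUE_rasmus(err));             /* 0 */
		return 0;
	}

Only the _andrzej variant flags -5000, because its signed branch accepts any
negative value instead of checking the [-MAX_ERRNO..-1] range.)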
>>>> Looking at the usage of the macro in the kernel, I have not found any code
>>>> that could benefit from the original behavior, except some buggy code in
>>>> staging which already has a pending fix[1].
>>>> But maybe it would be better to have IS_ERR_VALUE always check whether err
>>>> is in the range [-MAX_ERRNO..-1], and just use a simple 'err < 0' in the
>>>> typical case of signed types.
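At call sites that policy would look roughly like this (a sketch; the helper
names are made up, except get_unmapped_area()):

	/* signed return value: a plain comparison is enough */
	int err = do_foo();
	if (err < 0)
		return err;

	/* unsigned value whose top range encodes an error, e.g. an address */
	unsigned long addr = get_unmapped_area(file, addr_hint, len, pgoff, flags);
	if (IS_ERR_VALUE(addr))
		return addr;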
>>> If we do that, should we also make it illegal to use an invalid type
>>> for IS_ERR()? At least that could also catch any use of 'char' and 'unsigned
>>> char' that are still broken.
>> I rather meant to make such a 'policy' for future code by adding a
>> comment to the macro, and optionally a compile-time warning to encourage
>> developers to change current usage, though I am not sure that would not
>> be too harsh.
>> In that case it would also be good to use your version of the macro, and
>> to add a compiletime_assert to reject char types, as suggested by Rasmus.
>>
>> Finally it could look like:
>> /*
>>  * Use IS_ERR_VALUE only on unsigned types of at least two bytes size.
>>  * For signed types use '< 0' comparison.
>>  */
>> #define IS_ERR_VALUE(x) \
>> ({ \
>> 	compiletime_assert(sizeof(x) > 1, \
>> 		"IS_ERR_VALUE does not handle byte-size types"); \
>> 	compiletime_assert_warning((typeof(x))(-1) > 0, \
>> 		"IS_ERR_VALUE should be called on unsigned types only, use '< 0' instead"); \
>> 	(unlikely((unsigned long long)(x) >= (unsigned long long)(typeof(x))-MAX_ERRNO)); \
>> })
>>
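With such a definition misuse would be caught at build time; hypothetical
call sites (all helpers made up):

	u16 ring = get_ring_id();	/* ok: unsigned, two bytes */
	if (IS_ERR_VALUE(ring))
		return -EIO;

	char c = get_byte();
	IS_ERR_VALUE(c);		/* build error: byte-size type */

	int err = do_foo();
	IS_ERR_VALUE(err);		/* build warning: signed, use 'err < 0' */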
> I think the easiest way to express this would be to ensure that the argument
> is 'unsigned long', like:
>
> #define IS_ERR_VALUE(x) ((unsigned long*)NULL == (typeof (x)*)NULL && \
> 	unlikely((unsigned long long)(x) >= (unsigned long long)(typeof(x))-MAX_ERRNO))
This way you would limit it to the unsigned long type only, which seems too
strict to me.
I think the macro should accept all sufficiently wide unsigned types, otherwise
we could end up with a bunch of macros: IS_ERR_VALUE_U32, IS_ERR_VALUE_ULL, ...
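For example, if I read your version right, already a u32 would trip the
pointer comparison:

	u32 ret = foo();	/* foo() is made up */
	if (IS_ERR_VALUE(ret))	/* warning: comparison of distinct
				 * pointer types lacks a cast */
		return -EINVAL;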
Regards
Andrzej