Re: [PATCH v3] err.h: allow IS_ERR_VALUE to handle properly more types
From: Andrzej Hajda
Date: Tue Feb 09 2016 - 03:43:35 EST
+cc Rasmus Villemoes, I forgot to add him earlier.
On 02/08/2016 01:01 PM, Arnd Bergmann wrote:
> On Monday 08 February 2016 09:45:55 Andrzej Hajda wrote:
>> On 02/05/2016 11:52 AM, Arnd Bergmann wrote:
>>> On Thursday 04 February 2016 10:59:31 Andrew Morton wrote:
>> My version produces the shortest code; Arnd's is the same as the old one.
>> On the other hand, Rasmus's proposal seems the most straightforward to me.
>> Anyway, I am not sure that code length is the most important thing here.
>>
>> By the way, the .data segment size grows almost 4x between gcc 4.4
>> and 4.8 :)
>> Also the numbers for arm64 look interesting.
>>
>> Just for the record, here are all the proposed implementations:
>>
>> #define IS_ERR_VALUE_old(x) unlikely((x) >= (unsigned long)-MAX_ERRNO)
>>
>> #define IS_ERR_VALUE_andrzej(x) ((typeof(x))(-1) <= 0 \
>> 	? unlikely((x) <= -1) \
>> 	: unlikely((x) >= (typeof(x))-MAX_ERRNO))
>>
>> #define IS_ERR_VALUE_arnd(x) (unlikely((unsigned long long)(x) >= \
>> 	(unsigned long long)(typeof(x))-MAX_ERRNO))
>>
>> #define IS_ERR_VALUE_rasmus(x) ({ \
>> 	typeof(x) _x = (x); \
>> 	unlikely(_x >= (typeof(x))-MAX_ERRNO && _x <= (typeof(x))-1); \
>> })
>>
>>> Andrzej's version is a little shorter on ARM because, in the case of
>>> signed numbers, it only checks for negative values rather than checking
>>> for values in the [-MAX_ERRNO..-1] range. I think the original behavior
>>> is more logical in this case, and my version restores it.
>> Looking at the usage of the macro in the kernel, I have not found any
>> code which could benefit from the original behavior, except some buggy
>> code in staging which already has a pending fix [1].
>> But maybe it would be better to have IS_ERR_VALUE always check whether
>> err is in the range [-MAX_ERRNO..-1], and to just use a simple 'err < 0'
>> check in the typical case of signed types.
> If we do that, should we also make it illegal to use an invalid type
> for IS_ERR()? At least that could also catch any uses of 'char' and
> 'unsigned char' that are still broken.
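Right, byte-sized types are broken either way. A minimal userspace sketch
(my own illustration, not from the patch; it assumes MAX_ERRNO == 4095 as
in include/linux/err.h) showing how they misbehave with both the old macro
and your version:

#include <stdio.h>
#define MAX_ERRNO 4095

int main(void)
{
	unsigned char err = 200;	/* an ordinary value, not an error */

	/* IS_ERR_VALUE_old: err converts to 200 while the limit is
	 * (unsigned long)-4095, so an error stored in a char is never
	 * detected */
	printf("old:  %d\n", (unsigned long)err >= (unsigned long)-MAX_ERRNO);

	/* IS_ERR_VALUE_arnd: (unsigned char)-4095 truncates to 1, so
	 * almost every value looks like an error */
	printf("arnd: %d\n", (unsigned long long)err >=
			     (unsigned long long)(unsigned char)-MAX_ERRNO);

	return 0;
}

This prints "old:  0" and "arnd: 1", i.e. the old macro misses real errors
in byte-sized variables, while the new one flags nearly everything.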
I rather meant to establish such a 'policy' for future code by adding a
comment to the macro, and optionally a compile-time warning to encourage
developers to change current usage; however, I am not sure whether that
would be too harsh.
In that case it would also be good to use your version of the macro,
and to add a compiletime_assert to prevent char types, as suggested
by Rasmus.
Finally, it could look like this:
/*
 * Use IS_ERR_VALUE only on unsigned types at least two bytes in size.
 * For signed types use a '< 0' comparison.
 */
#define IS_ERR_VALUE(x) \
({ \
	compiletime_assert(sizeof(x) > 1, \
		"IS_ERR_VALUE does not handle byte-size types"); \
	compiletime_assert_warning((typeof(x))(-1) > 0, \
		"IS_ERR_VALUE should be called on unsigned types only, use '< 0' instead"); \
	(unlikely((unsigned long long)(x) >= (unsigned long long)(typeof(x))-MAX_ERRNO)); \
})
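
At hypothetical call sites (a sketch; the functions are made up for
illustration) this would behave like:

	unsigned long rc = foo();
	if (IS_ERR_VALUE(rc))		/* unsigned long: compiles cleanly */
		return (int)rc;

	int err = bar();
	if (err < 0)			/* signed: IS_ERR_VALUE(err) would warn,
					 * so use '< 0' directly */
		return err;

	u8 byte = baz();
	if (IS_ERR_VALUE(byte))		/* byte-sized: compiletime_assert
					 * breaks the build */
		return -EINVAL;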
Minor issue: there are no compile-time warning macros in the kernel.
The helper provided by gcc (the 'warning' function attribute) is not so
nice: optimization can remove the warning itself, while preventing
optimization influences the final code.
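
For comparison, the attribute-based approach would look roughly like this
(my sketch, not existing kernel code):

/* gcc diagnoses a call to this function only if the call survives
 * dead code elimination, so the warning can be optimized away, and
 * keeping the call alive changes the generated code */
extern void __signed_type_warning(void)
	__attribute__((warning("IS_ERR_VALUE called on a signed type")));

#define gcc_warning_on(cond) \
	do { if (!(cond)) __signed_type_warning(); } while (0)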
My proposed workaround is below:

#define compiletime_assert_warning(cond, msg) \
({ \
	/* 0 is a null pointer constant, 1 is not: when cond fails, \
	 * initializing a pointer with 1 triggers a warning; msg is \
	 * currently unused and only documents the intent */ \
	__maybe_unused void const *p = (cond) ? 0 : 1; \
})
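
The trick can be tested standalone in userspace (a sketch, with
__maybe_unused expanded to the underlying gcc attribute):

#define compiletime_assert_warning(cond, msg) \
({ \
	__attribute__((unused)) void const *p = (cond) ? 0 : 1; \
})

int main(void)
{
	compiletime_assert_warning(sizeof(long) > 1, "silent");  /* true: p = 0 */
	compiletime_assert_warning(sizeof(char) > 1, "warns");   /* false: p = 1 */
	return 0;
}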
In the kernel build, on older compilers it issues just a warning:

drivers/nvmem/core.c:1059: warning: initialization makes pointer from integer without a cast
Since gcc 4.8 it is more verbose:

drivers/nvmem/core.c: In function ‘nvmem_device_write’:
include/linux/err.h:33:33: warning: initialization makes pointer from integer without a cast [enabled by default]
  __maybe_unused void const *p = (cond) ? 0 : 1; \
                                 ^
include/linux/err.h:41:2: note: in expansion of macro ‘compiletime_assert_warning’
  compiletime_assert_warning((typeof(x))(-1) > 0, "IS_ERR_VALUE should be called on unsigned types only, use '< 0' instead"); \
  ^
drivers/nvmem/core.c:1059:6: note: in expansion of macro ‘IS_ERR_VALUE’
  if (IS_ERR_VALUE(rc))
Regards
Andrzej