On Fri, Oct 14, 2016 at 12:12:43AM +0200, none wrote:

With large strings, you can make buffer overflows by turning ints into negative values (this leads to CWE-195). However, they just crash the process and thus can't be used for remote code execution. So as long as the truncation can't lead to positive values there's nothing to fear (which means using int instead of size_t is acceptable if the machine isn't big-endian).
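A minimal user-space sketch of the truncation that paragraph describes, assuming a 32-bit int; the 3 GiB length is an invented illustrative value, not taken from the thread:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
        size_t len = 3UL * 1024 * 1024 * 1024;  /* ~3 GiB, fits in size_t */
        int n = (int)len;                       /* narrowing conversion: implementation-defined,
                                                   typically wraps to a negative value */

        printf("as size_t: %zu\n", len);        /* 3221225472 */
        printf("as int:    %d\n", n);           /* typically -1073741824 */

        if (n < 0)
                printf("a bounds check like 'n > BUFSZ' is now trivially false\n");
        return 0;
}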
Hello,
I wanted to know the rules in the coding guidelines concerning the use of
size_t.
It seems the signed int type is used most of the time for representing
string sizes, including in some parts written by Linus in /lib.
There can be buffer overflow attacks if ssize_t is larger than sizeof(int)
(though I agree this isn't the only way, but at least it's less error
prone).
Huh? size_t is the type of the sizeof result; ssize_t is its signed counterpart.
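A small user-space illustration of that distinction; read() is used here only as a familiar example of an interface that returns ssize_t:

#include <stdio.h>
#include <stddef.h>     /* size_t */
#include <sys/types.h>  /* ssize_t */
#include <unistd.h>     /* read(), STDIN_FILENO */

int main(void)
{
        char buf[64];
        size_t want = sizeof(buf);                      /* sizeof yields a size_t */
        ssize_t got = read(STDIN_FILENO, buf, want);    /* byte count, or -1 on error */

        printf("asked for %zu bytes, read() returned %zd\n", want, got);
        return 0;
}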
No, this is guaranteed, at least for amd64, because of -mcmodel=kernel.
So is it guaranteed for all current and future CPU architectures the Linux
kernel supports that sizeof(ssize_t) will always be equal to sizeof(int)?
Of course it isn't. Not true on any 64bit architecture we support...
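For concreteness, here is what that answer means on a 64-bit LP64 target such as x86-64 Linux; the commented values are what this user-space sketch prints there:

#include <stdio.h>
#include <sys/types.h>  /* ssize_t */

int main(void)
{
        printf("sizeof(int)     = %zu\n", sizeof(int));     /* 4 */
        printf("sizeof(long)    = %zu\n", sizeof(long));    /* 8 */
        printf("sizeof(size_t)  = %zu\n", sizeof(size_t));  /* 8 */
        printf("sizeof(ssize_t) = %zu\n", sizeof(ssize_t)); /* 8 */
        return 0;
}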
What attacks are, in your opinion, enabled by that fact? I'm sure that
libc (and C standard) folks would be very interested, considering that
e.g. strlen() is declared as a function that takes a pointer to const char and
returns size_t...

Plenty of attacks, which lead to several CWE types (192 or 190). Basically
you feed the software a string which can fit in size_t but not in an
unsigned int.
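A minimal sketch of that scenario, assuming a 64-bit user-space process with enough memory to hold a string longer than UINT_MAX; the 2^32 + 16 length is an invented value:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        size_t big = (1ULL << 32) + 16;         /* longer than UINT_MAX */
        char *s = malloc(big + 1);
        if (!s)
                return 1;                       /* needs a bit over 4 GiB of RAM */

        memset(s, 'A', big);
        s[big] = '\0';

        size_t full = strlen(s);                /* size_t keeps the real length */
        unsigned int clipped = strlen(s);       /* silently truncated: 4294967312 -> 16 */

        printf("size_t: %zu  unsigned int: %u\n", full, clipped);
        /* Any bounds check done with 'clipped' now believes the string is
           16 bytes long, which is the CWE-190/192 pattern described above. */
        free(s);
        return 0;
}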