The manual page for getdents64() gives the following prototype:
int getdents64(unsigned int fd, struct linux_dirent64 *dirp,
               unsigned int count);
Note the type of 'count': 'unsigned int'
(usually a 32-bit unsigned integer).
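For context, a minimal sketch of a call using those documented types,
invoked through syscall(2) (the 4096-byte buffer size is an arbitrary
choice for illustration):

#include <sys/syscall.h>
#include <unistd.h>

static char buf[4096];

static long
read_dir_entries (int fd)
{
  /* 'count' is an unsigned int, matching the documented prototype;
     the return value is the number of bytes read, 0 at end of
     directory, or -1 on error.  */
  return syscall (SYS_getdents64, fd, buf, (unsigned int) sizeof buf);
}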
And the Linux kernel seems to use those types (fs/readdir.c:351):
SYSCALL_DEFINE3(getdents64, unsigned int, fd,
                struct linux_dirent64 __user *, dirent,
                unsigned int, count)
{
        ...
}
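As far as I understand, SYSCALL_DEFINE3 expands (simplified; the real
macros in include/linux/syscalls.h add metadata and other plumbing) to
something like the following, where the raw arguments arrive as
full-width longs and are cast back to the declared types:

long
__se_sys_getdents64 (long fd, long dirent, long count)
{
        /* Each register-width argument is narrowed to the type
           declared in the SYSCALL_DEFINE3 invocation.  */
        return __do_sys_getdents64 ((unsigned int) fd,
                                    (struct linux_dirent64 __user *) dirent,
                                    (unsigned int) count);
}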
But glibc uses 'size_t' (usually a 64-bit unsigned integer)
for the 'count' parameter (sysdeps/unix/sysv/linux/getdents64.c:25):
/* The kernel struct linux_dirent64 matches the 'struct dirent64' type.  */
ssize_t
__getdents64 (int fd, void *buf, size_t nbytes)
{
  /* The system call takes an unsigned int argument, and some length
     checks in the kernel use an int type.  */
  if (nbytes > INT_MAX)
    nbytes = INT_MAX;
  return INLINE_SYSCALL_CALL (getdents64, fd, buf, nbytes);
}
libc_hidden_def (__getdents64)
weak_alias (__getdents64, getdents64)
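To illustrate what I mean, consider the same call made through the
public variadic syscall(2) interface, which is declared as
long syscall(long number, ...) (a hypothetical illustration, not
necessarily what INLINE_SYSCALL_CALL expands to):

#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

static long
without_cast (int fd, void *buf, size_t nbytes)
{
  /* 'nbytes' travels through the variadic part as a size_t (usually
     64-bit), while the kernel declares 'count' as a 32-bit
     unsigned int.  */
  return syscall (SYS_getdents64, fd, buf, nbytes);
}

static long
with_cast (int fd, void *buf, size_t nbytes)
{
  /* With an explicit narrowing cast to the kernel's declared type.  */
  return syscall (SYS_getdents64, fd, buf, (unsigned int) nbytes);
}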
Isn't it undefined behavior to pass an argument of a different (larger) type than the one a variadic function expects?
Is that behavior defined in this implementation?
Wouldn't a cast to 'unsigned int' be needed?
Thanks,
Alex