[f]truncate64 and user space sparse file handling

From: Elmar Stellnberger
Date: Mon Nov 30 2015 - 07:30:38 EST


a.) Having written a program to patch binary files, I wonder how it is possible to truncate a file to a length greater than 2 GB on a 32-bit system. This is, for instance, an issue when patching big files such as qemu disk images. The limitations of off_t on 32-bit systems are not as bad for seeking, since you can seek forward in a loop multiple times (and that is how I have implemented binpatch). Why not have an [f]truncate64/off64_t if we are currently implementing timespec64?

b.) A very similar program has to check each page/sector of a qemu image file for containing all zeroes when compressing such files. This results in an unnecessarily high CPU load for huge sparse files that are only sparsely populated with data. Why not have a user-space-callable 'readzeroes' function which reads as long as the input contains only zero bytes and then returns the number of bytes read, like a normal read would? The only difference would be that it could skip sparse sections without first having to zero the memory and then let the user program compare the read block for being all zeroes.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/