Stephan Mueller wrote:

Hi Herbert,

While testing the vmsplice/splice interface of algif_hash I was made aware of the problem that data blobs larger than 16 pages do not seem to be hashed properly.

For testing, a file is mmap()ed and handed to vmsplice / splice. If the file is smaller than 2**16 bytes, the interface returns the proper hash. However, when the file is larger, only the first 2**16 bytes seem to be hashed.

When adding printk's to hash_sendpage, I see that this function is invoked exactly 16 times, where the first 15 invocations have the MSG_MORE flag set and the last invocation does not have MSG_MORE.

leroy christophe wrote:

Hi Stephan,

I have already noticed the same issue and proposed a patch, but I never got any feedback and it has never been merged, although I pinged it a couple of times.

On 18/12/2014 13:15, Stephan Mueller wrote:

After testing, this patch does not work for me. The operation still stops after 16 pages.

On Thursday, 18 December 2014, 13:22:20, leroy christophe wrote:

Yes, it looks like the function I fixed is used exclusively by the sendfile() system call. So there is probably the same kind of fix to be done in another function.

On 20/12/2014 07:37, Stephan Mueller wrote:

I do not believe that is the case. IMHO the blocking issue is found in splice_from_pipe_feed, which walks the pipe->nrbufs. And vmsplice_to_pipe defines the maximum number of nrbufs as PIPE_DEF_BUFFERS -- which is 16. As subsequent functions allocate memory based on PIPE_DEF_BUFFERS, there is no trivial way to increase the number of pages to be processed.

Thus I see that the vmsplice/splice combo can at most operate on a chunk of 16 pages. Thus, you have to segment your input buffer into chunks of that size and invoke the vmsplice/splice syscalls for each segment.

On Tuesday, 23 December 2014, 18:16:01, leroy christophe wrote:

Yes, you are probably right. Then splice needs to be called with the SPLICE_F_MORE flag; I hope that works.