Re: seq_file: Use larger buffer to reduce time traversing lists
From: Eric Dumazet
Date: Fri Jun 01 2012 - 08:10:39 EST
On Fri, 2012-06-01 at 11:39 +0100, Steven Whitehouse wrote:
> I've just been taking a look at the seq_read() code, since we've noticed
> that dumping files with large numbers of records can take some
> considerable time. This is due to seq_read() using a buffer which is, at
> most, a single page in size, and to the fact that it has to find its place
> again on every call to seq_read(). That makes it rather inefficient.
>
> As an example, I created a GFS2 filesystem with 100k inodes in it, and
> then ran ls -l to get a decent number of cached inodes. This results in
> there being approx 400k lines in the debugfs file containing GFS2's
> glocks. I then timed how long it takes to read this file:
>
> [root@chywoon mnt]# time dd if=/sys/kernel/debug/gfs2/unity\:myfs/glocks
> of=/dev/null bs=1M
> 0+5769 records in
> 0+5769 records out
> 23273958 bytes (23 MB) copied, 63.3681 s, 367 kB/s
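For context, the "find its place again" cost comes from the seq_file ->start()
callback: a list-backed iterator (e.g. one built on seq_list_start()) rewalks
the list from the head up to the saved position on every read() call, so
dumping n records costs O(n^2) list steps in total. A rough sketch of that
pattern, illustrative only and not the exact fs/seq_file.c or GFS2 code:

#include <linux/list.h>
#include <linux/seq_file.h>

/* Hypothetical list, standing in for whatever the filesystem really dumps. */
static LIST_HEAD(example_list);

static void *example_seq_start(struct seq_file *m, loff_t *pos)
{
        struct list_head *lh;
        loff_t n = *pos;

        list_for_each(lh, &example_list)        /* restart from the head...     */
                if (n-- == 0)                   /* ...and walk up to *pos again */
                        return lh;
        return NULL;
}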
What time do you get if you do
time dd if=/sys/kernel/debug/gfs2/unity\:myfs/glocks of=/dev/null bs=4k
This patch seems like the wrong approach to me.
seq_read(size = 1MB) should perform many copy_to_user() calls instead of a single one.
Instead of doing kmalloc(m->size <<= 1, GFP_KERNEL) each time we overflow the buffer,
we should flush its content to user space.
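Something along these lines, as a sketch only (untested, and the helper name
and call sites are invented for illustration): seq_read() would call it
whenever the buffer fills up, so the same PAGE_SIZE buffer is reused instead
of reallocated:

#include <linux/kernel.h>
#include <linux/seq_file.h>
#include <linux/uaccess.h>

/*
 * Push whatever is already formatted in m->buf out to user space and mark
 * the buffer empty again, so the next record can reuse it.  Returns the
 * number of bytes copied, or -EFAULT.
 */
static ssize_t seq_flush_to_user(struct seq_file *m, char __user **ubuf,
                                 size_t *left, size_t *copied)
{
        size_t n = min(m->count, *left);

        if (n && copy_to_user(*ubuf, m->buf + m->from, n))
                return -EFAULT;

        *ubuf += n;
        *left -= n;
        *copied += n;
        m->count = 0;
        m->from = 0;
        return n;
}

A real change would also have to retry the record that overflowed once the
buffer is empty again, and keep the file position bookkeeping consistent, but
that is the general idea.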