Re: [RFC] IO scheduler based io controller (V5)

From: Vivek Goyal
Date: Mon Jun 22 2009 - 12:04:21 EST


On Mon, Jun 22, 2009 at 11:40:42AM -0400, Jeff Moyer wrote:
> Vivek Goyal <vgoyal@xxxxxxxxxx> writes:
>
> > On Sun, Jun 21, 2009 at 08:51:16PM +0530, Balbir Singh wrote:
> >> * Vivek Goyal <vgoyal@xxxxxxxxxx> [2009-06-19 16:37:18]:
> >>
> >> >
> >> > Hi All,
> >> >
> >> > Here is the V5 of the IO controller patches generated on top of 2.6.30.
> >> [snip]
> >>
> >> > Testing
> >> > =======
> >> >
> >>
> >> [snip]
> >>
> >> I've not been reading through the discussions in complete detail, but
> >> I see no reference to async reads or AIO. In the case of AIO, the
> >> submission presumes the context of the user-space process. Could you
> >> elaborate on any testing you've done with these cases?
> >>
> >
> > Hi Balbir,
> >
> > So far I have not done any testing with AIO, so I ran some just now.
> > Here are the results.
> >
> > Test1 (AIO reads)
> > ================
> > Set up two fio AIO read jobs in two cgroups with weights 1000 and 500,
> > respectively. I am using the CFQ scheduler. The following are some lines
> > from my test script.
> >
> > ===================================================================
> > fio_args="--ioengine=libaio --rw=read --size=512M"
>
> AIO doesn't make sense without O_DIRECT.
>

OK, here are the read results with --direct=1. In the previous posting,
the writes were already direct.
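
For completeness, the whole setup looks roughly like this (a sketch, not
the exact script; the io.weight file name and the cgroup mount options are
approximations of the controller interface):

===================================================================
# Sketch of the test setup (bash). The io-controller cgroup file
# names (io.weight, tasks) are approximations of the V5 interface;
# the test files live on /dev/sdb, mounted at /mnt/sdb.
mkdir -p /cgroup
mount -t cgroup -o io none /cgroup
mkdir -p /cgroup/test1 /cgroup/test2
mkdir -p /mnt/sdb/test1 /mnt/sdb/test2
echo 1000 > /cgroup/test1/io.weight
echo 500  > /cgroup/test2/io.weight

fio_args="--ioengine=libaio --rw=read --size=512M --direct=1"
echo 3 > /proc/sys/vm/drop_caches

# Run each fio job from within its own cgroup: $BASHPID is the
# subshell's PID, and exec replaces the subshell with fio, so fio's
# worker processes inherit the cgroup.
( echo $BASHPID > /cgroup/test1/tasks
  exec fio $fio_args --name=test1 --directory=/mnt/sdb/test1 ) &
( echo $BASHPID > /cgroup/test2/tasks
  exec fio $fio_args --name=test2 --directory=/mnt/sdb/test2 ) &
wait
===================================================================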

test1 statistics: time=8 16 20796 sectors=8 16 1049648
test2 statistics: time=8 16 10551 sectors=8 16 581160
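
(The statistics format is "<major> <minor> <value>"; 8 16 is /dev/sdb.
Note that the disk-time split, 20796 vs. 10551, still tracks the
configured 2:1 weight ratio.)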


I am not sure why the reads are so slow with --direct=1, though. In the
previous test (no direct IO), I had cleared the caches using
"echo 3 > /proc/sys/vm/drop_caches", so the reads could not have come
from the page cache.
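
One guess: with buffered reads the page-cache readahead keeps the disk
busy, whereas direct libaio reads at fio's default iodepth of 1 behave
like synchronous reads. If that is the culprit, raising the queue depth
should recover the throughput; a sketch (untested here):

===================================================================
fio_args="--ioengine=libaio --rw=read --size=512M --direct=1 --iodepth=16"
===================================================================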

Thanks
Vivek