Re: [RESEND][RFC] BFQ I/O Scheduler
From: Jens Axboe
Date: Thu Apr 17 2008 - 04:57:20 EST
On Thu, Apr 17 2008, Pavel Machek wrote:
> > On Thu, Apr 17 2008, Paolo Valente wrote:
> > > Pavel Machek ha scritto:
> > > >
> > > >>In the first type of tests, to achieve a higher throughput than CFQ
> > > >>(with the default 100 ms time slice), the maximum budget for BFQ
> > > >>had to be set to at least 4k sectors. Using the same value for the
> > > >>
> > > >
> > > >Hmm, 4k sectors is ~40 seconds worst case, no? That's quite long...
> > > >
> > > Actually, in the worst case among our tests, the aggregate throughput
> > > with 4k sectors was ~ 20 MB/s, hence the time for 4k sectors ~ 4k * 512
> > > / 20M = 100 ms.
> >
> > That's not worst case, it is pretty close to BEST case. Worst case is 4k
> > sectors, with each being a 512b IO and causing a full stroke seek.
> > For that type of workload, even a modern SATA hard drive will be doing
> > 500kb/sec or less. That's roughly a thousand sectors per second, so ~4
> > seconds worst case for 4k sectors.
>
> One seek is still 10 msec on a modern drive, right? So 4k seeks =
> 40 seconds, no? 4 seconds would correspond to 1 msec per seek, which
> seems low.
I actually meant 4k IOs, not 512b, as that isn't really realistic. With
512b IOs and full-device seeks, you are looking at probably 30kb/sec on
a normal 7200rpm drive, and that would be around a minute of worst-case
time. The 4kb number of 500kb/sec may even be a bit too high; a quick
test here shows a little less than 300kb/sec on this drive. So more than
4 seconds still, around 7-8s or so.
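
Rough arithmetic behind those numbers, assuming ~17 ms per 512b IO
(full-stroke seek plus rotational latency on a 7200rpm drive; the exact
per-seek figure is an estimate on my part):

	512b IOs, one near-full-stroke seek each:
		512 B / ~17 ms       ~= 30 KB/s
		4096 IOs * ~17 ms    ~= 70 s, i.e. around a minute for 4k sectors

	4 KB IOs at the measured ~300 KB/s:
		4096 sectors * 512 B = 2 MB
		2 MB / 300 KB/s      ~= 7 s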
> Writes with O_SYNC could force a full seek on each request, right?
Writes generally work somewhat better due to caching, but doing O_DIRECT
512-byte reads all over the drive would exhibit worst-case behaviour
easily.
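
For reference, a minimal sketch of such a worst-case reader (untested
and purely illustrative; the program name is made up and error handling
is mostly omitted):

#define _GNU_SOURCE		/* for O_DIRECT */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define BLK 512

/* usage: ./seekstorm /dev/sdX nr_ios  (needs access to the raw device) */
int main(int argc, char **argv)
{
	int fd;
	off_t dev_bytes;
	long i, nr;
	void *buf;

	if (argc < 3)
		return 1;
	fd = open(argv[1], O_RDONLY | O_DIRECT);
	dev_bytes = lseek(fd, 0, SEEK_END);
	nr = atol(argv[2]);
	if (fd < 0 || dev_bytes <= 0)
		return 1;
	/* O_DIRECT requires a buffer aligned to the logical block size */
	if (posix_memalign(&buf, BLK, BLK))
		return 1;
	srandom(time(NULL));
	for (i = 0; i < nr; i++) {
		/* uniformly random sector; sketch only, ignores modulo
		 * bias and very large (>RAND_MAX sectors) devices */
		off_t sector = random() % (dev_bytes / BLK);
		if (pread(fd, buf, BLK, sector * BLK) != BLK)
			break;
	}
	free(buf);
	close(fd);
	return 0;
}

Each O_DIRECT read bypasses the page cache and lands on a uniformly
random sector, so consecutive requests average a sizable fraction of a
full stroke apart and leave the scheduler nothing to merge.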
--
Jens Axboe