Well, if you have only 4 sectors to process, you are right. But if you
continue the one series from 1 to 1000 and the other from 100000 to
101000, you will see the point I was trying to make. If you can read
(or read ahead) larger chains from both series, you get a substantial
reduction in the number of seeks compared with alternating sector by
sector.
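
A tiny back-of-the-envelope simulation makes the effect visible (all
numbers here are hypothetical, and head travel in sectors stands in for
actual seek time):

#include <stdio.h>
#include <stdlib.h>

#define N      1000     /* requests per series                    */
#define CHAIN  64       /* assumed read-ahead chain length        */

/* Serve the two series (bases 1 and 100000) in chains of the given
 * length and sum up the head travel between consecutive requests. */
static long head_travel(int chain)
{
    long pos = 1, moves = 0;
    for (int i = 0; i < N; i += chain) {
        moves += labs(pos - (1 + i));       pos = 1 + i + chain;
        moves += labs(pos - (100000 + i));  pos = 100000 + i + chain;
    }
    return moves;
}

int main(void)
{
    printf("alternating sectorwise: %ld sectors of travel\n",
           head_travel(1));
    printf("chains of %d sectors:   %ld sectors of travel\n",
           CHAIN, head_travel(CHAIN));
    return 0;
}

With chains of 64 sectors, each long seek between the two regions is
amortized over 64 requests instead of one, so the total travel drops by
roughly that factor.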
|> [ Chunky idea deleted ]
|> > Note that doing priority calculations on chunks may be much less
|> > cost-intensive, since the number of chunks should be much smaller than
|> > the number of single requests in practice. Timestamps associated
|> > with chunks tell the age of the eldest request in the whole chunk.
|> >
|> Umm, but with chunks you have to go through two queues -- the list of
|> chunks and the list of sectors within the chunk.
Yes, there is some overhead to pay. Choosing the right chunk can be
done without looking at the individual sectors if you have "chunk headers"
that contain the information you are looking for. Searching at the chunk
level is then even cheaper, since there are normally fewer chunks than
sectors. The list of sectors within a chunk has to be accessed only when
the chunk is processed by the disk controller or when new sectors are
added. In the latter case the overhead is small, because you now
have sorting at two levels: the chunks are pre-sorted by start address,
and the list of sectors within each chunk is also sorted. Anyway, I think
this small CPU overhead is much less than the gain from the improved
disk movement. A rough sketch of what I mean is below.
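
All names and fields in this sketch are made up; the point is only that
the header carries everything the scheduler needs (start address, age of
the eldest request), so chunk selection never touches the per-sector list:

#include <time.h>

struct sector_req {
    unsigned long      sector;   /* absolute sector number            */
    struct sector_req *next;     /* kept sorted by sector number      */
};

struct chunk {
    unsigned long      start;    /* lowest sector in the chunk        */
    time_t             eldest;   /* timestamp of oldest request here  */
    struct sector_req *reqs;     /* sorted per-sector list            */
    struct chunk      *next;     /* chunk list, sorted by start       */
};

/* Splice a new request into an already-located chunk's sorted sector
 * list; the chunk itself was found at chunk level via its header. */
static void add_request(struct chunk *c, struct sector_req *r)
{
    struct sector_req **pp = &c->reqs;
    while (*pp && (*pp)->sector < r->sector)
        pp = &(*pp)->next;
    r->next = *pp;
    *pp = r;
    if (c->start > r->sector)
        c->start = r->sector;    /* header update; re-sorting the
                                    chunk list is omitted here       */
}

Because both levels stay sorted, insertion is a short walk through one
chunk's list rather than through the whole request queue.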
-- Thomas