Re: (reiserfs) Re: More on 2.2.18pre2aa2

From: Rik van Riel (riel@conectiva.com.br)
Date: Wed Sep 13 2000 - 18:08:12 EST


On Thu, 14 Sep 2000, Ragnar Kjørstad wrote:
> On Wed, Sep 13, 2000 at 06:57:21PM -0300, Rik van Riel wrote:
> > > Another potentially stupid question:
> > > When the queue gets too long/old, new requests should be put in
> > > a new queue to avoid starvation for the ones in the current
> > > queue, right?
> >
> > Indeed. That would solve the problem...
>
> I think something like this: (I don't know the current code, so maybe
> this suggestion is really stupid.....)
>
> struct request_queue {
> 	long start_time;
> 	struct request *list;
> 	struct request_queue *next;
> };
>
> struct request_queue *current_queue;
>
> int too_old(struct request_queue *q)
> {
> 	/* can easily be implemented using time, nr of requests
> 	   or any other measure of the amount of work */
> 	if (current_time > q->start_time + MAXTIME)
> 		return 1;
> 	else
> 		return 0;
> }
>
> void insert_request(struct request *new)
> {
> 	struct request_queue *q = current_queue;
>
> 	/* behind the disk head: can't make this elevator pass */
> 	if (new->sectornr < current_sector)
> 		q = q->next;
> 	while (too_old(q))
> 		q = q->next;
> 	list_insert(q->list, new);	/* add to that queue's sorted list */
> }

This (IMHO) is wrong. When the drive has trouble keeping up
with requests in FCFS order, we should do some elevator sorting
to keep throughput high enough to deal with the requests we get.

Jeff's idea would take care of this, together with a *limited*
(say, 32 entries) queue size. Every time the "work" queue (the
one the disk is working on) is empty, we switch queues.

Processes get to put their requests on the other queue, either until
the disk takes those requests (the queues are switched) or until that
queue is full. That way, if we're badly overloaded, only the N
highest priority processes can get access to the disk.
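
To make this concrete, here is a rough sketch of what that two-queue
scheme could look like (all names, constants and struct layouts are
made up for illustration; this is not the actual ll_rw_blk.c code):

#define MAX_QUEUE_LEN 32		/* limited queue size, as above */

struct request {
	unsigned long sector;
	struct request *next;
};

struct request_queue {
	struct request *head;		/* elevator-sorted request list */
	int nr_requests;
};

static struct request_queue queues[2];
static int work;			/* queue the disk is working on */

/* Insert in elevator (sector) order on the waiting queue; returns
 * -1 when that queue is full, so the caller has to sleep and retry. */
static int queue_request(struct request *new)
{
	struct request_queue *q = &queues[!work];
	struct request **p;

	if (q->nr_requests >= MAX_QUEUE_LEN)
		return -1;

	for (p = &q->head; *p && (*p)->sector < new->sector; p = &(*p)->next)
		;
	new->next = *p;
	*p = new;
	q->nr_requests++;
	return 0;
}

/* Driver side: when the work queue drains, swap the queues, so no
 * request waits longer than one queue's worth of work. */
static struct request *next_request(void)
{
	struct request_queue *q = &queues[work];
	struct request *req;

	if (!q->head) {
		work = !work;
		q = &queues[work];
	}
	req = q->head;
	if (req) {
		q->head = req->next;
		q->nr_requests--;
	}
	return req;
}

A process whose queue_request() fails simply sleeps until the queues
are switched, which is exactly what limits disk access to the N
highest priority processes under overload.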

The current HUGE queue size is probably another reason for
the very bad latencies we sometimes see...

> > (maybe with the twist that we /do/ allow merges on the queue
> > that's being processed by the disk, but no insertions of new
> > requests)
>
> I don't think you should allow merges. If one process is doing a big
> IO operation on a big file, it would still get _all_ the capacity, right?

Only if you were to allow unlimited merges ;)

I guess a 'decent' maximum size for one request would
be somewhere around 256kB, since this is the amount of
data a drive can read in the time of one disk seek...
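
As an illustration, a capped back-merge check could look something
like this (again just a sketch with made-up names; the request layout
here is independent of the sketch above):

#define SECTOR_SIZE		512
#define MAX_REQUEST_SIZE	(256 * 1024)	/* ~ one seek worth of data */

struct request {
	unsigned long sector;		/* first sector of the request */
	unsigned long nr_sectors;	/* length in sectors */
};

/* Merge 'new' onto the back of 'req' only when the two are contiguous
 * and the result stays under the size cap; without the cap, one big
 * streaming I/O could keep merging forever and monopolise the disk. */
static int back_merge(struct request *req, struct request *new)
{
	unsigned long total = req->nr_sectors + new->nr_sectors;

	if (req->sector + req->nr_sectors != new->sector)
		return 0;		/* not contiguous */
	if (total * SECTOR_SIZE > MAX_REQUEST_SIZE)
		return 0;		/* would blow the 256kB cap */

	req->nr_sectors = total;
	return 1;			/* merged */
}

With a cap like that, merges on the queue the disk is working on stay
cheap: a single process can grow its request up to 256kB, but no
further, so it can't starve everybody else.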

regards,

Rik

--
"What you're running that piece of shit Gnome?!?!"
       -- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/ http://www.surriel.com/
