Re: Asynch I/O overloaded 2.2.15/2.3.99

From: Dimitris Michailidis (dimitris@darkside.engr.sgi.com)
Date: Tue Apr 11 2000 - 18:25:45 EST


From: Dimitris Michailidis <dimitris@engr.sgi.com>
Date: 11 Apr 2000 16:25:44 -0700
In-Reply-To: "Jeff V. Merkey"'s message of "11 Apr 2000 15:27:15 -0700"
Message-ID: <6yrg0ssrzif.fsf@darkside.engr.sgi.com>
Lines: 30
X-Mailer: Gnus v5.5/XEmacs 20.4 - "Emerald"

"Jeff V. Merkey" <jmerkey@timpanogas.com> writes:

> I tried runs of 500 buffers, 1000 buffers, 2000 buffers, 3000 buffers,
> and 4000 buffers.
>
> And the winners are!
>
> 1. ll_rw_blk (and add_request/make_request) (oink, oink..... oink,
> oink ... rooting around down in the hardware -- I think it's looking for
> truffles)

I suspect that add_request/make_request are not the real culprits here. My
experience with heavy disk I/O tests is that the bottleneck is usually
__get_request_wait(), but that executes with interrupts off, so the profiler
charges its callers instead. Here's an excerpt from a call graph profile
(kernel is 2.3.99-pre3):

                0.87    1.19   20662/20662     generic_make_request [22]
[51]     0.5    0.87    1.19   20662           __get_request_wait [51]
                1.15    0.04  200799/379645    schedule [47]

As you can see, processes sleep/wake_up a lot in __get_request_wait() and
generate more than half of all the scheduling activity. This is despite the
wake_one behaviour, and in fact having all the waiters wake up simultaneously
doesn't make things all that much worse (it increases calls to schedule() by
about 20% in my case). The real bottleneck under disk I/O load is the single
request array, IMO.
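
For anyone who hasn't looked at ll_rw_blk.c lately, the pattern in question
looks roughly like the sketch below. This is a simplified illustration, not
the literal 2.3.99-pre source; the field and helper names (wait_for_request,
get_request, io_request_lock) are approximations of what the block layer
uses. Each process that finds no free request parks itself on the queue's
wait queue and bounces through schedule() until a completing I/O frees a
slot, which is where the scheduling traffic in the profile above comes from:

#include <linux/blkdev.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

/*
 * Simplified sketch of the __get_request_wait() pattern under
 * discussion; names approximate, not the actual 2.3.99-pre code.
 */
static struct request *__get_request_wait(request_queue_t *q, int rw)
{
	struct request *rq;
	DECLARE_WAITQUEUE(wait, current);

	add_wait_queue(&q->wait_for_request, &wait);
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		spin_lock_irq(&io_request_lock);
		rq = get_request(q, rw);	/* try to grab a slot from the request array */
		spin_unlock_irq(&io_request_lock);
		if (rq)
			break;
		schedule();			/* the schedule() calls showing up in the profile */
	}
	remove_wait_queue(&q->wait_for_request, &wait);
	current->state = TASK_RUNNING;
	return rq;
}

With only the single request array to hand out from, every process beyond
the number of free slots ends up in this loop, so the scheduler traffic
grows with the number of processes doing I/O.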

-- 
Dimitris Michailidis                    dimitris@engr.sgi.com
