kiobuf I/O.
From: Naresh Kumar Inna
Date: Mon Mar 22 2004 - 06:37:37 EST
Hi,
This is about a problem I am facing on an IA64 box running Linux 2.4.18. I have
a program that opens a device file in raw mode and reads 8K bytes of data from
this raw device in consecutive attempts until it reaches the end of the device
or encounters EIO. The device is an IDE ATAPI CDROM drive with media in it. If
I insert a CDROM with bad sectors, my program takes a very long time to
complete. I figured out that when 'read()' encounters bad sectors within the 8K
of data it has requested, it blocks for a very long period, at the end of which
the read call returns either EIO or less than 8K of data.
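To give an idea of the access pattern, the read loop is essentially the
following (a simplified sketch, not my actual program; /dev/raw/raw1 is just an
example binding set up with the raw(8) utility, and the 512-byte buffer
alignment is what the raw interface expects):

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

#define CHUNK (8 * 1024)

int main(void)
{
        void *buf;
        ssize_t n;
        int fd;

        /* The raw interface wants the buffer aligned to the sector size. */
        if (posix_memalign(&buf, 512, CHUNK) != 0) {
                fprintf(stderr, "posix_memalign failed\n");
                return 1;
        }

        fd = open("/dev/raw/raw1", O_RDONLY);   /* example raw binding */
        if (fd < 0) {
                perror("open");
                return 1;
        }

        for (;;) {
                n = read(fd, buf, CHUNK);       /* blocks here on bad media */
                if (n == 0)                     /* end of device */
                        break;
                if (n < 0) {
                        if (errno == EIO)
                                fprintf(stderr, "EIO from raw read\n");
                        else
                                perror("read");
                        break;
                }
                /* ... consume n bytes ... */
                if (n < CHUNK)                  /* short read near a bad area */
                        break;
        }

        close(fd);
        free(buf);
        return 0;
}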
My 'dmesg' is filled with these messages:
hda: cdrom_decode_status: status=0x51 { DriveReady SeekComplete Error }
hda: cdrom_decode_status: error=0x34
hda: ATAPI reset complete
end_request: I/O error, dev 03:00 (hda), sector 2688
After going through some of the raw driver code paths, I could see why this was
happening. The cause of the problem is, of course, the bad media in the drive.
The IDE driver takes a very long time to recover from errors: it retries up to
8 times, performing a couple of resets along the way. My 8K read is broken up
by the raw interface into 16 buffers of 512 bytes each (I guess this is the
sector size used for this drive), and the request is handed over to
'brw_kiovec()'. One 'kiobuf' carrying 16 'buffer head' pointers is what I could
see being handed to 'brw_kiovec()'. This function dispatches the 16 buffers to
the low-level driver for I/O and waits in 'kiobuf_wait_for_io()' until they all
complete.
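For reference, the wait is essentially the following (paraphrased from memory
of fs/iobuf.c, so the details may differ slightly): the caller sleeps until
'kiobuf->io_count' drops to zero, regardless of what happened to the individual
buffer heads.

void kiobuf_wait_for_io(struct kiobuf *kiobuf)
{
        struct task_struct *tsk = current;
        DECLARE_WAITQUEUE(wait, tsk);

        if (atomic_read(&kiobuf->io_count) == 0)
                return;

        add_wait_queue(&kiobuf->wait_queue, &wait);
        for (;;) {
                set_task_state(tsk, TASK_UNINTERRUPTIBLE);
                if (atomic_read(&kiobuf->io_count) == 0)
                        break;
                /* Nothing wakes us up until the last buffer head
                 * completes, even if bh[0] already failed with EIO. */
                run_task_queue(&tq_disk);
                schedule();
        }
        set_task_state(tsk, TASK_RUNNING);
        remove_wait_queue(&kiobuf->wait_queue, &wait);
}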
On my system, all 16 buffer heads come back with errors (EIO) because of the
bad sectors. However, because we wait synchronously in 'kiobuf_wait_for_io()'
for all 16 buffers to complete, we are delayed by 16 times the time it takes
the IDE driver to fail a single sector read.
I could see that the 16 buffers are serviced by the low-level driver in
ascending order of sector number (though I guess this may not always be true).
So does it make sense for a read/write to stay blocked for the duration of I/O
on 16 sectors when the first sector itself is bad? Is there a possibility of
making 'end_kio_request()' wake up a request blocked on 'kiobuf->wait_queue' by
zeroing out 'kiobuf->io_count' if the erroring buffer head is 'kiobuf->bh[0]'?
If that is too specific, 'end_kio_request()' could wake up the blocked request
once it has encountered an I/O error on the lowest-numbered sector. I know that
this change would involve protecting the pages mapped by the kiobuf until all
I/O on them is over: some I/O may still be in flight on these pages, so we
could end up calling 'unmap_kiobuf()' and 'free_kiovec()' too early, with bad
results. Instead, is it possible to notify the user of the EIO (or the number
of successfully read bytes) right away and do the cleanup out of band?
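To make the suggestion concrete, I mean something along these lines in
'end_kio_request()'. This is an untested, illustrative sketch only: the stock
function does not receive the completing buffer head, so the caller would have
to pass it in, and the early wake-up deliberately shows the unsafe version to
highlight the cleanup problem described above.

/* Illustrative only, not a tested patch.  'bh' (the completing buffer
 * head) is assumed to be passed in by the caller; the stock 2.4
 * end_kio_request() does not take it. */
void end_kio_request(struct kiobuf *kiobuf, struct buffer_head *bh,
                     int uptodate)
{
        if (!uptodate && !kiobuf->errno)
                kiobuf->errno = -EIO;

        /* Proposed early wake-up: if the first buffer head of the
         * kiobuf failed, stop waiting for the remaining ones.  This is
         * where the cleanup hazard comes in -- the other buffer heads
         * are still in flight when the waiter resumes. */
        if (!uptodate && bh == kiobuf->bh[0]) {
                atomic_set(&kiobuf->io_count, 0);
                wake_up(&kiobuf->wait_queue);
                return;
        }

        if (atomic_dec_and_test(&kiobuf->io_count))
                wake_up(&kiobuf->wait_queue);
}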
Thanks,
Naresh.