Re: [PATCH] speed up SATA

From: Andrea Arcangeli
Date: Mon Mar 29 2004 - 08:22:41 EST


On Sun, Mar 28, 2004 at 11:02:43PM -0500, Jeff Garzik wrote:
> My point is there are two maximums:
>
> 1) the hardware limit
> 2) the limit that "makes sense", e.g. 512k or 1M for most
>
> The driver should only care about #1, and should be "told" #2.
>
> A very, very, very minimal implementation could be this:
>
> --- 1.138/include/linux/blkdev.h Fri Mar 12 04:33:07 2004
> +++ edited/include/linux/blkdev.h Sun Mar 28 22:44:15 2004
> @@ -607,6 +607,24 @@
>
> extern void drive_stat_acct(struct request *, int, int);
>
> +#define BLK_DISK_MAX_SECTORS 2048
> +#define BLK_FLOPPY_MAX_SECTORS 64
>
>
> Hardcoding such a maximum in the driver is inflexible and IMO incorrect.
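(IOW with your constants the driver side would reduce to something like
the below, instead of the driver picking its own number -- just a sketch
using the stock 2.6 queue API:)

	/* driver init: take the generic per-class default (#2)
	 * instead of hardcoding a controller-specific value */
	blk_queue_max_sectors(q, BLK_DISK_MAX_SECTORS);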

you're complaining about the current API and you're arguing that long
term the selection of the dma size for bulk I/O should change and become
more dynamic, but that doesn't change the fact that your patch is still
wrong today.

Once you change the API too, you can set the hardware limit in your
driver and rely on the highlevel blkdev code to find the optimal runtime
dma size for bulk I/O. But today it's your driver that is enforcing the
runtime bulk dma size, not the maximum limit of the controller, so you
should code your patch accordingly.
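Something on these lines (pure sketch: the max_hw_sectors field and
BLK_DEFAULT_BULK_SECTORS don't exist today, I'm inventing the names):

	/* driver: report only what the controller can take,
	 * the hardware limit (#1) */
	q->max_hw_sectors = hw_limit;

	/* highlevel blkdev code: choose the runtime bulk-I/O dma
	 * size (#2), never above the hardware limit */
	q->max_sectors = min_t(unsigned short, q->max_hw_sectors,
			       BLK_DEFAULT_BULK_SECTORS);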

Probably both the VM and the I/O scheduler will require an override if
we're going to set the dma size to 32M anytime in the future (though
even that isn't obvious, I mean battery backed ram isn't going to take
that long to read/write 32M). But that VM/I/O scheduler stuff will be an
"override": the blkdev value you're playing with is a blkdev-only thing
independent from the VM and the I/O scheduler, it's the min dma size
required to reach bulk I/O performance with minimal cpu overhead, and
that's purely a hardware issue that you could measure with an OS w/o VM
and w/o I/O scheduler. If it's too big the VM and the I/O scheduler may
want to reduce it for various reasons, but the choice of this value from
a blkdev point of view is independent from the VM and the I/O scheduler
as far as I can tell. The VM and the I/O scheduler care about this value
only when it's too big, because then it hurts latency and
memory pressure.
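So I'd expect the overrides, if we ever need them, to look something
like this (again only a sketch, names invented):

	/* the blkdev-only value: the min dma size that reaches full
	 * bulk-I/O throughput with minimal cpu overhead, a pure
	 * hardware property */
	max = q->max_sectors;

	/* VM/elevator overrides: they only ever clamp it down, and
	 * only when it's big enough to hurt latency or memory
	 * pressure */
	if (max > VM_MAX_SECTORS_OVERRIDE)
		max = VM_MAX_SECTORS_OVERRIDE;
	if (max > ELV_MAX_SECTORS_OVERRIDE)
		max = ELV_MAX_SECTORS_OVERRIDE;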