Re: regression in page writeback
From: Theodore Tso
Date: Thu Oct 01 2009 - 17:55:03 EST
On Thu, Oct 01, 2009 at 11:14:29PM +0800, Wu Fengguang wrote:
> Yes and no. Yes if the queue was empty for the slow device. No if the
> queue was full, in which case IO submission speed = IO complete speed
> for previously queued requests.
>
> So wbc.timeout will be accurate for IO submission time, and mostly
> accurate for IO completion time. The transient queue fill up phase
> shall not be a big problem?
So the problem is if we have a mixed workload where there are lots of
large contiguous writes, and lots of small writes which are fsync()'ed
--- for example, consider the workload of copying lots of big DVD
images combined with the infamous firefox-we-must-write-out-300-megs-of-
small-random-writes-and-then-fsync-them-on-every-single-url-click-so-
that-every-last-visited-page-is-preserved-for-history-bar-autocompletion
workload. The big writes, if they are contiguous, could take 1-2 seconds
on a very slow, ancient laptop disk, and that will hold up any kind of
small synchronous activity --- such as a disk read or a firefox-
triggered fsync().
That's why the IO completion time matters; it causes latency problems
for slow disks and mixed large and small write workloads. It was the
original reason for the 1024-page MAX_WRITEBACK_PAGES cap, which might
have made sense 10 years ago, back when disks were a lot slower. One of
the advantages of an auto-tuning algorithm, beyond auto-adjusting for
different types of hardware, is that we don't need to worry about
arbitrary and magic caps becoming obsolete due to technological
changes. :-)
- Ted