Re: [RESEND] [PATCH] readahead:add blk_run_backing_dev

From: Ronald Moesbergen
Date: Thu Jul 16 2009 - 03:32:56 EST


2009/7/15 Vladislav Bolkhovitin <vst@xxxxxxxx>:
>> The drop with 64 max_sectors_kb on the client is a consequence of how CFQ
>> works. I can't find the exact code responsible for this, but by all
>> signs, CFQ stops delaying requests once the number of outstanding requests
>> exceeds some threshold, which is 2 or 3. With 64 max_sectors_kb and 5 SCST
>> I/O threads this threshold is exceeded, so CFQ doesn't recover the order of
>> requests, hence the performance drop. With the default 512 max_sectors_kb
>> and 128K RA the server sees at most 2 requests at a time.
>>
>> Ronald, can you perform the same tests with 1 and 2 SCST I/O threads,
>> please?
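
(For reference: max_sectors_kb and the readahead size mentioned above are
per-device block-layer settings. A sketch of how they could be checked and
changed, assuming /dev/sdb stands in for the exported device; the device
name and values are only examples:)

```shell
# Current per-request size limit in KB (the default is typically 512).
cat /sys/block/sdb/queue/max_sectors_kb

# Lower it to 64 KB to reproduce the degraded test case.
echo 64 > /sys/block/sdb/queue/max_sectors_kb

# Readahead is reported in 512-byte sectors: 256 sectors == 128K RA.
blockdev --getra /dev/sdb
```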

Ok. Should I still use the file-on-xfs testcase for this, or should I
go back to using a regular block device? The file-over-iscsi setup is
quite uncommon, I suppose; most people will export a block device over
iscsi, not a file.

> With the context-RA patch, please, in those and future tests, since it
> should make RA for cooperative threads much better.
>
>> You can limit the number of SCST I/O threads with the num_threads
>> parameter of the scst_vdisk module.
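
(A sketch of setting the thread count when loading the module; num_threads
is the parameter named above, and reloading the module assumes no vdisk
devices are configured at that moment:)

```shell
# Reload scst_vdisk with a single I/O thread;
# use num_threads=2 for the second test run.
modprobe -r scst_vdisk
modprobe scst_vdisk num_threads=1
```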

Ok, I'll try that and include the blk_run_backing_dev,
readahead-context and io_context patches.

Ronald.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/