On Thu, Jun 26, 2014 at 02:10:28PM +0200, Bernd Schubert wrote:
On 06/26/2014 01:57 PM, Lukáš Czerner wrote:
On Thu, 26 Jun 2014, Artem Bityutskiy wrote:
On Thu, 2014-06-26 at 12:36 +0200, Bernd Schubert wrote:
On 06/26/2014 08:13 AM, Artem Bityutskiy wrote:
On Thu, 2014-06-26 at 11:06 +1000, Dave Chinner wrote:
Your particular use case can be handled by directing your benchmark
at a filesystem mount point and unmounting the filesystem in between
benchmark runs. There is no need to add kernel functionality for
something that can be so easily achieved by other means, especially
in benchmark environments where *everything* is tightly controlled.
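
A minimal sketch of that approach, assuming a hypothetical device
/dev/sdb1 mounted at /mnt/bench with ext4 (adjust for the real setup;
needs root):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* Hypothetical device and mount point for this sketch. */
	const char *dev = "/dev/sdb1";
	const char *mnt = "/mnt/bench";

	/* Unmounting throws away all cached pages, dentries and
	 * inodes belonging to this filesystem. */
	if (umount(mnt)) {
		perror("umount");
		return 1;
	}

	/* Remount for the next benchmark run; "ext4" is an
	 * assumption for this sketch. */
	if (mount(dev, mnt, "ext4", 0, NULL)) {
		perror("mount");
		return 1;
	}
	return 0;
}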
If I were a benchmark writer, I would not be willing to run it as root
just to be able to mount/unmount, and I would not be willing to require
the customer to create special dedicated partitions for the benchmark,
because that is too user-unfriendly. Or am I making incorrect assumptions?
But why a sysctl then? I also don't see the point of that at all; why
can't the benchmark use posix_fadvise(POSIX_FADV_DONTNEED)?
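
Roughly, that would look like this (a sketch; DONTNEED only drops
clean pages, so dirty data has to be written back first, and the call
is still only advisory):

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int fd, ret;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Flush dirty pages first, or DONTNEED cannot discard them. */
	if (fsync(fd))
		perror("fsync");
	/* offset 0, len 0 means the whole file. Note the return value
	 * is an error number, not -1/errno. */
	ret = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
	if (ret)
		fprintf(stderr, "posix_fadvise: %s\n", strerror(ret));
	close(fd);
	return ret ? 1 : 0;
}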
The latter question was answered - people want a way to drop caches for
a file. They need a method which guarantees that the caches are dropped.
They do not need an advisory method which does not give any guarantees.
I'm not sure a benchmark really needs that so badly that
FADV_DONTNEED isn't sufficient.
Personally, I would also like to know whether FADV_DONTNEED succeeded.
E.g. 'ql-fstest' checks whether the written pattern made it to the
block device, and currently it does not know whether the data really
has been dropped from the page cache. As it reads files several times
this is not critical, just a nice-to-have - nothing worth adding a
new syscall for.
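
One userspace way to check whether the pages were actually dropped is
mincore(2) on a mapping of the file; a sketch (Linux-specific):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Return how many of the file's pages are resident in the page
 * cache, or -1 on error. */
static long resident_pages(int fd)
{
	struct stat st;
	long pgsz = sysconf(_SC_PAGESIZE);
	size_t npages, i;
	unsigned char *vec;
	void *map;
	long resident = 0;

	if (fstat(fd, &st) < 0 || st.st_size == 0)
		return -1;
	npages = (st.st_size + pgsz - 1) / pgsz;
	vec = malloc(npages);
	if (!vec)
		return -1;
	/* Mapping the file does not itself fault the pages in. */
	map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		free(vec);
		return -1;
	}
	if (mincore(map, st.st_size, vec) == 0) {
		for (i = 0; i < npages; i++)
			resident += vec[i] & 1;
	} else {
		resident = -1;
	}
	munmap(map, st.st_size);
	free(vec);
	return resident;
}

Calling that after the FADV_DONTNEED and seeing 0 resident pages would
tell the test the cache really was dropped.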
ql-fstest is not a benchmark, it's a data integrity test. The re-read
verification problem is easily solved by using direct IO to read the
files directly without going through the page cache. Indeed, direct
IO will invalidate cached pages over the range it reads before it
does the read, so the guarantee that you are after - no cached pages
when the read is done - is also fulfilled by the direct IO read...
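
A sketch of such a direct IO re-read (O_DIRECT needs the buffer,
offset and length aligned; 4096 is a safe alignment on most setups):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define ALIGN 4096
#define BUFSZ (1024 * 1024)

int main(int argc, char **argv)
{
	void *buf;
	ssize_t n;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	/* O_DIRECT bypasses the page cache and invalidates any
	 * cached pages over the range being read. */
	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (posix_memalign(&buf, ALIGN, BUFSZ)) {
		fprintf(stderr, "posix_memalign failed\n");
		return 1;
	}
	while ((n = read(fd, buf, BUFSZ)) > 0) {
		/* ... verify the written pattern in buf[0..n) ... */
	}
	if (n < 0)
		perror("read");
	free(buf);
	close(fd);
	return 0;
}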
I really don't understand why people keep trying to make cached IO
behave like uncached IO when we already have uncached IO
interfaces....