Re: Very poor latency when using hard drive (raid1)

From: Michael Tokarev
Date: Tue Apr 16 2013 - 07:23:14 EST


15.04.2013 13:59, lkml@xxxxxxxxxxx wrote:
> There are 2 hard drives (normal, magnetic) in software raid 1
> on 3.2.41 kernel.
>
> When I write to them, e.g. using dd from /dev/zero to a local file
> (ext4 with default settings), running 2 dd instances at once (writing
> two files) starves all other programs that try to use the disk.
>
> Running ls on any directory on the same disk (same fs, btw) takes over
> half a minute to execute; the same goes for any other disk-touching action.
>
> Has anyone seen such a problem? Where to look, what to test?

This is a typical issue, known for many years.

Your dd runs go through the buffer cache, the same cache that all
other regular accesses use.  Once it fills up with the data being
written, cached directories and the like are thrown away to make room
for new cache space.  So when you need something else, it has to be
read back from the disk, which is already busy servicing the writes.

> What could solve it (other than ionice on the applications that I
> expect to use the hard drive)?

Just don't mix these two workloads.  Or, if you really need to
transfer a large amount of data, use direct I/O (O_DIRECT) -- for dd
that is iflag=direct or oflag=direct (depending on the I/O direction).
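
For illustration only (file name and sizes are made up), the dd form
of that would be something like

  dd if=/dev/zero of=bigfile bs=1M count=1024 oflag=direct

and a rough C sketch of the same idea -- opening the output with
O_DIRECT and writing from a suitably aligned buffer -- might look like
this (just to show the shape, not a drop-in tool):

  /* Sketch: write ~1 GiB of zeroes bypassing the page cache. */
  #define _GNU_SOURCE          /* for O_DIRECT */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          const size_t blksz = 1 << 20;       /* 1 MiB per write */
          const size_t count = 1024;          /* ~1 GiB total */
          void *buf;
          size_t i;
          int fd;

          /* O_DIRECT requires an aligned buffer; 4096 covers
           * common logical block sizes. */
          if (posix_memalign(&buf, 4096, blksz)) {
                  perror("posix_memalign");
                  return 1;
          }
          memset(buf, 0, blksz);              /* like /dev/zero */

          fd = open("bigfile",
                    O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
          if (fd < 0) {
                  perror("open");
                  return 1;
          }

          for (i = 0; i < count; i++) {
                  if (write(fd, buf, blksz) != (ssize_t)blksz) {
                          perror("write");
                          return 1;
                  }
          }
          close(fd);
          free(buf);
          return 0;
  }

Note that O_DIRECT writes must be multiples of the device block size
and use aligned buffers, which is why dd with oflag=direct and a sane
bs is usually the easier route.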

ionice won't help much.

Thanks,

/mjt