Re: [RFC][PATCH] Per file dirty limit throttling

From: Peter Zijlstra
Date: Tue Aug 17 2010 - 04:25:21 EST


On Tue, 2010-08-17 at 10:39 +0530, Nikanth Karthikesan wrote:
> Oh, nice. A per-task limit is an elegant solution, which should help in
> most of the common cases.
>
> But I just wonder what happens, when
> 1. The dirtier is multiple co-operating processes
> 2. An app, like a shell script, that repeatedly calls dd with seek and skip?
> People do this for data deduplication, sparse copying, etc.
> 3. The app dies and comes back again. Like a VM that is rebooted, and
> continues writing to a disk backed by a file on the host.
>
> Do you think, in those cases this might still be useful?

Those cases do indeed defeat the current per-task limit; however, I think
the solution to that is to bound the amount of writeback done by each
blocked process, so that even short-lived dirtiers pay for some of the
writeback they cause before they exit.

Jan Kara had some good ideas in that department.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/