Re: [RFC][PATCH] Per file dirty limit throttling

From: Balbir Singh
Date: Wed Aug 18 2010 - 10:09:18 EST


* Peter Zijlstra <peterz@xxxxxxxxxxxxx> [2010-08-18 11:58:56]:

> On Wed, 2010-08-18 at 14:52 +0530, Nikanth Karthikesan wrote:
> > On Tuesday 17 August 2010 13:54:35 Peter Zijlstra wrote:
> > > On Tue, 2010-08-17 at 10:39 +0530, Nikanth Karthikesan wrote:
> > > > Oh, nice. Per-task limit is an elegant solution, which should help
> > > > during most of the common cases.
> > > >
> > > > But I just wonder what happens, when
> > > > 1. The dirtier is multiple co-operating processes
> > > > 2. Some app like a shell script that repeatedly calls dd with seek and
> > > > skip? People do this for data deduplication, sparse skipping, etc.
> > > > 3. The app dies and comes back again. Like a VM that is rebooted, and
> > > > continues writing to a disk backed by a file on the host.
> > > >
> > > > Do you think, in those cases this might still be useful?
> > >
> > > Those cases do indeed defeat the current per-task limit; however, I
> > > think the solution is to limit the amount of writeback done by each
> > > blocked process.
> > >
> >
> > Blocked on what? Sorry, I do not understand.
>
> In balance_dirty_pages(). By limiting the work done there (or rather,
> the number of page writeback completions you wait for -- starting the
> I/O isn't that expensive), you can bound the time a task spends
> blocked there, and therefore limit the impact.
>
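To make that concrete, here is a minimal userspace sketch of the idea
(illustrative names and numbers only, not the kernel's actual
balance_dirty_pages() internals): each task counts the pages it has
dirtied, and once it crosses its limit it waits for at most a fixed
batch of writeback completions per trip through the throttle path, so
the time any one caller spends blocked is bounded.

/*
 * Sketch: bound the work a throttled task does per trip.  All names
 * and constants are made up for illustration.
 */
#include <stdio.h>

#define TASK_DIRTY_LIMIT	64	/* pages dirtied before throttling */
#define COMPLETIONS_PER_TRIP	8	/* bounded wait per blocked task */

struct task {
	const char *name;
	unsigned long nr_dirtied;	/* pages dirtied since last throttle */
};

static unsigned long total_completions;	/* completed writebacks (simulated) */

/* Stand-in for waiting on the flusher to finish one page of writeback. */
static void wait_one_writeback_completion(void)
{
	total_completions++;
}

/*
 * Throttle path: wait for at most COMPLETIONS_PER_TRIP completions, so
 * a task is blocked for a bounded time even when many cooperating
 * writers (dd restarted from a script, a rebooted VM, ...) dirty the
 * same file.
 */
static void balance_dirty_pages_sketch(struct task *t)
{
	unsigned long waited;

	if (t->nr_dirtied < TASK_DIRTY_LIMIT)
		return;

	for (waited = 0; waited < COMPLETIONS_PER_TRIP; waited++)
		wait_one_writeback_completion();

	printf("%s: throttled, waited for %lu completions\n",
	       t->name, waited);
	t->nr_dirtied = 0;
}

int main(void)
{
	struct task dd = { "dd", 0 };
	int i;

	for (i = 0; i < 256; i++) {	/* dirty 256 pages */
		dd.nr_dirtied++;
		balance_dirty_pages_sketch(&dd);
	}
	printf("total completions waited on: %lu\n", total_completions);
	return 0;
}

Because the per-trip batch is fixed, a writer that keeps coming back
(cases 1-3 above) pays a bounded penalty on every trip instead of
pushing an unbounded wait onto any single task.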

There is an ongoing effort to add per-cgroup dirty limits, and I
honestly think it would be nice to do the throttling at that level
first. We need it there as part of the overall I/O controller, and as
a more specialized mechanism it could handle your case as well.
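Roughly, the per-cgroup accounting could look like the sketch below
(userspace C with invented names; the real memcg dirty-limit patches
are still under discussion): each cgroup keeps its own dirty-page
count, and its writers are throttled against the group's limit rather
than the global one.

/*
 * Sketch: per-cgroup dirty accounting.  All names and numbers are
 * hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

struct dirty_cgroup {
	const char *name;
	unsigned long nr_dirty;		/* pages currently dirty */
	unsigned long dirty_limit;	/* per-group ceiling */
};

/* Charge one newly dirtied page to the task's cgroup. */
static void cgroup_account_dirty(struct dirty_cgroup *cg)
{
	cg->nr_dirty++;
}

/* Should this cgroup's writers be throttled? */
static bool cgroup_over_dirty_limit(const struct dirty_cgroup *cg)
{
	return cg->nr_dirty > cg->dirty_limit;
}

int main(void)
{
	struct dirty_cgroup vm_guest = { "vm-guest", 0, 100 };
	int i;

	for (i = 0; i < 150; i++) {
		cgroup_account_dirty(&vm_guest);
		if (cgroup_over_dirty_limit(&vm_guest)) {
			/* Real code would start writeback and wait here. */
			printf("%s: over limit at %lu dirty pages\n",
			       vm_guest.name, vm_guest.nr_dirty);
			vm_guest.nr_dirty -= 50;	/* pretend writeback */
		}
	}
	return 0;
}

A heavy dirtier then throttles only against its own group's limit,
which covers the co-operating-processes and restarted-writer cases as
long as they run in the same cgroup.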

--
Three Cheers,
Balbir