Re: merging the per-bdi writeback patchset
From: Jens Axboe
Date: Tue Jun 23 2009 - 07:28:59 EST
On Tue, Jun 23 2009, KOSAKI Motohiro wrote:
> > On Tue, Jun 23 2009, KOSAKI Motohiro wrote:
> > > Hi
> > >
> > > > On Tue, Jun 23 2009, Andrew Morton wrote:
> > > > > On Tue, 23 Jun 2009 10:11:56 +0200 Jens Axboe <jens.axboe@xxxxxxxxxx> wrote:
> > > > >
> > > > > > Things are looking good for this patchset and it's been in -next for
> > > > > > almost a week without any reports of problems. So I'd like to merge it
> > > > > > for 2.6.31 if at all possible. Any objections?
> > > > >
> > > > > erk. I was rather expecting I'd have time to have a look at it all.
> > > >
> > > > OK, we can wait if we have to, just trying to avoid having to keep this
> > > > fresh for one full cycle. I have posted this patchset 11 times over the
> > > > past months, though, so it's not like it's a new piece of work :-)
> > > >
> > > > > It's unclear to me actually _why_ the performance changes which were
> > > > > observed have actually occurred. In fact it's a bit unclear (to me)
> > > > > why the patchset was written and what it sets out to achieve :(
> > > >
> > > > It started out trying to get rid of the pdflush uneven writeout. If you
> > > > look at various pdflush intensive workloads, even on a single disk you
> > > > often have 5 or more pdflush threads working the same device. It's just
> > > > not optimal. Another issue was starvation with request allocation. Given
> > > > that pdflush does non-blocking writes (it has to, by design), pdflush
> > > > can potentially be starved if someone else is working the device.
> > >
> > > Can you please provide a reproducer program and post the measurement
> > > results? I'd like to measure with the same program on my box.
> >
> > For which issue? Lumpy writeout can often be observed just by doing
> > buffered writes to a bunch of files.
>
> Yes, I know the current behavior is not perfectly optimal,
> but I haven't seen it cause a serious issue.
>
> So I guess you have a workload with a big regression, yes? If so, I'd
> like to see it.
Not really, I was just interested in making it more optimal. I work from
various fio job files; one case that is sped up greatly is doing random
writes with mmap to an otherwise buffered file. pdflush is both lumpy
and a lot slower there, even with many pdflush threads active. Looking
at disk utilization, pdflush doesn't manage more than ~80% for that
workload. The per-bdi writeback is completely smooth and gets about as
close to 100% utilization as possible (around ~98%). And this is just
one disk; the per-bdi writeback scales nicely upwards, while pdflush
falls flat. And then there are lots of cases where the performance is
the same. For many workloads, pdflush isn't really very active.
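
Roughly, a job along these lines shows it (the directory, sizes, and
job name here are placeholders, not my exact job file):

; random writes through a shared mapping to a buffered file
[global]
; any disk-backed filesystem; the path is a placeholder
directory=/mnt/test
size=1g
bs=4k
; buffered I/O, so the dirtied pages go out via writeback
direct=0

[mmap-randwrite]
; the mmap engine dirties pagecache through a file mapping
ioengine=mmap
rw=randwrite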
> > > Plus, can you please write more verbose patch descriptions? Your
> > > patches are a bit hard to review.
> >
> > OK, I can probably improve on that. Do you mean the general description
> > of the patchset, or some of the individual patches?
>
> Hopefully both. Honestly, I haven't understood the main issue you're
> worried about.
Does the above help? It's all about making the writeback more
consistent. So getting rid of the lumpy writeback and eliminating the
pdflush starvation were the prime motivators.
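
As for a simple reproducer of the lumpy writeout, buffered writes to a
bunch of files is typically all it takes. A sketch along these lines
(file count and sizes are arbitrary) should show it if you watch the
device utilization while it runs:

; plain buffered streaming writes to several files
[global]
; the path is a placeholder
directory=/mnt/test
ioengine=psync
rw=write
bs=1M
size=512m
direct=0

[buffered-writers]
; each job writes its own file
numjobs=8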
--
Jens Axboe