Re: RAID1 might_sleep() warning on 3.19-rc7
From: Peter Zijlstra
Date: Tue Feb 10 2015 - 04:29:51 EST
On Tue, Feb 10, 2015 at 01:50:17PM +1100, NeilBrown wrote:
> On Mon, 9 Feb 2015 10:10:00 +0100 Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > > However, when io_schedule() explicitly calls blk_flush_plug(), then
> > > @from_schedule=false variant is used, and the unplug functions are allowed to
> > > allocate memory and block and maybe even call mempool_alloc() which might
> > > call io_schedule().
> > >
> > > This shouldn't be a problem as blk_flush_plug() spliced out the plug list, so
> > > any recursive call will find an empty list and do nothing.
> > Unless something along the way stuck something back on, right? So
> > should we stick an:
> > WARN_ON(current->in_iowait);
> > somewhere near where things are added to this plug list? (and move the
> > blk_flush_plug() call inside of where that's actually true of course).
> No, I don't think so.
> It is certainly possible that some request on plug->cb_list could add
> something to plug->list - which is processed after ->cb_list.
> I think the best way to think about this is that the *problem* was that a
> wait_event loop could spin without making any progress. So any time that
> clear forward progress is made it is safe to sleep without necessitating the
> warning. Hence sched_annotate_sleep() is reasonable.
> blk_flush_plug() will definitely have dispatched some requests if it
> might_sleep(), so the sleep is OK.
Well, yes, but you forget that this gets us back into recursion land.
io_schedule() calling io_schedule() calling io_schedule() and *boom*
stack overflow -> dead machine.
We must either guarantee that io_schedule() will never call itself, or
that io_schedule() will not add new work to the current plug, so that
the recursion into the block layer cannot repeat.
Pick either option, but pick one.
Without providing such a guarantee I'm not comfortable making this
warning go away.