Re: [RFC] ext3/jbd race: releasing in-use journal_heads
From: Stephen C. Tweedie
Date: Mon Mar 07 2005 - 18:11:07 EST
Hi,
On Mon, 2005-03-07 at 21:11, Andrew Morton wrote:
> > I'm having trouble testing it, though --- I seem to be getting livelocks
> > in O_DIRECT running 400 fsstress processes in parallel; ring any bells?
>
> Nope. I don't think anyone has been that cruel to ext3 for a while.
> I assume this workload used to succeed?
I think so. I remember getting a lockup a couple of months ago that I
couldn't get to the bottom of and that looked very similar, so it may
just be a matter of it being much more reproducible now. But I seem to
be able to livelock the direct-IO bits in minutes now, quite reliably.
I'll try to narrow things down.
Altgr-ScrLck is showing a range of EIPs, all inside ext3_direct_IO->
invalidate_inode_pages2_range(). I'm seeing
invalidate_inode_pages2_range()->pagevec_lookup()->find_get_pages()
as by far the most common trace, but also
invalidate_inode_pages2_range()->__might_sleep()
invalidate_inode_pages2_range()->unlock_page()
and a few other similar paths.
invalidate_inode_pages2_range() does do a cond_resched(), so the machine
carries on despite having 316 fsstress tasks in system-mode R state at
the moment. :-) The scheduler is doing a rather impressive job.
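
For reference, the loop those traces are coming from is shaped roughly
like this. This is a much-simplified sketch of the 2.6 pagevec walk, not
the real mm/truncate.c code: pte unmapping, error handling and the
clamping of the last pagevec at "end" are all glossed over, and the
exact placement of the cond_resched() is approximate.

#include <linux/fs.h>		/* struct address_space */
#include <linux/pagemap.h>	/* lock_page(), unlock_page() */
#include <linux/pagevec.h>	/* pagevec_lookup(), PAGEVEC_SIZE */
#include <linux/sched.h>	/* cond_resched() */

/* Hypothetical name; a sketch of the walk, not the actual function. */
static void invalidate_range_sketch(struct address_space *mapping,
				    pgoff_t start, pgoff_t end)
{
	struct pagevec pvec;
	pgoff_t next = start;
	int i;

	pagevec_init(&pvec, 0);
	/* pagevec_lookup() -> find_get_pages(): the hot EIP above */
	while (next <= end &&
	       pagevec_lookup(&pvec, mapping, next, PAGEVEC_SIZE)) {
		for (i = 0; i < pagevec_count(&pvec); i++) {
			struct page *page = pvec.pages[i];

			lock_page(page);
			if (page->mapping != mapping) {
				/* truncated under us; skip it */
				unlock_page(page);
				continue;
			}
			next = page->index + 1;
			/* ... actually invalidate the page here ... */
			unlock_page(page);
		}
		pagevec_release(&pvec);
		/*
		 * This is the cond_resched() that keeps the box usable:
		 * if the range keeps refilling with pages, the walk never
		 * terminates, but it does keep yielding the CPU.
		 */
		cond_resched();
	}
}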
--Stephen