Re: 3.4.4-rt13: btrfs + xfstests 006 = BOOM.. and a bonus rt_mutex deadlock report for absolutely free!
From: Steven Rostedt
Date: Mon Jul 16 2012 - 12:02:44 EST
On Mon, 2012-07-16 at 04:02 +0200, Mike Galbraith wrote:
> > Great, thanks! I got stuck in bug land on Friday. You mentioned
> > performance problems earlier on Saturday, did this improve performance?
>
> Yeah, the read_trylock() seems to improve throughput. That's not
> heavily tested, but it certainly looks like it does. No idea why.
Ouch, you just turned the rt_read_lock() into a spin lock. If a higher
priority process preempts a lower priority process that holds the same
lock, it will deadlock: the spinner never sleeps, so the preempted
holder can never run to release the lock.
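To make the failure mode concrete, here is a minimal user-space
analogue (illustrative only: the names, priorities and the pthread
rwlock stand in for the kernel pieces). Both threads are pinned to
CPU 0 under SCHED_FIFO; it needs CAP_SYS_NICE and deliberately
livelocks, so run it under something like timeout(1):

	#define _GNU_SOURCE
	#include <pthread.h>
	#include <sched.h>
	#include <unistd.h>

	static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

	static void pin_to_cpu0(int prio)
	{
		struct sched_param sp = { .sched_priority = prio };
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(0, &set);
		pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
		pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
	}

	static void *low_prio_holder(void *arg)
	{
		(void)arg;
		pin_to_cpu0(1);
		pthread_rwlock_wrlock(&lock);
		sleep(2);			/* block; the spinner gets CPU 0 */
		pthread_rwlock_unlock(&lock);	/* unreachable: prio 2 never yields */
		return NULL;
	}

	static void *high_prio_spinner(void *arg)
	{
		(void)arg;
		pin_to_cpu0(2);
		while (pthread_rwlock_tryrdlock(&lock))
			;	/* trylock loop: no sleep, no priority boost */
		pthread_rwlock_unlock(&lock);
		return NULL;
	}

	int main(void)
	{
		pthread_t lo, hi;

		pthread_create(&lo, NULL, low_prio_holder, NULL);
		sleep(1);	/* let the low prio thread take the lock */
		pthread_create(&hi, NULL, high_prio_spinner, NULL);
		pthread_join(hi, NULL);	/* livelocks when run with RT privileges */
		pthread_join(lo, NULL);
		return 0;
	}

In the kernel, blocking in rt_read_lock() lets priority inheritance
boost the preempted owner; a trylock loop never blocks, so it never
boosts anybody, and the owner stays starved.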
I'm not sure why you would get a performance benefit from this, as the
mutex used is an adaptive one (a task that fails to acquire the lock
only sleeps if it gets preempted or if the lock owner is not running).
We should look at why this performs better (if it really does).
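Roughly, the adaptive behavior looks like this (a user-space sketch
with made-up names, not the actual rtmutex code; the real thing learns
"owner is running" from the scheduler and sleeps on the rtmutex wait
list with PI, which sched_yield() stands in for here):

	#include <sched.h>
	#include <stdatomic.h>
	#include <stdbool.h>

	struct adaptive_mutex {
		atomic_bool locked;
		atomic_bool owner_on_cpu;	/* owner advertises "I'm running" */
	};

	static void adaptive_lock(struct adaptive_mutex *m)
	{
		bool expected = false;

		while (!atomic_compare_exchange_weak(&m->locked, &expected, true)) {
			expected = false;
			if (!atomic_load(&m->owner_on_cpu))
				sched_yield();	/* owner off CPU: stop spinning */
			/* owner on CPU: keep spinning, it should unlock soon */
		}
		atomic_store(&m->owner_on_cpu, true);
	}

	static void adaptive_unlock(struct adaptive_mutex *m)
	{
		atomic_store(&m->owner_on_cpu, false);
		atomic_store(&m->locked, false);
	}

When the owner is on a CPU this already spins much like a raw spin
lock, which is why a trylock loop winning on throughput is surprising.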
-- Steve
>
> WRT performance, dbench isn't thrilled, but btrfs seems to work just
> fine for my routine usage, and spinning rust bucket is being all it can
> be. I hope I don't have to care overly much about dbench's opinion. It
> doesn't make happy multi-thread numbers with btrfs, but those numbers
> suddenly look great if you rebase relative to xfs -rt throughput :)
>
> > One other question:
> >
> > > again:
> > > +#ifdef CONFIG_PREEMPT_RT_BASE
> > > +	while (atomic_read(&eb->blocking_readers))
> > > +		cpu_chill();
> > > +	while (!read_trylock(&eb->lock))
> > > +		cpu_chill();
> > > +	if (atomic_read(&eb->blocking_readers)) {
> > > +		read_unlock(&eb->lock);
> > > +		goto again;
> > > +	}
> >
> > Why use read_trylock() in a loop instead of just trying to take the
> > lock? Is this an RTism or are there other reasons?
>
> First-stab paranoia. It worked, so I removed it; it still worked but
> lost throughput. I then removed all my bits, leaving only the lockdep
> bits, and it still worked.
>
> -Mike
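For reference, the simpler shape the question above points at would be
something like this (untested sketch, derived from the quoted hunk):

	again:
	#ifdef CONFIG_PREEMPT_RT_BASE
		/* wait out blocking readers, then block on the lock
		 * directly instead of polling it with read_trylock() */
		while (atomic_read(&eb->blocking_readers))
			cpu_chill();
		read_lock(&eb->lock);
		if (atomic_read(&eb->blocking_readers)) {
			read_unlock(&eb->lock);
			goto again;
		}
	#endif

On -rt, read_lock() there blocks on the underlying rtmutex and boosts
a preempted owner instead of spinning behind it, which avoids the
deadlock described above.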