Re: frequent softlockups with 3.10rc6.
From: Linus Torvalds
Date: Fri Jun 28 2013 - 04:22:54 EST
On Thu, Jun 27, 2013 at 9:21 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>
> Besides, making the inode_sb_list_lock per sb won't help solve this
> problem, anyway. The case that I'm testing involves a filesystem
> that contains 99.97% of all inodes cached by the system. This is a
> pretty common situation....
Yeah..
> The problem is not the inode->i_lock. lockstat is pretty clear on
> that...
So the problem is that we're at -rc7, and apparently this has
magically gotten much worse. I'd *really* prefer to polish some turds
here over being fancy.
> Right, we could check some of it optimistically, but we'd still be
> walking millions of inodes under the inode_sb_list_lock on each
> sync() call just to find the one inode that is dirty. It's like
> polishing a turd - no matter how shiny you make it, it's still just
> a pile of shit.
Agreed. But it's not a _new_ pile of shit, and so I'm looking for
something less scary than a whole new list with totally new locking.
If we could make the cost of walking the (many) inodes low enough that
we can paper over things for now, that would be lovely.
And with inode->i_lock we might well get into some kind of lockstep
worst-case behavior with respect to the inode_sb_list_lock too. I was
hoping that optimizing the inner loop would improve the contention case -
or at least push it out a bit (which is presumably what the situation
*used* to be).
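As a concrete illustration of the turd-polishing in question - rejecting
clean or about-to-go-away inodes with an unlocked peek at i_state and
i_mapping->nrpages before ever touching inode->i_lock - a minimal sketch
modeled on the wait_sb_inodes() walk in fs/fs-writeback.c could look like
the following. This is only a sketch of the idea under 3.10-era locking
(global inode_sb_list_lock, declared in fs/internal.h), not the actual
patch being tested in this thread:

/* Sketch only: assumes the fs/fs-writeback.c context of 3.10. */
static void wait_sb_inodes_sketch(struct super_block *sb)
{
        struct inode *inode, *old_inode = NULL;

        spin_lock(&inode_sb_list_lock);
        list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
                struct address_space *mapping = inode->i_mapping;

                /*
                 * Unlocked fast path: inodes that are being freed, are
                 * still being set up, or have no page cache cannot be
                 * the ones sync() has to wait on, so skip them without
                 * taking inode->i_lock at all.  The read is racy, which
                 * is fine: everything that matters is re-checked under
                 * the lock below.
                 */
                if ((inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) ||
                    mapping->nrpages == 0)
                        continue;

                spin_lock(&inode->i_lock);
                if ((inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) ||
                    mapping->nrpages == 0) {
                        spin_unlock(&inode->i_lock);
                        continue;
                }
                __iget(inode);
                spin_unlock(&inode->i_lock);
                spin_unlock(&inode_sb_list_lock);

                /*
                 * Drop the previous inode's reference only after we have
                 * let go of the list lock, as the real code does.
                 */
                iput(old_inode);
                old_inode = inode;

                filemap_fdatawait(mapping);
                cond_resched();

                spin_lock(&inode_sb_list_lock);
        }
        spin_unlock(&inode_sb_list_lock);
        iput(old_inode);
}

The point is that the common case (millions of clean inodes) then costs a
couple of cache-line reads per inode instead of a spin_lock/spin_unlock
pair, which is what pushes the inode_sb_list_lock hold times back down.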
> It looks ok, but I still think it is solving the wrong problem.
> FWIW, your optimisation has much wider application than just this
> one place. I'll have a look to see how we can apply this approach
> across all the inode lookup+validate code we currently have that
> unconditionally takes the inode->i_lock....
Yes, I was looking at all the other places that also seem to be
testing i_state for those "about to go away" cases.
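For reference, the generic shape of those checks - a racy unlocked peek
at i_state, followed by the authoritative re-check under inode->i_lock
before acting on the inode - could be factored into a helper along these
lines. The helper name is made up for illustration; nothing like it
exists in the tree:

/* Hypothetical helper, not a real kernel function. */
static inline bool inode_maybe_interesting(struct inode *inode)
{
        /*
         * Unlocked, possibly stale read of i_state: callers must still
         * re-check under inode->i_lock before doing anything with the
         * inode.  This only exists to skip the obvious dead wood cheaply.
         */
        return !(ACCESS_ONCE(inode->i_state) &
                 (I_FREEING | I_WILL_FREE | I_NEW));
}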
Linus