Re: safety of *mutex_unlock() (Was: [BUG] signal: sighand unprotected when accessed by /proc)
From: Paul E. McKenney
Date: Tue Jun 10 2014 - 11:20:19 EST
On Tue, Jun 10, 2014 at 07:36:32AM -0700, Paul E. McKenney wrote:
> On Tue, Jun 10, 2014 at 03:01:38PM +0200, Peter Zijlstra wrote:
> > On Tue, Jun 10, 2014 at 05:52:35AM -0700, Paul E. McKenney wrote:
> > > On Tue, Jun 10, 2014 at 10:37:26AM +0200, Peter Zijlstra wrote:
> > > > On Mon, Jun 09, 2014 at 09:26:13AM -0700, Paul E. McKenney wrote:
> > > > > That would indeed be a bad thing, as it could potentially lead to
> > > > > use-after-free bugs. Though one could argue that any code that resulted
> > > > > in use-after-free would be quite aggressive. But still...
> > > >
> > > > Let me hijack this thread for yet another issue... So I had an
> > > > RCU-related use-after-free the other day, and while Sasha was able
> > > > to trigger it quite easily, I had a multi-day struggle to reproduce it.
> > > >
> > > > Once I figured out what the exact problem was, it was also clear to
> > > > me why it was so hard for me to reproduce.
> > > >
> > > > So normally it's easier to trigger races on bigger machines: more
> > > > CPUs, more concurrency, more races, all good.
> > > >
> > > > _However_, with RCU the grace-period machinery gets slower as the
> > > > machine gets bigger, so: bigger machine, slower grace period, slower
> > > > RCU free, less likely to hit the use-after-free.
> > > >
> > > > So I was thinking, and I know you all will go kick me for this because
> > > > the very last thing we need is what I'm about to propose: more RCU
> > > > flavours :-).
> > > >
> > > > How about a reference-counted RCU variant whose rcu_read_unlock()
> > > > is ultra-aggressive about running the callbacks, in order to better
> > > > trigger such issues?
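
To make that concrete, here is a toy userspace model of the semantics such
a variant would provide: the callback runs as soon as the current readers
are gone, instead of waiting on the grace-period machinery. All names here
are invented for illustration. Note also that draining callbacks safely
from rcu_read_unlock() itself would need per-epoch counters (as SRCU
uses), so this sketch cheats and spins on the update side instead:

/* Toy model only -- invented names, not a kernel API. */
#include <stdatomic.h>

static atomic_int toy_readers;

static void toy_read_lock(void)
{
        atomic_fetch_add(&toy_readers, 1);
}

static void toy_read_unlock(void)
{
        atomic_fetch_sub(&toy_readers, 1);
}

/* Caller must have already unpublished the object. */
static void toy_call_rcu(void (*func)(void *), void *arg)
{
        /* Wait until every reader that might still hold a reference
         * has left its read-side critical section.  (A heavy enough
         * read-side load can keep this count nonzero indefinitely,
         * which is one reason real implementations don't do this.) */
        while (atomic_load(&toy_readers) != 0)
                ;
        /* ...then run the callback immediately. */
        func(arg);
}
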
> > >
> > > If you are using synchronize_rcu() for the update side, then I suggest
> > > rcutorture.gp_exp=1 to force the use of expedited grace periods throughout.
> >
> > No such luck; this was a regular kfree() from call_rcu(). And the
> > callback execution was typically delayed long enough to never 'see'
> > the use-after-free.
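
For reference, the pattern in question looks something like the sketch
below (struct and function names invented): the object is unpublished
immediately, but the kfree() is deferred to a callback that a big machine
might not run for a long time, so a reader that illegally holds on to the
object tends to get away with it:

#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
        struct list_head list;
        struct rcu_head rcu;
        int data;
};

static void foo_free_cb(struct rcu_head *head)
{
        kfree(container_of(head, struct foo, rcu));
}

static void foo_del(struct foo *p)
{
        list_del_rcu(&p->list);          /* unpublish the object */
        call_rcu(&p->rcu, foo_free_cb);  /* kfree() after a grace period */
}
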
>
> Figures. ;-)
>
> Well, there is always the approach of booting your big systems with most
> of the CPUs turned off. Another approach would be to set HZ=10000 or
> some such, assuming the kernel can actually survive that kind of abuse.
And yet another approach is to have a pair of low-priority processes
per CPU that context-switch back and forth to each other if that CPU
has nothing else to do. This should get rid of most of the increase in
grace-period duration with increasing numbers of CPUs.
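
A minimal (and untested) userspace sketch of that idea: for each CPU, fork
a pair of processes pinned to that CPU at SCHED_IDLE priority that yield
back and forth, so that an otherwise-idle CPU keeps context-switching and
thus keeps reporting quiescent states promptly:

#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

static void churn(int cpu)
{
        cpu_set_t set;
        struct sched_param sp = { .sched_priority = 0 };

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        sched_setaffinity(0, sizeof(set), &set);  /* pin to this CPU */
        sched_setscheduler(0, SCHED_IDLE, &sp);   /* run only when idle */
        for (;;)
                sched_yield();                    /* switch to our partner */
}

int main(void)
{
        int cpu, ncpus = sysconf(_SC_NPROCESSORS_ONLN);

        for (cpu = 0; cpu < ncpus; cpu++) {
                if (fork() == 0)
                        churn(cpu);  /* first of the pair, never returns */
                if (fork() == 0)
                        churn(cpu);  /* second of the pair, never returns */
        }
        pause();  /* parent sleeps while the children churn */
        return 0;
}
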
Thanx, Paul