Re: [PATCH] ipc,sem block sem_lock on sma->lock during sma initialization

From: Davidlohr Bueso
Date: Sun Nov 23 2014 - 16:37:14 EST


On Sun, 2014-11-23 at 16:03 -0500, Rik van Riel wrote:
> On 11/23/2014 01:23 PM, Manfred Spraul wrote:
> > Hi Rik,
> >
> > On 11/21/2014 08:52 PM, Rik van Riel wrote:
> >> When manipulating just one semaphore with semop, sem_lock only
> >> takes that single semaphore's lock. This creates a problem during
> >> initialization of the semaphore array, when the data structures
> >> used by sem_lock have not been set up yet. The sma->lock is
> >> already held by newary, and we just have to make sure everything
> >> else waits on that lock during initialization.
> >>
> >> Luckily it is easy to make sem_lock wait on the sma->lock, by
> >> pretending there is a complex operation in progress while the sma
> >> is being initialized.
> > That's not sufficient, as sma->sem_nsems is accessed before
> > calling sem_lock(), both within find_alloc_undo() and within
> > semtimedop().
> >
> > The root problem is that sma->sem_nsems and sma->sem_base are
> > accessed without any locks; this conflicts with the approach of
> > having the sma come into existence not yet initialized but locked,
> > and only unlocked once initialization is complete.
> >
> > Attached is an idea. It did pass a few short tests. What do you
> > think?
>
> This was my other idea for fixing the issue; unfortunately
> I didn't think of it until after I sent the first patch :)

Yep, this is what I was mentioning as well.

> You are right that without that change, we can return the
> wrong error codes to userspace.
>
> I will give the patch a try, though I have so far been unable
> to reproduce the bug that the customer reported, so I am unlikely
> to give much in the way of useful testing results...
>
> Andrew, feel free to give Manfred's patch my
>
> Acked-by: Rik van Riel <riel@xxxxxxxxxx>

Acked-by: Davidlohr Bueso <dave@xxxxxxxxxxxx>
