Re: [PATCH RFC tip/core/rcu 23/23] vhost: add __rcu annotations
From: Paul E. McKenney
Date: Mon May 17 2010 - 19:05:44 EST
On Mon, May 17, 2010 at 06:00:25PM -0400, Mathieu Desnoyers wrote:
> * Paul E. McKenney (paulmck@xxxxxxxxxxxxxxxxxx) wrote:
> > On Mon, May 17, 2010 at 11:33:49PM +0300, Michael S. Tsirkin wrote:
> > > On Thu, May 13, 2010 at 08:23:40AM -0700, Paul E. McKenney wrote:
> > > > On Thu, May 13, 2010 at 03:07:23PM +0200, Peter Zijlstra wrote:
> > > > > On Wed, 2010-05-12 at 16:00 -0700, Paul E. McKenney wrote:
> > > > > > Any thoughts? One approach would be to create a separate lockdep class
> > > > > > for vhost workqueue state, similar to the approach used to instrument
> > > > > > rcu_read_lock() and friends.
> > > > >
> > > > > workqueue_struct::lockdep_map, it's held while executing worklets.
> > > > >
> > > > > lock_is_held(&vhost_workqueue->lockdep_map) should do what you want.
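[ For the archives: a minimal, untested sketch of what the
in_workqueue_context() helper used further down might look like on top of
that lockdep_map. Since struct workqueue_struct is private to
kernel/workqueue.c, such a helper would have to live there, and it only
makes sense under CONFIG_LOCKDEP, where the map is acquired around each
work item. ]

#ifdef CONFIG_LOCKDEP
/*
 * Return non-zero if the current task is executing a work item
 * queued on the specified workqueue, as tracked by lockdep.
 */
int in_workqueue_context(struct workqueue_struct *wq)
{
	return lock_is_held(&wq->lockdep_map);
}
#endif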
> > > >
> > > > Thank you, Peter!!!
> > > >
> > > > Thanx, Paul
> > >
> > > vhost in fact does flush_work rather than
> > > flush_workqueue, so while for now everything runs
> > > from vhost_workqueue, in theory nothing would break
> > > if we used some other workqueue or even a combination
> > > thereof.
> > >
> > > I guess when/if this happens, we could start by converting
> > > to _raw and then devise a solution.
> >
> > If there are a small finite number of work queues involved, we can
> > easily do something like:
> >
> > #ifdef CONFIG_PROVE_RCU
> > int in_vhost_workqueue(void)
> > {
> > 	return in_workqueue_context(vhost_workqueue) ||
> > 	       in_workqueue_context(vhost_other_workqueue) ||
> > 	       in_workqueue_context(yet_another_vhost_workqueue);
> > }
> > #endif /* #ifdef CONFIG_PROVE_RCU */
> >
> > Seem reasonable?
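[ Illustrative only: a helper like that would then be usable as the lockdep
expression in the rcu_dereference_check() calls added by the __rcu
annotation patch, roughly as in the untested fragment below. The
vq->private_data and vq->mutex names are the existing vhost_virtqueue
fields; the exact condition is of course up to the vhost code. ]

	priv = rcu_dereference_check(vq->private_data,
				     in_vhost_workqueue() ||
				     lockdep_is_held(&vq->mutex));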
> >
> > > By the way, what would be really nice is if we had a way
> > > to trap when an RCU-protected pointer is freed without a flush
> > > while some reader is running. The current annotation does not
> > > allow this, does it?
> >
> > Right now, it does not, but I wonder if something like Thomas's and
> > Mathieu's debugobjects work could be brought to bear on this problem?
> > This would need to be implemented in vhost, as synchronize_rcu() has
> > no way to know what memory it is flushing, nor does flush_work().
>
> We can think of my recent debugobjects addition as a small state machine
> that is described by the code that owns the objects. At each state
> transition, the code passes the expected state as well as the next
> state.
>
> The current implementation can only keep track of a single "state" per
> object at once. This should be extended to be able to count the number of
> RCU read-side critical sections in flight that are accessing an object.
Not a problem, as vhost doesn't use call_rcu(). So there won't be a
conflict between different debugobjects views of the same memory.
> We could use a hook in rcu_dereference (which knows about the object)
> and a hook in rcu_read_unlock (which determines the end of valid object
> use).
>
> We should hook into rcu_assign_pointer() to detect RCU structure
> privatization. It should put these objects in a "privatized" hash table.
>
> We should also hook into synchronize_rcu/sched() to remove the
> privatized structures from the privatized hash.
>
> A hook in "kfree" (maybe a new rcu_free(void (fctptr*)(void *)) wrapper ?)
> would call a debugobject hook that would lookup the "privatized" hash.
> If it contains the object to free, we check if there are RCU read-side
> C.S. in flight using this object at the same time, and show an error if
> both are true.
I believe that we can't bury this into the RCU primitives, because
rcu_read_unlock() doesn't know what objects were referenced in the
RCU read-side critical section.
But perhaps we should be simply treating this as a use-after-free
problem, so that RCU is not directly involved. Isn't that the standard
use of debugobjects anyway?
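[ A rough, untested sketch of that use-after-free angle on the vhost side,
using the existing debugobjects API; the descriptor and the exact
arm/disarm points are illustrative only. The idea is to keep the object
"active" while it is published to readers and deactivate it only after the
flush, so that CONFIG_DEBUG_OBJECTS_FREE would complain about a kfree() of
a still-active object. ]

#include <linux/debugobjects.h>

static struct debug_obj_descr vhost_priv_debug_descr = {
	.name = "vhost private_data",	/* illustrative */
};

/* When publishing the pointer: */
debug_object_init(priv, &vhost_priv_debug_descr);
debug_object_activate(priv, &vhost_priv_debug_descr);
rcu_assign_pointer(vq->private_data, priv);

/* When tearing it down: */
old = vq->private_data;		/* under vq->mutex; real code would use the
				   appropriate rcu_dereference_*() flavor */
rcu_assign_pointer(vq->private_data, NULL);
flush_work(&vq->poll.work);	/* wait for workqueue readers */
debug_object_deactivate(old, &vhost_priv_debug_descr);
debug_object_free(old, &vhost_priv_debug_descr);
kfree(old);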
Thanx, Paul
> Thoughts ?
>
> Mathieu
>
>
> >
> > Thanx, Paul
>
> --
> Mathieu Desnoyers
> Operating System Efficiency R&D Consultant
> EfficiOS Inc.
> http://www.efficios.com