Re: [PATCH RFC tip/core/rcu 23/23] vhost: add __rcu annotations

From: Paul E. McKenney
Date: Mon May 17 2010 - 17:06:22 EST


On Mon, May 17, 2010 at 11:33:49PM +0300, Michael S. Tsirkin wrote:
> On Thu, May 13, 2010 at 08:23:40AM -0700, Paul E. McKenney wrote:
> > On Thu, May 13, 2010 at 03:07:23PM +0200, Peter Zijlstra wrote:
> > > On Wed, 2010-05-12 at 16:00 -0700, Paul E. McKenney wrote:
> > > > Any thoughts? One approach would be to create a separate lockdep class
> > > > for vhost workqueue state, similar to the approach used to instrument
> > > > rcu_read_lock() and friends.
> > >
> > > workqueue_struct::lockdep_map, it's held while executing worklets.
> > >
> > > lock_is_held(&vhost_workqueue->lockdep_map) should do what you want.
> >
> > Thank you, Peter!!!
> >
> > Thanx, Paul
>
> vhost in fact does flush_work rather than
> flush_workqueue, so while for now everything runs
> from vhost_workqueue, in theory nothing would break
> if we used some other workqueue or even a
> combination thereof.
>
> I guess when/if this happens, we could start by converting
> to _raw and then devise a solution.

If there is a small, finite number of workqueues involved, we can
easily do something like:

#ifdef CONFIG_PROVE_RCU
int in_vhost_workqueue(void)
{
        return in_workqueue_context(vhost_workqueue) ||
               in_workqueue_context(vhost_other_workqueue) ||
               in_workqueue_context(yet_another_vhost_workqueue);
}
#endif /* #ifdef CONFIG_PROVE_RCU */

Seem reasonable?
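
For completeness, a minimal sketch of in_workqueue_context() itself,
built on the workqueue_struct::lockdep_map that Peter points to above.
Since struct workqueue_struct is private to kernel/workqueue.c, it would
have to live there, and the CONFIG_LOCKDEP guard below is my assumption:

#ifdef CONFIG_LOCKDEP
/*
 * Return non-zero if the current task is executing a work item that
 * was queued on @wq.  This works because the workqueue's lockdep_map
 * is "held" for the duration of each work function.
 */
int in_workqueue_context(struct workqueue_struct *wq)
{
        return lock_is_held(&wq->lockdep_map);
}
#endif /* #ifdef CONFIG_LOCKDEP */

in_vhost_workqueue() would then be the condition passed to
rcu_dereference_check() in the vhost accessors.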

> By the way, what would be really nice is if we had a way
> to trap when an RCU-protected pointer is freed without a
> flush while some reader is still running. The current
> annotation does not allow this, does it?

Right now, it does not, but I wonder if something like Thomas's and
Mathieu's debugobjects work could be brought to bear on this problem?
This would need to be implemented in vhost, as synchronize_rcu() has
no way to know what memory it is flushing, nor does flush_work().
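
Just to sketch what that might look like on the vhost side (all names
below are made up for illustration): mark the object active when the
pointer is published, deactivate it only after the flush, and check it
at free time, at which point debugobjects warns if the object is still
active.

#include <linux/debugobjects.h>
#include <linux/slab.h>

/* Hypothetical vhost object whose lifetime we want checked. */
struct vhost_rcu_obj {
        void *data;
};

static struct debug_obj_descr vhost_rcu_debug_descr = {
        .name = "vhost_rcu_obj",
};

/* Call when the pointer is first published to RCU readers. */
static void vhost_rcu_debug_publish(struct vhost_rcu_obj *obj)
{
        debug_object_init(obj, &vhost_rcu_debug_descr);
        debug_object_activate(obj, &vhost_rcu_debug_descr);
}

/* Call only after flush_work()/synchronize_rcu() has completed. */
static void vhost_rcu_debug_quiesce(struct vhost_rcu_obj *obj)
{
        debug_object_deactivate(obj, &vhost_rcu_debug_descr);
}

/* Call just before freeing; warns if the object is still active. */
static void vhost_rcu_debug_free(struct vhost_rcu_obj *obj)
{
        debug_object_free(obj, &vhost_rcu_debug_descr);
        kfree(obj);
}

That would catch a kfree() without a prior flush, but only within vhost
itself; as noted, synchronize_rcu() and flush_work() would remain
oblivious.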

Thanx, Paul