Re: [PATCH] flush_work_sync vs. flush_scheduled_work Re: [PATCH] PHYLIB: IRQ event workqueue handling fixes

From: Oleg Nesterov
Date: Mon Oct 22 2007 - 13:58:25 EST


On 10/22, Jarek Poplawski wrote:
>
> On Fri, Oct 19, 2007 at 09:50:14AM +0200, Jarek Poplawski wrote:
> > On Thu, Oct 18, 2007 at 07:48:19PM +0400, Oleg Nesterov wrote:
> > > On 10/18, Jarek Poplawski wrote:
> > > >
> > > > +/**
> > > > + * flush_work_sync - block until a work_struct's callback has terminated
> > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > Hmm...
> > >
> > > > + * Similar to cancel_work_sync() but will only busy wait (without cancel)
> > > > + * if the work is queued.
> > >
> > > Yes, it won't block, but it will spin in a busy-wait loop until all other
> > > works scheduled before this work are finished. Not good. After that it
> > > really blocks, waiting for this work to complete.
> > >
> > > And I am a bit confused. We can't use flush_workqueue() because some of the
> > > queued work_structs may take rtnl_lock, yes? But in that case we can't use
> > > the new flush_work_sync() helper either, no?
>
> OK, I know I'm dumber and dumber every day,

You are not alone. I have the same feeling about myself!

> all these flushes are
> vulnerable to an rtnl deadlock wrt. other work functions, but
> cancel_work_sync() looks perfectly fine

Yes,
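
For illustration, a minimal sketch of that deadlock (the work function
and the call site below are hypothetical):

/* some other queued work takes rtnl_lock in its handler */
static void other_work_fn(struct work_struct *work)
{
	rtnl_lock();
	/* ... */
	rtnl_unlock();
}

/* while the flusher already holds rtnl_lock */
rtnl_lock();
flush_scheduled_work();	/* waits for other_work_fn(), which
			   waits for rtnl_lock -> deadlock */
rtnl_unlock();

cancel_work_sync() only waits for the one work it cancels, so the
locks taken by the other queued works don't matter to it.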

> Then, if by any chance I'm right, something like flush_work_sync
> (or a changed flush_scheduled_work, if such a change of implementation
> is acceptable) could safely be done the cancel_work_sync() way -- i.e.,
> by running the work function directly after try_to_grab_pending()
> returns 1 -- provided it's called without holding any locks used by
> the flushed work.

If this work doesn't rearm itself - yes. (otherwise, the same ->func
can run twice _at the same time_)
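
For example, with a hypothetical self-rearming work function (do_poll()
stands for the real payload):

static void poll_work_fn(struct work_struct *work)
{
	schedule_work(work);	/* rearm for the next pass */
	do_poll();		/* the long-running part */
}

If the flusher grabs the pending instance and calls poll_work_fn()
directly, the rearm queues the work again, and keventd can start
running it while the direct call is still inside do_poll().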

But again, in this case wait_on_work() after try_to_grab_pending() == 1
doesn't block, so we can just do

if (cancel_work_sync(w))
	w->func(w);
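
IOW, wrapped up (just a sketch, not a mainline helper, and again only
valid for a work that doesn't rearm itself):

/*
 * "Flush" a single work: cancel it, and if it was still pending,
 * run its callback right here in the caller's context.
 */
static void run_work_sync(struct work_struct *w)
{
	if (cancel_work_sync(w))	/* 1 => w was queued */
		w->func(w);
}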

Oleg.
