Re: deadlocks if use htb
From: Jarek Poplawski
Date: Thu Jan 15 2009 - 05:54:41 EST
On Thu, Jan 15, 2009 at 11:46:48AM +0100, Peter Zijlstra wrote:
> On Thu, 2009-01-15 at 09:01 +0000, Jarek Poplawski wrote:
...
> >   CPU A                          CPU B
> >   spin_lock
> >   (not this hrtimer's anymore)
> >   __remove_hrtimer
> >   list_add_tail                  enqueue_hrtimer
> >
>
> (looking at .28 code)
>
> run_hrtimer_pending() reads like:
>
> while (pending timers) {
>         __remove_hrtimer(timer, HRTIMER_STATE_CALLBACK);
>         spin_unlock(&cpu_base->lock);
>
>         fn(timer);
>
>         spin_lock(&cpu_base->lock);
>         timer->state &= ~HRTIMER_STATE_CALLBACK; // _should_ result in HRTIMER_STATE_INACTIVE
>         if (HRTIMER_RESTART)
>                 re-queue
>         else if (timer->state != INACTIVE) {
>                 // so another cpu re-queued this timer _while_ we were executing it.
>                 if (timer is first && !reprogram) {
>                         __remove_hrtimer(timer, HRTIMER_STATE_PENDING);
>                         list_add_tail(timer, &cb_pending);
>                 }
>         }
> }
>
> So in the window where we drop the lock, one can, as you said, have
> another cpu requeue the timer, but the rb_entry and list_entry are free,
> so it should not cause the data corruption we're seeing.
>
Can't the timer be enqueued to the cb_pending list (without the lock) and
to the rbtree at the same time? Then the removal is done for the list only?
Jarek P.