Re: [PATCH] netfilter: use per-cpu recursive lock (v11)
From: Stephen Hemminger
Date: Tue Apr 21 2009 - 01:46:11 EST
On Tue, 21 Apr 2009 13:22:27 +0800
Lai Jiangshan <laijs@xxxxxxxxxxxxxx> wrote:
> Eric Dumazet wrote:
> > Lai Jiangshan wrote:
> >> Stephen Hemminger wrote:
> >>> +/**
> >>> + * xt_info_rdlock_bh - recursive read lock for xt table info
> >>> + *
> >>> + * Table processing calls this to hold off any changes to table
> >>> + * (on current CPU). Always leaves with bottom half disabled.
> >>> + * If called recursively, then assumes bh/preempt already disabled.
> >>> + */
> >>> +void xt_info_rdlock_bh(void)
> >>> +{
> >>> +	struct xt_info_lock *lock;
> >>> +
> >>> +	preempt_disable();
> >>> +	lock = &__get_cpu_var(xt_info_locks);
> >>> +	if (likely(++lock->depth == 0))
> >> Maybe I missed something. I think softirqs may still be enabled here.
> >> So what happens when xt_info_rdlock_bh() is called recursively here?
> >
> > Well, the first time it's called you are right: softirqs are enabled until
> > the point we call spin_lock_bh(), right after this line:
>
> A recursive call to xt_info_rdlock_bh() here will enter the
> critical region without holding &__get_cpu_var(xt_info_locks)->lock.
No. spin_lock_bh always does a preempt_disable.

First call, preempt_count=0  (depth = -1):
	xt_info_rdlock_bh
	+1	preempt_disable
		spin_lock_bh
	+1	(preempt_disable inside spin_lock_bh)
	-1	preempt_enable_no_resched
	---
	+1

Second call, preempt_count=1  (depth = 0):
	xt_info_rdlock_bh
	+1	preempt_disable
	-1	preempt_enable_no_resched
	---
Result is preempt_count=1  (depth = 1).

Now let's do the unlocks:

	xt_info_rdunlock_bh	preempt_count=1  depth=1
		does nothing
	xt_info_rdunlock_bh	preempt_count=1  depth=0
	-1	spin_unlock_bh

Resulting preempt_count=0, depth = -1.

Same as the starting point.
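
For reference, the unlock side implied by this trace; the unlock body is not
quoted anywhere in this thread, so the following is a sketch reconstructed
from the counts above, not necessarily the exact patch code:

	/* Sketch only: reconstructed from the preempt_count/depth trace
	 * above; the real xt_info_rdunlock_bh is not quoted in this mail. */
	void xt_info_rdunlock_bh(void)
	{
		struct xt_info_lock *lock = &__get_cpu_var(xt_info_locks);

		/* inner unlocks just decrement the recursion depth... */
		if (likely(--lock->depth < 0))
			/* ...the outermost unlock drops the spinlock and
			 * re-enables bottom halves and preemption */
			spin_unlock_bh(&lock->lock);
	}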
> Because a recursive call to xt_info_rdlock_bh() here sees
> lock->depth >= 0, so "++lock->depth == 0" is false.
>
> >
> >
> >>> +		spin_lock_bh(&lock->lock);
> >>> +	preempt_enable_no_resched();
> >
> > After this line, both softirqs and preempt are disabled.
No: spin_lock_bh on the first pass is what does this.
> > Future calls to this function temporarily raise the preempt count and then
> > decrease it (null effect).
> >
> >>> +}
> >>> +EXPORT_SYMBOL_GPL(xt_info_rdlock_bh);
> >>> +
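
(For context, the intended call pattern as I understand it; the traversal
code is not quoted in this thread, so the function below and its
surroundings are assumptions for illustration only:)

	/* Assumed call pattern, for illustration; not quoted from the
	 * patch. Packet processing brackets each table walk with the
	 * per-cpu recursive lock, so a re-entrant walk on the same cpu
	 * (e.g. via a target that emits a packet) only bumps ->depth
	 * instead of deadlocking. */
	static unsigned int example_do_table(struct sk_buff *skb)
	{
		unsigned int verdict = NF_DROP;	/* placeholder verdict */

		xt_info_rdlock_bh();	/* leaves bottom halves disabled */
		/* ... walk this cpu's rule table, compute verdict ... */
		xt_info_rdunlock_bh();	/* balanced unlock, sketched above */

		return verdict;
	}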
> >> Is this OK for you:
> >>
> >> void xt_info_rdlock_bh(void)
> >> {
> >> 	struct xt_info_lock *lock;
> >>
> >> 	local_bh_disable();
> >
> > Well, Stephen was trying not to change the preempt count for the 2nd, 3rd, 4th, ... invocations of this function.
> > This is how I understood the code.
> >
> >> 	lock = &__get_cpu_var(xt_info_locks);
> >> 	if (likely(++lock->depth == 0))
> >> 		spin_lock(&lock->lock);
> >> }
> >>
>
> Sorry about that.
> Is this OK:
>
> void xt_info_rdlock_bh(void)
> {
> 	struct xt_info_lock *lock;
>
> 	local_bh_disable();
> 	lock = &__get_cpu_var(xt_info_locks);
> 	if (likely(++lock->depth == 0))
> 		spin_lock(&lock->lock);
> 	else
> 		local_bh_enable();
> }
Unnecessary.
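
To make that concrete: on a recursive call both variants are balanced; the
v11 code just never touches the softirq count. (The accounting below is an
assumption based on the trace earlier in this mail.)

	/*
	 * Recursive call, lock already held on this cpu (depth >= 0):
	 *
	 *   v11 (Stephen):                     proposed (Lai):
	 *     preempt_disable();       +1        local_bh_disable();  +1 bh
	 *     ++lock->depth != 0                 ++lock->depth != 0
	 *     preempt_enable_no_resched(); -1    local_bh_enable();   -1 bh
	 *
	 * Net change either way: zero, and the spinlock is not re-taken.
	 */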
> I did not think things through carefully enough, and I know
> nothing about ip/ip6/arp.
>
> Lai