Re: Performance regression from switching lock to rw-sem for anon-vma tree

From: Tim Chen
Date: Wed Jun 26 2013 - 20:25:04 EST


On Wed, 2013-06-26 at 14:36 -0700, Tim Chen wrote:
> On Wed, 2013-06-26 at 11:51 +0200, Ingo Molnar wrote:
> > * Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx> wrote:
> >
> > > On Wed, 2013-06-19 at 09:53 -0700, Tim Chen wrote:
> > > > On Wed, 2013-06-19 at 15:16 +0200, Ingo Molnar wrote:
> > > >
> > > > > > vmstat for mutex implementation:
> > > > > > procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
> > > > > > r b swpd free buff cache si so bi bo in cs us sy id wa st
> > > > > > 38 0 0 130957920 47860 199956 0 0 0 56 236342 476975 14 72 14 0 0
> > > > > > 41 0 0 130938560 47860 219900 0 0 0 0 236816 479676 14 72 14 0 0
> > > > > >
> > > > > > vmstat for rw-sem implementation (3.10-rc4)
> > > > > > procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
> > > > > > r b swpd free buff cache si so bi bo in cs us sy id wa st
> > > > > > 40 0 0 130933984 43232 202584 0 0 0 0 321817 690741 13 71 16 0 0
> > > > > > 39 0 0 130913904 43232 224812 0 0 0 0 322193 692949 13 71 16 0 0
> > > > >
> > > > > It appears the main difference is that the rwsem variant context-switches
> > > > > about 36% more than the mutex version, right?
> > > > >
> > > > > I'm wondering how that's possible - the lock is mostly write-locked,
> > > > > correct? So the lock-stealing from Davidlohr Bueso and Michel Lespinasse
> > > > > ought to have brought roughly the same lock-stealing behavior as mutexes
> > > > > do, right?
> > > > >
> > > > > So the next analytical step would be to figure out why rwsem lock-stealing
> > > > > is not behaving in an equivalent fashion on this workload. Do readers come
> > > > > in frequently enough to disrupt write-lock-stealing perhaps?
> > >
> > > Ingo,
> > >
> > > I did some instrumentation on the write lock failure path. I found that
> > > for the exim workload, there are no readers blocked on the rwsem when
> > > write locking fails. Lock stealing succeeds 9.1% of the time, and the
> > > remaining write lock failures cause the writer to go to sleep. About
> > > 1.4% of the writers sleep more than once; the majority sleep only once.
> > >
> > > It is weird that lock stealing is not successful more often.
> >
> > For this to be comparable to the mutex scalability numbers you'd have to
> > compare wlock-stealing _and_ adaptive spinning for failed-wlock rwsems.
> >
> > Are both techniques applied in the kernel you are running your tests on?
> >
>
> Ingo,
>
> The previous experiment was done on a kernel without spinning.
> I've redone the testing on two kernels for a 15 sec stretch of the
> workload run, one with the adaptive (or optimistic) spinning and the
> other without. Both have the patches from Alex to avoid cmpxchg-induced
> cache bouncing.
>
> With the spinning, writers sleep much less often for lock acquisition
> (18.6% vs 91.58%). However, the count of blocked write lock acquisitions
> doubled, which offsets the gain from spinning and may be why I didn't
> see a gain for this particular workload.
>
>                                                No Opt Spin    Opt Spin
> Writer acquisition blocked count                   3448946     7359040
> Blocked by reader                                    0.00%       0.55%
> Lock acquired first attempt (lock stealing)          8.42%      16.92%
> Lock acquired second attempt (1 sleep)              90.26%      17.60%
> Lock acquired after more than 1 sleep                1.32%       1.00%
> Lock acquired with optimistic spin                     N/A      64.48%
>
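
To clarify what the "optimistic spin" row above means: when a writer
misses the lock, instead of sleeping right away it keeps retrying as long
as the current owner is still running on a CPU. Roughly, a conceptual
sketch only (not the actual patch; the helpers below are placeholders
made up for illustration):

	/*
	 * Illustrative writer slow path with optimistic spinning.
	 * owner_running(), try_write_lock() and count_stat() are
	 * placeholders, not real kernel interfaces.
	 */
	while (owner_running(sem)) {
		if (try_write_lock(sem)) {
			count_stat(acquired_by_spinning);
			return;		/* got the lock without sleeping */
		}
		cpu_relax();		/* busy-wait; owner should release soon */
	}
	/* owner went to sleep or we gave up: fall back to blocking */
	count_stat(acquired_after_sleep);

The point is that as long as the owner holds the lock only briefly and
stays on a CPU, the spinner picks the lock up without a context switch.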

I'm also adding the mutex statistics for the 3.10-rc4 kernel with the
mutex implementation of the lock for the anon_vma tree. I wonder if Ingo
has any insight from these stats on why the mutex performs better.

Mutex acquisition blocked count                       14380340
Lock acquired in slowpath (no sleep)                     0.06%
Lock acquired in slowpath (1 sleep)                      0.24%
Lock acquired in slowpath (more than 1 sleep)            0.98%
Lock acquired with optimistic spin                       99.6%
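
These percentages come from simple per-path counters bumped in the slow
path, along the lines of the sketch below (the counter names, flags and
exact placement here are mine for illustration, not the real
instrumentation patch); the percentages are then each counter over the
blocked count at the end of the run:

	/* illustrative counters only, not the actual patch */
	static atomic_long_t blocked_cnt, no_sleep_cnt, one_sleep_cnt,
			     multi_sleep_cnt, opt_spin_cnt;

	/* on entering the slow path */
	atomic_long_inc(&blocked_cnt);

	/*
	 * On acquiring the lock, depending on how it was obtained.
	 * acquired_while_spinning and nr_sleeps are hypothetical local
	 * bookkeeping, not existing kernel variables.
	 */
	if (acquired_while_spinning)
		atomic_long_inc(&opt_spin_cnt);
	else if (nr_sleeps == 0)
		atomic_long_inc(&no_sleep_cnt);
	else if (nr_sleeps == 1)
		atomic_long_inc(&one_sleep_cnt);
	else
		atomic_long_inc(&multi_sleep_cnt);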

Thanks.

Tim
