Re: Performance regression from switching lock to rw-sem for anon-vma tree

From: Ingo Molnar
Date: Fri Jun 28 2013 - 05:20:49 EST



* Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx> wrote:

> > Yet the 17.6% sleep percentage is still much higher than the 1% in the
> > mutex case. Why doesn't spinning work - do we time out of spinning
> > differently?
>
> I have some stats for the 18.6% of cases (including the 1% with more
> than one sleep) that go to sleep after failing optimistic spinning.
> There are 3 abort points in the rwsem_optimistic_spin code:
>
> 1. 11.8% is due to abort point #1, where we find no owner and assume
> that a reader probably owns the lock, since we had just tried to
> acquire the lock for lock stealing. I think I will need to actually
> check sem->count to make sure the lock is reader-owned before
> aborting the spin.

That looks to be the biggest remaining effect.

> 2. 6.8% is due to abort point #2, where the lock owner switches
> to another writer, or we need rescheduling.
>
> 3. A minuscule amount is due to abort point #3, where the lock has
> no owner but we need rescheduling.

The percentages here might go down if #1 is fixed. Excessive scheduling
creates wakeups, raises the rate of preemption, and wakes waiting
writers.

There's a chance that if you fix #1 you'll get to the mutex equivalency
Holy Grail! :-)

> See the other thread for complete patch of rwsem optimistic spin code:
> https://lkml.org/lkml/2013/6/26/692
>
> Any suggestions on tweaking this are appreciated.

I think you are on the right track: the goal is to eliminate these sleeps;
the mutex case proves that it is possible to just spin and rarely sleep.

Matching it would be harder if the mutex workload showed significant
internal complexity - but it does not; it still just behaves like a
spinlock, right?

Thanks,

Ingo