Re: [PATCH -v8][RFC] mutex: implement adaptive spinning

From: Boaz Harrosh
Date: Mon Jan 12 2009 - 12:33:35 EST


Peter Zijlstra wrote:
> On Mon, 2009-01-12 at 08:20 -0800, Linus Torvalds wrote:
>> On Mon, 12 Jan 2009, Linus Torvalds wrote:
>>> You made it back into the locked version.
>> Btw, even if you probably had some reason for this, one thing to note is
>> that I think Chris' performance testing showed that the version using a
>> lock was inferior to his local btrfs hack, while the unlocked version
>> actually beat his hack.
>>
>> Maybe I misunderstood his numbers, though. But if I followed that sub-part
>> of the test right, it really means that the locked version is pointless -
>> it will never be able to replace peoples local hacks for this same thing,
>> because it just doesn't give the performance people are looking for.
>>
>> Since the whole (and _only_) point of this thing is to perform well,
>> that's a big deal.
>
> Like said in reply to Chris' email, I just wanted to see if fairness was
> worth the effort, because the pure unlocked spin showed significant
> unfairness (and I know some people really care about some level of
> fairness).
>

Which brings me back to my initial reaction to this work. Do we need
two flavors of mutex? Some code paths need fairness, some need
performance. Some need low latency, some need absolute raw CPU
throughput.
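Concretely, I imagine something like a per-mutex flavor picked at
init time. A purely hypothetical sketch follows; these names do not
exist anywhere in the -v8 patch:

/* Hypothetical API sketch, not part of -v8. */
enum mutex_flavor {
	MUTEX_FAIR,	/* FIFO wakeups, no spinning: predictable latency */
	MUTEX_ADAPTIVE,	/* may spin, may be unfair: raw throughput */
};

void mutex_init_flavored(struct mutex *lock, enum mutex_flavor flavor);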

Because at the end of the day, spinning in a saturated-CPU workload
that does not care about latency eats cycles that could be spent on
computation. Think multi-threaded video processing, for example.
What I would like to measure is:
1 - how many times we spin and in the end take the lock;
2 - how many times we spin and in the end sleep anyway;
3 - how many times we go to sleep straight away, as before;
versus the CPU time the old code spends in the scheduler. Just to see
whether we are actually losing cycles in the end (a rough sketch of
such instrumentation is below).
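Something like the following userspace sketch is what I have in mind.
It is illustrative only: the counter names, the SPIN_LIMIT budget,
and the owner_running flag (standing in for the kernel's "is the lock
owner currently running on a CPU?" test) are all made up here, not
taken from the -v8 patch.

#include <pthread.h>
#include <stdatomic.h>

#define SPIN_LIMIT 1000			/* arbitrary spin budget */

static atomic_long spin_then_lock;	/* 1: spun, then got the lock  */
static atomic_long spin_then_sleep;	/* 2: spun, gave up, and slept */
static atomic_long direct_sleep;	/* 3: slept without spinning   */

/*
 * owner_running stands in for the adaptive-spin heuristic: only
 * spin while the lock owner is actually running on another CPU.
 */
static void adaptive_lock(pthread_mutex_t *m, int owner_running)
{
	if (owner_running) {
		for (int i = 0; i < SPIN_LIMIT; i++) {
			if (pthread_mutex_trylock(m) == 0) {
				atomic_fetch_add(&spin_then_lock, 1);
				return;
			}
		}
		atomic_fetch_add(&spin_then_sleep, 1);
	} else {
		atomic_fetch_add(&direct_sleep, 1);
	}
	pthread_mutex_lock(m);		/* classic sleeping acquisition */
}

Comparing counters 1 and 2 against counter 3 and the scheduling
overhead of the old sleeping path would show whether the spinning
actually pays for itself under a saturated workload.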

> Initial testing with the simple test-mutex thing didn't show too bad
> numbers.
>

Boaz