Re: [Resend patch v8 0/13] use runnable load in schedule balance

From: Alex Shi
Date: Tue Jul 09 2013 - 04:54:21 EST


On 06/29/2013 12:00 AM, Paul Turner wrote:
> On Fri, Jun 28, 2013 at 6:20 AM, Alex Shi <lkml.alex@xxxxxxxxx> wrote:
>>
>>> So this is actually an interesting idea, but don't think of it as
>>> overweight. What "cfs_rq->blocked_load_avg / 2" means is actually
>>> blocked_load_avg one period from now. This is interesting because it
>>> makes the (reasonable) supposition that blocked load is not about to
>>> immediately wake, but will continue to decay.
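
(For anyone following along, a minimal user-space sketch of the decay
Paul describes; the function name and constants below are mine, not
the kernel's exact fixed-point code in kernel/sched/fair.c. PELT decays
each load contribution by a factor y per ~1ms period, with y chosen so
that y^32 = 0.5, so one half-life (~32ms) from now the blocked load has
decayed to half its current value, which is exactly
blocked_load_avg / 2.)

#include <stdio.h>
#include <math.h>

/* Decay 'val' by n ~1ms periods, with y chosen so that y^32 == 0.5. */
static double decay_load(double val, unsigned int n)
{
	const double y = pow(0.5, 1.0 / 32.0);

	return val * pow(y, n);
}

int main(void)
{
	double blocked = 1024.0;

	printf("now:              %.1f\n", blocked);
	/* one half-life later: ~512.0, i.e. blocked_load_avg / 2 */
	printf("32 periods later: %.1f\n", decay_load(blocked, 32));
	return 0;
}
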
>>>
>>> Could you try testing the gvr_lb_tip branch at
>>> git://git.kernel.org/pub/scm/linux/kernel/git/pjt/sched-tip.git ?
>>>
>>
>> Could you rebase the patch on latest tip/sched/core?
>
> I suspect it's simpler to just check out and test the branch
> directly (i.e. you should not need to apply it on top of any other
> branch). It should be based roughly on where you previously
> tested.

I tested aim7, hackbench, tbench and dbench on NHM EP, SNB EP 2S/4S and
IVB EP machines, comparing against my rlbv8 series (same as upstream
except no blocked_load_avg on tg); both kernels are based on 3.9.0.

aim7 drops about 10% on SNB EP 2S/4S.
hackbench drops about 10% on SNB EP 4S, and 1~5% on the other 2-socket
machines (NHM EP/IVB EP/SNB EP).


tbench/dbench failed to run because of a buggy commit your branch
depends on; that bug has already been fixed in the upstream kernel.
---
Running for 600 seconds with load '/usr/local/share/client.txt' and
minimum warmup 120 secs
failed to create barrier semaphore.


>
>>
>>>
>>> It's an extension to your series that tries to improve some of the
>>> cpu_load interactions in an alternate way to the above.
>>>
>>> It seems a little better on one- and two-socket machines, but we
>>> couldn't reproduce/compare to your best performance results, since
>>> they were taken on larger machines.
>>>
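
(For reference, a simplified sketch of the cpu_load interaction Paul
mentions, as I read the 3.9-era __update_cpu_load(); this is my
paraphrase, not verbatim kernel code, and it omits the rounding tweaks.
Each rq keeps a small cpu_load[] array whose entries track the cpu's
load with progressively longer memory, and as I understand it this
series mainly changes which figure is fed in as this_load:
instantaneous weighted load vs. runnable load average.)

#define CPU_LOAD_IDX_MAX 5

/* Index i averages with weight 2^i, so higher indexes react slower. */
static void update_cpu_load(unsigned long cpu_load[CPU_LOAD_IDX_MAX],
			    unsigned long this_load)
{
	unsigned long i, scale;

	cpu_load[0] = this_load;	/* index 0 keeps no history */
	for (i = 1, scale = 2; i < CPU_LOAD_IDX_MAX; i++, scale += scale)
		cpu_load[i] = (cpu_load[i] * (scale - 1) + this_load)
				/ scale;
}
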
--
Thanks
Alex