Re: [PATCH] sched: fix env->src_cpu for active migration

From: Vincent Guittot
Date: Wed Feb 13 2013 - 02:54:56 EST


Hi Damien,

Thanks for the test and the feedback.
Could you send me the sched_domain configuration of your machine, taken
with a kernel that boots?
It's available in /proc/sys/kernel/sched_domain/cpu*/
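For example, assuming CONFIG_SCHED_DEBUG is enabled (the directory only
exists in that case), something like

  grep . /proc/sys/kernel/sched_domain/cpu*/domain*/*

run under the working kernel should capture the name, flags and
tunables of every domain level.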

This patch should not have any impact on your machine, but it looks like it has one.
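
For reference, the test in question is the SD_ASYM_PACKING check in
need_active_balance(); from memory it looks roughly like this in 3.8,
so treat it as a sketch rather than the verbatim source:

	static int need_active_balance(struct lb_env *env)
	{
		struct sched_domain *sd = env->sd;

		if (env->idle == CPU_NEWLY_IDLE) {
			/*
			 * ASYM_PACKING needs to force migrate tasks from
			 * busy but higher numbered CPUs in order to pack
			 * all tasks in the lowest numbered CPUs.
			 */
			if ((sd->flags & SD_ASYM_PACKING) &&
			    env->src_cpu > env->dst_cpu)
				return 1;
		}

		return unlikely(sd->nr_balance_failed > sd->cache_nice_tries + 2);
	}

x86 does not set SD_ASYM_PACKING in its topology flags, so this check
should never fire on your Core i7, which makes the boot failure all the
more surprising.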

Regards,
Vincent


On 13 February 2013 07:18, Damien Wyart <damien.wyart@xxxxxxxxx> wrote:
> Hi,
>
> I tested this on top of 3.8-rc7, and it made the machine (x86_64, Core
> i7 920) unable to boot (very early, as nothing at all is displayed on
> screen). Nothing appears in the kernel log (checked after booting with
> a working kernel).
>
> I double-checked by backing out only this patch, and the machine boots
> again.
>
> Damien
>
>> need_active_balance() uses env->src_cpu, which is set only if there is
>> more than one task on the run queue. We must set the src_cpu field
>> unconditionally, otherwise the test "env->src_cpu > env->dst_cpu" will
>> always fail when there is only one task on the run queue.
>>
>> Signed-off-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
>> ---
>> kernel/sched/fair.c | 6 ++++--
>> 1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 81fa536..32938ea 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -5044,6 +5044,10 @@ redo:
>>
>>  	ld_moved = 0;
>>  	lb_iterations = 1;
>> +
>> +	env.src_cpu = busiest->cpu;
>> +	env.src_rq = busiest;
>> +
>>  	if (busiest->nr_running > 1) {
>>  		/*
>>  		 * Attempt to move tasks. If find_busiest_group has found
>> @@ -5052,8 +5056,6 @@
>>  		 * correctly treated as an imbalance.
>>  		 */
>>  		env.flags |= LBF_ALL_PINNED;
>> -		env.src_cpu = busiest->cpu;
>> -		env.src_rq = busiest;
>>  		env.loop_max = min(sysctl_sched_nr_migrate, busiest->nr_running);
>>
>>  		update_h_load(env.src_cpu);
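
For completeness: load_balance() builds env with a designated
initializer that does not name src_cpu, so C zero-fills the field; a
sketch of the 3.8 initializer (member list approximate):

	struct lb_env env = {
		.sd		= sd,
		.dst_cpu	= this_cpu,
		.dst_rq		= this_rq,
		.idle		= idle,
		.loop_break	= sched_nr_migrate_break,
		.cpus		= cpus,
	};

With busiest->nr_running <= 1, env.src_cpu therefore stayed 0, and
"env->src_cpu > env->dst_cpu" in need_active_balance() could never be
true, which is what the patch above fixes.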
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/