Re: [RFC PATCH 00/14] sched: entity load-tracking re-work

From: Paul Turner
Date: Mon Feb 20 2012 - 21:34:12 EST


On Mon, Feb 20, 2012 at 1:41 AM, Nikunj A Dadhania
<nikunj@xxxxxxxxxxxxxxxxxx> wrote:
> On Fri, 17 Feb 2012 02:48:06 -0800, Paul Turner <pjt@xxxxxxxxxx> wrote:
>>
>> This is almost certainly a result of me twiddling with the weight in
>> calc_cfs_shares (using the average instead of the instantaneous weight)
>> in this version -- see patch 11/14.  While this had some nice stability
>> properties it was not good for fairness, so I've since reverted it
>> (snippet attached below).
>>
> For my understanding, what do you mean by stability here?

The result is stable: repeating the experiment returns the same number.
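
To illustrate what "average instead of instantaneous weight" changes in
calc_cfs_shares, here is a tiny stand-alone sketch -- not the reverted
snippet from patch 11/14 (which isn't reproduced here); the weights and
the GROUP_SHARES value are made up for illustration:

#include <stdio.h>

#define GROUP_SHARES	1024	/* stand-in for tg->shares */

/* shares_local = GROUP_SHARES * w_local / w_total -- the same general
 * form calc_cfs_shares() uses to split a group's shares across cpus. */
static long calc_shares(long w_local, long w_total)
{
	if (!w_total)
		return GROUP_SHARES;
	return GROUP_SHARES * w_local / w_total;
}

int main(void)
{
	/* instantaneous weight: tracks exactly which tasks are runnable
	 * right now, so it jumps around as tasks block and wake */
	printf("instantaneous: %ld\n", calc_shares(2048, 3072));

	/* decayed average: moves smoothly, so repeated runs of the same
	 * benchmark compute the same share (stable), but it lags the
	 * current task mix (less fair at any given instant) */
	printf("average:       %ld\n", calc_shares(1536, 3072));
	return 0;
}

The averaged input varies little from run to run (the stability above),
at the cost of lagging the weight that is actually runnable at any
given instant.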

>
>>
>> 24-core:
>> Starting task group fair16...done
>> Starting task group fair32...done
>> Starting task group fair48...done
>> Waiting for the task to run for 120 secs
>> Interpreting the results. Please wait....
>> Time consumed by fair16 cgroup:  12628615 Tasks: 96
>> Time consumed by fair32 cgroup:  12562859 Tasks: 192
>> Time consumed by fair48 cgroup:  12600364 Tasks: 288
>>
> "Tasks:" should be 16,32,48?
>

Ah, I ran your script multiple times (for stability) above, and it must
not have been killing its tasks properly between runs; notice that each
of those task counts is the respective per-run count times 6
(e.g. 16 * 6 = 96).

A correct first run on a 24-core looks like:
Starting task group fair16...done
Starting task group fair32...done
Starting task group fair48...done
Waiting for the task to run for 120 secs
Interpreting the results. Please wait....
Time consumed by fair16 cgroup: 1332211 Tasks: 16
Time consumed by fair32 cgroup: 1227356 Tasks: 32
Time consumed by fair48 cgroup: 1217174 Tasks: 48
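
As a quick sanity check on these numbers (assuming the three groups get
equal shares), a throwaway snippet like this compares each total
against the mean and shows fair16 roughly 6% high, with the other two a
few percent low:

#include <stdio.h>

int main(void)
{
	const char *name[] = { "fair16", "fair32", "fair48" };
	double t[] = { 1332211, 1227356, 1217174 };	/* totals from above */
	double mean = (t[0] + t[1] + t[2]) / 3.0;

	for (int i = 0; i < 3; i++)
		printf("%s: %+.2f%% vs mean\n", name[i],
		       100.0 * (t[i] - mean) / mean);
	return 0;
}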

The small boost to the tasks=16 case is almost certainly tied to our
current handling of sleeper credit and entity placement: since there
are fewer tasks than cores, whenever a task moves to a core it has not
previously been executing on, it gets a vruntime boost.
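
To make that concrete, here is a simplified user-space paraphrase of
the wakeup path of place_entity() in kernel/sched/fair.c of that era --
not the exact kernel code; the latency constant and the
GENTLE_FAIR_SLEEPERS halving are just the usual defaults:

#include <stdio.h>

typedef unsigned long long u64;

#define SCHED_LATENCY_NS	6000000ULL	/* ~6ms default latency period */

/* vruntime at which a waking entity is (re)placed on a runqueue */
static u64 place_on_wakeup(u64 min_vruntime, u64 se_vruntime)
{
	/* sleeper credit: place the entity up to half a latency period
	 * behind min_vruntime (the GENTLE_FAIR_SLEEPERS behaviour) ... */
	u64 vruntime = min_vruntime - SCHED_LATENCY_NS / 2;

	/* ... but never move vruntime backwards, which would grant
	 * extra credit the entity did not earn */
	return vruntime > se_vruntime ? vruntime : se_vruntime;
}

int main(void)
{
	u64 min_vr = 100000000ULL;	/* hypothetical cfs_rq->min_vruntime */

	/* a task that has been sleeping is pulled up to min_vr - 3ms,
	 * i.e. it starts ~3ms of vruntime ahead of the pack */
	printf("sleeper placed at: %llu\n", place_on_wakeup(min_vr, 50000000ULL));

	/* a recently-running task simply keeps its own vruntime */
	printf("runner  placed at: %llu\n", place_on_wakeup(min_vr, 99000000ULL));
	return 0;
}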

Thanks,

- Paul


> Regards,
> Nikunj
>