Re: [ANNOUNCE] Linsched for 2.6.35 released

From: Ranjit Manomohan
Date: Mon Nov 15 2010 - 20:52:15 EST


On Mon, Oct 18, 2010 at 9:52 PM, Vaidyanathan Srinivasan
<svaidy@xxxxxxxxxxxxxxxxxx> wrote:
> * Ranjit Manomohan <ranjitm@xxxxxxxxxx> [2010-10-12 10:29:54]:
>
>> Hi,
>>   I would like to announce the availability of the Linux Scheduler Simulator
>> (Linsched) for 2.6.35.
>>
>> Originally developed at the University of North Carolina, LinSched is a
>> user-space program that hosts the Linux scheduling subsystem.
>> Its purpose is to provide a tool for observing and modifying the behavior
>> of the Linux scheduler. This makes it a valuable tool for prototyping new
>> Linux scheduling policies in a way that may be easier (or otherwise
>> less painful and time-consuming) for many developers than working
>> with real hardware.
>
> The idea and framework look very interesting.  I tried it out to
> understand the workload model and the verification model, and it
> worked fine for the test cases that you have provided.

Thanks for evaluating it.

>
>> Since Linsched allows arbitrary hardware topologies to be modeled,
>> it enables testing of scheduler changes on hardware that may not be
>> easily accessible to the developer. For example, most developers don't
>> have access to a quad-core quad-socket box, but they can use LinSched
>> to see how their changes affect the scheduler on such boxes.
>
> I am interested in trying this simulator in order to
> design/study/verify task placement logic within the SMP loadbalancer.
> Basically the effects of SD_POWERSAVINGS_BALANCE, SD_PREFER_SIBLING,
> etc in various topologies.
>
> The current interface and verification mechanism is to create tasks
> and observe the runtime received by each task.  In an ideal
> loadbalancer situation, all tasks should have received runtime
> proportional to their priority.
>
> Can you help me figure out how to get to kstat_cpu() or per-cpu
> kernel_stat accounting/utilisation metrics within the simulation?

We don't use kstat_cpu accounting in the simulation, since it doesn't
really make sense in this environment.

We have a timer-driven loop that advances time globally and kicks off
events scheduled to run at specified times on each CPU. The periodic
timer tick is one of these events. Since there is no real notion of
system vs. user time in this scenario, the current code disables the
update_process_times() routine. I am not sure how these times relate to
the task placement logic you are trying to verify. If you could let me
know how you plan to use them, I can try to accommodate that in the
simulation.

Sorry for the delay in response. My mail filters messed this up.

-Thanks,
Ranjit

>
> Thanks for sharing the framework.
>
> --Vaidy
>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/