Re: iperf yield usage

From: Ingo Molnar
Date: Mon Oct 01 2007 - 15:10:35 EST



* Chris Friesen <cfriesen@xxxxxxxxxx> wrote:

> >See the background and answers to that in:
> >
> > http://lkml.org/lkml/2007/9/19/357
> > http://lkml.org/lkml/2007/9/19/328
> >
> >there's plenty of recourse possible for all kinds of apps.
> >Tune the sysctl flag in one direction or another, depending on which
> >behavior the app is expecting.
>
> Yeah, I read those threads.
>
> It seems like the fundamental source of the disconnect is that the
> tasks used to be sorted by priority (thus making it easy to bump a
> yielding task to the end of that priority level) while now they're
> organized by time (making it harder to do anything priority-based).
> Do I have that right?

not really - the old yield implementation in essence gave the task a
time hit too, because we rotated through tasks based on timeslices. But
the old one requeued yielding tasks to the 'active array', and the
decision whether a task was in the active or in the expired array was a
totally stochastic, load-dependent thing. As a result, certain tasks
under certain workloads saw a "stronger" yield, while other tasks saw a
"weaker" yield. (The reason for that implementation was simple: yield
was (and is) unimportant and it was implemented in the most
straightforward way that caused no overhead anywhere else in the
scheduler.)
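
( to make that concrete, here is a rough sketch of what the old yield
amounted to, in simplified C - the structure and helper names below are
made up for illustration only, this is not the real kernel code: )

#include <stddef.h>

#define NPRIO 140

struct task {
	int prio;			/* static priority level */
	struct task *next;		/* next task in the same FIFO */
};

/* one FIFO per priority level, in each of the two arrays */
struct runqueue {
	struct task *active[NPRIO];
	struct task *expired[NPRIO];
};

/* unlink a task from a FIFO */
static void dequeue(struct task **head, struct task *p)
{
	while (*head && *head != p)
		head = &(*head)->next;
	if (*head)
		*head = p->next;
	p->next = NULL;
}

/* append a task at the tail of a FIFO */
static void enqueue_tail(struct task **head, struct task *p)
{
	p->next = NULL;
	while (*head)
		head = &(*head)->next;
	*head = p;
}

/*
 * old-style yield: the task stays in the *active* array and is merely
 * requeued behind the other runnable tasks of its own priority level.
 * How much of a hit that is depends on which array the competing tasks
 * happen to sit in - a load-dependent, effectively stochastic property.
 */
void yield_task(struct runqueue *rq, struct task *p)
{
	dequeue(&rq->active[p->prio], p);
	enqueue_tail(&rq->active[p->prio], p);
}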

( and to keep perspective it's also important to correct the subject
line here: it's not about a "network slowdown" - nothing in networking
slowed down in any way; it was that iperf used yield in a horrible
way. I changed the subject line to reflect that. )
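
( and, as a hedged illustration only - this is not iperf's actual
source - the kind of pattern that gets hurt is a busy-wait that spins
on sched_yield() while waiting for another thread, instead of blocking
on a condition variable: )

#include <pthread.h>
#include <sched.h>

extern volatile int data_ready;		/* set by another thread */
extern pthread_mutex_t lock;
extern pthread_cond_t cond;

/* the antipattern: progress depends entirely on how "strong" the
 * scheduler's yield happens to be */
void wait_busy(void)
{
	while (!data_ready)
		sched_yield();
}

/* the portable fix: block instead of yielding */
void wait_blocking(void)
{
	pthread_mutex_lock(&lock);
	while (!data_ready)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}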

Ingo