Re: [RFC][PATCH 18/22] sched: add reclaiming logic to -deadlinetasks

From: Peter Zijlstra
Date: Fri Nov 12 2010 - 19:43:55 EST


On Fri, 2010-11-12 at 19:07 +0100, Tommaso Cucinotta wrote:
> Il 12/11/2010 18:41, Luca Abeni ha scritto:
> >
> >> algorithm with resource reservation. It explicitly allows for deadline
> >> misses, but requires the tardiness of those misses to be bounded, ie.
> >> the UNC soft real-time definition.
> >>
> >> The problem the stochastic execution time model tries to address is the
> >> WCET computation mess, WCET computation is hard and often overly
> >> pessimistic, resulting in under-utilized systems.
> >
> > [...]
> > BTW, sorry for the shameless plug, but even with the current
> > SCHED_DEADLINE you are not forced to dimension the runtime using the
> > WCET. You can use some stochastic analysis, providing probabilistic
> > deadline guarantees. See (for example) "QoS Guarantee Using
> > Probabilistic Deadlines"
> > http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.7683&rep=rep1&type=pdf
> >
> > and "Stochastic analysis of a reservation based system"
> > http://www.computer.org/portal/web/csdl/doi?doc=doi/10.1109/IPDPS.2001.925049
> >
> > (sorry, this is not easy to download... But I can provide a copy if
> > you are interested).
> Thanks, Luca, for supporting the viewpoint. I also repeated this
> multiple times, during the LPC as well.
>
> Let me underline a few key points, also about the technique suggested by
> Zijlstra:
>
> -) the specification of a budget every period may be exploited for
> providing deterministic guarantees to applications, if the budget =
> WCET, as well as probabilistic guarantees, if the budget < WCET. For
> example, what we do in many of our papers is to set budget = to some
> percentile/quantile of the observed computation time distribution,
> especially in those cases in which there are isolated peaks of
> computation times which would cause an excessive under-utilization of
> the system (these are ruled out by the percentile-based allocation); I
> think this is a way of reasoning that can be easily understood and used
> by developers;

Maybe, but I'm clearly not one of them because I'm not getting it.

> -) setting a budget equal to (or too close to) the average computation
> time is *bad*, because the is almost in a meta-stable condition in which
> its response-time may easily grow uncontrolled;

How so? Didn't the paper referenced just prove that the response time
stays bounded?

Setting it lower will of course wreak havoc, but that's what we have
bandwidth control for (implementing stochastic bandwidth control is a
whole separate fun topic though -- although I've been thinking we could
do something by lowering the max runtime every time a job overruns the
average, and limit it at 2*avg - max, if you take a simple parametrized
reduction function and compute the variability of th resulting series
you can invert that and find the reduction parameter to a given
variability).

> -) same thing applies to admitting tasks in the system: if you only
> ensure that the sum of average/expected bandwidths < system capacity,
> then the whole system is at risk of having uncontrolled and arbitrarily
> high peak delays, but from a theoretical viewpoint it is still a
> "stable" system; this is not a condition you want to have in a sane
> real-time scenario;

I'm not seeing where the unbounded comes from.

I am seeing that if you place your budget request slightly higher than
the actual average (say 1 stdev) your variability in the response time
will decrease, but at a cost of lower utilization.

> -) if you want to apply the Mills & Anderson's rule for controlling the
> bound on the tardiness percentiles, as in that paper (A Stochastic
> Framework for Multiprocessor
> Soft Real-Time Scheduling), then I can see 2 major drawbacks:
> a) you need to compute the "\psi" in order to use the "Corollary 10"
> of that paper, but that quantity needs to solve a LP optimization
> problem (see also the example in Section 6); the \psi can be used in Eq.
> (36) in order to compute the *expected tardiness*;

Right, but do we ever actually want to compute the bound? G-EDF also
incurs tardiness but we don't calculate it either. Both depend on the
full task set parameters which is not stable (nor accessible to
user-space in a meaningful manner).

> Please, understand me: I don't want to say that that particular
> technique is not useful, but I'd like simply to stress that such
> policies might just belong to the user-space. If you really want, you
> can disable *any* type of admission control at the kernel-level, and you
> can disable *any* kind of budget enforcement, and just trust the
> user-space to have deployed the proper/correct number & type of tasks
> into your embedded RT platform.

I'm very much against disabling everything and letting the user sort it,
that's basically what SCHED_FIFO does too and its a frigging nightmare.

The whole admission control and schedulability test is what makes this
thing usable. People who build closed systems can already hack their
kernel and do as they please, but for anything other than a closed
system this approach is useless.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/