Re: [PATCH 12/16] sched: refactor update_shares_cpu() -> update_blocked_averages()

From: Peter Zijlstra
Date: Thu Jul 05 2012 - 07:59:03 EST


On Wed, 2012-06-27 at 19:24 -0700, Paul Turner wrote:
> Now that running entities maintain their own load-averages the work we must do
> in update_shares() is largely restricted to the periodic decay of blocked
> entities. This allows us to be a little less pessimistic regarding our
> occupancy on rq->lock and the associated rq->clock updates required.

So what you're saying is that since 'weight' now includes runtime
behaviour (where we hope the recent past matches the near future) we
don't need to update shares quite as often, because the effect of
sleep-wakeup cycles isn't nearly as big now that they're already anticipated.

So how is the decay of blocked load still significant? Surely that too
is mostly part of the anticipated sleep/wake cycle already captured in
the runtime behaviour.

Or is this the primary place where we decay? If so, that wasn't obvious
and thus wants a comment someplace.
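
For reference, the decay I have in mind is the geometric one: every ~1ms
period multiplies a blocked contribution by y, with y^32 == 1/2. Purely as
an illustration (my own sketch, not code from this series; the function name
and the 1002/1024 fixed-point factor approximating 0.5^(1/32) are made up
for the example):

static unsigned long decay_load_sketch(unsigned long val, unsigned int periods)
{
	/* treat anything blocked for ~1s or longer as fully decayed */
	if (periods >= 32 * 32)
		return 0;

	/* every 32 periods (~32ms) is an exact halving, since y^32 == 1/2 */
	val >>= periods / 32;
	periods %= 32;

	/*
	 * Approximate y = 0.5^(1/32) as 1002/1024 for the remainder; a real
	 * implementation would use a precomputed fixed-point table instead
	 * of a loop. Assumes val stays well below ULONG_MAX/1024.
	 */
	while (periods--)
		val = (val * 1002) >> 10;

	return val;
}

(After ~96ms blocked, a contribution is down to 1/8 of its original value;
after ~320ms it is below 1%.)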

> Signed-off-by: Paul Turner <pjt@xxxxxxxxxx>
> ---

> +static void update_blocked_averages(int cpu)
> {
> 	struct rq *rq = cpu_rq(cpu);
> +	struct cfs_rq *cfs_rq;
> +
> +	unsigned long flags;
> +	int num_updates = 0;
>
> 	rcu_read_lock();
> +	raw_spin_lock_irqsave(&rq->lock, flags);
> +	update_rq_clock(rq);
> 	/*
> 	 * Iterates the task_group tree in a bottom up fashion, see
> 	 * list_add_leaf_cfs_rq() for details.
> 	 */
> 	for_each_leaf_cfs_rq(rq, cfs_rq) {
> +		__update_blocked_averages_cpu(cfs_rq->tg, rq->cpu);
>
> +		/*
> +		 * Periodically release the lock so that a cfs_rq with many
> +		 * children cannot hold it for an arbitrary period of time.
> +		 */
> +		if (num_updates++ % 20 == 0) {
> +			raw_spin_unlock_irqrestore(&rq->lock, flags);
> +			cpu_relax();
> +			raw_spin_lock_irqsave(&rq->lock, flags);

Gack.. that's not real pretty, is it.. Especially since we're still holding
the RCU lock and are thus (mostly) still not preemptible.

How much of a problem was this? The changelog is silent on it.

> +			update_rq_clock(rq);
> +		}
> 	}
> +
> +	raw_spin_unlock_irqrestore(&rq->lock, flags);
> 	rcu_read_unlock();
> }
>


