Re: [PATCH 1/2] sched/deadline: add per rq tracking of admitted bandwidth

From: luca abeni
Date: Wed Feb 24 2016 - 16:47:00 EST


Hi,

On Wed, 24 Feb 2016 20:17:52 +0100
Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:

> On Fri, Feb 12, 2016 at 06:05:30PM +0100, Peter Zijlstra wrote:
> > Having two separate means of accounting this also feels more fragile
> > than one would want.
> >
> > Let me think a bit about this.
>
> I think there's a fundamental problem that makes the whole notion of
> per-rq accounting 'impossible'.
>
> On hot-unplug we only migrate runnable tasks, all blocked tasks remain
> on the dead cpu. This would very much include their bandwidth
> requirements.
>
> This means that between a hot-unplug and the moment that _all_ those
> blocked tasks have ran at least once, the sum of online bandwidth
> doesn't match and we can get into admission trouble (same for GRUB,
> which can also use per-rq bw like this).

After reading Juri's patch and the follow-up emails, I tried to think
through the CPU hot-(un)plugging issues, and to check if/how they
affect GRUB reclaiming...

I arrived at the conclusion that for GRUB this is not a problem (but,
as usual, I might be wrong): GRUB just needs to track the per-runqueue
active/inactive utilization, and it is not badly affected by the fact that
inactive utilization is migrated "too late" (when a task wakes up
instead of when the CPU goes offline). This is because GRUB does not
care about "global" utilization, but considers the various runqueues in
isolation (there is a flavor of the m-grub algorithm that uses global
inactive utilization, but it is not implemented by the patches I
submitted).
In other words: Juri's patch uses the per-runqueue utilizations to
re-build the global utilization, while GRUB does not care whether the
sum of the "active utilizations" matches the utilization used for
admission control.
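
To make the difference more concrete, here is a minimal user-space
sketch of the kind of accounting GRUB needs (this is NOT the actual
patch; all the names are invented for illustration): each runqueue
tracks its own active utilization, increased when a task becomes
active on it and decreased when the task's "0-lag time" passes, and
the running task's budget is depleted as dq = -Uact * dt using only
the local value.

/* Toy, user-space illustration of per-rq GRUB accounting (not kernel
 * code; struct/field names are invented for this example). */
#include <stdio.h>

struct toy_rq {
	double active_util;	/* sum of the utilizations of the "active" tasks on this rq */
};

/* A task becomes active on this rq (wake-up / migration at wake-up time). */
static void task_activates(struct toy_rq *rq, double util)
{
	rq->active_util += util;
}

/* The task's "0-lag time" has passed: its utilization is no longer active. */
static void task_becomes_inactive(struct toy_rq *rq, double util)
{
	rq->active_util -= util;
}

/*
 * GRUB runtime accounting: deplete the running task's budget as
 *	dq = -Uact * dt
 * using only the *local* active utilization; no global sum is needed.
 */
static double grub_depletion(const struct toy_rq *rq, double dt)
{
	return rq->active_util * dt;
}

int main(void)
{
	struct toy_rq rq0 = { 0.0 }, rq1 = { 0.0 };

	task_activates(&rq0, 0.25);	/* task A, U = 0.25, on CPU 0 */
	task_activates(&rq1, 0.50);	/* task B, U = 0.50, on CPU 1 */

	/* Each CPU reclaims based on its own runqueue only. */
	printf("CPU0 depletes %.3f of budget per ms\n", grub_depletion(&rq0, 1.0));
	printf("CPU1 depletes %.3f of budget per ms\n", grub_depletion(&rq1, 1.0));

	task_becomes_inactive(&rq1, 0.50);	/* B's 0-lag time passed */
	printf("CPU1 now depletes %.3f per ms\n", grub_depletion(&rq1, 1.0));
	return 0;
}

Even if, right after a hot-unplug, the sum of the per-runqueue active
utilizations temporarily disagrees with the globally admitted
bandwidth, each CPU's reclaiming decision only depends on its own
runqueue; this is why I think GRUB is not broken by the "late"
migration of the blocked tasks' utilization.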

I still have to check some details and run some more tests with CPU
hot-(un)plugging (this is why I have not sent a v2 of the reclaiming
RFC yet)... In particular, I need to check what happens if the
"inactive timer" fires when the CPU on which the task was running is
already offline.
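
To show the scenario I am worried about, here is a tiny continuation
of the toy model above (again, the names are invented and this is not
the actual code): the "inactive timer" has to subtract the task's
utilization from the runqueue it is accounted on, and the question is
what to do when that CPU has gone offline in the meantime.

/* Toy continuation of the previous sketch (invented names): what should
 * the "inactive timer" handler do if the task's old CPU went offline? */
#include <stdbool.h>
#include <stdio.h>

struct toy_rq {
	bool online;
	double active_util;
};

/* Fired at the task's 0-lag time, possibly long after it blocked. */
static void toy_inactive_timer(struct toy_rq *old_rq, double util)
{
	if (old_rq->online) {
		/* Normal case: drop the task's utilization from the rq it
		 * blocked on. */
		old_rq->active_util -= util;
	} else {
		/*
		 * Open question: the CPU the task was running on is already
		 * offline.  Its active utilization either has to be handed
		 * over when the CPU goes down, or the handler must find out
		 * where it is accounted now; simply subtracting it from the
		 * dead rq would be wrong (or a no-op).
		 */
		printf("old CPU offline: need to decide where to account -%.2f\n",
		       util);
	}
}

int main(void)
{
	struct toy_rq rq = { .online = false, .active_util = 0.0 };

	toy_inactive_timer(&rq, 0.25);
	return 0;
}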



Thanks,
Luca