Re: [PATCH v3 2/7] sched/topology: Define and assign sched_domain flag metadata

From: Valentin Schneider
Date: Thu Jul 02 2020 - 12:25:50 EST



On 02/07/20 16:45, Quentin Perret wrote:
> On Thursday 02 Jul 2020 at 15:31:07 (+0100), Valentin Schneider wrote:
>> There's an "interesting" quirk of asym_cpu_capacity_level() in that it does
>> something slightly different than what it says on the tin: it detects
>> the lowest topology level where *the biggest* CPU capacity is visible by
>> all CPUs. That works just fine on big.LITTLE, but there are questionable
>> DynamIQ topologies that could hit some issues.
>>
>> Consider:
>>
>> DIE [                ]
>> MC  [          ][    ] <- sd_asym_cpucapacity
>>      0  1  2  3  4  5
>>      L  L  B  B  B  B
>>
>> asym_cpu_capacity_level() would pick MC as the asymmetric topology level,
>> and you can argue either way: it should be DIE, because that's where CPUs 4
>> and 5 can see a LITTLE, or it should be MC, at least for CPUs 0-3 because
>> there they see all CPU capacities.
>
> Right, I am not looking forward to these topologies...

I'll try my best to prevent those from seeing the light of day, but you
know how this works...

>> Say there are two clusters in the system, one with a lone big CPU and the
>> other with a mix of big and LITTLE CPUs:
>>
>> DIE [             ]
>> MC  [          ][ ]
>>      0  1  2  3  4
>>      L  L  B  B  B
>>
>> asym_cpu_capacity_level() will figure out that the MC level is the one
>> where all CPUs can see a CPU of max capacity, and we will thus set
>> SD_ASYM_CPUCAPACITY at MC level for all CPUs.
>>
>> That lone big CPU will degenerate its MC domain, since it would be alone in
>> there, and will end up with just a DIE domain. Since the flag was only set
>> at MC, this CPU ends up not seeing any SD with the flag set, which is
>> broken.
>
> +1
>
>> Rather than clearing dflags at every topology level, clear it before
>> entering the topology level loop. This will properly propagate upwards
>> flags that are set starting from a certain level.
>
> I'm feeling a bit nervous about that asymmetry -- in your example
> select_idle_capacity() on, say, CPU 3 will see fewer CPUs than on CPU 4.
> So, you might get fun side-effects where all tasks migrated to CPUs 0-3
> will be 'stuck' there while CPU 4 stays mostly idle.
>

It's actually pretty close to what happens with the LLC domain on SMP -
select_idle_sibling() doesn't look outside of it. The wake_affine() stuff
might steer the task towards a different LLC, but that's about it for
wakeups. We rely on load balancing (fork/exec, newidle, nohz and periodic)
to spread this further - and we would here too.

It gets "funny" for EAS when we aren't overutilized and thus can't rely on
load balancing; at least misfit ought to still work. It *is* a weird
topology, for sure.

> I have a few ideas to avoid that (e.g. looking at the rd span in
> select_idle_capacity() instead of sd_asym_cpucapacity) but all this is
> theoretical, so I'm happy to wait for a real platform to be released
> before we worry too much about it.
>
> In the meantime:
>
> Reviewed-by: Quentin Perret <qperret@xxxxxxxxxx>

Thanks!