Re: [PATCH v3 06/10] sched/fair: Use the prefer_sibling flag of the current sched domain

From: Valentin Schneider
Date: Fri Feb 10 2023 - 09:55:51 EST


On 10/02/23 11:08, Peter Zijlstra wrote:
> On Mon, Feb 06, 2023 at 08:58:34PM -0800, Ricardo Neri wrote:
>> SD_PREFER_SIBLING is set from the SMT scheduling domain up to the first
>> non-NUMA domain (the exception is systems with SD_ASYM_CPUCAPACITY).
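
(For reference, the flag setup in sd_init() in kernel/sched/topology.c is
roughly the below -- quoting loosely, the exact code may differ:)

	/* set by default for every domain... */
	sd->flags |= SD_PREFER_SIBLING;

	if (sd->flags & SD_ASYM_CPUCAPACITY) {
		/* don't spread across CPUs of different capacities */
		if (sd->child)
			sd->child->flags &= ~SD_PREFER_SIBLING;
	}

	if (sd->flags & SD_NUMA)
		/* ...and cleared again on NUMA domains */
		sd->flags &= ~SD_PREFER_SIBLING;
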
>>
>> Above the SMT sched domain, all domains have a child. Thus,
>> SD_PREFER_SIBLING is always honored, regardless of the scheduling domain
>> at which the load balance takes place.
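
(This is because update_sd_lb_stats() consults the *child* domain's flag --
roughly, modulo the exact tree this is based on:)

	struct sched_domain *child = env->sd->child;

	/* Tag domain that child domain prefers tasks go to siblings first */
	sds->prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
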
>>
>> There are cases, however, in which the busiest CPU's sched domain has a
>> child but the destination CPU's does not. Consider, for instance, a
>> non-SMT core (or an SMT core with only one online sibling) doing load
>> balance with an SMT core at the MC level. SD_PREFER_SIBLING will not be
>> honored. We are left with a fully busy SMT core and an idle non-SMT core.
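
(Concretely, following the snippet above: when the destination CPU is the
non-SMT core and the balance happens at the MC level, its SMT child domain
has been degenerated away, so:)

	child = env->sd->child;	/* NULL: no SMT level below the dst CPU */
	sds->prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
				/* stays 0 -> no spreading to the idle core */
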
>>
>> Avoid the inconsistent behavior. Use the prefer_sibling flag of the
>> current scheduling domain, not its child's.
>>
>> The NUMA sched domain does not have the SD_PREFER_SIBLING flag. Thus, we
>> will not spread load among NUMA sched groups, as desired.
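
(A minimal sketch of the resulting check in update_sd_lb_stats(); the
actual patch may differ in the details:)

	/* consult this SD's own flag rather than the child's */
	sds->prefer_sibling = env->sd->flags & SD_PREFER_SIBLING;

(And since sd_init() clears SD_PREFER_SIBLING on NUMA domains, the NUMA
level keeps prefer_sibling == 0 with this change, as noted above.)
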
>>
>
> Like many of the others; I don't much like this.
>
> Why not simply detect this asymmetric SMT setup and kill the
> PREFER_SIBLING flag on the SMT leaves in that case?
>
> Specifically, I'm thinking of something in the degenerate area that
> checks whether a given domain has equal-depth children or so.
>
> Note that this should not be tied to having special hardware; you can
> create the very same weirdness by just offlining a few SMT siblings and
> leaving a few on.
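
IIUC, the detection would be something like the below (entirely made-up
sketch in the spirit of the degeneration pass in build_sched_domains();
sd_children_equal_depth() is a hypothetical helper):

	for_each_cpu(i, cpu_map) {
		for (sd = *per_cpu_ptr(d.sd, i); sd; sd = sd->parent) {
			/*
			 * Hypothetical helper: do all of sd's child
			 * domains (one per sched group) sit at the same
			 * depth?
			 */
			if (sd->child && !sd_children_equal_depth(sd))
				sd->child->flags &= ~SD_PREFER_SIBLING;
		}
	}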

So something like having SD_PREFER_SIBLING affect the SD it's on (and not
its parent), but removing it from the lowest non-degenerate topology level?
(+ add it to the first NUMA level to keep things as they are, even if TBF I
find relying on it for NUMA balancing a bit odd).
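
i.e. something like the below (sketch only; the level at which the flag is
set/cleared is hand-waved, and sd_is_first_numa_level() is a made-up
helper):

	/* at load balance time: consult this SD's own flag */
	sds->prefer_sibling = env->sd->flags & SD_PREFER_SIBLING;

	/* at topology build time, roughly: */
	if (!sd->child)				/* lowest remaining level */
		sd->flags &= ~SD_PREFER_SIBLING;
	else if (sd_is_first_numa_level(sd))	/* made-up helper */
		sd->flags |= SD_PREFER_SIBLING;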