Re: [PATCH v12 6/8] sched/fair: Add sched group latency support

From: Vincent Guittot
Date: Mon Feb 27 2023 - 08:44:39 EST


On Fri, 24 Feb 2023 at 20:29, Michal Koutný <mkoutny@xxxxxxxx> wrote:
>
> Hello Vincent.
>
> On Fri, Feb 24, 2023 at 10:34:52AM +0100, Vincent Guittot <vincent.guittot@xxxxxxxxxx> wrote:
> > + cpu.latency.nice
> > + A read-write single value file which exists on non-root
> > + cgroups. The default is "0".
> > +
> > + The nice value is in the range [-20, 19].
> > +
> > + This interface file allows reading and setting latency using the
> > + same values used by sched_setattr(2). The latency_nice of a group is
> > + used to limit the impact of the latency_nice of a task outside the
> > + group.
>
> IIUC, the latency priority is taken into account when deciding between
> entities at the same level (as in pick_next_entity() or
> check_preempt_wakeup()/find_matching_se()).
>
> So this group attribute is relevant in context of siblings (i.e. like
> cpu.weight ~ bandwidth priority)?

Yes

>
> I'm thus confused when it's referred to as a limit (in vertical sense).
> You somewhat imply that in [1]:

There have been discussions about adding more features that could make
use of latency nice. That comment was mainly meant to describe how this
would behave if we ever need to compare entities/tasks that are not at
the same level.

Regarding the current use of latency nice to set a latency offset, the
problem doesn't arise, because the latency offset only applies between
entities at the same level, as you mentioned above.
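
To illustrate what "between entities at the same level" means here,
below is a rough, self-contained userspace sketch of that comparison.
The field names (vruntime, latency_offset) and the offset-shifted check
only model the idea; they are not the actual fair.c implementation.

/* gcc -o latency_sketch latency_sketch.c */
#include <stdio.h>

struct entity {
	long long vruntime;		/* virtual runtime, in ns */
	long long latency_offset;	/* derived from latency nice, in ns */
};

/*
 * Decide whether a waking entity 'se' preempts 'curr'. Both are
 * siblings on the same cfs_rq: the latency offsets only shift the
 * threshold of this horizontal comparison, they never constrain a
 * child entity against its parent.
 */
static int should_preempt(const struct entity *curr, const struct entity *se)
{
	long long vdiff = curr->vruntime - se->vruntime;
	/* A latency-sensitive 'se' (larger offset) lowers the bar for
	 * preempting 'curr', and vice versa. */
	long long offset = se->latency_offset - curr->latency_offset;

	return vdiff > -offset;
}

int main(void)
{
	struct entity curr = { .vruntime = 1000000, .latency_offset = 0 };
	struct entity se   = { .vruntime = 1100000, .latency_offset = 200000 };

	/* se has consumed more vruntime, but its latency offset still
	 * lets it preempt curr at this level. */
	printf("preempt: %d\n", should_preempt(&curr, &se));
	return 0;
}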

>
> > Regarding the behavior, the rule remains the same that a sched_entity
> > attached to a cgroup will not get more (latency in this case) than
> > what has been set for the group entity.
>
> But I don't see where such a constraint would be implemented in the
> code. (My cursory understanding above leans toward horizontal comparisons.)
>
> Could you please give me a hint as to which is right?

Does my explanation above make sense to you?

>
> Thanks,
> Michal
>
> [1] https://lore.kernel.org/r/CAKfTPtDu=c-psGnHkoWSPRWoh1Z0VBBfsN++g+krv4B1SJmFjg@xxxxxxxxxxxxxx/
>