Re: [RFC PATCH 1/2] sched: Rate limit migrations to 1 per 2ms per task

From: Tim Chen
Date: Tue Sep 05 2023 - 18:45:06 EST


On Tue, 2023-09-05 at 17:16 -0400, Mathieu Desnoyers wrote:
> On 9/5/23 16:28, Tim Chen wrote:
> > On Tue, 2023-09-05 at 13:11 -0400, Mathieu Desnoyers wrote:
> > > Rate limit migrations to 1 migration per 2 milliseconds per task. On a
> > > kernel with EEVDF scheduler (commit b97d64c722598ffed42ece814a2cb791336c6679),
> > > this speeds up hackbench from 62s to 45s on AMD EPYC 192-core (over 2 sockets).
> > >
> > >
> > >
> > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > index 479db611f46e..0d294fce261d 100644
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -4510,6 +4510,7 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
> > > p->se.vruntime = 0;
> > > p->se.vlag = 0;
> > > p->se.slice = sysctl_sched_base_slice;
> > > + p->se.next_migration_time = 0;
> >
> > It seems like the next_migration_time should be initialized to the current time,
> > in case the system runs for a long time and clock wrap-around could cause problems.
>
> next_migration_time is a u64, which should "never" overflow. Other

Reading up on the sched_clock() documentation, it does indeed seem to be
monotonic. A TSC-based clock starts from 0 at boot, and the TSC doesn't
wrap around for on the order of ~190 years.
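(Back-of-the-envelope, assuming a ~3 GHz TSC: 2^64 cycles / 3e9 Hz
is about 6.1e9 seconds, roughly 195 years before the counter wraps.)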

I wonder about the corner case where a system suspends and resumes. The
documentation on sched_clock() says "The clock driving sched_clock() may
stop or reset to zero during system suspend/sleep". If sched_clock() is
reset to 0, the next_migration_time of all suspended tasks should also be
reset to 0 before they resume, so that the next migration time does not
end up far in the future.
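
Something along these lines could clamp stale values on resume (an
untested sketch; the function names are mine, and it ignores locking of
the per-task scheduler fields):

static int next_migration_pm_notify(struct notifier_block *nb,
				    unsigned long action, void *data)
{
	struct task_struct *g, *p;
	u64 now = sched_clock();

	if (action != PM_POST_SUSPEND && action != PM_POST_HIBERNATION)
		return NOTIFY_OK;

	read_lock(&tasklist_lock);
	for_each_process_thread(g, p) {
		/*
		 * If the clock was reset across suspend, a stale
		 * deadline would block migrations for a long time;
		 * clamp it to "now".
		 */
		if (p->se.next_migration_time > now)
			p->se.next_migration_time = now;
	}
	read_unlock(&tasklist_lock);

	return NOTIFY_OK;
}

static struct notifier_block next_migration_pm_nb = {
	.notifier_call = next_migration_pm_notify,
};

This would be registered with register_pm_notifier(&next_migration_pm_nb)
from an initcall.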

Thanks.

Tim

> scheduler code comparing with sched_clock() doesn't appear to care about
> u64 overflow. Sampling the next_migration_time on fork could delay
> migrations for a 2ms window after process creation, which I don't think
> is something we want. Or if we do want this behavior, it should be
> validated with benchmarks beforehand.
>
> >
> > > INIT_LIST_HEAD(&p->se.group_node);
> > >
> > > #ifdef CONFIG_FAIR_GROUP_SCHED
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index d92da2d78774..24ac69913005 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -960,6 +960,14 @@ int sched_update_scaling(void)
> > >
> > > static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se);
> > >
> > > +static bool should_migrate_task(struct task_struct *p, int prev_cpu)
> > > +{
> > > + /* Rate limit task migration. */
> > > + if (sched_clock_cpu(prev_cpu) < p->se.next_migration_time)
> >
> > Should we use time_before(sched_clock_cpu(prev_cpu), p->se.next_migration_time) ?
>
> No, because time_before expects unsigned long parameters, and
> sched_clock_cpu() and next_migration_time are u64.
>
> Thanks,
>
> Mathieu