Re: [PATCH] sched/core: fix affine_move_task failure case

From: Valentin Schneider
Date: Mon Mar 18 2024 - 13:36:42 EST


On 18/03/24 12:17, Daniel Vacek wrote:
> Bill Peters reported CPU hangs while offlining/onlining CPUs on s390.
>
> Analyzing the vmcore data shows `stop_one_cpu_nowait()` in `affine_move_task()`
> can fail when racing with CPU off-/on-lining, resulting in a deadlock: the task
> waits for completion of the pending migration stop work, which never runs.
>
> Fix this by correctly handling such a condition.
>

IIUC the problem is that the dest_cpu and its stopper thread can be taken
down by take_cpu_down(), and affine_move_task() currently isn't aware of
that. I thought we had tested this vs hotplug, but oh well...
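
For context, stop_one_cpu_nowait() only reports whether the work was actually
queued, and it returns false once the hotplug machinery has parked the target
CPU's stopper. Paraphrasing kernel/stop_machine.c from memory (so double-check
the exact shape):

        bool stop_one_cpu_nowait(unsigned int cpu, cpu_stop_fn_t fn, void *arg,
                                 struct cpu_stop_work *work_buf)
        {
                *work_buf = (struct cpu_stop_work){ .fn = fn, .arg = arg, };
                /* false if stopper->enabled was cleared, i.e. CPU going down */
                return cpu_stop_queue_work(cpu, work_buf);
        }

That return value is the only hint the caller gets that the stop work will
never run, so the current code ignoring it is what gets us into trouble.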

> Fixes: 9e81889c7648 ("sched: Fix affine_move_task() self-concurrency")
> Cc: stable@xxxxxxxxxxxxxxx
> Reported-by: Bill Peters <wpeters@xxxxxxxxx>
> Tested-by: Bill Peters <wpeters@xxxxxxxxx>
> Signed-off-by: Daniel Vacek <neelx@xxxxxxxxxx>
> ---
> kernel/sched/core.c | 13 +++++++++++--
> 1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 9116bcc903467..d0ff5c611a1c8 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3069,8 +3069,17 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
>  		preempt_disable();
>  		task_rq_unlock(rq, p, rf);
>  		if (!stop_pending) {
> -			stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
> -					    &pending->arg, &pending->stop_work);
> +			stop_pending =
> +				stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
> +						    &pending->arg, &pending->stop_work);
> +
> +			if (!stop_pending) {
> +				rq = task_rq_lock(p, rf);
> +				pending->stop_pending = false;
> +				p->migration_pending = NULL;
> +				task_rq_unlock(rq, p, rf);
> +				complete_all(&pending->done);
> +			}

This can leave the task @p on a now-illegal CPU. Consider a task affined to
CPUs 0-1 that does migrate_disable(); its affinity is then changed to CPUs
2-3, and in migrate_enable() dest_cpu is chosen as 3, racing with CPU 3 being
brought down. stop_one_cpu_nowait() fails, and we leave the task on CPUs 0-1.
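
Roughly the following interleaving, if I'm reading the hotplug side right:

    task p (cpus_mask=0-1)                    CPU3 hotplug
    ----------------------                    ------------
    migrate_disable();
    <affinity changed to 2-3,
     actual move deferred>
    migrate_enable()
      __set_cpus_allowed_ptr_locked()
        dest_cpu = 3 (still active)
                                              sched_cpu_deactivate(3)
                                              take_cpu_down(3)
                                                -> stopper parked
        affine_move_task()
          stop_one_cpu_nowait(3) -> false
          => p keeps running on CPUs 0-1,
             outside its new mask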

Issuing a redo of affine_move_task() with a different dest_cpu doesn't sound
great, and while repeated failure is very unlikely, such a retry has no
forward-progress guarantee.

Unfortunately we can't hold the hotplug lock during the affinity change of
migrate_enable(), as migrate_enable() isn't allowed to block.

Now, the CPU selection in __set_cpus_allowed_ptr_locked() that is passed
down to affine_move_task() relies on the active mask; a CPU's bit in that
mask is cleared in sched_cpu_deactivate(), and the clearing is followed by
a synchronize_rcu().
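
That is, abridged from kernel/sched/core.c (quoting from memory, trimmed to
the ordering that matters here):

        int sched_cpu_deactivate(unsigned int cpu)
        {
                ...
                set_cpu_active(cpu, false);
                /*
                 * Wait for all preempt-disabled and RCU users of the active
                 * mask to go away; all new users will observe the cleared bit.
                 */
                synchronize_rcu();
                ...
        }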

What if we made the affinity change in migrate_enable() an RCU read-side
section? Then, if a CPU X is observed as active in
migrate_enable()->__set_cpus_allowed_ptr_locked(), its hotplug state cannot
go lower than CPUHP_AP_ACTIVE until the task has been migrated away.

Something like the below. Thoughts?
---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 929fce69f555e..c6d128711d1a9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2450,8 +2450,11 @@ void migrate_enable(void)
 	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
 	 */
 	guard(preempt)();
-	if (p->cpus_ptr != &p->cpus_mask)
+	if (p->cpus_ptr != &p->cpus_mask) {
+		guard(rcu)();
 		__set_cpus_allowed_ptr(p, &ac);
+	}
+
 	/*
 	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
 	 * regular cpus_mask, otherwise things that race (eg.