Re: [PATCH] workqueue: Use active mask for new worker when pool is DISASSOCIATED

From: Schspa Shi
Date: Mon Aug 01 2022 - 05:53:19 EST



Lai Jiangshan <jiangshanlai@xxxxxxxxx> writes:

> On Mon, Aug 1, 2022 at 1:08 PM Schspa Shi <schspa@xxxxxxxxx> wrote:
>>
>>
>> Lai Jiangshan <jiangshanlai@xxxxxxxxx> writes:
>>
>> > On Sat, Jul 30, 2022 at 12:19 PM Schspa Shi <schspa@xxxxxxxxx> wrote:
>> >>
>> >>
>> >> Peter Zijlstra <peterz@xxxxxxxxxxxxx> writes:
>> >>
>> >> > On Wed, Jul 13, 2022 at 05:52:58PM +0800, Lai Jiangshan wrote:
>> >> >>
>> >> >>
>> >> >> CC Peter.
>> >> >> Peter has changed the CPU binding code in workqueue.c.
>> >> >
>> >> > [ 1622.829091] WARNING: CPU: 3 PID: 31 at kernel/sched/core.c:7756 sched_cpu_dying+0x74/0x204
>> >> > [ 1622.829374] CPU: 3 PID: 31 Comm: migration/3 Tainted: P O 5.10.59-rt52 #2
>> >> >                                                              ^^^^^^^^^^^^^^^
>> >> >
>> >> > I think we can ignore this as being some ancient kernel. Please try
>> >> > something recent.
>> >>
>> >> Hi Peter,
>> >>
>> >> I spent a few days writing a test case and reproduced the problem on
>> >> kernel 5.19. I think it's time for us to review the V3 patch for a fix.
>> >>
>> >> The V3 patch is at
>> >> https://lore.kernel.org/all/20220714031645.28004-1-schspa@xxxxxxxxx/
>> >> Please help to review it.
>> >
>> > Because of the implementation of is_cpu_allowed(),
>> > I am still suspicious about how the newly created worker can be woken
>> > up on the dying CPU, since it has no KTHREAD_IS_PER_CPU set.
>> >
>>
>> I think the comments in the V3 patch explain the problem: the newly
>> created worker runs on the dying CPU because we bind it to the dying
>> CPU via kthread_bind_mask() or set_cpus_allowed_ptr().
>>
>
> wake_up_process() will modify the task's cpumask and select
> a fallback CPU. The scheduler code does this for any kthread
> whose single-CPU affinity has been lost.
>

I'm not very familiar with this part of the code; I'll check it
first.
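
For reference while I check, here is my rough reading of the path Lai
describes, heavily paraphrased from kernel/sched/core.c (a sketch of
the logic as I understand it, not a quote of the real code):

    #include <linux/sched.h>
    #include <linux/kthread.h>
    #include <linux/cpumask.h>

    /*
     * Sketch: at wakeup, a CPU is only usable if the task's cpumask
     * allows it; per-CPU kthreads (KTHREAD_IS_PER_CPU) may stay on an
     * online-but-dying CPU, ordinary kthreads may not, and if no
     * allowed CPU remains the scheduler picks a fallback CPU and may
     * reset the task's affinity.
     */
    static inline bool is_cpu_allowed_sketch(struct task_struct *p, int cpu)
    {
            if (!cpumask_test_cpu(cpu, p->cpus_ptr))
                    return false;

            if (p->flags & PF_KTHREAD) {
                    /* pcpu kthreads may run until the CPU is fully dead */
                    if (kthread_is_per_cpu(p))
                            return cpu_online(cpu);
                    /* regular kthreads must leave a dying CPU */
                    return cpu_online(cpu) && !cpu_dying(cpu);
            }

            /* user tasks need an active CPU */
            return cpu_active(cpu);
    }

If that reading is right, a newly created kworker without
KTHREAD_IS_PER_CPU should be pushed off the dying CPU at wakeup, so
I'll look for where the scenario below bypasses this check.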

>> Please refer to the following scenario:
>>
>> CPU0                                          CPU1
>> ------------------------------------------------------------------
>> sched_cpu_deactivate(cpu_active_mask clear)
>> workqueue_offline_cpu(work pool POOL_DISASSOCIATED)
>>   -- all workers will migrate to another CPU --
>>                                               worker_thread
>>                                               -- will create a new worker
>>                                                  if pool->worklist is not
>>                                                  empty
>>                                               create_worker()
>>                                               -- the new kworker will be
>>                                                  bound to CPU0
>>                                                  (pool->attrs->cpumask is
>>                                                  the mask of CPU0)
>>                                               kworker/0:x is now runnable
>>                                                  on CPU0's rq
>> sched_cpu_dying
>>   if (rq->nr_running != 1 || rq_has_pinned_tasks(rq))
>>     WARN(true, "Dying CPU not properly vacated!");
>> ---------OOPS-------------
>>
>> When CPUHP_HRTIMERS_PREPARE < CPU0's state < CPUHP_AP_WORKQUEUE_ONLINE,
>> work can still be queued on CPU0, which adds new work items to the
>> pool->worklist of the dying CPU's worker pool.
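
(To make the direction of the fix concrete for anyone following the
thread: per the subject line, the V3 patch binds the new worker to the
active mask when the pool is DISASSOCIATED. A minimal sketch of that
idea, simplified and not the actual diff; create_worker() in
kernel/workqueue.c does more than this:)

    /*
     * Sketch: when the pool has been disassociated from its (dying)
     * CPU, pool->attrs->cpumask may contain only that CPU, so don't
     * bind the new worker to it; bind to the active CPUs instead and
     * let rebind_workers() restore the affinity if the CPU returns.
     */
    if (pool->flags & POOL_DISASSOCIATED)
            kthread_bind_mask(worker->task, cpu_active_mask);
    else
            kthread_bind_mask(worker->task, pool->attrs->cpumask);
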
>>
>> > Is it a dying CPU when onlining a CPU fails? I think balance_push
>> > is not set in that case when onlining fails at some point.
>> > But I haven't looked into the details or linked the clues together.
>> >
>>
>> Not this case; there is no CPU-online failure in my test logs.
>>
>> >
>> >>
>> >> Test branch as:
>> >> https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/tag/?h=v5.19-rc8-rt8
>> >>
>> >
>> > Can it only be triggered in the rt kernel?
>> >
>>
>> I have only tested on the RT kernel, because I work on the RT kernel
>> now, but I think non-RT kernels have the issue too.
>>
>> >> I think this code is new enough to demonstrate that the problem persists.
>> >>
>> >> The log is as follows:
>> >>
>> >> [ 3103.198684] ------------[ cut here ]------------
>> >> [ 3103.198684] Dying CPU not properly vacated!
>> >> [ 3103.198684] WARNING: CPU: 1 PID: 23 at kernel/sched/core.c:9575 sched_cpu_dying.cold+0xc/0xc3
>> >> [ 3103.198684] Modules linked in: work_test(O)
>> >
>> > Could you give me the code of this module?
>>
>> The module's source code is at
>> https://github.com/schspa/code_snippet/blob/master/kernel_module/workqueue_test/work-test.c
>> Kernel defconfig is at
>> https://github.com/schspa/ktest/blob/main/configs/linux_rt_defconfig
>>
>
> How long does it take to reproduce the bug?
> I have tested for a short time on the non-RT kernel, and nothing
> has happened yet.
>

Test host machine:
A 4-core virtual machine with nested KVM support and 16 GB of memory.

Test VMs:
3 x 4-core KVM virtual machines.

With 3 test instances, it reproduces within one day.

This needs some stress to reproduce; you can use my script to compile
and launch the test automatically:

https://github.com/schspa/ktest/blob/main/test-lnx.sh
~/work/src/ktest/test-lnx.sh ~/src/linux linux-rt-devel/linux-5.18.y-rt ~/test/linux-5.18.y-rt wq-test

>> I added a hook at the entry of worker_thread, which adds an extra
>> 50 ms delay to make it reproduce faster.
>>
>> --
>> BRs
>> Schspa Shi
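
For completeness, the hook is roughly the following (simplified; the
exact delay and placement in my test tree may differ):

    --- a/kernel/workqueue.c
    +++ b/kernel/workqueue.c
     static int worker_thread(void *__worker)
     {
     	struct worker *worker = __worker;
     	struct worker_pool *pool = worker->pool;
    +
    +	/* testing aid: widen the window between create_worker()
    +	 * binding the task and its first run (needs <linux/delay.h>);
    +	 * the 50 ms value is arbitrary
    +	 */
    +	msleep(50);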


--
BRs
Schspa Shi