Re: [PATCH 2/2] timers: Fix removed self-IPI on global timer's enqueue in nohz_full

From: Frederic Weisbecker
Date: Fri Mar 22 2024 - 07:32:49 EST


On Thu, Mar 21, 2024 at 05:47:59AM -0700, Paul E. McKenney wrote:
> > We should probably just remove this warning. This remote tick is my most horrible
> > hackery anyway.
>
> Would it make sense to increase it to somewhere around the timeout for
> RCU CPU stall warnings, softlockup, and so on?

We can also do that, yes.

>
> > _ The hrtimer enqueue to an offline CPU warning:
> >
> > [ 1054.265335] WARNING: CPU: 8 PID: 150 at kernel/time/hrtimer.c:1091 enqueue_hrtimer+0x6f/0x80
> > [ 1054.269166] Modules linked in:
> > [ 1054.270077] CPU: 8 PID: 150 Comm: rcu_torture_rea Not tainted 6.8.0-11407-ge990136580ab-dirty #21
> > [ 1054.272768] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.2-3-gd478f380-rebuilt.opensuse.org 04/01/2014
> > [ 1054.275893] RIP: 0010:enqueue_hrtimer+0x6f/0x80
> > [ 1054.277252] Code: a3 05 b5 ff b7 01 73 bd 48 8b 05 44 c5 b5 01 48 85 c0 74 0c 48 8b 78 08 48 89 ee e8 7b b5 ff ff 48 8b 03 f6 40 10 10 75 a5 90 <0f> 0b 90 eb 9f 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90
> > [ 1054.282448] RSP: 0000:ffffb1480056fd70 EFLAGS: 00010046
> > [ 1054.283931] RAX: ffff95b7df2616c0 RBX: ffff95b7df261700 RCX: ffff95b7df2616c0
> > [ 1054.286091] RDX: 0000000000000001 RSI: ffff95b7df261700 RDI: ffffb1480056fde0
> > [ 1054.288095] RBP: ffffb1480056fde0 R08: 0000000000000001 R09: 000000000000fc03
> > [ 1054.290189] R10: 0000000000000001 R11: ffff95b7c1271c80 R12: 0000000000000040
> > [ 1054.292592] R13: ffff95b7df261700 R14: ffff95b7df261700 R15: ffff95b7df2616c0
> > [ 1054.294622] FS: 0000000000000000(0000) GS:ffff95b7df400000(0000) knlGS:0000000000000000
> > [ 1054.296884] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [ 1054.298497] CR2: 0000000000000000 CR3: 000000001822c000 CR4: 00000000000006f0
> > [ 1054.300475] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > [ 1054.302516] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> > [ 1054.304580] Call Trace:
> > [ 1054.305354] <TASK>
> > [ 1054.306009] ? __warn+0x77/0x120
> > [ 1054.306973] ? enqueue_hrtimer+0x6f/0x80
> > [ 1054.308177] ? report_bug+0xf1/0x1d0
> > [ 1054.310200] ? handle_bug+0x40/0x70
> > [ 1054.311252] ? exc_invalid_op+0x13/0x70
> > [ 1054.312437] ? asm_exc_invalid_op+0x16/0x20
> > [ 1054.313738] ? enqueue_hrtimer+0x6f/0x80
> > [ 1054.314994] hrtimer_start_range_ns+0x258/0x2f0
> > [ 1054.316531] ? __pfx_rcu_torture_reader+0x10/0x10
> > [ 1054.317983] schedule_hrtimeout_range_clock+0x95/0x120
> > [ 1054.319604] ? __pfx_hrtimer_wakeup+0x10/0x10
> > [ 1054.320957] torture_hrtimeout_ns+0x50/0x70
> > [ 1054.322211] rcu_torture_reader+0x1be/0x280
> > [ 1054.323455] ? __pfx_rcu_torture_timer+0x10/0x10
> > [ 1054.329888] ? kthread+0xc4/0xf0
> > [ 1054.330880] ? __pfx_rcu_torture_reader+0x10/0x10
> > [ 1054.332334] kthread+0xc4/0xf0
> > [ 1054.333305] ? __pfx_kthread+0x10/0x10
> > [ 1054.334466] ret_from_fork+0x2b/0x40
> > [ 1054.335551] ? __pfx_kthread+0x10/0x10
> > [ 1054.336690] ret_from_fork_asm+0x1a/0x30
> > [ 1054.337878] </TASK>
> >
> > I don't know how it happened. This is where I'm investigating now.
>
> It seems that rcutorture is the gift that keeps on giving? ;-)

Right. Also I just managed to reproduce that hrtimer issue on v6.8:

[ 1484.955213] WARNING: CPU: 6 PID: 162 at kernel/time/hrtimer.c:1088 enqueue_hrtimer+0x6f/0x80
[ 1484.962513] Modules linked in:
[ 1484.966476] CPU: 6 PID: 162 Comm: rcu_torture_rea Not tainted 6.8.0 #25
[ 1484.972975] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.2-3-gd478f380-rebuilt.opensuse.org 04/01/2014
[ 1484.977653] RIP: 0010:enqueue_hrtimer+0x6f/0x80
[ 1484.978679] Code: a3 05 75 6a b7 01 73 bd 48 8b 05 e4 47 b5 01 48 85 c0 74 0c 48 8b 78 08 48 89 ee e8 eb bd ff ff 48 8b 03 f6 40 10 10 75 a5 90 <0f> 0b 90 eb 9f 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90
[ 1484.983136] RSP: 0000:ffffa553805cfd70 EFLAGS: 00010046
[ 1484.984386] RAX: ffff8c395f260440 RBX: ffff8c395f260480 RCX: ffff8c395f260440
[ 1484.986121] RDX: 0000000000000000 RSI: ffff8c395f260480 RDI: ffffa553805cfde0
[ 1484.987704] RBP: ffffa553805cfde0 R08: 0000000000000001 R09: 000000000000fc03
[ 1484.989513] R10: 0000000000000001 R11: ffff8c3941248e40 R12: 0000000000000040
[ 1484.991116] R13: ffff8c395f260480 R14: ffff8c395f260480 R15: ffff8c395f260440
[ 1484.992835] FS: 0000000000000000(0000) GS:ffff8c395f380000(0000) knlGS:0000000000000000
[ 1484.994683] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1484.995985] CR2: 0000000000000000 CR3: 000000001a62c000 CR4: 00000000000006f0
[ 1484.997789] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1484.999376] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 1485.001177] Call Trace:
[ 1485.002388] <TASK>
[ 1485.002923] ? __warn+0x78/0x120
[ 1485.003641] ? enqueue_hrtimer+0x6f/0x80
[ 1485.004641] ? report_bug+0xf1/0x1d0
[ 1485.005537] ? handle_bug+0x40/0x70
[ 1485.006318] ? exc_invalid_op+0x13/0x70
[ 1485.007167] ? asm_exc_invalid_op+0x16/0x20
[ 1485.008117] ? enqueue_hrtimer+0x6f/0x80
[ 1485.009131] hrtimer_start_range_ns+0x258/0x2f0
[ 1485.010153] ? __pfx_rcu_torture_reader+0x10/0x10
[ 1485.011185] schedule_hrtimeout_range_clock+0x96/0x120
[ 1485.012394] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 1485.013502] stutter_wait+0x7f/0x90
[ 1485.014319] rcu_torture_reader+0x10e/0x280
[ 1485.015240] ? __pfx_rcu_torture_timer+0x10/0x10
[ 1485.016317] ? kthread+0xc6/0xf0
[ 1485.017169] ? __pfx_rcu_torture_reader+0x10/0x10
[ 1485.018215] kthread+0xc6/0xf0
[ 1485.018899] ? __pfx_kthread+0x10/0x10
[ 1485.019703] ret_from_fork+0x2b/0x40
[ 1485.020546] ? __pfx_kthread+0x10/0x10
[ 1485.021428] ret_from_fork_asm+0x1b/0x30
[ 1485.022295] </TASK>

This happens within the following loop in get_nohz_timer_target():

	for_each_domain(cpu, sd) {
		for_each_cpu_and(i, sched_domain_span(sd), hk_mask) {
			if (cpu == i)
				continue;

			if (!idle_cpu(i))
				return i;
		}
	}

An offline CPU is returned from there, which is not supposed to happen
since this runs within an RCU read-side critical section. But I can't
find where those rq->sd structures are modified when a CPU goes offline.
The code path is hard to follow, so I'm adding some scheduler people
involved in topology, just in case. Investigation continues...

Thanks.