Re: [RFC] Make need_resched() return true when rcu_urgent_qs requested

From: David Woodhouse
Date: Wed Jul 11 2018 - 06:57:51 EST


On Mon, 2018-07-09 at 15:08 -0700, Paul E. McKenney wrote:

>
> And the earlier patch was against my -rcu tree, which won't be all that
> helpful for v4.15. Please see below for a lightly tested backport to v4.15.
>
> It should apply to all the releases of interest. If other backports
> are needed, please remind me of my woodhouse.v4.15.2018.07.09a tag.
>
> 							Thanx, Paul
>
> ------------------------------------------------------------------------
>
> commit 6361b81827a8f93f582124da385258fc04a38a7f
> Author: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
> Date:   Mon Jul 9 13:47:30 2018 -0700
>
>     rcu: Make need_resched() respond to urgent RCU-QS needs
>     
>     The per-CPU rcu_dynticks.rcu_urgent_qs variable communicates an urgent
>     need for an RCU quiescent state from the force-quiescent-state processing
>     within the grace-period kthread to context switches and to cond_resched().
>     Unfortunately, such urgent needs are not communicated to need_resched(),
>     which is sometimes used to decide when to invoke cond_resched(), for
>     but one example, within the KVM vcpu_run() function.  As of v4.15, this
>     can result in synchronize_sched() being delayed by up to ten seconds,
>     which can be problematic, to say nothing of annoying.
>     
>     This commit therefore checks rcu_dynticks.rcu_urgent_qs from within
>     rcu_check_callbacks(), which is invoked from the scheduling-clock
>     interrupt handler.  If the current task is not an idle task and is
>     not executing in usermode, a context switch is forced, and either way,
>     the rcu_dynticks.rcu_urgent_qs variable is set to false.  If the current
>     task is an idle task, then RCU's dyntick-idle code will detect the
>     quiescent state, so no further action is required.  Similarly, if the
>     task is executing in usermode, other code in rcu_check_callbacks() and
>     its called functions will report the corresponding quiescent state.
>     
>     Reported-by: David Woodhouse <dwmw2@xxxxxxxxxxxxx>
>     Suggested-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
>     Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
>     [ paulmck: Backported to v4.15.  Probably applies elsewhere. ]

Hm, this doesn't appear to work. I'm still seeing latencies of 4-5
seconds in my testing. In fact, even our old workaround of adding
rcu_all_qs() into vcpu_enter_guest() didn't properly fix it AFAICT.

I'm just creating a VM with lots of CPUs, then attaching new devices to
it to cause the VMM to open more file descriptors, until it hits a
power of two and invokes expand_fdtable().

expand_fdtable (512) sync took 10472394964 cycles (3500000 µs).
expand_fdtable (512) sync took 15298908072 cycles (5100000 µs).


--- a/fs/file.c
+++ b/fs/file.c
@@ -162,8 +162,16 @@ static int expand_fdtable(struct files_struct *files, unsigned int nr)
 	/* make sure all __fd_install() have seen resize_in_progress
 	 * or have finished their rcu_read_lock_sched() section.
 	 */
-	if (atomic_read(&files->count) > 1)
+	if (atomic_read(&files->count) > 1) {
+		unsigned long sync_start, sync_end;
+		unsigned long j_start, j_end;
+		j_start = jiffies;
+		sync_start = get_cycles();
 		synchronize_sched();
+		sync_end = get_cycles();
+		j_end = jiffies;
+		printk("expand_fdtable (%d) sync took %ld cycles (%ld µs).\n", nr, sync_end - sync_start, jiffies_to_usecs(j_end - j_start));
+	}
 
 	spin_lock(&files->file_lock);
 	if (!new_fdt)
