Re: [PATCH v7 00/13] fold per-CPU vmstats remotely

From: Michal Hocko
Date: Mon Mar 20 2023 - 14:35:29 EST


On Mon 20-03-23 15:03:32, Marcelo Tosatti wrote:
> This patch series addresses the following two problems:
>
> 1. A customer provided evidence indicating that a process
> was stalled in direct reclaim:
>
This is addressed by the trivial patch 1.

[...]
> 2. With a task that busy loops on a given CPU,
> the kworker interruption to execute vmstat_update
> is undesired and may exceed latency thresholds
> for certain applications.

Yes it can, but why does that matter?

> This is addressed by having vmstat_shepherd flush the per-CPU counters
> to the global counters from remote CPUs.
>
> This is done using cmpxchg to manipulate the counters, both CPU-locally
> (via the account functions) and remotely (via cpu_vm_stats_fold).
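
For illustration only, a minimal, self-contained user-space sketch of the
scheme described above (not the actual patch): the local accounting path
adds to a per-CPU delta with a cmpxchg loop, and a remote "shepherd" folds
that delta into the global counter, also with cmpxchg, so no work item has
to run on the isolated CPU. The names pcp_delta, mod_counter_local and
fold_remote are made up for this sketch.

/*
 * Minimal user-space sketch of the idea quoted above (not the actual
 * kernel code): per-CPU deltas are updated with cmpxchg by the owning
 * CPU, so a remote "shepherd" can also use cmpxchg to fold them into
 * the global counter without queueing work on the isolated CPU.
 * pcp_delta, mod_counter_local and fold_remote are invented names.
 */
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4

static _Atomic long pcp_delta[NR_CPUS]; /* per-CPU pending deltas   */
static _Atomic long global_count;       /* global vmstat-like total */

/* Local accounting path: add to this CPU's delta with a cmpxchg loop. */
static void mod_counter_local(int cpu, long delta)
{
	long old = atomic_load(&pcp_delta[cpu]);

	/* on failure, 'old' is refreshed with the current value */
	while (!atomic_compare_exchange_weak(&pcp_delta[cpu], &old,
					     old + delta))
		;
}

/* Remote fold path: the shepherd drains one CPU's delta into the global. */
static void fold_remote(int cpu)
{
	long old = atomic_load(&pcp_delta[cpu]);

	/* atomically take the whole delta, leaving zero behind */
	while (!atomic_compare_exchange_weak(&pcp_delta[cpu], &old, 0))
		;
	if (old)
		atomic_fetch_add(&global_count, old);
}

int main(void)
{
	mod_counter_local(1, 3);  /* e.g. a counter bumped by 3 on CPU 1 */
	mod_counter_local(1, -1);
	fold_remote(1);           /* shepherd folds CPU 1 remotely */
	printf("global = %ld\n", atomic_load(&global_count)); /* prints 2 */
	return 0;
}
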
>
> Thanks to Aaron Tomlin for diagnosing issue 1 and writing
> the initial patch series.
>
>
> Performance details for the kworker interruption:
>
> oslat 1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
> oslat 1094.456971: workqueue_queue_work: ... function=vmstat_update ...
> oslat 1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
> kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
>
> The example above shows an additional 7us for the
>
> oslat -> kworker -> oslat
>
> switches. In the case of a virtualized CPU, where the vmstat_update
> interruption happens on the host side (on the CPU running a qemu-kvm
> vcpu), the latency penalty observed in the guest is higher than 50us,
> violating the acceptable latency threshold for certain applications.

I do not think we have ever promised any specific latency guarantees
for vmstat. These statistics have mostly been used for debugging
purposes AFAIK. I am not aware of any specific user space use case that
would be latency sensitive, and your changelog doesn't go into detail
there either.

[...]
> mm/vmstat.c | 440 +++++++++++++++++++++++++++++++++++++++++++++++------------------------------

This requires a much more detailed story about why we really need it.
--
Michal Hocko
SUSE Labs