Re: [PATCH] mm: fix the inaccurate memory statistics issue for users

From: Baolin Wang
Date: Tue Jun 03 2025 - 04:32:59 EST




On 2025/6/3 16:15, Michal Hocko wrote:
> On Tue 03-06-25 16:08:21, Baolin Wang wrote:


>> On 2025/5/30 21:39, Michal Hocko wrote:
>>> On Thu 29-05-25 20:53:13, Andrew Morton wrote:
>>>> On Sat, 24 May 2025 09:59:53 +0800 Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:

>>>>> On some large machines with a high number of CPUs running a 64K pagesize
>>>>> kernel, we found that the 'RES' field displayed by the top command is
>>>>> always 0 for some processes, which causes a lot of confusion for users.
>>>>>
>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>> 875525 root 20 0 12480 0 0 R 0.3 0.0 0:00.08 top
>>>>> 1 root 20 0 172800 0 0 S 0.0 0.0 0:04.52 systemd
>>>>>
>>>>> The main reason is that, since commit f1a7941243c1 ("mm: convert mm's
>>>>> rss stats into percpu_counter") converted mm's rss stats into
>>>>> percpu_counter, the batch size of the percpu counter is quite large on
>>>>> these machines, so a significant part of the count stays cached per-CPU
>>>>> and never reaches the global counter. Intuitively, the batch size should
>>>>> be tuned down, but on some paths performance may take precedence over
>>>>> statistical accuracy. Therefore, introduce a new interface that adds in
>>>>> the percpu counts before displaying the value to users, which removes
>>>>> the confusion. This interface is not expected to be used on a
>>>>> performance-critical path, so the change should be acceptable.
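
To put rough numbers on the drift: each CPU can cache up to the percpu
batch size before flushing into the global count, and the batch defaults
to roughly twice the number of online CPUs, so with a few hundred CPUs
and a 64K page size the aggregate cached amount can reach gigabytes,
easily exceeding a small process's entire RSS and leaving the global
counter at 0. A minimal sketch of the kind of summing helper being
proposed (the name and placement are illustrative, not necessarily what
the final patch uses):

static inline unsigned long get_mm_counter_sum(struct mm_struct *mm, int member)
{
	/*
	 * Fold every CPU's cached delta into the returned value instead
	 * of reading the possibly-stale global count; transient negative
	 * sums are clamped to 0.
	 */
	return percpu_counter_sum_positive(&mm->rss_stat[member]);
}

Display paths such as /proc/<pid>/status could then use this helper,
while hot paths keep the cheap, batched get_mm_counter().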

>>>>> Fixes: f1a7941243c1 ("mm: convert mm's rss stats into percpu_counter")

>>>> Three years ago.

>>>>> Tested-by: Donet Tom <donettom@xxxxxxxxxxxxx>
>>>>> Reviewed-by: Aboorva Devarajan <aboorvad@xxxxxxxxxxxxx>
>>>>> Tested-by: Aboorva Devarajan <aboorvad@xxxxxxxxxxxxx>
>>>>> Acked-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
>>>>> Acked-by: SeongJae Park <sj@xxxxxxxxxx>
>>>>> Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>

>>>> Thanks, I added cc:stable to this.

>>> I have only noticed this new posting now. I do not think this is stable
>>> material. I am also not convinced that the impact of exposing the pcp
>>> lock to userspace has been properly analyzed and documented in the
>>> changelog. I am not nacking the patch (yet), but I would like to see a
>>> serious analysis showing that this has been properly thought through.

>> Good point. I did a quick measurement on my 32-core Arm machine. I ran
>> two workloads: one is the 'top' command (top -d 1, updating every
>> second), and the other is a kernel build (time make -j32).
>>
>> From the following data, I did not see any significant impact of the
>> patch on the kernel build workload.

> I do not think this is really representative of an adverse workload. I
> believe you need to look at which potentially sensitive kernel code
> paths run with the lock held, and at how a busy loop over the affected
> proc files would influence those in the worst case. Maybe there are no
> such kernel code paths to really worry about. This should be a part of
> the changelog though.

IMO, kernel code paths usually batch their updates to avoid lock contention, so the impact on kernel code paths should not be that obvious. For the same reason, I also think it is hard to find an adverse workload.
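
For reference, the counter update path looks roughly like this (a
simplified sketch condensed from lib/percpu_counter.c; details vary
across kernel versions), and it only takes the pcp lock when a CPU's
cached delta reaches the batch size:

void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
{
	s64 count;
	unsigned long flags;

	local_irq_save(flags);
	count = __this_cpu_read(*fbc->counters) + amount;
	if (abs(count) >= batch) {
		/* Slow path: fold the cached delta into the global count. */
		raw_spin_lock(&fbc->lock);
		fbc->count += count;
		__this_cpu_write(*fbc->counters, 0);
		raw_spin_unlock(&fbc->lock);
	} else {
		/* Fast path: stay in this CPU's cache, no lock taken. */
		__this_cpu_write(*fbc->counters, count);
	}
	local_irq_restore(flags);
}

So a reader summing the counter contends with writers only on those
infrequent flushes, not on every single counter update.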

How about adding the following to the commit log?
"
I did a quick measurement on my 32-core Arm machine. I ran two workloads: one is the 'top' command (top -d 1, updating every second), and the other is a kernel build (time make -j32).

From the following data, I did not see any significant impact of the patch on the kernel build workload. In addition, kernel code paths usually batch their updates to avoid pcp lock contention, so the impact on kernel code paths is not that obvious even though the pcp lock is now exposed to userspace.

w/o patch:
real 4m33.887s
user 118m24.153s
sys 9m51.402s

w/ patch:
real 4m34.495s
user 118m21.739s
sys 9m39.232s
"