Re: rseq CPU ID not correct on 6.0 kernels for pinned threads

From: Waiman Long
Date: Fri Jan 13 2023 - 11:21:30 EST


On 1/13/23 11:06, Florian Weimer wrote:
* Mathieu Desnoyers:

On 2023-01-12 11:33, Florian Weimer wrote:
* Mathieu Desnoyers:

As you also point out, it can also be caused by some other task
modifying the affinity of your task concurrently. You could print
the result of sched_getaffinity on error to get a better idea of
the expected vs actual mask.

Lastly, it could be caused by CPU hotplug, which would set all bits
in the affinity mask as a fallback. As you mention, it should not be
the cause there.

Can you share your kernel configuration ?

Attached.
cpupower frequency-info says:
analyzing CPU 0:
driver: intel_cpufreq
CPUs which run at the same hardware frequency: 0
CPUs which need to have their frequency coordinated by software: 0
maximum transition latency: 20.0 us
hardware limits: 800 MHz - 4.60 GHz
available cpufreq governors: conservative ondemand userspace powersave performance schedutil
current policy: frequency should be within 800 MHz and 4.60 GHz.
The governor "schedutil" may decide which speed to use
within this range.
current CPU frequency: Unable to call hardware
current CPU frequency: 3.20 GHz (asserted by call to kernel)
boost state support:
Supported: yes
Active: yes
And I have: kernel.sched_energy_aware = 1

Is this on a physical machine or in a virtual machine ?

I think it happened on both.
I added additional error reporting to the test (running on kernel
6.0.18-300.fc37.x86_64), and it seems that there is something that is
mucking with affinity masks:
info: Detected CPU set size (in bits): 64
info: Maximum test CPU: 19
error: Pinned thread 17 ran on impossible cpu 7
info: getcpu reported CPU 7, node 0
info: CPU affinity mask: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
error: Pinned thread 3 ran on impossible cpu 13
info: getcpu reported CPU 13, node 0
info: CPU affinity mask: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
info: Main thread ran on 2 CPU(s) of 20 available CPU(s)
info: Other threads ran on 20 CPU(s)
For each of these threads, the affinity mask should be a singleton
set.

Now I need to find out if there is a process that changes affinity
settings.
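
The failing check reduces to something like the following sketch
(hypothetical; the actual test reads the CPU ID from the rseq area, but
sched_getcpu() is rseq-backed on recent glibc anyway):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	int target = 3;	/* arbitrary CPU chosen for illustration */
	cpu_set_t set;
	int err, i;

	CPU_ZERO(&set);
	CPU_SET(target, &set);
	err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
	if (err != 0) {
		fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(err));
		return EXIT_FAILURE;
	}
	for (i = 0; i < 1000000; i++) {
		int cpu = sched_getcpu();

		if (cpu != target) {
			fprintf(stderr,
				"error: pinned thread ran on impossible cpu %d\n",
				cpu);
			return EXIT_FAILURE;
		}
	}
	return EXIT_SUCCESS;
}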
If it's not cpu hotunplug, then perhaps something like systemd
modifies the AllowedCPUs of your cpuset concurrently ?
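
One way to check for that would be to watch the kernel's view of the
allowed CPUs while the test runs, e.g. (hypothetical sketch):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>

/* Print this thread's allowed CPUs as the kernel sees them; a
 * concurrent cpuset change (e.g. systemd rewriting AllowedCPUs=)
 * would show up as an unexpectedly widened list here. */
int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/self/status", "r");

	if (f == NULL) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f) != NULL)
		if (strncmp(line, "Cpus_allowed_list:", 18) == 0)
			fputs(line, stdout);
	fclose(f);
	return 0;
}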
It's probably just this kernel bug:

commit da019032819a1f09943d3af676892ec8c627668e
Author: Waiman Long <longman@xxxxxxxxxx>
Date: Thu Sep 22 14:00:39 2022 -0400

sched: Enforce user requested affinity

It was found that the user requested affinity via sched_setaffinity()
can be easily overwritten by other kernel subsystems without an easy way
to reset it back to what the user requested. For example, any change
to the current cpuset hierarchy may reset the cpumask of the tasks in
the affected cpusets to the default cpuset value even if those tasks
have pre-existing user requested affinity. That is especially easy to
trigger under a cgroup v2 environment where writing "+cpuset" to the
root cgroup's cgroup.subtree_control file will reset the cpus affinity
of all the processes in the system.

That is problematic in a nohz_full environment where the tasks running
in the nohz_full CPUs usually have their cpus affinity explicitly set
and will behave incorrectly if cpus affinity changes.

Fix this problem by looking at user_cpus_ptr in __set_cpus_allowed_ptr()
and use it to restrict the given cpumask unless there is no overlap. In
that case, it will fall back to the given one. The SCA_USER flag is
reused to indicate intent to set user_cpus_ptr and so user_cpus_ptr
masking should be skipped. In addition, masking should also be skipped
if any of the SCA_MIGRATE_* flags is set.

All callers of set_cpus_allowed_ptr() will be affected by this change.
A scratch cpumask is added to the percpu runqueues structure for doing
additional masking when user_cpus_ptr is set.

Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://lkml.kernel.org/r/20220922180041.1768141-4-longman@xxxxxxxxxx

I don't think it's been merged into any stable kernels yet?
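
For reference, the masking rule the commit message describes (narrow the
requested mask by the user-requested affinity, falling back to the
requested mask when there is no overlap) looks roughly like this in
userspace terms (a sketch of the semantics, not the actual kernel code):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Narrow a requested mask by the user-requested affinity; if the two
 * do not overlap, keep the requested mask unchanged as a fallback. */
static void apply_user_mask(cpu_set_t *requested, cpu_set_t *user)
{
	cpu_set_t inter;

	CPU_AND(&inter, requested, user);
	if (CPU_COUNT(&inter) > 0)
		*requested = inter;	/* honor the user-requested affinity */
	/* else: no overlap, fall back to the given mask */
}

int main(void)
{
	cpu_set_t requested, user;
	int cpu;

	/* Example: cpuset offers CPUs 0-3, the user asked for CPU 2 only. */
	CPU_ZERO(&requested);
	for (cpu = 0; cpu < 4; cpu++)
		CPU_SET(cpu, &requested);
	CPU_ZERO(&user);
	CPU_SET(2, &user);

	apply_user_mask(&requested, &user);
	printf("effective CPUs: %d\n", CPU_COUNT(&requested));	/* prints 1 */
	return 0;
}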

This patch will be in the v6.2 kernel. Since it is not marked as a fix, it won't go into a stable kernel by default.

Cheers,
Longman