Re: [PATCH v2] sched: Store restrict_cpus_allowed_ptr() call state

From: Waiman Long
Date: Tue Jan 24 2023 - 15:25:41 EST

On 1/24/23 14:48, Will Deacon wrote:
> Hi Waiman,
>
> [+Thorsten given where we are in the release cycle]
>
> On Fri, Jan 20, 2023 at 09:17:49PM -0500, Waiman Long wrote:
> > The user_cpus_ptr field was originally added by commit b90ca8badbd1
> > ("sched: Introduce task_struct::user_cpus_ptr to track requested
> > affinity"). It was used only by the arm64 arch due to possible
> > asymmetric CPU setup.
> >
> > Since commit 8f9ea86fdf99 ("sched: Always preserve the user requested
> > cpumask"), task_struct::user_cpus_ptr is repurposed to store the user
> > requested cpu affinity specified by sched_setaffinity().
> >
> > This results in a performance regression on an arm64 system when booted
> > with "allow_mismatched_32bit_el0" on the command-line. The arch code will
> > (amongst other things) call force_compatible_cpus_allowed_ptr() and
> > relax_compatible_cpus_allowed_ptr() when exec()'ing a 32-bit or a 64-bit
> > task respectively. Now a call to relax_compatible_cpus_allowed_ptr()
> > will always result in a __sched_setaffinity() call whether or not there
> > was a previous force_compatible_cpus_allowed_ptr() call.
> I'd argue it's more than just a performance regression -- the affinity
> masks are set incorrectly, which is a user-visible thing
> (i.e. sched_getaffinity() gives unexpected values).
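Just to recap the intent of the patch: restrict_cpus_allowed_ptr() would
record in the task that its affinity mask has been forcibly restricted, so
that relax_compatible_cpus_allowed_ptr() can return early when there was no
prior restriction. Roughly along these lines (a simplified sketch only, not
the actual diff; the flag and helper names are illustrative):

/*
 * Sketch only, not the actual diff.  p->cpus_allowed_restricted is an
 * illustrative flag; the real field name and locking may differ.
 */

/* Set from restrict_cpus_allowed_ptr() when the mask is narrowed. */
static inline void mark_cpus_allowed_restricted(struct task_struct *p)
{
	p->cpus_allowed_restricted = 1;
}

void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
{
	/*
	 * If force_compatible_cpus_allowed_ptr() never restricted this
	 * task, there is nothing to undo and the relatively expensive
	 * __sched_setaffinity() call can be skipped entirely.
	 */
	if (!p->cpus_allowed_restricted)
		return;

	p->cpus_allowed_restricted = 0;

	/* Otherwise restore the cpuset/user mask via __sched_setaffinity(). */
}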

Can you elaborate a bit more on what you mean by getting unexpected sched_getaffinity() results? Do you mean that the result is wrong after a relax_compatible_cpus_allowed_ptr() call?

sched_getaffinity() just returns whatever is in cpus_mask. Normally, that is whatever cpus are allowed by the current cpuset, unless sched_setaffinity() has been called before. So after a call to relax_compatible_cpus_allowed_ptr(), the mask should revert to the cpus allowed by the current cpuset. If sched_setaffinity() has been called, it should revert to the intersection of the current cpuset's cpus and user_cpus_ptr.
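
In case it helps, a trivial way to observe this from userspace is to just
dump the mask that sched_getaffinity() reports, e.g. before and after the
32-bit -> 64-bit exec sequence or around a sched_setaffinity() call
(illustrative test program only, not part of the patch):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t mask;
	int cpu;

	/* Query the affinity mask of the current task. */
	if (sched_getaffinity(0, sizeof(mask), &mask)) {
		perror("sched_getaffinity");
		return 1;
	}

	printf("allowed cpus:");
	for (cpu = 0; cpu < CPU_SETSIZE; cpu++) {
		if (CPU_ISSET(cpu, &mask))
			printf(" %d", cpu);
	}
	printf("\n");
	return 0;
}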

Cheers,
Longman