Re: [PATCH] PM: QoS: Add support for CPU affinity mask-based CPUs latency QoS

From: Zhongqiu Han
Date: Fri Apr 25 2025 - 07:24:37 EST


On 4/24/2025 6:25 PM, Christian Loehle wrote:
On 4/24/25 10:52, Zhongqiu Han wrote:
Currently, the PM QoS framework supports global CPU latency QoS and
per-device CPU latency QoS requests. An example of using the global CPU
latency QoS is commit 2777e73fc154 ("scsi: ufs: core: Add CPU latency
QoS support for UFS driver"), which improved random I/O performance by
15% for UFS on a specific platform.
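
For reference, the global interface is used roughly like this in a
driver (a minimal sketch of that pattern, not the commit's exact code;
cpu_latency_qos_add/remove_request() are the existing kernel API, the
surrounding names are made up):

#include <linux/pm_qos.h>

static struct pm_qos_request io_qos_req;

static void io_burst_start(void)
{
        /* Cap CPU wakeup latency at 20 us, on every CPU in the system. */
        cpu_latency_qos_add_request(&io_qos_req, 20);
}

static void io_burst_stop(void)
{
        /* Drop the constraint so all CPUs may enter deep C-states again. */
        cpu_latency_qos_remove_request(&io_qos_req);
}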

However, a global request prevents all CPUs in the system from entering
C-states. Typically, threads or drivers know which specific CPUs they
are interested in. For example, drivers with IRQ affinity only want
interrupts to wake up and be handled on specific CPUs. Similarly, kernel
threads bound to specific CPUs through their affinity only care about
the latency of those particular CPUs.
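
As a concrete illustration (the helper and CPU numbers are made up;
irq_set_affinity_hint() is the existing kernel API), such a driver
already narrows its interrupt to a few CPUs and would ideally only
constrain those:

#include <linux/cpumask.h>
#include <linux/interrupt.h>

static struct cpumask irq_cpus;

static void pin_driver_irq(unsigned int irq)
{
        int cpu;

        /* Deliver this interrupt on CPUs 0-3 only; a latency constraint
         * on CPUs 4-7 would bring no benefit to this driver. */
        for (cpu = 0; cpu <= 3; cpu++)
                cpumask_set_cpu(cpu, &irq_cpus);
        irq_set_affinity_hint(irq, &irq_cpus);
}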

This patch introduces support for CPU latency QoS on a subset of CPUs
selected by a CPU affinity mask, allowing flexible and more precise
latency QoS settings for specific CPUs. This can help save power,
especially on heterogeneous platforms with big and little cores, as
well as on power-conscious embedded systems. For example:

                   driver A       rt kthread B      module C
QoS cpu mask:        0-3              2-5             6-7
target latency:      20               30              50
                      |                |               |
                      v                v               v
              +---------------------------------+
              |        PM QoS Framework         |
              +---------------------------------+
                      |                |               |
                      v                v               v
cpu mask:            0-3           2-3, 4-5           6-7
actual latency:      20             20, 30             50

Implement this support based on per-device CPU latency PM QoS.
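
As a sketch of how a caller might use such an interface (the mask-based
function name and signature below are illustrative assumptions, not
necessarily what the patch defines), driver A from the diagram could do:

#include <linux/cpumask.h>
#include <linux/pm_qos.h>

static struct pm_qos_request drv_a_req;
static struct cpumask drv_a_cpus;

static void driver_a_add_qos(void)
{
        int cpu;

        /* Request a 20 us wakeup-latency cap on CPUs 0-3 only;
         * cpus_latency_qos_add_request() is a hypothetical name for
         * the mask-based variant this patch introduces. */
        for (cpu = 0; cpu <= 3; cpu++)
                cpumask_set_cpu(cpu, &drv_a_cpus);
        cpus_latency_qos_add_request(&drv_a_req, &drv_a_cpus, 20);
}

On CPUs 2-3, where this 20 us request overlaps the rt kthread's 30 us
one, the framework would resolve to the stricter 20 us, as shown in the
diagram above.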

Signed-off-by: Zhongqiu Han <quic_zhonhan@xxxxxxxxxxx>

I like the idea!
The interface does need an in-tree user; why not convert the UFS driver?


Thanks, Christian, for the review.

As far as I know, UFS IRQ affinity varies across platforms, so a generic
solution is needed (we need to investigate whether one already exists,
or add a parameter such as intr_mask to represent the IRQ affinity
mask). Let me investigate this, or send other in-tree user patches as a
patch series in PATCH v2 as soon as possible. Thanks.
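
For what it's worth, a conversion of the current ufshcd_pm_qos_init()
(which today calls cpu_latency_qos_add_request(&hba->pm_qos_req, 0))
might look roughly like this, assuming the mask is derived from the
controller IRQ's affinity; the mask-based function name is hypothetical:

#include <linux/irq.h>
#include <ufs/ufshcd.h>        /* struct ufs_hba: ->irq, ->pm_qos_req */

static void ufshcd_pm_qos_init_sketch(struct ufs_hba *hba)
{
        /* Use the controller IRQ's current affinity as the QoS mask;
         * cpus_latency_qos_add_request() is hypothetical, and a real
         * conversion would also need to track affinity changes. */
        const struct cpumask *mask = irq_get_affinity_mask(hba->irq);

        if (mask && !cpumask_empty(mask))
                cpus_latency_qos_add_request(&hba->pm_qos_req, mask, 0);
}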


--
Thx and BRs,
Zhongqiu Han