Re: [PATCH v15 21/34] x86/resctrl: Refactor resctrl_arch_rmid_read()

From: Reinette Chatre
Date: Thu Jul 17 2025 - 23:51:43 EST


Hi Babu,

On 7/8/25 3:17 PM, Babu Moger wrote:
> resctrl_arch_rmid_read() modifies the value obtained from MSR_IA32_QM_CTR
> to account for overflow. This adjustment is common to both

The portion factored out does not just handle MBM overflow counts but also
handles counter scaling for *all* events, including cache occupancy.

> resctrl_arch_rmid_read() and resctrl_arch_cntr_read().
>
> To prepare for the implementation of resctrl_arch_cntr_read(), refactor
> this logic into a new function called mbm_corrected_val().

The helper thus cannot be given an MBM-specific name. A more accurate
name may be get_corrected_val().
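That is, keeping the body unchanged and only renaming, something like:

static u64 get_corrected_val(struct rdt_resource *r, struct rdt_mon_domain *d,
                             u32 rmid, enum resctrl_event_id eventid, u64 msr_val)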

>
> Signed-off-by: Babu Moger <babu.moger@xxxxxxx>
> ---
> v15: New patch to add arch calls resctrl_arch_cntr_read() and resctrl_arch_reset_cntr()
> with mbm_event mode.
> https://lore.kernel.org/lkml/b4b14670-9cb0-4f65-abd5-39db996e8da9@xxxxxxxxx/
> ---
> arch/x86/kernel/cpu/resctrl/monitor.c | 38 ++++++++++++++++-----------
> 1 file changed, 23 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 017f3b8e28f9..a230d98e9d73 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -217,15 +217,33 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_msr, unsigned int width)
>          return chunks >> shift;
>  }
>
> +static u64 mbm_corrected_val(struct rdt_resource *r, struct rdt_mon_domain *d,
> +                             u32 rmid, enum resctrl_event_id eventid, u64 msr_val)
> +{
> +        struct rdt_hw_mon_domain *hw_dom = resctrl_to_arch_mon_dom(d);
> +        struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
> +        struct arch_mbm_state *am;
> +        u64 chunks;
> +
> +        am = get_arch_mbm_state(hw_dom, rmid, eventid);
> +        if (am) {

These are MBM counter adjustments.

> +                am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
> +                                hw_res->mbm_width);

The continuation line above can be aligned to the open parenthesis.
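That is, matching the alignment used by the code being moved:

                am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
                                                 hw_res->mbm_width);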

> +                chunks = get_corrected_mbm_count(rmid, am->chunks);
> +                am->prev_msr = msr_val;
> +        } else {

Cache occupancy handled here.

> +                chunks = msr_val;
> +        }
> +
Both MBM and cache occupancy scaled below:
> +        return chunks * hw_res->mon_scale;
> +}
> +
>  int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *d,
>                             u32 unused, u32 rmid, enum resctrl_event_id eventid,
>                             u64 *val, void *ignored)
>  {
> -        struct rdt_hw_mon_domain *hw_dom = resctrl_to_arch_mon_dom(d);
> -        struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
>          int cpu = cpumask_any(&d->hdr.cpu_mask);
> -        struct arch_mbm_state *am;
> -        u64 msr_val, chunks;
> +        u64 msr_val;
>          u32 prmid;
>          int ret;
>
> @@ -236,17 +254,7 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *d,
>          if (ret)
>                  return ret;
>
> -        am = get_arch_mbm_state(hw_dom, rmid, eventid);
> -        if (am) {
> -                am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
> -                                                 hw_res->mbm_width);
> -                chunks = get_corrected_mbm_count(rmid, am->chunks);
> -                am->prev_msr = msr_val;
> -        } else {
> -                chunks = msr_val;
> -        }
> -
> -        *val = chunks * hw_res->mon_scale;
> +        *val = mbm_corrected_val(r, d, rmid, eventid, msr_val);
>
>          return 0;
>  }
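
Just to confirm how this is expected to be used: I assume the upcoming
resctrl_arch_cntr_read() will end the same way once it has obtained a raw
counter value, for example (a rough sketch only, using the suggested
get_corrected_val() name; the actual counter read is whatever the later
patch introduces):

        *val = get_corrected_val(r, d, rmid, eventid, msr_val);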

Reinette