Re: [PATCH v2] sched/numa: advanced per-cgroup numa statistic

From: 王贇 (Michael Wang)
Date: Fri Nov 01 2019 - 21:13:48 EST


Hi, Michal

On 2019/11/2 1:39 AM, Michal Koutný wrote:
> Hello Yun.
>
> On Tue, Oct 29, 2019 at 03:57:20PM +0800, 王贇 <yun.wang@xxxxxxxxxxxxxxxxx> wrote:
>> +static void update_numa_statistics(struct cfs_rq *cfs_rq)
>> +{
>> + int idx;
>> + unsigned long remote = current->numa_faults_locality[3];
>> + unsigned long local = current->numa_faults_locality[4];
>> +
>> + cfs_rq->nstat.jiffies++;
> This statistic effectively doubles what
> kernel/sched/cpuacct.c:cpuacct_charge() does (measuring per-cpu time).
> Hence it seems redundant.

Yes, but since there is no guarantee that the cpu cgroup is always
co-mounted with cpuacct in cgroup v1, we can't rely on that...
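
Just for illustration (a userspace sketch of my own, not part of the
patch), one can check whether cpu and cpuacct actually share a hierarchy
on a given box by parsing /proc/self/cgroup, where each line looks like
"hierarchy-ID:controller-list:path":

/*
 * Userspace sketch (not from the patch): check whether the "cpu" and
 * "cpuacct" controllers are co-mounted in the same cgroup v1 hierarchy,
 * by parsing /proc/self/cgroup lines of the form "ID:controllers:path".
 */
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

static bool cpu_cpuacct_comounted(void)
{
	char line[4096];
	FILE *f = fopen("/proc/self/cgroup", "r");
	bool found = false;

	if (!f)
		return false;

	while (!found && fgets(line, sizeof(line), f)) {
		char *ctrls = strchr(line, ':');
		char *end, *tok;
		bool has_cpu = false, has_cpuacct = false;

		if (!ctrls)
			continue;
		ctrls++;			/* skip the hierarchy ID */
		end = strchr(ctrls, ':');
		if (end)
			*end = '\0';		/* cut off the cgroup path */

		for (tok = strtok(ctrls, ","); tok; tok = strtok(NULL, ",")) {
			if (!strcmp(tok, "cpu"))
				has_cpu = true;
			else if (!strcmp(tok, "cpuacct"))
				has_cpuacct = true;
		}
		found = has_cpu && has_cpuacct;
	}
	fclose(f);
	return found;
}

int main(void)
{
	printf("cpu and cpuacct co-mounted: %s\n",
	       cpu_cpuacct_comounted() ? "yes" : "no");
	return 0;
}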

>
>> +
>> + if (!remote && !local)
>> + return;
>> +
>> + idx = (NR_NL_INTERVAL - 1) * local / (remote + local);
>> + cfs_rq->nstat.locality[idx]++;
> IIUC the mechanism behind the numa_faults_locality values, this statistic only
> estimates the access locality based on NUMA balancing samples, i.e.
> there exists a more precise source of that information.
>
> All in all, I'd concur with Mel's suggestion of external measurement.

Currently NUMA balancing is the only thing I can find that tells the
real story: at least we know that after the PF the task does access the
page from that CPU. Although it can't cover all the cases, it still
gives good hints :-)
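
Just to show how those samples end up in the locality histogram, below
is a tiny standalone sketch of the bucketing that update_numa_statistics()
does; the NR_NL_INTERVAL value and the fault numbers are made up for the
example:

/*
 * Standalone sketch of the bucketing done by update_numa_statistics():
 * the locality ratio is mapped into NR_NL_INTERVAL histogram slots, idx 0
 * meaning almost all remote faults and the last idx almost all local
 * faults.  The interval count and fault numbers are made up here.
 */
#include <stdio.h>

#define NR_NL_INTERVAL	8	/* bucket count, value assumed for this sketch */

static int locality_idx(unsigned long local, unsigned long remote)
{
	if (!local && !remote)
		return -1;	/* no NUMA balancing samples this tick */

	return (NR_NL_INTERVAL - 1) * local / (local + remote);
}

int main(void)
{
	/* (local, remote) hinting fault samples, purely illustrative */
	unsigned long samples[][2] = {
		{ 0, 100 }, { 25, 75 }, { 50, 50 }, { 75, 25 }, { 100, 0 },
	};
	unsigned long hist[NR_NL_INTERVAL] = { 0 };
	int i;

	for (i = 0; i < (int)(sizeof(samples) / sizeof(samples[0])); i++) {
		int idx = locality_idx(samples[i][0], samples[i][1]);

		if (idx >= 0)
			hist[idx]++;
	}

	for (i = 0; i < NR_NL_INTERVAL; i++)
		printf("locality bucket %d: %lu\n", i, hist[i]);

	return 0;
}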

It would be great if we could find more indicators like that, for
example the migration failure counter Mel mentioned, which gives good
hints on memory policy problems and could be used as an external
measurement.
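
For instance (a rough sketch, assuming CONFIG_NUMA_BALANCING is enabled
so that the numa_hint_faults* counters show up in /proc/vmstat), the
system-wide hinting fault locality can already be sampled from the
outside; it's the per-cgroup breakdown that is missing:

/*
 * External-measurement sketch (not from the patch): read the system-wide
 * NUMA hinting fault counters exported in /proc/vmstat and print the
 * overall locality ratio.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[64];
	unsigned long long val, faults = 0, faults_local = 0;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f) {
		perror("open /proc/vmstat");
		return 1;
	}

	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "numa_hint_faults"))
			faults = val;
		else if (!strcmp(name, "numa_hint_faults_local"))
			faults_local = val;
	}
	fclose(f);

	if (faults)
		printf("system-wide hinting fault locality: %llu%%\n",
		       faults_local * 100 / faults);
	else
		printf("no NUMA hinting faults recorded\n");

	return 0;
}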

Regards,
Michael Wang

>
> Michal
>