Re: [PATCH v9 00/12] Support PPTT for ARM64

From: Jeremy Linton
Date: Tue May 29 2018 - 16:48:43 EST


On 05/29/2018 03:16 PM, Will Deacon wrote:
Hi Geert,

On Tue, May 29, 2018 at 05:51:29PM +0200, Geert Uytterhoeven wrote:
On Tue, May 29, 2018 at 5:08 PM, Will Deacon <will.deacon@xxxxxxx> wrote:
On Tue, May 29, 2018 at 02:18:40PM +0100, Sudeep Holla wrote:
On 29/05/18 12:56, Geert Uytterhoeven wrote:
On Tue, May 29, 2018 at 1:14 PM, Sudeep Holla <sudeep.holla@xxxxxxx> wrote:
On 29/05/18 11:48, Geert Uytterhoeven wrote:
System suspend still works fine on systems with big cores only:

R-Car H3 ES1.0 (4xCA57 (4xCA53 disabled in firmware))
R-Car M3-N (2xCA57)

Reverting this commit fixes the issue for me.

I can't find anything that relates to system suspend in these patches,
unless they are messing with something when the CPUs are hot-plugged
back in during resume.

It's only the last patch that introduces the breakage.


As specified in the commit log, it won't change any behavior for DT
systems that are non-NUMA or single-node. So I am still wondering
what could trigger this regression.

I wonder if we're somehow giving an uninitialised/invalid NUMA configuration
to the scheduler, although I can't see how this would happen.

Geert -- if you enable CONFIG_DEBUG_PER_CPU_MAPS=y and apply the diff below
do you see anything shouting in dmesg?

Thanks, but unfortunately it doesn't help.
I added some debug code to print the cpumasks, but so far I don't see
anything suspicious.
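
(For reference, printing a mask with the kernel's %*pbl format looks
roughly like this -- a sketch, not the actual debug patch; the mask
passed in is whatever you want to inspect:)

	/* sketch only: dump a cpumask as a CPU list in dmesg */
	pr_info("CPU%d coregroup: %*pbl\n", cpu,
		cpumask_pr_args(cpu_coregroup_mask(cpu)));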

Damn, sorry for wasting your time. For the record, Catalin's been seeing
boot failures under KVM on a non-big/LITTLE machine that bisect reliably
to this patch, but we've also not been able to explain them. Worse, adding
so much as a printk makes the problem disappear.



I was about to post a patch to remove the NUMA check when CONFIG_NUMA is disabled, but that seems pointless if it's also happening with NUMA enabled. So, assuming it's the removal of the core from the NUMA mask that is causing problems, it looks like numa_clear_node() could cause similar problems when NUMA is enabled. In my case the problem I see is a NULL dereference in __bitmap_intersects() called from select_task_rq_fair(). That said, I only see the problem when CONFIG_NUMA isn't set.
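
The patch I was about to post looks roughly like this (a sketch only --
I'm paraphrasing cpu_coregroup_mask() from memory rather than quoting
the series, so the exact mask selection may differ):

const struct cpumask *cpu_coregroup_mask(int cpu)
{
	const cpumask_t *core_mask = &cpu_topology[cpu].core_sibling;

	/* skip the NUMA-based narrowing entirely when !CONFIG_NUMA */
	if (IS_ENABLED(CONFIG_NUMA)) {
		const cpumask_t *node_mask = cpumask_of_node(cpu_to_node(cpu));

		/* only narrow the MC level to the node if the mask is sane */
		if (cpumask_subset(node_mask, core_mask))
			core_mask = node_mask;
	}

	return core_mask;
}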

So, I've also got another workaround which caches the NUMA node in cpu_topology and only builds it when store_cpu_topology() is called. That should stabilize the NUMA mask and ensure the bitmaps are correct when the scheduler requests them.
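
Roughly (again only a sketch; the field name is made up and the
existing fields/parsing are elided):

/* sketch: cache the NUMA siblings alongside the existing topology masks */
struct cpu_topology {
	/* ... existing ids and thread/core sibling masks ... */
	cpumask_t node_sibling;		/* hypothetical new field */
};

void store_cpu_topology(unsigned int cpuid)
{
	/* ... existing MPIDR/DT/ACPI parsing unchanged ... */

	/* snapshot the node mask once, while it is known to be valid */
	cpumask_copy(&cpu_topology[cpuid].node_sibling,
		     cpumask_of_node(cpu_to_node(cpuid)));
}

cpu_coregroup_mask() would then return the cached mask (or the core
siblings) instead of calling cpumask_of_node() at domain-rebuild time,
so the mask the scheduler sees can't change underneath it.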

Do you guys want that patch, or are we looking for a deeper root cause?