Re: [PATCH v9 00/12] Support PPTT for ARM64

From: Morten Rasmussen
Date: Wed May 30 2018 - 04:52:41 EST


On Tue, May 29, 2018 at 05:50:47PM +0200, Geert Uytterhoeven wrote:
> Hi Sudeep,
>
> On Tue, May 29, 2018 at 3:18 PM, Sudeep Holla <sudeep.holla@xxxxxxx> wrote:
> > On 29/05/18 12:56, Geert Uytterhoeven wrote:
> >> On Tue, May 29, 2018 at 1:14 PM, Sudeep Holla <sudeep.holla@xxxxxxx> wrote:
> >>> On 29/05/18 11:48, Geert Uytterhoeven wrote:
> >>>> On Thu, May 17, 2018 at 7:05 PM, Catalin Marinas
> >>>> <catalin.marinas@xxxxxxx> wrote:
> >>>>> On Fri, May 11, 2018 at 06:57:55PM -0500, Jeremy Linton wrote:
> >>>>>> Jeremy Linton (12):
> >>>>>> arm64: topology: divorce MC scheduling domain from core_siblings
> >>>>>
> >>>>> Queued for 4.18 (without Sudeep's latest property_read_u64 cacheinfo
> >>>>> patch - http://lkml.kernel.org/r/20180517154701.GA20281@e107155-lin; I
> >>>>> can add it separately).
> >>>>
> >>>> This is now commit 37c3ec2d810f87ea ("arm64: topology: divorce MC
> >>>> scheduling domain from core_siblings") in arm64/for-next/core, causing
> >>>> system suspend on big.LITTLE systems to hang after shutting down the first
> >>>> CPU:
> >>>>
> >>>> $ echo mem > /sys/power/state
> >>>> PM: suspend entry (deep)
> >>>> PM: Syncing filesystems ... done.
> >>>> Freezing user space processes ... (elapsed 0.001 seconds) done.
> >>>> OOM killer disabled.
> >>>> Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
> >>>> Disabling non-boot CPUs ...
> >>>> CPU1: shutdown
> >>>> psci: CPU1 killed.
> >>>
> >>> Is it OK to assume that suspend failed just after shutting down one
> >>> CPU, or is it failing during resume? It depends on whether you had
> >>> console suspend disabled or not.
> >>
> >> I have no-console-suspend enabled.
> >> It's failing during suspend; the next lines should be:
> >>
> >> CPU2: shutdown
> >> psci: CPU2 killed.
> >> ...
> >
> > OK, I was hoping it would be something during resume, as this patch has
> > nothing that executes during suspend. Do you see any change in topology
> > before and after this patch is applied? I am interested in the output of:
> >
> > $ grep "" /sys/devices/system/cpu/cpu*/topology/*
>
> /sys/devices/system/cpu/cpu0/topology/core_id:0
> /sys/devices/system/cpu/cpu0/topology/core_siblings:0f
> /sys/devices/system/cpu/cpu0/topology/core_siblings_list:0-3
> /sys/devices/system/cpu/cpu0/topology/physical_package_id:0
> /sys/devices/system/cpu/cpu0/topology/thread_siblings:01
> /sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0
> /sys/devices/system/cpu/cpu1/topology/core_id:1
> /sys/devices/system/cpu/cpu1/topology/core_siblings:0f
> /sys/devices/system/cpu/cpu1/topology/core_siblings_list:0-3
> /sys/devices/system/cpu/cpu1/topology/physical_package_id:0
> /sys/devices/system/cpu/cpu1/topology/thread_siblings:02
> /sys/devices/system/cpu/cpu1/topology/thread_siblings_list:1
> /sys/devices/system/cpu/cpu2/topology/core_id:2
> /sys/devices/system/cpu/cpu2/topology/core_siblings:0f
> /sys/devices/system/cpu/cpu2/topology/core_siblings_list:0-3
> /sys/devices/system/cpu/cpu2/topology/physical_package_id:0
> /sys/devices/system/cpu/cpu2/topology/thread_siblings:04
> /sys/devices/system/cpu/cpu2/topology/thread_siblings_list:2
> /sys/devices/system/cpu/cpu3/topology/core_id:3
> /sys/devices/system/cpu/cpu3/topology/core_siblings:0f
> /sys/devices/system/cpu/cpu3/topology/core_siblings_list:0-3
> /sys/devices/system/cpu/cpu3/topology/physical_package_id:0
> /sys/devices/system/cpu/cpu3/topology/thread_siblings:08
> /sys/devices/system/cpu/cpu3/topology/thread_siblings_list:3
> /sys/devices/system/cpu/cpu4/topology/core_id:0
> /sys/devices/system/cpu/cpu4/topology/core_siblings:f0
> /sys/devices/system/cpu/cpu4/topology/core_siblings_list:4-7
> /sys/devices/system/cpu/cpu4/topology/physical_package_id:1
> /sys/devices/system/cpu/cpu4/topology/thread_siblings:10
> /sys/devices/system/cpu/cpu4/topology/thread_siblings_list:4
> /sys/devices/system/cpu/cpu5/topology/core_id:1
> /sys/devices/system/cpu/cpu5/topology/core_siblings:f0
> /sys/devices/system/cpu/cpu5/topology/core_siblings_list:4-7
> /sys/devices/system/cpu/cpu5/topology/physical_package_id:1
> /sys/devices/system/cpu/cpu5/topology/thread_siblings:20
> /sys/devices/system/cpu/cpu5/topology/thread_siblings_list:5
> /sys/devices/system/cpu/cpu6/topology/core_id:2
> /sys/devices/system/cpu/cpu6/topology/core_siblings:f0
> /sys/devices/system/cpu/cpu6/topology/core_siblings_list:4-7
> /sys/devices/system/cpu/cpu6/topology/physical_package_id:1
> /sys/devices/system/cpu/cpu6/topology/thread_siblings:40
> /sys/devices/system/cpu/cpu6/topology/thread_siblings_list:6
> /sys/devices/system/cpu/cpu7/topology/core_id:3
> /sys/devices/system/cpu/cpu7/topology/core_siblings:f0
> /sys/devices/system/cpu/cpu7/topology/core_siblings_list:4-7
> /sys/devices/system/cpu/cpu7/topology/physical_package_id:1
> /sys/devices/system/cpu/cpu7/topology/thread_siblings:80
> /sys/devices/system/cpu/cpu7/topology/thread_siblings_list:7
>
> No change before/after (both match my view of the hardware).

There shouldn't be any change in the reported topology with this patch,
as the topology_* functions are not touched by the patch.

The patch should only affect the topology used by the scheduler, which
isn't necessarily the same as the user-space visible one.
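
For reference, one way to compare the scheduler's own view before and
after the patch (a rough sketch, assuming the kernel is built with
CONFIG_SCHED_DEBUG=y; the exact paths, domain names and levels depend on
kernel config and platform):

$ # Per-CPU scheduler domain names (e.g. MC, DIE)
$ grep "" /proc/sys/kernel/sched_domain/cpu*/domain*/name

Booting with the "sched_debug" command-line option also makes the kernel
print the span of each domain ("CPU0 attaching sched-domain(s): ...")
whenever the domains are rebuilt, including across the CPU hotplug that
happens during suspend/resume.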

Morten