Re: [PATCH v4] x86/platform/uv: UV support for sub-NUMA clustering

From: Steve Wahl
Date: Thu Mar 23 2023 - 17:46:53 EST


Dave,

Thank you for taking the time to review. I will create another
version, doing my best to clarify and simplify.

--> Steve

On Thu, Mar 23, 2023 at 09:26:50AM -0700, Dave Hansen wrote:
> On 2/24/23 14:59, Steve Wahl wrote:
> > + if (np) {
> > + s = ((i * 64) + __ffs(np)) & s_mask;
> > + if (sock_min > s)
> > + sock_min = s;
> > + s = ((i * 64) + __fls(np)) & s_mask;
> > + if (sock_max < s)
> > + sock_max = s;
> > + }
> > }
> > }
> > if (UVH_NODE_PRESENT_0) {
> > np = uv_read_local_mmr(UVH_NODE_PRESENT_0);
> > pr_info("UV: NODE_PRESENT_0 = 0x%016lx\n", np);
> > - uv_pb += hweight64(np);
> > + if (np) {
> > + s = __ffs(np) & s_mask;
> > + if (sock_min > s)
> > + sock_min = s;
> > + s = __fls(np) & s_mask;
> > + if (sock_max < s)
> > + sock_max = s;
> > + }
> > }
> > if (UVH_NODE_PRESENT_1) {
> > np = uv_read_local_mmr(UVH_NODE_PRESENT_1);
> > pr_info("UV: NODE_PRESENT_1 = 0x%016lx\n", np);
> > - uv_pb += hweight64(np);
> > + if (np) {
> > + s = (64 + __ffs(np)) & s_mask;
> > + if (sock_min > s)
> > + sock_min = s;
> > + s = (64 + __fls(np)) & s_mask;
> > + if (sock_max < s)
> > + sock_max = s;
> > + }
> > + }
>
> I guess this patch is modifying code that very few people care about.
> But this is kind of a mess. This patch does a *TON* of different things
> and makes almost no effort to refactor the code before diving into the
> changes.
>
> I quoted the above text because whatever that code is doing, it's
> gloriously uncommented. That kind of thing demands a helper even if
> it's just used once so that a reader can have _some_ idea what it's doing
> logically.
>
> Could you please take another pass at this? I think it demands to be
> broken up into at _least_ 5-10 individual patches.
>
> For instance, you could introduce and use uv_pnode_to_socket() in one patch.
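
For illustration, a helper along these lines could absorb the repeated
NODE_PRESENT scan (a rough sketch only, not the final code:
uv_pnode_to_socket() is the name suggested above, the other name is
invented here, and s_mask is assumed to be reachable at this scope
rather than a local):

/*
 * Sketch only: map a pnode to its socket by masking off the
 * sub-NUMA bits, assuming s_mask is visible at this scope.
 */
static int __init uv_pnode_to_socket(int pnode)
{
	return pnode & s_mask;
}

/*
 * Sketch only: fold the repeated min/max socket bookkeeping into one
 * place.  "base" is the bit offset of this NODE_PRESENT word
 * (0, 64, i * 64, ...).
 */
static void __init uv_node_present_to_sockets(unsigned long np, int base,
					      int *sock_min, int *sock_max)
{
	int s;

	if (!np)
		return;

	s = uv_pnode_to_socket(base + __ffs(np));
	if (*sock_min > s)
		*sock_min = s;

	s = uv_pnode_to_socket(base + __fls(np));
	if (*sock_max < s)
		*sock_max = s;
}

Each of the three call sites above would then collapse to something
like uv_node_present_to_sockets(np, i * 64, &sock_min, &sock_max);
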
>
> Or this:
>
> > - nasid_mask = UVH_RH_GAM_MMIOH_OVERLAY_CONFIG0_BASE_MASK;
> > + nasid_mask =
> > + is_uv(UV4A) ? UV4AH_RH_GAM_MMIOH_REDIRECT_CONFIG0_NASID_MASK :
> > + is_uv(UV4) ? UV4H_RH_GAM_MMIOH_REDIRECT_CONFIG0_NASID_MASK :
> > + UV3H_RH_GAM_MMIOH_REDIRECT_CONFIG0_NASID_MASK;
> > n = UVH_RH_GAM_MMIOH_REDIRECT_CONFIG0_DEPTH;
> > min_nasid = min_pnode * 2;
> > max_nasid = max_pnode * 2;
> > @@ -1046,7 +1050,10 @@ static void __init calc_mmioh_map(enum mmioh_arch index,
> > break;
> > case UVX_MMIOH1:
> > mmr = UVH_RH_GAM_MMIOH_REDIRECT_CONFIG1;
> > - nasid_mask = UVH_RH_GAM_MMIOH_OVERLAY_CONFIG1_BASE_MASK;
> > + nasid_mask =
> > + is_uv(UV4A) ? UV4AH_RH_GAM_MMIOH_REDIRECT_CONFIG0_NASID_MASK :
> > + is_uv(UV4) ? UV4H_RH_GAM_MMIOH_REDIRECT_CONFIG1_NASID_MASK :
> > + UV3H_RH_GAM_MMIOH_REDIRECT_CONFIG1_NASID_MASK;
>
> That could use a helper to reduce the duplication and add clarity, and
> it could be done in a separate patch.
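
For illustration, the mask selection could be pulled into one spot
along these lines (a sketch only; the helper name is invented, and the
selection simply mirrors the quoted hunks, including the CONFIG0 mask
being used for UV4A in both cases):

/* Sketch only: pick the MMIOH redirect NASID mask for this UV hub. */
static unsigned long __init uvx_mmioh_nasid_mask(int index)
{
	if (index == 0)
		return is_uv(UV4A) ? UV4AH_RH_GAM_MMIOH_REDIRECT_CONFIG0_NASID_MASK :
		       is_uv(UV4)  ? UV4H_RH_GAM_MMIOH_REDIRECT_CONFIG0_NASID_MASK :
				     UV3H_RH_GAM_MMIOH_REDIRECT_CONFIG0_NASID_MASK;

	return is_uv(UV4A) ? UV4AH_RH_GAM_MMIOH_REDIRECT_CONFIG0_NASID_MASK :
	       is_uv(UV4)  ? UV4H_RH_GAM_MMIOH_REDIRECT_CONFIG1_NASID_MASK :
			     UV3H_RH_GAM_MMIOH_REDIRECT_CONFIG1_NASID_MASK;
}

The two case arms would then reduce to
nasid_mask = uvx_mmioh_nasid_mask(0); and uvx_mmioh_nasid_mask(1);
respectively.
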

--
Steve Wahl, Hewlett Packard Enterprise