Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

From: Daniel J Blueman
Date: Fri Mar 14 2014 - 05:58:01 EST


Hi Boris,

On 14/03/2014 17:06, Borislav Petkov wrote:
> On Thu, Mar 13, 2014 at 07:43:01PM +0800, Daniel J Blueman wrote:
> > For systems with multiple servers and routed fabric, all northbridges get
> > assigned to the first server. Fix this by also using the node reported from
> > the PCI bus. For single-fabric systems, the northbridges are on PCI bus 0
> > by definition, which are on NUMA node 0 by definition, so this is invariant
> > on most systems.
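
For reference, the change amounts to something like the sketch below,
assuming the quirk in question is quirk_amd_nb_node() in
arch/x86/kernel/quirks.c; treat it as an illustration of the approach
rather than the literal diff:

/*
 * Sketch, assuming the quirk is quirk_amd_nb_node() in
 * arch/x86/kernel/quirks.c: the HT node ID in the low bits of config
 * register 0x60 only identifies the node within the local fabric, so
 * combine it with the NUMA node of the PCI bus the device sits on.
 * That way northbridges behind a routed fabric land on their own
 * server instead of all mapping to node 0.
 */
static void quirk_amd_nb_node(struct pci_dev *dev)
{
	struct pci_dev *nb_ht;
	unsigned int devfn;
	u32 node;
	u32 val;

	devfn = PCI_DEVFN(PCI_SLOT(dev->devfn), 0);
	nb_ht = pci_get_slot(dev->bus, devfn);
	if (!nb_ht)
		return;

	pci_read_config_dword(nb_ht, 0x60, &val);
	/* previously just "val & 7", i.e. local-fabric node only */
	node = pcibus_to_node(dev->bus) | (val & 7);
	if (node_online(node))
		set_dev_node(&dev->dev, node);

	pci_dev_put(nb_ht);
}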

> Yeah, I think this is of very low risk for !Numascale setups. :-) So
>
> Acked-by: Borislav Petkov <bp@xxxxxxx>

> > Tested on fam10h and fam15h single and multi-fabric systems and candidate
> > for stable.

> I'm not sure about it - this is only reporting the wrong node, right?
> Does anything depend on that node setting being correct and break due
> to this?

It's only reporting the wrong node, yes. The irqbalance daemon reads
/sys/devices/.../numa_node, and we found we have to disable it to prevent
hangs on certain systems after a while. I haven't established a definite
link yet, but while investigating I did find this value to be incorrect.
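
To illustrate the exposure, here is a hypothetical userspace sketch (not
irqbalance's actual code) of how a consumer picks up the kernel's idea of
a device's NUMA node from that sysfs attribute; the device path in the
usage comment is only an example:

/* Hypothetical sketch: read the numa_node attribute of a sysfs device
 * directory, the same value irqbalance consults when deciding where to
 * steer interrupts. Not irqbalance's actual code.
 * Usage: ./numa_node /sys/devices/pci0000:00/0000:00:18.0  (example path)
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[512];
	int node = -1;
	FILE *f;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <sysfs device dir>\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "%s/numa_node", argv[1]);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%d", &node) != 1)
		node = -1;
	fclose(f);

	/* -1 means "no affinity known"; any other value biases balancing
	 * decisions toward that node, which is why a wrong value matters. */
	printf("%s -> NUMA node %d\n", argv[1], node);
	return 0;
}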

Thanks,
Daniel
--
Daniel J Blueman
Principal Software Engineer, Numascale