Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

From: Borislav Petkov
Date: Fri Mar 14 2014 - 05:06:32 EST


On Thu, Mar 13, 2014 at 07:43:01PM +0800, Daniel J Blueman wrote:
> For systems with multiple servers and routed fabric, all northbridges get
> assigned to the first server. Fix this by also using the node reported from
> the PCI bus. For single-fabric systems, the northbridges are on PCI bus 0
> by definition, which is on NUMA node 0 by definition, so this is invariant
> on most systems.
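(For readers outside the thread: the fix described above amounts to deriving the NUMA node from the device's PCI bus instead of leaving the default. A minimal sketch of the idea, assuming a hypothetical fixup name and eliding the rest of the quirk; this is not the actual patch:

	/* Hypothetical illustration of the approach, not the submitted patch:
	 * take the NUMA node that the PCI bus itself reports, so that on a
	 * routed multi-fabric system northbridges land on the right node.
	 */
	static void quirk_amd_nb_node(struct pci_dev *dev)
	{
		/* ... existing quirk logic ... */
		set_dev_node(&dev->dev, pcibus_to_node(dev->bus));
	}

On a single-fabric system pcibus_to_node() returns 0 for bus 0, so behavior there is unchanged.)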

Yeah, I think this is of very low risk for !Numascale setups. :-) So

Acked-by: Borislav Petkov <bp@xxxxxxx>

> Tested on fam10h and fam15h single and multi-fabric systems and candidate
> for stable.

I'm not sure about it - without the fix we only report the wrong node,
right? Does anything depend on that node setting being correct and
actually break because of this?

Thanks.

--
Regards/Gruss,
Boris.

Sent from a fat crate under my desk. Formatting is fine.