Re: [BUG] sched: big numa dynamic sched domain memory corruption

From: Siddha, Suresh B
Date: Mon Jul 31 2006 - 12:13:11 EST


On Mon, Jul 31, 2006 at 09:12:42AM +0200, Ingo Molnar wrote:
>
> * Paul Jackson <pj@xxxxxxx> wrote:
>
> > @@ -5675,12 +5675,13 @@ void build_sched_domains(const cpumask_t
> > int group;
> > struct sched_domain *sd = NULL, *p;
> > cpumask_t nodemask = node_to_cpumask(cpu_to_node(i));
> > + int cpus_per_node = cpus_weight(nodemask);
> >
> > cpus_and(nodemask, nodemask, *cpu_map);
> >
> > #ifdef CONFIG_NUMA
> > - if (cpus_weight(*cpu_map)
> > - > SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
> > + if (cpus_weight(cpu_online_map)
> > + > SD_NODES_PER_DOMAIN*cpus_per_node) {
> > if (!sched_group_allnodes) {
> > sched_group_allnodes
> > = kmalloc(sizeof(struct sched_group)
>
> even if the bug is not fully understood in time, i think we should queue
> the patch above for v2.6.18. (with the small nit that you should put the

I believe this problem doesn't happen with the current mainline code.
Paul, can you please test mainline and confirm? After going through the
SLES10 code and the current mainline code, my understanding is that SLES10
has this bug but mainline does not.

thanks,
suresh