Re: [PATCH 1/2] sched: Fix "divide error: 0000" in find_busiest_group

From: Mike Galbraith
Date: Tue Jul 19 2011 - 23:32:17 EST


On Wed, 2011-07-20 at 04:29 +0200, Peter Zijlstra wrote:
> On Wed, 2011-07-20 at 04:26 +0200, Mike Galbraith wrote:
> > On Tue, 2011-07-19 at 23:17 +0200, Peter Zijlstra wrote:
> > > On Tue, 2011-07-19 at 14:58 -0600, Terry Loftin wrote:
> > > > Correct the protection expression in update_cpu_power() to avoid setting
> > > > rq->cpu_power to zero.
> > >
> > > Firstly you fail to mention what kernel this is again, secondly this
> > > should never happen in the first place, so this fix is wrong. At best it
> > > papers over another bug.
> > >
> > > > Signed-off-by: Terry Loftin <terry.loftin@xxxxxx>
> > > > Signed-off-by: Bob Montgomery <bob.montgomery@xxxxxx>
> > > > ---
> > > > diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> > > > index 0c26e2d..9c50020 100644
> > > > --- a/kernel/sched_fair.c
> > > > +++ b/kernel/sched_fair.c
> > > > @@ -2549,7 +2549,7 @@ static void update_cpu_power(struct sched_domain *sd, int cpu)
> > > > power *= scale_rt_power(cpu);
> > > > power >>= SCHED_LOAD_SHIFT;
> > > >
> > > > - if (!power)
> > > > + if ((u32)power == 0)
> > > > power = 1;
> > > >
> > > > cpu_rq(cpu)->cpu_power = power;
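
For context on why a plain !power test can miss: on a 64-bit kernel the
local power variable here is 64 bits wide, while (an assumption about the
kernel in question) the group power it ultimately feeds is accumulated in
a 32-bit field. A value whose low 32 bits happen to be all zero then
slips past !power yet divides as zero in find_busiest_group(). A minimal
userspace sketch of that failure mode, with a made-up power value:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* made-up value: non-zero as 64 bits, but the low 32 bits are zero */
	uint64_t power = 1ULL << 32;

	if (!power)			/* original check: does not fire */
		power = 1;

	/* assumed 32-bit accumulator downstream: truncates to 0, and a
	 * later division by it is the "divide error: 0000" */
	uint32_t group_power = (uint32_t)power;
	printf("group_power = %u\n", group_power);

	if ((uint32_t)power == 0)	/* proposed check: fires, clamps to 1 */
		power = 1;
	printf("clamped power = %llu\n", (unsigned long long)power);
	return 0;
}

Which is also why Peter's objection stands: the wider clamp keeps the box
alive, but only by hiding whatever produced such a value in the first place.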
> >
> > I put that (and a bunch more protection+warnings) in an enterprise
> > kernel so it would not explode, but would gather some data. The entire
> > world has been utterly silent, except for a gaggle of POWER7 boxen,
> > which manage to convince scale_rt_power() to return negative values.
> >
> > Turning on PRINTK_TIME made these boxen go silent: merely enabling
> > timestamped printk output, with no printk actually firing, hides the
> > problem. Tilt.
>
> Did those kernels contain the scale_rt_power() hunk from commit
> aa483808516ca5cacfa0e5849691f64fec25828e? Venki thought that might cure
> your woes, but since we never could reproduce...

Yeah, that commit is present.
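
For readers without that tree handy: the hunk in question guards the
subtraction in scale_rt_power() so that it can no longer go "negative".
Below is a rough userspace model of my understanding of that function
(field names, constants and toy numbers are illustrative, not the kernel
code): available is the sampling window minus the time eaten by RT tasks,
and if rt_avg ever exceeds the window the unguarded u64 subtraction wraps,
so the function returns an enormous value that prints as negative when
viewed signed, which matches the negative values reported above from the
POWER7 boxen. The commit's clamp returns 0 instead, letting the existing
power = 1 fixup do its job.

#include <stdint.h>
#include <stdio.h>

#define SCHED_LOAD_SHIFT	10
#define SCHED_LOAD_SCALE	(1ULL << SCHED_LOAD_SHIFT)

/* toy stand-in for the runqueue fields involved (names illustrative) */
struct fake_rq {
	uint64_t clock;
	uint64_t age_stamp;
	uint64_t rt_avg;
};

/* rough model of scale_rt_power(): fraction of the window left over
 * after RT time, scaled to SCHED_LOAD_SCALE units */
static uint64_t scale_rt_power_model(const struct fake_rq *rq,
				     uint64_t period, int guarded)
{
	uint64_t total = period + (rq->clock - rq->age_stamp);
	uint64_t available;

	if (guarded && total < rq->rt_avg)
		available = 0;			/* the commit's clamp */
	else
		available = total - rq->rt_avg;	/* wraps if rt_avg > total */

	if ((int64_t)total < (int64_t)SCHED_LOAD_SCALE)
		total = SCHED_LOAD_SCALE;

	return available / (total >> SCHED_LOAD_SHIFT);
}

int main(void)
{
	/* toy numbers: rt_avg just barely exceeds the sampling window */
	struct fake_rq rq = { .clock = 2000, .age_stamp = 1000, .rt_avg = 1501 };
	uint64_t bad = scale_rt_power_model(&rq, 500, 0);
	uint64_t ok  = scale_rt_power_model(&rq, 500, 1);

	printf("unguarded: %llu (%lld printed signed)\n",
	       (unsigned long long)bad, (long long)bad);
	printf("guarded:   %llu\n", (unsigned long long)ok);
	return 0;
}

If that underflow is what the POWER7 boxen are hitting, the huge result
multiplied back into power in update_cpu_power() could plausibly be how
cpu_power ends up at zero, which would make the commit the real fix and
the (u32) clamp above only a band-aid.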

-Mike

