Re: divide error: 0000 [#1] SMP in task_numa_migrate - handle_mm_fault vanilla 4.4.6

From: Campbell Steven
Date: Wed Jul 06 2016 - 19:20:42 EST


On 22 June 2016 at 18:13, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Wed, Jun 22, 2016 at 01:19:54PM +1200, Campbell Steven wrote:
>> >>>>>>> This suggests the CONFIG_FAIR_GROUP_SCHED version of task_h_load:
>> >>>>>>>
>> >>>>>>>	update_cfs_rq_h_load(cfs_rq);
>> >>>>>>>	return div64_ul(p->se.avg.load_avg * cfs_rq->h_load,
>> >>>>>>>			cfs_rq_load_avg(cfs_rq) + 1);
>> >>>>>>>
>
>
> ---
> commit 8974189222159154c55f24ddad33e3613960521a
> Author: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Date: Thu Jun 16 10:50:40 2016 +0200
>
> sched/fair: Fix cfs_rq avg tracking underflow
>
> As per commit:
>
> b7fa30c9cc48 ("sched/fair: Fix post_init_entity_util_avg() serialization")
>
> > the code generated from update_cfs_rq_load_avg():
> >
> >	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
> >		s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
> >		sa->load_avg = max_t(long, sa->load_avg - r, 0);
> >		sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
> >		removed_load = 1;
> >	}


Hi Peter,

I just wanted to report back to say thanks for this. We (and others)
have tested it on 4.7-rc6 and have not been able to reproduce the
issue. It seems that anyone running busy Ceph OSDs or high-load KVM
instances can trigger this on a dual-socket box pretty easily.
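
For anyone else chasing this, here's my rough understanding of the
failure as a minimal userspace sketch; this is illustrative only, not
the kernel code, and the values are picked to show the worst case
while the actual interleaving that causes the underflow is elided.
The idea is that a racing removal can subtract more load than the
unsigned load_avg holds, wrapping it around, and task_h_load() then
divides by "load + 1", which itself wraps to zero when load_avg has
underflowed to ULONG_MAX:

	#include <stdio.h>

	int main(void)
	{
		unsigned long load_avg = 5;	/* stand-in for cfs_rq->avg.load_avg */
		unsigned long removed  = 6;	/* racing removal takes out more than is there */

		/* Pre-fix behaviour: the subtraction underflows the unsigned value. */
		load_avg -= removed;		/* wraps to ULONG_MAX */

		/* task_h_load() divides by (load + 1); ULONG_MAX + 1 wraps to 0. */
		unsigned long divisor = load_avg + 1;
		printf("load_avg=%lu divisor=%lu\n", load_avg, divisor);

		/*
		 * In the kernel a zero divisor here is the reported divide
		 * error; the fix's sub_positive() clamps the subtraction at
		 * zero instead, so the divisor can never reach zero.
		 */
		return 0;
	}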

Since these early reports from Stefan and me, it looks like it's been
hit by a lot more folks, so I'd like to ask what the process is for
getting this backported into 4.6, 4.5, and 4.4, as in our testing the
latest point releases of all those versions seem to have the same
problem.

Thanks

Campbell