Re: [PATCH v2 1/3] sched: sync a se with its cfs_rq when attaching and detaching

From: Byungchul Park
Date: Tue Aug 18 2015 - 19:43:00 EST


On Wed, Aug 19, 2015 at 12:32:43AM +0800, T. Zhou wrote:
> Hi,
>
> On Mon, Aug 17, 2015 at 04:45:50PM +0900, byungchul.park@xxxxxxx wrote:
> > From: Byungchul Park <byungchul.park@xxxxxxx>
> >
> > The current code gets a cfs_rq's average loads wrong when moving a
> > task from one cfs_rq to another. I tested with "echo pid > cgroup"
> > and found that, e.g., cfs_rq->avg.load_avg grew larger and larger
> > every time I moved a task from one cgroup to another. We have to
> > sync a se's average loads with both the *prev* cfs_rq and the next
> > cfs_rq when changing its group.
> >
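
[ To illustrate what "sync with both" means, here is a rough sketch of
the detach/attach idea, simplified from the PELT code of that era; the
helper and field names are approximations rather than the literal
patch, and the step that first decays the se up to the cfs_rq's clock
is left out. ]

static void detach_entity_load_avg(struct cfs_rq *cfs_rq,
                                   struct sched_entity *se)
{
        /* take the se's contribution out of the prev cfs_rq */
        cfs_rq->avg.load_avg =
                max_t(long, cfs_rq->avg.load_avg - se->avg.load_avg, 0);
        cfs_rq->avg.util_avg =
                max_t(long, cfs_rq->avg.util_avg - se->avg.util_avg, 0);
}

static void attach_entity_load_avg(struct cfs_rq *cfs_rq,
                                   struct sched_entity *se)
{
        /* make the se's PELT clock match the next cfs_rq ... */
        se->avg.last_update_time = cfs_rq->avg.last_update_time;

        /* ... and add its contribution there */
        cfs_rq->avg.load_avg += se->avg.load_avg;
        cfs_rq->avg.util_avg += se->avg.util_avg;
}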
>
> Just my simple thoughts on the above; they may be nothing, or wrong,
> so feel free to ignore them.
>
> If a load balance migration happened just before the cgroup change,
> the prev cfs_rq and the next cfs_rq will be on different CPUs.
> migrate_task_rq_fair()

Hello,

The two operations, migration and cgroup change, are both performed
with the rq lock held, so that can never happen. :)
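
[ Roughly, the group-change path looks like the following; a
simplified sketch of sched_move_task() in kernel/sched/core.c from
around that time, with the running-task handling omitted: ]

void sched_move_task(struct task_struct *tsk)
{
        int queued;
        unsigned long flags;
        struct rq *rq;

        /*
         * task_rq_lock() takes the task's rq->lock; load balance
         * migration moves tasks under the same rq->lock, so the two
         * paths are serialized against each other.
         */
        rq = task_rq_lock(tsk, &flags);

        queued = task_on_rq_queued(tsk);
        if (queued)
                dequeue_task(rq, tsk, 0);

        if (tsk->sched_class->task_move_group)
                tsk->sched_class->task_move_group(tsk, queued);

        if (queued)
                enqueue_task(rq, tsk, 0);

        task_rq_unlock(rq, tsk, &flags);
}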

thanks,
byungchul

> and update_cfs_rq_load_avg() will sync and remove the se's load avg
> from the prev cfs_rq, whether or not the task is queued; so far so
> good. dequeue_task() decays the se and the prev cfs_rq before
> task_move_group_fair() is called. After the cfs_rq is switched in
> task_move_group_fair(): if queued, the se's load avg is not added to
> the next cfs_rq (we could set last_update_time to 0, as migration
> does, to get it added); if !queued, we also need to add the se's
> load avg to the next cfs_rq.
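
[ For reference, the migration side's trick looks roughly like this;
a simplified sketch, with helper names as in the then-recent PELT
rewrite: ]

static void migrate_task_rq_fair(struct task_struct *p, int next_cpu)
{
        /*
         * Catch the se up with the old cfs_rq's clock and queue its
         * contribution for removal; this is done lazily via the
         * removed-load machinery since we may not hold the old
         * rq->lock here.
         */
        remove_entity_load_avg(&p->se);

        /*
         * last_update_time == 0 tells enqueue on the new cpu that
         * the se is "new" there: attach it instead of decaying it
         * against a foreign clock.
         */
        p->se.avg.last_update_time = 0;

        /* we have migrated, no longer consider this task hot */
        p->se.exec_start = 0;
}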
>
> If no load balance migration happened around the cgroup change, the
> prev cfs_rq and the next cfs_rq may be on the same CPU (not sure).
> In that case we need to remove the se's load avg from the prev
> cfs_rq ourselves, and also add it to the next cfs_rq.
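
[ That is essentially what the patch wants the move path to do,
queued or not; a hand-wavy sketch in terms of the helpers sketched
earlier: ]

static void task_move_group_fair(struct task_struct *p, int queued)
{
        struct sched_entity *se = &p->se;

        /* sync with and remove from the *prev* cfs_rq */
        detach_entity_load_avg(cfs_rq_of(se), se);

        /* switch the se over to the next cfs_rq */
        set_task_rq(p, task_cpu(p));

        /* sync with and add to the *next* cfs_rq */
        attach_entity_load_avg(cfs_rq_of(se), se);
}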
>
> Thanks,
> --
> Tao