Re: v4.16+ seeing many unaligned access in dequeue_task_fair() on IA64

From: Peter Zijlstra
Date: Wed Apr 04 2018 - 03:25:37 EST


On Wed, Apr 04, 2018 at 12:04:00AM +0000, Luck, Tony wrote:
> > bisect says:
> >
> > d519329f72a6 ("sched/fair: Update util_est only on util_avg updates")
> >
> > Reverting just this commit makes the problem go away.
>
> The unaligned read and write seem to come from:
>
> 	struct util_est ue = READ_ONCE(p->se.avg.util_est);
> 	WRITE_ONCE(p->se.avg.util_est, ue);
>
> which is puzzling, as they were around before. Also, the "avg"
> field is tagged with an attribute to make it cache-aligned, and
> there don't appear to be any holes in the structure that would
> leave util_est less than 8-byte aligned ... though it does consist
> of two 4-byte fields, so it is legal for it to be only 4-byte aligned.

Right, I remember being careful with that. Which again brings me to the
RANDSTRUCT thing, which will mess that up.
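
To make the failure mode concrete, here is a minimal userspace sketch (not
the kernel code; the struct and field names below are made up for
illustration): a struct of two u32s only requires 4-byte alignment, so a
(possibly randomized) layout is free to drop it on a 4-byte boundary, and
the single 64-bit load that a READ_ONCE() of the whole thing compiles to is
then unaligned -- which x86 forgives and IA64 traps on.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for util_est: two 32-bit fields, so its alignment is only 4. */
struct ue {
	uint32_t enqueued;
	uint32_t ewma;
};

/* Hypothetical containing struct whose layout leaves 'ue' at offset 4. */
struct container {
	uint32_t pad;		/* some field ending up just before it */
	struct ue ue;		/* legal at a 4-byte, non-8-byte offset */
};

int main(void)
{
	printf("alignof(struct ue)      = %zu\n", _Alignof(struct ue));
	printf("offsetof(container, ue) = %zu\n", offsetof(struct container, ue));
	/* An 8-byte load of &container.ue (what READ_ONCE() of the whole
	 * struct turns into) is only 4-byte aligned here; IA64 takes an
	 * unaligned-access trap where x86 just does the access. */
	return 0;
}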

Does the patch below cure things? It makes absolutely no difference for my
x86_64-defconfig build, but it puts more explicit alignment constraints
on things.


diff --git a/include/linux/sched.h b/include/linux/sched.h
index f228c6033832..b3d697f3b573 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -300,7 +300,7 @@ struct util_est {
 	unsigned int			enqueued;
 	unsigned int			ewma;
 #define UTIL_EST_WEIGHT_SHIFT		2
-};
+} __attribute__((__aligned__(sizeof(u64))));
 
 /*
  * The load_avg/util_avg accumulates an infinite geometric series
@@ -364,7 +364,7 @@ struct sched_avg {
 	unsigned long			runnable_load_avg;
 	unsigned long			util_avg;
 	struct util_est			util_est;
-};
+} ____cacheline_aligned;
 
 struct sched_statistics {
 #ifdef CONFIG_SCHEDSTATS
@@ -435,7 +435,7 @@ struct sched_entity {
 	 * Put into separate cache line so it does not
 	 * collide with read-mostly values above.
 	 */
-	struct sched_avg		avg ____cacheline_aligned_in_smp;
+	struct sched_avg		avg;
 #endif
 };
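
For completeness, a hedged userspace sketch (again not the kernel code; the
names are made up) of what the new attribute buys: forcing the type's
alignment to sizeof(u64) means any containing struct, however its members
end up ordered, has to place the member on an 8-byte boundary, so the 64-bit
load/store that READ_ONCE()/WRITE_ONCE() of it generate is always naturally
aligned.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Same two-u32 shape, with the alignment forced as in the hunk above. */
struct ue_aligned {
	uint32_t enqueued;
	uint32_t ewma;
} __attribute__((__aligned__(sizeof(uint64_t))));

/* Whatever a (randomized) layout does, 'ue' now lands on an 8-byte
 * boundary; the compiler pads before it if necessary. */
struct container {
	uint32_t pad;			/* hypothetical preceding field */
	struct ue_aligned ue;		/* padded up to offset 8, not 4 */
};

int main(void)
{
	printf("alignof(struct ue_aligned) = %zu\n", _Alignof(struct ue_aligned));
	printf("offsetof(container, ue)    = %zu\n", offsetof(struct container, ue));
	return 0;
}

The ____cacheline_aligned hunk does the same thing one level up: the
alignment now travels with struct sched_avg itself rather than with the one
'avg' member in sched_entity, so the guarantee holds wherever the type is
embedded.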