Re: [RFC][PATCH 03/16] sched: Wrap rq::lock access

From: Tim Chen
Date: Fri Mar 22 2019 - 19:28:43 EST


On 3/19/19 7:29 PM, Subhra Mazumdar wrote:
>
> On 3/18/19 8:41 AM, Julien Desfossez wrote:
>> The case where we try to acquire the lock on 2 runqueues belonging to 2
>> different cores requires the rq_lockp wrapper as well otherwise we
>> frequently deadlock in there.
>>
>> This fixes the crash reported in
>> 1552577311-8218-1-git-send-email-jdesfossez@xxxxxxxxxxxxxxxx
>>
>> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
>> index 76fee56..71bb71f 100644
>> --- a/kernel/sched/sched.h
>> +++ b/kernel/sched/sched.h
>> @@ -2078,7 +2078,7 @@ static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
>>                  raw_spin_lock(rq_lockp(rq1));
>>                  __acquire(rq2->lock);   /* Fake it out ;) */
>>          } else {
>> -                if (rq1 < rq2) {
>> +                if (rq_lockp(rq1) < rq_lockp(rq2)) {
>>                          raw_spin_lock(rq_lockp(rq1));
>>                          raw_spin_lock_nested(rq_lockp(rq2), SINGLE_DEPTH_NESTING);
>>                  } else {
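
With that hunk applied, double_rq_lock() orders the two acquisitions by the
lock addresses returned by rq_lockp() instead of by the rq pointers, which is
what keeps the lock order consistent once several runqueues can share one
underlying lock. For reference, the resulting helper would read roughly as
below (reconstructed from the quoted context, so the lines outside the hunk
are approximate rather than the exact tree state):

/*
 * Rough reconstruction of double_rq_lock() with the hunk above applied.
 */
static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
        __acquires(rq1->lock)
        __acquires(rq2->lock)
{
        BUG_ON(!irqs_disabled());
        if (rq_lockp(rq1) == rq_lockp(rq2)) {
                /* Both rqs share one lock (e.g. SMT siblings with core
                 * scheduling): take it once and fake the second acquire. */
                raw_spin_lock(rq_lockp(rq1));
                __acquire(rq2->lock);   /* Fake it out ;) */
        } else {
                /*
                 * Always take the lower lock address first. Ordering by the
                 * rq pointers is not enough once rq_lockp() can map distinct
                 * rqs onto a shared lock, because two CPUs could then take
                 * the same pair of locks in opposite order and deadlock.
                 */
                if (rq_lockp(rq1) < rq_lockp(rq2)) {
                        raw_spin_lock(rq_lockp(rq1));
                        raw_spin_lock_nested(rq_lockp(rq2), SINGLE_DEPTH_NESTING);
                } else {
                        raw_spin_lock(rq_lockp(rq2));
                        raw_spin_lock_nested(rq_lockp(rq1), SINGLE_DEPTH_NESTING);
                }
        }
}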


Pawan was seeing occasional crashes and lockups that are avoided by the change below
(bailing out of pick_next_task_fair() when pick_next_entity() returns NULL instead of
dereferencing the NULL se). We're doing some more tracing to figure out why
pick_next_entity() is returning NULL in the first place.

Tim

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5349ebedc645..4c7f353b8900 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7031,6 +7031,8 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
                 }

                 se = pick_next_entity(cfs_rq, curr);
+                if (!se)
+                        return NULL;
                 cfs_rq = group_cfs_rq(se);
         } while (cfs_rq);

@@ -7070,6 +7072,8 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf

         do {
                 se = pick_next_entity(cfs_rq, NULL);
+                if (!se)
+                        return NULL;
                 set_next_entity(cfs_rq, se);
                 cfs_rq = group_cfs_rq(se);
         } while (cfs_rq);
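
On the tracing side, a minimal debugging aid layered on top of the check above
(just a sketch, not part of the fix) would be to dump the cfs_rq state once
when the NULL pick happens, e.g.:

                se = pick_next_entity(cfs_rq, curr);
                if (!se) {
                        /* Debug only: record what the cfs_rq looked like when
                         * the pick came back empty, then bail out as above. */
                        WARN_ONCE(1, "pick_next_entity() returned NULL: nr_running=%u, has curr=%d\n",
                                  cfs_rq->nr_running, cfs_rq->curr != NULL);
                        return NULL;
                }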