[tip: sched/core] sched: Fix leftover comment typos

From: tip-bot2 for Ingo Molnar
Date: Wed May 12 2021 - 16:02:40 EST


The following commit has been merged into the sched/core branch of tip:

Commit-ID: cc00c1988801dc71f63bb7bad019e85046865095
Gitweb: https://git.kernel.org/tip/cc00c1988801dc71f63bb7bad019e85046865095
Author: Ingo Molnar <mingo@xxxxxxxxxx>
AuthorDate: Wed, 12 May 2021 19:51:31 +02:00
Committer: Ingo Molnar <mingo@xxxxxxxxxx>
CommitterDate: Wed, 12 May 2021 19:54:49 +02:00

sched: Fix leftover comment typos

A few more snuck in. Also capitalize 'CPU' while at it.

Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
 include/linux/sched_clock.h | 2 +-
 kernel/sched/core.c         | 4 ++--
 kernel/sched/fair.c         | 6 +++---
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched_clock.h b/include/linux/sched_clock.h
index 528718e..835ee87 100644
--- a/include/linux/sched_clock.h
+++ b/include/linux/sched_clock.h
@@ -14,7 +14,7 @@
* @sched_clock_mask: Bitmask for two's complement subtraction of non 64bit
* clocks.
* @read_sched_clock: Current clock source (or dummy source when suspended).
- * @mult: Multipler for scaled math conversion.
+ * @mult: Multiplier for scaled math conversion.
* @shift: Shift value for scaled math conversion.
*
* Care must be taken when updating this structure; it is read by
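
For context on the @mult/@shift pair documented above: a raw cycle count is
converted to nanoseconds with a fixed-point multiply-and-shift,
ns = (cyc * mult) >> shift, the same pattern as the kernel's
clocksource_cyc2ns() helper. Below is a minimal userspace sketch of that
conversion; the 24 MHz counter and its mult/shift values are invented for the
example, not taken from any real clocksource.

#include <stdint.h>
#include <stdio.h>

/* ns = (cycles * mult) >> shift, as parameterized by @mult and @shift. */
static inline uint64_t cyc_to_ns(uint64_t cyc, uint32_t mult, uint32_t shift)
{
	return (cyc * mult) >> shift;
}

int main(void)
{
	/* Pretend 24 MHz counter: ~41.667 ns per cycle. */
	uint32_t mult  = 2796202667u;	/* ~41.667 ns * 2^26 */
	uint32_t shift = 26;

	/* 24,000,000 cycles at 24 MHz come out to roughly one second. */
	printf("%llu ns\n",
	       (unsigned long long)cyc_to_ns(24000000, mult, shift));
	return 0;
}
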
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9d00f49..ac8882d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5506,7 +5506,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
}

/*
- * Try and select tasks for each sibling in decending sched_class
+ * Try and select tasks for each sibling in descending sched_class
* order.
*/
for_each_class(class) {
@@ -5520,7 +5520,7 @@ again:

/*
* If this sibling doesn't yet have a suitable task to
- * run; ask for the most elegible task, given the
+ * run; ask for the most eligible task, given the
* highest priority task already selected for this
* core.
*/
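
The two comments fixed above sit in core scheduling's pick loop: it walks the
sched_classes from highest to lowest priority and, for each SMT sibling that
still has no task, picks the most eligible task whose core-scheduling cookie
matches the highest-priority task already selected for the core; otherwise the
sibling is left force-idle. A heavily simplified userspace model of just that
cookie constraint follows; the types and helper are invented for illustration
and are not kernel APIs.

#include <stddef.h>
#include <stdio.h>

/* Toy model of the core-scheduling cookie constraint, not kernel code. */
struct toy_task {
	const char   *name;
	int           prio;	/* smaller value = higher priority */
	unsigned long cookie;	/* tasks sharing a cookie may share a core */
};

/*
 * Pick the highest-priority task on @rq whose cookie matches @max, the
 * highest-priority task already selected for this core. NULL means the
 * sibling stays force-idle rather than running an incompatible task.
 */
static struct toy_task *pick_compatible(struct toy_task **rq, size_t nr,
					const struct toy_task *max)
{
	struct toy_task *best = NULL;

	for (size_t i = 0; i < nr; i++) {
		if (max && rq[i]->cookie != max->cookie)
			continue;
		if (!best || rq[i]->prio < best->prio)
			best = rq[i];
	}
	return best;
}

int main(void)
{
	struct toy_task a = { "vm-A",  120, 0xaaa };
	struct toy_task b = { "vm-B",  110, 0xbbb };
	struct toy_task c = { "vm-A2", 130, 0xaaa };

	/* One sibling already runs vm-A; what may the other sibling run? */
	struct toy_task *other_rq[] = { &b, &c };
	struct toy_task *pick = pick_compatible(other_rq, 2, &a);

	printf("sibling runs: %s\n", pick ? pick->name : "(force idle)");
	return 0;
}

With these inputs the sibling picks vm-A2 even though vm-B has higher
priority, because only tasks sharing the selected task's cookie may run
on the same core.
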
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2635e10..161b92a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10808,11 +10808,11 @@ static inline void task_tick_core(struct rq *rq, struct task_struct *curr)
* sched_slice() considers only this active rq and it gets the
* whole slice. But during force idle, we have siblings acting
* like a single runqueue and hence we need to consider runnable
- * tasks on this cpu and the forced idle cpu. Ideally, we should
+ * tasks on this CPU and the forced idle CPU. Ideally, we should
* go through the forced idle rq, but that would be a perf hit.
- * We can assume that the forced idle cpu has atleast
+ * We can assume that the forced idle CPU has at least
* MIN_NR_TASKS_DURING_FORCEIDLE - 1 tasks and use that to check
- * if we need to give up the cpu.
+ * if we need to give up the CPU.
*/
if (rq->core->core_forceidle && rq->cfs.nr_running == 1 &&
__entity_slice_used(&curr->se, MIN_NR_TASKS_DURING_FORCEIDLE))