diff mbox series

[5.10,131/593] sched/fair: Fix ascii art by replacing tabs

Message ID 20210712060857.536900755@linuxfoundation.org
State Superseded
Series None

Commit Message

Greg Kroah-Hartman July 12, 2021, 6:04 a.m. UTC
From: Odin Ugedal <odin@uged.al>

[ Upstream commit 08f7c2f4d0e9f4283f5796b8168044c034a1bfcb ]

When using something other than 8 spaces per tab, this ascii art
makes no sense, and the reader might end up wondering what this
advanced equation "is".

Signed-off-by: Odin Ugedal <odin@uged.al>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/20210518125202.78658-4-odin@uged.al
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 kernel/sched/fair.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
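To make the tab-width dependence concrete, here is a small stand-alone C sketch (not taken from the kernel tree; the exact number of tabs in the sample denominator string is inferred from how the old comment lines up at an 8-column tab stop). It re-renders equation (1) from the comment at tab stops of 8 and 4 columns:

#include <stdio.h>

/* Print a line, expanding each tab to the next multiple of 'tabstop' columns. */
static void print_expanded(const char *line, int tabstop)
{
	int col = 0;

	for (const char *p = line; *p; p++) {
		if (*p == '\t') {
			do {
				putchar(' ');
			} while (++col % tabstop);
		} else {
			putchar(*p);
			col++;
		}
	}
	putchar('\n');
}

int main(void)
{
	/*
	 * Equation (1) from the kernel/sched/fair.c comment: the numerator
	 * and the fraction bar are indented with spaces, while the old
	 * denominator line used tabs (tab count approximated from the patch).
	 */
	const char *numerator   = " *                     tg->weight * grq->load.weight";
	const char *bar         = " *   ge->load.weight = -----------------------------               (1)";
	const char *denominator = " *\t\t\t  \\Sum grq->load.weight";
	int tabstops[] = { 8, 4 };

	for (int i = 0; i < 2; i++) {
		printf("rendered with a %d-column tab stop:\n", tabstops[i]);
		print_expanded(numerator, tabstops[i]);
		print_expanded(bar, tabstops[i]);
		print_expanded(denominator, tabstops[i]);
		putchar('\n');
	}

	return 0;
}

With an 8-column tab stop the \Sum line sits under the fraction bar; at 4 columns it drifts well to the left of the bar, which is the effect the patch removes by indenting with spaces only.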

Comments

Pavel Machek July 14, 2021, 7:44 p.m. UTC | #1
Hi!

> When using something other than 8 spaces per tab, this ascii art
> makes no sense, and the reader might end up wondering what this
> advanced equation "is".

I believe this should not be in stable. Our stable-rules documentation
is quite clear there.

> +++ b/kernel/sched/fair.c
> @@ -3141,7 +3141,7 @@ void reweight_task(struct task_struct *p, int prio)
>   *
>   *                     tg->weight * grq->load.weight
>   *   ge->load.weight = -----------------------------               (1)
> - *			  \Sum grq->load.weight
> + *                       \Sum grq->load.weight
>   *
>   * Now, because computing that sum is prohibitively expensive to compute (been
>   * there, done that) we approximate it with this average stuff. The average

Best regards,
								Pavel
-- 
DENX Software Engineering GmbH,      Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d6e1c90de570..3d92de7909bf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3141,7 +3141,7 @@  void reweight_task(struct task_struct *p, int prio)
  *
  *                     tg->weight * grq->load.weight
  *   ge->load.weight = -----------------------------               (1)
- *			  \Sum grq->load.weight
+ *                       \Sum grq->load.weight
  *
  * Now, because computing that sum is prohibitively expensive to compute (been
  * there, done that) we approximate it with this average stuff. The average
@@ -3155,7 +3155,7 @@  void reweight_task(struct task_struct *p, int prio)
  *
  *                     tg->weight * grq->avg.load_avg
  *   ge->load.weight = ------------------------------              (3)
- *				tg->load_avg
+ *                             tg->load_avg
  *
  * Where: tg->load_avg ~= \Sum grq->avg.load_avg
  *
@@ -3171,7 +3171,7 @@  void reweight_task(struct task_struct *p, int prio)
  *
  *                     tg->weight * grq->load.weight
  *   ge->load.weight = ----------------------------- = tg->weight   (4)
- *			    grp->load.weight
+ *                         grp->load.weight
  *
  * That is, the sum collapses because all other CPUs are idle; the UP scenario.
  *
@@ -3190,7 +3190,7 @@  void reweight_task(struct task_struct *p, int prio)
  *
  *                     tg->weight * grq->load.weight
  *   ge->load.weight = -----------------------------		   (6)
- *				tg_load_avg'
+ *                             tg_load_avg'
  *
  * Where:
  *
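For readers whose mail client or pager still renders the diff at a different tab width, the four fractions this patch re-indents are transcribed below into conventional notation. Nothing here is new; the identifiers, including the "grp" denominator in (4), are exactly as they appear in the comment.

\text{ge->load.weight} = \frac{\text{tg->weight} \cdot \text{grq->load.weight}}{\sum \text{grq->load.weight}} \tag{1}

\text{ge->load.weight} = \frac{\text{tg->weight} \cdot \text{grq->avg.load\_avg}}{\text{tg->load\_avg}}, \qquad \text{tg->load\_avg} \approx \sum \text{grq->avg.load\_avg} \tag{3}

\text{ge->load.weight} = \frac{\text{tg->weight} \cdot \text{grq->load.weight}}{\text{grp->load.weight}} = \text{tg->weight} \tag{4}

\text{ge->load.weight} = \frac{\text{tg->weight} \cdot \text{grq->load.weight}}{\text{tg\_load\_avg}'} \tag{6}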