
[v2] sched/schedutil: optimize computation of utilization in schedutil

Message ID 1538465657-20605-1-git-send-email-vincent.guittot@linaro.org
State New
Series [v2] sched/schedutil: optimize computation of utilization in schedutil

Commit Message

Vincent Guittot Oct. 2, 2018, 7:34 a.m. UTC
Scaling the utilization of CPUs with the irq util_avg in schedutil gives no
benefit and just wastes CPU cycles when irq time is not accounted but only
steal time is.
Skip the irq scaling when irq time is not accounted.

Suggested-by: Wanpeng Li <kernellwp@gmail.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

---
 kernel/sched/cpufreq_schedutil.c | 2 ++
 1 file changed, 2 insertions(+)

-- 
2.7.4

Patch

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 3fffad3..edbc4d2 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -238,6 +238,7 @@  static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
 	if ((util + cpu_util_dl(rq)) >= max)
 		return max;
 
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
 	/*
 	 * There is still idle time; further improve the number by using the
 	 * irq metric. Because IRQ/steal time is hidden from the task clock we
@@ -249,6 +250,7 @@  static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
 	 */
 	util = scale_irq_capacity(util, irq, max);
 	util += irq;
+#endif
 
 	/*
 	 * Bandwidth required by DEADLINE must always be granted while, for