From patchwork Thu Mar 2 15:45:02 2017
X-Patchwork-Submitter: Patrick Bellasi
X-Patchwork-Id: 94789
From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Patrick Bellasi, Viresh Kumar, Steven Rostedt, Vincent Guittot,
    John Stultz, Juri Lelli, Todd Kjos, Tim Murray, Andres Oportus,
    Joel Fernandes, Morten Rasmussen, Dietmar Eggemann, Chris Redpath,
    Ingo Molnar, Peter Zijlstra, Rafael J. Wysocki
Subject: [PATCH 1/6] cpufreq: schedutil: reset sg_cpus's flags at IDLE enter
Date: Thu, 2 Mar 2017 15:45:02 +0000
Message-Id: <1488469507-32463-2-git-send-email-patrick.bellasi@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1488469507-32463-1-git-send-email-patrick.bellasi@arm.com>
References: <1488469507-32463-1-git-send-email-patrick.bellasi@arm.com>

Currently, sg_cpu's flags are set to the value defined by the last call
to cpufreq_update_util()/cpufreq_update_this_cpu(); for the RT/DL
classes this means the SCHED_CPUFREQ_{RT/DL} flags are always set.

When multiple CPUs share the same frequency domain, a CPU which executed
an RT task right before entering IDLE can keep one of the
SCHED_CPUFREQ_RT_DL flags set, permanently, until it exits IDLE.

Thus, in sugov_next_freq_shared(), where utilisation and flags are
aggregated across all the CPUs of a frequency domain, all the CPUs of
that domain keep running at the maximum OPP until another event on the
idle CPU eventually clears the SCHED_CPUFREQ_{RT/DL} flag.

Such behaviour can harm the energy efficiency of systems where RT
workloads are infrequent and the other CPUs of the same frequency domain
run small-utilisation workloads, which is quite a common scenario in
mobile embedded systems.
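For reference, the shared-policy aggregation that leads to this behaviour
looks roughly like the following. This is only a simplified sketch of
sugov_next_freq_shared() as found in kernel/sched/cpufreq_schedutil.c
around this time, not the exact upstream code:

	/*
	 * Simplified sketch: if any CPU of the policy still has an RT/DL
	 * flag set (e.g. a CPU that is now idle but last ran an RT task),
	 * the whole frequency domain is driven to the maximum OPP.
	 */
	static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu)
	{
		struct sugov_policy *sg_policy = sg_cpu->sg_policy;
		struct cpufreq_policy *policy = sg_policy->policy;
		unsigned long util = 0, max = 1;
		unsigned int j;

		for_each_cpu(j, policy->cpus) {
			struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);

			/* A stale SCHED_CPUFREQ_{RT,DL} flag forces max freq */
			if (j_sg_cpu->flags & SCHED_CPUFREQ_RT_DL)
				return policy->cpuinfo.max_freq;

			/* Otherwise track the highest relative utilisation */
			if (j_sg_cpu->util * max > j_sg_cpu->max * util) {
				util = j_sg_cpu->util;
				max = j_sg_cpu->max;
			}
		}

		return get_next_freq(sg_policy, util, max);
	}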
This patch proposes a solution aligned with the current principle of
updating the flags each time a scheduling event happens. The scheduling
of the idle_task on a CPU is considered one such meaningful event. Thus,
when the idle_task is selected for execution, we poke the schedutil
policy to reset the flags for that CPU. Moreover, no frequency
transition is triggered at that point, which is fair in case the RT
workload comes back in the near future, while still allowing the other
CPUs in the same frequency domain to scale down the frequency if needed.

Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
---
 include/linux/sched.h            | 1 +
 kernel/sched/cpufreq_schedutil.c | 7 +++++++
 kernel/sched/idle_task.c         | 4 ++++
 3 files changed, 12 insertions(+)

-- 
2.7.4

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e2ed46d..739b29d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -3653,6 +3653,7 @@ static inline unsigned long rlimit_max(unsigned int limit)
 #define SCHED_CPUFREQ_RT	(1U << 0)
 #define SCHED_CPUFREQ_DL	(1U << 1)
 #define SCHED_CPUFREQ_IOWAIT	(1U << 2)
+#define SCHED_CPUFREQ_IDLE	(1U << 3)
 
 #define SCHED_CPUFREQ_RT_DL	(SCHED_CPUFREQ_RT | SCHED_CPUFREQ_DL)
 
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index fd46593..084a98b 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -281,6 +281,12 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 
 	raw_spin_lock(&sg_policy->update_lock);
 
+	/* CPU is entering IDLE, reset flags without triggering an update */
+	if (flags & SCHED_CPUFREQ_IDLE) {
+		sg_cpu->flags = 0;
+		goto done;
+	}
+
 	sg_cpu->util = util;
 	sg_cpu->max = max;
 	sg_cpu->flags = flags;
@@ -293,6 +299,7 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 		sugov_update_commit(sg_policy, time, next_f);
 	}
 
+done:
 	raw_spin_unlock(&sg_policy->update_lock);
 }
 
diff --git a/kernel/sched/idle_task.c b/kernel/sched/idle_task.c
index 0c00172..a844c91 100644
--- a/kernel/sched/idle_task.c
+++ b/kernel/sched/idle_task.c
@@ -29,6 +29,10 @@ pick_next_task_idle(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
 	put_prev_task(rq, prev);
 	update_idle_core(rq);
 	schedstat_inc(rq->sched_goidle);
+
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_IDLE);
+
 	return rq->idle;
 }
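For completeness, cpufreq_update_this_cpu() used above is the existing
hook in kernel/sched/sched.h; roughly sketched (simplified, not
verbatim), it forwards the flags to the update_util callback registered
on the local CPU, which is how SCHED_CPUFREQ_IDLE reaches
sugov_update_shared() without going through any new plumbing:

	/* Sketch of the existing helpers in kernel/sched/sched.h:
	 * call the governor callback registered for this CPU, passing
	 * the scheduling-event flags along with the current rq clock.
	 */
	static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
	{
		struct update_util_data *data;

		data = rcu_dereference_sched(*this_cpu_ptr(&cpufreq_update_util_data));
		if (data)
			data->func(data, rq_clock(rq), flags);
	}

	static inline void cpufreq_update_this_cpu(struct rq *rq, unsigned int flags)
	{
		/* Only poke the governor when running on rq's own CPU */
		if (cpu_of(rq) == smp_processor_id())
			cpufreq_update_util(rq, flags);
	}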