From patchwork Thu Mar 2 15:45:04 2017
X-Patchwork-Submitter: Patrick Bellasi
X-Patchwork-Id: 94786
From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Patrick Bellasi, Viresh Kumar, Steven Rostedt, Vincent Guittot,
    John Stultz, Juri Lelli, Todd Kjos, Tim Murray, Andres Oportus,
    Joel Fernandes, Morten Rasmussen, Dietmar Eggemann, Chris Redpath,
    Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki"
Subject: [PATCH 3/6] cpufreq: schedutil: ensure max frequency while running RT/DL tasks
Date: Thu, 2 Mar 2017 15:45:04 +0000
Message-Id: <1488469507-32463-4-git-send-email-patrick.bellasi@arm.com>
In-Reply-To: <1488469507-32463-1-git-send-email-patrick.bellasi@arm.com>
References: <1488469507-32463-1-git-send-email-patrick.bellasi@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

The policy currently in use for RT/DL tasks requests the maximum frequency
whenever a task in these classes calls cpufreq_update_this_cpu().
However, the current implementation might cause a frequency drop while an
RT/DL task is still running, just because, for example, a FAIR task wakes
up and is enqueued on the same CPU.

This issue is due to the sg_cpu's flags being overwritten at each call of
sugov_update_*. The wakeup of a FAIR task resets the flags and can trigger
a frequency update, thus affecting the currently running RT/DL task.

This can be fixed, in shared frequency domains, by adding (instead of
overwriting) the new flags before triggering a frequency update. This
guarantees that we stay at least at the frequency requested by the RT/DL
class, which is currently the maximum one, but could also be lower once,
for example, DL is extended to provide a precise bandwidth requirement.

Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
---
 kernel/sched/cpufreq_schedutil.c | 32 +++++++++++++++++++++++++++++---
 1 file changed, 29 insertions(+), 3 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index a3fe5e4..b98a167 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -196,10 +196,21 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 				unsigned int flags)
 {
 	struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);
+	struct task_struct *curr = cpu_curr(smp_processor_id());
 	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
 	struct cpufreq_policy *policy = sg_policy->policy;
 	unsigned long util, max;
 	unsigned int next_f;
+	bool rt_mode;
+
+	/*
+	 * While RT/DL tasks are running we do not want FAIR tasks to
+	 * overwrite this CPU's flags, still we can update utilization and
+	 * frequency (if required/possible) to be fair with these tasks.
+	 */
+	rt_mode = task_has_dl_policy(curr) ||
+		  task_has_rt_policy(curr) ||
+		  (flags & SCHED_CPUFREQ_RT_DL);

 	sugov_set_iowait_boost(sg_cpu, time, flags);
 	sg_cpu->last_update = time;
@@ -207,7 +218,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	if (!sugov_should_update_freq(sg_policy, time))
 		return;

-	if (flags & SCHED_CPUFREQ_RT_DL) {
+	if (rt_mode) {
 		next_f = policy->cpuinfo.max_freq;
 	} else {
 		sugov_get_util(&util, &max);
@@ -278,6 +289,7 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 	struct task_struct *curr = cpu_curr(cpu);
 	unsigned long util, max;
 	unsigned int next_f;
+	bool rt_mode;

 	sugov_get_util(&util, &max);

@@ -293,15 +305,29 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 	if (curr == sg_policy->thread)
 		goto done;

+	/*
+	 * While RT/DL tasks are running we do not want FAIR tasks to
+	 * overwrite this CPU's flags, still we can update utilization and
+	 * frequency (if required/possible) to be fair with these tasks.
+	 */
+	rt_mode = task_has_dl_policy(curr) ||
+		  task_has_rt_policy(curr) ||
+		  (flags & SCHED_CPUFREQ_RT_DL);
+	if (rt_mode)
+		sg_cpu->flags |= flags;
+	else
+		sg_cpu->flags = flags;
+
 	sg_cpu->util = util;
 	sg_cpu->max = max;
-	sg_cpu->flags = flags;

 	sugov_set_iowait_boost(sg_cpu, time, flags);
 	sg_cpu->last_update = time;

 	if (sugov_should_update_freq(sg_policy, time)) {
-		next_f = sugov_next_freq_shared(sg_cpu, util, max, flags);
+		next_f = sg_policy->policy->cpuinfo.max_freq;
+		if (!rt_mode)
+			next_f = sugov_next_freq_shared(sg_cpu, util, max, flags);
 		sugov_update_commit(sg_policy, time, next_f);
 	}

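For readers who want to experiment with the flag handling outside the
kernel tree, here is a minimal stand-alone sketch in user-space C. It is
not kernel code: the DEMO_CPUFREQ_* values and the tiny demo_cpu struct
are made up to mirror the idea only. It shows why overwriting the per-CPU
flags on a FAIR wakeup loses the pending RT/DL request, while OR-ing the
flags in rt_mode preserves it:

	/* Build with: gcc -Wall -o flags_demo flags_demo.c && ./flags_demo */
	#include <stdio.h>
	#include <stdbool.h>

	/* Hypothetical flag values, only mirroring the SCHED_CPUFREQ_* idea. */
	#define DEMO_CPUFREQ_RT		(1U << 0)
	#define DEMO_CPUFREQ_DL		(1U << 1)
	#define DEMO_CPUFREQ_RT_DL	(DEMO_CPUFREQ_RT | DEMO_CPUFREQ_DL)

	struct demo_cpu {
		unsigned int flags;
	};

	/* Old behaviour: every update overwrites the flags. */
	static void update_overwrite(struct demo_cpu *cpu, unsigned int flags)
	{
		cpu->flags = flags;
	}

	/* Patched behaviour: while an RT/DL request is active, accumulate flags. */
	static void update_accumulate(struct demo_cpu *cpu, unsigned int flags,
				      bool rt_mode)
	{
		if (rt_mode)
			cpu->flags |= flags;
		else
			cpu->flags = flags;
	}

	int main(void)
	{
		struct demo_cpu a = { 0 }, b = { 0 };

		/* An RT task runs: both variants record the RT/DL request. */
		update_overwrite(&a, DEMO_CPUFREQ_RT);
		update_accumulate(&b, DEMO_CPUFREQ_RT, true);

		/* A FAIR task wakes up on the same CPU (no RT/DL flag is passed). */
		update_overwrite(&a, 0);
		update_accumulate(&b, 0, true);	/* the RT task is still running */

		printf("overwrite:  RT/DL request %s\n",
		       (a.flags & DEMO_CPUFREQ_RT_DL) ? "kept" : "lost");
		printf("accumulate: RT/DL request %s\n",
		       (b.flags & DEMO_CPUFREQ_RT_DL) ? "kept" : "lost");
		return 0;
	}

Running it prints "lost" for the overwrite variant and "kept" for the
accumulate variant, which is exactly the frequency-drop scenario the patch
addresses.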
From patchwork Thu Mar 2 15:45:06 2017
X-Patchwork-Submitter: Patrick Bellasi
X-Patchwork-Id: 94784
From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Patrick Bellasi, Viresh Kumar, Steven Rostedt, Vincent Guittot,
    John Stultz, Juri Lelli, Todd Kjos, Tim Murray, Andres Oportus,
    Joel Fernandes, Morten Rasmussen, Dietmar Eggemann, Chris Redpath,
    Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki"
Subject: [PATCH 5/6] cpufreq: schedutil: avoid utilisation update when not necessary
Date: Thu, 2 Mar 2017 15:45:06 +0000
Message-Id: <1488469507-32463-6-git-send-email-patrick.bellasi@arm.com>
In-Reply-To: <1488469507-32463-1-git-send-email-patrick.bellasi@arm.com>
References: <1488469507-32463-1-git-send-email-patrick.bellasi@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

Under certain conditions (i.e. the CPU entering idle, or the current task
being the sugov thread) we can skip the frequency update. Thus, let's
postpone the collection of the FAIR utilisation until it is really needed.

Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
---
 kernel/sched/cpufreq_schedutil.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 44bff37..c8ed645 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -296,8 +296,6 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 	unsigned int next_f;
 	bool rt_mode;

-	sugov_get_util(&util, &max);
-
 	raw_spin_lock(&sg_policy->update_lock);

 	/* CPU is entering IDLE, reset flags without triggering an update */
@@ -323,6 +321,7 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 	else
 		sg_cpu->flags = flags;

+	sugov_get_util(&util, &max);
 	sg_cpu->util = util;
 	sg_cpu->max = max;

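As a quick plain-C illustration of the reordering above (the get_util()
stand-in, the call counter and the percentage arithmetic are invented for
this sketch and are not schedutil code), deferring the utilisation
collection until after the cheap early-return checks avoids paying for it
on updates that are discarded anyway:

	/* Build with: gcc -Wall -o lazy_util lazy_util.c && ./lazy_util */
	#include <stdio.h>
	#include <stdbool.h>

	static unsigned int collections;	/* how often utilisation was gathered */

	/* Stand-in for sugov_get_util(): pretend this walks per-CPU state. */
	static void get_util(unsigned long *util, unsigned long *max)
	{
		collections++;
		*util = 42;
		*max = 100;
	}

	/* Old ordering: utilisation is collected even when the update is skipped. */
	static unsigned long update_eager(bool skip_update)
	{
		unsigned long util, max;

		get_util(&util, &max);
		if (skip_update)
			return 0;
		return (util * 100) / max;	/* pretend frequency selection */
	}

	/* New ordering: cheap checks first, collection only if an update follows. */
	static unsigned long update_lazy(bool skip_update)
	{
		unsigned long util, max;

		if (skip_update)
			return 0;
		get_util(&util, &max);
		return (util * 100) / max;
	}

	int main(void)
	{
		collections = 0;
		for (int i = 0; i < 3; i++)
			update_eager(true);	/* e.g. CPU entering idle */
		update_eager(false);
		printf("eager ordering: %u collections\n", collections);

		collections = 0;
		for (int i = 0; i < 3; i++)
			update_lazy(true);
		update_lazy(false);
		printf("lazy ordering:  %u collections\n", collections);
		return 0;
	}

With three skipped updates and one real one, the eager ordering collects
utilisation four times while the lazy ordering collects it once.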
From patchwork Thu Mar 2 15:45:07 2017
X-Patchwork-Submitter: Patrick Bellasi
X-Patchwork-Id: 94785
From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Patrick Bellasi, Viresh Kumar, Steven Rostedt, Vincent Guittot,
    John Stultz, Juri Lelli, Todd Kjos, Tim Murray, Andres Oportus,
    Joel Fernandes, Morten Rasmussen, Dietmar Eggemann, Chris Redpath,
    Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki"
Subject: [PATCH 6/6] sched/rt: fast switch to maximum frequency when RT tasks are scheduled
Date: Thu, 2 Mar 2017 15:45:07 +0000
Message-Id: <1488469507-32463-7-git-send-email-patrick.bellasi@arm.com>
In-Reply-To: <1488469507-32463-1-git-send-email-patrick.bellasi@arm.com>
References: <1488469507-32463-1-git-send-email-patrick.bellasi@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

Currently, schedutil updates for the RT class are triggered from a single
call site, in update_curr_rt(), which is used by:

- dequeue_task_rt: but it does not make sense to set schedutil's
  SCHED_CPUFREQ_RT flag here, since the next task might not be an RT one

- put_prev_task_rt: likewise, we set the SCHED_CPUFREQ_RT flag without
  knowing whether the next task requires it

- pick_next_task_rt: likewise, SCHED_CPUFREQ_RT is set because the
  previous task was RT, while we do not yet know whether the next one
  will be RT

- task_tick_rt: this is the only really useful call site, since it can
  ramp up the frequency when an RT task started executing without getting
  a chance to trigger a frequency switch (e.g. because of the schedutil
  rate limit)

Apart from the last one, in task_tick_rt, these call sites are useless at
best. Thus, although hooking into update_curr_rt() is a simple solution,
it triggers frequency switches from places where they are not needed,
while some of the most interesting points are not covered at all. For
example, a task switched to the RT class has to wait for the next tick to
get its frequency boost.

This patch fixes these issues by explicitly placing the schedutil update
calls in the only sensible places, which are:

- when an RT task wakes up and is enqueued on a CPU
- when we actually pick an RT task for execution
- at each scheduler tick
- when a running task is switched to the RT class

Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
---
 kernel/sched/rt.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 4101f9d..df7046c 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -958,9 +958,6 @@ static void update_curr_rt(struct rq *rq)
 	if (unlikely((s64)delta_exec <= 0))
 		return;

-	/* Kick cpufreq (see the comment in kernel/sched/sched.h). */
-	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_RT);
-
 	schedstat_set(curr->se.statistics.exec_max,
 		      max(curr->se.statistics.exec_max, delta_exec));

@@ -1326,6 +1323,9 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)

 	if (!task_current(rq, p) && tsk_nr_cpus_allowed(p) > 1)
 		enqueue_pushable_task(rq, p);
+
+	/* Kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_RT);
 }

 static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags)
@@ -1563,6 +1563,9 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)

 	p = _pick_next_task_rt(rq);

+	/* Kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_RT);
+
 	/* The running task is never eligible for pushing */
 	dequeue_pushable_task(rq, p);

@@ -2272,6 +2275,9 @@ static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
 {
 	struct sched_rt_entity *rt_se = &p->rt;

+	/* Kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_RT);
+
 	update_curr_rt(rq);

 	watchdog(rq, p);

@@ -2307,6 +2313,9 @@ static void set_curr_task_rt(struct rq *rq)

 	p->se.exec_start = rq_clock_task(rq);

+	/* Kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_RT);
+
 	/* The running task is never eligible for pushing */
 	dequeue_pushable_task(rq, p);
 }
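The following toy program (stand-in functions only, not the kernel's
scheduler hooks; the cpufreq_kick() helper and its output are invented for
illustration) walks through the call sites listed above and shows where
the frequency kick now fires over an RT task's lifecycle. With this
placement the first kick happens already at enqueue/pick time instead of
at the first tick:

	/* Build with: gcc -Wall -o rt_kicks rt_kicks.c && ./rt_kicks */
	#include <stdio.h>

	static int kicks;

	static void cpufreq_kick(const char *where)
	{
		kicks++;
		printf("kick #%d: %s\n", kicks, where);
	}

	/* New placement: boost as soon as the RT task becomes runnable ... */
	static void enqueue_task_rt_stub(void)   { cpufreq_kick("enqueue_task_rt"); }
	/* ... when it is actually picked to run ... */
	static void pick_next_task_rt_stub(void) { cpufreq_kick("pick_next_task_rt"); }
	/* ... at every tick while it keeps running ... */
	static void task_tick_rt_stub(void)      { cpufreq_kick("task_tick_rt"); }
	/* ... and when an already running task is switched to the RT class. */
	static void set_curr_task_rt_stub(void)  { cpufreq_kick("set_curr_task_rt"); }

	int main(void)
	{
		/*
		 * An RT task wakes up and runs for two ticks. With the old
		 * scheme the first kick would only arrive at the first tick;
		 * now it already happens at enqueue/pick time.
		 */
		enqueue_task_rt_stub();
		pick_next_task_rt_stub();
		task_tick_rt_stub();
		task_tick_rt_stub();

		/* A running CFS task is switched to SCHED_FIFO. */
		set_curr_task_rt_stub();
		return 0;
	}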