From patchwork Fri Mar 24 14:08:56 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 95951
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
    luca.abeni@santannapisa.it, claudio@evidence.eu.com,
    tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
    mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
    andresoportus@google.com, morten.rasmussen@arm.com,
    dietmar.eggemann@arm.com, patrick.bellasi@arm.com, juri.lelli@arm.com,
    Ingo Molnar, "Rafael J. Wysocki"
Subject: [RFD PATCH 1/5] sched/cpufreq_schedutil: make use of DEADLINE utilization signal
Date: Fri, 24 Mar 2017 14:08:56 +0000
Message-Id: <20170324140900.7334-2-juri.lelli@arm.com>
X-Mailer: git-send-email 2.10.0
In-Reply-To: <20170324140900.7334-1-juri.lelli@arm.com>
References: <20170324140900.7334-1-juri.lelli@arm.com>

SCHED_DEADLINE tracks the active utilization signal in a per-rq variable
named running_bw. Make use of it to drive CPU frequency selection: add up
the FAIR and DEADLINE contributions to obtain the CPU capacity required to
satisfy both.

Co-authored-by: Claudio Scordino
Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
---
 include/linux/sched/cpufreq.h    |  2 --
 kernel/sched/cpufreq_schedutil.c | 13 ++++++-------
 2 files changed, 6 insertions(+), 9 deletions(-)

--
2.10.0

diff --git a/include/linux/sched/cpufreq.h b/include/linux/sched/cpufreq.h
index d2be2ccbb372..39640bb3a8ee 100644
--- a/include/linux/sched/cpufreq.h
+++ b/include/linux/sched/cpufreq.h
@@ -11,8 +11,6 @@
 #define SCHED_CPUFREQ_DL	(1U << 1)
 #define SCHED_CPUFREQ_IOWAIT	(1U << 2)
 
-#define SCHED_CPUFREQ_RT_DL	(SCHED_CPUFREQ_RT | SCHED_CPUFREQ_DL)
-
 #ifdef CONFIG_CPU_FREQ
 struct update_util_data {
 	void (*func)(struct update_util_data *data, u64 time, unsigned int flags);
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index f5ffe241812e..05f5625ea005 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -154,12 +154,11 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 static void sugov_get_util(unsigned long *util, unsigned long *max)
 {
 	struct rq *rq = this_rq();
-	unsigned long cfs_max;
+	unsigned long dl_util = (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> 20;
 
-	cfs_max = arch_scale_cpu_capacity(NULL, smp_processor_id());
+	*max = arch_scale_cpu_capacity(NULL, smp_processor_id());
 
-	*util = min(rq->cfs.avg.util_avg, cfs_max);
-	*max = cfs_max;
+	*util = min(rq->cfs.avg.util_avg + dl_util, *max);
 }
 
 static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
@@ -207,7 +206,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	if (!sugov_should_update_freq(sg_policy, time))
 		return;
 
-	if (flags & SCHED_CPUFREQ_RT_DL) {
+	if (flags & SCHED_CPUFREQ_RT) {
 		next_f = policy->cpuinfo.max_freq;
 	} else {
 		sugov_get_util(&util, &max);
@@ -242,7 +241,7 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu)
 			j_sg_cpu->iowait_boost = 0;
 			continue;
 		}
-		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT_DL)
+		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT)
 			return policy->cpuinfo.max_freq;
 
 		j_util = j_sg_cpu->util;
@@ -278,7 +277,7 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 	sg_cpu->last_update = time;
 
 	if (sugov_should_update_freq(sg_policy, time)) {
-		if (flags & SCHED_CPUFREQ_RT_DL)
+		if (flags & SCHED_CPUFREQ_RT)
			next_f = sg_policy->policy->cpuinfo.max_freq;
 		else
 			next_f = sugov_next_freq_shared(sg_cpu);
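To make the new dl_util term concrete, here is a rough worked example (the
numbers are illustrative only; the ">> 20" matches the 20-bit fixed point in
which deadline bandwidth is stored, and SCHED_CAPACITY_SCALE is taken to be
1024):

	/*
	 * One 10ms/100ms DEADLINE reservation:
	 *
	 *   running_bw ~= (10 << 20) / 100     = 104857  (0.1 in fixed point)
	 *   dl_util     = 104857 * 1024 >> 20 ~= 102     (~10% of capacity)
	 *
	 * Together with ~30% of CFS load (util_avg ~= 307) on a
	 * 1024-capacity CPU:
	 *
	 *   *util = min(307 + 102, 1024) = 409
	 *
	 * so schedutil now sizes the frequency for both contributions
	 * instead of the CFS one alone.
	 */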
From patchwork Fri Mar 24 14:08:57 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 95950
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
    luca.abeni@santannapisa.it, claudio@evidence.eu.com,
    tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
    mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
    andresoportus@google.com, morten.rasmussen@arm.com,
    dietmar.eggemann@arm.com, patrick.bellasi@arm.com, juri.lelli@arm.com,
    Ingo Molnar, "Rafael J. Wysocki"
Subject: [RFD PATCH 2/5] sched/deadline: move cpu frequency selection triggering points
Date: Fri, 24 Mar 2017 14:08:57 +0000
Message-Id: <20170324140900.7334-3-juri.lelli@arm.com>
X-Mailer: git-send-email 2.10.0
In-Reply-To: <20170324140900.7334-1-juri.lelli@arm.com>
References: <20170324140900.7334-1-juri.lelli@arm.com>

Since SCHED_DEADLINE doesn't track a utilization signal (it reserves a
fraction of CPU bandwidth for the tasks admitted to the system), there is no
point in evaluating frequency changes at every tick event. Move the
frequency selection triggering points to where running_bw changes.

Co-authored-by: Claudio Scordino
Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
---
 kernel/sched/deadline.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

--
2.10.0

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 55471016d73c..5c1a205e830f 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -52,6 +52,8 @@ void add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	dl_rq->running_bw += dl_bw;
 	SCHED_WARN_ON(dl_rq->running_bw < old); /* overflow */
 	SCHED_WARN_ON(dl_rq->running_bw > dl_rq->this_bw);
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -64,6 +66,8 @@ void sub_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	SCHED_WARN_ON(dl_rq->running_bw > old); /* underflow */
 	if (dl_rq->running_bw > old)
 		dl_rq->running_bw = 0;
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -953,9 +957,6 @@ static void update_curr_dl(struct rq *rq)
 		return;
 	}
 
-	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
-	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_DL);
-
 	schedstat_set(curr->se.statistics.exec_max,
 		      max(curr->se.statistics.exec_max, delta_exec));
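In other words, the cpufreq kick now fires when the DEADLINE contribution
actually changes instead of on every tick. A rough before/after sketch of the
triggering points (call chains simplified for illustration):

	before:	update_curr_dl()                     /* every tick while a DL task runs */
		    -> cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_DL);

	after:	add_running_bw() / sub_running_bw()  /* running_bw += / -= dl_bw */
		    -> cpufreq_update_this_cpu(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);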
From patchwork Fri Mar 24 14:08:58 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 95952
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
    luca.abeni@santannapisa.it, claudio@evidence.eu.com,
    tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
    mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
    andresoportus@google.com, morten.rasmussen@arm.com,
    dietmar.eggemann@arm.com, patrick.bellasi@arm.com, juri.lelli@arm.com,
    Ingo Molnar, "Rafael J. Wysocki"
Subject: [RFD PATCH 3/5] sched/cpufreq_schedutil: make worker kthread be SCHED_DEADLINE
Date: Fri, 24 Mar 2017 14:08:58 +0000
Message-Id: <20170324140900.7334-4-juri.lelli@arm.com>
X-Mailer: git-send-email 2.10.0
In-Reply-To: <20170324140900.7334-1-juri.lelli@arm.com>
References: <20170324140900.7334-1-juri.lelli@arm.com>

The worker kthread needs to be able to change frequency on behalf of all the
other threads. Make it special, just below the STOP class.

Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Cc: Claudio Scordino
---
 include/linux/sched.h            |  1 +
 include/uapi/linux/sched.h       |  1 +
 kernel/sched/core.c              | 19 +++++++++++++++++--
 kernel/sched/cpufreq_schedutil.c | 15 ++++++++++++---
 kernel/sched/deadline.c          |  6 ++++++
 kernel/sched/sched.h             |  8 +++++++-
 6 files changed, 44 insertions(+), 6 deletions(-)

--
2.10.0

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 952cac87e433..6f508980f320 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1351,6 +1351,7 @@ extern int idle_cpu(int cpu);
 extern int sched_setscheduler(struct task_struct *, int, const struct sched_param *);
 extern int sched_setscheduler_nocheck(struct task_struct *, int, const struct sched_param *);
 extern int sched_setattr(struct task_struct *, const struct sched_attr *);
+extern int sched_setattr_nocheck(struct task_struct *, const struct sched_attr *);
 extern struct task_struct *idle_task(int cpu);
 
 /**
diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index e2a6c7b3510b..72723859ef74 100644
--- a/include/uapi/linux/sched.h
+++ b/include/uapi/linux/sched.h
@@ -48,5 +48,6 @@
  */
 #define SCHED_FLAG_RESET_ON_FORK	0x01
 #define SCHED_FLAG_RECLAIM		0x02
+#define SCHED_FLAG_SPECIAL		0x04
 
 #endif /* _UAPI_LINUX_SCHED_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 378d402ee7a6..9b211c77cb54 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2495,6 +2495,9 @@ static int dl_overflow(struct task_struct *p, int policy,
 	u64 new_bw = dl_policy(policy) ? to_ratio(period, runtime) : 0;
 	int cpus, err = -1;
 
+	if (attr->sched_flags & SCHED_FLAG_SPECIAL)
+		return 0;
+
 	/* !deadline task may carry old deadline bandwidth */
 	if (new_bw == p->dl.dl_bw && task_has_dl_policy(p))
 		return 0;
@@ -4052,6 +4055,10 @@ __getparam_dl(struct task_struct *p, struct sched_attr *attr)
 static bool
 __checkparam_dl(const struct sched_attr *attr)
 {
+	/* special dl tasks don't actually use any parameter */
+	if (attr->sched_flags & SCHED_FLAG_SPECIAL)
+		return true;
+
 	/* deadline != 0 */
 	if (attr->sched_deadline == 0)
 		return false;
@@ -4138,7 +4145,9 @@ static int __sched_setscheduler(struct task_struct *p,
 	}
 
 	if (attr->sched_flags &
-		~(SCHED_FLAG_RESET_ON_FORK | SCHED_FLAG_RECLAIM))
+		~(SCHED_FLAG_RESET_ON_FORK |
+		  SCHED_FLAG_RECLAIM |
+		  SCHED_FLAG_SPECIAL))
 		return -EINVAL;
 
 	/*
@@ -4260,7 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	}
 #endif
 #ifdef CONFIG_SMP
-	if (dl_bandwidth_enabled() && dl_policy(policy)) {
+	if (dl_bandwidth_enabled() && dl_policy(policy) &&
+	    !(attr->sched_flags & SCHED_FLAG_SPECIAL)) {
 		cpumask_t *span = rq->rd->span;
 
 		/*
@@ -4390,6 +4400,11 @@ int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
 }
 EXPORT_SYMBOL_GPL(sched_setattr);
 
+int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
+{
+	return __sched_setscheduler(p, attr, false, true);
+}
+
 /**
  * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
  * @p: the task in question.
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 05f5625ea005..da67a1cf91e7 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -394,7 +394,16 @@ static void sugov_policy_free(struct sugov_policy *sg_policy)
 static int sugov_kthread_create(struct sugov_policy *sg_policy)
 {
 	struct task_struct *thread;
-	struct sched_param param = { .sched_priority = MAX_USER_RT_PRIO / 2 };
+	struct sched_attr attr = {
+		.size = sizeof(struct sched_attr),
+		.sched_policy = SCHED_DEADLINE,
+		.sched_flags = SCHED_FLAG_SPECIAL,
+		.sched_nice = 0,
+		.sched_priority = 0,
+		.sched_runtime = 0,
+		.sched_deadline = 0,
+		.sched_period = 0,
+	};
 	struct cpufreq_policy *policy = sg_policy->policy;
 	int ret;
 
@@ -412,10 +421,10 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
 		return PTR_ERR(thread);
 	}
 
-	ret = sched_setscheduler_nocheck(thread, SCHED_FIFO, &param);
+	ret = sched_setattr_nocheck(thread, &attr);
 	if (ret) {
 		kthread_stop(thread);
-		pr_warn("%s: failed to set SCHED_FIFO\n", __func__);
+		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
 		return ret;
 	}
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 5c1a205e830f..853de524c6c6 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -131,6 +131,9 @@ static void task_non_contending(struct task_struct *p)
 	if (dl_se->dl_runtime == 0)
 		return;
 
+	if (dl_entity_is_special(dl_se))
+		return;
+
 	WARN_ON(hrtimer_active(&dl_se->inactive_timer));
 	WARN_ON(dl_se->dl_non_contending);
 
@@ -968,6 +971,9 @@ static void update_curr_dl(struct rq *rq)
 
 	sched_rt_avg_update(rq, delta_exec);
 
+	if (unlikely(dl_entity_is_special(dl_se)))
+		return;
+
 	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM))
 		delta_exec = grub_reclaim(delta_exec, rq, curr->dl.dl_bw);
 	dl_se->runtime -= delta_exec;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 93c24528ceb6..7b5e81120813 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -155,13 +155,19 @@ static inline int task_has_dl_policy(struct task_struct *p)
 	return dl_policy(p->policy);
 }
 
+static inline int dl_entity_is_special(struct sched_dl_entity *dl_se)
+{
+	return dl_se->flags & SCHED_FLAG_SPECIAL;
+}
+
 /*
  * Tells if entity @a should preempt entity @b.
  */
 static inline bool
 dl_entity_preempt(struct sched_dl_entity *a, struct sched_dl_entity *b)
 {
-	return dl_time_before(a->deadline, b->deadline);
+	return dl_entity_is_special(a) ||
+	       dl_time_before(a->deadline, b->deadline);
 }
 
 /*
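Taken together, the SCHED_FLAG_SPECIAL hunks carve the worker kthread out of
the normal DEADLINE machinery. A condensed summary of the checks above
(paraphrase of the hunks, not new code):

	__checkparam_dl():     special -> parameters are not validated (all-zero accepted)
	dl_overflow():         special -> no bandwidth is admitted or accounted
	task_non_contending(),
	update_curr_dl():      special -> runtime is never depleted, so no throttling
	dl_entity_preempt():   special -> preempts any other DEADLINE entity

which is what places the sugov kthread "just below" the stop class.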
From patchwork Fri Mar 24 14:08:59 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 95953
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
    luca.abeni@santannapisa.it, claudio@evidence.eu.com,
    tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
    mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
    andresoportus@google.com, morten.rasmussen@arm.com,
    dietmar.eggemann@arm.com, patrick.bellasi@arm.com, juri.lelli@arm.com,
    Ingo Molnar, "Rafael J. Wysocki"
Subject: [RFD PATCH 4/5] sched/cpufreq_schedutil: always consider all CPUs when deciding next freq
Date: Fri, 24 Mar 2017 14:08:59 +0000
Message-Id: <20170324140900.7334-5-juri.lelli@arm.com>
X-Mailer: git-send-email 2.10.0
In-Reply-To: <20170324140900.7334-1-juri.lelli@arm.com>
References: <20170324140900.7334-1-juri.lelli@arm.com>

No assumption can be made about the rate at which frequency updates get
triggered, as there are scheduling policies (like SCHED_DEADLINE) which don't
trigger them that frequently. Remove this assumption from the code.

Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Cc: Claudio Scordino
---
 kernel/sched/cpufreq_schedutil.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

--
2.10.0

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index da67a1cf91e7..40f30373b709 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -233,14 +233,13 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu)
 		 * If the CPU utilization was last updated before the previous
 		 * frequency update and the time elapsed between the last update
 		 * of the CPU utilization and the last frequency update is long
-		 * enough, don't take the CPU into account as it probably is
-		 * idle now (and clear iowait_boost for it).
+		 * enough, reset iowait_boost, as it probably is not boosted
+		 * anymore now.
 		 */
 		delta_ns = last_freq_update_time - j_sg_cpu->last_update;
-		if (delta_ns > TICK_NSEC) {
+		if (delta_ns > TICK_NSEC)
 			j_sg_cpu->iowait_boost = 0;
-			continue;
-		}
+
 		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT)
 			return policy->cpuinfo.max_freq;
 
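The practical effect can be seen with a hypothetical two-CPU policy where CPU0
carries a sizeable DEADLINE reservation that last kicked cpufreq several ticks
ago and CPU1 is nearly idle:

	before:	delta_ns > TICK_NSEC -> CPU0 skipped ("continue"), next_f sized on CPU1 alone
	after:	only CPU0's iowait_boost is cleared; its utilization still contributes to next_f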
From patchwork Fri Mar 24 14:09:00 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 95954
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
    luca.abeni@santannapisa.it, claudio@evidence.eu.com,
    tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
    mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
    andresoportus@google.com, morten.rasmussen@arm.com,
    dietmar.eggemann@arm.com, patrick.bellasi@arm.com, juri.lelli@arm.com,
    Ingo Molnar, "Rafael J. Wysocki"
Subject: [RFD PATCH 5/5] sched/deadline: make bandwidth enforcement scale-invariant
Date: Fri, 24 Mar 2017 14:09:00 +0000
Message-Id: <20170324140900.7334-6-juri.lelli@arm.com>
X-Mailer: git-send-email 2.10.0
In-Reply-To: <20170324140900.7334-1-juri.lelli@arm.com>
References: <20170324140900.7334-1-juri.lelli@arm.com>

Apply the frequency and CPU scale-invariance correction factors to bandwidth
enforcement (similar to what we already do for fair utilization tracking).
Each delta_exec gets scaled by the current frequency and the maximum CPU
capacity, which means that the reservation runtime parameter (which needs to
be specified by profiling the task's execution at maximum frequency on the
biggest-capacity core) gets scaled accordingly.

Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Cc: Claudio Scordino
---
 kernel/sched/deadline.c | 27 +++++++++++++++++++++++----
 kernel/sched/fair.c     |  2 --
 kernel/sched/sched.h    |  2 ++
 3 files changed, 25 insertions(+), 6 deletions(-)

--
2.10.0

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 853de524c6c6..7141d6f51ee0 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -940,7 +940,9 @@ static void update_curr_dl(struct rq *rq)
 {
 	struct task_struct *curr = rq->curr;
 	struct sched_dl_entity *dl_se = &curr->dl;
-	u64 delta_exec;
+	u64 delta_exec, scaled_delta_exec;
+	unsigned long scale_freq, scale_cpu;
+	int cpu = cpu_of(rq);
 
 	if (!dl_task(curr) || !on_dl_rq(dl_se))
 		return;
@@ -974,9 +976,26 @@ static void update_curr_dl(struct rq *rq)
 	if (unlikely(dl_entity_is_special(dl_se)))
 		return;
 
-	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM))
-		delta_exec = grub_reclaim(delta_exec, rq, curr->dl.dl_bw);
-	dl_se->runtime -= delta_exec;
+	/*
+	 * XXX When clock frequency is controlled by the scheduler (via
+	 * schedutil governor) we implement GRUB-PA: the spare reclaimed
+	 * bandwidth is used to clock down frequency.
+	 *
+	 * However, what below seems to assume scheduler to always be in
+	 * control of clock frequency; when running at a fixed frequency
+	 * (e.g., performance or userspace governor), shouldn't we instead
+	 * use the grub_reclaim mechanism below?
+	 *
+	 *	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM))
+	 *		delta_exec = grub_reclaim(delta_exec, rq, curr->dl.dl_bw);
+	 *	dl_se->runtime -= delta_exec;
+	 */
+	scale_freq = arch_scale_freq_capacity(NULL, cpu);
+	scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
+
+	scaled_delta_exec = cap_scale(delta_exec, scale_freq);
+	scaled_delta_exec = cap_scale(scaled_delta_exec, scale_cpu);
+	dl_se->runtime -= scaled_delta_exec;
 
 throttle:
 	if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2805bd7c8994..37f12d0a3bc4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2818,8 +2818,6 @@ static u32 __compute_runnable_contrib(u64 n)
 	return contrib + runnable_avg_yN_sum[n];
 }
 
-#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
-
 /*
  * We can represent the historical contribution to runnable average as the
  * coefficients of a geometric series. To do this we sub-divide our runnable
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7b5e81120813..81bd048ed181 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -155,6 +155,8 @@ static inline int task_has_dl_policy(struct task_struct *p)
 	return dl_policy(p->policy);
 }
 
+#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
+
 static inline int dl_entity_is_special(struct sched_dl_entity *dl_se)
 {
 	return dl_se->flags & SCHED_FLAG_SPECIAL;
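To see the enforcement effect with concrete (made-up) numbers, assume
SCHED_CAPACITY_SHIFT is 10, the CPU currently runs at half of its maximum
frequency (scale_freq = 512), has full capacity (scale_cpu = 1024), and the
task has just executed for delta_exec = 1000000 ns:

	scaled_delta_exec = cap_scale(1000000, 512)  = 1000000 * 512  >> 10 = 500000
	scaled_delta_exec = cap_scale(500000, 1024)  =  500000 * 1024 >> 10 = 500000
	dl_se->runtime   -= 500000;

Only 0.5ms of runtime is depleted for 1ms of wall-clock execution, i.e. at
half speed the reservation is granted twice the wall-clock time to complete
the same amount of work, which keeps the runtime parameter meaningful as
"work at maximum frequency on the biggest core".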