From patchwork Tue May 23 08:53:45 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 100355
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, tglx@linutronix.de,
    vincent.guittot@linaro.org, rostedt@goodmis.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
    mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
    andresoportus@google.com, morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
    patrick.bellasi@arm.com, juri.lelli@arm.com, Ingo Molnar, "Rafael J. Wysocki"
Subject: [PATCH RFC 2/8] sched/deadline: move cpu frequency selection triggering points
Date: Tue, 23 May 2017 09:53:45 +0100
Message-Id: <20170523085351.18586-3-juri.lelli@arm.com>
In-Reply-To: <20170523085351.18586-1-juri.lelli@arm.com>
References: <20170523085351.18586-1-juri.lelli@arm.com>

Since SCHED_DEADLINE doesn't track a utilization signal (but instead
reserves a fraction of CPU bandwidth for the tasks admitted to the
system), there is no point in evaluating frequency changes at every tick
event. Move the frequency selection triggering points to where
running_bw changes.

Co-authored-by: Claudio Scordino
Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
---
 kernel/sched/deadline.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

-- 
2.11.0

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 1da44b36fae0..fed54b078240 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -52,6 +52,8 @@ void add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	dl_rq->running_bw += dl_bw;
 	SCHED_WARN_ON(dl_rq->running_bw < old); /* overflow */
 	SCHED_WARN_ON(dl_rq->running_bw > dl_rq->this_bw);
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -64,6 +66,8 @@ void sub_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	SCHED_WARN_ON(dl_rq->running_bw > old); /* underflow */
 	if (dl_rq->running_bw > old)
 		dl_rq->running_bw = 0;
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_this_cpu(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -1021,9 +1025,6 @@ static void update_curr_dl(struct rq *rq)
 		return;
 	}
 
-	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
-	cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_DL);
-
 	schedstat_set(curr->se.statistics.exec_max,
 		      max(curr->se.statistics.exec_max, delta_exec));
From patchwork Tue May 23 08:53:48 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 100357
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, tglx@linutronix.de,
    vincent.guittot@linaro.org, rostedt@goodmis.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
    mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
    andresoportus@google.com, morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
    patrick.bellasi@arm.com, juri.lelli@arm.com, Ingo Molnar, "Rafael J. Wysocki"
Subject: [PATCH RFC 5/8] sched/cpufreq_schedutil: always consider all CPUs when deciding next freq
Date: Tue, 23 May 2017 09:53:48 +0100
Message-Id: <20170523085351.18586-6-juri.lelli@arm.com>
In-Reply-To: <20170523085351.18586-1-juri.lelli@arm.com>
References: <20170523085351.18586-1-juri.lelli@arm.com>

No assumption can be made about the rate at which frequency updates get
triggered, as there are scheduling policies (like SCHED_DEADLINE) which
don't trigger them as frequently. Remove this assumption from the code by
always considering the SCHED_DEADLINE utilization signal as not stale.

Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Cc: Claudio Scordino
---
Changes from RFD:
 - only discard the CFS contribution as stale (as suggested by Rafael)
---
 kernel/sched/cpufreq_schedutil.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

-- 
2.11.0

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index f930cec4c3d4..688bd11c2641 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -259,17 +259,22 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 		s64 delta_ns;
 
 		/*
-		 * If the CPU utilization was last updated before the previous
-		 * frequency update and the time elapsed between the last update
-		 * of the CPU utilization and the last frequency update is long
-		 * enough, don't take the CPU into account as it probably is
-		 * idle now (and clear iowait_boost for it).
+		 * If the CFS CPU utilization was last updated before the
+		 * previous frequency update and the time elapsed between the
+		 * last update of the CPU utilization and the last frequency
+		 * update is long enough, reset iowait_boost and util_cfs, as
+		 * they are now probably stale. However, still consider the
+		 * CPU contribution if it has some DEADLINE utilization
+		 * (util_dl).
 		 */
 		delta_ns = time - j_sg_cpu->last_update;
 		if (delta_ns > TICK_NSEC) {
 			j_sg_cpu->iowait_boost = 0;
-			continue;
+			j_sg_cpu->util_cfs = 0;
+			if (j_sg_cpu->util_dl == 0)
+				continue;
 		}
+
 		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT)
 			return policy->cpuinfo.max_freq;
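The net effect of the hunk above is that a CPU whose CFS utilization is
stale still contributes its DEADLINE utilization to the shared-policy
frequency decision. A rough sketch of the resulting per-CPU decision rule
(util_cfs and util_dl are the per-CPU fields introduced earlier in this
series; the helper name and the clamp to the CPU capacity are assumptions
of this sketch, not taken from the patch):

/*
 * Sketch only: a CPU's contribution once stale CFS utilization has been
 * discarded. DEADLINE utilization is always counted; CFS utilization is
 * counted only while it is fresh.
 */
static unsigned long cpu_contribution(unsigned long util_cfs,
				      unsigned long util_dl,
				      unsigned long max_cap,
				      int cfs_is_stale)
{
	unsigned long util = util_dl;

	if (!cfs_is_stale)
		util += util_cfs;

	return util < max_cap ? util : max_cap;	/* clamp to CPU capacity */
}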
From patchwork Tue May 23 08:53:51 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 100360
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, tglx@linutronix.de,
    vincent.guittot@linaro.org, rostedt@goodmis.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
    mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
    andresoportus@google.com, morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
    patrick.bellasi@arm.com, juri.lelli@arm.com, Ingo Molnar, "Rafael J. Wysocki"
Subject: [PATCH RFC 8/8] sched/deadline: make bandwidth enforcement scale-invariant
Date: Tue, 23 May 2017 09:53:51 +0100
Message-Id: <20170523085351.18586-9-juri.lelli@arm.com>
In-Reply-To: <20170523085351.18586-1-juri.lelli@arm.com>
References: <20170523085351.18586-1-juri.lelli@arm.com>

Apply the frequency and CPU scale-invariance correction factor to
bandwidth enforcement (similar to what we already do for fair utilization
tracking). Each delta_exec is scaled according to the current frequency
and the maximum CPU capacity, which means that the reservation runtime
parameter (which needs to be specified by profiling the task's execution
at maximum frequency on the biggest-capacity core) is scaled accordingly.

Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Cc: Claudio Scordino
---
Changes from RFD:
 - either apply grub_reclaim or perform freq/cpu scaling; which of the two
   is the correct thing to do is still very much up for discussion
---
 kernel/sched/deadline.c | 26 ++++++++++++++++++++++----
 kernel/sched/fair.c | 2 --
 kernel/sched/sched.h | 2 ++
 3 files changed, 24 insertions(+), 6 deletions(-)

-- 
2.11.0

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 5ee4fd9b1c7f..b6c3886478c3 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1008,7 +1008,8 @@ static void update_curr_dl(struct rq *rq)
 {
 	struct task_struct *curr = rq->curr;
 	struct sched_dl_entity *dl_se = &curr->dl;
-	u64 delta_exec;
+	u64 delta_exec, scaled_delta_exec;
+	int cpu = cpu_of(rq);
 
 	if (!dl_task(curr) || !on_dl_rq(dl_se))
 		return;
@@ -1042,9 +1043,26 @@ static void update_curr_dl(struct rq *rq)
 	if (unlikely(dl_entity_is_special(dl_se)))
 		return;
 
-	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM))
-		delta_exec = grub_reclaim(delta_exec, rq, &curr->dl);
-	dl_se->runtime -= delta_exec;
+	/*
+	 * For tasks that participate in GRUB, we implement GRUB-PA: the
+	 * spare reclaimed bandwidth is used to clock down frequency.
+	 *
+	 * For the others, we still need to scale reservation parameters
+	 * according to current frequency and CPU maximum capacity.
+	 */
+	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM)) {
+		scaled_delta_exec = grub_reclaim(delta_exec,
+						 rq,
+						 &curr->dl);
+	} else {
+		unsigned long scale_freq = arch_scale_freq_capacity(cpu);
+		unsigned long scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
+
+		scaled_delta_exec = cap_scale(delta_exec, scale_freq);
+		scaled_delta_exec = cap_scale(scaled_delta_exec, scale_cpu);
+	}
+
+	dl_se->runtime -= scaled_delta_exec;
 
 throttle:
 	if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b0f31064bbbd..39224813e038 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2781,8 +2781,6 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
 	return c1 + c2 + c3;
 }
 
-#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
-
 /*
  * Accumulate the three separate parts of the sum; d1 the remainder
  * of the last (incomplete) period, d2 the span of full periods and d3
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index cc474c62cd18..019c46768ecb 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -155,6 +155,8 @@ static inline int task_has_dl_policy(struct task_struct *p)
 	return dl_policy(p->policy);
 }
 
+#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
+
 static inline int dl_entity_is_special(struct sched_dl_entity *dl_se)
 {
 	return dl_se->flags & SCHED_FLAG_SPECIAL;
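As a sanity check on the scaling above: cap_scale(v, s) computes
v * s >> SCHED_CAPACITY_SHIFT, with both scale factors expressed relative
to SCHED_CAPACITY_SCALE (1024). A standalone worked example, with values
made up purely for illustration:

/* Standalone illustration of the double cap_scale() applied to delta_exec. */
#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10
#define cap_scale(v, s)		((v)*(s) >> SCHED_CAPACITY_SHIFT)

int main(void)
{
	uint64_t delta_exec = 1000000;		/* 1 ms of wall-clock runtime, in ns */
	unsigned long scale_freq = 512;		/* running at half of the max frequency */
	unsigned long scale_cpu = 768;		/* capacity is 75% of the biggest core */

	uint64_t scaled = cap_scale(delta_exec, scale_freq);
	scaled = cap_scale(scaled, scale_cpu);

	/* 1 ms at half frequency on a 75%-capacity core charges ~0.375 ms of budget */
	printf("scaled_delta_exec = %llu ns\n", (unsigned long long)scaled);
	return 0;
}

Charging the scaled amount keeps the runtime budget comparable to the value
profiled at maximum frequency on the biggest core, regardless of where and
at which frequency the task actually ran.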