From patchwork Mon Dec 4 10:23:18 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 120499
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
    luca.abeni@santannapisa.it, claudio@evidence.eu.com,
    tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
    mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
    morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
    patrick.bellasi@arm.com, alessio.balsini@arm.com, juri.lelli@redhat.com,
    Juri Lelli, Ingo Molnar, "Rafael J. Wysocki"
Subject: [RFC PATCH v2 1/8] sched/cpufreq_schedutil: make use of DEADLINE utilization signal
Date: Mon, 4 Dec 2017 11:23:18 +0100
Message-Id: <20171204102325.5110-2-juri.lelli@redhat.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20171204102325.5110-1-juri.lelli@redhat.com>
References: <20171204102325.5110-1-juri.lelli@redhat.com>

From: Juri Lelli

SCHED_DEADLINE tracks the active utilization signal with a per-dl_rq
variable named running_bw. Make use of it to drive CPU frequency selection:
add up the FAIR and DEADLINE contributions to get the CPU capacity required
to handle both (while RT still selects the maximum frequency).

Co-authored-by: Claudio Scordino
Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Acked-by: Viresh Kumar
---
 include/linux/sched/cpufreq.h    |  2 --
 kernel/sched/cpufreq_schedutil.c | 25 +++++++++++++++----------
 2 files changed, 15 insertions(+), 12 deletions(-)

-- 
2.14.3

diff --git a/include/linux/sched/cpufreq.h b/include/linux/sched/cpufreq.h
index d1ad3d825561..0b55834efd46 100644
--- a/include/linux/sched/cpufreq.h
+++ b/include/linux/sched/cpufreq.h
@@ -12,8 +12,6 @@
 #define SCHED_CPUFREQ_DL	(1U << 1)
 #define SCHED_CPUFREQ_IOWAIT	(1U << 2)
 
-#define SCHED_CPUFREQ_RT_DL	(SCHED_CPUFREQ_RT | SCHED_CPUFREQ_DL)
-
 #ifdef CONFIG_CPU_FREQ
 struct update_util_data {
 	void (*func)(struct update_util_data *data, u64 time, unsigned int flags);
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 2f52ec0f1539..de1ad1fffbdc 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -179,12 +179,17 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 static void sugov_get_util(unsigned long *util, unsigned long *max, int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
-	unsigned long cfs_max;
+	unsigned long dl_util = (rq->dl.running_bw * SCHED_CAPACITY_SCALE)
+				>> BW_SHIFT;
 
-	cfs_max = arch_scale_cpu_capacity(NULL, cpu);
+	*max = arch_scale_cpu_capacity(NULL, cpu);
 
-	*util = min(rq->cfs.avg.util_avg, cfs_max);
-	*max = cfs_max;
+	/*
+	 * Ideally we would like to set util_dl as min/guaranteed freq and
+	 * util_cfs + util_dl as requested freq. However, cpufreq is not yet
+	 * ready for such an interface. So, we only do the latter for now.
+	 */
+	*util = min(rq->cfs.avg.util_avg + dl_util, *max);
 }
 
 static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
@@ -272,7 +277,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 
 	busy = sugov_cpu_is_busy(sg_cpu);
 
-	if (flags & SCHED_CPUFREQ_RT_DL) {
+	if (flags & SCHED_CPUFREQ_RT) {
 		next_f = policy->cpuinfo.max_freq;
 	} else {
 		sugov_get_util(&util, &max, sg_cpu->cpu);
@@ -317,7 +322,7 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 			j_sg_cpu->iowait_boost_pending = false;
 			continue;
 		}
-		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT_DL)
+		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT)
 			return policy->cpuinfo.max_freq;
 
 		j_util = j_sg_cpu->util;
@@ -353,7 +358,7 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 	sg_cpu->last_update = time;
 
 	if (sugov_should_update_freq(sg_policy, time)) {
-		if (flags & SCHED_CPUFREQ_RT_DL)
+		if (flags & SCHED_CPUFREQ_RT)
 			next_f = sg_policy->policy->cpuinfo.max_freq;
 		else
 			next_f = sugov_next_freq_shared(sg_cpu, time);
@@ -383,9 +388,9 @@ static void sugov_irq_work(struct irq_work *irq_work)
 	sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
 
 	/*
-	 * For RT and deadline tasks, the schedutil governor shoots the
-	 * frequency to maximum. Special care must be taken to ensure that this
-	 * kthread doesn't result in the same behavior.
+	 * For RT tasks, the schedutil governor shoots the frequency to maximum.
+	 * Special care must be taken to ensure that this kthread doesn't result
+	 * in the same behavior.
 	 *
	 * This is (mostly) guaranteed by the work_in_progress flag.
	 * The flag is updated only at the end of the sugov_work() function and
	 * before that

From patchwork Mon Dec 4 10:23:19 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 120500
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
    luca.abeni@santannapisa.it, claudio@evidence.eu.com,
    tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
    mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
    morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
    patrick.bellasi@arm.com, alessio.balsini@arm.com, juri.lelli@redhat.com,
    Juri Lelli, Ingo Molnar, "Rafael J. Wysocki"
Subject: [RFC PATCH v2 2/8] sched/deadline: move cpu frequency selection triggering points
Date: Mon, 4 Dec 2017 11:23:19 +0100
Message-Id: <20171204102325.5110-3-juri.lelli@redhat.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20171204102325.5110-1-juri.lelli@redhat.com>
References: <20171204102325.5110-1-juri.lelli@redhat.com>

From: Juri Lelli

Since SCHED_DEADLINE doesn't track a utilization signal (it instead
reserves a fraction of CPU bandwidth for tasks admitted to the system),
there is no point in evaluating frequency changes on each tick event. Move
the frequency selection triggering points to where running_bw changes.

Co-authored-by: Claudio Scordino
Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Reviewed-by: Viresh Kumar
---
 kernel/sched/deadline.c |  7 ++++---
 kernel/sched/sched.h    | 12 ++++++------
 2 files changed, 10 insertions(+), 9 deletions(-)

-- 
2.14.3

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 2473736c7616..7e4038bf9954 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -86,6 +86,8 @@ void add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	dl_rq->running_bw += dl_bw;
 	SCHED_WARN_ON(dl_rq->running_bw < old); /* overflow */
 	SCHED_WARN_ON(dl_rq->running_bw > dl_rq->this_bw);
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_util(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -98,6 +100,8 @@ void sub_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	SCHED_WARN_ON(dl_rq->running_bw > old); /* underflow */
 	if (dl_rq->running_bw > old)
 		dl_rq->running_bw = 0;
+	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
+	cpufreq_update_util(rq_of_dl_rq(dl_rq), SCHED_CPUFREQ_DL);
 }
 
 static inline
@@ -1134,9 +1138,6 @@ static void update_curr_dl(struct rq *rq)
 		return;
 	}
 
-	/* kick cpufreq (see the comment in kernel/sched/sched.h). */
-	cpufreq_update_util(rq, SCHED_CPUFREQ_DL);
-
 	schedstat_set(curr->se.statistics.exec_max,
 		      max(curr->se.statistics.exec_max, delta_exec));
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b19552a212de..a1730e39cbc6 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2096,14 +2096,14 @@ DECLARE_PER_CPU(struct update_util_data *, cpufreq_update_util_data);
  * The way cpufreq is currently arranged requires it to evaluate the CPU
  * performance state (frequency/voltage) on a regular basis to prevent it from
  * being stuck in a completely inadequate performance level for too long.
- * That is not guaranteed to happen if the updates are only triggered from CFS,
- * though, because they may not be coming in if RT or deadline tasks are active
- * all the time (or there are RT and DL tasks only).
+ * That is not guaranteed to happen if the updates are only triggered from CFS
+ * and DL, though, because they may not be coming in if only RT tasks are
+ * active all the time (or there are RT tasks only).
  *
- * As a workaround for that issue, this function is called by the RT and DL
- * sched classes to trigger extra cpufreq updates to prevent it from stalling,
+ * As a workaround for that issue, this function is called periodically by the
+ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
  * but that really is a band-aid. Going forward it should be replaced with
- * solutions targeted more specifically at RT and DL tasks.
+ * solutions targeted more specifically at RT tasks.
  */
 static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
 {

From patchwork Mon Dec 4 10:23:20 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 120506
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
    luca.abeni@santannapisa.it, claudio@evidence.eu.com,
    tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
    mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
    morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
    patrick.bellasi@arm.com, alessio.balsini@arm.com, juri.lelli@redhat.com,
    Juri Lelli, Ingo Molnar, "Rafael J. Wysocki"
Subject: [RFC PATCH v2 3/8] sched/cpufreq_schedutil: make worker kthread be SCHED_DEADLINE
Date: Mon, 4 Dec 2017 11:23:20 +0100
Message-Id: <20171204102325.5110-4-juri.lelli@redhat.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20171204102325.5110-1-juri.lelli@redhat.com>
References: <20171204102325.5110-1-juri.lelli@redhat.com>

From: Juri Lelli

The worker kthread needs to be able to change frequency on behalf of all
other threads.
Make it special, just below the STOP class.

Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Cc: Claudio Scordino
---
Changes from RFCv1:
 - return -EINVAL for user trying to use the new flag (Peter)
 - s/SPECIAL/SUGOV/ in the flag name (several comments from people to find
   better naming, Steve thinks SUGOV is more greppable than others)
 - give worker kthread a fake (unused) bandwidth, so that if priority
   inheritance is triggered we don't BUG_ON on zero runtime
 - filter out fake bandwidth when computing SCHED_DEADLINE bandwidth (fix
   by Claudio Scordino)
---
 include/linux/sched.h            |   1 +
 kernel/sched/core.c              |  15 +++++-
 kernel/sched/cpufreq_schedutil.c |  19 ++++++--
 kernel/sched/deadline.c          | 103 +++++++++++++++++++++++++++------------
 kernel/sched/sched.h             |  22 ++++++++-
 5 files changed, 124 insertions(+), 36 deletions(-)

-- 
2.14.3

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 21991d668d35..c4b2d4a5cfab 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1438,6 +1438,7 @@ extern int idle_cpu(int cpu);
 extern int sched_setscheduler(struct task_struct *, int, const struct sched_param *);
 extern int sched_setscheduler_nocheck(struct task_struct *, int, const struct sched_param *);
 extern int sched_setattr(struct task_struct *, const struct sched_attr *);
+extern int sched_setattr_nocheck(struct task_struct *, const struct sched_attr *);
 extern struct task_struct *idle_task(int cpu);
 
 /**
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 75554f366fd3..5be52a3c1c1b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4041,7 +4041,9 @@ static int __sched_setscheduler(struct task_struct *p,
 	}
 
 	if (attr->sched_flags &
-		~(SCHED_FLAG_RESET_ON_FORK | SCHED_FLAG_RECLAIM))
+		~(SCHED_FLAG_RESET_ON_FORK |
+		  SCHED_FLAG_RECLAIM |
+		  SCHED_FLAG_SUGOV))
 		return -EINVAL;
 
 	/*
@@ -4108,6 +4110,9 @@ static int __sched_setscheduler(struct task_struct *p,
 	}
 
 	if (user) {
+		if (attr->sched_flags & SCHED_FLAG_SUGOV)
+			return -EINVAL;
+
 		retval = security_task_setscheduler(p);
 		if (retval)
 			return retval;
@@ -4163,7 +4168,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	}
 #endif
 #ifdef CONFIG_SMP
-	if (dl_bandwidth_enabled() && dl_policy(policy)) {
+	if (dl_bandwidth_enabled() && dl_policy(policy) &&
+			!(attr->sched_flags & SCHED_FLAG_SUGOV)) {
 		cpumask_t *span = rq->rd->span;
 
 		/*
@@ -4293,6 +4299,11 @@ int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
 }
 EXPORT_SYMBOL_GPL(sched_setattr);
 
+int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
+{
+	return __sched_setscheduler(p, attr, false, true);
+}
+
 /**
  * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
  * @p: the task in question.
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index de1ad1fffbdc..c22457868ee6 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -475,7 +475,20 @@ static void sugov_policy_free(struct sugov_policy *sg_policy)
 static int sugov_kthread_create(struct sugov_policy *sg_policy)
 {
 	struct task_struct *thread;
-	struct sched_param param = { .sched_priority = MAX_USER_RT_PRIO / 2 };
+	struct sched_attr attr = {
+		.size = sizeof(struct sched_attr),
+		.sched_policy = SCHED_DEADLINE,
+		.sched_flags = SCHED_FLAG_SUGOV,
+		.sched_nice = 0,
+		.sched_priority = 0,
+		/*
+		 * Fake (unused) bandwidth; workaround to "fix"
+		 * priority inheritance.
+		 */
+		.sched_runtime = 1000000,
+		.sched_deadline = 10000000,
+		.sched_period = 10000000,
+	};
 	struct cpufreq_policy *policy = sg_policy->policy;
 	int ret;
 
@@ -493,10 +506,10 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
 		return PTR_ERR(thread);
 	}
 
-	ret = sched_setscheduler_nocheck(thread, SCHED_FIFO, &param);
+	ret = sched_setattr_nocheck(thread, &attr);
 	if (ret) {
 		kthread_stop(thread);
-		pr_warn("%s: failed to set SCHED_FIFO\n", __func__);
+		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
 		return ret;
 	}
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 7e4038bf9954..40f12aab9250 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -78,7 +78,7 @@ static inline int dl_bw_cpus(int i)
 #endif
 
 static inline
-void add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
+void __add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 {
 	u64 old = dl_rq->running_bw;
 
@@ -91,7 +91,7 @@ void add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 }
 
 static inline
-void sub_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
+void __sub_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 {
 	u64 old = dl_rq->running_bw;
 
@@ -105,7 +105,7 @@ void sub_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 }
 
 static inline
-void add_rq_bw(u64 dl_bw, struct dl_rq *dl_rq)
+void __add_rq_bw(u64 dl_bw, struct dl_rq *dl_rq)
 {
 	u64 old = dl_rq->this_bw;
 
@@ -115,7 +115,7 @@ void add_rq_bw(u64 dl_bw, struct dl_rq *dl_rq)
 }
 
 static inline
-void sub_rq_bw(u64 dl_bw, struct dl_rq *dl_rq)
+void __sub_rq_bw(u64 dl_bw, struct dl_rq *dl_rq)
 {
 	u64 old = dl_rq->this_bw;
 
@@ -127,16 +127,46 @@ void sub_rq_bw(u64 dl_bw, struct dl_rq *dl_rq)
 	SCHED_WARN_ON(dl_rq->running_bw > dl_rq->this_bw);
 }
 
+static inline
+void add_rq_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
+{
+	if (!(dl_se->flags & SCHED_FLAG_SUGOV))
+		__add_rq_bw(dl_se->dl_bw, dl_rq);
+}
+
+static inline
+void sub_rq_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
+{
+	if (!(dl_se->flags & SCHED_FLAG_SUGOV))
+		__sub_rq_bw(dl_se->dl_bw, dl_rq);
+}
+
+static inline
+void add_running_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
+{
+	if (!(dl_se->flags & SCHED_FLAG_SUGOV))
+		__add_running_bw(dl_se->dl_bw, dl_rq);
+}
+
+static inline
+void sub_running_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
+{
+	if (!(dl_se->flags & SCHED_FLAG_SUGOV))
+		__sub_running_bw(dl_se->dl_bw, dl_rq);
+}
+
 void dl_change_utilization(struct task_struct *p, u64 new_bw)
 {
 	struct rq *rq;
 
+	BUG_ON(p->dl.flags & SCHED_FLAG_SUGOV);
+
 	if (task_on_rq_queued(p))
 		return;
 
 	rq = task_rq(p);
 	if (p->dl.dl_non_contending) {
-		sub_running_bw(p->dl.dl_bw, &rq->dl);
+		sub_running_bw(&p->dl, &rq->dl);
 		p->dl.dl_non_contending = 0;
 		/*
 		 * If the timer handler is currently running and the
@@ -148,8 +178,8 @@ void dl_change_utilization(struct task_struct *p, u64 new_bw)
 		if (hrtimer_try_to_cancel(&p->dl.inactive_timer) == 1)
 			put_task_struct(p);
 	}
-	sub_rq_bw(p->dl.dl_bw, &rq->dl);
-	add_rq_bw(new_bw, &rq->dl);
+	__sub_rq_bw(p->dl.dl_bw, &rq->dl);
+	__add_rq_bw(new_bw, &rq->dl);
 }
 
 /*
@@ -221,6 +251,9 @@ static void task_non_contending(struct task_struct *p)
 	if (dl_se->dl_runtime == 0)
 		return;
 
+	if (unlikely(dl_entity_is_special(dl_se)))
+		return;
+
 	WARN_ON(hrtimer_active(&dl_se->inactive_timer));
 	WARN_ON(dl_se->dl_non_contending);
 
@@ -240,12 +273,12 @@ static void task_non_contending(struct task_struct *p)
 	 */
 	if (zerolag_time < 0) {
 		if (dl_task(p))
-			sub_running_bw(dl_se->dl_bw, dl_rq);
+			sub_running_bw(dl_se, dl_rq);
 		if (!dl_task(p) || p->state == TASK_DEAD) {
 			struct dl_bw *dl_b = dl_bw_of(task_cpu(p));
 
 			if (p->state == TASK_DEAD)
-				sub_rq_bw(p->dl.dl_bw, &rq->dl);
+				sub_rq_bw(&p->dl, &rq->dl);
 			raw_spin_lock(&dl_b->lock);
 			__dl_sub(dl_b, p->dl.dl_bw, dl_bw_cpus(task_cpu(p)));
 			__dl_clear_params(p);
@@ -272,7 +305,7 @@ static void task_contending(struct sched_dl_entity *dl_se, int flags)
 		return;
 
 	if (flags & ENQUEUE_MIGRATED)
-		add_rq_bw(dl_se->dl_bw, dl_rq);
+		add_rq_bw(dl_se, dl_rq);
 
 	if (dl_se->dl_non_contending) {
 		dl_se->dl_non_contending = 0;
@@ -293,7 +326,7 @@ static void task_contending(struct sched_dl_entity *dl_se, int flags)
 		 * when the "inactive timer" fired).
 		 * So, add it back.
 		 */
-		add_running_bw(dl_se->dl_bw, dl_rq);
+		add_running_bw(dl_se, dl_rq);
 	}
 }
 
@@ -1149,6 +1182,9 @@ static void update_curr_dl(struct rq *rq)
 
 	sched_rt_avg_update(rq, delta_exec);
 
+	if (unlikely(dl_entity_is_special(dl_se)))
+		return;
+
 	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM))
 		delta_exec = grub_reclaim(delta_exec, rq, &curr->dl);
 	dl_se->runtime -= delta_exec;
@@ -1205,8 +1241,8 @@ static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
 		struct dl_bw *dl_b = dl_bw_of(task_cpu(p));
 
 		if (p->state == TASK_DEAD && dl_se->dl_non_contending) {
-			sub_running_bw(p->dl.dl_bw, dl_rq_of_se(&p->dl));
-			sub_rq_bw(p->dl.dl_bw, dl_rq_of_se(&p->dl));
+			sub_running_bw(&p->dl, dl_rq_of_se(&p->dl));
+			sub_rq_bw(&p->dl, dl_rq_of_se(&p->dl));
 			dl_se->dl_non_contending = 0;
 		}
 
@@ -1223,7 +1259,7 @@ static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
 	sched_clock_tick();
 	update_rq_clock(rq);
 
-	sub_running_bw(dl_se->dl_bw, &rq->dl);
+	sub_running_bw(dl_se, &rq->dl);
 	dl_se->dl_non_contending = 0;
 unlock:
 	task_rq_unlock(rq, p, &rf);
@@ -1417,8 +1453,8 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
 		dl_check_constrained_dl(&p->dl);
 
 	if (p->on_rq == TASK_ON_RQ_MIGRATING || flags & ENQUEUE_RESTORE) {
-		add_rq_bw(p->dl.dl_bw, &rq->dl);
-		add_running_bw(p->dl.dl_bw, &rq->dl);
+		add_rq_bw(&p->dl, &rq->dl);
+		add_running_bw(&p->dl, &rq->dl);
 	}
 
 	/*
@@ -1458,8 +1494,8 @@ static void dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
 	__dequeue_task_dl(rq, p, flags);
 
 	if (p->on_rq == TASK_ON_RQ_MIGRATING || flags & DEQUEUE_SAVE) {
-		sub_running_bw(p->dl.dl_bw, &rq->dl);
-		sub_rq_bw(p->dl.dl_bw, &rq->dl);
+		sub_running_bw(&p->dl, &rq->dl);
+		sub_rq_bw(&p->dl, &rq->dl);
 	}
 
 	/*
@@ -1565,7 +1601,7 @@ static void migrate_task_rq_dl(struct task_struct *p)
	 */
 	raw_spin_lock(&rq->lock);
 	if (p->dl.dl_non_contending) {
-		sub_running_bw(p->dl.dl_bw, &rq->dl);
+		sub_running_bw(&p->dl, &rq->dl);
 		p->dl.dl_non_contending = 0;
 		/*
 		 * If the timer handler is currently running and the
@@ -1577,7 +1613,7 @@ static void migrate_task_rq_dl(struct task_struct *p)
 		if (hrtimer_try_to_cancel(&p->dl.inactive_timer) == 1)
 			put_task_struct(p);
 	}
-	sub_rq_bw(p->dl.dl_bw, &rq->dl);
+	sub_rq_bw(&p->dl, &rq->dl);
 	raw_spin_unlock(&rq->lock);
 }
 
@@ -2020,11 +2056,11 @@ static int push_dl_task(struct rq *rq)
 	}
 
 	deactivate_task(rq, next_task, 0);
-	sub_running_bw(next_task->dl.dl_bw, &rq->dl);
-	sub_rq_bw(next_task->dl.dl_bw, &rq->dl);
+	sub_running_bw(&next_task->dl, &rq->dl);
+	sub_rq_bw(&next_task->dl, &rq->dl);
 	set_task_cpu(next_task, later_rq->cpu);
-	add_rq_bw(next_task->dl.dl_bw, &later_rq->dl);
-	add_running_bw(next_task->dl.dl_bw, &later_rq->dl);
+	add_rq_bw(&next_task->dl, &later_rq->dl);
+	add_running_bw(&next_task->dl, &later_rq->dl);
 	activate_task(later_rq, next_task, 0);
 	ret = 1;
 
@@ -2112,11 +2148,11 @@ static void pull_dl_task(struct rq *this_rq)
 			resched = true;
 
 			deactivate_task(src_rq, p, 0);
-			sub_running_bw(p->dl.dl_bw, &src_rq->dl);
-			sub_rq_bw(p->dl.dl_bw, &src_rq->dl);
+			sub_running_bw(&p->dl, &src_rq->dl);
+			sub_rq_bw(&p->dl, &src_rq->dl);
 			set_task_cpu(p, this_cpu);
-			add_rq_bw(p->dl.dl_bw, &this_rq->dl);
-			add_running_bw(p->dl.dl_bw, &this_rq->dl);
+			add_rq_bw(&p->dl, &this_rq->dl);
+			add_running_bw(&p->dl, &this_rq->dl);
 			activate_task(this_rq, p, 0);
 			dmin = p->dl.deadline;
@@ -2225,7 +2261,7 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
 		task_non_contending(p);
 
 	if (!task_on_rq_queued(p))
-		sub_rq_bw(p->dl.dl_bw, &rq->dl);
+		sub_rq_bw(&p->dl, &rq->dl);
 
 	/*
 	 * We cannot use inactive_task_timer() to invoke sub_running_bw()
@@ -2257,7 +2293,7 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
 
 	/* If p is not queued we will update its parameters at next wakeup. */
 	if (!task_on_rq_queued(p)) {
-		add_rq_bw(p->dl.dl_bw, &rq->dl);
+		add_rq_bw(&p->dl, &rq->dl);
 
 		return;
 	}
@@ -2436,6 +2472,9 @@ int sched_dl_overflow(struct task_struct *p, int policy,
 	u64 new_bw = dl_policy(policy) ? to_ratio(period, runtime) : 0;
 	int cpus, err = -1;
 
+	if (attr->sched_flags & SCHED_FLAG_SUGOV)
+		return 0;
+
 	/* !deadline task may carry old deadline bandwidth */
 	if (new_bw == p->dl.dl_bw && task_has_dl_policy(p))
 		return 0;
@@ -2522,6 +2561,10 @@ void __getparam_dl(struct task_struct *p, struct sched_attr *attr)
  */
 bool __checkparam_dl(const struct sched_attr *attr)
 {
+	/* special dl tasks don't actually use any parameter */
+	if (attr->sched_flags & SCHED_FLAG_SUGOV)
+		return true;
+
 	/* deadline != 0 */
 	if (attr->sched_deadline == 0)
 		return false;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index a1730e39cbc6..280b421a82e8 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -156,13 +156,33 @@ static inline int task_has_dl_policy(struct task_struct *p)
 	return dl_policy(p->policy);
 }
 
+/*
+ * !! For sched_setattr_nocheck() (kernel) only !!
+ *
+ * This is actually gross. :(
+ *
+ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
+ * tasks, but still be able to sleep. We need this on platforms that cannot
+ * atomically change clock frequency. Remove once fast switching will be
+ * available on such platforms.
+ *
+ * SUGOV stands for SchedUtil GOVernor.
+ */
+#define SCHED_FLAG_SUGOV	0x10000000
+
+static inline int dl_entity_is_special(struct sched_dl_entity *dl_se)
+{
+	return dl_se->flags & SCHED_FLAG_SUGOV;
+}
+
 /*
  * Tells if entity @a should preempt entity @b.
 */
 static inline bool
 dl_entity_preempt(struct sched_dl_entity *a, struct sched_dl_entity *b)
 {
-	return dl_time_before(a->deadline, b->deadline);
+	return dl_entity_is_special(a) ||
+	       dl_time_before(a->deadline, b->deadline);
 }
 
 /*

From patchwork Mon Dec 4 10:23:21 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 120501
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net, viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org, luca.abeni@santannapisa.it, claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it, bristot@redhat.com, mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com, morten.rasmussen@arm.com, dietmar.eggemann@arm.com, patrick.bellasi@arm.com, alessio.balsini@arm.com, juri.lelli@redhat.com, Juri Lelli, Ingo Molnar, Rafael J. Wysocki
Subject: [RFC PATCH v2 4/8] sched/cpufreq_schedutil: split utilization signals
Date: Mon, 4 Dec 2017 11:23:21 +0100
Message-Id: <20171204102325.5110-5-juri.lelli@redhat.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20171204102325.5110-1-juri.lelli@redhat.com>
References: <20171204102325.5110-1-juri.lelli@redhat.com>

From: Juri Lelli

To be able to treat utilization signals of different scheduling classes in
different ways (e.g., CFS signal might be stale while DEADLINE signal is
never stale by design) we need to split sugov_cpu::util signal in two:
util_cfs and util_dl.

This patch does that by also changing sugov_get_util() parameter list.
After this change, aggregation of the different signals has to be performed
by sugov_get_util() users (so that they can decide what to do with the
different signals).

Suggested-by: Rafael J. Wysocki
Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Cc: Claudio Scordino
Acked-by: Viresh Kumar
---
 kernel/sched/cpufreq_schedutil.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

-- 
2.14.3

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index c22457868ee6..a3072f24dc16 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -60,7 +60,8 @@ struct sugov_cpu {
 	u64 last_update;
 
 	/* The fields below are only needed when sharing a policy. */
-	unsigned long util;
+	unsigned long util_cfs;
+	unsigned long util_dl;
 	unsigned long max;
 	unsigned int flags;
@@ -176,20 +177,25 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 	return cpufreq_driver_resolve_freq(policy, freq);
 }
 
-static void sugov_get_util(unsigned long *util, unsigned long *max, int cpu)
+static void sugov_get_util(struct sugov_cpu *sg_cpu)
 {
-	struct rq *rq = cpu_rq(cpu);
-	unsigned long dl_util = (rq->dl.running_bw * SCHED_CAPACITY_SCALE)
-				>> BW_SHIFT;
+	struct rq *rq = cpu_rq(sg_cpu->cpu);
 
-	*max = arch_scale_cpu_capacity(NULL, cpu);
+	sg_cpu->max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
+	sg_cpu->util_cfs = rq->cfs.avg.util_avg;
+	sg_cpu->util_dl = (rq->dl.running_bw * SCHED_CAPACITY_SCALE)
+			  >> BW_SHIFT;
+}
+
+static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
+{
 	/*
 	 * Ideally we would like to set util_dl as min/guaranteed freq and
 	 * util_cfs + util_dl as requested freq. However, cpufreq is not yet
 	 * ready for such an interface. So, we only do the latter for now.
 	 */
-	*util = min(rq->cfs.avg.util_avg + dl_util, *max);
+	return min(sg_cpu->util_cfs + sg_cpu->util_dl, sg_cpu->max);
 }
 
 static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
@@ -280,7 +286,9 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	if (flags & SCHED_CPUFREQ_RT) {
 		next_f = policy->cpuinfo.max_freq;
 	} else {
-		sugov_get_util(&util, &max, sg_cpu->cpu);
+		sugov_get_util(sg_cpu);
+		max = sg_cpu->max;
+		util = sugov_aggregate_util(sg_cpu);
 		sugov_iowait_boost(sg_cpu, &util, &max);
 		next_f = get_next_freq(sg_policy, util, max);
 		/*
@@ -325,8 +333,8 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT)
 			return policy->cpuinfo.max_freq;
 
-		j_util = j_sg_cpu->util;
 		j_max = j_sg_cpu->max;
+		j_util = sugov_aggregate_util(j_sg_cpu);
 		if (j_util * max > j_max * util) {
 			util = j_util;
 			max = j_max;
@@ -343,15 +351,11 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 {
 	struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);
 	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
-	unsigned long util, max;
 	unsigned int next_f;
 
-	sugov_get_util(&util, &max, sg_cpu->cpu);
-
 	raw_spin_lock(&sg_policy->update_lock);
 
-	sg_cpu->util = util;
-	sg_cpu->max = max;
+	sugov_get_util(sg_cpu);
 	sg_cpu->flags = flags;
 
 	sugov_set_iowait_boost(sg_cpu, time, flags);

From patchwork Mon Dec 4 10:23:22 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 120504
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net, viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org, luca.abeni@santannapisa.it, claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it, bristot@redhat.com, mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com, morten.rasmussen@arm.com, dietmar.eggemann@arm.com, patrick.bellasi@arm.com, alessio.balsini@arm.com, juri.lelli@redhat.com, Juri Lelli, Ingo Molnar, Rafael J. Wysocki
Subject: [RFC PATCH v2 5/8] sched/cpufreq_schedutil: always consider all CPUs when deciding next freq
Date: Mon, 4 Dec 2017 11:23:22 +0100
Message-Id: <20171204102325.5110-6-juri.lelli@redhat.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20171204102325.5110-1-juri.lelli@redhat.com>
References: <20171204102325.5110-1-juri.lelli@redhat.com>

From: Juri Lelli

No assumption can be made upon the rate at which frequency updates get
triggered, as there are scheduling policies (like SCHED_DEADLINE) which don't
trigger them so frequently.

Remove such assumption from the code, by always considering SCHED_DEADLINE
utilization signal as not stale.

Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Cc: Claudio Scordino
Acked-by: Viresh Kumar
---
 kernel/sched/cpufreq_schedutil.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

-- 
2.14.3

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index a3072f24dc16..b7a576c8dcaa 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -318,17 +318,21 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 		s64 delta_ns;
 
 		/*
-		 * If the CPU utilization was last updated before the previous
-		 * frequency update and the time elapsed between the last update
-		 * of the CPU utilization and the last frequency update is long
-		 * enough, don't take the CPU into account as it probably is
-		 * idle now (and clear iowait_boost for it).
+		 * If the CFS CPU utilization was last updated before the
+		 * previous frequency update and the time elapsed between the
+		 * last update of the CPU utilization and the last frequency
+		 * update is long enough, reset iowait_boost and util_cfs, as
+		 * they are now probably stale. However, still consider the
+		 * CPU contribution if it has some DEADLINE utilization
+		 * (util_dl).
 		 */
 		delta_ns = time - j_sg_cpu->last_update;
 		if (delta_ns > TICK_NSEC) {
 			j_sg_cpu->iowait_boost = 0;
 			j_sg_cpu->iowait_boost_pending = false;
-			continue;
+			j_sg_cpu->util_cfs = 0;
+			if (j_sg_cpu->util_dl == 0)
+				continue;
 		}
 
 		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT)
 			return policy->cpuinfo.max_freq;

From patchwork Mon Dec 4 10:23:23 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 120502
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net, viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org, luca.abeni@santannapisa.it, claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it, bristot@redhat.com, mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com, morten.rasmussen@arm.com, dietmar.eggemann@arm.com, patrick.bellasi@arm.com, alessio.balsini@arm.com, juri.lelli@redhat.com, Juri Lelli, Ingo Molnar
Subject: [RFC PATCH v2 6/8] sched/sched.h: remove sd arch_scale_freq_capacity parameter
Date: Mon, 4 Dec 2017 11:23:23 +0100
Message-Id: <20171204102325.5110-7-juri.lelli@redhat.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20171204102325.5110-1-juri.lelli@redhat.com>
References: <20171204102325.5110-1-juri.lelli@redhat.com>

From: Juri Lelli

sd parameter is never used in arch_scale_freq_capacity (and it's hard to
see where information coming from scheduling domains might help doing
frequency invariance scaling).

Remove it; also in anticipation of moving arch_scale_freq_capacity outside
CONFIG_SMP.

Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
---
 include/linux/arch_topology.h | 2 +-
 kernel/sched/fair.c           | 2 +-
 kernel/sched/sched.h          | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

-- 
2.14.3

diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 304511267c82..2b709416de05 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -27,7 +27,7 @@ void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity);
 DECLARE_PER_CPU(unsigned long, freq_scale);
 
 static inline
-unsigned long topology_get_freq_scale(struct sched_domain *sd, int cpu)
+unsigned long topology_get_freq_scale(int cpu)
 {
 	return per_cpu(freq_scale, cpu);
 }
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4037e19bbca2..535d9409f4af 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3122,7 +3122,7 @@ accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
 	u32 contrib = (u32)delta; /* p == 0 -> delta < 1024 */
 	u64 periods;
 
-	scale_freq = arch_scale_freq_capacity(NULL, cpu);
+	scale_freq = arch_scale_freq_capacity(cpu);
 	scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
 
 	delta += sa->period_contrib;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 280b421a82e8..b64207d54a55 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1712,7 +1712,7 @@ extern void sched_avg_update(struct rq *rq);
 
 #ifndef arch_scale_freq_capacity
 static __always_inline
-unsigned long arch_scale_freq_capacity(struct sched_domain *sd, int cpu)
+unsigned long arch_scale_freq_capacity(int cpu)
 {
 	return SCHED_CAPACITY_SCALE;
 }
@@ -1731,7 +1731,7 @@ unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
 static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
 {
-	rq->rt_avg += rt_delta * arch_scale_freq_capacity(NULL, cpu_of(rq));
+	rq->rt_avg += rt_delta * arch_scale_freq_capacity(cpu_of(rq));
 	sched_avg_update(rq);
 }
 #else

From patchwork Mon Dec 4 10:23:24 2017
Content-Type:
 text/plain; charset="utf-8"
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 120505
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net, viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org, luca.abeni@santannapisa.it, claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it, bristot@redhat.com, mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com, morten.rasmussen@arm.com, dietmar.eggemann@arm.com, patrick.bellasi@arm.com, alessio.balsini@arm.com, juri.lelli@redhat.com, Juri Lelli, Ingo Molnar
Subject: [RFC PATCH v2 7/8] sched/sched.h: move arch_scale_{freq, cpu}_capacity outside CONFIG_SMP
Date: Mon, 4 Dec 2017 11:23:24 +0100
Message-Id: <20171204102325.5110-8-juri.lelli@redhat.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20171204102325.5110-1-juri.lelli@redhat.com>
References: <20171204102325.5110-1-juri.lelli@redhat.com>

From: Juri Lelli

Currently, frequency and cpu capacity scaling is only performed on
CONFIG_SMP systems (as CFS PELT signals are only present for such systems).
However, other scheduling classes want to do freq/cpu scaling, and for
!CONFIG_SMP configurations as well.

arch_scale_freq_capacity is useful to implement frequency scaling even on
!CONFIG_SMP platforms, so we simply move it outside CONFIG_SMP ifdeffery.

Even if arch_scale_cpu_capacity is not useful on !CONFIG_SMP platforms, we
make a default implementation available for such configurations anyway to
simplify scheduler code doing CPU scale invariance.

Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Reviewed-by: Steven Rostedt (VMware)
---
 include/linux/sched/topology.h | 12 ++++++------
 kernel/sched/sched.h           | 13 ++++++++++---
 2 files changed, 16 insertions(+), 9 deletions(-)

-- 
2.14.3

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index cf257c2e728d..26347741ba50 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -6,6 +6,12 @@
 
 #include
 
+/*
+ * Increase resolution of cpu_capacity calculations
+ */
+#define SCHED_CAPACITY_SHIFT	SCHED_FIXEDPOINT_SHIFT
+#define SCHED_CAPACITY_SCALE	(1L << SCHED_CAPACITY_SHIFT)
+
 /*
  * sched-domains (multiprocessor balancing) declarations:
  */
@@ -27,12 +33,6 @@
 #define SD_OVERLAP	0x2000	/* sched_domains of this level overlap */
 #define SD_NUMA		0x4000	/* cross-node balancing */
 
-/*
- * Increase resolution of cpu_capacity calculations
- */
-#define SCHED_CAPACITY_SHIFT	SCHED_FIXEDPOINT_SHIFT
-#define SCHED_CAPACITY_SCALE	(1L << SCHED_CAPACITY_SHIFT)
-
 #ifdef CONFIG_SCHED_SMT
 static inline int cpu_smt_flags(void)
 {
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b64207d54a55..0022c649fabb 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1707,9 +1707,6 @@ static inline int hrtick_enabled(struct rq *rq)
 #endif /* CONFIG_SCHED_HRTICK */
 
-#ifdef CONFIG_SMP
-extern void sched_avg_update(struct rq *rq);
-
 #ifndef arch_scale_freq_capacity
 static __always_inline
 unsigned long arch_scale_freq_capacity(int cpu)
@@ -1718,6 +1715,9 @@ unsigned long arch_scale_freq_capacity(int cpu)
 }
 #endif
 
+#ifdef CONFIG_SMP
+extern void sched_avg_update(struct rq *rq);
+
 #ifndef arch_scale_cpu_capacity
 static __always_inline
 unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
@@ -1735,6 +1735,13 @@ static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
 	sched_avg_update(rq);
 }
 #else
+#ifndef arch_scale_cpu_capacity
+static __always_inline
+unsigned long arch_scale_cpu_capacity(void __always_unused *sd, int cpu)
+{
+	return SCHED_CAPACITY_SCALE;
+}
+#endif
 static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta) { }
 static inline void sched_avg_update(struct rq *rq) { }
 #endif

From patchwork Mon Dec 4 10:23:25 2017
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 120503
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
        viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
        tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
        luca.abeni@santannapisa.it, claudio@evidence.eu.com,
        tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
        mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
        morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
        patrick.bellasi@arm.com, alessio.balsini@arm.com,
        juri.lelli@redhat.com, Juri Lelli, Ingo Molnar, Rafael J. Wysocki
Subject: [RFC PATCH v2 8/8] sched/deadline: make bandwidth enforcement scale-invariant
Date: Mon, 4 Dec 2017 11:23:25 +0100
Message-Id: <20171204102325.5110-9-juri.lelli@redhat.com>
In-Reply-To: <20171204102325.5110-1-juri.lelli@redhat.com>
References: <20171204102325.5110-1-juri.lelli@redhat.com>

From: Juri Lelli

Apply frequency and cpu scale-invariance correction factors to
bandwidth enforcement (similar to what we already do for fair
utilization tracking).

Each delta_exec gets scaled considering current frequency and maximum
cpu capacity; this means that the reservation runtime parameter (which
needs to be specified by profiling the task execution at max frequency
on the biggest-capacity core) gets scaled accordingly.

Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Cc: Claudio Scordino
---
 kernel/sched/deadline.c | 26 ++++++++++++++++++++++----
 kernel/sched/fair.c     |  2 --
 kernel/sched/sched.h    |  2 ++
 3 files changed, 24 insertions(+), 6 deletions(-)

-- 
2.14.3

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 40f12aab9250..741d2fe26f88 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1151,7 +1151,8 @@ static void update_curr_dl(struct rq *rq)
 {
 	struct task_struct *curr = rq->curr;
 	struct sched_dl_entity *dl_se = &curr->dl;
-	u64 delta_exec;
+	u64 delta_exec, scaled_delta_exec;
+	int cpu = cpu_of(rq);
 
 	if (!dl_task(curr) || !on_dl_rq(dl_se))
 		return;
@@ -1185,9 +1186,26 @@ static void update_curr_dl(struct rq *rq)
 	if (unlikely(dl_entity_is_special(dl_se)))
 		return;
 
-	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM))
-		delta_exec = grub_reclaim(delta_exec, rq, &curr->dl);
-	dl_se->runtime -= delta_exec;
+	/*
+	 * For tasks that participate in GRUB, we implement GRUB-PA: the
+	 * spare reclaimed bandwidth is used to clock down frequency.
+	 *
+	 * For the others, we still need to scale reservation parameters
+	 * according to current frequency and CPU maximum capacity.
+	 */
+	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM)) {
+		scaled_delta_exec = grub_reclaim(delta_exec,
+						 rq,
+						 &curr->dl);
+	} else {
+		unsigned long scale_freq = arch_scale_freq_capacity(cpu);
+		unsigned long scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
+
+		scaled_delta_exec = cap_scale(delta_exec, scale_freq);
+		scaled_delta_exec = cap_scale(scaled_delta_exec, scale_cpu);
+	}
+
+	dl_se->runtime -= scaled_delta_exec;
 
 throttle:
 	if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 535d9409f4af..5bc3273a5c1c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3091,8 +3091,6 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
 	return c1 + c2 + c3;
 }
 
-#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
-
 /*
  * Accumulate the three separate parts of the sum; d1 the remainder
  * of the last (incomplete) period, d2 the span of full periods and d3
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 0022c649fabb..6d9d55e764fa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -156,6 +156,8 @@ static inline int task_has_dl_policy(struct task_struct *p)
 	return dl_policy(p->policy);
 }
 
+#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
+
 /*
  * !! For sched_setattr_nocheck() (kernel) only !!
 *