From patchwork Fri Feb 27 15:54:08 2015
X-Patchwork-Submitter: Vincent Guittot <vincent.guittot@linaro.org>
X-Patchwork-Id: 45247
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	preeti@linux.vnet.ibm.com, Morten.Rasmussen@arm.com,
	kamalesh@linux.vnet.ibm.com
Cc: riel@redhat.com, efault@gmx.de, nicolas.pitre@linaro.org,
	dietmar.eggemann@arm.com, linaro-kernel@lists.linaro.org,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v10 05/11] sched: make scale_rt invariant with frequency
Date: Fri, 27 Feb 2015 16:54:08 +0100
Message-Id: <1425052454-25797-6-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1425052454-25797-1-git-send-email-vincent.guittot@linaro.org>
References: <1425052454-25797-1-git-send-email-vincent.guittot@linaro.org>

The average running time of RT tasks is used to estimate the remaining compute
capacity for CFS tasks. This remaining capacity is the original capacity scaled
down by a factor (aka scale_rt_capacity). This estimate of the available
capacity must also be invariant with frequency scaling, so a frequency scaling
factor is applied to the running time of the RT tasks when computing
scale_rt_capacity.
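To illustrate that scaling, here is a minimal user-space sketch (not kernel
code; SCHED_CAPACITY_SHIFT = 10 and the half-frequency factor of 512 are
assumed example values standing in for arch_scale_freq_capacity()):

#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)

/* Example stand-in for arch_scale_freq_capacity(): 512 models a CPU
 * running at half of its maximum frequency. */
static const uint64_t freq_capacity = 512;

int main(void)
{
	uint64_t rt_avg = 0;
	uint64_t rt_delta = 10000000;	/* 10 ms of RT runtime, in ns */

	/* Scale the accounted RT runtime by the current frequency capacity. */
	rt_avg += rt_delta * freq_capacity >> SCHED_CAPACITY_SHIFT;

	/* Prints 5000000: at half frequency, 10 ms of RT runtime only
	 * consumes 5 ms worth of capacity. */
	printf("accounted RT running time: %llu ns\n", (unsigned long long)rt_avg);
	return 0;
}

With these numbers, 10 ms of RT runtime consumes only 5 ms worth of capacity,
which is exactly the invariance described above.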
In sched_rt_avg_update(), we now scale the RT execution time as below:

  rq->rt_avg += rt_delta * arch_scale_freq_capacity() >> SCHED_CAPACITY_SHIFT

Then, scale_rt_capacity can be summarized by:

  scale_rt_capacity = SCHED_CAPACITY_SCALE * available / total

with

  available = total - rq->rt_avg

This has been optimized in the current code as:

  scale_rt_capacity = available / (total >> SCHED_CAPACITY_SHIFT)

But we can also expand the equation as below:

  scale_rt_capacity = SCHED_CAPACITY_SCALE - ((rq->rt_avg << SCHED_CAPACITY_SHIFT) / total)

and then optimize it by removing the SCHED_CAPACITY_SHIFT shift from the
computation of both rq->rt_avg and scale_rt_capacity, so that:

  rq->rt_avg += rt_delta * arch_scale_freq_capacity()

and

  scale_rt_capacity = SCHED_CAPACITY_SCALE - (rq->rt_avg / total)

arch_scale_freq_capacity() will be called in the hot path of the scheduler,
which implies that it must be a short and efficient function. As an example, it
should return a cached value that is updated periodically outside of the hot
path.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Morten Rasmussen <Morten.Rasmussen@arm.com>
---
 kernel/sched/fair.c  | 17 +++++------------
 kernel/sched/sched.h |  4 +++-
 2 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7f031e4..dc7c693 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6004,7 +6004,7 @@ unsigned long __weak arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
 static unsigned long scale_rt_capacity(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
-	u64 total, available, age_stamp, avg;
+	u64 total, used, age_stamp, avg;
 	s64 delta;
 
 	/*
@@ -6020,19 +6020,12 @@ static unsigned long scale_rt_capacity(int cpu)
 
 	total = sched_avg_period() + delta;
 
-	if (unlikely(total < avg)) {
-		/* Ensures that capacity won't end up being negative */
-		available = 0;
-	} else {
-		available = total - avg;
-	}
+	used = div_u64(avg, total);
 
-	if (unlikely((s64)total < SCHED_CAPACITY_SCALE))
-		total = SCHED_CAPACITY_SCALE;
+	if (likely(used < SCHED_CAPACITY_SCALE))
+		return SCHED_CAPACITY_SCALE - used;
 
-	total >>= SCHED_CAPACITY_SHIFT;
-
-	return div_u64(available, total);
+	return 1;
 }
 
 static void update_cpu_capacity(struct sched_domain *sd, int cpu)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 65fa7b5..23c6dd7 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1374,9 +1374,11 @@ static inline int hrtick_enabled(struct rq *rq)
 #ifdef CONFIG_SMP
 extern void sched_avg_update(struct rq *rq);
 
+extern unsigned long arch_scale_freq_capacity(struct sched_domain *sd, int cpu);
+
 static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
 {
-	rq->rt_avg += rt_delta;
+	rq->rt_avg += rt_delta * arch_scale_freq_capacity(NULL, cpu_of(rq));
 	sched_avg_update(rq);
 }
 #else
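
As a quick sanity check of the algebra in the changelog, the following
stand-alone sketch (plain user-space C with assumed sample values, not part of
the patch) compares the previous formulation, available / (total >>
SCHED_CAPACITY_SHIFT), with the new one, SCHED_CAPACITY_SCALE - rq->rt_avg /
total, assuming the CPU runs at full frequency so that the new rt_avg is
simply the old value shifted left by SCHED_CAPACITY_SHIFT:

#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)

int main(void)
{
	uint64_t total = 10000000;	/* sample averaging period, in ns */
	uint64_t rt_avg_old = 2500000;	/* 2.5 ms of RT time, old accounting */
	/* New accounting at full frequency keeps the capacity factor. */
	uint64_t rt_avg_new = rt_avg_old << SCHED_CAPACITY_SHIFT;

	/* Previous formulation: available / (total >> SCHED_CAPACITY_SHIFT) */
	uint64_t available = total - rt_avg_old;
	uint64_t old_scale = available / (total >> SCHED_CAPACITY_SHIFT);

	/* New formulation: SCHED_CAPACITY_SCALE - rt_avg / total */
	uint64_t new_scale = SCHED_CAPACITY_SCALE - rt_avg_new / total;

	/* Both print 768 for these values; in general the two formulations
	 * agree up to integer rounding. */
	printf("old: %llu  new: %llu\n",
	       (unsigned long long)old_scale, (unsigned long long)new_scale);
	return 0;
}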