From patchwork Mon Sep 22 16:24:01 2014
X-Patchwork-Submitter: Morten Rasmussen
X-Patchwork-Id: 37694
From: Morten Rasmussen <morten.rasmussen@arm.com>
To: peterz@infradead.org, mingo@redhat.com
Cc: dietmar.eggemann@arm.com, pjt@google.com, bsegall@google.com,
	vincent.guittot@linaro.org, nicolas.pitre@linaro.org,
	mturquette@linaro.org, rjw@rjwysocki.net, linux-kernel@vger.kernel.org,
	Morten Rasmussen <morten.rasmussen@arm.com>
Subject: [PATCH 1/7] sched: Introduce scale-invariant load tracking
Date: Mon, 22 Sep 2014 17:24:01 +0100
Message-Id: <1411403047-32010-2-git-send-email-morten.rasmussen@arm.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1411403047-32010-1-git-send-email-morten.rasmussen@arm.com>
References: <1411403047-32010-1-git-send-email-morten.rasmussen@arm.com>

From: Dietmar Eggemann <dietmar.eggemann@arm.com>

The per-entity load-tracking currently accounts neither for frequency
changes due to frequency scaling (cpufreq) nor for micro-architectural
differences between cpus (ARM big.LITTLE). Comparing tracked loads between
different cpus can therefore be quite misleading.

This patch introduces a scale-invariance scaling factor into the
load-tracking computation that can be used to compensate for compute
capacity variations. The scaling factor is provided by the architecture
through an arch-specific function. It may be as simple as:

	current_freq(cpu) * SCHED_CAPACITY_SCALE / max_freq(cpu)

If the architecture has more sophisticated ways of tracking compute
capacity, it can do so in its implementation.
By default, no scaling is applied.

The patch is loosely based on a patch by Chris Redpath.

cc: Paul Turner <pjt@google.com>
cc: Ben Segall <bsegall@google.com>

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
 kernel/sched/fair.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2a1e6ac..52abb3e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2267,6 +2267,8 @@ static u32 __compute_runnable_contrib(u64 n)
 	return contrib + runnable_avg_yN_sum[n];
 }
 
+unsigned long arch_scale_load_capacity(int cpu);
+
 /*
  * We can represent the historical contribution to runnable average as the
  * coefficients of a geometric series.  To do this we sub-divide our runnable
@@ -2295,13 +2297,14 @@ static u32 __compute_runnable_contrib(u64 n)
  *   load_avg = u_0` + y*(u_0 + u_1*y + u_2*y^2 + ... )
  *            = u_0 + u_1*y + u_2*y^2 + ... [re-labeling u_i --> u_{i+1}]
  */
-static __always_inline int __update_entity_runnable_avg(u64 now,
+static __always_inline int __update_entity_runnable_avg(u64 now, int cpu,
 							struct sched_avg *sa,
 							int runnable)
 {
 	u64 delta, periods;
 	u32 runnable_contrib;
 	int delta_w, decayed = 0;
+	u32 scale_cap = arch_scale_load_capacity(cpu);
 
 	delta = now - sa->last_runnable_update;
 	/*
@@ -2334,8 +2337,10 @@ static __always_inline int __update_entity_runnable_avg(u64 now,
 		 * period and accrue it.
 		 */
 		delta_w = 1024 - delta_w;
+
 		if (runnable)
-			sa->runnable_avg_sum += delta_w;
+			sa->runnable_avg_sum += (delta_w * scale_cap)
+					>> SCHED_CAPACITY_SHIFT;
 		sa->runnable_avg_period += delta_w;
 
 		delta -= delta_w;
@@ -2351,14 +2356,17 @@ static __always_inline int __update_entity_runnable_avg(u64 now,
 
 		/* Efficiently calculate \sum (1..n_period) 1024*y^i */
 		runnable_contrib = __compute_runnable_contrib(periods);
+
 		if (runnable)
-			sa->runnable_avg_sum += runnable_contrib;
+			sa->runnable_avg_sum += (runnable_contrib * scale_cap)
+					>> SCHED_CAPACITY_SHIFT;
 		sa->runnable_avg_period += runnable_contrib;
 	}
 
 	/* Remainder of delta accrued against u_0` */
 	if (runnable)
-		sa->runnable_avg_sum += delta;
+		sa->runnable_avg_sum += (delta * scale_cap)
+				>> SCHED_CAPACITY_SHIFT;
 	sa->runnable_avg_period += delta;
 
 	return decayed;
@@ -2464,7 +2472,8 @@ static inline void __update_group_entity_contrib(struct sched_entity *se)
 
 static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
 {
-	__update_entity_runnable_avg(rq_clock_task(rq), &rq->avg, runnable);
+	__update_entity_runnable_avg(rq_clock_task(rq), rq->cpu, &rq->avg,
+				     runnable);
 	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
 }
 #else /* CONFIG_FAIR_GROUP_SCHED */
@@ -2518,6 +2527,7 @@ static inline void update_entity_load_avg(struct sched_entity *se,
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	long contrib_delta;
+	int cpu = rq_of(cfs_rq)->cpu;
 	u64 now;
 
 	/*
@@ -2529,7 +2539,7 @@ static inline void update_entity_load_avg(struct sched_entity *se,
 	else
 		now = cfs_rq_clock_task(group_cfs_rq(se));
 
-	if (!__update_entity_runnable_avg(now, &se->avg, se->on_rq))
+	if (!__update_entity_runnable_avg(now, cpu, &se->avg, se->on_rq))
 		return;
 
 	contrib_delta = __update_entity_load_avg_contrib(se);
@@ -5719,6 +5729,16 @@ unsigned long __weak arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
 	return default_scale_cpu_capacity(sd, cpu);
 }
 
+static unsigned long default_scale_load_capacity(int cpu)
+{
+	return SCHED_CAPACITY_SCALE;
+}
+
+unsigned long __weak arch_scale_load_capacity(int cpu)
+{
+	return default_scale_load_capacity(cpu);
+}
+
 static unsigned long scale_rt_capacity(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);