From patchwork Wed May 14 20:57:07 2014
X-Patchwork-Submitter: Nicolas Pitre
X-Patchwork-Id: 30204
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Peter Zijlstra, Ingo Molnar
Cc: Vincent Guittot, Daniel Lezcano, Morten Rasmussen,
 "Rafael J. Wysocki", linux-kernel@vger.kernel.org,
 linaro-kernel@lists.linaro.org
Subject: [PATCH 3/6] sched/fair.c: disambiguate existing/remaining "capacity" usage
Date: Wed, 14 May 2014 16:57:07 -0400
Message-id: <1400101030-17717-4-git-send-email-nicolas.pitre@linaro.org>
X-Mailer: git-send-email 1.8.4.108.g55ea5f6
In-reply-to: <1400101030-17717-1-git-send-email-nicolas.pitre@linaro.org>
References: <1400101030-17717-1-git-send-email-nicolas.pitre@linaro.org>

We have "power" (which should actually become "capacity") and
"capacity", which is in fact a scaled-down "capacity factor" counted in
number of possible tasks. Let's rename the latter to "capa_factor" to
make room for proper usage of "capacity" later.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
---
 kernel/sched/fair.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)
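Note: to illustrate why the SMT correction in sg_capa_factor() matters,
here is a minimal stand-alone sketch of the same arithmetic. This is a
hedged example: SCHED_POWER_SCALE, the helper macros and the sample
power figures are illustrative stand-ins, not taken from this patch.

#include <stdio.h>

/* Stand-ins for the kernel macros used by sg_capa_factor(). */
#define SCHED_POWER_SCALE       1024U
#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))
#define DIV_ROUND_CLOSEST(n, d) (((n) + (d) / 2) / (d))
#define MIN(a, b)               ((a) < (b) ? (a) : (b))

/* Same arithmetic as sg_capa_factor(), minus the fix_small_capacity() fallback. */
static unsigned int capa_factor(unsigned int cpus, unsigned int power,
                                unsigned int power_orig)
{
        /* smt := ceil(cpus / power), assumes: 1 < smt_power < 2 */
        unsigned int smt = DIV_ROUND_UP(SCHED_POWER_SCALE * cpus, power_orig);
        unsigned int cf = cpus / smt;   /* actual number of cores */

        return MIN(cf, DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE));
}

int main(void)
{
        /*
         * Hypothetical group: 4 SMT-2 cores (8 hw threads), each core
         * worth ~1.15 * SCHED_POWER_SCALE, i.e. power_orig = 4 * 1178.
         */
        unsigned int cpus = 8, power = 4712, power_orig = 4712;

        /* Naive rounding invents a phantom 5th core: 4712/1024 -> 5. */
        printf("naive rounding: %u\n", DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE));
        /* The smt correction keeps the real core count: 4. */
        printf("capa_factor:    %u\n", capa_factor(cpus, power, power_orig));
        return 0;
}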
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0eda4c527e..2633c42692 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5487,7 +5487,7 @@ struct sg_lb_stats {
 	unsigned long load_per_task;
 	unsigned long group_power;
 	unsigned int sum_nr_running; /* Nr tasks running in the group */
-	unsigned int group_capacity;
+	unsigned int group_capa_factor;
 	unsigned int idle_cpus;
 	unsigned int group_weight;
 	int group_imb; /* Is there an imbalance in the group ? */
@@ -5782,15 +5782,15 @@ static inline int sg_imbalanced(struct sched_group *group)
 }
 
 /*
- * Compute the group capacity.
+ * Compute the group capacity factor.
  *
  * Avoid the issue where N*frac(smt_power) >= 1 creates 'phantom' cores by
  * first dividing out the smt factor and computing the actual number of cores
  * and limit power unit capacity with that.
  */
-static inline int sg_capacity(struct lb_env *env, struct sched_group *group)
+static inline int sg_capa_factor(struct lb_env *env, struct sched_group *group)
 {
-	unsigned int capacity, smt, cpus;
+	unsigned int capa_factor, smt, cpus;
 	unsigned int power, power_orig;
 
 	power = group->sgp->power;
@@ -5799,13 +5799,13 @@ static inline int sg_capacity(struct lb_env *env, struct sched_group *group)
 
 	/* smt := ceil(cpus / power), assumes: 1 < smt_power < 2 */
 	smt = DIV_ROUND_UP(SCHED_POWER_SCALE * cpus, power_orig);
-	capacity = cpus / smt; /* cores */
+	capa_factor = cpus / smt; /* cores */
 
-	capacity = min_t(unsigned, capacity, DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE));
-	if (!capacity)
-		capacity = fix_small_capacity(env->sd, group);
+	capa_factor = min_t(unsigned, capa_factor, DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE));
+	if (!capa_factor)
+		capa_factor = fix_small_capacity(env->sd, group);
 
-	return capacity;
+	return capa_factor;
 }
 
 /**
@@ -5855,9 +5855,9 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	sgs->group_weight = group->group_weight;
 
 	sgs->group_imb = sg_imbalanced(group);
-	sgs->group_capacity = sg_capacity(env, group);
+	sgs->group_capa_factor = sg_capa_factor(env, group);
 
-	if (sgs->group_capacity > sgs->sum_nr_running)
+	if (sgs->group_capa_factor > sgs->sum_nr_running)
 		sgs->group_has_free_capacity = 1;
 }
 
@@ -5882,7 +5882,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	if (sgs->avg_load <= sds->busiest_stat.avg_load)
 		return false;
 
-	if (sgs->sum_nr_running > sgs->group_capacity)
+	if (sgs->sum_nr_running > sgs->group_capa_factor)
 		return true;
 
 	if (sgs->group_imb)
@@ -5973,17 +5973,17 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)
 
 		/*
 		 * In case the child domain prefers tasks go to siblings
-		 * first, lower the sg capacity to one so that we'll try
+		 * first, lower the sg capacity factor to one so that we'll try
 		 * and move all the excess tasks away. We lower the capacity
 		 * of a group only if the local group has the capacity to fit
-		 * these excess tasks, i.e. nr_running < group_capacity. The
+		 * these excess tasks, i.e. nr_running < group_capa_factor. The
 		 * extra check prevents the case where you always pull from the
 		 * heaviest group when it is already under-utilized (possible
 		 * with a large weight task outweighs the tasks on the system).
 		 */
 		if (prefer_sibling && sds->local &&
 		    sds->local_stat.group_has_free_capacity)
-			sgs->group_capacity = min(sgs->group_capacity, 1U);
+			sgs->group_capa_factor = min(sgs->group_capa_factor, 1U);
 
 		if (update_sd_pick_busiest(env, sds, sg, sgs)) {
 			sds->busiest = sg;
@@ -6157,7 +6157,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 	 * have to drop below capacity to reach cpu-load equilibrium.
 	 */
 	load_above_capacity =
-		(busiest->sum_nr_running - busiest->group_capacity);
+		(busiest->sum_nr_running - busiest->group_capa_factor);
 
 	load_above_capacity *= (SCHED_LOAD_SCALE * SCHED_POWER_SCALE);
 	load_above_capacity /= busiest->group_power;
@@ -6301,7 +6301,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 	int i;
 
 	for_each_cpu_and(i, sched_group_cpus(group), env->cpus) {
-		unsigned long power, capacity, wl;
+		unsigned long power, capa_factor, wl;
 		enum fbq_type rt;
 
 		rq = cpu_rq(i);
@@ -6330,9 +6330,9 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 			continue;
 
 		power = power_of(i);
-		capacity = DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE);
-		if (!capacity)
-			capacity = fix_small_capacity(env->sd, group);
+		capa_factor = DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE);
+		if (!capa_factor)
+			capa_factor = fix_small_capacity(env->sd, group);
 
 		wl = weighted_cpuload(i);
 
@@ -6340,7 +6340,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		 * When comparing with imbalance, use weighted_cpuload()
 		 * which is not scaled with the cpu power.
 		 */
-		if (capacity && rq->nr_running == 1 && wl > env->imbalance)
+		if (capa_factor && rq->nr_running == 1 && wl > env->imbalance)
 			continue;
 
 		/*
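Note: the load_above_capacity expression touched in calculate_imbalance()
can be sanity-checked with a quick worked example. The numbers below are
made up; SCHED_LOAD_SCALE and SCHED_POWER_SCALE are both 1024 in kernels
of this era.

#include <stdio.h>

#define SCHED_LOAD_SCALE        1024UL
#define SCHED_POWER_SCALE       1024UL

int main(void)
{
        /*
         * Hypothetical busiest group: 3 runnable tasks, capa_factor 2
         * (room for two), group_power 2048 (two average cpus).
         */
        unsigned long sum_nr_running = 3, group_capa_factor = 2;
        unsigned long group_power = 2048;

        /* Same steps as calculate_imbalance() after the rename. */
        unsigned long load_above_capacity =
                sum_nr_running - group_capa_factor;
        load_above_capacity *= SCHED_LOAD_SCALE * SCHED_POWER_SCALE;
        load_above_capacity /= group_power;

        /* One excess task spread over 2048 power units -> 512. */
        printf("load_above_capacity = %lu\n", load_above_capacity);
        return 0;
}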