From patchwork Mon May 26 22:19:36 2014
X-Patchwork-Submitter: Nicolas Pitre
X-Patchwork-Id: 30947
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Peter Zijlstra, Ingo Molnar
Cc: Vincent Guittot, Daniel Lezcano, Morten Rasmussen, "Rafael J. Wysocki",
 linux-kernel@vger.kernel.org, linaro-kernel@lists.linaro.org
Subject: [PATCH v2 3/6] sched/fair.c: disambiguate existing/remaining "capacity" usage
Date: Mon, 26 May 2014 18:19:36 -0400
Message-id: <1401142779-6633-4-git-send-email-nicolas.pitre@linaro.org>
In-reply-to: <1401142779-6633-1-git-send-email-nicolas.pitre@linaro.org>
References: <1401142779-6633-1-git-send-email-nicolas.pitre@linaro.org>

We have "power" (which should actually become "capacity") and "capacity"
which is a scaled down "capacity factor" in terms of unitary tasks.
Let's use "capacity_factor" to make room for proper usage of "capacity"
later.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
---
 kernel/sched/fair.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8f9ac4826c..87a39559cc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5532,7 +5532,7 @@ struct sg_lb_stats {
         unsigned long load_per_task;
         unsigned long group_power;
         unsigned int sum_nr_running; /* Nr tasks running in the group */
-        unsigned int group_capacity;
+        unsigned int group_capacity_factor;
         unsigned int idle_cpus;
         unsigned int group_weight;
         int group_imb; /* Is there an imbalance in the group ? */
@@ -5827,15 +5827,15 @@ static inline int sg_imbalanced(struct sched_group *group)
 }

 /*
- * Compute the group capacity.
+ * Compute the group capacity factor.
  *
  * Avoid the issue where N*frac(smt_power) >= 1 creates 'phantom' cores by
  * first dividing out the smt factor and computing the actual number of cores
  * and limit power unit capacity with that.
  */
-static inline int sg_capacity(struct lb_env *env, struct sched_group *group)
+static inline int sg_capacity_factor(struct lb_env *env, struct sched_group *group)
 {
-        unsigned int capacity, smt, cpus;
+        unsigned int capacity_factor, smt, cpus;
         unsigned int power, power_orig;

         power = group->sgp->power;
@@ -5844,13 +5844,13 @@ static inline int sg_capacity(struct lb_env *env, struct sched_group *group)

         /* smt := ceil(cpus / power), assumes: 1 < smt_power < 2 */
         smt = DIV_ROUND_UP(SCHED_POWER_SCALE * cpus, power_orig);
-        capacity = cpus / smt; /* cores */
+        capacity_factor = cpus / smt; /* cores */

-        capacity = min_t(unsigned, capacity, DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE));
-        if (!capacity)
-                capacity = fix_small_capacity(env->sd, group);
+        capacity_factor = min_t(unsigned, capacity_factor, DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE));
+        if (!capacity_factor)
+                capacity_factor = fix_small_capacity(env->sd, group);

-        return capacity;
+        return capacity_factor;
 }

 /**
@@ -5900,9 +5900,9 @@ static inline void update_sg_lb_stats(struct lb_env *env,
         sgs->group_weight = group->group_weight;

         sgs->group_imb = sg_imbalanced(group);
-        sgs->group_capacity = sg_capacity(env, group);
+        sgs->group_capacity_factor = sg_capacity_factor(env, group);

-        if (sgs->group_capacity > sgs->sum_nr_running)
+        if (sgs->group_capacity_factor > sgs->sum_nr_running)
                 sgs->group_has_free_capacity = 1;
 }

@@ -5927,7 +5927,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
         if (sgs->avg_load <= sds->busiest_stat.avg_load)
                 return false;

-        if (sgs->sum_nr_running > sgs->group_capacity)
+        if (sgs->sum_nr_running > sgs->group_capacity_factor)
                 return true;

         if (sgs->group_imb)
@@ -6018,17 +6018,17 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)

                 /*
                  * In case the child domain prefers tasks go to siblings
-                 * first, lower the sg capacity to one so that we'll try
+                 * first, lower the sg capacity factor to one so that we'll try
                  * and move all the excess tasks away. We lower the capacity
                  * of a group only if the local group has the capacity to fit
-                 * these excess tasks, i.e. nr_running < group_capacity. The
+                 * these excess tasks, i.e. nr_running < group_capacity_factor. The
                  * extra check prevents the case where you always pull from the
                  * heaviest group when it is already under-utilized (possible
                  * with a large weight task outweighs the tasks on the system).
                  */
                 if (prefer_sibling && sds->local &&
                     sds->local_stat.group_has_free_capacity)
-                        sgs->group_capacity = min(sgs->group_capacity, 1U);
+                        sgs->group_capacity_factor = min(sgs->group_capacity_factor, 1U);

                 if (update_sd_pick_busiest(env, sds, sg, sgs)) {
                         sds->busiest = sg;
@@ -6202,7 +6202,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
                  * have to drop below capacity to reach cpu-load equilibrium.
                  */
                 load_above_capacity =
-                        (busiest->sum_nr_running - busiest->group_capacity);
+                        (busiest->sum_nr_running - busiest->group_capacity_factor);

                 load_above_capacity *= (SCHED_LOAD_SCALE * SCHED_POWER_SCALE);
                 load_above_capacity /= busiest->group_power;
@@ -6346,7 +6346,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
         int i;

         for_each_cpu_and(i, sched_group_cpus(group), env->cpus) {
-                unsigned long power, capacity, wl;
+                unsigned long power, capacity_factor, wl;
                 enum fbq_type rt;

                 rq = cpu_rq(i);
@@ -6375,9 +6375,9 @@ static struct rq *find_busiest_queue(struct lb_env *env,
                         continue;

                 power = power_of(i);
-                capacity = DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE);
-                if (!capacity)
-                        capacity = fix_small_capacity(env->sd, group);
+                capacity_factor = DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE);
+                if (!capacity_factor)
+                        capacity_factor = fix_small_capacity(env->sd, group);

                 wl = weighted_cpuload(i);

@@ -6385,7 +6385,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
                 /*
                  * When comparing with imbalance, use weighted_cpuload()
                  * which is not scaled with the cpu power.
                  */
-                if (capacity && rq->nr_running == 1 && wl > env->imbalance)
+                if (capacity_factor && rq->nr_running == 1 && wl > env->imbalance)
                         continue;

                 /*
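For reference, the quantity being renamed is easy to see in isolation: the
capacity factor is roughly the number of unitary tasks a group can run,
derived from its cpu power. The standalone sketch below is not part of the
patch; it is plain userspace C that mirrors the arithmetic of
sg_capacity_factor() above, assuming SCHED_POWER_SCALE == 1024 and made-up
power values for a two-thread SMT core.

        /*
         * Standalone illustration (not kernel code) of the arithmetic behind
         * sg_capacity_factor().  Assumes SCHED_POWER_SCALE == 1024 and uses
         * invented example values for one core with two SMT siblings.
         */
        #include <stdio.h>

        #define SCHED_POWER_SCALE       1024U
        #define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))
        #define DIV_ROUND_CLOSEST(n, d) (((n) + (d) / 2) / (d))

        static unsigned int capacity_factor(unsigned int cpus,
                                            unsigned int power,
                                            unsigned int power_orig)
        {
                /* smt := ceil(cpus / power), assumes 1 < smt_power < 2 */
                unsigned int smt = DIV_ROUND_UP(SCHED_POWER_SCALE * cpus, power_orig);
                unsigned int factor = cpus / smt;       /* whole cores, no phantoms */
                unsigned int by_power = DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE);

                /* a result of 0 is handled by fix_small_capacity() in the kernel */
                return factor < by_power ? factor : by_power;
        }

        int main(void)
        {
                /*
                 * Two SMT siblings sharing one core, combined power 1178/1024:
                 * the smt step guarantees the result never exceeds the real
                 * core count, so the pair still counts as one task slot.
                 */
                printf("capacity_factor = %u\n", capacity_factor(2, 1178, 1178));
                return 0;
        }

Run as-is this prints "capacity_factor = 1", i.e. the two hardware threads
provide a single task slot, which is exactly the 'phantom core' situation the
smt step in the patched code guards against.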