From patchwork Wed May 14 20:57:06 2014
X-Patchwork-Submitter: Nicolas Pitre
X-Patchwork-Id: 30200
From: Nicolas Pitre
To: Peter Zijlstra, Ingo Molnar
Cc: Vincent Guittot, Daniel Lezcano, Morten Rasmussen, "Rafael J. Wysocki",
 linux-kernel@vger.kernel.org, linaro-kernel@lists.linaro.org
Subject: [PATCH 2/6] sched/fair.c: change "has_capacity" to "has_free_capacity"
Date: Wed, 14 May 2014 16:57:06 -0400
Message-id: <1400101030-17717-3-git-send-email-nicolas.pitre@linaro.org>
X-Mailer: git-send-email 1.8.4.108.g55ea5f6
In-reply-to: <1400101030-17717-1-git-send-email-nicolas.pitre@linaro.org>
References: <1400101030-17717-1-git-send-email-nicolas.pitre@linaro.org>

The capacity of a CPU/group should be some intrinsic value that doesn't
change with task placement.  It is like a container whose capacity is
stable regardless of the amount of liquid in it... unless the container
itself is crushed, that is, but that's another story.

Therefore let's rename "has_capacity" to "has_free_capacity" in order to
better convey the intended meaning.

Signed-off-by: Nicolas Pitre
---
 kernel/sched/fair.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e375dcc3f2..0eda4c527e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1030,7 +1030,7 @@ struct numa_stats {
 
 	/* Approximate capacity in terms of runnable tasks on a node */
 	unsigned long task_capacity;
-	int has_capacity;
+	int has_free_capacity;
 };
 
 /*
@@ -1056,8 +1056,8 @@ static void update_numa_stats(struct numa_stats *ns, int nid)
 	 * the @ns structure is NULL'ed and task_numa_compare() will
 	 * not find this node attractive.
 	 *
-	 * We'll either bail at !has_capacity, or we'll detect a huge imbalance
-	 * and bail there.
+	 * We'll either bail at !has_free_capacity, or we'll detect a huge
+	 * imbalance and bail there.
 	 */
 	if (!cpus)
 		return;
 
@@ -1065,7 +1065,7 @@ static void update_numa_stats(struct numa_stats *ns, int nid)
 	ns->load = (ns->load * SCHED_POWER_SCALE) / ns->compute_capacity;
 	ns->task_capacity =
 		DIV_ROUND_CLOSEST(ns->compute_capacity, SCHED_POWER_SCALE);
-	ns->has_capacity = (ns->nr_running < ns->task_capacity);
+	ns->has_free_capacity = (ns->nr_running < ns->task_capacity);
 }
 
 struct task_numa_env {
@@ -1167,8 +1167,8 @@ static void task_numa_compare(struct task_numa_env *env,
 
 	if (!cur) {
 		/* Is there capacity at our destination? */
-		if (env->src_stats.has_capacity &&
-		    !env->dst_stats.has_capacity)
+		if (env->src_stats.has_free_capacity &&
+		    !env->dst_stats.has_free_capacity)
 			goto unlock;
 
 		goto balance;
@@ -1276,8 +1276,8 @@ static int task_numa_migrate(struct task_struct *p)
 	groupimp = group_weight(p, env.dst_nid) - groupweight;
 	update_numa_stats(&env.dst_stats, env.dst_nid);
 
-	/* If the preferred nid has capacity, try to use it. */
-	if (env.dst_stats.has_capacity)
+	/* If the preferred nid has free capacity, try to use it. */
+	if (env.dst_stats.has_free_capacity)
 		task_numa_find_cpu(&env, taskimp, groupimp);
 
 	/* No space available on the preferred nid. Look elsewhere. */
@@ -5491,7 +5491,7 @@ struct sg_lb_stats {
 	unsigned int idle_cpus;
 	unsigned int group_weight;
 	int group_imb; /* Is there an imbalance in the group ? */
-	int group_has_capacity; /* Is there extra capacity in the group? */
+	int group_has_free_capacity;
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned int nr_numa_running;
 	unsigned int nr_preferred_running;
@@ -5858,7 +5858,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	sgs->group_capacity = sg_capacity(env, group);
 
 	if (sgs->group_capacity > sgs->sum_nr_running)
-		sgs->group_has_capacity = 1;
+		sgs->group_has_free_capacity = 1;
 }
 
 /**
@@ -5982,7 +5982,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 		 * with a large weight task outweighs the tasks on the system).
 		 */
 		if (prefer_sibling && sds->local &&
-		    sds->local_stat.group_has_capacity)
+		    sds->local_stat.group_has_free_capacity)
 			sgs->group_capacity = min(sgs->group_capacity, 1U);
 
 		if (update_sd_pick_busiest(env, sds, sg, sgs)) {
@@ -6242,8 +6242,8 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 		goto force_balance;
 
 	/* SD_BALANCE_NEWIDLE trumps SMP nice when underutilized */
-	if (env->idle == CPU_NEWLY_IDLE && local->group_has_capacity &&
-	    !busiest->group_has_capacity)
+	if (env->idle == CPU_NEWLY_IDLE && local->group_has_free_capacity &&
+	    !busiest->group_has_free_capacity)
 		goto force_balance;
 
 	/*
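
For reference, the predicate behind the renamed flag is unchanged; only the
name now says what it computes.  A minimal stand-alone sketch of the idea
follows (a simplified model for illustration, not the kernel code: the
struct, helper and values below are made up, only the comparison mirrors
update_numa_stats()):

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * Simplified stand-in for struct numa_stats / sg_lb_stats:
	 * capacity is an intrinsic property of the node or group, while
	 * "free capacity" depends on the current load.
	 */
	struct node_stats {
		unsigned long nr_running;	/* tasks currently runnable */
		unsigned long task_capacity;	/* how many tasks fit; load-independent */
		bool has_free_capacity;		/* room left over, not the capacity itself */
	};

	/* Same test as in update_numa_stats(): is there room for one more task? */
	static void update_stats(struct node_stats *ns)
	{
		ns->has_free_capacity = (ns->nr_running < ns->task_capacity);
	}

	int main(void)
	{
		struct node_stats ns = { .nr_running = 3, .task_capacity = 4 };

		update_stats(&ns);
		printf("has_free_capacity = %d\n", ns.has_free_capacity);
		return 0;
	}

In terms of the container analogy above, task_capacity is the size of the
container, nr_running is the liquid in it, and has_free_capacity only says
whether there is room left; the container's size never changes with load.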