From patchwork Mon Feb 24 05:12:19 2014
X-Patchwork-Submitter: Alex Shi <alex.shi@linaro.org>
X-Patchwork-Id: 25163
From: Alex Shi <alex.shi@linaro.org>
To: mingo@redhat.com, peterz@infradead.org, morten.rasmussen@arm.com
Cc: vincent.guittot@linaro.org, daniel.lezcano@linaro.org, fweisbec@gmail.com,
    linux@arm.linux.org.uk, tony.luck@intel.com, fenghua.yu@intel.com,
    james.hogan@imgtec.com, alex.shi@linaro.org, jason.low2@hp.com,
    viresh.kumar@linaro.org, hanjun.guo@linaro.org, linux-kernel@vger.kernel.org,
    tglx@linutronix.de, akpm@linux-foundation.org, arjan@linux.intel.com,
    pjt@google.com, fengguang.wu@intel.com, linaro-kernel@lists.linaro.org,
    wangyun@linux.vnet.ibm.com
Subject: [PATCH 04/10] sched: unify imbalance bias for target group
Date: Mon, 24 Feb 2014 13:12:19 +0800
Message-Id: <1393218745-8795-5-git-send-email-alex.shi@linaro.org>
In-Reply-To: <1393218745-8795-1-git-send-email-alex.shi@linaro.org>
References: <1393218745-8795-1-git-send-email-alex.shi@linaro.org>

The old code already biases the load in source_load()/target_load(), but it
still uses imbalance_pct as a final check when looking for the idlest/busiest
group. That second check is redundant: if we bias the load in
source_load()/target_load(), we should not apply imbalance_pct again. Now
that the cpu_load array has been removed, it is a good time to unify how the
target bias is applied, so this patch drops imbalance_pct from the final
checks and applies the bias directly where the load is read.

In wake_affine(), since wake_idx is 0 on all architectures, the current logic
simply prefers the current cpu; we keep that behaviour and just rename the
target_load()/source_load() calls to weighted_cpuload() for a more exact
meaning.

Thanks to Morten for the reminder!
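To make the arithmetic concrete, here is a small stand-alone userspace sketch
(not kernel code): the load values and the imbalance_pct of 125 below are
made up for illustration, only the two formulas (total * pct / 100 and
100 + (pct - 100) / 2) come from the patch.

/*
 * Userspace sketch of the bias moved into target_load(); sample values
 * are illustrative only.
 */
#include <stdio.h>

/* Mirrors the new target_load() bias: inflate the raw load by pct/100. */
static unsigned long biased_load(unsigned long raw, int pct)
{
	return raw * pct / 100;
}

int main(void)
{
	unsigned long this_load = 1000;	/* local group load (made up) */
	unsigned long min_load  = 950;	/* least-loaded remote group (made up) */
	int imbalance_pct = 125;	/* sample sd->imbalance_pct */
	int imbalance = 100 + (imbalance_pct - 100) / 2;	/* halved bias */

	/* Old find_idlest_group() check: bias applied at the final compare. */
	int old_stay = 100 * this_load < imbalance * min_load;

	/* New check: the remote load was already biased when it was read. */
	int new_stay = this_load < biased_load(min_load, imbalance);

	printf("old decision (stay local): %d\n", old_stay);
	printf("new decision (stay local): %d\n", new_stay);
	return 0;
}

Both decisions come out the same, which is the point of the unification: the
bias now lives in one place instead of being re-applied at the comparison.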
Signed-off-by: Alex Shi <alex.shi@linaro.org>
---
 kernel/sched/fair.c | 32 +++++++++++++++-----------------
 1 file changed, 15 insertions(+), 17 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eeffe75..5a3ea72 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1016,7 +1016,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 static unsigned long weighted_cpuload(const int cpu);
 static unsigned long source_load(int cpu);
-static unsigned long target_load(int cpu);
+static unsigned long target_load(int cpu, int imbalance_pct);
 static unsigned long power_of(int cpu);
 static long effective_load(struct task_group *tg, int cpu, long wl, long wg);

@@ -3967,7 +3967,7 @@ static unsigned long source_load(int cpu)
  * Return a high guess at the load of a migration-target cpu weighted
  * according to the scheduling class and "nice" value.
  */
-static unsigned long target_load(int cpu)
+static unsigned long target_load(int cpu, int imbalance_pct)
 {
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long total = weighted_cpuload(cpu);
@@ -3975,6 +3975,11 @@
 	if (!sched_feat(LB_BIAS))
 		return total;

+	/*
+	 * Bias target load with imbalance_pct.
+	 */
+	total = total * imbalance_pct / 100;
+
 	return max(rq->cpu_load, total);
 }

@@ -4190,8 +4195,8 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	this_cpu  = smp_processor_id();
 	prev_cpu  = task_cpu(p);

-	load	  = source_load(prev_cpu);
-	this_load = target_load(this_cpu);
+	load	  = weighted_cpuload(prev_cpu);
+	this_load = weighted_cpuload(this_cpu);

 	/*
	 * If sync wakeup then subtract the (maximum possible)
@@ -4247,7 +4252,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	if (balanced ||
 	    (this_load <= load &&
-	     this_load + target_load(prev_cpu) <= tl_per_task)) {
+	     this_load + weighted_cpuload(prev_cpu) <= tl_per_task)) {
 		/*
 		 * This domain has SD_WAKE_AFFINE and
 		 * p is cache cold in this domain, and
@@ -4293,7 +4298,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 			if (local_group)
 				load = source_load(i);
 			else
-				load = target_load(i);
+				load = target_load(i, imbalance);

 			avg_load += load;
 		}
@@ -4309,7 +4314,7 @@
 		}
 	} while (group = group->next, group != sd->groups);

-	if (!idlest || 100*this_load < imbalance*min_load)
+	if (!idlest || this_load < min_load)
 		return NULL;
 	return idlest;
 }
@@ -5745,6 +5750,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 {
 	unsigned long load;
 	int i;
+	int bias = 100 + (env->sd->imbalance_pct - 100) / 2;

 	memset(sgs, 0, sizeof(*sgs));

@@ -5752,8 +5758,8 @@
 		struct rq *rq = cpu_rq(i);

 		/* Bias balancing toward cpus of our domain */
-		if (local_group)
-			load = target_load(i);
+		if (local_group && env->idle != CPU_IDLE)
+			load = target_load(i, bias);
 		else
 			load = source_load(i);

@@ -6193,14 +6199,6 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 		if ((local->idle_cpus < busiest->idle_cpus) &&
 		    busiest->sum_nr_running <= busiest->group_weight)
 			goto out_balanced;
-	} else {
-		/*
-		 * In the CPU_NEWLY_IDLE, CPU_NOT_IDLE cases, use
-		 * imbalance_pct to be conservative.
-		 */
-		if (100 * busiest->avg_load <=
-			env->sd->imbalance_pct * local->avg_load)
-			goto out_balanced;
 	}

force_balance:
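For example (illustrative numbers, not from the patch): with
sd->imbalance_pct = 125 the halved bias is 100 + (125 - 100) / 2 = 112. The
old find_idlest_group() check, 100 * this_load < imbalance * min_load, is the
same condition as this_load < min_load * 112 / 100, and that is exactly what
the plain this_load < min_load comparison now sees, because every remote
cpu's load was already scaled by 112/100 in target_load() (ignoring the
LB_BIAS max() term). Likewise, the else branch removed from
find_busiest_group() applied imbalance_pct only in the CPU_NEWLY_IDLE and
CPU_NOT_IDLE cases, which appears to be why update_sg_lb_stats() now biases
target_load() only when env->idle != CPU_IDLE.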