From patchwork Wed Jun 22 17:03:18 2016
From: Morten Rasmussen <morten.rasmussen@arm.com>
To: peterz@infradead.org, mingo@redhat.com
Cc: dietmar.eggemann@arm.com, yuyang.du@intel.com,
    vincent.guittot@linaro.org, mgalbraith@suse.de,
    linux-kernel@vger.kernel.org, Morten Rasmussen <morten.rasmussen@arm.com>
Subject: [PATCH v2 07/13] sched/fair: Let asymmetric cpu configurations
 balance at wake-up
Date: Wed, 22 Jun 2016 18:03:18 +0100
Message-Id: <1466615004-3503-8-git-send-email-morten.rasmussen@arm.com>
In-Reply-To: <1466615004-3503-1-git-send-email-morten.rasmussen@arm.com>
References: <1466615004-3503-1-git-send-email-morten.rasmussen@arm.com>

Currently, SD_WAKE_AFFINE always takes priority over wakeup balancing if
SD_BALANCE_WAKE is set on the sched_domains. For asymmetric
configurations SD_WAKE_AFFINE is only desirable if the waking task's
compute demand (utilization) is suitable for all the cpu capacities
available within the SD_WAKE_AFFINE sched_domain. If not, let wakeup
balancing take over (find_idlest_{group,cpu}()).

This patch makes affine wake-ups conditional on whether both the waker
cpu and prev_cpu have sufficient capacity for the waking task.

It is assumed that the sched_group(s) containing the waker cpu and
prev_cpu only contain cpus with the same capacity (homogeneous).
Ideally, we shouldn't set 'want_affine' in the first place, but we don't
know if SD_BALANCE_WAKE is enabled on the sched_domain(s) until we start
traversing them.
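To make the margin concrete: with capacity_margin = 1280 the condition in
wake_cap() is min_cap * 1024 < task_util(p) * capacity_margin, i.e. the
affine fast path is vetoed once the task's utilization exceeds roughly 80%
of the smallest relevant cpu capacity. A minimal standalone sketch (not
part of the patch; the capacity values are made-up big.LITTLE numbers):

	#include <stdio.h>

	/* Same margin as the patch: util is "too big" for a cpu when
	 * capacity * 1024 < util * 1280, i.e. util > ~0.8 * capacity. */
	static const long capacity_margin = 1280;

	static int util_exceeds_capacity(long util, long capacity)
	{
		return capacity * 1024 < util * capacity_margin;
	}

	int main(void)
	{
		/* Assumed capacities: LITTLE cpu = 430, big cpu = 1024 */
		printf("util 300 vs 430: %d\n", util_exceeds_capacity(300, 430)); /* 0: fits */
		printf("util 400 vs 430: %d\n", util_exceeds_capacity(400, 430)); /* 1: veto affine */
		return 0;
	}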
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 216db302e87d..dba02c7b57b3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -114,6 +114,12 @@ unsigned int __read_mostly sysctl_sched_shares_window = 10000000UL;
 unsigned int sysctl_sched_cfs_bandwidth_slice = 5000UL;
 #endif
 
+/*
+ * The margin used when comparing utilization with cpu capacity:
+ * util * 1024 < capacity * margin
+ */
+unsigned int capacity_margin = 1280; /* ~20% */
+
 static inline void update_load_add(struct load_weight *lw, unsigned long inc)
 {
 	lw->weight += inc;
@@ -5260,6 +5266,25 @@ static int cpu_util(int cpu)
 	return (util >= capacity) ? capacity : util;
 }
 
+static inline int task_util(struct task_struct *p)
+{
+	return p->se.avg.util_avg;
+}
+
+static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
+{
+	long min_cap, max_cap;
+
+	min_cap = min(capacity_orig_of(prev_cpu), capacity_orig_of(cpu));
+	max_cap = cpu_rq(cpu)->rd->max_cpu_capacity;
+
+	/* Minimum capacity is close to max, no need to abort wake_affine */
+	if (max_cap - min_cap < max_cap >> 3)
+		return 0;
+
+	return min_cap * 1024 < task_util(p) * capacity_margin;
+}
+
 /*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
  * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
@@ -5283,7 +5308,8 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 
 	if (sd_flag & SD_BALANCE_WAKE) {
 		record_wakee(p);
-		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
+		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu)
+			      && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
 	}
 
 	rcu_read_lock();
-- 
1.9.1
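A note on wake_cap()'s early bail-out above: max_cap >> 3 is max_cap / 8,
so the utilization check is skipped entirely whenever the smallest
capacity seen by the waker cpu and prev_cpu is within ~12.5% of the
root_domain's maximum, which makes wake_cap() a no-op on symmetric
systems. A standalone sketch of just that predicate (capacity values
assumed, not from the patch):

	#include <stdio.h>

	/* Mirrors wake_cap()'s early-out: treat capacities as symmetric
	 * when max_cap - min_cap < max_cap / 8 (within ~12.5%). */
	static int capacity_roughly_symmetric(long min_cap, long max_cap)
	{
		return max_cap - min_cap < max_cap >> 3;
	}

	int main(void)
	{
		printf("%d\n", capacity_roughly_symmetric(1024, 1024)); /* 1: SMP, never veto */
		printf("%d\n", capacity_roughly_symmetric(430, 1024));  /* 0: big.LITTLE, test util */
		return 0;
	}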