From patchwork Fri Nov 25 15:34:32 2016
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 84161
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	matt@codeblueprint.co.uk, Morten.Rasmussen@arm.com,
	dietmar.eggemann@arm.com
Cc: kernellwp@gmail.com, yuyang.du@intel.com, umgwanakikbuti@gmail.com,
	Vincent Guittot
Subject: [PATCH 1/2 v2] sched: fix find_idlest_group for fork
Date: Fri, 25 Nov 2016 16:34:32 +0100
Message-Id: <1480088073-11642-2-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1480088073-11642-1-git-send-email-vincent.guittot@linaro.org>
References: <1480088073-11642-1-git-send-email-vincent.guittot@linaro.org>

During fork, the utilization of a task is initialized only once the rq has
been selected, because the current utilization level of that rq is used to
set the utilization of the forked task. As the task's utilization is still
null at this step of the fork sequence, it doesn't make sense to look for
spare capacity that can fit the task's utilization.

Furthermore, I can see performance regressions for the test
"hackbench -P -g 1" because the least-loaded policy is always bypassed and
tasks are not spread during fork.

With this patch and the fix below, we are back to the same performance as
v4.8. The fix below is only a temporary one, used for the test until a
smarter solution is found, because we can't simply remove the test, which
is useful for other benchmarks:

@@ -5708,13 +5708,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	avg_cost = this_sd->avg_scan_cost;
 
-	/*
-	 * Due to large variance we need a large fuzz factor; hackbench in
-	 * particularly is sensitive here.
-	 */
-	if ((avg_idle / 512) < avg_cost)
-		return -1;
-
 	time = local_clock();
 
 	for_each_cpu_wrap(cpu, sched_domain_span(sd), target, wrap) {

Signed-off-by: Vincent Guittot
---
 kernel/sched/fair.c | 6 ++++++
 1 file changed, 6 insertions(+)

-- 
2.7.4

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa47589..820a787 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5463,13 +5463,19 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 	 * utilized systems if we require spare_capacity > task_util(p),
 	 * so we allow for some task stuffing by using
 	 * spare_capacity > task_util(p)/2.
+	 * spare capacity can't be used for fork because the utilization has
+	 * not been set yet as it needs a rq to init the utilization
 	 */
+	if (sd_flag & SD_BALANCE_FORK)
+		goto no_spare;
+
 	if (this_spare > task_util(p) / 2 &&
 	    imbalance*this_spare > 100*most_spare)
 		return NULL;
 	else if (most_spare > task_util(p) / 2)
 		return most_spare_sg;
 
+no_spare:
 	if (!idlest || 100*this_load < imbalance*min_load)
 		return NULL;
 	return idlest;
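
To make the reasoning above concrete, here is a minimal user-space sketch of the
group-selection decision the patch changes. It is only an illustration: the
struct fields, the SD_BALANCE_FORK value and the pick_group() helper are
simplified stand-ins, not the real kernel/sched/fair.c definitions. With
task_util(p) still 0 right after fork, the spare-capacity test always keeps the
local group; skipping it for SD_BALANCE_FORK falls through to the least-loaded
comparison and lets new tasks spread.

/*
 * Illustrative user-space sketch (not kernel code) of the decision made in
 * find_idlest_group(). All names and values are simplified stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define SD_BALANCE_FORK 0x08	/* illustrative flag value */

struct group_stats {
	long this_spare;	/* spare capacity of the local group */
	long most_spare;	/* largest spare capacity among other groups */
	long this_load;		/* load of the local group */
	long min_load;		/* smallest load among other groups */
	long task_util;		/* p's utilization: still 0 right after fork */
	long imbalance;		/* sd imbalance_pct, e.g. 125 */
	bool has_idlest;	/* an idlest group was found */
};

static const char *pick_group(const struct group_stats *s, int sd_flag)
{
	/*
	 * The spare-capacity comparison is skipped for fork: task_util is
	 * still 0, so "this_spare > task_util / 2" would always keep the
	 * local group and prevent spreading.
	 */
	if (!(sd_flag & SD_BALANCE_FORK)) {
		if (s->this_spare > s->task_util / 2 &&
		    s->imbalance * s->this_spare > 100 * s->most_spare)
			return "local group (spare capacity)";
		if (s->most_spare > s->task_util / 2)
			return "most-spare group";
	}

	/* Least-loaded path, now always reached for fork. */
	if (!s->has_idlest ||
	    100 * s->this_load < s->imbalance * s->min_load)
		return "local group (least loaded)";
	return "idlest group";
}

int main(void)
{
	struct group_stats fork_case = {
		.this_spare = 200, .most_spare = 150,
		.this_load = 2048, .min_load = 512,
		.task_util = 0, .imbalance = 125,
		.has_idlest = true,
	};

	/* Without the bypass, the spare test keeps the local group ... */
	printf("spare path used:     %s\n", pick_group(&fork_case, 0));
	/* ... with SD_BALANCE_FORK the task goes to the idlest group. */
	printf("fork path (patched): %s\n",
	       pick_group(&fork_case, SD_BALANCE_FORK));
	return 0;
}

Built with any plain C compiler, the two printf() lines show the local group
being kept on the spare-capacity path and the idlest group being chosen once
the fork bypass applies.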