From patchwork Thu Dec 8 16:56:53 2016
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 87294
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	matt@codeblueprint.co.uk, Morten.Rasmussen@arm.com
Cc: dietmar.eggemann@arm.com, kernellwp@gmail.com, yuyang.du@intel.com,
	umgwanakikbuti@gmail.com, Vincent Guittot
Subject: [PATCH 1/2 v3] sched: fix find_idlest_group for fork
Date: Thu, 8 Dec 2016 17:56:53 +0100
Message-Id: <1481216215-24651-2-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1481216215-24651-1-git-send-email-vincent.guittot@linaro.org>
References:
 <1481216215-24651-1-git-send-email-vincent.guittot@linaro.org>

During fork, the utilization of a task is initialized only once the rq has
been selected, because the current utilization level of the rq is used to
set the utilization of the forked task. As the task's utilization is still
null at this step of the fork sequence, it doesn't make sense to look for
spare capacity that can fit the task's utilization.

Furthermore, I can see performance regressions for the test
"hackbench -P -g 1" because the least-loaded policy is always bypassed and
tasks are not spread during fork. With this patch and the fix below, we are
back to the same performance as v4.8. The fix below is only a temporary one
used for the test until a smarter solution is found, because we can't simply
remove the check, which is useful for other benchmarks:

@@ -5708,13 +5708,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	avg_cost = this_sd->avg_scan_cost;
 
-	/*
-	 * Due to large variance we need a large fuzz factor; hackbench in
-	 * particularly is sensitive here.
-	 */
-	if ((avg_idle / 512) < avg_cost)
-		return -1;
-
 	time = local_clock();
 
 	for_each_cpu_wrap(cpu, sched_domain_span(sd), target, wrap) {

Signed-off-by: Vincent Guittot
Acked-by: Morten Rasmussen
Tested-by: Matt Fleming
Reviewed-by: Matt Fleming
---
 kernel/sched/fair.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 92cb50d..1da846b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5473,13 +5473,19 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 	 * utilized systems if we require spare_capacity > task_util(p),
 	 * so we allow for some task stuffing by using
 	 * spare_capacity > task_util(p)/2.
+	 * spare capacity can't be used for fork because the utilization has
+	 * not been set yet as it needs to get a rq to init the utilization
 	 */
+	if (sd_flag & SD_BALANCE_FORK)
+		goto skip_spare;
+
 	if (this_spare > task_util(p) / 2 &&
 	    imbalance*this_spare > 100*most_spare)
 		return NULL;
 	else if (most_spare > task_util(p) / 2)
 		return most_spare_sg;
 
+skip_spare:
 	if (!idlest || 100*this_load < imbalance*min_load)
 		return NULL;
 	return idlest;
-- 
2.7.4
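
For reference, here is a small standalone C model (not kernel code) of the
group-selection decision in find_idlest_group() after this patch. The helper
name pick_group(), the sample values and the SD_BALANCE_FORK constant are
illustrative assumptions only; PICK_LOCAL, PICK_MOST_SPARE and PICK_IDLEST
stand for what the kernel function expresses by returning NULL, most_spare_sg
or idlest.

#include <stdio.h>
#include <stdbool.h>

#define SD_BALANCE_FORK 0x08	/* assumed flag value, for illustration only */

enum pick { PICK_LOCAL, PICK_MOST_SPARE, PICK_IDLEST };

static enum pick pick_group(unsigned int sd_flag, unsigned long task_util,
			    long this_spare, long most_spare,
			    unsigned long this_load, unsigned long min_load,
			    unsigned int imbalance, bool have_idlest)
{
	/*
	 * On fork the task's utilization is still zero (it is only set once
	 * an rq has been selected), so the spare-capacity criterion is
	 * skipped entirely, mirroring the SD_BALANCE_FORK bypass above.
	 */
	if (!(sd_flag & SD_BALANCE_FORK)) {
		if (this_spare > (long)(task_util / 2) &&
		    imbalance * this_spare > 100 * most_spare)
			return PICK_LOCAL;
		else if (most_spare > (long)(task_util / 2))
			return PICK_MOST_SPARE;
	}

	/* Fall back to the least-loaded comparison. */
	if (!have_idlest || 100 * this_load < imbalance * min_load)
		return PICK_LOCAL;

	return PICK_IDLEST;
}

int main(void)
{
	/* A freshly forked task: utilization 0, remote group half as loaded. */
	enum pick p = pick_group(SD_BALANCE_FORK, 0, 512, 600,
				 2048, 1024, 110, true);

	printf("forked task goes to: %s\n",
	       p == PICK_IDLEST ? "idlest group" :
	       p == PICK_MOST_SPARE ? "most-spare group" : "local group");
	return 0;
}

With these sample numbers the fork path falls through to the least-loaded
comparison and picks the idlest group; calling pick_group() with sd_flag = 0
on the same inputs returns PICK_MOST_SPARE instead, which is the bypass of
the least-loaded policy that the hackbench regression described above
comes from.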