From patchwork Thu Sep 19 07:33:40 2019
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 174045
From: Vincent Guittot <vincent.guittot@linaro.org>
To: linux-kernel@vger.kernel.org, mingo@redhat.com, peterz@infradead.org
Cc: pauld@redhat.com, valentin.schneider@arm.com, srikar@linux.vnet.ibm.com,
    quentin.perret@arm.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com,
    hdanton@sina.com, Vincent Guittot
Subject: [PATCH v3 09/10] sched/fair: use load instead of runnable load in wakeup path
Date: Thu, 19 Sep 2019 09:33:40 +0200
Message-Id: <1568878421-12301-10-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1568878421-12301-1-git-send-email-vincent.guittot@linaro.org>
References: <1568878421-12301-1-git-send-email-vincent.guittot@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Runnable load has been introduced to take into account the case where
blocked load biases the wake up path, which may end up selecting an
overloaded CPU with a large number of runnable tasks instead of an
underutilized CPU with a large blocked load. The wake up path now looks
for idle CPUs before comparing runnable load, so it is worth aligning
the wake up path with load_balance and using load instead.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index acca869..39a37ae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5485,7 +5485,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 	s64 this_eff_load, prev_eff_load;
 	unsigned long task_load;
 
-	this_eff_load = cpu_runnable_load(cpu_rq(this_cpu));
+	this_eff_load = cpu_load(cpu_rq(this_cpu));
 
 	if (sync) {
 		unsigned long current_load = task_h_load(current);
@@ -5503,7 +5503,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 	this_eff_load *= 100;
 	this_eff_load *= capacity_of(prev_cpu);
 
-	prev_eff_load = cpu_runnable_load(cpu_rq(prev_cpu));
+	prev_eff_load = cpu_load(cpu_rq(prev_cpu));
 	prev_eff_load -= task_load;
 	if (sched_feat(WA_BIAS))
 		prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
@@ -5591,7 +5591,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		max_spare_cap = 0;
 
 		for_each_cpu(i, sched_group_span(group)) {
-			load = cpu_runnable_load(cpu_rq(i));
+			load = cpu_load(cpu_rq(i));
 			runnable_load += load;
 
 			avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
@@ -5732,7 +5732,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 				continue;
 			}
 
-			load = cpu_runnable_load(cpu_rq(i));
+			load = cpu_load(cpu_rq(i));
 			if (load < min_load) {
 				min_load = load;
 				least_loaded_cpu = i;
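
For readers less familiar with the two metrics: cpu_runnable_load() only
reflects tasks currently queued on the CPU, while cpu_load() is based on the
cfs_rq load_avg, which also carries the decaying contribution of recently
blocked tasks. The standalone sketch below is illustrative only: the numbers
are made up, the arithmetic is simplified (the sync and imbalance_pct handling
of wake_affine_weight() is omitted), and prefer_this_cpu(), metric_runnable()
and metric_load() are hypothetical helpers, not kernel code. It shows how a
wake_affine_weight()-style comparison can flip once blocked load is counted:

/* Userspace sketch, not kernel code: compile with `cc -o wa_sketch wa_sketch.c`. */
#include <stdio.h>

struct cpu_stats {
	unsigned long runnable_load;	/* load of tasks currently on the runqueue */
	unsigned long blocked_load;	/* decaying load of recently slept tasks */
	unsigned long capacity;		/* CPU capacity, e.g. 1024 */
};

/* rough analogue of cpu_runnable_load(): blocked contribution is invisible */
static unsigned long metric_runnable(const struct cpu_stats *c)
{
	return c->runnable_load;
}

/* rough analogue of cpu_load(): runnable + blocked contribution */
static unsigned long metric_load(const struct cpu_stats *c)
{
	return c->runnable_load + c->blocked_load;
}

/*
 * Simplified effective-load comparison in the spirit of wake_affine_weight():
 * pull the task to the waker's CPU only if its load plus the task stays below
 * the previous CPU's load minus the task, each scaled by the other CPU's
 * capacity (WA_BIAS and sync handling left out for brevity).
 */
static int prefer_this_cpu(unsigned long this_load, unsigned long this_cap,
			   unsigned long prev_load, unsigned long prev_cap,
			   unsigned long task_load)
{
	unsigned long long this_eff = (unsigned long long)(this_load + task_load) * prev_cap;
	unsigned long long prev_eff = (unsigned long long)(prev_load > task_load ?
							   prev_load - task_load : 0) * this_cap;

	return this_eff < prev_eff;
}

int main(void)
{
	/* waker's CPU: nothing runnable, but a lot of recently blocked load */
	struct cpu_stats this_cpu = { .runnable_load = 0, .blocked_load = 800, .capacity = 1024 };
	/* task's previous CPU: two runnable tasks, hardly any blocked load */
	struct cpu_stats prev_cpu = { .runnable_load = 700, .blocked_load = 50, .capacity = 1024 };
	unsigned long task_load = 300;

	printf("runnable-load metric -> pull to waker CPU: %d\n",
	       prefer_this_cpu(metric_runnable(&this_cpu), this_cpu.capacity,
			       metric_runnable(&prev_cpu), prev_cpu.capacity, task_load));
	printf("full-load metric     -> pull to waker CPU: %d\n",
	       prefer_this_cpu(metric_load(&this_cpu), this_cpu.capacity,
			       metric_load(&prev_cpu), prev_cpu.capacity, task_load));
	return 0;
}

With these made-up numbers the runnable-load metric pulls the task to the
waker's CPU (its blocked load is invisible), while the full-load metric keeps
it on its previous CPU. That is the bias described in the changelog, and it is
acceptable now that the wake up path looks for idle CPUs first, which is why
the comparison can be aligned with load_balance.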