From patchwork Thu Apr 25 09:37:40 2019
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 162837
From: Viresh Kumar
To: Ingo Molnar, Peter Zijlstra
Cc: Viresh Kumar, Vincent Guittot, tkjos@google.com, Daniel Lezcano,
    quentin.perret@linaro.org, chris.redpath@arm.com,
    Dietmar.Eggemann@arm.com, linux-kernel@vger.kernel.org
Subject: [RFC V2 2/2] sched/fair: Fallback to sched-idle CPU if idle CPU isn't found
Date: Thu, 25 Apr 2019 15:07:40 +0530
Message-Id: <59b37c56b8fcb834f7d3234e776eaeff74ad117f.1556182965.git.viresh.kumar@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

We target an idle CPU in select_idle_sibling() to run the next task,
but when no idle CPU is found it is better, for performance reasons, to
pick the CPU which will be able to run the task the soonest. By that
criterion, a CPU which isn't idle but has only SCHED_IDLE activity
queued on it is a good target, since any normal fair task will most
likely preempt the currently running SCHED_IDLE task immediately.
In fact, choosing a SCHED_IDLE CPU should give better results than an
idle CPU, as the task can start running immediately instead of waiting
for the idle CPU to be woken up from its idle state.

This patch updates the fast path to fall back to a sched-idle CPU if no
idle CPU is found; the slow path can be updated separately later.

This is the order in which select_idle_sibling() now picks the CPU to
run the task on:

1. idle_cpu(target) OR sched_idle_cpu(target)
2. idle_cpu(prev) OR sched_idle_cpu(prev)
3. idle_cpu(recent_used_cpu) OR sched_idle_cpu(recent_used_cpu)
4. idle core(sd)
5. idle_cpu(sd)
6. sched_idle_cpu(sd)
7. idle_cpu(p) - smt
8. sched_idle_cpu(p) - smt

The policy can be tweaked if we want different priorities.

Signed-off-by: Viresh Kumar
---
 kernel/sched/fair.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

-- 
2.21.0.rc0.269.g1a574e7a288b

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6511cb57acdd..fbaefb9a9296 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6057,6 +6057,15 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 	return new_cpu;
 }
 
+/* CPU only has SCHED_IDLE tasks enqueued */
+static int sched_idle_cpu(int cpu)
+{
+	struct rq *rq = cpu_rq(cpu);
+
+	return unlikely(rq->nr_running == rq->cfs.idle_h_nr_running &&
+			rq->nr_running);
+}
+
 #ifdef CONFIG_SCHED_SMT
 DEFINE_STATIC_KEY_FALSE(sched_smt_present);
 EXPORT_SYMBOL_GPL(sched_smt_present);
@@ -6154,7 +6163,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
  */
 static int select_idle_smt(struct task_struct *p, int target)
 {
-	int cpu;
+	int cpu, si_cpu = -1;
 
 	if (!static_branch_likely(&sched_smt_present))
 		return -1;
@@ -6164,9 +6173,11 @@ static int select_idle_smt(struct task_struct *p, int target)
 			continue;
 		if (available_idle_cpu(cpu))
 			return cpu;
+		if (si_cpu == -1 && sched_idle_cpu(cpu))
+			si_cpu = cpu;
 	}
 
-	return -1;
+	return si_cpu;
 }
 
 #else /* CONFIG_SCHED_SMT */
 
@@ -6194,7 +6205,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	u64 avg_cost, avg_idle;
 	u64 time, cost;
 	s64 delta;
-	int cpu, nr = INT_MAX;
+	int cpu, nr = INT_MAX, si_cpu = -1;
 
 	this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
 	if (!this_sd)
@@ -6222,11 +6233,13 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
 		if (!--nr)
-			return -1;
+			return si_cpu;
 		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
 			continue;
 		if (available_idle_cpu(cpu))
 			break;
+		if (si_cpu == -1 && sched_idle_cpu(cpu))
+			si_cpu = cpu;
 	}
 
 	time = local_clock() - time;
@@ -6245,13 +6258,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	struct sched_domain *sd;
 	int i, recent_used_cpu;
 
-	if (available_idle_cpu(target))
+	if (available_idle_cpu(target) || sched_idle_cpu(target))
 		return target;
 
 	/*
 	 * If the previous CPU is cache affine and idle, don't be stupid:
 	 */
-	if (prev != target && cpus_share_cache(prev, target) && available_idle_cpu(prev))
+	if (prev != target && cpus_share_cache(prev, target) &&
+	    (available_idle_cpu(prev) || sched_idle_cpu(prev)))
 		return prev;
 
 	/* Check a recently used CPU as a potential idle candidate: */
@@ -6259,7 +6273,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if (recent_used_cpu != prev &&
 	    recent_used_cpu != target &&
 	    cpus_share_cache(recent_used_cpu, target) &&
-	    available_idle_cpu(recent_used_cpu) &&
+	    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
 	    cpumask_test_cpu(p->recent_used_cpu, &p->cpus_allowed)) {
 		/*
 		 * Replace recent_used_cpu with prev as it is a potential