From patchwork Wed Jun 26 05:06:30 2019
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 167770
From: Viresh Kumar
To: Ingo Molnar, Peter Zijlstra
Cc: Viresh Kumar, linux-kernel@vger.kernel.org, Vincent Guittot,
    tkjos@google.com, Daniel Lezcano, quentin.perret@linaro.org,
    chris.redpath@arm.com, steven.sistare@oracle.com,
    subhra.mazumdar@oracle.com, songliubraving@fb.com
Subject: [PATCH V3 2/2] sched/fair: Fallback to sched-idle CPU if idle CPU isn't found
Date: Wed, 26 Jun 2019 10:36:30 +0530
Message-Id:
X-Mailer: git-send-email 2.21.0.rc0.269.g1a574e7a288b
In-Reply-To:
References:

We try to find an idle CPU to run the next task, but in case we don't find an
idle CPU it is better to pick a CPU which will run the task the soonest, for
performance reasons. A CPU which isn't idle but has only SCHED_IDLE activity
queued on it should be a good target based on this criterion, as any normal
fair task will most likely preempt the currently running SCHED_IDLE task
immediately. In fact, choosing a SCHED_IDLE CPU over a fully idle one should
give better results, as it can start running the task sooner than an idle CPU
(which first needs to be woken up from its idle state).

This patch updates both the fast and slow paths with this optimization.

Signed-off-by: Viresh Kumar
---
 kernel/sched/fair.c | 43 +++++++++++++++++++++++++++++++++----------
 1 file changed, 33 insertions(+), 10 deletions(-)

-- 
2.21.0.rc0.269.g1a574e7a288b

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1277adc3e7ed..2e0527fd468c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5376,6 +5376,15 @@ static struct {
 
 #endif /* CONFIG_NO_HZ_COMMON */
 
+/* CPU only has SCHED_IDLE tasks enqueued */
+static int sched_idle_cpu(int cpu)
+{
+	struct rq *rq = cpu_rq(cpu);
+
+	return unlikely(rq->nr_running == rq->cfs.idle_h_nr_running &&
+			rq->nr_running);
+}
+
 static unsigned long cpu_runnable_load(struct rq *rq)
 {
 	return cfs_rq_runnable_load_avg(&rq->cfs);
@@ -5698,7 +5707,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 	unsigned int min_exit_latency = UINT_MAX;
 	u64 latest_idle_timestamp = 0;
 	int least_loaded_cpu = this_cpu;
-	int shallowest_idle_cpu = -1;
+	int shallowest_idle_cpu = -1, si_cpu = -1;
 	int i;
 
 	/* Check if we have any choice: */
@@ -5729,7 +5738,12 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 				latest_idle_timestamp = rq->idle_stamp;
 				shallowest_idle_cpu = i;
 			}
-		} else if (shallowest_idle_cpu == -1) {
+		} else if (shallowest_idle_cpu == -1 && si_cpu == -1) {
+			if (sched_idle_cpu(i)) {
+				si_cpu = i;
+				continue;
+			}
+
 			load = cpu_runnable_load(cpu_rq(i));
 			if (load < min_load) {
 				min_load = load;
@@ -5738,7 +5752,11 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 		}
 	}
 
-	return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
+	if (shallowest_idle_cpu != -1)
+		return shallowest_idle_cpu;
+	if (si_cpu != -1)
+		return si_cpu;
+	return least_loaded_cpu;
 }
 
 static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
@@ -5891,7 +5909,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
  */
 static int select_idle_smt(struct task_struct *p, int target)
 {
-	int cpu;
+	int cpu, si_cpu = -1;
 
 	if (!static_branch_likely(&sched_smt_present))
 		return -1;
@@ -5901,9 +5919,11 @@ static int select_idle_smt(struct task_struct *p, int target)
 			continue;
 		if (available_idle_cpu(cpu))
 			return cpu;
+		if (si_cpu == -1 && sched_idle_cpu(cpu))
+			si_cpu = cpu;
 	}
 
-	return -1;
+	return si_cpu;
 }
 
 #else /* CONFIG_SCHED_SMT */
@@ -5931,7 +5951,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	u64 avg_cost, avg_idle;
 	u64 time, cost;
 	s64 delta;
-	int cpu, nr = INT_MAX;
+	int cpu, nr = INT_MAX, si_cpu = -1;
 	int this = smp_processor_id();
 
 	this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
@@ -5960,11 +5980,13 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
 		if (!--nr)
-			return -1;
+			return si_cpu;
 		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
 			continue;
 		if (available_idle_cpu(cpu))
 			break;
+		if (si_cpu == -1 && sched_idle_cpu(cpu))
+			si_cpu = cpu;
 	}
 
 	time = cpu_clock(this) - time;
@@ -5983,13 +6005,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	struct sched_domain *sd;
 	int i, recent_used_cpu;
 
-	if (available_idle_cpu(target))
+	if (available_idle_cpu(target) || sched_idle_cpu(target))
 		return target;
 
 	/*
 	 * If the previous CPU is cache affine and idle, don't be stupid:
 	 */
-	if (prev != target && cpus_share_cache(prev, target) && available_idle_cpu(prev))
+	if (prev != target && cpus_share_cache(prev, target) &&
+	    (available_idle_cpu(prev) || sched_idle_cpu(prev)))
 		return prev;
 
 	/* Check a recently used CPU as a potential idle candidate: */
@@ -5997,7 +6020,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if (recent_used_cpu != prev &&
 	    recent_used_cpu != target &&
 	    cpus_share_cache(recent_used_cpu, target) &&
-	    available_idle_cpu(recent_used_cpu) &&
+	    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
 	    cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr)) {
 		/*
 		 * Replace recent_used_cpu with prev as it is a potential
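
[Illustration only: not part of the patch.] For readers skimming the diff, the
selection order the change introduces is: prefer a fully idle CPU, otherwise a
CPU that is busy only with SCHED_IDLE tasks, and only then fall back to the
least-loaded CPU. Below is a minimal, standalone user-space sketch of that
idea. The mock_cpu struct and the mock_idle()/mock_sched_idle()/pick_cpu()
helpers are hypothetical stand-ins for the kernel's rq fields, sched_idle_cpu()
and find_idlest_group_cpu(); they are not kernel APIs.

/*
 * Standalone sketch of the fallback order added by this patch:
 * idle CPU -> sched-idle-only CPU -> least-loaded CPU.
 */
#include <stdio.h>

struct mock_cpu {
	int nr_running;		/* total runnable tasks */
	int idle_h_nr_running;	/* of which SCHED_IDLE */
	unsigned long load;	/* stand-in for cpu_runnable_load() */
};

static int mock_idle(const struct mock_cpu *c)
{
	return c->nr_running == 0;
}

/* Mirrors the idea of sched_idle_cpu(): busy, but only with SCHED_IDLE work */
static int mock_sched_idle(const struct mock_cpu *c)
{
	return c->nr_running && c->nr_running == c->idle_h_nr_running;
}

static int pick_cpu(const struct mock_cpu *cpus, int n)
{
	int i, si_cpu = -1, least_loaded = 0;

	for (i = 0; i < n; i++) {
		if (mock_idle(&cpus[i]))
			return i;			/* best case: truly idle */
		if (si_cpu == -1 && mock_sched_idle(&cpus[i]))
			si_cpu = i;			/* remember first sched-idle CPU */
		if (cpus[i].load < cpus[least_loaded].load)
			least_loaded = i;
	}
	return si_cpu != -1 ? si_cpu : least_loaded;	/* fallback order from the patch */
}

int main(void)
{
	struct mock_cpu cpus[] = {
		{ .nr_running = 3, .idle_h_nr_running = 0, .load = 900 },
		{ .nr_running = 2, .idle_h_nr_running = 2, .load = 50  }, /* only SCHED_IDLE */
		{ .nr_running = 1, .idle_h_nr_running = 0, .load = 400 },
	};

	printf("picked CPU %d\n", pick_cpu(cpus, 3));	/* prints 1: the sched-idle CPU */
	return 0;
}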