From patchwork Tue Oct 24 12:25:55 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 116946
From: Brendan Jackman
To: Vincent Guittot, Dietmar Eggemann, Ingo Molnar, Peter Zijlstra,
 linux-kernel@vger.kernel.org
Cc: Ingo Molnar, Morten Rasmussen
Subject: [PATCH 1/2] sched: force update of blocked load of idle cpus
Date: Tue, 24 Oct 2017 13:25:55 +0100
Message-Id: <20171024122556.15872-2-brendan.jackman@arm.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171024122556.15872-1-brendan.jackman@arm.com>
References: <20171024122556.15872-1-brendan.jackman@arm.com>

From: Vincent Guittot

When a CPU is idle, its blocked load is updated only when an idle load
balance is triggered, which may never happen. Because it is uncertain
whether an idle load balance will ever run, the utilization, load and
shares of an idle cfs_rq can stay artificially high and steal shares
and running time from the busy cfs_rqs of the task group.

Add a new, lighter idle load balance state which ensures that blocked
loads are periodically updated and decayed, but which does not perform
any task migration.

The remote load updates are rate-limited, so that they are not
performed with a shorter period than LOAD_AVG_PERIOD (i.e. the PELT
half-life). This is the period after which stale load is known to carry
a 50% error.
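To make the 50% figure concrete: PELT decays load geometrically with a
half-life of LOAD_AVG_PERIOD (32, i.e. ~32ms), so a blocked-load value
that has gone un-decayed for one full period is twice its true value,
meaning half of the stale number is error. A minimal userspace sketch of
that arithmetic, using floating point instead of the kernel's fixed-point
decay_load(), purely for illustration:

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* PELT half-life: a contribution halves every 32ms (LOAD_AVG_PERIOD). */
	const double y = pow(0.5, 1.0 / 32.0);	/* per-ms decay factor */
	double stale = 1024.0;			/* blocked load left un-decayed */
	double decayed = stale * pow(y, 32);	/* its true value 32ms later */

	/* (1024 - 512) / 1024 = 50%: the stale value carries a 50% error. */
	printf("stale=%.0f decayed=%.0f error=%.0f%%\n",
	       stale, decayed, 100.0 * (stale - decayed) / stale);
	return 0;
}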
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
Signed-off-by: Vincent Guittot
[Switched remote update interval to use PELT half life]
[Moved update_blocked_averages call outside rebalance_domains to simplify code]
Signed-off-by: Brendan Jackman
---
 kernel/sched/fair.c  | 71 +++++++++++++++++++++++++++++++++++++++++++++-------
 kernel/sched/sched.h |  1 +
 2 files changed, 63 insertions(+), 9 deletions(-)

-- 
2.14.1

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 85d1ec1c3b39..9085caf49c76 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5976,6 +5976,9 @@ static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
 	return min_cap * 1024 < task_util(p) * capacity_margin;
 }
 
+static inline bool nohz_kick_needed(struct rq *rq, bool only_update);
+static void nohz_balancer_kick(bool only_update);
+
 /*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
  * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
@@ -6074,6 +6077,11 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 	}
 	rcu_read_unlock();
 
+#ifdef CONFIG_NO_HZ_COMMON
+	if (nohz_kick_needed(cpu_rq(new_cpu), true))
+		nohz_balancer_kick(true);
+#endif
+
 	return new_cpu;
 }
 
@@ -8653,6 +8661,7 @@ static struct {
 	cpumask_var_t idle_cpus_mask;
 	atomic_t nr_cpus;
 	unsigned long next_balance;	/* in jiffy units */
+	unsigned long next_update;	/* in jiffy units */
 } nohz ____cacheline_aligned;
 
 static inline int find_new_ilb(void)
@@ -8670,7 +8679,7 @@
  * nohz_load_balancer CPU (if there is one) otherwise fallback to any idle
  * CPU (if there is one).
  */
-static void nohz_balancer_kick(void)
+static void nohz_balancer_kick(bool only_update)
 {
 	int ilb_cpu;
 
@@ -8683,6 +8692,10 @@ static void nohz_balancer_kick(void)
 
 	if (test_and_set_bit(NOHZ_BALANCE_KICK, nohz_flags(ilb_cpu)))
 		return;
+
+	if (only_update)
+		set_bit(NOHZ_STATS_KICK, nohz_flags(ilb_cpu));
+
 	/*
 	 * Use smp_send_reschedule() instead of resched_cpu().
 	 * This way we generate a sched IPI on the target cpu which
@@ -8770,6 +8783,8 @@ void nohz_balance_enter_idle(int cpu)
 	atomic_inc(&nohz.nr_cpus);
 	set_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu));
 }
+#else
+static inline void nohz_balancer_kick(bool only_update) {}
 #endif
 
 static DEFINE_SPINLOCK(balancing);
@@ -8801,8 +8816,6 @@ static void rebalance_domains(struct rq *rq, enum cpu_idle_type idle)
 	int need_serialize, need_decay = 0;
 	u64 max_cost = 0;
 
-	update_blocked_averages(cpu);
-
 	rcu_read_lock();
 	for_each_domain(cpu, sd) {
 		/*
@@ -8901,6 +8914,7 @@ static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle)
 {
 	int this_cpu = this_rq->cpu;
 	struct rq *rq;
+	struct sched_domain *sd;
 	int balance_cpu;
 	/* Earliest time when we have to do rebalance again */
 	unsigned long next_balance = jiffies + 60*HZ;
@@ -8910,6 +8924,23 @@ static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle)
 	    !test_bit(NOHZ_BALANCE_KICK, nohz_flags(this_cpu)))
 		goto end;
 
+	/*
+	 * This cpu is going to update the blocked load of idle CPUs either
+	 * before doing a rebalancing or just to keep metrics up to date. We
+	 * can safely update the next update timestamp.
+	 */
+	rcu_read_lock();
+	sd = rcu_dereference(this_rq->sd);
+	/*
+	 * Check whether there is a sched_domain available for this cpu.
+	 * The last other cpu can have been unplugged since the ILB has been
+	 * triggered and the sched_domain can now be null. The idle balance
+	 * sequence will quickly be aborted as there are no more idle CPUs.
+	 */
+	if (sd)
+		nohz.next_update = jiffies + msecs_to_jiffies(LOAD_AVG_PERIOD);
+	rcu_read_unlock();
+
 	for_each_cpu(balance_cpu, nohz.idle_cpus_mask) {
 		if (balance_cpu == this_cpu || !idle_cpu(balance_cpu))
 			continue;
 
@@ -8936,7 +8967,15 @@ static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle)
 			cpu_load_update_idle(rq);
 			rq_unlock_irq(rq, &rf);
 
-			rebalance_domains(rq, CPU_IDLE);
+			update_blocked_averages(balance_cpu);
+			/*
+			 * This idle load balance softirq may have been
+			 * triggered only to update the blocked load and shares
+			 * of idle CPUs (which we have just done for
+			 * balance_cpu). In that case skip the actual balance.
+			 */
+			if (!test_bit(NOHZ_STATS_KICK, nohz_flags(this_cpu)))
+				rebalance_domains(rq, idle);
 		}
 
 		if (time_after(next_balance, rq->next_balance)) {
@@ -8967,7 +9006,7 @@ static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle)
  * - For SD_ASYM_PACKING, if the lower numbered cpu's in the scheduler
  *   domain span are idle.
  */
-static inline bool nohz_kick_needed(struct rq *rq)
+static inline bool nohz_kick_needed(struct rq *rq, bool only_update)
 {
 	unsigned long now = jiffies;
 	struct sched_domain_shared *sds;
@@ -8975,7 +9014,7 @@ static inline bool nohz_kick_needed(struct rq *rq)
 	int nr_busy, i, cpu = rq->cpu;
 	bool kick = false;
 
-	if (unlikely(rq->idle_balance))
+	if (unlikely(rq->idle_balance) && !only_update)
 		return false;
 
 	/*
@@ -8992,6 +9031,13 @@
 	if (likely(!atomic_read(&nohz.nr_cpus)))
 		return false;
 
+	if (only_update) {
+		if (time_before(now, nohz.next_update))
+			return false;
+		else
+			return true;
+	}
+
 	if (time_before(now, nohz.next_balance))
 		return false;
 
@@ -9041,6 +9087,7 @@
 }
 #else
 static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle) { }
+static inline bool nohz_kick_needed(struct rq *rq, bool only_update) { return false; }
 #endif
 
 /*
@@ -9062,7 +9109,12 @@ static __latent_entropy void run_rebalance_domains(struct softirq_action *h)
 	 * and abort nohz_idle_balance altogether if we pull some load.
 	 */
 	nohz_idle_balance(this_rq, idle);
-	rebalance_domains(this_rq, idle);
+	update_blocked_averages(this_rq->cpu);
+	if (!test_bit(NOHZ_STATS_KICK, nohz_flags(this_rq->cpu)))
+		rebalance_domains(this_rq, idle);
+#ifdef CONFIG_NO_HZ_COMMON
+	clear_bit(NOHZ_STATS_KICK, nohz_flags(this_rq->cpu));
+#endif
 }
 
 /*
@@ -9077,8 +9129,8 @@ void trigger_load_balance(struct rq *rq)
 	if (time_after_eq(jiffies, rq->next_balance))
 		raise_softirq(SCHED_SOFTIRQ);
 #ifdef CONFIG_NO_HZ_COMMON
-	if (nohz_kick_needed(rq))
-		nohz_balancer_kick();
+	if (nohz_kick_needed(rq, false))
+		nohz_balancer_kick(false);
 #endif
 }
 
@@ -9657,6 +9709,7 @@ __init void init_sched_fair_class(void)
 
 #ifdef CONFIG_NO_HZ_COMMON
 	nohz.next_balance = jiffies;
+	nohz.next_update = jiffies;
 	zalloc_cpumask_var(&nohz.idle_cpus_mask, GFP_NOWAIT);
 #endif
 #endif /* SMP */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 14db76cd496f..6f95ef653f73 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1978,6 +1978,7 @@ extern void cfs_bandwidth_usage_dec(void);
 enum rq_nohz_flag_bits {
 	NOHZ_TICK_STOPPED,
 	NOHZ_BALANCE_KICK,
+	NOHZ_STATS_KICK
 };
 
 #define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
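Summarising the resulting behaviour on the CPU that receives the kick, as a
standalone sketch (ordinary userspace C, not kernel code; the flag and helper
names merely mirror those used in the patch):

#include <stdbool.h>
#include <stdio.h>

static bool nohz_stats_kick;	/* stands in for NOHZ_STATS_KICK on the ILB CPU */

static void update_blocked_averages(int cpu)
{
	printf("cpu%d: decay blocked load, utilization and shares\n", cpu);
}

static void rebalance_domains(int cpu)
{
	printf("cpu%d: full load balance (may migrate tasks)\n", cpu);
}

/* What the SCHED_SOFTIRQ handler now does for each idle CPU it walks. */
static void softirq_on_ilb_cpu(int cpu)
{
	update_blocked_averages(cpu);		/* always refresh the stats */
	if (!nohz_stats_kick)			/* skip migration work when the kick */
		rebalance_domains(cpu);		/* was only for a stats update */
	nohz_stats_kick = false;
}

int main(void)
{
	nohz_stats_kick = true;			/* stats-only kick, e.g. from select_task_rq_fair() */
	softirq_on_ilb_cpu(0);			/* prints the decay line only */

	nohz_stats_kick = false;		/* ordinary nohz balance kick */
	softirq_on_ilb_cpu(0);			/* decay plus a full rebalance */
	return 0;
}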