From patchwork Mon Dec 12 19:21:08 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 5607
From: Vincent Guittot <vincent.guittot@linaro.org>
To: linux-kernel@vger.kernel.org, linaro-dev@lists.linaro.org,
	a.p.zijlstra@chello.nl
Cc: patches@linaro.org, Vincent Guittot <vincent.guittot@linaro.org>
Subject: [RFC] sched: Ensure cpu_power periodic update
Date: Mon, 12 Dec 2011 20:21:08 +0100
Message-Id: <1323717668-2143-1-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.7.4.1

With a lot of small tasks, the sched softirq is almost never raised when
nohz is enabled. In that case, load_balance() is mostly called in the
newly-idle mode, which does not update cpu_power.

Add a next_update field to sched_group_power to ensure a maximum update
period even when activity consists only of short-running tasks.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 include/linux/sched.h |    1 +
 kernel/sched/fair.c   |   24 ++++++++++++++++--------
 2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 64527c4..7178446 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -903,6 +903,7 @@ struct sched_group_power {
	 * single CPU.
	 */
	unsigned int power, power_orig;
+	unsigned long next_update;
	/*
	 * Number of busy cpus in this group.
	 */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a4d2b7a..809f484 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -215,6 +215,8 @@ calc_delta_mine(unsigned long delta_exec, unsigned long weight,

 const struct sched_class fair_sched_class;

+static unsigned long __read_mostly max_load_balance_interval = HZ/10;
+
 /**************************************************************
  * CFS operations on generic schedulable entities:
  */
@@ -3786,6 +3788,11 @@ void update_group_power(struct sched_domain *sd, int cpu)
	struct sched_domain *child = sd->child;
	struct sched_group *group, *sdg = sd->groups;
	unsigned long power;
+	unsigned long interval;
+
+	interval = msecs_to_jiffies(sd->balance_interval);
+	interval = clamp(interval, 1UL, max_load_balance_interval);
+	sdg->sgp->next_update = jiffies + interval;

	if (!child) {
		update_cpu_power(sd, cpu);
@@ -3893,12 +3900,15 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
	 * domains. In the newly idle case, we will allow all the cpu's
	 * to do the newly idle load balance.
	 */
-	if (idle != CPU_NEWLY_IDLE && local_group) {
-		if (balance_cpu != this_cpu) {
-			*balance = 0;
-			return;
-		}
-		update_group_power(sd, this_cpu);
+	if (local_group) {
+		if (idle != CPU_NEWLY_IDLE) {
+			if (balance_cpu != this_cpu) {
+				*balance = 0;
+				return;
+			}
+			update_group_power(sd, this_cpu);
+		} else if (time_after_eq(jiffies, group->sgp->next_update))
+			update_group_power(sd, this_cpu);
	}

	/* Adjust by relative CPU power of the group */
@@ -4917,8 +4927,6 @@ void select_nohz_load_balancer(int stop_tick)

 static DEFINE_SPINLOCK(balancing);

-static unsigned long __read_mostly max_load_balance_interval = HZ/10;
-
 /*
  * Scale the max load_balance interval with the number of CPUs in the system.
  * This trades load-balance latency on larger machines for less cross talk.