From patchwork Mon Nov 14 12:40:27 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vincent Guittot <vincent.guittot@linaro.org>
X-Patchwork-Id: 5103
From: Vincent Guittot <vincent.guittot@linaro.org>
To: linaro-dev@lists.linaro.org
Cc: patches@linaro.org, Vincent Guittot <vincent.guittot@linaro.org>
Subject: [RFC PATCH v2 09/09] sched: Ensure cpu_power periodic update
Date: Mon, 14 Nov 2011 13:40:27 +0100
Message-Id: <1321274427-2539-1-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.7.4.1

With a lot of small tasks, the sched softirq is almost never raised when
no_hz is enabled, so load_balance is mainly called in the newly_idle mode,
which does not update the cpu_power.

Add a next_update field which ensures a maximum update period of the
cpu_power even when there is only short activity.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 include/linux/sched.h |    1 +
 kernel/sched_fair.c   |   24 ++++++++++++++++--------
 2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 41d0237..8610921 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -901,6 +901,7 @@ struct sched_group_power {
 	 * single CPU.
 	 */
 	unsigned int power, power_orig;
+	unsigned long next_update;
 };
 
 struct sched_group {
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index bc8ee99..320b7a0 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -91,6 +91,8 @@ unsigned int __read_mostly sysctl_sched_shares_window = 10000000UL;
 
 static const struct sched_class fair_sched_class;
 
+static unsigned long __read_mostly max_load_balance_interval = HZ/10;
+
 /**************************************************************
  * CFS operations on generic schedulable entities:
  */
@@ -2667,6 +2669,11 @@ static void update_group_power(struct sched_domain *sd, int cpu)
 	struct sched_domain *child = sd->child;
 	struct sched_group *group, *sdg = sd->groups;
 	unsigned long power;
+	unsigned long interval;
+
+	interval = msecs_to_jiffies(sd->balance_interval);
+	interval = clamp(interval, 1UL, max_load_balance_interval);
+	sdg->sgp->next_update = jiffies + interval;
 
 	if (!child) {
 		update_cpu_power(sd, cpu);
@@ -2774,12 +2781,15 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
 	 * domains. In the newly idle case, we will allow all the cpu's
 	 * to do the newly idle load balance.
 	 */
-	if (idle != CPU_NEWLY_IDLE && local_group) {
-		if (balance_cpu != this_cpu) {
-			*balance = 0;
-			return;
-		}
-		update_group_power(sd, this_cpu);
+	if (local_group) {
+		if (idle != CPU_NEWLY_IDLE) {
+			if (balance_cpu != this_cpu) {
+				*balance = 0;
+				return;
+			}
+			update_group_power(sd, this_cpu);
+		} else if (time_after_eq(jiffies, group->sgp->next_update))
+			update_group_power(sd, this_cpu);
 	}
 
 	/* Adjust by relative CPU power of the group */
@@ -3879,8 +3889,6 @@ void select_nohz_load_balancer(int stop_tick)
 
 static DEFINE_SPINLOCK(balancing);
 
-static unsigned long __read_mostly max_load_balance_interval = HZ/10;
-
 /*
  * Scale the max load_balance interval with the number of CPUs in the system.
  * This trades load-balance latency on larger machines for less cross talk.
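
For illustration only, below is a minimal user-space C sketch of the deadline
pattern the patch relies on: the expensive refresh may run from the cheap,
frequent newly-idle path only once a clamped interval has elapsed since the
last refresh. None of this is kernel code; the names (ticks, update_power,
newly_idle_balance) and interval values are invented for the example, and the
plain "ticks >= next_update" comparison stands in for the wrap-safe
time_after_eq(jiffies, sgp->next_update) test added above.

/*
 * Illustrative user-space sketch, not kernel code.
 * "ticks" stands in for jiffies, update_power() for the expensive
 * update_group_power() work, and the numbers are arbitrary.
 */
#include <stdio.h>

#define MAX_UPDATE_INTERVAL	10	/* plays the role of max_load_balance_interval */

static unsigned long ticks;		/* stand-in for jiffies */
static unsigned long next_update;	/* like sgp->next_update */

static unsigned long clamp_interval(unsigned long v)
{
	if (v < 1)
		return 1;
	if (v > MAX_UPDATE_INTERVAL)
		return MAX_UPDATE_INTERVAL;
	return v;
}

/* Expensive work: refresh the power value and push the deadline forward. */
static void update_power(unsigned long balance_interval)
{
	next_update = ticks + clamp_interval(balance_interval);
	printf("tick %3lu: cpu_power refreshed, next update at %lu\n",
	       ticks, next_update);
}

/* Cheap, frequent path: only refresh once the deadline has expired. */
static void newly_idle_balance(unsigned long balance_interval)
{
	if (ticks >= next_update)	/* stands in for time_after_eq(jiffies, ...) */
		update_power(balance_interval);
}

int main(void)
{
	/* Many short newly-idle balances: the refresh still happens at
	 * most once per clamped interval. */
	for (ticks = 0; ticks < 50; ticks++)
		newly_idle_balance(25 /* balance_interval, clamped to 10 */);
	return 0;
}

The sketch also shows why max_load_balance_interval is moved earlier in
kernel/sched_fair.c: update_group_power() now needs it to clamp the
per-domain balance_interval before computing next_update.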