From patchwork Fri Oct 18 11:52:16 2013
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 21116
From: Vincent Guittot <vincent.guittot@linaro.org>
To: linux-kernel@vger.kernel.org, peterz@infradead.org, mingo@kernel.org,
	pjt@google.com, Morten.Rasmussen@arm.com, cmetcalf@tilera.com,
	tony.luck@intel.com, alex.shi@intel.com, preeti@linux.vnet.ibm.com,
	linaro-kernel@lists.linaro.org
Cc: rjw@sisk.pl, paulmck@linux.vnet.ibm.com, corbet@lwn.net,
	tglx@linutronix.de, len.brown@intel.com, arjan@linux.intel.com,
	amit.kucheria@linaro.org, l.majewski@samsung.com,
	Vincent Guittot
Subject: [RFC][PATCH v5 03/14] sched: define pack buddy CPUs
Date: Fri, 18 Oct 2013 13:52:16 +0200
Message-Id: <1382097147-30088-3-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1382097147-30088-1-git-send-email-vincent.guittot@linaro.org>
References: <1382097147-30088-1-git-send-email-vincent.guittot@linaro.org>

During the creation of sched_domains, we define a pack buddy CPU for each
CPU when one is available. We want to pack at all levels where a group of
CPUs can be power gated independently from the others.
On a system that can't power gate a group of CPUs independently, the
SD_SHARE_POWERDOMAIN flag is set at all sched_domain levels and the buddy
is set to -1. This is the default behavior for all architectures.
On a dual cluster / dual core system which can power gate each core and
cluster independently, the buddy configuration will be:

      | Cluster 0   | Cluster 1   |
      | CPU0 | CPU1 | CPU2 | CPU3 |
-----------------------------------
buddy | CPU0 | CPU0 | CPU0 | CPU2 |

If the cores in a cluster can't be power gated independently, the buddy
configuration becomes:

      | Cluster 0   | Cluster 1   |
      | CPU0 | CPU1 | CPU2 | CPU3 |
-----------------------------------
buddy | CPU0 | CPU1 | CPU0 | CPU0 |

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/core.c  |  1 +
 kernel/sched/fair.c  | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |  5 ++++
 3 files changed, 76 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 735e964..0bf5f4d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5184,6 +5184,7 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 	rcu_assign_pointer(rq->sd, sd);
 	destroy_sched_domains(tmp, cpu);
 
+	update_packing_domain(cpu);
 	update_top_cache_domain(cpu);
 }
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 11cd136..5547831 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -178,6 +178,76 @@ void sched_init_granularity(void)
 	update_sysctl();
 }
 
+#ifdef CONFIG_SMP
+#ifdef CONFIG_SCHED_PACKING_TASKS
+/*
+ * Save the id of the optimal CPU that should be used to pack small tasks.
+ * The value -1 is used when no buddy has been found.
+ */
+DEFINE_PER_CPU(int, sd_pack_buddy);
+
+/*
+ * Look for the best buddy CPU that can be used to pack small tasks.
+ * We make the assumption that it is not worth packing on CPUs that share
+ * the same power line. We look for the 1st sched_domain without the
+ * SD_SHARE_POWERDOMAIN flag.
+ * Then we look for the sched_group with the lowest power per core, based
+ * on the assumption that their power efficiency is better.
+ */
+void update_packing_domain(int cpu)
+{
+	struct sched_domain *sd;
+	int id = -1;
+
+	sd = highest_flag_domain(cpu, SD_SHARE_POWERDOMAIN);
+	if (!sd)
+		sd = rcu_dereference_check_sched_domain(cpu_rq(cpu)->sd);
+	else
+		sd = sd->parent;
+
+	while (sd && (sd->flags & SD_LOAD_BALANCE)
+		&& !(sd->flags & SD_SHARE_POWERDOMAIN)) {
+		struct sched_group *sg = sd->groups;
+		struct sched_group *pack = sg;
+		struct sched_group *tmp;
+
+		/*
+		 * The sched_domain of a CPU points to the local sched_group
+		 * and this CPU of the local group is a good candidate
+		 */
+		id = cpu;
+
+		/* loop the sched groups to find the best one */
+		for (tmp = sg->next; tmp != sg; tmp = tmp->next) {
+			if (tmp->sgp->power * pack->group_weight >
+					pack->sgp->power * tmp->group_weight)
+				continue;
+
+			if ((tmp->sgp->power * pack->group_weight ==
+					pack->sgp->power * tmp->group_weight)
+				&& (cpumask_first(sched_group_cpus(tmp)) >= id))
+				continue;
+
+			/* we have found a better group */
+			pack = tmp;
+
+			/* Take the 1st CPU of the new group */
+			id = cpumask_first(sched_group_cpus(pack));
+		}
+
+		/* Look for a CPU other than itself */
+		if (id != cpu)
+			break;
+
+		sd = sd->parent;
+	}
+
+	pr_debug("CPU%d packing on CPU%d\n", cpu, id);
+	per_cpu(sd_pack_buddy, cpu) = id;
+}
+#endif /* CONFIG_SCHED_PACKING_TASKS */
+#endif /* CONFIG_SMP */
+
 #if BITS_PER_LONG == 32
 # define WMULT_CONST	(~0UL)
 #else

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b3c5653..22e3f1d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1022,6 +1022,11 @@ extern void update_group_power(struct sched_domain *sd, int cpu);
 extern void trigger_load_balance(struct rq *rq, int cpu);
 extern void idle_balance(int this_cpu, struct rq *this_rq);
 
+#ifdef CONFIG_SCHED_PACKING_TASKS
+extern void update_packing_domain(int cpu);
+#else
+static inline void update_packing_domain(int cpu) {}
+#endif
 extern void idle_enter_fair(struct rq *this_rq);
 extern void idle_exit_fair(struct rq *this_rq);