From patchwork Tue Sep 25 10:36:08 2012
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 11713
From: Viresh Kumar
To: linux-kernel@vger.kernel.org
Cc: pjt@google.com, paul.mckenney@linaro.org, tglx@linutronix.de,
    tj@kernel.org, suresh.b.siddha@intel.com, venki@google.com,
    mingo@redhat.com, peterz@infradead.org, robin.randhawa@arm.com,
    Steve.Bannister@arm.com, Arvind.Chauhan@arm.com, amit.kucheria@linaro.org,
    vincent.guittot@linaro.org, linaro-dev@lists.linaro.org,
    patches@linaro.org, Viresh Kumar
Subject: [PATCH 3/3] workqueue: Schedule work on non-idle cpu instead of current one
Date: Tue, 25 Sep 2012 16:06:08 +0530
Message-Id: <3232b3192e2e373cc4aaf43359d454c5ad53cddb.1348568074.git.viresh.kumar@linaro.org>

Workqueues queue work on the current CPU if the caller has not passed a
preferred CPU. This may wake up an idle CPU, which is not actually
required: the work can be processed by any CPU, so a non-idle CPU should
be selected instead. This patch adds support to the workqueue framework
to get a preferred (non-idle) CPU from the scheduler instead of using the
current CPU.

Signed-off-by: Viresh Kumar
---
 arch/arm/Kconfig   | 11 +++++++++++
 kernel/workqueue.c | 25 ++++++++++++++++++-------
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 5944511..da17bd0 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1594,6 +1594,17 @@ config HMP_SLOW_CPU_MASK
 	  Specify the cpuids of the slow CPUs in the system as a list string,
 	  e.g. cpuid 0+1 should be specified as 0-1.
 
+config MIGRATE_WQ
+	bool "(EXPERIMENTAL) Migrate Workqueues to non-idle cpu"
+	depends on SMP && EXPERIMENTAL
+	help
+	  Workqueues queues work on current cpu, if the caller haven't passed a
+	  preferred cpu. This may wake up an idle CPU, which is actually not
+	  required. This work can be processed by any CPU and so we must select
+	  a non-idle CPU here. This patch adds in support in workqueue
+	  framework to get preferred CPU details from the scheduler, instead of
+	  using current CPU.
+
 config HAVE_ARM_SCU
 	bool
 	help
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 692a55b..fd8df4a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -456,6 +456,16 @@ static inline void debug_work_activate(struct work_struct *work) { }
 static inline void debug_work_deactivate(struct work_struct *work) { }
 #endif
 
+/* This enables migration of a work to a non-IDLE cpu instead of current cpu */
+#ifdef CONFIG_MIGRATE_WQ
+static int wq_select_cpu(void)
+{
+	return sched_select_cpu(SD_NUMA, -1);
+}
+#else
+#define wq_select_cpu() smp_processor_id()
+#endif
+
 /* Serializes the accesses to the list of workqueues. */
 static DEFINE_SPINLOCK(workqueue_lock);
 static LIST_HEAD(workqueues);
@@ -995,7 +1005,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
 		struct global_cwq *last_gcwq;
 
 		if (unlikely(cpu == WORK_CPU_UNBOUND))
-			cpu = raw_smp_processor_id();
+			cpu = wq_select_cpu();
 
 		/*
 		 * It's multi cpu. If @wq is non-reentrant and @work
@@ -1066,8 +1076,9 @@ int queue_work(struct workqueue_struct *wq, struct work_struct *work)
 {
 	int ret;
 
-	ret = queue_work_on(get_cpu(), wq, work);
-	put_cpu();
+	preempt_disable();
+	ret = queue_work_on(wq_select_cpu(), wq, work);
+	preempt_enable();
 
 	return ret;
 }
@@ -1102,7 +1113,7 @@ static void delayed_work_timer_fn(unsigned long __data)
 	struct delayed_work *dwork = (struct delayed_work *)__data;
 	struct cpu_workqueue_struct *cwq = get_work_cwq(&dwork->work);
 
-	__queue_work(smp_processor_id(), cwq->wq, &dwork->work);
+	__queue_work(wq_select_cpu(), cwq->wq, &dwork->work);
 }
 
 /**
@@ -1158,7 +1169,7 @@ int queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
 		if (gcwq && gcwq->cpu != WORK_CPU_UNBOUND)
 			lcpu = gcwq->cpu;
 		else
-			lcpu = raw_smp_processor_id();
+			lcpu = wq_select_cpu();
 	} else
 		lcpu = WORK_CPU_UNBOUND;
 
@@ -2823,8 +2834,8 @@ EXPORT_SYMBOL_GPL(cancel_work_sync);
 static inline void __flush_delayed_work(struct delayed_work *dwork)
 {
 	if (del_timer_sync(&dwork->timer))
-		__queue_work(raw_smp_processor_id(),
-			get_work_cwq(&dwork->work)->wq, &dwork->work);
+		__queue_work(wq_select_cpu(), get_work_cwq(&dwork->work)->wq,
+			&dwork->work);
 }
 
 /**
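
For reference, here is a minimal caller-side sketch (illustrative only, not part
of this patch) of a module that queues work without naming a CPU. The names
demo_work and demo_work_fn are made up for illustration. Without this patch,
queue_work()/schedule_work() run the handler on the CPU that queued it; with
CONFIG_MIGRATE_WQ=y the workqueue core instead asks the scheduler, via
sched_select_cpu(), for a non-idle CPU, so an idle CPU is not woken just to run
the handler. The caller's code does not change either way.

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/smp.h>

static void demo_work_fn(struct work_struct *work)
{
	/* Handlers run preemptible, so use the raw accessor; the CPU id is only a hint. */
	pr_info("demo work ran on CPU %d\n", raw_smp_processor_id());
}

static DECLARE_WORK(demo_work, demo_work_fn);

static int __init demo_init(void)
{
	/* No CPU is specified; CPU selection is left to the workqueue core. */
	schedule_work(&demo_work);
	return 0;
}

static void __exit demo_exit(void)
{
	cancel_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Note also that queue_work() in the diff above switches from get_cpu()/put_cpu()
to preempt_disable()/preempt_enable(): the CPU being queued to is no longer
necessarily the local one, but preemption presumably still needs to be disabled
around the call so the smp_processor_id() fallback of wq_select_cpu() remains
valid.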