From patchwork Tue Sep 25 10:36:07 2012
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 11712
From: Viresh Kumar <viresh.kumar2@arm.com>
To: linux-kernel@vger.kernel.org
Cc: pjt@google.com, paul.mckenney@linaro.org, tglx@linutronix.de,
	tj@kernel.org, suresh.b.siddha@intel.com, venki@google.com,
	mingo@redhat.com, peterz@infradead.org, robin.randhawa@arm.com,
	Steve.Bannister@arm.com, Arvind.Chauhan@arm.com,
	amit.kucheria@linaro.org, vincent.guittot@linaro.org,
	linaro-dev@lists.linaro.org, patches@linaro.org, Viresh Kumar
Subject: [PATCH 2/3] workqueue: create __flush_delayed_work to avoid duplicating code
Date: Tue, 25 Sep 2012 16:06:07 +0530

flush_delayed_work() and flush_delayed_work_sync() shared a large portion of
identical code. Introduce a new routine, __flush_delayed_work(), that holds
the common part, avoiding the duplication.
Signed-off-by: Viresh Kumar
---
 kernel/workqueue.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 692d976..692a55b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2820,6 +2820,13 @@ bool cancel_work_sync(struct work_struct *work)
 }
 EXPORT_SYMBOL_GPL(cancel_work_sync);
 
+static inline void __flush_delayed_work(struct delayed_work *dwork)
+{
+	if (del_timer_sync(&dwork->timer))
+		__queue_work(raw_smp_processor_id(),
+			     get_work_cwq(&dwork->work)->wq, &dwork->work);
+}
+
 /**
  * flush_delayed_work - wait for a dwork to finish executing the last queueing
  * @dwork: the delayed work to flush
@@ -2834,9 +2841,7 @@ EXPORT_SYMBOL_GPL(cancel_work_sync);
  */
 bool flush_delayed_work(struct delayed_work *dwork)
 {
-	if (del_timer_sync(&dwork->timer))
-		__queue_work(raw_smp_processor_id(),
-			     get_work_cwq(&dwork->work)->wq, &dwork->work);
+	__flush_delayed_work(dwork);
 	return flush_work(&dwork->work);
 }
 EXPORT_SYMBOL(flush_delayed_work);
@@ -2855,9 +2860,7 @@ EXPORT_SYMBOL(flush_delayed_work);
  */
 bool flush_delayed_work_sync(struct delayed_work *dwork)
 {
-	if (del_timer_sync(&dwork->timer))
-		__queue_work(raw_smp_processor_id(),
-			     get_work_cwq(&dwork->work)->wq, &dwork->work);
+	__flush_delayed_work(dwork);
 	return flush_work_sync(&dwork->work);
 }
 EXPORT_SYMBOL(flush_delayed_work_sync);
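For reference, the refactor follows a common pattern: two functions that differ
only in their tail call keep their shared prefix in one static inline helper.
A minimal userspace sketch of the same pattern (hypothetical stand-in types and
fields, not kernel code; the real kernel calls are named only in comments):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct delayed_work; only the shape matters. */
struct delayed_work {
	int timer_pending;	/* 1 while the delay timer is armed */
	int queued;		/* how many times work was queued early */
};

/*
 * Shared prefix, factored out exactly as the patch does: if the delay
 * timer is still pending, cancel it and queue the work immediately.
 */
static inline void __flush_delayed_work(struct delayed_work *dwork)
{
	if (dwork->timer_pending) {		/* stands in for del_timer_sync() */
		dwork->timer_pending = 0;
		dwork->queued++;		/* stands in for __queue_work() */
	}
}

/* The two callers now differ only in their final flush call. */
static bool flush_delayed_work(struct delayed_work *dwork)
{
	__flush_delayed_work(dwork);
	return true;				/* stands in for flush_work() */
}

static bool flush_delayed_work_sync(struct delayed_work *dwork)
{
	__flush_delayed_work(dwork);
	return true;				/* stands in for flush_work_sync() */
}
```

The helper is `static inline`, so the compiler can fold it back into both
callers and the factoring costs nothing at runtime.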