From patchwork Mon Mar 18 15:23:28 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 15406
From: Viresh Kumar <viresh.kumar@linaro.org>
To: pjt@google.com, paul.mckenney@linaro.org, tglx@linutronix.de,
	tj@kernel.org, suresh.b.siddha@intel.com, venki@google.com,
	mingo@redhat.com, peterz@infradead.org, rostedt@goodmis.org
Cc: linaro-kernel@lists.linaro.org, robin.randhawa@arm.com,
	Steve.Bannister@arm.com, Liviu.Dudau@arm.com,
	charles.garcia-tobin@arm.com, Arvind.Chauhan@arm.com,
	linux-rt-users@vger.kernel.org, linux-kernel@vger.kernel.org,
	Viresh Kumar <viresh.kumar@linaro.org>, Jens Axboe
Subject: [PATCH V3 6/7] block: queue work on any cpu
Date: Mon, 18 Mar 2013 20:53:28 +0530
Message-Id: <9aa157216e60de7d245128d42fb896918184ae8c.1363617402.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 1.7.12.rc2.18.g61b472e

The block layer uses workqueues for multiple purposes.
There is no real dependency of running these works on the cpu which queued them. On an idle system, it is observed that an idle cpu wakes up many times just to service this work. It would be better if we could schedule it on a non-idle cpu, to save power.

By idle cpu (from the scheduler's perspective) we mean:
- the current task is the idle task
- nr_running == 0
- the wake_list is empty

This patch replaces schedule_work() and queue_[delayed_]work() with their queue_[delayed_]work_on_any_cpu() siblings. These routines look for the closest (via scheduling domains) non-idle cpu, non-idle from the scheduler's perspective. If the current cpu is not idle, or if all cpus are idle, the work is scheduled on the local cpu.

Cc: Jens Axboe
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 block/blk-core.c | 6 +++---
 block/blk-ioc.c  | 2 +-
 block/genhd.c    | 9 ++++++---
 3 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 186603b..14ae74f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -225,7 +225,7 @@ static void blk_delay_work(struct work_struct *work)
 void blk_delay_queue(struct request_queue *q, unsigned long msecs)
 {
 	if (likely(!blk_queue_dead(q)))
-		queue_delayed_work(kblockd_workqueue, &q->delay_work,
+		queue_delayed_work_on_any_cpu(kblockd_workqueue, &q->delay_work,
 				   msecs_to_jiffies(msecs));
 }
 EXPORT_SYMBOL(blk_delay_queue);
@@ -2852,14 +2852,14 @@ EXPORT_SYMBOL_GPL(blk_rq_prep_clone);

 int kblockd_schedule_work(struct request_queue *q, struct work_struct *work)
 {
-	return queue_work(kblockd_workqueue, work);
+	return queue_work_on_any_cpu(kblockd_workqueue, work);
 }
 EXPORT_SYMBOL(kblockd_schedule_work);

 int kblockd_schedule_delayed_work(struct request_queue *q,
 		struct delayed_work *dwork, unsigned long delay)
 {
-	return queue_delayed_work(kblockd_workqueue, dwork, delay);
+	return queue_delayed_work_on_any_cpu(kblockd_workqueue, dwork, delay);
 }
 EXPORT_SYMBOL(kblockd_schedule_delayed_work);

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 9c4bb82..2eefeb1 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -144,7 +144,7 @@ void put_io_context(struct io_context *ioc)
 	if (atomic_long_dec_and_test(&ioc->refcount)) {
 		spin_lock_irqsave(&ioc->lock, flags);
 		if (!hlist_empty(&ioc->icq_list))
-			schedule_work(&ioc->release_work);
+			queue_work_on_any_cpu(system_wq, &ioc->release_work);
 		else
 			free_ioc = true;
 		spin_unlock_irqrestore(&ioc->lock, flags);

diff --git a/block/genhd.c b/block/genhd.c
index a1ed52a..4bdb735 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1488,9 +1488,11 @@ static void __disk_unblock_events(struct gendisk *disk, bool check_now)
 	intv = disk_events_poll_jiffies(disk);
 	set_timer_slack(&ev->dwork.timer, intv / 4);
 	if (check_now)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, 0);
+		queue_delayed_work_on_any_cpu(system_freezable_wq, &ev->dwork,
+					      0);
 	else if (intv)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, intv);
+		queue_delayed_work_on_any_cpu(system_freezable_wq, &ev->dwork,
+					      intv);
 out_unlock:
 	spin_unlock_irqrestore(&ev->lock, flags);
 }
@@ -1626,7 +1628,8 @@ static void disk_check_events(struct disk_events *ev,
 	intv = disk_events_poll_jiffies(disk);
 	if (!ev->block && intv)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, intv);
+		queue_delayed_work_on_any_cpu(system_freezable_wq, &ev->dwork,
+					      intv);
 	spin_unlock_irq(&ev->lock);