From patchwork Sun Mar 31 14:31:46 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 15785
From: Viresh Kumar <viresh.kumar@linaro.org>
To: tj@kernel.org
Cc: linaro-kernel@lists.linaro.org, patches@linaro.org,
	robin.randhawa@arm.com, Steve.Bannister@arm.com, Liviu.Dudau@arm.com,
	charles.garcia-tobin@arm.com, arvind.chauhan@arm.com,
	davem@davemloft.net, airlied@redhat.com, axboe@kernel.dk,
	tglx@linutronix.de, peterz@infradead.org, mingo@redhat.com,
	rostedt@goodmis.org, linux-rt-users@vger.kernel.org,
	linux-kernel@vger.kernel.org, Viresh Kumar
Subject: [PATCH V4 3/4] block: queue work on unbound wq
Date: Sun, 31 Mar 2013 20:01:46 +0530
Message-Id: <91239cde99aaba2715f63db1f88241d9f4a36e13.1364740180.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 1.7.12.rc2.18.g61b472e

The block layer uses workqueues for multiple purposes. There is no real
dependency on running this work on the CPU that queued it. On an idle
system, it is observed that an otherwise idle CPU wakes up many times
just to service this work. It would be better if we could schedule it on
a CPU that the scheduler believes to be the most appropriate one.

This patch replaces normal workqueues with UNBOUND versions.

Cc: Jens Axboe
Signed-off-by: Viresh Kumar
---
 block/blk-core.c |  3 ++-
 block/blk-ioc.c  |  2 +-
 block/genhd.c    | 10 ++++++----
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 492242f..91cd486 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3186,7 +3186,8 @@ int __init blk_dev_init(void)
 	/* used for unplugging and affects IO latency/throughput - HIGHPRI */
 	kblockd_workqueue = alloc_workqueue("kblockd",
-					    WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+					    WQ_MEM_RECLAIM | WQ_HIGHPRI |
+					    WQ_UNBOUND, 0);
 	if (!kblockd_workqueue)
 		panic("Failed to create kblockd\n");

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 9c4bb82..5dd576d 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -144,7 +144,7 @@ void put_io_context(struct io_context *ioc)
 	if (atomic_long_dec_and_test(&ioc->refcount)) {
 		spin_lock_irqsave(&ioc->lock, flags);
 		if (!hlist_empty(&ioc->icq_list))
-			schedule_work(&ioc->release_work);
+			queue_work(system_unbound_wq, &ioc->release_work);
 		else
 			free_ioc = true;
 		spin_unlock_irqrestore(&ioc->lock, flags);

diff --git a/block/genhd.c b/block/genhd.c
index a1ed52a..0f4470a 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1488,9 +1488,10 @@ static void __disk_unblock_events(struct gendisk *disk, bool check_now)
 	intv = disk_events_poll_jiffies(disk);
 	set_timer_slack(&ev->dwork.timer, intv / 4);
 	if (check_now)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, 0);
+		queue_delayed_work(system_freezable_unbound_wq, &ev->dwork, 0);
 	else if (intv)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, intv);
+		queue_delayed_work(system_freezable_unbound_wq, &ev->dwork,
+				   intv);
 out_unlock:
 	spin_unlock_irqrestore(&ev->lock, flags);
 }
@@ -1533,7 +1534,7 @@ void disk_flush_events(struct gendisk *disk, unsigned int mask)
 	spin_lock_irq(&ev->lock);
 	ev->clearing |= mask;
 	if (!ev->block)
-		mod_delayed_work(system_freezable_wq, &ev->dwork, 0);
+		mod_delayed_work(system_freezable_unbound_wq, &ev->dwork, 0);
 	spin_unlock_irq(&ev->lock);
 }
@@ -1626,7 +1627,8 @@ static void disk_check_events(struct disk_events *ev,
 	intv = disk_events_poll_jiffies(disk);
 	if (!ev->block && intv)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, intv);
+		queue_delayed_work(system_freezable_unbound_wq, &ev->dwork,
+				   intv);
 	spin_unlock_irq(&ev->lock);
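
For readers unfamiliar with unbound workqueues, here is a minimal sketch
(not part of this patch) of what queueing work on an unbound workqueue
looks like from a caller's side. The module, queue, and function names
below are hypothetical; only alloc_workqueue(), INIT_WORK(), queue_work()
and destroy_workqueue() are the actual kernel APIs the patch relies on:

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/smp.h>

/* hypothetical queue and work item, for illustration only */
static struct workqueue_struct *example_wq;
static struct work_struct example_work;

static void example_work_fn(struct work_struct *work)
{
	/* raw_smp_processor_id(): work runs in preemptible context */
	pr_info("work ran on cpu %d, chosen by the scheduler\n",
		raw_smp_processor_id());
}

static int __init example_init(void)
{
	/* WQ_UNBOUND: work items are not pinned to the queueing CPU */
	example_wq = alloc_workqueue("example", WQ_UNBOUND, 0);
	if (!example_wq)
		return -ENOMEM;

	INIT_WORK(&example_work, example_work_fn);
	queue_work(example_wq, &example_work);
	return 0;
}

static void __exit example_exit(void)
{
	/* completes pending work before freeing the workqueue */
	destroy_workqueue(example_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

With WQ_UNBOUND set, a work item is not tied to the worker pool of the
submitting CPU, so an idle CPU is not woken just to run it. Where no
dedicated queue is needed, queueing to the pre-existing system_unbound_wq
(as the blk-ioc.c hunk above does) achieves the same effect without
allocating a new workqueue.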