From patchwork Sun Mar 31 14:27:04 2013
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 15781
From: Viresh Kumar <viresh.kumar@linaro.org>
To: tj@kernel.org
Cc: linaro-kernel@lists.linaro.org, patches@linaro.org, robin.randhawa@arm.com,
 Steve.Bannister@arm.com, Liviu.Dudau@arm.com, charles.garcia-tobin@arm.com,
 arvind.chauhan@arm.com, davem@davemloft.net, airlied@redhat.com,
 axboe@kernel.dk, Viresh Kumar
Subject: [PATCH V4 3/4] block: queue work on unbound wq
Date: Sun, 31 Mar 2013 19:57:04 +0530
Message-Id: <91239cde99aaba2715f63db1f88241d9f4a36e13.1364739015.git.viresh.kumar@linaro.org>

The block layer uses workqueues for multiple purposes. There is no real
dependency on running this work on the CPU that queued it. On an idle
system, an otherwise idle CPU is observed to wake up many times just to
service this work.
It would be better if we could schedule it on a CPU the scheduler
believes to be the most appropriate one.

This patch replaces the normal workqueues with their UNBOUND versions.

Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 block/blk-core.c |  3 ++-
 block/blk-ioc.c  |  2 +-
 block/genhd.c    | 10 ++++++----
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 492242f..91cd486 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3186,7 +3186,8 @@ int __init blk_dev_init(void)
 	/* used for unplugging and affects IO latency/throughput - HIGHPRI */
 	kblockd_workqueue = alloc_workqueue("kblockd",
-					    WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+					    WQ_MEM_RECLAIM | WQ_HIGHPRI |
+					    WQ_UNBOUND, 0);
 	if (!kblockd_workqueue)
 		panic("Failed to create kblockd\n");
diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 9c4bb82..5dd576d 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -144,7 +144,7 @@ void put_io_context(struct io_context *ioc)
 	if (atomic_long_dec_and_test(&ioc->refcount)) {
 		spin_lock_irqsave(&ioc->lock, flags);
 		if (!hlist_empty(&ioc->icq_list))
-			schedule_work(&ioc->release_work);
+			queue_work(system_unbound_wq, &ioc->release_work);
 		else
 			free_ioc = true;
 		spin_unlock_irqrestore(&ioc->lock, flags);
diff --git a/block/genhd.c b/block/genhd.c
index a1ed52a..0f4470a 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1488,9 +1488,10 @@ static void __disk_unblock_events(struct gendisk *disk, bool check_now)
 	intv = disk_events_poll_jiffies(disk);
 	set_timer_slack(&ev->dwork.timer, intv / 4);
 	if (check_now)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, 0);
+		queue_delayed_work(system_freezable_unbound_wq, &ev->dwork, 0);
 	else if (intv)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, intv);
+		queue_delayed_work(system_freezable_unbound_wq, &ev->dwork,
+				   intv);
 out_unlock:
 	spin_unlock_irqrestore(&ev->lock, flags);
 }
@@ -1533,7 +1534,7 @@ void disk_flush_events(struct gendisk *disk, unsigned int mask)
 	spin_lock_irq(&ev->lock);
 	ev->clearing |= mask;
 	if (!ev->block)
-		mod_delayed_work(system_freezable_wq, &ev->dwork, 0);
+		mod_delayed_work(system_freezable_unbound_wq, &ev->dwork, 0);
 	spin_unlock_irq(&ev->lock);
 }
@@ -1626,7 +1627,8 @@ static void disk_check_events(struct disk_events *ev,
 	intv = disk_events_poll_jiffies(disk);
 	if (!ev->block && intv)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, intv);
+		queue_delayed_work(system_freezable_unbound_wq, &ev->dwork,
+				   intv);
 	spin_unlock_irq(&ev->lock);
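
[Editor's illustrative sketch, not part of the patch: the bound vs. unbound
distinction the patch relies on can be demonstrated with a minimal kernel
module. `demo_fn`, `demo_work`, and the module names are hypothetical; the
workqueue APIs (`queue_work`, `system_wq`, `system_unbound_wq`, `flush_work`)
are the same ones the patch uses. A bound queue like `system_wq` runs the
work on a per-CPU worker of the queueing CPU; an unbound queue lets the
scheduler pick any CPU, which is what allows an idle CPU to stay asleep.]

```c
/* Hypothetical demo module, for illustration only. */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/smp.h>

static void demo_fn(struct work_struct *work)
{
	/* raw_smp_processor_id(): CPU is only advisory here. */
	pr_info("demo work ran on cpu %d\n", raw_smp_processor_id());
}

static DECLARE_WORK(demo_work, demo_fn);

static int __init demo_init(void)
{
	/* Bound: runs on a per-cpu worker of the CPU that queued it. */
	queue_work(system_wq, &demo_work);
	flush_work(&demo_work);

	/* Unbound: the scheduler may run it on any CPU it prefers. */
	queue_work(system_unbound_wq, &demo_work);
	flush_work(&demo_work);
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```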