From patchwork Wed Apr 24 11:42:56 2013
X-Patchwork-Submitter: Viresh Kumar <viresh.kumar@linaro.org>
X-Patchwork-Id: 16376
From: Viresh Kumar <viresh.kumar@linaro.org>
To: tj@kernel.org
Cc: davem@davemloft.net, airlied@redhat.com, axboe@kernel.dk, tglx@linutronix.de,
    peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org,
    linux-rt-users@vger.kernel.org, linux-kernel@vger.kernel.org,
    robin.randhawa@arm.com, Steve.Bannister@arm.com, Liviu.Dudau@arm.com,
    charles.garcia-tobin@arm.com, arvind.chauhan@arm.com,
    linaro-kernel@lists.linaro.org, patches@linaro.org,
    Viresh Kumar <viresh.kumar@linaro.org>
Subject: [PATCH V5 4/5] block: queue work on power efficient wq
Date: Wed, 24 Apr 2013 17:12:56 +0530
Message-Id: <2e89f7f3943438f8b1e58f2034242a5b53e31b43.1366803121.git.viresh.kumar@linaro.org>

The block layer uses workqueues for several purposes, and there is no real
dependency on running that work on the CPU that queued it. On an idle system,
an otherwise idle CPU is observed to wake up many times just to service this
work. It would be better to run it on the CPU the scheduler considers most
appropriate.

This patch replaces the normal workqueues with their power-efficient versions.
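For context, a minimal sketch of the pattern being adopted here, not part of
the patch itself; example_wq, example_work, example_work_fn and example_setup
are made-up names. WQ_POWER_EFFICIENT and system_power_efficient_wq come from
earlier patches in this series; when CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y (or
workqueue.power_efficient=1 on the kernel command line) such queues behave as
unbound queues and the scheduler picks the target CPU, otherwise the flag is a
no-op and behaviour is unchanged:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;	/* made-up name */
static struct work_struct example_work;		/* made-up name */

static void example_work_fn(struct work_struct *work)
{
	/* deferred work with no affinity to the CPU that queued it */
}

static int example_setup(void)
{
	INIT_WORK(&example_work, example_work_fn);

	/* schedule_work() would queue this on the per-CPU system_wq instead */
	queue_work(system_power_efficient_wq, &example_work);

	/* a private queue opts in with the WQ_POWER_EFFICIENT flag */
	example_wq = alloc_workqueue("example", WQ_POWER_EFFICIENT, 0);
	if (!example_wq)
		return -ENOMEM;

	return 0;
}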
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 block/blk-core.c |  3 ++-
 block/blk-ioc.c  |  3 ++-
 block/genhd.c    | 12 ++++++++----
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 94aa4e7..3eb9870 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3187,7 +3187,8 @@ int __init blk_dev_init(void)
 
 	/* used for unplugging and affects IO latency/throughput - HIGHPRI */
 	kblockd_workqueue = alloc_workqueue("kblockd",
-					    WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+					    WQ_MEM_RECLAIM | WQ_HIGHPRI |
+					    WQ_POWER_EFFICIENT, 0);
 	if (!kblockd_workqueue)
 		panic("Failed to create kblockd\n");
 
diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 9c4bb82..4464c82 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -144,7 +144,8 @@ void put_io_context(struct io_context *ioc)
 	if (atomic_long_dec_and_test(&ioc->refcount)) {
 		spin_lock_irqsave(&ioc->lock, flags);
 		if (!hlist_empty(&ioc->icq_list))
-			schedule_work(&ioc->release_work);
+			queue_work(system_power_efficient_wq,
+				   &ioc->release_work);
 		else
 			free_ioc = true;
 		spin_unlock_irqrestore(&ioc->lock, flags);
diff --git a/block/genhd.c b/block/genhd.c
index 5a9f893..2e1cfe3 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1489,9 +1489,11 @@ static void __disk_unblock_events(struct gendisk *disk, bool check_now)
 	intv = disk_events_poll_jiffies(disk);
 	set_timer_slack(&ev->dwork.timer, intv / 4);
 	if (check_now)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, 0);
+		queue_delayed_work(system_freezable_power_efficient_wq,
+				   &ev->dwork, 0);
 	else if (intv)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, intv);
+		queue_delayed_work(system_freezable_power_efficient_wq,
+				   &ev->dwork, intv);
 out_unlock:
 	spin_unlock_irqrestore(&ev->lock, flags);
 }
@@ -1534,7 +1536,8 @@ void disk_flush_events(struct gendisk *disk, unsigned int mask)
 	spin_lock_irq(&ev->lock);
 	ev->clearing |= mask;
 	if (!ev->block)
-		mod_delayed_work(system_freezable_wq, &ev->dwork, 0);
+		mod_delayed_work(system_freezable_power_efficient_wq,
+				 &ev->dwork, 0);
 	spin_unlock_irq(&ev->lock);
 }
 
@@ -1627,7 +1630,8 @@ static void disk_check_events(struct disk_events *ev,
 
 	intv = disk_events_poll_jiffies(disk);
 	if (!ev->block && intv)
-		queue_delayed_work(system_freezable_wq, &ev->dwork, intv);
+		queue_delayed_work(system_freezable_power_efficient_wq,
+				   &ev->dwork, intv);
 
 	spin_unlock_irq(&ev->lock);
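
Usage note, again as a sketch rather than anything taken from the patch
(poll_dwork, poll_fn, poll_start and poll_kick are invented names and the
1000 ms period is arbitrary): the genhd.c hunks use the delayed-work variant
on system_freezable_power_efficient_wq, which is freezable across
suspend/resume as well as power efficient, so periodic disk event polling
neither wakes an otherwise idle CPU nor runs while the system is suspending.

#include <linux/jiffies.h>
#include <linux/workqueue.h>

static struct delayed_work poll_dwork;		/* made-up name */

static void poll_fn(struct work_struct *work)
{
	/* ... periodic polling body ... */

	/* re-arm on the freezable, power-efficient system queue */
	queue_delayed_work(system_freezable_power_efficient_wq,
			   &poll_dwork, msecs_to_jiffies(1000));
}

static void poll_start(void)
{
	INIT_DELAYED_WORK(&poll_dwork, poll_fn);
	queue_delayed_work(system_freezable_power_efficient_wq,
			   &poll_dwork, msecs_to_jiffies(1000));
}

static void poll_kick(void)
{
	/* run the work immediately: mod_delayed_work() adjusts the pending
	 * delay, which is what disk_flush_events() does above */
	mod_delayed_work(system_freezable_power_efficient_wq, &poll_dwork, 0);
}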