From patchwork Mon Sep 13 15:12:18 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509773
From: John Garry
Subject: [PATCH RESEND v3 01/13] blk-mq: Change rqs check in blk_mq_free_rqs()
Date: Mon, 13 Sep 2021 23:12:18 +0800
Message-ID: <1631545950-56586-2-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
References: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

The original code in commit 24d2f90309b23 ("blk-mq: split out tag
initialization, support shared tags") would check tags->rqs is non-NULL and
then dereference tags->rqs[]. Then in commit 2af8cbe30531 ("blk-mq: split
tag ->rqs[] into two"), we started to dereference tags->static_rqs[], but
continued to check non-NULL tags->rqs.

Check tags->static_rqs as non-NULL instead, which is more logical.
Signed-off-by: John Garry
Reviewed-by: Ming Lei
Reviewed-by: Hannes Reinecke
---
 block/blk-mq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 108a352051be..2316ff27c1f5 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2340,7 +2340,7 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 {
	struct page *page;

-	if (tags->rqs && set->ops->exit_request) {
+	if (tags->static_rqs && set->ops->exit_request) {
		int i;

		for (i = 0; i < tags->nr_tags; i++) {
-- 
2.26.2
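For context, the distinction the patch relies on comes from the split of the
tag map into two arrays: tags->static_rqs[] holds the pre-allocated requests
that blk_mq_free_rqs() tears down, while tags->rqs[] is only a lookup table
populated at dispatch time. Below is a minimal sketch of that relationship;
the sketch_* names are hypothetical stand-ins, not the kernel types:

	/*
	 * Illustrative sketch only -- not the kernel implementation. It
	 * shows why the non-NULL check belongs on static_rqs: that is the
	 * array actually dereferenced when tearing down each request.
	 */
	struct sketch_request;

	struct sketch_tags {
		unsigned int nr_tags;
		struct sketch_request **rqs;        /* dispatch-time lookup table */
		struct sketch_request **static_rqs; /* owns the pre-allocated requests */
	};

	static void sketch_free_rqs(struct sketch_tags *tags,
				    void (*exit_request)(struct sketch_request *))
	{
		unsigned int i;

		/* Guard the array we dereference, not its sibling. */
		if (tags->static_rqs && exit_request) {
			for (i = 0; i < tags->nr_tags; i++) {
				if (tags->static_rqs[i])
					exit_request(tags->static_rqs[i]);
			}
		}
	}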
From patchwork Mon Sep 13 15:12:19 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509775
From: John Garry
Subject: [PATCH RESEND v3 02/13] block: Rename BLKDEV_MAX_RQ -> BLKDEV_DEFAULT_RQ
Date: Mon, 13 Sep 2021 23:12:19 +0800
Message-ID: <1631545950-56586-3-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
References: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

It is a bit confusing that there is BLKDEV_MAX_RQ and MAX_SCHED_RQ, as the
name BLKDEV_MAX_RQ would imply that it is always the maximum number of
requests, which it is not.

Rename BLKDEV_MAX_RQ to BLKDEV_DEFAULT_RQ, matching its usage - that being
the default number of requests assigned when allocating a request queue.

Signed-off-by: John Garry
Reviewed-by: Ming Lei
Reviewed-by: Hannes Reinecke
---
 block/blk-core.c       | 2 +-
 block/blk-mq-sched.c   | 2 +-
 block/blk-mq-sched.h   | 2 +-
 drivers/block/rbd.c    | 2 +-
 include/linux/blkdev.h | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 5454db2fa263..5d7137bec48e 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -568,7 +568,7 @@ struct request_queue *blk_alloc_queue(int node_id)
	blk_queue_dma_alignment(q, 511);
	blk_set_default_limits(&q->limits);

-	q->nr_requests = BLKDEV_MAX_RQ;
+	q->nr_requests = BLKDEV_DEFAULT_RQ;

	return q;

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 0f006cabfd91..2231fb0d4c35 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -606,7 +606,7 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
	 * Additionally, this is a per-hw queue depth.
	 */
	q->nr_requests = 2 * min_t(unsigned int, q->tag_set->queue_depth,
-				   BLKDEV_MAX_RQ);
+				   BLKDEV_DEFAULT_RQ);

	queue_for_each_hw_ctx(q, hctx, i) {
		ret = blk_mq_sched_alloc_tags(q, hctx, i);
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 5246ae040704..1e46be6c5178 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -5,7 +5,7 @@
 #include "blk-mq.h"
 #include "blk-mq-tag.h"

-#define MAX_SCHED_RQ (16 * BLKDEV_MAX_RQ)
+#define MAX_SCHED_RQ (16 * BLKDEV_DEFAULT_RQ)

 void blk_mq_sched_assign_ioc(struct request *rq);

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index e65c9d706f6f..bf60aebd0cfb 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -836,7 +836,7 @@ struct rbd_options {
	u32 alloc_hint_flags;  /* CEPH_OSD_OP_ALLOC_HINT_FLAG_* */
 };

-#define RBD_QUEUE_DEPTH_DEFAULT BLKDEV_MAX_RQ
+#define RBD_QUEUE_DEPTH_DEFAULT BLKDEV_DEFAULT_RQ
 #define RBD_ALLOC_SIZE_DEFAULT	(64 * 1024)
 #define RBD_LOCK_TIMEOUT_DEFAULT 0  /* no timeout */
 #define RBD_READ_ONLY_DEFAULT	false
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 12b9dbcc980e..4baf9435232d 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -40,7 +40,7 @@ struct blk_stat_callback;
 struct blk_keyslot_manager;

 #define BLKDEV_MIN_RQ	4
-#define BLKDEV_MAX_RQ	128	/* Default maximum */
+#define BLKDEV_DEFAULT_RQ	128

 /* Must be consistent with blk_mq_poll_stats_bkt() */
 #define BLK_MQ_POLL_STATS_BKTS 16
-- 
2.26.2
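As a worked illustration of where BLKDEV_DEFAULT_RQ acts as a default rather
than a cap, the blk-mq-sched.c hunk above sizes scheduler tags at twice the
smaller of the hardware queue depth and BLKDEV_DEFAULT_RQ, while MAX_SCHED_RQ
(16 * 128 = 2048) is the true upper bound. The helper below is a hypothetical
stand-in for that calculation, not the kernel function:

	#include <stdio.h>

	#define BLKDEV_DEFAULT_RQ 128
	#define MAX_SCHED_RQ (16 * BLKDEV_DEFAULT_RQ)	/* 2048: the real cap */

	/* Hypothetical model of the depth chosen in blk_mq_init_sched(). */
	static unsigned int sched_nr_requests(unsigned int queue_depth)
	{
		unsigned int m = queue_depth < BLKDEV_DEFAULT_RQ ?
				 queue_depth : BLKDEV_DEFAULT_RQ;
		return 2 * m;
	}

	int main(void)
	{
		/* A SATA-like depth of 32 gives 64 scheduler requests... */
		printf("depth 32   -> nr_requests %u\n", sched_nr_requests(32));
		/* ...while any depth >= 128 saturates at 256, well below the cap. */
		printf("depth 1024 -> nr_requests %u (cap %d)\n",
		       sched_nr_requests(1024), MAX_SCHED_RQ);
		return 0;
	}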
From patchwork Mon Sep 13 15:12:20 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509776
From: John Garry
Subject: [PATCH RESEND v3 03/13] blk-mq: Relocate shared sbitmap resize in blk_mq_update_nr_requests()
Date: Mon, 13 Sep 2021 23:12:20 +0800
Message-ID: <1631545950-56586-4-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
References: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

For a shared sbitmap, if the call to blk_mq_tag_update_depth() is successful
for any hctx when hctx->sched_tags is not set, then it will be successful
for all (due to the nature in which blk_mq_tag_update_depth() fails). As
such, there is no need to call blk_mq_tag_resize_shared_sbitmap() for each
hctx.

So relocate the call to after the hctx iteration, under the !q->elevator
check, which is equivalent (to !hctx->sched_tags).
Signed-off-by: John Garry
Reviewed-by: Ming Lei
Reviewed-by: Hannes Reinecke
---
 block/blk-mq.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2316ff27c1f5..1a4bb2db30e5 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3616,8 +3616,6 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
		if (!hctx->sched_tags) {
			ret = blk_mq_tag_update_depth(hctx, &hctx->tags, nr,
							false);
-			if (!ret && blk_mq_is_sbitmap_shared(set->flags))
-				blk_mq_tag_resize_shared_sbitmap(set, nr);
		} else {
			ret = blk_mq_tag_update_depth(hctx, &hctx->sched_tags,
							nr, true);
@@ -3635,9 +3633,13 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
	}
	if (!ret) {
		q->nr_requests = nr;
-		if (q->elevator && blk_mq_is_sbitmap_shared(set->flags))
-			sbitmap_queue_resize(&q->sched_bitmap_tags,
-					     nr - set->reserved_tags);
+		if (blk_mq_is_sbitmap_shared(set->flags)) {
+			if (q->elevator)
+				sbitmap_queue_resize(&q->sched_bitmap_tags,
+						     nr - set->reserved_tags);
+			else
+				blk_mq_tag_resize_shared_sbitmap(set, nr);
+		}
	}

	blk_mq_unquiesce_queue(q);
-- 
2.26.2
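The reasoning above - if the depth update fails for one hctx it fails for
all, so the shared resize only needs to run once - is the classic pattern of
hoisting a loop-invariant operation out of a loop. A minimal sketch of the
shape of the change, with hypothetical sketch_* names rather than the kernel
functions:

	/*
	 * Illustrative only: the shared resize is invariant across the
	 * hctx loop, so it runs once afterwards, gated on the no-elevator
	 * case (equivalent to !hctx->sched_tags).
	 */
	struct sketch_queue {
		int has_elevator;
	};

	static int sketch_update_depth(int nr)
	{
		return nr > 0 ? 0 : -1;	/* same outcome for every hctx */
	}

	static void sketch_resize_shared(int nr)
	{
		(void)nr;		/* stand-in for one shared-sbitmap resize */
	}

	static int sketch_update_nr_requests(struct sketch_queue *q,
					     int nr_hctx, int nr)
	{
		int i, ret = 0;

		for (i = 0; i < nr_hctx; i++) {
			ret = sketch_update_depth(nr);
			if (ret)
				break;
		}
		if (!ret && !q->has_elevator)
			sketch_resize_shared(nr);	/* hoisted out of the loop */
		return ret;
	}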
From patchwork Mon Sep 13 15:12:21 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509777
From: John Garry
Subject: [PATCH RESEND v3 04/13] blk-mq: Invert check in blk_mq_update_nr_requests()
Date: Mon, 13 Sep 2021 23:12:21 +0800
Message-ID: <1631545950-56586-5-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
References: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

It's easier to read:

	if (x)
		X;
	else
		Y;

over:

	if (!x)
		Y;
	else
		X;

No functional change intended.

Signed-off-by: John Garry
Reviewed-by: Ming Lei
Reviewed-by: Hannes Reinecke
---
 block/blk-mq.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 1a4bb2db30e5..47d6ab725bcc 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3613,18 +3613,18 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
		 * If we're using an MQ scheduler, just update the scheduler
		 * queue depth. This is similar to what the old code would do.
		 */
-		if (!hctx->sched_tags) {
-			ret = blk_mq_tag_update_depth(hctx, &hctx->tags, nr,
-							false);
-		} else {
+		if (hctx->sched_tags) {
			ret = blk_mq_tag_update_depth(hctx, &hctx->sched_tags,
-							nr, true);
+						      nr, true);
			if (blk_mq_is_sbitmap_shared(set->flags)) {
				hctx->sched_tags->bitmap_tags =
					&q->sched_bitmap_tags;
				hctx->sched_tags->breserved_tags =
					&q->sched_breserved_tags;
			}
+		} else {
+			ret = blk_mq_tag_update_depth(hctx, &hctx->tags, nr,
+						      false);
		}
		if (ret)
			break;
-- 
2.26.2
From patchwork Mon Sep 13 15:12:22 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509784
From: John Garry
Subject: [PATCH RESEND v3 05/13] blk-mq-sched: Rename blk_mq_sched_alloc_{tags -> map_and_rqs}()
Date: Mon, 13 Sep 2021 23:12:22 +0800
Message-ID: <1631545950-56586-6-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
References: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Function blk_mq_sched_alloc_tags() does the same as
__blk_mq_alloc_map_and_request(), so give it a similar name to be
consistent.

Similarly rename label err_free_tags -> err_free_map_and_rqs.
Signed-off-by: John Garry
Reviewed-by: Ming Lei
---
 block/blk-mq-sched.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 2231fb0d4c35..5f340203e6e5 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -515,9 +515,9 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
	percpu_ref_put(&q->q_usage_counter);
 }

-static int blk_mq_sched_alloc_tags(struct request_queue *q,
-				   struct blk_mq_hw_ctx *hctx,
-				   unsigned int hctx_idx)
+static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q,
+					  struct blk_mq_hw_ctx *hctx,
+					  unsigned int hctx_idx)
 {
	struct blk_mq_tag_set *set = q->tag_set;
	int ret;
@@ -609,15 +609,15 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
				   BLKDEV_DEFAULT_RQ);

	queue_for_each_hw_ctx(q, hctx, i) {
-		ret = blk_mq_sched_alloc_tags(q, hctx, i);
+		ret = blk_mq_sched_alloc_map_and_rqs(q, hctx, i);
		if (ret)
-			goto err_free_tags;
+			goto err_free_map_and_rqs;
	}

	if (blk_mq_is_sbitmap_shared(q->tag_set->flags)) {
		ret = blk_mq_init_sched_shared_sbitmap(q);
		if (ret)
-			goto err_free_tags;
+			goto err_free_map_and_rqs;
	}

	ret = e->ops.init_sched(q, e);
@@ -645,8 +645,8 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 err_free_sbitmap:
	if (blk_mq_is_sbitmap_shared(q->tag_set->flags))
		blk_mq_exit_sched_shared_sbitmap(q);
-err_free_tags:
	blk_mq_sched_free_requests(q);
+err_free_map_and_rqs:
	blk_mq_sched_tags_teardown(q);
	q->elevator = NULL;
	return ret;
-- 
2.26.2
From patchwork Mon Sep 13 15:12:23 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509782
From: John Garry
Subject: [PATCH RESEND v3 06/13] blk-mq-sched: Rename blk_mq_sched_free_{requests -> rqs}()
Date: Mon, 13 Sep 2021 23:12:23 +0800
Message-ID: <1631545950-56586-7-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
References: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

To be more concise and consistent in naming, rename
blk_mq_sched_free_requests() -> blk_mq_sched_free_rqs().
Signed-off-by: John Garry
Reviewed-by: Hannes Reinecke
---
 block/blk-core.c     | 2 +-
 block/blk-mq-sched.c | 6 +++---
 block/blk-mq-sched.h | 2 +-
 block/blk.h          | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 5d7137bec48e..3480df0e958c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -406,7 +406,7 @@ void blk_cleanup_queue(struct request_queue *q)
	 */
	mutex_lock(&q->sysfs_lock);
	if (q->elevator)
-		blk_mq_sched_free_requests(q);
+		blk_mq_sched_free_rqs(q);
	mutex_unlock(&q->sysfs_lock);

	percpu_ref_exit(&q->q_usage_counter);
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 5f340203e6e5..3ab26154f0ea 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -631,7 +631,7 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
			ret = e->ops.init_hctx(hctx, i);
			if (ret) {
				eq = q->elevator;
-				blk_mq_sched_free_requests(q);
+				blk_mq_sched_free_rqs(q);
				blk_mq_exit_sched(q, eq);
				kobject_put(&eq->kobj);
				return ret;
@@ -645,7 +645,7 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 err_free_sbitmap:
	if (blk_mq_is_sbitmap_shared(q->tag_set->flags))
		blk_mq_exit_sched_shared_sbitmap(q);
-	blk_mq_sched_free_requests(q);
+	blk_mq_sched_free_rqs(q);
 err_free_map_and_rqs:
	blk_mq_sched_tags_teardown(q);
	q->elevator = NULL;
@@ -656,7 +656,7 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 * called in either blk_queue_cleanup or elevator_switch, tagset
 * is required for freeing requests
 */
-void blk_mq_sched_free_requests(struct request_queue *q)
+void blk_mq_sched_free_rqs(struct request_queue *q)
 {
	struct blk_mq_hw_ctx *hctx;
	int i;
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 1e46be6c5178..e70748d18754 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -28,7 +28,7 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx);

 int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e);
 void blk_mq_exit_sched(struct request_queue *q, struct elevator_queue *e);
-void blk_mq_sched_free_requests(struct request_queue *q);
+void blk_mq_sched_free_rqs(struct request_queue *q);

 static inline bool
 blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
diff --git a/block/blk.h b/block/blk.h
index 7d2a0ba7ed21..6f125c8cddb0 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -200,7 +200,7 @@ static inline void elevator_exit(struct request_queue *q,
 {
	lockdep_assert_held(&q->sysfs_lock);

-	blk_mq_sched_free_requests(q);
+	blk_mq_sched_free_rqs(q);
	__elevator_exit(q, e);
 }

-- 
2.26.2
From patchwork Mon Sep 13 15:12:24 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509785
From: John Garry
Subject: [PATCH RESEND v3 07/13] blk-mq: Pass driver tags to blk_mq_clear_rq_mapping()
Date: Mon, 13 Sep 2021 23:12:24 +0800
Message-ID: <1631545950-56586-8-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
References: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Function blk_mq_clear_rq_mapping() will be used for shared sbitmap tags in
future, so pass a driver tags pointer instead of the tagset container and
HW queue index.
Signed-off-by: John Garry
Reviewed-by: Hannes Reinecke
---
 block/blk-mq.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 47d6ab725bcc..4bae8afdfbe1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2302,10 +2302,9 @@ static size_t order_to_size(unsigned int order)
 }

 /* called before freeing request pool in @tags */
-static void blk_mq_clear_rq_mapping(struct blk_mq_tag_set *set,
-		struct blk_mq_tags *tags, unsigned int hctx_idx)
+static void blk_mq_clear_rq_mapping(struct blk_mq_tags *drv_tags,
+		struct blk_mq_tags *tags)
 {
-	struct blk_mq_tags *drv_tags = set->tags[hctx_idx];
	struct page *page;
	unsigned long flags;

@@ -2314,7 +2313,7 @@ static void blk_mq_clear_rq_mapping(struct blk_mq_tag_set *set,
		unsigned long end = start + order_to_size(page->private);
		int i;

-		for (i = 0; i < set->queue_depth; i++) {
+		for (i = 0; i < drv_tags->nr_tags; i++) {
			struct request *rq = drv_tags->rqs[i];
			unsigned long rq_addr = (unsigned long)rq;

@@ -2338,8 +2337,11 @@ static void blk_mq_clear_rq_mapping(struct blk_mq_tag_set *set,
 void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
		     unsigned int hctx_idx)
 {
+	struct blk_mq_tags *drv_tags;
	struct page *page;

+	drv_tags = set->tags[hctx_idx];
+
	if (tags->static_rqs && set->ops->exit_request) {
		int i;

@@ -2353,7 +2355,7 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
		}
	}

-	blk_mq_clear_rq_mapping(set, tags, hctx_idx);
+	blk_mq_clear_rq_mapping(drv_tags, tags);

	while (!list_empty(&tags->page_list)) {
		page = list_first_entry(&tags->page_list, struct page, lru);
-- 
2.26.2
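For readers new to this corner of blk-mq: the clearing exists because
drv_tags->rqs[] may still hold pointers into the scheduler request pages
being freed, and a later tag iterator could otherwise chase a dangling
pointer. A simplified sketch of the address-range test the function performs
(illustrative only; locking and the real types are omitted):

	#include <stdint.h>
	#include <stddef.h>

	/*
	 * Simplified model of blk_mq_clear_rq_mapping(): a lookup-table
	 * entry is stale if it points into the page range about to be
	 * freed, so only those entries are cleared.
	 */
	static void sketch_clear_stale(void **rqs, size_t nr_tags,
				       void *page_start, size_t page_len)
	{
		uintptr_t start = (uintptr_t)page_start;
		uintptr_t end = start + page_len;
		size_t i;

		for (i = 0; i < nr_tags; i++) {
			uintptr_t rq_addr = (uintptr_t)rqs[i];

			/* Clear only entries pointing into the freed pages. */
			if (rq_addr >= start && rq_addr < end)
				rqs[i] = NULL;
		}
	}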
From patchwork Mon Sep 13 15:12:25 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509786
From: John Garry
Subject: [PATCH RESEND v3 08/13] blk-mq: Don't clear driver tags own mapping
Date: Mon, 13 Sep 2021 23:12:25 +0800
Message-ID: <1631545950-56586-9-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
References: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Function blk_mq_clear_rq_mapping() is required to clear the sched tags
mappings in driver tags rqs[]. But there is no need for the driver tags to
clear their own mapping, so skip clearing the mapping in this scenario.
Signed-off-by: John Garry
Reviewed-by: Hannes Reinecke
---
 block/blk-mq.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4bae8afdfbe1..5229c5420b85 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2308,6 +2308,10 @@ static void blk_mq_clear_rq_mapping(struct blk_mq_tags *drv_tags,
	struct page *page;
	unsigned long flags;

+	/* There is no need to clear a driver tags own mapping */
+	if (drv_tags == tags)
+		return;
+
	list_for_each_entry(page, &tags->page_list, lru) {
		unsigned long start = (unsigned long)page_address(page);
		unsigned long end = start + order_to_size(page->private);
-- 
2.26.2
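As a usage note: the drv_tags == tags case arises when blk_mq_free_rqs() is
called on the driver tags themselves rather than on a scheduler's tags, in
which case every rqs[] entry lives in the pool being freed anyway. A tiny
sketch of the guard pattern, with hypothetical names:

	/*
	 * Illustrative only: the self-reference guard added by this patch.
	 * When the driver tags are being freed, drv_tags and tags are the
	 * same object, so there is nothing stale left behind to clear.
	 */
	struct sketch_tags;

	static void sketch_clear_rq_mapping(struct sketch_tags *drv_tags,
					    struct sketch_tags *tags)
	{
		if (drv_tags == tags)
			return;	/* freeing our own pool: skip the clearing pass */

		/* ...otherwise walk drv_tags and clear pointers into tags... */
	}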
From patchwork Mon Sep 13 15:12:26 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509778
From: John Garry
Subject: [PATCH RESEND v3 09/13] blk-mq: Add blk_mq_tag_update_sched_shared_sbitmap()
Date: Mon, 13 Sep 2021 23:12:26 +0800
Message-ID: <1631545950-56586-10-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
References: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Put the functionality to update the sched shared sbitmap size in a common
function. Since the same formula is always used to resize, and it can be
derived from the request queue argument, just pass the request queue
pointer.
Signed-off-by: John Garry
Reviewed-by: Ming Lei
---
 block/blk-mq-sched.c | 3 +--
 block/blk-mq-tag.c   | 6 ++++++
 block/blk-mq-tag.h   | 1 +
 block/blk-mq.c       | 3 +--
 4 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 3ab26154f0ea..a3b5a5399bc8 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -575,8 +575,7 @@ static int blk_mq_init_sched_shared_sbitmap(struct request_queue *queue)
			&queue->sched_breserved_tags;
	}

-	sbitmap_queue_resize(&queue->sched_bitmap_tags,
-			     queue->nr_requests - set->reserved_tags);
+	blk_mq_tag_update_sched_shared_sbitmap(queue);

	return 0;

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 86f87346232a..5f06ad6efc8f 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -634,6 +634,12 @@ void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set, unsigned int s
	sbitmap_queue_resize(&set->__bitmap_tags, size - set->reserved_tags);
 }

+void blk_mq_tag_update_sched_shared_sbitmap(struct request_queue *q)
+{
+	sbitmap_queue_resize(&q->sched_bitmap_tags,
+			     q->nr_requests - q->tag_set->reserved_tags);
+}
+
 /**
  * blk_mq_unique_tag() - return a tag that is unique queue-wide
  * @rq: request for which to compute a unique tag
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
index 8ed55af08427..88f3c6485543 100644
--- a/block/blk-mq-tag.h
+++ b/block/blk-mq-tag.h
@@ -48,6 +48,7 @@ extern int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
					unsigned int depth, bool can_grow);
 extern void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set,
					     unsigned int size);
+extern void blk_mq_tag_update_sched_shared_sbitmap(struct request_queue *q);

 extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
 void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5229c5420b85..5fec444d6399 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3641,8 +3641,7 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
		q->nr_requests = nr;
		if (blk_mq_is_sbitmap_shared(set->flags)) {
			if (q->elevator)
-				sbitmap_queue_resize(&q->sched_bitmap_tags,
-						     nr - set->reserved_tags);
+				blk_mq_tag_update_sched_shared_sbitmap(q);
			else
				blk_mq_tag_resize_shared_sbitmap(set, nr);
		}
-- 
2.26.2
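The helper centralizes a single formula: the scheduler's shared sbitmap
depth is the queue's request count minus the tag set's reserved tags. A
small worked example under assumed values (the sketch_* names are
illustrative, not the kernel API):

	#include <stdio.h>

	/* Illustrative only: models the formula the new helper wraps. */
	struct sketch_tag_set {
		unsigned int reserved_tags;
	};

	struct sketch_queue {
		unsigned int nr_requests;
		struct sketch_tag_set *tag_set;
	};

	static unsigned int sched_sbitmap_depth(const struct sketch_queue *q)
	{
		return q->nr_requests - q->tag_set->reserved_tags;
	}

	int main(void)
	{
		struct sketch_tag_set set = { .reserved_tags = 2 };
		struct sketch_queue q = { .nr_requests = 256, .tag_set = &set };

		/* 256 requests with 2 reserved tags -> 254 usable sched tags. */
		printf("sbitmap depth = %u\n", sched_sbitmap_depth(&q));
		return 0;
	}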
From patchwork Mon Sep 13 15:12:27 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509780
From: John Garry
Subject: [PATCH RESEND v3 10/13] blk-mq: Add blk_mq_alloc_map_and_rqs()
Date: Mon, 13 Sep 2021 23:12:27 +0800
Message-ID: <1631545950-56586-11-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
References: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Add a function to combine allocating tags and the associated requests, and
factor out common patterns to use this new function.

Some functions only call blk_mq_alloc_map_and_rqs() now, but more
functionality will be added later to those functions.

Also make blk_mq_alloc_rq_map() and blk_mq_alloc_rqs() static since they
are only used in blk-mq.c, and finally rename some functions for
conciseness and consistency with other function names:
- __blk_mq_alloc_map_and_{request -> rqs}()
- blk_mq_alloc_{map_and_requests -> set_map_and_rqs}()

Suggested-by: Ming Lei
Signed-off-by: John Garry
---
 block/blk-mq-sched.c | 15 +++++++--
 block/blk-mq-tag.c   |  9 +------
 block/blk-mq.c       | 62 +++++++++++++++++++++++++-------------------
 block/blk-mq.h       |  9 ++-----
 4 files changed, 42 insertions(+), 53 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index a3b5a5399bc8..17752f39e144 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -519,21 +519,12 @@ static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q,
				    struct blk_mq_hw_ctx *hctx,
				    unsigned int hctx_idx)
 {
-	struct blk_mq_tag_set *set = q->tag_set;
-	int ret;
+	hctx->sched_tags = blk_mq_alloc_map_and_rqs(q->tag_set, hctx_idx,
+						    q->nr_requests);

-	hctx->sched_tags = blk_mq_alloc_rq_map(set, hctx_idx, q->nr_requests,
-					       set->reserved_tags, set->flags);
	if (!hctx->sched_tags)
		return -ENOMEM;
-
-	ret = blk_mq_alloc_rqs(set, hctx->sched_tags, hctx_idx, q->nr_requests);
-	if (ret) {
-		blk_mq_free_rq_map(hctx->sched_tags, set->flags);
-		hctx->sched_tags = NULL;
-	}
-
-	return ret;
+	return 0;
 }

 /* called in queue's release handler, tagset has gone away */
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 5f06ad6efc8f..d0b5e52be3c8 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -592,7 +592,6 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
	if (tdepth > tags->nr_tags) {
		struct blk_mq_tag_set *set = hctx->queue->tag_set;
		struct blk_mq_tags *new;
-		bool ret;

		if (!can_grow)
			return -EINVAL;
@@ -604,15 +603,9 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
		if (tdepth > MAX_SCHED_RQ)
			return -EINVAL;

-		new = blk_mq_alloc_rq_map(set, hctx->queue_num, tdepth,
-				tags->nr_reserved_tags, set->flags);
+		new = blk_mq_alloc_map_and_rqs(set, hctx->queue_num, tdepth);
		if (!new)
			return -ENOMEM;
-		ret = blk_mq_alloc_rqs(set, new, hctx->queue_num, tdepth);
-		if (ret) {
-			blk_mq_free_rq_map(new, set->flags);
-			return -ENOMEM;
-		}

		blk_mq_free_rqs(set, *tagsptr, hctx->queue_num);
		blk_mq_free_rq_map(*tagsptr, set->flags);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5fec444d6399..46772773b9c4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2383,11 +2383,11 @@ void blk_mq_free_rq_map(struct blk_mq_tags *tags, unsigned int flags)
	blk_mq_free_tags(tags, flags);
 }

-struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
-					unsigned int hctx_idx,
-					unsigned int nr_tags,
-					unsigned int reserved_tags,
-					unsigned int flags)
+static struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
+					       unsigned int hctx_idx,
+					       unsigned int nr_tags,
+					       unsigned int reserved_tags,
+					       unsigned int flags)
 {
	struct blk_mq_tags *tags;
	int node;
@@ -2435,8 +2435,9 @@ static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
	return 0;
 }

-int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
-		     unsigned int hctx_idx, unsigned int depth)
+static int blk_mq_alloc_rqs(struct blk_mq_tag_set *set,
+			    struct blk_mq_tags *tags,
+			    unsigned int hctx_idx, unsigned int depth)
 {
	unsigned int i, j, entries_per_page, max_order = 4;
	size_t rq_size, left;
@@ -2847,25 +2848,34 @@ static void blk_mq_init_cpu_queues(struct request_queue *q,
	}
 }

-static bool __blk_mq_alloc_map_and_request(struct blk_mq_tag_set *set,
int hctx_idx) +struct blk_mq_tags *blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set, + unsigned int hctx_idx, + unsigned int depth) { - unsigned int flags = set->flags; - int ret = 0; + struct blk_mq_tags *tags; + int ret; - set->tags[hctx_idx] = blk_mq_alloc_rq_map(set, hctx_idx, - set->queue_depth, set->reserved_tags, flags); - if (!set->tags[hctx_idx]) - return false; + tags = blk_mq_alloc_rq_map(set, hctx_idx, depth, set->reserved_tags, + set->flags); + if (!tags) + return NULL; - ret = blk_mq_alloc_rqs(set, set->tags[hctx_idx], hctx_idx, - set->queue_depth); - if (!ret) - return true; + ret = blk_mq_alloc_rqs(set, tags, hctx_idx, depth); + if (ret) { + blk_mq_free_rq_map(tags, set->flags); + return NULL; + } - blk_mq_free_rq_map(set->tags[hctx_idx], flags); - set->tags[hctx_idx] = NULL; - return false; + return tags; +} + +static bool __blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set, + int hctx_idx) +{ + set->tags[hctx_idx] = blk_mq_alloc_map_and_rqs(set, hctx_idx, + set->queue_depth); + + return set->tags[hctx_idx]; } static void blk_mq_free_map_and_requests(struct blk_mq_tag_set *set, @@ -2910,7 +2920,7 @@ static void blk_mq_map_swqueue(struct request_queue *q) hctx_idx = set->map[j].mq_map[i]; /* unmapped hw queue can be remapped after CPU topo changed */ if (!set->tags[hctx_idx] && - !__blk_mq_alloc_map_and_request(set, hctx_idx)) { + !__blk_mq_alloc_map_and_rqs(set, hctx_idx)) { /* * If tags initialization fail for some hctx, * that hctx won't be brought online. In this @@ -3343,7 +3353,7 @@ static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set) int i; for (i = 0; i < set->nr_hw_queues; i++) { - if (!__blk_mq_alloc_map_and_request(set, i)) + if (!__blk_mq_alloc_map_and_rqs(set, i)) goto out_unwind; cond_resched(); } @@ -3362,7 +3372,7 @@ static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set) * may reduce the depth asked for, if memory is tight. set->queue_depth * will be updated to reflect the allocated depth. 
 */
-static int blk_mq_alloc_map_and_requests(struct blk_mq_tag_set *set)
+static int blk_mq_alloc_set_map_and_rqs(struct blk_mq_tag_set *set)
 {
 	unsigned int depth;
 	int err;

@@ -3528,7 +3538,7 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
 	if (ret)
 		goto out_free_mq_map;
 
-	ret = blk_mq_alloc_map_and_requests(set);
+	ret = blk_mq_alloc_set_map_and_rqs(set);
 	if (ret)
 		goto out_free_mq_map;

diff --git a/block/blk-mq.h b/block/blk-mq.h
index d08779f77a26..83585a344568 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -55,13 +55,8 @@ void blk_mq_put_rq_ref(struct request *rq);
 void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		     unsigned int hctx_idx);
 void blk_mq_free_rq_map(struct blk_mq_tags *tags, unsigned int flags);
-struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
-					unsigned int hctx_idx,
-					unsigned int nr_tags,
-					unsigned int reserved_tags,
-					unsigned int flags);
-int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
-		     unsigned int hctx_idx, unsigned int depth);
+struct blk_mq_tags *blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set,
+					     unsigned int hctx_idx,
+					     unsigned int depth);
 
 /*
  * Internal helpers for request insertion into sw queues

From patchwork Mon Sep 13 15:12:28 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509779
From: John Garry
Subject: [PATCH RESEND v3 11/13] blk-mq: Refactor and rename blk_mq_free_map_and_{requests->rqs}()
Date: Mon, 13 Sep 2021 23:12:28 +0800
Message-ID: <1631545950-56586-12-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Refactor blk_mq_free_map_and_requests() so that it can be used at the
many sites where the tag map and rqs are freed.

Also rename it to blk_mq_free_map_and_rqs(), which is shorter and
matches the alloc equivalent.
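As a usage sketch, mirroring the callers updated below, freeing a hctx's
map and requests now passes the tags pointer explicitly, with the caller
clearing its own reference:

	blk_mq_free_map_and_rqs(set, set->tags[i], i);
	set->tags[i] = NULL;

Leaving the NULL-ing to the caller is what allows the helper to also be
used for tags pointers that do not live in set->tags[].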
Suggested-by: Ming Lei Signed-off-by: John Garry --- block/blk-mq-tag.c | 3 +-- block/blk-mq.c | 40 ++++++++++++++++++++++++---------------- block/blk-mq.h | 4 +++- 3 files changed, 28 insertions(+), 19 deletions(-) -- 2.26.2 Reviewed-by: Hannes Reinecke diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c index d0b5e52be3c8..fcc33581ce5e 100644 --- a/block/blk-mq-tag.c +++ b/block/blk-mq-tag.c @@ -607,8 +607,7 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx, if (!new) return -ENOMEM; - blk_mq_free_rqs(set, *tagsptr, hctx->queue_num); - blk_mq_free_rq_map(*tagsptr, set->flags); + blk_mq_free_map_and_rqs(set, *tagsptr, hctx->queue_num); *tagsptr = new; } else { /* diff --git a/block/blk-mq.c b/block/blk-mq.c index 46772773b9c4..464ea20b9bcb 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -2878,15 +2878,15 @@ static bool __blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set, return set->tags[hctx_idx]; } -static void blk_mq_free_map_and_requests(struct blk_mq_tag_set *set, - unsigned int hctx_idx) +void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set, + struct blk_mq_tags *tags, + unsigned int hctx_idx) { unsigned int flags = set->flags; - if (set->tags && set->tags[hctx_idx]) { - blk_mq_free_rqs(set, set->tags[hctx_idx], hctx_idx); - blk_mq_free_rq_map(set->tags[hctx_idx], flags); - set->tags[hctx_idx] = NULL; + if (tags) { + blk_mq_free_rqs(set, tags, hctx_idx); + blk_mq_free_rq_map(tags, flags); } } @@ -2967,8 +2967,10 @@ static void blk_mq_map_swqueue(struct request_queue *q) * fallback in case of a new remap fails * allocation */ - if (i && set->tags[i]) - blk_mq_free_map_and_requests(set, i); + if (i && set->tags[i]) { + blk_mq_free_map_and_rqs(set, set->tags[i], i); + set->tags[i] = NULL; + } hctx->tags = NULL; continue; @@ -3264,8 +3266,8 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set, struct blk_mq_hw_ctx *hctx = hctxs[j]; if (hctx) { - if (hctx->tags) - blk_mq_free_map_and_requests(set, j); + blk_mq_free_map_and_rqs(set, set->tags[j], j); + set->tags[j] = NULL; blk_mq_exit_hctx(q, set, hctx, j); hctxs[j] = NULL; } @@ -3361,8 +3363,10 @@ static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set) return 0; out_unwind: - while (--i >= 0) - blk_mq_free_map_and_requests(set, i); + while (--i >= 0) { + blk_mq_free_map_and_rqs(set, set->tags[i], i); + set->tags[i] = NULL; + } return -ENOMEM; } @@ -3557,8 +3561,10 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set) return 0; out_free_mq_rq_maps: - for (i = 0; i < set->nr_hw_queues; i++) - blk_mq_free_map_and_requests(set, i); + for (i = 0; i < set->nr_hw_queues; i++) { + blk_mq_free_map_and_rqs(set, set->tags[i], i); + set->tags[i] = NULL; + } out_free_mq_map: for (i = 0; i < set->nr_maps; i++) { kfree(set->map[i].mq_map); @@ -3590,8 +3596,10 @@ void blk_mq_free_tag_set(struct blk_mq_tag_set *set) { int i, j; - for (i = 0; i < set->nr_hw_queues; i++) - blk_mq_free_map_and_requests(set, i); + for (i = 0; i < set->nr_hw_queues; i++) { + blk_mq_free_map_and_rqs(set, set->tags[i], i); + set->tags[i] = NULL; + } if (blk_mq_is_sbitmap_shared(set->flags)) blk_mq_exit_shared_sbitmap(set); diff --git a/block/blk-mq.h b/block/blk-mq.h index 83585a344568..bcb0ca89d37a 100644 --- a/block/blk-mq.h +++ b/block/blk-mq.h @@ -57,7 +57,9 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags, void blk_mq_free_rq_map(struct blk_mq_tags *tags, unsigned int flags); struct blk_mq_tags *blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set, unsigned int hctx_idx, unsigned int depth); - +void 
blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
+			  struct blk_mq_tags *tags,
+			  unsigned int hctx_idx);
 /*
  * Internal helpers for request insertion into sw queues
  */

From patchwork Mon Sep 13 15:12:29 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509781
From: John Garry
Subject: [PATCH RESEND v3 12/13] blk-mq: Use shared tags for shared sbitmap support
Date: Mon, 13 Sep 2021 23:12:29 +0800
Message-ID: <1631545950-56586-13-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Currently we use separate sbitmap pairs and an active_queues atomic_t
for shared sbitmap support. However, a full set of static requests is
allocated per HW queue, which is quite wasteful, considering that the
total number of requests usable at any given time across all HW queues
is limited by the shared sbitmap depth.

As such, it is considerably more memory efficient in the shared sbitmap
case to allocate a set of static rqs per tag set or request queue,
rather than per HW queue. So replace the sbitmap pairs and
active_queues atomic_t with shared tags per tag set and request queue,
which will hold a set of shared static rqs.

Since there is now no valid HW queue index to be passed to the
blk_mq_ops .init_request and .exit_request callbacks, pass an invalid
index token. This changes the semantics of the APIs, such that the
callback would need to validate the HW queue index before using it.
Currently no user of shared sbitmap actually uses the HW queue index
(as would be expected).

Continue to use the term "shared sbitmap" for now, as the meaning is
known.
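A driver's .init_request/.exit_request implementation would then guard
any per-HW-queue lookup on the index; a minimal sketch, where struct
my_data and my_bind_rq() are hypothetical driver code, not part of this
series:

	static int my_init_request(struct blk_mq_tag_set *set, struct request *rq,
				   unsigned int hctx_idx, unsigned int numa_node)
	{
		struct my_data *d = set->driver_data;

		/*
		 * With shared sbitmap the static rqs are allocated once per
		 * tag set, and hctx_idx is the invalid token
		 * BLK_MQ_NO_HCTX_IDX, so validate it before indexing any
		 * per-HW-queue state.
		 */
		if (hctx_idx != BLK_MQ_NO_HCTX_IDX)
			my_bind_rq(&d->hw_queue[hctx_idx], rq);

		return 0;
	}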
Signed-off-by: John Garry --- block/blk-mq-sched.c | 84 +++++++++++++++++++------------------- block/blk-mq-tag.c | 61 +++++++++------------------- block/blk-mq-tag.h | 6 +-- block/blk-mq.c | 92 ++++++++++++++++++++++++------------------ block/blk-mq.h | 5 ++- include/linux/blk-mq.h | 15 ++++--- include/linux/blkdev.h | 3 +- 7 files changed, 127 insertions(+), 139 deletions(-) -- 2.26.2 diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c index 17752f39e144..428da4949d80 100644 --- a/block/blk-mq-sched.c +++ b/block/blk-mq-sched.c @@ -519,6 +519,11 @@ static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q, struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx) { + if (blk_mq_is_sbitmap_shared(q->tag_set->flags)) { + hctx->sched_tags = q->shared_sbitmap_tags; + return 0; + } + hctx->sched_tags = blk_mq_alloc_map_and_rqs(q->tag_set, hctx_idx, q->nr_requests); @@ -527,61 +532,54 @@ static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q, return 0; } +static void blk_mq_exit_sched_shared_sbitmap(struct request_queue *queue) +{ + blk_mq_free_rq_map(queue->shared_sbitmap_tags); + queue->shared_sbitmap_tags = NULL; +} + /* called in queue's release handler, tagset has gone away */ -static void blk_mq_sched_tags_teardown(struct request_queue *q) +static void blk_mq_sched_tags_teardown(struct request_queue *q, unsigned int flags) { struct blk_mq_hw_ctx *hctx; int i; queue_for_each_hw_ctx(q, hctx, i) { if (hctx->sched_tags) { - blk_mq_free_rq_map(hctx->sched_tags, hctx->flags); + if (!blk_mq_is_sbitmap_shared(q->tag_set->flags)) + blk_mq_free_rq_map(hctx->sched_tags); hctx->sched_tags = NULL; } } + + if (blk_mq_is_sbitmap_shared(flags)) + blk_mq_exit_sched_shared_sbitmap(q); } static int blk_mq_init_sched_shared_sbitmap(struct request_queue *queue) { struct blk_mq_tag_set *set = queue->tag_set; - int alloc_policy = BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags); - struct blk_mq_hw_ctx *hctx; - int ret, i; /* * Set initial depth at max so that we don't need to reallocate for * updating nr_requests. 
*/ - ret = blk_mq_init_bitmaps(&queue->sched_bitmap_tags, - &queue->sched_breserved_tags, - MAX_SCHED_RQ, set->reserved_tags, - set->numa_node, alloc_policy); - if (ret) - return ret; - - queue_for_each_hw_ctx(queue, hctx, i) { - hctx->sched_tags->bitmap_tags = - &queue->sched_bitmap_tags; - hctx->sched_tags->breserved_tags = - &queue->sched_breserved_tags; - } + queue->shared_sbitmap_tags = blk_mq_alloc_map_and_rqs(set, + BLK_MQ_NO_HCTX_IDX, + MAX_SCHED_RQ); + if (!queue->shared_sbitmap_tags) + return -ENOMEM; blk_mq_tag_update_sched_shared_sbitmap(queue); return 0; } -static void blk_mq_exit_sched_shared_sbitmap(struct request_queue *queue) -{ - sbitmap_queue_free(&queue->sched_bitmap_tags); - sbitmap_queue_free(&queue->sched_breserved_tags); -} - int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e) { + unsigned int i, flags = q->tag_set->flags; struct blk_mq_hw_ctx *hctx; struct elevator_queue *eq; - unsigned int i; int ret; if (!e) { @@ -598,21 +596,21 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e) q->nr_requests = 2 * min_t(unsigned int, q->tag_set->queue_depth, BLKDEV_DEFAULT_RQ); - queue_for_each_hw_ctx(q, hctx, i) { - ret = blk_mq_sched_alloc_map_and_rqs(q, hctx, i); + if (blk_mq_is_sbitmap_shared(flags)) { + ret = blk_mq_init_sched_shared_sbitmap(q); if (ret) - goto err_free_map_and_rqs; + return ret; } - if (blk_mq_is_sbitmap_shared(q->tag_set->flags)) { - ret = blk_mq_init_sched_shared_sbitmap(q); + queue_for_each_hw_ctx(q, hctx, i) { + ret = blk_mq_sched_alloc_map_and_rqs(q, hctx, i); if (ret) goto err_free_map_and_rqs; } ret = e->ops.init_sched(q, e); if (ret) - goto err_free_sbitmap; + goto err_free_map_and_rqs; blk_mq_debugfs_register_sched(q); @@ -632,12 +630,10 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e) return 0; -err_free_sbitmap: - if (blk_mq_is_sbitmap_shared(q->tag_set->flags)) - blk_mq_exit_sched_shared_sbitmap(q); - blk_mq_sched_free_rqs(q); err_free_map_and_rqs: - blk_mq_sched_tags_teardown(q); + blk_mq_sched_free_rqs(q); + blk_mq_sched_tags_teardown(q, flags); + q->elevator = NULL; return ret; } @@ -651,9 +647,15 @@ void blk_mq_sched_free_rqs(struct request_queue *q) struct blk_mq_hw_ctx *hctx; int i; - queue_for_each_hw_ctx(q, hctx, i) { - if (hctx->sched_tags) - blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i); + if (blk_mq_is_sbitmap_shared(q->tag_set->flags)) { + blk_mq_free_rqs(q->tag_set, q->shared_sbitmap_tags, + BLK_MQ_NO_HCTX_IDX); + } else { + queue_for_each_hw_ctx(q, hctx, i) { + if (hctx->sched_tags) + blk_mq_free_rqs(q->tag_set, + hctx->sched_tags, i); + } } } @@ -674,8 +676,6 @@ void blk_mq_exit_sched(struct request_queue *q, struct elevator_queue *e) blk_mq_debugfs_unregister_sched(q); if (e->type->ops.exit_sched) e->type->ops.exit_sched(e); - blk_mq_sched_tags_teardown(q); - if (blk_mq_is_sbitmap_shared(flags)) - blk_mq_exit_sched_shared_sbitmap(q); + blk_mq_sched_tags_teardown(q, flags); q->elevator = NULL; } diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c index fcc33581ce5e..29b93ea0ea27 100644 --- a/block/blk-mq-tag.c +++ b/block/blk-mq-tag.c @@ -27,10 +27,11 @@ bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx) if (blk_mq_is_sbitmap_shared(hctx->flags)) { struct request_queue *q = hctx->queue; struct blk_mq_tag_set *set = q->tag_set; + struct blk_mq_tags *tags = set->shared_sbitmap_tags; if (!test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags) && !test_and_set_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags)) - atomic_inc(&set->active_queues_shared_sbitmap); + 
atomic_inc(&tags->active_queues); } else { if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) && !test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state)) @@ -61,10 +62,12 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx) struct blk_mq_tag_set *set = q->tag_set; if (blk_mq_is_sbitmap_shared(hctx->flags)) { + struct blk_mq_tags *tags = set->shared_sbitmap_tags; + if (!test_and_clear_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags)) return; - atomic_dec(&set->active_queues_shared_sbitmap); + atomic_dec(&tags->active_queues); } else { if (!test_and_clear_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state)) return; @@ -510,38 +513,10 @@ static int blk_mq_init_bitmap_tags(struct blk_mq_tags *tags, return 0; } -int blk_mq_init_shared_sbitmap(struct blk_mq_tag_set *set) -{ - int alloc_policy = BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags); - int i, ret; - - ret = blk_mq_init_bitmaps(&set->__bitmap_tags, &set->__breserved_tags, - set->queue_depth, set->reserved_tags, - set->numa_node, alloc_policy); - if (ret) - return ret; - - for (i = 0; i < set->nr_hw_queues; i++) { - struct blk_mq_tags *tags = set->tags[i]; - - tags->bitmap_tags = &set->__bitmap_tags; - tags->breserved_tags = &set->__breserved_tags; - } - - return 0; -} - -void blk_mq_exit_shared_sbitmap(struct blk_mq_tag_set *set) -{ - sbitmap_queue_free(&set->__bitmap_tags); - sbitmap_queue_free(&set->__breserved_tags); -} - struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags, unsigned int reserved_tags, - int node, unsigned int flags) + int node, int alloc_policy) { - int alloc_policy = BLK_MQ_FLAG_TO_ALLOC_POLICY(flags); struct blk_mq_tags *tags; if (total_tags > BLK_MQ_TAG_MAX) { @@ -557,9 +532,6 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags, tags->nr_reserved_tags = reserved_tags; spin_lock_init(&tags->lock); - if (blk_mq_is_sbitmap_shared(flags)) - return tags; - if (blk_mq_init_bitmap_tags(tags, node, alloc_policy) < 0) { kfree(tags); return NULL; @@ -567,12 +539,10 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags, return tags; } -void blk_mq_free_tags(struct blk_mq_tags *tags, unsigned int flags) +void blk_mq_free_tags(struct blk_mq_tags *tags) { - if (!blk_mq_is_sbitmap_shared(flags)) { - sbitmap_queue_free(tags->bitmap_tags); - sbitmap_queue_free(tags->breserved_tags); - } + sbitmap_queue_free(tags->bitmap_tags); + sbitmap_queue_free(tags->breserved_tags); kfree(tags); } @@ -603,6 +573,13 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx, if (tdepth > MAX_SCHED_RQ) return -EINVAL; + /* + * Only the sbitmap needs resizing since we allocated the max + * initially. 
+ */ + if (blk_mq_is_sbitmap_shared(set->flags)) + return 0; + new = blk_mq_alloc_map_and_rqs(set, hctx->queue_num, tdepth); if (!new) return -ENOMEM; @@ -623,12 +600,14 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx, void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set, unsigned int size) { - sbitmap_queue_resize(&set->__bitmap_tags, size - set->reserved_tags); + struct blk_mq_tags *tags = set->shared_sbitmap_tags; + + sbitmap_queue_resize(&tags->__bitmap_tags, size - set->reserved_tags); } void blk_mq_tag_update_sched_shared_sbitmap(struct request_queue *q) { - sbitmap_queue_resize(&q->sched_bitmap_tags, + sbitmap_queue_resize(q->shared_sbitmap_tags->bitmap_tags, q->nr_requests - q->tag_set->reserved_tags); } diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h index 88f3c6485543..e433e39a9cfa 100644 --- a/block/blk-mq-tag.h +++ b/block/blk-mq-tag.h @@ -30,16 +30,14 @@ struct blk_mq_tags { extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags, unsigned int reserved_tags, - int node, unsigned int flags); -extern void blk_mq_free_tags(struct blk_mq_tags *tags, unsigned int flags); + int node, int alloc_policy); +extern void blk_mq_free_tags(struct blk_mq_tags *tags); extern int blk_mq_init_bitmaps(struct sbitmap_queue *bitmap_tags, struct sbitmap_queue *breserved_tags, unsigned int queue_depth, unsigned int reserved, int node, int alloc_policy); -extern int blk_mq_init_shared_sbitmap(struct blk_mq_tag_set *set); -extern void blk_mq_exit_shared_sbitmap(struct blk_mq_tag_set *set); extern unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data); extern void blk_mq_put_tag(struct blk_mq_tags *tags, struct blk_mq_ctx *ctx, unsigned int tag); diff --git a/block/blk-mq.c b/block/blk-mq.c index 464ea20b9bcb..1bb33a402294 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -2344,7 +2344,10 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags, struct blk_mq_tags *drv_tags; struct page *page; - drv_tags = set->tags[hctx_idx]; + if (blk_mq_is_sbitmap_shared(set->flags)) + drv_tags = set->shared_sbitmap_tags; + else + drv_tags = set->tags[hctx_idx]; if (tags->static_rqs && set->ops->exit_request) { int i; @@ -2354,6 +2357,7 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags, if (!rq) continue; + set->ops->exit_request(set, rq, hctx_idx); tags->static_rqs[i] = NULL; } @@ -2373,21 +2377,20 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags, } } -void blk_mq_free_rq_map(struct blk_mq_tags *tags, unsigned int flags) +void blk_mq_free_rq_map(struct blk_mq_tags *tags) { kfree(tags->rqs); tags->rqs = NULL; kfree(tags->static_rqs); tags->static_rqs = NULL; - blk_mq_free_tags(tags, flags); + blk_mq_free_tags(tags); } static struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set, unsigned int hctx_idx, unsigned int nr_tags, - unsigned int reserved_tags, - unsigned int flags) + unsigned int reserved_tags) { struct blk_mq_tags *tags; int node; @@ -2396,7 +2399,8 @@ static struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set, if (node == NUMA_NO_NODE) node = set->numa_node; - tags = blk_mq_init_tags(nr_tags, reserved_tags, node, flags); + tags = blk_mq_init_tags(nr_tags, reserved_tags, node, + BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags)); if (!tags) return NULL; @@ -2404,7 +2408,7 @@ static struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set, GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY, node); if (!tags->rqs) { - blk_mq_free_tags(tags, flags); + blk_mq_free_tags(tags); 
return NULL; } @@ -2413,7 +2417,7 @@ static struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set, node); if (!tags->static_rqs) { kfree(tags->rqs); - blk_mq_free_tags(tags, flags); + blk_mq_free_tags(tags); return NULL; } @@ -2855,14 +2859,13 @@ struct blk_mq_tags *blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags; int ret; - tags = blk_mq_alloc_rq_map(set, hctx_idx, depth, set->reserved_tags, - set->flags); + tags = blk_mq_alloc_rq_map(set, hctx_idx, depth, set->reserved_tags); if (!tags) return NULL; ret = blk_mq_alloc_rqs(set, tags, hctx_idx, depth); if (ret) { - blk_mq_free_rq_map(tags, set->flags); + blk_mq_free_rq_map(tags); return NULL; } @@ -2872,6 +2875,12 @@ struct blk_mq_tags *blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set, static bool __blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set, int hctx_idx) { + if (blk_mq_is_sbitmap_shared(set->flags)) { + set->tags[hctx_idx] = set->shared_sbitmap_tags; + + return true; + } + set->tags[hctx_idx] = blk_mq_alloc_map_and_rqs(set, hctx_idx, set->queue_depth); @@ -2882,14 +2891,22 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags, unsigned int hctx_idx) { - unsigned int flags = set->flags; - if (tags) { blk_mq_free_rqs(set, tags, hctx_idx); - blk_mq_free_rq_map(tags, flags); + blk_mq_free_rq_map(tags); } } +static void __blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set, + struct blk_mq_tags *tags, + unsigned int hctx_idx) +{ + if (blk_mq_is_sbitmap_shared(set->flags)) + return; + + blk_mq_free_map_and_rqs(set, tags, hctx_idx); +} + static void blk_mq_map_swqueue(struct request_queue *q) { unsigned int i, j, hctx_idx; @@ -2968,7 +2985,7 @@ static void blk_mq_map_swqueue(struct request_queue *q) * allocation */ if (i && set->tags[i]) { - blk_mq_free_map_and_rqs(set, set->tags[i], i); + __blk_mq_free_map_and_rqs(set, set->tags[i], i); set->tags[i] = NULL; } @@ -3266,7 +3283,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set, struct blk_mq_hw_ctx *hctx = hctxs[j]; if (hctx) { - blk_mq_free_map_and_rqs(set, set->tags[j], j); + __blk_mq_free_map_and_rqs(set, set->tags[j], j); set->tags[j] = NULL; blk_mq_exit_hctx(q, set, hctx, j); hctxs[j] = NULL; @@ -3354,6 +3371,14 @@ static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set) { int i; + if (blk_mq_is_sbitmap_shared(set->flags)) { + set->shared_sbitmap_tags = blk_mq_alloc_map_and_rqs(set, + BLK_MQ_NO_HCTX_IDX, + set->queue_depth); + if (!set->shared_sbitmap_tags) + return -ENOMEM; + } + for (i = 0; i < set->nr_hw_queues; i++) { if (!__blk_mq_alloc_map_and_rqs(set, i)) goto out_unwind; @@ -3364,10 +3389,15 @@ static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set) out_unwind: while (--i >= 0) { - blk_mq_free_map_and_rqs(set, set->tags[i], i); + __blk_mq_free_map_and_rqs(set, set->tags[i], i); set->tags[i] = NULL; } + if (blk_mq_is_sbitmap_shared(set->flags)) { + blk_mq_free_map_and_rqs(set, set->shared_sbitmap_tags, + BLK_MQ_NO_HCTX_IDX); + } + return -ENOMEM; } @@ -3546,25 +3576,11 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set) if (ret) goto out_free_mq_map; - if (blk_mq_is_sbitmap_shared(set->flags)) { - atomic_set(&set->active_queues_shared_sbitmap, 0); - - if (blk_mq_init_shared_sbitmap(set)) { - ret = -ENOMEM; - goto out_free_mq_rq_maps; - } - } - mutex_init(&set->tag_list_lock); INIT_LIST_HEAD(&set->tag_list); return 0; -out_free_mq_rq_maps: - for (i = 0; i < set->nr_hw_queues; i++) { - blk_mq_free_map_and_rqs(set, set->tags[i], i); - set->tags[i] = NULL; - } 
out_free_mq_map: for (i = 0; i < set->nr_maps; i++) { kfree(set->map[i].mq_map); @@ -3597,12 +3613,14 @@ void blk_mq_free_tag_set(struct blk_mq_tag_set *set) int i, j; for (i = 0; i < set->nr_hw_queues; i++) { - blk_mq_free_map_and_rqs(set, set->tags[i], i); + __blk_mq_free_map_and_rqs(set, set->tags[i], i); set->tags[i] = NULL; } - if (blk_mq_is_sbitmap_shared(set->flags)) - blk_mq_exit_shared_sbitmap(set); + if (blk_mq_is_sbitmap_shared(set->flags)) { + blk_mq_free_map_and_rqs(set, set->shared_sbitmap_tags, + BLK_MQ_NO_HCTX_IDX); + } for (j = 0; j < set->nr_maps; j++) { kfree(set->map[j].mq_map); @@ -3640,12 +3658,6 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr) if (hctx->sched_tags) { ret = blk_mq_tag_update_depth(hctx, &hctx->sched_tags, nr, true); - if (blk_mq_is_sbitmap_shared(set->flags)) { - hctx->sched_tags->bitmap_tags = - &q->sched_bitmap_tags; - hctx->sched_tags->breserved_tags = - &q->sched_breserved_tags; - } } else { ret = blk_mq_tag_update_depth(hctx, &hctx->tags, nr, false); diff --git a/block/blk-mq.h b/block/blk-mq.h index bcb0ca89d37a..b34385211e0a 100644 --- a/block/blk-mq.h +++ b/block/blk-mq.h @@ -54,7 +54,7 @@ void blk_mq_put_rq_ref(struct request *rq); */ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags, unsigned int hctx_idx); -void blk_mq_free_rq_map(struct blk_mq_tags *tags, unsigned int flags); +void blk_mq_free_rq_map(struct blk_mq_tags *tags); struct blk_mq_tags *blk_mq_alloc_map_and_rqs(struct blk_mq_tag_set *set, unsigned int hctx_idx, unsigned int depth); void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set, @@ -331,10 +331,11 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx, if (blk_mq_is_sbitmap_shared(hctx->flags)) { struct request_queue *q = hctx->queue; struct blk_mq_tag_set *set = q->tag_set; + struct blk_mq_tags *tags = set->shared_sbitmap_tags; if (!test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags)) return true; - users = atomic_read(&set->active_queues_shared_sbitmap); + users = atomic_read(&tags->active_queues); } else { if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state)) return true; diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index 13ba1861e688..808854a8ebc4 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -232,13 +232,11 @@ enum hctx_type { * @flags: Zero or more BLK_MQ_F_* flags. * @driver_data: Pointer to data owned by the block driver that created this * tag set. - * @active_queues_shared_sbitmap: - * number of active request queues per tag set. - * @__bitmap_tags: A shared tags sbitmap, used over all hctx's - * @__breserved_tags: - * A shared reserved tags sbitmap, used over all hctx's * @tags: Tag sets. One tag set per hardware queue. Has @nr_hw_queues * elements. + * @shared_sbitmap_tags: + * Shared sbitmap set of tags. Has @nr_hw_queues elements. If + * set, shared by all @tags. * @tag_list_lock: Serializes tag_list accesses. * @tag_list: List of the request queues that use this tag set. See also * request_queue.tag_set_list. 
@@ -255,12 +253,11 @@ struct blk_mq_tag_set {
 	unsigned int		timeout;
 	unsigned int		flags;
 	void			*driver_data;
-	atomic_t		active_queues_shared_sbitmap;
 
-	struct sbitmap_queue	__bitmap_tags;
-	struct sbitmap_queue	__breserved_tags;
 	struct blk_mq_tags	**tags;
 
+	struct blk_mq_tags	*shared_sbitmap_tags;
+
 	struct mutex		tag_list_lock;
 	struct list_head	tag_list;
 };

@@ -432,6 +429,8 @@ enum {
 	((policy & ((1 << BLK_MQ_F_ALLOC_POLICY_BITS) - 1)) \
 		<< BLK_MQ_F_ALLOC_POLICY_START_BIT)
 
+#define BLK_MQ_NO_HCTX_IDX	(-1U)
+
 struct gendisk *__blk_mq_alloc_disk(struct blk_mq_tag_set *set, void *queuedata,
 		struct lock_class_key *lkclass);
 #define blk_mq_alloc_disk(set, queuedata)				\

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 4baf9435232d..17e50e5ef47b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -459,8 +459,7 @@ struct request_queue {
 
 	atomic_t		nr_active_requests_shared_sbitmap;
 
-	struct sbitmap_queue	sched_bitmap_tags;
-	struct sbitmap_queue	sched_breserved_tags;
+	struct blk_mq_tags	*shared_sbitmap_tags;
 
 	struct list_head	icq_list;
 #ifdef CONFIG_BLK_CGROUP

From patchwork Mon Sep 13 15:12:30 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 509783
From: John Garry
Subject: [PATCH RESEND v3 13/13] blk-mq: Stop using pointers for blk_mq_tags bitmap tags
Date: Mon, 13 Sep 2021 23:12:30 +0800
Message-ID: <1631545950-56586-14-git-send-email-john.garry@huawei.com>
In-Reply-To: <1631545950-56586-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Now that we use shared tags for shared sbitmap support, we no longer
require the tags sbitmap pointers, so drop them.

This essentially reverts commit 222a5ae03cdd ("blk-mq: Use pointers for
blk_mq_tags bitmap tags").

Also remove blk_mq_init_bitmap_tags(), since it would now be only a
wrapper for blk_mq_init_bitmaps().
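For users of struct blk_mq_tags the change is mechanical: the
sbitmap_queue is embedded rather than pointed to, so accesses take its
address. Condensed from the blk_mq_tag_update_depth() hunk below:

	/* before: bitmap_tags is a struct sbitmap_queue pointer */
	sbitmap_queue_resize(tags->bitmap_tags, tdepth - tags->nr_reserved_tags);

	/* after: bitmap_tags is embedded, so pass its address */
	sbitmap_queue_resize(&tags->bitmap_tags, tdepth - tags->nr_reserved_tags);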
Reviewed-by: Ming Lei Signed-off-by: John Garry --- block/bfq-iosched.c | 4 +-- block/blk-mq-debugfs.c | 8 +++--- block/blk-mq-tag.c | 56 +++++++++++++++--------------------------- block/blk-mq-tag.h | 7 ++---- block/blk-mq.c | 8 +++--- block/kyber-iosched.c | 4 +-- block/mq-deadline.c | 2 +- 7 files changed, 35 insertions(+), 54 deletions(-) -- 2.26.2 Reviewed-by: Hannes Reinecke diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index dd13c2bbc29c..4674f85d7df0 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -6894,8 +6894,8 @@ static void bfq_depth_updated(struct blk_mq_hw_ctx *hctx) struct blk_mq_tags *tags = hctx->sched_tags; unsigned int min_shallow; - min_shallow = bfq_update_depths(bfqd, tags->bitmap_tags); - sbitmap_queue_min_shallow_depth(tags->bitmap_tags, min_shallow); + min_shallow = bfq_update_depths(bfqd, &tags->bitmap_tags); + sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, min_shallow); } static int bfq_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int index) diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c index 4b66d2776eda..4000376330c9 100644 --- a/block/blk-mq-debugfs.c +++ b/block/blk-mq-debugfs.c @@ -452,11 +452,11 @@ static void blk_mq_debugfs_tags_show(struct seq_file *m, atomic_read(&tags->active_queues)); seq_puts(m, "\nbitmap_tags:\n"); - sbitmap_queue_show(tags->bitmap_tags, m); + sbitmap_queue_show(&tags->bitmap_tags, m); if (tags->nr_reserved_tags) { seq_puts(m, "\nbreserved_tags:\n"); - sbitmap_queue_show(tags->breserved_tags, m); + sbitmap_queue_show(&tags->breserved_tags, m); } } @@ -487,7 +487,7 @@ static int hctx_tags_bitmap_show(void *data, struct seq_file *m) if (res) goto out; if (hctx->tags) - sbitmap_bitmap_show(&hctx->tags->bitmap_tags->sb, m); + sbitmap_bitmap_show(&hctx->tags->bitmap_tags.sb, m); mutex_unlock(&q->sysfs_lock); out: @@ -521,7 +521,7 @@ static int hctx_sched_tags_bitmap_show(void *data, struct seq_file *m) if (res) goto out; if (hctx->sched_tags) - sbitmap_bitmap_show(&hctx->sched_tags->bitmap_tags->sb, m); + sbitmap_bitmap_show(&hctx->sched_tags->bitmap_tags.sb, m); mutex_unlock(&q->sysfs_lock); out: diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c index 29b93ea0ea27..a313e6869639 100644 --- a/block/blk-mq-tag.c +++ b/block/blk-mq-tag.c @@ -46,9 +46,9 @@ bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx) */ void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool include_reserve) { - sbitmap_queue_wake_all(tags->bitmap_tags); + sbitmap_queue_wake_all(&tags->bitmap_tags); if (include_reserve) - sbitmap_queue_wake_all(tags->breserved_tags); + sbitmap_queue_wake_all(&tags->breserved_tags); } /* @@ -104,10 +104,10 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data) WARN_ON_ONCE(1); return BLK_MQ_NO_TAG; } - bt = tags->breserved_tags; + bt = &tags->breserved_tags; tag_offset = 0; } else { - bt = tags->bitmap_tags; + bt = &tags->bitmap_tags; tag_offset = tags->nr_reserved_tags; } @@ -153,9 +153,9 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data) data->ctx); tags = blk_mq_tags_from_data(data); if (data->flags & BLK_MQ_REQ_RESERVED) - bt = tags->breserved_tags; + bt = &tags->breserved_tags; else - bt = tags->bitmap_tags; + bt = &tags->bitmap_tags; /* * If destination hw queue is changed, fake wake up on @@ -189,10 +189,10 @@ void blk_mq_put_tag(struct blk_mq_tags *tags, struct blk_mq_ctx *ctx, const int real_tag = tag - tags->nr_reserved_tags; BUG_ON(real_tag >= tags->nr_tags); - sbitmap_queue_clear(tags->bitmap_tags, real_tag, ctx->cpu); + 
sbitmap_queue_clear(&tags->bitmap_tags, real_tag, ctx->cpu); } else { BUG_ON(tag >= tags->nr_reserved_tags); - sbitmap_queue_clear(tags->breserved_tags, tag, ctx->cpu); + sbitmap_queue_clear(&tags->breserved_tags, tag, ctx->cpu); } } @@ -343,9 +343,9 @@ static void __blk_mq_all_tag_iter(struct blk_mq_tags *tags, WARN_ON_ONCE(flags & BT_TAG_ITER_RESERVED); if (tags->nr_reserved_tags) - bt_tags_for_each(tags, tags->breserved_tags, fn, priv, + bt_tags_for_each(tags, &tags->breserved_tags, fn, priv, flags | BT_TAG_ITER_RESERVED); - bt_tags_for_each(tags, tags->bitmap_tags, fn, priv, flags); + bt_tags_for_each(tags, &tags->bitmap_tags, fn, priv, flags); } /** @@ -462,8 +462,8 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn, continue; if (tags->nr_reserved_tags) - bt_for_each(hctx, tags->breserved_tags, fn, priv, true); - bt_for_each(hctx, tags->bitmap_tags, fn, priv, false); + bt_for_each(hctx, &tags->breserved_tags, fn, priv, true); + bt_for_each(hctx, &tags->bitmap_tags, fn, priv, false); } blk_queue_exit(q); } @@ -495,24 +495,6 @@ int blk_mq_init_bitmaps(struct sbitmap_queue *bitmap_tags, return -ENOMEM; } -static int blk_mq_init_bitmap_tags(struct blk_mq_tags *tags, - int node, int alloc_policy) -{ - int ret; - - ret = blk_mq_init_bitmaps(&tags->__bitmap_tags, - &tags->__breserved_tags, - tags->nr_tags, tags->nr_reserved_tags, - node, alloc_policy); - if (ret) - return ret; - - tags->bitmap_tags = &tags->__bitmap_tags; - tags->breserved_tags = &tags->__breserved_tags; - - return 0; -} - struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags, unsigned int reserved_tags, int node, int alloc_policy) @@ -532,7 +514,9 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags, tags->nr_reserved_tags = reserved_tags; spin_lock_init(&tags->lock); - if (blk_mq_init_bitmap_tags(tags, node, alloc_policy) < 0) { + if (blk_mq_init_bitmaps(&tags->bitmap_tags, &tags->breserved_tags, + total_tags, reserved_tags, node, + alloc_policy) < 0) { kfree(tags); return NULL; } @@ -541,8 +525,8 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags, void blk_mq_free_tags(struct blk_mq_tags *tags) { - sbitmap_queue_free(tags->bitmap_tags); - sbitmap_queue_free(tags->breserved_tags); + sbitmap_queue_free(&tags->bitmap_tags); + sbitmap_queue_free(&tags->breserved_tags); kfree(tags); } @@ -591,7 +575,7 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx, * Don't need (or can't) update reserved tags here, they * remain static and should never need resizing. 
*/ - sbitmap_queue_resize(tags->bitmap_tags, + sbitmap_queue_resize(&tags->bitmap_tags, tdepth - tags->nr_reserved_tags); } @@ -602,12 +586,12 @@ void blk_mq_tag_resize_shared_sbitmap(struct blk_mq_tag_set *set, unsigned int s { struct blk_mq_tags *tags = set->shared_sbitmap_tags; - sbitmap_queue_resize(&tags->__bitmap_tags, size - set->reserved_tags); + sbitmap_queue_resize(&tags->bitmap_tags, size - set->reserved_tags); } void blk_mq_tag_update_sched_shared_sbitmap(struct request_queue *q) { - sbitmap_queue_resize(q->shared_sbitmap_tags->bitmap_tags, + sbitmap_queue_resize(&q->shared_sbitmap_tags->bitmap_tags, q->nr_requests - q->tag_set->reserved_tags); } diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h index e433e39a9cfa..23747ea2bb53 100644 --- a/block/blk-mq-tag.h +++ b/block/blk-mq-tag.h @@ -11,11 +11,8 @@ struct blk_mq_tags { atomic_t active_queues; - struct sbitmap_queue *bitmap_tags; - struct sbitmap_queue *breserved_tags; - - struct sbitmap_queue __bitmap_tags; - struct sbitmap_queue __breserved_tags; + struct sbitmap_queue bitmap_tags; + struct sbitmap_queue breserved_tags; struct request **rqs; struct request **static_rqs; diff --git a/block/blk-mq.c b/block/blk-mq.c index 1bb33a402294..13812b64ed99 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -1062,14 +1062,14 @@ static inline unsigned int queued_to_index(unsigned int queued) static bool __blk_mq_get_driver_tag(struct request *rq) { - struct sbitmap_queue *bt = rq->mq_hctx->tags->bitmap_tags; + struct sbitmap_queue *bt = &rq->mq_hctx->tags->bitmap_tags; unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags; int tag; blk_mq_tag_busy(rq->mq_hctx); if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) { - bt = rq->mq_hctx->tags->breserved_tags; + bt = &rq->mq_hctx->tags->breserved_tags; tag_offset = 0; } else { if (!hctx_may_queue(rq->mq_hctx, bt)) @@ -1112,7 +1112,7 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode, struct sbitmap_queue *sbq; list_del_init(&wait->entry); - sbq = hctx->tags->bitmap_tags; + sbq = &hctx->tags->bitmap_tags; atomic_dec(&sbq->ws_active); } spin_unlock(&hctx->dispatch_wait_lock); @@ -1130,7 +1130,7 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode, static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx, struct request *rq) { - struct sbitmap_queue *sbq = hctx->tags->bitmap_tags; + struct sbitmap_queue *sbq = &hctx->tags->bitmap_tags; struct wait_queue_head *wq; wait_queue_entry_t *wait; bool ret; diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c index 15a8be57203d..9fb735bf1134 100644 --- a/block/kyber-iosched.c +++ b/block/kyber-iosched.c @@ -451,11 +451,11 @@ static void kyber_depth_updated(struct blk_mq_hw_ctx *hctx) { struct kyber_queue_data *kqd = hctx->queue->elevator->elevator_data; struct blk_mq_tags *tags = hctx->sched_tags; - unsigned int shift = tags->bitmap_tags->sb.shift; + unsigned int shift = tags->bitmap_tags.sb.shift; kqd->async_depth = (1U << shift) * KYBER_ASYNC_PERCENT / 100U; - sbitmap_queue_min_shallow_depth(tags->bitmap_tags, kqd->async_depth); + sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, kqd->async_depth); } static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx) diff --git a/block/mq-deadline.c b/block/mq-deadline.c index 7f3c3932b723..7fd07d00838e 100644 --- a/block/mq-deadline.c +++ b/block/mq-deadline.c @@ -519,7 +519,7 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx) dd->async_depth = max(1UL, 3 * q->nr_requests / 4); - 
sbitmap_queue_min_shallow_depth(tags->bitmap_tags, dd->async_depth); + sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth); } /* Called by blk_mq_init_hctx() and blk_mq_init_sched(). */
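An elevator's depth_updated hook follows the same pattern as the kyber
and mq-deadline hunks above; a minimal sketch, where my_depth_updated()
and the three-quarters heuristic are illustrative only:

	static void my_depth_updated(struct blk_mq_hw_ctx *hctx)
	{
		struct blk_mq_tags *tags = hctx->sched_tags;

		/*
		 * bitmap_tags is now embedded in blk_mq_tags, so pass its
		 * address rather than a stored pointer.
		 */
		sbitmap_queue_min_shallow_depth(&tags->bitmap_tags,
				max_t(unsigned int, 1, 3 * tags->nr_tags / 4));
	}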