From patchwork Fri Jul 9 08:09:56 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 472190
From: Ming Lei
To: Jens Axboe, Christoph Hellwig, Martin K. Petersen,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org
Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei
Subject: [PATCH V3 01/10] blk-mq: rename blk-mq-cpumap.c as blk-mq-map.c
Date: Fri, 9 Jul 2021 16:09:56 +0800
Message-Id: <20210709081005.421340-2-ming.lei@redhat.com>
In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com>
References: <20210709081005.421340-1-ming.lei@redhat.com>

First, the name "cpumap" isn't very descriptive, because all of the map
helpers (pci, rdma, virtio) map CPUs to hw queues. Second, this prepares for
moving physical-device-related mapping into its own subsystems: all
map-related functions/helpers will be put into this renamed source file.
Signed-off-by: Ming Lei
---
 block/Makefile                          | 2 +-
 block/{blk-mq-cpumap.c => blk-mq-map.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename block/{blk-mq-cpumap.c => blk-mq-map.c} (100%)

diff --git a/block/Makefile b/block/Makefile
index bfbe4e13ca1e..0f31c7e8a475 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -7,7 +7,7 @@ obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-sysfs.o \
 			blk-flush.o blk-settings.o blk-ioc.o blk-map.o \
 			blk-exec.o blk-merge.o blk-timeout.o \
 			blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \
-			blk-mq-sysfs.o blk-mq-cpumap.o blk-mq-sched.o ioctl.o \
+			blk-mq-sysfs.o blk-mq-map.o blk-mq-sched.o ioctl.o \
 			genhd.o ioprio.o badblocks.o partitions/ blk-rq-qos.o \
 			disk-events.o
diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-map.c
similarity index 100%
rename from block/blk-mq-cpumap.c
rename to block/blk-mq-map.c
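
As an aside (illustration only, not part of any patch here): the mapping these
helpers build is simply a per-CPU array of hw queue indexes inside struct
blk_mq_queue_map. A minimal sketch of the lookup blk-mq performs with it; the
example_* name is made up, and blk_mq_map_queue_type() in the kernel does
essentially this:

#include <linux/blk-mq.h>

/*
 * Illustration only: qmap->mq_map[] is indexed by CPU and holds the hw queue
 * index that the map_queues helpers computed for that CPU.
 */
static inline unsigned int example_cpu_to_hw_queue(struct blk_mq_queue_map *qmap,
						   unsigned int cpu)
{
	return qmap->mq_map[cpu];
}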
Petersen" , linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org Cc: Sagi Grimberg , Daniel Wagner , Wen Xiong , John Garry , Hannes Reinecke , Keith Busch , Damien Le Moal , Ming Lei Subject: [PATCH V3 03/10] blk-mq: pass use managed irq info to blk_mq_dev_map_queues Date: Fri, 9 Jul 2021 16:09:58 +0800 Message-Id: <20210709081005.421340-4-ming.lei@redhat.com> In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com> References: <20210709081005.421340-1-ming.lei@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Managed irq is special because genirq core will shut down it when all cpus in its affinity mask are offline, so blk-mq has to drain requests and prevent new allocation on the hw queue before its managed irq is shutdown. In current implementation, we drain all hctx when the last cpu in hctx->cpumask is going to be offline. However, we need to avoid the draining of hw queues which don't use managed irq, one kind of user is nvme fc/rdma/tcp because these controllers require to submit connection request successfully even though all cpus in hctx->cpumask are offline. And we have lots of kernel panic reports on blk_mq_alloc_request_hctx(). Once we know if one qmap uses managed irq or not, we needn't to drain requests for hctx which doesn't use managed irq, and we can allow to allocate request on hctx in which all CPUs in hctx->cpumask are offline, then not only fix kernel panic in blk_mq_alloc_request_hctx(), but also meet nvme fc/rdma/tcp's requirement. Signed-off-by: Ming Lei --- block/blk-mq-map.c | 6 +++++- include/linux/blk-mq.h | 5 +++-- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/block/blk-mq-map.c b/block/blk-mq-map.c index e3ba2ef1e9e2..6b453f8d7965 100644 --- a/block/blk-mq-map.c +++ b/block/blk-mq-map.c @@ -103,6 +103,8 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index) * @dev_data: Device data passed to get_queue_affinity() * @fallback: If true, fallback to default blk-mq mapping in case of * any failure + * @managed_irq: If driver is likely to use managed irq, pass @managed_irq + * as true. * * Generic function to setup each queue mapping in @qmap. 
It will query * each queue's affinity via @get_queue_affinity and built queue mapping @@ -113,7 +115,7 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index) */ int blk_mq_dev_map_queues(struct blk_mq_queue_map *qmap, void *dev_data, int dev_off, get_queue_affinty_fn *get_queue_affinity, - bool fallback) + bool fallback, bool managed_irq) { const struct cpumask *mask; unsigned int queue, cpu; @@ -136,6 +138,8 @@ int blk_mq_dev_map_queues(struct blk_mq_queue_map *qmap, void *dev_data, qmap->mq_map[cpu] = qmap->queue_offset + queue; } + qmap->use_managed_irq = managed_irq; + return 0; fallback: diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index b6090d691594..a2cd85ac0354 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -192,7 +192,8 @@ struct blk_mq_hw_ctx { struct blk_mq_queue_map { unsigned int *mq_map; unsigned int nr_queues; - unsigned int queue_offset; + unsigned int queue_offset:31; + unsigned int use_managed_irq:1; }; /** @@ -558,7 +559,7 @@ typedef const struct cpumask * (get_queue_affinty_fn)(void *dev_data, int blk_mq_map_queues(struct blk_mq_queue_map *qmap); int blk_mq_dev_map_queues(struct blk_mq_queue_map *qmap, void *dev_data, int dev_off, get_queue_affinty_fn *get_queue_affinity, - bool fallback); + bool fallback, bool managed_irq); void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues); void blk_mq_quiesce_queue_nowait(struct request_queue *q); From patchwork Fri Jul 9 08:10:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 472188 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9F3FCC07E99 for ; Fri, 9 Jul 2021 08:11:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 85A78613C3 for ; Fri, 9 Jul 2021 08:11:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231452AbhGIINy (ORCPT ); Fri, 9 Jul 2021 04:13:54 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]:59339 "EHLO us-smtp-delivery-124.mimecast.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231382AbhGIINx (ORCPT ); Fri, 9 Jul 2021 04:13:53 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1625818270; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Jm4OK+kJW+IbN57HQi1K1eGMDSFWIwNIJiAc3mqthkg=; b=B8yxGxDRpztFHt1NKRr1jnxOqvTpt+6WaghgLhuF7Wz/PiIXCqzqvetCy06hOUqZPbJ7ND //VXm93Lbok2z8bN0g6DH3Mq6Rx+7JxahFYrR9eWhVZZ6CZQFNI+S649hcoVt3+gImOsas NOh7q62r6/orvrgXePTcgJ+efyJvEJg= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-302-wEX9Y4jMMsSWbad_rSB2pQ-1; Fri, 09 Jul 2021 04:11:08 -0400 X-MC-Unique: wEX9Y4jMMsSWbad_rSB2pQ-1 Received: from 
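
As an aside (illustration, not code from this series): the use_managed_irq
flag introduced above is what the hctx CPU-offline path can consult to decide
whether a hw queue actually needs draining. A minimal sketch, assuming the
qmap is reached via hctx->queue->tag_set and with the helper name invented:

#include <linux/blk-mq.h>

/*
 * Sketch only: a hw queue that is not backed by a managed irq keeps working
 * after the last CPU in hctx->cpumask goes offline, so there is nothing to
 * drain in that case.
 */
static bool example_hctx_needs_offline_drain(struct blk_mq_hw_ctx *hctx)
{
	struct blk_mq_queue_map *qmap =
		&hctx->queue->tag_set->map[hctx->type];

	return qmap->use_managed_irq;
}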

From patchwork Fri Jul 9 08:10:00 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 472188
From: Ming Lei
To: Jens Axboe, Christoph Hellwig, Martin K. Petersen,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org
Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei
Subject: [PATCH V3 05/10] nvme: replace blk_mq_pci_map_queues with blk_mq_dev_map_queues
Date: Fri, 9 Jul 2021 16:10:00 +0800
Message-Id: <20210709081005.421340-6-ming.lei@redhat.com>
In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com>
References: <20210709081005.421340-1-ming.lei@redhat.com>

Replace blk_mq_pci_map_queues with blk_mq_dev_map_queues, which is more
generic from the blk-mq viewpoint, so that all queue mapping can be unified
via blk_mq_dev_map_queues(). Meanwhile we can pass the 'use_managed_irq' info
to blk-mq via blk_mq_dev_map_queues(); this info needn't be 100% accurate,
what matters is that true is passed whenever the HBA really uses managed
irqs.

Signed-off-by: Ming Lei
---
 drivers/nvme/host/pci.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index d3c5086673bc..d16ba661560d 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -433,6 +433,14 @@ static int nvme_init_request(struct blk_mq_tag_set *set, struct request *req,
 	return 0;
 }
 
+static const struct cpumask *nvme_pci_get_queue_affinity(
+		void *dev_data, int offset, int queue)
+{
+	struct pci_dev *pdev = dev_data;
+
+	return pci_irq_get_affinity(pdev, offset + queue);
+}
+
 static int queue_irq_offset(struct nvme_dev *dev)
 {
 	/* if we have more than 1 vec, admin queue offsets us by 1 */
@@ -463,7 +471,9 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
 		 */
 		map->queue_offset = qoff;
 		if (i != HCTX_TYPE_POLL && offset)
-			blk_mq_pci_map_queues(map, to_pci_dev(dev->dev), offset);
+			blk_mq_dev_map_queues(map, to_pci_dev(dev->dev), offset,
+					      nvme_pci_get_queue_affinity, false,
+					      true);
 		else
 			blk_mq_map_queues(map);
 		qoff += map->nr_queues;

From patchwork Fri Jul 9 08:10:02 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 472187
From: Ming Lei
To: Jens Axboe, Christoph Hellwig, Martin K. Petersen,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org
Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei
Subject: [PATCH V3 07/10] virtio: blk/scsi: replace blk_mq_virtio_map_queues with blk_mq_dev_map_queues
Date: Fri, 9 Jul 2021 16:10:02 +0800
Message-Id: <20210709081005.421340-8-ming.lei@redhat.com>
In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com>
References: <20210709081005.421340-1-ming.lei@redhat.com>

Replace blk_mq_virtio_map_queues with blk_mq_dev_map_queues, which is more
generic from the blk-mq viewpoint, so that all queue map implementations can
be unified. Meanwhile we can pass the 'use_managed_irq' info to blk-mq via
blk_mq_dev_map_queues(); this info needn't be 100% accurate, what matters is
that true is passed whenever the HBA really uses managed irqs.
Signed-off-by: Ming Lei
---
 drivers/block/virtio_blk.c | 12 ++++++++++--
 drivers/scsi/virtio_scsi.c | 11 ++++++++++-
 2 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index e4bd3b1fc3c2..9188b5bcbe78 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -677,12 +677,20 @@ static int virtblk_init_request(struct blk_mq_tag_set *set, struct request *rq,
 	return 0;
 }
 
+static const struct cpumask *virtblk_get_vq_affinity(void *dev_data,
+		int offset, int queue)
+{
+	struct virtio_device *vdev = dev_data;
+
+	return virtio_get_vq_affinity(vdev, offset + queue);
+}
+
 static int virtblk_map_queues(struct blk_mq_tag_set *set)
 {
 	struct virtio_blk *vblk = set->driver_data;
 
-	return blk_mq_virtio_map_queues(&set->map[HCTX_TYPE_DEFAULT],
-					vblk->vdev, 0);
+	return blk_mq_dev_map_queues(&set->map[HCTX_TYPE_DEFAULT], vblk->vdev,
+				     0, virtblk_get_vq_affinity, true, true);
 }
 
 static const struct blk_mq_ops virtio_mq_ops = {
diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index fd69a03d6137..c4b97a0926df 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -712,12 +712,21 @@ static int virtscsi_abort(struct scsi_cmnd *sc)
 	return virtscsi_tmf(vscsi, cmd);
 }
 
+static const struct cpumask *virtscsi_get_vq_affinity(void *dev_data,
+		int offset, int queue)
+{
+	struct virtio_device *vdev = dev_data;
+
+	return virtio_get_vq_affinity(vdev, offset + queue);
+}
+
 static int virtscsi_map_queues(struct Scsi_Host *shost)
 {
 	struct virtio_scsi *vscsi = shost_priv(shost);
 	struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
 
-	return blk_mq_virtio_map_queues(qmap, vscsi->vdev, 2);
+	return blk_mq_dev_map_queues(qmap, vscsi->vdev, 2,
+				     virtscsi_get_vq_affinity, true, true);
 }
 
 static void virtscsi_commit_rqs(struct Scsi_Host *shost, u16 hwq)
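
Every call site converted in this series passes managed_irq as true. For the
other case mentioned in the commit messages, here is a sketch of a driver
whose vectors are not managed; all foo_* names are hypothetical, only
blk_mq_dev_map_queues() and its prototype come from this series:

#include <linux/blk-mq.h>
#include <linux/cpumask.h>

struct foo_device;						/* hypothetical */
const struct cpumask *foo_irq_affinity(struct foo_device *fdev,
				       int vec);		/* hypothetical */

static const struct cpumask *foo_get_queue_affinity(void *dev_data,
		int offset, int queue)
{
	struct foo_device *fdev = dev_data;

	return foo_irq_affinity(fdev, offset + queue);
}

static int foo_map_queues(struct blk_mq_tag_set *set)
{
	struct foo_device *fdev = set->driver_data;

	/*
	 * fallback=true: fall back to the default CPU spread on failure;
	 * managed_irq=false: these vectors aren't managed, so blk-mq needn't
	 * drain the hctx when all CPUs in its mask go offline.
	 */
	return blk_mq_dev_map_queues(&set->map[HCTX_TYPE_DEFAULT], fdev, 0,
				     foo_get_queue_affinity, true, false);
}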

From patchwork Fri Jul 9 08:10:04 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 472186
From: Ming Lei
To: Jens Axboe, Christoph Hellwig, Martin K. Petersen,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org
Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei
Subject: [PATCH V3 09/10] blk-mq: remove map queue helpers for pci, rdma and virtio
Date: Fri, 9 Jul 2021 16:10:04 +0800
Message-Id: <20210709081005.421340-10-ming.lei@redhat.com>
In-Reply-To: <20210709081005.421340-1-ming.lei@redhat.com>
References: <20210709081005.421340-1-ming.lei@redhat.com>

Now that everything has switched to blk_mq_dev_map_queues(), remove these
helpers and their source files.

Signed-off-by: Ming Lei
---
 block/Makefile        |  3 ---
 block/blk-mq-pci.c    | 48 -------------------------------------------
 block/blk-mq-rdma.c   | 44 ---------------------------------------
 block/blk-mq-virtio.c | 46 -----------------------------------------
 4 files changed, 141 deletions(-)
 delete mode 100644 block/blk-mq-pci.c
 delete mode 100644 block/blk-mq-rdma.c
 delete mode 100644 block/blk-mq-virtio.c

diff --git a/block/Makefile b/block/Makefile
index 0f31c7e8a475..9437518a16ae 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -31,9 +31,6 @@ obj-$(CONFIG_IOSCHED_BFQ)	+= bfq.o
 obj-$(CONFIG_BLK_CMDLINE_PARSER)	+= cmdline-parser.o
 obj-$(CONFIG_BLK_DEV_INTEGRITY) += bio-integrity.o blk-integrity.o
 obj-$(CONFIG_BLK_DEV_INTEGRITY_T10)	+= t10-pi.o
-obj-$(CONFIG_BLK_MQ_PCI)	+= blk-mq-pci.o
-obj-$(CONFIG_BLK_MQ_VIRTIO)	+= blk-mq-virtio.o
-obj-$(CONFIG_BLK_MQ_RDMA)	+= blk-mq-rdma.o
 obj-$(CONFIG_BLK_DEV_ZONED)	+= blk-zoned.o
 obj-$(CONFIG_BLK_WBT)	+= blk-wbt.o
 obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c
deleted file mode 100644
index b595a94c4d16..000000000000
--- a/block/blk-mq-pci.c
+++ /dev/null
@@ -1,48 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (c) 2016 Christoph Hellwig.
- */
-#include
-#include
-#include
-#include
-#include
-#include
-
-#include "blk-mq.h"
-
-/**
- * blk_mq_pci_map_queues - provide a default queue mapping for PCI device
- * @qmap:	CPU to hardware queue map.
- * @pdev:	PCI device associated with @set.
- * @offset:	Offset to use for the pci irq vector
- *
- * This function assumes the PCI device @pdev has at least as many available
- * interrupt vectors as @set has queues.  It will then query the vector
- * corresponding to each queue for it's affinity mask and built queue mapping
- * that maps a queue to the CPUs that have irq affinity for the corresponding
- * vector.
- */
-int blk_mq_pci_map_queues(struct blk_mq_queue_map *qmap, struct pci_dev *pdev,
-			  int offset)
-{
-	const struct cpumask *mask;
-	unsigned int queue, cpu;
-
-	for (queue = 0; queue < qmap->nr_queues; queue++) {
-		mask = pci_irq_get_affinity(pdev, queue + offset);
-		if (!mask)
-			goto fallback;
-
-		for_each_cpu(cpu, mask)
-			qmap->mq_map[cpu] = qmap->queue_offset + queue;
-	}
-
-	return 0;
-
-fallback:
-	WARN_ON_ONCE(qmap->nr_queues > 1);
-	blk_mq_clear_mq_map(qmap);
-	return 0;
-}
-EXPORT_SYMBOL_GPL(blk_mq_pci_map_queues);
diff --git a/block/blk-mq-rdma.c b/block/blk-mq-rdma.c
deleted file mode 100644
index 14f968e58b8f..000000000000
--- a/block/blk-mq-rdma.c
+++ /dev/null
@@ -1,44 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (c) 2017 Sagi Grimberg.
- */
-#include
-#include
-#include
-
-/**
- * blk_mq_rdma_map_queues - provide a default queue mapping for rdma device
- * @map:	CPU to hardware queue map.
- * @dev:	rdma device to provide a mapping for.
- * @first_vec:	first interrupt vectors to use for queues (usually 0)
- *
- * This function assumes the rdma device @dev has at least as many available
- * interrupt vetors as @set has queues. It will then query it's affinity mask
- * and built queue mapping that maps a queue to the CPUs that have irq affinity
- * for the corresponding vector.
- *
- * In case either the driver passed a @dev with less vectors than
- * @set->nr_hw_queues, or @dev does not provide an affinity mask for a
- * vector, we fallback to the naive mapping.
- */
-int blk_mq_rdma_map_queues(struct blk_mq_queue_map *map,
-		struct ib_device *dev, int first_vec)
-{
-	const struct cpumask *mask;
-	unsigned int queue, cpu;
-
-	for (queue = 0; queue < map->nr_queues; queue++) {
-		mask = ib_get_vector_affinity(dev, first_vec + queue);
-		if (!mask)
-			goto fallback;
-
-		for_each_cpu(cpu, mask)
-			map->mq_map[cpu] = map->queue_offset + queue;
-	}
-
-	return 0;
-
-fallback:
-	return blk_mq_map_queues(map);
-}
-EXPORT_SYMBOL_GPL(blk_mq_rdma_map_queues);
diff --git a/block/blk-mq-virtio.c b/block/blk-mq-virtio.c
deleted file mode 100644
index 7b8a42c35102..000000000000
--- a/block/blk-mq-virtio.c
+++ /dev/null
@@ -1,46 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (c) 2016 Christoph Hellwig.
- */
-#include
-#include
-#include
-#include
-#include
-#include "blk-mq.h"
-
-/**
- * blk_mq_virtio_map_queues - provide a default queue mapping for virtio device
- * @qmap:	CPU to hardware queue map.
- * @vdev:	virtio device to provide a mapping for.
- * @first_vec:	first interrupt vectors to use for queues (usually 0)
- *
- * This function assumes the virtio device @vdev has at least as many available
- * interrupt vectors as @set has queues. It will then query the vector
- * corresponding to each queue for it's affinity mask and built queue mapping
- * that maps a queue to the CPUs that have irq affinity for the corresponding
- * vector.
- */
-int blk_mq_virtio_map_queues(struct blk_mq_queue_map *qmap,
-		struct virtio_device *vdev, int first_vec)
-{
-	const struct cpumask *mask;
-	unsigned int queue, cpu;
-
-	if (!vdev->config->get_vq_affinity)
-		goto fallback;
-
-	for (queue = 0; queue < qmap->nr_queues; queue++) {
-		mask = vdev->config->get_vq_affinity(vdev, first_vec + queue);
-		if (!mask)
-			goto fallback;
-
-		for_each_cpu(cpu, mask)
-			qmap->mq_map[cpu] = qmap->queue_offset + queue;
-	}
-
-	return 0;
-fallback:
-	return blk_mq_map_queues(qmap);
-}
-EXPORT_SYMBOL_GPL(blk_mq_virtio_map_queues);
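
With the pci/rdma/virtio helpers removed, any remaining (e.g. out-of-tree)
caller has to be converted just like the drivers earlier in the series. A
sketch of such a conversion for a PCI driver follows; the bar_* names are
hypothetical, while pci_irq_get_affinity() and blk_mq_dev_map_queues() are
the interfaces actually used by this series:

#include <linux/blk-mq.h>
#include <linux/pci.h>

/*
 * Hypothetical conversion away from the removed blk_mq_pci_map_queues():
 * its body boiled down to a pci_irq_get_affinity() lookup, which is now
 * supplied as a callback instead.
 */
static const struct cpumask *bar_pci_get_queue_affinity(void *dev_data,
		int offset, int queue)
{
	struct pci_dev *pdev = dev_data;

	return pci_irq_get_affinity(pdev, offset + queue);
}

static int bar_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev)
{
	/* was: blk_mq_pci_map_queues(&set->map[HCTX_TYPE_DEFAULT], pdev, 0); */
	return blk_mq_dev_map_queues(&set->map[HCTX_TYPE_DEFAULT], pdev, 0,
				     bar_pci_get_queue_affinity,
				     false,	/* fallback, as in the nvme conversion */
				     true);	/* the PCI vectors here are managed */
}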