From patchwork Wed Jan 18 22:54:39 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 644932
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
 Jaegeuk Kim, Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei,
 Bart Van Assche, Keith Busch
Subject: [PATCH v3 1/9] block: Introduce QUEUE_FLAG_SUB_PAGE_SEGMENTS and
 CONFIG_BLK_SUB_PAGE_SEGMENTS
Date: Wed, 18 Jan 2023 14:54:39 -0800
Message-Id: <20230118225447.2809787-2-bvanassche@acm.org>
X-Mailer: git-send-email 2.39.0.246.g2a6d74b583-goog
In-Reply-To: <20230118225447.2809787-1-bvanassche@acm.org>
References: <20230118225447.2809787-1-bvanassche@acm.org>
X-Mailing-List: linux-scsi@vger.kernel.org

Prepare for introducing support for segments smaller than the page size
by introducing the request queue flag QUEUE_FLAG_SUB_PAGE_SEGMENTS.
Introduce CONFIG_BLK_SUB_PAGE_SEGMENTS to prevent the performance of
block drivers that support segments >= PAGE_SIZE from being affected.

Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Keith Busch
Signed-off-by: Bart Van Assche
---
 block/Kconfig          | 9 +++++++++
 include/linux/blkdev.h | 7 +++++++
 2 files changed, 16 insertions(+)

diff --git a/block/Kconfig b/block/Kconfig
index 5d9d9c84d516..e85061d2175b 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -35,6 +35,15 @@ config BLOCK_LEGACY_AUTOLOAD
 	  created on demand, but scripts that manually create device nodes and
 	  then call losetup might rely on this behavior.

+config BLK_SUB_PAGE_SEGMENTS
+	bool "Support segments smaller than the page size"
+	default n
+	help
+	  Most storage controllers support DMA segments larger than the typical
+	  size of a virtual memory page. Some embedded controllers only support
+	  DMA segments smaller than the page size. Enable this option to support
+	  such controllers.
+
 config BLK_RQ_ALLOC_TIME
 	bool

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 89f51d68c68a..6cbb22fb93ee 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -547,6 +547,7 @@ struct request_queue {
 /* Keep blk_queue_flag_name[] in sync with the definitions below */
 #define QUEUE_FLAG_STOPPED	0	/* queue is stopped */
 #define QUEUE_FLAG_DYING	1	/* queue being torn down */
+#define QUEUE_FLAG_SUB_PAGE_SEGMENTS 2	/* segments smaller than one page */
 #define QUEUE_FLAG_NOMERGES	3	/* disable merge attempts */
 #define QUEUE_FLAG_SAME_COMP	4	/* complete on same CPU-group */
 #define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
@@ -613,6 +614,12 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_sq_sched(q)	test_bit(QUEUE_FLAG_SQ_SCHED, &(q)->queue_flags)
 #define blk_queue_skip_tagset_quiesce(q) \
 	test_bit(QUEUE_FLAG_SKIP_TAGSET_QUIESCE, &(q)->queue_flags)
+#ifdef CONFIG_BLK_SUB_PAGE_SEGMENTS
+#define blk_queue_sub_page_segments(q) \
+	test_bit(QUEUE_FLAG_SUB_PAGE_SEGMENTS, &(q)->queue_flags)
+#else
+#define blk_queue_sub_page_segments(q) false
+#endif

 extern void blk_set_pm_only(struct request_queue *q);
 extern void blk_clear_pm_only(struct request_queue *q);

From patchwork Wed Jan 18 22:54:40 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 643987
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
 Jaegeuk Kim, Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei,
 Bart Van Assche, Keith Busch
Subject: [PATCH v3 2/9] block: Support configuring limits below the page size
Date: Wed, 18 Jan 2023 14:54:40 -0800
Message-Id: <20230118225447.2809787-3-bvanassche@acm.org>
In-Reply-To: <20230118225447.2809787-1-bvanassche@acm.org>
References: <20230118225447.2809787-1-bvanassche@acm.org>
X-Mailing-List: linux-scsi@vger.kernel.org

Allow block drivers to configure the following if
CONFIG_BLK_SUB_PAGE_SEGMENTS=y:
* A maximum number of hardware sectors smaller than
  PAGE_SIZE >> SECTOR_SHIFT. With PAGE_SIZE = 4096 this means that
  values below 8 are supported.
* A maximum segment size smaller than the page size. This is most
  useful for page sizes above 4096 bytes.

The behavior of the block layer is not modified if
CONFIG_BLK_SUB_PAGE_SEGMENTS=n.

Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Keith Busch
Signed-off-by: Bart Van Assche
---
 block/blk-settings.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 9c9713c9269c..9820ceb18c46 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -122,12 +122,14 @@ EXPORT_SYMBOL(blk_queue_bounce_limit);
 void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_sectors)
 {
 	struct queue_limits *limits = &q->limits;
+	unsigned int min_max_hw_sectors = blk_queue_sub_page_segments(q) ? 1 :
+		PAGE_SIZE >> SECTOR_SHIFT;
 	unsigned int max_sectors;

-	if ((max_hw_sectors << 9) < PAGE_SIZE) {
-		max_hw_sectors = 1 << (PAGE_SHIFT - 9);
-		printk(KERN_INFO "%s: set to minimum %d\n",
-		       __func__, max_hw_sectors);
+	if (max_hw_sectors < min_max_hw_sectors) {
+		max_hw_sectors = min_max_hw_sectors;
+		printk(KERN_INFO "%s: set to minimum %u\n", __func__,
+		       max_hw_sectors);
 	}

 	max_hw_sectors = round_down(max_hw_sectors,
@@ -282,10 +284,12 @@ EXPORT_SYMBOL_GPL(blk_queue_max_discard_segments);
  **/
 void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
 {
-	if (max_size < PAGE_SIZE) {
-		max_size = PAGE_SIZE;
-		printk(KERN_INFO "%s: set to minimum %d\n",
-		       __func__, max_size);
+	unsigned int min_max_segment_size = blk_queue_sub_page_segments(q) ?
+		SECTOR_SIZE : PAGE_SIZE;
+
+	if (max_size < min_max_segment_size) {
+		max_size = min_max_segment_size;
+		printk(KERN_INFO "%s: set to minimum %u\n", __func__, max_size);
 	}

 	/* see blk_queue_virt_boundary() for the explanation */

From patchwork Wed Jan 18 22:54:41 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 644931
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
 Jaegeuk Kim, Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei,
 Bart Van Assche, Keith Busch
Subject: [PATCH v3 3/9] block: Support submitting passthrough requests with
 small segments
Date: Wed, 18 Jan 2023 14:54:41 -0800
Message-Id: <20230118225447.2809787-4-bvanassche@acm.org>
In-Reply-To: <20230118225447.2809787-1-bvanassche@acm.org>
References: <20230118225447.2809787-1-bvanassche@acm.org>
X-Mailing-List: linux-scsi@vger.kernel.org

If the segment size is smaller than the page size there may be multiple
segments per bvec even if a bvec only contains a single page. Hence this
patch.

Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Keith Busch
Signed-off-by: Bart Van Assche
---
 block/blk-map.c | 16 +++++++++++++++-
 block/blk.h     | 11 +++++++++++
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 9ee4be4ba2f1..4270db88e2c2 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -523,6 +523,20 @@ static struct bio *bio_copy_kern(struct request_queue *q, void *data,
 	return ERR_PTR(-ENOMEM);
 }

+#ifdef CONFIG_BLK_SUB_PAGE_SEGMENTS
+/* Number of DMA segments required to transfer @bytes data. */
+unsigned int blk_segments(const struct queue_limits *limits, unsigned int bytes)
+{
+	const unsigned int mss = limits->max_segment_size;
+
+	if (bytes <= mss)
+		return 1;
+	if (is_power_of_2(mss))
+		return round_up(bytes, mss) >> ilog2(mss);
+	return (bytes + mss - 1) / mss;
+}
+#endif
+
 /*
  * Append a bio to a passthrough request. Only works if the bio can be merged
  * into the request based on the driver constraints.
  */
@@ -534,7 +548,7 @@ int blk_rq_append_bio(struct request *rq, struct bio *bio)
 	unsigned int nr_segs = 0;

 	bio_for_each_bvec(bv, bio, iter)
-		nr_segs++;
+		nr_segs += blk_segments(&rq->q->limits, bv.bv_len);

 	if (!rq->bio) {
 		blk_rq_bio_prep(rq, bio, nr_segs);
diff --git a/block/blk.h b/block/blk.h
index 4c3b3325219a..8f5e749ad73b 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -76,6 +76,17 @@ struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
 		gfp_t gfp_mask);
 void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned short nr_vecs);

+#ifdef CONFIG_BLK_SUB_PAGE_SEGMENTS
+unsigned int blk_segments(const struct queue_limits *limits,
+			  unsigned int bytes);
+#else
+static inline unsigned int blk_segments(const struct queue_limits *limits,
+					unsigned int bytes)
+{
+	return 1;
+}
+#endif
+
 static inline bool biovec_phys_mergeable(struct request_queue *q,
 		struct bio_vec *vec1, struct bio_vec *vec2)
 {

From patchwork Wed Jan 18 22:54:42 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 643986
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
 Jaegeuk Kim, Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei,
 Bart Van Assche, Keith Busch
Subject: [PATCH v3 4/9] block: Add support for filesystem requests and small
 segments
Date: Wed, 18 Jan 2023 14:54:42 -0800
Message-Id: <20230118225447.2809787-5-bvanassche@acm.org>
In-Reply-To: <20230118225447.2809787-1-bvanassche@acm.org>
References: <20230118225447.2809787-1-bvanassche@acm.org>
X-Mailing-List: linux-scsi@vger.kernel.org

Add support in the bio splitting code and also in the bio submission
code for bios with segments smaller than the page size.

Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Keith Busch
Signed-off-by: Bart Van Assche
---
 block/blk-merge.c | 6 ++++--
 block/blk-mq.c    | 2 ++
 block/blk.h       | 11 +++++------
 3 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index b7c193d67185..bf727f67473d 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -294,7 +294,8 @@ static struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
 		if (nsegs < lim->max_segments &&
 		    bytes + bv.bv_len <= max_bytes &&
 		    bv.bv_offset + bv.bv_len <= PAGE_SIZE) {
-			nsegs++;
+			/* single-page bvec optimization */
+			nsegs += blk_segments(lim, bv.bv_len);
 			bytes += bv.bv_len;
 		} else {
 			if (bvec_split_segs(lim, &bv, &nsegs, &bytes,
@@ -543,7 +544,8 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 		    __blk_segment_map_sg_merge(q, &bvec, &bvprv, sg))
 			goto next_bvec;

-		if (bvec.bv_offset + bvec.bv_len <= PAGE_SIZE)
+		if (bvec.bv_offset + bvec.bv_len <= PAGE_SIZE &&
+		    bvec.bv_len <= q->limits.max_segment_size)
 			nsegs += __blk_bvec_map_sg(bvec, sglist, sg);
 		else
 			nsegs += blk_bvec_map_sg(q, &bvec, sglist, sg);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9d463f7563bc..947cae2def76 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2959,6 +2959,8 @@ void blk_mq_submit_bio(struct bio *bio)
 		bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
 		if (!bio)
 			return;
+	} else if (bio->bi_vcnt == 1) {
+		nr_segs = blk_segments(&q->limits, bio->bi_io_vec[0].bv_len);
 	}

 	if (!bio_integrity_prep(bio))
diff --git a/block/blk.h b/block/blk.h
index 8f5e749ad73b..c3dd332ba618 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -316,14 +316,13 @@ static inline bool bio_may_exceed_limits(struct bio *bio,
 	}

 	/*
-	 * All drivers must accept single-segments bios that are <= PAGE_SIZE.
-	 * This is a quick and dirty check that relies on the fact that
-	 * bi_io_vec[0] is always valid if a bio has data. The check might
-	 * lead to occasional false negatives when bios are cloned, but compared
-	 * to the performance impact of cloned bios themselves the loop below
-	 * doesn't matter anyway.
+	 * Check whether bio splitting should be performed. This check may
+	 * trigger the bio splitting code even if splitting is not necessary.
 	 */
 	return lim->chunk_sectors || bio->bi_vcnt != 1 ||
+#ifdef CONFIG_BLK_SUB_PAGE_SEGMENTS
+		bio->bi_io_vec->bv_len > lim->max_segment_size ||
+#endif
 		bio->bi_io_vec->bv_len + bio->bi_io_vec->bv_offset > PAGE_SIZE;
 }

From patchwork Wed Jan 18 22:54:43 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 644930
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
 Jaegeuk Kim, Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei,
 Bart Van Assche, Keith Busch
Subject: [PATCH v3 5/9] block: Add support for small segments in
 blk_rq_map_user_iov()
Date: Wed, 18 Jan 2023 14:54:43 -0800
Message-Id: <20230118225447.2809787-6-bvanassche@acm.org>
In-Reply-To: <20230118225447.2809787-1-bvanassche@acm.org>
References: <20230118225447.2809787-1-bvanassche@acm.org>
X-Mailing-List: linux-scsi@vger.kernel.org

Before changing the return value of bio_add_hw_page() into a value in
the range [0, len], make blk_rq_map_user_iov() fall back to copying data
if mapping the data is not possible due to the segment limit.
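[Editor's note: the fallback described above is small enough to sketch in user space. The helper names below are illustrative stand-ins, not the kernel functions (the real code is bio_map_user_iov() and bio_copy_user_iov() in block/blk-map.c); only the control flow mirrors the patch.]

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for bio_map_user_iov(): mapping fails with -EREMOTEIO when a
 * single iov exceeds the queue's maximum segment size. */
static int map_user_iov(unsigned int len, unsigned int max_segment_size)
{
	return len > max_segment_size ? -EREMOTEIO : 0;
}

/* Stand-in for bio_copy_user_iov(): copying has no segment-size issue. */
static int copy_user_iov(unsigned int len)
{
	(void)len;
	return 0;
}

/* Mirrors the blk_rq_map_user_iov() change: try the zero-copy mapping
 * first and retry with a data copy when the segment limit is hit. */
static int map_or_copy(unsigned int len, unsigned int max_segment_size)
{
	int ret = map_user_iov(len, max_segment_size);

	if (ret == -EREMOTEIO)	/* segment limit hit: fall back to copying */
		ret = copy_user_iov(len);
	return ret;
}
```

With a 512-byte segment limit, a 4 KiB iov cannot be mapped but still succeeds via the copy path.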
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Keith Busch
Signed-off-by: Bart Van Assche
---
 block/blk-map.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 4270db88e2c2..83163f9b2335 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -307,17 +307,26 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		else {
 			for (j = 0; j < npages; j++) {
 				struct page *page = pages[j];
-				unsigned int n = PAGE_SIZE - offs;
+				unsigned int n = PAGE_SIZE - offs, added;
 				bool same_page = false;

 				if (n > bytes)
 					n = bytes;

-				if (!bio_add_hw_page(rq->q, bio, page, n, offs,
-						     max_sectors, &same_page)) {
+				added = bio_add_hw_page(rq->q, bio, page, n,
+						offs, max_sectors, &same_page);
+				if (added == 0) {
 					if (same_page)
 						put_page(page);
 					break;
+				} else if (added != n) {
+					/*
+					 * The segment size is smaller than the
+					 * page size and an iov exceeds the
+					 * segment size. Give up.
+					 */
+					ret = -EREMOTEIO;
+					goto out_unmap;
 				}

 				bytes -= n;
@@ -671,10 +680,18 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
 	i = *iter;
 	do {
-		if (copy)
+		if (copy) {
 			ret = bio_copy_user_iov(rq, map_data, &i, gfp_mask);
-		else
+		} else {
 			ret = bio_map_user_iov(rq, &i, gfp_mask);
+			/*
+			 * Fall back to copying the data if bio_map_user_iov()
+			 * returns -EREMOTEIO.
+			 */
+			if (ret == -EREMOTEIO)
+				ret = bio_copy_user_iov(rq, map_data, &i,
+						gfp_mask);
+		}
 		if (ret)
 			goto unmap_rq;
 		if (!bio)

From patchwork Wed Jan 18 22:54:44 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 643985
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
 Jaegeuk Kim, Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei,
 Bart Van Assche, Doug Gilbert, "Martin K. Petersen"
Subject: [PATCH v3 6/9] scsi_debug: Support configuring the maximum segment
 size
Date: Wed, 18 Jan 2023 14:54:44 -0800
Message-Id: <20230118225447.2809787-7-bvanassche@acm.org>
In-Reply-To: <20230118225447.2809787-1-bvanassche@acm.org>
References: <20230118225447.2809787-1-bvanassche@acm.org>
X-Mailing-List: linux-scsi@vger.kernel.org

Add a kernel module parameter for configuring the maximum segment size.
This patch enables testing SCSI support for segments smaller than the
page size.

Cc: Doug Gilbert
Cc: Martin K. Petersen
Signed-off-by: Bart Van Assche
---
 drivers/scsi/scsi_debug.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 8553277effb3..d09f05b440ba 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -752,6 +752,7 @@ static int sdebug_host_max_queue;	/* per host */
 static int sdebug_lowest_aligned = DEF_LOWEST_ALIGNED;
 static int sdebug_max_luns = DEF_MAX_LUNS;
 static int sdebug_max_queue = SDEBUG_CANQUEUE;	/* per submit queue */
+static unsigned int sdebug_max_segment_size = BLK_MAX_SEGMENT_SIZE;
 static unsigned int sdebug_medium_error_start = OPT_MEDIUM_ERR_ADDR;
 static int sdebug_medium_error_count = OPT_MEDIUM_ERR_NUM;
 static atomic_t retired_max_queue;	/* if > 0 then was prior max_queue */
@@ -5205,6 +5206,11 @@ static int scsi_debug_slave_alloc(struct scsi_device *sdp)
 	if (sdebug_verbose)
 		pr_info("slave_alloc <%u %u %u %llu>\n",
 		       sdp->host->host_no, sdp->channel, sdp->id, sdp->lun);
+
+	if (sdebug_max_segment_size < PAGE_SIZE)
+		blk_queue_flag_set(QUEUE_FLAG_SUB_PAGE_SEGMENTS,
+				   sdp->request_queue);
+
 	return 0;
 }
@@ -5841,6 +5847,7 @@ module_param_named(lowest_aligned, sdebug_lowest_aligned, int, S_IRUGO);
 module_param_named(lun_format, sdebug_lun_am_i, int, S_IRUGO | S_IWUSR);
 module_param_named(max_luns, sdebug_max_luns, int, S_IRUGO | S_IWUSR);
 module_param_named(max_queue, sdebug_max_queue, int, S_IRUGO | S_IWUSR);
+module_param_named(max_segment_size, sdebug_max_segment_size, uint, S_IRUGO);
 module_param_named(medium_error_count, sdebug_medium_error_count, int,
 		   S_IRUGO | S_IWUSR);
 module_param_named(medium_error_start, sdebug_medium_error_start, int,
@@ -5917,6 +5924,7 @@ MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
 MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
 MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
 MODULE_PARM_DESC(max_queue, "max number of queued commands (1 to max(def))");
+MODULE_PARM_DESC(max_segment_size, "max bytes in a single segment");
 MODULE_PARM_DESC(medium_error_count, "count of sectors to return follow on MEDIUM error");
 MODULE_PARM_DESC(medium_error_start, "starting sector number to return MEDIUM error");
 MODULE_PARM_DESC(ndelay, "response delay in nanoseconds (def=0 -> ignore)");
@@ -6920,6 +6928,12 @@ static int __init scsi_debug_init(void)
 		return -EINVAL;
 	}

+	if (sdebug_max_segment_size < SECTOR_SIZE) {
+		pr_err("invalid max_segment_size %d\n",
+		       sdebug_max_segment_size);
+		return -EINVAL;
+	}
+
 	switch (sdebug_dif) {
 	case T10_PI_TYPE0_PROTECTION:
 		break;
@@ -7816,6 +7830,7 @@ static int sdebug_driver_probe(struct device *dev)

 	sdebug_driver_template.can_queue = sdebug_max_queue;
 	sdebug_driver_template.cmd_per_lun = sdebug_max_queue;
+	sdebug_driver_template.max_segment_size = sdebug_max_segment_size;
 	if (!sdebug_clustering)
 		sdebug_driver_template.dma_boundary = PAGE_SIZE - 1;

From patchwork Wed Jan 18 22:54:45 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 644929
mail-pl1-f178.google.com with SMTP id k13so646523plg.0; Wed, 18 Jan 2023 14:55:07 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=M+26zPAsx0x5i6iQb2oS6taz2VZG1SMytES1MLhydQQ=; b=E0pb/q/xBCycmVo4aJO/iY5bs8FJOCij78qdrLMqemSAV4WKbjaZoYoo8bDNYXi9RQ irdQkch0GCDKJ9vH9uLD3Dvb+HwCULzkfgJfSQcuz9irbmi7PQZCmNFQNs2M0c1YxHnX ZpLyOWdJ07ePZ6iMrVM5Xo7dSdWdON5IRfwS9axIb48kA5nT8romWcRTpgKXpO+twrOn GxF2s8eEMDLJQLmaL/v8HzxtiWJ4kMQL8HccYEiBMjlxnFHwiV6IoClTSMjz+ZauCrQ+ XKGHchZ3/DfrbFrRvxUT9x3tmxzEDALeLk15Laodw21EIwYzIIZVvd7fxnnngvLAgii6 0QLw== X-Gm-Message-State: AFqh2kpnzMmhtTJHITY+baeXPKLSZlODe2PMdnSs5lBNiS35oGfAcmPf SKjgLZfsia00nG1NIrESZIM= X-Google-Smtp-Source: AMrXdXtg2xtzs5hTfhGhhJz4zaFkw00+pY9VLqmBG42Eyd7nVvzVPMBByLHKrjzLrsAhG+IVq5n3kg== X-Received: by 2002:a17:902:eb44:b0:194:84eb:290a with SMTP id i4-20020a170902eb4400b0019484eb290amr9346397pli.50.1674082507034; Wed, 18 Jan 2023 14:55:07 -0800 (PST) Received: from bvanassche-linux.mtv.corp.google.com ([2620:15c:211:201:22ae:3ae3:fde6:2308]) by smtp.gmail.com with ESMTPSA id u7-20020a17090341c700b00186e34524e3sm23649466ple.136.2023.01.18.14.55.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Jan 2023 14:55:06 -0800 (PST) From: Bart Van Assche To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim , Avri Altman , Adrian Hunter , Christoph Hellwig , Ming Lei , Bart Van Assche , Damien Le Moal , Chaitanya Kulkarni Subject: [PATCH v3 7/9] null_blk: Support configuring the maximum segment size Date: Wed, 18 Jan 2023 14:54:45 -0800 Message-Id: <20230118225447.2809787-8-bvanassche@acm.org> X-Mailer: git-send-email 2.39.0.246.g2a6d74b583-goog In-Reply-To: <20230118225447.2809787-1-bvanassche@acm.org> References: <20230118225447.2809787-1-bvanassche@acm.org> 
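The scsi_debug patch above boils down to two decisions about the new max_segment_size module parameter: reject values below one sector, and enable sub-page segments for values below the page size. A minimal user-space sketch of that logic follows; the constants and the function name are illustrative stand-ins, not the kernel's (SECTOR_SIZE and the page size come from the kernel headers and the architecture).

```c
#include <assert.h>

/* Illustrative stand-ins for kernel constants; 512 and 4096 are only
 * the common values, used here so the sketch is self-contained. */
#define SKETCH_SECTOR_SIZE 512u
#define SKETCH_PAGE_SIZE   4096u

/* Mirrors the two checks the scsi_debug patch performs: sizes below one
 * sector are invalid, and sub-page sizes require the request queue flag
 * QUEUE_FLAG_SUB_PAGE_SEGMENTS to be set. */
static int validate_max_segment_size(unsigned int size, int *set_sub_page_flag)
{
	if (size < SKETCH_SECTOR_SIZE)
		return -1;	/* scsi_debug_init() would return -EINVAL */
	*set_sub_page_flag = size < SKETCH_PAGE_SIZE;
	return 0;
}
```

This only models the control flow; in the driver the flag is set per request queue in scsi_debug_slave_alloc() while the range check happens once at module init.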
Add support for configuring the maximum segment size. Add support for
segments smaller than the page size. This patch enables testing segments
smaller than the page size with a driver that does not call
blk_rq_map_sg().

Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Damien Le Moal
Cc: Chaitanya Kulkarni
Signed-off-by: Bart Van Assche
---
 drivers/block/null_blk/main.c     | 21 ++++++++++++++++++---
 drivers/block/null_blk/null_blk.h |  1 +
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 4c601ca9552a..822890d89427 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -157,6 +157,10 @@ static int g_max_sectors;
 module_param_named(max_sectors, g_max_sectors, int, 0444);
 MODULE_PARM_DESC(max_sectors, "Maximum size of a command (in 512B sectors)");
+static unsigned int g_max_segment_size = BLK_MAX_SEGMENT_SIZE;
+module_param_named(max_segment_size, g_max_segment_size, int, 0444);
+MODULE_PARM_DESC(max_segment_size, "Maximum size of a segment in bytes");
+
 static unsigned int nr_devices = 1;
 module_param(nr_devices, uint, 0444);
 MODULE_PARM_DESC(nr_devices, "Number of devices to register");
@@ -409,6 +413,7 @@ NULLB_DEVICE_ATTR(home_node, uint, NULL);
 NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
 NULLB_DEVICE_ATTR(blocksize, uint, NULL);
 NULLB_DEVICE_ATTR(max_sectors, uint, NULL);
+NULLB_DEVICE_ATTR(max_segment_size, uint, NULL);
 NULLB_DEVICE_ATTR(irqmode, uint, NULL);
 NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
 NULLB_DEVICE_ATTR(index, uint, NULL);
@@ -550,6 +555,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_queue_mode,
 	&nullb_device_attr_blocksize,
 	&nullb_device_attr_max_sectors,
+	&nullb_device_attr_max_segment_size,
 	&nullb_device_attr_irqmode,
 	&nullb_device_attr_hw_queue_depth,
 	&nullb_device_attr_index,
@@ -630,7 +636,8 @@ static ssize_t memb_group_features_show(struct config_item *item, char *page)
 	return snprintf(page, PAGE_SIZE,
 			"badblocks,blocking,blocksize,cache_size,"
 			"completion_nsec,discard,home_node,hw_queue_depth,"
-			"irqmode,max_sectors,mbps,memory_backed,no_sched,"
+			"irqmode,max_sectors,max_segment_size,mbps,"
+			"memory_backed,no_sched,"
 			"poll_queues,power,queue_mode,shared_tag_bitmap,size,"
 			"submit_queues,use_per_node_hctx,virt_boundary,zoned,"
 			"zone_capacity,zone_max_active,zone_max_open,"
@@ -693,6 +700,7 @@ static struct nullb_device *null_alloc_dev(void)
 	dev->queue_mode = g_queue_mode;
 	dev->blocksize = g_bs;
 	dev->max_sectors = g_max_sectors;
+	dev->max_segment_size = g_max_segment_size;
 	dev->irqmode = g_irqmode;
 	dev->hw_queue_depth = g_hw_queue_depth;
 	dev->blocking = g_blocking;
@@ -1234,6 +1242,8 @@ static int null_transfer(struct nullb *nullb, struct page *page,
 	unsigned int valid_len = len;
 	int err = 0;
+	WARN_ONCE(len > dev->max_segment_size, "%u > %u\n", len,
+		  dev->max_segment_size);
 	if (!is_write) {
 		if (dev->zoned)
 			valid_len = null_zone_valid_read_len(nullb,
@@ -1269,7 +1279,8 @@ static int null_handle_rq(struct nullb_cmd *cmd)
 	spin_lock_irq(&nullb->lock);
 	rq_for_each_segment(bvec, rq, iter) {
-		len = bvec.bv_len;
+		len = min(bvec.bv_len, nullb->dev->max_segment_size);
+		bvec.bv_len = len;
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
 				    op_is_write(req_op(rq)), sector,
 				    rq->cmd_flags & REQ_FUA);
@@ -1296,7 +1307,8 @@ static int null_handle_bio(struct nullb_cmd *cmd)
 	spin_lock_irq(&nullb->lock);
 	bio_for_each_segment(bvec, bio, iter) {
-		len = bvec.bv_len;
+		len = min(bvec.bv_len, nullb->dev->max_segment_size);
+		bvec.bv_len = len;
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
 				    op_is_write(bio_op(bio)), sector,
 				    bio->bi_opf & REQ_FUA);
@@ -2108,6 +2120,8 @@ static int null_add_dev(struct nullb_device *dev)
 	nullb->q->queuedata = nullb;
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, nullb->q);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, nullb->q);
+	if (dev->max_segment_size < PAGE_SIZE)
+		blk_queue_flag_set(QUEUE_FLAG_SUB_PAGE_SEGMENTS, nullb->q);
 	mutex_lock(&lock);
 	rv = ida_simple_get(&nullb_indexes, 0, 0, GFP_KERNEL);
@@ -2125,6 +2139,7 @@ static int null_add_dev(struct nullb_device *dev)
 	dev->max_sectors = queue_max_hw_sectors(nullb->q);
 	dev->max_sectors = min(dev->max_sectors, BLK_DEF_MAX_SECTORS);
 	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
+	blk_queue_max_segment_size(nullb->q, dev->max_segment_size);
 	if (dev->virt_boundary)
 		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);

diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index eb5972c50be8..8cb73fe05fa3 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -102,6 +102,7 @@ struct nullb_device {
 	unsigned int queue_mode; /* block interface */
 	unsigned int blocksize; /* block size */
 	unsigned int max_sectors; /* Max sectors per command */
+	unsigned int max_segment_size; /* Max size of a single DMA segment. */
 	unsigned int irqmode; /* IRQ completion handler */
 	unsigned int hw_queue_depth; /* queue depth */
 	unsigned int index; /* index of the disk, only valid with a disk */

From patchwork Wed Jan 18 22:54:46 2023
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim, Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei, Bart Van Assche, Alim Akhtar, Kiwoong Kim
Subject: [PATCH v3 8/9] scsi: core: Set BLK_SUB_PAGE_SEGMENTS for small max_segment_size values
Date: Wed, 18 Jan 2023 14:54:46 -0800
Message-Id: <20230118225447.2809787-9-bvanassche@acm.org>
In-Reply-To: <20230118225447.2809787-1-bvanassche@acm.org>
References: <20230118225447.2809787-1-bvanassche@acm.org>

The block layer only accepts max_segment_size values smaller than the
page size if the QUEUE_FLAG_SUB_PAGE_SEGMENTS flag is set. Hence this
patch.
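The null_blk patch above clamps each bio_vec to dev->max_segment_size before handing it to null_transfer(). A self-contained sketch of that clamp follows; the helper name is illustrative, and the ternary stands in for the kernel's min() macro.

```c
#include <assert.h>

/* Mirrors the clamp null_handle_rq()/null_handle_bio() apply per
 * segment: never transfer more than the configured maximum segment
 * size in one step, regardless of the bio_vec's own length. */
static unsigned int clamp_segment_len(unsigned int bv_len,
				      unsigned int max_segment_size)
{
	return bv_len < max_segment_size ? bv_len : max_segment_size;
}
```

In the driver the remainder of an over-long bio_vec is not dropped; the iteration continues over the rest of the request, so the clamp only bounds how much a single null_transfer() call sees.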
Cc: Alim Akhtar
Cc: Kiwoong Kim
Signed-off-by: Bart Van Assche
---
 drivers/scsi/scsi_lib.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 9ed1ebcb7443..91f2e7f787d8 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1876,6 +1876,9 @@ void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
 {
 	struct device *dev = shost->dma_dev;
+	if (shost->max_segment_size && shost->max_segment_size < PAGE_SIZE)
+		blk_queue_flag_set(QUEUE_FLAG_SUB_PAGE_SEGMENTS, q);
+
 	/*
 	 * this limit is imposed by hardware restrictions
 	 */

From patchwork Wed Jan 18 22:54:47 2023
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, Jaegeuk Kim, Avri Altman, Adrian Hunter, Christoph Hellwig, Ming Lei, Bart Van Assche, Alim Akhtar, Kiwoong Kim
Subject: [PATCH v3 9/9] scsi: ufs: exynos: Select CONFIG_BLK_SUB_PAGE_SEGMENTS for large page sizes
Date: Wed, 18 Jan 2023 14:54:47 -0800
Message-Id: <20230118225447.2809787-10-bvanassche@acm.org>
In-Reply-To: <20230118225447.2809787-1-bvanassche@acm.org>
References: <20230118225447.2809787-1-bvanassche@acm.org>

Since the maximum segment size supported by the Exynos controller is
4 KiB, this controller needs CONFIG_BLK_SUB_PAGE_SEGMENTS if the page
size exceeds 4 KiB.

Cc: Alim Akhtar
Cc: Kiwoong Kim
Signed-off-by: Bart Van Assche
---
 drivers/ufs/host/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/ufs/host/Kconfig b/drivers/ufs/host/Kconfig
index 4cc2dbd79ed0..376a4039912d 100644
--- a/drivers/ufs/host/Kconfig
+++ b/drivers/ufs/host/Kconfig
@@ -117,6 +117,7 @@ config SCSI_UFS_TI_J721E
 config SCSI_UFS_EXYNOS
 	tristate "Exynos specific hooks to UFS controller platform driver"
 	depends on SCSI_UFSHCD_PLATFORM && (ARCH_EXYNOS || COMPILE_TEST)
+	select BLK_SUB_PAGE_SEGMENTS if PAGE_SIZE > 4096
 	help
 	  This selects the Samsung Exynos SoC specific additions to UFSHCD
 	  platform driver. UFS host on Samsung Exynos SoC includes HCI and