From patchwork Wed Nov 17 06:13:54 2021
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen", Miquel Raynal, Richard Weinberger,
 Vignesh Raghavendra, linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 01/11] block: move blk_rq_err_bytes to scsi
Date: Wed, 17 Nov 2021 07:13:54 +0100
Message-Id: <20211117061404.331732-2-hch@lst.de>
In-Reply-To: <20211117061404.331732-1-hch@lst.de>
References: <20211117061404.331732-1-hch@lst.de>

blk_rq_err_bytes is only used by the scsi midlayer, so move it there.
Signed-off-by: Christoph Hellwig
---
 block/blk-core.c        | 41 ----------------------------------------
 drivers/scsi/scsi_lib.c | 42 ++++++++++++++++++++++++++++++++++++++++-
 include/linux/blk-mq.h  |  3 ---
 3 files changed, 41 insertions(+), 45 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9ee32f85d74e1..e27a659973965 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1173,47 +1173,6 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 }
 EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
 
-/**
- * blk_rq_err_bytes - determine number of bytes till the next failure boundary
- * @rq: request to examine
- *
- * Description:
- *     A request could be merge of IOs which require different failure
- *     handling.  This function determines the number of bytes which
- *     can be failed from the beginning of the request without
- *     crossing into area which need to be retried further.
- *
- * Return:
- *     The number of bytes to fail.
- */
-unsigned int blk_rq_err_bytes(const struct request *rq)
-{
-        unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK;
-        unsigned int bytes = 0;
-        struct bio *bio;
-
-        if (!(rq->rq_flags & RQF_MIXED_MERGE))
-                return blk_rq_bytes(rq);
-
-        /*
-         * Currently the only 'mixing' which can happen is between
-         * different fastfail types.  We can safely fail portions
-         * which have all the failfast bits that the first one has -
-         * the ones which are at least as eager to fail as the first
-         * one.
-         */
-        for (bio = rq->bio; bio; bio = bio->bi_next) {
-                if ((bio->bi_opf & ff) != ff)
-                        break;
-                bytes += bio->bi_iter.bi_size;
-        }
-
-        /* this could lead to infinite loop */
-        BUG_ON(blk_rq_bytes(rq) && !bytes);
-        return bytes;
-}
-EXPORT_SYMBOL_GPL(blk_rq_err_bytes);
-
 static void update_io_ticks(struct block_device *part, unsigned long now,
                 bool end)
 {
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 621d841d819a3..5e8b5ecb3245a 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -617,6 +617,46 @@ static blk_status_t scsi_result_to_blk_status(struct scsi_cmnd *cmd, int result)
 	}
 }
 
+/**
+ * scsi_rq_err_bytes - determine number of bytes till the next failure boundary
+ * @rq: request to examine
+ *
+ * Description:
+ *     A request could be merge of IOs which require different failure
+ *     handling.  This function determines the number of bytes which
+ *     can be failed from the beginning of the request without
+ *     crossing into area which need to be retried further.
+ *
+ * Return:
+ *     The number of bytes to fail.
+ */
+static unsigned int scsi_rq_err_bytes(const struct request *rq)
+{
+        unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK;
+        unsigned int bytes = 0;
+        struct bio *bio;
+
+        if (!(rq->rq_flags & RQF_MIXED_MERGE))
+                return blk_rq_bytes(rq);
+
+        /*
+         * Currently the only 'mixing' which can happen is between
+         * different fastfail types.  We can safely fail portions
+         * which have all the failfast bits that the first one has -
+         * the ones which are at least as eager to fail as the first
+         * one.
+         */
+        for (bio = rq->bio; bio; bio = bio->bi_next) {
+                if ((bio->bi_opf & ff) != ff)
+                        break;
+                bytes += bio->bi_iter.bi_size;
+        }
+
+        /* this could lead to infinite loop */
+        BUG_ON(blk_rq_bytes(rq) && !bytes);
+        return bytes;
+}
+
 /* Helper for scsi_io_completion() when "reprep" action required. */
 static void scsi_io_completion_reprep(struct scsi_cmnd *cmd,
                                       struct request_queue *q)
@@ -794,7 +834,7 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
                        scsi_print_command(cmd);
                }
        }
-       if (!scsi_end_request(req, blk_stat, blk_rq_err_bytes(req)))
+       if (!scsi_end_request(req, blk_stat, scsi_rq_err_bytes(req)))
                return;
        fallthrough;
        case ACTION_REPREP:
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 2949d9ac74849..a78d9a0f2a1be 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -947,7 +947,6 @@ struct req_iterator {
  * blk_rq_pos()                 : the current sector
  * blk_rq_bytes()               : bytes left in the entire request
  * blk_rq_cur_bytes()           : bytes left in the current segment
- * blk_rq_err_bytes()           : bytes left till the next error boundary
  * blk_rq_sectors()             : sectors left in the entire request
  * blk_rq_cur_sectors()         : sectors left in the current segment
  * blk_rq_stats_sectors()       : sectors of the entire request used for stats
@@ -971,8 +970,6 @@ static inline int blk_rq_cur_bytes(const struct request *rq)
        return bio_iovec(rq->bio).bv_len;
 }
 
-unsigned int blk_rq_err_bytes(const struct request *rq);
-
 static inline unsigned int blk_rq_sectors(const struct request *rq)
 {
        return blk_rq_bytes(rq) >> SECTOR_SHIFT;
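
[The failfast walk that the moved function performs is easy to model outside
the kernel. Below is a minimal userspace sketch, using simplified stand-in
types (toy_bio, TOY_FAILFAST_MASK and toy_err_bytes are illustrative names,
not kernel API): it sums the sizes of the leading bios whose failfast bits
cover those of the request as a whole, stopping at the first bio that is
less eager to fail.]

#include <stdio.h>

/* Simplified stand-ins for struct bio / struct request, not kernel types. */
struct toy_bio {
        unsigned int opf;       /* failfast bits only */
        unsigned int size;      /* bytes in this bio */
        struct toy_bio *next;
};

#define TOY_FAILFAST_MASK 0x7

/* Mirrors the walk in scsi_rq_err_bytes(): sum the bytes of leading bios
 * whose failfast bits include all of the request's failfast bits. */
static unsigned int toy_err_bytes(unsigned int rq_flags, struct toy_bio *head)
{
        unsigned int ff = rq_flags & TOY_FAILFAST_MASK;
        unsigned int bytes = 0;
        struct toy_bio *bio;

        for (bio = head; bio; bio = bio->next) {
                if ((bio->opf & ff) != ff)
                        break;          /* first bio with weaker failfast bits */
                bytes += bio->size;
        }
        return bytes;
}

int main(void)
{
        struct toy_bio b3 = { 0x1, 4096, NULL };        /* weaker bits: boundary */
        struct toy_bio b2 = { 0x7, 8192, &b3 };
        struct toy_bio b1 = { 0x7, 4096, &b2 };

        /* Only b1 and b2 can be failed without crossing a retry boundary. */
        printf("%u\n", toy_err_bytes(0x7, &b1));        /* prints 12288 */
        return 0;
}

[Running it prints 12288: only the first two bios lie before the boundary at
which the remainder of the request must be retried rather than failed.]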
Petersen" , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org Subject: [PATCH 04/11] blk-mq: move blk_mq_flush_plug_list Date: Wed, 17 Nov 2021 07:13:57 +0100 Message-Id: <20211117061404.331732-5-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211117061404.331732-1-hch@lst.de> References: <20211117061404.331732-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Move blk_mq_flush_plug_list and blk_mq_plug_issue_direct down in blk-mq.c to prepare for marking blk_mq_request_issue_directly static without the need of a forward declaration. Signed-off-by: Christoph Hellwig --- block/blk-mq.c | 184 ++++++++++++++++++++++++------------------------- 1 file changed, 92 insertions(+), 92 deletions(-) diff --git a/block/blk-mq.c b/block/blk-mq.c index c33411f9ce898..d70a470c9c1f1 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -2308,98 +2308,6 @@ static void blk_mq_commit_rqs(struct blk_mq_hw_ctx *hctx, int *queued, *queued = 0; } -static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule) -{ - struct blk_mq_hw_ctx *hctx = NULL; - struct request *rq; - int queued = 0; - int errors = 0; - - while ((rq = rq_list_pop(&plug->mq_list))) { - bool last = rq_list_empty(plug->mq_list); - blk_status_t ret; - - if (hctx != rq->mq_hctx) { - if (hctx) - blk_mq_commit_rqs(hctx, &queued, from_schedule); - hctx = rq->mq_hctx; - } - - ret = blk_mq_request_issue_directly(rq, last); - switch (ret) { - case BLK_STS_OK: - queued++; - break; - case BLK_STS_RESOURCE: - case BLK_STS_DEV_RESOURCE: - blk_mq_request_bypass_insert(rq, false, last); - blk_mq_commit_rqs(hctx, &queued, from_schedule); - return; - default: - blk_mq_end_request(rq, ret); - errors++; - break; - } - } - - /* - * If we didn't flush the entire list, we could have told the driver - * there was more coming, but that turned out to be a lie. 
-         */
-        if (errors)
-                blk_mq_commit_rqs(hctx, &queued, from_schedule);
-}
-
-void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
-{
-        struct blk_mq_hw_ctx *this_hctx;
-        struct blk_mq_ctx *this_ctx;
-        unsigned int depth;
-        LIST_HEAD(list);
-
-        if (rq_list_empty(plug->mq_list))
-                return;
-        plug->rq_count = 0;
-
-        if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
-                blk_mq_plug_issue_direct(plug, false);
-                if (rq_list_empty(plug->mq_list))
-                        return;
-        }
-
-        this_hctx = NULL;
-        this_ctx = NULL;
-        depth = 0;
-        do {
-                struct request *rq;
-
-                rq = rq_list_pop(&plug->mq_list);
-
-                if (!this_hctx) {
-                        this_hctx = rq->mq_hctx;
-                        this_ctx = rq->mq_ctx;
-                } else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
-                        trace_block_unplug(this_hctx->queue, depth,
-                                        !from_schedule);
-                        blk_mq_sched_insert_requests(this_hctx, this_ctx,
-                                        &list, from_schedule);
-                        depth = 0;
-                        this_hctx = rq->mq_hctx;
-                        this_ctx = rq->mq_ctx;
-
-                }
-
-                list_add(&rq->queuelist, &list);
-                depth++;
-        } while (!rq_list_empty(plug->mq_list));
-
-        if (!list_empty(&list)) {
-                trace_block_unplug(this_hctx->queue, depth, !from_schedule);
-                blk_mq_sched_insert_requests(this_hctx, this_ctx, &list,
-                                from_schedule);
-        }
-}
-
 static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
                 unsigned int nr_segs)
 {
@@ -2539,6 +2447,98 @@ blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
        return ret;
 }
 
+static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
+{
+        struct blk_mq_hw_ctx *hctx = NULL;
+        struct request *rq;
+        int queued = 0;
+        int errors = 0;
+
+        while ((rq = rq_list_pop(&plug->mq_list))) {
+                bool last = rq_list_empty(plug->mq_list);
+                blk_status_t ret;
+
+                if (hctx != rq->mq_hctx) {
+                        if (hctx)
+                                blk_mq_commit_rqs(hctx, &queued, from_schedule);
+                        hctx = rq->mq_hctx;
+                }
+
+                ret = blk_mq_request_issue_directly(rq, last);
+                switch (ret) {
+                case BLK_STS_OK:
+                        queued++;
+                        break;
+                case BLK_STS_RESOURCE:
+                case BLK_STS_DEV_RESOURCE:
+                        blk_mq_request_bypass_insert(rq, false, last);
+                        blk_mq_commit_rqs(hctx, &queued, from_schedule);
+                        return;
+                default:
+                        blk_mq_end_request(rq, ret);
+                        errors++;
+                        break;
+                }
+        }
+
+        /*
+         * If we didn't flush the entire list, we could have told the driver
+         * there was more coming, but that turned out to be a lie.
+         */
+        if (errors)
+                blk_mq_commit_rqs(hctx, &queued, from_schedule);
+}
+
+void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+{
+        struct blk_mq_hw_ctx *this_hctx;
+        struct blk_mq_ctx *this_ctx;
+        unsigned int depth;
+        LIST_HEAD(list);
+
+        if (rq_list_empty(plug->mq_list))
+                return;
+        plug->rq_count = 0;
+
+        if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
+                blk_mq_plug_issue_direct(plug, false);
+                if (rq_list_empty(plug->mq_list))
+                        return;
+        }
+
+        this_hctx = NULL;
+        this_ctx = NULL;
+        depth = 0;
+        do {
+                struct request *rq;
+
+                rq = rq_list_pop(&plug->mq_list);
+
+                if (!this_hctx) {
+                        this_hctx = rq->mq_hctx;
+                        this_ctx = rq->mq_ctx;
+                } else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
+                        trace_block_unplug(this_hctx->queue, depth,
+                                        !from_schedule);
+                        blk_mq_sched_insert_requests(this_hctx, this_ctx,
+                                        &list, from_schedule);
+                        depth = 0;
+                        this_hctx = rq->mq_hctx;
+                        this_ctx = rq->mq_ctx;
+
+                }
+
+                list_add(&rq->queuelist, &list);
+                depth++;
+        } while (!rq_list_empty(plug->mq_list));
+
+        if (!list_empty(&list)) {
+                trace_block_unplug(this_hctx->queue, depth, !from_schedule);
+                blk_mq_sched_insert_requests(this_hctx, this_ctx, &list,
+                                from_schedule);
+        }
+}
+
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
                struct list_head *list)
 {
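
[The motivation here is pure C declaration order: a static function must be
declared before its first use, so a static blk_mq_request_issue_directly()
defined below its callers would need a forward declaration. A minimal sketch
of the same situation, under hypothetical names (issue, flush_list):]

/* Hypothetical names; illustrates the ordering problem the move avoids. */

static int issue(int x);        /* forward declaration, needed only because... */

static int flush_list(int x)
{
        return issue(x);        /* ...the caller sits above the definition */
}

static int issue(int x)
{
        return x + 1;
}

/*
 * Moving flush_list() below issue(), as this patch does for
 * blk_mq_flush_plug_list() relative to blk_mq_request_issue_directly(),
 * lets the forward declaration be dropped entirely.
 */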
Petersen" , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org Subject: [PATCH 05/11] block: move request based cloning helpers to blk-mq.c Date: Wed, 17 Nov 2021 07:13:58 +0100 Message-Id: <20211117061404.331732-6-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211117061404.331732-1-hch@lst.de> References: <20211117061404.331732-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by casper.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Keep all the request based code together. Signed-off-by: Christoph Hellwig --- block/blk-core.c | 184 +---------------------------------------------- block/blk-mq.c | 175 +++++++++++++++++++++++++++++++++++++++++++- block/blk-mq.h | 3 - block/blk.h | 10 +++ 4 files changed, 185 insertions(+), 187 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index f1ca31a89493a..e1c928ec92946 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -592,7 +592,7 @@ static int __init setup_fail_make_request(char *str) } __setup("fail_make_request=", setup_fail_make_request); -static bool should_fail_request(struct block_device *part, unsigned int bytes) +bool should_fail_request(struct block_device *part, unsigned int bytes) { return part->bd_make_it_fail && should_fail(&fail_make_request, bytes); } @@ -606,15 +606,6 @@ static int __init fail_make_request_debugfs(void) } late_initcall(fail_make_request_debugfs); - -#else /* CONFIG_FAIL_MAKE_REQUEST */ - -static inline bool should_fail_request(struct block_device *part, - unsigned int bytes) -{ - return false; -} - #endif /* CONFIG_FAIL_MAKE_REQUEST */ static inline bool bio_check_ro(struct bio *bio) @@ -1087,92 +1078,6 @@ int iocb_bio_iopoll(struct kiocb *kiocb, struct io_comp_batch *iob, } EXPORT_SYMBOL_GPL(iocb_bio_iopoll); -/** - * blk_cloned_rq_check_limits - Helper function to check a cloned request - * for the new queue limits - * @q: the queue - * @rq: the request being checked - * - * Description: - * @rq may have been made based on weaker limitations of upper-level queues - * in request stacking drivers, and it may violate the limitation of @q. - * Since the block layer and the underlying device driver trust @rq - * after it is inserted to @q, it should be checked against @q before - * the insertion using this generic function. - * - * Request stacking drivers like request-based dm may change the queue - * limits when retrying requests on other queues. Those requests need - * to be checked against the new queue limits again during dispatch. - */ -static blk_status_t blk_cloned_rq_check_limits(struct request_queue *q, - struct request *rq) -{ - unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq)); - - if (blk_rq_sectors(rq) > max_sectors) { - /* - * SCSI device does not have a good way to return if - * Write Same/Zero is actually supported. If a device rejects - * a non-read/write command (discard, write same,etc.) the - * low-level device driver will set the relevant queue limit to - * 0 to prevent blk-lib from issuing more of the offending - * operations. Commands queued prior to the queue limit being - * reset need to be completed with BLK_STS_NOTSUPP to avoid I/O - * errors being propagated to upper layers. - */ - if (max_sectors == 0) - return BLK_STS_NOTSUPP; - - printk(KERN_ERR "%s: over max size limit. 
(%u > %u)\n", - __func__, blk_rq_sectors(rq), max_sectors); - return BLK_STS_IOERR; - } - - /* - * The queue settings related to segment counting may differ from the - * original queue. - */ - rq->nr_phys_segments = blk_recalc_rq_segments(rq); - if (rq->nr_phys_segments > queue_max_segments(q)) { - printk(KERN_ERR "%s: over max segments limit. (%hu > %hu)\n", - __func__, rq->nr_phys_segments, queue_max_segments(q)); - return BLK_STS_IOERR; - } - - return BLK_STS_OK; -} - -/** - * blk_insert_cloned_request - Helper for stacking drivers to submit a request - * @q: the queue to submit the request - * @rq: the request being queued - */ -blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq) -{ - blk_status_t ret; - - ret = blk_cloned_rq_check_limits(q, rq); - if (ret != BLK_STS_OK) - return ret; - - if (rq->rq_disk && - should_fail_request(rq->rq_disk->part0, blk_rq_bytes(rq))) - return BLK_STS_IOERR; - - if (blk_crypto_insert_cloned_request(rq)) - return BLK_STS_IOERR; - - blk_account_io_start(rq); - - /* - * Since we have a scheduler attached on the top device, - * bypass a potential scheduler on the bottom device for - * insert. - */ - return blk_mq_request_issue_directly(rq, true); -} -EXPORT_SYMBOL_GPL(blk_insert_cloned_request); - static void update_io_ticks(struct block_device *part, unsigned long now, bool end) { @@ -1325,93 +1230,6 @@ int blk_lld_busy(struct request_queue *q) } EXPORT_SYMBOL_GPL(blk_lld_busy); -/** - * blk_rq_unprep_clone - Helper function to free all bios in a cloned request - * @rq: the clone request to be cleaned up - * - * Description: - * Free all bios in @rq for a cloned request. - */ -void blk_rq_unprep_clone(struct request *rq) -{ - struct bio *bio; - - while ((bio = rq->bio) != NULL) { - rq->bio = bio->bi_next; - - bio_put(bio); - } -} -EXPORT_SYMBOL_GPL(blk_rq_unprep_clone); - -/** - * blk_rq_prep_clone - Helper function to setup clone request - * @rq: the request to be setup - * @rq_src: original request to be cloned - * @bs: bio_set that bios for clone are allocated from - * @gfp_mask: memory allocation mask for bio - * @bio_ctr: setup function to be called for each clone bio. - * Returns %0 for success, non %0 for failure. - * @data: private data to be passed to @bio_ctr - * - * Description: - * Clones bios in @rq_src to @rq, and copies attributes of @rq_src to @rq. - * Also, pages which the original bios are pointing to are not copied - * and the cloned bios just point same pages. - * So cloned bios must be completed before original bios, which means - * the caller must complete @rq before @rq_src. - */ -int blk_rq_prep_clone(struct request *rq, struct request *rq_src, - struct bio_set *bs, gfp_t gfp_mask, - int (*bio_ctr)(struct bio *, struct bio *, void *), - void *data) -{ - struct bio *bio, *bio_src; - - if (!bs) - bs = &fs_bio_set; - - __rq_for_each_bio(bio_src, rq_src) { - bio = bio_clone_fast(bio_src, gfp_mask, bs); - if (!bio) - goto free_and_out; - - if (bio_ctr && bio_ctr(bio, bio_src, data)) - goto free_and_out; - - if (rq->bio) { - rq->biotail->bi_next = bio; - rq->biotail = bio; - } else { - rq->bio = rq->biotail = bio; - } - bio = NULL; - } - - /* Copy attributes of the original request to the clone request. 
-        rq->__sector = blk_rq_pos(rq_src);
-        rq->__data_len = blk_rq_bytes(rq_src);
-        if (rq_src->rq_flags & RQF_SPECIAL_PAYLOAD) {
-                rq->rq_flags |= RQF_SPECIAL_PAYLOAD;
-                rq->special_vec = rq_src->special_vec;
-        }
-        rq->nr_phys_segments = rq_src->nr_phys_segments;
-        rq->ioprio = rq_src->ioprio;
-
-        if (rq->bio && blk_crypto_rq_bio_prep(rq, rq->bio, gfp_mask) < 0)
-                goto free_and_out;
-
-        return 0;
-
-free_and_out:
-        if (bio)
-                bio_put(bio);
-        blk_rq_unprep_clone(rq);
-
-        return -ENOMEM;
-}
-EXPORT_SYMBOL_GPL(blk_rq_prep_clone);
-
 int kblockd_schedule_work(struct work_struct *work)
 {
        return queue_work(kblockd_workqueue, work);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d70a470c9c1f1..0362ec9ad4d14 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2434,7 +2434,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
        hctx_unlock(hctx, srcu_idx);
 }
 
-blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
+static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
 {
        blk_status_t ret;
        int srcu_idx;
@@ -2821,6 +2821,179 @@ void blk_mq_submit_bio(struct bio *bio)
        }
 }
 
+/**
+ * blk_cloned_rq_check_limits - Helper function to check a cloned request
+ *                              for the new queue limits
+ * @q: the queue
+ * @rq: the request being checked
+ *
+ * Description:
+ *    @rq may have been made based on weaker limitations of upper-level queues
+ *    in request stacking drivers, and it may violate the limitation of @q.
+ *    Since the block layer and the underlying device driver trust @rq
+ *    after it is inserted to @q, it should be checked against @q before
+ *    the insertion using this generic function.
+ *
+ *    Request stacking drivers like request-based dm may change the queue
+ *    limits when retrying requests on other queues. Those requests need
+ *    to be checked against the new queue limits again during dispatch.
+ */
+static blk_status_t blk_cloned_rq_check_limits(struct request_queue *q,
+                                               struct request *rq)
+{
+        unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+
+        if (blk_rq_sectors(rq) > max_sectors) {
+                /*
+                 * SCSI device does not have a good way to return if
+                 * Write Same/Zero is actually supported. If a device rejects
+                 * a non-read/write command (discard, write same,etc.) the
+                 * low-level device driver will set the relevant queue limit to
+                 * 0 to prevent blk-lib from issuing more of the offending
+                 * operations. Commands queued prior to the queue limit being
+                 * reset need to be completed with BLK_STS_NOTSUPP to avoid I/O
+                 * errors being propagated to upper layers.
+                 */
+                if (max_sectors == 0)
+                        return BLK_STS_NOTSUPP;
+
+                printk(KERN_ERR "%s: over max size limit. (%u > %u)\n",
+                        __func__, blk_rq_sectors(rq), max_sectors);
+                return BLK_STS_IOERR;
+        }
+
+        /*
+         * The queue settings related to segment counting may differ from the
+         * original queue.
+         */
+        rq->nr_phys_segments = blk_recalc_rq_segments(rq);
+        if (rq->nr_phys_segments > queue_max_segments(q)) {
+                printk(KERN_ERR "%s: over max segments limit. (%hu > %hu)\n",
(%hu > %hu)\n", + __func__, rq->nr_phys_segments, queue_max_segments(q)); + return BLK_STS_IOERR; + } + + return BLK_STS_OK; +} + +/** + * blk_insert_cloned_request - Helper for stacking drivers to submit a request + * @q: the queue to submit the request + * @rq: the request being queued + */ +blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq) +{ + blk_status_t ret; + + ret = blk_cloned_rq_check_limits(q, rq); + if (ret != BLK_STS_OK) + return ret; + + if (rq->rq_disk && + should_fail_request(rq->rq_disk->part0, blk_rq_bytes(rq))) + return BLK_STS_IOERR; + + if (blk_crypto_insert_cloned_request(rq)) + return BLK_STS_IOERR; + + blk_account_io_start(rq); + + /* + * Since we have a scheduler attached on the top device, + * bypass a potential scheduler on the bottom device for + * insert. + */ + return blk_mq_request_issue_directly(rq, true); +} +EXPORT_SYMBOL_GPL(blk_insert_cloned_request); + +/** + * blk_rq_unprep_clone - Helper function to free all bios in a cloned request + * @rq: the clone request to be cleaned up + * + * Description: + * Free all bios in @rq for a cloned request. + */ +void blk_rq_unprep_clone(struct request *rq) +{ + struct bio *bio; + + while ((bio = rq->bio) != NULL) { + rq->bio = bio->bi_next; + + bio_put(bio); + } +} +EXPORT_SYMBOL_GPL(blk_rq_unprep_clone); + +/** + * blk_rq_prep_clone - Helper function to setup clone request + * @rq: the request to be setup + * @rq_src: original request to be cloned + * @bs: bio_set that bios for clone are allocated from + * @gfp_mask: memory allocation mask for bio + * @bio_ctr: setup function to be called for each clone bio. + * Returns %0 for success, non %0 for failure. + * @data: private data to be passed to @bio_ctr + * + * Description: + * Clones bios in @rq_src to @rq, and copies attributes of @rq_src to @rq. + * Also, pages which the original bios are pointing to are not copied + * and the cloned bios just point same pages. + * So cloned bios must be completed before original bios, which means + * the caller must complete @rq before @rq_src. + */ +int blk_rq_prep_clone(struct request *rq, struct request *rq_src, + struct bio_set *bs, gfp_t gfp_mask, + int (*bio_ctr)(struct bio *, struct bio *, void *), + void *data) +{ + struct bio *bio, *bio_src; + + if (!bs) + bs = &fs_bio_set; + + __rq_for_each_bio(bio_src, rq_src) { + bio = bio_clone_fast(bio_src, gfp_mask, bs); + if (!bio) + goto free_and_out; + + if (bio_ctr && bio_ctr(bio, bio_src, data)) + goto free_and_out; + + if (rq->bio) { + rq->biotail->bi_next = bio; + rq->biotail = bio; + } else { + rq->bio = rq->biotail = bio; + } + bio = NULL; + } + + /* Copy attributes of the original request to the clone request. 
+        rq->__sector = blk_rq_pos(rq_src);
+        rq->__data_len = blk_rq_bytes(rq_src);
+        if (rq_src->rq_flags & RQF_SPECIAL_PAYLOAD) {
+                rq->rq_flags |= RQF_SPECIAL_PAYLOAD;
+                rq->special_vec = rq_src->special_vec;
+        }
+        rq->nr_phys_segments = rq_src->nr_phys_segments;
+        rq->ioprio = rq_src->ioprio;
+
+        if (rq->bio && blk_crypto_rq_bio_prep(rq, rq->bio, gfp_mask) < 0)
+                goto free_and_out;
+
+        return 0;
+
+free_and_out:
+        if (bio)
+                bio_put(bio);
+        blk_rq_unprep_clone(rq);
+
+        return -ENOMEM;
+}
+EXPORT_SYMBOL_GPL(blk_rq_prep_clone);
+
 static size_t order_to_size(unsigned int order)
 {
        return (size_t)PAGE_SIZE << order;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 8acfa650f5751..f39454456c064 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -65,9 +65,6 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
                bool run_queue);
 void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
                                struct list_head *list);
-
-/* Used by blk_insert_cloned_request() to issue request directly */
-blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last);
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
                struct list_head *list);
diff --git a/block/blk.h b/block/blk.h
index b4fed2033e48f..1bac4063afffb 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -493,4 +493,14 @@ int disk_register_independent_access_ranges(struct gendisk *disk,
                struct blk_independent_access_ranges *new_iars);
 void disk_unregister_independent_access_ranges(struct gendisk *disk);
 
+#ifdef CONFIG_FAIL_MAKE_REQUEST
+bool should_fail_request(struct block_device *part, unsigned int bytes);
+#else /* CONFIG_FAIL_MAKE_REQUEST */
+static inline bool should_fail_request(struct block_device *part,
+                                       unsigned int bytes)
+{
+        return false;
+}
+#endif /* CONFIG_FAIL_MAKE_REQUEST */
+
 #endif /* BLK_INTERNAL_H */
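
[The limits check that moves here is self-contained enough to model in
userspace. Below is a minimal sketch, under illustrative names (check_limits
and the TOY_* statuses are not kernel API), of the two-step test: a zero
max_sectors limit signals an unsupported operation rather than an oversized
one, and the recomputed segment count is checked against the new queue's
maximum.]

#include <stdio.h>

/* Toy model of the two checks in blk_cloned_rq_check_limits(). */
enum toy_status { TOY_OK, TOY_NOTSUPP, TOY_IOERR };

static enum toy_status check_limits(unsigned int rq_sectors,
                                    unsigned int max_sectors,
                                    unsigned int rq_segments,
                                    unsigned int max_segments)
{
        if (rq_sectors > max_sectors) {
                /* max_sectors == 0 means the op was rejected wholesale,
                 * so report "not supported" rather than an I/O error. */
                if (max_sectors == 0)
                        return TOY_NOTSUPP;
                return TOY_IOERR;
        }
        if (rq_segments > max_segments)
                return TOY_IOERR;
        return TOY_OK;
}

int main(void)
{
        printf("%d\n", check_limits(128, 0, 1, 16));    /* 1: TOY_NOTSUPP */
        printf("%d\n", check_limits(256, 128, 1, 16));  /* 2: TOY_IOERR */
        printf("%d\n", check_limits(64, 128, 4, 16));   /* 0: TOY_OK */
        return 0;
}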
From patchwork Wed Nov 17 06:14:00 2021
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen", Miquel Raynal, Richard Weinberger,
 Vignesh Raghavendra, linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 07/11] block: move blk_steal_bios to blk-mq.c
Date: Wed, 17 Nov 2021 07:14:00 +0100
Message-Id: <20211117061404.331732-8-hch@lst.de>
In-Reply-To: <20211117061404.331732-1-hch@lst.de>
References: <20211117061404.331732-1-hch@lst.de>

Keep all the request based code together.

Signed-off-by: Christoph Hellwig
---
 block/blk-core.c | 21 ---------------------
 block/blk-mq.c   | 21 +++++++++++++++++++++
 2 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index a3384c85074e3..723a8c84aef12 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1164,27 +1164,6 @@ void disk_end_io_acct(struct gendisk *disk, unsigned int op,
 }
 EXPORT_SYMBOL(disk_end_io_acct);
 
-/*
- * Steal bios from a request and add them to a bio list.
- * The request must not have been partially completed before.
- */
-void blk_steal_bios(struct bio_list *list, struct request *rq)
-{
-        if (rq->bio) {
-                if (list->tail)
-                        list->tail->bi_next = rq->bio;
-                else
-                        list->head = rq->bio;
-                list->tail = rq->biotail;
-
-                rq->bio = NULL;
-                rq->biotail = NULL;
-        }
-
-        rq->__data_len = 0;
-}
-EXPORT_SYMBOL_GPL(blk_steal_bios);
-
 /**
  * blk_lld_busy - Check if underlying low-level drivers of a device are busy
  * @q : the queue of the device being checked
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8d0d18ef07d09..300fa393e6445 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3011,6 +3011,27 @@ int blk_rq_prep_clone(struct request *rq, struct request *rq_src,
 }
 EXPORT_SYMBOL_GPL(blk_rq_prep_clone);
 
+/*
+ * Steal bios from a request and add them to a bio list.
+ * The request must not have been partially completed before.
+ */
+void blk_steal_bios(struct bio_list *list, struct request *rq)
+{
+        if (rq->bio) {
+                if (list->tail)
+                        list->tail->bi_next = rq->bio;
+                else
+                        list->head = rq->bio;
+                list->tail = rq->biotail;
+
+                rq->bio = NULL;
+                rq->biotail = NULL;
+        }
+
+        rq->__data_len = 0;
+}
+EXPORT_SYMBOL_GPL(blk_steal_bios);
+
 static size_t order_to_size(unsigned int order)
 {
        return (size_t)PAGE_SIZE << order;
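
[blk_steal_bios() is a constant-time splice of a request's singly linked bio
chain onto a head/tail list; no per-bio walk is needed because the request
already tracks its tail. A userspace sketch with simplified stand-in types
(the toy_* names are illustrative, not kernel types) showing the same
head/tail bookkeeping:]

#include <stdio.h>
#include <stddef.h>

/* Toy singly linked chain with head/tail, mirroring struct bio_list. */
struct toy_bio  { int id; struct toy_bio *next; };
struct toy_list { struct toy_bio *head, *tail; };
struct toy_req  { struct toy_bio *bio, *biotail; };

/* Same O(1) splice as blk_steal_bios(). */
static void steal_bios(struct toy_list *list, struct toy_req *rq)
{
        if (rq->bio) {
                if (list->tail)
                        list->tail->next = rq->bio;     /* append to tail */
                else
                        list->head = rq->bio;           /* list was empty */
                list->tail = rq->biotail;
                rq->bio = rq->biotail = NULL;           /* request now owns nothing */
        }
}

int main(void)
{
        struct toy_bio b2 = { 2, NULL }, b1 = { 1, &b2 };
        struct toy_req rq = { &b1, &b2 };
        struct toy_list list = { NULL, NULL };

        steal_bios(&list, &rq);
        for (struct toy_bio *b = list.head; b; b = b->next)
                printf("bio %d\n", b->id);      /* prints: bio 1, bio 2 */
        return 0;
}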
From patchwork Wed Nov 17 06:14:02 2021
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen", Miquel Raynal, Richard Weinberger,
 Vignesh Raghavendra, linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 09/11] block: move blk_dump_rq_flags to blk-mq.c
Date: Wed, 17 Nov 2021 07:14:02 +0100
Message-Id: <20211117061404.331732-10-hch@lst.de>
In-Reply-To: <20211117061404.331732-1-hch@lst.de>
References: <20211117061404.331732-1-hch@lst.de>

blk_dump_rq_flags deals with a request, so move it to blk-mq.c.

Signed-off-by: Christoph Hellwig
---
 block/blk-core.c | 14 --------------
 block/blk-mq.c   | 14 ++++++++++++++
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 60cc44418ce79..89971630f092f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -217,20 +217,6 @@ void blk_print_req_error(struct request *req, blk_status_t status)
                IOPRIO_PRIO_CLASS(req->ioprio));
 }
 
-void blk_dump_rq_flags(struct request *rq, char *msg)
-{
-        printk(KERN_INFO "%s: dev %s: flags=%llx\n", msg,
-                rq->rq_disk ? rq->rq_disk->disk_name : "?",
-                (unsigned long long) rq->cmd_flags);
-
-        printk(KERN_INFO "  sector %llu, nr/cnr %u/%u\n",
-                (unsigned long long)blk_rq_pos(rq),
-                blk_rq_sectors(rq), blk_rq_cur_sectors(rq));
-        printk(KERN_INFO "  bio %p, biotail %p, len %u\n",
-                rq->bio, rq->biotail, blk_rq_bytes(rq));
-}
-EXPORT_SYMBOL(blk_dump_rq_flags);
-
 /**
  * blk_sync_queue - cancel any pending callbacks on a queue
  * @q: the queue
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8b7edfc9623dd..f8a39f4fce01e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -667,6 +667,20 @@ void blk_mq_free_plug_rqs(struct blk_plug *plug)
                blk_mq_free_request(rq);
 }
 
+void blk_dump_rq_flags(struct request *rq, char *msg)
+{
+        printk(KERN_INFO "%s: dev %s: flags=%llx\n", msg,
+                rq->rq_disk ? rq->rq_disk->disk_name : "?",
+                (unsigned long long) rq->cmd_flags);
+
+        printk(KERN_INFO "  sector %llu, nr/cnr %u/%u\n",
+                (unsigned long long)blk_rq_pos(rq),
+                blk_rq_sectors(rq), blk_rq_cur_sectors(rq));
+        printk(KERN_INFO "  bio %p, biotail %p, len %u\n",
+                rq->bio, rq->biotail, blk_rq_bytes(rq));
+}
+EXPORT_SYMBOL(blk_dump_rq_flags);
+
 static void req_bio_endio(struct request *rq, struct bio *bio,
                unsigned int nbytes, blk_status_t error)
 {
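
[blk_dump_rq_flags() keeps its EXPORT_SYMBOL across the move, so
request-based drivers can still call it for debugging. A hypothetical call
site sketch; the mydrv_* names are illustrative, not an in-tree driver, and
this is a fragment rather than a complete driver:]

#include <linux/blk-mq.h>

/* Hypothetical queue_rq implementation that dumps each request before
 * starting it; useful only as a debugging aid. */
static blk_status_t mydrv_queue_rq(struct blk_mq_hw_ctx *hctx,
                                   const struct blk_mq_queue_data *bd)
{
        struct request *rq = bd->rq;

        blk_dump_rq_flags(rq, "mydrv: queueing"); /* flags, sector, nr, bio, len */
        blk_mq_start_request(rq);
        /* ...hand rq to the (hypothetical) hardware here... */
        return BLK_STS_OK;
}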
From patchwork Wed Nov 17 06:14:04 2021
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen", Miquel Raynal, Richard Weinberger,
 Vignesh Raghavendra, linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 11/11] block: don't include blk-mq headers in blk-core.c
Date: Wed, 17 Nov 2021 07:14:04 +0100
Message-Id: <20211117061404.331732-12-hch@lst.de>
In-Reply-To: <20211117061404.331732-1-hch@lst.de>
References: <20211117061404.331732-1-hch@lst.de>

All request based code is in the blk-mq files now.

Signed-off-by: Christoph Hellwig
---
 block/blk-core.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 5722c1d9da09c..ee54b34d5e99c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -16,7 +16,6 @@
 #include
 #include
 #include
-#include <linux/blk-mq.h>
 #include
 #include
 #include
@@ -47,8 +46,6 @@
 #include
 #include "blk.h"
-#include "blk-mq.h"
-#include "blk-mq-sched.h"
 #include "blk-pm.h"
 #include "blk-throttle.h"

[Archive note: the header names inside angle brackets were stripped by the
archive in the context lines above and cannot be recovered; the removed
<linux/blk-mq.h> line is inferred from the patch subject and the quoted
"blk-mq.h"/"blk-mq-sched.h" removals.]