From patchwork Tue Jun 28 13:30:02 2022
X-Patchwork-Submitter: Christian Loehle
X-Patchwork-Id: 585990
From: Christian Loehle
To: Adrian Hunter, Avri Altman, "ulf.hansson@linaro.org",
	"linux-mmc@vger.kernel.org", "linux-kernel@vger.kernel.org"
Subject: [PATCHv3] mmc: block: Add single read for 4k sector cards
Date: Tue, 28 Jun 2022 13:30:02 +0000
X-Mailing-List: linux-mmc@vger.kernel.org

Cards with a 4k native sector size may only be read 4k-aligned.
Accommodate this in the single-read recovery path and use it for
such cards as well.
Fixes: 81196976ed946 ("mmc: block: Add blk-mq support")
Signed-off-by: Christian Loehle
---
 drivers/mmc/core/block.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index f4a1281658db..7b73d77687b1 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -176,7 +176,7 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
 				      unsigned int part_type);
 static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 			       struct mmc_card *card,
-			       int disable_multi,
+			       int recovery_mode,
 			       struct mmc_queue *mq);
 static void mmc_blk_hsq_req_done(struct mmc_request *mrq);
 
@@ -1302,7 +1302,7 @@ static void mmc_blk_eval_resp_error(struct mmc_blk_request *brq)
 }
 
 static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
-			      int disable_multi, bool *do_rel_wr_p,
+			      int recovery_mode, bool *do_rel_wr_p,
 			      bool *do_data_tag_p)
 {
 	struct mmc_blk_data *md = mq->blkdata;
@@ -1368,12 +1368,12 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
 			brq->data.blocks--;
 
 		/*
-		 * After a read error, we redo the request one sector
+		 * After a read error, we redo the request one (native) sector
 		 * at a time in order to accurately determine which
 		 * sectors can be read successfully.
 		 */
-		if (disable_multi)
-			brq->data.blocks = 1;
+		if (recovery_mode)
+			brq->data.blocks = queue_physical_block_size(mq->queue) >> 9;
 
 		/*
 		 * Some controllers have HW issues while operating
@@ -1590,7 +1590,7 @@ static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req)
 
 static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 			       struct mmc_card *card,
-			       int disable_multi,
+			       int recovery_mode,
 			       struct mmc_queue *mq)
 {
 	u32 readcmd, writecmd;
@@ -1599,7 +1599,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 	struct mmc_blk_data *md = mq->blkdata;
 	bool do_rel_wr, do_data_tag;
 
-	mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag);
+	mmc_blk_data_prep(mq, mqrq, recovery_mode, &do_rel_wr, &do_data_tag);
 
 	brq->mrq.cmd = &brq->cmd;
 
@@ -1690,7 +1690,7 @@ static int mmc_blk_fix_state(struct mmc_card *card, struct request *req)
 
 #define MMC_READ_SINGLE_RETRIES	2
 
-/* Single sector read during recovery */
+/* Single (native) sector read during recovery */
 static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
 {
 	struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
@@ -1698,6 +1698,7 @@ static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
 	struct mmc_card *card = mq->card;
 	struct mmc_host *host = card->host;
 	blk_status_t error = BLK_STS_OK;
+	size_t bytes_per_read = queue_physical_block_size(mq->queue);
 
 	do {
 		u32 status;
@@ -1732,13 +1733,13 @@ static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
 		else
 			error = BLK_STS_OK;
 
-	} while (blk_update_request(req, error, 512));
+	} while (blk_update_request(req, error, bytes_per_read));
 
 	return;
 
 error_exit:
 	mrq->data->bytes_xfered = 0;
-	blk_update_request(req, BLK_STS_IOERR, 512);
+	blk_update_request(req, BLK_STS_IOERR, bytes_per_read);
 	/* Let it try the remaining request again */
 	if (mqrq->retries > MMC_MAX_RETRIES - 1)
 		mqrq->retries = MMC_MAX_RETRIES - 1;
@@ -1879,10 +1880,8 @@ static void mmc_blk_mq_rw_recovery(struct mmc_queue *mq, struct request *req)
 		return;
 	}
 
-	/* FIXME: Missing single sector read for large sector size */
-	if (!mmc_large_sector(card) && rq_data_dir(req) == READ &&
-	    brq->data.blocks > 1) {
-		/* Read one sector at a time */
+	if (rq_data_dir(req) == READ && brq->data.blocks > 1) {
+		/* Read one (native) sector at a time */
 		mmc_blk_read_single(mq, req);
 		return;
 	}
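
For readers following the arithmetic: the recovery path now derives its
per-iteration read size from the queue's physical block size instead of
hard-coding 512 bytes. The small userspace sketch below is illustration
only, not part of the patch; the example sizes in the array are made up
to show how the same ">> 9" shift used above maps a native sector size
to a count of 512-byte MMC blocks.

	/*
	 * Illustration only -- not part of the patch. Mirrors the
	 * queue_physical_block_size(mq->queue) >> 9 computation: a
	 * 512-byte native sector gives 1 block per recovery read, a
	 * 4k native sector gives 8 blocks, so reads stay 4k-aligned.
	 */
	#include <stdio.h>

	int main(void)
	{
		unsigned int native_sector_sizes[] = { 512, 4096 }; /* example values */

		for (unsigned int i = 0; i < 2; i++) {
			unsigned int bytes = native_sector_sizes[i];
			unsigned int blocks = bytes >> 9; /* 512-byte MMC blocks */

			printf("native sector %4u bytes -> %u block(s) per recovery read\n",
			       bytes, blocks);
		}
		return 0;
	}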