From patchwork Wed Feb 1 12:47:52 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 93012
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: Chunyan Zhang, Baolin Wang, Linus Walleij
Subject: [PATCH 02/10] mmc: block: rename rqc and req
Date: Wed, 1 Feb 2017 13:47:52 +0100
Message-Id: <20170201124800.13865-3-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20170201124800.13865-1-linus.walleij@linaro.org>
References: <20170201124800.13865-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

In the function mmc_blk_issue_rw_rq(), the new request coming in from the
block layer is called "rqc", and the old request, which was potentially just
returned back from the asynchronous mechanism, is called "req". This is
really confusing when trying to analyze and understand the code; it becomes
a perceptual nightmare to me. Maybe others have better parserheads, but it
is not working for me.

Rename "rqc" to "new_req" and "req" to "old_req" so that the syntax
reflects what is semantically going on.
Signed-off-by: Linus Walleij
---
 drivers/mmc/core/block.c | 56 ++++++++++++++++++++++++------------------------
 1 file changed, 28 insertions(+), 28 deletions(-)

-- 
2.9.3

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 0e23dba62613..e1479f114247 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1619,7 +1619,7 @@ static void mmc_blk_rw_start_new(struct mmc_queue *mq, struct mmc_card *card,
 	}
 }
 
-static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
+static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 {
 	struct mmc_blk_data *md = mq->blkdata;
 	struct mmc_card *card = md->queue.card;
@@ -1627,24 +1627,24 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 	int ret = 1, disable_multi = 0, retry = 0, type, retune_retry_done = 0;
 	enum mmc_blk_status status;
 	struct mmc_queue_req *mq_rq;
-	struct request *req;
+	struct request *old_req;
 	struct mmc_async_req *new_areq;
 	struct mmc_async_req *old_areq;
 
-	if (!rqc && !mq->mqrq_prev->req)
+	if (!new_req && !mq->mqrq_prev->req)
 		return;
 
 	do {
-		if (rqc) {
+		if (new_req) {
 			/*
 			 * When 4KB native sector is enabled, only 8 blocks
 			 * multiple read or write is allowed
 			 */
 			if (mmc_large_sector(card) &&
-			    !IS_ALIGNED(blk_rq_sectors(rqc), 8)) {
+			    !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
 				pr_err("%s: Transfer size is not 4KB sector size aligned\n",
-					rqc->rq_disk->disk_name);
-				mmc_blk_rw_cmd_abort(card, rqc);
+					new_req->rq_disk->disk_name);
+				mmc_blk_rw_cmd_abort(card, new_req);
 				return;
 			}
 
@@ -1671,8 +1671,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 		 */
 		mq_rq = container_of(old_areq, struct mmc_queue_req, mmc_active);
 		brq = &mq_rq->brq;
-		req = mq_rq->req;
-		type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
+		old_req = mq_rq->req;
+		type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
 		mmc_queue_bounce_post(mq_rq);
 
 		switch (status) {
@@ -1683,7 +1683,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 			 */
 			mmc_blk_reset_success(md, type);
 
-			ret = blk_end_request(req, 0,
+			ret = blk_end_request(old_req, 0,
 					brq->data.bytes_xfered);
 
 			/*
@@ -1693,21 +1693,21 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 			 */
 			if (status == MMC_BLK_SUCCESS && ret) {
 				pr_err("%s BUG rq_tot %d d_xfer %d\n",
-				       __func__, blk_rq_bytes(req),
+				       __func__, blk_rq_bytes(old_req),
 				       brq->data.bytes_xfered);
-				mmc_blk_rw_cmd_abort(card, req);
+				mmc_blk_rw_cmd_abort(card, old_req);
 				return;
 			}
 			break;
 		case MMC_BLK_CMD_ERR:
-			ret = mmc_blk_cmd_err(md, card, brq, req, ret);
+			ret = mmc_blk_cmd_err(md, card, brq, old_req, ret);
 			if (mmc_blk_reset(md, card->host, type)) {
-				mmc_blk_rw_cmd_abort(card, req);
-				mmc_blk_rw_start_new(mq, card, rqc);
+				mmc_blk_rw_cmd_abort(card, old_req);
+				mmc_blk_rw_start_new(mq, card, new_req);
 				return;
 			}
 			if (!ret) {
-				mmc_blk_rw_start_new(mq, card, rqc);
+				mmc_blk_rw_start_new(mq, card, new_req);
 				return;
 			}
 			break;
@@ -1719,8 +1719,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 		case MMC_BLK_ABORT:
 			if (!mmc_blk_reset(md, card->host, type))
 				break;
-			mmc_blk_rw_cmd_abort(card, req);
-			mmc_blk_rw_start_new(mq, card, rqc);
+			mmc_blk_rw_cmd_abort(card, old_req);
+			mmc_blk_rw_start_new(mq, card, new_req);
 			return;
 		case MMC_BLK_DATA_ERR: {
 			int err;
@@ -1729,8 +1729,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 			if (!err)
 				break;
 			if (err == -ENODEV) {
-				mmc_blk_rw_cmd_abort(card, req);
-				mmc_blk_rw_start_new(mq, card, rqc);
+				mmc_blk_rw_cmd_abort(card, old_req);
+				mmc_blk_rw_start_new(mq, card, new_req);
 				return;
 			}
 			/* Fall through */
@@ -1739,7 +1739,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 			if (brq->data.blocks > 1) {
 				/* Redo read one sector at a time */
 				pr_warn("%s: retrying using single block read\n",
-					req->rq_disk->disk_name);
+					old_req->rq_disk->disk_name);
 				disable_multi = 1;
 				break;
 			}
@@ -1748,22 +1748,22 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
 			 * time, so we only reach here after trying to
 			 * read a single sector.
 			 */
-			ret = blk_end_request(req, -EIO,
+			ret = blk_end_request(old_req, -EIO,
 					brq->data.blksz);
 			if (!ret) {
-				mmc_blk_rw_start_new(mq, card, rqc);
+				mmc_blk_rw_start_new(mq, card, new_req);
 				return;
 			}
 			break;
 		case MMC_BLK_NOMEDIUM:
-			mmc_blk_rw_cmd_abort(card, req);
-			mmc_blk_rw_start_new(mq, card, rqc);
+			mmc_blk_rw_cmd_abort(card, old_req);
+			mmc_blk_rw_start_new(mq, card, new_req);
 			return;
 		default:
 			pr_err("%s: Unhandled return value (%d)",
-			       req->rq_disk->disk_name, status);
-			mmc_blk_rw_cmd_abort(card, req);
-			mmc_blk_rw_start_new(mq, card, rqc);
+			       old_req->rq_disk->disk_name, status);
+			mmc_blk_rw_cmd_abort(card, old_req);
+			mmc_blk_rw_start_new(mq, card, new_req);
 			return;
 		}