From patchwork Thu Oct 26 12:57:46 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 117222
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 01/12 v4] mmc: core: move the asynchronous post-processing
Date: Thu, 26 Oct 2017 14:57:46 +0200
Message-Id: <20171026125757.10200-2-linus.walleij@linaro.org>
In-Reply-To: <20171026125757.10200-1-linus.walleij@linaro.org>
References: <20171026125757.10200-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

This moves the asynchronous post-processing of a request over to the
finalization function.

The patch has a slight semantic change: both places will be in the code
path for if (host->areq) and in the same sequence, but before this
patch, the next request was started before performing post-processing.
The effect is that whereas before, the post- and pre-processing
happened after starting the next request, now the pre-processing will
happen after the request is done and before the next one has started,
which cuts out half of the pre/post optimizations.
In the later patch named "mmc: core: replace waitqueue with worker" we
move the finalization to a worker started by mmc_request_done(), and in
the patch named "mmc: block: issue requests in massive parallel" we
introduce a forked success/failure path that can quickly complete
requests when they come back from the hardware. These two later patches
together restore the same optimization, but in a more elegant manner
that avoids the need to flush the two-stage pipeline with NULL,
something we remove between these two patches in the commit named
"mmc: queue: stop flushing the pipeline with NULL".

Signed-off-by: Linus Walleij
---
 drivers/mmc/core/core.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 12b271c2a912..3d1270b9aec4 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -746,6 +746,9 @@ static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host)
 		mmc_start_bkops(host->card, true);
 	}
 
+	/* Successfully postprocess the old request at this point */
+	mmc_post_req(host, host->areq->mrq, 0);
+
 	return status;
 }
 
@@ -790,10 +793,6 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 	if (status == MMC_BLK_SUCCESS && areq)
 		start_err = __mmc_start_data_req(host, areq->mrq);
 
-	/* Postprocess the old request at this point */
-	if (host->areq)
-		mmc_post_req(host, host->areq->mrq, 0);
-
 	/* Cancel a prepared request if it was not started.
 	 */
 	if ((status != MMC_BLK_SUCCESS || start_err) && areq)
 		mmc_post_req(host, areq->mrq, -EINVAL);

From patchwork Thu Oct 26 12:57:47 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 117223
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 02/12 v4] mmc: core: add a workqueue for completing requests
Date: Thu, 26 Oct 2017 14:57:47 +0200
Message-Id: <20171026125757.10200-3-linus.walleij@linaro.org>
In-Reply-To: <20171026125757.10200-1-linus.walleij@linaro.org>
References: <20171026125757.10200-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

As we want to complete requests autonomously from feeding the host with
new requests, we create a workqueue to deal with this specifically in
response to the callback from a host driver. This is necessary to
exploit parallelism properly.

This patch just adds the workqueue; later patches will make use of it.
Signed-off-by: Linus Walleij
---
 drivers/mmc/core/core.c  | 9 +++++++++
 drivers/mmc/core/host.c  | 1 -
 include/linux/mmc/host.h | 4 ++++
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 3d1270b9aec4..9c3baaddb1bd 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -2828,6 +2828,14 @@ void mmc_start_host(struct mmc_host *host)
 	host->f_init = max(freqs[0], host->f_min);
 	host->rescan_disable = 0;
 	host->ios.power_mode = MMC_POWER_UNDEFINED;
+	/* Workqueue for completing requests */
+	host->req_done_wq = alloc_workqueue("mmc%d-reqdone",
+			WQ_FREEZABLE | WQ_HIGHPRI | WQ_MEM_RECLAIM,
+			0, host->index);
+	if (!host->req_done_wq) {
+		dev_err(mmc_dev(host), "could not allocate workqueue\n");
+		return;
+	}
 
 	if (!(host->caps2 & MMC_CAP2_NO_PRESCAN_POWERUP)) {
 		mmc_claim_host(host);
@@ -2849,6 +2857,7 @@ void mmc_stop_host(struct mmc_host *host)
 	host->rescan_disable = 1;
 	cancel_delayed_work_sync(&host->detect);
+	destroy_workqueue(host->req_done_wq);
 
 	/* clear pm flags now and let card drivers set them as needed */
 	host->pm_flags = 0;
diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index e58be39b1568..8193363a5a46 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -381,7 +381,6 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
 	INIT_DELAYED_WORK(&host->detect, mmc_rescan);
 	INIT_DELAYED_WORK(&host->sdio_irq_work, sdio_irq_work);
 	setup_timer(&host->retune_timer, mmc_retune_timer, (unsigned long)host);
-
 	/*
 	 * By default, hosts do not support SGIO or large requests.
 	 * They have to set these according to their abilities.
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index c296f4351c1d..94a646eebf05 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -423,6 +424,9 @@ struct mmc_host {
 	struct mmc_async_req	*areq;		/* active async req */
 	struct mmc_context_info	context_info;	/* async synchronization info */
 
+	/* finalization workqueue, handles finalizing requests */
+	struct workqueue_struct *req_done_wq;
+
 	/* Ongoing data transfer that allows commands during transfer */
 	struct mmc_request	*ongoing_mrq;

From patchwork Thu Oct 26 12:57:48 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 117224
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 03/12 v4] mmc: core: replace waitqueue with worker
Date: Thu, 26 Oct 2017 14:57:48 +0200
Message-Id: <20171026125757.10200-4-linus.walleij@linaro.org>
In-Reply-To: <20171026125757.10200-1-linus.walleij@linaro.org>
References: <20171026125757.10200-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

The waitqueue in the host context is there to signal back from
mmc_request_done() through mmc_wait_data_done() that the hardware is
done with a command, and when the wait is over, the core will typically
submit the next asynchronous request that is pending, just waiting for
the hardware to be available.

This is in the way of letting mmc_request_done() trigger the report up
to the block layer that a block request is finished.

Re-jig this as a first step, removing the waitqueue and introducing a
work that will run after a completed asynchronous request, finalizing
that request, including retransmissions, and eventually reporting back
with a completion and a status code to the asynchronous issue method.

This has the upside that we can remove the MMC_BLK_NEW_REQUEST status
code and the "new_request" state in the request queue that is only
there to make the state machine spin out the first time we send a
request.

Use the workqueue we introduced in the host for handling just this, and
then add a work and completion in the asynchronous request to deal with
this mechanism.

We introduce a pointer from mmc_request back to the asynchronous
request so these can be referenced from each other, and augment
mmc_wait_data_done() to use this pointer to get at the areq and kick
the worker, since that function is only used by asynchronous requests
anyway.

This is a central change that lets us do many other changes, since we
have broken the submit and complete code paths in two, and we can
potentially remove the NULL flushing of the asynchronous pipeline and
report block requests as finished directly from the worker.
Signed-off-by: Linus Walleij
---
 drivers/mmc/core/block.c |  3 ++
 drivers/mmc/core/core.c  | 93 ++++++++++++++++++++++++------------------------
 drivers/mmc/core/core.h  |  2 ++
 drivers/mmc/core/queue.c |  1 -
 include/linux/mmc/core.h |  3 +-
 include/linux/mmc/host.h |  7 ++--
 6 files changed, 59 insertions(+), 50 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index ea80ff4cd7f9..5c84175e49be 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1712,6 +1712,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 	mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag);
 
 	brq->mrq.cmd = &brq->cmd;
+	brq->mrq.areq = NULL;
 
 	brq->cmd.arg = blk_rq_pos(req);
 	if (!mmc_card_blockaddr(card))
@@ -1764,6 +1765,8 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
 	}
 
 	mqrq->areq.err_check = mmc_blk_err_check;
+	mqrq->areq.host = card->host;
+	INIT_WORK(&mqrq->areq.finalization_work, mmc_finalize_areq);
 }
 
 static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 9c3baaddb1bd..f6a51608ab0b 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -369,10 +369,15 @@ EXPORT_SYMBOL(mmc_start_request);
  */
 static void mmc_wait_data_done(struct mmc_request *mrq)
 {
-	struct mmc_context_info *context_info = &mrq->host->context_info;
+	struct mmc_host *host = mrq->host;
+	struct mmc_context_info *context_info = &host->context_info;
+	struct mmc_async_req *areq = mrq->areq;
 
 	context_info->is_done_rcv = true;
-	wake_up_interruptible(&context_info->wait);
+	/* Schedule a work to deal with finalizing this request */
+	if (!areq)
+		pr_err("areq of the data mmc_request was NULL!\n");
+	queue_work(host->req_done_wq, &areq->finalization_work);
 }
 
 static void mmc_wait_done(struct mmc_request *mrq)
@@ -695,43 +700,34 @@ static void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
  * Returns the status of the ongoing asynchronous request, but
  * MMC_BLK_SUCCESS if no request was going on.
  */
-static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host)
+void mmc_finalize_areq(struct work_struct *work)
 {
+	struct mmc_async_req *areq =
+		container_of(work, struct mmc_async_req, finalization_work);
+	struct mmc_host *host = areq->host;
 	struct mmc_context_info *context_info = &host->context_info;
-	enum mmc_blk_status status;
-
-	if (!host->areq)
-		return MMC_BLK_SUCCESS;
-
-	while (1) {
-		wait_event_interruptible(context_info->wait,
-				(context_info->is_done_rcv ||
-				 context_info->is_new_req));
+	enum mmc_blk_status status = MMC_BLK_SUCCESS;
 
-		if (context_info->is_done_rcv) {
-			struct mmc_command *cmd;
+	if (context_info->is_done_rcv) {
+		struct mmc_command *cmd;
 
-			context_info->is_done_rcv = false;
-			cmd = host->areq->mrq->cmd;
+		context_info->is_done_rcv = false;
+		cmd = areq->mrq->cmd;
 
-			if (!cmd->error || !cmd->retries ||
-			    mmc_card_removed(host->card)) {
-				status = host->areq->err_check(host->card,
-							       host->areq);
-				break; /* return status */
-			} else {
-				mmc_retune_recheck(host);
-				pr_info("%s: req failed (CMD%u): %d, retrying...\n",
-					mmc_hostname(host),
-					cmd->opcode, cmd->error);
-				cmd->retries--;
-				cmd->error = 0;
-				__mmc_start_request(host, host->areq->mrq);
-				continue; /* wait for done/new event again */
-			}
+		if (!cmd->error || !cmd->retries ||
+		    mmc_card_removed(host->card)) {
+			status = areq->err_check(host->card,
+						 areq);
+		} else {
+			mmc_retune_recheck(host);
+			pr_info("%s: req failed (CMD%u): %d, retrying...\n",
+				mmc_hostname(host),
+				cmd->opcode, cmd->error);
+			cmd->retries--;
+			cmd->error = 0;
+			__mmc_start_request(host, areq->mrq);
+			return; /* wait for done/new event again */
 		}
-
-		return MMC_BLK_NEW_REQUEST;
 	}
 
 	mmc_retune_release(host);
@@ -740,17 +736,19 @@ static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host)
 	 * Check BKOPS urgency for each R1 response
 	 */
 	if (host->card && mmc_card_mmc(host->card) &&
-	    ((mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1) ||
-	     (mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1B)) &&
-	    (host->areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) {
+	    ((mmc_resp_type(areq->mrq->cmd) == MMC_RSP_R1) ||
+	     (mmc_resp_type(areq->mrq->cmd) == MMC_RSP_R1B)) &&
+	    (areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) {
 		mmc_start_bkops(host->card, true);
 	}
 
 	/* Successfully postprocess the old request at this point */
-	mmc_post_req(host, host->areq->mrq, 0);
+	mmc_post_req(host, areq->mrq, 0);
 
-	return status;
+	areq->finalization_status = status;
+	complete(&areq->complete);
 }
+EXPORT_SYMBOL(mmc_finalize_areq);
 
 /**
  * mmc_start_areq - start an asynchronous request
@@ -780,18 +778,22 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 	if (areq)
 		mmc_pre_req(host, areq->mrq);
 
-	/* Finalize previous request */
-	status = mmc_finalize_areq(host);
+	/* Finalize previous request, if there is one */
+	if (previous) {
+		wait_for_completion(&previous->complete);
+		status = previous->finalization_status;
+	} else {
+		status = MMC_BLK_SUCCESS;
+	}
 	if (ret_stat)
 		*ret_stat = status;
 
-	/* The previous request is still going on... */
-	if (status == MMC_BLK_NEW_REQUEST)
-		return NULL;
-
 	/* Fine so far, start the new request! */
-	if (status == MMC_BLK_SUCCESS && areq)
+	if (status == MMC_BLK_SUCCESS && areq) {
+		init_completion(&areq->complete);
+		areq->mrq->areq = areq;
 		start_err = __mmc_start_data_req(host, areq->mrq);
+	}
 
 	/* Cancel a prepared request if it was not started.
 	 */
 	if ((status != MMC_BLK_SUCCESS || start_err) && areq)
@@ -3005,7 +3007,6 @@ void mmc_init_context_info(struct mmc_host *host)
 	host->context_info.is_new_req = false;
 	host->context_info.is_done_rcv = false;
 	host->context_info.is_waiting_last_req = false;
-	init_waitqueue_head(&host->context_info.wait);
 }
 
 static int __init mmc_init(void)
diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
index 71e6c6d7ceb7..e493d9d73fe2 100644
--- a/drivers/mmc/core/core.h
+++ b/drivers/mmc/core/core.h
@@ -13,6 +13,7 @@
 #include
 #include
+#include
 
 struct mmc_host;
 struct mmc_card;
@@ -112,6 +113,7 @@ int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq);
 
 struct mmc_async_req;
 
+void mmc_finalize_areq(struct work_struct *work);
 struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
 				     struct mmc_async_req *areq,
 				     enum mmc_blk_status *ret_stat);
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 4f33d277b125..c46be4402803 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -111,7 +111,6 @@ static void mmc_request_fn(struct request_queue *q)
 
 	if (cntx->is_waiting_last_req) {
 		cntx->is_new_req = true;
-		wake_up_interruptible(&cntx->wait);
 	}
 
 	if (mq->asleep)
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index 927519385482..d755ef8ea880 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -13,6 +13,7 @@
 
 struct mmc_data;
 struct mmc_request;
+struct mmc_async_req;
 
 enum mmc_blk_status {
 	MMC_BLK_SUCCESS = 0,
@@ -23,7 +24,6 @@ enum mmc_blk_status {
 	MMC_BLK_DATA_ERR,
 	MMC_BLK_ECC_ERR,
 	MMC_BLK_NOMEDIUM,
-	MMC_BLK_NEW_REQUEST,
 };
 
 struct mmc_command {
@@ -155,6 +155,7 @@ struct mmc_request {
 	struct completion	completion;
 	struct completion	cmd_completion;
+	struct mmc_async_req	*areq; /* pointer to areq if any */
 	void			(*done)(struct mmc_request *);/* completion function */
 	/*
 	 * Notify uppers layers (e.g. mmc block driver) that recovery is needed
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 94a646eebf05..65f23a9ea724 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -215,6 +216,10 @@ struct mmc_async_req {
 	 * Returns 0 if success otherwise non zero.
 	 */
 	enum mmc_blk_status (*err_check)(struct mmc_card *, struct mmc_async_req *);
+	struct work_struct finalization_work;
+	enum mmc_blk_status finalization_status;
+	struct completion complete;
+	struct mmc_host *host;
 };
 
 /**
@@ -239,13 +244,11 @@ struct mmc_slot {
 * @is_done_rcv		wake up reason was done request
 * @is_new_req		wake up reason was new request
 * @is_waiting_last_req	mmc context waiting for single running request
- * @wait		wait queue
 */
 struct mmc_context_info {
 	bool			is_done_rcv;
 	bool			is_new_req;
 	bool			is_waiting_last_req;
-	wait_queue_head_t	wait;
 };
 
 struct regulator;

From patchwork Thu Oct 26 12:57:49 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 117225
s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:arc-authentication-results; bh=TK6g0PkE//fg13ylO9usmhvEYh+hDChFdJzrxKLpoac=; b=nBEP/FTG8P/ItO5uhTmwNn0z0Q8CGKvtXxkxiLf4wDJDkW0FLAQhRqW/ZCfQRGdLI3 vLQPgEBrI6gWRWaYrl/xrkpCeAczly0xw6o0NYZ/gzgzb5JQv9R6oi32lS3Xc0hPSzIl 3YASYFq0OBkSWght1d2/p45Ep+ABbhuzaY+6P6CKh4H5x9RZcQ5p5BWdZmz9t3doriAf yoeMIFG4mA31ThrqbbS+pdI0CCm9AAcum2lDlJCA6sQ05kw0MSC5OfLt+Zbqa0NtRVAZ TKHa5iMj2u6NPb4vjGQnaEntDW80KVvSjloQy62Dz+Jx0m1x80yh6Z4aWV2W8KK5vkmB V6gw== ARC-Authentication-Results: i=1; mx.google.com; dkim=neutral (body hash did not verify) header.i=@linaro.org header.s=google header.b=M8tFNozz; spf=pass (google.com: best guess record for domain of linux-mmc-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-mmc-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id i7si3348008pgc.634.2017.10.26.05.58.34; Thu, 26 Oct 2017 05:58:34 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-mmc-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=neutral (body hash did not verify) header.i=@linaro.org header.s=google header.b=M8tFNozz; spf=pass (google.com: best guess record for domain of linux-mmc-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-mmc-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S932376AbdJZM6d (ORCPT + 6 others); Thu, 26 Oct 2017 08:58:33 -0400 Received: from mail-lf0-f67.google.com ([209.85.215.67]:46614 "EHLO mail-lf0-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S932338AbdJZM6c (ORCPT ); Thu, 26 Oct 
From: Linus Walleij To: linux-mmc@vger.kernel.org, Ulf Hansson Cc: linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Arnd Bergmann , Bartlomiej Zolnierkiewicz , Paolo Valente , Avri Altman , Adrian Hunter , Linus Walleij Subject: [PATCH 04/12 v4] mmc: core: do away with is_done_rcv Date: Thu, 26 Oct 2017 14:57:49 +0200 Message-Id: <20171026125757.10200-5-linus.walleij@linaro.org>
The "is_done_rcv" in the context info for the host is no longer needed: it is clear from context (ha!) that as long as we are waiting for the asynchronous request to come to completion, we are not done receiving data, and when the finalization work has run and completed the completion, we are indeed done. Signed-off-by: Linus Walleij --- drivers/mmc/core/core.c | 40 ++++++++++++++++------------------------ include/linux/mmc/host.h | 2 -- 2 files changed, 16 insertions(+), 26 deletions(-) -- 2.13.6 -- To unsubscribe from this list: send the line "unsubscribe linux-mmc" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index f6a51608ab0b..68125360a078 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -370,10 +370,8 @@ EXPORT_SYMBOL(mmc_start_request); static void mmc_wait_data_done(struct mmc_request *mrq) { struct mmc_host *host = mrq->host; - struct mmc_context_info *context_info = &host->context_info; struct mmc_async_req *areq = mrq->areq; - context_info->is_done_rcv = true; /* Schedule a work to deal with finalizing this request */ if (!areq) pr_err("areq of the data mmc_request was NULL!\n"); @@ -656,7 +654,7 @@ EXPORT_SYMBOL(mmc_cqe_recovery); bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq) { if (host->areq) - return host->context_info.is_done_rcv; + return completion_done(&host->areq->complete); else return completion_done(&mrq->completion); } @@ -705,29 +703,24 @@ void mmc_finalize_areq(struct work_struct *work) struct mmc_async_req *areq = container_of(work, struct mmc_async_req, finalization_work); struct mmc_host *host = areq->host; - struct mmc_context_info 
*context_info = &host->context_info; enum mmc_blk_status status = MMC_BLK_SUCCESS; + struct mmc_command *cmd; - if (context_info->is_done_rcv) { - struct mmc_command *cmd; - - context_info->is_done_rcv = false; - cmd = areq->mrq->cmd; + cmd = areq->mrq->cmd; - if (!cmd->error || !cmd->retries || - mmc_card_removed(host->card)) { - status = areq->err_check(host->card, - areq); - } else { - mmc_retune_recheck(host); - pr_info("%s: req failed (CMD%u): %d, retrying...\n", - mmc_hostname(host), - cmd->opcode, cmd->error); - cmd->retries--; - cmd->error = 0; - __mmc_start_request(host, areq->mrq); - return; /* wait for done/new event again */ - } + if (!cmd->error || !cmd->retries || + mmc_card_removed(host->card)) { + status = areq->err_check(host->card, + areq); + } else { + mmc_retune_recheck(host); + pr_info("%s: req failed (CMD%u): %d, retrying...\n", + mmc_hostname(host), + cmd->opcode, cmd->error); + cmd->retries--; + cmd->error = 0; + __mmc_start_request(host, areq->mrq); + return; /* wait for done/new event again */ } mmc_retune_release(host); @@ -3005,7 +2998,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host) void mmc_init_context_info(struct mmc_host *host) { host->context_info.is_new_req = false; - host->context_info.is_done_rcv = false; host->context_info.is_waiting_last_req = false; } diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index 65f23a9ea724..d536325a9640 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -241,12 +241,10 @@ struct mmc_slot { /** * mmc_context_info - synchronization details for mmc context - * @is_done_rcv wake up reason was done request * @is_new_req wake up reason was new request * @is_waiting_last_req mmc context waiting for single running request */ struct mmc_context_info { - bool is_done_rcv; bool is_new_req; bool is_waiting_last_req; };
From patchwork Thu Oct 26 12:57:50 2017 X-Patchwork-Submitter: Linus Walleij X-Patchwork-Id: 117226
From: Linus Walleij To: linux-mmc@vger.kernel.org, Ulf Hansson Cc: linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Arnd Bergmann , Bartlomiej Zolnierkiewicz , Paolo Valente , Avri Altman , Adrian Hunter , Linus Walleij Subject: [PATCH 05/12 v4] mmc: core: do away with is_new_req Date: Thu, 26 Oct 2017 14:57:50 +0200 Message-Id: <20171026125757.10200-6-linus.walleij@linaro.org>
The host context member "is_new_req" is only assigned values, never checked. Delete it. 
Signed-off-by: Linus Walleij --- drivers/mmc/core/core.c | 1 - drivers/mmc/core/queue.c | 5 ----- include/linux/mmc/host.h | 2 -- 3 files changed, 8 deletions(-) -- 2.13.6 diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index 68125360a078..ad832317f25b 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -2997,7 +2997,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host) */ void mmc_init_context_info(struct mmc_host *host) { - host->context_info.is_new_req = false; host->context_info.is_waiting_last_req = false; } diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index c46be4402803..4a0752ef6154 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -55,7 +55,6 @@ static int mmc_queue_thread(void *d) req = blk_fetch_request(q); mq->asleep = false; cntx->is_waiting_last_req = false; - cntx->is_new_req = false; if (!req) { /* * Dispatch queue is empty so set flags for @@ -109,10 +108,6 @@ static void mmc_request_fn(struct request_queue *q) cntx = &mq->card->host->context_info; - if (cntx->is_waiting_last_req) { - cntx->is_new_req = true; - } - if (mq->asleep) wake_up_process(mq->thread); } diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index d536325a9640..ceb58b27f402 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -241,11 +241,9 @@ struct mmc_slot { /** * mmc_context_info - synchronization details for mmc context - * @is_new_req wake up reason was new request * @is_waiting_last_req mmc context waiting for single running request */ struct mmc_context_info { - bool is_new_req; bool is_waiting_last_req; };
From patchwork Thu Oct 26 12:57:51 2017 X-Patchwork-Submitter: Linus 
Walleij X-Patchwork-Id: 117227
From: Linus Walleij To: linux-mmc@vger.kernel.org, Ulf Hansson Cc: linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Arnd Bergmann , Bartlomiej Zolnierkiewicz , Paolo Valente , Avri Altman , Adrian Hunter , Linus Walleij Subject: [PATCH 06/12 v4] mmc: core: kill off the context info Date: Thu, 26 Oct 2017 14:57:51 +0200 Message-Id: <20171026125757.10200-7-linus.walleij@linaro.org>
The last member of the context info, is_waiting_last_req, is only assigned values, never checked. Delete it, and with it the whole context info. 
Signed-off-by: Linus Walleij --- drivers/mmc/core/block.c | 2 -- drivers/mmc/core/bus.c | 1 - drivers/mmc/core/core.c | 13 ------------- drivers/mmc/core/core.h | 2 -- drivers/mmc/core/queue.c | 9 +-------- include/linux/mmc/host.h | 9 --------- 6 files changed, 1 insertion(+), 35 deletions(-) -- 2.13.6 diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 5c84175e49be..86ec87c17e71 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -2065,13 +2065,11 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) default: /* Normal request, just issue it */ mmc_blk_issue_rw_rq(mq, req); - card->host->context_info.is_waiting_last_req = false; break; } } else { /* No request, flushing the pipeline with NULL */ mmc_blk_issue_rw_rq(mq, NULL); - card->host->context_info.is_waiting_last_req = false; } out: diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c index a4b49e25fe96..45904a7e87be 100644 --- a/drivers/mmc/core/bus.c +++ b/drivers/mmc/core/bus.c @@ -348,7 +348,6 @@ int mmc_add_card(struct mmc_card *card) #ifdef CONFIG_DEBUG_FS mmc_add_card_debugfs(card); #endif - mmc_init_context_info(card->host); card->dev.of_node = mmc_of_find_child_device(card->host, 0); diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index ad832317f25b..865db736c717 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -2987,19 +2987,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host) } #endif -/** - * mmc_init_context_info() - init synchronization context - * @host: mmc host - * - * Init struct context_info needed to implement asynchronous - * request mechanism, used by mmc core, host driver and mmc requests - * supplier. 
- */ -void mmc_init_context_info(struct mmc_host *host) -{ - host->context_info.is_waiting_last_req = false; -} - static int __init mmc_init(void) { int ret; diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h index e493d9d73fe2..88b852ac8f74 100644 --- a/drivers/mmc/core/core.h +++ b/drivers/mmc/core/core.h @@ -92,8 +92,6 @@ void mmc_remove_host_debugfs(struct mmc_host *host); void mmc_add_card_debugfs(struct mmc_card *card); void mmc_remove_card_debugfs(struct mmc_card *card); -void mmc_init_context_info(struct mmc_host *host); - int mmc_execute_tuning(struct mmc_card *card); int mmc_hs200_to_hs400(struct mmc_card *card); int mmc_hs400_to_hs200(struct mmc_card *card); diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 4a0752ef6154..2c232ba4e594 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -42,7 +42,6 @@ static int mmc_queue_thread(void *d) { struct mmc_queue *mq = d; struct request_queue *q = mq->queue; - struct mmc_context_info *cntx = &mq->card->host->context_info; current->flags |= PF_MEMALLOC; @@ -54,15 +53,12 @@ static int mmc_queue_thread(void *d) set_current_state(TASK_INTERRUPTIBLE); req = blk_fetch_request(q); mq->asleep = false; - cntx->is_waiting_last_req = false; if (!req) { /* * Dispatch queue is empty so set flags for * mmc_request_fn() to wake us up. 
*/ - if (mq->qcnt) - cntx->is_waiting_last_req = true; - else + if (!mq->qcnt) mq->asleep = true; } spin_unlock_irq(q->queue_lock); @@ -96,7 +92,6 @@ static void mmc_request_fn(struct request_queue *q) { struct mmc_queue *mq = q->queuedata; struct request *req; - struct mmc_context_info *cntx; if (!mq) { while ((req = blk_fetch_request(q)) != NULL) { @@ -106,8 +101,6 @@ static void mmc_request_fn(struct request_queue *q) return; } - cntx = &mq->card->host->context_info; - if (mq->asleep) wake_up_process(mq->thread); } diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index ceb58b27f402..638f11d185bd 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -239,14 +239,6 @@ struct mmc_slot { void *handler_priv; }; -/** - * mmc_context_info - synchronization details for mmc context - * @is_waiting_last_req mmc context waiting for single running request - */ -struct mmc_context_info { - bool is_waiting_last_req; -}; - struct regulator; struct mmc_pwrseq; @@ -421,7 +413,6 @@ struct mmc_host { struct dentry *debugfs_root; struct mmc_async_req *areq; /* active async req */ - struct mmc_context_info context_info; /* async synchronization info */ /* finalization workqueue, handles finalizing requests */ struct workqueue_struct *req_done_wq;
From patchwork Thu Oct 26 12:57:52 2017 X-Patchwork-Submitter: Linus Walleij X-Patchwork-Id: 117228
From: Linus Walleij To: linux-mmc@vger.kernel.org, Ulf Hansson Cc: linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Arnd Bergmann , Bartlomiej Zolnierkiewicz , Paolo Valente , Avri Altman , Adrian Hunter , Linus Walleij Subject: [PATCH 07/12 v4] mmc: queue: simplify queue logic Date: Thu, 26 Oct 2017 14:57:52 +0200 Message-Id: <20171026125757.10200-8-linus.walleij@linaro.org>
The if() statement checking whether there is no current or previous request merely looks ahead at something that will be concluded a few lines below anyway. Simplify the logic by moving the assignment of .asleep. 
Signed-off-by: Linus Walleij --- drivers/mmc/core/queue.c | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) -- 2.13.6 diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 2c232ba4e594..023bbddc1a0b 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -53,14 +53,6 @@ static int mmc_queue_thread(void *d) set_current_state(TASK_INTERRUPTIBLE); req = blk_fetch_request(q); mq->asleep = false; - if (!req) { - /* - * Dispatch queue is empty so set flags for - * mmc_request_fn() to wake us up. - */ - if (!mq->qcnt) - mq->asleep = true; - } spin_unlock_irq(q->queue_lock); if (req || mq->qcnt) { @@ -68,6 +60,7 @@ static int mmc_queue_thread(void *d) mmc_blk_issue_rq(mq, req); cond_resched(); } else { + mq->asleep = true; if (kthread_should_stop()) { set_current_state(TASK_RUNNING); break;
From patchwork Thu Oct 26 12:57:53 2017 X-Patchwork-Submitter: Linus Walleij X-Patchwork-Id: 117229
37JAu9o95jXKHK7eRNFeuhRPWRpjL7JEPpxgFlPtlYRSbMfXRMkhUrKMW4TlUQdsQ9LN 4yfA== X-Gm-Message-State: AMCzsaUVNuqjExAtn6dpBGT17tk72mEwSZsh21UEmAYO5g1DN70zjtCK iqSrt0zZJP1/MbWg5OFoV8rV+SJUC7Y= X-Received: by 10.46.42.196 with SMTP id q187mr5060941ljq.59.1509022731937; Thu, 26 Oct 2017 05:58:51 -0700 (PDT) Received: from genomnajs.ideon.se ([85.235.10.227]) by smtp.gmail.com with ESMTPSA id 34sm1165600lfr.25.2017.10.26.05.58.50 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Thu, 26 Oct 2017 05:58:51 -0700 (PDT) From: Linus Walleij To: linux-mmc@vger.kernel.org, Ulf Hansson Cc: linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Arnd Bergmann , Bartlomiej Zolnierkiewicz , Paolo Valente , Avri Altman , Adrian Hunter , Linus Walleij Subject: [PATCH 08/12 v4] mmc: block: shuffle retry and error handling Date: Thu, 26 Oct 2017 14:57:53 +0200 Message-Id: <20171026125757.10200-9-linus.walleij@linaro.org> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20171026125757.10200-1-linus.walleij@linaro.org> References: <20171026125757.10200-1-linus.walleij@linaro.org> Sender: linux-mmc-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-mmc@vger.kernel.org Instead of doing retries at the same time as trying to submit new requests, do the retries when the request is reported as completed by the driver, in the finalization worker. This is achieved by letting the core worker call back into the block layer using a callback in the asynchronous request, ->report_done_status() that will pass the status back to the block core so it can repeatedly try to hammer the request using single request, retry etc by calling back to the core layer using mmc_restart_areq(), which will just kick the same asynchronous request without waiting for a previous ongoing request. 
The beauty of it is that the completion will not complete until the
block layer has had the opportunity to hammer a bit at the card using
a bunch of different approaches that used to be in the while() loop,
now in mmc_blk_rw_done().

The algorithm for recapture, retry and error handling is identical to
the one we used to have in mmc_blk_issue_rw_rq(), only augmented to
get called in another path from the core.

We have to add and initialize a pointer back to the struct mmc_queue
from the struct mmc_queue_req to find the queue from the asynchronous
request when reporting the status back to the core.

Other users of the asynchronous request that do not need to retry and
use misc error handling fallbacks will work fine, since a NULL
->report_done_status() is just fine. This is currently only done by
the test module.

Signed-off-by: Linus Walleij
---
 drivers/mmc/core/block.c | 337 ++++++++++++++++++++++++-----------------------
 drivers/mmc/core/core.c  |  47 ++++---
 drivers/mmc/core/core.h  |   1 +
 drivers/mmc/core/queue.c |   2 +
 drivers/mmc/core/queue.h |   1 +
 include/linux/mmc/host.h |   5 +-
 6 files changed, 210 insertions(+), 183 deletions(-)
--
2.13.6
--
To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 86ec87c17e71..c1178fa83f75 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1811,198 +1811,207 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card,
 /**
  * mmc_blk_rw_try_restart() - tries to restart the current async request
  * @mq: the queue with the card and host to restart
- * @req: a new request that want to be started after the current one
+ * @mqrq: the mmc_queue_request containing the areq to be restarted
  */
-static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req,
+static void
mmc_blk_rw_try_restart(struct mmc_queue *mq, struct mmc_queue_req *mqrq) { - if (!req) - return; + /* Proceed and try to restart the current async request */ + mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq); + mmc_restart_areq(mq->card->host, &mqrq->areq); +} + +static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status status) +{ + struct mmc_queue *mq; + struct mmc_blk_data *md; + struct mmc_card *card; + struct mmc_host *host; + struct mmc_queue_req *mq_rq; + struct mmc_blk_request *brq; + struct request *old_req; + bool req_pending = true; + int disable_multi = 0, retry = 0, type, retune_retry_done = 0; /* - * If the card was removed, just cancel everything and return. + * An asynchronous request has been completed and we proceed + * to handle the result of it. */ - if (mmc_card_removed(mq->card)) { - req->rq_flags |= RQF_QUIET; - blk_end_request_all(req, BLK_STS_IOERR); - mq->qcnt--; /* FIXME: just set to 0? */ + mq_rq = container_of(areq, struct mmc_queue_req, areq); + mq = mq_rq->mq; + md = mq->blkdata; + card = mq->card; + host = card->host; + brq = &mq_rq->brq; + old_req = mmc_queue_req_to_req(mq_rq); + type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE; + + switch (status) { + case MMC_BLK_SUCCESS: + case MMC_BLK_PARTIAL: + /* + * A block was successfully transferred. + */ + mmc_blk_reset_success(md, type); + req_pending = blk_end_request(old_req, BLK_STS_OK, + brq->data.bytes_xfered); + /* + * If the blk_end_request function returns non-zero even + * though all data has been transferred and no errors + * were returned by the host controller, it's a bug. 
+ */ + if (status == MMC_BLK_SUCCESS && req_pending) { + pr_err("%s BUG rq_tot %d d_xfer %d\n", + __func__, blk_rq_bytes(old_req), + brq->data.bytes_xfered); + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + return; + } + break; + case MMC_BLK_CMD_ERR: + req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending); + if (mmc_blk_reset(md, card->host, type)) { + if (req_pending) + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + else + mq->qcnt--; + mmc_blk_rw_try_restart(mq, mq_rq); + return; + } + if (!req_pending) { + mq->qcnt--; + mmc_blk_rw_try_restart(mq, mq_rq); + return; + } + break; + case MMC_BLK_RETRY: + retune_retry_done = brq->retune_retry_done; + if (retry++ < 5) + break; + /* Fall through */ + case MMC_BLK_ABORT: + if (!mmc_blk_reset(md, card->host, type)) + break; + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + mmc_blk_rw_try_restart(mq, mq_rq); + return; + case MMC_BLK_DATA_ERR: { + int err; + err = mmc_blk_reset(md, card->host, type); + if (!err) + break; + if (err == -ENODEV) { + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + mmc_blk_rw_try_restart(mq, mq_rq); + return; + } + /* Fall through */ + } + case MMC_BLK_ECC_ERR: + if (brq->data.blocks > 1) { + /* Redo read one sector at a time */ + pr_warn("%s: retrying using single block read\n", + old_req->rq_disk->disk_name); + disable_multi = 1; + break; + } + /* + * After an error, we redo I/O one sector at a + * time, so we only reach here after trying to + * read a single sector. 
+ */ + req_pending = blk_end_request(old_req, BLK_STS_IOERR, + brq->data.blksz); + if (!req_pending) { + mq->qcnt--; + mmc_blk_rw_try_restart(mq, mq_rq); + return; + } + break; + case MMC_BLK_NOMEDIUM: + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + mmc_blk_rw_try_restart(mq, mq_rq); + return; + default: + pr_err("%s: Unhandled return value (%d)", + old_req->rq_disk->disk_name, status); + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + mmc_blk_rw_try_restart(mq, mq_rq); return; } - /* Else proceed and try to restart the current async request */ - mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq); - mmc_start_areq(mq->card->host, &mqrq->areq, NULL); + + if (req_pending) { + /* + * In case of a incomplete request + * prepare it again and resend. + */ + mmc_blk_rw_rq_prep(mq_rq, card, + disable_multi, mq); + mmc_start_areq(card->host, areq, NULL); + mq_rq->brq.retune_retry_done = retune_retry_done; + } else { + /* Else, this request is done */ + mq->qcnt--; + } } static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) { - struct mmc_blk_data *md = mq->blkdata; - struct mmc_card *card = md->queue.card; - struct mmc_blk_request *brq; - int disable_multi = 0, retry = 0, type, retune_retry_done = 0; enum mmc_blk_status status; - struct mmc_queue_req *mqrq_cur = NULL; - struct mmc_queue_req *mq_rq; - struct request *old_req; struct mmc_async_req *new_areq; struct mmc_async_req *old_areq; - bool req_pending = true; + struct mmc_card *card = mq->card; - if (new_req) { - mqrq_cur = req_to_mmc_queue_req(new_req); + if (new_req) mq->qcnt++; - } if (!mq->qcnt) return; - do { - if (new_req) { - /* - * When 4KB native sector is enabled, only 8 blocks - * multiple read or write is allowed - */ - if (mmc_large_sector(card) && - !IS_ALIGNED(blk_rq_sectors(new_req), 8)) { - pr_err("%s: Transfer size is not 4KB sector size aligned\n", - new_req->rq_disk->disk_name); - mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur); - return; - } - - mmc_blk_rw_rq_prep(mqrq_cur, 
card, 0, mq); - new_areq = &mqrq_cur->areq; - } else - new_areq = NULL; - - old_areq = mmc_start_areq(card->host, new_areq, &status); - if (!old_areq) { - /* - * We have just put the first request into the pipeline - * and there is nothing more to do until it is - * complete. - */ - return; - } + /* + * If the card was removed, just cancel everything and return. + */ + if (mmc_card_removed(card)) { + new_req->rq_flags |= RQF_QUIET; + blk_end_request_all(new_req, BLK_STS_IOERR); + mq->qcnt--; /* FIXME: just set to 0? */ + return; + } + if (new_req) { + struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req); /* - * An asynchronous request has been completed and we proceed - * to handle the result of it. + * When 4KB native sector is enabled, only 8 blocks + * multiple read or write is allowed */ - mq_rq = container_of(old_areq, struct mmc_queue_req, areq); - brq = &mq_rq->brq; - old_req = mmc_queue_req_to_req(mq_rq); - type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE; - - switch (status) { - case MMC_BLK_SUCCESS: - case MMC_BLK_PARTIAL: - /* - * A block was successfully transferred. - */ - mmc_blk_reset_success(md, type); - - req_pending = blk_end_request(old_req, BLK_STS_OK, - brq->data.bytes_xfered); - /* - * If the blk_end_request function returns non-zero even - * though all data has been transferred and no errors - * were returned by the host controller, it's a bug. 
- */ - if (status == MMC_BLK_SUCCESS && req_pending) { - pr_err("%s BUG rq_tot %d d_xfer %d\n", - __func__, blk_rq_bytes(old_req), - brq->data.bytes_xfered); - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - return; - } - break; - case MMC_BLK_CMD_ERR: - req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending); - if (mmc_blk_reset(md, card->host, type)) { - if (req_pending) - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - else - mq->qcnt--; - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - } - if (!req_pending) { - mq->qcnt--; - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - } - break; - case MMC_BLK_RETRY: - retune_retry_done = brq->retune_retry_done; - if (retry++ < 5) - break; - /* Fall through */ - case MMC_BLK_ABORT: - if (!mmc_blk_reset(md, card->host, type)) - break; - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - case MMC_BLK_DATA_ERR: { - int err; - - err = mmc_blk_reset(md, card->host, type); - if (!err) - break; - if (err == -ENODEV) { - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - } - /* Fall through */ - } - case MMC_BLK_ECC_ERR: - if (brq->data.blocks > 1) { - /* Redo read one sector at a time */ - pr_warn("%s: retrying using single block read\n", - old_req->rq_disk->disk_name); - disable_multi = 1; - break; - } - /* - * After an error, we redo I/O one sector at a - * time, so we only reach here after trying to - * read a single sector. 
- */ - req_pending = blk_end_request(old_req, BLK_STS_IOERR, - brq->data.blksz); - if (!req_pending) { - mq->qcnt--; - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - } - break; - case MMC_BLK_NOMEDIUM: - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - default: - pr_err("%s: Unhandled return value (%d)", - old_req->rq_disk->disk_name, status); - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); + if (mmc_large_sector(card) && + !IS_ALIGNED(blk_rq_sectors(new_req), 8)) { + pr_err("%s: Transfer size is not 4KB sector size aligned\n", + new_req->rq_disk->disk_name); + mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur); return; } - if (req_pending) { - /* - * In case of a incomplete request - * prepare it again and resend. - */ - mmc_blk_rw_rq_prep(mq_rq, card, - disable_multi, mq); - mmc_start_areq(card->host, - &mq_rq->areq, NULL); - mq_rq->brq.retune_retry_done = retune_retry_done; - } - } while (req_pending); + mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq); + new_areq = &mqrq_cur->areq; + new_areq->report_done_status = mmc_blk_rw_done; + } else + new_areq = NULL; - mq->qcnt--; + old_areq = mmc_start_areq(card->host, new_areq, &status); + if (!old_areq) { + /* + * We have just put the first request into the pipeline + * and there is nothing more to do until it is + * complete. + */ + return; + } + /* + * FIXME: yes, we just discard the old_areq, it will be + * post-processed when done, in mmc_blk_rw_done(). We clean + * this up in later patches. 
+ */ } void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index 865db736c717..620dcbed15b7 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -738,12 +738,28 @@ void mmc_finalize_areq(struct work_struct *work) /* Successfully postprocess the old request at this point */ mmc_post_req(host, areq->mrq, 0); - areq->finalization_status = status; + /* Call back with status, this will trigger retry etc if needed */ + if (areq->report_done_status) + areq->report_done_status(areq, status); + + /* This opens the gate for the next request to start on the host */ complete(&areq->complete); } EXPORT_SYMBOL(mmc_finalize_areq); /** + * mmc_restart_areq() - restart an asynchronous request + * @host: MMC host to restart the command on + * @areq: the asynchronous request to restart + */ +int mmc_restart_areq(struct mmc_host *host, + struct mmc_async_req *areq) +{ + return __mmc_start_data_req(host, areq->mrq); +} +EXPORT_SYMBOL(mmc_restart_areq); + +/** * mmc_start_areq - start an asynchronous request * @host: MMC host to start command * @areq: asynchronous request to start @@ -763,7 +779,6 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host, struct mmc_async_req *areq, enum mmc_blk_status *ret_stat) { - enum mmc_blk_status status; int start_err = 0; struct mmc_async_req *previous = host->areq; @@ -772,31 +787,27 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host, mmc_pre_req(host, areq->mrq); /* Finalize previous request, if there is one */ - if (previous) { + if (previous) wait_for_completion(&previous->complete); - status = previous->finalization_status; - } else { - status = MMC_BLK_SUCCESS; - } + + /* Just always succeed */ if (ret_stat) - *ret_stat = status; + *ret_stat = MMC_BLK_SUCCESS; /* Fine so far, start the new request! 
*/ - if (status == MMC_BLK_SUCCESS && areq) { + if (areq) { init_completion(&areq->complete); areq->mrq->areq = areq; start_err = __mmc_start_data_req(host, areq->mrq); + /* Cancel a prepared request if it was not started. */ + if (start_err) { + mmc_post_req(host, areq->mrq, -EINVAL); + host->areq = NULL; + } else { + host->areq = areq; + } } - /* Cancel a prepared request if it was not started. */ - if ((status != MMC_BLK_SUCCESS || start_err) && areq) - mmc_post_req(host, areq->mrq, -EINVAL); - - if (status != MMC_BLK_SUCCESS) - host->areq = NULL; - else - host->areq = areq; - return previous; } EXPORT_SYMBOL(mmc_start_areq); diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h index 88b852ac8f74..1859804ecd80 100644 --- a/drivers/mmc/core/core.h +++ b/drivers/mmc/core/core.h @@ -112,6 +112,7 @@ int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq); struct mmc_async_req; void mmc_finalize_areq(struct work_struct *work); +int mmc_restart_areq(struct mmc_host *host, struct mmc_async_req *areq); struct mmc_async_req *mmc_start_areq(struct mmc_host *host, struct mmc_async_req *areq, enum mmc_blk_status *ret_stat); diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 023bbddc1a0b..db1fa11d9870 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -145,6 +145,7 @@ static int mmc_init_request(struct request_queue *q, struct request *req, mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp); if (!mq_rq->sg) return -ENOMEM; + mq_rq->mq = mq; return 0; } @@ -155,6 +156,7 @@ static void mmc_exit_request(struct request_queue *q, struct request *req) kfree(mq_rq->sg); mq_rq->sg = NULL; + mq_rq->mq = NULL; } static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card) diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h index 68f68ecd94ea..dce7cedb9d0b 100644 --- a/drivers/mmc/core/queue.h +++ b/drivers/mmc/core/queue.h @@ -52,6 +52,7 @@ struct mmc_queue_req { struct mmc_blk_request brq; struct 
scatterlist *sg;
 	struct mmc_async_req areq;
+	struct mmc_queue *mq;
 	enum mmc_drv_op drv_op;
 	int drv_op_result;
 	void *drv_op_data;
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 638f11d185bd..74859a71e14b 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -216,8 +216,11 @@ struct mmc_async_req {
 	 * Returns 0 if success otherwise non zero.
 	 */
 	enum mmc_blk_status (*err_check)(struct mmc_card *, struct mmc_async_req *);
+	/*
+	 * Report finalization status from the core to e.g. the block layer.
+	 */
+	void (*report_done_status)(struct mmc_async_req *, enum mmc_blk_status);
 	struct work_struct finalization_work;
-	enum mmc_blk_status finalization_status;
 	struct completion complete;
 	struct mmc_host *host;
 };

From patchwork Thu Oct 26 12:57:54 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 117230
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 09/12 v4] mmc: queue: stop flushing the pipeline with NULL
Date: Thu, 26 Oct 2017 14:57:54 +0200
Message-Id: <20171026125757.10200-10-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20171026125757.10200-1-linus.walleij@linaro.org>
References: <20171026125757.10200-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

Remove all the pipeline flush: i.e. repeatedly sending NULL down to
the core layer to flush out asynchronous requests, and also sending
NULL after "special" commands to achieve the same flush.

Instead: let the "special" commands wait for any ongoing asynchronous
transfers using the completion, and apart from that expect the core.c
and block.c layers to deal with the ongoing requests autonomously,
without any "push" from the queue.

Add a function in the core to wait for an asynchronous request to
complete. Update the tests to use the new function prototypes.

This kills off some FIXMEs, such as getting rid of the mq->qcnt queue
depth variable that was introduced a while back.

It is a vital step toward multiqueue enablement that we stop pulling
NULL off the end of the request queue to flush the asynchronous
issuing mechanism.

Signed-off-by: Linus Walleij
---
 drivers/mmc/core/block.c    | 168 ++++++++++++++++-----------------
 drivers/mmc/core/core.c     |  50 +++++++------
 drivers/mmc/core/core.h     |   6 +-
 drivers/mmc/core/mmc_test.c |  31 ++------
 drivers/mmc/core/queue.c    |  11 ++-
 drivers/mmc/core/queue.h    |   7 --
 6 files changed, 106 insertions(+), 167 deletions(-)
--
2.13.6

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index c1178fa83f75..ab01cab4a026 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1805,7 +1805,6 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card,
 	if (mmc_card_removed(card))
 		req->rq_flags |= RQF_QUIET;
 	while (blk_end_request(req, BLK_STS_IOERR, blk_rq_cur_bytes(req)));
-	mq->qcnt--;
 }

 /**
@@ -1873,13 +1872,10 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 		if (mmc_blk_reset(md, card->host, type)) {
 			if (req_pending)
mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - else - mq->qcnt--; mmc_blk_rw_try_restart(mq, mq_rq); return; } if (!req_pending) { - mq->qcnt--; mmc_blk_rw_try_restart(mq, mq_rq); return; } @@ -1923,7 +1919,6 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat req_pending = blk_end_request(old_req, BLK_STS_IOERR, brq->data.blksz); if (!req_pending) { - mq->qcnt--; mmc_blk_rw_try_restart(mq, mq_rq); return; } @@ -1947,26 +1942,16 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat */ mmc_blk_rw_rq_prep(mq_rq, card, disable_multi, mq); - mmc_start_areq(card->host, areq, NULL); + mmc_start_areq(card->host, areq); mq_rq->brq.retune_retry_done = retune_retry_done; - } else { - /* Else, this request is done */ - mq->qcnt--; } } static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) { - enum mmc_blk_status status; - struct mmc_async_req *new_areq; - struct mmc_async_req *old_areq; struct mmc_card *card = mq->card; - - if (new_req) - mq->qcnt++; - - if (!mq->qcnt) - return; + struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req); + struct mmc_async_req *areq = &mqrq_cur->areq; /* * If the card was removed, just cancel everything and return. @@ -1974,44 +1959,25 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) if (mmc_card_removed(card)) { new_req->rq_flags |= RQF_QUIET; blk_end_request_all(new_req, BLK_STS_IOERR); - mq->qcnt--; /* FIXME: just set to 0? 
*/ return; } - if (new_req) { - struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req); - /* - * When 4KB native sector is enabled, only 8 blocks - * multiple read or write is allowed - */ - if (mmc_large_sector(card) && - !IS_ALIGNED(blk_rq_sectors(new_req), 8)) { - pr_err("%s: Transfer size is not 4KB sector size aligned\n", - new_req->rq_disk->disk_name); - mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur); - return; - } - - mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq); - new_areq = &mqrq_cur->areq; - new_areq->report_done_status = mmc_blk_rw_done; - } else - new_areq = NULL; - old_areq = mmc_start_areq(card->host, new_areq, &status); - if (!old_areq) { - /* - * We have just put the first request into the pipeline - * and there is nothing more to do until it is - * complete. - */ - return; - } /* - * FIXME: yes, we just discard the old_areq, it will be - * post-processed when done, in mmc_blk_rw_done(). We clean - * this up in later patches. + * When 4KB native sector is enabled, only 8 blocks + * multiple read or write is allowed */ + if (mmc_large_sector(card) && + !IS_ALIGNED(blk_rq_sectors(new_req), 8)) { + pr_err("%s: Transfer size is not 4KB sector size aligned\n", + new_req->rq_disk->disk_name); + mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur); + return; + } + + mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq); + areq->report_done_status = mmc_blk_rw_done; + mmc_start_areq(card->host, areq); } void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) @@ -2020,70 +1986,56 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) struct mmc_blk_data *md = mq->blkdata; struct mmc_card *card = md->queue.card; - if (req && !mq->qcnt) - /* claim host only for the first request */ - mmc_get_card(card, NULL); + if (!req) { + pr_err("%s: tried to issue NULL request\n", __func__); + return; + } ret = mmc_blk_part_switch(card, md->part_type); if (ret) { - if (req) { - blk_end_request_all(req, BLK_STS_IOERR); - } - goto out; + 
blk_end_request_all(req, BLK_STS_IOERR); + return; } - if (req) { - switch (req_op(req)) { - case REQ_OP_DRV_IN: - case REQ_OP_DRV_OUT: - /* - * Complete ongoing async transfer before issuing - * ioctl()s - */ - if (mq->qcnt) - mmc_blk_issue_rw_rq(mq, NULL); - mmc_blk_issue_drv_op(mq, req); - break; - case REQ_OP_DISCARD: - /* - * Complete ongoing async transfer before issuing - * discard. - */ - if (mq->qcnt) - mmc_blk_issue_rw_rq(mq, NULL); - mmc_blk_issue_discard_rq(mq, req); - break; - case REQ_OP_SECURE_ERASE: - /* - * Complete ongoing async transfer before issuing - * secure erase. - */ - if (mq->qcnt) - mmc_blk_issue_rw_rq(mq, NULL); - mmc_blk_issue_secdiscard_rq(mq, req); - break; - case REQ_OP_FLUSH: - /* - * Complete ongoing async transfer before issuing - * flush. - */ - if (mq->qcnt) - mmc_blk_issue_rw_rq(mq, NULL); - mmc_blk_issue_flush(mq, req); - break; - default: - /* Normal request, just issue it */ - mmc_blk_issue_rw_rq(mq, req); - break; - } - } else { - /* No request, flushing the pipeline with NULL */ - mmc_blk_issue_rw_rq(mq, NULL); + switch (req_op(req)) { + case REQ_OP_DRV_IN: + case REQ_OP_DRV_OUT: + /* + * Complete ongoing async transfer before issuing + * ioctl()s + */ + mmc_wait_for_areq(card->host); + mmc_blk_issue_drv_op(mq, req); + break; + case REQ_OP_DISCARD: + /* + * Complete ongoing async transfer before issuing + * discard. + */ + mmc_wait_for_areq(card->host); + mmc_blk_issue_discard_rq(mq, req); + break; + case REQ_OP_SECURE_ERASE: + /* + * Complete ongoing async transfer before issuing + * secure erase. + */ + mmc_wait_for_areq(card->host); + mmc_blk_issue_secdiscard_rq(mq, req); + break; + case REQ_OP_FLUSH: + /* + * Complete ongoing async transfer before issuing + * flush. 
+ */ + mmc_wait_for_areq(card->host); + mmc_blk_issue_flush(mq, req); + break; + default: + /* Normal request, just issue it */ + mmc_blk_issue_rw_rq(mq, req); + break; } - -out: - if (!mq->qcnt) - mmc_put_card(card, NULL); } static inline int mmc_blk_readonly(struct mmc_card *card) diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index 620dcbed15b7..209ebb8a7f3f 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -747,6 +747,15 @@ void mmc_finalize_areq(struct work_struct *work) } EXPORT_SYMBOL(mmc_finalize_areq); +void mmc_wait_for_areq(struct mmc_host *host) +{ + if (host->areq) { + wait_for_completion(&host->areq->complete); + host->areq = NULL; + } +} +EXPORT_SYMBOL(mmc_wait_for_areq); + /** * mmc_restart_areq() - restart an asynchronous request * @host: MMC host to restart the command on @@ -775,40 +784,37 @@ EXPORT_SYMBOL(mmc_restart_areq); * return the completed request. If there is no ongoing request, NULL * is returned without waiting. NULL is not an error condition. */ -struct mmc_async_req *mmc_start_areq(struct mmc_host *host, - struct mmc_async_req *areq, - enum mmc_blk_status *ret_stat) +int mmc_start_areq(struct mmc_host *host, + struct mmc_async_req *areq) { - int start_err = 0; struct mmc_async_req *previous = host->areq; + int ret; + + /* Delete this check when we trust the code */ + if (!areq) + pr_err("%s: NULL asynchronous request!\n", __func__); /* Prepare a new request */ - if (areq) - mmc_pre_req(host, areq->mrq); + mmc_pre_req(host, areq->mrq); /* Finalize previous request, if there is one */ if (previous) wait_for_completion(&previous->complete); - /* Just always succeed */ - if (ret_stat) - *ret_stat = MMC_BLK_SUCCESS; - /* Fine so far, start the new request! */ - if (areq) { - init_completion(&areq->complete); - areq->mrq->areq = areq; - start_err = __mmc_start_data_req(host, areq->mrq); - /* Cancel a prepared request if it was not started. 
*/ - if (start_err) { - mmc_post_req(host, areq->mrq, -EINVAL); - host->areq = NULL; - } else { - host->areq = areq; - } + init_completion(&areq->complete); + areq->mrq->areq = areq; + ret = __mmc_start_data_req(host, areq->mrq); + /* Cancel a prepared request if it was not started. */ + if (ret) { + mmc_post_req(host, areq->mrq, -EINVAL); + host->areq = NULL; + pr_err("%s: failed to start request\n", __func__); + } else { + host->areq = areq; } - return previous; + return ret; } EXPORT_SYMBOL(mmc_start_areq); diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h index 1859804ecd80..5b8d0f1147ef 100644 --- a/drivers/mmc/core/core.h +++ b/drivers/mmc/core/core.h @@ -113,9 +113,9 @@ struct mmc_async_req; void mmc_finalize_areq(struct work_struct *work); int mmc_restart_areq(struct mmc_host *host, struct mmc_async_req *areq); -struct mmc_async_req *mmc_start_areq(struct mmc_host *host, - struct mmc_async_req *areq, - enum mmc_blk_status *ret_stat); +int mmc_start_areq(struct mmc_host *host, + struct mmc_async_req *areq); +void mmc_wait_for_areq(struct mmc_host *host); int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr, unsigned int arg); diff --git a/drivers/mmc/core/mmc_test.c b/drivers/mmc/core/mmc_test.c index 478869805b96..256fdce38449 100644 --- a/drivers/mmc/core/mmc_test.c +++ b/drivers/mmc/core/mmc_test.c @@ -839,10 +839,8 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test, { struct mmc_test_req *rq1, *rq2; struct mmc_test_async_req test_areq[2]; - struct mmc_async_req *done_areq; struct mmc_async_req *cur_areq = &test_areq[0].areq; struct mmc_async_req *other_areq = &test_areq[1].areq; - enum mmc_blk_status status; int i; int ret = RESULT_OK; @@ -864,25 +862,16 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test, for (i = 0; i < count; i++) { mmc_test_prepare_mrq(test, cur_areq->mrq, sg, sg_len, dev_addr, blocks, blksz, write); - done_areq = mmc_start_areq(test->card->host, cur_areq, &status); + 
ret = mmc_start_areq(test->card->host, cur_areq); + mmc_wait_for_areq(test->card->host); - if (status != MMC_BLK_SUCCESS || (!done_areq && i > 0)) { - ret = RESULT_FAIL; - goto err; - } - - if (done_areq) - mmc_test_req_reset(container_of(done_areq->mrq, + mmc_test_req_reset(container_of(cur_areq->mrq, struct mmc_test_req, mrq)); swap(cur_areq, other_areq); dev_addr += blocks; } - done_areq = mmc_start_areq(test->card->host, NULL, &status); - if (status != MMC_BLK_SUCCESS) - ret = RESULT_FAIL; - err: kfree(rq1); kfree(rq2); @@ -2360,7 +2349,6 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test, struct mmc_request *mrq; unsigned long timeout; bool expired = false; - enum mmc_blk_status blkstat = MMC_BLK_SUCCESS; int ret = 0, cmd_ret; u32 status = 0; int count = 0; @@ -2388,11 +2376,8 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test, /* Start ongoing data request */ if (use_areq) { - mmc_start_areq(host, &test_areq.areq, &blkstat); - if (blkstat != MMC_BLK_SUCCESS) { - ret = RESULT_FAIL; - goto out_free; - } + mmc_start_areq(host, &test_areq.areq); + mmc_wait_for_areq(host); } else { mmc_wait_for_req(host, mrq); } @@ -2425,11 +2410,7 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test, } while (repeat_cmd && R1_CURRENT_STATE(status) != R1_STATE_TRAN); /* Wait for data request to complete */ - if (use_areq) { - mmc_start_areq(host, NULL, &blkstat); - if (blkstat != MMC_BLK_SUCCESS) - ret = RESULT_FAIL; - } else { + if (!use_areq) { mmc_wait_for_req_done(test->card->host, mrq); } diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index db1fa11d9870..cf43a2d5410d 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -42,6 +42,7 @@ static int mmc_queue_thread(void *d) { struct mmc_queue *mq = d; struct request_queue *q = mq->queue; + bool claimed_card = false; current->flags |= PF_MEMALLOC; @@ -55,7 +56,11 @@ static int mmc_queue_thread(void *d) mq->asleep = false; 
spin_unlock_irq(q->queue_lock); - if (req || mq->qcnt) { + if (req) { + if (!claimed_card) { + mmc_get_card(mq->card, NULL); + claimed_card = true; + } set_current_state(TASK_RUNNING); mmc_blk_issue_rq(mq, req); cond_resched(); @@ -72,6 +77,9 @@ static int mmc_queue_thread(void *d) } while (1); up(&mq->thread_sem); + if (claimed_card) + mmc_put_card(mq->card, NULL); + return 0; } @@ -207,7 +215,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, mq->queue->exit_rq_fn = mmc_exit_request; mq->queue->cmd_size = sizeof(struct mmc_queue_req); mq->queue->queuedata = mq; - mq->qcnt = 0; ret = blk_init_allocated_queue(mq->queue); if (ret) { blk_cleanup_queue(mq->queue); diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h index dce7cedb9d0b..67ae311b107f 100644 --- a/drivers/mmc/core/queue.h +++ b/drivers/mmc/core/queue.h @@ -67,13 +67,6 @@ struct mmc_queue { bool asleep; struct mmc_blk_data *blkdata; struct request_queue *queue; - /* - * FIXME: this counter is not a very reliable way of keeping - * track of how many requests that are ongoing. Switch to just - * letting the block core keep track of requests and per-request - * associated mmc_queue_req data. 
- */
-	int qcnt;
 };
 
 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,

From patchwork Thu Oct 26 12:57:55 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 117231
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 10/12 v4] mmc: queue/block: pass around struct mmc_queue_req*s
Date: Thu, 26 Oct 2017 14:57:55 +0200
Message-Id: <20171026125757.10200-11-linus.walleij@linaro.org>
In-Reply-To: <20171026125757.10200-1-linus.walleij@linaro.org>
References: <20171026125757.10200-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

Instead of passing around several pointers to mmc_queue_req, request
and mmc_queue, and reassigning to the left and right, issue the
mmc_queue_req and dereference the queue and request from the
mmc_queue_req where needed.

The struct mmc_queue_req is the thing that has a lifecycle after all:
this is what we are keeping in our queue, and what the block layer
helps us manage.

Augment a bunch of functions to take a single argument so we can see
the trees and not just a big jungle of arguments.
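The refactor pattern this message describes — pass the one object that owns the lifecycle and derive everything else from it — can be sketched in plain C. The struct and member names below are invented for illustration (they are not the kernel's real definitions); only the shape of the change is the point:

```c
#include <assert.h>

/*
 * Toy model of the refactor (hypothetical simplified types).
 * Instead of passing (mq, req) pairs through every function, pass the
 * one mmc_queue_req and dereference the queue and request from it.
 */
struct mmc_queue { int card_id; };
struct request { int nr_sectors; };

struct mmc_queue_req {
	struct request req;   /* request embedded in the per-queue item */
	struct mmc_queue *mq; /* back-pointer to the owning queue */
};

/* Analogue of the kernel helper: recover the request from the mq_rq. */
static struct request *mmc_queue_req_to_req(struct mmc_queue_req *mq_rq)
{
	return &mq_rq->req;
}

/* After the refactor: one argument, everything else dereferenced. */
static int issue(struct mmc_queue_req *mq_rq)
{
	struct mmc_queue *mq = mq_rq->mq;
	struct request *req = mmc_queue_req_to_req(mq_rq);

	return mq->card_id + req->nr_sectors;
}
```

Callers then shrink from `mmc_blk_issue_rq(mq, req)` to `mmc_blk_issue_rq(req_to_mmc_queue_req(req))`, as in the queue.c hunk below.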
Signed-off-by: Linus Walleij --- drivers/mmc/core/block.c | 129 ++++++++++++++++++++++++----------------------- drivers/mmc/core/block.h | 5 +- drivers/mmc/core/queue.c | 2 +- 3 files changed, 70 insertions(+), 66 deletions(-) -- 2.13.6 -- To unsubscribe from this list: send the line "unsubscribe linux-mmc" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index ab01cab4a026..184907f5fb97 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -1208,9 +1208,9 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type) * processed it with all other requests and then they get issued in this * function. */ -static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req) +static void mmc_blk_issue_drv_op(struct mmc_queue_req *mq_rq) { - struct mmc_queue_req *mq_rq; + struct mmc_queue *mq = mq_rq->mq; struct mmc_card *card = mq->card; struct mmc_blk_data *md = mq->blkdata; struct mmc_blk_ioc_data **idata; @@ -1220,7 +1220,6 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req) int ret; int i; - mq_rq = req_to_mmc_queue_req(req); rpmb_ioctl = (mq_rq->drv_op == MMC_DRV_OP_IOCTL_RPMB); switch (mq_rq->drv_op) { @@ -1264,12 +1263,14 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req) break; } mq_rq->drv_op_result = ret; - blk_end_request_all(req, ret ? BLK_STS_IOERR : BLK_STS_OK); + blk_end_request_all(mmc_queue_req_to_req(mq_rq), + ret ? 
BLK_STS_IOERR : BLK_STS_OK); } -static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req) +static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq) { - struct mmc_blk_data *md = mq->blkdata; + struct request *req = mmc_queue_req_to_req(mq_rq); + struct mmc_blk_data *md = mq_rq->mq->blkdata; struct mmc_card *card = md->queue.card; unsigned int from, nr, arg; int err = 0, type = MMC_BLK_DISCARD; @@ -1310,10 +1311,10 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req) blk_end_request(req, status, blk_rq_bytes(req)); } -static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq, - struct request *req) +static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq) { - struct mmc_blk_data *md = mq->blkdata; + struct request *req = mmc_queue_req_to_req(mq_rq); + struct mmc_blk_data *md = mq_rq->mq->blkdata; struct mmc_card *card = md->queue.card; unsigned int from, nr, arg; int err = 0, type = MMC_BLK_SECDISCARD; @@ -1380,14 +1381,15 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq, blk_end_request(req, status, blk_rq_bytes(req)); } -static void mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req) +static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq) { - struct mmc_blk_data *md = mq->blkdata; + struct mmc_blk_data *md = mq_rq->mq->blkdata; struct mmc_card *card = md->queue.card; int ret = 0; ret = mmc_flush_cache(card); - blk_end_request_all(req, ret ? BLK_STS_IOERR : BLK_STS_OK); + blk_end_request_all(mmc_queue_req_to_req(mq_rq), + ret ? 
BLK_STS_IOERR : BLK_STS_OK); } /* @@ -1698,18 +1700,18 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq, *do_data_tag_p = do_data_tag; } -static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, - struct mmc_card *card, - int disable_multi, - struct mmc_queue *mq) +static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mq_rq, + int disable_multi) { u32 readcmd, writecmd; - struct mmc_blk_request *brq = &mqrq->brq; - struct request *req = mmc_queue_req_to_req(mqrq); + struct mmc_queue *mq = mq_rq->mq; + struct mmc_card *card = mq->card; + struct mmc_blk_request *brq = &mq_rq->brq; + struct request *req = mmc_queue_req_to_req(mq_rq); struct mmc_blk_data *md = mq->blkdata; bool do_rel_wr, do_data_tag; - mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag); + mmc_blk_data_prep(mq, mq_rq, disable_multi, &do_rel_wr, &do_data_tag); brq->mrq.cmd = &brq->cmd; brq->mrq.areq = NULL; @@ -1764,9 +1766,9 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, brq->mrq.sbc = &brq->sbc; } - mqrq->areq.err_check = mmc_blk_err_check; - mqrq->areq.host = card->host; - INIT_WORK(&mqrq->areq.finalization_work, mmc_finalize_areq); + mq_rq->areq.err_check = mmc_blk_err_check; + mq_rq->areq.host = card->host; + INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq); } static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card, @@ -1798,10 +1800,12 @@ static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card, return req_pending; } -static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card, - struct request *req, - struct mmc_queue_req *mqrq) +static void mmc_blk_rw_cmd_abort(struct mmc_queue_req *mq_rq) { + struct mmc_queue *mq = mq_rq->mq; + struct mmc_card *card = mq->card; + struct request *req = mmc_queue_req_to_req(mq_rq); + if (mmc_card_removed(card)) req->rq_flags |= RQF_QUIET; while (blk_end_request(req, BLK_STS_IOERR, blk_rq_cur_bytes(req))); @@ -1809,15 +1813,15 
@@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card, /** * mmc_blk_rw_try_restart() - tries to restart the current async request - * @mq: the queue with the card and host to restart - * @mqrq: the mmc_queue_request containing the areq to be restarted + * @mq_rq: the mmc_queue_request containing the areq to be restarted */ -static void mmc_blk_rw_try_restart(struct mmc_queue *mq, - struct mmc_queue_req *mqrq) +static void mmc_blk_rw_try_restart(struct mmc_queue_req *mq_rq) { + struct mmc_queue *mq = mq_rq->mq; + /* Proceed and try to restart the current async request */ - mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq); - mmc_restart_areq(mq->card->host, &mqrq->areq); + mmc_blk_rw_rq_prep(mq_rq, 0); + mmc_restart_areq(mq->card->host, &mq_rq->areq); } static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status status) @@ -1863,7 +1867,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat pr_err("%s BUG rq_tot %d d_xfer %d\n", __func__, blk_rq_bytes(old_req), brq->data.bytes_xfered); - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); return; } break; @@ -1871,12 +1875,12 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending); if (mmc_blk_reset(md, card->host, type)) { if (req_pending) - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; } if (!req_pending) { - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; } break; @@ -1888,8 +1892,8 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat case MMC_BLK_ABORT: if (!mmc_blk_reset(md, card->host, type)) break; - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; 
case MMC_BLK_DATA_ERR: { int err; @@ -1897,8 +1901,8 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat if (!err) break; if (err == -ENODEV) { - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; } /* Fall through */ @@ -1919,19 +1923,19 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat req_pending = blk_end_request(old_req, BLK_STS_IOERR, brq->data.blksz); if (!req_pending) { - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; } break; case MMC_BLK_NOMEDIUM: - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; default: pr_err("%s: Unhandled return value (%d)", old_req->rq_disk->disk_name, status); - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; } @@ -1940,25 +1944,25 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat * In case of a incomplete request * prepare it again and resend. */ - mmc_blk_rw_rq_prep(mq_rq, card, - disable_multi, mq); + mmc_blk_rw_rq_prep(mq_rq, disable_multi); mmc_start_areq(card->host, areq); mq_rq->brq.retune_retry_done = retune_retry_done; } } -static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) +static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq) { + struct request *req = mmc_queue_req_to_req(mq_rq); + struct mmc_queue *mq = mq_rq->mq; struct mmc_card *card = mq->card; - struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req); - struct mmc_async_req *areq = &mqrq_cur->areq; + struct mmc_async_req *areq = &mq_rq->areq; /* * If the card was removed, just cancel everything and return. 
*/ if (mmc_card_removed(card)) { - new_req->rq_flags |= RQF_QUIET; - blk_end_request_all(new_req, BLK_STS_IOERR); + req->rq_flags |= RQF_QUIET; + blk_end_request_all(req, BLK_STS_IOERR); return; } @@ -1968,22 +1972,23 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) * multiple read or write is allowed */ if (mmc_large_sector(card) && - !IS_ALIGNED(blk_rq_sectors(new_req), 8)) { + !IS_ALIGNED(blk_rq_sectors(req), 8)) { pr_err("%s: Transfer size is not 4KB sector size aligned\n", - new_req->rq_disk->disk_name); - mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur); + req->rq_disk->disk_name); + mmc_blk_rw_cmd_abort(mq_rq); return; } - mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq); + mmc_blk_rw_rq_prep(mq_rq, 0); areq->report_done_status = mmc_blk_rw_done; mmc_start_areq(card->host, areq); } -void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) +void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq) { int ret; - struct mmc_blk_data *md = mq->blkdata; + struct request *req = mmc_queue_req_to_req(mq_rq); + struct mmc_blk_data *md = mq_rq->mq->blkdata; struct mmc_card *card = md->queue.card; if (!req) { @@ -2005,7 +2010,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) * ioctl()s */ mmc_wait_for_areq(card->host); - mmc_blk_issue_drv_op(mq, req); + mmc_blk_issue_drv_op(mq_rq); break; case REQ_OP_DISCARD: /* @@ -2013,7 +2018,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) * discard. */ mmc_wait_for_areq(card->host); - mmc_blk_issue_discard_rq(mq, req); + mmc_blk_issue_discard_rq(mq_rq); break; case REQ_OP_SECURE_ERASE: /* @@ -2021,7 +2026,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) * secure erase. */ mmc_wait_for_areq(card->host); - mmc_blk_issue_secdiscard_rq(mq, req); + mmc_blk_issue_secdiscard_rq(mq_rq); break; case REQ_OP_FLUSH: /* @@ -2029,11 +2034,11 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) * flush. 
 */
 		mmc_wait_for_areq(card->host);
-		mmc_blk_issue_flush(mq, req);
+		mmc_blk_issue_flush(mq_rq);
 		break;
 	default:
 		/* Normal request, just issue it */
-		mmc_blk_issue_rw_rq(mq, req);
+		mmc_blk_issue_rw_rq(mq_rq);
 		break;
 	}
 }
diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h
index 860ca7c8df86..bbc1c8029b3b 100644
--- a/drivers/mmc/core/block.h
+++ b/drivers/mmc/core/block.h
@@ -1,9 +1,8 @@
 #ifndef _MMC_CORE_BLOCK_H
 #define _MMC_CORE_BLOCK_H
 
-struct mmc_queue;
-struct request;
+struct mmc_queue_req;
 
-void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
+void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq);
 
 #endif
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index cf43a2d5410d..5511e323db31 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -62,7 +62,7 @@ static int mmc_queue_thread(void *d)
 			claimed_card = true;
 		}
 		set_current_state(TASK_RUNNING);
-		mmc_blk_issue_rq(mq, req);
+		mmc_blk_issue_rq(req_to_mmc_queue_req(req));
 		cond_resched();
 	} else {
 		mq->asleep = true;

From patchwork Thu Oct 26 12:57:56 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 117232
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 11/12 v4] mmc: block: issue requests in massive parallel
Date: Thu, 26 Oct 2017 14:57:56 +0200
Message-Id: <20171026125757.10200-12-linus.walleij@linaro.org>
In-Reply-To: <20171026125757.10200-1-linus.walleij@linaro.org>
References: <20171026125757.10200-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

This makes a crucial change to the issuing mechanism for the MMC
requests:

Before commit "mmc: core: move the asynchronous post-processing",
some parallelism on the read/write requests was achieved by
speculatively postprocessing a request and re-preprocessing and
re-issuing the request if something went wrong, which we discover
later when checking for an error. This is kind of ugly.

Instead we need a mechanism like this:

We issue requests, and when they come back from the hardware, we know
if they finished successfully or not. If the request was successful,
we complete the asynchronous request and let a new request immediately
start on the hardware. If, and only if, it returned an error from the
hardware, we go down the error path.
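The dispatch the message describes — complete quickly on success, branch to a heavyweight handler only on error — can be modeled in a few lines of C. This is a sketch with stand-in types and counters (not the driver's actual code) to show the shape of the split:

```c
#include <assert.h>

/* Stand-in status values; the real driver uses enum mmc_blk_status. */
enum blk_status { BLK_SUCCESS, BLK_PARTIAL, BLK_CMD_ERR };

static int fast_completions;
static int slow_completions;

/* Slow path: retry, reset and single-block-fallback logic lives here,
 * and new requests are held off until it resolves. */
static void rw_done_error(enum blk_status status)
{
	(void)status; /* a real handler would switch on the status */
	slow_completions++;
}

/* Completion callback: anything other than success branches off
 * immediately, so the common case stays short and the next request
 * can start on the hardware right away. */
static void rw_done(enum blk_status status)
{
	if (status != BLK_SUCCESS) {
		rw_done_error(status);
		return;
	}
	fast_completions++;
}
```

Keeping the success path free of error-handling state is what lets post-processing and blk_end_request() overlap with the next transfer, as the following paragraphs explain.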
This is achieved by splitting the work path from the hardware in two:
a successful path ending up calling down to mmc_blk_rw_done() and
completing quickly, and an error path calling down to
mmc_blk_rw_done_error().

This has a profound effect: we reintroduce the parallelism on the
successful path as mmc_post_req() can now be called while the next
request is in transit (just like prior to commit "mmc: core: move the
asynchronous post-processing") and blk_end_request() is called while
the next request is already on the hardware.

The latter has the profound effect of issuing a new request again, so
that we actually may have three requests in transit at the same time:
one on the hardware, one being prepared (such as DMA flushing) and one
being prepared for issuing next by the block layer. This shows up when
we transition to multiqueue, where it can be exploited.

Signed-off-by: Linus Walleij
---
 drivers/mmc/core/block.c | 79 +++++++++++++++++++++++++++++++++---------------
 drivers/mmc/core/core.c  | 38 +++++++++++++++++------
 2 files changed, 83 insertions(+), 34 deletions(-)

-- 
2.13.6

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 184907f5fb97..f06f381146a5 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1824,7 +1824,8 @@ static void mmc_blk_rw_try_restart(struct mmc_queue_req *mq_rq)
 	mmc_restart_areq(mq->card->host, &mq_rq->areq);
 }
 
-static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status status)
+static void mmc_blk_rw_done_error(struct mmc_async_req *areq,
+				  enum mmc_blk_status status)
 {
 	struct mmc_queue *mq;
 	struct mmc_blk_data *md;
@@ -1832,7 +1833,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat
 	struct mmc_host *host;
 	struct mmc_queue_req *mq_rq;
 	struct mmc_blk_request
*brq; - struct request *old_req; + struct request *req; bool req_pending = true; int disable_multi = 0, retry = 0, type, retune_retry_done = 0; @@ -1846,33 +1847,18 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat card = mq->card; host = card->host; brq = &mq_rq->brq; - old_req = mmc_queue_req_to_req(mq_rq); - type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE; + req = mmc_queue_req_to_req(mq_rq); + type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE; switch (status) { - case MMC_BLK_SUCCESS: case MMC_BLK_PARTIAL: - /* - * A block was successfully transferred. - */ + /* This should trigger a retransmit */ mmc_blk_reset_success(md, type); - req_pending = blk_end_request(old_req, BLK_STS_OK, + req_pending = blk_end_request(req, BLK_STS_OK, brq->data.bytes_xfered); - /* - * If the blk_end_request function returns non-zero even - * though all data has been transferred and no errors - * were returned by the host controller, it's a bug. - */ - if (status == MMC_BLK_SUCCESS && req_pending) { - pr_err("%s BUG rq_tot %d d_xfer %d\n", - __func__, blk_rq_bytes(old_req), - brq->data.bytes_xfered); - mmc_blk_rw_cmd_abort(mq_rq); - return; - } break; case MMC_BLK_CMD_ERR: - req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending); + req_pending = mmc_blk_rw_cmd_err(md, card, brq, req, req_pending); if (mmc_blk_reset(md, card->host, type)) { if (req_pending) mmc_blk_rw_cmd_abort(mq_rq); @@ -1911,7 +1897,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat if (brq->data.blocks > 1) { /* Redo read one sector at a time */ pr_warn("%s: retrying using single block read\n", - old_req->rq_disk->disk_name); + req->rq_disk->disk_name); disable_multi = 1; break; } @@ -1920,7 +1906,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat * time, so we only reach here after trying to * read a single sector. 
*/ - req_pending = blk_end_request(old_req, BLK_STS_IOERR, + req_pending = blk_end_request(req, BLK_STS_IOERR, brq->data.blksz); if (!req_pending) { mmc_blk_rw_try_restart(mq_rq); @@ -1933,7 +1919,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat return; default: pr_err("%s: Unhandled return value (%d)", - old_req->rq_disk->disk_name, status); + req->rq_disk->disk_name, status); mmc_blk_rw_cmd_abort(mq_rq); mmc_blk_rw_try_restart(mq_rq); return; @@ -1950,6 +1936,49 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat } } +static void mmc_blk_rw_done(struct mmc_async_req *areq, + enum mmc_blk_status status) +{ + struct mmc_queue_req *mq_rq; + struct request *req; + struct mmc_blk_request *brq; + struct mmc_queue *mq; + struct mmc_blk_data *md; + bool req_pending; + int type; + + /* + * Anything other than success or partial transfers are errors. + */ + if (status != MMC_BLK_SUCCESS) { + mmc_blk_rw_done_error(areq, status); + return; + } + + /* The quick path if the request was successful */ + mq_rq = container_of(areq, struct mmc_queue_req, areq); + brq = &mq_rq->brq; + mq = mq_rq->mq; + md = mq->blkdata; + req = mmc_queue_req_to_req(mq_rq); + type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE; + + mmc_blk_reset_success(md, type); + req_pending = blk_end_request(req, BLK_STS_OK, + brq->data.bytes_xfered); + /* + * If the blk_end_request function returns non-zero even + * though all data has been transferred and no errors + * were returned by the host controller, it's a bug. 
+ */ + if (req_pending) { + pr_err("%s BUG rq_tot %d d_xfer %d\n", + __func__, blk_rq_bytes(req), + brq->data.bytes_xfered); + mmc_blk_rw_cmd_abort(mq_rq); + } +} + static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq) { struct request *req = mmc_queue_req_to_req(mq_rq); diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index 209ebb8a7f3f..0f57e9fe66b6 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -735,15 +735,35 @@ void mmc_finalize_areq(struct work_struct *work) mmc_start_bkops(host->card, true); } - /* Successfully postprocess the old request at this point */ - mmc_post_req(host, areq->mrq, 0); - - /* Call back with status, this will trigger retry etc if needed */ - if (areq->report_done_status) - areq->report_done_status(areq, status); - - /* This opens the gate for the next request to start on the host */ - complete(&areq->complete); + /* + * Here we postprocess the request differently depending on if + * we go on the success path or error path. The success path will + * immediately let new requests hit the host, whereas the error + * path will hold off new requests until we have retried and + * succeeded or failed the current asynchronous request. + */ + if (status == MMC_BLK_SUCCESS) { + /* + * This immediately opens the gate for the next request + * to start on the host while we perform post-processing + * and report back to the block layer. 
+ */ + host->areq = NULL; + complete(&areq->complete); + mmc_post_req(host, areq->mrq, 0); + if (areq->report_done_status) + areq->report_done_status(areq, MMC_BLK_SUCCESS); + } else { + mmc_post_req(host, areq->mrq, 0); + /* + * Call back with error status, this will trigger retry + * etc if needed + */ + if (areq->report_done_status) + areq->report_done_status(areq, status); + host->areq = NULL; + complete(&areq->complete); + } } EXPORT_SYMBOL(mmc_finalize_areq);

From patchwork Thu Oct 26 12:57:57 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 117233
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 12/12 v4] mmc: switch MMC/SD to use blk-mq multiqueueing
Date: Thu, 26 Oct 2017 14:57:57 +0200
Message-Id: <20171026125757.10200-13-linus.walleij@linaro.org>
In-Reply-To: <20171026125757.10200-1-linus.walleij@linaro.org>
References: <20171026125757.10200-1-linus.walleij@linaro.org>

This switches the MMC/SD stack to use the multiqueue block layer interface. We kill off the kthread that was just calling blk_fetch_request() and let blk-mq drive all traffic, nice, that is how it should work.
Due to having switched the submission mechanics around so that completion of requests is now triggered from the host callbacks, we manage to keep the same performance for linear reads/writes as we have with the old block layer.

The open questions from earlier patch series v1 through v3 have been addressed:

- mmc_[get|put]_card() is now issued across requests from .queue_rq() to .complete() using Adrian's nifty context lock. This means that the block layer does not compete with itself on getting access to the host, and we can let other users of the host come in. (For SDIO and mixed-mode cards.)

- Partial reads are handled by open coding calls to blk_update_request() as advised by Christoph.

Signed-off-by: Linus Walleij
---
drivers/mmc/core/block.c | 87 ++++++++++--------
drivers/mmc/core/queue.c | 223 ++++++++++++++++++-----------------------------
drivers/mmc/core/queue.h | 8 +-
3 files changed, 139 insertions(+), 179 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index f06f381146a5..9e0fe07e098a 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -28,6 +28,7 @@ #include #include #include +#include #include #include #include @@ -93,7 +94,6 @@ static DEFINE_IDA(mmc_rpmb_ida); * There is one mmc_blk_data per slot. */ struct mmc_blk_data { - spinlock_t lock; struct device *parent; struct gendisk *disk; struct mmc_queue queue; @@ -1204,6 +1204,18 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type) } /* + * This reports status back to the block layer for a finished request.
+ */ +static void mmc_blk_complete(struct mmc_queue_req *mq_rq, + blk_status_t status) +{ + struct request *req = mmc_queue_req_to_req(mq_rq); + + blk_mq_end_request(req, status); + blk_mq_complete_request(req); +} + +/* * The non-block commands come back from the block layer after it queued it and * processed it with all other requests and then they get issued in this * function. @@ -1262,9 +1274,9 @@ static void mmc_blk_issue_drv_op(struct mmc_queue_req *mq_rq) ret = -EINVAL; break; } + mq_rq->drv_op_result = ret; - blk_end_request_all(mmc_queue_req_to_req(mq_rq), - ret ? BLK_STS_IOERR : BLK_STS_OK); + mmc_blk_complete(mq_rq, ret ? BLK_STS_IOERR : BLK_STS_OK); } static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq) @@ -1308,7 +1320,7 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq) else mmc_blk_reset_success(md, type); fail: - blk_end_request(req, status, blk_rq_bytes(req)); + mmc_blk_complete(mq_rq, status); } static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq) @@ -1378,7 +1390,7 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq) if (!err) mmc_blk_reset_success(md, type); out: - blk_end_request(req, status, blk_rq_bytes(req)); + mmc_blk_complete(mq_rq, status); } static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq) @@ -1388,8 +1400,13 @@ static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq) int ret = 0; ret = mmc_flush_cache(card); - blk_end_request_all(mmc_queue_req_to_req(mq_rq), - ret ? BLK_STS_IOERR : BLK_STS_OK); + /* + * NOTE: this used to call blk_end_request_all() for both + * cases in the old block layer to flush all queued + * transactions. I am not sure it was even correct to + * do that for the success case. + */ + mmc_blk_complete(mq_rq, ret ? 
BLK_STS_IOERR : BLK_STS_OK); } /* @@ -1768,7 +1785,6 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mq_rq, mq_rq->areq.err_check = mmc_blk_err_check; mq_rq->areq.host = card->host; - INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq); } static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card, @@ -1792,10 +1808,13 @@ static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card, err = mmc_sd_num_wr_blocks(card, &blocks); if (err) req_pending = old_req_pending; - else - req_pending = blk_end_request(req, BLK_STS_OK, blocks << 9); + else { + req_pending = blk_update_request(req, BLK_STS_OK, + blocks << 9); + } } else { - req_pending = blk_end_request(req, BLK_STS_OK, brq->data.bytes_xfered); + req_pending = blk_update_request(req, BLK_STS_OK, + brq->data.bytes_xfered); } return req_pending; } @@ -1808,7 +1827,7 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue_req *mq_rq) if (mmc_card_removed(card)) req->rq_flags |= RQF_QUIET; - while (blk_end_request(req, BLK_STS_IOERR, blk_rq_cur_bytes(req))); + mmc_blk_complete(mq_rq, BLK_STS_IOERR); } /** @@ -1854,8 +1873,8 @@ static void mmc_blk_rw_done_error(struct mmc_async_req *areq, case MMC_BLK_PARTIAL: /* This should trigger a retransmit */ mmc_blk_reset_success(md, type); - req_pending = blk_end_request(req, BLK_STS_OK, - brq->data.bytes_xfered); + req_pending = blk_update_request(req, BLK_STS_OK, + brq->data.bytes_xfered); break; case MMC_BLK_CMD_ERR: req_pending = mmc_blk_rw_cmd_err(md, card, brq, req, req_pending); @@ -1906,11 +1925,13 @@ static void mmc_blk_rw_done_error(struct mmc_async_req *areq, * time, so we only reach here after trying to * read a single sector. 
*/ - req_pending = blk_end_request(req, BLK_STS_IOERR, - brq->data.blksz); + req_pending = blk_update_request(req, BLK_STS_IOERR, + brq->data.blksz); if (!req_pending) { mmc_blk_rw_try_restart(mq_rq); return; + } else { + mmc_blk_complete(mq_rq, BLK_STS_IOERR); } break; case MMC_BLK_NOMEDIUM: @@ -1941,10 +1962,8 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, { struct mmc_queue_req *mq_rq; struct request *req; - struct mmc_blk_request *brq; struct mmc_queue *mq; struct mmc_blk_data *md; - bool req_pending; int type; /* @@ -1957,26 +1976,13 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, /* The quick path if the request was successful */ mq_rq = container_of(areq, struct mmc_queue_req, areq); - brq = &mq_rq->brq; mq = mq_rq->mq; md = mq->blkdata; req = mmc_queue_req_to_req(mq_rq); type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE; mmc_blk_reset_success(md, type); - req_pending = blk_end_request(req, BLK_STS_OK, - brq->data.bytes_xfered); - /* - * If the blk_end_request function returns non-zero even - * though all data has been transferred and no errors - * were returned by the host controller, it's a bug. - */ - if (req_pending) { - pr_err("%s BUG rq_tot %d d_xfer %d\n", - __func__, blk_rq_bytes(req), - brq->data.bytes_xfered); - mmc_blk_rw_cmd_abort(mq_rq); - } + mmc_blk_complete(mq_rq, BLK_STS_OK); } static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq) @@ -1991,7 +1997,12 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq) */ if (mmc_card_removed(card)) { req->rq_flags |= RQF_QUIET; - blk_end_request_all(req, BLK_STS_IOERR); + /* + * NOTE: this used to call blk_end_request_all() + * to flush out all queued transactions to the now + * non-present card. 
+ */ + mmc_blk_complete(mq_rq, BLK_STS_IOERR); return; } @@ -2017,8 +2028,9 @@ void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq) { int ret; struct request *req = mmc_queue_req_to_req(mq_rq); - struct mmc_blk_data *md = mq_rq->mq->blkdata; - struct mmc_card *card = md->queue.card; + struct mmc_queue *mq = mq_rq->mq; + struct mmc_blk_data *md = mq->blkdata; + struct mmc_card *card = mq->card; if (!req) { pr_err("%s: tried to issue NULL request\n", __func__); @@ -2027,7 +2039,7 @@ void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq) ret = mmc_blk_part_switch(card, md->part_type); if (ret) { - blk_end_request_all(req, BLK_STS_IOERR); + mmc_blk_complete(mq_rq, BLK_STS_IOERR); return; } @@ -2124,12 +2136,11 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card, goto err_kfree; } - spin_lock_init(&md->lock); INIT_LIST_HEAD(&md->part); INIT_LIST_HEAD(&md->rpmbs); md->usage = 1; - ret = mmc_init_queue(&md->queue, card, &md->lock, subname); + ret = mmc_init_queue(&md->queue, card, subname); if (ret) goto err_putdisk; diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 5511e323db31..dea6b4e3f828 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -38,74 +39,6 @@ static int mmc_prep_request(struct request_queue *q, struct request *req) return BLKPREP_OK; } -static int mmc_queue_thread(void *d) -{ - struct mmc_queue *mq = d; - struct request_queue *q = mq->queue; - bool claimed_card = false; - - current->flags |= PF_MEMALLOC; - - down(&mq->thread_sem); - do { - struct request *req; - - spin_lock_irq(q->queue_lock); - set_current_state(TASK_INTERRUPTIBLE); - req = blk_fetch_request(q); - mq->asleep = false; - spin_unlock_irq(q->queue_lock); - - if (req) { - if (!claimed_card) { - mmc_get_card(mq->card, NULL); - claimed_card = true; - } - set_current_state(TASK_RUNNING); - mmc_blk_issue_rq(req_to_mmc_queue_req(req)); - cond_resched(); - } else { 
- mq->asleep = true; - if (kthread_should_stop()) { - set_current_state(TASK_RUNNING); - break; - } - up(&mq->thread_sem); - schedule(); - down(&mq->thread_sem); - } - } while (1); - up(&mq->thread_sem); - - if (claimed_card) - mmc_put_card(mq->card, NULL); - - return 0; -} - -/* - * Generic MMC request handler. This is called for any queue on a - * particular host. When the host is not busy, we look for a request - * on any queue on this host, and attempt to issue it. This may - * not be the queue we were asked to process. - */ -static void mmc_request_fn(struct request_queue *q) -{ - struct mmc_queue *mq = q->queuedata; - struct request *req; - - if (!mq) { - while ((req = blk_fetch_request(q)) != NULL) { - req->rq_flags |= RQF_QUIET; - __blk_end_request_all(req, BLK_STS_IOERR); - } - return; - } - - if (mq->asleep) - wake_up_process(mq->thread); -} - static struct scatterlist *mmc_alloc_sg(int sg_len, gfp_t gfp) { struct scatterlist *sg; @@ -136,127 +69,158 @@ static void mmc_queue_setup_discard(struct request_queue *q, queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q); } +static blk_status_t mmc_queue_request(struct blk_mq_hw_ctx *hctx, + const struct blk_mq_queue_data *bd) +{ + struct mmc_queue_req *mq_rq = blk_mq_rq_to_pdu(bd->rq); + struct mmc_queue *mq = mq_rq->mq; + + /* Claim card for block queue context */ + mmc_get_card(mq->card, &mq->blkctx); + mmc_blk_issue_rq(mq_rq); + + return BLK_STS_OK; +} + +static void mmc_complete_request(struct request *req) +{ + struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req); + struct mmc_queue *mq = mq_rq->mq; + + /* Release card for block queue context */ + mmc_put_card(mq->card, &mq->blkctx); +} + /** * mmc_init_request() - initialize the MMC-specific per-request data - * @q: the request queue + * @set: tag set for the request * @req: the request - * @gfp: memory allocation policy + * @hctx_idx: hardware context index + * @numa_node: NUMA node */ -static int mmc_init_request(struct request_queue *q, struct request 
*req, - gfp_t gfp) +static int mmc_init_request(struct blk_mq_tag_set *set, struct request *req, + unsigned int hctx_idx, unsigned int numa_node) { struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req); - struct mmc_queue *mq = q->queuedata; + struct mmc_queue *mq = set->driver_data; struct mmc_card *card = mq->card; struct mmc_host *host = card->host; - mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp); + mq_rq->sg = mmc_alloc_sg(host->max_segs, GFP_KERNEL); if (!mq_rq->sg) return -ENOMEM; mq_rq->mq = mq; + INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq); return 0; } -static void mmc_exit_request(struct request_queue *q, struct request *req) +/** + * mmc_exit_request() - tear down the MMC-specific per-request data + * @set: tag set for the request + * @req: the request + * @hctx_idx: hardware context index + */ +static void mmc_exit_request(struct blk_mq_tag_set *set, struct request *req, + unsigned int hctx_idx) { struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req); + flush_work(&mq_rq->areq.finalization_work); kfree(mq_rq->sg); mq_rq->sg = NULL; mq_rq->mq = NULL; } -static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card) +static void mmc_setup_queue(struct mmc_queue *mq) { + struct request_queue *q = mq->queue; + struct mmc_card *card = mq->card; struct mmc_host *host = card->host; u64 limit = BLK_BOUNCE_HIGH; if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask) limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT; - queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue); - queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue); + blk_queue_max_segments(q, host->max_segs); + blk_queue_prep_rq(q, mmc_prep_request); + queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q); + queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q); if (mmc_can_erase(card)) - mmc_queue_setup_discard(mq->queue, card); - - blk_queue_bounce_limit(mq->queue, limit); - blk_queue_max_hw_sectors(mq->queue, + mmc_queue_setup_discard(q, card); + 
blk_queue_bounce_limit(q, limit); + blk_queue_max_hw_sectors(q, min(host->max_blk_count, host->max_req_size / 512)); - blk_queue_max_segments(mq->queue, host->max_segs); - blk_queue_max_segment_size(mq->queue, host->max_seg_size); - - /* Initialize thread_sem even if it is not used */ - sema_init(&mq->thread_sem, 1); + blk_queue_max_segments(q, host->max_segs); + blk_queue_max_segment_size(q, host->max_seg_size); } +static const struct blk_mq_ops mmc_mq_ops = { + .queue_rq = mmc_queue_request, + .init_request = mmc_init_request, + .exit_request = mmc_exit_request, + .complete = mmc_complete_request, +}; + /** * mmc_init_queue - initialise a queue structure. * @mq: mmc queue * @card: mmc card to attach this queue - * @lock: queue lock * @subname: partition subname * * Initialise a MMC card request queue. */ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, - spinlock_t *lock, const char *subname) + const char *subname) { struct mmc_host *host = card->host; - int ret = -ENOMEM; + int ret; mq->card = card; - mq->queue = blk_alloc_queue(GFP_KERNEL); - if (!mq->queue) - return -ENOMEM; - mq->queue->queue_lock = lock; - mq->queue->request_fn = mmc_request_fn; - mq->queue->init_rq_fn = mmc_init_request; - mq->queue->exit_rq_fn = mmc_exit_request; - mq->queue->cmd_size = sizeof(struct mmc_queue_req); - mq->queue->queuedata = mq; - ret = blk_init_allocated_queue(mq->queue); + mq->tag_set.ops = &mmc_mq_ops; + /* The MMC/SD protocols have only one command pipe */ + mq->tag_set.nr_hw_queues = 1; + /* Set this to 2 to simulate async requests, should we use 3? */ + mq->tag_set.queue_depth = 2; + mq->tag_set.cmd_size = sizeof(struct mmc_queue_req); + mq->tag_set.numa_node = NUMA_NO_NODE; + /* We use blocking requests */ + mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING; + /* Should we use BLK_MQ_F_SG_MERGE? 
*/ + mq->tag_set.driver_data = mq; + + ret = blk_mq_alloc_tag_set(&mq->tag_set); if (ret) { - blk_cleanup_queue(mq->queue); + dev_err(host->parent, "failed to allocate MQ tag set\n"); return ret; } - - blk_queue_prep_rq(mq->queue, mmc_prep_request); - - mmc_setup_queue(mq, card); - - mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s", - host->index, subname ? subname : ""); - - if (IS_ERR(mq->thread)) { - ret = PTR_ERR(mq->thread); - goto cleanup_queue; + mq->queue = blk_mq_init_queue(&mq->tag_set); + if (!mq->queue) { + dev_err(host->parent, "failed to initialize block MQ\n"); + goto cleanup_free_tag_set; } + mq->queue->queuedata = mq; + mmc_setup_queue(mq); return 0; -cleanup_queue: - blk_cleanup_queue(mq->queue); +cleanup_free_tag_set: + blk_mq_free_tag_set(&mq->tag_set); return ret; } void mmc_cleanup_queue(struct mmc_queue *mq) { struct request_queue *q = mq->queue; - unsigned long flags; /* Make sure the queue isn't suspended, as that will deadlock */ mmc_queue_resume(mq); - /* Then terminate our worker thread */ - kthread_stop(mq->thread); - /* Empty the queue */ - spin_lock_irqsave(q->queue_lock, flags); q->queuedata = NULL; blk_start_queue(q); - spin_unlock_irqrestore(q->queue_lock, flags); - + blk_cleanup_queue(q); + blk_mq_free_tag_set(&mq->tag_set); mq->card = NULL; } EXPORT_SYMBOL(mmc_cleanup_queue); @@ -265,23 +229,16 @@ EXPORT_SYMBOL(mmc_cleanup_queue); * mmc_queue_suspend - suspend a MMC request queue * @mq: MMC queue to suspend * - * Stop the block request queue, and wait for our thread to - * complete any outstanding requests. This ensures that we + * Stop the block request queue. This ensures that we * won't suspend while a request is being processed. 
*/ void mmc_queue_suspend(struct mmc_queue *mq) { struct request_queue *q = mq->queue; - unsigned long flags; if (!mq->suspended) { - mq->suspended |= true; - - spin_lock_irqsave(q->queue_lock, flags); + mq->suspended = true; blk_stop_queue(q); - spin_unlock_irqrestore(q->queue_lock, flags); - - down(&mq->thread_sem); } } @@ -292,16 +249,10 @@ void mmc_queue_suspend(struct mmc_queue *mq) void mmc_queue_resume(struct mmc_queue *mq) { struct request_queue *q = mq->queue; - unsigned long flags; if (mq->suspended) { mq->suspended = false; - - up(&mq->thread_sem); - - spin_lock_irqsave(q->queue_lock, flags); blk_start_queue(q); - spin_unlock_irqrestore(q->queue_lock, flags); } } diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h index 67ae311b107f..c78fbb226a90 100644 --- a/drivers/mmc/core/queue.h +++ b/drivers/mmc/core/queue.h @@ -61,16 +61,14 @@ struct mmc_queue_req { struct mmc_queue { struct mmc_card *card; - struct task_struct *thread; - struct semaphore thread_sem; bool suspended; - bool asleep; struct mmc_blk_data *blkdata; struct request_queue *queue; + struct mmc_ctx blkctx; + struct blk_mq_tag_set tag_set; }; -extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *, - const char *); +extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, const char *); extern void mmc_cleanup_queue(struct mmc_queue *); extern void mmc_queue_suspend(struct mmc_queue *); extern void mmc_queue_resume(struct mmc_queue *);