From patchwork Fri Nov 10 10:01:32 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 118516
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig,
    Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
    Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 01/12 v5] mmc: core: move the asynchronous post-processing
Date: Fri, 10 Nov 2017 11:01:32 +0100
Message-Id: <20171110100143.12256-2-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
References: <20171110100143.12256-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

This moves the asynchronous post-processing of a request over to the
finalization function.

The patch has a slight semantic change: both places will be in the
code path for if (host->areq) and in the same sequence, but before
this patch, the next request was started before performing
post-processing. The effect is that whereas before, the post- and
preprocessing happened after starting the next request, now the
post-processing will happen after the request is done and before the
next has started, which cuts half of the pre/post optimization out.
In the later patch named "mmc: core: replace waitqueue with worker"
we move the finalization to a worker started by mmc_request_done(),
and in the patch named "mmc: block: issue requests in massive
parallel" we introduce a forked success/failure path that can
quickly complete requests when they come back from the hardware.

These two later patches together restore the same optimization, but
in a more elegant manner that avoids the need to flush the two-stage
pipeline with NULL, something we remove between these two patches in
the commit named "mmc: queue: stop flushing the pipeline with NULL".

Signed-off-by: Linus Walleij
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/core.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

-- 
2.13.6

--
To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 1f0f44f4dd5f..e2366a82eebe 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -746,6 +746,9 @@ static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host)
                mmc_start_bkops(host->card, true);
        }
 
+       /* Successfully postprocess the old request at this point */
+       mmc_post_req(host, host->areq->mrq, 0);
+
        return status;
 }
 
@@ -790,10 +793,6 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
        if (status == MMC_BLK_SUCCESS && areq)
                start_err = __mmc_start_data_req(host, areq->mrq);
 
-       /* Postprocess the old request at this point */
-       if (host->areq)
-               mmc_post_req(host, host->areq->mrq, 0);
-
        /* Cancel a prepared request if it was not started. */
        if ((status != MMC_BLK_SUCCESS || start_err) && areq)
                mmc_post_req(host, areq->mrq, -EINVAL);

From patchwork Fri Nov 10 10:01:33 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 118517
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig,
    Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
    Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 02/12 v5] mmc: core: add a workqueue for completing requests
Date: Fri, 10 Nov 2017 11:01:33 +0100
Message-Id: <20171110100143.12256-3-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
References: <20171110100143.12256-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

As we want to complete requests autonomously from feeding the host
with new requests, we create a workqueue to deal with this
specifically in response to the callback from a host driver. This is
necessary to exploit parallelism properly.

This patch just adds the workqueue; later patches will make use of it.

Signed-off-by: Linus Walleij
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/core.c  | 9 +++++++++
 drivers/mmc/core/host.c  | 1 -
 include/linux/mmc/host.h | 4 ++++
 3 files changed, 13 insertions(+), 1 deletion(-)

-- 
2.13.6

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index e2366a82eebe..73ebee12e67b 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -2838,6 +2838,14 @@ void mmc_start_host(struct mmc_host *host)
        host->f_init = max(freqs[0], host->f_min);
        host->rescan_disable = 0;
        host->ios.power_mode = MMC_POWER_UNDEFINED;
+       /* Workqueue for completing requests */
+       host->req_done_wq = alloc_workqueue("mmc%d-reqdone",
+                       WQ_FREEZABLE | WQ_HIGHPRI | WQ_MEM_RECLAIM,
+                       0, host->index);
+       if (!host->req_done_wq) {
+               dev_err(mmc_dev(host), "could not allocate workqueue\n");
+               return;
+       }
 
        if (!(host->caps2 & MMC_CAP2_NO_PRESCAN_POWERUP)) {
                mmc_claim_host(host);
@@ -2859,6 +2867,7 @@ void mmc_stop_host(struct mmc_host *host)
        host->rescan_disable = 1;
        cancel_delayed_work_sync(&host->detect);
+       destroy_workqueue(host->req_done_wq);
 
        /* clear pm flags now and let card drivers set them as needed */
        host->pm_flags = 0;
diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index 35a9e4fd1a9f..88033294832f 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -390,7 +390,6 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
        INIT_DELAYED_WORK(&host->detect, mmc_rescan);
        INIT_DELAYED_WORK(&host->sdio_irq_work, sdio_irq_work);
        setup_timer(&host->retune_timer, mmc_retune_timer, (unsigned long)host);
-
        /*
         * By default, hosts do not support SGIO or large requests.
         * They have to set these according to their abilities.
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index e7743eca1021..e4fa7058c288 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -425,6 +426,9 @@ struct mmc_host {
        struct mmc_async_req    *areq;          /* active async req */
        struct mmc_context_info context_info;   /* async synchronization info */
 
+       /* finalization workqueue, handles finalizing requests */
+       struct workqueue_struct *req_done_wq;
+
        /* Ongoing data transfer that allows commands during transfer */
        struct mmc_request      *ongoing_mrq;

From patchwork Fri Nov 10 10:01:34 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 118518
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig,
    Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
    Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 03/12 v5] mmc: core: replace waitqueue with worker
Date: Fri, 10 Nov 2017 11:01:34 +0100
Message-Id: <20171110100143.12256-4-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
References: <20171110100143.12256-1-linus.walleij@linaro.org>
X-Mailing-List: linux-mmc@vger.kernel.org

The waitqueue in the host context is there to signal back from
mmc_request_done() through mmc_wait_data_done() that the hardware is
done with a command, and when the wait is over, the core will
typically submit the next asynchronous request that is pending, just
waiting for the hardware to be available.

This is in the way of letting mmc_request_done() trigger the report
up to the block layer that a block request is finished.

Re-jig this as a first step, removing the waitqueue and introducing
a work that will run after a completed asynchronous request,
finalizing that request, including retransmissions, and eventually
reporting back with a completion and a status code to the
asynchronous issue method.

This has the upside that we can remove the MMC_BLK_NEW_REQUEST
status code and the "new_request" state in the request queue that is
only there to make the state machine spin out the first time we send
a request.

Use the workqueue we introduced in the host for handling just this,
and then add a work and completion in the asynchronous request to
deal with this mechanism.

We introduce a pointer from mmc_request back to the asynchronous
request so these can be referenced from each other, and augment
mmc_wait_data_done() to use this pointer to get at the areq and kick
the worker, since that function is only used by asynchronous
requests anyway.

This is a central change that lets us do many other changes, since
we have broken the submit and complete code paths in two, and we can
potentially remove the NULL flushing of the asynchronous pipeline
and report block requests as finished directly from the worker.

Signed-off-by: Linus Walleij
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
--- drivers/mmc/core/block.c | 3 ++ drivers/mmc/core/core.c | 93 ++++++++++++++++++++++++------------------------ drivers/mmc/core/core.h | 2 ++ drivers/mmc/core/queue.c | 1 - include/linux/mmc/core.h | 3 +- include/linux/mmc/host.h | 7 ++-- 6 files changed, 59 insertions(+), 50 deletions(-) -- 2.13.6 -- To unsubscribe from this list: send the line "unsubscribe linux-mmc" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index ea80ff4cd7f9..5c84175e49be 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -1712,6 +1712,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag); brq->mrq.cmd = &brq->cmd; + brq->mrq.areq = NULL; brq->cmd.arg = blk_rq_pos(req); if (!mmc_card_blockaddr(card)) @@ -1764,6 +1765,8 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, } mqrq->areq.err_check = mmc_blk_err_check; + mqrq->areq.host = card->host; + INIT_WORK(&mqrq->areq.finalization_work, mmc_finalize_areq); } static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card, diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index 73ebee12e67b..7440daa2f559 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -369,10 +369,15 @@ EXPORT_SYMBOL(mmc_start_request); */ static void mmc_wait_data_done(struct mmc_request *mrq) { - struct mmc_context_info *context_info = &mrq->host->context_info; + struct mmc_host *host = mrq->host; + struct mmc_context_info *context_info = &host->context_info; + struct mmc_async_req *areq = mrq->areq; context_info->is_done_rcv = true; - wake_up_interruptible(&context_info->wait); + /* Schedule a work to deal with finalizing this request */ + if (!areq) + pr_err("areq of the data mmc_request was NULL!\n"); + queue_work(host->req_done_wq, &areq->finalization_work); } static void 
mmc_wait_done(struct mmc_request *mrq) @@ -695,43 +700,34 @@ static void mmc_post_req(struct mmc_host *host, struct mmc_request *mrq, * Returns the status of the ongoing asynchronous request, but * MMC_BLK_SUCCESS if no request was going on. */ -static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host) +void mmc_finalize_areq(struct work_struct *work) { + struct mmc_async_req *areq = + container_of(work, struct mmc_async_req, finalization_work); + struct mmc_host *host = areq->host; struct mmc_context_info *context_info = &host->context_info; - enum mmc_blk_status status; - - if (!host->areq) - return MMC_BLK_SUCCESS; - - while (1) { - wait_event_interruptible(context_info->wait, - (context_info->is_done_rcv || - context_info->is_new_req)); + enum mmc_blk_status status = MMC_BLK_SUCCESS; - if (context_info->is_done_rcv) { - struct mmc_command *cmd; + if (context_info->is_done_rcv) { + struct mmc_command *cmd; - context_info->is_done_rcv = false; - cmd = host->areq->mrq->cmd; + context_info->is_done_rcv = false; + cmd = areq->mrq->cmd; - if (!cmd->error || !cmd->retries || - mmc_card_removed(host->card)) { - status = host->areq->err_check(host->card, - host->areq); - break; /* return status */ - } else { - mmc_retune_recheck(host); - pr_info("%s: req failed (CMD%u): %d, retrying...\n", - mmc_hostname(host), - cmd->opcode, cmd->error); - cmd->retries--; - cmd->error = 0; - __mmc_start_request(host, host->areq->mrq); - continue; /* wait for done/new event again */ - } + if (!cmd->error || !cmd->retries || + mmc_card_removed(host->card)) { + status = areq->err_check(host->card, + areq); + } else { + mmc_retune_recheck(host); + pr_info("%s: req failed (CMD%u): %d, retrying...\n", + mmc_hostname(host), + cmd->opcode, cmd->error); + cmd->retries--; + cmd->error = 0; + __mmc_start_request(host, areq->mrq); + return; /* wait for done/new event again */ } - - return MMC_BLK_NEW_REQUEST; } mmc_retune_release(host); @@ -740,17 +736,19 @@ static enum mmc_blk_status 
mmc_finalize_areq(struct mmc_host *host) * Check BKOPS urgency for each R1 response */ if (host->card && mmc_card_mmc(host->card) && - ((mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1) || - (mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1B)) && - (host->areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) { + ((mmc_resp_type(areq->mrq->cmd) == MMC_RSP_R1) || + (mmc_resp_type(areq->mrq->cmd) == MMC_RSP_R1B)) && + (areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) { mmc_start_bkops(host->card, true); } /* Successfully postprocess the old request at this point */ - mmc_post_req(host, host->areq->mrq, 0); + mmc_post_req(host, areq->mrq, 0); - return status; + areq->finalization_status = status; + complete(&areq->complete); } +EXPORT_SYMBOL(mmc_finalize_areq); /** * mmc_start_areq - start an asynchronous request @@ -780,18 +778,22 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host, if (areq) mmc_pre_req(host, areq->mrq); - /* Finalize previous request */ - status = mmc_finalize_areq(host); + /* Finalize previous request, if there is one */ + if (previous) { + wait_for_completion(&previous->complete); + status = previous->finalization_status; + } else { + status = MMC_BLK_SUCCESS; + } if (ret_stat) *ret_stat = status; - /* The previous request is still going on... */ - if (status == MMC_BLK_NEW_REQUEST) - return NULL; - /* Fine so far, start the new request! */ - if (status == MMC_BLK_SUCCESS && areq) + if (status == MMC_BLK_SUCCESS && areq) { + init_completion(&areq->complete); + areq->mrq->areq = areq; start_err = __mmc_start_data_req(host, areq->mrq); + } /* Cancel a prepared request if it was not started. 
*/ if ((status != MMC_BLK_SUCCESS || start_err) && areq) @@ -3015,7 +3017,6 @@ void mmc_init_context_info(struct mmc_host *host) host->context_info.is_new_req = false; host->context_info.is_done_rcv = false; host->context_info.is_waiting_last_req = false; - init_waitqueue_head(&host->context_info.wait); } static int __init mmc_init(void) diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h index 71e6c6d7ceb7..e493d9d73fe2 100644 --- a/drivers/mmc/core/core.h +++ b/drivers/mmc/core/core.h @@ -13,6 +13,7 @@ #include #include +#include struct mmc_host; struct mmc_card; @@ -112,6 +113,7 @@ int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq); struct mmc_async_req; +void mmc_finalize_areq(struct work_struct *work); struct mmc_async_req *mmc_start_areq(struct mmc_host *host, struct mmc_async_req *areq, enum mmc_blk_status *ret_stat); diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 4f33d277b125..c46be4402803 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -111,7 +111,6 @@ static void mmc_request_fn(struct request_queue *q) if (cntx->is_waiting_last_req) { cntx->is_new_req = true; - wake_up_interruptible(&cntx->wait); } if (mq->asleep) diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h index 927519385482..d755ef8ea880 100644 --- a/include/linux/mmc/core.h +++ b/include/linux/mmc/core.h @@ -13,6 +13,7 @@ struct mmc_data; struct mmc_request; +struct mmc_async_req; enum mmc_blk_status { MMC_BLK_SUCCESS = 0, @@ -23,7 +24,6 @@ enum mmc_blk_status { MMC_BLK_DATA_ERR, MMC_BLK_ECC_ERR, MMC_BLK_NOMEDIUM, - MMC_BLK_NEW_REQUEST, }; struct mmc_command { @@ -155,6 +155,7 @@ struct mmc_request { struct completion completion; struct completion cmd_completion; + struct mmc_async_req *areq; /* pointer to areq if any */ void (*done)(struct mmc_request *);/* completion function */ /* * Notify uppers layers (e.g. 
mmc block driver) that recovery is needed diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index e4fa7058c288..d2ff79a16839 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -215,6 +216,10 @@ struct mmc_async_req { * Returns 0 if success otherwise non zero. */ enum mmc_blk_status (*err_check)(struct mmc_card *, struct mmc_async_req *); + struct work_struct finalization_work; + enum mmc_blk_status finalization_status; + struct completion complete; + struct mmc_host *host; }; /** @@ -239,13 +244,11 @@ struct mmc_slot { * @is_done_rcv wake up reason was done request * @is_new_req wake up reason was new request * @is_waiting_last_req mmc context waiting for single running request - * @wait wait queue */ struct mmc_context_info { bool is_done_rcv; bool is_new_req; bool is_waiting_last_req; - wait_queue_head_t wait; }; struct regulator; From patchwork Fri Nov 10 10:01:35 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Linus Walleij X-Patchwork-Id: 118519 Delivered-To: patch@linaro.org Received: by 10.140.22.164 with SMTP id 33csp7722513qgn; Fri, 10 Nov 2017 02:02:05 -0800 (PST) X-Google-Smtp-Source: ABhQp+QpD0I+HAn0Nn+34WGJzq44vw2J0wNrhDaLGZaC3BXDyUgeucjGlJhJrzF2aegyt2KTPdaG X-Received: by 10.84.141.131 with SMTP id 3mr3680875plv.136.1510308125791; Fri, 10 Nov 2017 02:02:05 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1510308125; cv=none; d=google.com; s=arc-20160816; b=C4b7jckrUWBy7V1d2jE2qxjzYgst9EsKHRNhS+5hX0L5Wnfz3B6djeSxh1h6mgBF9B uV3pSSfcKTWuMxT3a3virolX9B5gWishgQBHlhwQFwPt1zFUXyIiQ6KT5l8rgncO5C8v EITstWNhGI5TA6mnVH76uhT06rjN+RDTJkHSCUGAnXGbHzNvnSMDQlaIvuRNUBoVpQvU iwxzmY6RH2wYLoEKjtQ0ugtOX9zhDMUujeprmOK3fLLCqalqFGsnTpw5UmBlkFhcG5mS vy72vBVDUxUOyDE8n402rWanZTYYP8JhvBG+1X45Bz4ENzyHUZE+Uxvtepe5fI8eGMGc LHjA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; 
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig,
	Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
	Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 04/12] mmc: core: do away with is_done_rcv
Date: Fri, 10 Nov 2017 11:01:35 +0100
Message-Id: <20171110100143.12256-5-linus.walleij@linaro.org>
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
References: <20171110100143.12256-1-linus.walleij@linaro.org>
List-ID: X-Mailing-List: linux-mmc@vger.kernel.org

The "is_done_rcv" in the context info for the host is no longer
needed: it is clear from context (ha!) that as long as we are waiting
for the asynchronous request to come to completion, we are not done
receiving data, and when the finalization work has run and completed
the completion, we are indeed done.

Signed-off-by: Linus Walleij

---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/core.c  | 40 ++++++++++++++++------------------------
 include/linux/mmc/host.h |  2 --
 2 files changed, 16 insertions(+), 26 deletions(-)

-- 
2.13.6

--
To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 7440daa2f559..15a664d3c199 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -370,10 +370,8 @@ EXPORT_SYMBOL(mmc_start_request);
 static void mmc_wait_data_done(struct mmc_request *mrq)
 {
 	struct mmc_host *host = mrq->host;
-	struct mmc_context_info *context_info = &host->context_info;
 	struct mmc_async_req *areq = mrq->areq;
 
-	context_info->is_done_rcv = true;
 	/* Schedule a work to deal with finalizing this request */
 	if (!areq)
 		pr_err("areq of the data mmc_request was NULL!\n");
@@ -656,7 +654,7 @@ EXPORT_SYMBOL(mmc_cqe_recovery);
 bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq)
 {
 	if (host->areq)
-		return host->context_info.is_done_rcv;
+		return completion_done(&host->areq->complete);
 	else
 		return completion_done(&mrq->completion);
 }
@@ -705,29 +703,24 @@ void mmc_finalize_areq(struct work_struct *work)
 	struct mmc_async_req *areq =
 		container_of(work, struct mmc_async_req, finalization_work);
 	struct mmc_host *host = areq->host;
-	struct mmc_context_info *context_info = &host->context_info;
 	enum mmc_blk_status status = MMC_BLK_SUCCESS;
+	struct mmc_command *cmd;
 
-	if (context_info->is_done_rcv) {
-		struct mmc_command *cmd;
-
-		context_info->is_done_rcv = false;
-		cmd = areq->mrq->cmd;
+	cmd = areq->mrq->cmd;
 
-		if (!cmd->error || !cmd->retries ||
-		    mmc_card_removed(host->card)) {
-			status = areq->err_check(host->card,
-						 areq);
-		} else {
-			mmc_retune_recheck(host);
-			pr_info("%s: req failed (CMD%u): %d, retrying...\n",
-				mmc_hostname(host),
-				cmd->opcode, cmd->error);
-			cmd->retries--;
-			cmd->error = 0;
-			__mmc_start_request(host, areq->mrq);
-			return; /* wait for done/new event again */
-		}
+	if (!cmd->error || !cmd->retries ||
+	    mmc_card_removed(host->card)) {
+		status = areq->err_check(host->card,
+					 areq);
+	} else {
+		mmc_retune_recheck(host);
+		pr_info("%s: req failed (CMD%u): %d, retrying...\n",
+			mmc_hostname(host),
+			cmd->opcode, cmd->error);
+		cmd->retries--;
+		cmd->error = 0;
+		__mmc_start_request(host, areq->mrq);
+		return; /* wait for done/new event again */
 	}
 
 	mmc_retune_release(host);
@@ -3015,7 +3008,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host)
 void mmc_init_context_info(struct mmc_host *host)
 {
 	host->context_info.is_new_req = false;
-	host->context_info.is_done_rcv = false;
 	host->context_info.is_waiting_last_req = false;
 }
 
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index d2ff79a16839..d43d26562fae 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -241,12 +241,10 @@ struct mmc_slot {
 /**
  * mmc_context_info - synchronization details for mmc context
- * @is_done_rcv		wake up reason was done request
  * @is_new_req		wake up reason was new request
  * @is_waiting_last_req	mmc context waiting for single running request
 */
 struct mmc_context_info {
-	bool			is_done_rcv;
 	bool			is_new_req;
 	bool			is_waiting_last_req;
 };

From patchwork Fri Nov 10 10:01:36 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 118520
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig,
	Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
	Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 05/12] mmc: core: do away with is_new_req
Date: Fri, 10 Nov 2017 11:01:36 +0100
Message-Id: <20171110100143.12256-6-linus.walleij@linaro.org>
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
References: <20171110100143.12256-1-linus.walleij@linaro.org>

The host context member "is_new_req" is only assigned values, never
checked. Delete it.

Signed-off-by: Linus Walleij

---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/core.c  | 1 -
 drivers/mmc/core/queue.c | 5 -----
 include/linux/mmc/host.h | 2 --
 3 files changed, 8 deletions(-)

-- 
2.13.6

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 15a664d3c199..b1a5059f6cd1 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -3007,7 +3007,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host)
 */
 void mmc_init_context_info(struct mmc_host *host)
 {
-	host->context_info.is_new_req = false;
 	host->context_info.is_waiting_last_req = false;
 }
 
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index c46be4402803..4a0752ef6154 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -55,7 +55,6 @@ static int mmc_queue_thread(void *d)
 		req = blk_fetch_request(q);
 		mq->asleep = false;
 		cntx->is_waiting_last_req = false;
-		cntx->is_new_req = false;
 		if (!req) {
 			/*
 			 * Dispatch queue is empty so set flags for
@@ -109,10 +108,6 @@ static void mmc_request_fn(struct request_queue *q)
 
 	cntx = &mq->card->host->context_info;
 
-	if (cntx->is_waiting_last_req) {
-		cntx->is_new_req = true;
-	}
-
 	if (mq->asleep)
 		wake_up_process(mq->thread);
 }
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index d43d26562fae..36af19990683 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -241,11 +241,9 @@ struct mmc_slot {
 /**
  * mmc_context_info - synchronization details for mmc context
- * @is_new_req		wake up reason was new request
 * @is_waiting_last_req	mmc context waiting for single running request
 */
 struct mmc_context_info {
-	bool			is_new_req;
 	bool			is_waiting_last_req;
 };

From patchwork Fri Nov 10 10:01:37 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 118522
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig,
	Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
	Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 06/12 v5] mmc: core: kill off the context info
Date: Fri, 10 Nov 2017 11:01:37 +0100
Message-Id: <20171110100143.12256-7-linus.walleij@linaro.org>
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
References: <20171110100143.12256-1-linus.walleij@linaro.org>

The last member of the context info, is_waiting_last_req, is only
assigned values, never checked. Delete it and, as a result, the whole
context info.

Signed-off-by: Linus Walleij

---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/block.c |  2 --
 drivers/mmc/core/bus.c   |  1 -
 drivers/mmc/core/core.c  | 13 -------------
 drivers/mmc/core/core.h  |  2 --
 drivers/mmc/core/queue.c |  9 +--------
 include/linux/mmc/host.h |  9 ---------
 6 files changed, 1 insertion(+), 35 deletions(-)

-- 
2.13.6

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 5c84175e49be..86ec87c17e71 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2065,13 +2065,11 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		default:
 			/* Normal request, just issue it */
 			mmc_blk_issue_rw_rq(mq, req);
-			card->host->context_info.is_waiting_last_req = false;
 			break;
 		}
 	} else {
 		/* No request, flushing the pipeline with NULL */
 		mmc_blk_issue_rw_rq(mq, NULL);
-		card->host->context_info.is_waiting_last_req = false;
 	}
 
 out:
diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
index a4b49e25fe96..45904a7e87be 100644
--- a/drivers/mmc/core/bus.c
+++ b/drivers/mmc/core/bus.c
@@ -348,7 +348,6 @@ int mmc_add_card(struct mmc_card *card)
 #ifdef CONFIG_DEBUG_FS
 	mmc_add_card_debugfs(card);
 #endif
-	mmc_init_context_info(card->host);
 
 	card->dev.of_node = mmc_of_find_child_device(card->host, 0);
 
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index b1a5059f6cd1..fa86f9a15d29 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -2997,19 +2997,6 @@ void mmc_unregister_pm_notifier(struct mmc_host *host)
 }
 #endif
 
-/**
- * mmc_init_context_info() - init synchronization context
- * @host: mmc host
- *
- * Init struct context_info needed to implement asynchronous
- * request mechanism, used by mmc core, host driver and mmc requests
- * supplier.
- */
-void mmc_init_context_info(struct mmc_host *host)
-{
-	host->context_info.is_waiting_last_req = false;
-}
-
 static int __init mmc_init(void)
 {
 	int ret;
diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h
index e493d9d73fe2..88b852ac8f74 100644
--- a/drivers/mmc/core/core.h
+++ b/drivers/mmc/core/core.h
@@ -92,8 +92,6 @@ void mmc_remove_host_debugfs(struct mmc_host *host);
 void mmc_add_card_debugfs(struct mmc_card *card);
 void mmc_remove_card_debugfs(struct mmc_card *card);
 
-void mmc_init_context_info(struct mmc_host *host);
-
 int mmc_execute_tuning(struct mmc_card *card);
 int mmc_hs200_to_hs400(struct mmc_card *card);
 int mmc_hs400_to_hs200(struct mmc_card *card);
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 4a0752ef6154..2c232ba4e594 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -42,7 +42,6 @@ static int mmc_queue_thread(void *d)
 {
 	struct mmc_queue *mq = d;
 	struct request_queue *q = mq->queue;
-	struct mmc_context_info *cntx = &mq->card->host->context_info;
 
 	current->flags |= PF_MEMALLOC;
 
@@ -54,15 +53,12 @@ static int mmc_queue_thread(void *d)
 		set_current_state(TASK_INTERRUPTIBLE);
 		req = blk_fetch_request(q);
 		mq->asleep = false;
-		cntx->is_waiting_last_req = false;
 		if (!req) {
 			/*
 			 * Dispatch queue is empty so set flags for
 			 * mmc_request_fn() to wake us up.
 			 */
-			if (mq->qcnt)
-				cntx->is_waiting_last_req = true;
-			else
+			if (!mq->qcnt)
 				mq->asleep = true;
 		}
 		spin_unlock_irq(q->queue_lock);
@@ -96,7 +92,6 @@ static void mmc_request_fn(struct request_queue *q)
 {
 	struct mmc_queue *mq = q->queuedata;
 	struct request *req;
-	struct mmc_context_info *cntx;
 
 	if (!mq) {
 		while ((req = blk_fetch_request(q)) != NULL) {
@@ -106,8 +101,6 @@ static void mmc_request_fn(struct request_queue *q)
 		return;
 	}
 
-	cntx = &mq->card->host->context_info;
-
 	if (mq->asleep)
 		wake_up_process(mq->thread);
 }
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 36af19990683..4b210e9283f6 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -239,14 +239,6 @@ struct mmc_slot {
 	void *handler_priv;
 };
 
-/**
- * mmc_context_info - synchronization details for mmc context
- * @is_waiting_last_req	mmc context waiting for single running request
- */
-struct mmc_context_info {
-	bool			is_waiting_last_req;
-};
-
 struct regulator;
 struct mmc_pwrseq;
 
@@ -423,7 +415,6 @@ struct mmc_host {
 	struct dentry		*debugfs_root;
 
 	struct mmc_async_req	*areq;		/* active async req */
-	struct mmc_context_info	context_info;	/* async synchronization info */
 
 	/* finalization workqueue, handles finalizing requests */
 	struct workqueue_struct	*req_done_wq;

From patchwork Fri Nov 10 10:01:38 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 118521
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig,
	Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
	Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 07/12 v5] mmc: queue: simplify queue logic
Date: Fri, 10 Nov 2017 11:01:38 +0100
Message-Id: <20171110100143.12256-8-linus.walleij@linaro.org>
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
References: <20171110100143.12256-1-linus.walleij@linaro.org>

The if() statement checking whether there is no current or previous
request is now just looking ahead at something that will be concluded
a few lines below. Simplify the logic by moving the assignment of
.asleep.

Signed-off-by: Linus Walleij

---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/queue.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

-- 
2.13.6

diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 2c232ba4e594..023bbddc1a0b 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -53,14 +53,6 @@ static int mmc_queue_thread(void *d)
 		set_current_state(TASK_INTERRUPTIBLE);
 		req = blk_fetch_request(q);
 		mq->asleep = false;
-		if (!req) {
-			/*
-			 * Dispatch queue is empty so set flags for
-			 * mmc_request_fn() to wake us up.
-			 */
-			if (!mq->qcnt)
-				mq->asleep = true;
-		}
 		spin_unlock_irq(q->queue_lock);
 
 		if (req || mq->qcnt) {
@@ -68,6 +60,7 @@ static int mmc_queue_thread(void *d)
 			mmc_blk_issue_rq(mq, req);
 			cond_resched();
 		} else {
+			mq->asleep = true;
 			if (kthread_should_stop()) {
 				set_current_state(TASK_RUNNING);
 				break;

From patchwork Fri Nov 10 10:01:39 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 118523
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 08/12 v5] mmc: block: shuffle retry and error handling
Date: Fri, 10 Nov 2017 11:01:39 +0100
Message-Id: <20171110100143.12256-9-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
References: <20171110100143.12256-1-linus.walleij@linaro.org>

Instead of doing retries at the same time as trying to submit new requests, do the retries when the request is reported as completed by the driver, in the finalization worker. This is achieved by letting the core worker call back into the block layer through a new callback in the asynchronous request, ->report_done_status(), which passes the status back to the block core. The block core can then repeatedly hammer the request, using single requests, retries etc, by calling back into the core layer with mmc_restart_areq(), which just kicks the same asynchronous request again without waiting for a previous ongoing request.
The beauty of it is that the completion will not complete until the block layer has had the opportunity to hammer a bit at the card using a bunch of different approaches, the ones that used to be in the while() loop, now in mmc_blk_rw_done().

The algorithm for recapture, retry and error handling is identical to the one we used to have in mmc_blk_issue_rw_rq(), only augmented to get called in another path from the core.

We have to add and initialize a pointer back to the struct mmc_queue from the struct mmc_queue_req, to find the queue from the asynchronous request when reporting the status back to the core.

Other users of the asynchronous request that do not need to retry or use miscellaneous error handling fallbacks will work fine, since a NULL ->report_done_status() is just fine. This is currently only the case for the test module.

Signed-off-by: Linus Walleij

---
ChangeLog v4->v5:
- The "disable_multi" and "retry" variables used to be inside the do {} loop in the error handler, so now that we restart the areq when there are problems, we need to make these part of the struct mmc_async_req and reinitialize them to false/zero when restarting an asynchronous request.
- Assign mrq->areq also when restarting asynchronous requests: the mrq is a quick-turnaround produce-and-consume object that only lives for one request to the host, so it needs to be assigned every time we make a new mrq and want to send it off to the host.
- Switch "disable_multi" to be a bool, as is appropriate.
- Be more careful to assign NULL to host->areq when it is not in use, and make sure this only happens in one spot.
- Rebasing on the "next" branch in the MMC tree.
--- drivers/mmc/core/block.c | 347 ++++++++++++++++++++++++----------------------- drivers/mmc/core/core.c | 46 ++++--- drivers/mmc/core/core.h | 1 + drivers/mmc/core/queue.c | 2 + drivers/mmc/core/queue.h | 1 + include/linux/mmc/host.h | 7 +- 6 files changed, 221 insertions(+), 183 deletions(-) -- 2.13.6 diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 86ec87c17e71..2cda2f52058e 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -1575,7 +1575,7 @@ static enum mmc_blk_status mmc_blk_err_check(struct mmc_card *card, } static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq, - int disable_multi, bool *do_rel_wr_p, + bool disable_multi, bool *do_rel_wr_p, bool *do_data_tag_p) { struct mmc_blk_data *md = mq->blkdata; @@ -1700,7 +1700,7 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq, static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, struct mmc_card *card, - int disable_multi, + bool disable_multi, struct mmc_queue *mq) { u32 readcmd, writecmd; @@ -1811,198 +1811,213 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card, /** * mmc_blk_rw_try_restart() - tries to restart the current async request * @mq: the queue with the card and host to restart - * @req: a new request that want to be started after the current one + * @mqrq: the mmc_queue_request containing the areq to be restarted */ -static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req, +static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct mmc_queue_req *mqrq) { - if (!req) - return; + struct mmc_async_req *areq = &mqrq->areq; + + /* Proceed and try to restart the current async request */ + mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq); + areq->disable_multi = false;
+ areq->retry = 0; + mmc_restart_areq(mq->card->host, areq); +} + +static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status status) +{ + struct mmc_queue *mq; + struct mmc_blk_data *md; + struct mmc_card *card; + struct mmc_host *host; + struct mmc_queue_req *mq_rq; + struct mmc_blk_request *brq; + struct request *old_req; + bool req_pending = true; + int type, retune_retry_done = 0; /* - * If the card was removed, just cancel everything and return. + * An asynchronous request has been completed and we proceed + * to handle the result of it. */ - if (mmc_card_removed(mq->card)) { - req->rq_flags |= RQF_QUIET; - blk_end_request_all(req, BLK_STS_IOERR); - mq->qcnt--; /* FIXME: just set to 0? */ + mq_rq = container_of(areq, struct mmc_queue_req, areq); + mq = mq_rq->mq; + md = mq->blkdata; + card = mq->card; + host = card->host; + brq = &mq_rq->brq; + old_req = mmc_queue_req_to_req(mq_rq); + type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE; + + switch (status) { + case MMC_BLK_SUCCESS: + case MMC_BLK_PARTIAL: + /* + * A block was successfully transferred. + */ + mmc_blk_reset_success(md, type); + req_pending = blk_end_request(old_req, BLK_STS_OK, + brq->data.bytes_xfered); + /* + * If the blk_end_request function returns non-zero even + * though all data has been transferred and no errors + * were returned by the host controller, it's a bug. 
+ */ + if (status == MMC_BLK_SUCCESS && req_pending) { + pr_err("%s BUG rq_tot %d d_xfer %d\n", + __func__, blk_rq_bytes(old_req), + brq->data.bytes_xfered); + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + return; + } + break; + case MMC_BLK_CMD_ERR: + req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending); + if (mmc_blk_reset(md, card->host, type)) { + if (req_pending) + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + else + mq->qcnt--; + mmc_blk_rw_try_restart(mq, mq_rq); + return; + } + if (!req_pending) { + mq->qcnt--; + mmc_blk_rw_try_restart(mq, mq_rq); + return; + } + break; + case MMC_BLK_RETRY: + retune_retry_done = brq->retune_retry_done; + if (areq->retry++ < 5) + break; + /* Fall through */ + case MMC_BLK_ABORT: + if (!mmc_blk_reset(md, card->host, type)) + break; + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + mmc_blk_rw_try_restart(mq, mq_rq); + return; + case MMC_BLK_DATA_ERR: { + int err; + err = mmc_blk_reset(md, card->host, type); + if (!err) + break; + if (err == -ENODEV) { + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + mmc_blk_rw_try_restart(mq, mq_rq); + return; + } + /* Fall through */ + } + case MMC_BLK_ECC_ERR: + if (brq->data.blocks > 1) { + /* Redo read one sector at a time */ + pr_warn("%s: retrying using single block read\n", + old_req->rq_disk->disk_name); + areq->disable_multi = true; + break; + } + /* + * After an error, we redo I/O one sector at a + * time, so we only reach here after trying to + * read a single sector. 
+ */ + req_pending = blk_end_request(old_req, BLK_STS_IOERR, + brq->data.blksz); + if (!req_pending) { + mq->qcnt--; + mmc_blk_rw_try_restart(mq, mq_rq); + return; + } + break; + case MMC_BLK_NOMEDIUM: + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + mmc_blk_rw_try_restart(mq, mq_rq); + return; + default: + pr_err("%s: Unhandled return value (%d)", + old_req->rq_disk->disk_name, status); + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + mmc_blk_rw_try_restart(mq, mq_rq); return; } - /* Else proceed and try to restart the current async request */ - mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq); - mmc_start_areq(mq->card->host, &mqrq->areq, NULL); + + if (req_pending) { + /* + * In case of a incomplete request + * prepare it again and resend. + */ + mmc_blk_rw_rq_prep(mq_rq, card, + areq->disable_multi, mq); + mmc_start_areq(card->host, areq, NULL); + mq_rq->brq.retune_retry_done = retune_retry_done; + } else { + /* Else, this request is done */ + mq->qcnt--; + } } static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) { - struct mmc_blk_data *md = mq->blkdata; - struct mmc_card *card = md->queue.card; - struct mmc_blk_request *brq; - int disable_multi = 0, retry = 0, type, retune_retry_done = 0; enum mmc_blk_status status; - struct mmc_queue_req *mqrq_cur = NULL; - struct mmc_queue_req *mq_rq; - struct request *old_req; struct mmc_async_req *new_areq; struct mmc_async_req *old_areq; - bool req_pending = true; + struct mmc_card *card = mq->card; - if (new_req) { - mqrq_cur = req_to_mmc_queue_req(new_req); + if (new_req) mq->qcnt++; - } if (!mq->qcnt) return; - do { - if (new_req) { - /* - * When 4KB native sector is enabled, only 8 blocks - * multiple read or write is allowed - */ - if (mmc_large_sector(card) && - !IS_ALIGNED(blk_rq_sectors(new_req), 8)) { - pr_err("%s: Transfer size is not 4KB sector size aligned\n", - new_req->rq_disk->disk_name); - mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur); - return; - } - - 
mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq); - new_areq = &mqrq_cur->areq; - } else - new_areq = NULL; - - old_areq = mmc_start_areq(card->host, new_areq, &status); - if (!old_areq) { - /* - * We have just put the first request into the pipeline - * and there is nothing more to do until it is - * complete. - */ - return; - } + /* + * If the card was removed, just cancel everything and return. + */ + if (mmc_card_removed(card)) { + new_req->rq_flags |= RQF_QUIET; + blk_end_request_all(new_req, BLK_STS_IOERR); + mq->qcnt--; /* FIXME: just set to 0? */ + return; + } + if (new_req) { + struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req); /* - * An asynchronous request has been completed and we proceed - * to handle the result of it. + * When 4KB native sector is enabled, only 8 blocks + * multiple read or write is allowed */ - mq_rq = container_of(old_areq, struct mmc_queue_req, areq); - brq = &mq_rq->brq; - old_req = mmc_queue_req_to_req(mq_rq); - type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE; - - switch (status) { - case MMC_BLK_SUCCESS: - case MMC_BLK_PARTIAL: - /* - * A block was successfully transferred. - */ - mmc_blk_reset_success(md, type); - - req_pending = blk_end_request(old_req, BLK_STS_OK, - brq->data.bytes_xfered); - /* - * If the blk_end_request function returns non-zero even - * though all data has been transferred and no errors - * were returned by the host controller, it's a bug. 
- */ - if (status == MMC_BLK_SUCCESS && req_pending) { - pr_err("%s BUG rq_tot %d d_xfer %d\n", - __func__, blk_rq_bytes(old_req), - brq->data.bytes_xfered); - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - return; - } - break; - case MMC_BLK_CMD_ERR: - req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending); - if (mmc_blk_reset(md, card->host, type)) { - if (req_pending) - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - else - mq->qcnt--; - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - } - if (!req_pending) { - mq->qcnt--; - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - } - break; - case MMC_BLK_RETRY: - retune_retry_done = brq->retune_retry_done; - if (retry++ < 5) - break; - /* Fall through */ - case MMC_BLK_ABORT: - if (!mmc_blk_reset(md, card->host, type)) - break; - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - case MMC_BLK_DATA_ERR: { - int err; - - err = mmc_blk_reset(md, card->host, type); - if (!err) - break; - if (err == -ENODEV) { - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - } - /* Fall through */ - } - case MMC_BLK_ECC_ERR: - if (brq->data.blocks > 1) { - /* Redo read one sector at a time */ - pr_warn("%s: retrying using single block read\n", - old_req->rq_disk->disk_name); - disable_multi = 1; - break; - } - /* - * After an error, we redo I/O one sector at a - * time, so we only reach here after trying to - * read a single sector. 
- */ - req_pending = blk_end_request(old_req, BLK_STS_IOERR, - brq->data.blksz); - if (!req_pending) { - mq->qcnt--; - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - } - break; - case MMC_BLK_NOMEDIUM: - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); - return; - default: - pr_err("%s: Unhandled return value (%d)", - old_req->rq_disk->disk_name, status); - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); + if (mmc_large_sector(card) && + !IS_ALIGNED(blk_rq_sectors(new_req), 8)) { + pr_err("%s: Transfer size is not 4KB sector size aligned\n", + new_req->rq_disk->disk_name); + mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur); return; } - if (req_pending) { - /* - * In case of a incomplete request - * prepare it again and resend. - */ - mmc_blk_rw_rq_prep(mq_rq, card, - disable_multi, mq); - mmc_start_areq(card->host, - &mq_rq->areq, NULL); - mq_rq->brq.retune_retry_done = retune_retry_done; - } - } while (req_pending); + mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq); + new_areq = &mqrq_cur->areq; + new_areq->report_done_status = mmc_blk_rw_done; + new_areq->disable_multi = false; + new_areq->retry = 0; + } else + new_areq = NULL; - mq->qcnt--; + old_areq = mmc_start_areq(card->host, new_areq, &status); + if (!old_areq) { + /* + * We have just put the first request into the pipeline + * and there is nothing more to do until it is + * complete. + */ + return; + } + /* + * FIXME: yes, we just discard the old_areq, it will be + * post-processed when done, in mmc_blk_rw_done(). We clean + * this up in later patches. 
+ */ } void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index fa86f9a15d29..f49a2798fb56 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -738,12 +738,29 @@ void mmc_finalize_areq(struct work_struct *work) /* Successfully postprocess the old request at this point */ mmc_post_req(host, areq->mrq, 0); - areq->finalization_status = status; + /* Call back with status, this will trigger retry etc if needed */ + if (areq->report_done_status) + areq->report_done_status(areq, status); + + /* This opens the gate for the next request to start on the host */ complete(&areq->complete); } EXPORT_SYMBOL(mmc_finalize_areq); /** + * mmc_restart_areq() - restart an asynchronous request + * @host: MMC host to restart the command on + * @areq: the asynchronous request to restart + */ +int mmc_restart_areq(struct mmc_host *host, + struct mmc_async_req *areq) +{ + areq->mrq->areq = areq; + return __mmc_start_data_req(host, areq->mrq); +} +EXPORT_SYMBOL(mmc_restart_areq); + +/** * mmc_start_areq - start an asynchronous request * @host: MMC host to start command * @areq: asynchronous request to start @@ -763,7 +780,6 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host, struct mmc_async_req *areq, enum mmc_blk_status *ret_stat) { - enum mmc_blk_status status; int start_err = 0; struct mmc_async_req *previous = host->areq; @@ -774,29 +790,27 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host, /* Finalize previous request, if there is one */ if (previous) { wait_for_completion(&previous->complete); - status = previous->finalization_status; - } else { - status = MMC_BLK_SUCCESS; + host->areq = NULL; } + + /* Just always succeed */ if (ret_stat) - *ret_stat = status; + *ret_stat = MMC_BLK_SUCCESS; /* Fine so far, start the new request! 
*/ - if (status == MMC_BLK_SUCCESS && areq) { + if (areq) { init_completion(&areq->complete); areq->mrq->areq = areq; start_err = __mmc_start_data_req(host, areq->mrq); + /* Cancel a prepared request if it was not started. */ + if (start_err) { + mmc_post_req(host, areq->mrq, -EINVAL); + host->areq = NULL; + } else { + host->areq = areq; + } } - /* Cancel a prepared request if it was not started. */ - if ((status != MMC_BLK_SUCCESS || start_err) && areq) - mmc_post_req(host, areq->mrq, -EINVAL); - - if (status != MMC_BLK_SUCCESS) - host->areq = NULL; - else - host->areq = areq; - return previous; } EXPORT_SYMBOL(mmc_start_areq); diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h index 88b852ac8f74..1859804ecd80 100644 --- a/drivers/mmc/core/core.h +++ b/drivers/mmc/core/core.h @@ -112,6 +112,7 @@ int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq); struct mmc_async_req; void mmc_finalize_areq(struct work_struct *work); +int mmc_restart_areq(struct mmc_host *host, struct mmc_async_req *areq); struct mmc_async_req *mmc_start_areq(struct mmc_host *host, struct mmc_async_req *areq, enum mmc_blk_status *ret_stat); diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 023bbddc1a0b..db1fa11d9870 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -145,6 +145,7 @@ static int mmc_init_request(struct request_queue *q, struct request *req, mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp); if (!mq_rq->sg) return -ENOMEM; + mq_rq->mq = mq; return 0; } @@ -155,6 +156,7 @@ static void mmc_exit_request(struct request_queue *q, struct request *req) kfree(mq_rq->sg); mq_rq->sg = NULL; + mq_rq->mq = NULL; } static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card) diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h index 68f68ecd94ea..dce7cedb9d0b 100644 --- a/drivers/mmc/core/queue.h +++ b/drivers/mmc/core/queue.h @@ -52,6 +52,7 @@ struct mmc_queue_req { struct mmc_blk_request brq; struct 
scatterlist *sg; struct mmc_async_req areq; + struct mmc_queue *mq; enum mmc_drv_op drv_op; int drv_op_result; void *drv_op_data; diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index 4b210e9283f6..f1c362e0765c 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -211,13 +211,18 @@ struct mmc_cqe_ops { struct mmc_async_req { /* active mmc request */ struct mmc_request *mrq; + bool disable_multi; + int retry; /* * Check error status of completed mmc request. * Returns 0 if success otherwise non zero. */ enum mmc_blk_status (*err_check)(struct mmc_card *, struct mmc_async_req *); + /* + * Report finalization status from the core to e.g. the block layer. + */ + void (*report_done_status)(struct mmc_async_req *, enum mmc_blk_status); struct work_struct finalization_work; - enum mmc_blk_status finalization_status; struct completion complete; struct mmc_host *host; };

From patchwork Fri Nov 10 10:01:40 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 118524
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 09/12 v5] mmc: queue: stop flushing the pipeline with NULL
Date: Fri, 10 Nov 2017 11:01:40 +0100
Message-Id: <20171110100143.12256-10-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.13.6
In-Reply-To:
<20171110100143.12256-1-linus.walleij@linaro.org>
References: <20171110100143.12256-1-linus.walleij@linaro.org>

Remove all the pipeline flushing: i.e. repeatedly sending NULL down to the core layer to flush out asynchronous requests, and also sending NULL after "special" commands to achieve the same flush.

Instead: let the "special" commands wait for any ongoing asynchronous transfers using the completion, and apart from that expect the core.c and block.c layers to deal with the ongoing requests autonomously, without any "push" from the queue.

Add a function in the core to wait for an asynchronous request to complete.

Update the tests to use the new function prototypes.

This kills off some FIXMEs, such as getting rid of the mq->qcnt queue depth variable that was introduced a while back.

It is a vital step toward multiqueue enablement that we stop pulling NULL off the end of the request queue to flush the asynchronous issuing mechanism.

Signed-off-by: Linus Walleij

---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
--- drivers/mmc/core/block.c | 173 ++++++++++++++++---------------------- drivers/mmc/core/core.c | 50 +++++++------ drivers/mmc/core/core.h | 6 +- drivers/mmc/core/mmc_test.c | 31 ++------ drivers/mmc/core/queue.c | 11 ++- drivers/mmc/core/queue.h | 7 -- 6 files changed, 108 insertions(+), 170 deletions(-) -- 2.13.6 diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 2cda2f52058e..c7a57006e27f 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -1805,7 +1805,6 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card, if (mmc_card_removed(card)) req->rq_flags |= RQF_QUIET; while (blk_end_request(req, BLK_STS_IOERR, blk_rq_cur_bytes(req))); - mq->qcnt--; } /** @@ -1877,13 +1876,10 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat if (mmc_blk_reset(md, card->host, type)) { if (req_pending) mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - else - mq->qcnt--; mmc_blk_rw_try_restart(mq, mq_rq); return; } if (!req_pending) { - mq->qcnt--; mmc_blk_rw_try_restart(mq, mq_rq); return; } @@ -1927,7 +1923,6 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat req_pending = blk_end_request(old_req, BLK_STS_IOERR, brq->data.blksz); if (!req_pending) { - mq->qcnt--; mmc_blk_rw_try_restart(mq, mq_rq); return; } @@ -1951,26 +1946,16 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat */ mmc_blk_rw_rq_prep(mq_rq, card, areq->disable_multi, mq); - mmc_start_areq(card->host, areq, NULL); + mmc_start_areq(card->host, areq); mq_rq->brq.retune_retry_done = retune_retry_done; - } else { - /* Else, this request is done */ - mq->qcnt--; } } static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) { - enum
mmc_blk_status status; - struct mmc_async_req *new_areq; - struct mmc_async_req *old_areq; struct mmc_card *card = mq->card; - - if (new_req) - mq->qcnt++; - - if (!mq->qcnt) - return; + struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req); + struct mmc_async_req *areq = &mqrq_cur->areq; /* * If the card was removed, just cancel everything and return. @@ -1978,46 +1963,26 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) if (mmc_card_removed(card)) { new_req->rq_flags |= RQF_QUIET; blk_end_request_all(new_req, BLK_STS_IOERR); - mq->qcnt--; /* FIXME: just set to 0? */ return; } - if (new_req) { - struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req); - /* - * When 4KB native sector is enabled, only 8 blocks - * multiple read or write is allowed - */ - if (mmc_large_sector(card) && - !IS_ALIGNED(blk_rq_sectors(new_req), 8)) { - pr_err("%s: Transfer size is not 4KB sector size aligned\n", - new_req->rq_disk->disk_name); - mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur); - return; - } - - mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq); - new_areq = &mqrq_cur->areq; - new_areq->report_done_status = mmc_blk_rw_done; - new_areq->disable_multi = false; - new_areq->retry = 0; - } else - new_areq = NULL; - - old_areq = mmc_start_areq(card->host, new_areq, &status); - if (!old_areq) { - /* - * We have just put the first request into the pipeline - * and there is nothing more to do until it is - * complete. - */ - return; - } /* - * FIXME: yes, we just discard the old_areq, it will be - * post-processed when done, in mmc_blk_rw_done(). We clean - * this up in later patches. 
+ * When 4KB native sector is enabled, only 8 blocks + * multiple read or write is allowed */ + if (mmc_large_sector(card) && + !IS_ALIGNED(blk_rq_sectors(new_req), 8)) { + pr_err("%s: Transfer size is not 4KB sector size aligned\n", + new_req->rq_disk->disk_name); + mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur); + return; + } + + mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq); + areq->disable_multi = false; + areq->retry = 0; + areq->report_done_status = mmc_blk_rw_done; + mmc_start_areq(card->host, areq); } void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) @@ -2026,70 +1991,56 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) struct mmc_blk_data *md = mq->blkdata; struct mmc_card *card = md->queue.card; - if (req && !mq->qcnt) - /* claim host only for the first request */ - mmc_get_card(card, NULL); + if (!req) { + pr_err("%s: tried to issue NULL request\n", __func__); + return; + } ret = mmc_blk_part_switch(card, md->part_type); if (ret) { - if (req) { - blk_end_request_all(req, BLK_STS_IOERR); - } - goto out; + blk_end_request_all(req, BLK_STS_IOERR); + return; } - if (req) { - switch (req_op(req)) { - case REQ_OP_DRV_IN: - case REQ_OP_DRV_OUT: - /* - * Complete ongoing async transfer before issuing - * ioctl()s - */ - if (mq->qcnt) - mmc_blk_issue_rw_rq(mq, NULL); - mmc_blk_issue_drv_op(mq, req); - break; - case REQ_OP_DISCARD: - /* - * Complete ongoing async transfer before issuing - * discard. - */ - if (mq->qcnt) - mmc_blk_issue_rw_rq(mq, NULL); - mmc_blk_issue_discard_rq(mq, req); - break; - case REQ_OP_SECURE_ERASE: - /* - * Complete ongoing async transfer before issuing - * secure erase. - */ - if (mq->qcnt) - mmc_blk_issue_rw_rq(mq, NULL); - mmc_blk_issue_secdiscard_rq(mq, req); - break; - case REQ_OP_FLUSH: - /* - * Complete ongoing async transfer before issuing - * flush. 
- */ - if (mq->qcnt) - mmc_blk_issue_rw_rq(mq, NULL); - mmc_blk_issue_flush(mq, req); - break; - default: - /* Normal request, just issue it */ - mmc_blk_issue_rw_rq(mq, req); - break; - } - } else { - /* No request, flushing the pipeline with NULL */ - mmc_blk_issue_rw_rq(mq, NULL); + switch (req_op(req)) { + case REQ_OP_DRV_IN: + case REQ_OP_DRV_OUT: + /* + * Complete ongoing async transfer before issuing + * ioctl()s + */ + mmc_wait_for_areq(card->host); + mmc_blk_issue_drv_op(mq, req); + break; + case REQ_OP_DISCARD: + /* + * Complete ongoing async transfer before issuing + * discard. + */ + mmc_wait_for_areq(card->host); + mmc_blk_issue_discard_rq(mq, req); + break; + case REQ_OP_SECURE_ERASE: + /* + * Complete ongoing async transfer before issuing + * secure erase. + */ + mmc_wait_for_areq(card->host); + mmc_blk_issue_secdiscard_rq(mq, req); + break; + case REQ_OP_FLUSH: + /* + * Complete ongoing async transfer before issuing + * flush. + */ + mmc_wait_for_areq(card->host); + mmc_blk_issue_flush(mq, req); + break; + default: + /* Normal request, just issue it */ + mmc_blk_issue_rw_rq(mq, req); + break; } - -out: - if (!mq->qcnt) - mmc_put_card(card, NULL); } static inline int mmc_blk_readonly(struct mmc_card *card) diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c index f49a2798fb56..42795fdfb730 100644 --- a/drivers/mmc/core/core.c +++ b/drivers/mmc/core/core.c @@ -747,6 +747,15 @@ void mmc_finalize_areq(struct work_struct *work) } EXPORT_SYMBOL(mmc_finalize_areq); +void mmc_wait_for_areq(struct mmc_host *host) +{ + if (host->areq) { + wait_for_completion(&host->areq->complete); + host->areq = NULL; + } +} +EXPORT_SYMBOL(mmc_wait_for_areq); + /** * mmc_restart_areq() - restart an asynchronous request * @host: MMC host to restart the command on @@ -776,16 +785,18 @@ EXPORT_SYMBOL(mmc_restart_areq); * return the completed request. If there is no ongoing request, NULL * is returned without waiting. NULL is not an error condition. 
*/ -struct mmc_async_req *mmc_start_areq(struct mmc_host *host, - struct mmc_async_req *areq, - enum mmc_blk_status *ret_stat) +int mmc_start_areq(struct mmc_host *host, + struct mmc_async_req *areq) { - int start_err = 0; struct mmc_async_req *previous = host->areq; + int ret; + + /* Delete this check when we trust the code */ + if (!areq) + pr_err("%s: NULL asynchronous request!\n", __func__); /* Prepare a new request */ - if (areq) - mmc_pre_req(host, areq->mrq); + mmc_pre_req(host, areq->mrq); /* Finalize previous request, if there is one */ if (previous) { @@ -793,25 +804,20 @@ struct mmc_async_req *mmc_start_areq(struct mmc_host *host, host->areq = NULL; } - /* Just always succeed */ - if (ret_stat) - *ret_stat = MMC_BLK_SUCCESS; - /* Fine so far, start the new request! */ - if (areq) { - init_completion(&areq->complete); - areq->mrq->areq = areq; - start_err = __mmc_start_data_req(host, areq->mrq); - /* Cancel a prepared request if it was not started. */ - if (start_err) { - mmc_post_req(host, areq->mrq, -EINVAL); - host->areq = NULL; - } else { - host->areq = areq; - } + init_completion(&areq->complete); + areq->mrq->areq = areq; + ret = __mmc_start_data_req(host, areq->mrq); + /* Cancel a prepared request if it was not started. 
*/ + if (ret) { + mmc_post_req(host, areq->mrq, -EINVAL); + host->areq = NULL; + pr_err("%s: failed to start request\n", __func__); + } else { + host->areq = areq; } - return previous; + return ret; } EXPORT_SYMBOL(mmc_start_areq); diff --git a/drivers/mmc/core/core.h b/drivers/mmc/core/core.h index 1859804ecd80..5b8d0f1147ef 100644 --- a/drivers/mmc/core/core.h +++ b/drivers/mmc/core/core.h @@ -113,9 +113,9 @@ struct mmc_async_req; void mmc_finalize_areq(struct work_struct *work); int mmc_restart_areq(struct mmc_host *host, struct mmc_async_req *areq); -struct mmc_async_req *mmc_start_areq(struct mmc_host *host, - struct mmc_async_req *areq, - enum mmc_blk_status *ret_stat); +int mmc_start_areq(struct mmc_host *host, + struct mmc_async_req *areq); +void mmc_wait_for_areq(struct mmc_host *host); int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr, unsigned int arg); diff --git a/drivers/mmc/core/mmc_test.c b/drivers/mmc/core/mmc_test.c index 478869805b96..256fdce38449 100644 --- a/drivers/mmc/core/mmc_test.c +++ b/drivers/mmc/core/mmc_test.c @@ -839,10 +839,8 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test, { struct mmc_test_req *rq1, *rq2; struct mmc_test_async_req test_areq[2]; - struct mmc_async_req *done_areq; struct mmc_async_req *cur_areq = &test_areq[0].areq; struct mmc_async_req *other_areq = &test_areq[1].areq; - enum mmc_blk_status status; int i; int ret = RESULT_OK; @@ -864,25 +862,16 @@ static int mmc_test_nonblock_transfer(struct mmc_test_card *test, for (i = 0; i < count; i++) { mmc_test_prepare_mrq(test, cur_areq->mrq, sg, sg_len, dev_addr, blocks, blksz, write); - done_areq = mmc_start_areq(test->card->host, cur_areq, &status); + ret = mmc_start_areq(test->card->host, cur_areq); + mmc_wait_for_areq(test->card->host); - if (status != MMC_BLK_SUCCESS || (!done_areq && i > 0)) { - ret = RESULT_FAIL; - goto err; - } - - if (done_areq) - mmc_test_req_reset(container_of(done_areq->mrq, + 
mmc_test_req_reset(container_of(cur_areq->mrq, struct mmc_test_req, mrq)); swap(cur_areq, other_areq); dev_addr += blocks; } - done_areq = mmc_start_areq(test->card->host, NULL, &status); - if (status != MMC_BLK_SUCCESS) - ret = RESULT_FAIL; - err: kfree(rq1); kfree(rq2); @@ -2360,7 +2349,6 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test, struct mmc_request *mrq; unsigned long timeout; bool expired = false; - enum mmc_blk_status blkstat = MMC_BLK_SUCCESS; int ret = 0, cmd_ret; u32 status = 0; int count = 0; @@ -2388,11 +2376,8 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test, /* Start ongoing data request */ if (use_areq) { - mmc_start_areq(host, &test_areq.areq, &blkstat); - if (blkstat != MMC_BLK_SUCCESS) { - ret = RESULT_FAIL; - goto out_free; - } + mmc_start_areq(host, &test_areq.areq); + mmc_wait_for_areq(host); } else { mmc_wait_for_req(host, mrq); } @@ -2425,11 +2410,7 @@ static int mmc_test_ongoing_transfer(struct mmc_test_card *test, } while (repeat_cmd && R1_CURRENT_STATE(status) != R1_STATE_TRAN); /* Wait for data request to complete */ - if (use_areq) { - mmc_start_areq(host, NULL, &blkstat); - if (blkstat != MMC_BLK_SUCCESS) - ret = RESULT_FAIL; - } else { + if (!use_areq) { mmc_wait_for_req_done(test->card->host, mrq); } diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index db1fa11d9870..cf43a2d5410d 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -42,6 +42,7 @@ static int mmc_queue_thread(void *d) { struct mmc_queue *mq = d; struct request_queue *q = mq->queue; + bool claimed_card = false; current->flags |= PF_MEMALLOC; @@ -55,7 +56,11 @@ static int mmc_queue_thread(void *d) mq->asleep = false; spin_unlock_irq(q->queue_lock); - if (req || mq->qcnt) { + if (req) { + if (!claimed_card) { + mmc_get_card(mq->card, NULL); + claimed_card = true; + } set_current_state(TASK_RUNNING); mmc_blk_issue_rq(mq, req); cond_resched(); @@ -72,6 +77,9 @@ static int mmc_queue_thread(void *d) } 
while (1); up(&mq->thread_sem); + if (claimed_card) + mmc_put_card(mq->card, NULL); + return 0; } @@ -207,7 +215,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, mq->queue->exit_rq_fn = mmc_exit_request; mq->queue->cmd_size = sizeof(struct mmc_queue_req); mq->queue->queuedata = mq; - mq->qcnt = 0; ret = blk_init_allocated_queue(mq->queue); if (ret) { blk_cleanup_queue(mq->queue); diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h index dce7cedb9d0b..67ae311b107f 100644 --- a/drivers/mmc/core/queue.h +++ b/drivers/mmc/core/queue.h @@ -67,13 +67,6 @@ struct mmc_queue { bool asleep; struct mmc_blk_data *blkdata; struct request_queue *queue; - /* * FIXME: this counter is not a very reliable way of keeping * track of how many requests that are ongoing. Switch to just * letting the block core keep track of requests and per-request * associated mmc_queue_req data. */ - int qcnt; }; extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,

From patchwork Fri Nov 10 10:01:41 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 118525
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Arnd Bergmann , Bartlomiej Zolnierkiewicz , Paolo Valente , Avri Altman , Adrian Hunter , Linus Walleij
Subject: [PATCH 10/12 v5] mmc: queue/block: pass around struct mmc_queue_req*s
Date: Fri, 10 Nov 2017 11:01:41 +0100
Message-Id: <20171110100143.12256-11-linus.walleij@linaro.org>
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
References: <20171110100143.12256-1-linus.walleij@linaro.org>

Instead of passing several pointers (mmc_queue_req, request and mmc_queue) around and reassigning them left and right, pass around a single struct mmc_queue_req * and dereference the queue and the request from the mmc_queue_req where needed. The struct mmc_queue_req is the thing that has a lifecycle after all: it is what we keep in our queue, and what the block layer helps us manage. Augment a bunch of functions to take a single argument so we can see the trees and not just a big jungle of arguments.

Signed-off-by: Linus Walleij
---
ChangeLog v1->v5:
- Rebasing on the "next" branch in the MMC tree.
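The req_to_mmc_queue_req()/mmc_queue_req_to_req() conversions this patch leans on work because the block layer allocates the driver's per-request data directly behind each struct request (sized by queue->cmd_size, which the series sets to sizeof(struct mmc_queue_req)). A minimal userspace sketch of that layout, using toy stand-in structs rather than the real kernel types (the struct members and alloc_request_with_pdu() are made up for illustration):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-ins: the block layer hands out a struct request with the
 * driver's per-request data ("pdu") placed immediately after it. */
struct request { int tag; };
struct mmc_queue_req { int drv_op; };

/* request -> per-request driver data: the pdu sits right behind it. */
static struct mmc_queue_req *req_to_mmc_queue_req(struct request *rq)
{
	return (struct mmc_queue_req *)(rq + 1);
}

/* per-request driver data -> the request it belongs to. */
static struct request *mmc_queue_req_to_req(struct mmc_queue_req *mq_rq)
{
	return (struct request *)mq_rq - 1;
}

/* Allocate one request with room for the driver data behind it,
 * mimicking what cmd_size buys us from the block layer. */
static struct request *alloc_request_with_pdu(void)
{
	return malloc(sizeof(struct request) + sizeof(struct mmc_queue_req));
}
```

Because both conversions are plain pointer arithmetic over one allocation, the round trip is lossless, which is what lets the patch pass only the mmc_queue_req* and recover the request wherever it is needed.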
--- drivers/mmc/core/block.c | 128 ++++++++++++++++++++++++----------------------- drivers/mmc/core/block.h | 5 +- drivers/mmc/core/queue.c | 2 +- 3 files changed, 69 insertions(+), 66 deletions(-) -- 2.13.6 diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index c7a57006e27f..2cd9fe5a8c9b 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -1208,9 +1208,9 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type) * processed it with all other requests and then they get issued in this * function. */ -static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req) +static void mmc_blk_issue_drv_op(struct mmc_queue_req *mq_rq) { - struct mmc_queue_req *mq_rq; + struct mmc_queue *mq = mq_rq->mq; struct mmc_card *card = mq->card; struct mmc_blk_data *md = mq->blkdata; struct mmc_blk_ioc_data **idata; @@ -1220,7 +1220,6 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req) int ret; int i; - mq_rq = req_to_mmc_queue_req(req); rpmb_ioctl = (mq_rq->drv_op == MMC_DRV_OP_IOCTL_RPMB); switch (mq_rq->drv_op) { @@ -1264,12 +1263,14 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req) break; } mq_rq->drv_op_result = ret; - blk_end_request_all(req, ret ? BLK_STS_IOERR : BLK_STS_OK); + blk_end_request_all(mmc_queue_req_to_req(mq_rq), + ret ?
BLK_STS_IOERR : BLK_STS_OK); } -static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req) +static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq) { - struct mmc_blk_data *md = mq->blkdata; + struct request *req = mmc_queue_req_to_req(mq_rq); + struct mmc_blk_data *md = mq_rq->mq->blkdata; struct mmc_card *card = md->queue.card; unsigned int from, nr, arg; int err = 0, type = MMC_BLK_DISCARD; @@ -1310,10 +1311,10 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req) blk_end_request(req, status, blk_rq_bytes(req)); } -static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq, - struct request *req) +static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq) { - struct mmc_blk_data *md = mq->blkdata; + struct request *req = mmc_queue_req_to_req(mq_rq); + struct mmc_blk_data *md = mq_rq->mq->blkdata; struct mmc_card *card = md->queue.card; unsigned int from, nr, arg; int err = 0, type = MMC_BLK_SECDISCARD; @@ -1380,14 +1381,15 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq, blk_end_request(req, status, blk_rq_bytes(req)); } -static void mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req) +static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq) { - struct mmc_blk_data *md = mq->blkdata; + struct mmc_blk_data *md = mq_rq->mq->blkdata; struct mmc_card *card = md->queue.card; int ret = 0; ret = mmc_flush_cache(card); - blk_end_request_all(req, ret ? BLK_STS_IOERR : BLK_STS_OK); + blk_end_request_all(mmc_queue_req_to_req(mq_rq), + ret ? 
BLK_STS_IOERR : BLK_STS_OK); } /* @@ -1698,18 +1700,18 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq, *do_data_tag_p = do_data_tag; } -static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, - struct mmc_card *card, - bool disable_multi, - struct mmc_queue *mq) +static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mq_rq, + bool disable_multi) { u32 readcmd, writecmd; - struct mmc_blk_request *brq = &mqrq->brq; - struct request *req = mmc_queue_req_to_req(mqrq); + struct mmc_queue *mq = mq_rq->mq; + struct mmc_card *card = mq->card; + struct mmc_blk_request *brq = &mq_rq->brq; + struct request *req = mmc_queue_req_to_req(mq_rq); struct mmc_blk_data *md = mq->blkdata; bool do_rel_wr, do_data_tag; - mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag); + mmc_blk_data_prep(mq, mq_rq, disable_multi, &do_rel_wr, &do_data_tag); brq->mrq.cmd = &brq->cmd; brq->mrq.areq = NULL; @@ -1764,9 +1766,9 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq, brq->mrq.sbc = &brq->sbc; } - mqrq->areq.err_check = mmc_blk_err_check; - mqrq->areq.host = card->host; - INIT_WORK(&mqrq->areq.finalization_work, mmc_finalize_areq); + mq_rq->areq.err_check = mmc_blk_err_check; + mq_rq->areq.host = card->host; + INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq); } static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card, @@ -1798,10 +1800,12 @@ static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card, return req_pending; } -static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card, - struct request *req, - struct mmc_queue_req *mqrq) +static void mmc_blk_rw_cmd_abort(struct mmc_queue_req *mq_rq) { + struct mmc_queue *mq = mq_rq->mq; + struct mmc_card *card = mq->card; + struct request *req = mmc_queue_req_to_req(mq_rq); + if (mmc_card_removed(card)) req->rq_flags |= RQF_QUIET; while (blk_end_request(req, BLK_STS_IOERR, blk_rq_cur_bytes(req))); @@ -1809,16 
+1813,15 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card, /** * mmc_blk_rw_try_restart() - tries to restart the current async request - * @mq: the queue with the card and host to restart - * @mqrq: the mmc_queue_request containing the areq to be restarted + * @mq_rq: the mmc_queue_request containing the areq to be restarted */ -static void mmc_blk_rw_try_restart(struct mmc_queue *mq, - struct mmc_queue_req *mqrq) +static void mmc_blk_rw_try_restart(struct mmc_queue_req *mq_rq) { - struct mmc_async_req *areq = &mqrq->areq; + struct mmc_async_req *areq = &mq_rq->areq; + struct mmc_queue *mq = mq_rq->mq; /* Proceed and try to restart the current async request */ - mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq); + mmc_blk_rw_rq_prep(mq_rq, 0); areq->disable_multi = false; areq->retry = 0; mmc_restart_areq(mq->card->host, areq); @@ -1867,7 +1870,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat pr_err("%s BUG rq_tot %d d_xfer %d\n", __func__, blk_rq_bytes(old_req), brq->data.bytes_xfered); - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); return; } break; @@ -1875,12 +1878,12 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending); if (mmc_blk_reset(md, card->host, type)) { if (req_pending) - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; } if (!req_pending) { - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; } break; @@ -1892,8 +1895,8 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat case MMC_BLK_ABORT: if (!mmc_blk_reset(md, card->host, type)) break; - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; 
case MMC_BLK_DATA_ERR: { int err; @@ -1901,8 +1904,8 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat if (!err) break; if (err == -ENODEV) { - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; } /* Fall through */ @@ -1923,19 +1926,19 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat req_pending = blk_end_request(old_req, BLK_STS_IOERR, brq->data.blksz); if (!req_pending) { - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; } break; case MMC_BLK_NOMEDIUM: - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; default: pr_err("%s: Unhandled return value (%d)", old_req->rq_disk->disk_name, status); - mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); - mmc_blk_rw_try_restart(mq, mq_rq); + mmc_blk_rw_cmd_abort(mq_rq); + mmc_blk_rw_try_restart(mq_rq); return; } @@ -1944,25 +1947,25 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status stat * In case of a incomplete request * prepare it again and resend. */ - mmc_blk_rw_rq_prep(mq_rq, card, - areq->disable_multi, mq); + mmc_blk_rw_rq_prep(mq_rq, areq->disable_multi); mmc_start_areq(card->host, areq); mq_rq->brq.retune_retry_done = retune_retry_done; } } -static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) +static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq) { + struct request *req = mmc_queue_req_to_req(mq_rq); + struct mmc_queue *mq = mq_rq->mq; struct mmc_card *card = mq->card; - struct mmc_queue_req *mqrq_cur = req_to_mmc_queue_req(new_req); - struct mmc_async_req *areq = &mqrq_cur->areq; + struct mmc_async_req *areq = &mq_rq->areq; /* * If the card was removed, just cancel everything and return. 
*/ if (mmc_card_removed(card)) { - new_req->rq_flags |= RQF_QUIET; - blk_end_request_all(new_req, BLK_STS_IOERR); + req->rq_flags |= RQF_QUIET; + blk_end_request_all(req, BLK_STS_IOERR); return; } @@ -1971,24 +1974,25 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) * multiple read or write is allowed */ if (mmc_large_sector(card) && - !IS_ALIGNED(blk_rq_sectors(new_req), 8)) { + !IS_ALIGNED(blk_rq_sectors(req), 8)) { pr_err("%s: Transfer size is not 4KB sector size aligned\n", - new_req->rq_disk->disk_name); - mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur); + req->rq_disk->disk_name); + mmc_blk_rw_cmd_abort(mq_rq); return; } - mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq); + mmc_blk_rw_rq_prep(mq_rq, 0); areq->disable_multi = false; areq->retry = 0; areq->report_done_status = mmc_blk_rw_done; mmc_start_areq(card->host, areq); } -void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) +void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq) { int ret; - struct mmc_blk_data *md = mq->blkdata; + struct request *req = mmc_queue_req_to_req(mq_rq); + struct mmc_blk_data *md = mq_rq->mq->blkdata; struct mmc_card *card = md->queue.card; if (!req) { @@ -2010,7 +2014,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) * ioctl()s */ mmc_wait_for_areq(card->host); - mmc_blk_issue_drv_op(mq, req); + mmc_blk_issue_drv_op(mq_rq); break; case REQ_OP_DISCARD: /* @@ -2018,7 +2022,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) * discard. */ mmc_wait_for_areq(card->host); - mmc_blk_issue_discard_rq(mq, req); + mmc_blk_issue_discard_rq(mq_rq); break; case REQ_OP_SECURE_ERASE: /* @@ -2026,7 +2030,7 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) * secure erase. 
*/ mmc_wait_for_areq(card->host); - mmc_blk_issue_secdiscard_rq(mq, req); + mmc_blk_issue_secdiscard_rq(mq_rq); break; case REQ_OP_FLUSH: /* @@ -2034,11 +2038,11 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) * flush. */ mmc_wait_for_areq(card->host); - mmc_blk_issue_flush(mq, req); + mmc_blk_issue_flush(mq_rq); break; default: /* Normal request, just issue it */ - mmc_blk_issue_rw_rq(mq, req); + mmc_blk_issue_rw_rq(mq_rq); break; } } diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h index 860ca7c8df86..bbc1c8029b3b 100644 --- a/drivers/mmc/core/block.h +++ b/drivers/mmc/core/block.h @@ -1,9 +1,8 @@ #ifndef _MMC_CORE_BLOCK_H #define _MMC_CORE_BLOCK_H -struct mmc_queue; -struct request; +struct mmc_queue_req; -void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req); +void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq); #endif diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index cf43a2d5410d..5511e323db31 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -62,7 +62,7 @@ static int mmc_queue_thread(void *d) claimed_card = true; } set_current_state(TASK_RUNNING); - mmc_blk_issue_rq(mq, req); + mmc_blk_issue_rq(req_to_mmc_queue_req(req)); cond_resched(); } else { mq->asleep = true;

From patchwork Fri Nov 10 10:01:42 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 118526
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 11/12 v5] mmc: block: issue requests in massive parallel
Date: Fri, 10 Nov 2017 11:01:42 +0100
Message-Id: <20171110100143.12256-12-linus.walleij@linaro.org>
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
List-ID: linux-mmc@vger.kernel.org

This makes a crucial change to the issuing mechanism for the MMC requests:

Before commit "mmc: core: move the asynchronous post-processing", some parallelism on the read/write requests was achieved by speculatively postprocessing a request and re-preprocessing and re-issuing it if something went wrong, which we would only discover later when checking for an error. This is kind of ugly. Instead we need a mechanism like this:

We issue requests, and when they come back from the hardware, we know if they finished successfully or not. If the request was successful, we complete the asynchronous request and let a new request immediately start on the hardware. If, and only if, it returned an error from the hardware do we go down the error path.
This is achieved by splitting the work path from the hardware in two: a successful path ending up calling down to mmc_blk_rw_done() and completing quickly, and an error path calling down to mmc_blk_rw_done_error().

This has a profound effect: we reintroduce the parallelism on the successful path, as mmc_post_req() can now be called while the next request is in transit (just like prior to commit "mmc: core: move the asynchronous post-processing") and blk_end_request() is called while the next request is already on the hardware.

The latter means a new request may be issued again, so that we actually may have three requests in transit at the same time: one on the hardware, one being prepared (such as DMA flushing) and one being prepared for issuing next by the block layer. This shows up when we transition to multiqueue, where this can be exploited.

Signed-off-by: Linus Walleij

---
ChangeLog v4->v5:
- Fixes on the error path: when a request reports an error back, keep the areq on the host as it is not yet finished (no assigning NULL to host->areq), and do not postprocess the request or complete it. This will happen eventually when the request succeeds.
- When restarting the command, use mmc_restart_areq() as could be expected.
- Augment the .report_done_status() callback to return a bool indicating whether the areq is now finished or not, to handle the error case where we eventually give up on the request and have returned an error to the block layer.
- Make sure to post-process the request on the error path and pre-process it again when resending an asynchronous request. This satisfies the host's semantic expectation that every request will go through a pre->req->post sequence even if there are errors.
- To assure the ordering of pre/post-processing, we need to post-process any prepared request with -EINVAL if there is an error, then re-preprocess it again after error recovery. To this end a helper pointer in host->areq_pending is added so the error path can act on this.
- Rebasing on the "next" branch in the MMC tree.
---
 drivers/mmc/core/block.c | 98 ++++++++++++++++++++++++++++++++----------------
 drivers/mmc/core/core.c  | 58 +++++++++++++++++++++++-----
 include/linux/mmc/host.h |  4 +-
 3 files changed, 117 insertions(+), 43 deletions(-)

-- 
2.13.6

--
To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 2cd9fe5a8c9b..e3ae7241b2eb 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1827,7 +1827,8 @@ static void mmc_blk_rw_try_restart(struct mmc_queue_req *mq_rq)
 	mmc_restart_areq(mq->card->host, areq);
 }
 
-static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_status status)
+static bool mmc_blk_rw_done_error(struct mmc_async_req *areq,
+				  enum mmc_blk_status status)
 {
 	struct mmc_queue *mq;
 	struct mmc_blk_data *md;
@@ -1835,7 +1836,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_stat
 	struct mmc_host *host;
 	struct mmc_queue_req *mq_rq;
 	struct mmc_blk_request *brq;
-	struct request *old_req;
+	struct request *req;
 	bool req_pending = true;
 	int type, retune_retry_done = 0;
 
@@ -1849,42 +1850,27 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_stat
 	card = mq->card;
 	host = card->host;
 	brq = &mq_rq->brq;
-	old_req = mmc_queue_req_to_req(mq_rq);
-	type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
+	req = mmc_queue_req_to_req(mq_rq);
+	type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
 
 	switch (status) {
-	case MMC_BLK_SUCCESS:
 	case MMC_BLK_PARTIAL:
-		/*
-		 * A block was successfully transferred.
-		 */
+		/* This should trigger a retransmit */
 		mmc_blk_reset_success(md, type);
-		req_pending = blk_end_request(old_req, BLK_STS_OK,
+		req_pending = blk_end_request(req, BLK_STS_OK,
					      brq->data.bytes_xfered);
-		/*
-		 * If the blk_end_request function returns non-zero even
-		 * though all data has been transferred and no errors
-		 * were returned by the host controller, it's a bug.
-		 */
-		if (status == MMC_BLK_SUCCESS && req_pending) {
-			pr_err("%s BUG rq_tot %d d_xfer %d\n",
-			       __func__, blk_rq_bytes(old_req),
-			       brq->data.bytes_xfered);
-			mmc_blk_rw_cmd_abort(mq_rq);
-			return;
-		}
 		break;
 	case MMC_BLK_CMD_ERR:
-		req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
+		req_pending = mmc_blk_rw_cmd_err(md, card, brq, req, req_pending);
 		if (mmc_blk_reset(md, card->host, type)) {
 			if (req_pending)
 				mmc_blk_rw_cmd_abort(mq_rq);
 			mmc_blk_rw_try_restart(mq_rq);
-			return;
+			return false;
 		}
 		if (!req_pending) {
 			mmc_blk_rw_try_restart(mq_rq);
-			return;
+			return false;
 		}
 		break;
 	case MMC_BLK_RETRY:
@@ -1897,7 +1883,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_stat
 			break;
 		mmc_blk_rw_cmd_abort(mq_rq);
 		mmc_blk_rw_try_restart(mq_rq);
-		return;
+		return false;
 	case MMC_BLK_DATA_ERR: {
 		int err;
 
 		err = mmc_blk_reset(md, card->host, type);
@@ -1906,7 +1892,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_stat
 		if (err == -ENODEV) {
 			mmc_blk_rw_cmd_abort(mq_rq);
 			mmc_blk_rw_try_restart(mq_rq);
-			return;
+			return false;
 		}
 		/* Fall through */
 	}
@@ -1914,7 +1900,7 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_stat
 		if (brq->data.blocks > 1) {
 			/* Redo read one sector at a time */
 			pr_warn("%s: retrying using single block read\n",
-				old_req->rq_disk->disk_name);
+				req->rq_disk->disk_name);
 			areq->disable_multi = true;
 			break;
 		}
@@ -1923,23 +1909,23 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_stat
 		 * time, so we only reach here after trying to
 		 * read a single sector.
 		 */
-		req_pending = blk_end_request(old_req, BLK_STS_IOERR,
+		req_pending = blk_end_request(req, BLK_STS_IOERR,
					      brq->data.blksz);
 		if (!req_pending) {
 			mmc_blk_rw_try_restart(mq_rq);
-			return;
+			return false;
 		}
 		break;
 	case MMC_BLK_NOMEDIUM:
 		mmc_blk_rw_cmd_abort(mq_rq);
 		mmc_blk_rw_try_restart(mq_rq);
-		return;
+		return false;
 	default:
 		pr_err("%s: Unhandled return value (%d)",
-		       old_req->rq_disk->disk_name, status);
+		       req->rq_disk->disk_name, status);
 		mmc_blk_rw_cmd_abort(mq_rq);
 		mmc_blk_rw_try_restart(mq_rq);
-		return;
+		return false;
 	}
 
 	if (req_pending) {
@@ -1948,9 +1934,55 @@ static void mmc_blk_rw_done(struct mmc_async_req *areq, enum mmc_blk_stat
 		 * prepare it again and resend.
 		 */
 		mmc_blk_rw_rq_prep(mq_rq, areq->disable_multi);
-		mmc_start_areq(card->host, areq);
 		mq_rq->brq.retune_retry_done = retune_retry_done;
+		mmc_restart_areq(card->host, areq);
+		return false;
+	}
+
+	return true;
+}
+
+static bool mmc_blk_rw_done(struct mmc_async_req *areq,
+			    enum mmc_blk_status status)
+{
+	struct mmc_queue_req *mq_rq;
+	struct request *req;
+	struct mmc_blk_request *brq;
+	struct mmc_queue *mq;
+	struct mmc_blk_data *md;
+	bool req_pending;
+	int type;
+
+	/*
+	 * Anything other than success or partial transfers are errors.
+	 */
+	if (status != MMC_BLK_SUCCESS) {
+		return mmc_blk_rw_done_error(areq, status);
+	}
+
+	/* The quick path if the request was successful */
+	mq_rq = container_of(areq, struct mmc_queue_req, areq);
+	brq = &mq_rq->brq;
+	mq = mq_rq->mq;
+	md = mq->blkdata;
+	req = mmc_queue_req_to_req(mq_rq);
+	type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
+
+	mmc_blk_reset_success(md, type);
+	req_pending = blk_end_request(req, BLK_STS_OK,
+				      brq->data.bytes_xfered);
+	/*
+	 * If the blk_end_request function returns non-zero even
+	 * though all data has been transferred and no errors
+	 * were returned by the host controller, it's a bug.
+	 */
+	if (req_pending) {
+		pr_err("%s BUG rq_tot %d d_xfer %d\n",
+		       __func__, blk_rq_bytes(req),
+		       brq->data.bytes_xfered);
+		mmc_blk_rw_cmd_abort(mq_rq);
 	}
+	return true;
 }
 
 static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq)
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 42795fdfb730..95e8e9206f04 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -735,15 +735,52 @@ void mmc_finalize_areq(struct work_struct *work)
 		mmc_start_bkops(host->card, true);
 	}
 
-	/* Successfully postprocess the old request at this point */
-	mmc_post_req(host, areq->mrq, 0);
-
-	/* Call back with status, this will trigger retry etc if needed */
-	if (areq->report_done_status)
-		areq->report_done_status(areq, status);
-
-	/* This opens the gate for the next request to start on the host */
-	complete(&areq->complete);
+	/*
+	 * Here we postprocess the request differently depending on if
+	 * we go on the success path or error path. The success path will
+	 * immediately let new requests hit the host, whereas the error
+	 * path will hold off new requests until we have retried and
+	 * succeeded or failed the current asynchronous request.
+	 */
+	if (status == MMC_BLK_SUCCESS) {
+		/*
+		 * This immediately opens the gate for the next request
+		 * to start on the host while we perform post-processing
+		 * and report back to the block layer.
+		 */
+		host->areq = NULL;
+		complete(&areq->complete);
+		mmc_post_req(host, areq->mrq, 0);
+		if (areq->report_done_status)
+			areq->report_done_status(areq, MMC_BLK_SUCCESS);
+	} else {
+		/*
+		 * Post-process this request. Then, if
+		 * another request was already prepared, back that out
+		 * so we can handle the errors without anything prepared
+		 * on the host.
+		 */
+		if (host->areq_pending)
+			mmc_post_req(host, host->areq_pending->mrq, -EINVAL);
+		/*
+		 * Call back with error status, this will trigger retry
+		 * etc if needed
+		 */
+		if (areq->report_done_status) {
+			if (areq->report_done_status(areq, status)) {
+				/*
+				 * This happens when we finally give up after
+				 * a few retries or on unrecoverable errors.
+				 */
+				mmc_post_req(host, areq->mrq, 0);
+				host->areq = NULL;
+				/* Re-prepare the next request */
+				if (host->areq_pending)
+					mmc_pre_req(host, host->areq_pending->mrq);
+				complete(&areq->complete);
+			}
+		}
+	}
 }
 EXPORT_SYMBOL(mmc_finalize_areq);
 
@@ -765,6 +802,7 @@ int mmc_restart_areq(struct mmc_host *host,
 		     struct mmc_async_req *areq)
 {
 	areq->mrq->areq = areq;
+	mmc_pre_req(host, areq->mrq);
 	return __mmc_start_data_req(host, areq->mrq);
 }
 EXPORT_SYMBOL(mmc_restart_areq);
 
@@ -797,6 +835,7 @@ int mmc_start_areq(struct mmc_host *host,
 
 	/* Prepare a new request */
 	mmc_pre_req(host, areq->mrq);
+	host->areq_pending = areq;
 
 	/* Finalize previous request, if there is one */
 	if (previous) {
@@ -805,6 +844,7 @@ int mmc_start_areq(struct mmc_host *host,
 	}
 
 	/* Fine so far, start the new request! */
+	host->areq_pending = NULL;
 	init_completion(&areq->complete);
 	areq->mrq->areq = areq;
 	ret = __mmc_start_data_req(host, areq->mrq);
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index f1c362e0765c..985bc479c8a8 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -220,8 +220,9 @@ struct mmc_async_req {
 	enum mmc_blk_status (*err_check)(struct mmc_card *, struct mmc_async_req *);
 	/*
 	 * Report finalization status from the core to e.g. the block layer.
+	 * Returns true if the request is now finished.
 	 */
-	void (*report_done_status)(struct mmc_async_req *, enum mmc_blk_status);
+	bool (*report_done_status)(struct mmc_async_req *, enum mmc_blk_status);
 	struct work_struct finalization_work;
 	struct completion complete;
 	struct mmc_host *host;
@@ -420,6 +421,7 @@ struct mmc_host {
 	struct dentry *debugfs_root;
 
 	struct mmc_async_req *areq;	/* active async req */
+	struct mmc_async_req *areq_pending; /* prepared but not issued async req */
 
 	/* finalization workqueue, handles finalizing requests */
 	struct workqueue_struct *req_done_wq;

From patchwork Fri Nov 10 10:01:43 2017
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente, Avri Altman, Adrian Hunter, Linus Walleij
Subject: [PATCH 12/12 v5] mmc: switch MMC/SD to use blk-mq multiqueueing v5
Date: Fri, 10 Nov 2017 11:01:43 +0100
Message-Id: <20171110100143.12256-13-linus.walleij@linaro.org>
In-Reply-To: <20171110100143.12256-1-linus.walleij@linaro.org>
List-ID: linux-mmc@vger.kernel.org

This switches the MMC/SD stack to use the multiqueue block layer interface.
We kill off the kthread that was just calling blk_fetch_request() and let blk-mq drive all traffic; nice, that is how it should work.

Due to having switched the submission mechanics around so that the completion of requests is now triggered from the host callbacks, we manage to keep the same performance for linear reads/writes as we have with the old block layer.

The open questions from earlier patch series have been addressed:
- mmc_[get|put]_card() is now issued across requests from .queue_rq() to .complete() using Adrian's nifty context lock. This means that the block layer does not compete with itself on getting access to the host, and we can let other users of the host come in. (For SDIO and mixed-mode cards.)
- Partial reads are handled by open coding calls to blk_update_request() as advised by Christoph.

Signed-off-by: Linus Walleij

---
ChangeLog v4->v5:
- Rebase on the other changes including improved error handling.
- Use quiesce and unquiesce on the queue in the suspend/resume cycle.
---
 drivers/mmc/core/block.c |  92 ++++++++++--------
 drivers/mmc/core/queue.c | 237 ++++++++++++++++++++---------------------
 drivers/mmc/core/queue.h |   8 +-
 3 files changed, 156 insertions(+), 181 deletions(-)

-- 
2.13.6

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index e3ae7241b2eb..9fa3bfa3b4f8 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -93,7 +94,6 @@ static DEFINE_IDA(mmc_rpmb_ida);
 
 /*
  * There is one mmc_blk_data per slot.
  */
 struct mmc_blk_data {
-	spinlock_t	lock;
 	struct device	*parent;
 	struct gendisk	*disk;
 	struct mmc_queue queue;
@@ -1204,6 +1204,23 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
 }
 
 /*
+ * This reports status back to the block layer for a finished request.
+ */
+static void mmc_blk_complete(struct mmc_queue_req *mq_rq,
+			     blk_status_t status)
+{
+	struct request *req = mmc_queue_req_to_req(mq_rq);
+
+	/*
+	 * We are done with I/O, so this call will invoke .complete() and
+	 * release the host lock.
+	 */
+	blk_mq_complete_request(req);
+	/* Then we report the request back to the block layer */
+	blk_mq_end_request(req, status);
+}
+
+/*
  * The non-block commands come back from the block layer after it queued it and
  * processed it with all other requests and then they get issued in this
  * function.
@@ -1262,9 +1279,9 @@ static void mmc_blk_issue_drv_op(struct mmc_queue_req *mq_rq)
 		ret = -EINVAL;
 		break;
 	}
+	mq_rq->drv_op_result = ret;
-	blk_end_request_all(mmc_queue_req_to_req(mq_rq),
-			    ret ? BLK_STS_IOERR : BLK_STS_OK);
+	mmc_blk_complete(mq_rq, ret ? BLK_STS_IOERR : BLK_STS_OK);
 }
 
 static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq)
@@ -1308,7 +1325,7 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq)
 	else
 		mmc_blk_reset_success(md, type);
 fail:
-	blk_end_request(req, status, blk_rq_bytes(req));
+	mmc_blk_complete(mq_rq, status);
 }
 
 static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq)
@@ -1378,7 +1395,7 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq)
 	if (!err)
 		mmc_blk_reset_success(md, type);
 out:
-	blk_end_request(req, status, blk_rq_bytes(req));
+	mmc_blk_complete(mq_rq, status);
 }
 
 static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq)
@@ -1388,8 +1405,13 @@ static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq)
 	int ret = 0;
 
 	ret = mmc_flush_cache(card);
-	blk_end_request_all(mmc_queue_req_to_req(mq_rq),
-			    ret ? BLK_STS_IOERR : BLK_STS_OK);
+	/*
+	 * NOTE: this used to call blk_end_request_all() for both
+	 * cases in the old block layer to flush all queued
+	 * transactions. I am not sure it was even correct to
+	 * do that for the success case.
+	 */
+	mmc_blk_complete(mq_rq, ret ? BLK_STS_IOERR : BLK_STS_OK);
 }
 
 /*
@@ -1768,7 +1790,6 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mq_rq,
 
 	mq_rq->areq.err_check = mmc_blk_err_check;
 	mq_rq->areq.host = card->host;
-	INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq);
 }
 
 static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
@@ -1792,10 +1813,13 @@ static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
 		err = mmc_sd_num_wr_blocks(card, &blocks);
 		if (err)
 			req_pending = old_req_pending;
-		else
-			req_pending = blk_end_request(req, BLK_STS_OK, blocks << 9);
+		else {
+			req_pending = blk_update_request(req, BLK_STS_OK,
+							 blocks << 9);
+		}
 	} else {
-		req_pending = blk_end_request(req, BLK_STS_OK, brq->data.bytes_xfered);
+		req_pending = blk_update_request(req, BLK_STS_OK,
+						 brq->data.bytes_xfered);
 	}
 	return req_pending;
 }
@@ -1808,7 +1832,7 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue_req *mq_rq)
 	if (mmc_card_removed(card))
 		req->rq_flags |= RQF_QUIET;
-	while (blk_end_request(req, BLK_STS_IOERR, blk_rq_cur_bytes(req)));
+	mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 }
 
 /**
@@ -1857,8 +1881,8 @@ static bool mmc_blk_rw_done_error(struct mmc_async_req *areq,
 	case MMC_BLK_PARTIAL:
 		/* This should trigger a retransmit */
 		mmc_blk_reset_success(md, type);
-		req_pending = blk_end_request(req, BLK_STS_OK,
-					      brq->data.bytes_xfered);
+		req_pending = blk_update_request(req, BLK_STS_OK,
+						 brq->data.bytes_xfered);
 		break;
 	case MMC_BLK_CMD_ERR:
 		req_pending = mmc_blk_rw_cmd_err(md, card, brq, req, req_pending);
@@ -1909,11 +1933,13 @@ static bool mmc_blk_rw_done_error(struct mmc_async_req *areq,
 		 * time, so we only reach here after trying to
 		 * read a single sector.
 		 */
-		req_pending = blk_end_request(req, BLK_STS_IOERR,
-					      brq->data.blksz);
+		req_pending = blk_update_request(req, BLK_STS_IOERR,
+						 brq->data.blksz);
 		if (!req_pending) {
 			mmc_blk_rw_try_restart(mq_rq);
 			return false;
+		} else {
+			mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 		}
 		break;
 	case MMC_BLK_NOMEDIUM:
@@ -1947,10 +1973,8 @@ static bool mmc_blk_rw_done(struct mmc_async_req *areq,
 {
 	struct mmc_queue_req *mq_rq;
 	struct request *req;
-	struct mmc_blk_request *brq;
 	struct mmc_queue *mq;
 	struct mmc_blk_data *md;
-	bool req_pending;
 	int type;
 
 	/*
@@ -1962,26 +1986,13 @@ static bool mmc_blk_rw_done(struct mmc_async_req *areq,
 
 	/* The quick path if the request was successful */
 	mq_rq = container_of(areq, struct mmc_queue_req, areq);
-	brq = &mq_rq->brq;
 	mq = mq_rq->mq;
 	md = mq->blkdata;
 	req = mmc_queue_req_to_req(mq_rq);
 	type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
 
 	mmc_blk_reset_success(md, type);
-	req_pending = blk_end_request(req, BLK_STS_OK,
-				      brq->data.bytes_xfered);
-	/*
-	 * If the blk_end_request function returns non-zero even
-	 * though all data has been transferred and no errors
-	 * were returned by the host controller, it's a bug.
-	 */
-	if (req_pending) {
-		pr_err("%s BUG rq_tot %d d_xfer %d\n",
-		       __func__, blk_rq_bytes(req),
-		       brq->data.bytes_xfered);
-		mmc_blk_rw_cmd_abort(mq_rq);
-	}
+	mmc_blk_complete(mq_rq, BLK_STS_OK);
 	return true;
 }
 
@@ -1997,7 +2008,12 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq)
 	 */
 	if (mmc_card_removed(card)) {
 		req->rq_flags |= RQF_QUIET;
-		blk_end_request_all(req, BLK_STS_IOERR);
+		/*
+		 * NOTE: this used to call blk_end_request_all()
+		 * to flush out all queued transactions to the now
+		 * non-present card.
+		 */
+		mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 		return;
 	}
 
@@ -2024,8 +2040,9 @@ void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq)
 {
 	int ret;
 	struct request *req = mmc_queue_req_to_req(mq_rq);
-	struct mmc_blk_data *md = mq_rq->mq->blkdata;
-	struct mmc_card *card = md->queue.card;
+	struct mmc_queue *mq = mq_rq->mq;
+	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_card *card = mq->card;
 
 	if (!req) {
 		pr_err("%s: tried to issue NULL request\n", __func__);
@@ -2034,7 +2051,7 @@ void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq)
 	ret = mmc_blk_part_switch(card, md->part_type);
 	if (ret) {
-		blk_end_request_all(req, BLK_STS_IOERR);
+		mmc_blk_complete(mq_rq, BLK_STS_IOERR);
 		return;
 	}
 
@@ -2131,12 +2148,11 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 		goto err_kfree;
 	}
 
-	spin_lock_init(&md->lock);
 	INIT_LIST_HEAD(&md->part);
 	INIT_LIST_HEAD(&md->rpmbs);
 	md->usage = 1;
 
-	ret = mmc_init_queue(&md->queue, card, &md->lock, subname);
+	ret = mmc_init_queue(&md->queue, card, subname);
 	if (ret)
 		goto err_putdisk;
 
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 5511e323db31..2301573ba2e0 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -38,74 +39,6 @@ static int mmc_prep_request(struct request_queue *q, struct request *req)
 	return BLKPREP_OK;
 }
 
-static int mmc_queue_thread(void *d)
-{
-	struct mmc_queue *mq = d;
-	struct request_queue *q = mq->queue;
-	bool claimed_card = false;
-
-	current->flags |= PF_MEMALLOC;
-
-	down(&mq->thread_sem);
-	do {
-		struct request *req;
-
-		spin_lock_irq(q->queue_lock);
-		set_current_state(TASK_INTERRUPTIBLE);
-		req = blk_fetch_request(q);
-		mq->asleep = false;
-		spin_unlock_irq(q->queue_lock);
-
-		if (req) {
-			if (!claimed_card) {
-				mmc_get_card(mq->card, NULL);
-				claimed_card = true;
-			}
-			set_current_state(TASK_RUNNING);
-			mmc_blk_issue_rq(req_to_mmc_queue_req(req));
-			cond_resched();
-		} else {
-			mq->asleep = true;
-			if (kthread_should_stop()) {
-				set_current_state(TASK_RUNNING);
-				break;
-			}
-			up(&mq->thread_sem);
-			schedule();
-			down(&mq->thread_sem);
-		}
-	} while (1);
-	up(&mq->thread_sem);
-
-	if (claimed_card)
-		mmc_put_card(mq->card, NULL);
-
-	return 0;
-}
-
-/*
- * Generic MMC request handler. This is called for any queue on a
- * particular host. When the host is not busy, we look for a request
- * on any queue on this host, and attempt to issue it. This may
- * not be the queue we were asked to process.
- */
-static void mmc_request_fn(struct request_queue *q)
-{
-	struct mmc_queue *mq = q->queuedata;
-	struct request *req;
-
-	if (!mq) {
-		while ((req = blk_fetch_request(q)) != NULL) {
-			req->rq_flags |= RQF_QUIET;
-			__blk_end_request_all(req, BLK_STS_IOERR);
-		}
-		return;
-	}
-
-	if (mq->asleep)
-		wake_up_process(mq->thread);
-}
-
 static struct scatterlist *mmc_alloc_sg(int sg_len, gfp_t gfp)
 {
 	struct scatterlist *sg;
@@ -136,127 +69,158 @@ static void mmc_queue_setup_discard(struct request_queue *q,
 		queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
 }
 
+static blk_status_t mmc_queue_request(struct blk_mq_hw_ctx *hctx,
+				      const struct blk_mq_queue_data *bd)
+{
+	struct mmc_queue_req *mq_rq = blk_mq_rq_to_pdu(bd->rq);
+	struct mmc_queue *mq = mq_rq->mq;
+
+	/* Claim card for block queue context */
+	mmc_get_card(mq->card, &mq->blkctx);
+	mmc_blk_issue_rq(mq_rq);
+
+	return BLK_STS_OK;
+}
+
+static void mmc_complete_request(struct request *req)
+{
+	struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
+	struct mmc_queue *mq = mq_rq->mq;
+
+	/* Release card for block queue context */
+	mmc_put_card(mq->card, &mq->blkctx);
+}
+
 /**
  * mmc_init_request() - initialize the MMC-specific per-request data
- * @q: the request queue
+ * @set: tag set for the request
  * @req: the request
- * @gfp: memory allocation policy
+ * @hctx_idx: hardware context index
+ * @numa_node: NUMA node
  */
-static int mmc_init_request(struct request_queue *q, struct request *req,
-			    gfp_t gfp)
+static int mmc_init_request(struct blk_mq_tag_set *set, struct request *req,
+			    unsigned int hctx_idx, unsigned int numa_node)
 {
 	struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
-	struct mmc_queue *mq = q->queuedata;
+	struct mmc_queue *mq = set->driver_data;
 	struct mmc_card *card = mq->card;
 	struct mmc_host *host = card->host;
 
-	mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp);
+	mq_rq->sg = mmc_alloc_sg(host->max_segs, GFP_KERNEL);
 	if (!mq_rq->sg)
 		return -ENOMEM;
 	mq_rq->mq = mq;
+	INIT_WORK(&mq_rq->areq.finalization_work, mmc_finalize_areq);
 
 	return 0;
 }
 
-static void mmc_exit_request(struct request_queue *q, struct request *req)
+/**
+ * mmc_exit_request() - tear down the MMC-specific per-request data
+ * @set: tag set for the request
+ * @req: the request
+ * @hctx_idx: hardware context index
+ */
+static void mmc_exit_request(struct blk_mq_tag_set *set, struct request *req,
+			     unsigned int hctx_idx)
{
 	struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
 
+	flush_work(&mq_rq->areq.finalization_work);
 	kfree(mq_rq->sg);
 	mq_rq->sg = NULL;
 	mq_rq->mq = NULL;
 }
 
-static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
+static void mmc_setup_queue(struct mmc_queue *mq)
 {
+	struct request_queue *q = mq->queue;
+	struct mmc_card *card = mq->card;
 	struct mmc_host *host = card->host;
 	u64 limit = BLK_BOUNCE_HIGH;
 
 	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
 		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
-	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
-	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue);
+	blk_queue_max_segments(q, host->max_segs);
+	blk_queue_prep_rq(q, mmc_prep_request);
+	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
+	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
 	if (mmc_can_erase(card))
-		mmc_queue_setup_discard(mq->queue, card);
-
-	blk_queue_bounce_limit(mq->queue, limit);
-	blk_queue_max_hw_sectors(mq->queue,
+		mmc_queue_setup_discard(q, card);
+	blk_queue_bounce_limit(q, limit);
+	blk_queue_max_hw_sectors(q,
		min(host->max_blk_count, host->max_req_size / 512));
-	blk_queue_max_segments(mq->queue, host->max_segs);
-	blk_queue_max_segment_size(mq->queue, host->max_seg_size);
-
-	/* Initialize thread_sem even if it is not used */
-	sema_init(&mq->thread_sem, 1);
+	blk_queue_max_segments(q, host->max_segs);
+	blk_queue_max_segment_size(q, host->max_seg_size);
 }
 
+static const struct blk_mq_ops mmc_mq_ops = {
+	.queue_rq	= mmc_queue_request,
+	.init_request	= mmc_init_request,
+	.exit_request	= mmc_exit_request,
+	.complete	= mmc_complete_request,
+};
+
 /**
  * mmc_init_queue - initialise a queue structure.
  * @mq: mmc queue
  * @card: mmc card to attach this queue
- * @lock: queue lock
  * @subname: partition subname
  *
  * Initialise a MMC card request queue.
  */
 int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
-		   spinlock_t *lock, const char *subname)
+		   const char *subname)
 {
 	struct mmc_host *host = card->host;
-	int ret = -ENOMEM;
+	int ret;
 
 	mq->card = card;
-	mq->queue = blk_alloc_queue(GFP_KERNEL);
-	if (!mq->queue)
-		return -ENOMEM;
-	mq->queue->queue_lock = lock;
-	mq->queue->request_fn = mmc_request_fn;
-	mq->queue->init_rq_fn = mmc_init_request;
-	mq->queue->exit_rq_fn = mmc_exit_request;
-	mq->queue->cmd_size = sizeof(struct mmc_queue_req);
-	mq->queue->queuedata = mq;
-	ret = blk_init_allocated_queue(mq->queue);
+	mq->tag_set.ops = &mmc_mq_ops;
+	/* The MMC/SD protocols have only one command pipe */
+	mq->tag_set.nr_hw_queues = 1;
+	/* Set this to 2 to simulate async requests, should we use 3? */
+	mq->tag_set.queue_depth = 2;
+	mq->tag_set.cmd_size = sizeof(struct mmc_queue_req);
+	mq->tag_set.numa_node = NUMA_NO_NODE;
+	/* We use blocking requests */
+	mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING;
+	/* Should we use BLK_MQ_F_SG_MERGE? */
+	mq->tag_set.driver_data = mq;
+
+	ret = blk_mq_alloc_tag_set(&mq->tag_set);
 	if (ret) {
-		blk_cleanup_queue(mq->queue);
+		dev_err(host->parent, "failed to allocate MQ tag set\n");
 		return ret;
 	}
-
-	blk_queue_prep_rq(mq->queue, mmc_prep_request);
-
-	mmc_setup_queue(mq, card);
-
-	mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
-		host->index, subname ? subname : "");
-
-	if (IS_ERR(mq->thread)) {
-		ret = PTR_ERR(mq->thread);
-		goto cleanup_queue;
+	mq->queue = blk_mq_init_queue(&mq->tag_set);
+	if (!mq->queue) {
+		dev_err(host->parent, "failed to initialize block MQ\n");
+		goto cleanup_free_tag_set;
 	}
+	mq->queue->queuedata = mq;
+	mmc_setup_queue(mq);
 
 	return 0;
 
-cleanup_queue:
-	blk_cleanup_queue(mq->queue);
+cleanup_free_tag_set:
+	blk_mq_free_tag_set(&mq->tag_set);
 	return ret;
 }
 
 void mmc_cleanup_queue(struct mmc_queue *mq)
 {
 	struct request_queue *q = mq->queue;
-	unsigned long flags;
 
 	/* Make sure the queue isn't suspended, as that will deadlock */
 	mmc_queue_resume(mq);
 
-	/* Then terminate our worker thread */
-	kthread_stop(mq->thread);
-
-	/* Empty the queue */
-	spin_lock_irqsave(q->queue_lock, flags);
 	q->queuedata = NULL;
 	blk_start_queue(q);
-	spin_unlock_irqrestore(q->queue_lock, flags);
-
+	blk_cleanup_queue(q);
+	blk_mq_free_tag_set(&mq->tag_set);
 	mq->card = NULL;
 }
 EXPORT_SYMBOL(mmc_cleanup_queue);
@@ -265,23 +229,26 @@ EXPORT_SYMBOL(mmc_cleanup_queue);
 * mmc_queue_suspend - suspend a MMC request queue
 * @mq: MMC queue to suspend
 *
- * Stop the block request queue, and wait for our thread to
- * complete any outstanding requests. This ensures that we
+ * Stop the block request queue. This ensures that we
 * won't suspend while a request is being processed.
*/ void mmc_queue_suspend(struct mmc_queue *mq) { struct request_queue *q = mq->queue; - unsigned long flags; if (!mq->suspended) { - mq->suspended |= true; - - spin_lock_irqsave(q->queue_lock, flags); - blk_stop_queue(q); - spin_unlock_irqrestore(q->queue_lock, flags); - - down(&mq->thread_sem); + mq->suspended = true; + blk_mq_quiesce_queue(q); + /* + * Currently the block layer will just block + * new request from entering the queue after + * this call, so we need some way of making + * sure all outstanding requests are completed + * before suspending. This is one way, maybe + * not so elegant. + */ + mmc_get_card(mq->card, NULL); + mmc_put_card(mq->card, NULL); } } @@ -292,16 +259,10 @@ void mmc_queue_suspend(struct mmc_queue *mq) void mmc_queue_resume(struct mmc_queue *mq) { struct request_queue *q = mq->queue; - unsigned long flags; if (mq->suspended) { mq->suspended = false; - - up(&mq->thread_sem); - - spin_lock_irqsave(q->queue_lock, flags); - blk_start_queue(q); - spin_unlock_irqrestore(q->queue_lock, flags); + blk_mq_unquiesce_queue(q); } } diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h index 67ae311b107f..c78fbb226a90 100644 --- a/drivers/mmc/core/queue.h +++ b/drivers/mmc/core/queue.h @@ -61,16 +61,14 @@ struct mmc_queue_req { struct mmc_queue { struct mmc_card *card; - struct task_struct *thread; - struct semaphore thread_sem; bool suspended; - bool asleep; struct mmc_blk_data *blkdata; struct request_queue *queue; + struct mmc_ctx blkctx; + struct blk_mq_tag_set tag_set; }; -extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *, - const char *); +extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, const char *); extern void mmc_cleanup_queue(struct mmc_queue *); extern void mmc_queue_suspend(struct mmc_queue *); extern void mmc_queue_resume(struct mmc_queue *);