From patchwork Wed May 10 08:24:14 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 98967
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson, Adrian Hunter
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
 Bartlomiej Zolnierkiewicz, Paolo Valente, Linus Walleij
Subject: [PATCH 1/5] mmc: core: Delete bounce buffer Kconfig option
Date: Wed, 10 May 2017 10:24:14 +0200
Message-Id: <20170510082418.10513-2-linus.walleij@linaro.org>
In-Reply-To: <20170510082418.10513-1-linus.walleij@linaro.org>
References: <20170510082418.10513-1-linus.walleij@linaro.org>

This option is activated by all multiplatform configs and the like, so we
almost always have it turned on, and the memory it saves is negligible,
even more so moving forward. The actual bounce buffer only gets allocated
when used; the only thing the ifdefs save is a little bit of code.
It is highly improper to have this as a Kconfig option that gets turned on
by Kconfig; make this a pure runtime thing and let the host decide whether
we use bounce buffers. We add a new property "disable_bounce" to the host
struct.

Notice that mmc_queue_calc_bouncesz() already disables the bounce buffers
if host->max_segs != 1, so any arch that has a maximum number of segments
higher than 1 will have bounce buffers disabled.

The option CONFIG_MMC_BLOCK_BOUNCE is default y, so the majority of
platforms in the kernel already have it on, and it then gets turned off at
runtime since most of these have a host->max_segs > 1. The few exceptions
that have host->max_segs == 1 and still turn off the bounce buffering are
those that disable it in their defconfig. Those are the following:

arch/arm/configs/colibri_pxa300_defconfig
arch/arm/configs/zeus_defconfig
- Uses MMC_PXA, drivers/mmc/host/pxamci.c
- Sets host->max_segs = NR_SG, which is 1
- This needs its bounce buffer deactivated, so we set host->disable_bounce
  to true in the host driver

arch/arm/configs/davinci_all_defconfig
- Uses MMC_DAVINCI, drivers/mmc/host/davinci_mmc.c
- This driver sets host->max_segs to MAX_NR_SG, which is 16
- That means this driver already disables bounce buffers
- No special action needed for this platform

arch/arm/configs/lpc32xx_defconfig
arch/arm/configs/nhk8815_defconfig
arch/arm/configs/u300_defconfig
- Uses MMC_ARMMMCI, drivers/mmc/host/mmci.[c|h]
- This driver by default sets host->max_segs to NR_SG, which is 128, unless
  a DMA engine is used, and in that case the number of segments is also > 1
- That means this driver already disables bounce buffers
- No special action needed for these platforms

arch/arm/configs/sama5_defconfig
- Uses MMC_SDHCI, MMC_SDHCI_PLTFM, MMC_SDHCI_OF_AT91, MMC_ATMELMCI
- Uses drivers/mmc/host/sdhci.c
- Normally sets host->max_segs to SDHCI_MAX_SEGS, which is 128, and thus
  disables bounce buffers
- Sets host->max_segs to 1 if SDHCI_USE_SDMA is set
- SDHCI_USE_SDMA is only set by SDHCI on PCI adapters
- That means that for this platform bounce buffers are already disabled
  at runtime
- No special action needed for this platform

arch/blackfin/configs/CM-BF533_defconfig
arch/blackfin/configs/CM-BF537E_defconfig
- Uses MMC_SPI (a simple MMC card connected on SPI pins)
- Uses drivers/mmc/host/mmc_spi.c
- Sets host->max_segs to MMC_SPI_BLOCKSATONCE, which is 128
- That means this platform already disables bounce buffers at runtime
- No special action needed for these platforms

arch/mips/configs/cavium_octeon_defconfig
- Uses MMC_CAVIUM_OCTEON, drivers/mmc/host/cavium.c
- Sets host->max_segs to 16 or 1
- We set host->disable_bounce to be sure, covering the max_segs == 1 case

arch/mips/configs/qi_lb60_defconfig
- Uses MMC_JZ4740, drivers/mmc/host/jz4740_mmc.c
- This sets host->max_segs to 128, so bounce buffers are already disabled
  at runtime
- No action needed for this platform

It would be interesting to come up with a list of the platforms that
actually end up using bounce buffers. I have not been able to infer such a
list, but it occurs when host->max_segs == 1 and the bounce buffering is
not explicitly disabled.
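For illustration only — example_mmc_probe() and the platform device below
are made-up names, not part of this patch — a host driver with a
single-segment controller that never wants bouncing would now opt out at
probe time roughly like this:

/* Hypothetical probe sketch (needs <linux/mmc/host.h>, <linux/platform_device.h>) */
static int example_mmc_probe(struct platform_device *pdev)
{
	struct mmc_host *mmc;

	mmc = mmc_alloc_host(0, &pdev->dev);
	if (!mmc)
		return -ENOMEM;

	/* A single segment per request is what normally triggers bouncing... */
	mmc->max_segs = 1;
	/* ...so use the new runtime property instead of the old Kconfig option */
	mmc->disable_bounce = true;

	return mmc_add_host(mmc);
}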
Signed-off-by: Linus Walleij --- drivers/mmc/core/Kconfig | 18 ------------------ drivers/mmc/core/queue.c | 15 +-------------- drivers/mmc/host/cavium.c | 3 +++ drivers/mmc/host/pxamci.c | 6 ++++++ include/linux/mmc/host.h | 1 + 5 files changed, 11 insertions(+), 32 deletions(-) -- 2.9.3 -- To unsubscribe from this list: send the line "unsubscribe linux-mmc" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html diff --git a/drivers/mmc/core/Kconfig b/drivers/mmc/core/Kconfig index fc1ecdaaa9ca..42e89060cd41 100644 --- a/drivers/mmc/core/Kconfig +++ b/drivers/mmc/core/Kconfig @@ -61,24 +61,6 @@ config MMC_BLOCK_MINORS If unsure, say 8 here. -config MMC_BLOCK_BOUNCE - bool "Use bounce buffer for simple hosts" - depends on MMC_BLOCK - default y - help - SD/MMC is a high latency protocol where it is crucial to - send large requests in order to get high performance. Many - controllers, however, are restricted to continuous memory - (i.e. they can't do scatter-gather), something the kernel - rarely can provide. - - Say Y here to help these restricted hosts by bouncing - requests back and forth from a large buffer. You will get - a big performance gain at the cost of up to 64 KiB of - physical memory. - - If unsure, say Y here. - config SDIO_UART tristate "SDIO UART/GPS class support" depends on TTY diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 5c37b6be3e7b..545466342fb1 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -219,7 +219,6 @@ static struct mmc_queue_req *mmc_queue_alloc_mqrqs(int qdepth) return mqrq; } -#ifdef CONFIG_MMC_BLOCK_BOUNCE static int mmc_queue_alloc_bounce_bufs(struct mmc_queue_req *mqrq, int qdepth, unsigned int bouncesz) { @@ -258,7 +257,7 @@ static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host) { unsigned int bouncesz = MMC_QUEUE_BOUNCESZ; - if (host->max_segs != 1) + if (host->max_segs != 1 || host->disable_bounce) return 0; if (bouncesz > host->max_req_size) @@ -273,18 +272,6 @@ static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host) return bouncesz; } -#else -static inline bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq, - int qdepth, unsigned int bouncesz) -{ - return false; -} - -static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host) -{ - return 0; -} -#endif static int mmc_queue_alloc_sgs(struct mmc_queue_req *mqrq, int qdepth, int max_segs) diff --git a/drivers/mmc/host/cavium.c b/drivers/mmc/host/cavium.c index 58b51ba6aabd..66066f73e477 100644 --- a/drivers/mmc/host/cavium.c +++ b/drivers/mmc/host/cavium.c @@ -1050,6 +1050,9 @@ int cvm_mmc_of_slot_probe(struct device *dev, struct cvm_mmc_host *host) else mmc->max_segs = 1; + /* Disable bounce buffers for max_segs = 1 */ + mmc->disable_bounce = true; + /* DMA size field can address up to 8 MB */ mmc->max_seg_size = 8 * 1024 * 1024; mmc->max_req_size = mmc->max_seg_size; diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c index c763b404510f..d3b5e6376504 100644 --- a/drivers/mmc/host/pxamci.c +++ b/drivers/mmc/host/pxamci.c @@ -666,6 +666,12 @@ static int pxamci_probe(struct platform_device *pdev) mmc->max_segs = NR_SG; /* + * This architecture used to disable bounce buffers through its + * defconfig, now it is done at runtime as a host property. + */ + mmc->disable_bounce = true; + + /* * Our hardware DMA can handle a maximum of one page per SG entry. 
*/ mmc->max_seg_size = PAGE_SIZE; diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h index 21385ac0c9b1..b53c0e18a33b 100644 --- a/include/linux/mmc/host.h +++ b/include/linux/mmc/host.h @@ -310,6 +310,7 @@ struct mmc_host { /* host specific block data */ unsigned int max_seg_size; /* see blk_queue_max_segment_size */ unsigned short max_segs; /* see blk_queue_max_segments */ + bool disable_bounce; /* disable bounce buffers */ unsigned short unused; unsigned int max_req_size; /* maximum number of bytes in one req */ unsigned int max_blk_size; /* maximum size of one mmc block */
From patchwork Wed May 10 08:24:15 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 98968
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson, Adrian Hunter
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
 Bartlomiej Zolnierkiewicz, Paolo Valente, Linus Walleij
Subject: [PATCH 2/5] mmc: core: Allocate per-request data using the block layer core
Date: Wed, 10 May 2017 10:24:15 +0200
Message-Id: <20170510082418.10513-3-linus.walleij@linaro.org>
In-Reply-To: <20170510082418.10513-1-linus.walleij@linaro.org>
References: <20170510082418.10513-1-linus.walleij@linaro.org>

The mmc_queue_req is a per-request state container the MMC core uses to
carry bounce buffers, pointers to asynchronous requests and so on. These
are currently allocated as a static array of objects; as a request comes
in, a mmc_queue_req is assigned to it and used during the lifetime of the
request.
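The rework below leans on the block layer's own per-request data mechanism.
As a rough orientation — struct example_pdu, example_init_rq(),
example_exit_rq() and example_setup_queue() are invented names, not part of
this patch — that mechanism is used like this on the legacy request queue:

/* Sketch only; needs <linux/blkdev.h>, <linux/blk-mq.h>, <linux/slab.h> */
struct example_pdu {
	void *driver_state;		/* whatever the driver needs per request */
};

static int example_init_rq(struct request_queue *q, struct request *rq,
			   gfp_t gfp)
{
	struct example_pdu *pdu = blk_mq_rq_to_pdu(rq);

	/* Allocate per-request resources, honouring the caller's gfp flags */
	pdu->driver_state = kzalloc(64, gfp);
	return pdu->driver_state ? 0 : -ENOMEM;
}

static void example_exit_rq(struct request_queue *q, struct request *rq)
{
	struct example_pdu *pdu = blk_mq_rq_to_pdu(rq);

	kfree(pdu->driver_state);
}

static int example_setup_queue(struct request_queue *q)
{
	/* request_fn/queue_lock assignment omitted for brevity */
	q->cmd_size = sizeof(struct example_pdu);
	q->init_rq_fn = example_init_rq;
	q->exit_rq_fn = example_exit_rq;

	return blk_init_allocated_queue(q);
}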
This is backwards compared to how other block layer drivers work: they
usually let the block core provide a per-request struct that gets allocated
right behind the struct request, and which can be obtained using the
blk_mq_rq_to_pdu() helper. (The _mq_ infix in this function name is
misleading: it is used by both the old and the MQ block layer.) The
per-request struct gets allocated to the size stored in the queue variable
.cmd_size, initialized using .init_rq_fn() and cleaned up using
.exit_rq_fn().

We now make the MMC core rely on this block layer mechanism to allocate the
per-request mmc_queue_req state container. Doing this makes a lot of
complicated queue handling go away. We only need to keep the .qcnt counter
that keeps track of how many requests are currently being processed by the
MMC layer. The MQ block layer will also replace this once we transition to
it.

Doing this refactoring is necessary to move the ioctl() operations into
custom block layer requests tagged with REQ_OP_DRV_[IN|OUT] instead of the
custom code using the BigMMCHostLock that we have today: those require that
per-request data be obtainable easily from a request after creating a
custom request with e.g.:

struct request *rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
struct mmc_queue_req *mq_rq = req_to_mq_rq(rq);

And this is not possible with the current construction, as the request is
not immediately assigned the per-request state container; instead it gets
assigned when the request finally enters the MMC queue, which is way too
late for custom requests.

Signed-off-by: Linus Walleij
---
 drivers/mmc/core/block.c |  38 ++------
 drivers/mmc/core/queue.c | 222 +++++++++++++----------------------------------
 drivers/mmc/core/queue.h |  22 ++---
 include/linux/mmc/card.h |   2 -
 4 files changed, 80 insertions(+), 204 deletions(-)
--
2.9.3

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 8273b078686d..be782b8d4a0d 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -129,13 +129,6 @@ static inline int mmc_blk_part_switch(struct mmc_card *card, struct mmc_blk_data *md); static int get_card_status(struct mmc_card *card, u32 *status, int retries); -static void mmc_blk_requeue(struct request_queue *q, struct request *req) -{ - spin_lock_irq(q->queue_lock); - blk_requeue_request(q, req); - spin_unlock_irq(q->queue_lock); -} - static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk) { struct mmc_blk_data *md; @@ -1642,7 +1635,7 @@ static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card, if (mmc_card_removed(card)) req->rq_flags |= RQF_QUIET; while (blk_end_request(req, -EIO, blk_rq_cur_bytes(req))); - mmc_queue_req_free(mq, mqrq); + mq->qcnt--; } /** @@ -1662,7 +1655,7 @@ static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req, if (mmc_card_removed(mq->card)) { req->rq_flags |= RQF_QUIET; blk_end_request_all(req, -EIO); - mmc_queue_req_free(mq, mqrq); + mq->qcnt--; /* FIXME: just set to 0?
*/ return; } /* Else proceed and try to restart the current async request */ @@ -1685,12 +1678,8 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) bool req_pending = true; if (new_req) { - mqrq_cur = mmc_queue_req_find(mq, new_req); - if (!mqrq_cur) { - WARN_ON(1); - mmc_blk_requeue(mq->queue, new_req); - new_req = NULL; - } + mqrq_cur = req_to_mq_rq(new_req); + mq->qcnt++; } if (!mq->qcnt) @@ -1764,12 +1753,12 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) if (req_pending) mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); else - mmc_queue_req_free(mq, mq_rq); + mq->qcnt--; mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); return; } if (!req_pending) { - mmc_queue_req_free(mq, mq_rq); + mq->qcnt--; mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); return; } @@ -1814,7 +1803,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) req_pending = blk_end_request(old_req, -EIO, brq->data.blksz); if (!req_pending) { - mmc_queue_req_free(mq, mq_rq); + mq->qcnt--; mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); return; } @@ -1844,7 +1833,7 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req) } } while (req_pending); - mmc_queue_req_free(mq, mq_rq); + mq->qcnt--; } void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) @@ -2166,7 +2155,6 @@ static int mmc_blk_probe(struct mmc_card *card) { struct mmc_blk_data *md, *part_md; char cap_str[10]; - int ret; /* * Check that the card supports the command class(es) we need. @@ -2176,15 +2164,9 @@ static int mmc_blk_probe(struct mmc_card *card) mmc_fixup_device(card, mmc_blk_fixups); - ret = mmc_queue_alloc_shared_queue(card); - if (ret) - return ret; - md = mmc_blk_alloc(card); - if (IS_ERR(md)) { - mmc_queue_free_shared_queue(card); + if (IS_ERR(md)) return PTR_ERR(md); - } string_get_size((u64)get_capacity(md->disk), 512, STRING_UNITS_2, cap_str, sizeof(cap_str)); @@ -2222,7 +2204,6 @@ static int mmc_blk_probe(struct mmc_card *card) out: mmc_blk_remove_parts(card, md); mmc_blk_remove_req(md); - mmc_queue_free_shared_queue(card); return 0; } @@ -2240,7 +2221,6 @@ static void mmc_blk_remove(struct mmc_card *card) pm_runtime_put_noidle(&card->dev); mmc_blk_remove_req(md); dev_set_drvdata(&card->dev, NULL); - mmc_queue_free_shared_queue(card); } static int _mmc_blk_suspend(struct mmc_card *card) diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index 545466342fb1..65a8e0e63012 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -40,35 +40,6 @@ static int mmc_prep_request(struct request_queue *q, struct request *req) return BLKPREP_OK; } -struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq, - struct request *req) -{ - struct mmc_queue_req *mqrq; - int i = ffz(mq->qslots); - - if (i >= mq->qdepth) - return NULL; - - mqrq = &mq->mqrq[i]; - WARN_ON(mqrq->req || mq->qcnt >= mq->qdepth || - test_bit(mqrq->task_id, &mq->qslots)); - mqrq->req = req; - mq->qcnt += 1; - __set_bit(mqrq->task_id, &mq->qslots); - - return mqrq; -} - -void mmc_queue_req_free(struct mmc_queue *mq, - struct mmc_queue_req *mqrq) -{ - WARN_ON(!mqrq->req || mq->qcnt < 1 || - !test_bit(mqrq->task_id, &mq->qslots)); - mqrq->req = NULL; - mq->qcnt -= 1; - __clear_bit(mqrq->task_id, &mq->qslots); -} - static int mmc_queue_thread(void *d) { struct mmc_queue *mq = d; @@ -149,11 +120,11 @@ static void mmc_request_fn(struct request_queue *q) wake_up_process(mq->thread); } -static struct scatterlist *mmc_alloc_sg(int sg_len) +static 
struct scatterlist *mmc_alloc_sg(int sg_len, gfp_t gfp) { struct scatterlist *sg; - sg = kmalloc_array(sg_len, sizeof(*sg), GFP_KERNEL); + sg = kmalloc_array(sg_len, sizeof(*sg), gfp); if (sg) sg_init_table(sg, sg_len); @@ -179,80 +150,6 @@ static void mmc_queue_setup_discard(struct request_queue *q, queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q); } -static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq) -{ - kfree(mqrq->bounce_sg); - mqrq->bounce_sg = NULL; - - kfree(mqrq->sg); - mqrq->sg = NULL; - - kfree(mqrq->bounce_buf); - mqrq->bounce_buf = NULL; -} - -static void mmc_queue_reqs_free_bufs(struct mmc_queue_req *mqrq, int qdepth) -{ - int i; - - for (i = 0; i < qdepth; i++) - mmc_queue_req_free_bufs(&mqrq[i]); -} - -static void mmc_queue_free_mqrqs(struct mmc_queue_req *mqrq, int qdepth) -{ - mmc_queue_reqs_free_bufs(mqrq, qdepth); - kfree(mqrq); -} - -static struct mmc_queue_req *mmc_queue_alloc_mqrqs(int qdepth) -{ - struct mmc_queue_req *mqrq; - int i; - - mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL); - if (mqrq) { - for (i = 0; i < qdepth; i++) - mqrq[i].task_id = i; - } - - return mqrq; -} - -static int mmc_queue_alloc_bounce_bufs(struct mmc_queue_req *mqrq, int qdepth, - unsigned int bouncesz) -{ - int i; - - for (i = 0; i < qdepth; i++) { - mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL); - if (!mqrq[i].bounce_buf) - return -ENOMEM; - - mqrq[i].sg = mmc_alloc_sg(1); - if (!mqrq[i].sg) - return -ENOMEM; - - mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512); - if (!mqrq[i].bounce_sg) - return -ENOMEM; - } - - return 0; -} - -static bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq, int qdepth, - unsigned int bouncesz) -{ - int ret; - - ret = mmc_queue_alloc_bounce_bufs(mqrq, qdepth, bouncesz); - if (ret) - mmc_queue_reqs_free_bufs(mqrq, qdepth); - - return !ret; -} - static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host) { unsigned int bouncesz = MMC_QUEUE_BOUNCESZ; @@ -273,71 +170,62 @@ static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host) return bouncesz; } -static int mmc_queue_alloc_sgs(struct mmc_queue_req *mqrq, int qdepth, - int max_segs) +/** + * mmc_init_request() - initialize the MMC-specific per-request data + * @q: the request queue + * @req: the request + * @gfp: memory allocation policy + */ +static int mmc_init_request(struct request_queue *q, struct request *req, + gfp_t gfp) { - int i; + struct mmc_queue_req *mq_rq = req_to_mq_rq(req); + struct mmc_queue *mq = q->queuedata; + struct mmc_card *card = mq->card; + struct mmc_host *host = card->host; - for (i = 0; i < qdepth; i++) { - mqrq[i].sg = mmc_alloc_sg(max_segs); - if (!mqrq[i].sg) + /* FIXME: use req_to_mq_rq() everywhere this is dereferenced */ + mq_rq->req = req; + + if (card->bouncesz) { + mq_rq->bounce_buf = kmalloc(card->bouncesz, gfp); + if (!mq_rq->bounce_buf) + return -ENOMEM; + if (card->bouncesz > 512) { + mq_rq->sg = mmc_alloc_sg(1, gfp); + if (!mq_rq->sg) + return -ENOMEM; + mq_rq->bounce_sg = mmc_alloc_sg(card->bouncesz / 512, + gfp); + if (!mq_rq->bounce_sg) + return -ENOMEM; + } + } else { + mq_rq->bounce_buf = NULL; + mq_rq->bounce_sg = NULL; + mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp); + if (!mq_rq->sg) return -ENOMEM; } return 0; } -void mmc_queue_free_shared_queue(struct mmc_card *card) +static void mmc_exit_request(struct request_queue *q, struct request *req) { - if (card->mqrq) { - mmc_queue_free_mqrqs(card->mqrq, card->qdepth); - card->mqrq = NULL; - } -} + struct mmc_queue_req *mq_rq = req_to_mq_rq(req); -static int 
__mmc_queue_alloc_shared_queue(struct mmc_card *card, int qdepth) -{ - struct mmc_host *host = card->host; - struct mmc_queue_req *mqrq; - unsigned int bouncesz; - int ret = 0; - - if (card->mqrq) - return -EINVAL; - - mqrq = mmc_queue_alloc_mqrqs(qdepth); - if (!mqrq) - return -ENOMEM; + /* It is OK to kfree(NULL) so this will be smooth */ + kfree(mq_rq->bounce_sg); + mq_rq->bounce_sg = NULL; - card->mqrq = mqrq; - card->qdepth = qdepth; + kfree(mq_rq->bounce_buf); + mq_rq->bounce_buf = NULL; - bouncesz = mmc_queue_calc_bouncesz(host); - - if (bouncesz && !mmc_queue_alloc_bounce(mqrq, qdepth, bouncesz)) { - bouncesz = 0; - pr_warn("%s: unable to allocate bounce buffers\n", - mmc_card_name(card)); - } - - card->bouncesz = bouncesz; - - if (!bouncesz) { - ret = mmc_queue_alloc_sgs(mqrq, qdepth, host->max_segs); - if (ret) - goto out_err; - } + kfree(mq_rq->sg); + mq_rq->sg = NULL; - return ret; - -out_err: - mmc_queue_free_shared_queue(card); - return ret; -} - -int mmc_queue_alloc_shared_queue(struct mmc_card *card) -{ - return __mmc_queue_alloc_shared_queue(card, 2); + mq_rq->req = NULL; } /** @@ -360,13 +248,21 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT; mq->card = card; - mq->queue = blk_init_queue(mmc_request_fn, lock); + mq->queue = blk_alloc_queue_node(GFP_KERNEL, NUMA_NO_NODE); if (!mq->queue) return -ENOMEM; - - mq->mqrq = card->mqrq; - mq->qdepth = card->qdepth; + mq->queue->queue_lock = lock; + mq->queue->request_fn = mmc_request_fn; + mq->queue->init_rq_fn = mmc_init_request; + mq->queue->exit_rq_fn = mmc_exit_request; + mq->queue->cmd_size = sizeof(struct mmc_queue_req); mq->queue->queuedata = mq; + mq->qcnt = 0; + ret = blk_init_allocated_queue(mq->queue); + if (ret) { + blk_cleanup_queue(mq->queue); + return ret; + } blk_queue_prep_rq(mq->queue, mmc_prep_request); queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue); @@ -374,6 +270,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, if (mmc_can_erase(card)) mmc_queue_setup_discard(mq->queue, card); + card->bouncesz = mmc_queue_calc_bouncesz(host); if (card->bouncesz) { blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY); blk_queue_max_hw_sectors(mq->queue, card->bouncesz / 512); @@ -400,7 +297,6 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, return 0; cleanup_queue: - mq->mqrq = NULL; blk_cleanup_queue(mq->queue); return ret; } @@ -421,8 +317,8 @@ void mmc_cleanup_queue(struct mmc_queue *mq) q->queuedata = NULL; blk_start_queue(q); spin_unlock_irqrestore(q->queue_lock, flags); + blk_cleanup_queue(mq->queue); - mq->mqrq = NULL; mq->card = NULL; } EXPORT_SYMBOL(mmc_cleanup_queue); diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h index 871796c3f406..8aa10ffdf622 100644 --- a/drivers/mmc/core/queue.h +++ b/drivers/mmc/core/queue.h @@ -3,9 +3,15 @@ #include #include +#include #include #include +static inline struct mmc_queue_req *req_to_mq_rq(struct request *rq) +{ + return blk_mq_rq_to_pdu(rq); +} + static inline bool mmc_req_is_special(struct request *req) { return req && @@ -34,7 +40,6 @@ struct mmc_queue_req { struct scatterlist *bounce_sg; unsigned int bounce_sg_len; struct mmc_async_req areq; - int task_id; }; struct mmc_queue { @@ -45,14 +50,15 @@ struct mmc_queue { bool asleep; struct mmc_blk_data *blkdata; struct request_queue *queue; - struct mmc_queue_req *mqrq; - int qdepth; + /* + * FIXME: this counter is not a very reliable way of keeping + * track of how many requests that are ongoing. 
Switch to just + * letting the block core keep track of requests and per-request + * associated mmc_queue_req data. + */ int qcnt; - unsigned long qslots; }; -extern int mmc_queue_alloc_shared_queue(struct mmc_card *card); -extern void mmc_queue_free_shared_queue(struct mmc_card *card); extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *, const char *); extern void mmc_cleanup_queue(struct mmc_queue *); @@ -66,8 +72,4 @@ extern void mmc_queue_bounce_post(struct mmc_queue_req *); extern int mmc_access_rpmb(struct mmc_queue *); -extern struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *, - struct request *); -extern void mmc_queue_req_free(struct mmc_queue *, struct mmc_queue_req *); - #endif diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h index aad015e0152b..46c73e97e61f 100644 --- a/include/linux/mmc/card.h +++ b/include/linux/mmc/card.h @@ -305,9 +305,7 @@ struct mmc_card { struct mmc_part part[MMC_NUM_PHY_PARTITION]; /* physical partitions */ unsigned int nr_parts; - struct mmc_queue_req *mqrq; /* Shared queue structure */ unsigned int bouncesz; /* Bounce buffer size */ - int qdepth; /* Shared queue depth */ }; static inline bool mmc_large_sector(struct mmc_card *card) From patchwork Wed May 10 08:24:16 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Linus Walleij X-Patchwork-Id: 98970 Delivered-To: patch@linaro.org Received: by 10.140.96.100 with SMTP id j91csp105592qge; Wed, 10 May 2017 01:25:10 -0700 (PDT) X-Received: by 10.84.143.195 with SMTP id 61mr6386214plz.158.1494404710102; Wed, 10 May 2017 01:25:10 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1494404710; cv=none; d=google.com; s=arc-20160816; b=Wvbahu5xtsJ9sEtIBBeqiXBZXfvAr0Za/SxIifZCGY3OurGLlHIHnPAKOfJseJPfDh WIMM3T7n7yDMrA8aWq63JQNyDME7GJqPTzRL14OVRTqriBOcqmN4Jqpl/anZjrcJQDnK QQOqF0BF5dMil34c7GUWOi2HEBp8mgJRghR97M0rcuBRI5jLm+lQUhUxfoSY9jspQnA0 MIC1IrmyCm8i8I9Hcj4ILbJz+8Wp1x6Mz04ike326H9vxVQfds4XZXI0gwmFOBRn8CtV +uu5H7QHzMQYC5wjYgxEb4md+twgvz8+upAwKZ/6k4ESvdpxHl3DNMqYfuyVU1NbHnv/ jQMQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:arc-authentication-results; bh=Gd72MKrXzmiu9iU0wuTswZ2ag5CoBVqPsg4dcArbf04=; b=Jn0vWr6adKEWVuRo3sNLRwvZiQ9YC6Jyd9mCqJibkiciIzOXnsrPUjuit9Hd743W0J Idevf/0TjJLYyYAstdKCtR5F2pHDseF+ZxboBtiLI57Rj/Dl3Sqfzx4bDEUL4GoY2xkJ PMKryqSV+ZkFBSUcbM95cn1UwWnspByLA97f6yF5Pc5HZGYtTZtyDfc2Kzof7VXMLPmj syCPDbe/nGvGoTy98M9qfbUtHEBf2lo2SdDinRWFm0DAFUVTmLHwcUfCCzPkiGZOtH1v +FwUD29OIKdy0YO6s4sZNT65wXEZmQ3QfJOywcdhOrdH4SPgmtAQE3/uo3BrW033Ols0 CfWg== ARC-Authentication-Results: i=1; mx.google.com; dkim=neutral (body hash did not verify) header.i=@linaro.org; spf=pass (google.com: best guess record for domain of linux-mmc-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-mmc-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson, Adrian Hunter
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
 Bartlomiej Zolnierkiewicz, Paolo Valente, Linus Walleij
Subject: [PATCH 3/5] mmc: block: Tag is_rpmb as bool
Date: Wed, 10 May 2017 10:24:16 +0200
Message-Id: <20170510082418.10513-4-linus.walleij@linaro.org>
In-Reply-To: <20170510082418.10513-1-linus.walleij@linaro.org>
References: <20170510082418.10513-1-linus.walleij@linaro.org>

The variable is_rpmb is clearly a bool and even assigned true and false, yet declared as an int.
Signed-off-by: Linus Walleij --- drivers/mmc/core/block.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- 2.9.3 diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index be782b8d4a0d..323f3790b629 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -443,7 +443,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md, struct mmc_request mrq = {}; struct scatterlist sg; int err; - int is_rpmb = false; + bool is_rpmb = false; u32 status = 0; if (!card || !md || !idata)
From patchwork Wed May 10 08:24:17 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 98971
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson, Adrian Hunter
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
 Bartlomiej Zolnierkiewicz, Paolo Valente, Linus Walleij
Subject: [PATCH 4/5] mmc: block: move single ioctl() commands to block requests
Date: Wed, 10 May 2017 10:24:17 +0200
Message-Id: <20170510082418.10513-5-linus.walleij@linaro.org>
In-Reply-To: <20170510082418.10513-1-linus.walleij@linaro.org>
References: <20170510082418.10513-1-linus.walleij@linaro.org>

This wraps single ioctl() commands into block requests using the custom
block layer request types REQ_OP_DRV_IN and REQ_OP_DRV_OUT. By doing this
we are loosening the grip on the big host lock, since two calls to
mmc_get_card()/mmc_put_card() are removed. We store the ioctl() in/out
argument as a pointer in the per-request struct mmc_queue_req container.
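As a rough sketch of the dispatch pattern adopted here — example_send_drv_cmd()
and struct example_pdu are illustrative only, and error handling is trimmed —
a driver-private command now travels through the block layer like this:

/* Sketch only; needs <linux/blkdev.h>, <linux/blk-mq.h>, <linux/err.h> */
struct example_pdu {
	int result;		/* filled in by the driver's issue path */
	void *payload;		/* the command being carried, e.g. ioctl data */
};

static int example_send_drv_cmd(struct request_queue *q, void *payload,
				bool write)
{
	struct request *req;
	struct example_pdu *pdu;
	int result;

	req = blk_get_request(q, write ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
			      __GFP_RECLAIM);
	if (IS_ERR(req))
		return PTR_ERR(req);

	/* Per-request data already sits behind the request (see patch 2/5) */
	pdu = blk_mq_rq_to_pdu(req);
	pdu->payload = payload;

	/* Queue the request and wait for the issue path to complete it */
	blk_execute_rq(q, NULL, req, 0);
	result = pdu->result;

	blk_put_request(req);
	return result;
}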
Since we now let the block layer allocate this data, blk_get_request() will allocate it for us and we can immediately dereference it and use it to pass the argument into the block layer. Tested on the ux500 with the userspace: mmc extcsd read /dev/mmcblk3 resulting in a successful EXTCSD info dump back to the console. Signed-off-by: Linus Walleij --- drivers/mmc/core/block.c | 56 ++++++++++++++++++++++++++++++++++++++---------- drivers/mmc/core/queue.h | 3 +++ 2 files changed, 48 insertions(+), 11 deletions(-) -- 2.9.3 -- To unsubscribe from this list: send the line "unsubscribe linux-mmc" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html tested-by: Avri Altman diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 323f3790b629..640db4f57a31 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -564,8 +564,10 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev, { struct mmc_blk_ioc_data *idata; struct mmc_blk_data *md; + struct mmc_queue *mq; struct mmc_card *card; int err = 0, ioc_err = 0; + struct request *req; /* * The caller must have CAP_SYS_RAWIO, and must be calling this on the @@ -591,17 +593,18 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev, goto cmd_done; } - mmc_get_card(card); - - ioc_err = __mmc_blk_ioctl_cmd(card, md, idata); - - /* Always switch back to main area after RPMB access */ - if (md->area_type & MMC_BLK_DATA_AREA_RPMB) - mmc_blk_part_switch(card, dev_get_drvdata(&card->dev)); - - mmc_put_card(card); - + /* + * Dispatch the ioctl() into the block request queue. + */ + mq = &md->queue; + req = blk_get_request(mq->queue, + idata->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN, + __GFP_RECLAIM); + req_to_mq_rq(req)->idata = idata; + blk_execute_rq(mq->queue, NULL, req, 0); + ioc_err = req_to_mq_rq(req)->ioc_result; err = mmc_blk_ioctl_copy_to_user(ic_ptr, idata); + blk_put_request(req); cmd_done: mmc_blk_put(md); @@ -611,6 +614,31 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev, return ioc_err ? ioc_err : err; } +/* + * The ioctl commands come back from the block layer after it queued it and + * processed it with all other requests and then they get issued in this + * function. 
+ */ +static void mmc_blk_ioctl_cmd_issue(struct mmc_queue *mq, struct request *req) +{ + struct mmc_queue_req *mq_rq; + struct mmc_blk_ioc_data *idata; + struct mmc_card *card = mq->card; + struct mmc_blk_data *md = mq->blkdata; + int ioc_err; + + mq_rq = req_to_mq_rq(req); + idata = mq_rq->idata; + ioc_err = __mmc_blk_ioctl_cmd(card, md, idata); + mq_rq->ioc_result = ioc_err; + + /* Always switch back to main area after RPMB access */ + if (md->area_type & MMC_BLK_DATA_AREA_RPMB) + mmc_blk_part_switch(card, dev_get_drvdata(&card->dev)); + + blk_end_request_all(req, ioc_err); +} + static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev, struct mmc_ioc_multi_cmd __user *user) { @@ -1854,7 +1882,13 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) goto out; } - if (req && req_op(req) == REQ_OP_DISCARD) { + if (req && + (req_op(req) == REQ_OP_DRV_IN || req_op(req) == REQ_OP_DRV_OUT)) { + /* complete ongoing async transfer before issuing ioctl()s */ + if (mq->qcnt) + mmc_blk_issue_rw_rq(mq, NULL); + mmc_blk_ioctl_cmd_issue(mq, req); + } else if (req && req_op(req) == REQ_OP_DISCARD) { /* complete ongoing async transfer before issuing discard */ if (mq->qcnt) mmc_blk_issue_rw_rq(mq, NULL); diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h index 8aa10ffdf622..aeb3408dc85e 100644 --- a/drivers/mmc/core/queue.h +++ b/drivers/mmc/core/queue.h @@ -22,6 +22,7 @@ static inline bool mmc_req_is_special(struct request *req) struct task_struct; struct mmc_blk_data; +struct mmc_blk_ioc_data; struct mmc_blk_request { struct mmc_request mrq; @@ -40,6 +41,8 @@ struct mmc_queue_req { struct scatterlist *bounce_sg; unsigned int bounce_sg_len; struct mmc_async_req areq; + int ioc_result; + struct mmc_blk_ioc_data *idata; }; struct mmc_queue { From patchwork Wed May 10 08:24:18 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Linus Walleij X-Patchwork-Id: 98972 Delivered-To: patch@linaro.org Received: by 10.140.96.100 with SMTP id j91csp105649qge; Wed, 10 May 2017 01:25:23 -0700 (PDT) X-Received: by 10.98.205.65 with SMTP id o62mr4684154pfg.105.1494404722920; Wed, 10 May 2017 01:25:22 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1494404722; cv=none; d=google.com; s=arc-20160816; b=kbhYnlaGfIWV79LSRmopFtul49AtuX4w0CN2tsL8N9BtmTpBYxt2i2Q9dIBShpWkm9 EBm8idJGR9qak+wEVeaaA3EuE+x3v2VtH5XZfVtRSASKR1IXzvE83dwe7ZGCOJ0W9CVD Mg/DFNxHuwQl2o/HseNUWfFEFI99gMWvckfrFHSBXIEcK2GuDpVedN4chqHeeWPsax1f m4MCe++K1bTYUPvMdA9qmWNGTQlDbGZH9+2K2Z9dPv9gTPL8Yl4qWoff9wtq7A4R4xrv OSv9a0c+X+ta7/Ip3ihE3CEKv/KAQILNdVVJD1Luj8A1UbhWRHd6+9dnJF6RjyyFskmy xR7w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:arc-authentication-results; bh=GHUrpngEbLZ6nthdWh54/IgdgMc77zK7lTxXLgFQCPA=; b=Q/VGQVY9jVUPCubEZ6IBQxYt1N/ldJ6LDmFi8sMT01Q8cCaiZqw+h7iRYF5QGxMarm 0xJOF0T4WxNGD7ToADcnuNBUk5DwEBgFmuCUCgKAnSZU3F8TRu07jag3xPPitkcFaSvK PR9rq7tp5D4BlhY5UMWVKWUS0n26g7ogn4IRCYTuROlYneVDQPzoTSySxppZP8rwgobM hbPKq5G0YXskri0GaewaE5Po9qMrbVDUff/aadoz8NLbWse096t57IG3W/Ny1xZiW6ps iyIlO4dLPrC9RUYsTZa+1ezTeXD91eLeHLMmMqGvnq2KQZTBqsWHFoLsLYYoqB1DMnKD YtSQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=neutral (body hash did not verify) header.i=@linaro.org; spf=pass (google.com: best guess record for domain of linux-mmc-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) 
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson, Adrian Hunter
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann,
 Bartlomiej Zolnierkiewicz, Paolo Valente, Linus Walleij
Subject: [PATCH 5/5] mmc: block: move multi-ioctl() to use block layer
Date: Wed, 10 May 2017 10:24:18 +0200
Message-Id: <20170510082418.10513-6-linus.walleij@linaro.org>
In-Reply-To: <20170510082418.10513-1-linus.walleij@linaro.org>
References: <20170510082418.10513-1-linus.walleij@linaro.org>

This also switches the multiple-command ioctl() call to issue all ioctl()s
through the block layer instead of going directly to the device.
We extend the passed argument with an argument count and loop over all passed commands in the ioctl() issue function called from the block layer. By doing this we are again loosening the grip on the big host lock, since two calls to mmc_get_card()/mmc_put_card() are removed. Signed-off-by: Linus Walleij --- drivers/mmc/core/block.c | 38 +++++++++++++++++++++++++------------- drivers/mmc/core/queue.h | 3 ++- 2 files changed, 27 insertions(+), 14 deletions(-) -- 2.9.3 -- To unsubscribe from this list: send the line "unsubscribe linux-mmc" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Tested-by: Avri Altman diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 640db4f57a31..152de904d5e4 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -563,6 +563,7 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev, struct mmc_ioc_cmd __user *ic_ptr) { struct mmc_blk_ioc_data *idata; + struct mmc_blk_ioc_data *idatas[1]; struct mmc_blk_data *md; struct mmc_queue *mq; struct mmc_card *card; @@ -600,7 +601,9 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev, req = blk_get_request(mq->queue, idata->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN, __GFP_RECLAIM); - req_to_mq_rq(req)->idata = idata; + idatas[0] = idata; + req_to_mq_rq(req)->idata = idatas; + req_to_mq_rq(req)->ioc_count = 1; blk_execute_rq(mq->queue, NULL, req, 0); ioc_err = req_to_mq_rq(req)->ioc_result; err = mmc_blk_ioctl_copy_to_user(ic_ptr, idata); @@ -622,14 +625,17 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev, static void mmc_blk_ioctl_cmd_issue(struct mmc_queue *mq, struct request *req) { struct mmc_queue_req *mq_rq; - struct mmc_blk_ioc_data *idata; struct mmc_card *card = mq->card; struct mmc_blk_data *md = mq->blkdata; int ioc_err; + int i; mq_rq = req_to_mq_rq(req); - idata = mq_rq->idata; - ioc_err = __mmc_blk_ioctl_cmd(card, md, idata); + for (i = 0; i < mq_rq->ioc_count; i++) { + ioc_err = __mmc_blk_ioctl_cmd(card, md, mq_rq->idata[i]); + if (ioc_err) + break; + } mq_rq->ioc_result = ioc_err; /* Always switch back to main area after RPMB access */ @@ -646,8 +652,10 @@ static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev, struct mmc_ioc_cmd __user *cmds = user->cmds; struct mmc_card *card; struct mmc_blk_data *md; + struct mmc_queue *mq; int i, err = 0, ioc_err = 0; __u64 num_of_cmds; + struct request *req; /* * The caller must have CAP_SYS_RAWIO, and must be calling this on the @@ -689,21 +697,25 @@ static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev, goto cmd_done; } - mmc_get_card(card); - - for (i = 0; i < num_of_cmds && !ioc_err; i++) - ioc_err = __mmc_blk_ioctl_cmd(card, md, idata[i]); - - /* Always switch back to main area after RPMB access */ - if (md->area_type & MMC_BLK_DATA_AREA_RPMB) - mmc_blk_part_switch(card, dev_get_drvdata(&card->dev)); - mmc_put_card(card); + /* + * Dispatch the ioctl()s into the block request queue. + */ + mq = &md->queue; + req = blk_get_request(mq->queue, + idata[0]->ic.write_flag ? 
REQ_OP_DRV_OUT : REQ_OP_DRV_IN, + __GFP_RECLAIM); + req_to_mq_rq(req)->idata = idata; + req_to_mq_rq(req)->ioc_count = num_of_cmds; + blk_execute_rq(mq->queue, NULL, req, 0); + ioc_err = req_to_mq_rq(req)->ioc_result; /* copy to user if data and response */ for (i = 0; i < num_of_cmds && !err; i++) err = mmc_blk_ioctl_copy_to_user(&cmds[i], idata[i]); + blk_put_request(req); + cmd_done: mmc_blk_put(md); cmd_err: diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h index aeb3408dc85e..7015df6681c3 100644 --- a/drivers/mmc/core/queue.h +++ b/drivers/mmc/core/queue.h @@ -42,7 +42,8 @@ struct mmc_queue_req { unsigned int bounce_sg_len; struct mmc_async_req areq; int ioc_result; - struct mmc_blk_ioc_data *idata; + struct mmc_blk_ioc_data **idata; + unsigned int ioc_count; }; struct mmc_queue {
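For context only — not part of these patches, and the device node and command
setup below are placeholders — userspace reaches this multi-command path
through the existing MMC_IOC_MULTI_CMD ioctl (which requires CAP_SYS_RAWIO),
roughly like this:

/* Userspace sketch: send two commands with one MMC_IOC_MULTI_CMD ioctl */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/mmc/ioctl.h>

static int send_two_cmds(void)
{
	struct mmc_ioc_multi_cmd *multi;
	int fd, ret;

	fd = open("/dev/mmcblk0", O_RDWR);	/* placeholder device node */
	if (fd < 0)
		return -1;

	multi = calloc(1, sizeof(*multi) + 2 * sizeof(struct mmc_ioc_cmd));
	if (!multi) {
		close(fd);
		return -1;
	}
	multi->num_of_cmds = 2;
	/* multi->cmds[0] and multi->cmds[1] would be filled in here */

	ret = ioctl(fd, MMC_IOC_MULTI_CMD, multi);

	free(multi);
	close(fd);
	return ret;
}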