From patchwork Fri Jan 5 14:15:35 2018
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 123525
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson, Benjamin Beckmeyer, Adrian Hunter
Cc: Linus Walleij, Pierre Ossman
Subject: [PATCH v2] RFT: mmc: sdhci: Implement an SDHCI-specific bounce buffer
Date: Fri, 5 Jan 2018 15:15:35 +0100
Message-Id: <20180105141535.17614-1-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.14.3
X-Mailing-List: linux-mmc@vger.kernel.org

The bounce buffer is gone from the MMC core, and now we have found out
that there are some (crippled) i.MX boards out there that have broken
ADMA (cannot do scatter-gather) and broken PIO, so they must use SDMA.
SDMA restricts the number of segments to one, so each segment gets
turned into a single request that ping-pongs to the block layer before
the next request/segment is issued.
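As an illustration of what that costs, and of what gathering buys back, here
is a minimal userspace sketch; the fragment sizes and count are made up and
nothing in it is taken from the driver:

/*
 * Illustrative userspace sketch only, not kernel code. It mimics what
 * the patch does with sg_copy_to_buffer(): gather a scattered payload
 * into one contiguous buffer so a single-segment (SDMA) host can move
 * it in one transfer instead of one transfer per fragment.
 */
#include <stdio.h>
#include <string.h>

struct fragment {
	const char *data;
	size_t len;
};

int main(void)
{
	/* A hypothetical write request scattered over three fragments */
	struct fragment frags[] = {
		{ "sector 0 ", 9 }, { "sector 1 ", 9 }, { "sector 2", 8 },
	};
	size_t nfrags = sizeof(frags) / sizeof(frags[0]);
	char bounce[64] = "";
	size_t off = 0;

	/* With max_segs == 1, each fragment would be its own round trip */
	printf("unbounced: %zu separate transfers\n", nfrags);

	/* Gathered into a contiguous bounce buffer: one transfer */
	for (size_t i = 0; i < nfrags; i++) {
		memcpy(bounce + off, frags[i].data, frags[i].len);
		off += frags[i].len;
	}
	printf("bounced:   1 transfer of %zu bytes: \"%s\"\n", off, bounce);

	return 0;
}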
These devices can see major benefits from a bounce buffer, as a
fragmented read or write buffer may come in even though the sectors we
will be reading or writing on the MMC/SD card are consecutive.

This patch accumulates those fragmented scatterlists in a physically
contiguous bounce buffer so that we can issue bigger DMA data chunks
to/from the card.

When tested with this PCI-integrated host (1217:8221) that only
supports SDMA:

0b:00.0 SD Host controller: O2 Micro, Inc. OZ600FJ0/OZ900FJ0/OZ600FJS
        SD/MMC Card Reader Controller (rev 05)

this patch gave ~1 Mbyte/s improved throughput on large reads and
writes, measured with iozone, compared to the unpatched kernel.

It is possible to achieve even better speed-ups by adding a second
bounce buffer so that the ->pre_req() hook in the driver can do the
buffer copying and DMA mapping/flushing while the previous request is
in flight. We save this optimization for later.

Cc: Benjamin Beckmeyer
Cc: Pierre Ossman
Signed-off-by: Linus Walleij
---
ChangeLog v1->v2:
- Skip the remapping and fiddling with the buffer; instead use
  dma_alloc_coherent() and a simple, coherent bounce buffer.
- Couple kernel messages to ->parent of the mmc_host, as they relate
  to the hardware characteristics.
---
 drivers/mmc/host/sdhci.c | 94 +++++++++++++++++++++++++++++++++++++++++++-----
 drivers/mmc/host/sdhci.h |  3 ++
 2 files changed, 89 insertions(+), 8 deletions(-)
-- 
2.14.3
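A small aside before the diff: the bounce buffer sizing in
sdhci_setup_host() below simply reuses mmc->max_req_size and derives the
advertised segment count in 512-byte blocks. A userspace sketch of that
arithmetic, assuming the 512K request size mentioned in the TODO comment
(an assumption, not read from hardware):

/*
 * Illustrative userspace sketch only: works through the sizing
 * arithmetic used in sdhci_setup_host(). The 512 KiB request size is
 * an assumed example value.
 */
#include <stdio.h>

int main(void)
{
	unsigned int max_req_size = 512 * 1024;       /* assumed example */
	unsigned int max_seg_size = max_req_size;     /* bounce buffer size */
	unsigned int max_blocks = max_seg_size / 512; /* advertised max_segs */

	printf("bounce buffer: %u bytes, presented as up to %u 512-byte segments\n",
	       max_seg_size, max_blocks);
	return 0;
}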
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index e9290a3439d5..97d4c6fc1159 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -502,8 +502,20 @@ static int sdhci_pre_dma_transfer(struct sdhci_host *host,
 	if (data->host_cookie == COOKIE_PRE_MAPPED)
 		return data->sg_count;
 
-	sg_count = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
-			      mmc_get_dma_dir(data));
+	/* Bounce write requests to the bounce buffer */
+	if (host->bounce_buffer) {
+		if (mmc_get_dma_dir(data) == DMA_TO_DEVICE) {
+			/* Copy the data to the bounce buffer */
+			sg_copy_to_buffer(data->sg, data->sg_len,
+					  host->bounce_buffer, host->bounce_buffer_size);
+		}
+		/* Just a dummy value */
+		sg_count = 1;
+	} else {
+		/* Just access the data directly from memory */
+		sg_count = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+				      mmc_get_dma_dir(data));
+	}
 
 	if (sg_count == 0)
 		return -ENOSPC;
@@ -858,8 +870,13 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
 					     SDHCI_ADMA_ADDRESS_HI);
 		} else {
 			WARN_ON(sg_cnt != 1);
-			sdhci_writel(host, sg_dma_address(data->sg),
-				SDHCI_DMA_ADDRESS);
+			/* Bounce buffer goes to work */
+			if (host->bounce_buffer)
+				sdhci_writel(host, host->bounce_addr,
+					     SDHCI_DMA_ADDRESS);
+			else
+				sdhci_writel(host, sg_dma_address(data->sg),
+					     SDHCI_DMA_ADDRESS);
 		}
 	}
 
@@ -2248,7 +2265,12 @@ static void sdhci_pre_req(struct mmc_host *mmc, struct mmc_request *mrq)
 
 	mrq->data->host_cookie = COOKIE_UNMAPPED;
 
-	if (host->flags & SDHCI_REQ_USE_DMA)
+	/*
+	 * No pre-mapping in the pre hook if we're using the bounce buffer,
+	 * for that we would need two bounce buffers since one buffer is
+	 * in flight when this is getting called.
+	 */
+	if (host->flags & SDHCI_REQ_USE_DMA && !host->bounce_buffer)
 		sdhci_pre_dma_transfer(host, mrq->data, COOKIE_PRE_MAPPED);
 }
 
@@ -2352,8 +2374,19 @@ static bool sdhci_request_done(struct sdhci_host *host)
 		struct mmc_data *data = mrq->data;
 
 		if (data && data->host_cookie == COOKIE_MAPPED) {
-			dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
-				     mmc_get_dma_dir(data));
+			if (host->bounce_buffer) {
+				/* On reads, copy the bounced data into the sglist */
+				if (mmc_get_dma_dir(data) == DMA_FROM_DEVICE) {
+					sg_copy_from_buffer(data->sg, data->sg_len,
+							    host->bounce_buffer,
+							    host->bounce_buffer_size);
+				}
+			} else {
+				/* Unmap the raw data */
+				dma_unmap_sg(mmc_dev(host->mmc), data->sg,
+					     data->sg_len,
+					     mmc_get_dma_dir(data));
+			}
 			data->host_cookie = COOKIE_UNMAPPED;
 		}
 	}
@@ -2636,7 +2669,12 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
 	 */
 	if (intmask & SDHCI_INT_DMA_END) {
 		u32 dmastart, dmanow;
-		dmastart = sg_dma_address(host->data->sg);
+
+		if (host->bounce_buffer)
+			dmastart = host->bounce_addr;
+		else
+			dmastart = sg_dma_address(host->data->sg);
+
 		dmanow = dmastart + host->data->bytes_xfered;
 		/*
 		 * Force update to the next DMA block boundary.
@@ -3713,6 +3751,43 @@ int sdhci_setup_host(struct sdhci_host *host)
 	 */
 	mmc->max_blk_count = (host->quirks & SDHCI_QUIRK_NO_MULTIBLOCK) ?
 		1 : 65535;
 
+	if (mmc->max_segs == 1) {
+		unsigned int max_blocks;
+		unsigned int max_seg_size;
+
+		max_seg_size = mmc->max_req_size;
+		max_blocks = max_seg_size / 512;
+		dev_info(mmc->parent, "host only supports SDMA, activate bounce buffer\n");
+
+		/*
+		 * When we just support one segment, we can get significant speedups
+		 * by the help of a bounce buffer to group scattered reads/writes
+		 * together.
+		 *
+		 * TODO: is this too big? Stealing too much memory? The old bounce
+		 * buffer is max 64K. This should be the 512K that SDMA can handle
+		 * if I read the code above right. Anyways let's try this.
+		 * FIXME: use devm_*
+		 */
+		host->bounce_buffer = dma_alloc_coherent(mmc->parent, max_seg_size,
+							 &host->bounce_addr, GFP_KERNEL);
+		if (!host->bounce_buffer) {
+			dev_err(mmc->parent,
+				"failed to allocate %u bytes for bounce buffer\n",
+				max_seg_size);
+			return -ENOMEM;
+		}
+		host->bounce_buffer_size = max_seg_size;
+
+		/* Lie about this since we're bouncing */
+		mmc->max_segs = max_blocks;
+		mmc->max_seg_size = max_seg_size;
+
+		dev_info(mmc->parent,
+			 "bounce buffer: bounce up to %u segments into one, max segment size %u bytes\n",
+			 max_blocks, max_seg_size);
+	}
+
 	return 0;
 
 unreg:
@@ -3743,6 +3818,9 @@ void sdhci_cleanup_host(struct sdhci_host *host)
 				  host->align_addr);
 	host->adma_table = NULL;
 	host->align_buffer = NULL;
+	if (host->bounce_buffer)
+		dma_free_coherent(mmc->parent, host->bounce_buffer_size,
+				  host->bounce_buffer, host->bounce_addr);
 }
 EXPORT_SYMBOL_GPL(sdhci_cleanup_host);
 
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 54bc444c317f..865e09618d22 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -440,6 +440,9 @@ struct sdhci_host {
 
 	int irq;		/* Device IRQ */
 	void __iomem *ioaddr;	/* Mapped address */
+	char *bounce_buffer;	/* For packing SDMA reads/writes */
+	dma_addr_t bounce_addr;
+	size_t bounce_buffer_size;
 
 	const struct sdhci_ops *ops;	/* Low level hw interface */
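The ~1 Mbyte/s improvement quoted above was measured with iozone. For a
rough sanity check without iozone, a sequential-read timing loop like the
sketch below can be used; the device node (/dev/mmcblk0), chunk size and
total size are illustrative assumptions, not taken from the patch:

/*
 * Rough userspace throughput check, loosely mirroring the iozone test
 * mentioned in the commit message. Adjust the device node and sizes to
 * the board under test.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	const char *dev = "/dev/mmcblk0";	/* assumed card device */
	const size_t chunk = 1 << 20;		/* 1 MiB per read() */
	const size_t total = 256 << 20;		/* read 256 MiB in total */
	struct timespec t0, t1;
	size_t done = 0;
	void *buf;
	int fd;

	/* O_DIRECT bypasses the page cache so the card is actually hit */
	if (posix_memalign(&buf, 4096, chunk))
		return 1;
	fd = open(dev, O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	while (done < total) {
		ssize_t n = read(fd, buf, chunk);
		if (n <= 0)
			break;
		done += (size_t)n;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("read %zu bytes in %.2f s (%.2f Mbyte/s)\n",
	       done, secs, done / secs / 1e6);

	close(fd);
	free(buf);
	return 0;
}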