From patchwork Fri May 29 03:59:01 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Serge Semin
X-Patchwork-Id: 214506
From: Serge Semin
To: Mark Brown
CC: Serge Semin, Georgy Vlasov, Ramil Zaripov, Alexey Malahov,
    Thomas Bogendoerfer, Arnd Bergmann, Feng Tang, Andy Shevchenko,
    Rob Herring
Subject: [PATCH v5 03/16] spi: dw: Locally wait for the DMA transactions completion
Date: Fri, 29 May 2020 06:59:01 +0300
Message-ID: <20200529035915.20790-4-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200529035915.20790-1-Sergey.Semin@baikalelectronics.ru>
References: <20200529035915.20790-1-Sergey.Semin@baikalelectronics.ru>
X-Mailing-List: linux-spi@vger.kernel.org

Even if the DMA transactions are finished, it doesn't mean that the SPI
transfers are also completed. This specifically concerns Tx-only SPI
transfers, since data may still be left in the SPI Tx FIFO after the DMA
engine notifies that the Tx DMA procedure is done. To completely fix the
problem, the driver first has to wait for the DMA transactions to
complete and then for the corresponding SPI operations to finish. This
commit implements the former part of the solution. Note we can't simply
move the SPI-operations wait procedure into the DMA completion
callbacks, since those callbacks may be executed in tasklet context (and
will be in the case of the DW DMA controller). With a slow SPI bus that
could cause a significant system performance drop.
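For reference, here is a minimal sketch (not part of this patch) of the
completion-based wait pattern the change switches to: the DMA callback only
signals a struct completion, while the transfer path waits on it with a
timeout. The context structure and function names (demo_ctx, demo_init,
demo_dma_done, demo_transfer) are made up for illustration; the real driver
uses dws->dma_completion and the routines shown in the diff below.

/*
 * Illustrative sketch only (not part of this patch): the generic
 * completion-based wait pattern adopted here. All names are made up.
 */
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

struct demo_ctx {
	struct completion dma_completion;
};

/* Done once, when the DMA channels are requested. */
static void demo_init(struct demo_ctx *ctx)
{
	init_completion(&ctx->dma_completion);
}

/*
 * DMA engine callback, possibly running in tasklet context: only signal
 * the completion here and do the (potentially long) waiting in the
 * transfer path instead.
 */
static void demo_dma_done(void *arg)
{
	struct demo_ctx *ctx = arg;

	complete(&ctx->dma_completion);
}

/* Transfer path, running in process context. */
static int demo_transfer(struct demo_ctx *ctx, unsigned int timeout_ms)
{
	reinit_completion(&ctx->dma_completion);

	/* ... submit the DMA descriptors and issue them here ... */

	if (!wait_for_completion_timeout(&ctx->dma_completion,
					 msecs_to_jiffies(timeout_ms)))
		return -ETIMEDOUT;

	return 0;
}

Keeping wait_for_completion_timeout() in the transfer path rather than in the
callback is exactly the concern raised above: the callback may run in atomic
context, where a potentially long wait is not acceptable.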
Signed-off-by: Serge Semin
Cc: Georgy Vlasov
Cc: Ramil Zaripov
Cc: Alexey Malahov
Cc: Thomas Bogendoerfer
Cc: Arnd Bergmann
Cc: Feng Tang
Cc: Andy Shevchenko
Cc: Rob Herring
Cc: linux-mips@vger.kernel.org
Cc: devicetree@vger.kernel.org
---
 drivers/spi/spi-dw-mid.c | 44 ++++++++++++++++++++++++++++++++++++----
 drivers/spi/spi-dw.h     |  2 ++
 2 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/drivers/spi/spi-dw-mid.c b/drivers/spi/spi-dw-mid.c
index 7ff1acaa55f8..355b641c4483 100644
--- a/drivers/spi/spi-dw-mid.c
+++ b/drivers/spi/spi-dw-mid.c
@@ -11,9 +11,11 @@
 #include "spi-dw.h"
 
 #ifdef CONFIG_SPI_DW_MID_DMA
+#include <linux/completion.h>
 #include <linux/dma-mapping.h>
 #include <linux/dmaengine.h>
 #include <linux/irqreturn.h>
+#include <linux/jiffies.h>
 #include <linux/pci.h>
 #include <linux/platform_data/dma-dw.h>
@@ -66,6 +68,8 @@ static int mid_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
 	dws->master->dma_rx = dws->rxchan;
 	dws->master->dma_tx = dws->txchan;
 
+	init_completion(&dws->dma_completion);
+
 	return 0;
 
 free_rxchan:
@@ -91,6 +95,8 @@ static int mid_spi_dma_init_generic(struct device *dev, struct dw_spi *dws)
 	dws->master->dma_rx = dws->rxchan;
 	dws->master->dma_tx = dws->txchan;
 
+	init_completion(&dws->dma_completion);
+
 	return 0;
 }
 
@@ -121,7 +127,7 @@ static irqreturn_t dma_transfer(struct dw_spi *dws)
 	dev_err(&dws->master->dev, "%s: FIFO overrun/underrun\n", __func__);
 	dws->master->cur_msg->status = -EIO;
-	spi_finalize_current_transfer(dws->master);
+	complete(&dws->dma_completion);
 	return IRQ_HANDLED;
 }
 
@@ -142,6 +148,29 @@ static enum dma_slave_buswidth convert_dma_width(u8 n_bytes) {
 	return DMA_SLAVE_BUSWIDTH_UNDEFINED;
 }
 
+static int dw_spi_dma_wait(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	unsigned long long ms;
+
+	ms = xfer->len * MSEC_PER_SEC * BITS_PER_BYTE;
+	do_div(ms, xfer->effective_speed_hz);
+	ms += ms + 200;
+
+	if (ms > UINT_MAX)
+		ms = UINT_MAX;
+
+	ms = wait_for_completion_timeout(&dws->dma_completion,
+					 msecs_to_jiffies(ms));
+
+	if (ms == 0) {
+		dev_err(&dws->master->cur_msg->spi->dev,
+			"DMA transaction timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
 /*
  * dws->dma_chan_busy is set before the dma transfer starts, callback for tx
  * channel will clear a corresponding bit.
@@ -155,7 +184,7 @@ static void dw_spi_dma_tx_done(void *arg)
 		return;
 
 	dw_writel(dws, DW_SPI_DMACR, 0);
-	spi_finalize_current_transfer(dws->master);
+	complete(&dws->dma_completion);
 }
 
 static struct dma_async_tx_descriptor *dw_spi_dma_prepare_tx(struct dw_spi *dws,
@@ -204,7 +233,7 @@ static void dw_spi_dma_rx_done(void *arg)
 		return;
 
 	dw_writel(dws, DW_SPI_DMACR, 0);
-	spi_finalize_current_transfer(dws->master);
+	complete(&dws->dma_completion);
 }
 
 static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
@@ -260,6 +289,8 @@ static int mid_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 	/* Set the interrupt mask */
 	spi_umask_intr(dws, imr);
 
+	reinit_completion(&dws->dma_completion);
+
 	dws->transfer_handler = dma_transfer;
 
 	return 0;
@@ -268,6 +299,7 @@ static int mid_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 static int mid_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *txdesc, *rxdesc;
+	int ret;
 
 	/* Prepare the TX dma transfer */
 	txdesc = dw_spi_dma_prepare_tx(dws, xfer);
@@ -288,7 +320,11 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 		dma_async_issue_pending(dws->txchan);
 	}
 
-	return 1;
+	ret = dw_spi_dma_wait(dws, xfer);
+	if (ret)
+		return ret;
+
+	return 0;
 }
 
 static void mid_spi_dma_stop(struct dw_spi *dws)
diff --git a/drivers/spi/spi-dw.h b/drivers/spi/spi-dw.h
index 79782e93eb12..9585d0c83a6d 100644
--- a/drivers/spi/spi-dw.h
+++ b/drivers/spi/spi-dw.h
@@ -2,6 +2,7 @@
 #ifndef DW_SPI_HEADER_H
 #define DW_SPI_HEADER_H
 
+#include <linux/completion.h>
 #include <linux/irqreturn.h>
 #include <linux/io.h>
 #include <linux/scatterlist.h>
@@ -145,6 +146,7 @@ struct dw_spi {
 	unsigned long		dma_chan_busy;
 	dma_addr_t		dma_addr; /* phy address of the Data register */
 	const struct dw_spi_dma_ops *dma_ops;
+	struct completion	dma_completion;
 
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *debugfs;
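A short note on the timeout heuristic in dw_spi_dma_wait() above: the bound
is twice the theoretical transfer time (len * 8 bits at effective_speed_hz)
plus 200 ms of slack, mirroring the tolerance the SPI core applies in its own
transfer wait. A quick userspace re-check of that arithmetic (plain C, not
driver code; the transfer sizes and clock rates below are arbitrary examples):

/*
 * Plain userspace C re-check of the timeout bound computed by
 * dw_spi_dma_wait(): twice the theoretical transfer time plus 200 ms.
 */
#include <stdio.h>

static unsigned long long dma_wait_timeout_ms(unsigned long long len_bytes,
					      unsigned long long speed_hz)
{
	unsigned long long ms;

	ms = len_bytes * 1000 * 8;	/* len * MSEC_PER_SEC * BITS_PER_BYTE */
	ms /= speed_hz;			/* theoretical transfer time, ms */
	ms += ms + 200;			/* double it and add 200 ms tolerance */

	return ms;
}

int main(void)
{
	/* 1 KiB Tx-only transfer at 1 MHz: 8 ms -> 8 * 2 + 200 = 216 ms */
	printf("%llu ms\n", dma_wait_timeout_ms(1024, 1000000));
	/* 64 bytes at 10 kHz: 51 ms -> 51 * 2 + 200 = 302 ms */
	printf("%llu ms\n", dma_wait_timeout_ms(64, 10000));

	return 0;
}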