From patchwork Tue Sep 27 11:21:14 2022
X-Patchwork-Submitter: Vincent Whitchurch
X-Patchwork-Id: 610514
X-Mailing-List: linux-samsung-soc@vger.kernel.org
From: Vincent Whitchurch
Subject: [PATCH v2 1/4] spi: Save current RX and TX DMA devices
Date: Tue, 27 Sep 2022 13:21:14 +0200
Message-ID: <20220927112117.77599-2-vincent.whitchurch@axis.com>
In-Reply-To: <20220927112117.77599-1-vincent.whitchurch@axis.com>
References: <20220927112117.77599-1-vincent.whitchurch@axis.com>

Save the current RX and TX DMA devices to avoid having to duplicate the
logic used to pick them, since we will need access to them in some more
functions to fix a bug in the cache handling.
Signed-off-by: Vincent Whitchurch
---
 drivers/spi/spi.c       | 19 ++++---------------
 include/linux/spi/spi.h |  4 ++++
 2 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index ad254b94308e..dd885df23870 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -1147,6 +1147,8 @@ static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
 		}
 	}
 
+	ctlr->cur_rx_dma_dev = rx_dev;
+	ctlr->cur_tx_dma_dev = tx_dev;
 	ctlr->cur_msg_mapped = true;
 
 	return 0;
@@ -1154,26 +1156,13 @@ static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
 
 static int __spi_unmap_msg(struct spi_controller *ctlr, struct spi_message *msg)
 {
+	struct device *rx_dev = ctlr->cur_rx_dma_dev;
+	struct device *tx_dev = ctlr->cur_tx_dma_dev;
 	struct spi_transfer *xfer;
-	struct device *tx_dev, *rx_dev;
 
 	if (!ctlr->cur_msg_mapped || !ctlr->can_dma)
 		return 0;
 
-	if (ctlr->dma_tx)
-		tx_dev = ctlr->dma_tx->device->dev;
-	else if (ctlr->dma_map_dev)
-		tx_dev = ctlr->dma_map_dev;
-	else
-		tx_dev = ctlr->dev.parent;
-
-	if (ctlr->dma_rx)
-		rx_dev = ctlr->dma_rx->device->dev;
-	else if (ctlr->dma_map_dev)
-		rx_dev = ctlr->dma_map_dev;
-	else
-		rx_dev = ctlr->dev.parent;
-
 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
 		if (!ctlr->can_dma(ctlr, msg->spi, xfer))
 			continue;
diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
index 6ea889df0813..fbf8c0d95968 100644
--- a/include/linux/spi/spi.h
+++ b/include/linux/spi/spi.h
@@ -378,6 +378,8 @@ extern struct spi_device *spi_new_ancillary_device(struct spi_device *spi, u8 ch
  * @cleanup: frees controller-specific state
  * @can_dma: determine whether this controller supports DMA
  * @dma_map_dev: device which can be used for DMA mapping
+ * @cur_rx_dma_dev: device which is currently used for RX DMA mapping
+ * @cur_tx_dma_dev: device which is currently used for TX DMA mapping
  * @queued: whether this controller is providing an internal message queue
  * @kworker: pointer to thread struct for message pump
  * @pump_messages: work struct for scheduling work to the message pump
@@ -610,6 +612,8 @@ struct spi_controller {
 			struct spi_device *spi,
 			struct spi_transfer *xfer);
 	struct device *dma_map_dev;
+	struct device *cur_rx_dma_dev;
+	struct device *cur_tx_dma_dev;
 
 	/*
 	 * These hooks are for drivers that want to use the generic
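
For reference, the selection order that __spi_map_msg() uses, and that this
patch stops open-coding a second time in __spi_unmap_msg(), can be sketched
as a small helper. spi_pick_dma_dev() is a hypothetical name for
illustration, not part of the kernel API; the precedence matches the hunk
removed above:

	/* Sketch: which struct device the SPI core maps DMA buffers
	 * against.  A dedicated DMA channel wins, then an explicitly
	 * configured mapping device, then the controller's parent.
	 */
	static struct device *spi_pick_dma_dev(struct spi_controller *ctlr,
					       struct dma_chan *chan)
	{
		if (chan)			/* e.g. ctlr->dma_tx / ctlr->dma_rx */
			return chan->device->dev;
		if (ctlr->dma_map_dev)		/* driver-provided mapping device */
			return ctlr->dma_map_dev;
		return ctlr->dev.parent;	/* fall back to the controller */
	}

After this patch the result of that choice is cached in
ctlr->cur_rx_dma_dev/cur_tx_dma_dev at map time and simply reused at unmap
time (and, in the next patch, at sync time).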
From patchwork Tue Sep 27 11:21:15 2022
X-Patchwork-Submitter: Vincent Whitchurch
X-Patchwork-Id: 610513
X-Mailing-List: linux-samsung-soc@vger.kernel.org
From: Vincent Whitchurch
Subject: [PATCH v2 2/4] spi: Fix cache corruption due to DMA/PIO overlap
Date: Tue, 27 Sep 2022 13:21:15 +0200
Message-ID: <20220927112117.77599-3-vincent.whitchurch@axis.com>
In-Reply-To: <20220927112117.77599-1-vincent.whitchurch@axis.com>
References: <20220927112117.77599-1-vincent.whitchurch@axis.com>

The SPI core DMA mapping support performs cache management once for the
entire message and not between transfers.  This leads to cache
corruption if a message has two or more RX transfers which both target
the same cache line, and the controller driver decides to handle one
transfer using DMA and the other using PIO (for example, because one is
much larger than the other).

Fix it by syncing before/after the actual transfers.  This also means
that we can skip the sync during the map/unmap of the message.

Fixes: 99adef310f68 ("spi: Provide core support for DMA mapping transfers")
Signed-off-by: Vincent Whitchurch
---
 drivers/spi/spi.c | 109 +++++++++++++++++++++++++++++++++++++---------
 1 file changed, 88 insertions(+), 21 deletions(-)
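
To make the failure mode concrete, here is a hypothetical message that
could trigger the corruption (the struct, buffer names, and sizes are
invented for illustration; spi_message_init_with_transfers() is the real
core API):

	/* Both RX buffers come from one kmalloc() allocation, so the
	 * tail of 'big' and all of 'small' can sit in the same cache
	 * line.  If the controller driver uses DMA for xfers[0] and
	 * PIO for xfers[1], the cache maintenance done when the whole
	 * message is unmapped can discard the PIO-written data.
	 */
	struct readout {
		u8 big[1000];	/* large: driver may choose DMA */
		u8 small[4];	/* small: driver may choose PIO */
	};
	struct readout *r = kmalloc(sizeof(*r), GFP_KERNEL);
	struct spi_transfer xfers[2] = {
		{ .rx_buf = r->big,   .len = sizeof(r->big)   },
		{ .rx_buf = r->small, .len = sizeof(r->small) },
	};
	struct spi_message msg;

	spi_message_init_with_transfers(&msg, xfers, ARRAY_SIZE(xfers));
	spi_sync(spi, &msg);	/* 'spi' is an assumed struct spi_device * */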
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index dd885df23870..f41a8c2752b8 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -1010,9 +1010,9 @@ static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
 }
 
 #ifdef CONFIG_HAS_DMA
-int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
-		struct sg_table *sgt, void *buf, size_t len,
-		enum dma_data_direction dir)
+static int spi_map_buf_attrs(struct spi_controller *ctlr, struct device *dev,
+			     struct sg_table *sgt, void *buf, size_t len,
+			     enum dma_data_direction dir, unsigned long attrs)
 {
 	const bool vmalloced_buf = is_vmalloc_addr(buf);
 	unsigned int max_seg_size = dma_get_max_seg_size(dev);
@@ -1078,28 +1078,39 @@ int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
 		sg = sg_next(sg);
 	}
 
-	ret = dma_map_sg(dev, sgt->sgl, sgt->nents, dir);
-	if (!ret)
-		ret = -ENOMEM;
+	ret = dma_map_sgtable(dev, sgt, dir, attrs);
 	if (ret < 0) {
 		sg_free_table(sgt);
 		return ret;
 	}
 
-	sgt->nents = ret;
-
 	return 0;
 }
 
-void spi_unmap_buf(struct spi_controller *ctlr, struct device *dev,
-		   struct sg_table *sgt, enum dma_data_direction dir)
+int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
+		struct sg_table *sgt, void *buf, size_t len,
+		enum dma_data_direction dir)
+{
+	return spi_map_buf_attrs(ctlr, dev, sgt, buf, len, dir, 0);
+}
+
+static void spi_unmap_buf_attrs(struct spi_controller *ctlr,
+				struct device *dev, struct sg_table *sgt,
+				enum dma_data_direction dir,
+				unsigned long attrs)
 {
 	if (sgt->orig_nents) {
-		dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
+		dma_unmap_sgtable(dev, sgt, dir, attrs);
 		sg_free_table(sgt);
 	}
 }
 
+void spi_unmap_buf(struct spi_controller *ctlr, struct device *dev,
+		   struct sg_table *sgt, enum dma_data_direction dir)
+{
+	spi_unmap_buf_attrs(ctlr, dev, sgt, dir, 0);
+}
+
 static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
 {
 	struct device *tx_dev, *rx_dev;
@@ -1124,24 +1135,30 @@ static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
 		rx_dev = ctlr->dev.parent;
 
 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+		/* The sync is done before each transfer. */
+		unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
+
 		if (!ctlr->can_dma(ctlr, msg->spi, xfer))
 			continue;
 
 		if (xfer->tx_buf != NULL) {
-			ret = spi_map_buf(ctlr, tx_dev, &xfer->tx_sg,
-					  (void *)xfer->tx_buf, xfer->len,
-					  DMA_TO_DEVICE);
+			ret = spi_map_buf_attrs(ctlr, tx_dev, &xfer->tx_sg,
+						(void *)xfer->tx_buf,
+						xfer->len, DMA_TO_DEVICE,
+						attrs);
 			if (ret != 0)
 				return ret;
 		}
 
 		if (xfer->rx_buf != NULL) {
-			ret = spi_map_buf(ctlr, rx_dev, &xfer->rx_sg,
-					  xfer->rx_buf, xfer->len,
-					  DMA_FROM_DEVICE);
+			ret = spi_map_buf_attrs(ctlr, rx_dev, &xfer->rx_sg,
+						xfer->rx_buf, xfer->len,
+						DMA_FROM_DEVICE, attrs);
 			if (ret != 0) {
-				spi_unmap_buf(ctlr, tx_dev, &xfer->tx_sg,
-					      DMA_TO_DEVICE);
+				spi_unmap_buf_attrs(ctlr, tx_dev,
+						    &xfer->tx_sg, DMA_TO_DEVICE,
+						    attrs);
+
 				return ret;
 			}
 		}
@@ -1164,17 +1181,52 @@ static int __spi_unmap_msg(struct spi_controller *ctlr, struct spi_message *msg)
 		return 0;
 
 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+		/* The sync has already been done after each transfer. */
+		unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
+
 		if (!ctlr->can_dma(ctlr, msg->spi, xfer))
 			continue;
 
-		spi_unmap_buf(ctlr, rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
-		spi_unmap_buf(ctlr, tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+		spi_unmap_buf_attrs(ctlr, rx_dev, &xfer->rx_sg,
+				    DMA_FROM_DEVICE, attrs);
+		spi_unmap_buf_attrs(ctlr, tx_dev, &xfer->tx_sg,
+				    DMA_TO_DEVICE, attrs);
 	}
 
 	ctlr->cur_msg_mapped = false;
 
 	return 0;
 }
+
+static void spi_dma_sync_for_device(struct spi_controller *ctlr,
+				    struct spi_transfer *xfer)
+{
+	struct device *rx_dev = ctlr->cur_rx_dma_dev;
+	struct device *tx_dev = ctlr->cur_tx_dma_dev;
+
+	if (!ctlr->cur_msg_mapped)
+		return;
+
+	if (xfer->tx_sg.orig_nents)
+		dma_sync_sgtable_for_device(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+	if (xfer->rx_sg.orig_nents)
+		dma_sync_sgtable_for_device(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
+}
+
+static void spi_dma_sync_for_cpu(struct spi_controller *ctlr,
+				 struct spi_transfer *xfer)
+{
+	struct device *rx_dev = ctlr->cur_rx_dma_dev;
+	struct device *tx_dev = ctlr->cur_tx_dma_dev;
+
+	if (!ctlr->cur_msg_mapped)
+		return;
+
+	if (xfer->rx_sg.orig_nents)
+		dma_sync_sgtable_for_cpu(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
+	if (xfer->tx_sg.orig_nents)
+		dma_sync_sgtable_for_cpu(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+}
 #else /* !CONFIG_HAS_DMA */
 static inline int __spi_map_msg(struct spi_controller *ctlr,
 				struct spi_message *msg)
@@ -1187,6 +1239,16 @@ static inline int __spi_unmap_msg(struct spi_controller *ctlr,
 {
 	return 0;
 }
+
+static void spi_dma_sync_for_device(struct spi_controller *ctrl,
+				    struct spi_transfer *xfer)
+{
+}
+
+static void spi_dma_sync_for_cpu(struct spi_controller *ctrl,
+				 struct spi_transfer *xfer)
+{
+}
 #endif /* !CONFIG_HAS_DMA */
 
 static inline int spi_unmap_msg(struct spi_controller *ctlr,
@@ -1445,8 +1507,11 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
 			reinit_completion(&ctlr->xfer_completion);
 
 fallback_pio:
+			spi_dma_sync_for_device(ctlr, xfer);
 			ret = ctlr->transfer_one(ctlr, msg->spi, xfer);
 			if (ret < 0) {
+				spi_dma_sync_for_cpu(ctlr, xfer);
+
 				if (ctlr->cur_msg_mapped &&
 				   (xfer->error & SPI_TRANS_FAIL_NO_START)) {
 					__spi_unmap_msg(ctlr, msg);
@@ -1469,6 +1534,8 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
 				if (ret < 0)
 					msg->status = ret;
 			}
+
+			spi_dma_sync_for_cpu(ctlr, xfer);
 		} else {
 			if (xfer->len)
 				dev_err(&msg->spi->dev,
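
The underlying DMA-API pattern used above is the standard one: map and
unmap with DMA_ATTR_SKIP_CPU_SYNC, and bracket each individual transfer
with explicit syncs.  A condensed sketch with a single mapping, assuming
the error paths and SPI plumbing are stripped away (example_dma_transfer()
is an invented name; all the dma_* calls are the real DMA-API):

	/* Condensed sketch (not the actual spi.c code): cache
	 * maintenance is skipped at map/unmap time and done
	 * explicitly around the transfer, so PIO performed between
	 * DMA transfers can no longer be undone by a stale
	 * invalidate when the whole message is unmapped.
	 */
	static int example_dma_transfer(struct device *dev, struct sg_table *sgt,
					enum dma_data_direction dir)
	{
		int ret = dma_map_sgtable(dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC);
		if (ret)
			return ret;

		dma_sync_sgtable_for_device(dev, sgt, dir);	/* before the transfer */
		/* ... start the DMA transfer and wait for completion ... */
		dma_sync_sgtable_for_cpu(dev, sgt, dir);	/* after the transfer */

		dma_unmap_sgtable(dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC);
		return 0;
	}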
From patchwork Tue Sep 27 11:21:16 2022
X-Patchwork-Submitter: Vincent Whitchurch
X-Patchwork-Id: 609974
X-Mailing-List: linux-samsung-soc@vger.kernel.org
From: Vincent Whitchurch
Subject: [PATCH v2 3/4] spi: Split transfers larger than max size
Date: Tue, 27 Sep 2022 13:21:16 +0200
Message-ID: <20220927112117.77599-4-vincent.whitchurch@axis.com>
In-Reply-To: <20220927112117.77599-1-vincent.whitchurch@axis.com>
References: <20220927112117.77599-1-vincent.whitchurch@axis.com>

A couple of drivers call spi_split_transfers_maxsize() from their
->prepare_message() callbacks to split transfers which are too big for
them to handle.  Add support in the core to do this based on
->max_transfer_size(), to avoid duplicating that logic in every driver
which needs it.

Signed-off-by: Vincent Whitchurch
---
 drivers/spi/spi.c | 9 +++++++++
 1 file changed, 9 insertions(+)
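
For context, the driver-side pattern that this makes redundant looks
roughly like the following (a sketch, not taken from any specific driver;
mydrv_prepare_message() and MYDRV_MAX_XFER_SIZE are invented names):

	/* Sketch of what affected drivers do today: split oversized
	 * transfers from the ->prepare_message() callback.  With this
	 * patch the core calls spi_split_transfers_maxsize() itself,
	 * using the limit reported by ->max_transfer_size(), so this
	 * boilerplate can be dropped from the drivers.
	 */
	#define MYDRV_MAX_XFER_SIZE	65535	/* invented hardware limit */

	static int mydrv_prepare_message(struct spi_controller *ctlr,
					 struct spi_message *msg)
	{
		return spi_split_transfers_maxsize(ctlr, msg,
						   MYDRV_MAX_XFER_SIZE,
						   GFP_KERNEL | GFP_DMA);
	}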
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index f41a8c2752b8..44e4352d948b 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -1649,6 +1649,15 @@ static int __spi_pump_transfer_message(struct spi_controller *ctlr,
 
 	trace_spi_message_start(msg);
 
+	ret = spi_split_transfers_maxsize(ctlr, msg,
+					  spi_max_transfer_size(msg->spi),
+					  GFP_KERNEL | GFP_DMA);
+	if (ret) {
+		msg->status = ret;
+		spi_finalize_current_message(ctlr);
+		return ret;
+	}
+
 	if (ctlr->prepare_message) {
 		ret = ctlr->prepare_message(ctlr, msg);
 		if (ret) {

From patchwork Tue Sep 27 11:21:17 2022
X-Patchwork-Submitter: Vincent Whitchurch
X-Patchwork-Id: 609975
X-Mailing-List: linux-samsung-soc@vger.kernel.org
From: Vincent Whitchurch
Subject: [PATCH v2 4/4] spi: s3c64xx: Fix large transfers with DMA
Date: Tue, 27 Sep 2022 13:21:17 +0200
Message-ID: <20220927112117.77599-5-vincent.whitchurch@axis.com>
In-Reply-To: <20220927112117.77599-1-vincent.whitchurch@axis.com>
References: <20220927112117.77599-1-vincent.whitchurch@axis.com>

The COUNT_VALUE field in the PACKET_CNT register is 16 bits wide, so the
maximum value is 65535.  Asking the driver to transfer a larger size
currently leads to the DMA transfer timing out.  Implement
->max_transfer_size() and have the core split the transfer as needed.
Fixes: 230d42d422e7 ("spi: Add s3c64xx SPI Controller driver")
Signed-off-by: Vincent Whitchurch
---
 drivers/spi/spi-s3c64xx.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
index 651c35dd9124..71d324ec9a70 100644
--- a/drivers/spi/spi-s3c64xx.c
+++ b/drivers/spi/spi-s3c64xx.c
@@ -84,6 +84,7 @@
 #define S3C64XX_SPI_ST_TX_FIFORDY		(1<<0)
 
 #define S3C64XX_SPI_PACKET_CNT_EN		(1<<16)
+#define S3C64XX_SPI_PACKET_CNT_MASK		GENMASK(15, 0)
 
 #define S3C64XX_SPI_PND_TX_UNDERRUN_CLR		(1<<4)
 #define S3C64XX_SPI_PND_TX_OVERRUN_CLR		(1<<3)
@@ -711,6 +712,13 @@ static int s3c64xx_spi_prepare_message(struct spi_master *master,
 	return 0;
 }
 
+static size_t s3c64xx_spi_max_transfer_size(struct spi_device *spi)
+{
+	struct spi_controller *ctlr = spi->controller;
+
+	return ctlr->can_dma ? S3C64XX_SPI_PACKET_CNT_MASK : SIZE_MAX;
+}
+
 static int s3c64xx_spi_transfer_one(struct spi_master *master,
 				    struct spi_device *spi,
 				    struct spi_transfer *xfer)
@@ -1152,6 +1160,7 @@ static int s3c64xx_spi_probe(struct platform_device *pdev)
 	master->unprepare_transfer_hardware = s3c64xx_spi_unprepare_transfer;
 	master->prepare_message = s3c64xx_spi_prepare_message;
 	master->transfer_one = s3c64xx_spi_transfer_one;
+	master->max_transfer_size = s3c64xx_spi_max_transfer_size;
 	master->num_chipselect = sci->num_cs;
 	master->use_gpio_descriptors = true;
 	master->dma_alignment = 8;
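
A worked example of the combined effect of patches 3 and 4, assuming the
core splits greedily into chunks of at most maxsize (the transfer length
here is invented):

	/* maxsize = S3C64XX_SPI_PACKET_CNT_MASK = GENMASK(15, 0) = 65535
	 * len     = 100000
	 *
	 * spi_split_transfers_maxsize() would then yield two transfers,
	 * of 65535 and 34465 bytes, each of which fits in the 16-bit
	 * COUNT_VALUE field, instead of a single 100000-byte DMA
	 * transfer that times out.
	 */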