
[5.17,019/146] dmaengine: dw-edma: Fix unaligned 64bit access

Message ID 20220426081750.605299587@linuxfoundation.org
State Superseded
Series None

Commit Message

Greg Kroah-Hartman April 26, 2022, 8:20 a.m. UTC
From: Herve Codina <herve.codina@bootlin.com>

[ Upstream commit 8fc5133d6d4da65cad6b73152fc714ad3d7f91c1 ]

On some architectures (e.g. aarch64 on the i.MX8MM), unaligned PCIe accesses
are not allowed and lead to a kernel Oops:
  [ 1911.668835] Unable to handle kernel paging request at virtual address ffff80001bc00a8c
  [ 1911.668841] Mem abort info:
  [ 1911.668844]   ESR = 0x96000061
  [ 1911.668847]   EC = 0x25: DABT (current EL), IL = 32 bits
  [ 1911.668850]   SET = 0, FnV = 0
  [ 1911.668852]   EA = 0, S1PTW = 0
  [ 1911.668853] Data abort info:
  [ 1911.668855]   ISV = 0, ISS = 0x00000061
  [ 1911.668857]   CM = 0, WnR = 1
  [ 1911.668861] swapper pgtable: 4k pages, 48-bit VAs, pgdp=0000000040ff4000
  [ 1911.668864] [ffff80001bc00a8c] pgd=00000000bffff003, pud=00000000bfffe003, pmd=0068000018400705
  [ 1911.668872] Internal error: Oops: 96000061 [#1] PREEMPT SMP
  ...
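The trace above is the data abort raised when a 64-bit MMIO accessor hits a
register offset that is not naturally (8-byte) aligned: PCIe BARs are
ioremap()ed as Device memory on arm64, which does not tolerate unaligned
accesses. As a minimal, hypothetical sketch of the constraint (read_reg64()
and its parameters are illustrative, not part of dw-edma):

  #include <linux/align.h>
  #include <linux/io.h>

  /* Illustration only: readq() is safe only on an 8-byte-aligned offset;
   * otherwise fall back to two naturally aligned 32-bit reads.
   */
  static u64 read_reg64(void __iomem *base, unsigned long offset)
  {
          if (IS_ALIGNED(offset, sizeof(u64)))
                  return readq(base + offset);

          return (u64)readl(base + offset) |
                 ((u64)readl(base + offset + 4) << 32);
  }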

The llp register in the channel group registers is not 64-bit aligned.

Fix the unaligned 64-bit access by using two 32-bit accesses.
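As a minimal sketch of the pattern (assuming a hypothetical write_llp_split()
helper, not the driver's real SET_CH_32() macro), the 64-bit link-list pointer
is split with lower_32_bits()/upper_32_bits() and written through two 32-bit
MMIO accesses:

  /* Illustration only: one 64-bit value written as two 32-bit MMIO writes. */
  static void write_llp_split(void __iomem *lsb, void __iomem *msb, u64 paddr)
  {
          writel(lower_32_bits(paddr), lsb);  /* bits [31:0] of the address  */
          writel(upper_32_bits(paddr), msb);  /* bits [63:32] of the address */
  }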

Fixes: 04e0a39fc10f ("dmaengine: dw-edma: Add writeq() and readq() for 64 bits architectures")
Signed-off-by: Herve Codina <herve.codina@bootlin.com>
Link: https://lore.kernel.org/r/20220225120252.309404-1-herve.codina@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/dma/dw-edma/dw-edma-v0-core.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

Patch

diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c
index 329fc2e57b70..b5b8f8181e77 100644
--- a/drivers/dma/dw-edma/dw-edma-v0-core.c
+++ b/drivers/dma/dw-edma/dw-edma-v0-core.c
@@ -415,8 +415,11 @@  void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
 			  (DW_EDMA_V0_CCS | DW_EDMA_V0_LLE));
 		/* Linked list */
 		#ifdef CONFIG_64BIT
-			SET_CH_64(dw, chan->dir, chan->id, llp.reg,
-				  chunk->ll_region.paddr);
+			/* llp is not aligned on 64bit -> keep 32bit accesses */
+			SET_CH_32(dw, chan->dir, chan->id, llp.lsb,
+				  lower_32_bits(chunk->ll_region.paddr));
+			SET_CH_32(dw, chan->dir, chan->id, llp.msb,
+				  upper_32_bits(chunk->ll_region.paddr));
 		#else /* CONFIG_64BIT */
 			SET_CH_32(dw, chan->dir, chan->id, llp.lsb,
 				  lower_32_bits(chunk->ll_region.paddr));