From patchwork Thu Sep 28 07:06:20 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Herve Codina
X-Patchwork-Id: 727903
From: Herve Codina
To: Herve Codina, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Andrew Lunn, Rob Herring, Krzysztof Kozlowski,
	Conor Dooley, Linus Walleij, Qiang Zhao, Li Yang, Liam Girdwood,
	Mark Brown, Jaroslav Kysela, Takashi Iwai, Shengjiu Wang, Xiubo Li,
	Fabio Estevam, Nicolin Chen, Christophe Leroy, Randy Dunlap
Cc: netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-gpio@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	alsa-devel@alsa-project.org, Simon Horman, Christophe JAILLET,
	Thomas Petazzoni
Subject: [PATCH v7 02/30] soc: fsl: cpm1: qmc: Fix __iomem addresses declaration
Date: Thu, 28 Sep 2023 09:06:20 +0200
Message-ID: <20230928070652.330429-3-herve.codina@bootlin.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230928070652.330429-1-herve.codina@bootlin.com>
References: <20230928070652.330429-1-herve.codina@bootlin.com>
Precedence: bulk
List-ID:
X-Mailing-List: linux-gpio@vger.kernel.org

Running sparse (make C=1) on qmc.c raises a lot of warnings such as:
  ...
  warning: incorrect type in assignment (different address spaces)
     expected struct cpm_buf_desc [usertype] *[noderef] __iomem bd
     got struct cpm_buf_desc [noderef] [usertype] __iomem *txbd_free
  ...

Indeed, some variables were declared 'type *__iomem var' instead of
'type __iomem *var'.

Use the correct declaration to remove these warnings.
Fixes: 3178d58e0b97 ("soc: fsl: cpm1: Add support for QMC")
Signed-off-by: Herve Codina
Reviewed-by: Christophe Leroy
---
 drivers/soc/fsl/qe/qmc.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
index b3c292c9a14e..7ad0d77f1740 100644
--- a/drivers/soc/fsl/qe/qmc.c
+++ b/drivers/soc/fsl/qe/qmc.c
@@ -175,7 +175,7 @@ struct qmc_chan {
 	struct list_head list;
 	unsigned int id;
 	struct qmc *qmc;
-	void *__iomem s_param;
+	void __iomem *s_param;
 	enum qmc_mode mode;
 	u64 tx_ts_mask;
 	u64 rx_ts_mask;
@@ -203,9 +203,9 @@ struct qmc_chan {
 struct qmc {
 	struct device *dev;
 	struct tsa_serial *tsa_serial;
-	void *__iomem scc_regs;
-	void *__iomem scc_pram;
-	void *__iomem dpram;
+	void __iomem *scc_regs;
+	void __iomem *scc_pram;
+	void __iomem *dpram;
 	u16 scc_pram_offset;
 	cbd_t __iomem *bd_table;
 	dma_addr_t bd_dma_addr;
@@ -218,37 +218,37 @@ struct qmc {
 	struct qmc_chan *chans[64];
 };
 
-static inline void qmc_write16(void *__iomem addr, u16 val)
+static inline void qmc_write16(void __iomem *addr, u16 val)
 {
 	iowrite16be(val, addr);
 }
 
-static inline u16 qmc_read16(void *__iomem addr)
+static inline u16 qmc_read16(void __iomem *addr)
 {
 	return ioread16be(addr);
 }
 
-static inline void qmc_setbits16(void *__iomem addr, u16 set)
+static inline void qmc_setbits16(void __iomem *addr, u16 set)
 {
 	qmc_write16(addr, qmc_read16(addr) | set);
 }
 
-static inline void qmc_clrbits16(void *__iomem addr, u16 clr)
+static inline void qmc_clrbits16(void __iomem *addr, u16 clr)
 {
 	qmc_write16(addr, qmc_read16(addr) & ~clr);
 }
 
-static inline void qmc_write32(void *__iomem addr, u32 val)
+static inline void qmc_write32(void __iomem *addr, u32 val)
 {
 	iowrite32be(val, addr);
 }
 
-static inline u32 qmc_read32(void *__iomem addr)
+static inline u32 qmc_read32(void __iomem *addr)
 {
 	return ioread32be(addr);
 }
 
-static inline void qmc_setbits32(void *__iomem addr, u32 set)
+static inline void qmc_setbits32(void __iomem *addr, u32 set)
 {
 	qmc_write32(addr, qmc_read32(addr) | set);
 }
 
@@ -318,7 +318,7 @@ int qmc_chan_write_submit(struct qmc_chan *chan, dma_addr_t addr, size_t length,
 {
 	struct qmc_xfer_desc *xfer_desc;
 	unsigned long flags;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	u16 ctrl;
 	int ret;
 
@@ -374,7 +374,7 @@ static void qmc_chan_write_done(struct qmc_chan *chan)
 	void (*complete)(void *context);
 	unsigned long flags;
 	void *context;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	u16 ctrl;
 
 	/*
@@ -425,7 +425,7 @@ int qmc_chan_read_submit(struct qmc_chan *chan, dma_addr_t addr, size_t length,
 {
 	struct qmc_xfer_desc *xfer_desc;
 	unsigned long flags;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	u16 ctrl;
 	int ret;
 
@@ -488,7 +488,7 @@ static void qmc_chan_read_done(struct qmc_chan *chan)
 	void (*complete)(void *context, size_t size);
 	struct qmc_xfer_desc *xfer_desc;
 	unsigned long flags;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	void *context;
 	u16 datalen;
 	u16 ctrl;
@@ -663,7 +663,7 @@ static void qmc_chan_reset_rx(struct qmc_chan *chan)
 {
 	struct qmc_xfer_desc *xfer_desc;
 	unsigned long flags;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	u16 ctrl;
 
 	spin_lock_irqsave(&chan->rx_lock, flags);
@@ -694,7 +694,7 @@ static void qmc_chan_reset_tx(struct qmc_chan *chan)
 {
 	struct qmc_xfer_desc *xfer_desc;
 	unsigned long flags;
-	cbd_t *__iomem bd;
+	cbd_t __iomem *bd;
 	u16 ctrl;
 
 	spin_lock_irqsave(&chan->tx_lock, flags);
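
As background on why the qualifier position matters (not part of the patch
itself): sparse treats __iomem as qualifying the pointed-to type, so it has
to appear before the '*'. A minimal sketch of the correct form follows; the
demo_write() helper and its 'reg' parameter are illustrative only and do not
come from qmc.c.

#include <linux/io.h>
#include <linux/types.h>

/*
 * Correct: __iomem qualifies the MMIO memory that 'reg' points to, so
 * sparse tracks the address space when the pointer is assigned, passed
 * to accessors such as iowrite16be(), or stored in a structure field.
 */
static void demo_write(void __iomem *reg, u16 val)
{
	iowrite16be(val, reg);
}

/*
 * Incorrect (the form removed by this patch): 'void *__iomem reg' attaches
 * the annotation to the variable itself rather than to the pointed-to type.
 * Sparse then sees a plain 'void *' and reports "incorrect type in
 * assignment (different address spaces)" whenever a genuine __iomem pointer
 * is stored in it, which is exactly the warning quoted in the commit message.
 */

With the annotation in the right place, helpers such as qmc_write16() and
qmc_read16() in the diff above accept MMIO pointers directly and sparse no
longer warns.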