From patchwork Wed Oct 11 06:14:18 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Herve Codina
X-Patchwork-Id: 732273
From: Herve Codina
To: Herve Codina, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Andrew Lunn, Rob Herring, Krzysztof Kozlowski,
 Conor Dooley, Lee Jones, Linus Walleij, Qiang Zhao, Li Yang,
 Liam Girdwood, Mark Brown, Jaroslav Kysela, Takashi Iwai,
 Shengjiu Wang, Xiubo Li, Fabio Estevam, Nicolin Chen,
 Christophe Leroy, Randy Dunlap
Cc: netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-gpio@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 alsa-devel@alsa-project.org, Simon Horman, Christophe JAILLET,
 Thomas Petazzoni
Subject: [PATCH v8 14/30] soc: fsl: cpm1: qmc: Introduce qmc_chan_setup_tsa*
Date: Wed, 11 Oct 2023 08:14:18 +0200
Message-ID: <20231011061437.64213-15-herve.codina@bootlin.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20231011061437.64213-1-herve.codina@bootlin.com>
References: <20231011061437.64213-1-herve.codina@bootlin.com>
X-Mailing-List: linux-gpio@vger.kernel.org

Introduce the qmc_chan_setup_tsa* functions to set up the TSA entries
related to a given channel. Use them during QMC channel setup.
Signed-off-by: Herve Codina
Reviewed-by: Christophe Leroy
---
 drivers/soc/fsl/qe/qmc.c | 161 ++++++++++++++++++++++++++++++---------
 1 file changed, 125 insertions(+), 36 deletions(-)

diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
index 28acf4c8a141..8e8bd1942c08 100644
--- a/drivers/soc/fsl/qe/qmc.c
+++ b/drivers/soc/fsl/qe/qmc.c
@@ -240,6 +240,11 @@ static void qmc_clrbits16(void __iomem *addr, u16 clr)
         qmc_write16(addr, qmc_read16(addr) & ~clr);
 }
 
+static void qmc_clrsetbits16(void __iomem *addr, u16 clr, u16 set)
+{
+        qmc_write16(addr, (qmc_read16(addr) & ~clr) | set);
+}
+
 static void qmc_write32(void __iomem *addr, u32 val)
 {
         iowrite32be(val, addr);
@@ -562,6 +567,122 @@ static void qmc_chan_read_done(struct qmc_chan *chan)
         spin_unlock_irqrestore(&chan->rx_lock, flags);
 }
 
+static int qmc_chan_setup_tsa_64rxtx(struct qmc_chan *chan, const struct tsa_serial_info *info)
+{
+        unsigned int i;
+        u16 curr;
+        u16 val;
+
+        /*
+         * Use a common Tx/Rx 64 entries table.
+         * Tx and Rx related stuffs must be identical
+         */
+        if (chan->tx_ts_mask != chan->rx_ts_mask) {
+                dev_err(chan->qmc->dev, "chan %u uses different Rx and Tx TS\n", chan->id);
+                return -EINVAL;
+        }
+
+        val = QMC_TSA_VALID | QMC_TSA_MASK | QMC_TSA_CHANNEL(chan->id);
+
+        /* Check entries based on Rx stuff*/
+        for (i = 0; i < info->nb_rx_ts; i++) {
+                if (!(chan->rx_ts_mask & (((u64)1) << i)))
+                        continue;
+
+                curr = qmc_read16(chan->qmc->scc_pram + QMC_GBL_TSATRX + (i * 2));
+                if (curr & QMC_TSA_VALID && (curr & ~QMC_TSA_WRAP) != val) {
+                        dev_err(chan->qmc->dev, "chan %u TxRx entry %d already used\n",
+                                chan->id, i);
+                        return -EBUSY;
+                }
+        }
+
+        /* Set entries based on Rx stuff*/
+        for (i = 0; i < info->nb_rx_ts; i++) {
+                if (!(chan->rx_ts_mask & (((u64)1) << i)))
+                        continue;
+
+                qmc_clrsetbits16(chan->qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
+                                 ~QMC_TSA_WRAP, val);
+        }
+
+        return 0;
+}
+
+static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_serial_info *info)
+{
+        unsigned int i;
+        u16 curr;
+        u16 val;
+
+        /* Use a Tx 32 entries table and a Rx 32 entries table */
+
+        val = QMC_TSA_VALID | QMC_TSA_MASK | QMC_TSA_CHANNEL(chan->id);
+
+        /* Check entries based on Rx stuff */
+        for (i = 0; i < info->nb_rx_ts; i++) {
+                if (!(chan->rx_ts_mask & (((u64)1) << i)))
+                        continue;
+
+                curr = qmc_read16(chan->qmc->scc_pram + QMC_GBL_TSATRX + (i * 2));
+                if (curr & QMC_TSA_VALID && (curr & ~QMC_TSA_WRAP) != val) {
+                        dev_err(chan->qmc->dev, "chan %u Rx entry %d already used\n",
+                                chan->id, i);
+                        return -EBUSY;
+                }
+        }
+        /* Check entries based on Tx stuff */
+        for (i = 0; i < info->nb_tx_ts; i++) {
+                if (!(chan->tx_ts_mask & (((u64)1) << i)))
+                        continue;
+
+                curr = qmc_read16(chan->qmc->scc_pram + QMC_GBL_TSATTX + (i * 2));
+                if (curr & QMC_TSA_VALID && (curr & ~QMC_TSA_WRAP) != val) {
+                        dev_err(chan->qmc->dev, "chan %u Tx entry %d already used\n",
+                                chan->id, i);
+                        return -EBUSY;
+                }
+        }
+
+        /* Set entries based on Rx stuff */
+        for (i = 0; i < info->nb_rx_ts; i++) {
+                if (!(chan->rx_ts_mask & (((u64)1) << i)))
+                        continue;
+
+                qmc_clrsetbits16(chan->qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
+                                 ~QMC_TSA_WRAP, val);
+        }
+        /* Set entries based on Tx stuff */
+        for (i = 0; i < info->nb_tx_ts; i++) {
+                if (!(chan->tx_ts_mask & (((u64)1) << i)))
+                        continue;
+
+                qmc_clrsetbits16(chan->qmc->scc_pram + QMC_GBL_TSATTX + (i * 2),
+                                 ~QMC_TSA_WRAP, val);
+        }
+
+        return 0;
+}
+
+static int qmc_chan_setup_tsa(struct qmc_chan *chan)
+{
+        struct tsa_serial_info info;
+        int ret;
+
+        /* Retrieve info from the TSA related serial */
+        ret = tsa_serial_get_info(chan->qmc->tsa_serial, &info);
+        if (ret)
+                return ret;
+
+        /*
+         * Setup one common 64 entries table or two 32 entries tables (one for
+         * Tx and one for Rx) according to assigned TS numbers.
+         */
+        return ((info.nb_tx_ts > 32) || (info.nb_rx_ts > 32)) ?
+                qmc_chan_setup_tsa_64rxtx(chan, &info) :
+                qmc_chan_setup_tsa_32rx_32tx(chan, &info);
+}
+
 static int qmc_chan_command(struct qmc_chan *chan, u8 qmc_opcode)
 {
         return cpm_command(chan->id << 2, (qmc_opcode << 4) | 0x0E);
@@ -921,7 +1042,6 @@ static int qmc_of_parse_chans(struct qmc *qmc, struct device_node *np)
 
 static int qmc_init_tsa_64rxtx(struct qmc *qmc, const struct tsa_serial_info *info)
 {
-        struct qmc_chan *chan;
         unsigned int i;
         u16 val;
 
@@ -935,18 +1055,6 @@ static int qmc_init_tsa_64rxtx(struct qmc *qmc, const struct tsa_serial_info *in
         for (i = 0; i < 64; i++)
                 qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX + (i * 2), 0x0000);
 
-        /* Set entries based on Rx stuff*/
-        list_for_each_entry(chan, &qmc->chan_head, list) {
-                for (i = 0; i < info->nb_rx_ts; i++) {
-                        if (!(chan->rx_ts_mask & (((u64)1) << i)))
-                                continue;
-
-                        val = QMC_TSA_VALID | QMC_TSA_MASK |
-                              QMC_TSA_CHANNEL(chan->id);
-                        qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX + (i * 2), val);
-                }
-        }
-
         /* Set Wrap bit on last entry */
         qmc_setbits16(qmc->scc_pram + QMC_GBL_TSATRX + ((info->nb_rx_ts - 1) * 2),
                       QMC_TSA_WRAP);
@@ -963,7 +1071,6 @@ static int qmc_init_tsa_64rxtx(struct qmc *qmc, const struct tsa_serial_info *in
 
 static int qmc_init_tsa_32rx_32tx(struct qmc *qmc, const struct tsa_serial_info *info)
 {
-        struct qmc_chan *chan;
         unsigned int i;
         u16 val;
 
@@ -978,28 +1085,6 @@ static int qmc_init_tsa_32rx_32tx(struct qmc *qmc, const struct tsa_serial_info
                 qmc_write16(qmc->scc_pram + QMC_GBL_TSATTX + (i * 2), 0x0000);
         }
 
-        /* Set entries based on Rx and Tx stuff*/
-        list_for_each_entry(chan, &qmc->chan_head, list) {
-                /* Rx part */
-                for (i = 0; i < info->nb_rx_ts; i++) {
-                        if (!(chan->rx_ts_mask & (((u64)1) << i)))
-                                continue;
-
-                        val = QMC_TSA_VALID | QMC_TSA_MASK |
-                              QMC_TSA_CHANNEL(chan->id);
-                        qmc_write16(qmc->scc_pram + QMC_GBL_TSATRX + (i * 2), val);
-                }
-                /* Tx part */
-                for (i = 0; i < info->nb_tx_ts; i++) {
-                        if (!(chan->tx_ts_mask & (((u64)1) << i)))
-                                continue;
-
-                        val = QMC_TSA_VALID | QMC_TSA_MASK |
-                              QMC_TSA_CHANNEL(chan->id);
-                        qmc_write16(qmc->scc_pram + QMC_GBL_TSATTX + (i * 2), val);
-                }
-        }
-
         /* Set Wrap bit on last entries */
         qmc_setbits16(qmc->scc_pram + QMC_GBL_TSATRX + ((info->nb_rx_ts - 1) * 2),
                       QMC_TSA_WRAP);
@@ -1081,6 +1166,10 @@ static int qmc_setup_chan(struct qmc *qmc, struct qmc_chan *chan)
 
         chan->qmc = qmc;
 
+        ret = qmc_chan_setup_tsa(chan);
+        if (ret)
+                return ret;
+
         /* Set channel specific parameter base address */
         chan->s_param = qmc->dpram + (chan->id * 64);
         /* 16 bd per channel (8 rx and 8 tx) */
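
As a side note (not part of the patch): the sketch below models, in plain
userspace C, the two-pass "check then set" logic that
qmc_chan_setup_tsa_64rxtx() applies to the time-slot assignment table --
refuse any entry already claimed by another channel, then rewrite the
selected entries while preserving only the wrap marker. The TSA_* bit
values, the field layout and the helper name setup_tsa_entries() are
illustrative assumptions, not the real QMC register definitions; only the
control flow mirrors the patch.

#include <stdint.h>
#include <stdio.h>

/* Assumed bit layout, chosen only so the masks do not overlap */
#define TSA_VALID       0x8000u                  /* entry is in use        */
#define TSA_WRAP        0x4000u                  /* last-entry marker      */
#define TSA_MASK        0x3fc0u                  /* assumed mask field     */
#define TSA_CHANNEL(id) ((uint16_t)((id) & 0x3fu)) /* assumed channel field */

static int setup_tsa_entries(uint16_t *table, unsigned int nb_ts,
                             uint64_t ts_mask, unsigned int chan_id)
{
        uint16_t val = TSA_VALID | TSA_MASK | TSA_CHANNEL(chan_id);
        unsigned int i;

        /* Pass 1: check, so a conflict leaves the table untouched */
        for (i = 0; i < nb_ts; i++) {
                if (!(ts_mask & ((uint64_t)1 << i)))
                        continue;
                if ((table[i] & TSA_VALID) && (table[i] & ~TSA_WRAP) != val)
                        return -1; /* the driver returns -EBUSY here */
        }

        /* Pass 2: write the new value, keeping only the wrap bit */
        for (i = 0; i < nb_ts; i++) {
                if (!(ts_mask & ((uint64_t)1 << i)))
                        continue;
                table[i] = (table[i] & TSA_WRAP) | val;
        }
        return 0;
}

int main(void)
{
        uint16_t table[64] = { 0 };

        table[63] = TSA_WRAP;   /* pretend the init code set the wrap bit */

        /* Channel 2 claims time slots 0..3 */
        if (setup_tsa_entries(table, 64, 0x0f, 2))
                return 1;

        /* Channel 5 tries to reuse slot 1: rejected, nothing overwritten */
        if (setup_tsa_entries(table, 64, 0x02, 5))
                printf("slot 1 already used, as expected\n");

        printf("entry 0: 0x%04x\n", table[0]);
        return 0;
}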