From: Herve Codina <herve.codina@bootlin.com>
To: Herve Codina, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Andrew Lunn, Rob Herring, Krzysztof Kozlowski,
	Conor Dooley, Lee Jones, Linus Walleij, Qiang Zhao, Li Yang,
	Liam Girdwood, Mark Brown, Jaroslav Kysela, Takashi Iwai,
	Shengjiu Wang, Xiubo Li, Fabio Estevam, Nicolin Chen,
	Christophe Leroy, Randy Dunlap
Cc: netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-gpio@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	alsa-devel@alsa-project.org, Thomas Petazzoni
Subject: [PATCH v2 13/28] soc: fsl: cpm1: qmc: Add support for disabling channel TSA entries
Date: Wed, 26 Jul 2023 17:02:09 +0200
Message-ID: <20230726150225.483464-14-herve.codina@bootlin.com>
In-Reply-To: <20230726150225.483464-1-herve.codina@bootlin.com>
References: <20230726150225.483464-1-herve.codina@bootlin.com>

In order to allow runtime timeslot route changes, disabling channel TSA
entries needs to be supported.

Add support for this new feature.
Signed-off-by: Herve Codina <herve.codina@bootlin.com>
Reviewed-by: Christophe Leroy
---
 drivers/soc/fsl/qe/qmc.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
index 82405915f2a4..146eebc12737 100644
--- a/drivers/soc/fsl/qe/qmc.c
+++ b/drivers/soc/fsl/qe/qmc.c
@@ -567,7 +567,8 @@ static void qmc_chan_read_done(struct qmc_chan *chan)
 	spin_unlock_irqrestore(&chan->rx_lock, flags);
 }
 
-static int qmc_chan_setup_tsa_64rxtx(struct qmc_chan *chan, const struct tsa_serial_info *info)
+static int qmc_chan_setup_tsa_64rxtx(struct qmc_chan *chan, const struct tsa_serial_info *info,
+				     bool enable)
 {
 	unsigned int i;
 	u16 curr;
@@ -603,13 +604,14 @@ static int qmc_chan_setup_tsa_64rxtx(struct qmc_chan *chan, const struct tsa_ser
 			continue;
 
 		qmc_clrsetbits16(chan->qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
-				 ~QMC_TSA_WRAP, val);
+				 ~QMC_TSA_WRAP, enable ? val : 0x0000);
 	}
 
 	return 0;
 }
 
-static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_serial_info *info)
+static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_serial_info *info,
+					bool enable)
 {
 	unsigned int i;
 	u16 curr;
@@ -650,7 +652,7 @@ static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_
 			continue;
 
 		qmc_clrsetbits16(chan->qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
-				 ~QMC_TSA_WRAP, val);
+				 ~QMC_TSA_WRAP, enable ? val : 0x0000);
 	}
 
 	/* Set entries based on Tx stuff */
 	for (i = 0; i < info->nb_tx_ts; i++) {
@@ -658,13 +660,13 @@ static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_
 			continue;
 
 		qmc_clrsetbits16(chan->qmc->scc_pram + QMC_GBL_TSATTX + (i * 2),
-				 ~QMC_TSA_WRAP, val);
+				 ~QMC_TSA_WRAP, enable ? val : 0x0000);
 	}
 
 	return 0;
 }
 
-static int qmc_chan_setup_tsa(struct qmc_chan *chan)
+static int qmc_chan_setup_tsa(struct qmc_chan *chan, bool enable)
 {
 	struct tsa_serial_info info;
 	int ret;
@@ -679,8 +681,8 @@ static int qmc_chan_setup_tsa(struct qmc_chan *chan)
 	 * and one for Tx) according to assigned TS numbers.
 	 */
 	return ((info.nb_tx_ts > 32) || (info.nb_rx_ts > 32)) ?
-		qmc_chan_setup_tsa_64rxtx(chan, &info) :
-		qmc_chan_setup_tsa_32rx_32tx(chan, &info);
+		qmc_chan_setup_tsa_64rxtx(chan, &info, enable) :
+		qmc_chan_setup_tsa_32rx_32tx(chan, &info, enable);
 }
 
 static int qmc_chan_command(struct qmc_chan *chan, u8 qmc_opcode)
@@ -1146,7 +1148,7 @@ static int qmc_setup_chan(struct qmc *qmc, struct qmc_chan *chan)
 
 	chan->qmc = qmc;
 
-	ret = qmc_chan_setup_tsa(chan);
+	ret = qmc_chan_setup_tsa(chan, true);
 	if (ret)
 		return ret;
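
Not part of the patch, just an illustration for reviewers: the stand-alone
sketch below mimics the enable/disable write performed through
qmc_clrsetbits16() above. The bit position (DEMO_TSA_WRAP) and the example
routing value are placeholders chosen for the demo, not taken from the
driver; only the clear/set logic (clear everything except the wrap bit,
then write either the routing value or 0x0000) mirrors the change.

#include <stdint.h>
#include <stdio.h>

/* Placeholder bit position for the demo; see qmc.c for the real layout */
#define DEMO_TSA_WRAP	(1 << 14)

/* Stand-alone mock of a clear-then-set 16-bit register update */
static uint16_t demo_clrsetbits16(uint16_t reg, uint16_t clr, uint16_t set)
{
	return (uint16_t)((reg & ~clr) | set);
}

int main(void)
{
	uint16_t entry = DEMO_TSA_WRAP;	/* entry already carrying the wrap bit */
	uint16_t val = 0x0040;		/* arbitrary example routing value */

	/* Enable: keep only the wrap bit, then write the routing value */
	entry = demo_clrsetbits16(entry, (uint16_t)~DEMO_TSA_WRAP, val);
	printf("enabled:  0x%04x\n", entry);

	/* Disable: same write with 0x0000, i.e. enable ? val : 0x0000 */
	entry = demo_clrsetbits16(entry, (uint16_t)~DEMO_TSA_WRAP, 0x0000);
	printf("disabled: 0x%04x\n", entry);

	return 0;
}

In other words, callers that want the timeslot routed keep passing true (as
qmc_setup_chan() now does), while passing false clears the routing bits but
leaves the table's wrap marker in place so the entry can be re-routed later.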