From patchwork Tue Sep 12 10:13:46 2023
X-Patchwork-Submitter: Herve Codina
X-Patchwork-Id: 721884
From: Herve Codina
To: Herve Codina, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Andrew Lunn, Rob Herring, Krzysztof Kozlowski,
 Conor Dooley, Lee Jones, Linus Walleij, Qiang Zhao, Li Yang,
 Liam Girdwood, Mark Brown, Jaroslav Kysela, Takashi Iwai,
 Shengjiu Wang, Xiubo Li, Fabio Estevam, Nicolin Chen,
 Christophe Leroy, Randy Dunlap
Cc: netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-gpio@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 alsa-devel@alsa-project.org, Simon Horman, Christophe JAILLET,
 Thomas Petazzoni
Subject: [PATCH v5 17/31] soc: fsl: cpm1: qmc: Add support for disabling channel TSA entries
Date: Tue, 12 Sep 2023 12:13:46 +0200
Message-ID: <20230912101346.225601-1-herve.codina@bootlin.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230912081527.208499-1-herve.codina@bootlin.com>
References: <20230912081527.208499-1-herve.codina@bootlin.com>
X-Mailing-List: linux-gpio@vger.kernel.org

In order to allow runtime timeslot route changes, disabling a channel's
TSA entries needs to be supported.

Add an 'enable' parameter to the TSA setup helpers so that a channel's
entries can be cleared as well as set.
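The new flag only selects whether the per-channel entries are written or
cleared in the TSA tables. As a rough sketch of how a later
timeslot-update path could use it (the helper name and the absence of
locking are illustrative only, not part of this patch):

	/*
	 * Illustrative sketch: re-route a channel at runtime by clearing
	 * its current TSA entries, updating the timeslot masks, then
	 * writing the entries for the new routing.
	 */
	static int qmc_chan_update_ts_masks(struct qmc_chan *chan,
					    u64 tx_ts_mask, u64 rx_ts_mask)
	{
		int ret;

		/* Clear the entries currently used by this channel */
		ret = qmc_chan_setup_tsa(chan, false);
		if (ret)
			return ret;

		/* Apply the new routing */
		chan->tx_ts_mask = tx_ts_mask;
		chan->rx_ts_mask = rx_ts_mask;

		/* Write the entries for the new timeslots */
		return qmc_chan_setup_tsa(chan, true);
	}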
Signed-off-by: Herve Codina
Reviewed-by: Christophe Leroy
Signed-off-by: Christophe Leroy
---
 drivers/soc/fsl/qe/qmc.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
index 269d10cd3c7a..26cd7e1ccafc 100644
--- a/drivers/soc/fsl/qe/qmc.c
+++ b/drivers/soc/fsl/qe/qmc.c
@@ -567,7 +567,8 @@ static void qmc_chan_read_done(struct qmc_chan *chan)
 	spin_unlock_irqrestore(&chan->rx_lock, flags);
 }
 
-static int qmc_chan_setup_tsa_64rxtx(struct qmc_chan *chan, const struct tsa_serial_info *info)
+static int qmc_chan_setup_tsa_64rxtx(struct qmc_chan *chan, const struct tsa_serial_info *info,
+				     bool enable)
 {
 	unsigned int i;
 	u16 curr;
@@ -603,13 +604,14 @@ static int qmc_chan_setup_tsa_64rxtx(struct qmc_chan *chan, const struct tsa_ser
 			continue;
 
 		qmc_clrsetbits16(chan->qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
-				 ~QMC_TSA_WRAP, val);
+				 ~QMC_TSA_WRAP, enable ? val : 0x0000);
 	}
 
 	return 0;
 }
 
-static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_serial_info *info)
+static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_serial_info *info,
+					bool enable)
 {
 	unsigned int i;
 	u16 curr;
@@ -650,7 +652,7 @@ static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_
 			continue;
 
 		qmc_clrsetbits16(chan->qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
-				 ~QMC_TSA_WRAP, val);
+				 ~QMC_TSA_WRAP, enable ? val : 0x0000);
 	}
 
 	/* Set entries based on Tx stuff */
 	for (i = 0; i < info->nb_tx_ts; i++) {
@@ -658,13 +660,13 @@ static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_
 			continue;
 
 		qmc_clrsetbits16(chan->qmc->scc_pram + QMC_GBL_TSATTX + (i * 2),
-				 ~QMC_TSA_WRAP, val);
+				 ~QMC_TSA_WRAP, enable ? val : 0x0000);
 	}
 
 	return 0;
 }
 
-static int qmc_chan_setup_tsa(struct qmc_chan *chan)
+static int qmc_chan_setup_tsa(struct qmc_chan *chan, bool enable)
 {
 	struct tsa_serial_info info;
 	int ret;
@@ -679,8 +681,8 @@ static int qmc_chan_setup_tsa(struct qmc_chan *chan)
 	 * and one for Tx) according to assigned TS numbers.
 	 */
 	return ((info.nb_tx_ts > 32) || (info.nb_rx_ts > 32)) ?
-		qmc_chan_setup_tsa_64rxtx(chan, &info) :
-		qmc_chan_setup_tsa_32rx_32tx(chan, &info);
+		qmc_chan_setup_tsa_64rxtx(chan, &info, enable) :
+		qmc_chan_setup_tsa_32rx_32tx(chan, &info, enable);
 }
 
 static int qmc_chan_command(struct qmc_chan *chan, u8 qmc_opcode)
@@ -1146,7 +1148,7 @@ static int qmc_setup_chan(struct qmc *qmc, struct qmc_chan *chan)
 
 	chan->qmc = qmc;
 
-	ret = qmc_chan_setup_tsa(chan);
+	ret = qmc_chan_setup_tsa(chan, true);
 	if (ret)
 		return ret;