From patchwork Tue Dec 7 02:47:00 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ricardo Martinez
X-Patchwork-Id: 521905
From: Ricardo Martinez
To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org
Cc: kuba@kernel.org, davem@davemloft.net, johannes@sipsolutions.net, ryazanov.s.a@gmail.com, loic.poulain@linaro.org, m.chetan.kumar@intel.com, chandrashekar.devegowda@intel.com, linuxwwan@intel.com, chiranjeevi.rapolu@linux.intel.com, haijun.liu@mediatek.com, amir.hanania@intel.com, andriy.shevchenko@linux.intel.com, dinesh.sharma@intel.com, eliot.lee@intel.com, mika.westerberg@linux.intel.com, moises.veleta@intel.com, pierre-louis.bossart@intel.com, muralidharan.sethuraman@intel.com, Soumya.Prakash.Mishra@intel.com, sreehari.kancharla@intel.com, suresh.nagaraj@intel.com, Ricardo Martinez
Subject: [PATCH net-next v3 01/12] net: wwan: t7xx: Add control DMA interface
Date: Mon, 6 Dec 2021 19:47:00 -0700
Message-Id: <20211207024711.2765-2-ricardo.martinez@linux.intel.com>
In-Reply-To: <20211207024711.2765-1-ricardo.martinez@linux.intel.com>
References: <20211207024711.2765-1-ricardo.martinez@linux.intel.com>

From: Haijun Liu

The Cross Layer DMA (CLDMA) hardware interface (HIF) enables the control
path of Host-Modem data transfers. The CLDMA HIF layer provides a common
interface to the Port Layer.

CLDMA manages 8 independent RX/TX physical channels with data flow
control in HW queues. CLDMA uses ring buffers of General Packet
Descriptors (GPD) for TX/RX. A GPD can represent one or more data
buffers (DB).

CLDMA HIF initializes the GPD rings, registers ISR handlers for CLDMA
interrupts, and initializes the CLDMA HW registers.

CLDMA TX flow:
1. Port Layer write
2. Get DB address
3. Configure GPD
4. Trigger processing via HW register write

CLDMA RX flow:
1. CLDMA HW sends an RX "done" to the host
2. Driver starts a thread to safely read the GPD
3. DB is sent to the Port Layer
4.
Create a new buffer for GPD ring Signed-off-by: Haijun Liu Signed-off-by: Chandrashekar Devegowda Co-developed-by: Ricardo Martinez Signed-off-by: Ricardo Martinez --- drivers/net/wwan/t7xx/t7xx_cldma.c | 290 ++++++ drivers/net/wwan/t7xx/t7xx_cldma.h | 177 ++++ drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 1328 ++++++++++++++++++++++++ drivers/net/wwan/t7xx/t7xx_hif_cldma.h | 139 +++ 4 files changed, 1934 insertions(+) create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.c create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.h create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.c create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.h diff --git a/drivers/net/wwan/t7xx/t7xx_cldma.c b/drivers/net/wwan/t7xx/t7xx_cldma.c new file mode 100644 index 000000000000..4f068ad27900 --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_cldma.c @@ -0,0 +1,290 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. + * + * Authors: + * Haijun Liu + * Moises Veleta + * Ricardo Martinez + * + * Contributors: + * Amir Hanania + * Andy Shevchenko + * Eliot Lee + * Sreehari Kancharla + */ + +#include +#include +#include +#include +#include + +#include "t7xx_common.h" +#include "t7xx_cldma.h" + +#define ADDR_SIZE 8 + +void t7xx_cldma_clear_ip_busy(struct t7xx_cldma_hw *hw_info) +{ + u32 val; + + val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_IP_BUSY); + val |= IP_BUSY_WAKEUP; + iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_IP_BUSY); +} + +/** + * t7xx_cldma_hw_restore() - Restore CLDMA HW registers. + * @hw_info: Pointer to struct t7xx_cldma_hw. + * + * Restore HW after resume. Writes uplink configuration for CLDMA HW. + */ +void t7xx_cldma_hw_restore(struct t7xx_cldma_hw *hw_info) +{ + u32 ul_cfg; + + ul_cfg = ioread32(hw_info->ap_pdn_base + REG_CLDMA_UL_CFG); + ul_cfg &= ~UL_CFG_BIT_MODE_MASK; + + if (hw_info->hw_mode == MODE_BIT_64) + ul_cfg |= UL_CFG_BIT_MODE_64; + else if (hw_info->hw_mode == MODE_BIT_40) + ul_cfg |= UL_CFG_BIT_MODE_40; + else if (hw_info->hw_mode == MODE_BIT_36) + ul_cfg |= UL_CFG_BIT_MODE_36; + + iowrite32(ul_cfg, hw_info->ap_pdn_base + REG_CLDMA_UL_CFG); + /* Disable TX and RX invalid address check */ + iowrite32(UL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_UL_MEM); + iowrite32(DL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_DL_MEM); +} + +void t7xx_cldma_hw_start_queue(struct t7xx_cldma_hw *hw_info, u8 qno, enum mtk_txrx tx_rx) +{ + void __iomem *reg_start_cmd; + u32 val; + + reg_start_cmd = (tx_rx == MTK_RX) ? hw_info->ap_pdn_base + REG_CLDMA_DL_START_CMD : + hw_info->ap_pdn_base + REG_CLDMA_UL_START_CMD; + val = (qno == CLDMA_ALL_Q) ? 
CLDMA_ALL_Q : BIT(qno); + iowrite32(val, reg_start_cmd); +} + +void t7xx_cldma_hw_start(struct t7xx_cldma_hw *hw_info) +{ + /* Enable the TX & RX interrupts */ + iowrite32(TXRX_STATUS_BITMASK, hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0); + iowrite32(TXRX_STATUS_BITMASK, hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0); + /* Enable the empty queue interrupt */ + iowrite32(EMPTY_STATUS_BITMASK, hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0); + iowrite32(EMPTY_STATUS_BITMASK, hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0); +} + +void t7xx_cldma_hw_reset(void __iomem *ao_base) +{ + u32 val; + + val = ioread32(ao_base + REG_INFRA_RST2_SET); + val |= RST2_PMIC_SW_RST_SET; + iowrite32(val, ao_base + REG_INFRA_RST2_SET); + val = ioread32(ao_base + REG_INFRA_RST4_SET); + val |= RST4_CLDMA1_SW_RST_SET; + iowrite32(val, ao_base + REG_INFRA_RST4_SET); + udelay(1); + + val = ioread32(ao_base + REG_INFRA_RST4_CLR); + val |= RST4_CLDMA1_SW_RST_CLR; + iowrite32(val, ao_base + REG_INFRA_RST4_CLR); + val = ioread32(ao_base + REG_INFRA_RST2_CLR); + val |= RST2_PMIC_SW_RST_CLR; + iowrite32(val, ao_base + REG_INFRA_RST2_CLR); +} + +bool t7xx_cldma_tx_addr_is_set(struct t7xx_cldma_hw *hw_info, unsigned char qno) +{ + u32 offset = REG_CLDMA_UL_START_ADDRL_0 + qno * ADDR_SIZE; + + return !!ioread64(hw_info->ap_pdn_base + offset); +} + +void t7xx_cldma_hw_set_start_addr(struct t7xx_cldma_hw *hw_info, unsigned char qno, u64 address, + enum mtk_txrx tx_rx) +{ + void __iomem *base; + u32 offset = qno * ADDR_SIZE; + + if (tx_rx == MTK_RX) { + base = hw_info->ap_ao_base; + iowrite64(address, base + REG_CLDMA_DL_START_ADDRL_0 + offset); + } else { + base = hw_info->ap_pdn_base; + iowrite64(address, base + REG_CLDMA_UL_START_ADDRL_0 + offset); + } +} + +void t7xx_cldma_hw_resume_queue(struct t7xx_cldma_hw *hw_info, unsigned char qno, + enum mtk_txrx tx_rx) +{ + void __iomem *base = hw_info->ap_pdn_base; + + if (tx_rx == MTK_RX) + iowrite32(BIT(qno), base + REG_CLDMA_DL_RESUME_CMD); + else + iowrite32(BIT(qno), base + REG_CLDMA_UL_RESUME_CMD); +} + +unsigned int t7xx_cldma_hw_queue_status(struct t7xx_cldma_hw *hw_info, unsigned char qno, + enum mtk_txrx tx_rx) +{ + u32 mask, val; + + mask = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno); + + if (tx_rx == MTK_RX) { + val = ioread32(hw_info->ap_ao_base + REG_CLDMA_DL_STATUS); + return val & mask; + } + + val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_UL_STATUS); + return val & mask; +} + +void t7xx_cldma_hw_tx_done(struct t7xx_cldma_hw *hw_info, unsigned int bitmask) +{ + unsigned int ch_id; + + ch_id = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0); + ch_id &= bitmask; + /* Clear the ch IDs in the TX interrupt status register */ + iowrite32(ch_id, hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0); + ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0); +} + +void t7xx_cldma_hw_rx_done(struct t7xx_cldma_hw *hw_info, unsigned int bitmask) +{ + unsigned int ch_id; + + ch_id = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0); + ch_id &= bitmask; + /* Clear the ch IDs in the RX interrupt status register */ + iowrite32(ch_id, hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0); + ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0); +} + +unsigned int t7xx_cldma_hw_int_status(struct t7xx_cldma_hw *hw_info, unsigned int bitmask, + enum mtk_txrx tx_rx) +{ + void __iomem *reg_int_sta; + u32 val; + + reg_int_sta = (tx_rx == MTK_RX) ? 
hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0 : + hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0; + val = ioread32(reg_int_sta); + return val & bitmask; +} + +void t7xx_cldma_hw_irq_dis_txrx(struct t7xx_cldma_hw *hw_info, unsigned char qno, + enum mtk_txrx tx_rx) +{ + void __iomem *reg; + u32 val; + + reg = (tx_rx == MTK_RX) ? hw_info->ap_ao_base + REG_CLDMA_L2RIMSR0 : + hw_info->ap_pdn_base + REG_CLDMA_L2TIMSR0; + val = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno); + iowrite32(val, reg); +} + +void t7xx_cldma_hw_irq_dis_eq(struct t7xx_cldma_hw *hw_info, unsigned char qno, enum mtk_txrx tx_rx) +{ + void __iomem *reg; + u32 val; + + reg = (tx_rx == MTK_RX) ? hw_info->ap_ao_base + REG_CLDMA_L2RIMSR0 : + hw_info->ap_pdn_base + REG_CLDMA_L2TIMSR0; + + val = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno); + iowrite32(val << EQ_STA_BIT_OFFSET, reg); +} + +void t7xx_cldma_hw_irq_en_txrx(struct t7xx_cldma_hw *hw_info, unsigned char qno, + enum mtk_txrx tx_rx) +{ + void __iomem *reg; + u32 val; + + reg = (tx_rx == MTK_RX) ? hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0 : + hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0; + val = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno); + iowrite32(val, reg); +} + +void t7xx_cldma_hw_irq_en_eq(struct t7xx_cldma_hw *hw_info, unsigned char qno, enum mtk_txrx tx_rx) +{ + void __iomem *reg; + u32 val; + + reg = (tx_rx == MTK_RX) ? hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0 : + hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0; + val = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno); + iowrite32(val << EQ_STA_BIT_OFFSET, reg); +} + +/** + * t7xx_cldma_hw_init() - Initialize CLDMA HW. + * @hw_info: Pointer to struct t7xx_cldma_hw. + * + * Write uplink and downlink configuration to CLDMA HW. + */ +void t7xx_cldma_hw_init(struct t7xx_cldma_hw *hw_info) +{ + u32 ul_cfg, dl_cfg; + + ul_cfg = ioread32(hw_info->ap_pdn_base + REG_CLDMA_UL_CFG); + dl_cfg = ioread32(hw_info->ap_ao_base + REG_CLDMA_DL_CFG); + /* Configure the DRAM address mode */ + ul_cfg &= ~UL_CFG_BIT_MODE_MASK; + dl_cfg &= ~DL_CFG_BIT_MODE_MASK; + + if (hw_info->hw_mode == MODE_BIT_64) { + ul_cfg |= UL_CFG_BIT_MODE_64; + dl_cfg |= DL_CFG_BIT_MODE_64; + } else if (hw_info->hw_mode == MODE_BIT_40) { + ul_cfg |= UL_CFG_BIT_MODE_40; + dl_cfg |= DL_CFG_BIT_MODE_40; + } else if (hw_info->hw_mode == MODE_BIT_36) { + ul_cfg |= UL_CFG_BIT_MODE_36; + dl_cfg |= DL_CFG_BIT_MODE_36; + } + + iowrite32(ul_cfg, hw_info->ap_pdn_base + REG_CLDMA_UL_CFG); + dl_cfg |= DL_CFG_UP_HW_LAST; + iowrite32(dl_cfg, hw_info->ap_ao_base + REG_CLDMA_DL_CFG); + iowrite32(0, hw_info->ap_ao_base + REG_CLDMA_INT_MASK); + iowrite32(BUSY_MASK_MD, hw_info->ap_ao_base + REG_CLDMA_BUSY_MASK); + iowrite32(UL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_UL_MEM); + iowrite32(DL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_DL_MEM); +} + +void t7xx_cldma_hw_stop_queue(struct t7xx_cldma_hw *hw_info, u8 qno, enum mtk_txrx tx_rx) +{ + void __iomem *reg_stop_cmd; + u32 val; + + reg_stop_cmd = (tx_rx == MTK_RX) ? hw_info->ap_pdn_base + REG_CLDMA_DL_STOP_CMD : + hw_info->ap_pdn_base + REG_CLDMA_UL_STOP_CMD; + val = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno); + iowrite32(val, reg_stop_cmd); +} + +void t7xx_cldma_hw_stop(struct t7xx_cldma_hw *hw_info, enum mtk_txrx tx_rx) +{ + void __iomem *reg_ims; + + reg_ims = (tx_rx == MTK_RX) ? 
hw_info->ap_ao_base + REG_CLDMA_L2RIMSR0 : + hw_info->ap_pdn_base + REG_CLDMA_L2TIMSR0; + iowrite32(TXRX_STATUS_BITMASK, reg_ims); + iowrite32(EMPTY_STATUS_BITMASK, reg_ims); +} diff --git a/drivers/net/wwan/t7xx/t7xx_cldma.h b/drivers/net/wwan/t7xx/t7xx_cldma.h new file mode 100644 index 000000000000..1b5e5bf15a3e --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_cldma.h @@ -0,0 +1,177 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. + * + * Authors: + * Haijun Liu + * Moises Veleta + * Ricardo Martinez + * + * Contributors: + * Amir Hanania + * Andy Shevchenko + * Sreehari Kancharla + */ + +#ifndef __T7XX_CLDMA_H__ +#define __T7XX_CLDMA_H__ + +#include +#include + +#include "t7xx_common.h" + +#define CLDMA_TXQ_NUM 8 +#define CLDMA_RXQ_NUM 8 +#define CLDMA_ALL_Q GENMASK(7, 0) + +/* Interrupt status bits */ +#define EMPTY_STATUS_BITMASK GENMASK(15, 8) +#define TXRX_STATUS_BITMASK GENMASK(7, 0) +#define EQ_STA_BIT_OFFSET 8 +#define L2_INT_BIT_COUNT 16 +#define EQ_STA_BIT(index) (BIT((index) + EQ_STA_BIT_OFFSET) & EMPTY_STATUS_BITMASK) + +#define TQ_ERR_INT_BITMASK GENMASK(23, 16) +#define TQ_ACTIVE_START_ERR_INT_BITMASK GENMASK(31, 24) + +#define RQ_ERR_INT_BITMASK GENMASK(23, 16) +#define RQ_ACTIVE_START_ERR_INT_BITMASK GENMASK(31, 24) + +#define CLDMA0_AO_BASE 0x10049000 +#define CLDMA0_PD_BASE 0x1021d000 +#define CLDMA1_AO_BASE 0x1004b000 +#define CLDMA1_PD_BASE 0x1021f000 + +#define CLDMA_R_AO_BASE 0x10023000 +#define CLDMA_R_PD_BASE 0x1023d000 + +/* CLDMA TX */ +#define REG_CLDMA_UL_START_ADDRL_0 0x0004 +#define REG_CLDMA_UL_START_ADDRH_0 0x0008 +#define REG_CLDMA_UL_CURRENT_ADDRL_0 0x0044 +#define REG_CLDMA_UL_CURRENT_ADDRH_0 0x0048 +#define REG_CLDMA_UL_STATUS 0x0084 +#define CLDMA_INVALID_STATUS GENMASK(31, 0) +#define REG_CLDMA_UL_START_CMD 0x0088 +#define REG_CLDMA_UL_RESUME_CMD 0x008c +#define REG_CLDMA_UL_STOP_CMD 0x0090 +#define REG_CLDMA_UL_ERROR 0x0094 +#define REG_CLDMA_UL_CFG 0x0098 +#define UL_CFG_BIT_MODE_36 BIT(5) +#define UL_CFG_BIT_MODE_40 BIT(6) +#define UL_CFG_BIT_MODE_64 BIT(7) +#define UL_CFG_BIT_MODE_MASK GENMASK(7, 5) + +#define REG_CLDMA_UL_MEM 0x009c +#define UL_MEM_CHECK_DIS BIT(0) + +/* CLDMA RX */ +#define REG_CLDMA_DL_START_CMD 0x05bc +#define REG_CLDMA_DL_RESUME_CMD 0x05c0 +#define REG_CLDMA_DL_STOP_CMD 0x05c4 +#define REG_CLDMA_DL_MEM 0x0508 +#define DL_MEM_CHECK_DIS BIT(0) + +#define REG_CLDMA_DL_CFG 0x0404 +#define DL_CFG_UP_HW_LAST BIT(2) +#define DL_CFG_BIT_MODE_36 BIT(10) +#define DL_CFG_BIT_MODE_40 BIT(11) +#define DL_CFG_BIT_MODE_64 BIT(12) +#define DL_CFG_BIT_MODE_MASK GENMASK(12, 10) + +#define REG_CLDMA_DL_START_ADDRL_0 0x0478 +#define REG_CLDMA_DL_START_ADDRH_0 0x047c +#define REG_CLDMA_DL_CURRENT_ADDRL_0 0x04b8 +#define REG_CLDMA_DL_CURRENT_ADDRH_0 0x04bc +#define REG_CLDMA_DL_STATUS 0x04f8 + +/* CLDMA MISC */ +#define REG_CLDMA_L2TISAR0 0x0810 +#define REG_CLDMA_L2TISAR1 0x0814 +#define REG_CLDMA_L2TIMR0 0x0818 +#define REG_CLDMA_L2TIMR1 0x081c +#define REG_CLDMA_L2TIMCR0 0x0820 +#define REG_CLDMA_L2TIMCR1 0x0824 +#define REG_CLDMA_L2TIMSR0 0x0828 +#define REG_CLDMA_L2TIMSR1 0x082c +#define REG_CLDMA_L3TISAR0 0x0830 +#define REG_CLDMA_L3TISAR1 0x0834 +#define REG_CLDMA_L2RISAR0 0x0850 +#define REG_CLDMA_L2RISAR1 0x0854 +#define REG_CLDMA_L3RISAR0 0x0870 +#define REG_CLDMA_L3RISAR1 0x0874 +#define REG_CLDMA_IP_BUSY 0x08b4 +#define IP_BUSY_WAKEUP BIT(0) +#define CLDMA_L2TISAR0_ALL_INT_MASK GENMASK(15, 0) +#define CLDMA_L2RISAR0_ALL_INT_MASK GENMASK(15, 0) + +/* CLDMA 
MISC */ +#define REG_CLDMA_L2RIMR0 0x0858 +#define REG_CLDMA_L2RIMR1 0x085c +#define REG_CLDMA_L2RIMCR0 0x0860 +#define REG_CLDMA_L2RIMCR1 0x0864 +#define REG_CLDMA_L2RIMSR0 0x0868 +#define REG_CLDMA_L2RIMSR1 0x086c +#define REG_CLDMA_BUSY_MASK 0x0954 +#define BUSY_MASK_PCIE BIT(0) +#define BUSY_MASK_AP BIT(1) +#define BUSY_MASK_MD BIT(2) + +#define REG_CLDMA_INT_MASK 0x0960 + +/* CLDMA RESET */ +#define REG_INFRA_RST4_SET 0x0730 +#define RST4_CLDMA1_SW_RST_SET BIT(20) + +#define REG_INFRA_RST4_CLR 0x0734 +#define RST4_CLDMA1_SW_RST_CLR BIT(20) + +#define REG_INFRA_RST2_SET 0x0140 +#define RST2_PMIC_SW_RST_SET BIT(18) + +#define REG_INFRA_RST2_CLR 0x0144 +#define RST2_PMIC_SW_RST_CLR BIT(18) + +enum t7xx_hw_mode { + MODE_BIT_32, + MODE_BIT_36, + MODE_BIT_40, + MODE_BIT_64, +}; + +struct t7xx_cldma_hw { + enum t7xx_hw_mode hw_mode; + void __iomem *ap_ao_base; + void __iomem *ap_pdn_base; + u32 phy_interrupt_id; +}; + +void t7xx_cldma_hw_irq_dis_txrx(struct t7xx_cldma_hw *hw_info, unsigned char qno, + enum mtk_txrx tx_rx); +void t7xx_cldma_hw_irq_dis_eq(struct t7xx_cldma_hw *hw_info, unsigned char qno, + enum mtk_txrx tx_rx); +void t7xx_cldma_hw_irq_en_txrx(struct t7xx_cldma_hw *hw_info, unsigned char qno, + enum mtk_txrx tx_rx); +void t7xx_cldma_hw_irq_en_eq(struct t7xx_cldma_hw *hw_info, unsigned char qno, enum mtk_txrx tx_rx); +unsigned int t7xx_cldma_hw_queue_status(struct t7xx_cldma_hw *hw_info, unsigned char qno, + enum mtk_txrx tx_rx); +void t7xx_cldma_hw_init(struct t7xx_cldma_hw *hw_info); +void t7xx_cldma_hw_resume_queue(struct t7xx_cldma_hw *hw_info, unsigned char qno, + enum mtk_txrx tx_rx); +void t7xx_cldma_hw_start(struct t7xx_cldma_hw *hw_info); +void t7xx_cldma_hw_start_queue(struct t7xx_cldma_hw *hw_info, u8 qno, enum mtk_txrx tx_rx); +void t7xx_cldma_hw_tx_done(struct t7xx_cldma_hw *hw_info, unsigned int bitmask); +void t7xx_cldma_hw_rx_done(struct t7xx_cldma_hw *hw_info, unsigned int bitmask); +void t7xx_cldma_hw_stop_queue(struct t7xx_cldma_hw *hw_info, u8 qno, enum mtk_txrx tx_rx); +void t7xx_cldma_hw_set_start_addr(struct t7xx_cldma_hw *hw_info, + unsigned char qno, u64 address, enum mtk_txrx tx_rx); +void t7xx_cldma_hw_reset(void __iomem *ao_base); +void t7xx_cldma_hw_stop(struct t7xx_cldma_hw *hw_info, enum mtk_txrx tx_rx); +unsigned int t7xx_cldma_hw_int_status(struct t7xx_cldma_hw *hw_info, unsigned int bitmask, + enum mtk_txrx tx_rx); +void t7xx_cldma_hw_restore(struct t7xx_cldma_hw *hw_info); +void t7xx_cldma_clear_ip_busy(struct t7xx_cldma_hw *hw_info); +bool t7xx_cldma_tx_addr_is_set(struct t7xx_cldma_hw *hw_info, unsigned char qno); +#endif diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c new file mode 100644 index 000000000000..300ffbcc4735 --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c @@ -0,0 +1,1328 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ * + * Authors: + * Amir Hanania + * Haijun Liu + * Moises Veleta + * Ricardo Martinez + * Sreehari Kancharla + * + * Contributors: + * Andy Shevchenko + * Chiranjeevi Rapolu + * Eliot Lee + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "t7xx_cldma.h" +#include "t7xx_common.h" +#include "t7xx_hif_cldma.h" +#include "t7xx_mhccif.h" +#include "t7xx_modem_ops.h" +#include "t7xx_pci.h" +#include "t7xx_pcie_mac.h" +#include "t7xx_reg.h" +#include "t7xx_state_monitor.h" + +#define MAX_TX_BUDGET 16 +#define MAX_RX_BUDGET 16 + +#define CHECK_Q_STOP_TIMEOUT_US 1000000 +#define CHECK_Q_STOP_STEP_US 10000 + +static inline void md_cd_queue_struct_reset(struct cldma_queue *queue, struct cldma_ctrl *md_ctrl, + enum mtk_txrx tx_rx, unsigned char index) +{ + queue->dir = tx_rx; + queue->index = index; + queue->md_ctrl = md_ctrl; + queue->tr_ring = NULL; + queue->tr_done = NULL; + queue->tx_xmit = NULL; +} + +static inline void md_cd_queue_struct_init(struct cldma_queue *queue, struct cldma_ctrl *md_ctrl, + enum mtk_txrx tx_rx, unsigned char index) +{ + md_cd_queue_struct_reset(queue, md_ctrl, tx_rx, index); + init_waitqueue_head(&queue->req_wq); + spin_lock_init(&queue->ring_lock); +} + +static inline void t7xx_cldma_tgpd_set_data_ptr(struct cldma_tgpd *tgpd, dma_addr_t data_ptr) +{ + tgpd->data_buff_bd_ptr_h = cpu_to_le32(upper_32_bits(data_ptr)); + tgpd->data_buff_bd_ptr_l = cpu_to_le32(lower_32_bits(data_ptr)); +} + +static inline void t7xx_cldma_tgpd_set_next_ptr(struct cldma_tgpd *tgpd, dma_addr_t next_ptr) +{ + tgpd->next_gpd_ptr_h = cpu_to_le32(upper_32_bits(next_ptr)); + tgpd->next_gpd_ptr_l = cpu_to_le32(lower_32_bits(next_ptr)); +} + +static inline void t7xx_cldma_rgpd_set_data_ptr(struct cldma_rgpd *rgpd, dma_addr_t data_ptr) +{ + rgpd->data_buff_bd_ptr_h = cpu_to_le32(upper_32_bits(data_ptr)); + rgpd->data_buff_bd_ptr_l = cpu_to_le32(lower_32_bits(data_ptr)); +} + +static inline void t7xx_cldma_rgpd_set_next_ptr(struct cldma_rgpd *rgpd, dma_addr_t next_ptr) +{ + rgpd->next_gpd_ptr_h = cpu_to_le32(upper_32_bits(next_ptr)); + rgpd->next_gpd_ptr_l = cpu_to_le32(lower_32_bits(next_ptr)); +} + +static struct cldma_request *t7xx_cldma_ring_step_forward(struct cldma_ring *ring, + struct cldma_request *req) +{ + if (req->entry.next == &ring->gpd_ring) + return list_first_entry(&ring->gpd_ring, struct cldma_request, entry); + + return list_next_entry(req, entry); +} + +static struct cldma_request *t7xx_cldma_ring_step_backward(struct cldma_ring *ring, + struct cldma_request *req) +{ + if (req->entry.prev == &ring->gpd_ring) + return list_last_entry(&ring->gpd_ring, struct cldma_request, entry); + + return list_prev_entry(req, entry); +} + +static int t7xx_cldma_alloc_and_map_skb(struct cldma_ctrl *md_ctrl, struct cldma_request *req, + size_t size) +{ + req->skb = __dev_alloc_skb(size, GFP_KERNEL); + if (!req->skb) + return -ENOMEM; + + req->mapped_buff = dma_map_single(md_ctrl->dev, req->skb->data, + t7xx_skb_data_size(req->skb), DMA_FROM_DEVICE); + if (dma_mapping_error(md_ctrl->dev, req->mapped_buff)) { + dev_err(md_ctrl->dev, "DMA mapping failed\n"); + dev_kfree_skb_any(req->skb); + req->skb = NULL; + req->mapped_buff = 0; + return -ENOMEM; + } + + return 0; +} + +static int t7xx_cldma_gpd_rx_from_queue(struct cldma_queue *queue, int budget, bool *over_budget) +{ + struct 
cldma_ctrl *md_ctrl = queue->md_ctrl; + unsigned char hwo_polling_count = 0; + struct t7xx_cldma_hw *hw_info; + bool rx_not_done = true; + int count = 0; + + hw_info = &md_ctrl->hw_info; + + do { + struct cldma_request *req; + struct cldma_rgpd *rgpd; + struct sk_buff *skb; + int ret; + + req = queue->tr_done; + if (!req) + return -ENODATA; + + rgpd = req->gpd; + if ((rgpd->gpd_flags & GPD_FLAGS_HWO) || !req->skb) { + dma_addr_t gpd_addr; + + if (!pci_device_is_present(to_pci_dev(md_ctrl->dev))) { + dev_err(md_ctrl->dev, "PCIe Link disconnected\n"); + return -ENODEV; + } + + gpd_addr = ioread64(hw_info->ap_pdn_base + REG_CLDMA_DL_CURRENT_ADDRL_0 + + queue->index * sizeof(u64)); + if (req->gpd_addr == gpd_addr || hwo_polling_count++ >= 100) + return 0; + + udelay(1); + continue; + } + + hwo_polling_count = 0; + skb = req->skb; + + if (req->mapped_buff) { + dma_unmap_single(md_ctrl->dev, req->mapped_buff, + t7xx_skb_data_size(skb), DMA_FROM_DEVICE); + req->mapped_buff = 0; + } + + skb->len = 0; + skb_reset_tail_pointer(skb); + skb_put(skb, le16_to_cpu(rgpd->data_buff_len)); + + ret = md_ctrl->recv_skb(queue, skb); + if (ret < 0) + return ret; + + req->skb = NULL; + t7xx_cldma_rgpd_set_data_ptr(rgpd, 0); + queue->tr_done = t7xx_cldma_ring_step_forward(queue->tr_ring, req); + req = queue->rx_refill; + + ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size); + if (ret) + return ret; + + rgpd = req->gpd; + t7xx_cldma_rgpd_set_data_ptr(rgpd, req->mapped_buff); + rgpd->data_buff_len = 0; + rgpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO; + queue->rx_refill = t7xx_cldma_ring_step_forward(queue->tr_ring, req); + rx_not_done = ++count < budget || !need_resched(); + } while (rx_not_done); + + *over_budget = true; + return 0; +} + +static int t7xx_cldma_gpd_rx_collect(struct cldma_queue *queue, int budget) +{ + struct cldma_ctrl *md_ctrl = queue->md_ctrl; + bool rx_not_done, over_budget = false; + struct t7xx_cldma_hw *hw_info; + unsigned int l2_rx_int; + unsigned long flags; + int ret; + + hw_info = &md_ctrl->hw_info; + + do { + rx_not_done = false; + + ret = t7xx_cldma_gpd_rx_from_queue(queue, budget, &over_budget); + if (ret == -ENODATA) + return 0; + else if (ret) + return ret; + + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + if (md_ctrl->rxq_active & BIT(queue->index)) { + if (!t7xx_cldma_hw_queue_status(hw_info, queue->index, MTK_RX)) + t7xx_cldma_hw_resume_queue(hw_info, queue->index, MTK_RX); + + l2_rx_int = t7xx_cldma_hw_int_status(hw_info, BIT(queue->index), MTK_RX); + if (l2_rx_int) { + t7xx_cldma_hw_rx_done(hw_info, l2_rx_int); + + if (over_budget) { + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); + return -EAGAIN; + } + + rx_not_done = true; + } + } + + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); + } while (rx_not_done); + + return 0; +} + +static void t7xx_cldma_rx_done(struct work_struct *work) +{ + struct cldma_queue *queue = container_of(work, struct cldma_queue, cldma_work); + struct cldma_ctrl *md_ctrl = queue->md_ctrl; + int value; + + value = t7xx_cldma_gpd_rx_collect(queue, queue->budget); + if (value && md_ctrl->rxq_active & BIT(queue->index)) { + queue_work(queue->worker, &queue->cldma_work); + return; + } + + t7xx_cldma_clear_ip_busy(&md_ctrl->hw_info); + t7xx_cldma_hw_irq_en_txrx(&md_ctrl->hw_info, queue->index, MTK_RX); + t7xx_cldma_hw_irq_en_eq(&md_ctrl->hw_info, queue->index, MTK_RX); +} + +static int t7xx_cldma_gpd_tx_collect(struct cldma_queue *queue) +{ + struct cldma_ctrl *md_ctrl = queue->md_ctrl; + unsigned int dma_len, count 
= 0; + struct cldma_request *req; + struct sk_buff *skb_free; + struct cldma_tgpd *tgpd; + unsigned long flags; + dma_addr_t dma_free; + + while (!kthread_should_stop()) { + spin_lock_irqsave(&queue->ring_lock, flags); + req = queue->tr_done; + if (!req) { + spin_unlock_irqrestore(&queue->ring_lock, flags); + break; + } + + tgpd = req->gpd; + if ((tgpd->gpd_flags & GPD_FLAGS_HWO) || !req->skb) { + spin_unlock_irqrestore(&queue->ring_lock, flags); + break; + } + + queue->budget++; + dma_free = req->mapped_buff; + dma_len = le16_to_cpu(tgpd->data_buff_len); + skb_free = req->skb; + req->skb = NULL; + queue->tr_done = t7xx_cldma_ring_step_forward(queue->tr_ring, req); + spin_unlock_irqrestore(&queue->ring_lock, flags); + count++; + dma_unmap_single(md_ctrl->dev, dma_free, dma_len, DMA_TO_DEVICE); + dev_kfree_skb_any(skb_free); + } + + if (count) + wake_up_nr(&queue->req_wq, count); + + return count; +} + +static void t7xx_cldma_txq_empty_hndl(struct cldma_queue *queue) +{ + struct cldma_ctrl *md_ctrl = queue->md_ctrl; + struct cldma_request *req; + struct cldma_tgpd *tgpd; + dma_addr_t ul_curr_addr; + unsigned long flags; + bool pending_gpd; + + if (!(md_ctrl->txq_active & BIT(queue->index))) + return; + + spin_lock_irqsave(&queue->ring_lock, flags); + req = t7xx_cldma_ring_step_backward(queue->tr_ring, queue->tx_xmit); + tgpd = req->gpd; + pending_gpd = (tgpd->gpd_flags & GPD_FLAGS_HWO) && req->skb; + spin_unlock_irqrestore(&queue->ring_lock, flags); + + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + if (pending_gpd) { + struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info; + + /* Check current processing TGPD, 64-bit address is in a table by Q index */ + ul_curr_addr = ioread64(hw_info->ap_pdn_base + REG_CLDMA_UL_CURRENT_ADDRL_0 + + queue->index * sizeof(u64)); + if (req->gpd_addr != ul_curr_addr) { + struct t7xx_fsm_ctl *ctl = queue->md->fsm_ctl; + + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); + dev_err(md_ctrl->dev, "CLDMA%d queue %d is not empty\n", + md_ctrl->hif_id, queue->index); + t7xx_fsm_append_cmd(ctl, FSM_CMD_RECOVER, 0); + return; + } + + t7xx_cldma_hw_resume_queue(hw_info, queue->index, MTK_TX); + } + + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); +} + +static void t7xx_cldma_tx_done(struct work_struct *work) +{ + struct cldma_queue *queue = container_of(work, struct cldma_queue, cldma_work); + struct cldma_ctrl *md_ctrl = queue->md_ctrl; + struct t7xx_cldma_hw *hw_info; + unsigned int l2_tx_int; + unsigned long flags; + + hw_info = &md_ctrl->hw_info; + t7xx_cldma_gpd_tx_collect(queue); + l2_tx_int = t7xx_cldma_hw_int_status(hw_info, BIT(queue->index) | EQ_STA_BIT(queue->index), + MTK_TX); + if (l2_tx_int & EQ_STA_BIT(queue->index)) { + t7xx_cldma_hw_tx_done(hw_info, EQ_STA_BIT(queue->index)); + t7xx_cldma_txq_empty_hndl(queue); + } + + if (l2_tx_int & BIT(queue->index)) { + t7xx_cldma_hw_tx_done(hw_info, BIT(queue->index)); + queue_work(queue->worker, &queue->cldma_work); + return; + } + + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + if (md_ctrl->txq_active & BIT(queue->index)) { + t7xx_cldma_clear_ip_busy(hw_info); + t7xx_cldma_hw_irq_en_eq(hw_info, queue->index, MTK_TX); + t7xx_cldma_hw_irq_en_txrx(hw_info, queue->index, MTK_TX); + } + + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); +} + +static void t7xx_cldma_ring_free(struct cldma_ctrl *md_ctrl, + struct cldma_ring *ring, enum dma_data_direction tx_rx) +{ + struct cldma_request *req_cur, *req_next; + + list_for_each_entry_safe(req_cur, req_next, &ring->gpd_ring, entry) { + if 
(req_cur->mapped_buff && req_cur->skb) { + dma_unmap_single(md_ctrl->dev, req_cur->mapped_buff, + t7xx_skb_data_size(req_cur->skb), tx_rx); + req_cur->mapped_buff = 0; + } + + dev_kfree_skb_any(req_cur->skb); + + if (req_cur->gpd) + dma_pool_free(md_ctrl->gpd_dmapool, req_cur->gpd, req_cur->gpd_addr); + + list_del(&req_cur->entry); + kfree_sensitive(req_cur); + } +} + +static struct cldma_request *t7xx_alloc_rx_request(struct cldma_ctrl *md_ctrl, size_t pkt_size) +{ + struct cldma_request *item; + int val; + + item = kzalloc(sizeof(*item), GFP_KERNEL); + if (!item) + return NULL; + + item->gpd = dma_pool_zalloc(md_ctrl->gpd_dmapool, GFP_KERNEL, &item->gpd_addr); + if (!item->gpd) + goto err_free_req; + + val = t7xx_cldma_alloc_and_map_skb(md_ctrl, item, pkt_size); + if (val) + goto err_free_pool; + + return item; + +err_free_pool: + dma_pool_free(md_ctrl->gpd_dmapool, item->gpd, item->gpd_addr); + +err_free_req: + kfree(item); + + return NULL; +} + +static int t7xx_cldma_rx_ring_init(struct cldma_ctrl *md_ctrl, struct cldma_ring *ring) +{ + struct cldma_request *item, *first_item = NULL; + struct cldma_rgpd *prev_gpd, *gpd = NULL; + int i; + + for (i = 0; i < ring->length; i++) { + item = t7xx_alloc_rx_request(md_ctrl, ring->pkt_size); + if (!item) { + t7xx_cldma_ring_free(md_ctrl, ring, DMA_FROM_DEVICE); + return -ENOMEM; + } + + gpd = item->gpd; + t7xx_cldma_rgpd_set_data_ptr(gpd, item->mapped_buff); + gpd->data_allow_len = cpu_to_le16(ring->pkt_size); + gpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO; + + if (i) + t7xx_cldma_rgpd_set_next_ptr(prev_gpd, item->gpd_addr); + else + first_item = item; + + INIT_LIST_HEAD(&item->entry); + list_add_tail(&item->entry, &ring->gpd_ring); + prev_gpd = gpd; + } + + if (first_item) + t7xx_cldma_rgpd_set_next_ptr(gpd, first_item->gpd_addr); + + return 0; +} + +static struct cldma_request *t7xx_alloc_tx_request(struct cldma_ctrl *md_ctrl) +{ + struct cldma_request *item; + + item = kzalloc(sizeof(*item), GFP_KERNEL); + if (!item) + return NULL; + + item->gpd = dma_pool_zalloc(md_ctrl->gpd_dmapool, GFP_KERNEL, &item->gpd_addr); + if (!item->gpd) { + kfree(item); + return NULL; + } + + return item; +} + +static int t7xx_cldma_tx_ring_init(struct cldma_ctrl *md_ctrl, struct cldma_ring *ring) +{ + struct cldma_request *item, *first_item = NULL; + struct cldma_tgpd *tgpd, *prev_gpd; + int i; + + for (i = 0; i < ring->length; i++) { + item = t7xx_alloc_tx_request(md_ctrl); + if (!item) { + t7xx_cldma_ring_free(md_ctrl, ring, DMA_TO_DEVICE); + return -ENOMEM; + } + + tgpd = item->gpd; + tgpd->gpd_flags = GPD_FLAGS_IOC; + + if (!first_item) + first_item = item; + else + t7xx_cldma_tgpd_set_next_ptr(prev_gpd, item->gpd_addr); + + INIT_LIST_HEAD(&item->entry); + list_add_tail(&item->entry, &ring->gpd_ring); + prev_gpd = tgpd; + } + + if (first_item) + t7xx_cldma_tgpd_set_next_ptr(tgpd, first_item->gpd_addr); + + return 0; +} + +/** + * t7xx_cldma_queue_reset() - Reset CLDMA request pointers to their initial values. + * @queue: Pointer to the queue structure. 
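+ *
+ * Point tr_done and the direction-specific cursor (tx_xmit for TX,
+ * rx_refill for RX) back to the first request in the GPD ring, and
+ * restore the queue budget to the full ring length.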
+ * + */ +static void t7xx_cldma_queue_reset(struct cldma_queue *queue) +{ + struct cldma_request *req; + + req = list_first_entry(&queue->tr_ring->gpd_ring, struct cldma_request, entry); + queue->tr_done = req; + queue->budget = queue->tr_ring->length; + + if (queue->dir == MTK_TX) + queue->tx_xmit = req; + else + queue->rx_refill = req; +} + +static void t7xx_cldma_rx_queue_init(struct cldma_queue *queue) +{ + struct cldma_ctrl *md_ctrl = queue->md_ctrl; + + queue->dir = MTK_RX; + queue->tr_ring = &md_ctrl->rx_ring[queue->index]; + t7xx_cldma_queue_reset(queue); +} + +static void t7xx_cldma_tx_queue_init(struct cldma_queue *queue) +{ + struct cldma_ctrl *md_ctrl = queue->md_ctrl; + + queue->dir = MTK_TX; + queue->tr_ring = &md_ctrl->tx_ring[queue->index]; + t7xx_cldma_queue_reset(queue); +} + +static inline void t7xx_cldma_enable_irq(struct cldma_ctrl *md_ctrl) +{ + t7xx_pcie_mac_set_int(md_ctrl->t7xx_dev, md_ctrl->hw_info.phy_interrupt_id); +} + +static inline void t7xx_cldma_disable_irq(struct cldma_ctrl *md_ctrl) +{ + t7xx_pcie_mac_clear_int(md_ctrl->t7xx_dev, md_ctrl->hw_info.phy_interrupt_id); +} + +static void t7xx_cldma_irq_work_cb(struct cldma_ctrl *md_ctrl) +{ + u32 l2_tx_int_msk, l2_rx_int_msk, l2_tx_int, l2_rx_int, val; + struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info; + int i; + + /* L2 raw interrupt status */ + l2_tx_int = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0); + l2_rx_int = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0); + l2_tx_int_msk = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TIMR0); + l2_rx_int_msk = ioread32(hw_info->ap_ao_base + REG_CLDMA_L2RIMR0); + l2_tx_int &= ~l2_tx_int_msk; + l2_rx_int &= ~l2_rx_int_msk; + + if (l2_tx_int) { + if (l2_tx_int & (TQ_ERR_INT_BITMASK | TQ_ACTIVE_START_ERR_INT_BITMASK)) { + /* Read and clear L3 TX interrupt status */ + val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3TISAR0); + iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3TISAR0); + val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3TISAR1); + iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3TISAR1); + } + + t7xx_cldma_hw_tx_done(hw_info, l2_tx_int); + if (l2_tx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) { + for_each_set_bit(i, (unsigned long *)&l2_tx_int, L2_INT_BIT_COUNT) { + if (i < CLDMA_TXQ_NUM) { + t7xx_cldma_hw_irq_dis_eq(hw_info, i, MTK_TX); + t7xx_cldma_hw_irq_dis_txrx(hw_info, i, MTK_TX); + queue_work(md_ctrl->txq[i].worker, + &md_ctrl->txq[i].cldma_work); + } else { + t7xx_cldma_txq_empty_hndl(&md_ctrl->txq[i - CLDMA_TXQ_NUM]); + } + } + } + } + + if (l2_rx_int) { + if (l2_rx_int & (RQ_ERR_INT_BITMASK | RQ_ACTIVE_START_ERR_INT_BITMASK)) { + /* Read and clear L3 RX interrupt status */ + val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3RISAR0); + iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3RISAR0); + val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3RISAR1); + iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3RISAR1); + } + + t7xx_cldma_hw_rx_done(hw_info, l2_rx_int); + if (l2_rx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) { + l2_rx_int |= l2_rx_int >> CLDMA_RXQ_NUM; + for_each_set_bit(i, (unsigned long *)&l2_rx_int, CLDMA_RXQ_NUM) { + t7xx_cldma_hw_irq_dis_eq(hw_info, i, MTK_RX); + t7xx_cldma_hw_irq_dis_txrx(hw_info, i, MTK_RX); + queue_work(md_ctrl->rxq[i].worker, &md_ctrl->rxq[i].cldma_work); + } + } + } +} + +static bool t7xx_cldma_queues_active(struct t7xx_cldma_hw *hw_info) +{ + unsigned int tx_active; + unsigned int rx_active; + + tx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_TX); + rx_active = 
t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_RX); + if (tx_active == CLDMA_INVALID_STATUS || rx_active == CLDMA_INVALID_STATUS) + return false; + + return tx_active || rx_active; +} + +/** + * t7xx_cldma_stop() - Stop CLDMA. + * @md_ctrl: CLDMA context structure. + * + * Stop TX and RX queues. Disable L1 and L2 interrupts. + * Clear status registers. + * + * Return: + * * 0 - Success. + * * -ERROR - Error code from polling cldma_queues_active. + */ +int t7xx_cldma_stop(struct cldma_ctrl *md_ctrl) +{ + struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info; + bool active; + int i, ret; + + md_ctrl->rxq_active = 0; + t7xx_cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, MTK_RX); + md_ctrl->txq_active = 0; + t7xx_cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, MTK_TX); + md_ctrl->txq_started = 0; + t7xx_cldma_disable_irq(md_ctrl); + t7xx_cldma_hw_stop(hw_info, MTK_RX); + t7xx_cldma_hw_stop(hw_info, MTK_TX); + t7xx_cldma_hw_tx_done(hw_info, CLDMA_L2TISAR0_ALL_INT_MASK); + t7xx_cldma_hw_rx_done(hw_info, CLDMA_L2RISAR0_ALL_INT_MASK); + + if (md_ctrl->is_late_init) { + for (i = 0; i < CLDMA_TXQ_NUM; i++) + flush_work(&md_ctrl->txq[i].cldma_work); + + for (i = 0; i < CLDMA_RXQ_NUM; i++) + flush_work(&md_ctrl->rxq[i].cldma_work); + } + + ret = read_poll_timeout(t7xx_cldma_queues_active, active, !active, CHECK_Q_STOP_STEP_US, + CHECK_Q_STOP_TIMEOUT_US, true, hw_info); + if (ret) + dev_err(md_ctrl->dev, "Could not stop CLDMA%d queues", md_ctrl->hif_id); + + return ret; +} + +static void t7xx_cldma_late_release(struct cldma_ctrl *md_ctrl) +{ + int i; + + if (!md_ctrl->is_late_init) + return; + + for (i = 0; i < CLDMA_TXQ_NUM; i++) + t7xx_cldma_ring_free(md_ctrl, &md_ctrl->tx_ring[i], DMA_TO_DEVICE); + + for (i = 0; i < CLDMA_RXQ_NUM; i++) + t7xx_cldma_ring_free(md_ctrl, &md_ctrl->rx_ring[i], DMA_FROM_DEVICE); + + dma_pool_destroy(md_ctrl->gpd_dmapool); + md_ctrl->gpd_dmapool = NULL; + md_ctrl->is_late_init = false; +} + +void t7xx_cldma_reset(struct cldma_ctrl *md_ctrl) +{ + struct t7xx_modem *md = md_ctrl->t7xx_dev->md; + unsigned long flags; + int i; + + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + md_ctrl->txq_active = 0; + md_ctrl->rxq_active = 0; + t7xx_cldma_disable_irq(md_ctrl); + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); + for (i = 0; i < CLDMA_TXQ_NUM; i++) { + md_ctrl->txq[i].md = md; + cancel_work_sync(&md_ctrl->txq[i].cldma_work); + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + md_cd_queue_struct_reset(&md_ctrl->txq[i], md_ctrl, MTK_TX, i); + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); + } + + for (i = 0; i < CLDMA_RXQ_NUM; i++) { + md_ctrl->rxq[i].md = md; + cancel_work_sync(&md_ctrl->rxq[i].cldma_work); + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + md_cd_queue_struct_reset(&md_ctrl->rxq[i], md_ctrl, MTK_RX, i); + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); + } + + t7xx_cldma_late_release(md_ctrl); +} + +/** + * t7xx_cldma_start() - Start CLDMA. + * @md_ctrl: CLDMA context structure. + * + * Set TX/RX start address. + * Start all RX queues and enable L2 interrupt. 
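+ * TX queues are not started here; they are started or resumed on demand
+ * by t7xx_cldma_hw_start_send() when a packet is queued for transmission.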
+ */ +void t7xx_cldma_start(struct cldma_ctrl *md_ctrl) +{ + unsigned long flags; + + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + if (md_ctrl->is_late_init) { + struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info; + int i; + + t7xx_cldma_enable_irq(md_ctrl); + + for (i = 0; i < CLDMA_TXQ_NUM; i++) { + if (md_ctrl->txq[i].tr_done) + t7xx_cldma_hw_set_start_addr(hw_info, i, + md_ctrl->txq[i].tr_done->gpd_addr, + MTK_TX); + } + + for (i = 0; i < CLDMA_RXQ_NUM; i++) { + if (md_ctrl->rxq[i].tr_done) + t7xx_cldma_hw_set_start_addr(hw_info, i, + md_ctrl->rxq[i].tr_done->gpd_addr, + MTK_RX); + } + + /* Enable L2 interrupt */ + t7xx_cldma_hw_start_queue(hw_info, CLDMA_ALL_Q, MTK_RX); + t7xx_cldma_hw_start(hw_info); + md_ctrl->txq_started = 0; + md_ctrl->txq_active |= TXRX_STATUS_BITMASK; + md_ctrl->rxq_active |= TXRX_STATUS_BITMASK; + } + + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); +} + +static void t7xx_cldma_clear_txq(struct cldma_ctrl *md_ctrl, int qnum) +{ + struct cldma_queue *txq = &md_ctrl->txq[qnum]; + struct cldma_request *req; + struct cldma_tgpd *tgpd; + unsigned long flags; + + spin_lock_irqsave(&txq->ring_lock, flags); + t7xx_cldma_queue_reset(txq); + list_for_each_entry(req, &txq->tr_ring->gpd_ring, entry) { + tgpd = req->gpd; + tgpd->gpd_flags &= ~GPD_FLAGS_HWO; + t7xx_cldma_tgpd_set_data_ptr(tgpd, 0); + tgpd->data_buff_len = 0; + dev_kfree_skb_any(req->skb); + req->skb = NULL; + } + + spin_unlock_irqrestore(&txq->ring_lock, flags); +} + +static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum) +{ + struct cldma_queue *rxq = &md_ctrl->rxq[qnum]; + struct cldma_request *req; + struct cldma_rgpd *rgpd; + unsigned long flags; + + spin_lock_irqsave(&rxq->ring_lock, flags); + t7xx_cldma_queue_reset(rxq); + list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) { + rgpd = req->gpd; + rgpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO; + rgpd->data_buff_len = 0; + + if (req->skb) { + req->skb->len = 0; + skb_reset_tail_pointer(req->skb); + } + } + + spin_unlock_irqrestore(&rxq->ring_lock, flags); + list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) { + int ret; + + if (req->skb) + continue; + + ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size); + if (ret) + return ret; + + t7xx_cldma_rgpd_set_data_ptr(req->gpd, req->mapped_buff); + } + + return 0; +} + +static void t7xx_cldma_clear_all_queue(struct cldma_ctrl *md_ctrl, enum mtk_txrx tx_rx) +{ + int i; + + if (tx_rx == MTK_TX) { + for (i = 0; i < CLDMA_TXQ_NUM; i++) + t7xx_cldma_clear_txq(md_ctrl, i); + } else { + for (i = 0; i < CLDMA_RXQ_NUM; i++) + t7xx_cldma_clear_rxq(md_ctrl, i); + } +} + +static void t7xx_cldma_stop_queue(struct cldma_ctrl *md_ctrl, unsigned char qno, + enum mtk_txrx tx_rx) +{ + struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info; + unsigned long flags; + + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + if (tx_rx == MTK_RX) { + t7xx_cldma_hw_irq_dis_eq(hw_info, qno, MTK_RX); + t7xx_cldma_hw_irq_dis_txrx(hw_info, qno, MTK_RX); + + if (qno == CLDMA_ALL_Q) + md_ctrl->rxq_active &= ~TXRX_STATUS_BITMASK; + else + md_ctrl->rxq_active &= ~(TXRX_STATUS_BITMASK & BIT(qno)); + + t7xx_cldma_hw_stop_queue(hw_info, qno, MTK_RX); + } else if (tx_rx == MTK_TX) { + t7xx_cldma_hw_irq_dis_eq(hw_info, qno, MTK_TX); + t7xx_cldma_hw_irq_dis_txrx(hw_info, qno, MTK_TX); + + if (qno == CLDMA_ALL_Q) + md_ctrl->txq_active &= ~TXRX_STATUS_BITMASK; + else + md_ctrl->txq_active &= ~(TXRX_STATUS_BITMASK & BIT(qno)); + + t7xx_cldma_hw_stop_queue(hw_info, qno, MTK_TX); + } + + 
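/* The txq_active/rxq_active bookkeeping above is serialized by cldma_lock. */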
spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); +} + +static int t7xx_cldma_gpd_handle_tx_request(struct cldma_queue *queue, struct cldma_request *tx_req, + struct sk_buff *skb) +{ + struct cldma_ctrl *md_ctrl = queue->md_ctrl; + struct cldma_tgpd *tgpd = tx_req->gpd; + unsigned long flags; + + /* Update GPD */ + tx_req->mapped_buff = dma_map_single(md_ctrl->dev, skb->data, skb->len, DMA_TO_DEVICE); + + if (dma_mapping_error(md_ctrl->dev, tx_req->mapped_buff)) { + dev_err(md_ctrl->dev, "DMA mapping failed\n"); + return -ENOMEM; + } + + t7xx_cldma_tgpd_set_data_ptr(tgpd, tx_req->mapped_buff); + tgpd->data_buff_len = cpu_to_le16(skb->len); + + /* This lock must cover TGPD setting, as even without a resume operation, + * CLDMA can send next HWO=1 if last TGPD just finished. + */ + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + if (md_ctrl->txq_active & BIT(queue->index)) + tgpd->gpd_flags |= GPD_FLAGS_HWO; + + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); + tx_req->skb = skb; + return 0; +} + +static void t7xx_cldma_hw_start_send(struct cldma_ctrl *md_ctrl, u8 qno) +{ + struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info; + struct cldma_request *req; + + /* Check whether the device was powered off (CLDMA start address is not set) */ + if (!t7xx_cldma_tx_addr_is_set(hw_info, qno)) { + t7xx_cldma_hw_init(hw_info); + req = t7xx_cldma_ring_step_backward(md_ctrl->txq[qno].tr_ring, + md_ctrl->txq[qno].tx_xmit); + t7xx_cldma_hw_set_start_addr(hw_info, qno, req->gpd_addr, MTK_TX); + md_ctrl->txq_started &= ~BIT(qno); + } + + if (!t7xx_cldma_hw_queue_status(hw_info, qno, MTK_TX)) { + if (md_ctrl->txq_started & BIT(qno)) + t7xx_cldma_hw_resume_queue(hw_info, qno, MTK_TX); + else + t7xx_cldma_hw_start_queue(hw_info, qno, MTK_TX); + + md_ctrl->txq_started |= BIT(qno); + } +} + +int t7xx_cldma_write_room(struct cldma_ctrl *md_ctrl, unsigned char qno) +{ + struct cldma_queue *queue = &md_ctrl->txq[qno]; + + if (!queue) + return -EINVAL; + + if (queue->budget >= MAX_TX_BUDGET) + return queue->budget; + + return 0; +} + +/** + * t7xx_cldma_set_recv_skb() - Set the callback to handle RX packets. + * @md_ctrl: CLDMA context structure. + * @recv_skb: Receiving skb callback. + */ +void t7xx_cldma_set_recv_skb(struct cldma_ctrl *md_ctrl, + int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb)) +{ + md_ctrl->recv_skb = recv_skb; +} + +/** + * t7xx_cldma_send_skb() - Send control data to modem. + * @md_ctrl: CLDMA context structure. + * @qno: Queue number. + * @skb: Socket buffer. + * @blocking: True for blocking operation. + * + * Send control packet to modem using a ring buffer. + * If blocking is set, it will wait for completion. + * + * Return: + * * 0 - Success. + * * -ENOMEM - Allocation failure. + * * -EINVAL - Invalid queue request. + * * -EBUSY - Resource lock failure. 
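+ * * -EBUSY - Also returned when the queue is full and @blocking is false.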
+ */ +int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb, bool blocking) +{ + struct cldma_request *tx_req; + struct cldma_queue *queue; + unsigned long flags; + int ret; + + if (qno >= CLDMA_TXQ_NUM) + return -EINVAL; + + queue = &md_ctrl->txq[qno]; + + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + if (!(md_ctrl->txq_active & BIT(qno))) { + ret = -EBUSY; + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); + goto allow_sleep; + } + + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); + + do { + spin_lock_irqsave(&queue->ring_lock, flags); + tx_req = queue->tx_xmit; + if (queue->budget > 0 && !tx_req->skb) { + queue->budget--; + t7xx_cldma_gpd_handle_tx_request(queue, tx_req, skb); + queue->tx_xmit = t7xx_cldma_ring_step_forward(queue->tr_ring, tx_req); + spin_unlock_irqrestore(&queue->ring_lock, flags); + + /* Protect the access to the modem for queues operations (resume/start) + * which access shared locations by all the queues. + * cldma_lock is independent of ring_lock which is per queue. + */ + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + t7xx_cldma_hw_start_send(md_ctrl, qno); + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); + break; + } + + spin_unlock_irqrestore(&queue->ring_lock, flags); + + if (!t7xx_cldma_hw_queue_status(&md_ctrl->hw_info, qno, MTK_TX)) { + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + t7xx_cldma_hw_resume_queue(&md_ctrl->hw_info, qno, MTK_TX); + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); + } + + if (!blocking) { + ret = -EBUSY; + break; + } + + ret = wait_event_interruptible_exclusive(queue->req_wq, queue->budget > 0); + } while (!ret); + +allow_sleep: + return ret; +} + +static int t7xx_cldma_late_init(struct cldma_ctrl *md_ctrl) +{ + char dma_pool_name[32]; + int i, j, ret; + + if (md_ctrl->is_late_init) { + dev_err(md_ctrl->dev, "CLDMA late init was already done\n"); + return -EALREADY; + } + + snprintf(dma_pool_name, sizeof(dma_pool_name), "cldma_req_hif%d", md_ctrl->hif_id); + + md_ctrl->gpd_dmapool = dma_pool_create(dma_pool_name, md_ctrl->dev, + sizeof(struct cldma_tgpd), GPD_DMAPOOL_ALIGN, 0); + if (!md_ctrl->gpd_dmapool) { + dev_err(md_ctrl->dev, "DMA pool alloc fail\n"); + return -ENOMEM; + } + + for (i = 0; i < CLDMA_TXQ_NUM; i++) { + INIT_LIST_HEAD(&md_ctrl->tx_ring[i].gpd_ring); + md_ctrl->tx_ring[i].length = MAX_TX_BUDGET; + + ret = t7xx_cldma_tx_ring_init(md_ctrl, &md_ctrl->tx_ring[i]); + if (ret) { + dev_err(md_ctrl->dev, "control TX ring init fail\n"); + goto err_free_tx_ring; + } + } + + for (j = 0; j < CLDMA_RXQ_NUM; j++) { + INIT_LIST_HEAD(&md_ctrl->rx_ring[j].gpd_ring); + md_ctrl->rx_ring[j].length = MAX_RX_BUDGET; + md_ctrl->rx_ring[j].pkt_size = MTK_SKB_4K; + + if (j == CLDMA_RXQ_NUM - 1) + md_ctrl->rx_ring[j].pkt_size = MTK_SKB_64K; + + ret = t7xx_cldma_rx_ring_init(md_ctrl, &md_ctrl->rx_ring[j]); + if (ret) { + dev_err(md_ctrl->dev, "Control RX ring init fail\n"); + goto err_free_rx_ring; + } + } + + for (i = 0; i < CLDMA_TXQ_NUM; i++) + t7xx_cldma_tx_queue_init(&md_ctrl->txq[i]); + + for (j = 0; j < CLDMA_RXQ_NUM; j++) + t7xx_cldma_rx_queue_init(&md_ctrl->rxq[j]); + + md_ctrl->is_late_init = true; + return 0; + +err_free_rx_ring: + while (j--) + t7xx_cldma_ring_free(md_ctrl, &md_ctrl->rx_ring[j], DMA_FROM_DEVICE); + +err_free_tx_ring: + while (i--) + t7xx_cldma_ring_free(md_ctrl, &md_ctrl->tx_ring[i], DMA_TO_DEVICE); + + return ret; +} + +static inline void __iomem *pcie_addr_transfer(void __iomem *addr, u32 addr_trs1, u32 phy_addr) +{ + return addr + phy_addr - addr_trs1; 
+} + +static void t7xx_hw_info_init(struct cldma_ctrl *md_ctrl) +{ + struct t7xx_cldma_hw *hw_info; + u32 phy_ao_base, phy_pd_base; + struct t7xx_addr_base *pbase; + + if (md_ctrl->hif_id != ID_CLDMA1) + return; + + phy_ao_base = CLDMA1_AO_BASE; + phy_pd_base = CLDMA1_PD_BASE; + hw_info = &md_ctrl->hw_info; + hw_info->phy_interrupt_id = CLDMA1_INT; + hw_info->hw_mode = MODE_BIT_64; + pbase = &md_ctrl->t7xx_dev->base_addr; + hw_info->ap_ao_base = pcie_addr_transfer(pbase->pcie_ext_reg_base, + pbase->pcie_dev_reg_trsl_addr, phy_ao_base); + hw_info->ap_pdn_base = pcie_addr_transfer(pbase->pcie_ext_reg_base, + pbase->pcie_dev_reg_trsl_addr, phy_pd_base); +} + +static int t7xx_cldma_default_recv_skb(struct cldma_queue *queue, struct sk_buff *skb) +{ + dev_kfree_skb_any(skb); + return 0; +} + +int t7xx_cldma_alloc(enum cldma_id hif_id, struct t7xx_pci_dev *t7xx_dev) +{ + struct device *dev = &t7xx_dev->pdev->dev; + struct cldma_ctrl *md_ctrl; + + md_ctrl = devm_kzalloc(dev, sizeof(*md_ctrl), GFP_KERNEL); + if (!md_ctrl) + return -ENOMEM; + + md_ctrl->t7xx_dev = t7xx_dev; + md_ctrl->dev = dev; + md_ctrl->hif_id = hif_id; + md_ctrl->recv_skb = t7xx_cldma_default_recv_skb; + t7xx_hw_info_init(md_ctrl); + t7xx_dev->md->md_ctrl[hif_id] = md_ctrl; + return 0; +} + +/** + * t7xx_cldma_exception() - CLDMA exception handler. + * @md_ctrl: CLDMA context structure. + * @stage: exception stage. + * + * Part of the modem exception recovery. + * Stages are one after the other as describe below: + * HIF_EX_INIT: Disable and clear TXQ. + * HIF_EX_CLEARQ_DONE: Disable RX, flush TX/RX workqueues and clear RX. + * HIF_EX_ALLQ_RESET: HW is back in safe mode for re-initialization and restart. + */ + +/* + * Modem Exception Handshake Flow + * + * Modem HW Exception interrupt received + * (MD_IRQ_CCIF_EX) + * | + * +---------v--------+ + * | HIF_EX_INIT | : Disable and clear TXQ + * +------------------+ + * | + * +---------v--------+ + * | HIF_EX_INIT_DONE | : Wait for the init to be done + * +------------------+ + * | + * +---------v--------+ + * |HIF_EX_CLEARQ_DONE| : Disable and clear RXQ + * +------------------+ : Flush TX/RX workqueues + * | + * +---------v--------+ + * |HIF_EX_ALLQ_RESET | : Restart HW and CLDMA + * +------------------+ + */ +void t7xx_cldma_exception(struct cldma_ctrl *md_ctrl, enum hif_ex_stage stage) +{ + switch (stage) { + case HIF_EX_INIT: + t7xx_cldma_stop_queue(md_ctrl, CLDMA_ALL_Q, MTK_TX); + t7xx_cldma_clear_all_queue(md_ctrl, MTK_TX); + break; + + case HIF_EX_CLEARQ_DONE: + /* We do not want to get CLDMA IRQ when MD is + * resetting CLDMA after it got clearq_ack. 
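+ * Queue and L2 interrupts stay masked from this point until
+ * HIF_EX_ALLQ_RESET re-initializes and restarts the CLDMA HW.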
+ */ + t7xx_cldma_stop_queue(md_ctrl, CLDMA_ALL_Q, MTK_RX); + t7xx_cldma_stop(md_ctrl); + + if (md_ctrl->hif_id == ID_CLDMA1) + t7xx_cldma_hw_reset(md_ctrl->t7xx_dev->base_addr.infracfg_ao_base); + + t7xx_cldma_clear_all_queue(md_ctrl, MTK_RX); + break; + + case HIF_EX_ALLQ_RESET: + t7xx_cldma_hw_init(&md_ctrl->hw_info); + t7xx_cldma_start(md_ctrl); + break; + + default: + break; + } +} + +void t7xx_cldma_hif_hw_init(struct cldma_ctrl *md_ctrl) +{ + struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info; + unsigned long flags; + + spin_lock_irqsave(&md_ctrl->cldma_lock, flags); + t7xx_cldma_hw_stop(hw_info, MTK_TX); + t7xx_cldma_hw_stop(hw_info, MTK_RX); + t7xx_cldma_hw_rx_done(hw_info, EMPTY_STATUS_BITMASK | TXRX_STATUS_BITMASK); + t7xx_cldma_hw_tx_done(hw_info, EMPTY_STATUS_BITMASK | TXRX_STATUS_BITMASK); + t7xx_cldma_hw_init(hw_info); + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags); +} + +static irqreturn_t t7xx_cldma_isr_handler(int irq, void *data) +{ + struct cldma_ctrl *md_ctrl = data; + u32 interrupt; + + interrupt = md_ctrl->hw_info.phy_interrupt_id; + t7xx_pcie_mac_clear_int(md_ctrl->t7xx_dev, interrupt); + t7xx_cldma_irq_work_cb(md_ctrl); + t7xx_pcie_mac_clear_int_status(md_ctrl->t7xx_dev, interrupt); + t7xx_pcie_mac_set_int(md_ctrl->t7xx_dev, interrupt); + return IRQ_HANDLED; +} + +/** + * t7xx_cldma_init() - Initialize CLDMA. + * @md: Modem context structure. + * @md_ctrl: CLDMA context structure. + * + * Initialize HIF TX/RX queue structure. + * Register CLDMA callback ISR with PCIe driver. + * + * Return: + * * 0 - Success. + * * -ERROR - Error code from failure sub-initializations. + */ +int t7xx_cldma_init(struct t7xx_modem *md, struct cldma_ctrl *md_ctrl) +{ + struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info; + int i; + + md_ctrl->txq_active = 0; + md_ctrl->rxq_active = 0; + md_ctrl->is_late_init = false; + + spin_lock_init(&md_ctrl->cldma_lock); + for (i = 0; i < CLDMA_TXQ_NUM; i++) { + md_cd_queue_struct_init(&md_ctrl->txq[i], md_ctrl, MTK_TX, i); + md_ctrl->txq[i].md = md; + + md_ctrl->txq[i].worker = + alloc_workqueue("md_hif%d_tx%d_worker", + WQ_UNBOUND | WQ_MEM_RECLAIM | (i ? 
0 : WQ_HIGHPRI), + 1, md_ctrl->hif_id, i); + if (!md_ctrl->txq[i].worker) + return -ENOMEM; + + INIT_WORK(&md_ctrl->txq[i].cldma_work, t7xx_cldma_tx_done); + } + + for (i = 0; i < CLDMA_RXQ_NUM; i++) { + md_cd_queue_struct_init(&md_ctrl->rxq[i], md_ctrl, MTK_RX, i); + md_ctrl->rxq[i].md = md; + INIT_WORK(&md_ctrl->rxq[i].cldma_work, t7xx_cldma_rx_done); + + md_ctrl->rxq[i].worker = alloc_workqueue("md_hif%d_rx%d_worker", + WQ_UNBOUND | WQ_MEM_RECLAIM, + 1, md_ctrl->hif_id, i); + if (!md_ctrl->rxq[i].worker) + return -ENOMEM; + } + + t7xx_pcie_mac_clear_int(md_ctrl->t7xx_dev, hw_info->phy_interrupt_id); + md_ctrl->t7xx_dev->intr_handler[hw_info->phy_interrupt_id] = t7xx_cldma_isr_handler; + md_ctrl->t7xx_dev->intr_thread[hw_info->phy_interrupt_id] = NULL; + md_ctrl->t7xx_dev->callback_param[hw_info->phy_interrupt_id] = md_ctrl; + t7xx_pcie_mac_clear_int_status(md_ctrl->t7xx_dev, hw_info->phy_interrupt_id); + return 0; +} + +void t7xx_cldma_switch_cfg(struct cldma_ctrl *md_ctrl) +{ + t7xx_cldma_late_release(md_ctrl); + t7xx_cldma_late_init(md_ctrl); +} + +void t7xx_cldma_exit(struct cldma_ctrl *md_ctrl) +{ + int i; + + t7xx_cldma_stop(md_ctrl); + t7xx_cldma_late_release(md_ctrl); + + for (i = 0; i < CLDMA_TXQ_NUM; i++) { + if (md_ctrl->txq[i].worker) { + destroy_workqueue(md_ctrl->txq[i].worker); + md_ctrl->txq[i].worker = NULL; + } + } + + for (i = 0; i < CLDMA_RXQ_NUM; i++) { + if (md_ctrl->rxq[i].worker) { + destroy_workqueue(md_ctrl->rxq[i].worker); + md_ctrl->rxq[i].worker = NULL; + } + } +} diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.h b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h new file mode 100644 index 000000000000..5db6d69373e3 --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h @@ -0,0 +1,139 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ * + * Authors: + * Haijun Liu + * Moises Veleta + * Ricardo Martinez + * Sreehari Kancharla + * + * Contributors: + * Amir Hanania + * Chiranjeevi Rapolu + * Eliot Lee + */ + +#ifndef __T7XX_HIF_CLDMA_H__ +#define __T7XX_HIF_CLDMA_H__ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "t7xx_cldma.h" +#include "t7xx_common.h" +#include "t7xx_modem_ops.h" +#include "t7xx_pci.h" + +enum cldma_id { + ID_CLDMA0, + ID_CLDMA1, +}; + +struct cldma_request { + void *gpd; /* Virtual address for CPU */ + dma_addr_t gpd_addr; /* Physical address for DMA */ + struct sk_buff *skb; + dma_addr_t mapped_buff; + struct list_head entry; +}; + +struct cldma_queue; +struct cldma_ctrl; + +struct cldma_ring { + struct list_head gpd_ring; /* Ring of struct cldma_request */ + int length; /* Number of struct cldma_request */ + int pkt_size; +}; + +struct cldma_queue { + struct t7xx_modem *md; + struct cldma_ctrl *md_ctrl; + enum mtk_txrx dir; + unsigned char index; + struct cldma_ring *tr_ring; + struct cldma_request *tr_done; + struct cldma_request *rx_refill; + struct cldma_request *tx_xmit; + int budget; /* Same as ring buffer size by default */ + spinlock_t ring_lock; + wait_queue_head_t req_wq; /* Only for TX */ + struct workqueue_struct *worker; + struct work_struct cldma_work; +}; + +struct cldma_ctrl { + enum cldma_id hif_id; + struct device *dev; + struct t7xx_pci_dev *t7xx_dev; + struct cldma_queue txq[CLDMA_TXQ_NUM]; + struct cldma_queue rxq[CLDMA_RXQ_NUM]; + unsigned short txq_active; + unsigned short rxq_active; + unsigned short txq_started; + spinlock_t cldma_lock; /* Protects CLDMA structure */ + /* Assumes T/R GPD/BD/SPD have the same size */ + struct dma_pool *gpd_dmapool; + struct cldma_ring tx_ring[CLDMA_TXQ_NUM]; + struct cldma_ring rx_ring[CLDMA_RXQ_NUM]; + struct t7xx_cldma_hw hw_info; + bool is_late_init; + int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb); +}; + +#define GPD_FLAGS_HWO BIT(0) +#define GPD_FLAGS_BDP BIT(1) +#define GPD_FLAGS_BPS BIT(2) +#define GPD_FLAGS_IOC BIT(7) +#define GPD_DMAPOOL_ALIGN 16 + +struct cldma_tgpd { + u8 gpd_flags; + u8 not_used1; + u8 not_used2; + u8 debug_id; + __le32 next_gpd_ptr_h; + __le32 next_gpd_ptr_l; + __le32 data_buff_bd_ptr_h; + __le32 data_buff_bd_ptr_l; + __le16 data_buff_len; + __le16 not_used3; +}; + +struct cldma_rgpd { + u8 gpd_flags; + u8 not_used1; + __le16 data_allow_len; + __le32 next_gpd_ptr_h; + __le32 next_gpd_ptr_l; + __le32 data_buff_bd_ptr_h; + __le32 data_buff_bd_ptr_l; + __le16 data_buff_len; + u8 not_used2; + u8 debug_id; +}; + +int t7xx_cldma_alloc(enum cldma_id hif_id, struct t7xx_pci_dev *t7xx_dev); +void t7xx_cldma_hif_hw_init(struct cldma_ctrl *md_ctrl); +int t7xx_cldma_init(struct t7xx_modem *md, struct cldma_ctrl *md_ctrl); +void t7xx_cldma_exception(struct cldma_ctrl *md_ctrl, enum hif_ex_stage stage); +void t7xx_cldma_exit(struct cldma_ctrl *md_ctrl); +void t7xx_cldma_switch_cfg(struct cldma_ctrl *md_ctrl); +int t7xx_cldma_write_room(struct cldma_ctrl *md_ctrl, unsigned char qno); +void t7xx_cldma_start(struct cldma_ctrl *md_ctrl); +int t7xx_cldma_stop(struct cldma_ctrl *md_ctrl); +void t7xx_cldma_reset(struct cldma_ctrl *md_ctrl); +void t7xx_cldma_set_recv_skb(struct cldma_ctrl *md_ctrl, + int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb)); +int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb, bool blocking); + +#endif /* __T7XX_HIF_CLDMA_H__ */ From patchwork Tue Dec 7 02:47:01 2021 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Martinez X-Patchwork-Id: 521902 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1FAC4C4321E for ; Tue, 7 Dec 2021 02:48:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230455AbhLGCvo (ORCPT ); Mon, 6 Dec 2021 21:51:44 -0500 Received: from mga07.intel.com ([134.134.136.100]:27268 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230146AbhLGCvm (ORCPT ); Mon, 6 Dec 2021 21:51:42 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10190"; a="300860559" X-IronPort-AV: E=Sophos;i="5.87,293,1631602800"; d="scan'208";a="300860559" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2021 18:47:45 -0800 X-IronPort-AV: E=Sophos;i="5.87,293,1631602800"; d="scan'208";a="748524131" Received: from rmarti10-desk.jf.intel.com ([134.134.150.146]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2021 18:47:44 -0800 From: Ricardo Martinez To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: kuba@kernel.org, davem@davemloft.net, johannes@sipsolutions.net, ryazanov.s.a@gmail.com, loic.poulain@linaro.org, m.chetan.kumar@intel.com, chandrashekar.devegowda@intel.com, linuxwwan@intel.com, chiranjeevi.rapolu@linux.intel.com, haijun.liu@mediatek.com, amir.hanania@intel.com, andriy.shevchenko@linux.intel.com, dinesh.sharma@intel.com, eliot.lee@intel.com, mika.westerberg@linux.intel.com, moises.veleta@intel.com, pierre-louis.bossart@intel.com, muralidharan.sethuraman@intel.com, Soumya.Prakash.Mishra@intel.com, sreehari.kancharla@intel.com, suresh.nagaraj@intel.com, Ricardo Martinez Subject: [PATCH net-next v3 02/12] net: wwan: t7xx: Add core components Date: Mon, 6 Dec 2021 19:47:01 -0700 Message-Id: <20211207024711.2765-3-ricardo.martinez@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211207024711.2765-1-ricardo.martinez@linux.intel.com> References: <20211207024711.2765-1-ricardo.martinez@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org From: Haijun Liu Registers the t7xx device driver with the kernel. Setup all the core components: PCIe layer, Modem Host Cross Core Interface (MHCCIF), modem control operations, modem state machine, and build infrastructure. * PCIe layer code implements driver probe and removal. * MHCCIF provides interrupt channels to communicate events such as handshake, PM and port enumeration. * Modem control implements the entry point for modem init, reset and exit. * The modem status monitor is a state machine used by modem control to complete initialization and stop. It is used also to propagate exception events reported by other components. 
Signed-off-by: Haijun Liu Signed-off-by: Chandrashekar Devegowda Co-developed-by: Ricardo Martinez Signed-off-by: Ricardo Martinez --- drivers/net/wwan/Kconfig | 14 + drivers/net/wwan/Makefile | 1 + drivers/net/wwan/t7xx/Makefile | 12 + drivers/net/wwan/t7xx/t7xx_common.h | 95 ++++ drivers/net/wwan/t7xx/t7xx_mhccif.c | 98 ++++ drivers/net/wwan/t7xx/t7xx_mhccif.h | 37 ++ drivers/net/wwan/t7xx/t7xx_modem_ops.c | 434 ++++++++++++++++ drivers/net/wwan/t7xx/t7xx_modem_ops.h | 84 +++ drivers/net/wwan/t7xx/t7xx_pci.c | 223 ++++++++ drivers/net/wwan/t7xx/t7xx_pci.h | 67 +++ drivers/net/wwan/t7xx/t7xx_pcie_mac.c | 277 ++++++++++ drivers/net/wwan/t7xx/t7xx_pcie_mac.h | 37 ++ drivers/net/wwan/t7xx/t7xx_reg.h | 379 ++++++++++++++ drivers/net/wwan/t7xx/t7xx_state_monitor.c | 569 +++++++++++++++++++++ drivers/net/wwan/t7xx/t7xx_state_monitor.h | 124 +++++ 15 files changed, 2451 insertions(+) create mode 100644 drivers/net/wwan/t7xx/Makefile create mode 100644 drivers/net/wwan/t7xx/t7xx_common.h create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.c create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.h create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.c create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.h create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.c create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.h create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.c create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.h create mode 100644 drivers/net/wwan/t7xx/t7xx_reg.h create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.c create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.h diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig index bdb2b0e46c12..aa1a1f6bfc23 100644 --- a/drivers/net/wwan/Kconfig +++ b/drivers/net/wwan/Kconfig @@ -93,6 +93,20 @@ config IOSM If unsure, say N. +config MTK_T7XX + tristate "MediaTek PCIe 5G WWAN modem T7xx device" + depends on PCI + help + Enables MediaTek PCIe based 5G WWAN modem (T7xx series) device. + Adapts WWAN framework and provides network interface like wwan0 + and tty interfaces like wwan0at0 (AT protocol), wwan0mbim0 + (MBIM protocol), etc. + + To compile this driver as a module, choose M here: the module will be + called mtk_t7xx. + + If unsure, say N. + endif # WWAN endmenu diff --git a/drivers/net/wwan/Makefile b/drivers/net/wwan/Makefile index e722650bebea..3960c0ae2445 100644 --- a/drivers/net/wwan/Makefile +++ b/drivers/net/wwan/Makefile @@ -13,3 +13,4 @@ obj-$(CONFIG_MHI_WWAN_MBIM) += mhi_wwan_mbim.o obj-$(CONFIG_QCOM_BAM_DMUX) += qcom_bam_dmux.o obj-$(CONFIG_RPMSG_WWAN_CTRL) += rpmsg_wwan_ctrl.o obj-$(CONFIG_IOSM) += iosm/ +obj-$(CONFIG_MTK_T7XX) += t7xx/ diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile new file mode 100644 index 000000000000..6a49013bc343 --- /dev/null +++ b/drivers/net/wwan/t7xx/Makefile @@ -0,0 +1,12 @@ +# SPDX-License-Identifier: GPL-2.0-only + +ccflags-y += -Werror + +obj-${CONFIG_MTK_T7XX} := mtk_t7xx.o +mtk_t7xx-y:= t7xx_pci.o \ + t7xx_pcie_mac.o \ + t7xx_mhccif.o \ + t7xx_state_monitor.o \ + t7xx_modem_ops.o \ + t7xx_cldma.o \ + t7xx_hif_cldma.o \ diff --git a/drivers/net/wwan/t7xx/t7xx_common.h b/drivers/net/wwan/t7xx/t7xx_common.h new file mode 100644 index 000000000000..3f7042248d20 --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_common.h @@ -0,0 +1,95 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ * + * Authors: + * Haijun Liu + * Ricardo Martinez + * + * Contributors: + * Andy Shevchenko + * Chiranjeevi Rapolu + * Eliot Lee + * Moises Veleta + * Sreehari Kancharla + */ + +#ifndef __T7XX_COMMON_H__ +#define __T7XX_COMMON_H__ + +#include +#include +#include + +struct ccci_header { + __le32 packet_header; + __le32 packet_len; + __le32 status; + __le32 ex_msg; +}; + +enum mtk_txrx { + MTK_TX, + MTK_RX, +}; + +#define TXQ_TYPE_DEFAULT 0 + +#define CLDMA_NUM 2 + +#define MTK_SKB_64K 64528 /* 63kB + CCCI header */ +#define MTK_SKB_4K 3584 /* 3.5kB */ +#define NET_RX_BUF MTK_SKB_4K + +#define HDR_FLD_AST ((u32)BIT(31)) +#define HDR_FLD_SEQ GENMASK(30, 16) +#define HDR_FLD_CHN GENMASK(15, 0) + +#define CCCI_H_LEN 16 +/* For exception flow use CCCI_H_LEN + reserved space */ +#define CCCI_H_ELEN 128 + +/* Coupled with HW - indicates if there is data following the CCCI header or not */ +#define CCCI_HEADER_NO_DATA 0xffffffff + +/* Control identification numbers for AP<->MD messages */ +#define CTL_ID_HS1_MSG 0x0 +#define CTL_ID_HS2_MSG 0x1 +#define CTL_ID_HS3_MSG 0x2 +#define CTL_ID_MD_EX 0x4 +#define CTL_ID_DRV_VER_ERROR 0x5 +#define CTL_ID_MD_EX_ACK 0x6 +#define CTL_ID_MD_EX_PASS 0x8 +#define CTL_ID_PORT_ENUM 0x9 + +/* Modem exception check identification code - "EXCP" */ +#define MD_EX_CHK_ID 0x45584350 +/* Modem exception check acknowledge identification code - "EREC" */ +#define MD_EX_CHK_ACK_ID 0x45524543 + +enum md_state { + MD_STATE_INVALID, /* No traffic */ + MD_STATE_GATED, /* No traffic */ + MD_STATE_WAITING_FOR_HS1, + MD_STATE_WAITING_FOR_HS2, + MD_STATE_READY, + MD_STATE_EXCEPTION, + MD_STATE_RESET, /* No traffic */ + MD_STATE_WAITING_TO_STOP, + MD_STATE_STOPPED, +}; + +#ifdef NET_SKBUFF_DATA_USES_OFFSET +static inline unsigned int t7xx_skb_data_size(struct sk_buff *skb) +{ + return skb->head + skb->end - skb->data; +} +#else +static inline unsigned int t7xx_skb_data_size(struct sk_buff *skb) +{ + return skb->end - skb->data; +} +#endif + +#endif diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.c b/drivers/net/wwan/t7xx/t7xx_mhccif.c new file mode 100644 index 000000000000..20aae457629c --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_mhccif.c @@ -0,0 +1,98 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ * + * Authors: + * Haijun Liu + * Sreehari Kancharla + * + * Contributors: + * Amir Hanania + * Ricardo Martinez + */ + +#include +#include +#include +#include +#include + +#include "t7xx_mhccif.h" +#include "t7xx_modem_ops.h" +#include "t7xx_pci.h" +#include "t7xx_pcie_mac.h" +#include "t7xx_reg.h" + +static void t7xx_mhccif_clear_interrupts(struct t7xx_pci_dev *t7xx_dev, u32 mask) +{ + void __iomem *mhccif_pbase = t7xx_dev->base_addr.mhccif_rc_base; + + /* Clear level 2 interrupt */ + iowrite32(mask, mhccif_pbase + REG_EP2RC_SW_INT_ACK); + /* Ensure write is complete */ + t7xx_mhccif_read_sw_int_sts(t7xx_dev); + /* Clear level 1 interrupt */ + t7xx_pcie_mac_clear_int_status(t7xx_dev, MHCCIF_INT); +} + +static irqreturn_t t7xx_mhccif_isr_thread(int irq, void *data) +{ + struct t7xx_pci_dev *t7xx_dev = data; + u32 int_sts, val; + + val = L1_1_DISABLE_BIT(1) | L1_2_DISABLE_BIT(1); + iowrite32(val, IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0); + + int_sts = t7xx_mhccif_read_sw_int_sts(t7xx_dev); + if (int_sts & t7xx_dev->mhccif_bitmask) + t7xx_pci_mhccif_isr(t7xx_dev); + + t7xx_mhccif_clear_interrupts(t7xx_dev, int_sts); + t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT); + return IRQ_HANDLED; +} + +u32 t7xx_mhccif_read_sw_int_sts(struct t7xx_pci_dev *t7xx_dev) +{ + return ioread32(t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_STS); +} + +void t7xx_mhccif_mask_set(struct t7xx_pci_dev *t7xx_dev, u32 val) +{ + iowrite32(val, t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK_SET); +} + +void t7xx_mhccif_mask_clr(struct t7xx_pci_dev *t7xx_dev, u32 val) +{ + iowrite32(val, t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK_CLR); +} + +u32 t7xx_mhccif_mask_get(struct t7xx_pci_dev *t7xx_dev) +{ + return ioread32(t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK); +} + +static irqreturn_t t7xx_mhccif_isr_handler(int irq, void *data) +{ + return IRQ_WAKE_THREAD; +} + +void t7xx_mhccif_init(struct t7xx_pci_dev *t7xx_dev) +{ + t7xx_dev->base_addr.mhccif_rc_base = t7xx_dev->base_addr.pcie_ext_reg_base + + MHCCIF_RC_DEV_BASE - + t7xx_dev->base_addr.pcie_dev_reg_trsl_addr; + + t7xx_dev->intr_handler[MHCCIF_INT] = t7xx_mhccif_isr_handler; + t7xx_dev->intr_thread[MHCCIF_INT] = t7xx_mhccif_isr_thread; + t7xx_dev->callback_param[MHCCIF_INT] = t7xx_dev; +} + +void t7xx_mhccif_h2d_swint_trigger(struct t7xx_pci_dev *t7xx_dev, u32 channel) +{ + void __iomem *mhccif_pbase = t7xx_dev->base_addr.mhccif_rc_base; + + iowrite32(BIT(channel), mhccif_pbase + REG_RC2EP_SW_BSY); + iowrite32(channel, mhccif_pbase + REG_RC2EP_SW_TCHNUM); +} diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.h b/drivers/net/wwan/t7xx/t7xx_mhccif.h new file mode 100644 index 000000000000..62e9afa59296 --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_mhccif.h @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ * + * Authors: + * Haijun Liu + * Sreehari Kancharla + * + * Contributors: + * Amir Hanania + * Ricardo Martinez + */ + +#ifndef __T7XX_MHCCIF_H__ +#define __T7XX_MHCCIF_H__ + +#include + +#include "t7xx_pci.h" +#include "t7xx_reg.h" + +#define D2H_SW_INT_MASK (D2H_INT_EXCEPTION_INIT | \ + D2H_INT_EXCEPTION_INIT_DONE | \ + D2H_INT_EXCEPTION_CLEARQ_DONE | \ + D2H_INT_EXCEPTION_ALLQ_RESET | \ + D2H_INT_PORT_ENUM | \ + D2H_INT_ASYNC_MD_HK) + +void t7xx_mhccif_mask_set(struct t7xx_pci_dev *t7xx_dev, u32 val); +void t7xx_mhccif_mask_clr(struct t7xx_pci_dev *t7xx_dev, u32 val); +u32 t7xx_mhccif_mask_get(struct t7xx_pci_dev *t7xx_dev); +void t7xx_mhccif_init(struct t7xx_pci_dev *t7xx_dev); +u32 t7xx_mhccif_read_sw_int_sts(struct t7xx_pci_dev *t7xx_dev); +void t7xx_mhccif_h2d_swint_trigger(struct t7xx_pci_dev *t7xx_dev, u32 channel); + +#endif /*__T7XX_MHCCIF_H__ */ diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c new file mode 100644 index 000000000000..53ab88d33b56 --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c @@ -0,0 +1,434 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. + * + * Authors: + * Haijun Liu + * Eliot Lee + * Moises Veleta + * Ricardo Martinez + * + * Contributors: + * Amir Hanania + * Chiranjeevi Rapolu + * Sreehari Kancharla + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "t7xx_hif_cldma.h" +#include "t7xx_mhccif.h" +#include "t7xx_modem_ops.h" +#include "t7xx_pci.h" +#include "t7xx_pcie_mac.h" +#include "t7xx_reg.h" +#include "t7xx_state_monitor.h" + +#define RGU_RESET_DELAY_MS 10 +#define PORT_RESET_DELAY_MS 2000 +#define EX_HS_TIMEOUT_MS 5000 +#define EX_HS_POLL_DELAY_MS 10 + +static inline unsigned int t7xx_get_interrupt_status(struct t7xx_pci_dev *t7xx_dev) +{ + return t7xx_mhccif_read_sw_int_sts(t7xx_dev) & D2H_SW_INT_MASK; +} + +/** + * t7xx_pci_mhccif_isr() - Process MHCCIF interrupts. + * @t7xx_dev: MTK device. + * + * Check the interrupt status and queue commands accordingly. + * + * Returns: + ** 0 - Success. + ** -EINVAL - Failure to get FSM control. 
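+ *
+ * Context: Called from the MHCCIF ISR thread; updates to the exception
+ * flags in t7xx_modem.exp_id are serialized by the BH-safe exp_lock.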
+ */ +int t7xx_pci_mhccif_isr(struct t7xx_pci_dev *t7xx_dev) +{ + struct t7xx_modem *md = t7xx_dev->md; + struct t7xx_fsm_ctl *ctl; + unsigned int int_sta; + u32 mask; + + ctl = md->fsm_ctl; + if (!ctl) { + dev_err_ratelimited(&t7xx_dev->pdev->dev, + "MHCCIF interrupt received before initializing MD monitor\n"); + return -EINVAL; + } + + spin_lock_bh(&md->exp_lock); + int_sta = t7xx_get_interrupt_status(t7xx_dev); + + md->exp_id |= int_sta; + if (md->exp_id & D2H_INT_PORT_ENUM) { + md->exp_id &= ~D2H_INT_PORT_ENUM; + + if (ctl->curr_state == FSM_STATE_INIT || ctl->curr_state == FSM_STATE_PRE_START || + ctl->curr_state == FSM_STATE_STOPPED) + t7xx_fsm_recv_md_intr(ctl, MD_IRQ_PORT_ENUM); + } + + if (md->exp_id & D2H_INT_EXCEPTION_INIT) { + if (ctl->md_state == MD_STATE_INVALID || + ctl->md_state == MD_STATE_WAITING_FOR_HS1 || + ctl->md_state == MD_STATE_WAITING_FOR_HS2 || + ctl->md_state == MD_STATE_READY) { + md->exp_id &= ~D2H_INT_EXCEPTION_INIT; + t7xx_fsm_recv_md_intr(ctl, MD_IRQ_CCIF_EX); + } + } else if (ctl->md_state == MD_STATE_WAITING_FOR_HS1) { + mask = t7xx_mhccif_mask_get(t7xx_dev); + if ((md->exp_id & D2H_INT_ASYNC_MD_HK) && !(mask & D2H_INT_ASYNC_MD_HK)) { + md->exp_id &= ~D2H_INT_ASYNC_MD_HK; + queue_work(md->handshake_wq, &md->handshake_work); + } + } + + spin_unlock_bh(&md->exp_lock); + + return 0; +} + +static void t7xx_clr_device_irq_via_pcie(struct t7xx_pci_dev *t7xx_dev) +{ + struct t7xx_addr_base *pbase_addr = &t7xx_dev->base_addr; + void __iomem *reset_pcie_reg; + u32 val; + + reset_pcie_reg = pbase_addr->pcie_ext_reg_base + TOPRGU_CH_PCIE_IRQ_STA - + pbase_addr->pcie_dev_reg_trsl_addr; + val = ioread32(reset_pcie_reg); + iowrite32(val, reset_pcie_reg); +} + +void t7xx_clear_rgu_irq(struct t7xx_pci_dev *t7xx_dev) +{ + /* Clear L2 */ + t7xx_clr_device_irq_via_pcie(t7xx_dev); + /* Clear L1 */ + t7xx_pcie_mac_clear_int_status(t7xx_dev, SAP_RGU_INT); +} + +static int t7xx_acpi_reset(struct t7xx_pci_dev *t7xx_dev, char *fn_name) +{ +#ifdef CONFIG_ACPI + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; + struct device *dev = &t7xx_dev->pdev->dev; + acpi_status acpi_ret; + acpi_handle handle; + + handle = ACPI_HANDLE(dev); + if (!handle) { + dev_err(dev, "ACPI handle not found\n"); + return -EFAULT; + } + + if (!acpi_has_method(handle, fn_name)) { + dev_err(dev, "%s method not found\n", fn_name); + return -EFAULT; + } + + acpi_ret = acpi_evaluate_object(handle, fn_name, NULL, &buffer); + if (ACPI_FAILURE(acpi_ret)) { + dev_err(dev, "%s method fail: %s\n", fn_name, acpi_format_exception(acpi_ret)); + return -EFAULT; + } + +#endif + return 0; +} + +int t7xx_acpi_fldr_func(struct t7xx_pci_dev *t7xx_dev) +{ + return t7xx_acpi_reset(t7xx_dev, "_RST"); +} + +static void t7xx_reset_device_via_pmic(struct t7xx_pci_dev *t7xx_dev) +{ + u32 val; + + val = ioread32(IREG_BASE(t7xx_dev) + PCIE_MISC_DEV_STATUS); + if (val & MISC_RESET_TYPE_PLDR) + t7xx_acpi_reset(t7xx_dev, "MRST._RST"); + else if (val & MISC_RESET_TYPE_FLDR) + t7xx_acpi_fldr_func(t7xx_dev); +} + +static irqreturn_t t7xx_rgu_isr_thread(int irq, void *data) +{ + struct t7xx_pci_dev *t7xx_dev = data; + + msleep(RGU_RESET_DELAY_MS); + t7xx_reset_device_via_pmic(t7xx_dev); + return IRQ_HANDLED; +} + +static irqreturn_t t7xx_rgu_isr_handler(int irq, void *data) +{ + struct t7xx_pci_dev *t7xx_dev = data; + struct t7xx_modem *modem; + + t7xx_clear_rgu_irq(t7xx_dev); + if (!t7xx_dev->rgu_pci_irq_en) + return IRQ_HANDLED; + + modem = t7xx_dev->md; + modem->rgu_irq_asserted = true; + t7xx_pcie_mac_clear_int(t7xx_dev, 
SAP_RGU_INT); + return IRQ_WAKE_THREAD; +} + +static void t7xx_pcie_register_rgu_isr(struct t7xx_pci_dev *t7xx_dev) +{ + /* Registers RGU callback ISR with PCIe driver */ + t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT); + t7xx_pcie_mac_clear_int_status(t7xx_dev, SAP_RGU_INT); + + t7xx_dev->intr_handler[SAP_RGU_INT] = t7xx_rgu_isr_handler; + t7xx_dev->intr_thread[SAP_RGU_INT] = t7xx_rgu_isr_thread; + t7xx_dev->callback_param[SAP_RGU_INT] = t7xx_dev; + t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT); +} + +static void t7xx_md_exception(struct t7xx_modem *md, enum hif_ex_stage stage) +{ + struct t7xx_pci_dev *t7xx_dev = md->t7xx_dev; + + if (stage == HIF_EX_CLEARQ_DONE) { + /* Give DHL time to flush data */ + msleep(PORT_RESET_DELAY_MS); + } + + t7xx_cldma_exception(md->md_ctrl[ID_CLDMA1], stage); + + if (stage == HIF_EX_INIT) + t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_EXCEPTION_ACK); + else if (stage == HIF_EX_CLEARQ_DONE) + t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_EXCEPTION_CLEARQ_ACK); +} + +static int t7xx_wait_hif_ex_hk_event(struct t7xx_modem *md, int event_id) +{ + unsigned int waited_time_ms = 0; + + do { + if (md->exp_id & event_id) + return 0; + + waited_time_ms += EX_HS_POLL_DELAY_MS; + msleep(EX_HS_POLL_DELAY_MS); + } while (waited_time_ms < EX_HS_TIMEOUT_MS); + + return -EFAULT; +} + +static void t7xx_md_sys_sw_init(struct t7xx_pci_dev *t7xx_dev) +{ + /* Register the MHCCIF ISR for MD exception, port enum and + * async handshake notifications. + */ + t7xx_mhccif_mask_set(t7xx_dev, D2H_SW_INT_MASK); + t7xx_dev->mhccif_bitmask = D2H_SW_INT_MASK; + t7xx_mhccif_mask_clr(t7xx_dev, D2H_INT_PORT_ENUM); + + /* Register RGU IRQ handler for sAP exception notification */ + t7xx_dev->rgu_pci_irq_en = true; + t7xx_pcie_register_rgu_isr(t7xx_dev); +} + +static void t7xx_md_hk_wq(struct work_struct *work) +{ + struct t7xx_modem *md = container_of(work, struct t7xx_modem, handshake_work); + struct t7xx_fsm_ctl *ctl = md->fsm_ctl; + + t7xx_cldma_switch_cfg(md->md_ctrl[ID_CLDMA1]); + t7xx_cldma_start(md->md_ctrl[ID_CLDMA1]); + t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS2); + md->core_md.ready = true; + wake_up(&ctl->async_hk_wq); +} + +void t7xx_md_event_notify(struct t7xx_modem *md, enum md_event_id evt_id) +{ + struct t7xx_fsm_ctl *ctl = md->fsm_ctl; + void __iomem *mhccif_base; + unsigned int int_sta; + unsigned long flags; + + switch (evt_id) { + case FSM_PRE_START: + t7xx_mhccif_mask_clr(md->t7xx_dev, D2H_INT_PORT_ENUM); + break; + + case FSM_START: + t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_PORT_ENUM); + spin_lock_irqsave(&md->exp_lock, flags); + int_sta = t7xx_get_interrupt_status(md->t7xx_dev); + + md->exp_id |= int_sta; + if (md->exp_id & D2H_INT_EXCEPTION_INIT) { + ctl->exp_flg = true; + md->exp_id &= ~D2H_INT_EXCEPTION_INIT; + md->exp_id &= ~D2H_INT_ASYNC_MD_HK; + } else if (ctl->exp_flg) { + md->exp_id &= ~D2H_INT_ASYNC_MD_HK; + } else if (md->exp_id & D2H_INT_ASYNC_MD_HK) { + queue_work(md->handshake_wq, &md->handshake_work); + md->exp_id &= ~D2H_INT_ASYNC_MD_HK; + mhccif_base = md->t7xx_dev->base_addr.mhccif_rc_base; + iowrite32(D2H_INT_ASYNC_MD_HK, mhccif_base + REG_EP2RC_SW_INT_ACK); + t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_ASYNC_MD_HK); + } else { + t7xx_mhccif_mask_clr(md->t7xx_dev, D2H_INT_ASYNC_MD_HK); + } + + spin_unlock_irqrestore(&md->exp_lock, flags); + + t7xx_mhccif_mask_clr(md->t7xx_dev, + D2H_INT_EXCEPTION_INIT | + D2H_INT_EXCEPTION_INIT_DONE | + D2H_INT_EXCEPTION_CLEARQ_DONE | + D2H_INT_EXCEPTION_ALLQ_RESET); + break; + + case FSM_READY: + 
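+		/* HS2 is done: no further async handshake notifications are
+		 * expected, so mask D2H_INT_ASYNC_MD_HK until a new start
+		 * sequence clears the mask again.
+		 */
+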
t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_ASYNC_MD_HK); + break; + + default: + break; + } +} + +void t7xx_md_exception_handshake(struct t7xx_modem *md) +{ + struct device *dev = &md->t7xx_dev->pdev->dev; + int ret; + + t7xx_md_exception(md, HIF_EX_INIT); + ret = t7xx_wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_INIT_DONE); + if (ret) + dev_err(dev, "EX CCIF HS timeout, RCH 0x%lx\n", D2H_INT_EXCEPTION_INIT_DONE); + + t7xx_md_exception(md, HIF_EX_INIT_DONE); + ret = t7xx_wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_CLEARQ_DONE); + if (ret) + dev_err(dev, "EX CCIF HS timeout, RCH 0x%lx\n", D2H_INT_EXCEPTION_CLEARQ_DONE); + + t7xx_md_exception(md, HIF_EX_CLEARQ_DONE); + ret = t7xx_wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_ALLQ_RESET); + if (ret) + dev_err(dev, "EX CCIF HS timeout, RCH 0x%lx\n", D2H_INT_EXCEPTION_ALLQ_RESET); + + t7xx_md_exception(md, HIF_EX_ALLQ_RESET); +} + +static struct t7xx_modem *t7xx_md_alloc(struct t7xx_pci_dev *t7xx_dev) +{ + struct device *dev = &t7xx_dev->pdev->dev; + struct t7xx_modem *md; + + md = devm_kzalloc(dev, sizeof(*md), GFP_KERNEL); + if (!md) + return NULL; + + md->t7xx_dev = t7xx_dev; + t7xx_dev->md = md; + md->core_md.ready = false; + spin_lock_init(&md->exp_lock); + md->handshake_wq = alloc_workqueue("%s", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI, + 0, "md_hk_wq"); + if (!md->handshake_wq) + return NULL; + + INIT_WORK(&md->handshake_work, t7xx_md_hk_wq); + return md; +} + +void t7xx_md_reset(struct t7xx_pci_dev *t7xx_dev) +{ + struct t7xx_modem *md = t7xx_dev->md; + + md->md_init_finish = false; + md->exp_id = 0; + spin_lock_init(&md->exp_lock); + t7xx_fsm_reset(md); + t7xx_cldma_reset(md->md_ctrl[ID_CLDMA1]); + md->md_init_finish = true; +} + +/** + * t7xx_md_init() - Initialize modem. + * @t7xx_dev: MTK device. + * + * Allocate and initialize MD control block, and initialize data path. + * Register MHCCIF ISR and RGU ISR, and start the state machine. + * + * Return: + ** 0 - Success. + ** -ENOMEM - Allocation failure. + */ +int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev) +{ + struct t7xx_modem *md; + int ret; + + md = t7xx_md_alloc(t7xx_dev); + if (!md) + return -ENOMEM; + + ret = t7xx_cldma_alloc(ID_CLDMA1, t7xx_dev); + if (ret) + goto err_destroy_hswq; + + ret = t7xx_fsm_init(md); + if (ret) + goto err_destroy_hswq; + + ret = t7xx_cldma_init(md, md->md_ctrl[ID_CLDMA1]); + if (ret) + goto err_uninit_fsm; + + t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_START, 0); + t7xx_md_sys_sw_init(t7xx_dev); + md->md_init_finish = true; + return 0; + +err_uninit_fsm: + t7xx_fsm_uninit(md); + +err_destroy_hswq: + destroy_workqueue(md->handshake_wq); + dev_err(&t7xx_dev->pdev->dev, "Modem init failed\n"); + return ret; +} + +void t7xx_md_exit(struct t7xx_pci_dev *t7xx_dev) +{ + struct t7xx_modem *md = t7xx_dev->md; + + t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT); + + if (!md->md_init_finish) + return; + + t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_PRE_STOP, FSM_CMD_FLAG_WAIT_FOR_COMPLETION); + t7xx_cldma_exit(md->md_ctrl[ID_CLDMA1]); + t7xx_fsm_uninit(md); + destroy_workqueue(md->handshake_wq); +} diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.h b/drivers/net/wwan/t7xx/t7xx_modem_ops.h new file mode 100644 index 000000000000..24d2ee5bfbda --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.h @@ -0,0 +1,84 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ * + * Authors: + * Haijun Liu + * Eliot Lee + * Moises Veleta + * Ricardo Martinez + * + * Contributors: + * Amir Hanania + * Chiranjeevi Rapolu + * Sreehari Kancharla + */ + +#ifndef __T7XX_MODEM_OPS_H__ +#define __T7XX_MODEM_OPS_H__ + +#include +#include +#include + +#include "t7xx_common.h" +#include "t7xx_pci.h" + +#define FEATURE_COUNT 64 + +/** + * enum hif_ex_stage - HIF exception handshake stages with the HW. + * @HIF_EX_INIT: Disable and clear TXQ. + * @HIF_EX_INIT_DONE: Polling for initialization to be done. + * @HIF_EX_CLEARQ_DONE: Disable RX, flush TX/RX workqueues and clear RX. + * @HIF_EX_ALLQ_RESET: HW is back in safe mode for re-initialization and restart. + */ +enum hif_ex_stage { + HIF_EX_INIT, + HIF_EX_INIT_DONE, + HIF_EX_CLEARQ_DONE, + HIF_EX_ALLQ_RESET, +}; + +struct mtk_runtime_feature { + u8 feature_id; + u8 support_info; + u8 reserved[2]; + __le32 data_len; +}; + +enum md_event_id { + FSM_PRE_START, + FSM_START, + FSM_READY, +}; + +struct t7xx_sys_info { + bool ready; +}; + +struct t7xx_modem { + struct cldma_ctrl *md_ctrl[CLDMA_NUM]; + struct t7xx_pci_dev *t7xx_dev; + struct t7xx_sys_info core_md; + bool md_init_finish; + bool rgu_irq_asserted; + struct workqueue_struct *handshake_wq; + struct work_struct handshake_work; + struct t7xx_fsm_ctl *fsm_ctl; + struct port_proxy *port_prox; + unsigned int exp_id; + spinlock_t exp_lock; /* Protects exception events */ +}; + +void t7xx_md_exception_handshake(struct t7xx_modem *md); +void t7xx_md_event_notify(struct t7xx_modem *md, enum md_event_id evt_id); +void t7xx_md_reset(struct t7xx_pci_dev *t7xx_dev); +int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev); +void t7xx_md_exit(struct t7xx_pci_dev *t7xx_dev); +void t7xx_clear_rgu_irq(struct t7xx_pci_dev *t7xx_dev); +int t7xx_acpi_fldr_func(struct t7xx_pci_dev *t7xx_dev); +int t7xx_pci_mhccif_isr(struct t7xx_pci_dev *t7xx_dev); + +#endif /* __T7XX_MODEM_OPS_H__ */ diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c new file mode 100644 index 000000000000..f525468ed472 --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_pci.c @@ -0,0 +1,223 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ *
+ * Authors:
+ * Haijun Liu
+ * Ricardo Martinez
+ * Sreehari Kancharla
+ *
+ * Contributors:
+ * Amir Hanania
+ * Andy Shevchenko
+ * Chiranjeevi Rapolu
+ * Eliot Lee
+ * Moises Veleta
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+
+#define PCI_IREG_BASE 0
+#define PCI_EREG_BASE 2
+
+static int t7xx_request_irq(struct pci_dev *pdev)
+{
+	struct t7xx_pci_dev *t7xx_dev;
+	int ret = 0, i;
+
+	t7xx_dev = pci_get_drvdata(pdev);
+
+	for (i = 0; i < EXT_INT_NUM; i++) {
+		const char *irq_descr;
+		int irq_vec;
+
+		if (!t7xx_dev->intr_handler[i])
+			continue;
+
+		irq_descr = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s_%d",
+					   dev_driver_string(&pdev->dev), i);
+		if (!irq_descr) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		irq_vec = pci_irq_vector(pdev, i);
+		ret = request_threaded_irq(irq_vec, t7xx_dev->intr_handler[i],
+					   t7xx_dev->intr_thread[i], 0, irq_descr,
+					   t7xx_dev->callback_param[i]);
+		if (ret) {
+			dev_err(&pdev->dev, "Failed to request IRQ: %d\n", ret);
+			break;
+		}
+	}
+
+	if (ret) {
+		while (i--) {
+			if (!t7xx_dev->intr_handler[i])
+				continue;
+
+			free_irq(pci_irq_vector(pdev, i), t7xx_dev->callback_param[i]);
+		}
+	}
+
+	return ret;
+}
+
+static int t7xx_setup_msix(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct pci_dev *pdev = t7xx_dev->pdev;
+	int ret;
+
+	/* Only using 6 interrupts, but HW-design requires power-of-2 IRQs allocation */
+	ret = pci_alloc_irq_vectors(pdev, EXT_INT_NUM, EXT_INT_NUM, PCI_IRQ_MSIX);
+	if (ret < 0) {
+		dev_err(&pdev->dev, "Failed to allocate MSI-X entry: %d\n", ret);
+		return ret;
+	}
+
+	ret = t7xx_request_irq(pdev);
+	if (ret) {
+		pci_free_irq_vectors(pdev);
+		return ret;
+	}
+
+	t7xx_pcie_set_mac_msix_cfg(t7xx_dev, EXT_INT_NUM);
+	return 0;
+}
+
+static int t7xx_interrupt_init(struct t7xx_pci_dev *t7xx_dev)
+{
+	int ret, i;
+
+	if (!t7xx_dev->pdev->msix_cap)
+		return -EINVAL;
+
+	ret = t7xx_setup_msix(t7xx_dev);
+	if (ret)
+		return ret;
+
+	/* IPs enable interrupts when ready */
+	for (i = EXT_INT_START; i < EXT_INT_START + EXT_INT_NUM; i++)
+		PCIE_MAC_MSIX_MSK_SET(t7xx_dev, i);
+
+	return 0;
+}
+
+static inline void t7xx_pci_infracfg_ao_calc(struct t7xx_pci_dev *t7xx_dev)
+{
+	t7xx_dev->base_addr.infracfg_ao_base = t7xx_dev->base_addr.pcie_ext_reg_base +
+					       INFRACFG_AO_DEV_CHIP -
+					       t7xx_dev->base_addr.pcie_dev_reg_trsl_addr;
+}
+
+static int t7xx_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct t7xx_pci_dev *t7xx_dev;
+	int ret;
+
+	t7xx_dev = devm_kzalloc(&pdev->dev, sizeof(*t7xx_dev), GFP_KERNEL);
+	if (!t7xx_dev)
+		return -ENOMEM;
+
+	pci_set_drvdata(pdev, t7xx_dev);
+	t7xx_dev->pdev = pdev;
+
+	ret = pcim_enable_device(pdev);
+	if (ret)
+		return ret;
+
+	pci_set_master(pdev);
+
+	ret = pcim_iomap_regions(pdev, BIT(PCI_IREG_BASE) | BIT(PCI_EREG_BASE), pci_name(pdev));
+	if (ret) {
+		dev_err(&pdev->dev, "Could not request BARs: %d\n", ret);
+		return -ENOMEM;
+	}
+
+	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
+	if (ret) {
+		dev_err(&pdev->dev, "Could not set PCI DMA mask: %d\n", ret);
+		return ret;
+	}
+
+	ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
+	if (ret) {
+		dev_err(&pdev->dev, "Could not set consistent PCI DMA mask: %d\n", ret);
+		return ret;
+	}
+
+	IREG_BASE(t7xx_dev) = pcim_iomap_table(pdev)[PCI_IREG_BASE];
+	t7xx_dev->base_addr.pcie_ext_reg_base = pcim_iomap_table(pdev)[PCI_EREG_BASE];
+
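+	/* BAR 0 exposes the PCIe MAC internal registers; BAR 2 is the
+	 * extended region that the ATR configured below translates onto
+	 * the device-internal buses (CLDMA, DPMAIF and MHCCIF).
+	 */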
+	t7xx_pcie_mac_atr_init(t7xx_dev);
+	t7xx_pci_infracfg_ao_calc(t7xx_dev);
+	t7xx_mhccif_init(t7xx_dev);
+
+	ret = t7xx_md_init(t7xx_dev);
+	if (ret)
+		return ret;
+
+	t7xx_pcie_mac_interrupts_dis(t7xx_dev);
+
+	ret = t7xx_interrupt_init(t7xx_dev);
+	if (ret)
+		return ret;
+
+	t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
+	t7xx_pcie_mac_interrupts_en(t7xx_dev);
+
+	return 0;
+}
+
+static void t7xx_pci_remove(struct pci_dev *pdev)
+{
+	struct t7xx_pci_dev *t7xx_dev;
+	int i;
+
+	t7xx_dev = pci_get_drvdata(pdev);
+	t7xx_md_exit(t7xx_dev);
+
+	for (i = 0; i < EXT_INT_NUM; i++) {
+		if (!t7xx_dev->intr_handler[i])
+			continue;
+
+		free_irq(pci_irq_vector(pdev, i), t7xx_dev->callback_param[i]);
+	}
+
+	pci_free_irq_vectors(t7xx_dev->pdev);
+}
+
+static const struct pci_device_id t7xx_pci_table[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x4d75) },
+	{ }
+};
+MODULE_DEVICE_TABLE(pci, t7xx_pci_table);
+
+static struct pci_driver t7xx_pci_driver = {
+	.name = "mtk_t7xx",
+	.id_table = t7xx_pci_table,
+	.probe = t7xx_pci_probe,
+	.remove = t7xx_pci_remove,
+};
+
+module_pci_driver(t7xx_pci_driver);
+
+MODULE_AUTHOR("MediaTek Inc");
+MODULE_DESCRIPTION("MediaTek PCIe 5G WWAN modem T7xx driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h
new file mode 100644
index 000000000000..b52aaa182a10
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pci.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu
+ * Ricardo Martinez
+ * Sreehari Kancharla
+ *
+ * Contributors:
+ * Amir Hanania
+ * Chiranjeevi Rapolu
+ * Moises Veleta
+ */
+
+#ifndef __T7XX_PCI_H__
+#define __T7XX_PCI_H__
+
+#include
+#include
+#include
+
+#include "t7xx_reg.h"
+
+/* struct t7xx_addr_base - holds base addresses
+ * @pcie_mac_ireg_base: PCIe MAC register base
+ * @pcie_ext_reg_base: used to calculate base addresses for CLDMA, DPMA and MHCCIF registers
+ * @pcie_dev_reg_trsl_addr: used to calculate the register base address
+ * @infracfg_ao_base: base address used in CLDMA reset operations
+ * @mhccif_rc_base: host view of MHCCIF rc base addr
+ */
+struct t7xx_addr_base {
+	void __iomem *pcie_mac_ireg_base;
+	void __iomem *pcie_ext_reg_base;
+	u32 pcie_dev_reg_trsl_addr;
+	void __iomem *infracfg_ao_base;
+	void __iomem *mhccif_rc_base;
+};
+
+typedef irqreturn_t (*t7xx_intr_callback)(int irq, void *param);
+
+/* struct t7xx_pci_dev - MTK device context structure
+ * @intr_handler: array of handler function for request_threaded_irq
+ * @intr_thread: array of thread_fn for request_threaded_irq
+ * @callback_param: array of cookie passed back to interrupt functions
+ * @mhccif_bitmask: device to host interrupt mask
+ * @pdev: PCI device
+ * @base_addr: memory base addresses of HW components
+ * @md: modem interface
+ * @ccmni_ctlb: context structure used to control the network data path
+ * @rgu_pci_irq_en: RGU callback isr registered and active
+ */
+struct t7xx_pci_dev {
+	t7xx_intr_callback intr_handler[EXT_INT_NUM];
+	t7xx_intr_callback intr_thread[EXT_INT_NUM];
+	void *callback_param[EXT_INT_NUM];
+	u32 mhccif_bitmask;
+	struct pci_dev *pdev;
+	struct t7xx_addr_base base_addr;
+	struct t7xx_modem *md;
+	struct t7xx_ccmni_ctrl *ccmni_ctlb;
+	bool rgu_pci_irq_en;
+};
+
+#endif /* __T7XX_PCI_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_pcie_mac.c b/drivers/net/wwan/t7xx/t7xx_pcie_mac.c
new file mode 100644
index
000000000000..fa5effa4e07a --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_pcie_mac.c @@ -0,0 +1,277 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. + * + * Authors: + * Haijun Liu + * Moises Veleta + * Sreehari Kancharla + * + * Contributors: + * Amir Hanania + * Chiranjeevi Rapolu + * Ricardo Martinez + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "t7xx_pci.h" +#include "t7xx_pcie_mac.h" +#include "t7xx_reg.h" + +#define PCIE_REG_BAR 2 +#define PCIE_REG_PORT ATR_SRC_PCI_WIN0 +#define PCIE_REG_TABLE_NUM 0 +#define PCIE_REG_TRSL_PORT ATR_DST_AXIM_0 + +#define PCIE_DEV_DMA_PORT_START ATR_SRC_AXIS_0 +#define PCIE_DEV_DMA_PORT_END ATR_SRC_AXIS_2 +#define PCIE_DEV_DMA_TABLE_NUM 0 +#define PCIE_DEV_DMA_TRSL_ADDR 0 +#define PCIE_DEV_DMA_SRC_ADDR 0 +#define PCIE_DEV_DMA_TRANSPARENT 1 +#define PCIE_DEV_DMA_SIZE 0 + +enum t7xx_atr_src_port { + ATR_SRC_PCI_WIN0, + ATR_SRC_PCI_WIN1, + ATR_SRC_AXIS_0, + ATR_SRC_AXIS_1, + ATR_SRC_AXIS_2, + ATR_SRC_AXIS_3, +}; + +enum t7xx_atr_dst_port { + ATR_DST_PCI_TRX, + ATR_DST_PCI_CONFIG, + ATR_DST_AXIM_0 = 4, + ATR_DST_AXIM_1, + ATR_DST_AXIM_2, + ATR_DST_AXIM_3, +}; + +struct t7xx_atr_config { + u64 src_addr; + u64 trsl_addr; + u64 size; + u32 port; + u32 table; + enum t7xx_atr_dst_port trsl_id; + u32 transparent; +}; + +static void t7xx_pcie_mac_atr_tables_dis(void __iomem *pbase, enum t7xx_atr_src_port port) +{ + void __iomem *reg; + int i, offset; + + for (i = 0; i < ATR_TABLE_NUM_PER_ATR; i++) { + offset = ATR_PORT_OFFSET * port + ATR_TABLE_OFFSET * i; + reg = pbase + ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR + offset; + iowrite64(0, reg); + } +} + +static int t7xx_pcie_mac_atr_cfg(struct t7xx_pci_dev *t7xx_dev, struct t7xx_atr_config *cfg) +{ + struct device *dev = &t7xx_dev->pdev->dev; + void __iomem *pbase = IREG_BASE(t7xx_dev); + int atr_size, pos, offset; + void __iomem *reg; + u64 value; + + if (cfg->transparent) { + /* No address conversion is performed */ + atr_size = ATR_TRANSPARENT_SIZE; + } else { + if (cfg->src_addr & (cfg->size - 1)) { + dev_err(dev, "Source address is not aligned to size\n"); + return -EINVAL; + } + + if (cfg->trsl_addr & (cfg->size - 1)) { + dev_err(dev, "Translation address %llx is not aligned to size %llx\n", + cfg->trsl_addr, cfg->size - 1); + return -EINVAL; + } + + pos = __ffs64(cfg->size); + + /* HW calculates the address translation space as 2^(atr_size + 1) */ + atr_size = pos - 1; + } + + offset = ATR_PORT_OFFSET * cfg->port + ATR_TABLE_OFFSET * cfg->table; + + reg = pbase + ATR_PCIE_WIN0_T0_TRSL_ADDR + offset; + value = cfg->trsl_addr & ATR_PCIE_WIN0_ADDR_ALGMT; + iowrite64(value, reg); + + reg = pbase + ATR_PCIE_WIN0_T0_TRSL_PARAM + offset; + iowrite32(cfg->trsl_id, reg); + + reg = pbase + ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR + offset; + value = (cfg->src_addr & ATR_PCIE_WIN0_ADDR_ALGMT) | (atr_size << 1) | BIT(0); + iowrite64(value, reg); + + /* Ensure ATR is set */ + ioread64(reg); + return 0; +} + +/** + * t7xx_pcie_mac_atr_init() - Initialize address translation. + * @t7xx_dev: MTK device. + * + * Setup ATR for ports & device. 
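+ *
+ * The HW encodes each window size as 2^(atr_size + 1): e.g. the 4 MB
+ * RC-to-device register window (PCIE_REG_SIZE_CHIP, 0x00400000) gives
+ * atr_size = __ffs64(0x00400000) - 1 = 21, and 2^(21 + 1) = 4 MB.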
+ */
+void t7xx_pcie_mac_atr_init(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct t7xx_atr_config cfg;
+	u32 i;
+
+	/* Disable for all ports */
+	for (i = ATR_SRC_PCI_WIN0; i <= ATR_SRC_AXIS_3; i++)
+		t7xx_pcie_mac_atr_tables_dis(IREG_BASE(t7xx_dev), i);
+
+	memset(&cfg, 0, sizeof(cfg));
+	/* Config ATR for RC to access device's register */
+	cfg.src_addr = pci_resource_start(t7xx_dev->pdev, PCIE_REG_BAR);
+	cfg.size = PCIE_REG_SIZE_CHIP;
+	cfg.trsl_addr = PCIE_REG_TRSL_ADDR_CHIP;
+	cfg.port = PCIE_REG_PORT;
+	cfg.table = PCIE_REG_TABLE_NUM;
+	cfg.trsl_id = PCIE_REG_TRSL_PORT;
+	t7xx_pcie_mac_atr_tables_dis(IREG_BASE(t7xx_dev), cfg.port);
+	t7xx_pcie_mac_atr_cfg(t7xx_dev, &cfg);
+
+	t7xx_dev->base_addr.pcie_dev_reg_trsl_addr = PCIE_REG_TRSL_ADDR_CHIP;
+
+	/* Config ATR for EP to access RC's memory */
+	for (i = PCIE_DEV_DMA_PORT_START; i <= PCIE_DEV_DMA_PORT_END; i++) {
+		cfg.src_addr = PCIE_DEV_DMA_SRC_ADDR;
+		cfg.size = PCIE_DEV_DMA_SIZE;
+		cfg.trsl_addr = PCIE_DEV_DMA_TRSL_ADDR;
+		cfg.port = i;
+		cfg.table = PCIE_DEV_DMA_TABLE_NUM;
+		cfg.trsl_id = ATR_DST_PCI_TRX;
+		cfg.transparent = PCIE_DEV_DMA_TRANSPARENT;
+		t7xx_pcie_mac_atr_tables_dis(IREG_BASE(t7xx_dev), cfg.port);
+		t7xx_pcie_mac_atr_cfg(t7xx_dev, &cfg);
+	}
+}
+
+/**
+ * t7xx_pcie_mac_enable_disable_int() - Enable/disable interrupts.
+ * @t7xx_dev: MTK device.
+ * @enable: Enable/disable.
+ *
+ * Enable or disable device interrupts.
+ */
+static void t7xx_pcie_mac_enable_disable_int(struct t7xx_pci_dev *t7xx_dev, bool enable)
+{
+	u32 value;
+
+	value = ioread32(IREG_BASE(t7xx_dev) + ISTAT_HST_CTRL);
+
+	if (enable)
+		value &= ~ISTAT_HST_CTRL_DIS;
+	else
+		value |= ISTAT_HST_CTRL_DIS;
+
+	iowrite32(value, IREG_BASE(t7xx_dev) + ISTAT_HST_CTRL);
+}
+
+void t7xx_pcie_mac_interrupts_en(struct t7xx_pci_dev *t7xx_dev)
+{
+	t7xx_pcie_mac_enable_disable_int(t7xx_dev, true);
+}
+
+void t7xx_pcie_mac_interrupts_dis(struct t7xx_pci_dev *t7xx_dev)
+{
+	t7xx_pcie_mac_enable_disable_int(t7xx_dev, false);
+}
+
+/**
+ * t7xx_pcie_mac_clear_set_int() - Clear/set interrupt by type.
+ * @t7xx_dev: MTK device.
+ * @int_type: Interrupt type.
+ * @clear: Clear/set.
+ *
+ * Clear or set device interrupt by type.
+ */
+static void t7xx_pcie_mac_clear_set_int(struct t7xx_pci_dev *t7xx_dev,
+					enum pcie_int int_type, bool clear)
+{
+	void __iomem *reg;
+	u32 val;
+
+	if (t7xx_dev->pdev->msix_enabled) {
+		if (clear)
+			reg = IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_CLR_GRP0_0;
+		else
+			reg = IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_SET_GRP0_0;
+	} else {
+		if (clear)
+			reg = IREG_BASE(t7xx_dev) + INT_EN_HST_CLR;
+		else
+			reg = IREG_BASE(t7xx_dev) + INT_EN_HST_SET;
+	}
+
+	val = BIT(EXT_INT_START + int_type);
+	iowrite32(val, reg);
+}
+
+void t7xx_pcie_mac_clear_int(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type)
+{
+	t7xx_pcie_mac_clear_set_int(t7xx_dev, int_type, true);
+}
+
+void t7xx_pcie_mac_set_int(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type)
+{
+	t7xx_pcie_mac_clear_set_int(t7xx_dev, int_type, false);
+}
+
+/**
+ * t7xx_pcie_mac_clear_int_status() - Clear interrupt status by type.
+ * @t7xx_dev: MTK device.
+ * @int_type: Interrupt type.
+ *
+ * Clear device interrupt status by type.
+ */ +void t7xx_pcie_mac_clear_int_status(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type) +{ + void __iomem *reg; + u32 val; + + if (t7xx_dev->pdev->msix_enabled) + reg = IREG_BASE(t7xx_dev) + MSIX_ISTAT_HST_GRP0_0; + else + reg = IREG_BASE(t7xx_dev) + ISTAT_HST; + + val = BIT(EXT_INT_START + int_type); + iowrite32(val, reg); +} + +/** + * t7xx_pcie_set_mac_msix_cfg() - Write MSIX control configuration. + * @t7xx_dev: MTK device. + * @irq_count: Number of MSIX IRQ vectors. + * + * Write IRQ count to device. + */ +void t7xx_pcie_set_mac_msix_cfg(struct t7xx_pci_dev *t7xx_dev, unsigned int irq_count) +{ + u32 val; + + val = ffs(irq_count) * 2 - 1; + iowrite32(val, IREG_BASE(t7xx_dev) + PCIE_CFG_MSIX); +} diff --git a/drivers/net/wwan/t7xx/t7xx_pcie_mac.h b/drivers/net/wwan/t7xx/t7xx_pcie_mac.h new file mode 100644 index 000000000000..7cb6942fd44f --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_pcie_mac.h @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. + * + * Authors: + * Haijun Liu + * Sreehari Kancharla + * + * Contributors: + * Moises Veleta + * Ricardo Martinez + */ + +#ifndef __T7XX_PCIE_MAC_H__ +#define __T7XX_PCIE_MAC_H__ + +#include +#include + +#include "t7xx_pci.h" +#include "t7xx_reg.h" + +#define IREG_BASE(t7xx_dev) ((t7xx_dev)->base_addr.pcie_mac_ireg_base) + +#define PCIE_MAC_MSIX_MSK_SET(t7xx_dev, ext_id) \ + iowrite32(BIT(ext_id), IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_SET_GRP0_0) + +void t7xx_pcie_mac_interrupts_en(struct t7xx_pci_dev *t7xx_dev); +void t7xx_pcie_mac_interrupts_dis(struct t7xx_pci_dev *t7xx_dev); +void t7xx_pcie_mac_atr_init(struct t7xx_pci_dev *t7xx_dev); +void t7xx_pcie_mac_clear_int(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type); +void t7xx_pcie_mac_set_int(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type); +void t7xx_pcie_mac_clear_int_status(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type); +void t7xx_pcie_set_mac_msix_cfg(struct t7xx_pci_dev *t7xx_dev, unsigned int irq_count); + +#endif /* __T7XX_PCIE_MAC_H__ */ diff --git a/drivers/net/wwan/t7xx/t7xx_reg.h b/drivers/net/wwan/t7xx/t7xx_reg.h new file mode 100644 index 000000000000..ba2127ca8501 --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_reg.h @@ -0,0 +1,379 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ * + * Authors: + * Haijun Liu + * Chiranjeevi Rapolu + * + * Contributors: + * Amir Hanania + * Andy Shevchenko + * Eliot Lee + * Moises Veleta + * Ricardo Martinez + * Sreehari Kancharla + */ + +#ifndef __T7XX_REG_H__ +#define __T7XX_REG_H__ + +#include + +/* RC part */ + +/* Device base address offset - update if reg BAR base is changed */ +#define MHCCIF_RC_DEV_BASE 0x10024000 + +#define REG_RC2EP_SW_BSY 0x04 +#define REG_RC2EP_SW_INT_START 0x08 + +#define REG_RC2EP_SW_TCHNUM 0x0c +#define H2D_CH_EXCEPTION_ACK 1 +#define H2D_CH_EXCEPTION_CLEARQ_ACK 2 +#define H2D_CH_DS_LOCK 3 +/* Channels 4-8 are reserved */ +#define H2D_CH_SUSPEND_REQ 9 +#define H2D_CH_RESUME_REQ 10 +#define H2D_CH_SUSPEND_REQ_AP 11 +#define H2D_CH_RESUME_REQ_AP 12 +#define H2D_CH_DEVICE_RESET 13 +#define H2D_CH_DRM_DISABLE_AP 14 + +#define REG_EP2RC_SW_INT_STS 0x10 +#define REG_EP2RC_SW_INT_ACK 0x14 +#define REG_EP2RC_SW_INT_EAP_MASK 0x20 +#define REG_EP2RC_SW_INT_EAP_MASK_SET 0x30 +#define REG_EP2RC_SW_INT_EAP_MASK_CLR 0x40 + +#define D2H_INT_DS_LOCK_ACK BIT(0) +#define D2H_INT_EXCEPTION_INIT BIT(1) +#define D2H_INT_EXCEPTION_INIT_DONE BIT(2) +#define D2H_INT_EXCEPTION_CLEARQ_DONE BIT(3) +#define D2H_INT_EXCEPTION_ALLQ_RESET BIT(4) +#define D2H_INT_PORT_ENUM BIT(5) +/* Bits 6-10 are reserved */ +#define D2H_INT_SUSPEND_ACK BIT(11) +#define D2H_INT_RESUME_ACK BIT(12) +#define D2H_INT_SUSPEND_ACK_AP BIT(13) +#define D2H_INT_RESUME_ACK_AP BIT(14) +#define D2H_INT_ASYNC_SAP_HK BIT(15) +#define D2H_INT_ASYNC_MD_HK BIT(16) + +/* EP part */ + +/* Register base */ +#define INFRACFG_AO_DEV_CHIP 0x10001000 + +/* ATR setting */ +#define PCIE_REG_TRSL_ADDR_CHIP 0x10000000 +#define PCIE_REG_SIZE_CHIP 0x00400000 + +/* Reset Generic Unit (RGU) */ +#define TOPRGU_CH_PCIE_IRQ_STA 0x1000790c + +#define ATR_PORT_OFFSET 0x100 +#define ATR_TABLE_OFFSET 0x20 +#define ATR_TABLE_NUM_PER_ATR 8 +#define ATR_TRANSPARENT_SIZE 0x3f + +/* PCIE_MAC_IREG Register Definition */ + +#define INT_EN_HST 0x0188 +#define ISTAT_HST 0x018c + +#define ISTAT_HST_CTRL 0x01ac +#define ISTAT_HST_CTRL_DIS BIT(0) + +#define INT_EN_HST_SET 0x01f0 +#define INT_EN_HST_CLR 0x01f4 + +#define PCIE_MISC_CTRL 0x0348 +#define PCIE_MISC_MAC_SLEEP_DIS BIT(7) + +#define PCIE_CFG_MSIX 0x03ec +#define ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR 0x0600 +#define ATR_PCIE_WIN0_T0_TRSL_ADDR 0x0608 +#define ATR_PCIE_WIN0_T0_TRSL_PARAM 0x0610 +#define ATR_PCIE_WIN0_ADDR_ALGMT GENMASK_ULL(63, 12) + +#define ATR_SRC_ADDR_INVALID 0x007f + +#define PCIE_PM_RESUME_STATE 0x0d0c + +enum t7xx_pm_resume_state { + PM_RESUME_REG_STATE_L3, + PM_RESUME_REG_STATE_L1, + PM_RESUME_REG_STATE_INIT, + PM_RESUME_REG_STATE_EXP, + PM_RESUME_REG_STATE_L2, + PM_RESUME_REG_STATE_L2_EXP, +}; + +#define PCIE_MISC_DEV_STATUS 0x0d1c +#define MISC_STAGE_MASK GENMASK(2, 0) +#define MISC_RESET_TYPE_PLDR BIT(26) +#define MISC_RESET_TYPE_FLDR BIT(27) +#define LINUX_STAGE 4 + +#define PCIE_RESOURCE_STATUS 0x0d28 +#define PCIE_RESOURCE_STATUS_MSK GENMASK(4, 0) + +#define DIS_ASPM_LOWPWR_SET_0 0x0e50 +#define DIS_ASPM_LOWPWR_CLR_0 0x0e54 +#define DIS_ASPM_LOWPWR_SET_1 0x0e58 +#define DIS_ASPM_LOWPWR_CLR_1 0x0e5c +#define L1_DISABLE_BIT(i) BIT((i) * 4 + 1) +#define L1_1_DISABLE_BIT(i) BIT((i) * 4 + 2) +#define L1_2_DISABLE_BIT(i) BIT((i) * 4 + 3) + +#define MSIX_SW_TRIG_SET_GRP0_0 0x0e80 +#define MSIX_ISTAT_HST_GRP0_0 0x0f00 +#define IMASK_HOST_MSIX_SET_GRP0_0 0x3000 +#define IMASK_HOST_MSIX_CLR_GRP0_0 0x3080 +#define IMASK_HOST_MSIX_GRP0_0 0x3100 +#define EXT_INT_START 24 +#define EXT_INT_NUM 8 +#define MSIX_MSK_SET_ALL 
GENMASK(31, 24) + +enum pcie_int { + DPMAIF_INT = 0, + CLDMA0_INT, + CLDMA1_INT, + CLDMA2_INT, + MHCCIF_INT, + DPMAIF2_INT, + SAP_RGU_INT, + CLDMA3_INT, +}; + +/* DPMA definitions */ + +#define DPMAIF_PD_BASE 0x1022d000 +#define BASE_DPMAIF_UL DPMAIF_PD_BASE +#define BASE_DPMAIF_DL (DPMAIF_PD_BASE + 0x100) +#define BASE_DPMAIF_AP_MISC (DPMAIF_PD_BASE + 0x400) +#define BASE_DPMAIF_MMW_HPC (DPMAIF_PD_BASE + 0x600) +#define BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX (DPMAIF_PD_BASE + 0x900) +#define BASE_DPMAIF_PD_SRAM_DL (DPMAIF_PD_BASE + 0xc00) +#define BASE_DPMAIF_PD_SRAM_UL (DPMAIF_PD_BASE + 0xd00) + +#define DPMAIF_AO_BASE 0x10014000 +#define BASE_DPMAIF_AO_UL DPMAIF_AO_BASE +#define BASE_DPMAIF_AO_DL (DPMAIF_AO_BASE + 0x400) + +/* DPMAIF UL */ +#define DPMAIF_UL_ADD_DESC (BASE_DPMAIF_UL + 0x00) +#define DPMAIF_UL_CHK_BUSY (BASE_DPMAIF_UL + 0x88) +#define DPMAIF_UL_RESERVE_AO_RW (BASE_DPMAIF_UL + 0xac) +#define DPMAIF_UL_ADD_DESC_CH0 (BASE_DPMAIF_UL + 0xb0) + +/* DPMAIF DL */ +#define DPMAIF_DL_BAT_INIT (BASE_DPMAIF_DL + 0x00) +#define DPMAIF_DL_BAT_ADD (BASE_DPMAIF_DL + 0x04) +#define DPMAIF_DL_BAT_INIT_CON0 (BASE_DPMAIF_DL + 0x08) +#define DPMAIF_DL_BAT_INIT_CON1 (BASE_DPMAIF_DL + 0x0c) +#define DPMAIF_DL_BAT_INIT_CON2 (BASE_DPMAIF_DL + 0x10) +#define DPMAIF_DL_BAT_INIT_CON3 (BASE_DPMAIF_DL + 0x50) +#define DPMAIF_DL_CHK_BUSY (BASE_DPMAIF_DL + 0xb4) + +/* DPMAIF AP misc */ +#define DPMAIF_AP_L2TISAR0 (BASE_DPMAIF_AP_MISC + 0x00) +#define DPMAIF_AP_APDL_L2TISAR0 (BASE_DPMAIF_AP_MISC + 0x50) +#define DPMAIF_AP_IP_BUSY (BASE_DPMAIF_AP_MISC + 0x60) +#define DPMAIF_AP_CG_EN (BASE_DPMAIF_AP_MISC + 0x68) +#define DPMAIF_AP_OVERWRITE_CFG (BASE_DPMAIF_AP_MISC + 0x90) +#define DPMAIF_AP_MEM_CLR (BASE_DPMAIF_AP_MISC + 0x94) +#define DPMAIF_AP_ALL_L2TISAR0_MASK GENMASK(31, 0) +#define DPMAIF_AP_APDL_ALL_L2TISAR0_MASK GENMASK(31, 0) +#define DPMAIF_AP_IP_BUSY_MASK GENMASK(31, 0) + +/* DPMAIF AO UL */ +#define DPMAIF_AO_UL_INIT_SET (BASE_DPMAIF_AO_UL + 0x0) +#define DPMAIF_AO_UL_CHNL_ARB0 (BASE_DPMAIF_AO_UL + 0x1c) +#define DPMAIF_AO_UL_AP_L2TIMR0 (BASE_DPMAIF_AO_UL + 0x80) +#define DPMAIF_AO_UL_AP_L2TIMCR0 (BASE_DPMAIF_AO_UL + 0x84) +#define DPMAIF_AO_UL_AP_L2TIMSR0 (BASE_DPMAIF_AO_UL + 0x88) +#define DPMAIF_AO_UL_AP_L1TIMR0 (BASE_DPMAIF_AO_UL + 0x8c) +#define DPMAIF_AO_UL_APDL_L2TIMR0 (BASE_DPMAIF_AO_UL + 0x90) +#define DPMAIF_AO_UL_APDL_L2TIMCR0 (BASE_DPMAIF_AO_UL + 0x94) +#define DPMAIF_AO_UL_APDL_L2TIMSR0 (BASE_DPMAIF_AO_UL + 0x98) +#define DPMAIF_AO_AP_DLUL_IP_BUSY_MASK (BASE_DPMAIF_AO_UL + 0x9c) + +/* DPMAIF PD SRAM UL */ +#define DPMAIF_AO_UL_CHNL0_CON0 (BASE_DPMAIF_PD_SRAM_UL + 0x10) +#define DPMAIF_AO_UL_CHNL0_CON1 (BASE_DPMAIF_PD_SRAM_UL + 0x14) +#define DPMAIF_AO_UL_CHNL0_CON2 (BASE_DPMAIF_PD_SRAM_UL + 0x18) +#define DPMAIF_AO_UL_CH0_STA (BASE_DPMAIF_PD_SRAM_UL + 0x70) + +/* DPMAIF AO DL */ +#define DPMAIF_AO_DL_INIT_SET (BASE_DPMAIF_AO_DL + 0x00) +#define DPMAIF_AO_DL_IRQ_MASK (BASE_DPMAIF_AO_DL + 0x0c) +#define DPMAIF_AO_DL_DLQPIT_INIT_CON5 (BASE_DPMAIF_AO_DL + 0x28) +#define DPMAIF_AO_DL_DLQPIT_TRIG_THRES (BASE_DPMAIF_AO_DL + 0x34) + +/* DPMAIF PD SRAM DL */ +#define DPMAIF_AO_DL_PKTINFO_CON0 (BASE_DPMAIF_PD_SRAM_DL + 0x00) +#define DPMAIF_AO_DL_PKTINFO_CON1 (BASE_DPMAIF_PD_SRAM_DL + 0x04) +#define DPMAIF_AO_DL_PKTINFO_CON2 (BASE_DPMAIF_PD_SRAM_DL + 0x08) +#define DPMAIF_AO_DL_RDY_CHK_THRES (BASE_DPMAIF_PD_SRAM_DL + 0x0c) +#define DPMAIF_AO_DL_RDY_CHK_FRG_THRES (BASE_DPMAIF_PD_SRAM_DL + 0x10) + +#define DPMAIF_AO_DL_DLQ_AGG_CFG (BASE_DPMAIF_PD_SRAM_DL + 0x20) +#define 
DPMAIF_AO_DL_DLQPIT_TIMEOUT0 (BASE_DPMAIF_PD_SRAM_DL + 0x24) +#define DPMAIF_AO_DL_DLQPIT_TIMEOUT1 (BASE_DPMAIF_PD_SRAM_DL + 0x28) +#define DPMAIF_AO_DL_HPC_CNTL (BASE_DPMAIF_PD_SRAM_DL + 0x38) +#define DPMAIF_AO_DL_PIT_SEQ_END (BASE_DPMAIF_PD_SRAM_DL + 0x40) + +#define DPMAIF_AO_DL_BAT_RIDX (BASE_DPMAIF_PD_SRAM_DL + 0xd8) +#define DPMAIF_AO_DL_BAT_WRIDX (BASE_DPMAIF_PD_SRAM_DL + 0xdc) +#define DPMAIF_AO_DL_PIT_RIDX (BASE_DPMAIF_PD_SRAM_DL + 0xec) +#define DPMAIF_AO_DL_PIT_WRIDX (BASE_DPMAIF_PD_SRAM_DL + 0x60) +#define DPMAIF_AO_DL_FRGBAT_WRIDX (BASE_DPMAIF_PD_SRAM_DL + 0x78) +#define DPMAIF_AO_DL_DLQ_WRIDX (BASE_DPMAIF_PD_SRAM_DL + 0xa4) + +/* DPMAIF HPC */ +#define DPMAIF_HPC_INTR_MASK (BASE_DPMAIF_MMW_HPC + 0x0f4) +#define DPMA_HPC_ALL_INT_MASK GENMASK(15, 0) + +#define DPMAIF_HPC_DLQ_PATH_MODE 3 +#define DPMAIF_HPC_ADD_MODE_DF 0 +#define DPMAIF_HPC_TOTAL_NUM 8 +#define DPMAIF_HPC_MAX_TOTAL_NUM 8 + +/* DPMAIF DL queue */ +#define DPMAIF_DL_DLQPIT_INIT (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x00) +#define DPMAIF_DL_DLQPIT_ADD (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x10) +#define DPMAIF_DL_DLQPIT_INIT_CON0 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x14) +#define DPMAIF_DL_DLQPIT_INIT_CON1 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x18) +#define DPMAIF_DL_DLQPIT_INIT_CON2 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x1c) +#define DPMAIF_DL_DLQPIT_INIT_CON3 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x20) +#define DPMAIF_DL_DLQPIT_INIT_CON4 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x24) +#define DPMAIF_DL_DLQPIT_INIT_CON5 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x28) +#define DPMAIF_DL_DLQPIT_INIT_CON6 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x2c) + +#define DPMAIF_ULQSAR_n(q) (DPMAIF_AO_UL_CHNL0_CON0 + 0x10 * (q)) +#define DPMAIF_UL_DRBSIZE_ADDRH_n(q) (DPMAIF_AO_UL_CHNL0_CON1 + 0x10 * (q)) +#define DPMAIF_UL_DRB_ADDRH_n(q) (DPMAIF_AO_UL_CHNL0_CON2 + 0x10 * (q)) +#define DPMAIF_ULQ_STA0_n(q) (DPMAIF_AO_UL_CH0_STA + 0x04 * (q)) +#define DPMAIF_ULQ_ADD_DESC_CH_n(q) (DPMAIF_UL_ADD_DESC_CH0 + 0x04 * (q)) + +#define DPMAIF_UL_DRB_RIDX_OFFSET 16 + +#define DPMAIF_AP_RGU_ASSERT 0x10001150 +#define DPMAIF_AP_RGU_DEASSERT 0x10001154 +#define DPMAIF_AP_RST_BIT BIT(2) + +#define DPMAIF_AP_AO_RGU_ASSERT 0x10001140 +#define DPMAIF_AP_AO_RGU_DEASSERT 0x10001144 +#define DPMAIF_AP_AO_RST_BIT BIT(6) + +/* DPMAIF init/restore */ +#define DPMAIF_UL_ADD_NOT_READY BIT(31) +#define DPMAIF_UL_ADD_UPDATE BIT(31) +#define DPMAIF_UL_ADD_COUNT_MASK GENMASK(15, 0) +#define DPMAIF_UL_ALL_QUE_ARB_EN GENMASK(11, 8) + +#define DPMAIF_DL_ADD_UPDATE BIT(31) +#define DPMAIF_DL_ADD_NOT_READY BIT(31) +#define DPMAIF_DL_FRG_ADD_UPDATE BIT(16) +#define DPMAIF_DL_ADD_COUNT_MASK GENMASK(15, 0) + +#define DPMAIF_DL_BAT_INIT_ALLSET BIT(0) +#define DPMAIF_DL_BAT_FRG_INIT BIT(16) +#define DPMAIF_DL_BAT_INIT_EN BIT(31) +#define DPMAIF_DL_BAT_INIT_NOT_READY BIT(31) +#define DPMAIF_DL_BAT_INIT_ONLY_ENABLE_BIT 0 + +#define DPMAIF_DL_PIT_INIT_ALLSET BIT(0) +#define DPMAIF_DL_PIT_INIT_EN BIT(31) +#define DPMAIF_DL_PIT_INIT_NOT_READY BIT(31) + +#define DPMAIF_PKT_ALIGN64_MODE 0 +#define DPMAIF_PKT_ALIGN128_MODE 1 + +#define DPMAIF_BAT_REMAIN_SZ_BASE 16 +#define DPMAIF_BAT_BUFFER_SZ_BASE 128 +#define DPMAIF_FRG_BUFFER_SZ_BASE 128 + +#define DLQ_PIT_IDX_SIZE 0x20 + +#define DPMAIF_PIT_SIZE_MSK GENMASK(17, 0) + +#define DPMAIF_PIT_REM_CNT_MSK GENMASK(17, 0) + +#define DPMAIF_BAT_EN_MSK BIT(16) +#define DPMAIF_FRG_EN_MSK BIT(28) +#define DPMAIF_BAT_SIZE_MSK GENMASK(15, 0) + +#define DPMAIF_BAT_BID_MAXCNT_MSK GENMASK(31, 16) +#define DPMAIF_BAT_REMAIN_MINSZ_MSK GENMASK(15, 8) +#define 
DPMAIF_PIT_CHK_NUM_MSK GENMASK(31, 24) +#define DPMAIF_BAT_BUF_SZ_MSK GENMASK(16, 8) +#define DPMAIF_FRG_BUF_SZ_MSK GENMASK(16, 8) +#define DPMAIF_BAT_RSV_LEN_MSK GENMASK(7, 0) +#define DPMAIF_PKT_ALIGN_MSK GENMASK(23, 22) + +#define DPMAIF_BAT_CHECK_THRES_MSK GENMASK(21, 16) +#define DPMAIF_FRG_CHECK_THRES_MSK GENMASK(7, 0) + +#define DPMAIF_PKT_ALIGN_EN BIT(23) + +#define DPMAIF_DRB_SIZE_MSK GENMASK(15, 0) + +#define DPMAIF_DL_PIT_WRIDX_MSK GENMASK(17, 0) +#define DPMAIF_DL_BAT_WRIDX_MSK GENMASK(17, 0) +#define DPMAIF_DL_FRG_WRIDX_MSK GENMASK(17, 0) + +/* Bit fields of registers */ +/* DPMAIF_UL_CHK_BUSY */ +#define DPMAIF_UL_IDLE_STS BIT(11) +/* DPMAIF_DL_CHK_BUSY */ +#define DPMAIF_DL_IDLE_STS BIT(23) +/* DPMAIF_AO_DL_RDY_CHK_THRES */ +#define DPMAIF_DL_PKT_CHECKSUM_EN BIT(31) +#define DPMAIF_PORT_MODE_PCIE BIT(30) +#define DPMAIF_DL_BURST_PIT_EN BIT(13) +/* DPMAIF_DL_BAT_INIT_CON1 */ +#define DPMAIF_DL_BAT_CACHE_PRI BIT(22) +/* DPMAIF_AP_MEM_CLR */ +#define DPMAIF_MEM_CLR BIT(0) +/* DPMAIF_AP_OVERWRITE_CFG */ +#define DPMAIF_SRAM_SYNC BIT(0) +/* DPMAIF_AO_UL_INIT_SET */ +#define DPMAIF_UL_INIT_DONE BIT(0) +/* DPMAIF_AO_DL_INIT_SET */ +#define DPMAIF_DL_INIT_DONE BIT(0) +/* DPMAIF_AO_DL_PIT_SEQ_END */ +#define DPMAIF_DL_PIT_SEQ_MSK GENMASK(7, 0) +/* DPMAIF_UL_RESERVE_AO_RW */ +#define DPMAIF_PCIE_MODE_SET_VALUE 0x55 +/* DPMAIF_AP_CG_EN */ +#define DPMAIF_CG_EN 0x7f + +#define DPMAIF_UDL_IP_BUSY BIT(0) +#define DPMAIF_DL_INT_DLQ0_QDONE BIT(8) +#define DPMAIF_DL_INT_DLQ1_QDONE BIT(9) +#define DPMAIF_DL_INT_DLQ0_PITCNT_LEN BIT(10) +#define DPMAIF_DL_INT_DLQ1_PITCNT_LEN BIT(11) +#define DPMAIF_DL_INT_Q2TOQ1 BIT(24) +#define DPMAIF_DL_INT_Q2APTOP BIT(25) + +#define DPMAIF_DLQ_LOW_TIMEOUT_THRES_MKS GENMASK(15, 0) +#define DPMAIF_DLQ_HIGH_TIMEOUT_THRES_MSK GENMASK(31, 16) + +/* DPMAIF DLQ HW configure */ +#define DPMAIF_AGG_MAX_LEN_DF 65535 +#define DPMAIF_AGG_TBL_ENT_NUM_DF 50 +#define DPMAIF_HASH_PRIME_DF 13 +#define DPMAIF_MID_TIMEOUT_THRES_DF 100 +#define DPMAIF_DLQ_TIMEOUT_THRES_DF 100 +#define DPMAIF_DLQ_PRS_THRES_DF 10 +#define DPMAIF_DLQ_HASH_BIT_CHOOSE_DF 0 + +#define DPMAIF_DLQPIT_EN_MSK BIT(20) +#define DPMAIF_DLQPIT_CHAN_OFS 16 +#define DPMAIF_ADD_DLQ_PIT_CHAN_OFS 20 + +#endif /* __T7XX_REG_H__ */ diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c new file mode 100644 index 000000000000..cfd4e4b1a8df --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c @@ -0,0 +1,569 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ * + * Authors: + * Haijun Liu + * Eliot Lee + * Moises Veleta + * Ricardo Martinez + * + * Contributors: + * Amir Hanania + * Sreehari Kancharla + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "t7xx_hif_cldma.h" +#include "t7xx_mhccif.h" +#include "t7xx_modem_ops.h" +#include "t7xx_pci.h" +#include "t7xx_pcie_mac.h" +#include "t7xx_reg.h" +#include "t7xx_state_monitor.h" + +#define FSM_DRM_DISABLE_DELAY_MS 200 +#define FSM_EVENT_POLL_INTERVAL_MS 20 +#define FSM_MD_EX_REC_OK_TIMEOUT_MS 10000 +#define FSM_MD_EX_PASS_TIMEOUT_MS (45 * 1000) +#define FSM_CMD_TIMEOUT_MS 2000 + +void t7xx_fsm_notifier_register(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier) +{ + struct t7xx_fsm_ctl *ctl = md->fsm_ctl; + unsigned long flags; + + spin_lock_irqsave(&ctl->notifier_lock, flags); + list_add_tail(¬ifier->entry, &ctl->notifier_list); + spin_unlock_irqrestore(&ctl->notifier_lock, flags); +} + +void t7xx_fsm_notifier_unregister(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier) +{ + struct t7xx_fsm_notifier *notifier_cur, *notifier_next; + struct t7xx_fsm_ctl *ctl = md->fsm_ctl; + unsigned long flags; + + spin_lock_irqsave(&ctl->notifier_lock, flags); + list_for_each_entry_safe(notifier_cur, notifier_next, &ctl->notifier_list, entry) { + if (notifier_cur == notifier) + list_del(¬ifier->entry); + } + + spin_unlock_irqrestore(&ctl->notifier_lock, flags); +} + +static void fsm_state_notify(struct t7xx_modem *md, enum md_state state) +{ + struct t7xx_fsm_ctl *ctl = md->fsm_ctl; + struct t7xx_fsm_notifier *notifier; + unsigned long flags; + + spin_lock_irqsave(&ctl->notifier_lock, flags); + list_for_each_entry(notifier, &ctl->notifier_list, entry) { + spin_unlock_irqrestore(&ctl->notifier_lock, flags); + if (notifier->notifier_fn) + notifier->notifier_fn(state, notifier->data); + + spin_lock_irqsave(&ctl->notifier_lock, flags); + } + + spin_unlock_irqrestore(&ctl->notifier_lock, flags); +} + +void t7xx_fsm_broadcast_state(struct t7xx_fsm_ctl *ctl, enum md_state state) +{ + if (ctl->md_state != MD_STATE_WAITING_FOR_HS2 && state == MD_STATE_READY) + return; + + ctl->md_state = state; + + fsm_state_notify(ctl->md, state); +} + +static void fsm_finish_command(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd, int result) +{ + if (cmd->flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) { + *cmd->ret = result; + complete_all(cmd->done); + } + + kfree(cmd); +} + +static void fsm_del_kf_event(struct t7xx_fsm_event *event) +{ + list_del(&event->entry); + kfree(event); +} + +static void fsm_flush_event_cmd_qs(struct t7xx_fsm_ctl *ctl) +{ + struct device *dev = &ctl->md->t7xx_dev->pdev->dev; + struct t7xx_fsm_event *event, *evt_next; + struct t7xx_fsm_command *cmd, *cmd_next; + unsigned long flags; + + spin_lock_irqsave(&ctl->command_lock, flags); + list_for_each_entry_safe(cmd, cmd_next, &ctl->command_queue, entry) { + dev_warn(dev, "Unhandled command %d\n", cmd->cmd_id); + list_del(&cmd->entry); + fsm_finish_command(ctl, cmd, -EINVAL); + } + + spin_unlock_irqrestore(&ctl->command_lock, flags); + spin_lock_irqsave(&ctl->event_lock, flags); + list_for_each_entry_safe(event, evt_next, &ctl->event_queue, entry) { + dev_warn(dev, "Unhandled event %d\n", event->event_id); + fsm_del_kf_event(event); + } + + spin_unlock_irqrestore(&ctl->event_lock, flags); +} + +static void fsm_routine_exception(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd, + enum 
t7xx_ex_reason reason) +{ + struct device *dev = &ctl->md->t7xx_dev->pdev->dev; + bool rec_ok = false, pass = false; + struct t7xx_fsm_event *event; + unsigned long flags; + int cnt; + + dev_err(dev, "Exception %d, from %ps\n", reason, __builtin_return_address(0)); + + if (ctl->curr_state != FSM_STATE_READY && ctl->curr_state != FSM_STATE_STARTING) { + if (cmd) + fsm_finish_command(ctl, cmd, -EINVAL); + + return; + } + + ctl->last_state = ctl->curr_state; + ctl->curr_state = FSM_STATE_EXCEPTION; + + switch (reason) { + case EXCEPTION_HS_TIMEOUT: + dev_err(dev, "BOOT_HS_FAIL\n"); + break; + + case EXCEPTION_EVENT: + t7xx_fsm_broadcast_state(ctl, MD_STATE_EXCEPTION); + t7xx_md_exception_handshake(ctl->md); + + cnt = 0; + while (cnt < FSM_MD_EX_REC_OK_TIMEOUT_MS / FSM_EVENT_POLL_INTERVAL_MS) { + if (kthread_should_stop()) + return; + + spin_lock_irqsave(&ctl->event_lock, flags); + event = list_first_entry_or_null(&ctl->event_queue, + struct t7xx_fsm_event, entry); + if (event) { + if (event->event_id == FSM_EVENT_MD_EX) { + fsm_del_kf_event(event); + } else if (event->event_id == FSM_EVENT_MD_EX_REC_OK) { + rec_ok = true; + fsm_del_kf_event(event); + } + } + + spin_unlock_irqrestore(&ctl->event_lock, flags); + if (rec_ok) + break; + + cnt++; + /* Try again after 20ms */ + msleep(FSM_EVENT_POLL_INTERVAL_MS); + } + + cnt = 0; + while (cnt < FSM_MD_EX_PASS_TIMEOUT_MS / FSM_EVENT_POLL_INTERVAL_MS) { + if (kthread_should_stop()) + return; + + spin_lock_irqsave(&ctl->event_lock, flags); + event = list_first_entry_or_null(&ctl->event_queue, + struct t7xx_fsm_event, entry); + if (event && event->event_id == FSM_EVENT_MD_EX_PASS) { + pass = true; + fsm_del_kf_event(event); + } + + spin_unlock_irqrestore(&ctl->event_lock, flags); + + if (pass) + break; + cnt++; + /* Try again after 20ms */ + msleep(FSM_EVENT_POLL_INTERVAL_MS); + } + + break; + + default: + break; + } + + if (cmd) + fsm_finish_command(ctl, cmd, 0); +} + +static void fsm_stopped_handler(struct t7xx_fsm_ctl *ctl) +{ + ctl->last_state = ctl->curr_state; + ctl->curr_state = FSM_STATE_STOPPED; + + t7xx_fsm_broadcast_state(ctl, MD_STATE_STOPPED); + t7xx_md_reset(ctl->md->t7xx_dev); +} + +static void fsm_routine_stopped(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd) +{ + if (ctl->curr_state == FSM_STATE_STOPPED) { + fsm_finish_command(ctl, cmd, -EINVAL); + return; + } + + fsm_stopped_handler(ctl); + fsm_finish_command(ctl, cmd, 0); +} + +static void fsm_routine_stopping(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd) +{ + struct t7xx_pci_dev *t7xx_dev; + struct cldma_ctrl *md_ctrl; + int err; + + if (ctl->curr_state == FSM_STATE_STOPPED || ctl->curr_state == FSM_STATE_STOPPING) { + fsm_finish_command(ctl, cmd, -EINVAL); + return; + } + + md_ctrl = ctl->md->md_ctrl[ID_CLDMA1]; + t7xx_dev = ctl->md->t7xx_dev; + + ctl->last_state = ctl->curr_state; + ctl->curr_state = FSM_STATE_STOPPING; + t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_TO_STOP); + t7xx_cldma_stop(md_ctrl); + + if (!ctl->md->rgu_irq_asserted) { + t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_DRM_DISABLE_AP); + /* Wait for the DRM disable to take effect */ + msleep(FSM_DRM_DISABLE_DELAY_MS); + + err = t7xx_acpi_fldr_func(t7xx_dev); + if (err) + t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_DEVICE_RESET); + } + + fsm_stopped_handler(ctl); + fsm_finish_command(ctl, cmd, 0); +} + +static void fsm_routine_ready(struct t7xx_fsm_ctl *ctl) +{ + struct t7xx_modem *md = ctl->md; + + ctl->last_state = ctl->curr_state; + ctl->curr_state = FSM_STATE_READY; + 
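/* Handshake complete: report READY to FSM notifiers and modem sub-drivers */ +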
t7xx_fsm_broadcast_state(ctl, MD_STATE_READY); + t7xx_md_event_notify(md, FSM_READY); +} + +static void fsm_routine_starting(struct t7xx_fsm_ctl *ctl) +{ + struct t7xx_modem *md = ctl->md; + struct device *dev; + + ctl->last_state = ctl->curr_state; + ctl->curr_state = FSM_STATE_STARTING; + + t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS1); + t7xx_md_event_notify(md, FSM_START); + + wait_event_interruptible_timeout(ctl->async_hk_wq, md->core_md.ready || ctl->exp_flg, + HZ * 60); + dev = &md->t7xx_dev->pdev->dev; + + if (ctl->exp_flg) + dev_err(dev, "MD exception is captured during handshake\n"); + + if (!md->core_md.ready) { + dev_err(dev, "MD handshake timeout\n"); + fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT); + } else { + fsm_routine_ready(ctl); + } +} + +static void fsm_routine_start(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd) +{ + struct t7xx_modem *md = ctl->md; + u32 dev_status; + int ret; + + if (!md) + return; + + if (ctl->curr_state != FSM_STATE_INIT && ctl->curr_state != FSM_STATE_PRE_START && + ctl->curr_state != FSM_STATE_STOPPED) { + fsm_finish_command(ctl, cmd, -EINVAL); + return; + } + + ctl->last_state = ctl->curr_state; + ctl->curr_state = FSM_STATE_PRE_START; + t7xx_md_event_notify(md, FSM_PRE_START); + + ret = read_poll_timeout(ioread32, dev_status, + (dev_status & MISC_STAGE_MASK) == LINUX_STAGE, 20000, 2000000, + false, IREG_BASE(md->t7xx_dev) + PCIE_MISC_DEV_STATUS); + if (ret) { + struct device *dev = &md->t7xx_dev->pdev->dev; + + dev_err(dev, "Invalid device status 0x%lx\n", dev_status & MISC_STAGE_MASK); + fsm_finish_command(ctl, cmd, -ETIMEDOUT); + return; + } + + t7xx_cldma_hif_hw_init(md->md_ctrl[ID_CLDMA1]); + fsm_routine_starting(ctl); + fsm_finish_command(ctl, cmd, 0); +} + +static int fsm_main_thread(void *data) +{ + struct t7xx_fsm_ctl *ctl = data; + struct t7xx_fsm_command *cmd; + unsigned long flags; + + while (!kthread_should_stop()) { + if (wait_event_interruptible(ctl->command_wq, !list_empty(&ctl->command_queue) || + kthread_should_stop())) + continue; + + if (kthread_should_stop()) + break; + + spin_lock_irqsave(&ctl->command_lock, flags); + cmd = list_first_entry(&ctl->command_queue, struct t7xx_fsm_command, entry); + list_del(&cmd->entry); + spin_unlock_irqrestore(&ctl->command_lock, flags); + + switch (cmd->cmd_id) { + case FSM_CMD_START: + fsm_routine_start(ctl, cmd); + break; + + case FSM_CMD_EXCEPTION: + fsm_routine_exception(ctl, cmd, FIELD_GET(FSM_CMD_EX_REASON, cmd->flag)); + break; + + case FSM_CMD_PRE_STOP: + fsm_routine_stopping(ctl, cmd); + break; + + case FSM_CMD_STOP: + fsm_routine_stopped(ctl, cmd); + break; + + case FSM_CMD_RECOVER: + fsm_routine_stopped(ctl, cmd); + t7xx_fsm_append_cmd(ctl, FSM_CMD_START, 0); + break; + + default: + fsm_finish_command(ctl, cmd, -EINVAL); + fsm_flush_event_cmd_qs(ctl); + break; + } + } + + return 0; +} + +int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id, unsigned int flag) +{ + DECLARE_COMPLETION_ONSTACK(done); + struct t7xx_fsm_command *cmd; + unsigned long flags; + int ret; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) + return -ENOMEM; + + INIT_LIST_HEAD(&cmd->entry); + cmd->cmd_id = cmd_id; + cmd->flag = flag; + if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) { + cmd->done = &done; + cmd->ret = &ret; + } + + spin_lock_irqsave(&ctl->command_lock, flags); + list_add_tail(&cmd->entry, &ctl->command_queue); + spin_unlock_irqrestore(&ctl->command_lock, flags); + + wake_up(&ctl->command_wq); + + if (flag & 
FSM_CMD_FLAG_WAIT_FOR_COMPLETION) { + unsigned long wait_ret; + + wait_ret = wait_for_completion_timeout(&done, + msecs_to_jiffies(FSM_CMD_TIMEOUT_MS)); + if (!wait_ret) + return -ETIMEDOUT; + + return ret; + } + + return 0; +} + +int t7xx_fsm_append_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id, + unsigned char *data, unsigned int length) +{ + struct device *dev = &ctl->md->t7xx_dev->pdev->dev; + struct t7xx_fsm_event *event; + unsigned long flags; + + if (event_id <= FSM_EVENT_INVALID || event_id >= FSM_EVENT_MAX) { + dev_err(dev, "Invalid event %d\n", event_id); + return -EINVAL; + } + + event = kmalloc(sizeof(*event) + length, in_interrupt() ? GFP_ATOMIC : GFP_KERNEL); + if (!event) + return -ENOMEM; + + INIT_LIST_HEAD(&event->entry); + event->event_id = event_id; + event->length = length; + + if (data && length) + memcpy((void *)event + sizeof(*event), data, length); + + spin_lock_irqsave(&ctl->event_lock, flags); + list_add_tail(&event->entry, &ctl->event_queue); + spin_unlock_irqrestore(&ctl->event_lock, flags); + wake_up_all(&ctl->event_wq); + return 0; +} + +void t7xx_fsm_clr_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id) +{ + struct device *dev = &ctl->md->t7xx_dev->pdev->dev; + struct t7xx_fsm_event *event, *evt_next; + unsigned long flags; + + spin_lock_irqsave(&ctl->event_lock, flags); + list_for_each_entry_safe(event, evt_next, &ctl->event_queue, entry) { + dev_err(dev, "Unhandled event %d\n", event->event_id); + + if (event->event_id == event_id) + fsm_del_kf_event(event); + } + + spin_unlock_irqrestore(&ctl->event_lock, flags); +} + +enum md_state t7xx_fsm_get_md_state(struct t7xx_fsm_ctl *ctl) +{ + if (ctl) + return ctl->md_state; + + return MD_STATE_INVALID; +} + +unsigned int t7xx_fsm_get_ctl_state(struct t7xx_fsm_ctl *ctl) +{ + if (ctl) + return ctl->curr_state; + + return FSM_STATE_STOPPED; +} + +void t7xx_fsm_recv_md_intr(struct t7xx_fsm_ctl *ctl, enum t7xx_md_irq_type type) +{ + if (type == MD_IRQ_PORT_ENUM) { + t7xx_fsm_append_cmd(ctl, FSM_CMD_START, 0); + } else if (type == MD_IRQ_CCIF_EX) { + ctl->exp_flg = true; + wake_up(&ctl->async_hk_wq); + t7xx_fsm_append_cmd(ctl, FSM_CMD_EXCEPTION, + FIELD_PREP(FSM_CMD_EX_REASON, EXCEPTION_EVENT)); + } +} + +void t7xx_fsm_reset(struct t7xx_modem *md) +{ + struct t7xx_fsm_ctl *ctl = md->fsm_ctl; + + fsm_flush_event_cmd_qs(ctl); + ctl->last_state = FSM_STATE_INIT; + ctl->curr_state = FSM_STATE_STOPPED; + ctl->exp_flg = false; +} + +int t7xx_fsm_init(struct t7xx_modem *md) +{ + struct device *dev = &md->t7xx_dev->pdev->dev; + struct t7xx_fsm_ctl *ctl; + + ctl = devm_kzalloc(dev, sizeof(*ctl), GFP_KERNEL); + if (!ctl) + return -ENOMEM; + + md->fsm_ctl = ctl; + ctl->md = md; + ctl->last_state = FSM_STATE_INIT; + ctl->curr_state = FSM_STATE_INIT; + INIT_LIST_HEAD(&ctl->command_queue); + INIT_LIST_HEAD(&ctl->event_queue); + init_waitqueue_head(&ctl->async_hk_wq); + init_waitqueue_head(&ctl->event_wq); + INIT_LIST_HEAD(&ctl->notifier_list); + init_waitqueue_head(&ctl->command_wq); + spin_lock_init(&ctl->event_lock); + spin_lock_init(&ctl->command_lock); + ctl->exp_flg = false; + spin_lock_init(&ctl->notifier_lock); + + ctl->fsm_thread = kthread_run(fsm_main_thread, ctl, "t7xx_fsm"); + return PTR_ERR_OR_ZERO(ctl->fsm_thread); +} + +void t7xx_fsm_uninit(struct t7xx_modem *md) +{ + struct t7xx_fsm_ctl *ctl = md->fsm_ctl; + + if (!ctl) + return; + + if (ctl->fsm_thread) + kthread_stop(ctl->fsm_thread); + + fsm_flush_event_cmd_qs(ctl); +} diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.h 
b/drivers/net/wwan/t7xx/t7xx_state_monitor.h new file mode 100644 index 000000000000..f1e0b1a3f8de --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.h @@ -0,0 +1,124 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. + * + * Authors: + * Amir Hanania + * Haijun Liu + * Moises Veleta + * + * Contributors: + * Eliot Lee + * Ricardo Martinez + * Sreehari Kancharla + */ + +#ifndef __T7XX_MONITOR_H__ +#define __T7XX_MONITOR_H__ + +#include +#include +#include +#include +#include + +#include "t7xx_common.h" +#include "t7xx_modem_ops.h" + +enum t7xx_fsm_state { + FSM_STATE_INIT, + FSM_STATE_PRE_START, + FSM_STATE_STARTING, + FSM_STATE_READY, + FSM_STATE_EXCEPTION, + FSM_STATE_STOPPING, + FSM_STATE_STOPPED, +}; + +enum t7xx_fsm_event_state { + FSM_EVENT_INVALID, + FSM_EVENT_MD_EX, + FSM_EVENT_MD_EX_REC_OK, + FSM_EVENT_MD_EX_PASS, + FSM_EVENT_MAX +}; + +enum t7xx_fsm_cmd_state { + FSM_CMD_INVALID, + FSM_CMD_START, + FSM_CMD_EXCEPTION, + FSM_CMD_PRE_STOP, + FSM_CMD_STOP, + FSM_CMD_RECOVER, +}; + +enum t7xx_ex_reason { + EXCEPTION_HS_TIMEOUT, + EXCEPTION_EVENT, +}; + +enum t7xx_md_irq_type { + MD_IRQ_WDT, + MD_IRQ_CCIF_EX, + MD_IRQ_PORT_ENUM, +}; + +#define FSM_CMD_FLAG_WAIT_FOR_COMPLETION BIT(0) +#define FSM_CMD_FLAG_FLIGHT_MODE BIT(1) +#define FSM_CMD_EX_REASON GENMASK(23, 16) + +struct t7xx_fsm_ctl { + struct t7xx_modem *md; + enum md_state md_state; + unsigned int curr_state; + unsigned int last_state; + struct list_head command_queue; + struct list_head event_queue; + wait_queue_head_t command_wq; + wait_queue_head_t event_wq; + wait_queue_head_t async_hk_wq; + spinlock_t event_lock; /* Protects event queue */ + spinlock_t command_lock; /* Protects command queue */ + struct task_struct *fsm_thread; + bool exp_flg; + spinlock_t notifier_lock; /* Protects notifier list */ + struct list_head notifier_list; +}; + +struct t7xx_fsm_event { + struct list_head entry; + enum t7xx_fsm_event_state event_id; + unsigned int length; +}; + +struct t7xx_fsm_command { + struct list_head entry; + enum t7xx_fsm_cmd_state cmd_id; + unsigned int flag; + struct completion *done; + int *ret; +}; + +struct t7xx_fsm_notifier { + struct list_head entry; + int (*notifier_fn)(enum md_state state, void *data); + void *data; +}; + +int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id, + unsigned int flag); +int t7xx_fsm_append_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id, + unsigned char *data, unsigned int length); +void t7xx_fsm_clr_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id); +void t7xx_fsm_broadcast_state(struct t7xx_fsm_ctl *ctl, enum md_state state); +void t7xx_fsm_reset(struct t7xx_modem *md); +int t7xx_fsm_init(struct t7xx_modem *md); +void t7xx_fsm_uninit(struct t7xx_modem *md); +void t7xx_fsm_recv_md_intr(struct t7xx_fsm_ctl *ctl, enum t7xx_md_irq_type type); +enum md_state t7xx_fsm_get_md_state(struct t7xx_fsm_ctl *ctl); +unsigned int t7xx_fsm_get_ctl_state(struct t7xx_fsm_ctl *ctl); +void t7xx_fsm_notifier_register(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier); +void t7xx_fsm_notifier_unregister(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier); + +#endif /* __T7XX_MONITOR_H__ */ From patchwork Tue Dec 7 02:47:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Martinez X-Patchwork-Id: 521904 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
(2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B13CEC43217 for ; Tue, 7 Dec 2021 02:48:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231295AbhLGCvr (ORCPT ); Mon, 6 Dec 2021 21:51:47 -0500 Received: from mga07.intel.com ([134.134.136.100]:27273 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230319AbhLGCvm (ORCPT ); Mon, 6 Dec 2021 21:51:42 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10190"; a="300860567" X-IronPort-AV: E=Sophos;i="5.87,293,1631602800"; d="scan'208";a="300860567" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2021 18:47:47 -0800 X-IronPort-AV: E=Sophos;i="5.87,293,1631602800"; d="scan'208";a="748524151" Received: from rmarti10-desk.jf.intel.com ([134.134.150.146]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2021 18:47:47 -0800 From: Ricardo Martinez To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: kuba@kernel.org, davem@davemloft.net, johannes@sipsolutions.net, ryazanov.s.a@gmail.com, loic.poulain@linaro.org, m.chetan.kumar@intel.com, chandrashekar.devegowda@intel.com, linuxwwan@intel.com, chiranjeevi.rapolu@linux.intel.com, haijun.liu@mediatek.com, amir.hanania@intel.com, andriy.shevchenko@linux.intel.com, dinesh.sharma@intel.com, eliot.lee@intel.com, mika.westerberg@linux.intel.com, moises.veleta@intel.com, pierre-louis.bossart@intel.com, muralidharan.sethuraman@intel.com, Soumya.Prakash.Mishra@intel.com, sreehari.kancharla@intel.com, suresh.nagaraj@intel.com, Ricardo Martinez Subject: [PATCH net-next v3 05/12] net: wwan: t7xx: Add AT and MBIM WWAN ports Date: Mon, 6 Dec 2021 19:47:04 -0700 Message-Id: <20211207024711.2765-6-ricardo.martinez@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211207024711.2765-1-ricardo.martinez@linux.intel.com> References: <20211207024711.2765-1-ricardo.martinez@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org From: Chandrashekar Devegowda Adds AT and MBIM ports to the port proxy infrastructure. The initialization method is responsible for creating the corresponding ports using the WWAN framework infrastructure. The implemented WWAN port operations are start, stop, and TX. 
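For reviewers less familiar with the WWAN framework, the pattern this patch instantiates boils down to the sketch below. It is illustrative only: the demo_* names and the no-op TX body are not part of this series, while wwan_create_port(), wwan_port_get_drvdata() and the wwan_port_ops callbacks are the in-tree framework API. Downlink payloads would be handed back to user space with wwan_port_rx().

#include <linux/atomic.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/skbuff.h>
#include <linux/wwan.h>

struct demo_port {
	atomic_t usage_cnt;	/* non-zero while user space holds the port */
};

static int demo_start(struct wwan_port *port)
{
	struct demo_port *dp = wwan_port_get_drvdata(port);

	/* Allow a single opener, as t7xx_port_ctrl_start() does */
	if (atomic_read(&dp->usage_cnt))
		return -EBUSY;

	atomic_inc(&dp->usage_cnt);
	return 0;
}

static void demo_stop(struct wwan_port *port)
{
	struct demo_port *dp = wwan_port_get_drvdata(port);

	atomic_dec(&dp->usage_cnt);
}

static int demo_tx(struct wwan_port *port, struct sk_buff *skb)
{
	/* A real driver prepends its transport header (CCCI here) and
	 * queues the skb to the control path; this stub just consumes it.
	 */
	dev_kfree_skb_any(skb);
	return 0;
}

static const struct wwan_port_ops demo_port_ops = {
	.start = demo_start,
	.stop = demo_stop,
	.tx = demo_tx,
};

/* parent is the PCI function's struct device */
static int demo_create(struct device *parent, struct demo_port *dp)
{
	struct wwan_port *port;

	port = wwan_create_port(parent, WWAN_PORT_AT, &demo_port_ops, dp);
	return PTR_ERR_OR_ZERO(port);
}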
Signed-off-by: Chandrashekar Devegowda Co-developed-by: Ricardo Martinez Signed-off-by: Ricardo Martinez --- drivers/net/wwan/t7xx/Makefile | 1 + drivers/net/wwan/t7xx/t7xx_port_proxy.c | 24 +++ drivers/net/wwan/t7xx/t7xx_port_proxy.h | 1 + drivers/net/wwan/t7xx/t7xx_port_wwan.c | 258 ++++++++++++++++++++++++ 4 files changed, 284 insertions(+) create mode 100644 drivers/net/wwan/t7xx/t7xx_port_wwan.c diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile index 63e1c67b82b5..9eec2e2472fb 100644 --- a/drivers/net/wwan/t7xx/Makefile +++ b/drivers/net/wwan/t7xx/Makefile @@ -12,3 +12,4 @@ mtk_t7xx-y:= t7xx_pci.o \ t7xx_hif_cldma.o \ t7xx_port_proxy.o \ t7xx_port_ctrl_msg.o \ + t7xx_port_wwan.o \ diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.c b/drivers/net/wwan/t7xx/t7xx_port_proxy.c index 67219319e9f7..4c6f255c4d2f 100644 --- a/drivers/net/wwan/t7xx/t7xx_port_proxy.c +++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.c @@ -51,6 +51,30 @@ static struct t7xx_port_static t7xx_md_ports[] = { { + .tx_ch = PORT_CH_UART2_TX, + .rx_ch = PORT_CH_UART2_RX, + .txq_index = Q_IDX_AT_CMD, + .rxq_index = Q_IDX_AT_CMD, + .txq_exp_index = 0xff, + .rxq_exp_index = 0xff, + .path_id = ID_CLDMA1, + .flags = PORT_F_RX_CHAR_NODE, + .ops = &wwan_sub_port_ops, + .name = "AT", + .port_type = WWAN_PORT_AT, + }, { + .tx_ch = PORT_CH_MBIM_TX, + .rx_ch = PORT_CH_MBIM_RX, + .txq_index = Q_IDX_MBIM, + .rxq_index = Q_IDX_MBIM, + .txq_exp_index = 0, + .rxq_exp_index = 0, + .path_id = ID_CLDMA1, + .flags = PORT_F_RX_CHAR_NODE, + .ops = &wwan_sub_port_ops, + .name = "MBIM", + .port_type = WWAN_PORT_MBIM, + }, { .tx_ch = PORT_CH_CONTROL_TX, .rx_ch = PORT_CH_CONTROL_RX, .txq_index = Q_IDX_CTRL, diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.h b/drivers/net/wwan/t7xx/t7xx_port_proxy.h index 06ccda839445..1f49d1ef6bad 100644 --- a/drivers/net/wwan/t7xx/t7xx_port_proxy.h +++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.h @@ -67,6 +67,7 @@ struct port_msg { #define PORT_ENUM_VER_MISMATCH 0x00657272 /* Port operations mapping */ +extern struct port_ops wwan_sub_port_ops; extern struct port_ops ctl_port_ops; int t7xx_port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb); diff --git a/drivers/net/wwan/t7xx/t7xx_port_wwan.c b/drivers/net/wwan/t7xx/t7xx_port_wwan.c new file mode 100644 index 000000000000..79831291e490 --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_port_wwan.c @@ -0,0 +1,258 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ * + * Authors: + * Amir Hanania + * Chandrashekar Devegowda + * Haijun Liu + * Moises Veleta + * Ricardo Martinez + * + * Contributors: + * Andy Shevchenko + * Chiranjeevi Rapolu + * Eliot Lee + * Sreehari Kancharla + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "t7xx_common.h" +#include "t7xx_port.h" +#include "t7xx_port_proxy.h" +#include "t7xx_state_monitor.h" + +static int t7xx_port_ctrl_start(struct wwan_port *port) +{ + struct t7xx_port *port_mtk = wwan_port_get_drvdata(port); + + if (atomic_read(&port_mtk->usage_cnt)) + return -EBUSY; + + atomic_inc(&port_mtk->usage_cnt); + return 0; +} + +static void t7xx_port_ctrl_stop(struct wwan_port *port) +{ + struct t7xx_port *port_mtk = wwan_port_get_drvdata(port); + + atomic_dec(&port_mtk->usage_cnt); +} + +static inline bool t7xx_port_wwan_multipkt_capable(struct t7xx_port_static *port) +{ + return port->tx_ch == PORT_CH_MBIM_TX || + (port->tx_ch >= PORT_CH_DSS0_TX && port->tx_ch <= PORT_CH_DSS7_TX); +} + +static int t7xx_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb) +{ + struct t7xx_port *port_private = wwan_port_get_drvdata(port); + size_t actual_count = 0, alloc_size = 0, txq_mtu = 0; + struct t7xx_port_static *port_static; + int i, multi_packet = 1, ret = 0; + struct sk_buff *skb_ccci = NULL; + struct t7xx_fsm_ctl *ctl; + enum md_state md_state; + unsigned int count; + bool port_multi; + + count = skb->len; + if (!count) + return -EINVAL; + + port_static = port_private->port_static; + ctl = port_private->t7xx_dev->md->fsm_ctl; + md_state = t7xx_fsm_get_md_state(ctl); + if (md_state == MD_STATE_WAITING_FOR_HS1 || md_state == MD_STATE_WAITING_FOR_HS2) { + dev_warn(port_private->dev, "Cannot write to %s port when md_state=%d\n", + port_static->name, md_state); + return -ENODEV; + } + + txq_mtu = CLDMA_TXQ_MTU; + + if (port_private->flags & PORT_F_USER_HEADER) { + if (port_private->flags & PORT_F_USER_HEADER && count > txq_mtu) { + dev_err(port_private->dev, "Packet %u larger than MTU on %s port\n", + count, port_static->name); + return -ENOMEM; + } + + alloc_size = min_t(size_t, txq_mtu, count); + actual_count = alloc_size; + } else { + alloc_size = min_t(size_t, txq_mtu, count + CCCI_H_ELEN); + actual_count = alloc_size - CCCI_H_ELEN; + port_multi = t7xx_port_wwan_multipkt_capable(port_static); + if ((count + CCCI_H_ELEN > txq_mtu) && port_multi) + multi_packet = DIV_ROUND_UP(count, txq_mtu - CCCI_H_ELEN); + } + + for (i = 0; i < multi_packet; i++) { + struct ccci_header *ccci_h = NULL; + + if (multi_packet > 1 && multi_packet == i + 1) { + actual_count = count % (txq_mtu - CCCI_H_ELEN); + alloc_size = actual_count + CCCI_H_ELEN; + } + + skb_ccci = __dev_alloc_skb(alloc_size, GFP_KERNEL); + if (!skb_ccci) + return -ENOMEM; + + ccci_h = skb_put(skb_ccci, CCCI_H_LEN); + ccci_h->packet_header = 0; + ccci_h->packet_len = cpu_to_le32(actual_count + CCCI_H_LEN); + ccci_h->status &= cpu_to_le32(~HDR_FLD_CHN); + ccci_h->status |= cpu_to_le32(FIELD_PREP(HDR_FLD_CHN, port_static->tx_ch)); + ccci_h->ex_msg = 0; + + memcpy(skb_put(skb_ccci, actual_count), skb->data + i * (txq_mtu - CCCI_H_ELEN), + actual_count); + + t7xx_port_proxy_set_seq_num(port_private, ccci_h); + + ret = t7xx_port_send_skb_to_md(port_private, skb_ccci, true); + if (ret) + goto err_free_skb; + + port_private->seq_nums[MTK_TX]++; + + if (multi_packet == 1) + return actual_count; + else if (multi_packet == i + 1) + return count; + } + +err_free_skb: + if (ret != -ENOMEM) { + 
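/* Report the failure and free the locally allocated CCCI skb */ +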
dev_err(port_private->dev, "Write error on %s port, %d\n", port_static->name, ret); + dev_kfree_skb_any(skb_ccci); + } + + return ret; +} + +static const struct wwan_port_ops wwan_ops = { + .start = t7xx_port_ctrl_start, + .stop = t7xx_port_ctrl_stop, + .tx = t7xx_port_ctrl_tx, +}; + +static int t7xx_port_wwan_init(struct t7xx_port *port) +{ + struct t7xx_port_static *port_static = port->port_static; + + port->rx_length_th = MAX_RX_QUEUE_LENGTH; + port->flags |= PORT_F_RX_ADJUST_HEADER; + + if (port_static->rx_ch == PORT_CH_UART2_RX) + port->flags |= PORT_F_RX_CH_TRAFFIC; + + if (port_static->port_type != WWAN_PORT_UNKNOWN) { + port->wwan_port = wwan_create_port(port->dev, port_static->port_type, + &wwan_ops, port); + if (IS_ERR(port->wwan_port)) + return PTR_ERR(port->wwan_port); + } else { + port->wwan_port = NULL; + } + + return 0; +} + +static void t7xx_port_wwan_uninit(struct t7xx_port *port) +{ + if (port->wwan_port) { + if (port->chn_crt_stat) { + spin_lock(&port->port_update_lock); + port->chn_crt_stat = false; + spin_unlock(&port->port_update_lock); + } + + wwan_remove_port(port->wwan_port); + port->wwan_port = NULL; + } +} + +static int t7xx_port_wwan_recv_skb(struct t7xx_port *port, struct sk_buff *skb) +{ + struct t7xx_port_static *port_static = port->port_static; + + if (port->flags & PORT_F_RX_CHAR_NODE) { + if (!atomic_read(&port->usage_cnt)) { + dev_err_ratelimited(port->dev, "Port %s is not opened, drop packets\n", + port_static->name); + return -ENETDOWN; + } + } + + return t7xx_port_recv_skb(port, skb); +} + +static void port_status_update(struct t7xx_port *port) +{ + if (port->flags & PORT_F_RX_CHAR_NODE) { + if (port->chan_enable) { + port->flags &= ~PORT_F_RX_ALLOW_DROP; + } else { + port->flags |= PORT_F_RX_ALLOW_DROP; + spin_lock(&port->port_update_lock); + port->chn_crt_stat = false; + spin_unlock(&port->port_update_lock); + } + } +} + +static int t7xx_port_wwan_enable_chl(struct t7xx_port *port) +{ + spin_lock(&port->port_update_lock); + port->chan_enable = true; + spin_unlock(&port->port_update_lock); + + if (port->chn_crt_stat != port->chan_enable) + port_status_update(port); + + return 0; +} + +static int t7xx_port_wwan_disable_chl(struct t7xx_port *port) +{ + spin_lock(&port->port_update_lock); + port->chan_enable = false; + spin_unlock(&port->port_update_lock); + + if (port->chn_crt_stat != port->chan_enable) + port_status_update(port); + + return 0; +} + +static void t7xx_port_wwan_md_state_notify(struct t7xx_port *port, unsigned int state) +{ + if (state == MD_STATE_READY) + port_status_update(port); +} + +struct port_ops wwan_sub_port_ops = { + .init = &t7xx_port_wwan_init, + .recv_skb = &t7xx_port_wwan_recv_skb, + .uninit = &t7xx_port_wwan_uninit, + .enable_chl = &t7xx_port_wwan_enable_chl, + .disable_chl = &t7xx_port_wwan_disable_chl, + .md_state_notify = &t7xx_port_wwan_md_state_notify, +}; From patchwork Tue Dec 7 02:47:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Martinez X-Patchwork-Id: 521903 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2F0E1C433EF for ; Tue, 7 Dec 2021 02:48:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231590AbhLGCvs (ORCPT ); Mon, 6 Dec 2021 21:51:48 -0500 Received: from mga07.intel.com ([134.134.136.100]:27268 "EHLO 
mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230355AbhLGCvn (ORCPT ); Mon, 6 Dec 2021 21:51:43 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10190"; a="300860576" X-IronPort-AV: E=Sophos;i="5.87,293,1631602800"; d="scan'208";a="300860576" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2021 18:47:49 -0800 X-IronPort-AV: E=Sophos;i="5.87,293,1631602800"; d="scan'208";a="748524165" Received: from rmarti10-desk.jf.intel.com ([134.134.150.146]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2021 18:47:49 -0800 From: Ricardo Martinez To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: kuba@kernel.org, davem@davemloft.net, johannes@sipsolutions.net, ryazanov.s.a@gmail.com, loic.poulain@linaro.org, m.chetan.kumar@intel.com, chandrashekar.devegowda@intel.com, linuxwwan@intel.com, chiranjeevi.rapolu@linux.intel.com, haijun.liu@mediatek.com, amir.hanania@intel.com, andriy.shevchenko@linux.intel.com, dinesh.sharma@intel.com, eliot.lee@intel.com, mika.westerberg@linux.intel.com, moises.veleta@intel.com, pierre-louis.bossart@intel.com, muralidharan.sethuraman@intel.com, Soumya.Prakash.Mishra@intel.com, sreehari.kancharla@intel.com, suresh.nagaraj@intel.com, Ricardo Martinez Subject: [PATCH net-next v3 08/12] net: wwan: t7xx: Add WWAN network interface Date: Mon, 6 Dec 2021 19:47:07 -0700 Message-Id: <20211207024711.2765-9-ricardo.martinez@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211207024711.2765-1-ricardo.martinez@linux.intel.com> References: <20211207024711.2765-1-ricardo.martinez@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org From: Haijun Liu Creates the Cross Core Modem Network Interface (CCMNI), which implements the wwan_ops for registration with the WWAN framework. CCMNI also implements the net_device_ops functions used by the network device. Network device operations include open, close, start transmission, TX timeout, change MTU, and select queue.
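The division of labour between wwan_ops (link management) and net_device_ops (traffic) is easiest to see in a condensed form. The sketch below is illustrative only (the demo_* names are not part of this series); wwan_register_ops(), wwan_netdev_drvpriv() and register_netdevice() are the kernel APIs the patch builds on.

#include <linux/if_arp.h>
#include <linux/if_ether.h>
#include <linux/netdevice.h>
#include <linux/netlink.h>
#include <linux/wwan.h>

struct demo_priv {
	u32 if_id;		/* IP MUX session carried by this netdev */
};

static netdev_tx_t demo_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* A real driver queues the IP packet to the data-path HW
	 * (DPMAIF for CCMNI); this stub just drops it.
	 */
	dev_kfree_skb_any(skb);
	return NETDEV_TX_OK;
}

static const struct net_device_ops demo_netdev_ops = {
	.ndo_start_xmit = demo_xmit,
};

static void demo_setup(struct net_device *dev)
{
	dev->type = ARPHRD_NONE;			/* pure IP device */
	dev->flags = IFF_POINTOPOINT | IFF_NOARP;	/* no L2 addressing */
	dev->mtu = ETH_DATA_LEN;
	dev->needs_free_netdev = true;
	dev->netdev_ops = &demo_netdev_ops;
}

static int demo_newlink(void *ctxt, struct net_device *dev, u32 if_id,
			struct netlink_ext_ack *extack)
{
	struct demo_priv *priv = wwan_netdev_drvpriv(dev);

	priv->if_id = if_id;
	return register_netdevice(dev);
}

static void demo_dellink(void *ctxt, struct net_device *dev, struct list_head *head)
{
	unregister_netdevice(dev);
}

static const struct wwan_ops demo_wwan_ops = {
	.priv_size = sizeof(struct demo_priv),
	.setup = demo_setup,
	.newlink = demo_newlink,
	.dellink = demo_dellink,
};

/* Registering makes the WWAN core create the default-link netdev:
 *	wwan_register_ops(parent_dev, &demo_wwan_ops, ctxt, 0);
 */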
Signed-off-by: Haijun Liu Co-developed-by: Chandrashekar Devegowda Signed-off-by: Chandrashekar Devegowda Co-developed-by: Ricardo Martinez Signed-off-by: Ricardo Martinez --- drivers/net/wwan/t7xx/Makefile | 1 + drivers/net/wwan/t7xx/t7xx_modem_ops.c | 11 +- drivers/net/wwan/t7xx/t7xx_netdev.c | 433 +++++++++++++++++++++++++ drivers/net/wwan/t7xx/t7xx_netdev.h | 61 ++++ 4 files changed, 505 insertions(+), 1 deletion(-) create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.c create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.h diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile index 04a9ba50dc14..dc6a7d682c15 100644 --- a/drivers/net/wwan/t7xx/Makefile +++ b/drivers/net/wwan/t7xx/Makefile @@ -17,3 +17,4 @@ mtk_t7xx-y:= t7xx_pci.o \ t7xx_hif_dpmaif_tx.o \ t7xx_hif_dpmaif_rx.o \ t7xx_dpmaif.o \ + t7xx_netdev.o diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c index 3db5e86c23e5..5a7089f10b80 100644 --- a/drivers/net/wwan/t7xx/t7xx_modem_ops.c +++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c @@ -35,6 +35,7 @@ #include "t7xx_hif_cldma.h" #include "t7xx_mhccif.h" #include "t7xx_modem_ops.h" +#include "t7xx_netdev.h" #include "t7xx_pci.h" #include "t7xx_pcie_mac.h" #include "t7xx_port.h" @@ -663,10 +664,14 @@ int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev) if (ret) goto err_destroy_hswq; - ret = t7xx_cldma_init(md, md->md_ctrl[ID_CLDMA1]); + ret = t7xx_ccmni_init(t7xx_dev); if (ret) goto err_uninit_fsm; + ret = t7xx_cldma_init(md, md->md_ctrl[ID_CLDMA1]); + if (ret) + goto err_uninit_ccmni; + ret = t7xx_port_proxy_init(md); if (ret) goto err_uninit_cldma; @@ -679,6 +684,9 @@ int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev) err_uninit_cldma: t7xx_cldma_exit(md->md_ctrl[ID_CLDMA1]); +err_uninit_ccmni: + t7xx_ccmni_exit(t7xx_dev); + err_uninit_fsm: t7xx_fsm_uninit(md); @@ -700,6 +708,7 @@ void t7xx_md_exit(struct t7xx_pci_dev *t7xx_dev) t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_PRE_STOP, FSM_CMD_FLAG_WAIT_FOR_COMPLETION); t7xx_port_proxy_uninit(md->port_prox); t7xx_cldma_exit(md->md_ctrl[ID_CLDMA1]); + t7xx_ccmni_exit(t7xx_dev); t7xx_fsm_uninit(md); destroy_workqueue(md->handshake_wq); } diff --git a/drivers/net/wwan/t7xx/t7xx_netdev.c b/drivers/net/wwan/t7xx/t7xx_netdev.c new file mode 100644 index 000000000000..bae95f7be08a --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_netdev.c @@ -0,0 +1,433 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ * + * Authors: + * Chandrashekar Devegowda + * Haijun Liu + * Ricardo Martinez + * + * Contributors: + * Amir Hanania + * Andy Shevchenko + * Chiranjeevi Rapolu + * Eliot Lee + * Moises Veleta + * Sreehari Kancharla + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "t7xx_common.h" +#include "t7xx_hif_dpmaif_rx.h" +#include "t7xx_hif_dpmaif_tx.h" +#include "t7xx_netdev.h" +#include "t7xx_pci.h" +#include "t7xx_state_monitor.h" + +#define IP_MUX_SESSION_DEFAULT 0 + +static u16 t7xx_ccmni_select_queue(struct net_device *dev, struct sk_buff *skb, + struct net_device *sb_dev) +{ + return TXQ_TYPE_DEFAULT; +} + +static int t7xx_ccmni_open(struct net_device *dev) +{ + struct t7xx_ccmni *ccmni = wwan_netdev_drvpriv(dev); + + netif_carrier_on(dev); + netif_tx_start_all_queues(dev); + atomic_inc(&ccmni->usage); + return 0; +} + +static int t7xx_ccmni_close(struct net_device *dev) +{ + struct t7xx_ccmni *ccmni = wwan_netdev_drvpriv(dev); + + if (atomic_dec_return(&ccmni->usage) < 0) + return -EINVAL; + + netif_carrier_off(dev); + netif_tx_disable(dev); + return 0; +} + +static int t7xx_ccmni_send_packet(struct t7xx_ccmni *ccmni, struct sk_buff *skb, + unsigned int txqt) +{ + struct t7xx_ccmni_ctrl *ctlb = ccmni->ctlb; + + skb->cb[TX_CB_NETIF_IDX] = ccmni->index; + + if (t7xx_dpmaif_tx_send_skb(ctlb->hif_ctrl, txqt, skb)) + return NETDEV_TX_BUSY; + + return 0; +} + +static int t7xx_ccmni_start_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct t7xx_ccmni *ccmni = wwan_netdev_drvpriv(dev); + int skb_len = skb->len; + + /* If MTU is changed or there is no headroom, drop the packet */ + if (skb->len > dev->mtu || skb_headroom(skb) < sizeof(struct ccci_header)) { + dev_kfree_skb(skb); + dev->stats.tx_dropped++; + return NETDEV_TX_OK; + } + + if (t7xx_ccmni_send_packet(ccmni, skb, TXQ_TYPE_DEFAULT)) + return NETDEV_TX_BUSY; + + dev->stats.tx_packets++; + dev->stats.tx_bytes += skb_len; + + return NETDEV_TX_OK; +} + +static void t7xx_ccmni_tx_timeout(struct net_device *dev, unsigned int __always_unused txqueue) +{ + struct t7xx_ccmni *ccmni = netdev_priv(dev); + + dev->stats.tx_errors++; + + if (atomic_read(&ccmni->usage) > 0) + netif_tx_wake_all_queues(dev); +} + +static const struct net_device_ops ccmni_netdev_ops = { + .ndo_open = t7xx_ccmni_open, + .ndo_stop = t7xx_ccmni_close, + .ndo_start_xmit = t7xx_ccmni_start_xmit, + .ndo_tx_timeout = t7xx_ccmni_tx_timeout, + .ndo_select_queue = t7xx_ccmni_select_queue, +}; + +static void t7xx_ccmni_start(struct t7xx_ccmni_ctrl *ctlb) +{ + struct t7xx_ccmni *ccmni; + int i; + + for (i = 0; i < ctlb->nic_dev_num; i++) { + ccmni = ctlb->ccmni_inst[i]; + if (!ccmni) + continue; + + if (atomic_read(&ccmni->usage) > 0) { + netif_tx_start_all_queues(ccmni->dev); + netif_carrier_on(ccmni->dev); + } + } +} + +static void t7xx_ccmni_pre_stop(struct t7xx_ccmni_ctrl *ctlb) +{ + struct t7xx_ccmni *ccmni; + int i; + + for (i = 0; i < ctlb->nic_dev_num; i++) { + ccmni = ctlb->ccmni_inst[i]; + if (!ccmni) + continue; + + if (atomic_read(&ccmni->usage) > 0) + netif_tx_disable(ccmni->dev); + } +} + +static void t7xx_ccmni_post_stop(struct t7xx_ccmni_ctrl *ctlb) +{ + struct t7xx_ccmni *ccmni; + int i; + + for (i = 0; i < ctlb->nic_dev_num; i++) { + ccmni = ctlb->ccmni_inst[i]; + if (!ccmni) + continue; + + if (atomic_read(&ccmni->usage) > 0) + netif_carrier_off(ccmni->dev); + } +} + +static void t7xx_ccmni_wwan_setup(struct net_device *dev) +{ + dev->header_ops = 
NULL; + dev->hard_header_len += sizeof(struct ccci_header); + + dev->mtu = ETH_DATA_LEN; + dev->max_mtu = CCMNI_MTU_MAX; + dev->tx_queue_len = DEFAULT_TX_QUEUE_LEN; + dev->watchdog_timeo = CCMNI_NETDEV_WDT_TO; + /* CCMNI is a pure IP device */ + dev->flags = IFF_POINTOPOINT | IFF_NOARP; + + /* Not supporting VLAN */ + dev->features = NETIF_F_VLAN_CHALLENGED; + + dev->features |= NETIF_F_SG; + dev->hw_features |= NETIF_F_SG; + + /* Uplink checksum offload */ + dev->features |= NETIF_F_HW_CSUM; + dev->hw_features |= NETIF_F_HW_CSUM; + + /* Downlink checksum offload */ + dev->features |= NETIF_F_RXCSUM; + dev->hw_features |= NETIF_F_RXCSUM; + + /* Use kernel default free_netdev() function */ + dev->needs_free_netdev = true; + + /* No need to free again because of free_netdev() */ + dev->priv_destructor = NULL; + dev->type = ARPHRD_NONE; + + dev->netdev_ops = &ccmni_netdev_ops; +} + +static int t7xx_ccmni_wwan_newlink(void *ctxt, struct net_device *dev, u32 if_id, + struct netlink_ext_ack *extack) +{ + struct t7xx_ccmni_ctrl *ctlb = ctxt; + struct t7xx_ccmni *ccmni; + int ret; + + if (if_id >= ARRAY_SIZE(ctlb->ccmni_inst)) + return -EINVAL; + + ccmni = wwan_netdev_drvpriv(dev); + ccmni->index = if_id; + ccmni->ctlb = ctlb; + ccmni->dev = dev; + atomic_set(&ccmni->usage, 0); + ctlb->ccmni_inst[if_id] = ccmni; + + ret = register_netdevice(dev); + if (ret) + return ret; + + netif_device_attach(dev); + return 0; +} + +static void t7xx_ccmni_wwan_dellink(void *ctxt, struct net_device *dev, struct list_head *head) +{ + struct t7xx_ccmni *ccmni = wwan_netdev_drvpriv(dev); + struct t7xx_ccmni_ctrl *ctlb = ctxt; + u8 if_id = ccmni->index; + + if (if_id >= ARRAY_SIZE(ctlb->ccmni_inst)) + return; + + if (WARN_ON(ctlb->ccmni_inst[if_id] != ccmni)) + return; + + unregister_netdevice(dev); +} + +static const struct wwan_ops ccmni_wwan_ops = { + .priv_size = sizeof(struct t7xx_ccmni), + .setup = t7xx_ccmni_wwan_setup, + .newlink = t7xx_ccmni_wwan_newlink, + .dellink = t7xx_ccmni_wwan_dellink, +}; + +static int t7xx_ccmni_md_state_callback(enum md_state state, void *para) +{ + struct t7xx_ccmni_ctrl *ctlb = para; + int ret = 0; + + ctlb->md_sta = state; + + switch (state) { + case MD_STATE_READY: + t7xx_ccmni_start(ctlb); + break; + + case MD_STATE_EXCEPTION: + case MD_STATE_STOPPED: + t7xx_ccmni_pre_stop(ctlb); + + ret = t7xx_dpmaif_md_state_callback(ctlb->hif_ctrl, state); + if (ret < 0) + dev_err(ctlb->hif_ctrl->dev, + "dpmaif md state callback err, md_sta=%d\n", state); + + t7xx_ccmni_post_stop(ctlb); + break; + + case MD_STATE_WAITING_FOR_HS1: + case MD_STATE_WAITING_TO_STOP: + ret = t7xx_dpmaif_md_state_callback(ctlb->hif_ctrl, state); + if (ret < 0) + dev_err(ctlb->hif_ctrl->dev, + "dpmaif md state callback err, md_sta=%d\n", state); + + break; + + default: + break; + } + + return ret; +} + +static void init_md_status_notifier(struct t7xx_pci_dev *t7xx_dev) +{ + struct t7xx_ccmni_ctrl *ctlb = t7xx_dev->ccmni_ctlb; + struct t7xx_fsm_notifier *md_status_notifier; + + md_status_notifier = &ctlb->md_status_notify; + INIT_LIST_HEAD(&md_status_notifier->entry); + md_status_notifier->notifier_fn = t7xx_ccmni_md_state_callback; + md_status_notifier->data = ctlb; + + t7xx_fsm_notifier_register(t7xx_dev->md, md_status_notifier); +} + +static void t7xx_ccmni_recv_skb(struct t7xx_pci_dev *t7xx_dev, struct sk_buff *skb) +{ + struct t7xx_ccmni *ccmni; + struct net_device *net_dev; + int pkt_type, skb_len; + u8 netif_id; + + netif_id = skb->cb[RX_CB_NETIF_IDX]; + ccmni = t7xx_dev->ccmni_ctlb->ccmni_inst[netif_id]; + 
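/* Drop the packet when no netdev has been created for this session yet */ +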
if (!ccmni) { + dev_kfree_skb(skb); + return; + } + + net_dev = ccmni->dev; + skb->dev = net_dev; + + pkt_type = skb->cb[RX_CB_PKT_TYPE]; + if (pkt_type == PKT_TYPE_IP6) + skb->protocol = htons(ETH_P_IPV6); + else + skb->protocol = htons(ETH_P_IP); + + skb_len = skb->len; + netif_rx_any_context(skb); + net_dev->stats.rx_packets++; + net_dev->stats.rx_bytes += skb_len; +} + +static void t7xx_ccmni_queue_tx_irq_notify(struct t7xx_ccmni_ctrl *ctlb, int qno) +{ + struct t7xx_ccmni *ccmni = ctlb->ccmni_inst[0]; + struct netdev_queue *net_queue; + + if (netif_running(ccmni->dev) && atomic_read(&ccmni->usage) > 0) { + if (ctlb->capability & NIC_CAP_CCMNI_MQ) { + net_queue = netdev_get_tx_queue(ccmni->dev, qno); + if (netif_tx_queue_stopped(net_queue)) + netif_tx_wake_queue(net_queue); + } else if (netif_queue_stopped(ccmni->dev)) { + netif_wake_queue(ccmni->dev); + } + } +} + +static void t7xx_ccmni_queue_tx_full_notify(struct t7xx_ccmni_ctrl *ctlb, int qno) +{ + struct t7xx_ccmni *ccmni = ctlb->ccmni_inst[0]; + struct netdev_queue *net_queue; + + if (atomic_read(&ccmni->usage) > 0) { + netdev_err(ccmni->dev, "TX queue %d is full\n", qno); + + if (ctlb->capability & NIC_CAP_CCMNI_MQ) { + net_queue = netdev_get_tx_queue(ccmni->dev, qno); + netif_tx_stop_queue(net_queue); + } else { + netif_stop_queue(ccmni->dev); + } + } +} + +static void t7xx_ccmni_queue_state_notify(struct t7xx_pci_dev *t7xx_dev, + enum dpmaif_txq_state state, int qno) +{ + struct t7xx_ccmni_ctrl *ctlb = t7xx_dev->ccmni_ctlb; + + if (!(ctlb->capability & NIC_CAP_TXBUSY_STOP) || + ctlb->md_sta != MD_STATE_READY) + return; + + if (!ctlb->ccmni_inst[0]) { + dev_warn(&t7xx_dev->pdev->dev, "No netdev registered yet\n"); + return; + } + + if (state == DMPAIF_TXQ_STATE_IRQ) + t7xx_ccmni_queue_tx_irq_notify(ctlb, qno); + else if (state == DMPAIF_TXQ_STATE_FULL) + t7xx_ccmni_queue_tx_full_notify(ctlb, qno); +} + +int t7xx_ccmni_init(struct t7xx_pci_dev *t7xx_dev) +{ + struct device *dev = &t7xx_dev->pdev->dev; + struct t7xx_ccmni_ctrl *ctlb; + int ret; + + ctlb = devm_kzalloc(dev, sizeof(*ctlb), GFP_KERNEL); + if (!ctlb) + return -ENOMEM; + + t7xx_dev->ccmni_ctlb = ctlb; + ctlb->t7xx_dev = t7xx_dev; + ctlb->callbacks.state_notify = t7xx_ccmni_queue_state_notify; + ctlb->callbacks.recv_skb = t7xx_ccmni_recv_skb; + ctlb->nic_dev_num = NIC_DEV_DEFAULT; + ctlb->capability = NIC_CAP_TXBUSY_STOP | NIC_CAP_SGIO | + NIC_CAP_DATA_ACK_DVD | NIC_CAP_CCMNI_MQ; + + ctlb->hif_ctrl = t7xx_dpmaif_hif_init(t7xx_dev, &ctlb->callbacks); + if (!ctlb->hif_ctrl) + return -ENOMEM; + + /* WWAN core will create a netdev for the default IP MUX channel */ + ret = wwan_register_ops(dev, &ccmni_wwan_ops, ctlb, IP_MUX_SESSION_DEFAULT); + if (ret) + goto err_unregister_ops; + + init_md_status_notifier(t7xx_dev); + + return 0; + +err_unregister_ops: + wwan_unregister_ops(dev); + + return ret; +} + +void t7xx_ccmni_exit(struct t7xx_pci_dev *t7xx_dev) +{ + struct t7xx_ccmni_ctrl *ctlb = t7xx_dev->ccmni_ctlb; + + t7xx_fsm_notifier_unregister(t7xx_dev->md, &ctlb->md_status_notify); + wwan_unregister_ops(&t7xx_dev->pdev->dev); + t7xx_dpmaif_hif_exit(ctlb->hif_ctrl); +} diff --git a/drivers/net/wwan/t7xx/t7xx_netdev.h b/drivers/net/wwan/t7xx/t7xx_netdev.h new file mode 100644 index 000000000000..2553c050fca4 --- /dev/null +++ b/drivers/net/wwan/t7xx/t7xx_netdev.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2021, MediaTek Inc. + * Copyright (c) 2021, Intel Corporation. 
+ * + * Authors: + * Haijun Liu + * Moises Veleta + * + * Contributors: + * Amir Hanania + * Chiranjeevi Rapolu + * Ricardo Martinez + */ + +#ifndef __T7XX_NETDEV_H__ +#define __T7XX_NETDEV_H__ + +#include +#include +#include + +#include "t7xx_common.h" +#include "t7xx_hif_dpmaif.h" +#include "t7xx_pci.h" +#include "t7xx_state_monitor.h" + +#define RXQ_NUM DPMAIF_RXQ_NUM +#define NIC_DEV_MAX 21 +#define NIC_DEV_DEFAULT 2 +#define NIC_CAP_TXBUSY_STOP BIT(0) +#define NIC_CAP_SGIO BIT(1) +#define NIC_CAP_DATA_ACK_DVD BIT(2) +#define NIC_CAP_CCMNI_MQ BIT(3) + +/* Must be less than DPMAIF_HW_MTU_SIZE (3*1024 + 8) */ +#define CCMNI_MTU_MAX 3000 +#define CCMNI_NETDEV_WDT_TO (1 * HZ) + +struct t7xx_ccmni { + u8 index; + atomic_t usage; + struct net_device *dev; + struct t7xx_ccmni_ctrl *ctlb; +}; + +struct t7xx_ccmni_ctrl { + struct t7xx_pci_dev *t7xx_dev; + struct dpmaif_ctrl *hif_ctrl; + struct t7xx_ccmni *ccmni_inst[NIC_DEV_MAX]; + struct dpmaif_callbacks callbacks; + unsigned int nic_dev_num; + unsigned int md_sta; + unsigned int capability; + struct t7xx_fsm_notifier md_status_notify; +}; + +int t7xx_ccmni_init(struct t7xx_pci_dev *t7xx_dev); +void t7xx_ccmni_exit(struct t7xx_pci_dev *t7xx_dev); + +#endif /* __T7XX_NETDEV_H__ */ From patchwork Tue Dec 7 02:47:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Martinez X-Patchwork-Id: 521901 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6B6ECC433FE for ; Tue, 7 Dec 2021 02:48:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232731AbhLGCwA (ORCPT ); Mon, 6 Dec 2021 21:52:00 -0500 Received: from mga07.intel.com ([134.134.136.100]:27273 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231158AbhLGCvo (ORCPT ); Mon, 6 Dec 2021 21:51:44 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10190"; a="300860596" X-IronPort-AV: E=Sophos;i="5.87,293,1631602800"; d="scan'208";a="300860596" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2021 18:47:52 -0800 X-IronPort-AV: E=Sophos;i="5.87,293,1631602800"; d="scan'208";a="748524186" Received: from rmarti10-desk.jf.intel.com ([134.134.150.146]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Dec 2021 18:47:51 -0800 From: Ricardo Martinez To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: kuba@kernel.org, davem@davemloft.net, johannes@sipsolutions.net, ryazanov.s.a@gmail.com, loic.poulain@linaro.org, m.chetan.kumar@intel.com, chandrashekar.devegowda@intel.com, linuxwwan@intel.com, chiranjeevi.rapolu@linux.intel.com, haijun.liu@mediatek.com, amir.hanania@intel.com, andriy.shevchenko@linux.intel.com, dinesh.sharma@intel.com, eliot.lee@intel.com, mika.westerberg@linux.intel.com, moises.veleta@intel.com, pierre-louis.bossart@intel.com, muralidharan.sethuraman@intel.com, Soumya.Prakash.Mishra@intel.com, sreehari.kancharla@intel.com, suresh.nagaraj@intel.com, Ricardo Martinez Subject: [PATCH net-next v3 11/12] net: wwan: t7xx: Device deep sleep lock/unlock Date: Mon, 6 Dec 2021 19:47:10 -0700 Message-Id: <20211207024711.2765-12-ricardo.martinez@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211207024711.2765-1-ricardo.martinez@linux.intel.com> 
References: <20211207024711.2765-1-ricardo.martinez@linux.intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org From: Haijun Liu Introduce a mechanism to lock/unlock the device's 'deep sleep' mode. When the PCIe link state is L1.2 or L2, the host can still regard the device as being in D0 state. At the same time, if the device's 'deep sleep' mode is unlocked, the device may enter 'deep sleep' while, from the host's point of view, it is still in D0 state. Signed-off-by: Haijun Liu Signed-off-by: Chandrashekar Devegowda Co-developed-by: Ricardo Martinez Signed-off-by: Ricardo Martinez --- drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 12 +++ drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 14 +++- drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 37 ++++++--- drivers/net/wwan/t7xx/t7xx_mhccif.c | 3 + drivers/net/wwan/t7xx/t7xx_pci.c | 97 ++++++++++++++++++++++ drivers/net/wwan/t7xx/t7xx_pci.h | 10 +++ 6 files changed, 159 insertions(+), 14 deletions(-) diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c index cdeb79bb006e..7b8c135600cf 100644 --- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c +++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c @@ -997,6 +997,7 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb if (ret < 0 && ret != -EACCES) return ret; + t7xx_pci_disable_sleep(md_ctrl->t7xx_dev); queue = &md_ctrl->txq[qno]; spin_lock_irqsave(&md_ctrl->cldma_lock, flags); @@ -1017,6 +1018,11 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb queue->tx_xmit = t7xx_cldma_ring_step_forward(queue->tr_ring, tx_req); spin_unlock_irqrestore(&queue->ring_lock, flags); + if (!t7xx_pci_sleep_disable_complete(md_ctrl->t7xx_dev)) { + ret = -EBUSY; + break; + } + /* Protect the access to the modem for queues operations (resume/start) * which access shared locations by all the queues. * cldma_lock is independent of ring_lock which is per queue.
@@ -1029,6 +1035,11 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb spin_unlock_irqrestore(&queue->ring_lock, flags); + if (!t7xx_pci_sleep_disable_complete(md_ctrl->t7xx_dev)) { + ret = -EBUSY; + break; + } + if (!t7xx_cldma_hw_queue_status(&md_ctrl->hw_info, qno, MTK_TX)) { spin_lock_irqsave(&md_ctrl->cldma_lock, flags); t7xx_cldma_hw_resume_queue(&md_ctrl->hw_info, qno, MTK_TX); @@ -1044,6 +1055,7 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb } while (!ret); allow_sleep: + t7xx_pci_enable_sleep(md_ctrl->t7xx_dev); pm_runtime_mark_last_busy(md_ctrl->dev); pm_runtime_put_autosuspend(md_ctrl->dev); return ret; diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c index 710e4727ea84..1a4694f405ea 100644 --- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c +++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c @@ -929,8 +929,11 @@ static void t7xx_dpmaif_rxq_work(struct work_struct *work) if (ret < 0 && ret != -EACCES) return; - t7xx_dpmaif_do_rx(dpmaif_ctrl, rxq); + t7xx_pci_disable_sleep(dpmaif_ctrl->t7xx_dev); + if (t7xx_pci_sleep_disable_complete(dpmaif_ctrl->t7xx_dev)) + t7xx_dpmaif_do_rx(dpmaif_ctrl, rxq); + t7xx_pci_enable_sleep(dpmaif_ctrl->t7xx_dev); pm_runtime_mark_last_busy(dpmaif_ctrl->dev); pm_runtime_put_autosuspend(dpmaif_ctrl->dev); atomic_set(&rxq->rx_processing, 0); @@ -1168,11 +1171,16 @@ static void t7xx_dpmaif_bat_release_work(struct work_struct *work) if (ret < 0 && ret != -EACCES) return; + t7xx_pci_disable_sleep(dpmaif_ctrl->t7xx_dev); + /* ALL RXQ use one BAT table, so choose DPF_RX_QNO_DFT */ rxq = &dpmaif_ctrl->rxq[DPF_RX_QNO_DFT]; - t7xx_dpmaif_bat_release_and_add(rxq); - t7xx_dpmaif_frag_bat_release_and_add(rxq); + if (t7xx_pci_sleep_disable_complete(dpmaif_ctrl->t7xx_dev)) { + t7xx_dpmaif_bat_release_and_add(rxq); + t7xx_dpmaif_frag_bat_release_and_add(rxq); + } + t7xx_pci_enable_sleep(dpmaif_ctrl->t7xx_dev); pm_runtime_mark_last_busy(dpmaif_ctrl->dev); pm_runtime_put_autosuspend(dpmaif_ctrl->dev); } diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c index f8e7a10273a4..17f473d8e5ae 100644 --- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c +++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c @@ -175,19 +175,24 @@ static void t7xx_dpmaif_tx_done(struct work_struct *work) if (ret < 0 && ret != -EACCES) return; - ret = t7xx_dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt); - if (ret == -EAGAIN || - (t7xx_dpmaif_ul_clr_done(&dpmaif_ctrl->hif_hw_info, txq->index) && - t7xx_dpmaif_drb_ring_not_empty(txq))) { - queue_work(dpmaif_ctrl->txq[txq->index].worker, - &dpmaif_ctrl->txq[txq->index].dpmaif_tx_work); - /* Give the device time to enter the low power state */ - t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info); - } else { - t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info); - t7xx_dpmaif_unmask_ulq_intr(dpmaif_ctrl, txq->index); + /* The device may be in low power state. 
Disable sleep if needed */ + t7xx_pci_disable_sleep(dpmaif_ctrl->t7xx_dev); + if (t7xx_pci_sleep_disable_complete(dpmaif_ctrl->t7xx_dev)) { + ret = t7xx_dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt); + if (ret == -EAGAIN || + (t7xx_dpmaif_ul_clr_done(&dpmaif_ctrl->hif_hw_info, txq->index) && + t7xx_dpmaif_drb_ring_not_empty(txq))) { + queue_work(dpmaif_ctrl->txq[txq->index].worker, + &dpmaif_ctrl->txq[txq->index].dpmaif_tx_work); + /* Give the device time to enter the low power state */ + t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info); + } else { + t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info); + t7xx_dpmaif_unmask_ulq_intr(dpmaif_ctrl, txq->index); + } } + t7xx_pci_enable_sleep(dpmaif_ctrl->t7xx_dev); pm_runtime_mark_last_busy(dpmaif_ctrl->dev); pm_runtime_put_autosuspend(dpmaif_ctrl->dev); } @@ -429,6 +434,8 @@ static bool t7xx_check_all_txq_drb_lack(const struct dpmaif_ctrl *dpmaif_ctrl) static void t7xx_do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl) { + bool first_time = true; + dpmaif_ctrl->txq_select_times = 0; do { int txq_id; @@ -444,6 +451,11 @@ static void t7xx_do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl) if (ret > 0) { int drb_send_cnt = ret; + /* Wait for the PCIe resource to unlock */ + if (first_time && + !t7xx_pci_sleep_disable_complete(dpmaif_ctrl->t7xx_dev)) + return; + ret = t7xx_dpmaif_ul_update_hw_drb_cnt(dpmaif_ctrl, (unsigned char)txq_id, drb_send_cnt * @@ -457,6 +469,7 @@ static void t7xx_do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl) } } + first_time = false; cond_resched(); } while (!t7xx_tx_lists_are_all_empty(dpmaif_ctrl) && !kthread_should_stop() && (dpmaif_ctrl->state == DPMAIF_STATE_PWRON)); @@ -483,7 +496,9 @@ static int t7xx_dpmaif_tx_hw_push_thread(void *arg) if (ret < 0 && ret != -EACCES) return ret; + t7xx_pci_disable_sleep(dpmaif_ctrl->t7xx_dev); t7xx_do_tx_hw_push(dpmaif_ctrl); + t7xx_pci_enable_sleep(dpmaif_ctrl->t7xx_dev); pm_runtime_mark_last_busy(dpmaif_ctrl->dev); pm_runtime_put_autosuspend(dpmaif_ctrl->dev); } diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.c b/drivers/net/wwan/t7xx/t7xx_mhccif.c index 74c79d520d88..da076416da10 100644 --- a/drivers/net/wwan/t7xx/t7xx_mhccif.c +++ b/drivers/net/wwan/t7xx/t7xx_mhccif.c @@ -55,6 +55,9 @@ static irqreturn_t t7xx_mhccif_isr_thread(int irq, void *data) t7xx_mhccif_clear_interrupts(t7xx_dev, int_sts); + if (int_sts & D2H_INT_DS_LOCK_ACK) + complete_all(&t7xx_dev->sleep_lock_acquire); + if (int_sts & D2H_INT_SR_ACK) complete(&t7xx_dev->pm_sr_ack); diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c index a1a0824c1e11..8d70847826fb 100644 --- a/drivers/net/wwan/t7xx/t7xx_pci.c +++ b/drivers/net/wwan/t7xx/t7xx_pci.c @@ -34,6 +34,7 @@ #include #include #include +#include #include "t7xx_mhccif.h" #include "t7xx_modem_ops.h" @@ -45,6 +46,7 @@ #define PCI_IREG_BASE 0 #define PCI_EREG_BASE 2 +#define MTK_WAIT_TIMEOUT_MS 10 #define PM_ACK_TIMEOUT_MS 1500 #define PM_AUTOSUSPEND_MS 20000 #define PM_RESOURCE_POLL_TIMEOUT_US 10000 @@ -57,6 +59,21 @@ enum t7xx_pm_state { MTK_PM_RESUMED, }; +static void t7xx_dev_set_sleep_capability(struct t7xx_pci_dev *t7xx_dev, bool enable) +{ + void __iomem *ctrl_reg = IREG_BASE(t7xx_dev) + PCIE_MISC_CTRL; + u32 value; + + value = ioread32(ctrl_reg); + + if (enable) + value &= ~PCIE_MISC_MAC_SLEEP_DIS; + else + value |= PCIE_MISC_MAC_SLEEP_DIS; + + iowrite32(value, ctrl_reg); +} + static int t7xx_wait_pm_config(struct t7xx_pci_dev *t7xx_dev) { int ret, val; @@ -77,10 +94,14 @@ static int t7xx_pci_pm_init(struct 
t7xx_pci_dev *t7xx_dev) INIT_LIST_HEAD(&t7xx_dev->md_pm_entities); + spin_lock_init(&t7xx_dev->md_pm_lock); + mutex_init(&t7xx_dev->md_pm_entity_mtx); + init_completion(&t7xx_dev->sleep_lock_acquire); init_completion(&t7xx_dev->pm_sr_ack); + atomic_set(&t7xx_dev->sleep_disable_count, 0); device_init_wakeup(&pdev->dev, true); dev_pm_set_driver_flags(&pdev->dev, pdev->dev.power.driver_flags | @@ -99,6 +120,7 @@ void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev) { /* Enable the PCIe resource lock only after MD deep sleep is done */ t7xx_mhccif_mask_clr(t7xx_dev, + D2H_INT_DS_LOCK_ACK | D2H_INT_SUSPEND_ACK | D2H_INT_RESUME_ACK | D2H_INT_SUSPEND_ACK_AP | @@ -164,6 +186,81 @@ int t7xx_pci_pm_entity_unregister(struct t7xx_pci_dev *t7xx_dev, struct md_pm_en return -ENXIO; } +int t7xx_pci_sleep_disable_complete(struct t7xx_pci_dev *t7xx_dev) +{ + struct device *dev = &t7xx_dev->pdev->dev; + int ret; + + ret = wait_for_completion_timeout(&t7xx_dev->sleep_lock_acquire, + msecs_to_jiffies(MTK_WAIT_TIMEOUT_MS)); + if (!ret) + dev_err_ratelimited(dev, "Resource wait complete timed out\n"); + + return ret; +} + +/** + * t7xx_pci_disable_sleep() - Disable deep sleep capability. + * @t7xx_dev: MTK device. + * + * Lock the deep sleep capability. Note that the device can still go into deep sleep + * state while the device is in D0 state, from the host's point of view. + * + * If the device is in deep sleep state, wake it up and disable the deep sleep capability. + */ +void t7xx_pci_disable_sleep(struct t7xx_pci_dev *t7xx_dev) +{ + unsigned long flags; + + if (atomic_read(&t7xx_dev->md_pm_state) < MTK_PM_RESUMED) { + atomic_inc(&t7xx_dev->sleep_disable_count); + complete_all(&t7xx_dev->sleep_lock_acquire); + return; + } + + spin_lock_irqsave(&t7xx_dev->md_pm_lock, flags); + if (atomic_inc_return(&t7xx_dev->sleep_disable_count) == 1) { + u32 deep_sleep_enabled; + + reinit_completion(&t7xx_dev->sleep_lock_acquire); + t7xx_dev_set_sleep_capability(t7xx_dev, false); + + deep_sleep_enabled = ioread32(IREG_BASE(t7xx_dev) + PCIE_RESOURCE_STATUS); + deep_sleep_enabled &= PCIE_RESOURCE_STATUS_MSK; + if (deep_sleep_enabled) { + spin_unlock_irqrestore(&t7xx_dev->md_pm_lock, flags); + complete_all(&t7xx_dev->sleep_lock_acquire); + return; + } + + t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_DS_LOCK); + } + + spin_unlock_irqrestore(&t7xx_dev->md_pm_lock, flags); +} + +/** + * t7xx_pci_enable_sleep() - Enable deep sleep capability. + * @t7xx_dev: MTK device. + * + * After enabling deep sleep, the device can enter the deep sleep state.
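+ * The capability is reference counted, so deep sleep is actually re-enabled + * only when the last caller of t7xx_pci_disable_sleep() releases the lock.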
+ */ +void t7xx_pci_enable_sleep(struct t7xx_pci_dev *t7xx_dev) +{ + unsigned long flags; + + if (atomic_read(&t7xx_dev->md_pm_state) < MTK_PM_RESUMED) { + atomic_dec(&t7xx_dev->sleep_disable_count); + return; + } + + if (atomic_dec_and_test(&t7xx_dev->sleep_disable_count)) { + spin_lock_irqsave(&t7xx_dev->md_pm_lock, flags); + t7xx_dev_set_sleep_capability(t7xx_dev, true); + spin_unlock_irqrestore(&t7xx_dev->md_pm_lock, flags); + } +} + static int __t7xx_pci_pm_suspend(struct pci_dev *pdev) { struct t7xx_pci_dev *t7xx_dev; diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h index 6310f31540ca..90d358bff54c 100644 --- a/drivers/net/wwan/t7xx/t7xx_pci.h +++ b/drivers/net/wwan/t7xx/t7xx_pci.h @@ -21,6 +21,7 @@ #include #include #include +#include #include #include "t7xx_reg.h" @@ -51,7 +52,10 @@ typedef irqreturn_t (*t7xx_intr_callback)(int irq, void *param); * @base_addr: memory base addresses of HW components * @md: modem interface * @md_pm_entities: list of pm entities + * @md_pm_lock: protects PCIe sleep lock * @md_pm_entity_mtx: protects md_pm_entities list + * @sleep_disable_count: PCIe L1.2 lock counter + * @sleep_lock_acquire: indicates that sleep has been disabled * @pm_sr_ack: ack from the device when went to sleep or woke up * @md_pm_state: state for resume/suspend * @ccmni_ctlb: context structure used to control the network data path @@ -69,7 +73,10 @@ struct t7xx_pci_dev { /* Low Power Items */ struct list_head md_pm_entities; + spinlock_t md_pm_lock; /* Protects PCI resource lock */ struct mutex md_pm_entity_mtx; /* Protects MD PM entities list */ + atomic_t sleep_disable_count; + struct completion sleep_lock_acquire; struct completion pm_sr_ack; atomic_t md_pm_state; @@ -105,6 +112,9 @@ struct md_pm_entity { void *entity_param; }; +void t7xx_pci_disable_sleep(struct t7xx_pci_dev *t7xx_dev); +void t7xx_pci_enable_sleep(struct t7xx_pci_dev *t7xx_dev); +int t7xx_pci_sleep_disable_complete(struct t7xx_pci_dev *t7xx_dev); int t7xx_pci_pm_entity_register(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity); int t7xx_pci_pm_entity_unregister(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity); void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev);
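Taken together, the hunks above apply one recurring pattern around each stretch of hardware access in the data path. A condensed sketch of that pattern follows; the demo_hw_access() wrapper is hypothetical, while the three t7xx_pci_* helpers are exactly the ones this patch introduces.

#include "t7xx_pci.h"

static int demo_hw_access(struct t7xx_pci_dev *t7xx_dev)
{
	int ret = 0;

	/* Take the deep-sleep lock; calls are reference counted */
	t7xx_pci_disable_sleep(t7xx_dev);

	/* Wait for the device's DS_LOCK ack before touching registers,
	 * bailing out the same way the CLDMA TX path does.
	 */
	if (!t7xx_pci_sleep_disable_complete(t7xx_dev)) {
		ret = -EBUSY;
		goto release;
	}

	/* ... MMIO/doorbell writes are safe here ... */

release:
	/* Drop the lock; the last holder re-enables deep sleep */
	t7xx_pci_enable_sleep(t7xx_dev);
	return ret;
}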