From patchwork Mon Nov 23 13:51:07 2020
X-Patchwork-Submitter: "Kumar, M Chetan" <m.chetan.kumar@intel.com>
X-Patchwork-Id: 330946
From: M Chetan Kumar <m.chetan.kumar@intel.com>
To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org
Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com
Subject: [RFC 02/18] net: iosm: irq handling
Date: Mon, 23 Nov 2020 19:21:07 +0530
Message-Id: <20201123135123.48892-3-m.chetan.kumar@intel.com>
In-Reply-To: <20201123135123.48892-1-m.chetan.kumar@intel.com>
References: <20201123135123.48892-1-m.chetan.kumar@intel.com>

1) Requests the interrupt vector and frees the allocated resources.
2) Registers the IRQ handler.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_irq.c | 95 ++++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_irq.h | 35 +++++++++++++
 2 files changed, 130 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_irq.c b/drivers/net/wwan/iosm/iosm_ipc_irq.c
new file mode 100644
index 000000000000..b9e1bc7959db
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_irq.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include "iosm_ipc_pcie.h"
+#include "iosm_ipc_protocol.h"
+
+/* Write to the specified register offset for doorbell interrupt */
+static inline void write_dbell_reg(struct iosm_pcie *ipc_pcie, int irq_n,
+				   u32 data)
+{
+	void __iomem *write_reg;
+
+	/* Select the first doorbell register, which is only currently needed
+	 * by CP.
+	 */
+	write_reg = (void __iomem *)((u8 __iomem *)ipc_pcie->ipc_regs +
+				     ipc_pcie->doorbell_write +
+				     (irq_n * ipc_pcie->doorbell_reg_offset));
+
+	/* Fire the doorbell irq by writing data on the doorbell write pointer
+	 * register.
+	 */
+	iowrite32(data, write_reg);
+}
+
+void ipc_doorbell_fire(struct iosm_pcie *ipc_pcie, int irq_n, u32 data)
+{
+	if (!ipc_pcie || !ipc_pcie->ipc_regs)
+		return;
+
+	write_dbell_reg(ipc_pcie, irq_n, data);
+}
+
+/* Threaded interrupt handler for MSI interrupts */
+static irqreturn_t ipc_msi_interrupt(int irq, void *dev_id)
+{
+	struct iosm_pcie *ipc_pcie = dev_id;
+	int instance = irq - ipc_pcie->pci->irq;
+
+	/* Shift the MSI irq actions to the IPC tasklet. IRQ_NONE means the
+	 * irq was not from the IPC device or could not be served.
+	 */
+	if (instance >= ipc_pcie->nvec)
+		return IRQ_NONE;
+
+	ipc_imem_irq_process(ipc_pcie->imem, instance);
+
+	return IRQ_HANDLED;
+}
+
+void ipc_release_irq(struct iosm_pcie *ipc_pcie)
+{
+	struct pci_dev *pdev = ipc_pcie->pci;
+
+	if (pdev->msi_enabled) {
+		while (--ipc_pcie->nvec >= 0)
+			free_irq(pdev->irq + ipc_pcie->nvec, ipc_pcie);
+	}
+	pci_free_irq_vectors(pdev);
+}
+
+int ipc_acquire_irq(struct iosm_pcie *ipc_pcie)
+{
+	struct pci_dev *pdev = ipc_pcie->pci;
+	int i, rc = 0;
+
+	ipc_pcie->nvec = pci_alloc_irq_vectors(pdev, IPC_MSI_VECTORS,
+					       IPC_MSI_VECTORS, PCI_IRQ_MSI);
+
+	if (ipc_pcie->nvec < 0)
+		return ipc_pcie->nvec;
+
+	if (!pdev->msi_enabled) {
+		rc = -1;
+		goto error;
+	}
+
+	for (i = 0; i < ipc_pcie->nvec; ++i) {
+		rc = request_threaded_irq(pdev->irq + i, NULL,
+					  ipc_msi_interrupt, 0, KBUILD_MODNAME,
+					  ipc_pcie);
+		if (rc) {
+			dev_err(ipc_pcie->dev, "unable to grab IRQ %d, rc=%d",
+				pdev->irq + i, rc);
+			ipc_pcie->nvec = i;
+			ipc_release_irq(ipc_pcie);
+			goto error;
+		}
+	}
+
+error:
+	return rc;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_irq.h b/drivers/net/wwan/iosm/iosm_ipc_irq.h
new file mode 100644
index 000000000000..db207cb95a8a
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_irq.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_IRQ_H
+#define IOSM_IPC_IRQ_H
+
+#include "iosm_ipc_pcie.h"
+
+struct iosm_pcie;
+
+/**
+ * ipc_doorbell_fire - Fire a doorbell interrupt to CP.
+ * @ipc_pcie: Pointer to iosm_pcie
+ * @irq_n: Doorbell type
+ * @data: IPC state
+ */
+void ipc_doorbell_fire(struct iosm_pcie *ipc_pcie, int irq_n, u32 data);
+
+/**
+ * ipc_release_irq - Remove the IRQ handler.
+ * @ipc_pcie: Pointer to iosm_pcie struct
+ */
+void ipc_release_irq(struct iosm_pcie *ipc_pcie);
+
+/**
+ * ipc_acquire_irq - Install the IPC IRQ handler.
+ * @ipc_pcie: Pointer to iosm_pcie struct
+ *
+ * Return: 0 on success and -1 on failure
+ */
+int ipc_acquire_irq(struct iosm_pcie *ipc_pcie);
+
+#endif
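
[Editor's note: as a usage illustration, not part of the patch, here is a
minimal sketch of how a probe path could wire up the two entry points above.
The function name ipc_pcie_setup_irq and its placement are hypothetical;
ipc_acquire_irq(), ipc_doorbell_fire() and IPC_MEM_EXEC_STAGE_BOOT come from
this series.]

/* Hypothetical probe-path glue for the API introduced above. */
static int ipc_pcie_setup_irq(struct iosm_pcie *ipc_pcie)
{
	int rc = ipc_acquire_irq(ipc_pcie); /* MSI vectors + threaded IRQs */

	if (rc)
		return rc; /* negative errno or -1, as documented above */

	/* Ring doorbell 0 so CP processes the boot-stage information. */
	ipc_doorbell_fire(ipc_pcie, 0, IPC_MEM_EXEC_STAGE_BOOT);

	return 0;
}

On teardown, ipc_release_irq() must only run after ipc_acquire_irq()
succeeded, since it walks nvec backwards freeing each vector.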
From patchwork Mon Nov 23 13:51:10 2020
X-Patchwork-Submitter: "Kumar, M Chetan" <m.chetan.kumar@intel.com>
X-Patchwork-Id: 330944
From: M Chetan Kumar <m.chetan.kumar@intel.com>
To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org
Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com
Subject: [RFC 05/18] net: iosm: shared memory I/O operations
Date: Mon, 23 Nov 2020 19:21:10 +0530
Message-Id: <20201123135123.48892-6-m.chetan.kumar@intel.com>
In-Reply-To: <20201123135123.48892-1-m.chetan.kumar@intel.com>
References: <20201123135123.48892-1-m.chetan.kumar@intel.com>

1) Binds a logical channel between host and device for communication.
2) Implements device-specific (char/net) I/O operations.
3) Injects the primary bootloader FW image into the modem.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_imem_ops.c | 779 ++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_imem_ops.h | 102 ++++
 2 files changed, 881 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
new file mode 100644
index 000000000000..2e2f3f43e21c
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
@@ -0,0 +1,779 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/delay.h>
+
+#include "iosm_ipc_chnl_cfg.h"
+#include "iosm_ipc_imem.h"
+#include "iosm_ipc_imem_ops.h"
+#include "iosm_ipc_sio.h"
+#include "iosm_ipc_task_queue.h"
+
+/* Open a packet data online channel between the network layer and CP. */
+int imem_sys_wwan_open(void *instance, int vlan_id)
+{
+	struct iosm_imem *ipc_imem = instance;
+
+	dev_dbg(ipc_imem->dev, "%s[vlan id:%d]",
+		ipc_ap_phase_get_string(ipc_imem->phase), vlan_id);
+
+	/* The network interface is only supported in the runtime phase. */
+	if (imem_ap_phase_update(ipc_imem) != IPC_P_RUN) {
+		dev_err(ipc_imem->dev, "[net:%d]: refused phase %s", vlan_id,
+			ipc_ap_phase_get_string(ipc_imem->phase));
+		return -1;
+	}
+
+	/* Check the VLAN tag:
+	 * tags 1 to 8 create IP MUX channel sessions;
+	 * tags 257 to 511 create a DSS channel.
+	 * MUX sessions start from 0 while VLAN tags start from 1,
+	 * so map the tag to if_id = vlan_id - 1.
+	 */
+	if (vlan_id > 0 && vlan_id <= ipc_mux_get_max_sessions(ipc_imem->mux)) {
+		return ipc_mux_open_session(ipc_imem->mux, vlan_id - 1);
+	} else if (vlan_id > 256 && vlan_id < 512) {
+		int ch_id =
+			imem_channel_alloc(ipc_imem, vlan_id, IPC_CTYPE_WWAN);
+
+		if (imem_channel_open(ipc_imem, ch_id, IPC_HP_NET_CHANNEL_INIT))
+			return ch_id;
+	}
+
+	return -1;
+}
+
+/* Release a net link to CP. */
+void imem_sys_wwan_close(void *instance, int vlan_id, int channel_id)
+{
+	struct iosm_imem *ipc_imem = instance;
+
+	if (ipc_imem->mux && vlan_id > 0 &&
+	    vlan_id <= ipc_mux_get_max_sessions(ipc_imem->mux))
+		ipc_mux_close_session(ipc_imem->mux, vlan_id - 1);
+
+	else if (vlan_id > 256 && vlan_id < 512)
+		imem_channel_close(ipc_imem, channel_id);
+}
+
+/* Tasklet call to do uplink transfer. */
+static int imem_tq_sio_write(void *instance, int arg, void *msg, size_t size)
+{
+	struct iosm_imem *ipc_imem = instance;
+
+	ipc_imem->ev_sio_write_pending = false;
+	imem_ul_send(ipc_imem);
+
+	return 0;
+}
+
+/* Schedule a sio write through the tasklet. */
+static bool imem_call_sio_write(struct iosm_imem *ipc_imem)
+{
+	if (ipc_imem->ev_sio_write_pending)
+		return false;
+
+	ipc_imem->ev_sio_write_pending = true;
+
+	return (!ipc_task_queue_send_task(ipc_imem, imem_tq_sio_write, 0, NULL,
+					  0, false));
+}
+
+/* Add the skb to the UL list. */
+static int imem_wwan_transmit(struct iosm_imem *ipc_imem, int vlan_id,
+			      int channel_id, struct sk_buff *skb)
+{
+	struct ipc_mem_channel *channel;
+
+	channel = &ipc_imem->channels[channel_id];
+
+	if (channel->state != IMEM_CHANNEL_ACTIVE) {
+		dev_err(ipc_imem->dev, "invalid state of channel %d",
+			channel_id);
+		return -1;
+	}
+
+	if (ipc_pcie_addr_map(ipc_imem->pcie, skb->data, skb->len,
+			      &IPC_CB(skb)->mapping, DMA_TO_DEVICE)) {
+		dev_err(ipc_imem->dev, "failed to map skb");
+		return -1;
+	}
+
+	IPC_CB(skb)->direction = DMA_TO_DEVICE;
+	IPC_CB(skb)->len = skb->len;
+	IPC_CB(skb)->op_type = UL_DEFAULT;
+
+	/* Add skb to the uplink skbuf accumulator */
+	skb_queue_tail(&channel->ul_list, skb);
+	imem_call_sio_write(ipc_imem);
+
+	return 0;
+}
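
/* A sketch, not part of this patch: the VLAN-tag routing rule enforced by
 * imem_sys_wwan_open() above and relied on by imem_sys_wwan_transmit()
 * below, written out as a self-contained helper. The helper and enum names
 * are hypothetical; max_sessions stands in for ipc_mux_get_max_sessions().
 */
enum iosm_tag_class { TAG_MUX_SESSION, TAG_DSS_CHANNEL, TAG_INVALID };

static enum iosm_tag_class classify_vlan_tag(int vlan_id, int max_sessions)
{
	if (vlan_id > 0 && vlan_id <= max_sessions)
		return TAG_MUX_SESSION; /* MUX session if_id = vlan_id - 1 */
	if (vlan_id > 256 && vlan_id < 512)
		return TAG_DSS_CHANNEL; /* e.g. IPC_WWAN_DSS_ID_0 == 257 */
	return TAG_INVALID;
}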
+
+/* Function to transfer UL data.
+ * The WWAN layer must free the packet if imem fails to transmit;
+ * on success, the imem layer frees it.
+ */
+int imem_sys_wwan_transmit(void *instance, int vlan_id, int channel_id,
+			   struct sk_buff *skb)
+{
+	struct iosm_imem *ipc_imem = instance;
+	int ret = -1;
+
+	if (!ipc_imem || channel_id < 0)
+		return -EINVAL;
+
+	/* Is CP running? */
+	if (ipc_imem->phase != IPC_P_RUN) {
+		dev_dbg(ipc_imem->dev, "%s[transmit, vlanid:%d]",
+			ipc_ap_phase_get_string(ipc_imem->phase), vlan_id);
+		return -EBUSY;
+	}
+
+	if (ipc_imem->channels[channel_id].ctype == IPC_CTYPE_WWAN) {
+		if (vlan_id > 0 &&
+		    vlan_id <= ipc_mux_get_max_sessions(ipc_imem->mux))
+			/* Route the UL packet through the IP MUX layer */
+			ret = ipc_mux_ul_trigger_encode(ipc_imem->mux,
+							vlan_id - 1, skb);
+		/* Control channels and low latency data channel for VoLTE */
+		else if (vlan_id > 256 && vlan_id < 512)
+			ret = imem_wwan_transmit(ipc_imem, vlan_id, channel_id,
+						 skb);
+	} else {
+		dev_err(ipc_imem->dev,
+			"invalid channel type on channel %d: ctype: %d",
+			channel_id, ipc_imem->channels[channel_id].ctype);
+	}
+
+	return ret;
+}
+
+void wwan_channel_init(struct iosm_imem *ipc_imem, int total_sessions,
+		       enum ipc_mux_protocol mux_type)
+{
+	struct ipc_chnl_cfg chnl_cfg = { 0 };
+
+	ipc_imem->cp_version = ipc_mmio_get_cp_version(ipc_imem->mmio);
+
+	/* If modem version is invalid (0xffffffff), do not initialize WWAN. */
+	if (ipc_imem->cp_version == -1) {
+		dev_err(ipc_imem->dev, "invalid CP version");
+		return;
+	}
+
+	while (ipc_imem->nr_of_channels < IPC_MEM_MAX_CHANNELS &&
+	       !ipc_chnl_cfg_get(&chnl_cfg, ipc_imem->nr_of_channels,
+				 mux_type)) {
+		dev_dbg(ipc_imem->dev,
+			"initializing entry :%d id:%d ul_pipe:%d dl_pipe:%d",
+			ipc_imem->nr_of_channels, chnl_cfg.id, chnl_cfg.ul_pipe,
+			chnl_cfg.dl_pipe);
+
+		imem_channel_init(ipc_imem, IPC_CTYPE_WWAN, chnl_cfg,
+				  IRQ_MOD_OFF);
+	}
+	/* WWAN registration. */
+	ipc_imem->wwan = ipc_wwan_init(ipc_imem, ipc_imem->dev, total_sessions);
+	if (!ipc_imem->wwan)
+		dev_err(ipc_imem->dev,
+			"failed to register the ipc_wwan interfaces");
+}
+
+/* Copy the data from user space. */
+static struct sk_buff *
+imem_sio_copy_from_user_to_skb(struct iosm_imem *ipc_imem, int channel_id,
+			       const unsigned char __user *buf, int size,
+			       int is_blocking)
+{
+	struct sk_buff *skb;
+	dma_addr_t mapping;
+
+	/* Allocate skb memory for the uplink buffer. */
+	skb = ipc_pcie_alloc_skb(ipc_imem->pcie, size, GFP_KERNEL, &mapping,
+				 DMA_TO_DEVICE, 0);
+	if (!skb)
+		return skb;
+
+	if (copy_from_user(skb_put(skb, size), buf, size) != 0) {
+		dev_err(ipc_imem->dev, "ch[%d]: copy from user failed",
+			channel_id);
+		ipc_pcie_kfree_skb(ipc_imem->pcie, skb);
+		return NULL;
+	}
+
+	IPC_CB(skb)->op_type =
+		(u8)(is_blocking ? UL_USR_OP_BLOCKED : UL_DEFAULT);
+
+	return skb;
+}
+
+/* Save the complete PSI image in a specific imem region, prepare the doorbell
+ * scratchpad and inform the ROM driver. The flash app is suspended until
+ * CP has processed the information. After the start of the PSI image, CP shall
+ * set the execution state to PSI and generate the irq; the flash app is then
+ * resumed, or it times out.
+ */
+static int imem_psi_transfer(struct iosm_imem *ipc_imem,
+			     struct ipc_mem_channel *channel,
+			     const unsigned char __user *buf, int count)
+{
+	enum ipc_mem_exec_stage exec_stage = IPC_MEM_EXEC_STAGE_INVALID;
+	int psi_start_timeout = PSI_START_DEFAULT_TIMEOUT;
+	dma_addr_t mapping = 0;
+	int status, result;
+	void *dest_buf;
+
+	imem_hrtimer_stop(&ipc_imem->startup_timer);
+
+	/* Allocate the buffer for the PSI image. */
+	dest_buf = pci_alloc_consistent(ipc_imem->pcie->pci, count, &mapping);
+	if (!dest_buf) {
+		dev_err(ipc_imem->dev, "ch[%d] cannot allocate %d bytes",
+			channel->channel_id, count);
+		return -1;
+	}
+
+	/* Copy the PSI image from user to kernel space.
+	 */
+	if (copy_from_user(dest_buf, buf, count) != 0) {
+		dev_err(ipc_imem->dev, "ch[%d] copy from user failed",
+			channel->channel_id);
+		goto error;
+	}
+
+	/* Save the PSI information for the CP ROM driver on the doorbell
+	 * scratchpad.
+	 */
+	ipc_mmio_set_psi_addr_and_size(ipc_imem->mmio, mapping, count);
+
+	/* Trigger the CP interrupt to process the PSI information. */
+	ipc_doorbell_fire(ipc_imem->pcie, 0, IPC_MEM_EXEC_STAGE_BOOT);
+	/* Suspend the flash app and wait for irq. */
+	status = WAIT_FOR_TIMEOUT(&channel->ul_sem, IPC_PSI_TRANSFER_TIMEOUT);
+
+	if (status <= 0) {
+		dev_err(ipc_imem->dev,
+			"ch[%d] timeout, failed PSI transfer to CP",
+			channel->channel_id);
+		ipc_uevent_send(ipc_imem->dev, UEVENT_MDM_TIMEOUT);
+		goto error;
+	}
+
+	/* CP should have copied the PSI image. */
+	pci_free_consistent(ipc_imem->pcie->pci, count, dest_buf, mapping);
+
+	/* If the PSI download fails, return the CP boot ROM exit code
+	 * received via the doorbell scratchpad to the flash app.
+	 */
+	if (ipc_imem->rom_exit_code != IMEM_ROM_EXIT_OPEN_EXT &&
+	    ipc_imem->rom_exit_code != IMEM_ROM_EXIT_CERT_EXT)
+		return (-1) * ((int)ipc_imem->rom_exit_code);
+
+	dev_dbg(ipc_imem->dev, "PSI image successfully downloaded");
+
+	/* Wait psi_start_timeout milliseconds until the CP PSI image is
+	 * running and updates the execution_stage field with
+	 * IPC_MEM_EXEC_STAGE_PSI. Verify the execution stage.
+	 */
+	while (psi_start_timeout > 0) {
+		exec_stage = ipc_mmio_get_exec_stage(ipc_imem->mmio);
+
+		if (exec_stage == IPC_MEM_EXEC_STAGE_PSI)
+			break;
+
+		msleep(20);
+		psi_start_timeout -= 20;
+	}
+
+	if (exec_stage != IPC_MEM_EXEC_STAGE_PSI)
+		return -1; /* Unknown status of the CP PSI process. */
+
+	/* Enter the PSI phase. */
+	dev_dbg(ipc_imem->dev, "execution_stage[%X] eq. PSI", exec_stage);
+
+	ipc_imem->phase = IPC_P_PSI;
+
+	/* Request the RUNNING state from CP and wait until it is reached
+	 * or a timeout occurs.
+	 */
+	imem_ipc_init_check(ipc_imem);
+
+	/* Suspend the flash app, wait for irq and evaluate the CP IPC state. */
+	status = WAIT_FOR_TIMEOUT(&channel->ul_sem, IPC_PSI_TRANSFER_TIMEOUT);
+	if (status <= 0) {
+		dev_err(ipc_imem->dev,
+			"ch[%d] timeout, failed PSI RUNNING state on CP",
+			channel->channel_id);
+		ipc_uevent_send(ipc_imem->dev, UEVENT_MDM_TIMEOUT);
+		return -1;
+	}
+
+	if (ipc_mmio_get_ipc_state(ipc_imem->mmio) !=
+	    IPC_MEM_DEVICE_IPC_RUNNING) {
+		dev_err(ipc_imem->dev,
+			"ch[%d] %s: unexpected CP IPC state %d, not RUNNING",
+			channel->channel_id,
+			ipc_ap_phase_get_string(ipc_imem->phase),
+			ipc_mmio_get_ipc_state(ipc_imem->mmio));
+
+		return -1;
+	}
+
+	/* Create the flash channel for the transfer of the images. */
+	result = imem_sys_sio_open(ipc_imem);
+	if (result < 0) {
+		dev_err(ipc_imem->dev, "can't open flash_channel");
+		return result;
+	}
+
+	/* Inform the flash app that the PSI was sent and started on CP.
+	 * The flash app shall wait for the CP status in the blocking read
+	 * entry point.
+	 */
+	return count;
+error:
+	pci_free_consistent(ipc_imem->pcie->pci, count, dest_buf, mapping);
+
+	return -1;
+}
+
+/* Get the write active channel. */
+static struct ipc_mem_channel *
+imem_sio_write_channel(struct iosm_imem *ipc_imem, int ch,
+		       const unsigned char __user *buf, int size)
+{
+	struct ipc_mem_channel *channel;
+	enum ipc_phase phase;
+
+	if (ch < 0 || ch >= ipc_imem->nr_of_channels || size <= 0) {
+		dev_err(ipc_imem->dev, "invalid channel No. or buff size");
+		return NULL;
+	}
+
+	channel = &ipc_imem->channels[ch];
+	/* Update the current operation phase.
+	 */
+	phase = ipc_imem->phase;
+
+	/* Select the operation depending on the execution stage. */
+	switch (phase) {
+	case IPC_P_RUN:
+	case IPC_P_PSI:
+	case IPC_P_EBL:
+		break;
+
+	case IPC_P_ROM:
+		/* Prepare the PSI image for the CP ROM driver and
+		 * suspend the flash app.
+		 */
+		if (channel->state != IMEM_CHANNEL_RESERVED) {
+			dev_err(ipc_imem->dev,
+				"ch[%d]:invalid channel state %d,expected %d",
+				ch, channel->state, IMEM_CHANNEL_RESERVED);
+			return NULL;
+		}
+		return channel;
+
+	default:
+		/* Ignore uplink actions in all other phases. */
+		dev_err(ipc_imem->dev, "ch[%d]: confused phase %d", ch, phase);
+		return NULL;
+	}
+
+	/* Check the full availability of the channel. */
+	if (channel->state != IMEM_CHANNEL_ACTIVE) {
+		dev_err(ipc_imem->dev, "ch[%d]: confused channel state %d", ch,
+			channel->state);
+		return NULL;
+	}
+
+	return channel;
+}
+
+/* Release a sio link to CP. */
+void imem_sys_sio_close(struct iosm_sio *ipc_sio)
+{
+	struct iosm_imem *ipc_imem = ipc_sio->imem_instance;
+	int channel_id = ipc_sio->channel_id;
+	struct ipc_mem_channel *channel;
+	enum ipc_phase curr_phase;
+	int boot_check_timeout = 0;
+	int status = 0;
+	u32 tail = 0;
+
+	if (channel_id < 0 || channel_id >= ipc_imem->nr_of_channels) {
+		dev_err(ipc_imem->dev, "invalid channel id %d", channel_id);
+		return;
+	}
+
+	if (channel_id != IPC_MEM_MBIM_CTRL_CH_ID)
+		boot_check_timeout = BOOT_CHECK_DEFAULT_TIMEOUT;
+
+	channel = &ipc_imem->channels[channel_id];
+
+	curr_phase = ipc_imem->phase;
+
+	/* If the current phase is IPC_P_OFF or the SIO ID is negative, the
+	 * channel is already freed. Nothing to do.
+	 */
+	if (curr_phase == IPC_P_OFF || channel->sio_id < 0) {
+		dev_err(ipc_imem->dev,
+			"nothing to do. Current Phase: %s SIO ID: %d",
+			ipc_ap_phase_get_string(curr_phase), channel->sio_id);
+		return;
+	}
+
+	if (channel->state == IMEM_CHANNEL_FREE) {
+		dev_err(ipc_imem->dev, "ch[%d]: invalid channel state %d",
+			channel_id, channel->state);
+		return;
+	}
+
+	/* Free only the channel id in the CP power off mode. */
+	if (channel->state == IMEM_CHANNEL_RESERVED) {
+		imem_channel_free(channel);
+		return;
+	}
+
+	if (channel_id != IPC_MEM_MBIM_CTRL_CH_ID &&
+	    ipc_imem->flash_channel_id >= 0) {
+		int i;
+		enum ipc_mem_exec_stage exec_stage;
+
+		/* Increase the total wait time to boot_check_timeout */
+		for (i = 0; i < boot_check_timeout; i++) {
+			/* User space can terminate either when the modem has
+			 * finished downloading or has finished transferring a
+			 * coredump.
+			 */
+			exec_stage = ipc_mmio_get_exec_stage(ipc_imem->mmio);
+			if (exec_stage == IPC_MEM_EXEC_STAGE_RUN ||
+			    exec_stage == IPC_MEM_EXEC_STAGE_PSI)
+				break;
+
+			msleep(20);
+		}
+
+		msleep(100);
+	}
+
+	/* If there are any pending TDs then wait for Timeout/Completion before
+	 * closing the pipe.
+	 */
+	if (channel->ul_pipe.old_tail != channel->ul_pipe.old_head) {
+		ipc_imem->app_notify_ul_pend = 1;
+
+		/* Suspend the user app and wait a certain time for processing
+		 * UL Data.
+		 */
+		status = WAIT_FOR_TIMEOUT(&ipc_imem->ul_pend_sem,
+					  IPC_PEND_DATA_TIMEOUT);
+
+		if (status == 0) {
+			dev_dbg(ipc_imem->dev,
+				"Pending data Timeout on UL-Pipe:%d Head:%d Tail:%d",
+				channel->ul_pipe.pipe_nr,
+				channel->ul_pipe.old_head,
+				channel->ul_pipe.old_tail);
+		}
+
+		ipc_imem->app_notify_ul_pend = 0;
+	}
+
+	/* If there are any pending TDs then wait for Timeout/Completion before
+	 * closing the pipe.
+	 */
+	ipc_protocol_get_head_tail_index(ipc_imem->ipc_protocol,
+					 &channel->dl_pipe, NULL, &tail);
+
+	if (tail != channel->dl_pipe.old_tail) {
+		ipc_imem->app_notify_dl_pend = 1;
+
+		/* Suspend the user app and wait a certain time for processing
+		 * DL Data.
+		 */
+		status = WAIT_FOR_TIMEOUT(&ipc_imem->dl_pend_sem,
+					  IPC_PEND_DATA_TIMEOUT);
+
+		if (status == 0) {
+			dev_dbg(ipc_imem->dev,
+				"Pending data Timeout on DL-Pipe:%d Head:%d Tail:%d",
+				channel->dl_pipe.pipe_nr,
+				channel->dl_pipe.old_head,
+				channel->dl_pipe.old_tail);
+		}
+
+		ipc_imem->app_notify_dl_pend = 0;
+	}
+
+	/* Due to the wait for completion in messages, there is a small window
+	 * between closing the pipe and updating the channel state to closed.
+	 * In this small window there could be an HP update from the host
+	 * driver. Hence update the channel state to CLOSING to avoid an
+	 * unnecessary interrupt towards CP.
+	 */
+	channel->state = IMEM_CHANNEL_CLOSING;
+
+	/* Release the pipe resources */
+	if (channel_id != IPC_MEM_MBIM_CTRL_CH_ID &&
+	    ipc_imem->flash_channel_id != -1) {
+		/* Don't send close for software download pipes, as
+		 * the device is already rebooting.
+		 */
+		imem_pipe_cleanup(ipc_imem, &channel->ul_pipe);
+		imem_pipe_cleanup(ipc_imem, &channel->dl_pipe);
+	} else {
+		imem_pipe_close(ipc_imem, &channel->ul_pipe);
+		imem_pipe_close(ipc_imem, &channel->dl_pipe);
+	}
+
+	imem_channel_free(channel);
+
+	if (channel_id != IPC_MEM_MBIM_CTRL_CH_ID)
+		/* Reset the global flash channel id. */
+		ipc_imem->flash_channel_id = -1;
+}
+
+/* Open an MBIM link to CP and return the channel id. */
+int imem_sys_mbim_open(void *instance)
+{
+	struct iosm_imem *ipc_imem = instance;
+	int ch_id;
+
+	/* The MBIM interface is only supported in the runtime phase. */
+	if (imem_ap_phase_update(ipc_imem) != IPC_P_RUN) {
+		dev_err(ipc_imem->dev, "MBIM open refused, phase %s",
+			ipc_ap_phase_get_string(ipc_imem->phase));
+		return -1;
+	}
+
+	ch_id = imem_channel_alloc(ipc_imem, IPC_MEM_MBIM_CTRL_CH_ID,
+				   IPC_CTYPE_MBIM);
+
+	if (ch_id < 0) {
+		dev_err(ipc_imem->dev, "reservation of an MBIM chnl id failed");
+		return ch_id;
+	}
+
+	if (!imem_channel_open(ipc_imem, ch_id, IPC_HP_SIO_OPEN)) {
+		dev_err(ipc_imem->dev, "MBIM channel id open failed");
+		return -1;
+	}
+
+	return ch_id;
+}
+
+/* Open a SIO link to CP and return the channel id. */
+int imem_sys_sio_open(void *instance)
+{
+	struct iosm_imem *ipc_imem = instance;
+	struct ipc_chnl_cfg chnl_cfg = { 0 };
+	enum ipc_phase phase;
+	int channel_id;
+
+	phase = imem_ap_phase_update(ipc_imem);
+
+	/* The control link to CP is only supported in the power off, psi or
+	 * run phase.
+	 */
+	switch (phase) {
+	case IPC_P_OFF:
+	case IPC_P_ROM:
+		/* Get a channel id as flash id and reserve it. */
+		channel_id = imem_channel_alloc(ipc_imem, IPC_MEM_FLASH_CH_ID,
+						IPC_CTYPE_FLASH);
+		if (channel_id < 0) {
+			dev_err(ipc_imem->dev,
+				"reservation of a flash channel id failed");
+			return channel_id;
+		}
+
+		/* Enqueue chip info data to be read */
+		if (imem_trigger_chip_info(ipc_imem)) {
+			imem_channel_close(ipc_imem, channel_id);
+			return -1;
+		}
+
+		/* Save the flash channel id to execute the ROM interworking. */
+		ipc_imem->flash_channel_id = channel_id;
+
+		return channel_id;
+
+	case IPC_P_PSI:
+	case IPC_P_EBL:
+		/* The channel id used as flash id shall already be
+		 * present as reserved.
+		 */
+		if (ipc_imem->flash_channel_id < 0) {
+			dev_err(ipc_imem->dev,
+				"missing a valid flash channel id");
+			return -1;
+		}
+		channel_id = ipc_imem->flash_channel_id;
+
+		ipc_imem->cp_version = ipc_mmio_get_cp_version(ipc_imem->mmio);
+		if (ipc_imem->cp_version == -1) {
+			dev_err(ipc_imem->dev, "invalid CP version");
+			return -1;
+		}
+
+		/* PSI may have changed the CP version field, which may
+		 * result in a different channel configuration.
+		 * Fetch and update the flash channel config.
+		 */
+		if (ipc_chnl_cfg_get(&chnl_cfg, ipc_imem->flash_channel_id,
+				     MUX_UNKNOWN)) {
+			dev_err(ipc_imem->dev,
+				"failed to get flash pipe configuration");
+			return -1;
+		}
+
+		ipc_imem_channel_update(ipc_imem, channel_id, chnl_cfg,
+					IRQ_MOD_OFF);
+
+		if (!imem_channel_open(ipc_imem, channel_id, IPC_HP_SIO_OPEN))
+			return -1;
+
+		return channel_id;
+
+	default:
+		/* CP is in the wrong state (e.g. CRASH or CD_READY) */
+		dev_err(ipc_imem->dev, "SIO open refused, phase %d", phase);
+		return -1;
+	}
+}
+
+ssize_t imem_sys_sio_read(struct iosm_sio *ipc_sio, unsigned char __user *buf,
+			  size_t size, struct sk_buff *skb)
+{
+	unsigned char __user *dest_buf, *dest_end;
+	size_t dest_len, src_len, copied_b = 0;
+	unsigned char *src_buf;
+
+	/* Prepare the destination space. */
+	dest_buf = buf;
+	dest_end = dest_buf + size;
+
+	/* Copy the accumulated rx packets. */
+	while (skb) {
+		/* Prepare the source elements. */
+		src_buf = skb->data;
+		src_len = skb->len;
+
+		/* Calculate the current size of the destination buffer. */
+		dest_len = dest_end - dest_buf;
+
+		/* Compute the number of bytes to copy. */
+		copied_b = (dest_len < src_len) ? dest_len : src_len;
+
+		/* Copy the chars into the user space buffer. */
+		if (copy_to_user((void __user *)dest_buf, src_buf,
+				 copied_b) != 0) {
+			dev_err(ipc_sio->dev,
+				"chid[%d] userspace copy failed n=%zu\n",
+				ipc_sio->channel_id, copied_b);
+			ipc_pcie_kfree_skb(ipc_sio->pcie, skb);
+			return -EFAULT;
+		}
+
+		/* Update the source elements. */
+		skb->data = src_buf + copied_b;
+		skb->len = skb->len - copied_b;
+
+		/* Update the destination pointer. */
+		dest_buf += copied_b;
+
+		/* Test the fill level of the user buffer. */
+		if (dest_buf >= dest_end) {
+			/* Free the consumed skbuf or save the pending skbuf
+			 * to consume it in the next read call.
+			 */
+			if (skb->len == 0)
+				ipc_pcie_kfree_skb(ipc_sio->pcie, skb);
+			else
+				ipc_sio->rx_pending_buf = skb;
+
+			/* Return the number of saved chars. */
+			break;
+		}
+
+		/* Free the consumed skbuf. */
+		ipc_pcie_kfree_skb(ipc_sio->pcie, skb);
+
+		/* Get the next skbuf element. */
+		skb = skb_dequeue(&ipc_sio->rx_list);
+	}
+
+	/* Return the number of saved chars. */
+	copied_b = dest_buf - buf;
+	return copied_b;
+}
+
+int imem_sys_sio_write(struct iosm_sio *ipc_sio,
+		       const unsigned char __user *buf, int count,
+		       bool blocking_write)
+{
+	struct iosm_imem *ipc_imem = ipc_sio->imem_instance;
+	int channel_id = ipc_sio->channel_id;
+	struct ipc_mem_channel *channel;
+	struct sk_buff *skb;
+	int ret = -1;
+
+	channel = imem_sio_write_channel(ipc_imem, channel_id, buf, count);
+
+	if (!channel || ipc_imem->phase == IPC_P_OFF_REQ)
+		return ret;
+
+	/* In the ROM phase the PSI image is passed to CP via a specific
+	 * shared memory area and the doorbell scratchpad directly.
+	 */
+	if (ipc_imem->phase == IPC_P_ROM) {
+		ret = imem_psi_transfer(ipc_imem, channel, buf, count);
+
+		/* If the PSI transfer is successful then send the Feature
+		 * Set message.
+		 */
+		if (ret > 0)
+			imem_msg_send_feature_set(ipc_imem,
+						  IPC_MEM_INBAND_CRASH_SIG,
+						  false);
+		return ret;
+	}
+
+	/* Allocate skb memory for the uplink buffer. */
+	skb = imem_sio_copy_from_user_to_skb(ipc_imem, channel_id, buf, count,
+					     blocking_write);
+	if (!skb)
+		return ret;
+
+	/* Add skb to the uplink skbuf accumulator. */
+	skb_queue_tail(&channel->ul_list, skb);
+
+	/* Inform the IPC tasklet to pass uplink IP packets to CP.
+	 * A blocking write waits for the UL completion notification;
+	 * a non-blocking write simply returns the count.
+	 */
+	if (imem_call_sio_write(ipc_imem) && blocking_write) {
+		/* Suspend the app and wait for UL data completion. */
+		int status =
+			wait_for_completion_interruptible(&channel->ul_sem);
+
+		if (status < 0) {
+			dev_err(ipc_imem->dev,
+				"ch[%d] no CP confirmation, status=%d",
+				channel->channel_id, status);
+			return status;
+		}
+	}
+
+	return count;
+}
+
+int imem_sys_sio_receive(struct iosm_sio *ipc_sio, struct sk_buff *skb)
+{
+	dev_dbg(ipc_sio->dev, "sio receive[c-id:%d]: %d", ipc_sio->channel_id,
+		skb->len);
+
+	skb_queue_tail((&ipc_sio->rx_list), skb);
+
+	complete(&ipc_sio->read_sem);
+	wake_up_interruptible(&ipc_sio->poll_inq);
+
+	return 0;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
new file mode 100644
index 000000000000..c60295056499
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_IMEM_OPS_H
+#define IOSM_IPC_IMEM_OPS_H
+
+#include "iosm_ipc_mux_codec.h"
+
+/* Maximum length of the SIO device names */
+#define IPC_SIO_DEVNAME_LEN 32
+#define IPC_READ_TIMEOUT 500
+
+/* The delay in ms for deferring the unregister */
+#define SIO_UNREGISTER_DEFER_DELAY_MS 1
+
+/* Default delay until the CP PSI image is running and the modem updates the
+ * execution stage.
+ * unit : milliseconds
+ */
+#define PSI_START_DEFAULT_TIMEOUT 3000
+
+/* Default timeout when closing SIO, until the modem is in the
+ * running state.
+ * unit : milliseconds
+ */
+#define BOOT_CHECK_DEFAULT_TIMEOUT 400
+
+/**
+ * imem_sys_sio_open - Open a sio link to CP.
+ * @instance: Imem instance.
+ *
+ * Return: chnl id on success, -EINVAL or -1 for failure
+ */
+int imem_sys_sio_open(void *instance);
+
+/**
+ * imem_sys_mbim_open - Open an mbim link to CP.
+ * @instance: Imem instance.
+ *
+ * Return: chnl id on success, -EINVAL or -1 for failure
+ */
+int imem_sys_mbim_open(void *instance);
+
+/**
+ * imem_sys_sio_close - Release a sio link to CP.
+ * @ipc_sio: iosm sio instance.
+ */
+void imem_sys_sio_close(struct iosm_sio *ipc_sio);
+
+/**
+ * imem_sys_sio_read - Copy the rx data to the user space buffer and free the
+ *		       skbuf.
+ * @ipc_sio: Pointer to iosm_sio struct.
+ * @buf: Pointer to destination buffer.
+ * @size: Size of destination buffer.
+ * @skb: Pointer to source buffer.
+ *
+ * Return: Number of bytes read, -EFAULT and -EINVAL for failure
+ */
+ssize_t imem_sys_sio_read(struct iosm_sio *ipc_sio, unsigned char __user *buf,
+			  size_t size, struct sk_buff *skb);
+
+/**
+ * imem_sys_sio_write - Route the uplink buffer to CP.
+ * @ipc_sio: iosm_sio instance.
+ * @buf: Pointer to source buffer.
+ * @count: Number of data bytes to write.
+ * @blocking_write: if true, wait for UL data completion.
+ *
+ * Return: Number of bytes written, -EINVAL and -1 for failure
+ */
+int imem_sys_sio_write(struct iosm_sio *ipc_sio,
+		       const unsigned char __user *buf, int count,
+		       bool blocking_write);
+
+/**
+ * imem_sys_sio_receive - Receive downlink characters from CP; the downlink
+ *			  skbuf is added at the end of the downlink or rx list.
+ * @ipc_sio: Pointer to ipc char data-struct
+ * @skb: Pointer to sk buffer
+ *
+ * Returns: 0 on success, -EINVAL on failure
+ */
+int imem_sys_sio_receive(struct iosm_sio *ipc_sio, struct sk_buff *skb);
+
+int imem_sys_wwan_open(void *instance, int vlan_id);
+
+void imem_sys_wwan_close(void *instance, int vlan_id, int channel_id);
+
+int imem_sys_wwan_transmit(void *instance, int vlan_id, int channel_id,
+			   struct sk_buff *skb);
+
+/**
+ * wwan_channel_init - Initialize WWAN channels and the channel for MUX.
+ * @ipc_imem: Pointer to iosm_imem struct.
+ * @total_sessions: Total sessions.
+ * @mux_type: Type of mux protocol.
+ */
+void wwan_channel_init(struct iosm_imem *ipc_imem, int total_sessions,
+		       enum ipc_mux_protocol mux_type);
+#endif
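
[Editor's note: as a usage illustration, not part of this patch, here is a
hedged sketch of how a network-device transmit callback could sit on top of
imem_sys_wwan_transmit(). The priv structure and its fields are hypothetical;
the "free on failure, imem frees on success" contract and the 0-on-success
return are taken from the function's comment and code above.]

/* Hypothetical ndo_start_xmit glue, assuming one VLAN id per netdev. */
static netdev_tx_t iosm_wwan_start_xmit(struct sk_buff *skb,
					struct net_device *netdev)
{
	struct iosm_netdev_priv *priv = netdev_priv(netdev);

	/* imem frees the skb on success; drop it here otherwise. */
	if (imem_sys_wwan_transmit(priv->instance, priv->vlan_id,
				   priv->channel_id, skb) == 0)
		return NETDEV_TX_OK;

	dev_kfree_skb_any(skb);
	netdev->stats.tx_dropped++;
	return NETDEV_TX_OK;
}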
From patchwork Mon Nov 23 13:51:11 2020
X-Patchwork-Submitter: "Kumar, M Chetan" <m.chetan.kumar@intel.com>
X-Patchwork-Id: 330945
From: M Chetan Kumar <m.chetan.kumar@intel.com>
To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org
Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com
Subject: [RFC 06/18] net: iosm: channel configuration
Date: Mon, 23 Nov 2020 19:21:11 +0530
Message-Id: <20201123135123.48892-7-m.chetan.kumar@intel.com>
In-Reply-To: <20201123135123.48892-1-m.chetan.kumar@intel.com>
References: <20201123135123.48892-1-m.chetan.kumar@intel.com>

Defines pipes and channel configurations such as channel type, pipe
mappings, number of transfer descriptors, and transfer buffer size.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c | 87 +++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h | 57 ++++++++++++++++++++
 2 files changed, 144 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c b/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c
new file mode 100644
index 000000000000..d1d239218494
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include "iosm_ipc_chnl_cfg.h"
+
+/* Max. sizes of downlink buffers */
+#define IPC_MEM_MAX_DL_FLASH_BUF_SIZE (16 * 1024)
+#define IPC_MEM_MAX_DL_LOOPBACK_SIZE (1 * 1024 * 1024)
+#define IPC_MEM_MAX_DL_AT_BUF_SIZE 2048
+#define IPC_MEM_MAX_DL_RPC_BUF_SIZE (32 * 1024)
+#define IPC_MEM_MAX_DL_MBIM_BUF_SIZE IPC_MEM_MAX_DL_RPC_BUF_SIZE
+
+/* Max. transfer descriptors for a pipe. */
+#define IPC_MEM_MAX_TDS_FLASH_DL 3
+#define IPC_MEM_MAX_TDS_FLASH_UL 6
+#define IPC_MEM_MAX_TDS_AT 4
+#define IPC_MEM_MAX_TDS_RPC 4
+#define IPC_MEM_MAX_TDS_MBIM IPC_MEM_MAX_TDS_RPC
+#define IPC_MEM_MAX_TDS_LOOPBACK 11
+
+/* Accumulation backoff usec */
+#define IRQ_ACC_BACKOFF_OFF 0
+
+/* MUX acc backoff 1ms */
+#define IRQ_ACC_BACKOFF_MUX 1000
+
+/* Modem channel configuration table
+ * Always reserve element zero for the flash channel.
+ */
+static struct ipc_chnl_cfg modem_cfg[] = {
+	/* FLASH Channel */
+	{ IPC_MEM_FLASH_CH_ID, IPC_MEM_PIPE_0, IPC_MEM_PIPE_1,
+	  IPC_MEM_MAX_TDS_FLASH_UL, IPC_MEM_MAX_TDS_FLASH_DL,
+	  IPC_MEM_MAX_DL_FLASH_BUF_SIZE },
+	/* MBIM Channel */
+	{ IPC_MEM_MBIM_CTRL_CH_ID, IPC_MEM_PIPE_12, IPC_MEM_PIPE_13,
+	  IPC_MEM_MAX_TDS_MBIM, IPC_MEM_MAX_TDS_MBIM,
+	  IPC_MEM_MAX_DL_MBIM_BUF_SIZE },
+	/* RPC - 0 */
+	{ IPC_WWAN_DSS_ID_0, IPC_MEM_PIPE_2, IPC_MEM_PIPE_3,
+	  IPC_MEM_MAX_TDS_RPC, IPC_MEM_MAX_TDS_RPC,
+	  IPC_MEM_MAX_DL_RPC_BUF_SIZE },
+	/* IAT0 */
+	{ IPC_WWAN_DSS_ID_1, IPC_MEM_PIPE_4, IPC_MEM_PIPE_5, IPC_MEM_MAX_TDS_AT,
+	  IPC_MEM_MAX_TDS_AT, IPC_MEM_MAX_DL_AT_BUF_SIZE },
+	/* IAT1 */
+	{ IPC_WWAN_DSS_ID_2, IPC_MEM_PIPE_8, IPC_MEM_PIPE_9, IPC_MEM_MAX_TDS_AT,
+	  IPC_MEM_MAX_TDS_AT, IPC_MEM_MAX_DL_AT_BUF_SIZE },
+	/* Loopback */
+	{ IPC_WWAN_DSS_ID_3, IPC_MEM_PIPE_10, IPC_MEM_PIPE_11,
+	  IPC_MEM_MAX_TDS_LOOPBACK, IPC_MEM_MAX_TDS_LOOPBACK,
+	  IPC_MEM_MAX_DL_LOOPBACK_SIZE },
+	/* Trace */
+	{ IPC_WWAN_DSS_ID_4, IPC_MEM_PIPE_6, IPC_MEM_PIPE_7, IPC_MEM_TDS_TRC,
+	  IPC_MEM_TDS_TRC, IPC_MEM_MAX_DL_TRC_BUF_SIZE },
+	/* IP Mux */
+	{ IPC_MEM_MUX_IP_CH_VLAN_ID, IPC_MEM_PIPE_0, IPC_MEM_PIPE_1,
+	  IPC_MEM_MAX_TDS_MUX_LITE_UL, IPC_MEM_MAX_TDS_MUX_LITE_DL,
+	  IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE },
+};
+
+int ipc_chnl_cfg_get(struct ipc_chnl_cfg *chnl_cfg, int index,
+		     enum ipc_mux_protocol mux_protocol)
+{
+	int array_size = ARRAY_SIZE(modem_cfg);
+
+	if (index >= array_size) {
+		pr_err("index: %d and array_size %d", index, array_size);
+		return -1;
+	}
+
+	if (index == IPC_MEM_MUX_IP_CH_VLAN_ID)
+		chnl_cfg->accumulation_backoff = IRQ_ACC_BACKOFF_MUX;
+	else
+		chnl_cfg->accumulation_backoff = IRQ_ACC_BACKOFF_OFF;
+
+	chnl_cfg->ul_nr_of_entries = modem_cfg[index].ul_nr_of_entries;
+	chnl_cfg->dl_nr_of_entries = modem_cfg[index].dl_nr_of_entries;
+	chnl_cfg->dl_buf_size = modem_cfg[index].dl_buf_size;
+	chnl_cfg->id = modem_cfg[index].id;
+	chnl_cfg->ul_pipe = modem_cfg[index].ul_pipe;
+	chnl_cfg->dl_pipe = modem_cfg[index].dl_pipe;
+
+	return 0;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h b/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h
new file mode 100644
index 000000000000..42ba4e4849bb
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation
+ */
+
+#ifndef IOSM_IPC_CHNL_CFG_H
+#define IOSM_IPC_CHNL_CFG_H
+
+#include "iosm_ipc_mux.h"
+
+/* Number of TDs on the trace channel */
+#define IPC_MEM_TDS_TRC 32
+
+/* Trace channel TD buffer size. */
+#define IPC_MEM_MAX_DL_TRC_BUF_SIZE 8192
+
+/* Type of the WWAN ID */
+enum ipc_wwan_id {
+	IPC_WWAN_DSS_ID_0 = 257,
+	IPC_WWAN_DSS_ID_1,
+	IPC_WWAN_DSS_ID_2,
+	IPC_WWAN_DSS_ID_3,
+	IPC_WWAN_DSS_ID_4,
+};
+
+/**
+ * struct ipc_chnl_cfg - IPC channel configuration structure
+ * @id: VLAN ID
+ * @ul_pipe: Uplink datastream
+ * @dl_pipe: Downlink datastream
+ * @ul_nr_of_entries: Number of transfer descriptors on the uplink pipe
+ * @dl_nr_of_entries: Number of transfer descriptors on the downlink pipe
+ * @dl_buf_size: Downlink buffer size
+ * @accumulation_backoff: Time in usec for data accumulation
+ */
+struct ipc_chnl_cfg {
+	int id;
+	u32 ul_pipe;
+	u32 dl_pipe;
+	u32 ul_nr_of_entries;
+	u32 dl_nr_of_entries;
+	u32 dl_buf_size;
+	u32 accumulation_backoff;
+};
+
+/**
+ * ipc_chnl_cfg_get - Get pipe configuration.
+ * @chnl_cfg: Array of ipc_chnl_cfg struct
+ * @index: Channel index (up to MAX_CHANNELS)
+ * @mux_protocol: Active mux protocol
+ *
+ * Return: 0 on success and -1 on failure
+ */
+int ipc_chnl_cfg_get(struct ipc_chnl_cfg *chnl_cfg, int index,
+		     enum ipc_mux_protocol mux_protocol);
+
+#endif
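
[Editor's note: a minimal sketch, not part of the patch, of how a caller
walks the configuration table above; it mirrors the wwan_channel_init() loop
from patch 05, with error handling elided. The function name is hypothetical;
ipc_chnl_cfg_get() and MUX_UNKNOWN come from this series.]

/* ipc_chnl_cfg_get() returns 0 while the index is within the table. */
static void list_channel_configs(void)
{
	struct ipc_chnl_cfg cfg = { 0 };
	int i = 0;

	while (!ipc_chnl_cfg_get(&cfg, i++, MUX_UNKNOWN))
		pr_info("chnl id:%d ul_pipe:%u dl_pipe:%u dl_buf:%u\n",
			cfg.id, cfg.ul_pipe, cfg.dl_pipe, cfg.dl_buf_size);
}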
From patchwork Mon Nov 23 13:51:13 2020
X-Patchwork-Submitter: "Kumar, M Chetan" <m.chetan.kumar@intel.com>
X-Patchwork-Id: 330943
From: M Chetan Kumar <m.chetan.kumar@intel.com>
To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org
Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com
Subject: [RFC 08/18] net: iosm: MBIM control device
Date: Mon, 23 Nov 2020 19:21:13 +0530
Message-Id: <20201123135123.48892-9-m.chetan.kumar@intel.com>
In-Reply-To: <20201123135123.48892-1-m.chetan.kumar@intel.com>
References: <20201123135123.48892-1-m.chetan.kumar@intel.com>

Implements a char device for MBIM protocol communication and provides a
simple IOCTL to configure the maximum transfer buffer size.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_mbim.c | 205 ++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_mbim.h | 24 ++++
 2 files changed, 229 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mbim.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mbim.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_mbim.c b/drivers/net/wwan/iosm/iosm_ipc_mbim.c
new file mode 100644
index 000000000000..b263c77d6eb2
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mbim.c
@@ -0,0 +1,205 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include
+#include
+
+#include "iosm_ipc_imem_ops.h"
+#include "iosm_ipc_mbim.h"
+#include "iosm_ipc_sio.h"
+
+#define IOCTL_WDM_MAX_COMMAND _IOR('H', 0xA0, __u16)
+#define WDM_MAX_SIZE 4096
+#define IPC_READ_TIMEOUT 500
+
+/* MBIM IOCTL for the max buffer size. */
+static long ipc_mbim_fop_unlocked_ioctl(struct file *filp, unsigned int cmd,
+					unsigned long arg)
+{
+	struct iosm_sio *ipc_mbim =
+		container_of(filp->private_data, struct iosm_sio, misc);
+
+	if (cmd != IOCTL_WDM_MAX_COMMAND ||
+	    !access_ok((void __user *)arg, sizeof(ipc_mbim->wmaxcommand)))
+		return -EINVAL;
+
+	if (copy_to_user((void __user *)arg, &ipc_mbim->wmaxcommand,
+			 sizeof(ipc_mbim->wmaxcommand)))
+		return -EFAULT;
+
+	return 0;
+}
+
+/* Open a shared memory device and initialize the head of the rx skbuf list. */
+static int ipc_mbim_fop_open(struct inode *inode, struct file *filp)
+{
+	struct iosm_sio *ipc_mbim =
+		container_of(filp->private_data, struct iosm_sio, misc);
+
+	if (test_and_set_bit(0, &ipc_mbim->mbim_is_open))
+		return -EBUSY;
+
+	ipc_mbim->channel_id = imem_sys_mbim_open(ipc_mbim->imem_instance);
+
+	if (ipc_mbim->channel_id < 0)
+		return -EIO;
+
+	return 0;
+}
+
+/* Close a shared memory control device and free the rx skbuf list.
+ */
+static int ipc_mbim_fop_release(struct inode *inode, struct file *filp)
+{
+	struct iosm_sio *ipc_mbim =
+		container_of(filp->private_data, struct iosm_sio, misc);
+
+	if (ipc_mbim->channel_id < 0)
+		return -EINVAL;
+
+	imem_sys_sio_close(ipc_mbim);
+
+	clear_bit(0, &ipc_mbim->mbim_is_open);
+	return 0;
+}
+
+/* Copy the data from the skbuff to the user buffer */
+static ssize_t ipc_mbim_fop_read(struct file *filp, char __user *buf,
+				 size_t size, loff_t *l)
+{
+	struct sk_buff *skb = NULL;
+	struct iosm_sio *ipc_mbim;
+	bool is_blocking;
+
+	if (!access_ok(buf, size))
+		return -EINVAL;
+
+	ipc_mbim = container_of(filp->private_data, struct iosm_sio, misc);
+
+	is_blocking = !(filp->f_flags & O_NONBLOCK);
+
+	/* First provide the pending skbuf to the user. */
+	if (ipc_mbim->rx_pending_buf) {
+		skb = ipc_mbim->rx_pending_buf;
+		ipc_mbim->rx_pending_buf = NULL;
+	}
+
+	/* Check the rx queue until an skb is available */
+	while (!skb) {
+		skb = skb_dequeue(&ipc_mbim->rx_list);
+		if (skb)
+			break;
+
+		if (!is_blocking)
+			return -EAGAIN;
+
+		/* Suspend the user app and wait a certain time for data
+		 * from CP.
+		 */
+		if (WAIT_FOR_TIMEOUT(&ipc_mbim->read_sem, IPC_READ_TIMEOUT) < 0)
+			return -ETIMEDOUT;
+	}
+
+	return imem_sys_sio_read(ipc_mbim, buf, size, skb);
+}
+
+/* Route the user data to the shared memory layer. */
+static ssize_t ipc_mbim_fop_write(struct file *filp, const char __user *buf,
+				  size_t size, loff_t *l)
+{
+	struct iosm_sio *ipc_mbim;
+	bool is_blocking;
+
+	if (!access_ok(buf, size))
+		return -EINVAL;
+
+	ipc_mbim = container_of(filp->private_data, struct iosm_sio, misc);
+
+	is_blocking = !(filp->f_flags & O_NONBLOCK);
+
+	if (ipc_mbim->channel_id < 0)
+		return -EPERM;
+
+	return imem_sys_sio_write(ipc_mbim, buf, size, is_blocking);
+}
+
+/* Poll mechanism for applications that use nonblocking IO */
+static __poll_t ipc_mbim_fop_poll(struct file *filp, poll_table *wait)
+{
+	struct iosm_sio *ipc_mbim =
+		container_of(filp->private_data, struct iosm_sio, misc);
+	__poll_t mask = EPOLLOUT | EPOLLWRNORM; /* writable */
+
+	/* Just registers the wait_queue hook. This doesn't really wait. */
+	poll_wait(filp, &ipc_mbim->poll_inq, wait);
+
+	/* Test the fill level of the skbuf rx queue.
+	 */
+	if (!skb_queue_empty(&ipc_mbim->rx_list) || ipc_mbim->rx_pending_buf)
+		mask |= EPOLLIN | EPOLLRDNORM; /* readable */
+
+	return mask;
+}
+
+struct iosm_sio *ipc_mbim_init(struct iosm_imem *ipc_imem, const char *name)
+{
+	struct iosm_sio *ipc_mbim = kzalloc(sizeof(*ipc_mbim), GFP_KERNEL);
+
+	static const struct file_operations fops = {
+		.owner = THIS_MODULE,
+		.open = ipc_mbim_fop_open,
+		.release = ipc_mbim_fop_release,
+		.read = ipc_mbim_fop_read,
+		.write = ipc_mbim_fop_write,
+		.poll = ipc_mbim_fop_poll,
+		.unlocked_ioctl = ipc_mbim_fop_unlocked_ioctl,
+	};
+
+	if (!ipc_mbim)
+		return NULL;
+
+	ipc_mbim->dev = ipc_imem->dev;
+	ipc_mbim->pcie = ipc_imem->pcie;
+	ipc_mbim->imem_instance = ipc_imem;
+
+	ipc_mbim->wmaxcommand = WDM_MAX_SIZE;
+	ipc_mbim->channel_id = -1;
+	ipc_mbim->mbim_is_open = 0;
+
+	init_completion(&ipc_mbim->read_sem);
+
+	skb_queue_head_init(&ipc_mbim->rx_list);
+	init_waitqueue_head(&ipc_mbim->poll_inq);
+	init_waitqueue_head(&ipc_mbim->poll_outq);
+
+	strncpy(ipc_mbim->devname, name, sizeof(ipc_mbim->devname) - 1);
+	ipc_mbim->devname[IPC_SIO_DEVNAME_LEN - 1] = '\0';
+
+	ipc_mbim->misc.minor = MISC_DYNAMIC_MINOR;
+	ipc_mbim->misc.name = ipc_mbim->devname;
+	ipc_mbim->misc.fops = &fops;
+	ipc_mbim->misc.mode = IPC_CHAR_DEVICE_DEFAULT_MODE;
+
+	if (misc_register(&ipc_mbim->misc)) {
+		kfree(ipc_mbim);
+		return NULL;
+	}
+
+	dev_set_drvdata(ipc_mbim->misc.this_device, ipc_mbim);
+
+	return ipc_mbim;
+}
+
+void ipc_mbim_deinit(struct iosm_sio *ipc_mbim)
+{
+	misc_deregister(&ipc_mbim->misc);
+
+	/* Wake up the user app. */
+	complete(&ipc_mbim->read_sem);
+
+	ipc_pcie_kfree_skb(ipc_mbim->pcie, ipc_mbim->rx_pending_buf);
+	ipc_mbim->rx_pending_buf = NULL;
+
+	skb_queue_purge(&ipc_mbim->rx_list);
+	kfree(ipc_mbim);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_mbim.h b/drivers/net/wwan/iosm/iosm_ipc_mbim.h
new file mode 100644
index 000000000000..4d87c52903ed
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mbim.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_MBIM_H
+#define IOSM_IPC_MBIM_H
+
+/**
+ * ipc_mbim_init - Initialize and create a character device.
+ * @ipc_imem: Pointer to iosm_imem structure
+ * @name: Pointer to character device name
+ *
+ * Returns: Pointer to the iosm_sio instance on success, NULL on failure
+ */
+struct iosm_sio *ipc_mbim_init(struct iosm_imem *ipc_imem, const char *name);
+
+/**
+ * ipc_mbim_deinit - Free all the memory allocated for the ipc mbim structure.
+ * @ipc_mbim: Pointer to the ipc mbim data-struct
+ */
+void ipc_mbim_deinit(struct iosm_sio *ipc_mbim);
+
+#endif
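
[Editor's note: a hedged user-space sketch, not part of the patch, showing
how the char device above would be exercised. The /dev/iosm_mbim node name
is an assumption; IOCTL_WDM_MAX_COMMAND is redefined here exactly as in the
driver, and the returned value is WDM_MAX_SIZE (4096).]

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/types.h>

#define IOCTL_WDM_MAX_COMMAND _IOR('H', 0xA0, __u16)

int main(void)
{
	__u16 wmax = 0;
	int fd = open("/dev/iosm_mbim", O_RDWR); /* hypothetical node name */

	if (fd < 0)
		return 1;

	/* Query the maximum MBIM transfer buffer size. */
	if (ioctl(fd, IOCTL_WDM_MAX_COMMAND, &wmax) == 0)
		printf("max MBIM message size: %u\n", wmax);

	close(fd);
	return 0;
}

Subsequent read()/write() calls on the same descriptor carry MBIM control
messages, with O_NONBLOCK selecting the non-blocking paths shown above.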
From patchwork Mon Nov 23 13:51:15 2020
X-Patchwork-Submitter: "Kumar, M Chetan" <m.chetan.kumar@intel.com>
X-Patchwork-Id: 330942
From: M Chetan Kumar <m.chetan.kumar@intel.com>
To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org
Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com
Subject: [RFC 10/18] net: iosm: multiplex IP sessions
Date: Mon, 23 Nov 2020 19:21:15 +0530
Message-Id: <20201123135123.48892-11-m.chetan.kumar@intel.com>
In-Reply-To: <20201123135123.48892-1-m.chetan.kumar@intel.com>
References: <20201123135123.48892-1-m.chetan.kumar@intel.com>

Establishes IP sessions between host and device and handles session
management.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_mux.c | 455 +++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_mux.h | 344 ++++++++++++++++++++++++++
 2 files changed, 799 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux.c b/drivers/net/wwan/iosm/iosm_ipc_mux.c
new file mode 100644
index 000000000000..3b46ef98460d
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mux.c
@@ -0,0 +1,455 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include "iosm_ipc_mux_codec.h"
+
+/* At the beginning of the runtime phase, the IP MUX channel shall be created.
+ */
+static int mux_channel_create(struct iosm_mux *ipc_mux)
+{
+	int channel_id;
+
+	channel_id = imem_channel_alloc(ipc_mux->imem, ipc_mux->instance_id,
+					IPC_CTYPE_WWAN);
+
+	if (channel_id < 0) {
+		dev_err(ipc_mux->dev,
+			"allocation of the MUX channel id failed");
+		ipc_mux->state = MUX_S_ERROR;
+		ipc_mux->event = MUX_E_NOT_APPLICABLE;
+		return channel_id; /* MUX channel is not available. */
+	}
+
+	/* Establish the MUX channel in blocking mode. */
+	ipc_mux->channel = imem_channel_open(ipc_mux->imem, channel_id,
+					     IPC_HP_NET_CHANNEL_INIT);
+
+	if (!ipc_mux->channel) {
+		dev_err(ipc_mux->dev, "imem_channel_open failed");
+		ipc_mux->state = MUX_S_ERROR;
+		ipc_mux->event = MUX_E_NOT_APPLICABLE;
+		return -1; /* MUX channel is not available. */
+	}
+
+	/* Define the MUX active state properties. */
+	ipc_mux->state = MUX_S_ACTIVE;
+	ipc_mux->event = MUX_E_NO_ORDERS;
+	return channel_id;
+}
+
+/* Reset the session/if id state. */
+static void mux_session_free(struct iosm_mux *ipc_mux, int if_id)
+{
+	struct mux_session *if_entry;
+
+	if_entry = &ipc_mux->session[if_id];
+	/* Reset the session state. */
+	if_entry->wwan = NULL;
+}
+
+/* Create and send the session open command. */
+static struct mux_cmd_open_session_resp *
+mux_session_open_send(struct iosm_mux *ipc_mux, int if_id)
+{
+	struct mux_cmd_open_session_resp *open_session_resp;
+	struct mux_acb *acb = &ipc_mux->acb;
+	union mux_cmd_param param;
+
+	/* open_session commands to one ACB and start transmission. */
+	param.open_session.flow_ctrl = 0;
+	param.open_session.reserved = 0;
+	param.open_session.ipv4v6_hints = 0;
+	param.open_session.reserved2 = 0;
+	param.open_session.dl_head_pad_len = IPC_MEM_DL_ETH_OFFSET;
+
+	/* Finish and transfer the ACB. The user thread is suspended.
+	 * It is a blocking function call, until CP responds or a timeout.
+	 */
+	acb->wanted_response = MUX_CMD_OPEN_SESSION_RESP;
+	if (mux_dl_acb_send_cmds(ipc_mux, MUX_CMD_OPEN_SESSION, if_id, 0,
+				 &param, sizeof(param.open_session), true,
+				 false) ||
+	    acb->got_response != MUX_CMD_OPEN_SESSION_RESP) {
+		dev_err(ipc_mux->dev, "if_id %d: OPEN_SESSION send failed",
+			if_id);
+		return NULL;
+	}
+
+	open_session_resp = &ipc_mux->acb.got_param.open_session_resp;
+	if (open_session_resp->response != MUX_CMD_RESP_SUCCESS) {
+		dev_err(ipc_mux->dev,
+			"if_id %d, session open failed, response=%d", if_id,
+			(int)open_session_resp->response);
+		return NULL;
+	}
+
+	return open_session_resp;
+}
+
+/* Open the first IP session. */
+static bool mux_session_open(struct iosm_mux *ipc_mux,
+			     struct mux_session_open *session_open)
+{
+	struct mux_cmd_open_session_resp *open_session_resp;
+	int if_id;
+
+	/* Search for a free session interface id. */
+	if_id = session_open->if_id;
+	if (if_id < 0 || if_id >= ipc_mux->nr_sessions) {
+		dev_err(ipc_mux->dev, "invalid interface id=%d", if_id);
+		return false;
+	}
+
+	/* Create and send the session open command.
+	 * It is a blocking function call, until CP responds or a timeout.
+	 */
+	open_session_resp = mux_session_open_send(ipc_mux, if_id);
+	if (!open_session_resp) {
+		mux_session_free(ipc_mux, if_id);
+		session_open->if_id = -1;
+		return false;
+	}
+
+	/* Initialize the uplink skb accumulator.
*/ + skb_queue_head_init(&ipc_mux->session[if_id].ul_list); + + ipc_mux->session[if_id].dl_head_pad_len = IPC_MEM_DL_ETH_OFFSET; + ipc_mux->session[if_id].ul_head_pad_len = + open_session_resp->ul_head_pad_len; + ipc_mux->session[if_id].wwan = ipc_mux->wwan; + + /* Reset the flow ctrl stats of the session */ + ipc_mux->session[if_id].flow_ctl_en_cnt = 0; + ipc_mux->session[if_id].flow_ctl_dis_cnt = 0; + ipc_mux->session[if_id].ul_flow_credits = 0; + ipc_mux->session[if_id].net_tx_stop = false; + ipc_mux->session[if_id].flow_ctl_mask = 0; + + /* Save and return the assigned if id. */ + session_open->if_id = if_id; + + return true; +} + +/* Free pending session UL packet. */ +static void mux_session_reset(struct iosm_mux *ipc_mux, int if_id) +{ + /* Reset the session/if id state. */ + mux_session_free(ipc_mux, if_id); + + /* Empty the uplink skb accumulator. */ + skb_queue_purge(&ipc_mux->session[if_id].ul_list); +} + +static void mux_session_close(struct iosm_mux *ipc_mux, + struct mux_session_close *msg) +{ + int if_id; + + /* Copy the session interface id. */ + if_id = msg->if_id; + + if (if_id < 0 || if_id >= ipc_mux->nr_sessions) { + dev_err(ipc_mux->dev, "invalid session id %d", if_id); + return; + } + + /* Create and send the session close command. + * It is a blocking function call, until CP responds or timeout. + */ + if (mux_dl_acb_send_cmds(ipc_mux, MUX_CMD_CLOSE_SESSION, if_id, 0, NULL, + 0, true, false)) + dev_err(ipc_mux->dev, "if_id %d: CLOSE_SESSION send failed", + if_id); + + /* Reset the flow ctrl stats of the session */ + ipc_mux->session[if_id].flow_ctl_en_cnt = 0; + ipc_mux->session[if_id].flow_ctl_dis_cnt = 0; + ipc_mux->session[if_id].flow_ctl_mask = 0; + + mux_session_reset(ipc_mux, if_id); +} + +static void mux_channel_close(struct iosm_mux *ipc_mux, + struct mux_channel_close *channel_close_p) +{ + int i; + + /* Free pending session UL packet. */ + for (i = 0; i < ipc_mux->nr_sessions; i++) + if (ipc_mux->session[i].wwan) + mux_session_reset(ipc_mux, i); + + imem_channel_close(ipc_mux->imem, ipc_mux->channel_id); + + /* Reset the MUX object. */ + ipc_mux->state = MUX_S_INACTIVE; + ipc_mux->event = MUX_E_INACTIVE; +} + +/* CP has interrupted AP. If AP is in IP MUX mode, execute the pending ops. */ +static int mux_schedule(struct iosm_mux *ipc_mux, union mux_msg *msg) +{ + enum mux_event order; + bool success; + + if (!ipc_mux->initialized) + return -1; /* Shall be used as normal IP channel. */ + + order = msg->common.event; + + switch (ipc_mux->state) { + case MUX_S_INACTIVE: + if (order != MUX_E_MUX_SESSION_OPEN) + /* Wait for the request to open a session */ + return -1; + + if (ipc_mux->event == MUX_E_INACTIVE) + /* Establish the MUX channel and the new state. */ + ipc_mux->channel_id = mux_channel_create(ipc_mux); + + if (ipc_mux->state != MUX_S_ACTIVE) + /* Missing the MUX channel. */ + return -1; + + /* Disable the TD update timer and open the first IP session. */ + imem_td_update_timer_suspend(ipc_mux->imem, true); + ipc_mux->event = MUX_E_MUX_SESSION_OPEN; + success = mux_session_open(ipc_mux, &msg->session_open); + + imem_td_update_timer_suspend(ipc_mux->imem, false); + return success ? 
ipc_mux->channel_id : -1; + + case MUX_S_ACTIVE: + switch (order) { + case MUX_E_MUX_SESSION_OPEN: + /* Disable the TD update timer and open a session */ + imem_td_update_timer_suspend(ipc_mux->imem, true); + ipc_mux->event = MUX_E_MUX_SESSION_OPEN; + success = mux_session_open(ipc_mux, &msg->session_open); + imem_td_update_timer_suspend(ipc_mux->imem, false); + return success ? ipc_mux->channel_id : -1; + + case MUX_E_MUX_SESSION_CLOSE: + /* Release an IP session. */ + ipc_mux->event = MUX_E_MUX_SESSION_CLOSE; + mux_session_close(ipc_mux, &msg->session_close); + return ipc_mux->channel_id; + + case MUX_E_MUX_CHANNEL_CLOSE: + /* Close the MUX channel pipes. */ + ipc_mux->event = MUX_E_MUX_CHANNEL_CLOSE; + mux_channel_close(ipc_mux, &msg->channel_close); + return ipc_mux->channel_id; + + default: + /* Invalid order. */ + return -1; + } + + default: + dev_err(ipc_mux->dev, + "unexpected MUX transition: state=%d, event=%d", + ipc_mux->state, ipc_mux->event); + return -1; + } +} + +struct iosm_mux *mux_init(struct ipc_mux_config *mux_cfg, + struct iosm_imem *imem) +{ + struct iosm_mux *ipc_mux = kzalloc(sizeof(*ipc_mux), GFP_KERNEL); + int i, ul_tds, ul_td_size; + struct mux_session *session; + struct sk_buff_head *free_list; + struct sk_buff *skb; + + if (!ipc_mux) + return NULL; + + ipc_mux->protocol = mux_cfg->protocol; + ipc_mux->ul_flow = mux_cfg->ul_flow; + ipc_mux->nr_sessions = mux_cfg->nr_sessions; + ipc_mux->instance_id = mux_cfg->instance_id; + ipc_mux->wwan_q_offset = 0; + + ipc_mux->pcie = imem->pcie; + ipc_mux->imem = imem; + ipc_mux->ipc_protocol = imem->ipc_protocol; + ipc_mux->dev = imem->dev; + ipc_mux->wwan = imem->wwan; + + ipc_mux->session = + kcalloc(ipc_mux->nr_sessions, sizeof(*session), GFP_KERNEL); + + /* Get the reference to the id list. */ + session = ipc_mux->session; + + /* Get the reference to the UL ADB list. */ + free_list = &ipc_mux->ul_adb.free_list; + + /* Initialize the list with free ADB. */ + skb_queue_head_init(free_list); + + ul_td_size = IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE; + + ul_tds = IPC_MEM_MAX_TDS_MUX_LITE_UL; + + ipc_mux->ul_adb.dest_skb = NULL; + + ipc_mux->initialized = true; + ipc_mux->adb_prep_ongoing = false; + ipc_mux->size_needed = 0; + ipc_mux->ul_data_pend_bytes = 0; + ipc_mux->state = MUX_S_INACTIVE; + ipc_mux->ev_mux_net_transmit_pending = false; + ipc_mux->tx_transaction_id = 0; + ipc_mux->rr_next_session = 0; + ipc_mux->event = MUX_E_INACTIVE; + ipc_mux->channel_id = -1; + ipc_mux->channel = NULL; + + /* Allocate the list of UL ADB. */ + for (i = 0; i < ul_tds; i++) { + dma_addr_t mapping; + + skb = ipc_pcie_alloc_skb(ipc_mux->pcie, ul_td_size, GFP_ATOMIC, + &mapping, DMA_TO_DEVICE, 0); + if (!skb) { + ipc_mux_deinit(ipc_mux); + return NULL; + } + /* Extend the UL ADB list. */ + skb_queue_tail(free_list, skb); + } + + return ipc_mux; +} + +/* Informs the network stack to restart transmission for all opened session if + * Flow Control is not ON for that session. + */ +static void mux_restart_tx_for_all_sessions(struct iosm_mux *ipc_mux) +{ + struct mux_session *session; + int idx; + + for (idx = 0; idx < ipc_mux->nr_sessions; idx++) { + session = &ipc_mux->session[idx]; + + if (!session->wwan) + continue; + + /* If flow control of the session is OFF and if there was tx + * stop then restart. Inform the network interface to restart + * sending data. 
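+ * Sessions whose flow_ctl_mask is still set remain stopped.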
+ */ + if (session->flow_ctl_mask == 0) { + session->net_tx_stop = false; + mux_netif_tx_flowctrl(session, idx, false); + } + } +} + +/* Informs the network stack to stop sending further pkt for all opened + * sessions + */ +static void mux_stop_netif_for_all_sessions(struct iosm_mux *ipc_mux) +{ + struct mux_session *session; + int idx; + + for (idx = 0; idx < ipc_mux->nr_sessions; idx++) { + session = &ipc_mux->session[idx]; + + if (!session->wwan) + continue; + + mux_netif_tx_flowctrl(session, session->if_id, true); + } +} + +void ipc_mux_check_n_restart_tx(struct iosm_mux *ipc_mux) +{ + if (ipc_mux->ul_flow == MUX_UL) { + int low_thresh = IPC_MEM_MUX_UL_FLOWCTRL_LOW_B; + + if (ipc_mux->ul_data_pend_bytes < low_thresh) + mux_restart_tx_for_all_sessions(ipc_mux); + } +} + +int ipc_mux_get_max_sessions(struct iosm_mux *ipc_mux) +{ + return ipc_mux ? ipc_mux->nr_sessions : -1; +} + +enum ipc_mux_protocol ipc_mux_get_active_protocol(struct iosm_mux *ipc_mux) +{ + return ipc_mux ? ipc_mux->protocol : MUX_UNKNOWN; +} + +int ipc_mux_open_session(struct iosm_mux *ipc_mux, int session_nr) +{ + struct mux_session_open *session_open; + union mux_msg mux_msg; + + session_open = &mux_msg.session_open; + session_open->event = MUX_E_MUX_SESSION_OPEN; + + session_open->if_id = session_nr; + ipc_mux->session[session_nr].flags |= IPC_MEM_WWAN_MUX; + return mux_schedule(ipc_mux, &mux_msg); +} + +int ipc_mux_close_session(struct iosm_mux *ipc_mux, int session_nr) +{ + struct mux_session_close *session_close; + union mux_msg mux_msg; + int ret_val; + + session_close = &mux_msg.session_close; + session_close->event = MUX_E_MUX_SESSION_CLOSE; + + session_close->if_id = session_nr; + ret_val = mux_schedule(ipc_mux, &mux_msg); + ipc_mux->session[session_nr].flags &= ~IPC_MEM_WWAN_MUX; + + return ret_val; +} + +void ipc_mux_deinit(struct iosm_mux *ipc_mux) +{ + struct mux_channel_close *channel_close; + struct sk_buff_head *free_list; + union mux_msg mux_msg; + struct sk_buff *skb; + + if (!ipc_mux) + return; + + if (!ipc_mux->initialized) + return; + + mux_stop_netif_for_all_sessions(ipc_mux); + + channel_close = &mux_msg.channel_close; + channel_close->event = MUX_E_MUX_CHANNEL_CLOSE; + mux_schedule(ipc_mux, &mux_msg); + + /* Empty the ADB free list. */ + free_list = &ipc_mux->ul_adb.free_list; + + /* Remove from the head of the downlink queue. */ + while ((skb = skb_dequeue(free_list))) + ipc_pcie_kfree_skb(ipc_mux->pcie, skb); + + if (ipc_mux->channel) { + ipc_mux->channel->ul_pipe.is_open = false; + ipc_mux->channel->dl_pipe.is_open = false; + } + + kfree(ipc_mux->session); + kfree(ipc_mux); +} diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux.h b/drivers/net/wwan/iosm/iosm_ipc_mux.h new file mode 100644 index 000000000000..4df5e1a6f7ce --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_mux.h @@ -0,0 +1,344 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2020 Intel Corporation. + */ + +#ifndef IOSM_IPC_MUX_H +#define IOSM_IPC_MUX_H + +#include "iosm_ipc_protocol.h" + +/* Size of the buffer for the IP MUX data buffer. */ +#define IPC_MEM_MAX_DL_MUX_BUF_SIZE (16 * 1024) +#define IPC_MEM_MAX_UL_ADB_BUF_SIZE IPC_MEM_MAX_DL_MUX_BUF_SIZE + +/* Size of the buffer for the IP MUX Lite data buffer. 
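+ * One ADGH frame, header and padding included, must fit into a buffer of this size.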
*/ +#define IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE (2 * 1024) + +/* TD counts for IP MUX Lite */ +#define IPC_MEM_MAX_TDS_MUX_LITE_UL 800 +#define IPC_MEM_MAX_TDS_MUX_LITE_DL 1200 + +/* open session request (AP->CP) */ +#define MUX_CMD_OPEN_SESSION 1 + +/* response to open session request (CP->AP) */ +#define MUX_CMD_OPEN_SESSION_RESP 2 + +/* close session request (AP->CP) */ +#define MUX_CMD_CLOSE_SESSION 3 + +/* response to close session request (CP->AP) */ +#define MUX_CMD_CLOSE_SESSION_RESP 4 + +/* Flow control command with mask of the flow per queue/flow. */ +#define MUX_LITE_CMD_FLOW_CTL 5 + +/* ACK the flow control command. Shall have the same Transaction ID as the + * matching FLOW_CTL command. + */ +#define MUX_LITE_CMD_FLOW_CTL_ACK 6 + +/* Command for report packet indicating link quality metrics. */ +#define MUX_LITE_CMD_LINK_STATUS_REPORT 7 + +/* Response to a report packet */ +#define MUX_LITE_CMD_LINK_STATUS_REPORT_RESP 8 + +/* Used to reset a command/response state. */ +#define MUX_CMD_INVALID 255 + +/* command response : command processed successfully */ +#define MUX_CMD_RESP_SUCCESS 0 + +/* MUX for vlan devices */ +#define IPC_MEM_WWAN_MUX BIT(0) + +/* Initiated actions to change the state of the MUX object. */ +enum mux_event { + MUX_E_INACTIVE, /* No initiated actions. */ + MUX_E_MUX_SESSION_OPEN, /* Create the MUX channel and a session. */ + MUX_E_MUX_SESSION_CLOSE, /* Release a session. */ + MUX_E_MUX_CHANNEL_CLOSE, /* Release the MUX channel. */ + MUX_E_NO_ORDERS, /* No MUX order. */ + MUX_E_NOT_APPLICABLE, /* Defective IP MUX. */ +}; + +struct mux_session_open { + enum mux_event event; + int if_id; +}; + +/* MUX session close command. */ +struct mux_session_close { + enum mux_event event; + int if_id; +}; + +/* MUX channel close command. */ +struct mux_channel_close { + enum mux_event event; +}; + +/* Default message type to find out the right message type. */ +struct mux_common { + enum mux_event event; +}; + +/* List of the MUX orders. */ +union mux_msg { + struct mux_session_open session_open; + struct mux_session_close session_close; + struct mux_channel_close channel_close; + struct mux_common common; +}; + +/* Parameter definition of the open session command. */ +struct mux_cmd_open_session { + u32 flow_ctrl : 1; /* 0: Flow control disabled (flow allowed). */ + /* 1: Flow control enabled (flow not allowed)*/ + u32 reserved : 7; /* Reserved. Set to zero. */ + u32 ipv4v6_hints : 1; /* 0: IPv4/IPv6 hints not supported.*/ + /* 1: IPv4/IPv6 hints supported*/ + u32 reserved2 : 23; /* Reserved. Set to zero. */ + u32 dl_head_pad_len; /* Maximum length supported */ + /* for DL head padding on a datagram. */ +}; + +/* Parameter definition of the open session response. */ +struct mux_cmd_open_session_resp { + u32 response; /* Response code */ + u32 flow_ctrl : 1; /* 0: Flow control disabled (flow allowed). */ + /* 1: Flow control enabled (flow not allowed) */ + u32 reserved : 7; /* Reserved. Set to zero. */ + u32 ipv4v6_hints : 1; /* 0: IPv4/IPv6 hints not supported */ + /* 1: IPv4/IPv6 hints supported */ + u32 reserved2 : 23; /* Reserved. Set to zero. */ + u32 ul_head_pad_len; /* Actual length supported for */ + /* UL head padding on a datagram. */ +}; + +/* Parameter definition of the close session response code */ +struct mux_cmd_close_session_resp { + u32 response; +}; + +/* Parameter definition of the flow control command.
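+ * A mask of all ones enables flow control for the whole interface; a mask of zero disables it.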
*/ +struct mux_cmd_flow_ctl { + u32 mask; /* indicating the desired flow control */ + /* state for various flows/queues */ +}; + +/* Parameter definition of the link status report code. */ +struct mux_cmd_link_status_report { + u8 payload[1]; +}; + +/* Parameter definition of the link status report response code. */ +struct mux_cmd_link_status_report_resp { + u32 response; +}; + +/** + * union mux_cmd_param - Union-definition of the command parameters. + * @open_session: Inband command for open session + * @open_session_resp: Inband command for open session response + * @close_session_resp: Inband command for close session response + * @flow_ctl: In-band flow control on the opened interfaces + * @link_status: In-band Link Status Report + * @link_status_resp: In-band command for link status report response + */ +union mux_cmd_param { + struct mux_cmd_open_session open_session; + struct mux_cmd_open_session_resp open_session_resp; + struct mux_cmd_close_session_resp close_session_resp; + struct mux_cmd_flow_ctl flow_ctl; + struct mux_cmd_link_status_report link_status; + struct mux_cmd_link_status_report_resp link_status_resp; +}; + +/* States of the MUX object. */ +enum mux_state { + MUX_S_INACTIVE, /* IP MUX is unused. */ + MUX_S_ACTIVE, /* IP MUX channel is available. */ + MUX_S_ERROR, /* Defective IP MUX. */ +}; + +/* Supported MUX protocols. */ +enum ipc_mux_protocol { + MUX_UNKNOWN, + MUX_LITE, +}; + +/* Supported UL data transfer methods. */ +enum ipc_mux_ul_flow { + MUX_UL_UNKNOWN, + MUX_UL, /* Normal UL data transfer */ + MUX_UL_ON_CREDITS, /* UL data transfer will be based on credits */ +}; + +/* One entry in the list of MUX sessions. */ +struct mux_session { + struct iosm_wwan *wwan; /* Network i/f used for communication */ + int if_id; /* i/f id for session open message. */ + u32 flags; + u32 ul_head_pad_len; /* Nr of bytes for UL head padding. */ + u32 dl_head_pad_len; /* Nr of bytes for DL head padding. */ + struct sk_buff_head ul_list; /* skb entries for an ADT. */ + u32 flow_ctl_mask; /* UL flow control */ + u32 flow_ctl_en_cnt; /* Flow control Enable cmd count */ + u32 flow_ctl_dis_cnt; /* Flow Control Disable cmd count */ + int ul_flow_credits; /* UL flow credits */ + u8 net_tx_stop : 1; + u8 flush : 1; /* flush net interface ? */ +}; + +/* State of a single UL data block. */ +struct mux_adb { + struct sk_buff *dest_skb; /* Current UL skb for the data block. */ + u8 *buf; /* ADB memory. */ + struct mux_adgh *adgh; /* ADGH pointer */ + struct sk_buff *qlth_skb; /* QLTH pointer */ + u32 *next_table_index; /* Pointer to next table index. */ + struct sk_buff_head free_list; /* List of alloc. ADB for the UL sess.*/ + int size; /* Size of the ADB memory. */ + u32 if_cnt; /* Statistic counter */ + u32 dg_cnt_total; + u32 payload_size; +}; + +/* Temporary ACB state. */ +struct mux_acb { + struct sk_buff *skb; /* Used UL skb. */ + int if_id; /* Session id. */ + u32 wanted_response; + u32 got_response; + u32 cmd; + union mux_cmd_param got_param; /* Received command/response parameter */ +}; + +/** + * struct iosm_mux - State of the data multiplexing over an IP channel. + * @dev: pointer to device structure + * @session: List of the MUX sessions.
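+ * One array entry per interface id, allocated in mux_init().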
+ * @channel: Reference to the IP MUX channel + * @pcie: Pointer to iosm_pcie struct + * @imem: Pointer to iosm_imem + * @wwan: Pointer to iosm_wwan + * @ipc_protocol: Pointer to iosm_protocol + * @channel_id: Channel ID for MUX + * @protocol: Type of the MUX protocol + * @ul_flow: UL Flow type + * @nr_sessions: Number of sessions + * @instance_id: Instance ID + * @state: States of the MUX object + * @event: Initiated actions to change the state of the MUX object + * @tx_transaction_id: Transaction id for the ACB command. + * @rr_next_session: Next session number for round robin. + * @ul_adb: State of the UL ADB/ADGH. + * @size_needed: Variable to store the size needed during ADB preparation + * @ul_data_pend_bytes: Pending UL data to be processed in bytes + * @acb: Temporary ACB state + * @wwan_q_offset: This will hold the offset of the given instance. + * Useful while passing or receiving packets from + * wwan/imem layer. + * @initialized: MUX object is initialized + * @ev_mux_net_transmit_pending: + * 0 means inform the IPC tasklet to pass the + * accumulated uplink ADB to CP. + * @adb_prep_ongoing: Flag for ADB preparation status + */ +struct iosm_mux { + struct device *dev; + struct mux_session *session; + struct ipc_mem_channel *channel; + struct iosm_pcie *pcie; + struct iosm_imem *imem; + struct iosm_wwan *wwan; + struct iosm_protocol *ipc_protocol; + int channel_id; + enum ipc_mux_protocol protocol; + enum ipc_mux_ul_flow ul_flow; + int nr_sessions; + int instance_id; + enum mux_state state; + enum mux_event event; + u32 tx_transaction_id; + int rr_next_session; + struct mux_adb ul_adb; + int size_needed; + long long ul_data_pend_bytes; + struct mux_acb acb; + int wwan_q_offset; + u8 initialized : 1; + u8 ev_mux_net_transmit_pending : 1; + u8 adb_prep_ongoing : 1; +}; + +/* MUX configuration structure */ +struct ipc_mux_config { + enum ipc_mux_protocol protocol; + enum ipc_mux_ul_flow ul_flow; + int nr_sessions; + int instance_id; +}; + +/** + * mux_init - Allocates and initializes MUX instance data + * @mux_cfg: Pointer to MUX configuration structure + * @ipc_imem: Pointer to imem data-struct + * + * Returns: Initialized mux pointer on success else NULL + */ +struct iosm_mux *mux_init(struct ipc_mux_config *mux_cfg, + struct iosm_imem *ipc_imem); + +/** + * ipc_mux_deinit - Deallocates MUX instance data + * @ipc_mux: Pointer to the MUX instance data. + */ +void ipc_mux_deinit(struct iosm_mux *ipc_mux); + +/** + * ipc_mux_check_n_restart_tx - Restart the net interface tx queue if the UL + * flow type is legacy for the given instance and + * the device has set flow control to off. + * @ipc_mux: Pointer to MUX data-struct + */ +void ipc_mux_check_n_restart_tx(struct iosm_mux *ipc_mux); + +/** + * ipc_mux_get_active_protocol - Returns the active MUX protocol type. + * @ipc_mux: Pointer to MUX data-struct + * + * Returns: enum of type ipc_mux_protocol + */ +enum ipc_mux_protocol ipc_mux_get_active_protocol(struct iosm_mux *ipc_mux); + +/** + * ipc_mux_open_session - Opens a MUX session. + * @ipc_mux: Pointer to MUX data-struct + * @session_nr: Interface ID or session number + * + * Returns: channel id on success, -1 on failure + */ +int ipc_mux_open_session(struct iosm_mux *ipc_mux, int session_nr); + +/** + * ipc_mux_close_session - Closes a MUX session.
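+ * Sends the CLOSE_SESSION command to CP and resets the local session state.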
+ * @ipc_mux: Pointer to MUX data-struct + * @session_nr: Interface ID or session number + * + * Returns: channel id on success, -1 on failure + */ +int ipc_mux_close_session(struct iosm_mux *ipc_mux, int session_nr); + +/** + * ipc_mux_get_max_sessions - Returns the maximum sessions supported on the + * provided MUX instance. + * @ipc_mux: Pointer to MUX data-struct + * + * Returns: Number of sessions supported on success and -1 on failure + */ +int ipc_mux_get_max_sessions(struct iosm_mux *ipc_mux); +#endif From patchwork Mon Nov 23 13:51:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kumar, M Chetan" X-Patchwork-Id: 330941 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0ADEEC64E75 for ; Mon, 23 Nov 2020 13:53:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C60B020702 for ; Mon, 23 Nov 2020 13:53:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387733AbgKWNwZ (ORCPT ); Mon, 23 Nov 2020 08:52:25 -0500 Received: from mga14.intel.com ([192.55.52.115]:1467 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730688AbgKWNwZ (ORCPT ); Mon, 23 Nov 2020 08:52:25 -0500 IronPort-SDR: +W4Dee8OSzpujFrJeT2SM0GBQPItkLLv51ZbuJHuxjH6TkwtMpx814gHOO1U4ZJqsBRB1NWPOs kTJsnRz51Apg== X-IronPort-AV: E=McAfee;i="6000,8403,9813"; a="170981482" X-IronPort-AV: E=Sophos;i="5.78,363,1599548400"; d="scan'208";a="170981482" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 05:52:22 -0800 IronPort-SDR: 9xfCZrTZvs6p1EpnLmcJ76kWy9VFeimxVBQx8W+hZ2MlUVLWCMi+tK/fschAnrbYbJ8RICeFsN NXMNELNiCIFw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.78,363,1599548400"; d="scan'208";a="370035591" Received: from bgsxx0031.iind.intel.com ([10.106.222.40]) by orsmga007.jf.intel.com with ESMTP; 23 Nov 2020 05:52:20 -0800 From: M Chetan Kumar To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com Subject: [RFC 11/18] net: iosm: encode or decode datagram Date: Mon, 23 Nov 2020 19:21:16 +0530 Message-Id: <20201123135123.48892-12-m.chetan.kumar@intel.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: <20201123135123.48892-1-m.chetan.kumar@intel.com> References: <20201123135123.48892-1-m.chetan.kumar@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org 1) Encode UL packet into datagram. 2) Decode DL datagram and route it to network layer. 3) Support credit-based flow control.
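For the plain MUX_UL mode (no credits negotiated), UL flow control is a byte-count hysteresis on ipc_mux->ul_data_pend_bytes. Below is a condensed sketch of that logic, combining the stop check from mux_ul_bytes_credits_check() in this patch with the restart check from ipc_mux_check_n_restart_tx() in the preceding patch; the wrapper name sketch_ul_watermarks is hypothetical and only illustrates how the two halves interact:

static void sketch_ul_watermarks(struct iosm_mux *ipc_mux)
{
	/* Encoder side: too many bytes in flight, stop all session TX. */
	if (ipc_mux->ul_data_pend_bytes >= IPC_MEM_MUX_UL_FLOWCTRL_HIGH_B)
		mux_stop_tx_for_all_sessions(ipc_mux);
	/* Completion side: enough has drained, wake the net interfaces. */
	else if (ipc_mux->ul_data_pend_bytes < IPC_MEM_MUX_UL_FLOWCTRL_LOW_B)
		mux_restart_tx_for_all_sessions(ipc_mux);
}

The gap between the high (110 KB) and low (10 KB) watermarks keeps the netif queues from toggling on every encoded ADGH frame.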
Signed-off-by: M Chetan Kumar --- drivers/net/wwan/iosm/iosm_ipc_mux_codec.c | 902 +++++++++++++++++++++++++++++ drivers/net/wwan/iosm/iosm_ipc_mux_codec.h | 194 +++++++ 2 files changed, 1096 insertions(+) create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.c create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.h diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux_codec.c b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.c new file mode 100644 index 000000000000..54437651704e --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.c @@ -0,0 +1,902 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Intel Corporation. + */ + +#include + +#include "iosm_ipc_imem_ops.h" +#include "iosm_ipc_mux_codec.h" +#include "iosm_ipc_task_queue.h" + +/* Test the link power state and send a MUX command in blocking mode. */ +static int mux_tq_cmd_send(void *instance, int arg, void *msg, size_t size) +{ + struct iosm_mux *ipc_mux = ((struct iosm_imem *)instance)->mux; + const struct mux_acb *acb = msg; + + skb_queue_tail(&ipc_mux->channel->ul_list, acb->skb); + imem_ul_send(ipc_mux->imem); + + return 0; +} + +static int mux_acb_send(struct iosm_mux *ipc_mux, bool blocking) +{ + struct completion *completion = &ipc_mux->channel->ul_sem; + + if (ipc_task_queue_send_task(ipc_mux->imem, mux_tq_cmd_send, 0, + &ipc_mux->acb, sizeof(ipc_mux->acb), + false)) { + dev_err(ipc_mux->dev, "unable to send mux command"); + return -1; + } + + /* if blocking, suspend the app and wait for irq in the flash or + * crash phase. return false on timeout to indicate failure. + */ + if (blocking) { + u32 wait_time_milliseconds = IPC_MUX_CMD_RUN_DEFAULT_TIMEOUT; + + reinit_completion(completion); + + if (WAIT_FOR_TIMEOUT(completion, wait_time_milliseconds) == 0) { + dev_err(ipc_mux->dev, "ch[%d] timeout", + ipc_mux->channel_id); + ipc_uevent_send(ipc_mux->imem->dev, UEVENT_MDM_TIMEOUT); + return -ETIMEDOUT; + } + } + + return 0; +} + +/* Prepare mux Command */ +static struct mux_lite_cmdh *mux_lite_add_cmd(struct iosm_mux *ipc_mux, u32 cmd, + struct mux_acb *acb, void *param, + u32 param_size) +{ + struct mux_lite_cmdh *cmdh = (struct mux_lite_cmdh *)acb->skb->data; + + cmdh->signature = MUX_SIG_CMDH; + cmdh->command_type = cmd; + cmdh->if_id = acb->if_id; + + acb->cmd = cmd; + + cmdh->cmd_len = offsetof(struct mux_lite_cmdh, param) + param_size; + cmdh->transaction_id = ipc_mux->tx_transaction_id++; + + if (param) + memcpy(&cmdh->param, param, param_size); + + skb_put(acb->skb, cmdh->cmd_len); + + return cmdh; +} + +static int mux_acb_alloc(struct iosm_mux *ipc_mux) +{ + struct mux_acb *acb = &ipc_mux->acb; + struct sk_buff *skb; + dma_addr_t mapping; + + /* Allocate skb memory for the uplink buffer. */ + skb = ipc_pcie_alloc_skb(ipc_mux->pcie, MUX_MAX_UL_ACB_BUF_SIZE, + GFP_ATOMIC, &mapping, DMA_TO_DEVICE, 0); + if (!skb) + return -ENOMEM; + + /* Save the skb address. 
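+ * One ACB skb carries exactly one command header.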
*/ + acb->skb = skb; + + memset(skb->data, 0, MUX_MAX_UL_ACB_BUF_SIZE); + + return 0; +} + +int mux_dl_acb_send_cmds(struct iosm_mux *ipc_mux, u32 cmd_type, u8 if_id, + u32 transaction_id, union mux_cmd_param *param, + size_t res_size, bool blocking, bool respond) +{ + struct mux_acb *acb = &ipc_mux->acb; + struct mux_lite_cmdh *ack_lite; + int ret = 0; + + acb->if_id = if_id; + ret = mux_acb_alloc(ipc_mux); + if (ret) + return ret; + + ack_lite = mux_lite_add_cmd(ipc_mux, cmd_type, acb, param, res_size); + if (respond) + ack_lite->transaction_id = (u32)transaction_id; + + ret = mux_acb_send(ipc_mux, blocking); + + return ret; +} + +void mux_netif_tx_flowctrl(struct mux_session *session, int idx, bool on) +{ + /* Inform the network interface to start/stop flow ctrl */ + if (ipc_wwan_is_tx_stopped(session->wwan, idx) != on) + ipc_wwan_tx_flowctrl(session->wwan, idx, on); +} + +static int mux_dl_cmdresps_decode_process(struct iosm_mux *ipc_mux, + struct mux_lite_cmdh *cmdh) +{ + struct mux_acb *acb = &ipc_mux->acb; + + switch (cmdh->command_type) { + case MUX_CMD_OPEN_SESSION_RESP: + case MUX_CMD_CLOSE_SESSION_RESP: + /* Resume the control application. */ + acb->got_param = cmdh->param; + break; + + case MUX_LITE_CMD_FLOW_CTL_ACK: + /* This command type is not expected as response for + * Aggregation version of the protocol. So return non-zero. + */ + if (ipc_mux->protocol != MUX_LITE) + return -EINVAL; + + dev_dbg(ipc_mux->dev, "if[%u] FLOW_CTL_ACK(%u) received", + cmdh->if_id, cmdh->transaction_id); + break; + + default: + return -EINVAL; + } + + acb->wanted_response = MUX_CMD_INVALID; + acb->got_response = cmdh->command_type; + complete(&ipc_mux->channel->ul_sem); + + return 0; +} + +static int mux_dl_dlcmds_decode_process(struct iosm_mux *ipc_mux, + struct mux_lite_cmdh *cmdh) +{ + union mux_cmd_param *param = &cmdh->param; + struct mux_session *session; + int new_size; + + dev_dbg(ipc_mux->dev, "if_id[%d]: dlcmds decode process %d", + cmdh->if_id, cmdh->command_type); + + switch (cmdh->command_type) { + case MUX_LITE_CMD_FLOW_CTL: + + if (cmdh->if_id >= ipc_mux->nr_sessions) { + dev_err(ipc_mux->dev, "if_id [%d] not valid", + cmdh->if_id); + return -EINVAL; /* No session interface id. */ + } + + session = &ipc_mux->session[cmdh->if_id]; + + new_size = offsetof(struct mux_lite_cmdh, param) + + sizeof(param->flow_ctl); + if (param->flow_ctl.mask == 0xFFFFFFFF) { + /* Backward Compatibility */ + if (cmdh->cmd_len == new_size) + session->flow_ctl_mask = param->flow_ctl.mask; + else + session->flow_ctl_mask = ~0; + /* if CP asks for FLOW CTRL Enable + * then set our internal flow control Tx flag + * to limit uplink session queueing + */ + session->net_tx_stop = true; + /* Update the stats */ + session->flow_ctl_en_cnt++; + } else if (param->flow_ctl.mask == 0) { + /* Just reset the Flow control mask and let + * mux_flow_ctrl_low_thre_b take control on + * our internal Tx flag and enabling kernel + * flow control + */ + /* Backward Compatibility */ + if (cmdh->cmd_len == new_size) + session->flow_ctl_mask = param->flow_ctl.mask; + else + session->flow_ctl_mask = 0; + /* Update the stats */ + session->flow_ctl_dis_cnt++; + } else { + break; + } + + dev_dbg(ipc_mux->dev, "if[%u] FLOW CTRL 0x%08X", cmdh->if_id, + param->flow_ctl.mask); + break; + + case MUX_LITE_CMD_LINK_STATUS_REPORT: + break; + + default: + return -EINVAL; + } + return 0; +} + +/* Decode and Send appropriate response to a command block. 
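+ * A response completes the blocked sender; a CP-initiated command is answered inline.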
*/ +static void mux_dl_cmd_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb) +{ + struct mux_lite_cmdh *cmdh = (struct mux_lite_cmdh *)skb->data; + + if (mux_dl_cmdresps_decode_process(ipc_mux, cmdh)) { + /* Unable to decode command response indicates the cmd_type + * may be a command instead of a response. So try decoding it. + */ + if (!mux_dl_dlcmds_decode_process(ipc_mux, cmdh)) { + /* Decoded command may need a response. Give the + * response according to the command type. + */ + union mux_cmd_param *mux_cmd = NULL; + size_t size = 0; + u32 cmd = MUX_LITE_CMD_LINK_STATUS_REPORT_RESP; + + if (cmdh->command_type == + MUX_LITE_CMD_LINK_STATUS_REPORT) { + mux_cmd = &cmdh->param; + mux_cmd->link_status_resp.response = + MUX_CMD_RESP_SUCCESS; + /* response field is u32 */ + size = sizeof(u32); + } else if (cmdh->command_type == + MUX_LITE_CMD_FLOW_CTL) { + cmd = MUX_LITE_CMD_FLOW_CTL_ACK; + } else { + return; + } + + if (mux_dl_acb_send_cmds(ipc_mux, cmd, cmdh->if_id, + cmdh->transaction_id, mux_cmd, + size, false, true)) + dev_err(ipc_mux->dev, + "if_id %d: cmd send failed", + cmdh->if_id); + } + } +} + +/* Pass the DL packet to the netif layer. */ +static int mux_net_receive(struct iosm_mux *ipc_mux, int if_id, + struct iosm_wwan *wwan, u32 offset, u8 service_class, + struct sk_buff *skb) +{ + /* for "zero copy" use clone */ + struct sk_buff *dest_skb = skb_clone(skb, GFP_ATOMIC); + + if (!dest_skb) + return -1; + + skb_pull(dest_skb, offset); + + skb_set_tail_pointer(dest_skb, dest_skb->len); + + /* Go to the start of the Ethernet header. */ + skb_push(dest_skb, ETH_HLEN); + + /* map session to vlan */ + __vlan_hwaccel_put_tag(dest_skb, htons(ETH_P_8021Q), if_id + 1); + + /* Pass the packet to the netif layer. */ + dest_skb->priority = service_class; + + return ipc_wwan_receive(wwan, dest_skb, false); +} + +/* Decode Flow Credit Table in the block */ +static void mux_dl_fcth_decode(struct iosm_mux *ipc_mux, void *block) +{ + struct ipc_mem_lite_gen_tbl *fct = (struct ipc_mem_lite_gen_tbl *)block; + struct iosm_wwan *wwan; + int ul_credits = 0; + int if_id = 0; + + if (fct->vfl_length != sizeof(fct->vfl[0].nr_of_bytes)) { + dev_err(ipc_mux->dev, "unexpected FCT length: %d", + fct->vfl_length); + return; + } + + if_id = fct->if_id; + if (if_id >= ipc_mux->nr_sessions) { + dev_err(ipc_mux->dev, "not supported if_id: %d", if_id); + return; + } + + /* Is the session active ?
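+ * A NULL wwan pointer means the session has not been opened.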
*/ + wwan = ipc_mux->session[if_id].wwan; + if (!wwan) { + dev_err(ipc_mux->dev, "session Net ID is NULL"); + return; + } + + ul_credits = fct->vfl[0].nr_of_bytes; + + dev_dbg(ipc_mux->dev, "Flow_Credit:: if_id[%d] Old: %d Grants: %d", + if_id, ipc_mux->session[if_id].ul_flow_credits, ul_credits); + + /* Update the Flow Credit information from ADB */ + ipc_mux->session[if_id].ul_flow_credits += ul_credits; + + /* Check whether the TX can be started */ + if (ipc_mux->session[if_id].ul_flow_credits > 0) { + ipc_mux->session[if_id].net_tx_stop = false; + mux_netif_tx_flowctrl(&ipc_mux->session[if_id], + ipc_mux->session[if_id].if_id, false); + } +} + +/* Decode non-aggregated datagram */ +static void mux_dl_adgh_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb) +{ + u32 pad_len, packet_offset; + struct iosm_wwan *wwan; + struct mux_adgh *adgh; + u8 *block = skb->data; + int rc = 0; + u8 if_id; + + adgh = (struct mux_adgh *)block; + + if (adgh->signature != MUX_SIG_ADGH) { + dev_err(ipc_mux->dev, "invalid ADGH signature received"); + return; + } + + if_id = adgh->if_id; + if (if_id >= ipc_mux->nr_sessions) { + dev_err(ipc_mux->dev, "invalid if_id while decoding %d", if_id); + return; + } + + /* Is the session active ? */ + wwan = ipc_mux->session[if_id].wwan; + if (!wwan) { + dev_err(ipc_mux->dev, "session Net ID is NULL"); + return; + } + + /* Store the pad len for the corresponding session + * Pad bytes as negotiated in the open session less the header size + * (see session management chapter for details). + * If resulting padding is zero or less, the additional head padding is + * omitted. For e.g., if HEAD_PAD_LEN = 16 or less, this field is + * omitted if HEAD_PAD_LEN = 20, then this field will have 4 bytes + * set to zero + */ + pad_len = + ipc_mux->session[if_id].dl_head_pad_len - IPC_MEM_DL_ETH_OFFSET; + packet_offset = sizeof(*adgh) + pad_len; + + if_id += ipc_mux->wwan_q_offset; + + /* Pass the packet to the netif layer */ + rc = mux_net_receive(ipc_mux, if_id, wwan, packet_offset, + adgh->service_class, skb); + if (rc) { + dev_err(ipc_mux->dev, "mux adgh decoding error"); + return; + } + ipc_mux->session[if_id].flush = 1; +} + +void ipc_mux_dl_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb) +{ + u32 signature; + + if (!skb->data || !ipc_mux) + return; + + /* Decode the MUX header type. */ + signature = le32_to_cpup((__le32 *)skb->data); + + switch (signature) { + case MUX_SIG_ADGH: + mux_dl_adgh_decode(ipc_mux, skb); + break; + + case MUX_SIG_FCTH: + mux_dl_fcth_decode(ipc_mux, skb->data); + break; + + case MUX_SIG_CMDH: + mux_dl_cmd_decode(ipc_mux, skb); + break; + + default: + dev_err(ipc_mux->dev, "invalid ABH signature"); + } + + ipc_pcie_kfree_skb(ipc_mux->pcie, skb); +} + +static int mux_ul_skb_alloc(struct iosm_mux *ipc_mux, struct mux_adb *ul_adb, + u32 type) +{ + /* Take the first element of the free list. */ + struct sk_buff *skb = skb_dequeue(&ul_adb->free_list); + int qlt_size; + + if (!skb) + return -1; /* Wait for a free ADB skb. */ + + /* Mark it as UL ADB to select the right free operation. */ + IPC_CB(skb)->op_type = (u8)UL_MUX_OP_ADB; + + switch (type) { + case MUX_SIG_ADGH: + /* Save the ADB memory settings. 
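+ * The skb data area is reused directly as the ADGH frame buffer.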
*/ + ul_adb->dest_skb = skb; + ul_adb->buf = skb->data; + ul_adb->size = IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE; + /* reset statistic counter */ + ul_adb->if_cnt = 0; + ul_adb->payload_size = 0; + ul_adb->dg_cnt_total = 0; + + ul_adb->adgh = (struct mux_adgh *)skb->data; + memset(ul_adb->adgh, 0, sizeof(struct mux_adgh)); + break; + + case MUX_SIG_QLTH: + qlt_size = offsetof(struct ipc_mem_lite_gen_tbl, vfl) + + (MUX_QUEUE_LEVEL * sizeof(struct mux_lite_vfl)); + + if (qlt_size > IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE) { + dev_err(ipc_mux->dev, + "can't support. QLT size:%d SKB size: %d", + qlt_size, IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE); + return -1; + } + + ul_adb->qlth_skb = skb; + memset((ul_adb->qlth_skb)->data, 0, qlt_size); + skb_put(skb, qlt_size); + break; + } + + return 0; +} + +static void mux_ul_adgh_finish(struct iosm_mux *ipc_mux) +{ + struct mux_adb *ul_adb = &ipc_mux->ul_adb; + long long bytes; + char *str; + + if (!ul_adb || !ul_adb->dest_skb) { + dev_err(ipc_mux->dev, "no dest skb"); + return; + } + skb_put(ul_adb->dest_skb, ul_adb->adgh->length); + skb_queue_tail(&ipc_mux->channel->ul_list, ul_adb->dest_skb); + ul_adb->dest_skb = NULL; + + if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS) { + struct mux_session *session; + + session = &ipc_mux->session[ul_adb->adgh->if_id]; + str = "available_credits"; + bytes = (long long)session->ul_flow_credits; + + } else { + str = "pend_bytes"; + bytes = ipc_mux->ul_data_pend_bytes; + ipc_mux->ul_data_pend_bytes += ul_adb->adgh->length; + } + + dev_dbg(ipc_mux->dev, "UL ADGH: size=%d, if_id=%d, payload=%d, %s=%lld", + ul_adb->adgh->length, ul_adb->adgh->if_id, ul_adb->payload_size, + str, bytes); +} + +/* Allocates an ADB from the free list and initializes it with ADBH */ +static bool mux_ul_adb_allocate(struct iosm_mux *ipc_mux, struct mux_adb *adb, + int *size_needed, u32 type) +{ + bool ret_val = false; + int status; + + if (!adb->dest_skb) { + /* Allocate memory for the ADB including of the + * datagram table header. + */ + status = mux_ul_skb_alloc(ipc_mux, adb, type); + if (status != 0) + /* Is a pending ADB available ? */ + ret_val = true; /* None. 
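+ * The encoder then returns -ENOMEM and retries on a later run.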
*/ + + /* Update size need to zero only for new ADB memory */ + *size_needed = 0; + } + + return ret_val; +} + +/* Informs the network stack to stop sending further packets for all opened + * sessions + */ +static void mux_stop_tx_for_all_sessions(struct iosm_mux *ipc_mux) +{ + struct mux_session *session; + int idx; + + for (idx = 0; idx < ipc_mux->nr_sessions; idx++) { + session = &ipc_mux->session[idx]; + + if (!session->wwan) + continue; + + session->net_tx_stop = true; + } +} + +/* Sends Queue Level Table of all opened sessions */ +static bool mux_lite_send_qlt(struct iosm_mux *ipc_mux) +{ + struct ipc_mem_lite_gen_tbl *qlt; + struct mux_session *session; + bool qlt_updated = false; + int i, ql_idx; + int qlt_size; + + if (!ipc_mux->initialized || ipc_mux->state != MUX_S_ACTIVE) + return qlt_updated; + + qlt_size = offsetof(struct ipc_mem_lite_gen_tbl, vfl) + + MUX_QUEUE_LEVEL * sizeof(struct mux_lite_vfl); + + for (i = 0; i < ipc_mux->nr_sessions; i++) { + session = &ipc_mux->session[i]; + + if (!session->wwan || session->flow_ctl_mask != 0) + continue; + + if (mux_ul_skb_alloc(ipc_mux, &ipc_mux->ul_adb, MUX_SIG_QLTH)) { + dev_err(ipc_mux->dev, + "no reserved mem to send QLT of if_id: %d", i); + break; + } + + /* Prepare QLT */ + qlt = (struct ipc_mem_lite_gen_tbl *)(ipc_mux->ul_adb.qlth_skb) + ->data; + qlt->signature = MUX_SIG_QLTH; + qlt->length = qlt_size; + qlt->if_id = i; + qlt->vfl_length = MUX_QUEUE_LEVEL * sizeof(struct mux_lite_vfl); + qlt->reserved[0] = 0; + qlt->reserved[1] = 0; + + for (ql_idx = 0; ql_idx < MUX_QUEUE_LEVEL; ql_idx++) + qlt->vfl[ql_idx].nr_of_bytes = session->ul_list.qlen; + + /* Add QLT to the transfer list. */ + skb_queue_tail(&ipc_mux->channel->ul_list, + ipc_mux->ul_adb.qlth_skb); + + qlt_updated = true; + ipc_mux->ul_adb.qlth_skb = NULL; + } + + if (qlt_updated) + /* Updates the TDs with ul_list */ + (void)imem_ul_write_td(ipc_mux->imem); + + return qlt_updated; +} + +/* Checks the available credits for the specified session and returns + * number of packets for which credits are available. + */ +static int mux_ul_bytes_credits_check(struct iosm_mux *ipc_mux, + struct mux_session *session, + struct sk_buff_head *ul_list, + int max_nr_of_pkts) +{ + int pkts_to_send = 0; + struct sk_buff *skb; + int credits = 0; + + if (!ipc_mux || !session || !ul_list) + return 0; + + if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS) { + credits = session->ul_flow_credits; + if (credits <= 0) { + dev_dbg(ipc_mux->dev, + "FC::if_id[%d] Insuff.Credits/Qlen:%d/%u", + session->if_id, session->ul_flow_credits, + session->ul_list.qlen); /* nr_of_bytes */ + return 0; + } + } else { + credits = IPC_MEM_MUX_UL_FLOWCTRL_HIGH_B - + ipc_mux->ul_data_pend_bytes; + if (credits <= 0) { + mux_stop_tx_for_all_sessions(ipc_mux); + + dev_dbg(ipc_mux->dev, + "if_id[%d] Stopped encoding.PendBytes: %llu, high_thresh: %d", + session->if_id, ipc_mux->ul_data_pend_bytes, + IPC_MEM_MUX_UL_FLOWCTRL_HIGH_B); + return 0; + } + } + + /* Check if there are enough credits/bytes available to send the + * requested max_nr_of_pkts. Otherwise restrict the nr_of_pkts + * depending on available credits. + */ + skb_queue_walk(ul_list, skb) + { + if (!(credits >= skb->len && pkts_to_send < max_nr_of_pkts)) + break; + credits -= skb->len; + pkts_to_send++; + } + + return pkts_to_send; +} + +/* Encode the UL IP packet according to Lite spec. 
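+ * Each datagram gets its own ADGH header; MUX Lite performs no aggregation.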
*/ +static int mux_ul_adgh_encode(struct iosm_mux *ipc_mux, int session_id, + struct mux_session *session, + struct sk_buff_head *ul_list, struct mux_adb *adb, + int nr_of_pkts) +{ + int offset = sizeof(struct mux_adgh); + int adb_updated = -EINVAL; + struct sk_buff *src_skb; + int aligned_size = 0; + int nr_of_skb = 0; + u32 pad_len = 0; + int vlan_id; + + /* Re-calculate the number of packets depending on number of bytes to be + * processed/available credits. + */ + nr_of_pkts = mux_ul_bytes_credits_check(ipc_mux, session, ul_list, + nr_of_pkts); + + /* If calculated nr_of_pkts from available credits is <= 0 + * then nothing to do. + */ + if (nr_of_pkts <= 0) + return 0; + + /* Read configured UL head_pad_length for session.*/ + if (session->ul_head_pad_len > IPC_MEM_DL_ETH_OFFSET) + pad_len = session->ul_head_pad_len - IPC_MEM_DL_ETH_OFFSET; + + /* Process all pending UL packets for this session + * depending on the allocated datagram table size. + */ + while (nr_of_pkts > 0) { + /* get destination skb allocated */ + if (mux_ul_adb_allocate(ipc_mux, adb, &ipc_mux->size_needed, + MUX_SIG_ADGH)) { + dev_err(ipc_mux->dev, "no reserved memory for ADGH"); + return -ENOMEM; + } + + /* Peek at the head of the list. */ + src_skb = skb_peek(ul_list); + if (!src_skb) { + dev_err(ipc_mux->dev, + "skb peek return NULL with count : %d", + nr_of_pkts); + break; + } + + /* Calculate the memory value. */ + aligned_size = ALIGN((pad_len + src_skb->len), 4); + + ipc_mux->size_needed = sizeof(struct mux_adgh) + aligned_size; + + if (ipc_mux->size_needed > adb->size) { + dev_dbg(ipc_mux->dev, "size needed %d, adgh size %d", + ipc_mux->size_needed, adb->size); + /* Return 1 if any IP packet is added to the transfer + * list. + */ + return nr_of_skb ? 1 : 0; + } + + vlan_id = session_id + ipc_mux->wwan_q_offset; + ipc_wwan_update_stats(session->wwan, vlan_id, src_skb->len, + true); + + /* Add buffer (without head padding to next pending transfer) */ + memcpy(adb->buf + offset + pad_len, src_skb->data, + src_skb->len); + + adb->adgh->signature = MUX_SIG_ADGH; + adb->adgh->if_id = session_id; + adb->adgh->length = + sizeof(struct mux_adgh) + pad_len + src_skb->len; + adb->adgh->service_class = src_skb->priority; + adb->adgh->next_count = --nr_of_pkts; + adb->dg_cnt_total++; + adb->payload_size += src_skb->len; + + if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS) + /* Decrement the credit value as we are processing the + * datagram from the UL list. + */ + session->ul_flow_credits -= src_skb->len; + + /* Remove the processed elements and free it. 
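+ * The payload has already been copied into the ADGH buffer, so the source skb can be freed immediately.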
*/ + src_skb = skb_dequeue(ul_list); + dev_kfree_skb(src_skb); + nr_of_skb++; + + mux_ul_adgh_finish(ipc_mux); + } + + if (nr_of_skb) { + /* Send QLT info to modem if pending bytes > high watermark + * in case of mux lite + */ + if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS || + ipc_mux->ul_data_pend_bytes >= + IPC_MEM_MUX_UL_FLOWCTRL_LOW_B) + adb_updated = mux_lite_send_qlt(ipc_mux); + else + adb_updated = 1; + + /* Updates the TDs with ul_list */ + (void)imem_ul_write_td(ipc_mux->imem); + } + + return adb_updated; +} + +bool ipc_mux_ul_data_encode(struct iosm_mux *ipc_mux) +{ + struct sk_buff_head *ul_list; + struct mux_session *session; + int updated = 0; + int session_id; + int dg_n; + int i; + + if (!ipc_mux || ipc_mux->state != MUX_S_ACTIVE || + ipc_mux->adb_prep_ongoing) + return false; + + ipc_mux->adb_prep_ongoing = true; + + for (i = 0; i < ipc_mux->nr_sessions; i++) { + session_id = ipc_mux->rr_next_session; + session = &ipc_mux->session[session_id]; + + /* Go to next handle rr_next_session overflow */ + ipc_mux->rr_next_session++; + if (ipc_mux->rr_next_session >= ipc_mux->nr_sessions) + ipc_mux->rr_next_session = 0; + + if (!session->wwan || session->flow_ctl_mask || + session->net_tx_stop) + continue; + + ul_list = &session->ul_list; + + /* Is something pending in UL and flow ctrl off */ + dg_n = skb_queue_len(ul_list); + if (dg_n > MUX_MAX_UL_DG_ENTRIES) + dg_n = MUX_MAX_UL_DG_ENTRIES; + + if (dg_n == 0) + /* Nothing to do for ipc_mux session + * -> try next session id. + */ + continue; + + updated = mux_ul_adgh_encode(ipc_mux, session_id, session, + ul_list, &ipc_mux->ul_adb, dg_n); + } + + ipc_mux->adb_prep_ongoing = false; + return updated == 1; +} + +void ipc_mux_ul_encoded_process(struct iosm_mux *ipc_mux, struct sk_buff *skb) +{ + struct mux_adgh *adgh; + + if (!ipc_mux || !skb || !skb->data) + return; + + adgh = (struct mux_adgh *)skb->data; + + if (adgh->signature == MUX_SIG_ADGH && ipc_mux->ul_flow == MUX_UL) + ipc_mux->ul_data_pend_bytes -= adgh->length; + + if (ipc_mux->ul_flow == MUX_UL) + dev_dbg(ipc_mux->dev, "ul_data_pend_bytes: %lld", + ipc_mux->ul_data_pend_bytes); + + /* Reset the skb settings. */ + skb->tail = 0; + skb->len = 0; + + /* Add the consumed ADB to the free list. */ + skb_queue_tail((&ipc_mux->ul_adb.free_list), skb); +} + +/* Start the NETIF uplink send transfer in MUX mode. */ +static int mux_tq_ul_trigger_encode(void *instance, int arg, void *msg, + size_t size) +{ + struct iosm_mux *ipc_mux = ((struct iosm_imem *)instance)->mux; + bool ul_data_pend = false; + + /* Add session UL data to a ADB and ADGH */ + ul_data_pend = ipc_mux_ul_data_encode(ipc_mux); + if (ul_data_pend) + /* Delay the doorbell irq */ + imem_td_update_timer_start(ipc_mux->imem); + + /* reset the debounce flag */ + ipc_mux->ev_mux_net_transmit_pending = false; + + return 0; +} + +int ipc_mux_ul_trigger_encode(struct iosm_mux *ipc_mux, int if_id, + struct sk_buff *skb) +{ + struct mux_session *session = &ipc_mux->session[if_id]; + + if (ipc_mux->channel && + ipc_mux->channel->state != IMEM_CHANNEL_ACTIVE) { + dev_err(ipc_mux->dev, + "channel state is not IMEM_CHANNEL_ACTIVE"); + return -1; + } + + if (!session->wwan) { + dev_err(ipc_mux->dev, "session net ID is NULL"); + return -1; + } + + /* Session is under flow control. + * Check if packet can be queued in session list, if not + * suspend net tx + */ + if (skb_queue_len(&session->ul_list) >= + (session->net_tx_stop ? 
+ IPC_MEM_MUX_UL_SESS_FCON_THRESHOLD : + (IPC_MEM_MUX_UL_SESS_FCON_THRESHOLD * + IPC_MEM_MUX_UL_SESS_FCOFF_THRESHOLD_FACTOR))) { + mux_netif_tx_flowctrl(session, session->if_id, true); + return -2; + } + + /* Add skb to the uplink skb accumulator. */ + skb_queue_tail(&session->ul_list, skb); + + /* Inform the IPC kthread to pass uplink IP packets to CP. */ + if (!ipc_mux->ev_mux_net_transmit_pending) { + ipc_mux->ev_mux_net_transmit_pending = true; + if (ipc_task_queue_send_task(ipc_mux->imem, + mux_tq_ul_trigger_encode, 0, NULL, + 0, false)) + return -1; + } + dev_dbg(ipc_mux->dev, "mux ul if[%d] qlen=%d/%u, len=%d/%d, prio=%d", + if_id, skb_queue_len(&session->ul_list), session->ul_list.qlen, + skb->len, skb->truesize, skb->priority); + + return 0; +} diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux_codec.h b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.h new file mode 100644 index 000000000000..796790113ad5 --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.h @@ -0,0 +1,194 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2020 Intel Corporation. + */ + +#ifndef IOSM_IPC_MUX_CODEC_H +#define IOSM_IPC_MUX_CODEC_H + +#include "iosm_ipc_mux.h" + +/* Queue level size and reporting + * >1 is enable, 0 is disable + */ +#define MUX_QUEUE_LEVEL 1 + +/* Size of the buffer for the IP MUX commands. */ +#define MUX_MAX_UL_ACB_BUF_SIZE 256 + +/* Maximum number of packets in a go per session */ +#define MUX_MAX_UL_DG_ENTRIES 100 + +/* ADGH: Signature of the Datagram Header. */ +#define MUX_SIG_ADGH 0x48474441 + +/* CMDH: Signature of the Command Header. */ +#define MUX_SIG_CMDH 0x48444D43 + +/* QLTH: Signature of the Queue Level Table */ +#define MUX_SIG_QLTH 0x48544C51 + +/* FCTH: Signature of the Flow Credit Table */ +#define MUX_SIG_FCTH 0x48544346 + +/* MUX UL session threshold factor */ +#define IPC_MEM_MUX_UL_SESS_FCOFF_THRESHOLD_FACTOR (4) + +/* Size of the buffer for the IP MUX Lite data buffer. */ +#define IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE (2 * 1024) + +/* MUX UL session threshold in number of packets */ +#define IPC_MEM_MUX_UL_SESS_FCON_THRESHOLD (64) + +/* Default time out for sending IPC session commands like + * open session, close session etc + * unit : milliseconds + */ +#define IPC_MUX_CMD_RUN_DEFAULT_TIMEOUT 1000 /* 1 second */ + +/* MUX UL flow control lower threshold in bytes */ +#define IPC_MEM_MUX_UL_FLOWCTRL_LOW_B 10240 /* 10KB */ + +/* MUX UL flow control higher threshold in bytes (5ms worth of data)*/ +#define IPC_MEM_MUX_UL_FLOWCTRL_HIGH_B (110 * 1024) + +/** + * struct mux_adgh - Aggregated Datagram Header. + * @signature: Signature of the Aggregated Datagram Header(0x48474441) + * @length: Length (in bytes) of the datagram header. This length + * shall include the header size. Min value: 0x10 + * @if_id: ID of the interface the datagrams belong to + * @opt_ipv4v6: Indicates IPv4(=0)/IPv6(=1), It is optional if not + * used set it to zero. + * @reserved: Reserved bits. Set to zero. + * @service_class: Service class identifier for the datagram. + * @next_count: Count of the datagrams that shall be following this + * datagrams for this interface. A count of zero means + * the next datagram may not belong to this interface. 
+ * @reserved1: Reserved bytes, Set to zero + */ +struct mux_adgh { + u32 signature; + u16 length; + u8 if_id; + u8 opt_ipv4v6 : 1; + u8 reserved : 7; + u8 service_class; + u8 next_count; + u8 reserved1[6]; +}; + +/** + * struct mux_lite_cmdh - MUX Lite Command Header + * @signature: Signature of the Command Header(0x48444D43) + * @cmd_len: Length (in bytes) of the command. This length shall + * include the header size. Minimum value: 0x10 + * @if_id: ID of the interface the commands in the table belong to. + * @reserved: Reserved Set to zero. + * @command_type: Command Enum. + * @transaction_id: 4 byte value shall be generated and sent along with a + * command Responses and ACKs shall have the same + * Transaction ID as their commands. It shall be unique to + * the command transaction on the given interface. + * @param: Optional parameters used with the command. + */ +struct mux_lite_cmdh { + u32 signature; + u16 cmd_len; + u8 if_id; + u8 reserved; + u32 command_type; + u32 transaction_id; + union mux_cmd_param param; +}; + +/** + * struct mux_lite_vfl - value field in generic table + * @nr_of_bytes: Number of bytes available to transmit in the queue. + */ +struct mux_lite_vfl { + u32 nr_of_bytes; +}; + +/** + * struct ipc_mem_lite_gen_tbl - Generic table format for Queue Level + * and Flow Credit + * @signature: Signature of the table + * @length: Length of the table + * @if_id: ID of the interface the table belongs to + * @vfl_length: Value field length + * @reserved: Reserved + * @vfl: Value field of variable length + */ +struct ipc_mem_lite_gen_tbl { + u32 signature; + u16 length; + u8 if_id; + u8 vfl_length; + u32 reserved[2]; + struct mux_lite_vfl vfl[1]; +}; + +/** + * ipc_mux_dl_decode -Route the DL packet through the IP MUX layer + * depending on Header. + * @ipc_mux: Pointer to MUX data-struct + * @skb: Pointer to ipc_skb. + */ +void ipc_mux_dl_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb); + +/** + * mux_dl_acb_send_cmds - Respond to the Command blocks. + * @ipc_mux: Pointer to MUX data-struct + * @cmd_type: Command + * @if_id: Session interface id. + * @transaction_id: Command transaction id. + * @param: Pointer to command params. + * @res_size: Response size + * @blocking: True for blocking send + * @respond: If true return transaction ID + * + * Returns: 0 in success and -ve for failure + */ +int mux_dl_acb_send_cmds(struct iosm_mux *ipc_mux, u32 cmd_type, u8 if_id, + u32 transaction_id, union mux_cmd_param *param, + size_t res_size, bool blocking, bool respond); + +/** + * mux_netif_tx_flowctrl - Enable/Disable TX flow control on MUX sessions. + * @session: Pointer to mux_session struct + * @idx: Session ID + * @on: true for Enable and false for disable flow control + */ +void mux_netif_tx_flowctrl(struct mux_session *session, int idx, bool on); + +/** + * ipc_mux_ul_trigger_encode - Route the UL packet through the IP MUX layer + * for encoding. + * @ipc_mux: Pointer to MUX data-struct + * @if_id: Session ID. + * @skb: Pointer to ipc_skb. + * + * Returns: 0 if successfully encoded + * -1 on failure + * -2 if packet has to be retransmitted. + */ +int ipc_mux_ul_trigger_encode(struct iosm_mux *ipc_mux, int if_id, + struct sk_buff *skb); +/** + * ipc_mux_ul_data_encode - UL encode function for calling from Tasklet context. + * @ipc_mux: Pointer to MUX data-struct + * + * Returns: TRUE if any packet of any session is encoded FALSE otherwise. 
+ */ +bool ipc_mux_ul_data_encode(struct iosm_mux *ipc_mux); + +/** + * ipc_mux_ul_encoded_process - Handles the Modem processed UL data by adding + * the SKB to the UL free list. + * @ipc_mux: Pointer to MUX data-struct + * @skb: Pointer to ipc_skb. + */ +void ipc_mux_ul_encoded_process(struct iosm_mux *ipc_mux, struct sk_buff *skb); + +#endif From patchwork Mon Nov 23 13:51:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kumar, M Chetan" X-Patchwork-Id: 330939 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4E738C71155 for ; Mon, 23 Nov 2020 13:53:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 152AC206F1 for ; Mon, 23 Nov 2020 13:53:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388580AbgKWNwg (ORCPT ); Mon, 23 Nov 2020 08:52:36 -0500 Received: from mga14.intel.com ([192.55.52.115]:1517 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388215AbgKWNwf (ORCPT ); Mon, 23 Nov 2020 08:52:35 -0500 IronPort-SDR: +Xj8nF9BtGVC9+3Ard/b3tVYom0F4G6bsmvc6q9/BU0aKVW8PUaDg26ppJ8Vs1xb8UMqx7R5WY 3TwA97QMFNLA== X-IronPort-AV: E=McAfee;i="6000,8403,9813"; a="170981507" X-IronPort-AV: E=Sophos;i="5.78,363,1599548400"; d="scan'208";a="170981507" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 05:52:35 -0800 IronPort-SDR: ZpFg2E6bbIJ7ajCW1ggTmbmybcU5Y90nPijeaUgndCqKv/GfTQMadC2w5ZUqCpVxNGSkma4X6f ZwdLyiy5q+gg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.78,363,1599548400"; d="scan'208";a="370035655" Received: from bgsxx0031.iind.intel.com ([10.106.222.40]) by orsmga007.jf.intel.com with ESMTP; 23 Nov 2020 05:52:33 -0800 From: M Chetan Kumar To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com Subject: [RFC 15/18] net: iosm: uevent support Date: Mon, 23 Nov 2020 19:21:20 +0530 Message-Id: <20201123135123.48892-16-m.chetan.kumar@intel.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: <20201123135123.48892-1-m.chetan.kumar@intel.com> References: <20201123135123.48892-1-m.chetan.kumar@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org Report modem status via uevent. 
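Any layer that detects a state change reports it with a single call, as the MUX command timeout path in patch 11/18 already does; the call is safe from tasklet/IRQ context because the info block is allocated with GFP_ATOMIC and kobject_uevent_env() runs later from a work queue. For illustration (call site taken from mux_acb_send() in this series):

	/* e.g. report a blocked-command timeout to user space */
	ipc_uevent_send(ipc_mux->imem->dev, UEVENT_MDM_TIMEOUT);

User space then sees a KOBJ_CHANGE uevent whose payload string is "<dev_name>: MDM_TIMEOUT".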
Signed-off-by: M Chetan Kumar --- drivers/net/wwan/iosm/iosm_ipc_uevent.c | 47 +++++++++++++++++++++++++++++++++ drivers/net/wwan/iosm/iosm_ipc_uevent.h | 41 ++++++++++++++++++++++++++++ 2 files changed, 88 insertions(+) create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.c create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.h diff --git a/drivers/net/wwan/iosm/iosm_ipc_uevent.c b/drivers/net/wwan/iosm/iosm_ipc_uevent.c new file mode 100644 index 000000000000..27542ca27613 --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_uevent.c @@ -0,0 +1,47 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Intel Corporation. + */ + +#include + +#include "iosm_ipc_sio.h" +#include "iosm_ipc_uevent.h" + +/* Update the uevent in work queue context */ +static void ipc_uevent_work(struct work_struct *data) +{ + struct ipc_uevent_info *info; + char *envp[2] = { NULL, NULL }; + + info = container_of(data, struct ipc_uevent_info, work); + + envp[0] = info->uevent; + + if (kobject_uevent_env(&info->dev->kobj, KOBJ_CHANGE, envp)) + pr_err("uevent %s failed to send", info->uevent); + + kfree(info); +} + +void ipc_uevent_send(struct device *dev, char *uevent) +{ + struct ipc_uevent_info *info; + + if (!uevent || !dev) + return; + + info = kzalloc(sizeof(*info), GFP_ATOMIC); + if (!info) + return; + + /* Initialize the kernel work item */ + INIT_WORK(&info->work, ipc_uevent_work); + + /* Store the device and event information */ + info->dev = dev; + snprintf(info->uevent, MAX_UEVENT_LEN, "%s: %s", dev_name(dev), uevent); + + /* Schedule uevent in process context using work queue */ + schedule_work(&info->work); +} diff --git a/drivers/net/wwan/iosm/iosm_ipc_uevent.h b/drivers/net/wwan/iosm/iosm_ipc_uevent.h new file mode 100644 index 000000000000..422f64411c6e --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_uevent.h @@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2020 Intel Corporation. + */ + +#ifndef IOSM_IPC_UEVENT_H +#define IOSM_IPC_UEVENT_H + +/* Baseband event strings */ +#define UEVENT_MDM_NOT_READY "MDM_NOT_READY" +#define UEVENT_ROM_READY "ROM_READY" +#define UEVENT_MDM_READY "MDM_READY" +#define UEVENT_CRASH "CRASH" +#define UEVENT_CD_READY "CD_READY" +#define UEVENT_CD_READY_LINK_DOWN "CD_READY_LINK_DOWN" +#define UEVENT_MDM_TIMEOUT "MDM_TIMEOUT" + +/* Maximum length of user events */ +#define MAX_UEVENT_LEN 64 + +/** + * struct ipc_uevent_info - Uevent information structure. + * @dev: Pointer to device structure + * @uevent: Uevent information + * @work: Uevent work struct + */ +struct ipc_uevent_info { + struct device *dev; + char uevent[MAX_UEVENT_LEN]; + struct work_struct work; +}; + +/** + * ipc_uevent_send - Send modem event to user space.
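+ * Safe to call from atomic context: the info block is allocated with GFP_ATOMIC and the netlink emission happens from a work queue.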
+ * @dev: Generic device pointer
+ * @uevent: Uevent information
+ *
+ */
+void ipc_uevent_send(struct device *dev, char *uevent);
+
+#endif

From patchwork Mon Nov 23 13:51:21 2020
X-Patchwork-Submitter: "Kumar, M Chetan" <m.chetan.kumar@intel.com>
X-Patchwork-Id: 330940
From: M Chetan Kumar <m.chetan.kumar@intel.com>
To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org
Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com
Subject: [RFC 16/18] net: iosm: net driver
Date: Mon, 23 Nov 2020 19:21:21 +0530
Message-Id: <20201123135123.48892-17-m.chetan.kumar@intel.com>
In-Reply-To: <20201123135123.48892-1-m.chetan.kumar@intel.com>
References: <20201123135123.48892-1-m.chetan.kumar@intel.com>
X-Mailing-List: linux-wireless@vger.kernel.org

1) Create net device for data/IP communication.
2) Bind VLAN ID to mux IP session.
3) Implement net device operations.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_wwan.c | 674 ++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_wwan.h |  72 ++++
 2 files changed, 746 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_wwan.c b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
new file mode 100644
index 000000000000..f14a971455bb
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
@@ -0,0 +1,674 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/if_vlan.h>
+
+#include "iosm_ipc_chnl_cfg.h"
+#include "iosm_ipc_imem_ops.h"
+
+/* Minimum number of transmit queues per WWAN root device */
+#define WWAN_MIN_TXQ (1)
+/* Maximum number of receive queues per WWAN root device */
+#define WWAN_MAX_RXQ (1)
+/* Default transmit queue for WWAN root device */
+#define WWAN_DEFAULT_TXQ (0)
+/* VLAN tag for WWAN root device */
+#define WWAN_ROOT_VLAN_TAG (0)
+
+#define IPC_MEM_MIN_MTU_SIZE (68)
+#define IPC_MEM_MAX_MTU_SIZE (1024 * 1024)
+
+#define IPC_MEM_VLAN_TO_SESSION (1)
+
+/* Required alignment for TX in bytes (32 bit / 4 bytes) */
+#define IPC_WWAN_ALIGN (4)
+
+/**
+ * struct ipc_vlan_info - This structure includes information about VLAN device.
+ * @vlan_id: VLAN tag of the VLAN device.
+ * @ch_id: IPC channel number for which VLAN device is created.
+ * @stats: Contains statistics of VLAN devices.
+ */
+struct ipc_vlan_info {
+	int vlan_id;
+	int ch_id;
+	struct net_device_stats stats;
+};
+
+/**
+ * struct iosm_wwan - This structure contains information about WWAN root device
+ * and interface to the IPC layer.
+ * @vlan_devs: Contains information about VLAN devices created under
+ * WWAN root device.
+ * @netdev: Pointer to network interface device structure.
+ * @ops_instance: Instance pointer for callbacks
+ * @dev: Pointer to device structure
+ * @lock: Spinlock to be used for atomic operations of the
+ * root device.
+ * @stats: Contains statistics of WWAN root device
+ * @vlan_devs_nr: Number of VLAN devices.
+ * @if_mutex: Mutex used for add and remove vlan-id
+ * @max_devs: Maximum supported VLAN devs
+ * @max_ip_devs: Maximum supported IP VLAN devs
+ * @is_registered: Registration status with netdev
+ */
+struct iosm_wwan {
+	struct ipc_vlan_info *vlan_devs;
+	struct net_device *netdev;
+	void *ops_instance;
+	struct device *dev;
+	spinlock_t lock; /* Used for atomic operations on root device */
+	struct net_device_stats stats;
+	int vlan_devs_nr;
+	struct mutex if_mutex; /* Mutex used for add and remove vlan-id */
+	int max_devs;
+	int max_ip_devs;
+	u8 is_registered : 1;
+};
+
+/* Get the array index of requested tag.
+ */
+static int ipc_wwan_get_vlan_devs_nr(struct iosm_wwan *ipc_wwan, u16 tag)
+{
+	int i = 0;
+
+	if (!ipc_wwan->vlan_devs)
+		return -EINVAL;
+
+	for (i = 0; i < ipc_wwan->vlan_devs_nr; i++)
+		if (ipc_wwan->vlan_devs[i].vlan_id == tag)
+			return i;
+
+	return -EINVAL;
+}
+
+static int ipc_wwan_add_vlan(struct iosm_wwan *ipc_wwan, u16 vid)
+{
+	if (vid >= 512 || !ipc_wwan->vlan_devs)
+		return -EINVAL;
+
+	if (vid == WWAN_ROOT_VLAN_TAG)
+		return 0;
+
+	mutex_lock(&ipc_wwan->if_mutex);
+
+	/* get channel id */
+	ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].ch_id =
+		imem_sys_wwan_open(ipc_wwan->ops_instance, vid);
+
+	if (ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].ch_id < 0) {
+		dev_err(ipc_wwan->dev,
+			"cannot connect wwan0 & id %d to the IPC mem layer",
+			vid);
+		mutex_unlock(&ipc_wwan->if_mutex);
+		return -ENODEV;
+	}
+
+	/* save vlan id */
+	ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].vlan_id = vid;
+
+	dev_dbg(ipc_wwan->dev, "Channel id %d allocated to vlan id %d",
+		ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].ch_id,
+		ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].vlan_id);
+
+	ipc_wwan->vlan_devs_nr++;
+
+	mutex_unlock(&ipc_wwan->if_mutex);
+
+	return 0;
+}
+
+static int ipc_wwan_remove_vlan(struct iosm_wwan *ipc_wwan, u16 vid)
+{
+	int ch_nr = ipc_wwan_get_vlan_devs_nr(ipc_wwan, vid);
+	int i = 0;
+
+	if (ch_nr < 0) {
+		dev_err(ipc_wwan->dev, "vlan dev not found for vid = %d", vid);
+		return ch_nr;
+	}
+
+	if (ipc_wwan->vlan_devs[ch_nr].ch_id < 0) {
+		dev_err(ipc_wwan->dev, "invalid ch nr %d to kill", ch_nr);
+		return -EINVAL;
+	}
+
+	mutex_lock(&ipc_wwan->if_mutex);
+
+	imem_sys_wwan_close(ipc_wwan->ops_instance, vid,
+			    ipc_wwan->vlan_devs[ch_nr].ch_id);
+
+	ipc_wwan->vlan_devs[ch_nr].ch_id = -1;
+
+	/* re-align the vlan information as we removed one tag
+	 * (stop one entry early so the last copy does not read past
+	 * the used part of the array)
+	 */
+	for (i = ch_nr; i < ipc_wwan->vlan_devs_nr - 1; i++)
+		memcpy(&ipc_wwan->vlan_devs[i], &ipc_wwan->vlan_devs[i + 1],
+		       sizeof(struct ipc_vlan_info));
+
+	ipc_wwan->vlan_devs_nr--;
+
+	mutex_unlock(&ipc_wwan->if_mutex);
+
+	return 0;
+}
+
+/* Checks the protocol and discards the Ethernet header or VLAN header
+ * accordingly.
+ */
+static int ipc_wwan_pull_header(struct sk_buff *skb, bool *is_ip)
+{
+	unsigned int header_size;
+	__be16 proto;
+
+	if (skb->protocol == htons(ETH_P_8021Q)) {
+		proto = vlan_eth_hdr(skb)->h_vlan_encapsulated_proto;
+
+		if (skb->len < VLAN_ETH_HLEN)
+			header_size = 0;
+		else
+			header_size = VLAN_ETH_HLEN;
+	} else {
+		proto = eth_hdr(skb)->h_proto;
+
+		if (skb->len < ETH_HLEN)
+			header_size = 0;
+		else
+			header_size = ETH_HLEN;
+	}
+
+	/* If the caller passed a valid is_ip pointer */
+	if (header_size > 0 && is_ip) {
+		*is_ip = (proto == htons(ETH_P_IP)) ||
+			 (proto == htons(ETH_P_IPV6));
+
+		/* Discard the vlan/ethernet header.
+		 */
+		if (unlikely(!skb_pull(skb, header_size)))
+			header_size = 0;
+	}
+
+	return header_size;
+}
+
+/* Get VLAN tag from IPC SESSION ID */
+static inline u16 ipc_wwan_mux_session_to_vlan_tag(int id)
+{
+	return (u16)(id + IPC_MEM_VLAN_TO_SESSION);
+}
+
+/* Get IPC SESSION ID from VLAN tag */
+static inline int ipc_wwan_vlan_to_mux_session_id(u16 tag)
+{
+	return tag - IPC_MEM_VLAN_TO_SESSION;
+}
+
+/* Add new vlan device and open a channel */
+static int ipc_wwan_vlan_rx_add_vid(struct net_device *netdev, __be16 proto,
+				    u16 vid)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(netdev);
+
+	if (vid != IPC_WWAN_DSS_ID_4)
+		return ipc_wwan_add_vlan(ipc_wwan, vid);
+
+	return 0;
+}
+
+/* Remove vlan device and de-allocate channel */
+static int ipc_wwan_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto,
+				     u16 vid)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(netdev);
+
+	if (vid == WWAN_ROOT_VLAN_TAG)
+		return 0;
+
+	return ipc_wwan_remove_vlan(ipc_wwan, vid);
+}
+
+static int ipc_wwan_open(struct net_device *netdev)
+{
+	/* Octets in one ethernet addr */
+	if (netdev->addr_len < ETH_ALEN) {
+		pr_err("cannot build the Ethernet address for \"%s\"",
+		       netdev->name);
+		return -ENODEV;
+	}
+
+	/* enable tx path, DL data may follow */
+	netif_tx_start_all_queues(netdev);
+
+	return 0;
+}
+
+static int ipc_wwan_stop(struct net_device *netdev)
+{
+	pr_debug("Stop all TX Queues");
+
+	netif_tx_stop_all_queues(netdev);
+	return 0;
+}
+
+int ipc_wwan_receive(struct iosm_wwan *ipc_wwan, struct sk_buff *skb_arg,
+		     bool dss)
+{
+	struct sk_buff *skb;
+	struct ethhdr *eth;
+	u16 tag = 0;
+
+	if (unlikely(!ipc_wwan)) {
+		if (skb_arg)
+			dev_kfree_skb(skb_arg);
+		return -EINVAL;
+	}
+
+	skb = skb_arg;
+
+	eth = (struct ethhdr *)skb->data;
+	if (unlikely(!eth)) {
+		dev_err(ipc_wwan->dev, "ethernet header info error");
+		dev_kfree_skb(skb);
+		return -1;
+	}
+
+	/* Build the ethernet header
+	 * (for kernel versions later than 3.14.0).
+	 */
+	ether_addr_copy(eth->h_dest, ipc_wwan->netdev->dev_addr);
+	ether_addr_copy(eth->h_source, ipc_wwan->netdev->dev_addr);
+	eth->h_source[ETH_ALEN - 1] ^= 0x01;	/* src is us xor 1 */
+	/* set the ethernet payload type: ipv4 or ipv6 or Dummy type
+	 * for 802.3 frames
+	 */
+	eth->h_proto = htons(ETH_P_802_3);
+	if (!dss) {
+		if ((skb->data[ETH_HLEN] & 0xF0) == 0x40)
+			eth->h_proto = htons(ETH_P_IP);
+		else if ((skb->data[ETH_HLEN] & 0xF0) == 0x60)
+			eth->h_proto = htons(ETH_P_IPV6);
+	}
+
+	skb->dev = ipc_wwan->netdev;
+	skb->protocol = eth_type_trans(skb, ipc_wwan->netdev);
+	skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+	vlan_get_tag(skb, &tag);
+	/* RX stats do not include ETH_HLEN:
+	 * eth_type_trans() has already pulled the ethernet header,
+	 * so skb->len no longer covers it.
+	 */
+	ipc_wwan_update_stats(ipc_wwan, ipc_wwan_vlan_to_mux_session_id(tag),
+			      skb->len, false);
+
+	switch (netif_rx_ni(skb)) {
+	case NET_RX_SUCCESS:
+		break;
+	case NET_RX_DROP:
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+/* Align SKB to 32bit, if not already aligned */
+static struct sk_buff *ipc_wwan_skb_align(struct iosm_wwan *ipc_wwan,
+					  struct sk_buff *skb)
+{
+	unsigned int offset = (uintptr_t)skb->data & (IPC_WWAN_ALIGN - 1);
+	struct sk_buff *new_skb;
+
+	if (offset == 0)
+		return skb;
+
+	/* Allocate new skb to copy into */
+	new_skb = dev_alloc_skb(skb->len + (IPC_WWAN_ALIGN - 1));
+	if (unlikely(!new_skb)) {
+		dev_err(ipc_wwan->dev, "failed to reallocate skb");
+		goto out;
+	}
+
+	/* Make sure newly allocated skb is aligned */
+	offset = (uintptr_t)new_skb->data & (IPC_WWAN_ALIGN - 1);
+	if (unlikely(offset != 0))
+		skb_reserve(new_skb, IPC_WWAN_ALIGN - offset);
+
+	/* Copy payload */
+	memcpy(new_skb->data, skb->data, skb->len);
+
+	skb_put(new_skb, skb->len);
+out:
+	dev_kfree_skb(skb);
+	return new_skb;
+}
+
+/* Transmit a packet (called by the kernel) */
+static int ipc_wwan_transmit(struct sk_buff *skb, struct net_device *netdev)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(netdev);
+	bool is_ip = false;
+	int ret = -EINVAL;
+	int header_size;
+	int idx = 0;
+	u16 tag = 0;
+
+	vlan_get_tag(skb, &tag);
+
+	/* If the SKB is of WWAN root device then don't send it to device.
+	 * Free the SKB and then return.
+	 */
+	if (unlikely(tag == WWAN_ROOT_VLAN_TAG))
+		goto exit;
+
+	/* Discard the Ethernet header or VLAN Ethernet header depending
+	 * on the protocol.
+	 */
+	header_size = ipc_wwan_pull_header(skb, &is_ip);
+	if (!header_size)
+		goto exit;
+
+	/* Get the channel number corresponding to VLAN ID */
+	idx = ipc_wwan_get_vlan_devs_nr(ipc_wwan, tag);
+	if (unlikely(idx < 0 || idx >= ipc_wwan->max_devs ||
+		     ipc_wwan->vlan_devs[idx].ch_id < 0))
+		goto exit;
+
+	/* VLAN IDs from 1 to 255 are for IP data,
+	 * 257 to 511 are for non-IP data
+	 */
+	if (tag > 0 && tag < 256) {
+		if (unlikely(!is_ip)) {
+			ret = -EXDEV;
+			goto exit;
+		}
+	} else if (tag > 256 && tag < 512) {
+		if (unlikely(is_ip)) {
+			ret = -EXDEV;
+			goto exit;
+		}
+
+		/* Align the SKB only for control packets if not aligned. */
+		skb = ipc_wwan_skb_align(ipc_wwan, skb);
+		if (!skb)
+			goto exit;
+	} else {
+		/* Unknown VLAN IDs */
+		ret = -EXDEV;
+		goto exit;
+	}
+
+	/* Send the SKB to device for transmission */
+	ret = imem_sys_wwan_transmit(ipc_wwan->ops_instance, tag,
+				     ipc_wwan->vlan_devs[idx].ch_id, skb);
+
+	/* Return code of zero is success */
+	if (ret == 0) {
+		ret = NETDEV_TX_OK;
+	} else if (ret == -2) {
+		/* Return code -2 is to enable re-enqueue of the skb.
+		 * Re-push the stripped header before returning busy.
+		 */
+		if (unlikely(!skb_push(skb, header_size))) {
+			dev_err(ipc_wwan->dev, "unable to push eth hdr");
+			ret = -EIO;
+			goto exit;
+		}
+
+		ret = NETDEV_TX_BUSY;
+	} else {
+		ret = -EIO;
+		goto exit;
+	}
+
+	return ret;
+
+exit:
+	/* Log any skb drop except for WWAN Root device */
+	if (tag != 0)
+		dev_dbg(ipc_wwan->dev, "skb dropped. VLAN ID: %d, ret: %d",
+			tag, ret);
+
+	dev_kfree_skb_any(skb);
+	return ret;
+}
+
+static int ipc_wwan_change_mtu(struct net_device *dev, int new_mtu)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(dev);
+	unsigned long flags = 0;
+
+	if (unlikely(new_mtu < IPC_MEM_MIN_MTU_SIZE ||
+		     new_mtu > IPC_MEM_MAX_MTU_SIZE)) {
+		dev_err(ipc_wwan->dev, "mtu %d out of range %d..%d", new_mtu,
+			IPC_MEM_MIN_MTU_SIZE, IPC_MEM_MAX_MTU_SIZE);
+		return -EINVAL;
+	}
+
+	spin_lock_irqsave(&ipc_wwan->lock, flags);
+	dev->mtu = new_mtu;
+	spin_unlock_irqrestore(&ipc_wwan->lock, flags);
+	return 0;
+}
+
+static int ipc_wwan_change_mac_addr(struct net_device *dev, void *sock_addr)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(dev);
+	struct sockaddr *addr = sock_addr;
+	unsigned long flags = 0;
+	int result = 0;
+	u8 *sock_data;
+
+	sock_data = (u8 *)addr->sa_data;
+
+	spin_lock_irqsave(&ipc_wwan->lock, flags);
+
+	if (is_zero_ether_addr(sock_data)) {
+		dev->addr_len = 1;
+		memset(dev->dev_addr, 0, 6);
+		dev_dbg(ipc_wwan->dev, "mac addr set to zero");
+		goto exit;
+	}
+
+	result = eth_mac_addr(dev, sock_addr);
+exit:
+	spin_unlock_irqrestore(&ipc_wwan->lock, flags);
+	return result;
+}
+
+static int ipc_wwan_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+	if (cmd != SIOCSIFHWADDR ||
+	    !access_ok((void __user *)ifr, sizeof(struct ifreq)) ||
+	    dev->addr_len > sizeof(struct sockaddr))
+		return -EINVAL;
+
+	return ipc_wwan_change_mac_addr(dev, &ifr->ifr_hwaddr);
+}
+
+static struct net_device_stats *ipc_wwan_get_stats(struct net_device *dev)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(dev);
+
+	return &ipc_wwan->stats;
+}
+
+/* validate mac address for wwan devices */
+static int ipc_wwan_eth_validate_addr(struct net_device *netdev)
+{
+	return eth_validate_addr(netdev);
+}
+
+/* return valid TX queue for the mapped VLAN device
+ * (for kernel versions later than 4.19.0)
+ */
+static u16 ipc_wwan_select_queue(struct net_device *netdev, struct sk_buff *skb,
+				 struct net_device *sb_dev)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(netdev);
+	u16 txqn = 0xFFFF;
+	u16 tag = 0;
+
+	/* get VLAN tag for the current skb;
+	 * if the packet is untagged, return the default queue.
+	 */
+	if (vlan_get_tag(skb, &tag) < 0)
+		return WWAN_DEFAULT_TXQ;
+
+	/* TX queues are allocated as follows:
+	 *
+	 * VLAN ID 0 is the VLAN root device (wwan0);
+	 * assign the default TX queue, which is 0.
+	 *
+	 * VLAN IDs from IMEM_WWAN_CTRL_VLAN_ID_START
+	 * to IMEM_WWAN_CTRL_VLAN_ID_END also use the default
+	 * TX queue, which is 0.
+	 *
+	 * VLAN IDs from IMEM_WWAN_DATA_VLAN_ID_START up to the
+	 * maximum number of IP devices get a separate
+	 * TX queue for each VLAN ID.
+	 *
+	 * For any other VLAN ID, return an invalid TX queue.
+	 */
+	if (tag >= IMEM_WWAN_DATA_VLAN_ID_START && tag <= ipc_wwan->max_ip_devs)
+		txqn = tag;
+	else if ((tag >= IMEM_WWAN_CTRL_VLAN_ID_START &&
+		  tag <= IMEM_WWAN_CTRL_VLAN_ID_END) ||
+		 tag == WWAN_ROOT_VLAN_TAG)
+		txqn = WWAN_DEFAULT_TXQ;
+
+	dev_dbg(ipc_wwan->dev, "VLAN tag = %u, TX Queue selected %u", tag,
+		txqn);
+	return txqn;
+}
+
+static const struct net_device_ops ipc_wwandev_ops = {
+	.ndo_open = ipc_wwan_open,
+	.ndo_stop = ipc_wwan_stop,
+	.ndo_start_xmit = ipc_wwan_transmit,
+	.ndo_change_mtu = ipc_wwan_change_mtu,
+	.ndo_validate_addr = ipc_wwan_eth_validate_addr,
+	.ndo_do_ioctl = ipc_wwan_ioctl,
+	.ndo_get_stats = ipc_wwan_get_stats,
+	.ndo_vlan_rx_add_vid = ipc_wwan_vlan_rx_add_vid,
+	.ndo_vlan_rx_kill_vid = ipc_wwan_vlan_rx_kill_vid,
+	.ndo_set_mac_address = ipc_wwan_change_mac_addr,
+	.ndo_select_queue = ipc_wwan_select_queue,
+};
+
+void ipc_wwan_update_stats(struct iosm_wwan *ipc_wwan, int id, size_t len,
+			   bool tx)
+{
+	int idx =
+		ipc_wwan_get_vlan_devs_nr(ipc_wwan,
+					  ipc_wwan_mux_session_to_vlan_tag(id));
+
+	if (unlikely(idx < 0 || idx >= ipc_wwan->max_devs)) {
+		dev_err(ipc_wwan->dev, "invalid VLAN device");
+		return;
+	}
+
+	if (tx) {
+		/* Update vlan device tx statistics */
+		ipc_wwan->vlan_devs[idx].stats.tx_packets++;
+		ipc_wwan->vlan_devs[idx].stats.tx_bytes += len;
+		/* Update root device tx statistics */
+		ipc_wwan->stats.tx_packets++;
+		ipc_wwan->stats.tx_bytes += len;
+	} else {
+		/* Update vlan device rx statistics */
+		ipc_wwan->vlan_devs[idx].stats.rx_packets++;
+		ipc_wwan->vlan_devs[idx].stats.rx_bytes += len;
+		/* Update root device rx statistics */
+		ipc_wwan->stats.rx_packets++;
+		ipc_wwan->stats.rx_bytes += len;
+	}
+}
+
+void ipc_wwan_tx_flowctrl(struct iosm_wwan *ipc_wwan, int id, bool on)
+{
+	u16 vid = ipc_wwan_mux_session_to_vlan_tag(id);
+
+	dev_dbg(ipc_wwan->dev, "MUX session id[%d]: %s", id,
+		on ? "Enable" : "Disable");
+	if (on)
+		netif_stop_subqueue(ipc_wwan->netdev, vid);
+	else
+		netif_wake_subqueue(ipc_wwan->netdev, vid);
+}
+
+static struct device_type wwan_type = { .name = "wwan" };
+
+struct iosm_wwan *ipc_wwan_init(void *ops_instance, struct device *dev,
+				int max_sessions)
+{
+	int max_tx_q = WWAN_MIN_TXQ + max_sessions;
+	struct iosm_wwan *ipc_wwan;
+	struct net_device *netdev;
+
+	if (unlikely(!ops_instance))
+		return NULL;
+
+	/* allocate ethernet device */
+	netdev = alloc_etherdev_mqs(sizeof(*ipc_wwan), max_tx_q, WWAN_MAX_RXQ);
+	if (unlikely(!netdev))
+		return NULL;
+
+	ipc_wwan = netdev_priv(netdev);
+
+	ipc_wwan->dev = dev;
+	ipc_wwan->netdev = netdev;
+	ipc_wwan->is_registered = false;
+
+	ipc_wwan->vlan_devs_nr = 0;
+	ipc_wwan->ops_instance = ops_instance;
+
+	ipc_wwan->max_devs = max_sessions + IPC_MEM_MAX_CHANNELS;
+	ipc_wwan->max_ip_devs = max_sessions;
+
+	ipc_wwan->vlan_devs = kcalloc(ipc_wwan->max_devs,
+				      sizeof(ipc_wwan->vlan_devs[0]),
+				      GFP_KERNEL);
+
+	spin_lock_init(&ipc_wwan->lock);
+	mutex_init(&ipc_wwan->if_mutex);
+
+	/* allocate random ethernet address */
+	eth_random_addr(netdev->dev_addr);
+	netdev->addr_assign_type = NET_ADDR_RANDOM;
+
+	snprintf(netdev->name, IFNAMSIZ, "%s", "wwan0");
+	netdev->netdev_ops = &ipc_wwandev_ops;
+	netdev->flags |= IFF_NOARP;
+	netdev->features |=
+		NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_FILTER;
+	SET_NETDEV_DEVTYPE(netdev, &wwan_type);
+
+	if (register_netdev(netdev)) {
+		dev_err(ipc_wwan->dev, "register_netdev failed");
+		ipc_wwan_deinit(ipc_wwan);
+		return NULL;
+	}
+
+	ipc_wwan->is_registered = true;
+
+	netif_device_attach(netdev);
+
+	/* Set max MTU (for kernel versions later than 4.10.0). */
+	netdev->max_mtu = IPC_MEM_MAX_MTU_SIZE;
+
+	return ipc_wwan;
+}
+
+void ipc_wwan_deinit(struct iosm_wwan *ipc_wwan)
+{
+	if (ipc_wwan->is_registered)
+		unregister_netdev(ipc_wwan->netdev);
+	kfree(ipc_wwan->vlan_devs);
+	ipc_wwan->vlan_devs = NULL;
+	free_netdev(ipc_wwan->netdev);
+}
+
+bool ipc_wwan_is_tx_stopped(struct iosm_wwan *ipc_wwan, int id)
+{
+	u16 vid = ipc_wwan_mux_session_to_vlan_tag(id);
+
+	return __netif_subqueue_stopped(ipc_wwan->netdev, vid);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_wwan.h b/drivers/net/wwan/iosm/iosm_ipc_wwan.h
new file mode 100644
index 000000000000..3c3b1fb31ae1
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_wwan.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_WWAN_H
+#define IOSM_IPC_WWAN_H
+
+#define IMEM_WWAN_DATA_VLAN_ID_START 1
+#define IMEM_WWAN_CTRL_VLAN_ID_START 257
+#define IMEM_WWAN_CTRL_VLAN_ID_END 512
+
+/**
+ * ipc_wwan_init - Allocate, initialize and register WWAN device
+ * @ops_instance: Instance pointer for callback
+ * @dev: Pointer to device structure
+ * @max_sessions: Maximum number of sessions
+ *
+ * Returns: Pointer to instance on success else NULL
+ */
+struct iosm_wwan *ipc_wwan_init(void *ops_instance, struct device *dev,
+				int max_sessions);
+
+/**
+ * ipc_wwan_deinit - Unregister and free WWAN device, clear pointer
+ * @ipc_wwan: Pointer to wwan instance data
+ */
+void ipc_wwan_deinit(struct iosm_wwan *ipc_wwan);
+
+/**
+ * ipc_wwan_receive - Receive a downlink packet from CP.
+ * @ipc_wwan: Pointer to wwan instance
+ * @skb_arg: Pointer to struct sk_buff
+ * @dss: Set to true if the VLAN ID is greater than
+ * IMEM_WWAN_CTRL_VLAN_ID_START, else false
+ *
+ * Return: 0 on success else error code
+ */
+int ipc_wwan_receive(struct iosm_wwan *ipc_wwan, struct sk_buff *skb_arg,
+		     bool dss);
+
+/**
+ * ipc_wwan_update_stats - Update device statistics
+ * @ipc_wwan: Pointer to wwan instance
+ * @id: Ipc mux channel session id
+ * @len: Number of bytes to update
+ * @tx: True if statistics need to be updated for transmit,
+ * else false
+ *
+ */
+void ipc_wwan_update_stats(struct iosm_wwan *ipc_wwan, int id, size_t len,
+			   bool tx);
+
+/**
+ * ipc_wwan_tx_flowctrl - Enable/Disable TX flow control
+ * @ipc_wwan: Pointer to wwan instance
+ * @id: Ipc mux channel session id
+ * @on: If true, enable TX flow control, else disable it
+ *
+ */
+void ipc_wwan_tx_flowctrl(struct iosm_wwan *ipc_wwan, int id, bool on);
+
+/**
+ * ipc_wwan_is_tx_stopped - Checks if Tx stopped for a VLAN id.
+ * @ipc_wwan: Pointer to wwan instance
+ * @id: Ipc mux channel session id
+ *
+ * Return: true if stopped, false otherwise
+ */
+bool ipc_wwan_is_tx_stopped(struct iosm_wwan *ipc_wwan, int id);
+
+#endif

From patchwork Mon Nov 23 13:51:22 2020
X-Patchwork-Submitter: "Kumar, M Chetan" <m.chetan.kumar@intel.com>
X-Patchwork-Id: 330938
From: M Chetan Kumar <m.chetan.kumar@intel.com>
To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org
Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com
Subject: [RFC 17/18] net: iosm: readme file
Date: Mon, 23 Nov 2020 19:21:22 +0530
Message-Id: <20201123135123.48892-18-m.chetan.kumar@intel.com>
In-Reply-To: <20201123135123.48892-1-m.chetan.kumar@intel.com>
References: <20201123135123.48892-1-m.chetan.kumar@intel.com>
X-Mailing-List: linux-wireless@vger.kernel.org

Documents IOSM Driver interface usage.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/README | 126 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 126 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/README

diff --git a/drivers/net/wwan/iosm/README b/drivers/net/wwan/iosm/README
new file mode 100644
index 000000000000..4a489177ad96
--- /dev/null
+++ b/drivers/net/wwan/iosm/README
@@ -0,0 +1,126 @@
+IOSM Driver for PCIe based Intel M.2 Modems
+===========================================
+The IOSM (IPC over Shared Memory) driver is a PCIe host driver implemented
+for Linux and Chrome platforms. It handles data exchange over the PCIe
+interface between the host platform and an Intel M.2 modem. The driver
+exposes an interface conforming to the MBIM protocol [1]. Any front-end
+application (e.g. ModemManager) can manage the MBIM interface to enable
+data communication towards WWAN.
+
+Basic usage
+===========
+MBIM functions are inactive when unmanaged. The IOSM driver only
+provides a userspace interface of a character device representing
+the MBIM control channel and does not play any role in managing the
+functionality. It is the job of a userspace application to enumerate
+the port appropriately and enable MBIM functionality.
+
+Examples of such userspace applications are:
+ - mbimcli (included with the libmbim [2] library), and
+ - ModemManager [3]
+
+For establishing an MBIM IP session at least these actions are required by the
+management application:
+ - open the control channel
+ - configure network connection settings
+ - connect to network
+ - configure IP interface
+
+Management application development
+----------------------------------
+The driver and userspace interfaces are described below. The MBIM
+control channel protocol is described in [1].
+
+MBIM control channel userspace ABI
+==================================
+
+/dev/wwanctrl character device
+------------------------------
+The driver exposes an interface to the MBIM function control channel using a
+char driver as a subdriver. The userspace end of the control channel pipe is
+the /dev/wwanctrl character device.
+
+The /dev/wwanctrl device is created as a subordinate character device under
+the IOSM driver. The character device associated with a specific MBIM function
+can be looked up using sysfs by matching the above device name.
+
+Control channel configuration
+-----------------------------
+The wMaxControlMessage field of the MBIM functional descriptor
+limits the maximum control message size. The management application needs to
+negotiate the control message size as per its requirements.
+See also the ioctl documentation below.
+
+Fragmentation
+-------------
+The userspace application is responsible for all control message
+fragmentation and defragmentation as per MBIM.
+
+/dev/wwanctrl write()
+---------------------
+The MBIM control messages from the management application must not
+exceed the negotiated control message size.
+
+/dev/wwanctrl read()
+--------------------
+The management application must accept control messages of up to the
+negotiated control message size.
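+
+Before issuing MBIM commands, the management application first performs the
+MBIM OPEN handshake over this character device. The program below is an
+illustrative sketch only (it is not part of the driver sources): it encodes
+the MBIM_OPEN_MSG header fields per the MBIM 1.0 specification and assumes a
+little-endian host.
+
+    #include <fcntl.h>
+    #include <stdint.h>
+    #include <stdio.h>
+    #include <string.h>
+    #include <unistd.h>
+    int main(void)
+    {
+        uint32_t open_msg[4] = {
+            0x00000001, /* MessageType: MBIM_OPEN_MSG */
+            16,         /* MessageLength */
+            1,          /* TransactionId */
+            4096        /* MaxControlTransfer (negotiated size) */
+        };
+        uint8_t resp[4096];
+        uint32_t type = 0, status = 0;
+        int fd = open("/dev/wwanctrl", O_RDWR);
+
+        if (fd < 0)
+            return 1;
+        if (write(fd, open_msg, sizeof(open_msg)) != sizeof(open_msg))
+            return 1;
+        /* Expect MBIM_OPEN_DONE (0x80000001); Status is at offset 12 */
+        if (read(fd, resp, sizeof(resp)) >= 16) {
+            memcpy(&type, resp, sizeof(type));
+            memcpy(&status, resp + 12, sizeof(status));
+            printf("open done: type 0x%08x, status %u\n", type, status);
+        }
+        close(fd);
+        return 0;
+    }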
+
+/dev/wwanctrl ioctl()
+---------------------
+IOCTL_WDM_MAX_COMMAND: Get Maximum Command Size
+This IOCTL command can be used by applications to fetch the maximum command
+buffer length supported by the driver, which is restricted to 4096 bytes.
+
+    #include <fcntl.h>
+    #include <stdio.h>
+    #include <unistd.h>
+    #include <sys/ioctl.h>
+    #include <linux/usb/cdc-wdm.h>
+    int main()
+    {
+        __u16 max;
+        int fd = open("/dev/wwanctrl", O_RDWR);
+
+        if (fd < 0)
+            return 1;
+        if (!ioctl(fd, IOCTL_WDM_MAX_COMMAND, &max))
+            printf("wMaxControlMessage is %d\n", max);
+        close(fd);
+        return 0;
+    }
+
+MBIM data channel userspace ABI
+===============================
+
+wwanY network device
+--------------------
+The IOSM driver represents the MBIM data channel as a single
+network device named "wwan0". This network device is initially
+mapped to MBIM IP session 0.
+
+Multiplexed IP sessions (IPS)
+-----------------------------
+The IOSM driver allows multiplexing of several IP sessions over the single
+network device wwan0. The driver models such IP sessions as 802.1q VLAN
+subdevices of the master wwanY device, mapping MBIM IP session M to VLAN ID M
+for all values of M greater than 0.
+
+The userspace management application is responsible for adding new VLAN links
+prior to establishing MBIM IP sessions where the SessionId is greater than 0.
+These links can be added by using the normal VLAN kernel interfaces.
+
+For example, adding a link for an MBIM IP session with SessionId 5:
+
+  ip link add link wwan0 name wwan0.5 type vlan id 5
+
+The driver will automatically map the "wwan0.5" network device to MBIM
+IP session 5.
+
+References
+==========
+
+[1] "MBIM (Mobile Broadband Interface Model) Registry"
+    - http://compliance.usb.org/mbim/
+
+[2] libmbim - "a glib-based library for talking to WWAN modems and
+    devices which speak the Mobile Interface Broadband Model (MBIM)
+    protocol"
+    - http://www.freedesktop.org/wiki/Software/libmbim/
+
+[3] ModemManager - "a DBus-activated daemon which controls mobile
+    broadband (2G/3G/4G) devices and connections"
+    - http://www.freedesktop.org/wiki/Software/ModemManager/
\ No newline at end of file