From patchwork Thu Jan 7 17:05:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kumar, M Chetan" X-Patchwork-Id: 358666 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A9B26C433E0 for ; Thu, 7 Jan 2021 17:06:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 53577233FC for ; Thu, 7 Jan 2021 17:06:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728257AbhAGRG2 (ORCPT ); Thu, 7 Jan 2021 12:06:28 -0500 Received: from mga11.intel.com ([192.55.52.93]:22129 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726427AbhAGRG1 (ORCPT ); Thu, 7 Jan 2021 12:06:27 -0500 IronPort-SDR: qVUGQBODOj56ahRwT7so1mgaIRy1SmtsdVmReKOCyXYBsz5Y5XWmVB+PDyk86en0DpgUdcO75f 800oX8YXgQAw== X-IronPort-AV: E=McAfee;i="6000,8403,9857"; a="173951932" X-IronPort-AV: E=Sophos;i="5.79,329,1602572400"; d="scan'208";a="173951932" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Jan 2021 09:05:47 -0800 IronPort-SDR: Zu497OyawzQKTctSoissDtcvTW4jDUzglf+vJbVqCMyc+q7wnCSRsnhy9hM5IjKD0/YtbhRcEA t0Bq2VoNVhqw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,329,1602572400"; d="scan'208";a="422643648" Received: from bgsxx0031.iind.intel.com ([10.106.222.40]) by orsmga001.jf.intel.com with ESMTP; 07 Jan 2021 09:05:44 -0800 From: M Chetan Kumar To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com Subject: [PATCH 01/18] net: iosm: entry point Date: Thu, 7 Jan 2021 22:35:06 +0530 Message-Id: <20210107170523.26531-2-m.chetan.kumar@intel.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: <20210107170523.26531-1-m.chetan.kumar@intel.com> References: <20210107170523.26531-1-m.chetan.kumar@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org 1) Register IOSM driver with kernel to manage Intel WWAN PCIe device(PCI_VENDOR_ID_INTEL, INTEL_CP_DEVICE_7560_ID). 2) Exposes the EP PCIe device capability to Host PCIe core. 3) Initializes PCIe EP configuration and defines PCIe driver probe, remove and power management OPS. 4) Allocate and map(dma) skb memory for data communication from device to kernel and vice versa. Signed-off-by: M Chetan Kumar --- drivers/net/wwan/iosm/iosm_ipc_pcie.c | 561 ++++++++++++++++++++++++++++++++++ drivers/net/wwan/iosm/iosm_ipc_pcie.h | 210 +++++++++++++ 2 files changed, 771 insertions(+) create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pcie.c create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pcie.h diff --git a/drivers/net/wwan/iosm/iosm_ipc_pcie.c b/drivers/net/wwan/iosm/iosm_ipc_pcie.c new file mode 100644 index 000000000000..c37e54fd4dae --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_pcie.c @@ -0,0 +1,561 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Intel Corporation. 
+ */ + +#include +#include +#include + +#include "iosm_ipc_imem.h" +#include "iosm_ipc_pcie.h" +#include "iosm_ipc_protocol.h" + +#define DRV_AUTHOR "Intel Corporation " + +MODULE_AUTHOR(DRV_AUTHOR); +MODULE_DESCRIPTION("IOSM Driver"); +MODULE_LICENSE("GPL v2"); + +/* WWAN GUID */ +static guid_t wwan_acpi_guid = GUID_INIT(0xbad01b75, 0x22a8, 0x4f48, 0x87, 0x92, + 0xbd, 0xde, 0x94, 0x67, 0x74, 0x7d); + +static void ipc_pcie_resources_release(struct iosm_pcie *ipc_pcie) +{ + /* Free the MSI resources. */ + ipc_release_irq(ipc_pcie); + + /* Free mapped doorbell scratchpad bus memory into CPU space. */ + iounmap(ipc_pcie->scratchpad); + + /* Free mapped IPC_REGS bus memory into CPU space. */ + iounmap(ipc_pcie->ipc_regs); + + /* Releases all PCI I/O and memory resources previously reserved by a + * successful call to pci_request_regions. Call this function only + * after all use of the PCI regions has ceased. + */ + pci_release_regions(ipc_pcie->pci); +} + +static void ipc_cleanup(struct iosm_pcie *ipc_pcie) +{ + /* Free the shared memory resources. */ + ipc_imem_cleanup(ipc_pcie->imem); + + ipc_pcie_resources_release(ipc_pcie); + + /* Signal to the system that the PCI device is not in use. */ + pci_disable_device(ipc_pcie->pci); +} + +static void ipc_pcie_deinit(struct iosm_pcie *ipc_pcie) +{ + kfree(ipc_pcie->imem); + kfree(ipc_pcie); +} + +static void iosm_ipc_remove(struct pci_dev *pci) +{ + struct iosm_pcie *ipc_pcie = pci_get_drvdata(pci); + + ipc_cleanup(ipc_pcie); + + ipc_pcie_deinit(ipc_pcie); +} + +static int ipc_pcie_resources_request(struct iosm_pcie *ipc_pcie) +{ + struct pci_dev *pci = ipc_pcie->pci; + u32 cap; + u32 ret; + + /* Reserved PCI I/O and memory resources. + * Mark all PCI regions associated with PCI device pci as + * being reserved by owner IOSM_IPC. + */ + ret = pci_request_regions(pci, "IOSM_IPC"); + if (ret) { + dev_err(ipc_pcie->dev, "failed pci request regions"); + goto pci_request_region_fail; + } + + /* Reserve the doorbell IPC REGS memory resources. + * Remap the memory into CPU space. Arrange for the physical address + * (BAR) to be visible from this driver. + * pci_ioremap_bar() ensures that the memory is marked uncachable. + */ + ipc_pcie->ipc_regs = pci_ioremap_bar(pci, ipc_pcie->ipc_regs_bar_nr); + + if (!ipc_pcie->ipc_regs) { + dev_err(ipc_pcie->dev, "IPC REGS ioremap error"); + ret = -EBUSY; + goto ipc_regs_remap_fail; + } + + /* Reserve the MMIO scratchpad memory resources. + * Remap the memory into CPU space. Arrange for the physical address + * (BAR) to be visible from this driver. + * pci_ioremap_bar() ensures that the memory is marked uncachable. + */ + ipc_pcie->scratchpad = + pci_ioremap_bar(pci, ipc_pcie->scratchpad_bar_nr); + + if (!ipc_pcie->scratchpad) { + dev_err(ipc_pcie->dev, "doorbell scratchpad ioremap error"); + ret = -EBUSY; + goto scratch_remap_fail; + } + + /* Install the irq handler triggered by CP. */ + ret = ipc_acquire_irq(ipc_pcie); + if (ret) { + dev_err(ipc_pcie->dev, "acquiring MSI irq failed!"); + goto irq_acquire_fail; + } + + /* Enable bus-mastering for the IOSM IPC device. */ + pci_set_master(pci); + + /* Enable LTR if possible + * This is needed for L1.2! 
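+ * LTR (Latency Tolerance Reporting) lets the endpoint report its
+ * tolerable delay to the root complex; without LTR enabled the link
+ * cannot enter the ASPM L1.2 substate.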
+ */ + pcie_capability_read_dword(ipc_pcie->pci, PCI_EXP_DEVCAP2, &cap); + if (cap & PCI_EXP_DEVCAP2_LTR) + pcie_capability_set_word(ipc_pcie->pci, PCI_EXP_DEVCTL2, + PCI_EXP_DEVCTL2_LTR_EN); + + dev_dbg(ipc_pcie->dev, "link between AP and CP is fully on"); + + return ret; + +irq_acquire_fail: + iounmap(ipc_pcie->scratchpad); +scratch_remap_fail: + iounmap(ipc_pcie->ipc_regs); +ipc_regs_remap_fail: + pci_release_regions(pci); +pci_request_region_fail: + return ret; +} + +bool ipc_pcie_check_aspm_enabled(struct iosm_pcie *ipc_pcie, + bool parent) +{ + struct pci_dev *pdev; + u32 enabled; + u16 value; + + if (parent) + pdev = ipc_pcie->pci->bus->self; + else + pdev = ipc_pcie->pci; + + pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &value); + enabled = value & PCI_EXP_LNKCTL_ASPMC; + dev_dbg(ipc_pcie->dev, "ASPM L1: 0x%04X 0x%03X", pdev->device, value); + + return (enabled == PCI_EXP_LNKCTL_ASPM_L1 || + enabled == PCI_EXP_LNKCTL_ASPMC); +} + +bool ipc_pcie_check_data_link_active(struct iosm_pcie *ipc_pcie) +{ + struct pci_dev *parent; + u16 link_status = 0; + + if (!ipc_pcie->pci->bus || !ipc_pcie->pci->bus->self) { + dev_err(ipc_pcie->dev, "root port not found"); + return false; + } + + parent = ipc_pcie->pci->bus->self; + + pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &link_status); + dev_dbg(ipc_pcie->dev, "Link status: 0x%04X", link_status); + + return link_status & PCI_EXP_LNKSTA_DLLLA; +} + +static bool ipc_pcie_check_aspm_supported(struct iosm_pcie *ipc_pcie, + bool parent) +{ + struct pci_dev *pdev; + u32 support; + u32 cap = 0; + + if (parent) + pdev = ipc_pcie->pci->bus->self; + else + pdev = ipc_pcie->pci; + pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &cap); + support = u32_get_bits(cap, PCI_EXP_LNKCAP_ASPMS); + if (support < PCI_EXP_LNKCTL_ASPM_L1) { + dev_dbg(ipc_pcie->dev, "ASPM L1 not supported: 0x%04X", + pdev->device); + return false; + } + return true; +} + +void ipc_pcie_config_aspm(struct iosm_pcie *ipc_pcie) +{ + bool parent_aspm_enabled, dev_aspm_enabled; + + /* check if both root port and child supports ASPM L1 */ + if (!ipc_pcie_check_aspm_supported(ipc_pcie, true) || + !ipc_pcie_check_aspm_supported(ipc_pcie, false)) + return; + + parent_aspm_enabled = ipc_pcie_check_aspm_enabled(ipc_pcie, true); + dev_aspm_enabled = ipc_pcie_check_aspm_enabled(ipc_pcie, false); + + dev_dbg(ipc_pcie->dev, "ASPM parent: %s device: %s", + parent_aspm_enabled ? "Enabled" : "Disabled", + dev_aspm_enabled ? 
"Enabled" : "Disabled"); +} + +/* Initializes PCIe endpoint configuration */ +static void ipc_pcie_config_init(struct iosm_pcie *ipc_pcie) +{ + /* BAR0 is used for doorbell */ + ipc_pcie->ipc_regs_bar_nr = IPC_DOORBELL_BAR0; + + /* update HW configuration */ + ipc_pcie->scratchpad_bar_nr = IPC_SCRATCHPAD_BAR2; + ipc_pcie->doorbell_reg_offset = IPC_DOORBELL_CH_OFFSET; + ipc_pcie->doorbell_write = IPC_WRITE_PTR_REG_0; + ipc_pcie->doorbell_capture = IPC_CAPTURE_PTR_REG_0; +} + +/* This will read the BIOS WWAN RTD3 settings: + * D0L1.2/D3L2/Disabled + */ +static enum ipc_pcie_sleep_state imc_ipc_read_bios_cfg(struct device *dev) +{ + union acpi_object *object; + acpi_handle handle_acpi; + + handle_acpi = ACPI_HANDLE(dev); + if (!handle_acpi) { + pr_debug("pci device is NOT ACPI supporting device\n"); + goto default_ret; + } + + object = acpi_evaluate_dsm(handle_acpi, &wwan_acpi_guid, 0, 3, NULL); + + if (object && object->integer.value == 3) + return IPC_PCIE_D3L2; + +default_ret: + return IPC_PCIE_D0L12; +} + +static int iosm_ipc_probe(struct pci_dev *pci, + const struct pci_device_id *pci_id) +{ + struct iosm_pcie *ipc_pcie = kzalloc(sizeof(*ipc_pcie), GFP_KERNEL); + + pr_debug("Probing device 0x%X from the vendor 0x%X", pci_id->device, + pci_id->vendor); + + if (!ipc_pcie) + goto ret_fail; + + /* Initialize ipc dbg component for the PCIe device */ + ipc_pcie->dev = &pci->dev; + + /* Set the driver specific data. */ + pci_set_drvdata(pci, ipc_pcie); + + /* Save the address of the PCI device configuration. */ + ipc_pcie->pci = pci; + + /* Update platform configuration */ + ipc_pcie_config_init(ipc_pcie); + + /* Initialize the device before it is used. Ask low-level code + * to enable I/O and memory. Wake up the device if it was suspended. + */ + if (pci_enable_device(pci)) { + dev_err(ipc_pcie->dev, "failed to enable the AP PCIe device"); + /* If enable of PCIe device has failed then calling ipc_cleanup + * will panic the system. More over ipc_cleanup() is required to + * be called after ipc_imem_mount() + */ + goto pci_enable_fail; + } + + ipc_pcie_config_aspm(ipc_pcie); + dev_dbg(ipc_pcie->dev, "PCIe device enabled."); + + /* Read WWAN RTD3 BIOS Setting + */ + ipc_pcie->d3l2_support = imc_ipc_read_bios_cfg(&pci->dev); + + ipc_pcie->suspend = 0; + + if (ipc_pcie_resources_request(ipc_pcie)) + goto resources_req_fail; + + /* Establish the link to the imem layer. 
*/ + ipc_pcie->imem = ipc_imem_init(ipc_pcie, pci->device, + ipc_pcie->scratchpad, ipc_pcie->dev); + if (!ipc_pcie->imem) { + dev_err(ipc_pcie->dev, "failed to init imem"); + goto imem_init_fail; + } + + return 0; + +imem_init_fail: + ipc_pcie_resources_release(ipc_pcie); +resources_req_fail: + pci_disable_device(pci); +pci_enable_fail: + kfree(ipc_pcie); +ret_fail: + return -EIO; +} + +static const struct pci_device_id iosm_ipc_ids[] = { + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, INTEL_CP_DEVICE_7560_ID) }, + {} +}; + +/* Enter sleep in s2idle case + */ +static int __maybe_unused iosm_ipc_suspend_s2idle(struct iosm_pcie *ipc_pcie) +{ + ipc_cp_irq_sleep_control(ipc_pcie, IPC_MEM_DEV_PM_FORCE_SLEEP); + + set_bit(0, &ipc_pcie->suspend); + /* Applying memory barrier so that ipc_pcie->suspend is updated + * before being read + */ + smp_mb__after_atomic(); + + ipc_imem_pm_s2idle_sleep(ipc_pcie->imem, true); + + return 0; +} + +/* Resume from sleep in s2idle case + */ +static int __maybe_unused iosm_ipc_resume_s2idle(struct iosm_pcie *ipc_pcie) +{ + ipc_cp_irq_sleep_control(ipc_pcie, IPC_MEM_DEV_PM_FORCE_ACTIVE); + + ipc_imem_pm_s2idle_sleep(ipc_pcie->imem, false); + + clear_bit(0, &ipc_pcie->suspend); + /* Applying memory barrier so that ipc_pcie->suspend is updated + * before being read + */ + smp_mb__after_atomic(); + + return 0; +} + +int __maybe_unused iosm_ipc_suspend(struct iosm_pcie *ipc_pcie) +{ + struct pci_dev *pdev; + int ret; + + pdev = ipc_pcie->pci; + + /* Execute D3 one time. */ + if (pdev->current_state != PCI_D0) { + dev_dbg(ipc_pcie->dev, "done for PM=%d", pdev->current_state); + return 0; + } + + /* The HAL shall ask the shared memory layer whether D3 is allowed. */ + ipc_imem_pm_suspend(ipc_pcie->imem); + + /* Save the PCI configuration space of a device before suspending. */ + ret = pci_save_state(pdev); + + if (ret) { + dev_err(ipc_pcie->dev, "pci_save_state error=%d", ret); + return ret; + } + + /* Set the power state of a PCI device. + * Transition a device to a new power state, using the device's PCI PM + * registers. + */ + ret = pci_set_power_state(pdev, PCI_D3cold); + + if (ret) { + dev_err(ipc_pcie->dev, "pci_set_power_state error=%d", ret); + return ret; + } + + dev_dbg(ipc_pcie->dev, "SUSPEND done"); + return ret; +} + +int __maybe_unused iosm_ipc_resume(struct iosm_pcie *ipc_pcie) +{ + int ret; + + /* Set the power state of a PCI device. + * Transition a device to a new power state, using the device's PCI PM + * registers. + */ + ret = pci_set_power_state(ipc_pcie->pci, PCI_D0); + + if (ret) { + dev_err(ipc_pcie->dev, "pci_set_power_state error=%d", ret); + return ret; + } + + pci_restore_state(ipc_pcie->pci); + + /* The HAL shall inform the shared memory layer that the device is + * active. 
+ */ + ipc_imem_pm_resume(ipc_pcie->imem); + + dev_dbg(ipc_pcie->dev, "RESUME done"); + return ret; +} + +static int __maybe_unused iosm_ipc_suspend_cb(struct device *dev) +{ + struct iosm_pcie *ipc_pcie; + struct pci_dev *pdev; + + pdev = to_pci_dev(dev); + + ipc_pcie = pci_get_drvdata(pdev); + + switch (ipc_pcie->d3l2_support) { + case IPC_PCIE_D0L12: + iosm_ipc_suspend_s2idle(ipc_pcie); + break; + case IPC_PCIE_D3L2: + iosm_ipc_suspend(ipc_pcie); + break; + } + + return 0; +} + +static int __maybe_unused iosm_ipc_resume_cb(struct device *dev) +{ + struct iosm_pcie *ipc_pcie; + struct pci_dev *pdev; + + pdev = to_pci_dev(dev); + + ipc_pcie = pci_get_drvdata(pdev); + + switch (ipc_pcie->d3l2_support) { + case IPC_PCIE_D0L12: + iosm_ipc_resume_s2idle(ipc_pcie); + break; + case IPC_PCIE_D3L2: + iosm_ipc_resume(ipc_pcie); + break; + } + + return 0; +} + +static SIMPLE_DEV_PM_OPS(iosm_ipc_pm, iosm_ipc_suspend_cb, iosm_ipc_resume_cb); + +static struct pci_driver iosm_ipc_driver = { + .name = KBUILD_MODNAME, + .probe = iosm_ipc_probe, + .remove = iosm_ipc_remove, + .driver = { + .pm = &iosm_ipc_pm, + }, + .id_table = iosm_ipc_ids, +}; + +module_pci_driver(iosm_ipc_driver); + +int ipc_pcie_addr_map(struct iosm_pcie *ipc_pcie, void *mem, size_t size, + dma_addr_t *mapping, int direction) +{ + if (ipc_pcie->pci) { + *mapping = dma_map_single(&ipc_pcie->pci->dev, mem, size, + direction); + if (dma_mapping_error(&ipc_pcie->pci->dev, *mapping)) { + dev_err(ipc_pcie->dev, "dma mapping failed"); + return -EINVAL; + } + } + return 0; +} + +void ipc_pcie_addr_unmap(struct iosm_pcie *ipc_pcie, size_t size, + dma_addr_t mapping, int direction) +{ + if (!mapping) + return; + if (ipc_pcie->pci) + dma_unmap_single(&ipc_pcie->pci->dev, mapping, size, direction); +} + +struct sk_buff *ipc_pcie_alloc_local_skb(struct iosm_pcie *ipc_pcie, + gfp_t flags, size_t size) +{ + struct sk_buff *skb; + + if (!ipc_pcie || !size) { + pr_err("invalid pcie object or size"); + return NULL; + } + + skb = __netdev_alloc_skb(NULL, size, flags); + if (!skb) + return NULL; + + IPC_CB(skb)->op_type = (u8)UL_DEFAULT; + IPC_CB(skb)->mapping = 0; + + return skb; +} + +struct sk_buff *ipc_pcie_alloc_skb(struct iosm_pcie *ipc_pcie, size_t size, + gfp_t flags, dma_addr_t *mapping, + int direction, size_t headroom) +{ + struct sk_buff *skb = ipc_pcie_alloc_local_skb(ipc_pcie, flags, + size + headroom); + if (!skb) + return NULL; + + if (headroom) + skb_reserve(skb, headroom); + + if (ipc_pcie_addr_map(ipc_pcie, skb->data, size, mapping, direction)) { + dev_kfree_skb(skb); + return NULL; + } + + BUILD_BUG_ON(sizeof(*IPC_CB(skb)) > sizeof(skb->cb)); + + /* Store the mapping address in skb scratch pad for later usage */ + IPC_CB(skb)->mapping = *mapping; + IPC_CB(skb)->direction = direction; + IPC_CB(skb)->len = size; + + return skb; +} + +void ipc_pcie_kfree_skb(struct iosm_pcie *ipc_pcie, struct sk_buff *skb) +{ + if (!skb) + return; + + ipc_pcie_addr_unmap(ipc_pcie, IPC_CB(skb)->len, IPC_CB(skb)->mapping, + IPC_CB(skb)->direction); + IPC_CB(skb)->mapping = 0; + dev_kfree_skb(skb); +} diff --git a/drivers/net/wwan/iosm/iosm_ipc_pcie.h b/drivers/net/wwan/iosm/iosm_ipc_pcie.h new file mode 100644 index 000000000000..33361b30e71e --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_pcie.h @@ -0,0 +1,210 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2020 Intel Corporation. 
+ */ + +#ifndef IOSM_IPC_PCIE_H +#define IOSM_IPC_PCIE_H + +#include +#include +#include + +#include "iosm_ipc_irq.h" + +/* Device ID */ +#define INTEL_CP_DEVICE_7560_ID 0x7560 + +/* Define for BAR area usage */ +#define IPC_DOORBELL_BAR0 0 +#define IPC_SCRATCHPAD_BAR2 2 + +/* Defines for DOORBELL registers information */ +#define IPC_DOORBELL_CH_OFFSET BIT(5) +#define IPC_WRITE_PTR_REG_0 BIT(4) +#define IPC_CAPTURE_PTR_REG_0 BIT(3) + +/* Number of MSI used for IPC */ +#define IPC_MSI_VECTORS 1 + +/* Total number of Maximum IPC IRQ vectors used for IPC */ +#define IPC_IRQ_VECTORS IPC_MSI_VECTORS + +/** + * enum ipc_pcie_sleep_state - Enum type to different sleep state transitions + * @IPC_PCIE_D0L12: Put the sleep state in D0L12 + * @IPC_PCIE_D3L2: Put the sleep state in D3L2 + */ +enum ipc_pcie_sleep_state { + IPC_PCIE_D0L12, + IPC_PCIE_D3L2, +}; + +/** + * struct iosm_pcie - IPC_PCIE struct. + * @pci: Address of the device description + * @dev: Pointer to generic device structure + * @ipc_regs: Remapped CP doorbell address of the irq register + * set, to fire the doorbell irq. + * @scratchpad: Remapped CP scratchpad address, to send the + * configuration. tuple and the IPC descriptors + * to CP in the ROM phase. The config tuple + * information are saved on the MSI scratchpad. + * @imem: Pointer to imem data struct + * @ipc_regs_bar_nr: BAR number to be used for IPC doorbell + * @scratchpad_bar_nr: BAR number to be used for Scratchpad + * @nvec: number of requested irq vectors + * @doorbell_reg_offset: doorbell_reg_offset + * @doorbell_write: doorbell write register + * @doorbell_capture: doorbell capture resgister + * @suspend: S2IDLE sleep/active + * @d3l2_support: Read WWAN RTD3 BIOS setting for D3L2 support + */ +struct iosm_pcie { + struct pci_dev *pci; + struct device *dev; + void __iomem *ipc_regs; + void __iomem *scratchpad; + struct iosm_imem *imem; + int ipc_regs_bar_nr; + int scratchpad_bar_nr; + int nvec; + u32 doorbell_reg_offset; + u32 doorbell_write; + u32 doorbell_capture; + unsigned long suspend; + enum ipc_pcie_sleep_state d3l2_support; +}; + +/** + * struct ipc_skb_cb - Struct definition of the socket buffer which is mapped to + * the cb field of sbk + * @mapping: Store physical or IOVA mapped address of skb virtual add. + * @direction: DMA direction + * @len: Length of the DMA mapped region + * @op_type: Expected values are defined about enum ipc_ul_usr_op. + */ +struct ipc_skb_cb { + dma_addr_t mapping; + int direction; + int len; + u8 op_type; +}; + +/** + * enum ipc_ul_usr_op - Control operation to execute the right action on + * the user interface. + * @UL_USR_OP_BLOCKED: The uplink app was blocked until CP confirms that the + * uplink buffer was consumed triggered by the IRQ. + * @UL_MUX_OP_ADB: In MUX mode the UL ADB shall be addedd to the free list. + * @UL_DEFAULT: SKB in non muxing mode + */ +enum ipc_ul_usr_op { + UL_USR_OP_BLOCKED, + UL_MUX_OP_ADB, + UL_DEFAULT, +}; + +/** + * ipc_pcie_addr_map - Maps the kernel's virtual address to either IOVA + * address space or Physical address space, the mapping is + * stored in the skb's cb. 
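+ * The mapping is created with dma_map_single() and must be released
+ * again with ipc_pcie_addr_unmap() once the transfer has completed.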
+ * @ipc_pcie: Pointer to struct iosm_pcie + * @mem: Skb mem containing data + * @size: Data size + * @mapping: Dma mapping address + * @direction: Data direction + * + * Returns: 0 on success else error code + */ +int ipc_pcie_addr_map(struct iosm_pcie *ipc_pcie, void *mem, size_t size, + dma_addr_t *mapping, int direction); + +/** + * ipc_pcie_addr_unmap - Unmaps the skb memory region from IOVA address space + * @ipc_pcie: Pointer to struct iosm_pcie + * @size: Data size + * @mapping: Dma mapping address + * @direction: Data direction + * + * Returns: 0 on success else error code + */ +void ipc_pcie_addr_unmap(struct iosm_pcie *ipc_pcie, size_t size, + dma_addr_t mapping, int direction); + +/** + * ipc_pcie_alloc_skb - Allocate an uplink SKB for the given size. + * @ipc_pcie: Pointer to struct iosm_pcie + * @size: Size of the SKB required. + * @flags: Allocation flags + * @mapping: Copies either mapped IOVA add. or converted Phy address + * @direction: DMA data direction + * @headroom: Header data offset + * Returns: Pointer to ipc_skb on Success, NULL on failure. + */ +struct sk_buff *ipc_pcie_alloc_skb(struct iosm_pcie *ipc_pcie, size_t size, + gfp_t flags, dma_addr_t *mapping, + int direction, size_t headroom); + +/** + * ipc_pcie_alloc_local_skb - Allocate a local SKB for the given size. + * @ipc_pcie: Pointer to struct iosm_pcie + * @flags: Allocation flags + * @size: Size of the SKB required. + * + * Returns: Pointer to ipc_skb on Success, NULL on failure. + */ +struct sk_buff *ipc_pcie_alloc_local_skb(struct iosm_pcie *ipc_pcie, + gfp_t flags, size_t size); + +/** + * ipc_pcie_kfree_skb - Free skb allocated by ipc_pcie_alloc_*_skb(). + * @ipc_pcie: Pointer to struct iosm_pcie + * @skb: Pointer to the skb + */ +void ipc_pcie_kfree_skb(struct iosm_pcie *ipc_pcie, struct sk_buff *skb); + +/** + * ipc_pcie_check_data_link_active - Check Data Link Layer Active + * @ipc_pcie: Pointer to struct iosm_pcie + * + * Returns: true if active, otherwise false + */ +bool ipc_pcie_check_data_link_active(struct iosm_pcie *ipc_pcie); + +/** + * iosm_ipc_suspend - Callback invoked by pm_runtime_suspend. It decrements + * the device's usage count then, carry out a suspend, + * either synchronous or asynchronous. + * @ipc_pcie: Pointer to struct iosm_pcie + * + * Returns: 0 on success else error code + */ +int iosm_ipc_suspend(struct iosm_pcie *ipc_pcie); + +/** + * iosm_ipc_resume - Callback invoked by pm_runtime_resume. It increments + * the device's usage count then, carry out a resume, + * either synchronous or asynchronous. 
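+ * The device is brought back to PCI_D0 and its saved configuration
+ * space is restored before the shared memory layer is resumed.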
+ * @ipc_pcie: Pointer to struct iosm_pcie + * + * Returns: 0 on success else error code + */ +int iosm_ipc_resume(struct iosm_pcie *ipc_pcie); + +/** + * ipc_pcie_check_aspm_enabled - Check if ASPM L1 is already enabled + * @ipc_pcie: Pointer to struct iosm_pcie + * @parent: True if checking ASPM L1 for parent else false + * + * Returns: true if ASPM is already enabled else false + */ +bool ipc_pcie_check_aspm_enabled(struct iosm_pcie *ipc_pcie, + bool parent); +/** + * ipc_pcie_config_aspm - Configure ASPM L1 + * @ipc_pcie: Pointer to struct iosm_pcie + */ +void ipc_pcie_config_aspm(struct iosm_pcie *ipc_pcie); + +#endif From patchwork Thu Jan 7 17:05:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kumar, M Chetan" X-Patchwork-Id: 358665 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3EC85C4332B for ; Thu, 7 Jan 2021 17:06:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id ECB08233CF for ; Thu, 7 Jan 2021 17:06:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728081AbhAGRGe (ORCPT ); Thu, 7 Jan 2021 12:06:34 -0500 Received: from mga11.intel.com ([192.55.52.93]:22136 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725835AbhAGRGb (ORCPT ); Thu, 7 Jan 2021 12:06:31 -0500 IronPort-SDR: YagXXWHE50D2nodM6jfPfTlIIn6CSpnaPP3yEy7XjSThCOA+kRzWuxBodXUy6hFpYgnFQhV9X8 D+TYvsox4J6Q== X-IronPort-AV: E=McAfee;i="6000,8403,9857"; a="173951946" X-IronPort-AV: E=Sophos;i="5.79,329,1602572400"; d="scan'208";a="173951946" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Jan 2021 09:05:50 -0800 IronPort-SDR: hA9pvmmp4vCfTB5Bv6phudHA//hfWofXrhjMGY4VrD+guaS/NZtFt8YBKteeeEJyy4cwzTJBm7 8DHN9UVDUouA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,329,1602572400"; d="scan'208";a="422643689" Received: from bgsxx0031.iind.intel.com ([10.106.222.40]) by orsmga001.jf.intel.com with ESMTP; 07 Jan 2021 09:05:48 -0800 From: M Chetan Kumar To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com Subject: [PATCH 02/18] net: iosm: irq handling Date: Thu, 7 Jan 2021 22:35:07 +0530 Message-Id: <20210107170523.26531-3-m.chetan.kumar@intel.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: <20210107170523.26531-1-m.chetan.kumar@intel.com> References: <20210107170523.26531-1-m.chetan.kumar@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org 1) Request interrupt vector, frees allocated resource. 2) Registers IRQ handler. 
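In essence the acquire/release pair below follows the standard
threaded-MSI pattern; as a rough sketch, using the names from this
patch and simplified to a single vector with the error unwinding
omitted:

	/* acquire: allocate one MSI vector, attach a threaded handler */
	nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
	if (nvec < 0)
		return nvec;
	ret = request_threaded_irq(pdev->irq, NULL, ipc_msi_interrupt, 0,
				   KBUILD_MODNAME, ipc_pcie);

	/* release: detach the handler, then give the vector back */
	free_irq(pdev->irq, ipc_pcie);
	pci_free_irq_vectors(pdev);

Keeping the hard-irq context empty and deferring the real work to the
threaded handler (which in turn hands it to the IPC tasklet) keeps
interrupt latency low.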
Signed-off-by: M Chetan Kumar --- drivers/net/wwan/iosm/iosm_ipc_irq.c | 89 ++++++++++++++++++++++++++++++++++++ drivers/net/wwan/iosm/iosm_ipc_irq.h | 35 ++++++++++++++ 2 files changed, 124 insertions(+) create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.c create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.h diff --git a/drivers/net/wwan/iosm/iosm_ipc_irq.c b/drivers/net/wwan/iosm/iosm_ipc_irq.c new file mode 100644 index 000000000000..190d6b6b274d --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_irq.c @@ -0,0 +1,89 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Intel Corporation. + */ + +#include "iosm_ipc_pcie.h" +#include "iosm_ipc_protocol.h" + +static inline void write_dbell_reg(struct iosm_pcie *ipc_pcie, int irq_n, + u32 data) +{ + void __iomem *write_reg; + + /* Select the first doorbell register, which is only currently needed + * by CP. + */ + write_reg = (void __iomem *)((u8 __iomem *)ipc_pcie->ipc_regs + + ipc_pcie->doorbell_write + + (irq_n * ipc_pcie->doorbell_reg_offset)); + + /* Fire the doorbell irq by writing data on the doorbell write pointer + * register. + */ + iowrite32(data, write_reg); +} + +void ipc_doorbell_fire(struct iosm_pcie *ipc_pcie, int irq_n, u32 data) +{ + write_dbell_reg(ipc_pcie, irq_n, data); +} + +/* Threaded Interrupt handler for MSI interrupts */ +static irqreturn_t ipc_msi_interrupt(int irq, void *dev_id) +{ + struct iosm_pcie *ipc_pcie = dev_id; + int instance = irq - ipc_pcie->pci->irq; + + /* Shift the MSI irq actions to the IPC tasklet. IRQ_NONE means the + * irq was not from the IPC device or could not be served. + */ + if (instance >= ipc_pcie->nvec) + return IRQ_NONE; + + if (!test_bit(0, &ipc_pcie->suspend)) + ipc_imem_irq_process(ipc_pcie->imem, instance); + + return IRQ_HANDLED; +} + +void ipc_release_irq(struct iosm_pcie *ipc_pcie) +{ + struct pci_dev *pdev = ipc_pcie->pci; + + if (pdev->msi_enabled) { + while (--ipc_pcie->nvec >= 0) + free_irq(pdev->irq + ipc_pcie->nvec, ipc_pcie); + } + pci_free_irq_vectors(pdev); +} + +int ipc_acquire_irq(struct iosm_pcie *ipc_pcie) +{ + struct pci_dev *pdev = ipc_pcie->pci; + int i, rc = -1; + + ipc_pcie->nvec = pci_alloc_irq_vectors(pdev, IPC_MSI_VECTORS, + IPC_MSI_VECTORS, PCI_IRQ_MSI); + + if (ipc_pcie->nvec < 0) + return ipc_pcie->nvec; + + if (!pdev->msi_enabled) + goto error; + + for (i = 0; i < ipc_pcie->nvec; ++i) { + rc = request_threaded_irq(pdev->irq + i, NULL, + ipc_msi_interrupt, 0, KBUILD_MODNAME, + ipc_pcie); + if (rc) { + dev_err(ipc_pcie->dev, "unable to grab IRQ, rc=%d", rc); + ipc_pcie->nvec = i; + ipc_release_irq(ipc_pcie); + goto error; + } + } + +error: + return rc; +} diff --git a/drivers/net/wwan/iosm/iosm_ipc_irq.h b/drivers/net/wwan/iosm/iosm_ipc_irq.h new file mode 100644 index 000000000000..ca270d396730 --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_irq.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2020 Intel Corporation. + */ + +#ifndef IOSM_IPC_IRQ_H +#define IOSM_IPC_IRQ_H + +#include "iosm_ipc_pcie.h" + +struct iosm_pcie; + +/** + * ipc_doorbell_fire - fire doorbell to CP + * @ipc_pcie: Pointer to iosm_pcie + * @irq_n: Doorbell type + * @data: ipc state + */ +void ipc_doorbell_fire(struct iosm_pcie *ipc_pcie, int irq_n, u32 data); + +/** + * ipc_release_irq - Release the IRQ handler. + * @ipc_pcie: Pointer to iosm_pcie struct + */ +void ipc_release_irq(struct iosm_pcie *ipc_pcie); + +/** + * ipc_acquire_irq - acquire IRQ & register IRQ handler. 
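+ * Allocates IPC_MSI_VECTORS MSI vector(s) with pci_alloc_irq_vectors()
+ * and installs a threaded handler for each of them.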
+ * @ipc_pcie: Pointer to iosm_pcie struct + * + * Return: 0 on success and -1 on failure + */ +int ipc_acquire_irq(struct iosm_pcie *ipc_pcie); + +#endif From patchwork Thu Jan 7 17:05:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kumar, M Chetan" X-Patchwork-Id: 358662 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E17CBC433DB for ; Thu, 7 Jan 2021 17:07:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AB9FB233CF for ; Thu, 7 Jan 2021 17:07:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728967AbhAGRHB (ORCPT ); Thu, 7 Jan 2021 12:07:01 -0500 Received: from mga14.intel.com ([192.55.52.115]:53928 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728705AbhAGRGv (ORCPT ); Thu, 7 Jan 2021 12:06:51 -0500 IronPort-SDR: N5ZaqAO0KcgIzCwGUVMk50oiyipUXEHqxTCEHgXbtxubKGtudx1uVx7MGBLqTF9ZHGB0kreBzp kgtmNPoHe4iw== X-IronPort-AV: E=McAfee;i="6000,8403,9857"; a="176680913" X-IronPort-AV: E=Sophos;i="5.79,329,1602572400"; d="scan'208";a="176680913" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Jan 2021 09:06:10 -0800 IronPort-SDR: rt7FEuUbiMWWbDx6uph6jJDO0VmYE6WCa4b5R0J2OGGCrpHPw2MwO01KlUitTh+OmN+w0mfXVi AEVSRzXb2pxg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,329,1602572400"; d="scan'208";a="422643835" Received: from bgsxx0031.iind.intel.com ([10.106.222.40]) by orsmga001.jf.intel.com with ESMTP; 07 Jan 2021 09:06:05 -0800 From: M Chetan Kumar To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com Subject: [PATCH 07/18] net: iosm: char device for FW flash & coredump Date: Thu, 7 Jan 2021 22:35:12 +0530 Message-Id: <20210107170523.26531-8-m.chetan.kumar@intel.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: <20210107170523.26531-1-m.chetan.kumar@intel.com> References: <20210107170523.26531-1-m.chetan.kumar@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org Implements a char device for flashing Modem FW image while Device is in boot rom phase and for collecting traces on modem crash. Signed-off-by: M Chetan Kumar --- drivers/net/wwan/iosm/iosm_ipc_sio.c | 266 +++++++++++++++++++++++++++++++++++ drivers/net/wwan/iosm/iosm_ipc_sio.h | 78 ++++++++++ 2 files changed, 344 insertions(+) create mode 100644 drivers/net/wwan/iosm/iosm_ipc_sio.c create mode 100644 drivers/net/wwan/iosm/iosm_ipc_sio.h diff --git a/drivers/net/wwan/iosm/iosm_ipc_sio.c b/drivers/net/wwan/iosm/iosm_ipc_sio.c new file mode 100644 index 000000000000..dfa7bff15c51 --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_sio.c @@ -0,0 +1,266 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Intel Corporation. 
+ */ + +#include +#include + +#include "iosm_ipc_sio.h" + +static struct mutex sio_floc; /* Mutex Lock for sio read */ +static struct mutex sio_floc_wr; /* Mutex Lock for sio write */ + +/* Open a shared memory device and initialize the head of the rx skbuf list. */ +static int ipc_sio_fop_open(struct inode *inode, struct file *filp) +{ + struct iosm_sio *ipc_sio = + container_of(filp->private_data, struct iosm_sio, misc); + + struct iosm_sio_open_file *sio_op = kzalloc(sizeof(*sio_op), + GFP_KERNEL); + if (!sio_op) + return -ENOMEM; + + if (test_and_set_bit(IS_OPEN, &ipc_sio->flag)) { + kfree(sio_op); + return -EBUSY; + } + + ipc_sio->channel = imem_sys_sio_open(ipc_sio->ipc_imem); + + if (!ipc_sio->channel) { + kfree(sio_op); + return -EIO; + } + + mutex_lock(&sio_floc); + + inode->i_private = sio_op; + ipc_sio->sio_fop = sio_op; + sio_op->sio_dev = ipc_sio; + + mutex_unlock(&sio_floc); + return 0; +} + +static int ipc_sio_fop_release(struct inode *inode, struct file *filp) +{ + struct iosm_sio_open_file *sio_op = inode->i_private; + + mutex_lock(&sio_floc); + + if (sio_op->sio_dev) { + clear_bit(IS_OPEN, &sio_op->sio_dev->flag); + imem_sys_sio_close(sio_op->sio_dev); + sio_op->sio_dev->sio_fop = NULL; + } + + kfree(sio_op); + + mutex_unlock(&sio_floc); + return 0; +} + +/* Copy the data from skbuff to the user buffer */ +static ssize_t ipc_sio_fop_read(struct file *filp, char __user *buf, + size_t size, loff_t *l) +{ + struct iosm_sio_open_file *sio_op = filp->f_inode->i_private; + struct sk_buff *skb = NULL; + struct iosm_sio *ipc_sio; + ssize_t read_byt; + int ret_err; + + if (!buf) { + ret_err = -EINVAL; + goto err; + } + + mutex_lock(&sio_floc); + + if (!sio_op->sio_dev) { + ret_err = -EIO; + goto err_free_lock; + } + + ipc_sio = sio_op->sio_dev; + + if (!(filp->f_flags & O_NONBLOCK)) + set_bit(IS_BLOCKING, &ipc_sio->flag); + + /* only log in blocking mode to reduce flooding the log */ + if (test_bit(IS_BLOCKING, &ipc_sio->flag)) + dev_dbg(ipc_sio->dev, "sio read chid[%d] size=%zu", + ipc_sio->channel->channel_id, size); + + /* First provide the pending skbuf to the user. */ + if (ipc_sio->rx_pending_buf) { + skb = ipc_sio->rx_pending_buf; + ipc_sio->rx_pending_buf = NULL; + } + + /* check skb is available in rx_list or wait for skb in case of + * blocking read + */ + while (!skb && !(skb = skb_dequeue(&ipc_sio->rx_list))) { + if (!test_bit(IS_BLOCKING, &ipc_sio->flag)) { + ret_err = -EAGAIN; + goto err_free_lock; + } + /* Suspend the user app and wait a certain time for data + * from CP. + */ + wait_for_completion_interruptible_timeout + (&ipc_sio->read_sem, msecs_to_jiffies(IPC_READ_TIMEOUT)); + + if (test_bit(IS_DEINIT, &ipc_sio->flag)) { + ret_err = -EPERM; + goto err_free_lock; + } + } + + read_byt = imem_sys_sio_read(ipc_sio, buf, size, skb); + mutex_unlock(&sio_floc); + return read_byt; + +err_free_lock: + mutex_unlock(&sio_floc); +err: + return ret_err; +} + +/* Route the user data to the shared memory layer. 
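+ * Writers are serialized by sio_floc_wr; a non-blocking writer gets
+ * -EAGAIN while a previous uplink transfer is still in flight.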
*/ +static ssize_t ipc_sio_fop_write(struct file *filp, const char __user *buf, + size_t size, loff_t *l) +{ + struct iosm_sio_open_file *sio_op = filp->f_inode->i_private; + struct iosm_sio *ipc_sio; + bool is_blocking; + ssize_t write_byt; + int ret_err; + + if (!buf) { + ret_err = -EINVAL; + goto err; + } + + mutex_lock(&sio_floc_wr); + if (!sio_op->sio_dev) { + ret_err = -EIO; + goto err_free_lock; + } + + ipc_sio = sio_op->sio_dev; + + is_blocking = !(filp->f_flags & O_NONBLOCK); + if (!is_blocking) { + if (test_bit(WRITE_IN_USE, &ipc_sio->flag)) { + ret_err = -EAGAIN; + goto err_free_lock; + } + } + + write_byt = imem_sys_sio_write(ipc_sio, buf, size, is_blocking); + mutex_unlock(&sio_floc_wr); + return write_byt; + +err_free_lock: + mutex_unlock(&sio_floc_wr); +err: + return ret_err; +} + +/* poll for applications using nonblocking I/O */ +static __poll_t ipc_sio_fop_poll(struct file *filp, poll_table *wait) +{ + struct iosm_sio *ipc_sio = + container_of(filp->private_data, struct iosm_sio, misc); + __poll_t mask = 0; + + /* Just registers wait_queue hook. This doesn't really wait. */ + poll_wait(filp, &ipc_sio->poll_inq, wait); + + /* Test the fill level of the skbuf rx queue. */ + if (!skb_queue_empty(&ipc_sio->rx_list) || ipc_sio->rx_pending_buf) + mask |= EPOLLIN | EPOLLRDNORM; /* readable */ + + if (!test_bit(WRITE_IN_USE, &ipc_sio->flag)) + mask |= EPOLLOUT | EPOLLWRNORM; /* writable */ + + return mask; +} + +struct iosm_sio *ipc_sio_init(struct iosm_imem *ipc_imem, const char *name) +{ + static const struct file_operations fops = { + .owner = THIS_MODULE, + .open = ipc_sio_fop_open, + .release = ipc_sio_fop_release, + .read = ipc_sio_fop_read, + .write = ipc_sio_fop_write, + .poll = ipc_sio_fop_poll, + }; + + struct iosm_sio *ipc_sio = kzalloc(sizeof(*ipc_sio), GFP_KERNEL); + + if (!ipc_sio) + return NULL; + + ipc_sio->dev = ipc_imem->dev; + ipc_sio->pcie = ipc_imem->pcie; + ipc_sio->ipc_imem = ipc_imem; + + mutex_init(&sio_floc); + mutex_init(&sio_floc_wr); + init_completion(&ipc_sio->read_sem); + + skb_queue_head_init(&ipc_sio->rx_list); + init_waitqueue_head(&ipc_sio->poll_inq); + + strncpy(ipc_sio->devname, name, sizeof(ipc_sio->devname) - 1); + ipc_sio->devname[IPC_SIO_DEVNAME_LEN - 1] = '\0'; + + ipc_sio->misc.minor = MISC_DYNAMIC_MINOR; + ipc_sio->misc.name = ipc_sio->devname; + ipc_sio->misc.fops = &fops; + ipc_sio->misc.mode = IPC_CHAR_DEVICE_DEFAULT_MODE; + + if (misc_register(&ipc_sio->misc) != 0) { + kfree(ipc_sio); + return NULL; + } + + return ipc_sio; +} + +void ipc_sio_deinit(struct iosm_sio *ipc_sio) +{ + if (ipc_sio) { + misc_deregister(&ipc_sio->misc); + + set_bit(IS_DEINIT, &ipc_sio->flag); + /* Applying memory barrier so that ipc_sio->flag is updated + * before being read + */ + smp_mb__after_atomic(); + if (test_bit(IS_BLOCKING, &ipc_sio->flag)) { + complete(&ipc_sio->read_sem); + complete(&ipc_sio->channel->ul_sem); + } + + mutex_lock(&sio_floc); + mutex_lock(&sio_floc_wr); + + ipc_pcie_kfree_skb(ipc_sio->pcie, ipc_sio->rx_pending_buf); + skb_queue_purge(&ipc_sio->rx_list); + + if (ipc_sio->sio_fop) + ipc_sio->sio_fop->sio_dev = NULL; + + mutex_unlock(&sio_floc_wr); + mutex_unlock(&sio_floc); + + kfree(ipc_sio); + } +} diff --git a/drivers/net/wwan/iosm/iosm_ipc_sio.h b/drivers/net/wwan/iosm/iosm_ipc_sio.h new file mode 100644 index 000000000000..fedc14d5f3c2 --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_sio.h @@ -0,0 +1,78 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2020 Intel Corporation. 
+ */ + +#ifndef IOSM_IPC_SIO_H +#define IOSM_IPC_SIO_H + +#include +#include + +#include "iosm_ipc_imem_ops.h" + +/* IPC char. device default mode. Only privileged user can access. */ +#define IPC_CHAR_DEVICE_DEFAULT_MODE 0600 + +#define IS_OPEN 0 +#define IS_BLOCKING 1 +#define WRITE_IN_USE 2 +#define IS_DEINIT 3 + +/** + * struct iosm_sio_open_file - Reference to struct iosm_sio + * @sio_dev: iosm_sio instance + */ +struct iosm_sio_open_file { + struct iosm_sio *sio_dev; +}; + +/** + * struct iosm_sio - State of the char driver layer. + * @misc: OS misc device component + * @sio_fop: reference to iosm_sio structure + * @ipc_imem: imem instance + * @dev: Pointer to device struct + * @pcie: PCIe component + * @rx_pending_buf: Storage for skb when its data has not been fully read + * @misc: OS misc device component + * @devname: Device name + * @channel: Channel instance + * @rx_list: Downlink skbuf list received from CP. + * @read_sem: Needed for the blocking read or downlink transfer + * @poll_inq: Read queues to support the poll system call + * @flag: Flags to monitor state of device + * @wmaxcommand: Max buffer size + */ +struct iosm_sio { + struct miscdevice misc; + struct iosm_sio_open_file *sio_fop; + struct iosm_imem *ipc_imem; + struct device *dev; + struct iosm_pcie *pcie; + struct sk_buff *rx_pending_buf; + char devname[IPC_SIO_DEVNAME_LEN]; + struct ipc_mem_channel *channel; + struct sk_buff_head rx_list; + struct completion read_sem; + wait_queue_head_t poll_inq; + unsigned long flag; + u16 wmaxcommand; +}; + +/** + * ipc_sio_init - Allocate and create a character device. + * @ipc_imem: Pointer to iosm_imem structure + * @name: Pointer to character device name + * + * Returns: Pointer to sio instance on success and NULL on failure + */ +struct iosm_sio *ipc_sio_init(struct iosm_imem *ipc_imem, const char *name); + +/** + * ipc_sio_deinit - Dellocate and free resource for a character device. 
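+ * Wakes up any blocked reader or writer, purges the pending downlink
+ * skbs and deregisters the misc device.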
+ * @ipc_sio: Pointer to the ipc sio data-struct + */ +void ipc_sio_deinit(struct iosm_sio *ipc_sio); + +#endif From patchwork Thu Jan 7 17:05:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kumar, M Chetan" X-Patchwork-Id: 358663 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0AA90C433DB for ; Thu, 7 Jan 2021 17:07:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BBD96233CF for ; Thu, 7 Jan 2021 17:06:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728950AbhAGRG6 (ORCPT ); Thu, 7 Jan 2021 12:06:58 -0500 Received: from mga14.intel.com ([192.55.52.115]:53942 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728842AbhAGRG5 (ORCPT ); Thu, 7 Jan 2021 12:06:57 -0500 IronPort-SDR: DnvhWi+X02reVUSzM3wOZLhcsvFxU0UQvmn906HwO41LRJWFXezCCaIW9sP3RexuTeyd77cDt7 5rzMMvE9uMdA== X-IronPort-AV: E=McAfee;i="6000,8403,9857"; a="176680942" X-IronPort-AV: E=Sophos;i="5.79,329,1602572400"; d="scan'208";a="176680942" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Jan 2021 09:06:16 -0800 IronPort-SDR: RRC1HiPMLzjBEL+F5YAeeyvNo7d36dV3zWfN76XQ4MwUyp4cVX4e0QU9Ali0Qhsr7X4NY9dQvR 3YKDfnwpx30w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,329,1602572400"; d="scan'208";a="422643882" Received: from bgsxx0031.iind.intel.com ([10.106.222.40]) by orsmga001.jf.intel.com with ESMTP; 07 Jan 2021 09:06:14 -0800 From: M Chetan Kumar To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com Subject: [PATCH 09/18] net: iosm: bottom half Date: Thu, 7 Jan 2021 22:35:14 +0530 Message-Id: <20210107170523.26531-10-m.chetan.kumar@intel.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: <20210107170523.26531-1-m.chetan.kumar@intel.com> References: <20210107170523.26531-1-m.chetan.kumar@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org 1) Bottom half(tasklet) for IRQ and task processing. 2) Tasks are processed asynchronous and synchronously. Signed-off-by: M Chetan Kumar --- drivers/net/wwan/iosm/iosm_ipc_task_queue.c | 247 ++++++++++++++++++++++++++++ drivers/net/wwan/iosm/iosm_ipc_task_queue.h | 46 ++++++ 2 files changed, 293 insertions(+) create mode 100644 drivers/net/wwan/iosm/iosm_ipc_task_queue.c create mode 100644 drivers/net/wwan/iosm/iosm_ipc_task_queue.h diff --git a/drivers/net/wwan/iosm/iosm_ipc_task_queue.c b/drivers/net/wwan/iosm/iosm_ipc_task_queue.c new file mode 100644 index 000000000000..0820f7fdbd1f --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_task_queue.c @@ -0,0 +1,247 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Intel Corporation. + */ + +#include + +#include "iosm_ipc_task_queue.h" + +/* Number of available element for the input message queue of the IPC + * ipc_task. 
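+ * The queue is a spinlock-protected ring buffer; one slot is always
+ * kept free so that a full ring can be distinguished from an empty one.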
+ */ +#define IPC_THREAD_QUEUE_SIZE 256 + +/** + * struct ipc_task_queue_args - Struct for Task queue elements + * @instance: Instance pointer for function to be called in tasklet context + * @msg: Message argument for tasklet function. (optional, can be NULL) + * @completion: OS object used to wait for the tasklet function to finish for + * synchronous calls + * @func: Function to be called in tasklet (tl) context + * @arg: Generic integer argument for tasklet function (optional) + * @size: Message size argument for tasklet function (optional) + * @response: Return code of tasklet function for synchronous calls + * @is_copy: Is true if msg contains a pointer to a copy of the original msg + * for async. calls that needs to be freed once the tasklet returns + */ +struct ipc_task_queue_args { + void *instance; + void *msg; + struct completion *completion; + int (*func)(struct iosm_imem *ipc_imem, int arg, void *msg, + size_t size); + int arg; + size_t size; + int response; + u8 is_copy : 1; +}; + +/** + * struct ipc_task_queue - Struct for Task queue + * @dev: pointer to device structure + * @q_lock: Protect the message queue of the ipc ipc_task + * @args: Message queue of the IPC ipc_task + * @q_rpos: First queue element to process. + * @q_wpos: First free element of the input queue. + */ +struct ipc_task_queue { + struct device *dev; + spinlock_t q_lock; /* for atomic operation on queue */ + struct ipc_task_queue_args args[IPC_THREAD_QUEUE_SIZE]; + unsigned int q_rpos; + unsigned int q_wpos; +}; + +/* Actual tasklet function, will be called whenever tasklet is scheduled. + * Calls event handler involves callback for each element in the message queue + */ +static void ipc_task_queue_handler(unsigned long data) +{ + struct ipc_task_queue *ipc_task = (struct ipc_task_queue *)data; + unsigned int q_rpos = ipc_task->q_rpos; + + /* Loop over the input queue contents. */ + while (q_rpos != ipc_task->q_wpos) { + /* Get the current first queue element. */ + struct ipc_task_queue_args *args = &ipc_task->args[q_rpos]; + + /* Process the input message. */ + if (args->func) + args->response = args->func(args->instance, args->arg, + args->msg, args->size); + + /* Signal completion for synchronous calls */ + if (args->completion) + complete(args->completion); + + /* Free message if copy was allocated. */ + if (args->is_copy) + kfree(args->msg); + + /* Set invalid queue element. Technically + * spin_lock_irqsave is not required here as + * the array element has been processed already + * so we can assume that immediately after processing + * ipc_task element, queue will not rotate again to + * ipc_task same element within such short time. + */ + args->completion = NULL; + args->func = NULL; + args->msg = NULL; + args->size = 0; + args->is_copy = false; + + /* calculate the new read ptr and update the volatile read + * ptr + */ + q_rpos = (q_rpos + 1) % IPC_THREAD_QUEUE_SIZE; + ipc_task->q_rpos = q_rpos; + } +} + +/* Free memory alloc and trigger completions left in the queue during dealloc */ +static void ipc_task_queue_cleanup(struct ipc_task_queue *ipc_task) +{ + unsigned int q_rpos = ipc_task->q_rpos; + + while (q_rpos != ipc_task->q_wpos) { + struct ipc_task_queue_args *args = &ipc_task->args[q_rpos]; + + if (args->completion) + complete(args->completion); + + if (args->is_copy) + kfree(args->msg); + + q_rpos = (q_rpos + 1) % IPC_THREAD_QUEUE_SIZE; + ipc_task->q_rpos = q_rpos; + } +} + +/* Add a message to the queue and trigger the ipc_task. 
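+ * Returns 0 on success or -1 if the ring is full. For synchronous
+ * calls (wait == true) the caller sleeps until the tasklet has run
+ * func, and the function's return code is passed back.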
*/ +static int +ipc_task_queue_add_task(struct tasklet_struct *ipc_tasklet, + struct ipc_task_queue *ipc_task, + int arg, void *argmnt, + int (*func)(struct iosm_imem *ipc_imem, int arg, + void *msg, size_t size), + void *instance, size_t size, bool is_copy, bool wait) +{ + struct completion completion; + unsigned int pos, nextpos; + unsigned long flags; + int result = -1; + + init_completion(&completion); + + /* tasklet send may be called from both interrupt or thread + * context, therefore protect queue operation by spinlock + */ + spin_lock_irqsave(&ipc_task->q_lock, flags); + + pos = ipc_task->q_wpos; + nextpos = (pos + 1) % IPC_THREAD_QUEUE_SIZE; + + /* Get next queue position. */ + if (nextpos != ipc_task->q_rpos) { + /* Get the reference to the queue element and save the passed + * values. + */ + ipc_task->args[pos].arg = arg; + ipc_task->args[pos].msg = argmnt; + ipc_task->args[pos].func = func; + ipc_task->args[pos].instance = instance; + ipc_task->args[pos].size = size; + ipc_task->args[pos].is_copy = is_copy; + ipc_task->args[pos].completion = wait ? &completion : NULL; + ipc_task->args[pos].response = -1; + + /* apply write barrier so that ipc_task->q_rpos elements + * are updated before ipc_task->q_wpos is being updated. + */ + smp_wmb(); + + /* Update the status of the free queue space. */ + ipc_task->q_wpos = nextpos; + result = 0; + } + + spin_unlock_irqrestore(&ipc_task->q_lock, flags); + + if (result == 0) { + tasklet_schedule(ipc_tasklet); + + if (wait) { + wait_for_completion(&completion); + result = ipc_task->args[pos].response; + } + } else { + dev_err(ipc_task->dev, "queue is full"); + } + + return result; +} + +int ipc_task_queue_send_task(struct iosm_imem *imem, + int (*func)(struct iosm_imem *ipc_imem, int arg, + void *msg, size_t size), + int arg, void *msg, size_t size, bool wait) +{ + struct tasklet_struct *ipc_tasklet = imem->ipc_tasklet; + struct ipc_task_queue *ipc_task = imem->ipc_task; + bool is_copy = false; + void *copy = msg; + + if (size > 0) { + copy = kmemdup(msg, size, GFP_ATOMIC); + if (!copy) + return -ENOMEM; + + is_copy = true; + } + + if (ipc_task_queue_add_task(ipc_tasklet, ipc_task, arg, copy, func, + imem, size, is_copy, wait) < 0) { + dev_err(ipc_task->dev, + "add task failed for %ps %d, %p, %zu, %d", func, arg, + copy, size, is_copy); + if (is_copy) + kfree(copy); + return -1; + } + + return 0; +} + +struct ipc_task_queue *ipc_task_queue_init(struct tasklet_struct *ipc_tasklet, + struct device *dev) +{ + struct ipc_task_queue *ipc_task = kzalloc(sizeof(*ipc_task), + GFP_KERNEL); + + if (!ipc_task) + return NULL; + + ipc_task->dev = dev; + + /* Initialize the spinlock needed to protect the message queue of the + * ipc_task + */ + spin_lock_init(&ipc_task->q_lock); + + tasklet_init(ipc_tasklet, ipc_task_queue_handler, + (unsigned long)ipc_task); + + return ipc_task; +} + +void ipc_task_queue_deinit(struct ipc_task_queue *ipc_task) +{ + /* This will free/complete any outstanding messages, + * without calling the actual handler + */ + ipc_task_queue_cleanup(ipc_task); + + kfree(ipc_task); +} diff --git a/drivers/net/wwan/iosm/iosm_ipc_task_queue.h b/drivers/net/wwan/iosm/iosm_ipc_task_queue.h new file mode 100644 index 000000000000..5591213bcb6d --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_task_queue.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2020 Intel Corporation. 
+ */ + +#ifndef IOSM_IPC_TASK_QUEUE_H +#define IOSM_IPC_TASK_QUEUE_H + +#include + +#include "iosm_ipc_imem.h" + +/** + * ipc_task_queue_init - Allocate a tasklet + * @ipc_tasklet: Pointer to tasklet_struct + * @dev: Pointer to device structure + * + * Returns: Pointer to allocated ipc_task data-struct or NULL on failure. + */ +struct ipc_task_queue *ipc_task_queue_init(struct tasklet_struct *ipc_tasklet, + struct device *dev); + +/** + * ipc_task_queue_deinit - Free a tasklet, invalidating its pointer. + * @ipc_task: Pointer to ipc_task instance + */ +void ipc_task_queue_deinit(struct ipc_task_queue *ipc_task); + +/** + * ipc_task_queue_send_task - Synchronously/Asynchronously call a function in + * tasklet context. + * @imem: Pointer to iosm_imem struct + * @func: Function to be called in tasklet context + * @arg: Integer argument for func + * @msg: Message pointer argument for func + * @size: Size argument for func + * @wait: if true wait for result + * + * Returns: Result value returned by func or -1 if func could not be called. + */ +int ipc_task_queue_send_task(struct iosm_imem *imem, + int (*func)(struct iosm_imem *ipc_imem, int arg, + void *msg, size_t size), + int arg, void *msg, size_t size, bool wait); + +#endif From patchwork Thu Jan 7 17:05:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kumar, M Chetan" X-Patchwork-Id: 358661 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A9808C433E0 for ; Thu, 7 Jan 2021 17:07:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 65704233CF for ; Thu, 7 Jan 2021 17:07:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728983AbhAGRHI (ORCPT ); Thu, 7 Jan 2021 12:07:08 -0500 Received: from mga14.intel.com ([192.55.52.115]:53928 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728705AbhAGRHI (ORCPT ); Thu, 7 Jan 2021 12:07:08 -0500 IronPort-SDR: MwPafUVjQevnO+JCNosGsJAyXOCj75jTnowJ6UlPLw+3yP8UyvfPqge79SaH21AP+V/hLSf/RA va+Em+tik89g== X-IronPort-AV: E=McAfee;i="6000,8403,9857"; a="176680962" X-IronPort-AV: E=Sophos;i="5.79,329,1602572400"; d="scan'208";a="176680962" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Jan 2021 09:06:20 -0800 IronPort-SDR: V35eLDitCQPwOcT5gVjZI5eQAySHLL2h7mBIdzp4zeZcnZadR719YXa9vMCFYmqtZOkbbnCAFR NieygZ2SZEoQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.79,329,1602572400"; d="scan'208";a="422643907" Received: from bgsxx0031.iind.intel.com ([10.106.222.40]) by orsmga001.jf.intel.com with ESMTP; 07 Jan 2021 09:06:17 -0800 From: M Chetan Kumar To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com Subject: [PATCH 10/18] net: iosm: multiplex IP sessions Date: Thu, 7 Jan 2021 22:35:15 +0530 Message-Id: <20210107170523.26531-11-m.chetan.kumar@intel.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: 
<20210107170523.26531-1-m.chetan.kumar@intel.com> References: <20210107170523.26531-1-m.chetan.kumar@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org Establish IP session between host-device & session management. Signed-off-by: M Chetan Kumar --- drivers/net/wwan/iosm/iosm_ipc_mux.c | 458 +++++++++++++++++++++++++++++++++++ drivers/net/wwan/iosm/iosm_ipc_mux.h | 345 ++++++++++++++++++++++++++ 2 files changed, 803 insertions(+) create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.c create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.h diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux.c b/drivers/net/wwan/iosm/iosm_ipc_mux.c new file mode 100644 index 000000000000..fde467a50ab6 --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_mux.c @@ -0,0 +1,458 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Intel Corporation. + */ + +#include "iosm_ipc_mux_codec.h" + +/* At the begin of the runtime phase the IP MUX channel shall created. */ +static int mux_channel_create(struct iosm_mux *ipc_mux) +{ + int channel_id; + + channel_id = imem_channel_alloc(ipc_mux->imem, ipc_mux->instance_id, + IPC_CTYPE_WWAN); + + if (channel_id < 0) { + dev_err(ipc_mux->dev, + "allocation of the MUX channel id failed"); + ipc_mux->state = MUX_S_ERROR; + ipc_mux->event = MUX_E_NOT_APPLICABLE; + goto no_channel; + } + + /* Establish the MUX channel in blocking mode. */ + ipc_mux->channel = imem_channel_open(ipc_mux->imem, channel_id, + IPC_HP_NET_CHANNEL_INIT); + + if (!ipc_mux->channel) { + dev_err(ipc_mux->dev, "imem_channel_open failed"); + ipc_mux->state = MUX_S_ERROR; + ipc_mux->event = MUX_E_NOT_APPLICABLE; + return -ENODEV; /* MUX channel is not available. */ + } + + /* Define the MUX active state properties. */ + ipc_mux->state = MUX_S_ACTIVE; + ipc_mux->event = MUX_E_NO_ORDERS; + +no_channel: + return channel_id; +} + +/* Reset the session/if id state. */ +static void mux_session_free(struct iosm_mux *ipc_mux, int if_id) +{ + struct mux_session *if_entry; + + if_entry = &ipc_mux->session[if_id]; + /* Reset the session state. */ + if_entry->wwan = NULL; +} + +/* Create and send the session open command. */ +static struct mux_cmd_open_session_resp * +mux_session_open_send(struct iosm_mux *ipc_mux, int if_id) +{ + struct mux_cmd_open_session_resp *open_session_resp; + struct mux_acb *acb = &ipc_mux->acb; + union mux_cmd_param param; + + /* open_session commands to one ACB and start transmission. */ + param.open_session.flow_ctrl = 0; + param.open_session.reserved = 0; + param.open_session.ipv4v6_hints = 0; + param.open_session.reserved2 = 0; + param.open_session.dl_head_pad_len = IPC_MEM_DL_ETH_OFFSET; + + /* Finish and transfer ACB. The user thread is suspended. + * It is a blocking function call, until CP responds or timeout. + */ + acb->wanted_response = MUX_CMD_OPEN_SESSION_RESP; + if (mux_dl_acb_send_cmds(ipc_mux, MUX_CMD_OPEN_SESSION, if_id, 0, + ¶m, sizeof(param.open_session), true, + false) || + acb->got_response != MUX_CMD_OPEN_SESSION_RESP) { + dev_err(ipc_mux->dev, "if_id %d: OPEN_SESSION send failed", + if_id); + return NULL; + } + + open_session_resp = &ipc_mux->acb.got_param.open_session_resp; + if (open_session_resp->response != MUX_CMD_RESP_SUCCESS) { + dev_err(ipc_mux->dev, + "if_id %d,session open failed,response=%d", if_id, + (int)open_session_resp->response); + return NULL; + } + + return open_session_resp; +} + +/* Open the first IP session. 
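+ * Validates the requested interface id, sends the blocking
+ * MUX_CMD_OPEN_SESSION command to CP and, on success, initializes the
+ * per-session uplink accumulator and flow control state.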
*/ +static bool mux_session_open(struct iosm_mux *ipc_mux, + struct mux_session_open *session_open) +{ + struct mux_cmd_open_session_resp *open_session_resp; + int if_id; + + /* Search for a free session interface id. */ + if_id = session_open->if_id; + if (if_id < 0 || if_id >= ipc_mux->nr_sessions) { + dev_err(ipc_mux->dev, "invalid interface id=%d", if_id); + return false; + } + + /* Create and send the session open command. + * It is a blocking function call, until CP responds or timeout. + */ + open_session_resp = mux_session_open_send(ipc_mux, if_id); + if (!open_session_resp) { + mux_session_free(ipc_mux, if_id); + session_open->if_id = -1; + return false; + } + + /* Initialize the uplink skb accumulator. */ + skb_queue_head_init(&ipc_mux->session[if_id].ul_list); + + ipc_mux->session[if_id].dl_head_pad_len = IPC_MEM_DL_ETH_OFFSET; + ipc_mux->session[if_id].ul_head_pad_len = + open_session_resp->ul_head_pad_len; + ipc_mux->session[if_id].wwan = ipc_mux->wwan; + + /* Reset the flow ctrl stats of the session */ + ipc_mux->session[if_id].flow_ctl_en_cnt = 0; + ipc_mux->session[if_id].flow_ctl_dis_cnt = 0; + ipc_mux->session[if_id].ul_flow_credits = 0; + ipc_mux->session[if_id].net_tx_stop = false; + ipc_mux->session[if_id].flow_ctl_mask = 0; + + /* Save and return the assigned if id. */ + session_open->if_id = if_id; + + return true; +} + +/* Free pending session UL packet. */ +static void mux_session_reset(struct iosm_mux *ipc_mux, int if_id) +{ + /* Reset the session/if id state. */ + mux_session_free(ipc_mux, if_id); + + /* Empty the uplink skb accumulator. */ + skb_queue_purge(&ipc_mux->session[if_id].ul_list); +} + +static void mux_session_close(struct iosm_mux *ipc_mux, + struct mux_session_close *msg) +{ + int if_id; + + /* Copy the session interface id. */ + if_id = msg->if_id; + + if (if_id < 0 || if_id >= ipc_mux->nr_sessions) { + dev_err(ipc_mux->dev, "invalid session id %d", if_id); + return; + } + + /* Create and send the session close command. + * It is a blocking function call, until CP responds or timeout. + */ + if (mux_dl_acb_send_cmds(ipc_mux, MUX_CMD_CLOSE_SESSION, if_id, 0, NULL, + 0, true, false)) + dev_err(ipc_mux->dev, "if_id %d: CLOSE_SESSION send failed", + if_id); + + /* Reset the flow ctrl stats of the session */ + ipc_mux->session[if_id].flow_ctl_en_cnt = 0; + ipc_mux->session[if_id].flow_ctl_dis_cnt = 0; + ipc_mux->session[if_id].flow_ctl_mask = 0; + + mux_session_reset(ipc_mux, if_id); +} + +static void mux_channel_close(struct iosm_mux *ipc_mux, + struct mux_channel_close *channel_close_p) +{ + int i; + + /* Free pending session UL packet. */ + for (i = 0; i < ipc_mux->nr_sessions; i++) + if (ipc_mux->session[i].wwan) + mux_session_reset(ipc_mux, i); + + imem_channel_close(ipc_mux->imem, ipc_mux->channel_id); + + /* Reset the MUX object. */ + ipc_mux->state = MUX_S_INACTIVE; + ipc_mux->event = MUX_E_INACTIVE; +} + +/* CP has interrupted AP. If AP is in IP MUX mode, execute the pending ops. */ +static int mux_schedule(struct iosm_mux *ipc_mux, union mux_msg *msg) +{ + enum mux_event order; + bool success; + + if (!ipc_mux->initialized) + return -1; /* Shall be used as normal IP channel. */ + + order = msg->common.event; + + switch (ipc_mux->state) { + case MUX_S_INACTIVE: + if (order != MUX_E_MUX_SESSION_OPEN) + /* Wait for the request to open a session */ + return -1; + + if (ipc_mux->event == MUX_E_INACTIVE) + /* Establish the MUX channel and the new state. 
*/ + ipc_mux->channel_id = mux_channel_create(ipc_mux); + + if (ipc_mux->state != MUX_S_ACTIVE) + /* Missing the MUX channel. */ + return -1; + + /* Disable the TD update timer and open the first IP session. */ + imem_td_update_timer_suspend(ipc_mux->imem, true); + ipc_mux->event = MUX_E_MUX_SESSION_OPEN; + success = mux_session_open(ipc_mux, &msg->session_open); + + imem_td_update_timer_suspend(ipc_mux->imem, false); + return success ? ipc_mux->channel_id : -1; + + case MUX_S_ACTIVE: + switch (order) { + case MUX_E_MUX_SESSION_OPEN: + /* Disable the TD update timer and open a session */ + imem_td_update_timer_suspend(ipc_mux->imem, true); + ipc_mux->event = MUX_E_MUX_SESSION_OPEN; + success = mux_session_open(ipc_mux, &msg->session_open); + imem_td_update_timer_suspend(ipc_mux->imem, false); + return success ? ipc_mux->channel_id : -1; + + case MUX_E_MUX_SESSION_CLOSE: + /* Release an IP session. */ + ipc_mux->event = MUX_E_MUX_SESSION_CLOSE; + mux_session_close(ipc_mux, &msg->session_close); + return ipc_mux->channel_id; + + case MUX_E_MUX_CHANNEL_CLOSE: + /* Close the MUX channel pipes. */ + ipc_mux->event = MUX_E_MUX_CHANNEL_CLOSE; + mux_channel_close(ipc_mux, &msg->channel_close); + return ipc_mux->channel_id; + + default: + /* Invalid order. */ + return -1; + } + + default: + dev_err(ipc_mux->dev, + "unexpected MUX transition: state=%d, event=%d", + ipc_mux->state, ipc_mux->event); + return -1; + } +} + +struct iosm_mux *mux_init(struct ipc_mux_config *mux_cfg, + struct iosm_imem *imem) +{ + struct iosm_mux *ipc_mux = kzalloc(sizeof(*ipc_mux), GFP_KERNEL); + int i, ul_tds, ul_td_size; + struct mux_session *session; + struct sk_buff_head *free_list; + struct sk_buff *skb; + + if (!ipc_mux) + return NULL; + + ipc_mux->protocol = mux_cfg->protocol; + ipc_mux->ul_flow = mux_cfg->ul_flow; + ipc_mux->nr_sessions = mux_cfg->nr_sessions; + ipc_mux->instance_id = mux_cfg->instance_id; + ipc_mux->wwan_q_offset = 0; + + ipc_mux->pcie = imem->pcie; + ipc_mux->imem = imem; + ipc_mux->ipc_protocol = imem->ipc_protocol; + ipc_mux->dev = imem->dev; + ipc_mux->wwan = imem->wwan; + + ipc_mux->session = + kcalloc(ipc_mux->nr_sessions, sizeof(*session), GFP_KERNEL); + + if (!ipc_mux->session) { + kfree(ipc_mux); + return NULL; + } + + /* Get the reference to the id list. */ + session = ipc_mux->session; + + /* Get the reference to the UL ADB list. */ + free_list = &ipc_mux->ul_adb.free_list; + + /* Initialize the list with free ADB. */ + skb_queue_head_init(free_list); + + ul_td_size = IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE; + + ul_tds = IPC_MEM_MAX_TDS_MUX_LITE_UL; + + ipc_mux->ul_adb.dest_skb = NULL; + + ipc_mux->initialized = true; + ipc_mux->adb_prep_ongoing = false; + ipc_mux->size_needed = 0; + ipc_mux->ul_data_pend_bytes = 0; + ipc_mux->state = MUX_S_INACTIVE; + ipc_mux->ev_mux_net_transmit_pending = false; + ipc_mux->tx_transaction_id = 0; + ipc_mux->rr_next_session = 0; + ipc_mux->event = MUX_E_INACTIVE; + ipc_mux->channel_id = -1; + ipc_mux->channel = NULL; + + /* Allocate the list of UL ADB. */ + for (i = 0; i < ul_tds; i++) { + dma_addr_t mapping; + + skb = ipc_pcie_alloc_skb(ipc_mux->pcie, ul_td_size, GFP_ATOMIC, + &mapping, DMA_TO_DEVICE, 0); + if (!skb) { + ipc_mux_deinit(ipc_mux); + return NULL; + } + /* Extend the UL ADB list. */ + skb_queue_tail(free_list, skb); + } + + return ipc_mux; +} + +/* Informs the network stack to restart transmission for all opened session if + * Flow Control is not ON for that session. 
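 + * The restart is driven by ipc_mux_check_n_restart_tx() below, once the + * pending UL bytes (ul_data_pend_bytes) drop under + * IPC_MEM_MUX_UL_FLOWCTRL_LOW_B.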
+ */ +static void mux_restart_tx_for_all_sessions(struct iosm_mux *ipc_mux) +{ + struct mux_session *session; + int idx; + + for (idx = 0; idx < ipc_mux->nr_sessions; idx++) { + session = &ipc_mux->session[idx]; + + if (!session->wwan) + continue; + + /* If flow control of the session is OFF and if there was tx + * stop then restart. Inform the network interface to restart + * sending data. + */ + if (session->flow_ctl_mask == 0) { + session->net_tx_stop = false; + mux_netif_tx_flowctrl(session, idx, false); + } + } +} + +/* Informs the network stack to stop sending further pkt for all opened + * sessions + */ +static void mux_stop_netif_for_all_sessions(struct iosm_mux *ipc_mux) +{ + struct mux_session *session; + int idx; + + for (idx = 0; idx < ipc_mux->nr_sessions; idx++) { + session = &ipc_mux->session[idx]; + + if (!session->wwan) + continue; + + mux_netif_tx_flowctrl(session, session->if_id, true); + } +} + +void ipc_mux_check_n_restart_tx(struct iosm_mux *ipc_mux) +{ + if (ipc_mux->ul_flow == MUX_UL) { + int low_thresh = IPC_MEM_MUX_UL_FLOWCTRL_LOW_B; + + if (ipc_mux->ul_data_pend_bytes < low_thresh) + mux_restart_tx_for_all_sessions(ipc_mux); + } +} + +int ipc_mux_get_max_sessions(struct iosm_mux *ipc_mux) +{ + return ipc_mux ? ipc_mux->nr_sessions : -1; +} + +enum ipc_mux_protocol ipc_mux_get_active_protocol(struct iosm_mux *ipc_mux) +{ + return ipc_mux ? ipc_mux->protocol : MUX_UNKNOWN; +} + +int ipc_mux_open_session(struct iosm_mux *ipc_mux, int session_nr) +{ + struct mux_session_open *session_open; + union mux_msg mux_msg; + + session_open = &mux_msg.session_open; + session_open->event = MUX_E_MUX_SESSION_OPEN; + + session_open->if_id = session_nr; + ipc_mux->session[session_nr].flags |= IPC_MEM_WWAN_MUX; + return mux_schedule(ipc_mux, &mux_msg); +} + +int ipc_mux_close_session(struct iosm_mux *ipc_mux, int session_nr) +{ + struct mux_session_close *session_close; + union mux_msg mux_msg; + int ret_val; + + session_close = &mux_msg.session_close; + session_close->event = MUX_E_MUX_SESSION_CLOSE; + + session_close->if_id = session_nr; + ret_val = mux_schedule(ipc_mux, &mux_msg); + ipc_mux->session[session_nr].flags &= ~IPC_MEM_WWAN_MUX; + + return ret_val; +} + +void ipc_mux_deinit(struct iosm_mux *ipc_mux) +{ + struct mux_channel_close *channel_close; + struct sk_buff_head *free_list; + union mux_msg mux_msg; + struct sk_buff *skb; + + if (!ipc_mux->initialized) + return; + mux_stop_netif_for_all_sessions(ipc_mux); + + channel_close = &mux_msg.channel_close; + channel_close->event = MUX_E_MUX_CHANNEL_CLOSE; + mux_schedule(ipc_mux, &mux_msg); + + /* Empty the ADB free list. */ + free_list = &ipc_mux->ul_adb.free_list; + + /* Remove from the head of the downlink queue. */ + while ((skb = skb_dequeue(free_list))) + ipc_pcie_kfree_skb(ipc_mux->pcie, skb); + + if (ipc_mux->channel) { + ipc_mux->channel->ul_pipe.is_open = false; + ipc_mux->channel->dl_pipe.is_open = false; + } + + kfree(ipc_mux->session); + kfree(ipc_mux); +} diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux.h b/drivers/net/wwan/iosm/iosm_ipc_mux.h new file mode 100644 index 000000000000..047a29f7e4fa --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_mux.h @@ -0,0 +1,345 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2020 Intel Corporation. + */ + +#ifndef IOSM_IPC_MUX_H +#define IOSM_IPC_MUX_H + +#include "iosm_ipc_protocol.h" + +/* Size of the buffer for the IP MUX data buffer. 
*/ +#define IPC_MEM_MAX_DL_MUX_BUF_SIZE (16 * 1024) +#define IPC_MEM_MAX_UL_ADB_BUF_SIZE IPC_MEM_MAX_DL_MUX_BUF_SIZE + +/* Size of the buffer for the IP MUX Lite data buffer. */ +#define IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE (2 * 1024) + +/* TD counts for IP MUX Lite */ +#define IPC_MEM_MAX_TDS_MUX_LITE_UL 800 +#define IPC_MEM_MAX_TDS_MUX_LITE_DL 1200 + +/* open session request (AP->CP) */ +#define MUX_CMD_OPEN_SESSION 1 + +/* response to open session request (CP->AP) */ +#define MUX_CMD_OPEN_SESSION_RESP 2 + +/* close session request (AP->CP) */ +#define MUX_CMD_CLOSE_SESSION 3 + +/* response to close session request (CP->AP) */ +#define MUX_CMD_CLOSE_SESSION_RESP 4 + +/* Flow control command with mask of the flow per queue/flow. */ +#define MUX_LITE_CMD_FLOW_CTL 5 + +/* ACK the flow control command. Shall have the same Transaction ID as the + * matching FLOW_CTL command. + */ +#define MUX_LITE_CMD_FLOW_CTL_ACK 6 + +/* Command for report packet indicating link quality metrics. */ +#define MUX_LITE_CMD_LINK_STATUS_REPORT 7 + +/* Response to a report packet */ +#define MUX_LITE_CMD_LINK_STATUS_REPORT_RESP 8 + +/* Used to reset a command/response state. */ +#define MUX_CMD_INVALID 255 + +/* command response : command processed successfully */ +#define MUX_CMD_RESP_SUCCESS 0 + +/* MUX for vlan devices */ +#define IPC_MEM_WWAN_MUX BIT(0) + +/* Initiated actions to change the state of the MUX object. */ +enum mux_event { + MUX_E_INACTIVE, /* No initiated actions. */ + MUX_E_MUX_SESSION_OPEN, /* Create the MUX channel and a session. */ + MUX_E_MUX_SESSION_CLOSE, /* Release a session. */ + MUX_E_MUX_CHANNEL_CLOSE, /* Release the MUX channel. */ + MUX_E_NO_ORDERS, /* No MUX order. */ + MUX_E_NOT_APPLICABLE, /* Defect IP MUX. */ +}; + +/* MUX session open command. */ +struct mux_session_open { + enum mux_event event; + int if_id; +}; + +/* MUX session close command. */ +struct mux_session_close { + enum mux_event event; + int if_id; +}; + +/* MUX channel close command. */ +struct mux_channel_close { + enum mux_event event; +}; + +/* Default message type to find out the right message type. */ +struct mux_common { + enum mux_event event; +}; + +/* List of ops in MUX mode. */ +union mux_msg { + struct mux_session_open session_open; + struct mux_session_close session_close; + struct mux_channel_close channel_close; + struct mux_common common; +}; + +/* Parameter definition of the open session command. */ +struct mux_cmd_open_session { + u32 flow_ctrl : 1; /* 0: Flow control disabled (flow allowed). */ + /* 1: Flow control enabled (flow not allowed)*/ + u32 reserved : 7; /* Reserved. Set to zero. */ + u32 ipv4v6_hints : 1; /* 0: IPv4/IPv6 hints not supported.*/ + /* 1: IPv4/IPv6 hints supported*/ + u32 reserved2 : 23; /* Reserved. Set to zero. */ + u32 dl_head_pad_len; /* Maximum length supported */ + /* for DL head padding on a datagram. */ +}; + +/* Parameter definition of the open session response. */ +struct mux_cmd_open_session_resp { + u32 response; /* Response code */ + u32 flow_ctrl : 1; /* 0: Flow control disabled (flow allowed). */ + /* 1: Flow control enabled (flow not allowed) */ + u32 reserved : 7; /* Reserved. Set to zero. */ + u32 ipv4v6_hints : 1; /* 0: IPv4/IPv6 hints not supported */ + /* 1: IPv4/IPv6 hints supported */ + u32 reserved2 : 23; /* Reserved. Set to zero. 
*/ + u32 ul_head_pad_len; /* Actual length supported for */ + /* UL head padding on a datagram. */ +}; + +/* Parameter definition of the close session response code */ +struct mux_cmd_close_session_resp { + u32 response; +}; + +/* Parameter definition of the flow control command. */ +struct mux_cmd_flow_ctl { + u32 mask; /* indicating the desired flow control */ + /* state for various flows/queues */ +}; + +/* Parameter definition of the link status report code */ +struct mux_cmd_link_status_report { + u8 payload; +}; + +/* Parameter definition of the link status report response code. */ +struct mux_cmd_link_status_report_resp { + u32 response; +}; + +/** + * union mux_cmd_param - Union-definition of the command parameters. + * @open_session: Inband command for open session + * @open_session_resp: Inband command for open session response + * @close_session_resp: Inband command for close session response + * @flow_ctl: In-band flow control on the opened interfaces + * @link_status: In-band Link Status Report + * @link_status_resp: In-band command for link status report response + */ +union mux_cmd_param { + struct mux_cmd_open_session open_session; + struct mux_cmd_open_session_resp open_session_resp; + struct mux_cmd_close_session_resp close_session_resp; + struct mux_cmd_flow_ctl flow_ctl; + struct mux_cmd_link_status_report link_status; + struct mux_cmd_link_status_report_resp link_status_resp; +}; + +/* States of the MUX object. */ +enum mux_state { + MUX_S_INACTIVE, /* IP MUX is unused. */ + MUX_S_ACTIVE, /* IP MUX channel is available. */ + MUX_S_ERROR, /* Defect IP MUX. */ +}; + +/* Supported MUX protocols. */ +enum ipc_mux_protocol { + MUX_UNKNOWN, + MUX_LITE, +}; + +/* Supported UL data transfer methods. */ +enum ipc_mux_ul_flow { + MUX_UL_UNKNOWN, + MUX_UL, /* Normal UL data transfer */ + MUX_UL_ON_CREDITS, /* UL data transfer will be based on credits */ +}; + +/* List of the MUX session. */ +struct mux_session { + struct iosm_wwan *wwan; /* Network i/f used for communication */ + int if_id; /* i/f id for session open message. */ + u32 flags; + u32 ul_head_pad_len; /* Nr of bytes for UL head padding. */ + u32 dl_head_pad_len; /* Nr of bytes for DL head padding. */ + struct sk_buff_head ul_list; /* skb entries for an ADT. */ + u32 flow_ctl_mask; /* UL flow control */ + u32 flow_ctl_en_cnt; /* Flow control Enable cmd count */ + u32 flow_ctl_dis_cnt; /* Flow Control Disable cmd count */ + int ul_flow_credits; /* UL flow credits */ + u8 net_tx_stop : 1; + u8 flush : 1; /* flush net interface ? */ +}; + +/* State of a single UL data block. */ +struct mux_adb { + struct sk_buff *dest_skb; /* Current UL skb for the data block. */ + u8 *buf; /* ADB memory. */ + struct mux_adgh *adgh; /* ADGH pointer */ + struct sk_buff *qlth_skb; /* QLTH pointer */ + u32 *next_table_index; /* Pointer to next table index. */ + struct sk_buff_head free_list; /* List of alloc. ADB for the UL sess.*/ + int size; /* Size of the ADB memory. */ + u32 if_cnt; /* Statistic counter */ + u32 dg_cnt_total; + u32 payload_size; +}; + +/* Temporary ACB state. */ +struct mux_acb { + struct sk_buff *skb; /* Used UL skb. */ + int if_id; /* Session id. */ + u32 wanted_response; + u32 got_response; + u32 cmd; + union mux_cmd_param got_param; /* Received command/response parameter */ +}; + +/** + * struct iosm_mux - Structure of the data multiplexing over an IP channel. + * @dev: pointer to device structure + * @session: List of the MUX sessions. + * @channel: Reference to the IP MUX channel + * @pcie: Pointer to iosm_pcie struct + * @imem: Pointer to iosm_imem + * @wwan: Pointer to iosm_wwan + * @ipc_protocol: Pointer to iosm_protocol + * @channel_id: Channel ID for MUX + * @protocol: Type of the MUX protocol + * @ul_flow: UL Flow type + * @nr_sessions: Number of sessions + * @instance_id: Instance ID + * @state: States of the MUX object + * @event: Initiated actions to change the state of the MUX object + * @tx_transaction_id: Transaction id for the ACB command. + * @rr_next_session: Next session number for round robin. + * @ul_adb: State of the UL ADB/ADGH. + * @size_needed: Variable to store the size needed during ADB preparation + * @ul_data_pend_bytes: Pending UL data to be processed in bytes + * @acb: Temporary ACB state + * @wwan_q_offset: This will hold the offset of the given instance. + * Useful while passing or receiving packets from + * wwan/imem layer. + * @initialized: MUX object is initialized + * @ev_mux_net_transmit_pending: + * 0 means inform the IPC tasklet to pass the + * accumulated uplink ADB to CP. + * @adb_prep_ongoing: Flag for ADB preparation status + */ +struct iosm_mux { + struct device *dev; + struct mux_session *session; + struct ipc_mem_channel *channel; + struct iosm_pcie *pcie; + struct iosm_imem *imem; + struct iosm_wwan *wwan; + struct iosm_protocol *ipc_protocol; + int channel_id; + enum ipc_mux_protocol protocol; + enum ipc_mux_ul_flow ul_flow; + int nr_sessions; + int instance_id; + enum mux_state state; + enum mux_event event; + u32 tx_transaction_id; + int rr_next_session; + struct mux_adb ul_adb; + int size_needed; + long long ul_data_pend_bytes; + struct mux_acb acb; + int wwan_q_offset; + u8 initialized : 1; + u8 ev_mux_net_transmit_pending : 1; + u8 adb_prep_ongoing : 1; +}; + +/* MUX configuration structure */ +struct ipc_mux_config { + enum ipc_mux_protocol protocol; + enum ipc_mux_ul_flow ul_flow; + int nr_sessions; + int instance_id; +}; + +/** + * mux_init - Allocates and initializes the MUX instance + * @mux_cfg: Pointer to MUX configuration structure + * @ipc_imem: Pointer to imem data-struct + * + * Returns: Initialized mux pointer on success else NULL + */ +struct iosm_mux *mux_init(struct ipc_mux_config *mux_cfg, + struct iosm_imem *ipc_imem); + +/** + * ipc_mux_deinit - Deallocates MUX instance + * @ipc_mux: Pointer to the MUX instance. + */ +void ipc_mux_deinit(struct iosm_mux *ipc_mux); + +/** + * ipc_mux_check_n_restart_tx - Checks for pending UL data bytes and then + * it restarts the net interface tx queue if + * device has set flow control as off. + * @ipc_mux: Pointer to MUX data-struct + */ +void ipc_mux_check_n_restart_tx(struct iosm_mux *ipc_mux); + +/** + * ipc_mux_get_active_protocol - Returns the active MUX protocol type. + * @ipc_mux: Pointer to MUX data-struct + * + * Returns: enum of type ipc_mux_protocol + */ +enum ipc_mux_protocol ipc_mux_get_active_protocol(struct iosm_mux *ipc_mux); + +/** + * ipc_mux_open_session - Opens a MUX session for IP traffic. + * @ipc_mux: Pointer to MUX data-struct + * @session_nr: Interface ID or session number + * + * Returns: channel id on success, -1 on failure + */ +int ipc_mux_open_session(struct iosm_mux *ipc_mux, int session_nr); + +/** + * ipc_mux_close_session - Closes a MUX session. + * @ipc_mux: Pointer to MUX data-struct + * @session_nr: Interface ID or session number + * + * Returns: channel id on success, -1 on failure + */ +int ipc_mux_close_session(struct iosm_mux *ipc_mux, int session_nr); + +/** + * ipc_mux_get_max_sessions - Returns the maximum sessions supported on the + * provided MUX instance. + * @ipc_mux: Pointer to MUX data-struct + * + * Returns: Number of sessions supported on success and -1 on failure + */ +int ipc_mux_get_max_sessions(struct iosm_mux *ipc_mux); +#endif From patchwork Thu Jan 7 17:05:17 2021 X-Patchwork-Submitter: "Kumar, M Chetan" X-Patchwork-Id: 358660 From: M Chetan Kumar To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com Subject: [PATCH 12/18] net: iosm: power management Date: Thu, 7 Jan 2021 22:35:17 +0530 Message-Id: <20210107170523.26531-13-m.chetan.kumar@intel.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: <20210107170523.26531-1-m.chetan.kumar@intel.com> References: <20210107170523.26531-1-m.chetan.kumar@intel.com> List-ID: X-Mailing-List: linux-wireless@vger.kernel.org Implements a state machine to handle host & device sleep.
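A minimal sketch of the intended suspend-side call order follows. The wrapper name and error codes are illustrative only and not part of this patch; the ipc_pm_* calls are the ones introduced here:

    static int example_enter_host_sleep(struct iosm_pm *ipc_pm)
    {
            /* Arm the completion before negotiating, so a racing
             * device wake-up is not lost.
             */
            ipc_pm_host_slp_reinit_dev_active_completion(ipc_pm);

            /* Block up to IPC_PM_ACTIVE_TIMEOUT_MS until the device
             * sleep state machine reports ACTIVE.
             */
            if (!ipc_pm_wait_for_device_active(ipc_pm))
                    return -ETIMEDOUT;

            /* IPC_MEM_HOST_PM_ACTIVE -> IPC_MEM_HOST_PM_SLEEP_WAIT_D3 */
            if (!ipc_pm_prepare_host_sleep(ipc_pm))
                    return -EPERM;

            return 0;
    }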
Signed-off-by: M Chetan Kumar --- drivers/net/wwan/iosm/iosm_ipc_pm.c | 326 ++++++++++++++++++++++++++++++++++++ drivers/net/wwan/iosm/iosm_ipc_pm.h | 228 +++++++++++++++++++++++++ 2 files changed, 554 insertions(+) create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.c create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.h diff --git a/drivers/net/wwan/iosm/iosm_ipc_pm.c b/drivers/net/wwan/iosm/iosm_ipc_pm.c new file mode 100644 index 000000000000..3c6028b9d96e --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_pm.c @@ -0,0 +1,326 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Intel Corporation. + */ + +#include "iosm_ipc_protocol.h" +#include "iosm_ipc_task_queue.h" + +/* Timeout value in MS for the PM to wait for device to reach active state */ +#define IPC_PM_ACTIVE_TIMEOUT_MS (500) + +/* Note that here "active" has the value 1, as compared to the enums + * ipc_mem_host_pm_state or ipc_mem_dev_pm_state, where "active" is 0 + */ +#define IPC_PM_SLEEP (0) +#define IPC_PM_ACTIVE (1) + +void ipc_pm_signal_hpda_doorbell(struct iosm_pm *ipc_pm, u32 identifier, + bool host_slp_check) +{ + if (host_slp_check && ipc_pm->host_pm_state != IPC_MEM_HOST_PM_ACTIVE && + ipc_pm->host_pm_state != IPC_MEM_HOST_PM_ACTIVE_WAIT) { + ipc_pm->pending_hpda_update = true; + dev_dbg(ipc_pm->dev, + "Pending HPDA update set. Host PM_State: %d identifier:%d", + ipc_pm->host_pm_state, identifier); + return; + } + + if (!ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_IRQ, true)) { + ipc_pm->pending_hpda_update = true; + dev_dbg(ipc_pm->dev, "Pending HPDA update set. identifier:%d", + identifier); + return; + } + ipc_pm->pending_hpda_update = false; + + /* Trigger the irq towards CP */ + ipc_cp_irq_hpda_update(ipc_pm->pcie, identifier); + + ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_IRQ, false); +} + +/* Wake up the device if it is in low power mode. */ +static bool ipc_pm_link_activate(struct iosm_pm *ipc_pm) +{ + if (ipc_pm->cp_state == IPC_MEM_DEV_PM_ACTIVE) + return true; + + if (ipc_pm->cp_state == IPC_MEM_DEV_PM_SLEEP) { + if (ipc_pm->ap_state == IPC_MEM_DEV_PM_SLEEP) { + /* Wake up the device. */ + ipc_cp_irq_sleep_control(ipc_pm->pcie, + IPC_MEM_DEV_PM_WAKEUP); + ipc_pm->ap_state = IPC_MEM_DEV_PM_ACTIVE_WAIT; + + goto not_active; + } + + if (ipc_pm->ap_state == IPC_MEM_DEV_PM_ACTIVE_WAIT) + goto not_active; + + return true; + } + +not_active: + /* link is not ready */ + return false; +} + +void ipc_pm_host_slp_reinit_dev_active_completion(struct iosm_pm *ipc_pm) +{ + atomic_set(&ipc_pm->host_sleep_pend, 1); + + reinit_completion(&ipc_pm->host_sleep_complete); +} + +bool ipc_pm_wait_for_device_active(struct iosm_pm *ipc_pm) +{ + bool ret_val = false; + + if (ipc_pm->ap_state != IPC_MEM_DEV_PM_ACTIVE) + + /* Wait for IPC_PM_ACTIVE_TIMEOUT_MS for Device sleep state + * machine to enter ACTIVE state. + */ + if (!wait_for_completion_interruptible_timeout + (&ipc_pm->host_sleep_complete, + msecs_to_jiffies(IPC_PM_ACTIVE_TIMEOUT_MS))) { + dev_err(ipc_pm->dev, + "PM timeout. Expected State:%d. Actual: %d", + IPC_MEM_DEV_PM_ACTIVE, ipc_pm->ap_state); + goto active_timeout; + } + + ret_val = true; +active_timeout: + /* Reset the atomic variable in any case as device sleep + * state machine change is no longer of interest. 
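 + * (The pending flag may already have been consumed by ipc_pm_on_link_wake(), + * which uses atomic_cmpxchg() to complete the waiter exactly once.)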
+ */ + atomic_set(&ipc_pm->host_sleep_pend, 0); + + return ret_val; +} + +static void ipc_pm_on_link_sleep(struct iosm_pm *ipc_pm) +{ + /* pending sleep ack and all conditions are cleared + * -> signal SLEEP__ACK to CP + */ + ipc_pm->cp_state = IPC_MEM_DEV_PM_SLEEP; + ipc_pm->ap_state = IPC_MEM_DEV_PM_SLEEP; + + ipc_cp_irq_sleep_control(ipc_pm->pcie, IPC_MEM_DEV_PM_SLEEP); +} + +static void ipc_pm_on_link_wake(struct iosm_pm *ipc_pm, bool ack) +{ + ipc_pm->ap_state = IPC_MEM_DEV_PM_ACTIVE; + + if (ack) { + ipc_pm->cp_state = IPC_MEM_DEV_PM_ACTIVE; + + ipc_cp_irq_sleep_control(ipc_pm->pcie, IPC_MEM_DEV_PM_ACTIVE); + + /* check the consume state !!! */ + if (atomic_cmpxchg(&ipc_pm->host_sleep_pend, 1, 0)) + complete(&ipc_pm->host_sleep_complete); + } + + /* Check for pending HPDA update. + * Pending HP update could be because of sending message was + * put on hold due to Device sleep state or due to TD update + * which could be because of Device Sleep and Host Sleep + * states. + */ + if (ipc_pm->pending_hpda_update && + ipc_pm->host_pm_state == IPC_MEM_HOST_PM_ACTIVE) + ipc_pm_signal_hpda_doorbell(ipc_pm, IPC_HP_PM_TRIGGER, true); +} + +bool ipc_pm_trigger(struct iosm_pm *ipc_pm, enum ipc_pm_unit unit, bool active) +{ + union ipc_pm_cond old_cond; + union ipc_pm_cond new_cond; + bool link_active; + + /* Save the current D3 state. */ + new_cond = ipc_pm->pm_cond; + old_cond = ipc_pm->pm_cond; + + /* Calculate the power state only in the runtime phase. */ + switch (unit) { + case IPC_PM_UNIT_IRQ: /* CP irq */ + new_cond.irq = active; + break; + + case IPC_PM_UNIT_LINK: /* Device link state. */ + new_cond.link = active; + break; + + case IPC_PM_UNIT_HS: /* Host sleep trigger requires Link. */ + new_cond.hs = active; + break; + + default: + break; + } + + /* Something changed ? */ + if (old_cond.raw == new_cond.raw) { + /* Stay in the current PM state. */ + link_active = old_cond.link == IPC_PM_ACTIVE; + goto ret; + } + + ipc_pm->pm_cond = new_cond; + + if (new_cond.link) + ipc_pm_on_link_wake(ipc_pm, unit == IPC_PM_UNIT_LINK); + else if (unit == IPC_PM_UNIT_LINK) + ipc_pm_on_link_sleep(ipc_pm); + + if (old_cond.link == IPC_PM_SLEEP && new_cond.raw != 0) { + link_active = ipc_pm_link_activate(ipc_pm); + goto ret; + } + + link_active = old_cond.link == IPC_PM_ACTIVE; + +ret: + return link_active; +} + +bool ipc_pm_prepare_host_sleep(struct iosm_pm *ipc_pm) +{ + /* suspend not allowed if host_pm_state is not IPC_MEM_HOST_PM_ACTIVE */ + if (ipc_pm->host_pm_state != IPC_MEM_HOST_PM_ACTIVE) { + dev_err(ipc_pm->dev, "host_pm_state=%d\tExpected to be: %d", + ipc_pm->host_pm_state, IPC_MEM_HOST_PM_ACTIVE); + return false; + } + + ipc_pm->host_pm_state = IPC_MEM_HOST_PM_SLEEP_WAIT_D3; + + return true; +} + +bool ipc_pm_prepare_host_active(struct iosm_pm *ipc_pm) +{ + if (ipc_pm->host_pm_state != IPC_MEM_HOST_PM_SLEEP) { + dev_err(ipc_pm->dev, "host_pm_state=%d\tExpected to be: %d", + ipc_pm->host_pm_state, IPC_MEM_HOST_PM_SLEEP); + return false; + } + + /* Sending Sleep Exit message to CP. 
Update the state */ + ipc_pm->host_pm_state = IPC_MEM_HOST_PM_ACTIVE_WAIT; + + return true; +} + +void ipc_pm_set_s2idle_sleep(struct iosm_pm *ipc_pm, bool sleep) +{ + if (sleep) { + ipc_pm->ap_state = IPC_MEM_DEV_PM_SLEEP; + ipc_pm->cp_state = IPC_MEM_DEV_PM_SLEEP; + ipc_pm->device_sleep_notification = IPC_MEM_DEV_PM_SLEEP; + } else { + ipc_pm->ap_state = IPC_MEM_DEV_PM_ACTIVE; + ipc_pm->cp_state = IPC_MEM_DEV_PM_ACTIVE; + ipc_pm->device_sleep_notification = IPC_MEM_DEV_PM_ACTIVE; + ipc_pm->pm_cond.link = IPC_PM_ACTIVE; + } +} + +bool ipc_pm_dev_slp_notification(struct iosm_pm *ipc_pm, u32 cp_pm_req) +{ + if (cp_pm_req == ipc_pm->device_sleep_notification) + return false; + + ipc_pm->device_sleep_notification = cp_pm_req; + + /* Evaluate the PM request. */ + switch (ipc_pm->cp_state) { + case IPC_MEM_DEV_PM_ACTIVE: + switch (cp_pm_req) { + case IPC_MEM_DEV_PM_ACTIVE: + break; + + case IPC_MEM_DEV_PM_SLEEP: + /* Inform the PM that the device link can go down. */ + ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_LINK, false); + return true; + + default: + dev_err(ipc_pm->dev, + "loc-pm=%d active: confused req-pm=%d", + ipc_pm->cp_state, cp_pm_req); + break; + } + break; + + case IPC_MEM_DEV_PM_SLEEP: + switch (cp_pm_req) { + case IPC_MEM_DEV_PM_ACTIVE: + /* Inform the PM that the device link is active. */ + ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_LINK, true); + break; + + case IPC_MEM_DEV_PM_SLEEP: + break; + + default: + dev_err(ipc_pm->dev, + "loc-pm=%d sleep: confused req-pm=%d", + ipc_pm->cp_state, cp_pm_req); + break; + } + break; + + default: + dev_err(ipc_pm->dev, "confused loc-pm=%d, req-pm=%d", + ipc_pm->cp_state, cp_pm_req); + break; + } + + return false; +} + +struct iosm_pm *ipc_pm_init(struct iosm_imem *ipc_imem) +{ + struct iosm_pm *ipc_pm = kzalloc(sizeof(*ipc_pm), GFP_KERNEL); + + if (!ipc_pm) + return NULL; + + ipc_pm->pcie = ipc_imem->pcie; + ipc_pm->dev = ipc_imem->dev; + + ipc_pm->pm_cond.irq = IPC_PM_SLEEP; + ipc_pm->pm_cond.hs = IPC_PM_SLEEP; + ipc_pm->pm_cond.link = IPC_PM_ACTIVE; + + ipc_pm->cp_state = IPC_MEM_DEV_PM_ACTIVE; + ipc_pm->ap_state = IPC_MEM_DEV_PM_ACTIVE; + ipc_pm->host_pm_state = IPC_MEM_HOST_PM_ACTIVE; + + ipc_pm->ipc_tasklet = ipc_imem->ipc_tasklet; + ipc_pm->ipc_task = ipc_imem->ipc_task; + + /* Create generic wait-for-completion handler for Host Sleep + * and device sleep coordination. + */ + init_completion(&ipc_pm->host_sleep_complete); + + atomic_set(&ipc_pm->host_sleep_pend, 0); + + return ipc_pm; +} + +void ipc_pm_deinit(struct iosm_pm *ipc_pm) +{ + complete(&ipc_pm->host_sleep_complete); + kfree(ipc_pm); +} diff --git a/drivers/net/wwan/iosm/iosm_ipc_pm.h b/drivers/net/wwan/iosm/iosm_ipc_pm.h new file mode 100644 index 000000000000..621e0b9d6201 --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_pm.h @@ -0,0 +1,228 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2020 Intel Corporation. + */ + +#ifndef IOSM_IPC_PM_H +#define IOSM_IPC_PM_H + +#include + +/* Trigger the doorbell interrupt on cp to change the PM sleep/active status */ +#define ipc_cp_irq_sleep_control(ipc_pcie, data) \ + ipc_doorbell_fire(ipc_pcie, IPC_DOORBELL_IRQ_SLEEP, data) + +/* Trigger the doorbell interrupt on CP to do hpda update */ +#define ipc_cp_irq_hpda_update(ipc_pcie, data) \ + ipc_doorbell_fire(ipc_pcie, IPC_DOORBELL_IRQ_HPDA, 0xFF & (data)) + +/** + * union ipc_pm_cond - Conditions for D3 and the sleep message to CP. + * @raw: raw/combined value for faster check + * @irq: IRQ towards CP + * @hs: Host Sleep + * @link: Device link state. 
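 + * + * The @raw overlay aliases these bits so that ipc_pm_trigger() can detect + * a condition change with a single old_cond.raw == new_cond.raw compare.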
+ */ +union ipc_pm_cond { + unsigned int raw; + + struct { + unsigned int irq : 1; + unsigned int hs : 1; + unsigned int link : 1; + }; +}; + +/** + * enum ipc_mem_host_pm_state - Possible states of the HOST SLEEP finite state + * machine. + * @IPC_MEM_HOST_PM_ACTIVE: Host is active + * @IPC_MEM_HOST_PM_ACTIVE_WAIT: Intermediate state before going to + * active + * @IPC_MEM_HOST_PM_SLEEP_WAIT_IDLE: Intermediate state to wait for idle + * before going into sleep + * @IPC_MEM_HOST_PM_SLEEP_WAIT_D3: Intermediate state to wait for D3 + * before going to sleep + * @IPC_MEM_HOST_PM_SLEEP: after this state the interface is not + * accessible; the host is in suspend-to-RAM + * @IPC_MEM_HOST_PM_SLEEP_WAIT_EXIT_SLEEP: Intermediate state before exiting + * sleep + */ +enum ipc_mem_host_pm_state { + IPC_MEM_HOST_PM_ACTIVE, + IPC_MEM_HOST_PM_ACTIVE_WAIT, + IPC_MEM_HOST_PM_SLEEP_WAIT_IDLE, + IPC_MEM_HOST_PM_SLEEP_WAIT_D3, + IPC_MEM_HOST_PM_SLEEP, + IPC_MEM_HOST_PM_SLEEP_WAIT_EXIT_SLEEP, +}; + +/** + * enum ipc_mem_dev_pm_state - Possible states of the DEVICE SLEEP finite state + * machine. + * @IPC_MEM_DEV_PM_ACTIVE: The initial power management state, + * conveyed via IRQ(struct ipc_mem_device_info: + * device_sleep_notification) + * and DOORBELL-IRQ-HPDA(data) values. + * @IPC_MEM_DEV_PM_SLEEP: PM state for sleep. + * @IPC_MEM_DEV_PM_WAKEUP: DOORBELL-IRQ-DEVICE_WAKE(data). + * @IPC_MEM_DEV_PM_HOST_SLEEP: DOORBELL-IRQ-HOST_SLEEP(data). + * @IPC_MEM_DEV_PM_ACTIVE_WAIT: Local intermediate states. + * @IPC_MEM_DEV_PM_FORCE_SLEEP: DOORBELL-IRQ-FORCE_SLEEP. + * @IPC_MEM_DEV_PM_FORCE_ACTIVE: DOORBELL-IRQ-FORCE_ACTIVE. + */ +enum ipc_mem_dev_pm_state { + IPC_MEM_DEV_PM_ACTIVE, + IPC_MEM_DEV_PM_SLEEP, + IPC_MEM_DEV_PM_WAKEUP, + IPC_MEM_DEV_PM_HOST_SLEEP, + IPC_MEM_DEV_PM_ACTIVE_WAIT, + IPC_MEM_DEV_PM_FORCE_SLEEP = 7, + IPC_MEM_DEV_PM_FORCE_ACTIVE, +}; + +/** + * struct iosm_pm - Power management instance + * @pcie: Pointer to iosm_pcie structure + * @dev: Pointer to device structure + * @ipc_tasklet: Tasklet instance + * @ipc_task: Tasklet for scheduling a wakeup in task context + * @host_pm_state: PM states for host + * @host_sleep_pend: Variable to indicate Host Sleep Pending + * @host_sleep_complete: Generic wait-for-completion used in + * case of Host Sleep + * @pm_cond: Conditions for power management + * @ap_state: Current power management state, the + * initial state is IPC_MEM_DEV_PM_ACTIVE, i.e. 0. + * @cp_state: PM State of CP + * @device_sleep_notification: last handled device sleep notification + * @pending_hpda_update: is a HPDA update pending? + */ +struct iosm_pm { + struct iosm_pcie *pcie; + struct device *dev; + struct tasklet_struct *ipc_tasklet; + struct ipc_task_queue *ipc_task; + enum ipc_mem_host_pm_state host_pm_state; + atomic_t host_sleep_pend; + struct completion host_sleep_complete; + union ipc_pm_cond pm_cond; + enum ipc_mem_dev_pm_state ap_state; + enum ipc_mem_dev_pm_state cp_state; + u32 device_sleep_notification; + u8 pending_hpda_update : 1; +}; + +/** + * enum ipc_pm_unit - Power management units. + * @IPC_PM_UNIT_IRQ: IRQ towards CP + * @IPC_PM_UNIT_HS: Host Sleep for converged protocol + * @IPC_PM_UNIT_LINK: Link state controlled by CP. + */ +enum ipc_pm_unit { + IPC_PM_UNIT_IRQ, + IPC_PM_UNIT_HS, + IPC_PM_UNIT_LINK, +}; + +/** + * ipc_pm_init - Allocate power management component + * @ipc_imem: Pointer to iosm_imem structure + * + * Returns: pointer to allocated PM component or NULL on failure. + */ +struct iosm_pm *ipc_pm_init(struct iosm_imem *ipc_imem); + +/** + * ipc_pm_deinit - Free power management component, invalidating its pointer. + * @ipc_pm: Pointer to pm component. + */ +void ipc_pm_deinit(struct iosm_pm *ipc_pm); + +/** + * ipc_pm_dev_slp_notification - Handle a sleep notification message from the + * device. It can be called from interrupt + * context and also handles Host Sleep requests + * if the Host Sleep protocol is register based. + * @ipc_pm: Pointer to power management component + * @sleep_notification: Actual notification from device + * + * Returns: true if dev sleep state has to be checked, false otherwise. + */ +bool ipc_pm_dev_slp_notification(struct iosm_pm *ipc_pm, + u32 sleep_notification); + +/** + * ipc_pm_set_s2idle_sleep - Set PM variables to sleep/active + * @ipc_pm: Pointer to power management component + * @sleep: true to enter sleep/false to exit sleep + */ +void ipc_pm_set_s2idle_sleep(struct iosm_pm *ipc_pm, bool sleep); + +/** + * ipc_pm_prepare_host_sleep - Prepare the PM for sleep by entering + * IPC_MEM_HOST_PM_SLEEP_WAIT_D3 state. + * @ipc_pm: Pointer to power management component + * + * Returns: true on success, false if the host was not active. + */ +bool ipc_pm_prepare_host_sleep(struct iosm_pm *ipc_pm); + +/** + * ipc_pm_prepare_host_active - Prepare the PM for wakeup by entering + * IPC_MEM_HOST_PM_ACTIVE_WAIT state. + * @ipc_pm: Pointer to power management component + * + * Returns: true on success, false if the host was not sleeping. + */ +bool ipc_pm_prepare_host_active(struct iosm_pm *ipc_pm); + +/** + * ipc_pm_wait_for_device_active - Wait up to IPC_PM_ACTIVE_TIMEOUT_MS ms + * for the device to reach active state + * @ipc_pm: Pointer to power management component + * + * Returns: true if device is active, false on timeout + */ +bool ipc_pm_wait_for_device_active(struct iosm_pm *ipc_pm); + +/** + * ipc_pm_signal_hpda_doorbell - Wake up the device if it is in low power mode + * and trigger a head pointer update interrupt. + * @ipc_pm: Pointer to power management component + * @identifier: specifies what component triggered hpda update irq + * @host_slp_check: if true, the Host Sleep state machine is checked first: + * the doorbell fires only if the state machine allows a HP + * update, otherwise the pending flag is set. If false, the + * check is skipped, which is helpful for Host Sleep + * negotiation through the message ring. + */ +void ipc_pm_signal_hpda_doorbell(struct iosm_pm *ipc_pm, u32 identifier, + bool host_slp_check); + +/** + * ipc_pm_host_slp_reinit_dev_active_completion - Reinitialize the atomic + * variable and completion + * object used to get notified + * when the Device Sleep state + * machine changes to ACTIVE + * state, so that Sleep + * negotiation can be started. + * @ipc_pm: Pointer to power management component + */ +void ipc_pm_host_slp_reinit_dev_active_completion(struct iosm_pm *ipc_pm); + +/** + * ipc_pm_trigger - Update power manager and wake up the link if needed + * @ipc_pm: Pointer to power management component + * @unit: Power management units + * @active: Device link state + * + * Returns: true if link is unchanged or active, false otherwise + */ +bool ipc_pm_trigger(struct iosm_pm *ipc_pm, enum ipc_pm_unit unit, bool active); + +#endif From patchwork Thu Jan 7 17:05:20 2021 X-Patchwork-Submitter: "Kumar, M Chetan" X-Patchwork-Id: 358659 From: M Chetan Kumar To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com Subject: [PATCH 15/18] net: iosm: uevent support Date: Thu, 7 Jan 2021 22:35:20 +0530 Message-Id: <20210107170523.26531-16-m.chetan.kumar@intel.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: <20210107170523.26531-1-m.chetan.kumar@intel.com> References: <20210107170523.26531-1-m.chetan.kumar@intel.com> List-ID: X-Mailing-List: linux-wireless@vger.kernel.org Report modem status via uevent.
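A minimal usage sketch, assuming a caller that already holds a struct device. The wrapper below is hypothetical; ipc_uevent_send() and the UEVENT_* strings are the ones added by this patch:

    /* Report a modem crash. Safe from atomic context, since
     * ipc_uevent_send() allocates with GFP_ATOMIC and defers the
     * kobject_uevent_env() call to a work queue.
     */
    static void example_report_crash(struct device *dev)
    {
            ipc_uevent_send(dev, UEVENT_CRASH);
    }

Userspace then sees a KOBJ_CHANGE notification and can match the event string, e.g. with a udev rule.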
Signed-off-by: M Chetan Kumar --- drivers/net/wwan/iosm/iosm_ipc_uevent.c | 43 +++++++++++++++++++++++++++++++++ drivers/net/wwan/iosm/iosm_ipc_uevent.h | 41 +++++++++++++++++++++++++++ 2 files changed, 84 insertions(+) create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.c create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.h diff --git a/drivers/net/wwan/iosm/iosm_ipc_uevent.c b/drivers/net/wwan/iosm/iosm_ipc_uevent.c new file mode 100644 index 000000000000..9040ee6f6065 --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_uevent.c @@ -0,0 +1,43 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Intel Corporation. + */ + +#include + +#include "iosm_ipc_sio.h" +#include "iosm_ipc_uevent.h" + +/* Update the uevent in work queue context */ +static void ipc_uevent_work(struct work_struct *data) +{ + struct ipc_uevent_info *info; + char *envp[2] = { NULL, NULL }; + + info = container_of(data, struct ipc_uevent_info, work); + + envp[0] = info->uevent; + + if (kobject_uevent_env(&info->dev->kobj, KOBJ_CHANGE, envp)) + pr_err("uevent %s failed to send", info->uevent); + + kfree(info); +} + +void ipc_uevent_send(struct device *dev, char *uevent) +{ + struct ipc_uevent_info *info = kzalloc(sizeof(*info), GFP_ATOMIC); + + if (!info) + return; + + /* Initialize the kernel work queue */ + INIT_WORK(&info->work, ipc_uevent_work); + + /* Store the device and event information */ + info->dev = dev; + snprintf(info->uevent, MAX_UEVENT_LEN, "%s: %s", dev_name(dev), uevent); + + /* Schedule uevent in process context using work queue */ + schedule_work(&info->work); +} diff --git a/drivers/net/wwan/iosm/iosm_ipc_uevent.h b/drivers/net/wwan/iosm/iosm_ipc_uevent.h new file mode 100644 index 000000000000..422f64411c6e --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_uevent.h @@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2020 Intel Corporation. + */ + +#ifndef IOSM_IPC_UEVENT_H +#define IOSM_IPC_UEVENT_H + +/* Baseband event strings */ +#define UEVENT_MDM_NOT_READY "MDM_NOT_READY" +#define UEVENT_ROM_READY "ROM_READY" +#define UEVENT_MDM_READY "MDM_READY" +#define UEVENT_CRASH "CRASH" +#define UEVENT_CD_READY "CD_READY" +#define UEVENT_CD_READY_LINK_DOWN "CD_READY_LINK_DOWN" +#define UEVENT_MDM_TIMEOUT "MDM_TIMEOUT" + +/* Maximum length of user events */ +#define MAX_UEVENT_LEN 64 + +/** + * struct ipc_uevent_info - Uevent information structure. + * @dev: Pointer to device structure + * @uevent: Uevent information + * @work: Uevent work struct + */ +struct ipc_uevent_info { + struct device *dev; + char uevent[MAX_UEVENT_LEN]; + struct work_struct work; +}; + +/** + * ipc_uevent_send - Send modem event to user space. + * @dev: Generic device pointer + * @uevent: Uevent information + * + */ +void ipc_uevent_send(struct device *dev, char *uevent); + +#endif From patchwork Thu Jan 7 17:05:21 2021 X-Patchwork-Submitter: "Kumar, M Chetan" X-Patchwork-Id: 358658 From: M Chetan Kumar To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com, m.chetan.kumar@intel.com Subject: [PATCH 16/18] net: iosm: net driver Date: Thu, 7 Jan 2021 22:35:21 +0530 Message-Id: <20210107170523.26531-17-m.chetan.kumar@intel.com> X-Mailer: git-send-email 2.12.3 In-Reply-To: <20210107170523.26531-1-m.chetan.kumar@intel.com> References: <20210107170523.26531-1-m.chetan.kumar@intel.com> List-ID: X-Mailing-List: linux-wireless@vger.kernel.org 1) Create net device for data/IP communication. 2) Bind VLAN ID to mux IP session. 3) Implement net device operations. Signed-off-by: M Chetan Kumar --- drivers/net/wwan/iosm/iosm_ipc_wwan.c | 649 ++++++++++++++++++++++++++++++++++ drivers/net/wwan/iosm/iosm_ipc_wwan.h | 72 ++++ 2 files changed, 721 insertions(+) create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.c create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.h diff --git a/drivers/net/wwan/iosm/iosm_ipc_wwan.c b/drivers/net/wwan/iosm/iosm_ipc_wwan.c new file mode 100644 index 000000000000..b4c0e228f7a2 --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_wwan.c @@ -0,0 +1,649 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 Intel Corporation.
+ */ + +#include + +#include "iosm_ipc_chnl_cfg.h" +#include "iosm_ipc_imem_ops.h" + +/* Minimum number of transmit queues per WWAN root device */ +#define WWAN_MIN_TXQ (1) +/* Minimum number of receive queues per WWAN root device */ +#define WWAN_MAX_RXQ (1) +/* Default transmit queue for WWAN root device */ +#define WWAN_DEFAULT_TXQ (0) +/* VLAN tag for WWAN root device */ +#define WWAN_ROOT_VLAN_TAG (0) + +#define IPC_MEM_MIN_MTU_SIZE (68) +#define IPC_MEM_MAX_MTU_SIZE (1024 * 1024) + +#define IPC_MEM_VLAN_TO_SESSION (1) + +/* Required alignment for TX in bytes (32 bit/4 bytes)*/ +#define IPC_WWAN_ALIGN (4) + +/** + * struct ipc_vlan_info - This structure includes information about VLAN device. + * @vlan_id: VLAN tag of the VLAN device. + * @ch_id: IPC channel number for which VLAN device is created. + * @stats: Contains statistics of VLAN devices. + */ +struct ipc_vlan_info { + int vlan_id; + int ch_id; + struct net_device_stats stats; +}; + +/** + * struct iosm_wwan - This structure contains information about WWAN root device + * and interface to the IPC layer. + * @vlan_devs: Contains information about VLAN devices created under + * WWAN root device. + * @netdev: Pointer to network interface device structure. + * @ops_instance: Instance pointer for Callbacks + * @dev: Pointer device structure + * @lock: Spinlock to be used for atomic operations of the + * root device. + * @vlan_devs_nr: Number of VLAN devices. + * @if_mutex: Mutex used for add and remove vlan-id + * @max_devs: Maximum supported VLAN devs + * @max_ip_devs: Maximum supported IP VLAN devs + * @is_registered: Registration status with netdev + */ +struct iosm_wwan { + struct ipc_vlan_info *vlan_devs; + struct net_device *netdev; + void *ops_instance; + struct device *dev; + spinlock_t lock; /* Used for atomic operations on root device */ + int vlan_devs_nr; + struct mutex if_mutex; /* Mutex used for add and remove vlan-id */ + int max_devs; + int max_ip_devs; + u8 is_registered : 1; +}; + +/* Get the array index of requested tag. 
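 + * Returns the index into vlan_devs[] whose vlan_id matches the tag, or + * -EINVAL if the tag is not bound to any VLAN device.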
*/ +static int ipc_wwan_get_vlan_devs_nr(struct iosm_wwan *ipc_wwan, u16 tag) +{ + int i; + + if (ipc_wwan->vlan_devs) { + for (i = 0; i < ipc_wwan->vlan_devs_nr; i++) + if (ipc_wwan->vlan_devs[i].vlan_id == tag) + return i; + } + return -EINVAL; +} + +static int ipc_wwan_add_vlan(struct iosm_wwan *ipc_wwan, u16 vid) +{ + if (vid >= 512 || !ipc_wwan->vlan_devs) + return -EINVAL; + + if (vid == WWAN_ROOT_VLAN_TAG) + return 0; + + mutex_lock(&ipc_wwan->if_mutex); + + /* get channel id */ + ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].ch_id = + imem_sys_wwan_open(ipc_wwan->ops_instance, vid); + + if (ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].ch_id < 0) { + dev_err(ipc_wwan->dev, + "cannot connect wwan0 & id %d to the IPC mem layer", + vid); + mutex_unlock(&ipc_wwan->if_mutex); + return -ENODEV; + } + + /* save vlan id */ + ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].vlan_id = vid; + + dev_dbg(ipc_wwan->dev, "Channel id %d allocated to vlan id %d", + ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].ch_id, + ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].vlan_id); + + ipc_wwan->vlan_devs_nr++; + + mutex_unlock(&ipc_wwan->if_mutex); + + return 0; +} + +static int ipc_wwan_remove_vlan(struct iosm_wwan *ipc_wwan, u16 vid) +{ + int ch_nr = ipc_wwan_get_vlan_devs_nr(ipc_wwan, vid); + int i = 0; + + if (ch_nr < 0) { + dev_err(ipc_wwan->dev, "vlan dev not found for vid = %d", vid); + return ch_nr; + } + + if (ipc_wwan->vlan_devs[ch_nr].ch_id < 0) { + dev_err(ipc_wwan->dev, "invalid ch nr %d to kill", ch_nr); + return -EINVAL; + } + + mutex_lock(&ipc_wwan->if_mutex); + + imem_sys_wwan_close(ipc_wwan->ops_instance, vid, + ipc_wwan->vlan_devs[ch_nr].ch_id); + + ipc_wwan->vlan_devs[ch_nr].ch_id = -1; + + /* re-align the vlan information as we removed one tag */ + for (i = ch_nr; i < ipc_wwan->vlan_devs_nr; i++) + memcpy(&ipc_wwan->vlan_devs[i], &ipc_wwan->vlan_devs[i + 1], + sizeof(struct ipc_vlan_info)); + + ipc_wwan->vlan_devs_nr--; + + mutex_unlock(&ipc_wwan->if_mutex); + + return 0; +} + +/* Checks the protocol and discards the Ethernet header or VLAN header + * accordingly. + */ +static int ipc_wwan_pull_header(struct sk_buff *skb, bool *is_ip) +{ + unsigned int header_size; + __be16 proto; + + if (skb->protocol == htons(ETH_P_8021Q)) { + proto = vlan_eth_hdr(skb)->h_vlan_encapsulated_proto; + + if (skb->len < VLAN_ETH_HLEN) + header_size = 0; + else + header_size = VLAN_ETH_HLEN; + } else { + proto = eth_hdr(skb)->h_proto; + + if (skb->len < ETH_HLEN) + header_size = 0; + else + header_size = ETH_HLEN; + } + + /* If a valid pointer */ + if (header_size > 0 && is_ip) { + *is_ip = (proto == htons(ETH_P_IP)) || + (proto == htons(ETH_P_IPV6)); + + /* Discard the vlan/ethernet header. 
*/ + if (unlikely(!skb_pull(skb, header_size))) + header_size = 0; + } + + return header_size; +} + +/* Get VLAN tag from IPC SESSION ID */ +static inline u16 ipc_wwan_mux_session_to_vlan_tag(int id) +{ + return (u16)(id + IPC_MEM_VLAN_TO_SESSION); +} + +/* Get IPC SESSION ID from VLAN tag */ +static inline int ipc_wwan_vlan_to_mux_session_id(u16 tag) +{ + return tag - IPC_MEM_VLAN_TO_SESSION; +} + +/* Add new vlan device and open a channel */ +static int ipc_wwan_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, + u16 vid) +{ + struct iosm_wwan *ipc_wwan = netdev_priv(netdev); + + if (vid != IPC_WWAN_DSS_ID_4) + return ipc_wwan_add_vlan(ipc_wwan, vid); + + return 0; +} + +/* Remove vlan device and de-allocate channel */ +static int ipc_wwan_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, + u16 vid) +{ + struct iosm_wwan *ipc_wwan = netdev_priv(netdev); + + if (vid == WWAN_ROOT_VLAN_TAG) + return 0; + + return ipc_wwan_remove_vlan(ipc_wwan, vid); +} + +static int ipc_wwan_open(struct net_device *netdev) +{ + /* The interface address must hold a full Ethernet address + * (ETH_ALEN octets). + */ + if (netdev->addr_len < ETH_ALEN) { + pr_err("cannot build the Ethernet address for \"%s\"", + netdev->name); + return -ENODEV; + } + + /* enable tx path, DL data may follow */ + netif_tx_start_all_queues(netdev); + + return 0; +} + +static int ipc_wwan_stop(struct net_device *netdev) +{ + pr_debug("Stop all TX Queues"); + + netif_tx_stop_all_queues(netdev); + return 0; +} + +int ipc_wwan_receive(struct iosm_wwan *ipc_wwan, struct sk_buff *skb_arg, + bool dss) +{ + struct sk_buff *skb = skb_arg; + struct ethhdr *eth = (struct ethhdr *)skb->data; + u16 tag; + + if (unlikely(!eth)) { + dev_err(ipc_wwan->dev, "ethernet header info error"); + dev_kfree_skb(skb); + return -1; + } + + ether_addr_copy(eth->h_dest, ipc_wwan->netdev->dev_addr); + ether_addr_copy(eth->h_source, ipc_wwan->netdev->dev_addr); + eth->h_source[ETH_ALEN - 1] ^= 0x01; /* src is us xor 1 */ + /* set the ethernet payload type: ipv4 or ipv6 or Dummy type + * for 802.3 frames + */ + eth->h_proto = htons(ETH_P_802_3); + if (!dss) { + if ((skb->data[ETH_HLEN] & 0xF0) == 0x40) + eth->h_proto = htons(ETH_P_IP); + else if ((skb->data[ETH_HLEN] & 0xF0) == 0x60) + eth->h_proto = htons(ETH_P_IPV6); + } + + skb->dev = ipc_wwan->netdev; + skb->protocol = eth_type_trans(skb, ipc_wwan->netdev); + skb->ip_summed = CHECKSUM_UNNECESSARY; + + vlan_get_tag(skb, &tag); + /* RX stats don't include ETH_HLEN: eth_type_trans() pulls the + * Ethernet header, so skb->len no longer covers it. + */ + ipc_wwan_update_stats(ipc_wwan, ipc_wwan_vlan_to_mux_session_id(tag), + skb->len, false); + + /* Hand the frame to the stack; the return value is not acted upon. */ + netif_rx_ni(skb); + + return 0; +} + +/* Align SKB to 32bit, if not already aligned */ +static struct sk_buff *ipc_wwan_skb_align(struct iosm_wwan *ipc_wwan, + struct sk_buff *skb) +{ + unsigned int offset = (uintptr_t)skb->data & (IPC_WWAN_ALIGN - 1); + struct sk_buff *new_skb; + + if (offset == 0) + return skb; + + /* Allocate new skb to copy into */ + new_skb = dev_alloc_skb(skb->len + (IPC_WWAN_ALIGN - 1)); + if (unlikely(!new_skb)) { + dev_err(ipc_wwan->dev, "failed to reallocate skb"); + goto out; + } + + /* Make sure newly allocated skb is aligned */ + offset = (uintptr_t)new_skb->data & (IPC_WWAN_ALIGN - 1); + if (unlikely(offset != 0)) + skb_reserve(new_skb, IPC_WWAN_ALIGN - offset); + + /* Copy payload */ + memcpy(new_skb->data, skb->data, skb->len); + + skb_put(new_skb, skb->len); +out: + return new_skb; +} + +/* Transmit a packet (called by the kernel) */ +static int ipc_wwan_transmit(struct sk_buff *skb, struct net_device *netdev) +{ + struct iosm_wwan *ipc_wwan = netdev_priv(netdev); + bool is_ip = false; + int ret = -EINVAL; + int header_size; + int idx = 0; + u16 tag = 0; + + vlan_get_tag(skb, &tag); + + /* If the SKB belongs to the WWAN root device then don't send it to the + * device. Free the SKB and then return. + */ + if (unlikely(tag == WWAN_ROOT_VLAN_TAG)) + goto exit; + + /* Discard the Ethernet header or VLAN Ethernet header depending + * on the protocol. + */ + header_size = ipc_wwan_pull_header(skb, &is_ip); + if (!header_size) + goto exit; + + /* Get the channel number corresponding to VLAN ID */ + idx = ipc_wwan_get_vlan_devs_nr(ipc_wwan, tag); + if (unlikely(idx < 0 || idx >= ipc_wwan->max_devs || + ipc_wwan->vlan_devs[idx].ch_id < 0)) + goto exit; + + /* VLAN IDs from 1 to 255 are for IP data, + * 257 to 511 are for non-IP data; anything else + * (including 256) is rejected. + */ + if (tag > 0 && tag < 256) { + if (unlikely(!is_ip)) { + ret = -EXDEV; + goto exit; + } + } else if (tag > 256 && tag < 512) { + if (unlikely(is_ip)) { + ret = -EXDEV; + goto exit; + } + + /* Align the SKB only for control packets if not aligned. */ + skb = ipc_wwan_skb_align(ipc_wwan, skb); + if (!skb) + goto exit; + } else { + /* Unknown VLAN IDs */ + ret = -EXDEV; + goto exit; + } + + /* Send the SKB to device for transmission */ + ret = imem_sys_wwan_transmit(ipc_wwan->ops_instance, tag, + ipc_wwan->vlan_devs[idx].ch_id, skb); + + /* Return code of zero is success */ + if (ret == 0) { + ret = NETDEV_TX_OK; + } else if (ret == -2) { + /* Return code -2 is to enable re-enqueue of the skb. + * Re-push the stripped header before returning busy.
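 + * NETDEV_TX_BUSY makes the stack requeue the skb unchanged, so it must + * look exactly as it did when ndo_start_xmit() was entered.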
+		 */
+		if (unlikely(!skb_push(skb, header_size))) {
+			dev_err(ipc_wwan->dev, "unable to push eth hdr");
+			ret = -EIO;
+			goto exit;
+		}
+
+		ret = NETDEV_TX_BUSY;
+	} else {
+		ret = -EIO;
+		goto exit;
+	}
+
+	return ret;
+
+exit:
+	/* Log any skb drop except for the WWAN root device */
+	if (tag != 0)
+		dev_dbg(ipc_wwan->dev, "skb dropped. VLAN ID: %d, ret: %d",
+			tag, ret);
+
+	dev_kfree_skb_any(skb);
+	return ret;
+}
+
+static int ipc_wwan_change_mtu(struct net_device *dev, int new_mtu)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(dev);
+	unsigned long flags = 0;
+
+	if (unlikely(new_mtu < IPC_MEM_MIN_MTU_SIZE ||
+		     new_mtu > IPC_MEM_MAX_MTU_SIZE)) {
+		dev_err(ipc_wwan->dev, "mtu %d out of range %d..%d", new_mtu,
+			IPC_MEM_MIN_MTU_SIZE, IPC_MEM_MAX_MTU_SIZE);
+		return -EINVAL;
+	}
+
+	spin_lock_irqsave(&ipc_wwan->lock, flags);
+	dev->mtu = new_mtu;
+	spin_unlock_irqrestore(&ipc_wwan->lock, flags);
+	return 0;
+}
+
+static int ipc_wwan_change_mac_addr(struct net_device *dev, void *sock_addr)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(dev);
+	struct sockaddr *addr = sock_addr;
+	unsigned long flags = 0;
+	int result = 0;
+	u8 *sock_data;
+
+	sock_data = (u8 *)addr->sa_data;
+
+	spin_lock_irqsave(&ipc_wwan->lock, flags);
+
+	if (is_zero_ether_addr(sock_data)) {
+		dev->addr_len = 1;
+		eth_zero_addr(dev->dev_addr);
+		goto exit;
+	}
+
+	result = eth_mac_addr(dev, sock_addr);
+exit:
+	spin_unlock_irqrestore(&ipc_wwan->lock, flags);
+	return result;
+}
+
+static int ipc_wwan_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+	if (cmd != SIOCSIFHWADDR ||
+	    !access_ok((void __user *)ifr, sizeof(struct ifreq)) ||
+	    dev->addr_len > sizeof(struct sockaddr))
+		return -EINVAL;
+
+	return ipc_wwan_change_mac_addr(dev, &ifr->ifr_hwaddr);
+}
+
+static struct net_device_stats *ipc_wwan_get_stats(struct net_device *ndev)
+{
+	return &ndev->stats;
+}
+
+/* Validate the MAC address for WWAN devices */
+static int ipc_wwan_eth_validate_addr(struct net_device *netdev)
+{
+	return eth_validate_addr(netdev);
+}
+
+/* Return a valid TX queue for the mapped VLAN device */
+static u16 ipc_wwan_select_queue(struct net_device *netdev, struct sk_buff *skb,
+				 struct net_device *sb_dev)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(netdev);
+	u16 txqn = 0xFFFF;
+	u16 tag = 0;
+
+	/* Get the VLAN tag for the current skb;
+	 * if the packet is untagged, return the default queue.
+	 */
+	if (vlan_get_tag(skb, &tag) < 0)
+		return WWAN_DEFAULT_TXQ;
+
+	/* TX queues are allocated as follows:
+	 *
+	 * VLAN ID 0 is used by the VLAN root device (wwan0):
+	 * assign the default TX queue, which is 0.
+	 *
+	 * VLAN IDs from IMEM_WWAN_CTRL_VLAN_ID_START to
+	 * IMEM_WWAN_CTRL_VLAN_ID_END also use the default
+	 * TX queue, which is 0.
+	 *
+	 * VLAN IDs from IMEM_WWAN_DATA_VLAN_ID_START up to the
+	 * maximum number of IP devices get a separate
+	 * TX queue per VLAN ID.
+	 *
+	 * For any other VLAN ID, return an invalid TX queue.
+	 */
+	if (tag >= IMEM_WWAN_DATA_VLAN_ID_START && tag <= ipc_wwan->max_ip_devs)
+		txqn = tag;
+	else if ((tag >= IMEM_WWAN_CTRL_VLAN_ID_START &&
+		  tag <= IMEM_WWAN_CTRL_VLAN_ID_END) ||
+		 tag == WWAN_ROOT_VLAN_TAG)
+		txqn = WWAN_DEFAULT_TXQ;
+
+	dev_dbg(ipc_wwan->dev, "VLAN tag = %u, TX queue selected %u", tag,
+		txqn);
+	return txqn;
+}
+
+static const struct net_device_ops ipc_wwandev_ops = {
+	.ndo_open = ipc_wwan_open,
+	.ndo_stop = ipc_wwan_stop,
+	.ndo_start_xmit = ipc_wwan_transmit,
+	.ndo_change_mtu = ipc_wwan_change_mtu,
+	.ndo_validate_addr = ipc_wwan_eth_validate_addr,
+	.ndo_do_ioctl = ipc_wwan_ioctl,
+	.ndo_get_stats = ipc_wwan_get_stats,
+	.ndo_vlan_rx_add_vid = ipc_wwan_vlan_rx_add_vid,
+	.ndo_vlan_rx_kill_vid = ipc_wwan_vlan_rx_kill_vid,
+	.ndo_set_mac_address = ipc_wwan_change_mac_addr,
+	.ndo_select_queue = ipc_wwan_select_queue,
+};
+
+void ipc_wwan_update_stats(struct iosm_wwan *ipc_wwan, int id, size_t len,
+			   bool tx)
+{
+	struct net_device *iosm_ndev = ipc_wwan->netdev;
+	int idx =
+		ipc_wwan_get_vlan_devs_nr(ipc_wwan,
+					  ipc_wwan_mux_session_to_vlan_tag(id));
+
+	if (unlikely(idx < 0 || idx >= ipc_wwan->max_devs)) {
+		dev_err(ipc_wwan->dev, "invalid VLAN device");
+		return;
+	}
+
+	if (tx) {
+		/* Update the VLAN device TX statistics */
+		ipc_wwan->vlan_devs[idx].stats.tx_packets++;
+		ipc_wwan->vlan_devs[idx].stats.tx_bytes += len;
+		/* Update the root device TX statistics */
+		iosm_ndev->stats.tx_packets++;
+		iosm_ndev->stats.tx_bytes += len;
+	} else {
+		/* Update the VLAN device RX statistics */
+		ipc_wwan->vlan_devs[idx].stats.rx_packets++;
+		ipc_wwan->vlan_devs[idx].stats.rx_bytes += len;
+		/* Update the root device RX statistics */
+		iosm_ndev->stats.rx_packets++;
+		iosm_ndev->stats.rx_bytes += len;
+	}
+}
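+
+/* Worked example of the session/VLAN/queue mapping implemented by the
+ * helpers above (illustrative only; it assumes IPC_MEM_VLAN_TO_SESSION
+ * is 1, as defined by the MUX layer elsewhere in this series):
+ *
+ *	MUX session id 0 -> VLAN tag 1 -> IP data,  TX queue 1
+ *	MUX session id 4 -> VLAN tag 5 -> IP data,  TX queue 5
+ *	VLAN tags 257..511 carry non-IP (control) data and share
+ *	TX queue 0 (WWAN_DEFAULT_TXQ).
+ */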
"Enable" : "Disable"); + if (on) + netif_stop_subqueue(ipc_wwan->netdev, vid); + else + netif_wake_subqueue(ipc_wwan->netdev, vid); +} + +static struct device_type wwan_type = { .name = "wwan" }; + +struct iosm_wwan *ipc_wwan_init(void *ops_instance, struct device *dev, + int max_sessions) +{ + int max_tx_q = WWAN_MIN_TXQ + max_sessions; + struct iosm_wwan *ipc_wwan; + struct net_device *netdev = alloc_etherdev_mqs(sizeof(*ipc_wwan), + max_tx_q, WWAN_MAX_RXQ); + + if (!netdev || !ops_instance) + return NULL; + + ipc_wwan = netdev_priv(netdev); + + ipc_wwan->dev = dev; + ipc_wwan->netdev = netdev; + ipc_wwan->is_registered = false; + + ipc_wwan->vlan_devs_nr = 0; + ipc_wwan->ops_instance = ops_instance; + + ipc_wwan->max_devs = max_sessions + IPC_MEM_MAX_CHANNELS; + ipc_wwan->max_ip_devs = max_sessions; + + ipc_wwan->vlan_devs = kcalloc(ipc_wwan->max_devs, + sizeof(ipc_wwan->vlan_devs[0]), + GFP_KERNEL); + + spin_lock_init(&ipc_wwan->lock); + mutex_init(&ipc_wwan->if_mutex); + + /* allocate random ethernet address */ + eth_random_addr(netdev->dev_addr); + netdev->addr_assign_type = NET_ADDR_RANDOM; + + snprintf(netdev->name, IFNAMSIZ, "%s", "wwan0"); + netdev->netdev_ops = &ipc_wwandev_ops; + netdev->flags |= IFF_NOARP; + netdev->features |= + NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_FILTER; + SET_NETDEV_DEVTYPE(netdev, &wwan_type); + + if (register_netdev(netdev)) { + dev_err(ipc_wwan->dev, "register_netdev failed"); + ipc_wwan_deinit(ipc_wwan); + return NULL; + } + + ipc_wwan->is_registered = true; + + netif_device_attach(netdev); + + netdev->max_mtu = IPC_MEM_MAX_MTU_SIZE; + + return ipc_wwan; +} + +void ipc_wwan_deinit(struct iosm_wwan *ipc_wwan) +{ + if (ipc_wwan->is_registered) + unregister_netdev(ipc_wwan->netdev); + kfree(ipc_wwan->vlan_devs); + free_netdev(ipc_wwan->netdev); +} + +bool ipc_wwan_is_tx_stopped(struct iosm_wwan *ipc_wwan, int id) +{ + u16 vid = ipc_wwan_mux_session_to_vlan_tag(id); + + return __netif_subqueue_stopped(ipc_wwan->netdev, vid); +} diff --git a/drivers/net/wwan/iosm/iosm_ipc_wwan.h b/drivers/net/wwan/iosm/iosm_ipc_wwan.h new file mode 100644 index 000000000000..f30064901aa7 --- /dev/null +++ b/drivers/net/wwan/iosm/iosm_ipc_wwan.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (C) 2020 Intel Corporation. + */ + +#ifndef IOSM_IPC_WWAN_H +#define IOSM_IPC_WWAN_H + +#define IMEM_WWAN_DATA_VLAN_ID_START 1 +#define IMEM_WWAN_CTRL_VLAN_ID_START 257 +#define IMEM_WWAN_CTRL_VLAN_ID_END 512 + +/** + * ipc_wwan_init - Allocate, Init and register WWAN device + * @ops_instance: Instance pointer for callback + * @dev: Pointer to device structure + * @max_sessions: Maximum number of sessions + * + * Returns: Pointer to instance on success else NULL + */ +struct iosm_wwan *ipc_wwan_init(void *ops_instance, struct device *dev, + int max_sessions); + +/** + * ipc_wwan_deinit - Unregister and free WWAN device, clear pointer + * @ipc_wwan: Pointer to wwan instance data + */ +void ipc_wwan_deinit(struct iosm_wwan *ipc_wwan); + +/** + * ipc_wwan_receive - Receive a downlink packet from CP. 
+
+/**
+ * ipc_wwan_deinit - Unregister and free the WWAN device, clear the pointer
+ * @ipc_wwan: Pointer to wwan instance data
+ */
+void ipc_wwan_deinit(struct iosm_wwan *ipc_wwan);
+
+/**
+ * ipc_wwan_receive - Receive a downlink packet from CP
+ * @ipc_wwan: Pointer to wwan instance
+ * @skb_arg: Pointer to struct sk_buff
+ * @dss: True if the VLAN ID is greater than
+ *       IMEM_WWAN_CTRL_VLAN_ID_START, false otherwise
+ *
+ * Return: 0 on success, -EINVAL or -1 on failure
+ */
+int ipc_wwan_receive(struct iosm_wwan *ipc_wwan, struct sk_buff *skb_arg,
+		     bool dss);
+
+/**
+ * ipc_wwan_update_stats - Update device statistics
+ * @ipc_wwan: Pointer to wwan instance
+ * @id: IPC MUX channel session id
+ * @len: Number of bytes to update
+ * @tx: True to update transmit statistics, false for receive
+ */
+void ipc_wwan_update_stats(struct iosm_wwan *ipc_wwan, int id, size_t len,
+			   bool tx);
+
+/**
+ * ipc_wwan_tx_flowctrl - Enable/disable TX flow control
+ * @ipc_wwan: Pointer to wwan instance
+ * @id: IPC MUX channel session id
+ * @on: True to enable flow control (stop the queue), false to disable it
+ */
+void ipc_wwan_tx_flowctrl(struct iosm_wwan *ipc_wwan, int id, bool on);
+
+/**
+ * ipc_wwan_is_tx_stopped - Check whether TX is stopped for a VLAN ID
+ * @ipc_wwan: Pointer to wwan instance
+ * @id: IPC MUX channel session id
+ *
+ * Return: true if stopped, false otherwise
+ */
+bool ipc_wwan_is_tx_stopped(struct iosm_wwan *ipc_wwan, int id);
+
+#endif
From patchwork Thu Jan 7 17:05:22 2021
From: M Chetan Kumar <m.chetan.kumar@intel.com>
To: netdev@vger.kernel.org, linux-wireless@vger.kernel.org
Cc: johannes@sipsolutions.net, krishna.c.sudi@intel.com,
 m.chetan.kumar@intel.com
Subject: [PATCH 17/18] net: iosm: readme file
Date: Thu, 7 Jan 2021 22:35:22 +0530
Message-Id: <20210107170523.26531-18-m.chetan.kumar@intel.com>
In-Reply-To: <20210107170523.26531-1-m.chetan.kumar@intel.com>
References: <20210107170523.26531-1-m.chetan.kumar@intel.com>

Documents IOSM Driver interface usage.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/README | 126 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 126 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/README

diff --git a/drivers/net/wwan/iosm/README b/drivers/net/wwan/iosm/README
new file mode 100644
index 000000000000..4a489177ad96
--- /dev/null
+++ b/drivers/net/wwan/iosm/README
@@ -0,0 +1,126 @@
+IOSM Driver for PCIe based Intel M.2 Modems
+================================================
+The IOSM (IPC over Shared Memory) driver is a PCIe host driver, for Linux
+and Chrome platforms, for data exchange over the PCIe interface between
+the host platform and an Intel M.2 modem. The driver exposes an interface
+conforming to the MBIM protocol [1]. Any front-end application (e.g.
+ModemManager) can manage the MBIM interface to enable data communication
+towards WWAN.
+
+Basic usage
+===========
+MBIM functions are inactive when unmanaged. The IOSM driver only
+provides a userspace interface, a character device representing the
+MBIM control channel, and does not play any role in managing the
+functionality. It is the job of a userspace application to enumerate
+the port appropriately and enable MBIM functionality.
+
+Examples of such userspace applications are:
+ - mbimcli (included with the libmbim [2] library), and
+ - ModemManager [3]
+
+For establishing an MBIM IP session, at least these actions are required
+by the management application:
+ - open the control channel
+ - configure the network connection settings
+ - connect to the network
+ - configure the IP interface
+
+Management application development
+----------------------------------
+The driver and userspace interfaces are described below. The MBIM
+control channel protocol is described in [1].
+
+MBIM control channel userspace ABI
+==================================
+
+/dev/wwanctrl character device
+------------------------------
+The driver exposes an interface to the MBIM function control channel as a
+character device subdriver. The userspace end of the control channel pipe
+is the /dev/wwanctrl character device.
+
+The /dev/wwanctrl device is created as a subordinate character device
+under the IOSM driver. The character device associated with a specific
+MBIM function can be looked up in sysfs by matching the device name above.
+
+Control channel configuration
+-----------------------------
+The wMaxControlMessage field of the MBIM functional descriptor
+limits the maximum control message size. The management application needs
+to negotiate the control message size according to its requirements.
+See also the ioctl documentation below.
+
+Fragmentation
+-------------
+The userspace application is responsible for all control message
+fragmentation and defragmentation, as per MBIM.
+
+/dev/wwanctrl write()
+---------------------
+The MBIM control messages from the management application must not
+exceed the negotiated control message size.
+
+/dev/wwanctrl read()
+--------------------
+The management application must accept control messages of up to the
+negotiated control message size.
+
+/dev/wwanctrl ioctl()
+---------------------
+IOCTL_WDM_MAX_COMMAND: Get the maximum command size.
+Applications can use this ioctl to fetch the maximum command buffer
+length supported by the driver, which is restricted to 4096 bytes:
+
+  #include <stdio.h>
+  #include <fcntl.h>
+  #include <sys/ioctl.h>
+  #include <linux/usb/cdc-wdm.h>
+
+  int main()
+  {
+	__u16 max;
+	int fd = open("/dev/wwanctrl", O_RDWR);
+
+	if (!ioctl(fd, IOCTL_WDM_MAX_COMMAND, &max))
+		printf("wMaxControlMessage is %d\n", max);
+  }
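+
+For illustration, a minimal control channel transaction could look as
+follows. This sketch (not part of the driver) opens the control channel
+and sends an MBIM OPEN_MSG (MessageType 0x00000001, see [1]); message
+fields are little-endian, so a little-endian host is assumed, and the
+4096-byte response buffer merely matches the driver limit above:
+
+  #include <stdio.h>
+  #include <stdint.h>
+  #include <unistd.h>
+  #include <fcntl.h>
+
+  int main()
+  {
+	/* MBIM_OPEN_MSG: MessageType, MessageLength, TransactionId,
+	 * MaxControlTransfer
+	 */
+	uint32_t open_msg[4] = { 0x00000001, 16, 1, 4096 };
+	uint8_t resp[4096];
+	ssize_t n;
+	int fd = open("/dev/wwanctrl", O_RDWR);
+
+	if (fd < 0)
+		return 1;
+	if (write(fd, open_msg, sizeof(open_msg)) != sizeof(open_msg))
+		return 1;
+	/* An MBIM_OPEN_DONE message (0x80000001) is expected back */
+	n = read(fd, resp, sizeof(resp));
+	if (n > 0)
+		printf("read %zd response bytes\n", n);
+	close(fd);
+	return 0;
+  }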
+
+MBIM data channel userspace ABI
+===============================
+
+wwan0 network device
+--------------------
+The IOSM driver represents the MBIM data channel as a single
+network device named "wwan0". This network device is initially
+mapped to MBIM IP session 0.
+
+Multiplexed IP sessions (IPS)
+-----------------------------
+The IOSM driver allows multiplexing several IP sessions over the single
+wwan0 network device. The driver models such IP sessions as 802.1Q VLAN
+subdevices of the master wwan0 device, mapping MBIM IP session M to VLAN
+ID M for all values of M greater than 0.
+
+The userspace management application is responsible for adding new VLAN
+links prior to establishing MBIM IP sessions where the SessionId is
+greater than 0. These links can be added by using the normal VLAN kernel
+interfaces.
+
+For example, adding a link for an MBIM IP session with SessionId 5:
+
+  ip link add link wwan0 name wwan0.5 type vlan id 5
+
+The driver will automatically map the "wwan0.5" network device to MBIM
+IP session 5.
+
+References
+==========
+
+[1] "MBIM (Mobile Broadband Interface Model) Registry"
+    - http://compliance.usb.org/mbim/
+
+[2] libmbim - "a glib-based library for talking to WWAN modems and
+    devices which speak the Mobile Interface Broadband Model (MBIM)
+    protocol"
+    - http://www.freedesktop.org/wiki/Software/libmbim/
+
+[3] ModemManager - "a DBus-activated daemon which controls mobile
+    broadband (2G/3G/4G) devices and connections"
+    - http://www.freedesktop.org/wiki/Software/ModemManager/
\ No newline at end of file