From patchwork Fri Mar 6 04:28:17 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 190101
From: Alex Elder
To: David Miller , Arnd Bergmann
Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/17] soc: qcom: ipa: main code
Date: Thu, 5 Mar 2020 22:28:17 -0600
Message-Id: <20200306042831.17827-4-elder@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200306042831.17827-1-elder@linaro.org>
References: <20200306042831.17827-1-elder@linaro.org>
MIME-Version: 1.0
Sender: linux-arm-msm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-arm-msm@vger.kernel.org

This patch includes four source files that represent some basic "main
program" code for the IPA driver.  They are:

 - "ipa.h" defines the top-level IPA structure which represents an IPA
   device throughout the code.
 - "ipa_main.c" contains the platform driver probe function, along with
   some general code used during initialization.
 - "ipa_reg.h" defines the offsets of the 32-bit registers used for the
   IPA device, along with masks that define the position and width of
   fields within these registers.
 - "ipa_version.h" defines some symbolic IPA version numbers.

Each file includes some documentation that provides a little more
overview of how the code is organized and used.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/ipa.h         | 148 ++++
 drivers/net/ipa/ipa_main.c    | 954 ++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_reg.c     |  38 ++
 drivers/net/ipa/ipa_reg.h     | 476 +++++++++++++++++
 drivers/net/ipa/ipa_version.h |  23 +
 5 files changed, 1639 insertions(+)
 create mode 100644 drivers/net/ipa/ipa.h
 create mode 100644 drivers/net/ipa/ipa_main.c
 create mode 100644 drivers/net/ipa/ipa_reg.c
 create mode 100644 drivers/net/ipa/ipa_reg.h
 create mode 100644 drivers/net/ipa/ipa_version.h

diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
new file mode 100644
index 000000000000..23fb29889e5a
--- /dev/null
+++ b/drivers/net/ipa/ipa.h
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */ +#ifndef _IPA_H_ +#define _IPA_H_ + +#include +#include +#include +#include + +#include "ipa_version.h" +#include "gsi.h" +#include "ipa_mem.h" +#include "ipa_qmi.h" +#include "ipa_endpoint.h" +#include "ipa_interrupt.h" + +struct clk; +struct icc_path; +struct net_device; +struct platform_device; + +struct ipa_clock; +struct ipa_smp2p; +struct ipa_interrupt; + +/** + * struct ipa - IPA information + * @gsi: Embedded GSI structure + * @version: IPA hardware version + * @pdev: Platform device + * @modem_rproc: Remoteproc handle for modem subsystem + * @smp2p: SMP2P information + * @clock: IPA clocking information + * @suspend_ref: Whether clock reference preventing suspend taken + * @table_addr: DMA address of filter/route table content + * @table_virt: Virtual address of filter/route table content + * @interrupt: IPA Interrupt information + * @uc_loaded: true after microcontroller has reported it's ready + * @reg_addr: DMA address used for IPA register access + * @reg_virt: Virtual address used for IPA register access + * @mem_addr: DMA address of IPA-local memory space + * @mem_virt: Virtual address of IPA-local memory space + * @mem_offset: Offset from @mem_virt used for access to IPA memory + * @mem_size: Total size (bytes) of memory at @mem_virt + * @mem: Array of IPA-local memory region descriptors + * @zero_addr: DMA address of preallocated zero-filled memory + * @zero_virt: Virtual address of preallocated zero-filled memory + * @zero_size: Size (bytes) of preallocated zero-filled memory + * @wakeup_source: Wakeup source information + * @available: Bit mask indicating endpoints hardware supports + * @filter_map: Bit mask indicating endpoints that support filtering + * @initialized: Bit mask indicating endpoints initialized + * @set_up: Bit mask indicating endpoints set up + * @enabled: Bit mask indicating endpoints enabled + * @endpoint: Array of endpoint information + * @channel_map: Mapping of GSI channel to IPA endpoint + * @name_map: Mapping of IPA endpoint name to IPA endpoint + * @setup_complete: Flag indicating whether setup stage has completed + * @modem_state: State of modem (stopped, running) + * @modem_netdev: Network device structure used for modem + * @qmi: QMI information + */ +struct ipa { + struct gsi gsi; + enum ipa_version version; + struct platform_device *pdev; + struct rproc *modem_rproc; + struct ipa_smp2p *smp2p; + struct ipa_clock *clock; + atomic_t suspend_ref; + + dma_addr_t table_addr; + __le64 *table_virt; + + struct ipa_interrupt *interrupt; + bool uc_loaded; + + dma_addr_t reg_addr; + void __iomem *reg_virt; + + dma_addr_t mem_addr; + void *mem_virt; + u32 mem_offset; + u32 mem_size; + const struct ipa_mem *mem; + + dma_addr_t zero_addr; + void *zero_virt; + size_t zero_size; + + struct wakeup_source *wakeup_source; + + /* Bit masks indicating endpoint state */ + u32 available; /* supported by hardware */ + u32 filter_map; + u32 initialized; + u32 set_up; + u32 enabled; + + struct ipa_endpoint endpoint[IPA_ENDPOINT_MAX]; + struct ipa_endpoint *channel_map[GSI_CHANNEL_COUNT_MAX]; + struct ipa_endpoint *name_map[IPA_ENDPOINT_COUNT]; + + bool setup_complete; + + atomic_t modem_state; /* enum ipa_modem_state */ + struct net_device *modem_netdev; + struct ipa_qmi qmi; +}; + +/** + * ipa_setup() - Perform IPA setup + * @ipa: IPA pointer + * + * IPA initialization is broken into stages: init; config; and setup. + * (These have inverses exit, deconfig, and teardown.) 
+ * + * Activities performed at the init stage can be done without requiring + * any access to IPA hardware. Activities performed at the config stage + * require the IPA clock to be running, because they involve access + * to IPA registers. The setup stage is performed only after the GSI + * hardware is ready (more on this below). The setup stage allows + * the AP to perform more complex initialization by issuing "immediate + * commands" using a special interface to the IPA. + * + * This function, @ipa_setup(), starts the setup stage. + * + * In order for the GSI hardware to be functional it needs firmware to be + * loaded (in addition to some other low-level initialization). This early + * GSI initialization can be done either by Trust Zone on the AP or by the + * modem. + * + * If it's done by Trust Zone, the AP loads the GSI firmware and supplies + * it to Trust Zone to verify and install. When this completes, if + * verification was successful, the GSI layer is ready and ipa_setup() + * implements the setup phase of initialization. + * + * If the modem performs early GSI initialization, the AP needs to know + * when this has occurred. An SMP2P interrupt is used for this purpose, + * and receipt of that interrupt triggers the call to ipa_setup(). + */ +int ipa_setup(struct ipa *ipa); + +#endif /* _IPA_H_ */ diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c new file mode 100644 index 000000000000..d6e7f257e99d --- /dev/null +++ b/drivers/net/ipa/ipa_main.c @@ -0,0 +1,954 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_clock.h" +#include "ipa_data.h" +#include "ipa_endpoint.h" +#include "ipa_cmd.h" +#include "ipa_reg.h" +#include "ipa_mem.h" +#include "ipa_table.h" +#include "ipa_modem.h" +#include "ipa_uc.h" +#include "ipa_interrupt.h" +#include "gsi_trans.h" + +/** + * DOC: The IP Accelerator + * + * This driver supports the Qualcomm IP Accelerator (IPA), which is a + * networking component found in many Qualcomm SoCs. The IPA is connected + * to the application processor (AP), but is also connected (and partially + * controlled by) other "execution environments" (EEs), such as a modem. + * + * The IPA is the conduit between the AP and the modem that carries network + * traffic. This driver presents a network interface representing the + * connection of the modem to external (e.g. LTE) networks. + * + * The IPA provides protocol checksum calculation, offloading this work + * from the AP. The IPA offers additional functionality, including routing, + * filtering, and NAT support, but that more advanced functionality is not + * currently supported. Despite that, some resources--including routing + * tables and filter tables--are defined in this driver because they must + * be initialized even when the advanced hardware features are not used. + * + * There are two distinct layers that implement the IPA hardware, and this + * is reflected in the organization of the driver. The generic software + * interface (GSI) is an integral component of the IPA, providing a + * well-defined communication layer between the AP subsystem and the IPA + * core. The GSI implements a set of "channels" used for communication + * between the AP and the IPA. 
+ *
+ * The IPA layer uses GSI channels to implement its "endpoints". And while
+ * a GSI channel carries data between the AP and the IPA, a pair of IPA
+ * endpoints is used to carry traffic between two EEs. Specifically, the main
+ * modem network interface is implemented by two pairs of endpoints: a TX
+ * endpoint on the AP coupled with an RX endpoint on the modem; and another
+ * RX endpoint on the AP receiving data from a TX endpoint on the modem.
+ */
+
+/* The name of the GSI firmware file relative to /lib/firmware */
+#define IPA_FWS_PATH		"ipa_fws.mdt"
+#define IPA_PAS_ID		15
+
+/**
+ * ipa_suspend_handler() - Handle the suspend IPA interrupt
+ * @ipa:	IPA pointer
+ * @irq_id:	IPA interrupt type (unused)
+ *
+ * When in suspended state, the IPA can trigger a resume by sending a SUSPEND
+ * IPA interrupt.
+ */
+static void ipa_suspend_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
+{
+	/* Take a single clock reference to prevent suspend. All
+	 * endpoints will be resumed as a result. This reference will
+	 * be dropped when we get a power management suspend request.
+	 */
+	if (!atomic_xchg(&ipa->suspend_ref, 1))
+		ipa_clock_get(ipa);
+
+	/* Acknowledge/clear the suspend interrupt on all endpoints */
+	ipa_interrupt_suspend_clear_all(ipa->interrupt);
+}
+
+/**
+ * ipa_setup() - Set up IPA hardware
+ * @ipa:	IPA pointer
+ *
+ * Perform initialization that requires issuing immediate commands on
+ * the command TX endpoint. If the modem is doing GSI firmware load
+ * and initialization, this function will be called when an SMP2P
+ * interrupt has been signaled by the modem. Otherwise it will be
+ * called from ipa_probe() after GSI firmware has been successfully
+ * loaded, authenticated, and started by Trust Zone.
+ */
+int ipa_setup(struct ipa *ipa)
+{
+	struct ipa_endpoint *exception_endpoint;
+	struct ipa_endpoint *command_endpoint;
+	int ret;
+
+	/* IPA v4.0 and above don't use the doorbell engine. */
+	ret = gsi_setup(&ipa->gsi, ipa->version == IPA_VERSION_3_5_1);
+	if (ret)
+		return ret;
+
+	ipa->interrupt = ipa_interrupt_setup(ipa);
+	if (IS_ERR(ipa->interrupt)) {
+		ret = PTR_ERR(ipa->interrupt);
+		goto err_gsi_teardown;
+	}
+	ipa_interrupt_add(ipa->interrupt, IPA_IRQ_TX_SUSPEND,
+			  ipa_suspend_handler);
+
+	ipa_uc_setup(ipa);
+
+	ipa_endpoint_setup(ipa);
+
+	/* We need to use the AP command TX endpoint to perform other
+	 * initialization, so we enable it first.
+	 */
+	command_endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
+	ret = ipa_endpoint_enable_one(command_endpoint);
+	if (ret)
+		goto err_endpoint_teardown;
+
+	ret = ipa_mem_setup(ipa);
+	if (ret)
+		goto err_command_disable;
+
+	ret = ipa_table_setup(ipa);
+	if (ret)
+		goto err_mem_teardown;
+
+	/* Enable the exception handling endpoint, and tell the hardware
+	 * to use it by default.
+	 */
+	exception_endpoint = ipa->name_map[IPA_ENDPOINT_AP_LAN_RX];
+	ret = ipa_endpoint_enable_one(exception_endpoint);
+	if (ret)
+		goto err_table_teardown;
+
+	ipa_endpoint_default_route_set(ipa, exception_endpoint->endpoint_id);
+
+	/* We're all set.
Now prepare for communication with the modem */ + ret = ipa_modem_setup(ipa); + if (ret) + goto err_default_route_clear; + + ipa->setup_complete = true; + + dev_info(&ipa->pdev->dev, "IPA driver setup completed successfully\n"); + + return 0; + +err_default_route_clear: + ipa_endpoint_default_route_clear(ipa); + ipa_endpoint_disable_one(exception_endpoint); +err_table_teardown: + ipa_table_teardown(ipa); +err_mem_teardown: + ipa_mem_teardown(ipa); +err_command_disable: + ipa_endpoint_disable_one(command_endpoint); +err_endpoint_teardown: + ipa_endpoint_teardown(ipa); + ipa_uc_teardown(ipa); + ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_TX_SUSPEND); + ipa_interrupt_teardown(ipa->interrupt); +err_gsi_teardown: + gsi_teardown(&ipa->gsi); + + return ret; +} + +/** + * ipa_teardown() - Inverse of ipa_setup() + * @ipa: IPA pointer + */ +static void ipa_teardown(struct ipa *ipa) +{ + struct ipa_endpoint *exception_endpoint; + struct ipa_endpoint *command_endpoint; + + ipa_modem_teardown(ipa); + ipa_endpoint_default_route_clear(ipa); + exception_endpoint = ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]; + ipa_endpoint_disable_one(exception_endpoint); + ipa_table_teardown(ipa); + ipa_mem_teardown(ipa); + command_endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]; + ipa_endpoint_disable_one(command_endpoint); + ipa_endpoint_teardown(ipa); + ipa_uc_teardown(ipa); + ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_TX_SUSPEND); + ipa_interrupt_teardown(ipa->interrupt); + gsi_teardown(&ipa->gsi); +} + +/* Configure QMB Core Master Port selection */ +static void ipa_hardware_config_comp(struct ipa *ipa) +{ + u32 val; + + /* Nothing to configure for IPA v3.5.1 */ + if (ipa->version == IPA_VERSION_3_5_1) + return; + + val = ioread32(ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET); + + if (ipa->version == IPA_VERSION_4_0) { + val &= ~IPA_QMB_SELECT_CONS_EN_FMASK; + val &= ~IPA_QMB_SELECT_PROD_EN_FMASK; + val &= ~IPA_QMB_SELECT_GLOBAL_EN_FMASK; + } else { + val |= GSI_MULTI_AXI_MASTERS_DIS_FMASK; + } + + val |= GSI_MULTI_INORDER_RD_DIS_FMASK; + val |= GSI_MULTI_INORDER_WR_DIS_FMASK; + + iowrite32(val, ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET); +} + +/* Configure DDR and PCIe max read/write QSB values */ +static void ipa_hardware_config_qsb(struct ipa *ipa) +{ + u32 val; + + /* QMB_0 represents DDR; QMB_1 represents PCIe (not present in 4.2) */ + val = u32_encode_bits(8, GEN_QMB_0_MAX_WRITES_FMASK); + if (ipa->version == IPA_VERSION_4_2) + val |= u32_encode_bits(0, GEN_QMB_1_MAX_WRITES_FMASK); + else + val |= u32_encode_bits(4, GEN_QMB_1_MAX_WRITES_FMASK); + iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_WRITES_OFFSET); + + if (ipa->version == IPA_VERSION_3_5_1) { + val = u32_encode_bits(8, GEN_QMB_0_MAX_READS_FMASK); + val |= u32_encode_bits(12, GEN_QMB_1_MAX_READS_FMASK); + } else { + val = u32_encode_bits(12, GEN_QMB_0_MAX_READS_FMASK); + if (ipa->version == IPA_VERSION_4_2) + val |= u32_encode_bits(0, GEN_QMB_1_MAX_READS_FMASK); + else + val |= u32_encode_bits(12, GEN_QMB_1_MAX_READS_FMASK); + /* GEN_QMB_0_MAX_READS_BEATS is 0 */ + /* GEN_QMB_1_MAX_READS_BEATS is 0 */ + } + iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_READS_OFFSET); +} + +static void ipa_idle_indication_cfg(struct ipa *ipa, + u32 enter_idle_debounce_thresh, + bool const_non_idle_enable) +{ + u32 offset; + u32 val; + + val = u32_encode_bits(enter_idle_debounce_thresh, + ENTER_IDLE_DEBOUNCE_THRESH_FMASK); + if (const_non_idle_enable) + val |= CONST_NON_IDLE_ENABLE_FMASK; + + offset = ipa_reg_idle_indication_cfg_offset(ipa->version); + iowrite32(val, 
ipa->reg_virt + offset); +} + +/** + * ipa_hardware_dcd_config() - Enable dynamic clock division on IPA + * + * Configures when the IPA signals it is idle to the global clock + * controller, which can respond by scalling down the clock to + * save power. + */ +static void ipa_hardware_dcd_config(struct ipa *ipa) +{ + /* Recommended values for IPA 3.5 according to IPA HPG */ + ipa_idle_indication_cfg(ipa, 256, false); +} + +static void ipa_hardware_dcd_deconfig(struct ipa *ipa) +{ + /* Power-on reset values */ + ipa_idle_indication_cfg(ipa, 0, true); +} + +/** + * ipa_hardware_config() - Primitive hardware initialization + * @ipa: IPA pointer + */ +static void ipa_hardware_config(struct ipa *ipa) +{ + u32 granularity; + u32 val; + + /* Fill in backward-compatibility register, based on version */ + val = ipa_reg_bcr_val(ipa->version); + iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET); + + if (ipa->version != IPA_VERSION_3_5_1) { + /* Enable open global clocks (hardware workaround) */ + val = GLOBAL_FMASK; + val |= GLOBAL_2X_CLK_FMASK; + iowrite32(val, ipa->reg_virt + IPA_REG_CLKON_CFG_OFFSET); + + /* Disable PA mask to allow HOLB drop (hardware workaround) */ + val = ioread32(ipa->reg_virt + IPA_REG_TX_CFG_OFFSET); + val &= ~PA_MASK_EN; + iowrite32(val, ipa->reg_virt + IPA_REG_TX_CFG_OFFSET); + } + + ipa_hardware_config_comp(ipa); + + /* Configure system bus limits */ + ipa_hardware_config_qsb(ipa); + + /* Configure aggregation granularity */ + val = ioread32(ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET); + granularity = ipa_aggr_granularity_val(IPA_AGGR_GRANULARITY); + val = u32_encode_bits(granularity, AGGR_GRANULARITY); + iowrite32(val, ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET); + + /* Disable hashed IPv4 and IPv6 routing and filtering for IPA v4.2 */ + if (ipa->version == IPA_VERSION_4_2) + iowrite32(0, ipa->reg_virt + IPA_REG_FILT_ROUT_HASH_EN_OFFSET); + + /* Enable dynamic clock division */ + ipa_hardware_dcd_config(ipa); +} + +/** + * ipa_hardware_deconfig() - Inverse of ipa_hardware_config() + * @ipa: IPA pointer + * + * This restores the power-on reset values (even if they aren't different) + */ +static void ipa_hardware_deconfig(struct ipa *ipa) +{ + /* Mostly we just leave things as we set them. */ + ipa_hardware_dcd_deconfig(ipa); +} + +#ifdef IPA_VALIDATION + +/* # IPA resources used based on version (see IPA_RESOURCE_GROUP_COUNT) */ +static int ipa_resource_group_count(struct ipa *ipa) +{ + switch (ipa->version) { + case IPA_VERSION_3_5_1: + return 3; + + case IPA_VERSION_4_0: + case IPA_VERSION_4_1: + return 4; + + case IPA_VERSION_4_2: + return 1; + + default: + return 0; + } +} + +static bool ipa_resource_limits_valid(struct ipa *ipa, + const struct ipa_resource_data *data) +{ + u32 group_count = ipa_resource_group_count(ipa); + u32 i; + u32 j; + + if (!group_count) + return false; + + /* Return an error if a non-zero resource group limit is specified + * for a resource not supported by hardware. 
+ */ + for (i = 0; i < data->resource_src_count; i++) { + const struct ipa_resource_src *resource; + + resource = &data->resource_src[i]; + for (j = group_count; j < IPA_RESOURCE_GROUP_COUNT; j++) + if (resource->limits[j].min || resource->limits[j].max) + return false; + } + + for (i = 0; i < data->resource_dst_count; i++) { + const struct ipa_resource_dst *resource; + + resource = &data->resource_dst[i]; + for (j = group_count; j < IPA_RESOURCE_GROUP_COUNT; j++) + if (resource->limits[j].min || resource->limits[j].max) + return false; + } + + return true; +} + +#else /* !IPA_VALIDATION */ + +static bool ipa_resource_limits_valid(struct ipa *ipa, + const struct ipa_resource_data *data) +{ + return true; +} + +#endif /* !IPA_VALIDATION */ + +static void +ipa_resource_config_common(struct ipa *ipa, u32 offset, + const struct ipa_resource_limits *xlimits, + const struct ipa_resource_limits *ylimits) +{ + u32 val; + + val = u32_encode_bits(xlimits->min, X_MIN_LIM_FMASK); + val |= u32_encode_bits(xlimits->max, X_MAX_LIM_FMASK); + val |= u32_encode_bits(ylimits->min, Y_MIN_LIM_FMASK); + val |= u32_encode_bits(ylimits->max, Y_MAX_LIM_FMASK); + + iowrite32(val, ipa->reg_virt + offset); +} + +static void ipa_resource_config_src_01(struct ipa *ipa, + const struct ipa_resource_src *resource) +{ + u32 offset = IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(resource->type); + + ipa_resource_config_common(ipa, offset, + &resource->limits[0], &resource->limits[1]); +} + +static void ipa_resource_config_src_23(struct ipa *ipa, + const struct ipa_resource_src *resource) +{ + u32 offset = IPA_REG_SRC_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(resource->type); + + ipa_resource_config_common(ipa, offset, + &resource->limits[2], &resource->limits[3]); +} + +static void ipa_resource_config_dst_01(struct ipa *ipa, + const struct ipa_resource_dst *resource) +{ + u32 offset = IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(resource->type); + + ipa_resource_config_common(ipa, offset, + &resource->limits[0], &resource->limits[1]); +} + +static void ipa_resource_config_dst_23(struct ipa *ipa, + const struct ipa_resource_dst *resource) +{ + u32 offset = IPA_REG_DST_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(resource->type); + + ipa_resource_config_common(ipa, offset, + &resource->limits[2], &resource->limits[3]); +} + +static int +ipa_resource_config(struct ipa *ipa, const struct ipa_resource_data *data) +{ + u32 i; + + if (!ipa_resource_limits_valid(ipa, data)) + return -EINVAL; + + for (i = 0; i < data->resource_src_count; i++) { + ipa_resource_config_src_01(ipa, &data->resource_src[i]); + ipa_resource_config_src_23(ipa, &data->resource_src[i]); + } + + for (i = 0; i < data->resource_dst_count; i++) { + ipa_resource_config_dst_01(ipa, &data->resource_dst[i]); + ipa_resource_config_dst_23(ipa, &data->resource_dst[i]); + } + + return 0; +} + +static void ipa_resource_deconfig(struct ipa *ipa) +{ + /* Nothing to do */ +} + +/** + * ipa_config() - Configure IPA hardware + * @ipa: IPA pointer + * + * Perform initialization requiring IPA clock to be enabled. + */ +static int ipa_config(struct ipa *ipa, const struct ipa_data *data) +{ + int ret; + + /* Get a clock reference to allow initialization. This reference + * is held after initialization completes, and won't get dropped + * unless/until a system suspend request arrives. 
+ */ + atomic_set(&ipa->suspend_ref, 1); + ipa_clock_get(ipa); + + ipa_hardware_config(ipa); + + ret = ipa_endpoint_config(ipa); + if (ret) + goto err_hardware_deconfig; + + ret = ipa_mem_config(ipa); + if (ret) + goto err_endpoint_deconfig; + + ipa_table_config(ipa); + + /* Assign resource limitation to each group */ + ret = ipa_resource_config(ipa, data->resource_data); + if (ret) + goto err_table_deconfig; + + ret = ipa_modem_config(ipa); + if (ret) + goto err_resource_deconfig; + + return 0; + +err_resource_deconfig: + ipa_resource_deconfig(ipa); +err_table_deconfig: + ipa_table_deconfig(ipa); + ipa_mem_deconfig(ipa); +err_endpoint_deconfig: + ipa_endpoint_deconfig(ipa); +err_hardware_deconfig: + ipa_hardware_deconfig(ipa); + ipa_clock_put(ipa); + atomic_set(&ipa->suspend_ref, 0); + + return ret; +} + +/** + * ipa_deconfig() - Inverse of ipa_config() + * @ipa: IPA pointer + */ +static void ipa_deconfig(struct ipa *ipa) +{ + ipa_modem_deconfig(ipa); + ipa_resource_deconfig(ipa); + ipa_table_deconfig(ipa); + ipa_mem_deconfig(ipa); + ipa_endpoint_deconfig(ipa); + ipa_hardware_deconfig(ipa); + ipa_clock_put(ipa); + atomic_set(&ipa->suspend_ref, 0); +} + +static int ipa_firmware_load(struct device *dev) +{ + const struct firmware *fw; + struct device_node *node; + struct resource res; + phys_addr_t phys; + ssize_t size; + void *virt; + int ret; + + node = of_parse_phandle(dev->of_node, "memory-region", 0); + if (!node) { + dev_err(dev, "DT error getting \"memory-region\" property\n"); + return -EINVAL; + } + + ret = of_address_to_resource(node, 0, &res); + if (ret) { + dev_err(dev, "error %d getting \"memory-region\" resource\n", + ret); + return ret; + } + + ret = request_firmware(&fw, IPA_FWS_PATH, dev); + if (ret) { + dev_err(dev, "error %d requesting \"%s\"\n", ret, IPA_FWS_PATH); + return ret; + } + + phys = res.start; + size = (size_t)resource_size(&res); + virt = memremap(phys, size, MEMREMAP_WC); + if (!virt) { + dev_err(dev, "unable to remap firmware memory\n"); + ret = -ENOMEM; + goto out_release_firmware; + } + + ret = qcom_mdt_load(dev, fw, IPA_FWS_PATH, IPA_PAS_ID, + virt, phys, size, NULL); + if (ret) + dev_err(dev, "error %d loading \"%s\"\n", ret, IPA_FWS_PATH); + else if ((ret = qcom_scm_pas_auth_and_reset(IPA_PAS_ID))) + dev_err(dev, "error %d authenticating \"%s\"\n", ret, + IPA_FWS_PATH); + + memunmap(virt); +out_release_firmware: + release_firmware(fw); + + return ret; +} + +static const struct of_device_id ipa_match[] = { + { + .compatible = "qcom,sdm845-ipa", + .data = &ipa_data_sdm845, + }, + { + .compatible = "qcom,sc7180-ipa", + .data = &ipa_data_sc7180, + }, + { }, +}; +MODULE_DEVICE_TABLE(of, ipa_match); + +static phandle of_property_read_phandle(const struct device_node *np, + const char *name) +{ + struct property *prop; + int len = 0; + + prop = of_find_property(np, name, &len); + if (!prop || len != sizeof(__be32)) + return 0; + + return be32_to_cpup(prop->value); +} + +/* Check things that can be validated at build time. This just + * groups these things BUILD_BUG_ON() calls don't clutter the rest + * of the code. 
+ */
+static void ipa_validate_build(void)
+{
+#ifdef IPA_VALIDATE
+	/* We assume we're working on 64-bit hardware */
+	BUILD_BUG_ON(!IS_ENABLED(CONFIG_64BIT));
+
+	/* Code assumes the EE ID for the AP is 0 (zeroed structure field) */
+	BUILD_BUG_ON(GSI_EE_AP != 0);
+
+	/* There's no point if we have no channels or event rings */
+	BUILD_BUG_ON(!GSI_CHANNEL_COUNT_MAX);
+	BUILD_BUG_ON(!GSI_EVT_RING_COUNT_MAX);
+
+	/* GSI hardware design limits */
+	BUILD_BUG_ON(GSI_CHANNEL_COUNT_MAX > 32);
+	BUILD_BUG_ON(GSI_EVT_RING_COUNT_MAX > 31);
+
+	/* The number of TREs in a transaction is limited by the channel's
+	 * TLV FIFO size. A transaction structure uses 8-bit fields
+	 * to represent the number of TREs it has allocated and used.
+	 */
+	BUILD_BUG_ON(GSI_TLV_MAX > U8_MAX);
+
+	/* Exceeding 128 bytes makes the transaction pool *much* larger */
+	BUILD_BUG_ON(sizeof(struct gsi_trans) > 128);
+
+	/* This is used as a divisor */
+	BUILD_BUG_ON(!IPA_AGGR_GRANULARITY);
+#endif /* IPA_VALIDATE */
+}
+
+/**
+ * ipa_probe() - IPA platform driver probe function
+ * @pdev:	Platform device pointer
+ *
+ * @Return:	0 if successful, or a negative error code (possibly
+ *		EPROBE_DEFER)
+ *
+ * This is the main entry point for the IPA driver. Initialization proceeds
+ * in several stages:
+ *   - The "init" stage involves activities that can be initialized without
+ *     access to the IPA hardware.
+ *   - The "config" stage requires the IPA clock to be active so IPA registers
+ *     can be accessed, but does not require the use of IPA immediate commands.
+ *   - The "setup" stage uses IPA immediate commands, and so requires the GSI
+ *     layer to be initialized.
+ *
+ * A Boolean Device Tree "modem-init" property determines whether GSI
+ * initialization will be performed by the AP (Trust Zone) or the modem.
+ * If the AP does GSI initialization, the setup phase is entered after
+ * this has completed successfully. Otherwise the modem initializes
+ * the GSI layer and signals it has finished by sending an SMP2P interrupt
+ * to the AP; this triggers the start of IPA setup.
+ */
+static int ipa_probe(struct platform_device *pdev)
+{
+	struct wakeup_source *wakeup_source;
+	struct device *dev = &pdev->dev;
+	const struct ipa_data *data;
+	struct ipa_clock *clock;
+	struct rproc *rproc;
+	bool modem_alloc;
+	bool modem_init;
+	struct ipa *ipa;
+	phandle phandle;
+	bool prefetch;
+	int ret;
+
+	ipa_validate_build();
+
+	/* If we need Trust Zone, make sure it's available */
+	modem_init = of_property_read_bool(dev->of_node, "modem-init");
+	if (!modem_init)
+		if (!qcom_scm_is_available())
+			return -EPROBE_DEFER;
+
+	/* We rely on remoteproc to tell us about modem state changes */
+	phandle = of_property_read_phandle(dev->of_node, "modem-remoteproc");
+	if (!phandle) {
+		dev_err(dev, "DT missing \"modem-remoteproc\" property\n");
+		return -EINVAL;
+	}
+
+	rproc = rproc_get_by_phandle(phandle);
+	if (!rproc)
+		return -EPROBE_DEFER;
+
+	/* The clock and interconnects might not be ready when we're
+	 * probed, so might return -EPROBE_DEFER.
+	 */
+	clock = ipa_clock_init(dev);
+	if (IS_ERR(clock)) {
+		ret = PTR_ERR(clock);
+		goto err_rproc_put;
+	}
+
+	/* No more EPROBE_DEFER. Get our configuration data */
+	data = of_device_get_match_data(dev);
+	if (!data) {
+		/* This is really IPA_VALIDATE (should never happen) */
+		dev_err(dev, "matched hardware not supported\n");
+		ret = -ENOTSUPP;
+		goto err_clock_exit;
+	}
+
+	/* Create a wakeup source.
*/ + wakeup_source = wakeup_source_register(dev, "ipa"); + if (!wakeup_source) { + /* The most likely reason for failure is memory exhaustion */ + ret = -ENOMEM; + goto err_clock_exit; + } + + /* Allocate and initialize the IPA structure */ + ipa = kzalloc(sizeof(*ipa), GFP_KERNEL); + if (!ipa) { + ret = -ENOMEM; + goto err_wakeup_source_unregister; + } + + ipa->pdev = pdev; + dev_set_drvdata(dev, ipa); + ipa->modem_rproc = rproc; + ipa->clock = clock; + atomic_set(&ipa->suspend_ref, 0); + ipa->wakeup_source = wakeup_source; + ipa->version = data->version; + + ret = ipa_reg_init(ipa); + if (ret) + goto err_kfree_ipa; + + ret = ipa_mem_init(ipa, data->mem_count, data->mem_data); + if (ret) + goto err_reg_exit; + + /* GSI v2.0+ (IPA v4.0+) uses prefetch for the command channel */ + prefetch = ipa->version != IPA_VERSION_3_5_1; + /* IPA v4.2 requires the AP to allocate channels for the modem */ + modem_alloc = ipa->version == IPA_VERSION_4_2; + + ret = gsi_init(&ipa->gsi, pdev, prefetch, data->endpoint_count, + data->endpoint_data, modem_alloc); + if (ret) + goto err_mem_exit; + + /* Result is a non-zero mask endpoints that support filtering */ + ipa->filter_map = ipa_endpoint_init(ipa, data->endpoint_count, + data->endpoint_data); + if (!ipa->filter_map) { + ret = -EINVAL; + goto err_gsi_exit; + } + + ret = ipa_table_init(ipa); + if (ret) + goto err_endpoint_exit; + + ret = ipa_modem_init(ipa, modem_init); + if (ret) + goto err_table_exit; + + ret = ipa_config(ipa, data); + if (ret) + goto err_modem_exit; + + dev_info(dev, "IPA driver initialized"); + + /* If the modem is doing early initialization, it will trigger a + * call to ipa_setup() call when it has finished. In that case + * we're done here. + */ + if (modem_init) + return 0; + + /* Otherwise we need to load the firmware and have Trust Zone validate + * and install it. If that succeeds we can proceed with setup. + */ + ret = ipa_firmware_load(dev); + if (ret) + goto err_deconfig; + + ret = ipa_setup(ipa); + if (ret) + goto err_deconfig; + + return 0; + +err_deconfig: + ipa_deconfig(ipa); +err_modem_exit: + ipa_modem_exit(ipa); +err_table_exit: + ipa_table_exit(ipa); +err_endpoint_exit: + ipa_endpoint_exit(ipa); +err_gsi_exit: + gsi_exit(&ipa->gsi); +err_mem_exit: + ipa_mem_exit(ipa); +err_reg_exit: + ipa_reg_exit(ipa); +err_kfree_ipa: + kfree(ipa); +err_wakeup_source_unregister: + wakeup_source_unregister(wakeup_source); +err_clock_exit: + ipa_clock_exit(clock); +err_rproc_put: + rproc_put(rproc); + + return ret; +} + +static int ipa_remove(struct platform_device *pdev) +{ + struct ipa *ipa = dev_get_drvdata(&pdev->dev); + struct rproc *rproc = ipa->modem_rproc; + struct ipa_clock *clock = ipa->clock; + struct wakeup_source *wakeup_source; + int ret; + + wakeup_source = ipa->wakeup_source; + + if (ipa->setup_complete) { + ret = ipa_modem_stop(ipa); + if (ret) + return ret; + + ipa_teardown(ipa); + } + + ipa_deconfig(ipa); + ipa_modem_exit(ipa); + ipa_table_exit(ipa); + ipa_endpoint_exit(ipa); + gsi_exit(&ipa->gsi); + ipa_mem_exit(ipa); + ipa_reg_exit(ipa); + kfree(ipa); + wakeup_source_unregister(wakeup_source); + ipa_clock_exit(clock); + rproc_put(rproc); + + return 0; +} + +/** + * ipa_suspend() - Power management system suspend callback + * @dev: IPA device structure + * + * @Return: Zero + * + * Called by the PM framework when a system suspend operation is invoked. 
+ */
+static int ipa_suspend(struct device *dev)
+{
+	struct ipa *ipa = dev_get_drvdata(dev);
+
+	ipa_clock_put(ipa);
+	atomic_set(&ipa->suspend_ref, 0);
+
+	return 0;
+}
+
+/**
+ * ipa_resume() - Power management system resume callback
+ * @dev:	IPA device structure
+ *
+ * @Return:	Always returns 0
+ *
+ * Called by the PM framework when a system resume operation is invoked.
+ */
+static int ipa_resume(struct device *dev)
+{
+	struct ipa *ipa = dev_get_drvdata(dev);
+
+	/* This clock reference will keep the IPA out of suspend
+	 * until we get a power management suspend request.
+	 */
+	atomic_set(&ipa->suspend_ref, 1);
+	ipa_clock_get(ipa);
+
+	return 0;
+}
+
+static const struct dev_pm_ops ipa_pm_ops = {
+	.suspend_noirq	= ipa_suspend,
+	.resume_noirq	= ipa_resume,
+};
+
+static struct platform_driver ipa_driver = {
+	.probe	= ipa_probe,
+	.remove	= ipa_remove,
+	.driver	= {
+		.name		= "ipa",
+		.owner		= THIS_MODULE,
+		.pm		= &ipa_pm_ops,
+		.of_match_table	= ipa_match,
+	},
+};
+
+module_platform_driver(ipa_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Qualcomm IP Accelerator device driver");
diff --git a/drivers/net/ipa/ipa_reg.c b/drivers/net/ipa/ipa_reg.c
new file mode 100644
index 000000000000..e6147a1cd787
--- /dev/null
+++ b/drivers/net/ipa/ipa_reg.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include
+
+#include "ipa.h"
+#include "ipa_reg.h"
+
+int ipa_reg_init(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	struct resource *res;
+
+	/* Setup IPA register memory */
+	res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM,
+					   "ipa-reg");
+	if (!res) {
+		dev_err(dev, "DT error getting \"ipa-reg\" memory property\n");
+		return -ENODEV;
+	}
+
+	ipa->reg_virt = ioremap(res->start, resource_size(res));
+	if (!ipa->reg_virt) {
+		dev_err(dev, "unable to remap \"ipa-reg\" memory\n");
+		return -ENOMEM;
+	}
+	ipa->reg_addr = res->start;
+
+	return 0;
+}
+
+void ipa_reg_exit(struct ipa *ipa)
+{
+	iounmap(ipa->reg_virt);
+}
diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
new file mode 100644
index 000000000000..3b8106aa277a
--- /dev/null
+++ b/drivers/net/ipa/ipa_reg.h
@@ -0,0 +1,476 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _IPA_REG_H_
+#define _IPA_REG_H_
+
+#include
+
+#include "ipa_version.h"
+
+struct ipa;
+
+/**
+ * DOC: IPA Registers
+ *
+ * IPA registers are located within the "ipa-reg" address space defined by
+ * Device Tree. The offset of each register within that space is specified
+ * by symbols defined below. The address space is mapped to virtual memory
+ * space in ipa_reg_init(). All IPA registers are 32 bits wide.
+ *
+ * Certain register types are duplicated for a number of instances of
+ * something. For example, each IPA endpoint has a set of registers
+ * defining its configuration. The offset to an endpoint's set of registers
+ * is computed based on a "base" offset, plus the endpoint's ID multiplied
+ * by a "stride" value for the register. For such registers, the offset is
+ * computed by a function-like macro that takes a parameter used in the
+ * computation.
+ *
+ * Some register offsets depend on execution environment. For these an "ee"
+ * parameter is supplied to the offset macro. The "ee" value is a member of
+ * the gsi_ee enumerated type.
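+ *
+ * For example (illustrative), with GSI_EE_AP equal to 0, the AP instance
+ * of the interrupt status register defined below expands as
+ * IPA_REG_IRQ_STTS_EE_N_OFFSET(GSI_EE_AP) = 0x00003008 + 0x1000 * 0,
+ * which is offset 0x00003008.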
+ *
+ * The offset of a register dependent on endpoint id is computed by a macro
+ * that is supplied a parameter "ep". The "ep" value is assumed to be less
+ * than the maximum endpoint value for the current hardware, and that will
+ * not exceed IPA_ENDPOINT_MAX.
+ *
+ * The offset of registers related to filter and route tables is computed
+ * by a macro that is supplied a parameter "er". The "er" represents an
+ * endpoint ID for filters, or a route ID for routes. For filters, the
+ * endpoint ID must be less than IPA_ENDPOINT_MAX, but is further restricted
+ * because not all endpoints support filtering. For routes, the route ID
+ * must be less than IPA_ROUTE_MAX.
+ *
+ * The offset of registers related to resource types is computed by a macro
+ * that is supplied a parameter "rt". The "rt" represents a resource type,
+ * which is a member of the ipa_resource_type_src enumerated type for
+ * source endpoint resources or the ipa_resource_type_dst enumerated type
+ * for destination endpoint resources.
+ *
+ * Some registers encode multiple fields within them. For these, each field
+ * has a symbol below defining a field mask that encodes both the position
+ * and width of the field within its register.
+ *
+ * In some cases, different versions of IPA hardware use different offset or
+ * field mask values. In such cases an inline function (taking the IPA
+ * version as a parameter) is used rather than a macro to define the offset
+ * or field mask to use.
+ *
+ * Finally, some registers hold bitmasks representing endpoints. In such
+ * cases the @available field in the @ipa structure defines the "full" set
+ * of valid bits for the register.
+ */
+
+#define IPA_REG_ENABLED_PIPES_OFFSET			0x00000038
+
+#define IPA_REG_COMP_CFG_OFFSET				0x0000003c
+#define ENABLE_FMASK				GENMASK(0, 0)
+#define GSI_SNOC_BYPASS_DIS_FMASK		GENMASK(1, 1)
+#define GEN_QMB_0_SNOC_BYPASS_DIS_FMASK		GENMASK(2, 2)
+#define GEN_QMB_1_SNOC_BYPASS_DIS_FMASK		GENMASK(3, 3)
+#define IPA_DCMP_FAST_CLK_EN_FMASK		GENMASK(4, 4)
+#define IPA_QMB_SELECT_CONS_EN_FMASK		GENMASK(5, 5)
+#define IPA_QMB_SELECT_PROD_EN_FMASK		GENMASK(6, 6)
+#define GSI_MULTI_INORDER_RD_DIS_FMASK		GENMASK(7, 7)
+#define GSI_MULTI_INORDER_WR_DIS_FMASK		GENMASK(8, 8)
+#define GEN_QMB_0_MULTI_INORDER_RD_DIS_FMASK	GENMASK(9, 9)
+#define GEN_QMB_1_MULTI_INORDER_RD_DIS_FMASK	GENMASK(10, 10)
+#define GEN_QMB_0_MULTI_INORDER_WR_DIS_FMASK	GENMASK(11, 11)
+#define GEN_QMB_1_MULTI_INORDER_WR_DIS_FMASK	GENMASK(12, 12)
+#define GEN_QMB_0_SNOC_CNOC_LOOP_PROT_DIS_FMASK	GENMASK(13, 13)
+#define GSI_SNOC_CNOC_LOOP_PROT_DISABLE_FMASK	GENMASK(14, 14)
+#define GSI_MULTI_AXI_MASTERS_DIS_FMASK		GENMASK(15, 15)
+#define IPA_QMB_SELECT_GLOBAL_EN_FMASK		GENMASK(16, 16)
+#define IPA_ATOMIC_FETCHER_ARB_LOCK_DIS_FMASK	GENMASK(20, 17)
+
+#define IPA_REG_CLKON_CFG_OFFSET			0x00000044
+#define RX_FMASK				GENMASK(0, 0)
+#define PROC_FMASK				GENMASK(1, 1)
+#define TX_WRAPPER_FMASK			GENMASK(2, 2)
+#define MISC_FMASK				GENMASK(3, 3)
+#define RAM_ARB_FMASK				GENMASK(4, 4)
+#define FTCH_HPS_FMASK				GENMASK(5, 5)
+#define FTCH_DPS_FMASK				GENMASK(6, 6)
+#define HPS_FMASK				GENMASK(7, 7)
+#define DPS_FMASK				GENMASK(8, 8)
+#define RX_HPS_CMDQS_FMASK			GENMASK(9, 9)
+#define HPS_DPS_CMDQS_FMASK			GENMASK(10, 10)
+#define DPS_TX_CMDQS_FMASK			GENMASK(11, 11)
+#define RSRC_MNGR_FMASK				GENMASK(12, 12)
+#define CTX_HANDLER_FMASK			GENMASK(13, 13)
+#define ACK_MNGR_FMASK				GENMASK(14, 14)
+#define D_DCPH_FMASK				GENMASK(15, 15)
+#define H_DCPH_FMASK				GENMASK(16, 16)
+#define DCMP_FMASK				GENMASK(17, 17)
+#define NTF_TX_CMDQS_FMASK			GENMASK(18, 18)
+#define TX_0_FMASK
GENMASK(19, 19) +#define TX_1_FMASK GENMASK(20, 20) +#define FNR_FMASK GENMASK(21, 21) +#define QSB2AXI_CMDQ_L_FMASK GENMASK(22, 22) +#define AGGR_WRAPPER_FMASK GENMASK(23, 23) +#define RAM_SLAVEWAY_FMASK GENMASK(24, 24) +#define QMB_FMASK GENMASK(25, 25) +#define WEIGHT_ARB_FMASK GENMASK(26, 26) +#define GSI_IF_FMASK GENMASK(27, 27) +#define GLOBAL_FMASK GENMASK(28, 28) +#define GLOBAL_2X_CLK_FMASK GENMASK(29, 29) + +#define IPA_REG_ROUTE_OFFSET 0x00000048 +#define ROUTE_DIS_FMASK GENMASK(0, 0) +#define ROUTE_DEF_PIPE_FMASK GENMASK(5, 1) +#define ROUTE_DEF_HDR_TABLE_FMASK GENMASK(6, 6) +#define ROUTE_DEF_HDR_OFST_FMASK GENMASK(16, 7) +#define ROUTE_FRAG_DEF_PIPE_FMASK GENMASK(21, 17) +#define ROUTE_DEF_RETAIN_HDR_FMASK GENMASK(24, 24) + +#define IPA_REG_SHARED_MEM_SIZE_OFFSET 0x00000054 +#define SHARED_MEM_SIZE_FMASK GENMASK(15, 0) +#define SHARED_MEM_BADDR_FMASK GENMASK(31, 16) + +#define IPA_REG_QSB_MAX_WRITES_OFFSET 0x00000074 +#define GEN_QMB_0_MAX_WRITES_FMASK GENMASK(3, 0) +#define GEN_QMB_1_MAX_WRITES_FMASK GENMASK(7, 4) + +#define IPA_REG_QSB_MAX_READS_OFFSET 0x00000078 +#define GEN_QMB_0_MAX_READS_FMASK GENMASK(3, 0) +#define GEN_QMB_1_MAX_READS_FMASK GENMASK(7, 4) +/* The next two fields are present for IPA v4.0 and above */ +#define GEN_QMB_0_MAX_READS_BEATS_FMASK GENMASK(23, 16) +#define GEN_QMB_1_MAX_READS_BEATS_FMASK GENMASK(31, 24) + +static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version) +{ + if (version == IPA_VERSION_3_5_1) + return 0x0000010c; + + return 0x000000b4; +} +/* ipa->available defines the valid bits in the STATE_AGGR_ACTIVE register */ + +/* The next register is present for IPA v4.2 and above */ +#define IPA_REG_FILT_ROUT_HASH_EN_OFFSET 0x00000148 +#define IPV6_ROUTER_HASH_EN GENMASK(0, 0) +#define IPV6_FILTER_HASH_EN GENMASK(4, 4) +#define IPV4_ROUTER_HASH_EN GENMASK(8, 8) +#define IPV4_FILTER_HASH_EN GENMASK(12, 12) + +static inline u32 ipa_reg_filt_rout_hash_flush_offset(enum ipa_version version) +{ + if (version == IPA_VERSION_3_5_1) + return 0x0000090; + + return 0x000014c; +} + +#define IPV6_ROUTER_HASH_FLUSH GENMASK(0, 0) +#define IPV6_FILTER_HASH_FLUSH GENMASK(4, 4) +#define IPV4_ROUTER_HASH_FLUSH GENMASK(8, 8) +#define IPV4_FILTER_HASH_FLUSH GENMASK(12, 12) + +#define IPA_REG_BCR_OFFSET 0x000001d0 +#define BCR_CMDQ_L_LACK_ONE_ENTRY BIT(0) +#define BCR_TX_NOT_USING_BRESP BIT(1) +#define BCR_SUSPEND_L2_IRQ BIT(3) +#define BCR_HOLB_DROP_L2_IRQ BIT(4) +#define BCR_DUAL_TX BIT(5) + +/* Backward compatibility register value to use for each version */ +static inline u32 ipa_reg_bcr_val(enum ipa_version version) +{ + if (version == IPA_VERSION_3_5_1) + return BCR_CMDQ_L_LACK_ONE_ENTRY | BCR_TX_NOT_USING_BRESP | + BCR_SUSPEND_L2_IRQ | BCR_HOLB_DROP_L2_IRQ | BCR_DUAL_TX; + + if (version == IPA_VERSION_4_0 || version == IPA_VERSION_4_1) + return BCR_CMDQ_L_LACK_ONE_ENTRY | BCR_SUSPEND_L2_IRQ | + BCR_HOLB_DROP_L2_IRQ | BCR_DUAL_TX; + + return 0x00000000; +} + + +#define IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET 0x000001e8 + +#define IPA_REG_AGGR_FORCE_CLOSE_OFFSET 0x000001ec +/* ipa->available defines the valid bits in the AGGR_FORCE_CLOSE register */ + +#define IPA_REG_COUNTER_CFG_OFFSET 0x000001f0 +#define AGGR_GRANULARITY GENMASK(8, 4) +/* Compute the value to use in the AGGR_GRANULARITY field representing + * the given number of microseconds (up to 1 millisecond). + * x = (32 * usec) / 1000 - 1 + */ +static inline u32 ipa_aggr_granularity_val(u32 microseconds) +{ + /* assert(microseconds >= 16); (?) 
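+	 * e.g. (illustrative): 500 microseconds encodes as
+	 * DIV_ROUND_CLOSEST(32 * 500, 1000) - 1 = 15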
*/ + /* assert(microseconds <= 1015); */ + + return DIV_ROUND_CLOSEST(32 * microseconds, 1000) - 1; +} + +#define IPA_REG_TX_CFG_OFFSET 0x000001fc +/* The first three fields are present for IPA v3.5.1 only */ +#define TX0_PREFETCH_DISABLE GENMASK(0, 0) +#define TX1_PREFETCH_DISABLE GENMASK(1, 1) +#define PREFETCH_ALMOST_EMPTY_SIZE GENMASK(4, 2) +/* The next fields are present for IPA v4.0 and above */ +#define PREFETCH_ALMOST_EMPTY_SIZE_TX0 GENMASK(5, 2) +#define DMAW_SCND_OUTSD_PRED_THRESHOLD GENMASK(9, 6) +#define DMAW_SCND_OUTSD_PRED_EN GENMASK(10, 10) +#define DMAW_MAX_BEATS_256_DIS GENMASK(11, 11) +#define PA_MASK_EN GENMASK(12, 12) +#define PREFETCH_ALMOST_EMPTY_SIZE_TX1 GENMASK(16, 13) +/* The last two fields are present for IPA v4.2 and above */ +#define SSPND_PA_NO_START_STATE GENMASK(18, 18) +#define SSPND_PA_NO_BQ_STATE GENMASK(19, 19) + +#define IPA_REG_FLAVOR_0_OFFSET 0x00000210 +#define BAM_MAX_PIPES_FMASK GENMASK(4, 0) +#define BAM_MAX_CONS_PIPES_FMASK GENMASK(12, 8) +#define BAM_MAX_PROD_PIPES_FMASK GENMASK(20, 16) +#define BAM_PROD_LOWEST_FMASK GENMASK(27, 24) + +static inline u32 ipa_reg_idle_indication_cfg_offset(enum ipa_version version) +{ + if (version == IPA_VERSION_4_2) + return 0x00000240; + + return 0x00000220; +} + +#define ENTER_IDLE_DEBOUNCE_THRESH_FMASK GENMASK(15, 0) +#define CONST_NON_IDLE_ENABLE_FMASK GENMASK(16, 16) + +#define IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000400 + 0x0020 * (rt)) +#define IPA_REG_SRC_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000404 + 0x0020 * (rt)) +#define IPA_REG_SRC_RSRC_GRP_45_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000408 + 0x0020 * (rt)) +#define IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000500 + 0x0020 * (rt)) +#define IPA_REG_DST_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000504 + 0x0020 * (rt)) +#define IPA_REG_DST_RSRC_GRP_45_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000508 + 0x0020 * (rt)) +#define X_MIN_LIM_FMASK GENMASK(5, 0) +#define X_MAX_LIM_FMASK GENMASK(13, 8) +#define Y_MIN_LIM_FMASK GENMASK(21, 16) +#define Y_MAX_LIM_FMASK GENMASK(29, 24) + +#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \ + (0x00000800 + 0x0070 * (ep)) +#define ENDP_SUSPEND_FMASK GENMASK(0, 0) +#define ENDP_DELAY_FMASK GENMASK(1, 1) + +#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \ + (0x00000808 + 0x0070 * (ep)) +#define FRAG_OFFLOAD_EN_FMASK GENMASK(0, 0) +#define CS_OFFLOAD_EN_FMASK GENMASK(2, 1) +#define CS_METADATA_HDR_OFFSET_FMASK GENMASK(6, 3) +#define CS_GEN_QMB_MASTER_SEL_FMASK GENMASK(8, 8) + +#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \ + (0x00000810 + 0x0070 * (ep)) +#define HDR_LEN_FMASK GENMASK(5, 0) +#define HDR_OFST_METADATA_VALID_FMASK GENMASK(6, 6) +#define HDR_OFST_METADATA_FMASK GENMASK(12, 7) +#define HDR_ADDITIONAL_CONST_LEN_FMASK GENMASK(18, 13) +#define HDR_OFST_PKT_SIZE_VALID_FMASK GENMASK(19, 19) +#define HDR_OFST_PKT_SIZE_FMASK GENMASK(25, 20) +#define HDR_A5_MUX_FMASK GENMASK(26, 26) +#define HDR_LEN_INC_DEAGG_HDR_FMASK GENMASK(27, 27) +#define HDR_METADATA_REG_VALID_FMASK GENMASK(28, 28) + +#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \ + (0x00000814 + 0x0070 * (ep)) +#define HDR_ENDIANNESS_FMASK GENMASK(0, 0) +#define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK GENMASK(1, 1) +#define HDR_TOTAL_LEN_OR_PAD_FMASK GENMASK(2, 2) +#define HDR_PAYLOAD_LEN_INC_PADDING_FMASK GENMASK(3, 3) +#define HDR_TOTAL_LEN_OR_PAD_OFFSET_FMASK GENMASK(9, 4) +#define HDR_PAD_TO_ALIGNMENT_FMASK GENMASK(13, 10) + +#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(ep) \ + (0x00000818 + 0x0070 * (ep)) + +#define 
IPA_REG_ENDP_INIT_MODE_N_OFFSET(ep) \ + (0x00000820 + 0x0070 * (ep)) +#define MODE_FMASK GENMASK(2, 0) +#define DEST_PIPE_INDEX_FMASK GENMASK(8, 4) +#define BYTE_THRESHOLD_FMASK GENMASK(27, 12) +#define PIPE_REPLICATION_EN_FMASK GENMASK(28, 28) +#define PAD_EN_FMASK GENMASK(29, 29) +#define HDR_FTCH_DISABLE_FMASK GENMASK(30, 30) + +#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \ + (0x00000824 + 0x0070 * (ep)) +#define AGGR_EN_FMASK GENMASK(1, 0) +#define AGGR_TYPE_FMASK GENMASK(4, 2) +#define AGGR_BYTE_LIMIT_FMASK GENMASK(9, 5) +#define AGGR_TIME_LIMIT_FMASK GENMASK(14, 10) +#define AGGR_PKT_LIMIT_FMASK GENMASK(20, 15) +#define AGGR_SW_EOF_ACTIVE_FMASK GENMASK(21, 21) +#define AGGR_FORCE_CLOSE_FMASK GENMASK(22, 22) +#define AGGR_HARD_BYTE_LIMIT_ENABLE_FMASK GENMASK(24, 24) + +#define IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(ep) \ + (0x0000082c + 0x0070 * (ep)) +#define HOL_BLOCK_EN_FMASK GENMASK(0, 0) + +/* The next register is valid only for RX (IPA producer) endpoints */ +#define IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(ep) \ + (0x00000830 + 0x0070 * (ep)) +/* The next fields are present for IPA v4.2 only */ +#define BASE_VALUE_FMASK GENMASK(4, 0) +#define SCALE_FMASK GENMASK(12, 8) + +#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(ep) \ + (0x00000834 + 0x0070 * (ep)) +#define DEAGGR_HDR_LEN_FMASK GENMASK(5, 0) +#define PACKET_OFFSET_VALID_FMASK GENMASK(7, 7) +#define PACKET_OFFSET_LOCATION_FMASK GENMASK(13, 8) +#define MAX_PACKET_LEN_FMASK GENMASK(31, 16) + +#define IPA_REG_ENDP_INIT_RSRC_GRP_N_OFFSET(ep) \ + (0x00000838 + 0x0070 * (ep)) +#define RSRC_GRP_FMASK GENMASK(1, 0) + +#define IPA_REG_ENDP_INIT_SEQ_N_OFFSET(ep) \ + (0x0000083c + 0x0070 * (ep)) +#define HPS_SEQ_TYPE_FMASK GENMASK(3, 0) +#define DPS_SEQ_TYPE_FMASK GENMASK(7, 4) +#define HPS_REP_SEQ_TYPE_FMASK GENMASK(11, 8) +#define DPS_REP_SEQ_TYPE_FMASK GENMASK(15, 12) + +#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \ + (0x00000840 + 0x0070 * (ep)) +#define STATUS_EN_FMASK GENMASK(0, 0) +#define STATUS_ENDP_FMASK GENMASK(5, 1) +#define STATUS_LOCATION_FMASK GENMASK(8, 8) +/* The next field is present for IPA v4.0 and above */ +#define STATUS_PKT_SUPPRESS_FMASK GENMASK(9, 9) + +/* "er" is either an endpoint id (for filters) or a route id (for routes) */ +#define IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(er) \ + (0x0000085c + 0x0070 * (er)) +#define FILTER_HASH_MSK_SRC_ID_FMASK GENMASK(0, 0) +#define FILTER_HASH_MSK_SRC_IP_FMASK GENMASK(1, 1) +#define FILTER_HASH_MSK_DST_IP_FMASK GENMASK(2, 2) +#define FILTER_HASH_MSK_SRC_PORT_FMASK GENMASK(3, 3) +#define FILTER_HASH_MSK_DST_PORT_FMASK GENMASK(4, 4) +#define FILTER_HASH_MSK_PROTOCOL_FMASK GENMASK(5, 5) +#define FILTER_HASH_MSK_METADATA_FMASK GENMASK(6, 6) +#define IPA_REG_ENDP_FILTER_HASH_MSK_ALL GENMASK(6, 0) + +#define ROUTER_HASH_MSK_SRC_ID_FMASK GENMASK(16, 16) +#define ROUTER_HASH_MSK_SRC_IP_FMASK GENMASK(17, 17) +#define ROUTER_HASH_MSK_DST_IP_FMASK GENMASK(18, 18) +#define ROUTER_HASH_MSK_SRC_PORT_FMASK GENMASK(19, 19) +#define ROUTER_HASH_MSK_DST_PORT_FMASK GENMASK(20, 20) +#define ROUTER_HASH_MSK_PROTOCOL_FMASK GENMASK(21, 21) +#define ROUTER_HASH_MSK_METADATA_FMASK GENMASK(22, 22) +#define IPA_REG_ENDP_ROUTER_HASH_MSK_ALL GENMASK(22, 16) + +#define IPA_REG_IRQ_STTS_OFFSET \ + IPA_REG_IRQ_STTS_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_STTS_EE_N_OFFSET(ee) \ + (0x00003008 + 0x1000 * (ee)) + +#define IPA_REG_IRQ_EN_OFFSET \ + IPA_REG_IRQ_EN_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_EN_EE_N_OFFSET(ee) \ + (0x0000300c + 0x1000 * (ee)) + +#define IPA_REG_IRQ_CLR_OFFSET \ + 
IPA_REG_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_CLR_EE_N_OFFSET(ee) \
+					(0x00003010 + 0x1000 * (ee))
+
+#define IPA_REG_IRQ_UC_OFFSET \
+				IPA_REG_IRQ_UC_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_UC_EE_N_OFFSET(ee) \
+					(0x0000301c + 0x1000 * (ee))
+
+#define IPA_REG_IRQ_SUSPEND_INFO_OFFSET \
+				IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(ee) \
+					(0x00003030 + 0x1000 * (ee))
+/* ipa->available defines the valid bits in the SUSPEND_INFO register */
+
+#define IPA_REG_SUSPEND_IRQ_EN_OFFSET \
+				IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(ee) \
+					(0x00003034 + 0x1000 * (ee))
+/* ipa->available defines the valid bits in the SUSPEND_IRQ_EN register */
+
+#define IPA_REG_SUSPEND_IRQ_CLR_OFFSET \
+				IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(ee) \
+					(0x00003038 + 0x1000 * (ee))
+/* ipa->available defines the valid bits in the SUSPEND_IRQ_CLR register */
+
+/** enum ipa_cs_offload_en - checksum offload field in ENDP_INIT_CFG_N */
+enum ipa_cs_offload_en {
+	IPA_CS_OFFLOAD_NONE	= 0,
+	IPA_CS_OFFLOAD_UL	= 1,
+	IPA_CS_OFFLOAD_DL	= 2,
+	IPA_CS_RSVD
+};
+
+/** enum ipa_aggr_en - aggregation type field in ENDP_INIT_AGGR_N */
+enum ipa_aggr_en {
+	IPA_BYPASS_AGGR		= 0,
+	IPA_ENABLE_AGGR		= 1,
+	IPA_ENABLE_DEAGGR	= 2,
+};
+
+/** enum ipa_aggr_type - aggregation type field in ENDP_INIT_AGGR_N */
+enum ipa_aggr_type {
+	IPA_MBIM_16	= 0,
+	IPA_HDLC	= 1,
+	IPA_TLP		= 2,
+	IPA_RNDIS	= 3,
+	IPA_GENERIC	= 4,
+	IPA_COALESCE	= 5,
+	IPA_QCMAP	= 6,
+};
+
+/** enum ipa_mode - mode field in ENDP_INIT_MODE_N */
+enum ipa_mode {
+	IPA_BASIC			= 0,
+	IPA_ENABLE_FRAMING_HDLC		= 1,
+	IPA_ENABLE_DEFRAMING_HDLC	= 2,
+	IPA_DMA				= 3,
+};
+
+/**
+ * enum ipa_seq_type - HPS and DPS sequencer type fields in ENDP_INIT_SEQ_N
+ * @IPA_SEQ_DMA_ONLY:	only DMA is performed
+ * @IPA_SEQ_PKT_PROCESS_NO_DEC_UCP:
+ *	packet processing + no decipher + microcontroller (Ethernet Bridging)
+ * @IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP:
+ *	second packet processing pass + no decipher + microcontroller
+ * @IPA_SEQ_DMA_DEC:	DMA + cipher/decipher
+ * @IPA_SEQ_DMA_COMP_DECOMP:	DMA + compression/decompression
+ * @IPA_SEQ_INVALID:	invalid sequencer type
+ *
+ * The values defined here are broken into 4-bit nibbles that are written
+ * into fields of the INIT_SEQ_N endpoint registers.
+ */
+enum ipa_seq_type {
+	IPA_SEQ_DMA_ONLY			= 0x0000,
+	IPA_SEQ_PKT_PROCESS_NO_DEC_UCP		= 0x0002,
+	IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP	= 0x0004,
+	IPA_SEQ_DMA_DEC				= 0x0011,
+	IPA_SEQ_DMA_COMP_DECOMP			= 0x0020,
+	IPA_SEQ_PKT_PROCESS_NO_DEC_NO_UCP_DMAP	= 0x0806,
+	IPA_SEQ_INVALID				= 0xffff,
+};
+
+int ipa_reg_init(struct ipa *ipa);
+void ipa_reg_exit(struct ipa *ipa);
+
+#endif /* _IPA_REG_H_ */
diff --git a/drivers/net/ipa/ipa_version.h b/drivers/net/ipa/ipa_version.h
new file mode 100644
index 000000000000..85449df0f512
--- /dev/null
+++ b/drivers/net/ipa/ipa_version.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_VERSION_H_
+#define _IPA_VERSION_H_
+
+/**
+ * enum ipa_version
+ *
+ * Defines the version of IPA (and GSI) hardware present on the platform.
+ * It seems this might be better defined elsewhere, but having it here gets
+ * it where it's needed.
+ */
+enum ipa_version {
+	IPA_VERSION_3_5_1,	/* GSI version 1.3.0 */
+	IPA_VERSION_4_0,	/* GSI version 2.0 */
+	IPA_VERSION_4_1,	/* GSI version 2.1 */
+	IPA_VERSION_4_2,	/* GSI version 2.2 */
+};
+
+#endif /* _IPA_VERSION_H_ */
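The field masks defined in "ipa_reg.h" are meant to be used with the
bitfield helpers from <linux/bitfield.h>. As a minimal illustrative
sketch (not part of the patch), this is how a multi-field register such
as QSB_MAX_WRITES can be composed and read back, mirroring what
ipa_hardware_config_qsb() does in "ipa_main.c"; the function name is
hypothetical:

/* Illustrative sketch only: each *_FMASK encodes both the position and
 * width of its field, so u32_encode_bits()/u32_get_bits() need no
 * separate shift constants.
 */
static void example_qsb_max_writes(struct ipa *ipa)
{
	u32 qmb0;
	u32 val;

	/* Compose the register value field by field */
	val = u32_encode_bits(8, GEN_QMB_0_MAX_WRITES_FMASK);
	val |= u32_encode_bits(4, GEN_QMB_1_MAX_WRITES_FMASK);
	iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_WRITES_OFFSET);

	/* Extracting a field back out uses the same mask */
	val = ioread32(ipa->reg_virt + IPA_REG_QSB_MAX_WRITES_OFFSET);
	qmb0 = u32_get_bits(val, GEN_QMB_0_MAX_WRITES_FMASK);	/* 8 */
}

From patchwork Fri Mar 6 04:28:19 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 190108
Received: from presto.localdomain (c-73-185-129-58.hsd1.mn.comcast.net.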
[73.185.129.58]) by smtp.gmail.com with ESMTPSA id x2sm12581836ywa.32.2020.03.05.20.28.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 05 Mar 2020 20:28:45 -0800 (PST) From: Alex Elder To: David Miller , Arnd Bergmann Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 05/17] soc: qcom: ipa: clocking, interrupts, and memory Date: Thu, 5 Mar 2020 22:28:19 -0600 Message-Id: <20200306042831.17827-6-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200306042831.17827-1-elder@linaro.org> References: <20200306042831.17827-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This patch incorporates three source files (and their headers). They're grouped into one patch mainly for the purpose of making the number and size of patches in this series somewhat reasonable. - "ipa_clock.c" and "ipa_clock.h" implement clocking for the IPA device. The IPA has a single core clock managed by the common clock framework. In addition, the IPA has three buses whose bandwidth is managed by the Linux interconnect framework. At this time the core clock and all three buses are either on or off; we don't yet do any more fine-grained management than that. The core clock and interconnects are enabled and disabled as a unit, using a unified clock-like abstraction, ipa_clock_get()/ipa_clock_put(). - "ipa_interrupt.c" and "ipa_interrupt.h" implement IPA interrupts. There are two hardware IRQs used by the IPA driver (the other is the GSI interrupt, described in a separate patch). Several types of interrupt are handled by the IPA IRQ handler; these are not part of data/fast path. - The IPA has a region of local memory that is accessible by the AP (and modem). Within that region are areas with certain defined purposes. "ipa_mem.c" and "ipa_mem.h" define those regions, and implement their initialization. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_clock.c | 313 +++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_clock.h | 53 ++++++ drivers/net/ipa/ipa_interrupt.c | 253 +++++++++++++++++++++++++ drivers/net/ipa/ipa_interrupt.h | 117 ++++++++++++ drivers/net/ipa/ipa_mem.c | 314 ++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_mem.h | 90 +++++++++ 6 files changed, 1140 insertions(+) create mode 100644 drivers/net/ipa/ipa_clock.c create mode 100644 drivers/net/ipa/ipa_clock.h create mode 100644 drivers/net/ipa/ipa_interrupt.c create mode 100644 drivers/net/ipa/ipa_interrupt.h create mode 100644 drivers/net/ipa/ipa_mem.c create mode 100644 drivers/net/ipa/ipa_mem.h diff --git a/drivers/net/ipa/ipa_clock.c b/drivers/net/ipa/ipa_clock.c new file mode 100644 index 000000000000..374491ea11cf --- /dev/null +++ b/drivers/net/ipa/ipa_clock.c @@ -0,0 +1,313 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. 
+ */ + +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_clock.h" +#include "ipa_modem.h" + +/** + * DOC: IPA Clocking + * + * The "IPA Clock" manages both the IPA core clock and the interconnects + * (buses) the IPA depends on as a single logical entity. A reference count + * is incremented by "get" operations and decremented by "put" operations. + * Transitions of that count from 0 to 1 result in the clock and interconnects + * being enabled, and transitions of the count from 1 to 0 cause them to be + * disabled. We currently operate the core clock at a fixed clock rate, and + * all buses at a fixed average and peak bandwidth. As more advanced IPA + * features are enabled, we can make better use of clock and bus scaling. + * + * An IPA clock reference must be held for any access to IPA hardware. + */ + +#define IPA_CORE_CLOCK_RATE (75UL * 1000 * 1000) /* Hz */ + +/* Interconnect path bandwidths (each in units of 1000 bytes per second) */ +#define IPA_MEMORY_AVG (80 * 1000) /* 80 MBps */ +#define IPA_MEMORY_PEAK (600 * 1000) + +#define IPA_IMEM_AVG (80 * 1000) +#define IPA_IMEM_PEAK (350 * 1000) + +#define IPA_CONFIG_AVG (40 * 1000) +#define IPA_CONFIG_PEAK (40 * 1000) + +/** + * struct ipa_clock - IPA clocking information + * @count: Clocking reference count + * @mutex: Protects clock enable/disable + * @core: IPA core clock + * @memory_path: Memory interconnect + * @imem_path: Internal memory interconnect + * @config_path: Configuration space interconnect + */ +struct ipa_clock { + atomic_t count; + struct mutex mutex; /* protects clock enable/disable */ + struct clk *core; + struct icc_path *memory_path; + struct icc_path *imem_path; + struct icc_path *config_path; +}; + +static struct icc_path * +ipa_interconnect_init_one(struct device *dev, const char *name) +{ + struct icc_path *path; + + path = of_icc_get(dev, name); + if (IS_ERR(path)) + dev_err(dev, "error %ld getting %s interconnect\n", + PTR_ERR(path), name); + + return path; +} + +/* Initialize interconnects required for IPA operation */ +static int ipa_interconnect_init(struct ipa_clock *clock, struct device *dev) +{ + struct icc_path *path; + + path = ipa_interconnect_init_one(dev, "memory"); + if (IS_ERR(path)) + goto err_return; + clock->memory_path = path; + + path = ipa_interconnect_init_one(dev, "imem"); + if (IS_ERR(path)) + goto err_memory_path_put; + clock->imem_path = path; + + path = ipa_interconnect_init_one(dev, "config"); + if (IS_ERR(path)) + goto err_imem_path_put; + clock->config_path = path; + + return 0; + +err_imem_path_put: + icc_put(clock->imem_path); +err_memory_path_put: + icc_put(clock->memory_path); +err_return: + return PTR_ERR(path); +} + +/* Inverse of ipa_interconnect_init() */ +static void ipa_interconnect_exit(struct ipa_clock *clock) +{ + icc_put(clock->config_path); + icc_put(clock->imem_path); + icc_put(clock->memory_path); +} + +/* Currently we only use one bandwidth level, so just "enable" interconnects */ +static int ipa_interconnect_enable(struct ipa *ipa) +{ + struct ipa_clock *clock = ipa->clock; + int ret; + + ret = icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK); + if (ret) + return ret; + + ret = icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK); + if (ret) + goto err_memory_path_disable; + + ret = icc_set_bw(clock->config_path, IPA_CONFIG_AVG, IPA_CONFIG_PEAK); + if (ret) + goto err_imem_path_disable; + + return 0; + +err_imem_path_disable: + (void)icc_set_bw(clock->imem_path, 0, 0); +err_memory_path_disable: +
(void)icc_set_bw(clock->memory_path, 0, 0); + + return ret; +} + +/* To disable an interconnect, we just set its bandwidth to 0 */ +static int ipa_interconnect_disable(struct ipa *ipa) +{ + struct ipa_clock *clock = ipa->clock; + int ret; + + ret = icc_set_bw(clock->memory_path, 0, 0); + if (ret) + return ret; + + ret = icc_set_bw(clock->imem_path, 0, 0); + if (ret) + goto err_memory_path_reenable; + + ret = icc_set_bw(clock->config_path, 0, 0); + if (ret) + goto err_imem_path_reenable; + + return 0; + +err_imem_path_reenable: + (void)icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK); +err_memory_path_reenable: + (void)icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK); + + return ret; +} + +/* Turn on IPA clocks, including interconnects */ +static int ipa_clock_enable(struct ipa *ipa) +{ + int ret; + + ret = ipa_interconnect_enable(ipa); + if (ret) + return ret; + + ret = clk_prepare_enable(ipa->clock->core); + if (ret) + ipa_interconnect_disable(ipa); + + return ret; +} + +/* Inverse of ipa_clock_enable() */ +static void ipa_clock_disable(struct ipa *ipa) +{ + clk_disable_unprepare(ipa->clock->core); + (void)ipa_interconnect_disable(ipa); +} + +/* Get an IPA clock reference, but only if the reference count is + * already non-zero. Returns true if the additional reference was + * added successfully, or false otherwise. + */ +bool ipa_clock_get_additional(struct ipa *ipa) +{ + return !!atomic_inc_not_zero(&ipa->clock->count); +} + +/* Get an IPA clock reference. If the reference count is non-zero, it is + * incremented and return is immediate. Otherwise it is checked again + * under protection of the mutex, and if appropriate the clock (and + * interconnects) are enabled and suspended endpoints (if any) are resumed + * before returning. + * + * Incrementing the reference count is intentionally deferred until + * after the clock is running and endpoints are resumed. + */ +void ipa_clock_get(struct ipa *ipa) +{ + struct ipa_clock *clock = ipa->clock; + int ret; + + /* If the clock is running, just bump the reference count */ + if (ipa_clock_get_additional(ipa)) + return; + + /* Otherwise get the mutex and check again */ + mutex_lock(&clock->mutex); + + /* A reference might have been added before we got the mutex. */ + if (ipa_clock_get_additional(ipa)) + goto out_mutex_unlock; + + ret = ipa_clock_enable(ipa); + if (ret) { + dev_err(&ipa->pdev->dev, "error %d enabling IPA clock\n", ret); + goto out_mutex_unlock; + } + + ipa_endpoint_resume(ipa); + + atomic_inc(&clock->count); + +out_mutex_unlock: + mutex_unlock(&clock->mutex); +} + +/* Attempt to remove an IPA clock reference. If this represents the last + * reference, suspend endpoints and disable the clock (and interconnects) + * under protection of a mutex.
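+ * + * Example (editor's sketch of a hypothetical caller, not part of this + * patch); any code that touches IPA hardware brackets the access with + * a clock reference: + * + *	ipa_clock_get(ipa); + *	val = ioread32(ipa->reg_virt + offset); + *	ipa_clock_put(ipa);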
+ */ +void ipa_clock_put(struct ipa *ipa) +{ + struct ipa_clock *clock = ipa->clock; + + /* If this is not the last reference there's nothing more to do */ + if (!atomic_dec_and_mutex_lock(&clock->count, &clock->mutex)) + return; + + ipa_endpoint_suspend(ipa); + + ipa_clock_disable(ipa); + + mutex_unlock(&clock->mutex); +} + +/* Initialize IPA clocking */ +struct ipa_clock *ipa_clock_init(struct device *dev) +{ + struct ipa_clock *clock; + struct clk *clk; + int ret; + + clk = clk_get(dev, "core"); + if (IS_ERR(clk)) { + dev_err(dev, "error %ld getting core clock\n", PTR_ERR(clk)); + return ERR_CAST(clk); + } + + ret = clk_set_rate(clk, IPA_CORE_CLOCK_RATE); + if (ret) { + dev_err(dev, "error %d setting core clock rate to %lu\n", + ret, IPA_CORE_CLOCK_RATE); + goto err_clk_put; + } + + clock = kzalloc(sizeof(*clock), GFP_KERNEL); + if (!clock) { + ret = -ENOMEM; + goto err_clk_put; + } + clock->core = clk; + + ret = ipa_interconnect_init(clock, dev); + if (ret) + goto err_kfree; + + mutex_init(&clock->mutex); + atomic_set(&clock->count, 0); + + return clock; + +err_kfree: + kfree(clock); +err_clk_put: + clk_put(clk); + + return ERR_PTR(ret); +} + +/* Inverse of ipa_clock_init() */ +void ipa_clock_exit(struct ipa_clock *clock) +{ + struct clk *clk = clock->core; + + WARN_ON(atomic_read(&clock->count) != 0); + mutex_destroy(&clock->mutex); + ipa_interconnect_exit(clock); + kfree(clock); + clk_put(clk); +} diff --git a/drivers/net/ipa/ipa_clock.h b/drivers/net/ipa/ipa_clock.h new file mode 100644 index 000000000000..bc52b35e6bb2 --- /dev/null +++ b/drivers/net/ipa/ipa_clock.h @@ -0,0 +1,53 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _IPA_CLOCK_H_ +#define _IPA_CLOCK_H_ + +struct device; + +struct ipa; + +/** + * ipa_clock_init() - Initialize IPA clocking + * @dev: IPA device + * + * @Return: A pointer to an ipa_clock structure, or a pointer-coded error + */ +struct ipa_clock *ipa_clock_init(struct device *dev); + +/** + * ipa_clock_exit() - Inverse of ipa_clock_init() + * @clock: IPA clock pointer + */ +void ipa_clock_exit(struct ipa_clock *clock); + +/** + * ipa_clock_get() - Get an IPA clock reference + * @ipa: IPA pointer + * + * This call blocks if this is the first reference. + */ +void ipa_clock_get(struct ipa *ipa); + +/** + * ipa_clock_get_additional() - Get an IPA clock reference if not first + * @ipa: IPA pointer + * + * This returns immediately, and only takes a reference if not the first + */ +bool ipa_clock_get_additional(struct ipa *ipa); + +/** + * ipa_clock_put() - Drop an IPA clock reference + * @ipa: IPA pointer + * + * This drops a clock reference. If the last reference is being dropped, + * the clock is stopped and RX endpoints are suspended. This call will + * not block unless the last reference is dropped. + */ +void ipa_clock_put(struct ipa *ipa); + +#endif /* _IPA_CLOCK_H_ */ diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c new file mode 100644 index 000000000000..90353987c45f --- /dev/null +++ b/drivers/net/ipa/ipa_interrupt.c @@ -0,0 +1,253 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ + +/* DOC: IPA Interrupts + * + * The IPA has an interrupt line distinct from the interrupt used by the GSI + * code. 
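+ * (Editor's note: the IPA IRQ is requested by name "ipa" in + * ipa_interrupt_setup() below, and is serviced by a threaded handler.)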
Whereas GSI interrupts are generally related to channel events (like + * transfer completions), IPA interrupts signal other events involving + * the IPA. Some of the IPA interrupts come from a microcontroller + * embedded in the IPA. Each IPA interrupt type can be both masked and + * acknowledged independently of the others. + * + * Two of the IPA interrupts are initiated by the microcontroller. A third + * can be generated to signal the need for a wakeup/resume when an IPA + * endpoint has been suspended. There are other IPA events, but at this + * time only these three are supported. + */ + +#include +#include + +#include "ipa.h" +#include "ipa_clock.h" +#include "ipa_reg.h" +#include "ipa_endpoint.h" +#include "ipa_interrupt.h" + +/** + * struct ipa_interrupt - IPA interrupt information + * @ipa: IPA pointer + * @irq: Linux IRQ number used for IPA interrupts + * @enabled: Mask indicating which interrupts are enabled + * @handler: Array of handlers indexed by IPA interrupt ID + */ +struct ipa_interrupt { + struct ipa *ipa; + u32 irq; + u32 enabled; + ipa_irq_handler_t handler[IPA_IRQ_COUNT]; +}; + +/* Returns true if the interrupt type is associated with the microcontroller */ +static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 irq_id) +{ + return irq_id == IPA_IRQ_UC_0 || irq_id == IPA_IRQ_UC_1; +} + +/* Process a particular interrupt type that has been received */ +static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 irq_id) +{ + bool uc_irq = ipa_interrupt_uc(interrupt, irq_id); + struct ipa *ipa = interrupt->ipa; + u32 mask = BIT(irq_id); + + /* For microcontroller interrupts, clear the interrupt right away, + * "to avoid clearing unhandled interrupts." + */ + if (uc_irq) + iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET); + + if (irq_id < IPA_IRQ_COUNT && interrupt->handler[irq_id]) + interrupt->handler[irq_id](interrupt->ipa, irq_id); + + /* Clearing the SUSPEND_TX interrupt also clears the register + * that tells us which suspended endpoint(s) caused the interrupt, + * so defer clearing until after the handler has been called. + */ + if (!uc_irq) + iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET); +} + +/* Process all IPA interrupt types that have been signaled */ +static void ipa_interrupt_process_all(struct ipa_interrupt *interrupt) +{ + struct ipa *ipa = interrupt->ipa; + u32 enabled = interrupt->enabled; + u32 mask; + + /* The status register indicates which conditions are present, + * including conditions whose interrupt is not enabled. Handle + * only the enabled ones.
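+ * (This is why the loop below first masks the status value with the + * "enabled" mask before dispatching, and re-reads the status register + * after draining each batch of conditions.)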
+ */ + mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET); + while ((mask &= enabled)) { + do { + u32 irq_id = __ffs(mask); + + mask ^= BIT(irq_id); + + ipa_interrupt_process(interrupt, irq_id); + } while (mask); + mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET); + } +} + +/* Threaded part of the IPA IRQ handler */ +static irqreturn_t ipa_isr_thread(int irq, void *dev_id) +{ + struct ipa_interrupt *interrupt = dev_id; + + ipa_clock_get(interrupt->ipa); + + ipa_interrupt_process_all(interrupt); + + ipa_clock_put(interrupt->ipa); + + return IRQ_HANDLED; +} + +/* Hard part (i.e., "real" IRQ handler) of the IRQ handler */ +static irqreturn_t ipa_isr(int irq, void *dev_id) +{ + struct ipa_interrupt *interrupt = dev_id; + struct ipa *ipa = interrupt->ipa; + u32 mask; + + mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET); + if (mask & interrupt->enabled) + return IRQ_WAKE_THREAD; + + /* Nothing in the mask was supposed to cause an interrupt */ + iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET); + + dev_err(&ipa->pdev->dev, "%s: unexpected interrupt, mask 0x%08x\n", + __func__, mask); + + return IRQ_HANDLED; +} + +/* Common function used to enable/disable TX_SUSPEND for an endpoint */ +static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt, + u32 endpoint_id, bool enable) +{ + struct ipa *ipa = interrupt->ipa; + u32 mask = BIT(endpoint_id); + u32 val; + + /* assert(mask & ipa->available); */ + val = ioread32(ipa->reg_virt + IPA_REG_SUSPEND_IRQ_EN_OFFSET); + if (enable) + val |= mask; + else + val &= ~mask; + iowrite32(val, ipa->reg_virt + IPA_REG_SUSPEND_IRQ_EN_OFFSET); +} + +/* Enable TX_SUSPEND for an endpoint */ +void +ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt, u32 endpoint_id) +{ + ipa_interrupt_suspend_control(interrupt, endpoint_id, true); +} + +/* Disable TX_SUSPEND for an endpoint */ +void +ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt, u32 endpoint_id) +{ + ipa_interrupt_suspend_control(interrupt, endpoint_id, false); +} + +/* Clear the suspend interrupt for all endpoints that signaled it */ +void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt) +{ + struct ipa *ipa = interrupt->ipa; + u32 val; + + val = ioread32(ipa->reg_virt + IPA_REG_IRQ_SUSPEND_INFO_OFFSET); + iowrite32(val, ipa->reg_virt + IPA_REG_SUSPEND_IRQ_CLR_OFFSET); +} + +/* Simulate arrival of an IPA TX_SUSPEND interrupt */ +void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt) +{ + ipa_interrupt_process(interrupt, IPA_IRQ_TX_SUSPEND); +} + +/* Add a handler for an IPA interrupt */ +void ipa_interrupt_add(struct ipa_interrupt *interrupt, + enum ipa_irq_id ipa_irq, ipa_irq_handler_t handler) +{ + struct ipa *ipa = interrupt->ipa; + + /* assert(ipa_irq < IPA_IRQ_COUNT); */ + interrupt->handler[ipa_irq] = handler; + + /* Update the IPA interrupt mask to enable it */ + interrupt->enabled |= BIT(ipa_irq); + iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET); +} + +/* Remove the handler for an IPA interrupt type */ +void +ipa_interrupt_remove(struct ipa_interrupt *interrupt, enum ipa_irq_id ipa_irq) +{ + struct ipa *ipa = interrupt->ipa; + + /* assert(ipa_irq < IPA_IRQ_COUNT); */ + /* Update the IPA interrupt mask to disable it */ + interrupt->enabled &= ~BIT(ipa_irq); + iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET); + + interrupt->handler[ipa_irq] = NULL; +} + +/* Set up the IPA interrupt framework */ +struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa) +{ + struct device 
*dev = &ipa->pdev->dev; + struct ipa_interrupt *interrupt; + unsigned int irq; + int ret; + + ret = platform_get_irq_byname(ipa->pdev, "ipa"); + if (ret <= 0) { + dev_err(dev, "DT error %d getting \"ipa\" IRQ property\n", + ret); + return ERR_PTR(ret ? : -EINVAL); + } + irq = ret; + + interrupt = kzalloc(sizeof(*interrupt), GFP_KERNEL); + if (!interrupt) + return ERR_PTR(-ENOMEM); + interrupt->ipa = ipa; + interrupt->irq = irq; + + /* Start with all IPA interrupts disabled */ + iowrite32(0, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET); + + ret = request_threaded_irq(irq, ipa_isr, ipa_isr_thread, IRQF_ONESHOT, + "ipa", interrupt); + if (ret) { + dev_err(dev, "error %d requesting \"ipa\" IRQ\n", ret); + goto err_kfree; + } + + return interrupt; + +err_kfree: + kfree(interrupt); + + return ERR_PTR(ret); +} + +/* Tear down the IPA interrupt framework */ +void ipa_interrupt_teardown(struct ipa_interrupt *interrupt) +{ + free_irq(interrupt->irq, interrupt); + kfree(interrupt); +} diff --git a/drivers/net/ipa/ipa_interrupt.h b/drivers/net/ipa/ipa_interrupt.h new file mode 100644 index 000000000000..d4f4c1c9f0b1 --- /dev/null +++ b/drivers/net/ipa/ipa_interrupt.h @@ -0,0 +1,117 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _IPA_INTERRUPT_H_ +#define _IPA_INTERRUPT_H_ + +#include +#include + +struct ipa; +struct ipa_interrupt; + +/** + * enum ipa_irq_id - IPA interrupt type + * @IPA_IRQ_UC_0: Microcontroller event interrupt + * @IPA_IRQ_UC_1: Microcontroller response interrupt + * @IPA_IRQ_TX_SUSPEND: Data ready interrupt + * + * The data ready interrupt is signaled if data has arrived that is destined + * for an AP RX endpoint whose underlying GSI channel is suspended/stopped. + */ +enum ipa_irq_id { + IPA_IRQ_UC_0 = 2, + IPA_IRQ_UC_1 = 3, + IPA_IRQ_TX_SUSPEND = 14, + IPA_IRQ_COUNT, /* Number of interrupt types (not an index) */ +}; + +/** + * typedef ipa_irq_handler_t - IPA interrupt handler function type + * @ipa: IPA pointer + * @irq_id: interrupt type + * + * Callback function registered by ipa_interrupt_add() to handle a specific + * IPA interrupt type + */ +typedef void (*ipa_irq_handler_t)(struct ipa *ipa, enum ipa_irq_id irq_id); + +/** + * ipa_interrupt_add() - Register a handler for an IPA interrupt type + * @interrupt: IPA interrupt structure + * @irq_id: IPA interrupt type + * @handler: Handler function for the interrupt + * + * Add a handler for an IPA interrupt and enable it. IPA interrupt + * handlers are run in threaded interrupt context, so are allowed to + * block. + */ +void ipa_interrupt_add(struct ipa_interrupt *interrupt, enum ipa_irq_id irq_id, + ipa_irq_handler_t handler); + +/** + * ipa_interrupt_remove() - Remove the handler for an IPA interrupt type + * @interrupt: IPA interrupt structure + * @irq_id: IPA interrupt type + * + * Remove an IPA interrupt handler and disable it. + */ +void ipa_interrupt_remove(struct ipa_interrupt *interrupt, + enum ipa_irq_id irq_id); + +/** + * ipa_interrupt_suspend_enable() - Enable TX_SUSPEND for an endpoint + * @interrupt: IPA interrupt structure + * @endpoint_id: Endpoint whose interrupt should be enabled + * + * Note: The "TX" in the name is from the perspective of the IPA hardware. + * A TX_SUSPEND interrupt arrives on an AP RX endpoint when packet data can't + * be delivered to the endpoint because it is suspended (or its underlying + * channel is stopped).
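+ * + * A minimal usage sketch (editor's illustration of a hypothetical caller; + * ipa->interrupt is assumed to hold the ipa_interrupt pointer): + * + *	ipa_interrupt_suspend_enable(ipa->interrupt, endpoint_id); + *	...suspend or stop the endpoint's underlying channel... + *	(a TX_SUSPEND interrupt will now signal data arriving for it)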
+ */ +void ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt, + u32 endpoint_id); + +/** + * ipa_interrupt_suspend_disable() - Disable TX_SUSPEND for an endpoint + * @interrupt: IPA interrupt structure + * @endpoint_id: Endpoint whose interrupt should be disabled + */ +void ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt, + u32 endpoint_id); + +/** + * ipa_interrupt_suspend_clear_all() - Clear all suspend interrupts + * @interrupt: IPA interrupt structure + * + * Clear the TX_SUSPEND interrupt for all endpoints that signaled it. + */ +void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt); + +/** + * ipa_interrupt_simulate_suspend() - Simulate TX_SUSPEND IPA interrupt + * @interrupt: IPA interrupt structure + * + * This calls the TX_SUSPEND interrupt handler, as if such an interrupt + * had been signaled. This is needed to work around a hardware quirk + * that occurs if aggregation is active on an endpoint when its underlying + * channel is suspended. + */ +void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt); + +/** + * ipa_interrupt_setup() - Set up the IPA interrupt framework + * @ipa: IPA pointer + * + * @Return: Pointer to the IPA interrupt structure, or a pointer-coded error + */ +struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa); + +/** + * ipa_interrupt_teardown() - Tear down the IPA interrupt framework + * @interrupt: IPA interrupt structure + */ +void ipa_interrupt_teardown(struct ipa_interrupt *interrupt); + +#endif /* _IPA_INTERRUPT_H_ */ diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c new file mode 100644 index 000000000000..42d2c29d9f0c --- /dev/null +++ b/drivers/net/ipa/ipa_mem.c @@ -0,0 +1,314 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ + +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_reg.h" +#include "ipa_cmd.h" +#include "ipa_mem.h" +#include "ipa_data.h" +#include "ipa_table.h" +#include "gsi_trans.h" + +/* "Canary" value placed between memory regions to detect overflow */ +#define IPA_MEM_CANARY_VAL cpu_to_le32(0xdeadbeef) + +/* Add an immediate command to a transaction that zeroes a memory region */ +static void +ipa_mem_zero_region_add(struct gsi_trans *trans, const struct ipa_mem *mem) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + dma_addr_t addr = ipa->zero_addr; + + if (!mem->size) + return; + + ipa_cmd_dma_shared_mem_add(trans, mem->offset, mem->size, addr, true); +} + +/** + * ipa_mem_setup() - Set up IPA AP and modem shared memory areas + * + * Set up the shared memory regions in IPA local memory. This involves + * zero-filling memory regions, and in the case of header memory, telling + * the IPA where it's located. + * + * This function performs the initial setup of this memory. If the modem + * crashes, its regions are re-zeroed in ipa_mem_zero_modem(). + * + * The AP informs the modem where its portions of memory are located + * in a QMI exchange that occurs at modem startup. + * + * @Return: 0 if successful, or a negative error code + */ +int ipa_mem_setup(struct ipa *ipa) +{ + dma_addr_t addr = ipa->zero_addr; + struct gsi_trans *trans; + u32 offset; + u16 size; + + /* Get a transaction to define the header memory region and to zero + * the processing context and modem memory regions.
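+ * (One header initialization command plus three zero commands is why + * the transaction below is allocated with room for four commands.)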
+ */ + trans = ipa_cmd_trans_alloc(ipa, 4); + if (!trans) { + dev_err(&ipa->pdev->dev, "no transaction for memory setup\n"); + return -EBUSY; + } + + /* Initialize IPA-local header memory. The modem and AP header + * regions are contiguous, and initialized together. + */ + offset = ipa->mem[IPA_MEM_MODEM_HEADER].offset; + size = ipa->mem[IPA_MEM_MODEM_HEADER].size; + size += ipa->mem[IPA_MEM_AP_HEADER].size; + + ipa_cmd_hdr_init_local_add(trans, offset, size, addr); + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM_PROC_CTX]); + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_AP_PROC_CTX]); + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM]); + + gsi_trans_commit_wait(trans); + + /* Tell the hardware where the processing context area is located */ + iowrite32(ipa->mem_offset + offset, + ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET); + + return 0; +} + +void ipa_mem_teardown(struct ipa *ipa) +{ + /* Nothing to do */ +} + +#ifdef IPA_VALIDATE + +static bool ipa_mem_valid(struct ipa *ipa, enum ipa_mem_id mem_id) +{ + const struct ipa_mem *mem = &ipa->mem[mem_id]; + struct device *dev = &ipa->pdev->dev; + u16 size_multiple; + + /* Other than modem memory, sizes must be a multiple of 8 */ + size_multiple = mem_id == IPA_MEM_MODEM ? 4 : 8; + if (mem->size % size_multiple) + dev_err(dev, "region %u size not a multiple of %u bytes\n", + mem_id, size_multiple); + else if (mem->offset % 8) + dev_err(dev, "region %u offset not 8-byte aligned\n", mem_id); + else if (mem->offset < mem->canary_count * sizeof(__le32)) + dev_err(dev, "region %u offset too small for %hu canaries\n", + mem_id, mem->canary_count); + else if (mem->offset + mem->size > ipa->mem_size) + dev_err(dev, "region %u ends beyond memory limit (0x%08x)\n", + mem_id, ipa->mem_size); + else + return true; + + return false; +} + +#else /* !IPA_VALIDATE */ + +static bool ipa_mem_valid(struct ipa *ipa, enum ipa_mem_id mem_id) +{ + return true; +} + +#endif /*! IPA_VALIDATE */ + +/** + * ipa_mem_config() - Configure IPA shared memory + * + * @Return: 0 if successful, or a negative error code + */ +int ipa_mem_config(struct ipa *ipa) +{ + struct device *dev = &ipa->pdev->dev; + enum ipa_mem_id mem_id; + dma_addr_t addr; + u32 mem_size; + void *virt; + u32 val; + + /* Check the advertised location and size of the shared memory area */ + val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET); + + /* The fields in the register are in 8 byte units */ + ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK); + /* Make sure the end is within the region's mapped space */ + mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK); + + /* If the sizes don't match, issue a warning */ + if (ipa->mem_offset + mem_size > ipa->mem_size) { + dev_warn(dev, "ignoring larger reported memory size: 0x%08x\n", + mem_size); + } else if (ipa->mem_offset + mem_size < ipa->mem_size) { + dev_warn(dev, "limiting IPA memory size to 0x%08x\n", + mem_size); + ipa->mem_size = mem_size; + } + + /* Prealloc DMA memory for zeroing regions */ + virt = dma_alloc_coherent(dev, IPA_MEM_MAX, &addr, GFP_KERNEL); + if (!virt) + return -ENOMEM; + ipa->zero_addr = addr; + ipa->zero_virt = virt; + ipa->zero_size = IPA_MEM_MAX; + + /* Verify each defined memory region is valid, and if indicated + * for the region, write "canary" values in the space prior to + * the region's base address. 
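+ * + * For example (editor's illustration): a region at offset 0x2000 with a + * canary_count of 2 gets canary values written at offsets 0x1ff8 and + * 0x1ffc, immediately below its base.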
+ */ + for (mem_id = 0; mem_id < IPA_MEM_COUNT; mem_id++) { + const struct ipa_mem *mem = &ipa->mem[mem_id]; + u16 canary_count; + __le32 *canary; + + /* Validate all regions (even undefined ones) */ + if (!ipa_mem_valid(ipa, mem_id)) + goto err_dma_free; + + /* Skip over undefined regions */ + if (!mem->offset && !mem->size) + continue; + + canary_count = mem->canary_count; + if (!canary_count) + continue; + + /* Write canary values in the space before the region */ + canary = ipa->mem_virt + ipa->mem_offset + mem->offset; + do + *--canary = IPA_MEM_CANARY_VAL; + while (--canary_count); + } + + /* Make sure filter and route table memory regions are valid */ + if (!ipa_table_valid(ipa)) + goto err_dma_free; + + /* Validate memory-related properties relevant to immediate commands */ + if (!ipa_cmd_data_valid(ipa)) + goto err_dma_free; + + /* Verify the microcontroller ring alignment (0 is OK too) */ + if (ipa->mem[IPA_MEM_UC_EVENT_RING].offset % 1024) { + dev_err(dev, "microcontroller ring not 1024-byte aligned\n"); + goto err_dma_free; + } + + return 0; + +err_dma_free: + dma_free_coherent(dev, IPA_MEM_MAX, ipa->zero_virt, ipa->zero_addr); + + return -EINVAL; +} + +/* Inverse of ipa_mem_config() */ +void ipa_mem_deconfig(struct ipa *ipa) +{ + struct device *dev = &ipa->pdev->dev; + + dma_free_coherent(dev, ipa->zero_size, ipa->zero_virt, ipa->zero_addr); + ipa->zero_size = 0; + ipa->zero_virt = NULL; + ipa->zero_addr = 0; +} + +/** + * ipa_mem_zero_modem() - Zero IPA-local memory regions owned by the modem + * + * Zero regions of IPA-local memory used by the modem. These are configured + * (and initially zeroed) by ipa_mem_setup(), but if the modem crashes and + * restarts via SSR we need to re-initialize them. A QMI message tells the + * modem where to find regions of IPA local memory it needs to know about + * (these included). + */ +int ipa_mem_zero_modem(struct ipa *ipa) +{ + struct gsi_trans *trans; + + /* Get a transaction to zero the modem memory, modem header, + * and modem processing context regions. 
+ */ + trans = ipa_cmd_trans_alloc(ipa, 3); + if (!trans) { + dev_err(&ipa->pdev->dev, + "no transaction to zero modem memory\n"); + return -EBUSY; + } + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM_HEADER]); + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM_PROC_CTX]); + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM]); + + gsi_trans_commit_wait(trans); + + return 0; +} + +/* Perform memory region-related initialization */ +int ipa_mem_init(struct ipa *ipa, u32 count, const struct ipa_mem *mem) +{ + struct device *dev = &ipa->pdev->dev; + struct resource *res; + int ret; + + if (count > IPA_MEM_COUNT) { + dev_err(dev, "too many memory regions (%u > %u)\n", + count, IPA_MEM_COUNT); + return -EINVAL; + } + + ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64)); + if (ret) { + dev_err(dev, "error %d setting DMA mask\n", ret); + return ret; + } + + res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM, + "ipa-shared"); + if (!res) { + dev_err(dev, + "DT error getting \"ipa-shared\" memory property\n"); + return -ENODEV; + } + + ipa->mem_virt = memremap(res->start, resource_size(res), MEMREMAP_WC); + if (!ipa->mem_virt) { + dev_err(dev, "unable to remap \"ipa-shared\" memory\n"); + return -ENOMEM; + } + + ipa->mem_addr = res->start; + ipa->mem_size = resource_size(res); + + /* The ipa->mem[] array is indexed by enum ipa_mem_id values */ + ipa->mem = mem; + + return 0; +} + +/* Inverse of ipa_mem_init() */ +void ipa_mem_exit(struct ipa *ipa) +{ + memunmap(ipa->mem_virt); +} diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h new file mode 100644 index 000000000000..065cb499ebe5 --- /dev/null +++ b/drivers/net/ipa/ipa_mem.h @@ -0,0 +1,90 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ +#ifndef _IPA_MEM_H_ +#define _IPA_MEM_H_ + +struct ipa; + +/** + * DOC: IPA Local Memory + * + * The IPA has a block of shared memory, divided into regions used for + * specific purposes. + * + * The regions within the shared block are bounded by an offset (relative to + * the "ipa-shared" memory range) and size found in the IPA_SHARED_MEM_SIZE + * register. + * + * Each region is optionally preceded by one or more 32-bit "canary" values. + * These are meant to detect out-of-range writes (if they become corrupted). + * A given region (such as a filter or routing table) has the same number + * of canaries for all IPA hardware versions. Still, the number used is + * defined in the config data, allowing for generic handling of regions. + * + * The set of memory regions is defined in configuration data.
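+ * For example, configuration data might describe one region this way + * (editor's illustration; the values shown are hypothetical): + * + *	[IPA_MEM_MODEM_HEADER] = { + *		.offset		= 0x07d0, + *		.size		= 0x0240, + *		.canary_count	= 2, + *	}, + *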
They are + * subject to these constraints: + * - a zero offset and zero size represents an undefined region + * - a region's offset is defined to be *past* all "canary" values + * - offset must be large enough to account for all canaries + * - a region's size may be zero, but may still have canaries + * - all offsets must be 8-byte aligned + * - most sizes must be a multiple of 8 + * - modem memory size must be a multiple of 4 + * - the microcontroller ring offset must be a multiple of 1024 + */ + +/* The maximum allowed size for any memory region */ +#define IPA_MEM_MAX (2 * PAGE_SIZE) + +/* IPA-resident memory region ids */ +enum ipa_mem_id { + IPA_MEM_UC_SHARED, /* 0 canaries */ + IPA_MEM_UC_INFO, /* 0 canaries */ + IPA_MEM_V4_FILTER_HASHED, /* 2 canaries */ + IPA_MEM_V4_FILTER, /* 2 canaries */ + IPA_MEM_V6_FILTER_HASHED, /* 2 canaries */ + IPA_MEM_V6_FILTER, /* 2 canaries */ + IPA_MEM_V4_ROUTE_HASHED, /* 2 canaries */ + IPA_MEM_V4_ROUTE, /* 2 canaries */ + IPA_MEM_V6_ROUTE_HASHED, /* 2 canaries */ + IPA_MEM_V6_ROUTE, /* 2 canaries */ + IPA_MEM_MODEM_HEADER, /* 2 canaries */ + IPA_MEM_AP_HEADER, /* 0 canaries */ + IPA_MEM_MODEM_PROC_CTX, /* 2 canaries */ + IPA_MEM_AP_PROC_CTX, /* 0 canaries */ + IPA_MEM_PDN_CONFIG, /* 2 canaries (IPA v4.0 and above) */ + IPA_MEM_STATS_QUOTA, /* 2 canaries (IPA v4.0 and above) */ + IPA_MEM_STATS_TETHERING, /* 0 canaries (IPA v4.0 and above) */ + IPA_MEM_STATS_DROP, /* 0 canaries (IPA v4.0 and above) */ + IPA_MEM_MODEM, /* 0 canaries */ + IPA_MEM_UC_EVENT_RING, /* 1 canary */ + IPA_MEM_COUNT, /* Number of regions (not an index) */ +}; + +/** + * struct ipa_mem - IPA local memory region description + * @offset: offset in IPA memory space to base of the region + * @size: size in bytes of the region + * @canary_count: number of 32-bit "canary" values that precede the region + */ +struct ipa_mem { + u32 offset; + u16 size; + u16 canary_count; +}; + +int ipa_mem_config(struct ipa *ipa); +void ipa_mem_deconfig(struct ipa *ipa); + +int ipa_mem_setup(struct ipa *ipa); +void ipa_mem_teardown(struct ipa *ipa); + +int ipa_mem_zero_modem(struct ipa *ipa); + +int ipa_mem_init(struct ipa *ipa, u32 count, const struct ipa_mem *mem); +void ipa_mem_exit(struct ipa *ipa); + +#endif /* _IPA_MEM_H_ */ From patchwork Fri Mar 6 04:28:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 190102 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 947E1C3F2DC for ; Fri, 6 Mar 2020 04:30:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 526FC2073D for ; Fri, 6 Mar 2020 04:30:16 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="Uvawy2IF" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727191AbgCFEaP (ORCPT ); Thu, 5 Mar 2020 23:30:15 -0500 Received: from mail-yw1-f65.google.com ([209.85.161.65]:46121 "EHLO mail-yw1-f65.google.com" rhost-flags-OK-OK-OK-OK) by
vger.kernel.org with ESMTP id S1727152AbgCFE2t (ORCPT ); Thu, 5 Mar 2020 23:28:49 -0500 Received: by mail-yw1-f65.google.com with SMTP id x5so215389ywb.13 for ; Thu, 05 Mar 2020 20:28:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1SW/DxoqhGjaj3Cy7I1FXx44tNdHemhTbjCDe8OoppQ=; b=Uvawy2IFHGwfSMIWD4m0w2jsyVtk+rmmVi92Kidzh100ycKFsuuP7KpHlw7+T6cvQ+ F6jH87UZMQshIcYbObXsayNiijgZvKXVqCLdYL3ksVfBZpVWdlpVzcpe41n8qTtdVq6Z 2RuDV6gzrBVtIh5hOXlW1yRAZ0D9ah/teIZnIomgnNIX1J1AnUCecgm3u4ckpyHgXvrN ijej7GTkKG3W5Of9OnaVKNcdfT9l/Lbl2xR5F4rF7eV8u5bXHLxJxNxI7PHtBxAJvX+Z j38JCWSVkZ3tm3h3auJ5/ae21mI5uTSROa3E7oFOnVJwzlXko3653QCTw9tz+f5vfEO4 jJmg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1SW/DxoqhGjaj3Cy7I1FXx44tNdHemhTbjCDe8OoppQ=; b=kWatBkdaxWsKUwHl4nxoLPgdDyKWg7Dqn47GtWJMXrx8kaMfN59phq8hGEZjO/kBH0 mFwTxru7J2ah3LfaUKL/64yv+tBkIMmSd0LbX4IU7Luoi6uprbzJayMXWyNdghGA6mW9 B2ueVbBx1YZklhfJZxpN9utHIqa7RFsku8OofbvALRexzg8aahbd8vdvFZt7IGRY6P3L GDdBbjkk2Yy/REHfzTYj8Fi+LfhKCrbTk9qfSmo0NJmr0WGqloU2oJRp4oiEY027psCb f7pUaxiV5HlL+6y/yY2Y2MQLEbLHa8MnhG6fgj4VIiMh01Q37ZlK4x22DHnFNpAao4a0 4MpQ== X-Gm-Message-State: ANhLgQ3i2H4ptkZMKxs6I+OlhKzXf29ZfbLU63Z8ULeKN3g2eDorFHNI k00HRyfpIortKzLJHGD9jepfxEsaEaI= X-Google-Smtp-Source: ADFU+vvBbLM2GE6JsOyOOkyWHiMSPvDcn9wUJqdSfa1r67YiWGpoedr37dzIk/B9pLTnwwAashSS4A== X-Received: by 2002:a81:3a06:: with SMTP id h6mr2190169ywa.170.1583468928197; Thu, 05 Mar 2020 20:28:48 -0800 (PST) Received: from presto.localdomain (c-73-185-129-58.hsd1.mn.comcast.net. [73.185.129.58]) by smtp.gmail.com with ESMTPSA id x2sm12581836ywa.32.2020.03.05.20.28.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 05 Mar 2020 20:28:47 -0800 (PST) From: Alex Elder To: David Miller , Arnd Bergmann Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 06/17] soc: qcom: ipa: GSI headers Date: Thu, 5 Mar 2020 22:28:20 -0600 Message-Id: <20200306042831.17827-7-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200306042831.17827-1-elder@linaro.org> References: <20200306042831.17827-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org The Generic Software Interface is a layer of the IPA driver that abstracts the underlying hardware. The next patch includes the main code for GSI (including some additional documentation). This patch just includes three GSI header files. - "gsi.h" is the top-level GSI header file. This structure is is embedded within the IPA structure. The main abstraction implemented by the GSI code is the channel, and this header exposes several operations that can be performed on a GSI channel. - "gsi_private.h" exposes some definitions that are intended to be private, used only by the main GSI code and the GSI transaction code (defined in an upcoming patch). 
- Like "ipa_reg.h", "gsi_reg.h" defines the offsets of the 32-bit registers used by the GSI layer, along with masks that define the position and width of fields less than 32 bits located within these registers. Signed-off-by: Alex Elder --- drivers/net/ipa/gsi.h | 257 +++++++++++++++++++++ drivers/net/ipa/gsi_private.h | 118 ++++++++++ drivers/net/ipa/gsi_reg.h | 417 ++++++++++++++++++++++++++++++++++ 3 files changed, 792 insertions(+) create mode 100644 drivers/net/ipa/gsi.h create mode 100644 drivers/net/ipa/gsi_private.h create mode 100644 drivers/net/ipa/gsi_reg.h diff --git a/drivers/net/ipa/gsi.h b/drivers/net/ipa/gsi.h new file mode 100644 index 000000000000..0698ff1ae7a6 --- /dev/null +++ b/drivers/net/ipa/gsi.h @@ -0,0 +1,257 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _GSI_H_ +#define _GSI_H_ + +#include +#include +#include +#include +#include +#include + +/* Maximum number of channels and event rings supported by the driver */ +#define GSI_CHANNEL_COUNT_MAX 17 +#define GSI_EVT_RING_COUNT_MAX 13 + +/* Maximum TLV FIFO size for a channel; 64 here is arbitrary (and high) */ +#define GSI_TLV_MAX 64 + +struct device; +struct scatterlist; +struct platform_device; + +struct gsi; +struct gsi_trans; +struct gsi_channel_data; +struct ipa_gsi_endpoint_data; + +/* Execution environment IDs */ +enum gsi_ee_id { + GSI_EE_AP = 0, + GSI_EE_MODEM = 1, + GSI_EE_UC = 2, + GSI_EE_TZ = 3, +}; + +struct gsi_ring { + void *virt; /* ring array base address */ + dma_addr_t addr; /* primarily low 32 bits used */ + u32 count; /* number of elements in ring */ + + /* The ring index value indicates the next "open" entry in the ring. + * + * A channel ring consists of TRE entries filled by the AP and passed + * to the hardware for processing. For a channel ring, the ring index + * identifies the next unused entry to be filled by the AP. + * + * An event ring consists of event structures filled by the hardware + * and passed to the AP. For event rings, the ring index identifies + * the next ring entry that is not known to have been filled by the + * hardware. + */ + u32 index; +}; + +/* Transactions use several resources that can be allocated dynamically + * but taken from a fixed-size pool. The number of elements required for + * the pool is limited by the total number of TREs that can be outstanding. + * + * If sufficient TREs are available to reserve for a transaction, + * allocation from these pools is guaranteed to succeed. Furthermore, + * these resources are implicitly freed whenever the TREs in the + * transaction they're associated with are released. + * + * The result of a pool allocation of multiple elements is always + * contiguous. 
+ */ +struct gsi_trans_pool { + void *base; /* base address of element pool */ + u32 count; /* # elements in the pool */ + u32 free; /* next free element in pool (modulo) */ + u32 size; /* size (bytes) of an element */ + u32 max_alloc; /* max allocation request */ + dma_addr_t addr; /* DMA address if DMA pool (or 0) */ +}; + +struct gsi_trans_info { + atomic_t tre_avail; /* TREs available for allocation */ + struct gsi_trans_pool pool; /* transaction pool */ + struct gsi_trans_pool sg_pool; /* scatterlist pool */ + struct gsi_trans_pool cmd_pool; /* command payload DMA pool */ + struct gsi_trans_pool info_pool;/* command information pool */ + struct gsi_trans **map; /* TRE -> transaction map */ + + spinlock_t spinlock; /* protects updates to the lists */ + struct list_head alloc; /* allocated, not committed */ + struct list_head pending; /* committed, awaiting completion */ + struct list_head complete; /* completed, awaiting poll */ + struct list_head polled; /* returned by gsi_channel_poll_one() */ +}; + +/* Hardware values signifying the state of a channel */ +enum gsi_channel_state { + GSI_CHANNEL_STATE_NOT_ALLOCATED = 0x0, + GSI_CHANNEL_STATE_ALLOCATED = 0x1, + GSI_CHANNEL_STATE_STARTED = 0x2, + GSI_CHANNEL_STATE_STOPPED = 0x3, + GSI_CHANNEL_STATE_STOP_IN_PROC = 0x4, + GSI_CHANNEL_STATE_ERROR = 0xf, +}; + +/* We only care about channels between IPA and AP */ +struct gsi_channel { + struct gsi *gsi; + bool toward_ipa; + bool command; /* AP command TX channel or not */ + bool use_prefetch; /* use prefetch (else escape buf) */ + + u8 tlv_count; /* # entries in TLV FIFO */ + u16 tre_count; + u16 event_count; + + struct completion completion; /* signals channel state changes */ + enum gsi_channel_state state; + + struct gsi_ring tre_ring; + u32 evt_ring_id; + + u64 byte_count; /* total # bytes transferred */ + u64 trans_count; /* total # transactions */ + /* The following counts are used only for TX endpoints */ + u64 queued_byte_count; /* last reported queued byte count */ + u64 queued_trans_count; /* ...and queued trans count */ + u64 compl_byte_count; /* last reported completed byte count */ + u64 compl_trans_count; /* ...and completed trans count */ + + struct gsi_trans_info trans_info; + + struct napi_struct napi; +}; + +/* Hardware values signifying the state of an event ring */ +enum gsi_evt_ring_state { + GSI_EVT_RING_STATE_NOT_ALLOCATED = 0x0, + GSI_EVT_RING_STATE_ALLOCATED = 0x1, + GSI_EVT_RING_STATE_ERROR = 0xf, +}; + +struct gsi_evt_ring { + struct gsi_channel *channel; + struct completion completion; /* signals event ring state changes */ + enum gsi_evt_ring_state state; + struct gsi_ring ring; +}; + +struct gsi { + struct device *dev; /* Same as IPA device */ + struct net_device dummy_dev; /* needed for NAPI */ + void __iomem *virt; + u32 irq; + bool irq_wake_enabled; + u32 channel_count; + u32 evt_ring_count; + struct gsi_channel channel[GSI_CHANNEL_COUNT_MAX]; + struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX]; + u32 event_bitmap; + u32 event_enable_bitmap; + u32 modem_channel_bitmap; + struct completion completion; /* for global EE commands */ + struct mutex mutex; /* protects commands, programming */ +}; + +/** + * gsi_setup() - Set up the GSI subsystem + * @gsi: Address of GSI structure embedded in an IPA structure + * @db_enable: Whether to use the GSI doorbell engine + * + * @Return: 0 if successful, or a negative error code + * + * Performs initialization that must wait until the GSI hardware is + * ready (including firmware loaded). 
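+ * + * (Editor's note: the expected call sequence is gsi_init() at probe + * time, gsi_setup() once the hardware and firmware are ready, then + * gsi_teardown() and finally gsi_exit() when shutting down.)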
+ */ +int gsi_setup(struct gsi *gsi, bool db_enable); + +/** + * gsi_teardown() - Tear down GSI subsystem + * @gsi: GSI address previously passed to a successful gsi_setup() call + */ +void gsi_teardown(struct gsi *gsi); + +/** + * gsi_channel_tre_max() - Channel maximum number of in-flight TREs + * @gsi: GSI pointer + * @channel_id: Channel whose limit is to be returned + * + * @Return: The maximum number of TREs outstanding on the channel + */ +u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction + * @gsi: GSI pointer + * @channel_id: Channel whose limit is to be returned + * + * @Return: The maximum TRE count per transaction on the channel + */ +u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_start() - Start an allocated GSI channel + * @gsi: GSI pointer + * @channel_id: Channel to start + * + * @Return: 0 if successful, or a negative error code + */ +int gsi_channel_start(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_stop() - Stop a started GSI channel + * @gsi: GSI pointer returned by gsi_setup() + * @channel_id: Channel to stop + * + * @Return: 0 if successful, or a negative error code + */ +int gsi_channel_stop(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_reset() - Reset an allocated GSI channel + * @gsi: GSI pointer + * @channel_id: Channel to be reset + * @db_enable: Whether doorbell engine should be enabled + * + * Reset a channel and reconfigure it. The @db_enable flag indicates + * whether the doorbell engine will be enabled following reconfiguration. + * + * GSI hardware relinquishes ownership of all pending receive buffer + * transactions and they will complete with their cancelled flag set. + */ +void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool db_enable); + +int gsi_channel_suspend(struct gsi *gsi, u32 channel_id, bool stop); +int gsi_channel_resume(struct gsi *gsi, u32 channel_id, bool start); + +/** + * gsi_init() - Initialize the GSI subsystem + * @gsi: Address of GSI structure embedded in an IPA structure + * @pdev: IPA platform device + * + * @Return: 0 if successful, or a negative error code + * + * Early stage initialization of the GSI subsystem, performing tasks + * that can be done before the GSI hardware is ready to use. + */ +int gsi_init(struct gsi *gsi, struct platform_device *pdev, bool prefetch, + u32 count, const struct ipa_gsi_endpoint_data *data, + bool modem_alloc); + +/** + * gsi_exit() - Exit the GSI subsystem + * @gsi: GSI address previously passed to a successful gsi_init() call + */ +void gsi_exit(struct gsi *gsi); + +#endif /* _GSI_H_ */ diff --git a/drivers/net/ipa/gsi_private.h b/drivers/net/ipa/gsi_private.h new file mode 100644 index 000000000000..b57d0198ebc1 --- /dev/null +++ b/drivers/net/ipa/gsi_private.h @@ -0,0 +1,118 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd.
+ */ +#ifndef _GSI_PRIVATE_H_ +#define _GSI_PRIVATE_H_ + +/* === Only "gsi.c" and "gsi_trans.c" should include this file === */ + +#include + +struct gsi_trans; +struct gsi_ring; +struct gsi_channel; + +#define GSI_RING_ELEMENT_SIZE 16 /* bytes */ + +/* Return the entry that follows one provided in a transaction pool */ +void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element); + +/** + * gsi_trans_move_complete() - Mark a GSI transaction completed + * @trans: Transaction to mark completed + */ +void gsi_trans_move_complete(struct gsi_trans *trans); + +/** + * gsi_trans_move_polled() - Mark a transaction polled + * @trans: Transaction to update + */ +void gsi_trans_move_polled(struct gsi_trans *trans); + +/** + * gsi_trans_complete() - Complete a GSI transaction + * @trans: Transaction to complete + * + * Marks a transaction complete (including freeing it). + */ +void gsi_trans_complete(struct gsi_trans *trans); + +/** + * gsi_channel_trans_mapped() - Return a transaction mapped to a TRE index + * @channel: Channel associated with the transaction + * @index: Index of the TRE having a transaction + * + * @Return: The GSI transaction pointer associated with the TRE index + */ +struct gsi_trans *gsi_channel_trans_mapped(struct gsi_channel *channel, + u32 index); + +/** + * gsi_channel_trans_complete() - Return a channel's next completed transaction + * @channel: Channel whose next transaction is to be returned + * + * @Return: The next completed transaction, or NULL if nothing new + */ +struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel); + +/** + * gsi_channel_trans_cancel_pending() - Cancel pending transactions + * @channel: Channel whose pending transactions should be cancelled + * + * Cancel all pending transactions on a channel. These are transactions + * that have been committed but not yet completed. This is required when + * the channel gets reset. At that time all pending transactions will be + * marked as cancelled. + * + * NOTE: Transactions already complete at the time of this call are + * unaffected. + */ +void gsi_channel_trans_cancel_pending(struct gsi_channel *channel); + +/** + * gsi_channel_trans_init() - Initialize a channel's GSI transaction info + * @gsi: GSI pointer + * @channel_id: Channel number + * + * @Return: 0 if successful, or -ENOMEM on allocation failure + * + * Creates and sets up information for managing transactions on a channel + */ +int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_trans_exit() - Inverse of gsi_channel_trans_init() + * @channel: Channel whose transaction information is to be cleaned up + */ +void gsi_channel_trans_exit(struct gsi_channel *channel); + +/** + * gsi_channel_doorbell() - Ring a channel's doorbell + * @channel: Channel whose doorbell should be rung + * + * Rings a channel's doorbell to inform the GSI hardware that new + * transactions (TREs, really) are available for it to process. + */ +void gsi_channel_doorbell(struct gsi_channel *channel); + +/** + * gsi_ring_virt() - Return virtual address for a ring entry + * @ring: Ring whose address is to be translated + * @index: Index (slot number) of entry + */ +void *gsi_ring_virt(struct gsi_ring *ring, u32 index); + +/** + * gsi_channel_tx_queued() - Report the number of bytes queued to hardware + * @channel: Channel whose bytes have been queued + * + * This arranges for the number of transactions and bytes for + * transfer that have been queued to hardware to be reported.
It + * passes this information up the network stack so it can be used to + * throttle transmissions. + */ +void gsi_channel_tx_queued(struct gsi_channel *channel); + +#endif /* _GSI_PRIVATE_H_ */ diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h new file mode 100644 index 000000000000..7613b9cc7cf6 --- /dev/null +++ b/drivers/net/ipa/gsi_reg.h @@ -0,0 +1,417 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _GSI_REG_H_ +#define _GSI_REG_H_ + +/* === Only "gsi.c" should include this file === */ + +#include + +/** + * DOC: GSI Registers + * + * GSI registers are located within the "gsi" address space defined by Device + * Tree. The offset of each register within that space is specified by + * symbols defined below. The GSI address space is mapped to virtual memory + * space in gsi_init(). All GSI registers are 32 bits wide. + * + * Each register type is duplicated for a number of instances of something. + * For example, each GSI channel has its own set of registers defining its + * configuration. The offset to a channel's set of registers is computed + * based on a "base" offset plus an additional "stride" amount computed + * from the channel's ID. For such registers, the offset is computed by a + * function-like macro that takes a parameter used in the computation. + * + * The offset of a register dependent on execution environment is computed + * by a macro that is supplied a parameter "ee". The "ee" value is a member + * of the gsi_ee_id enumerated type. + * + * The offset of a channel register is computed by a macro that is supplied a + * parameter "ch". The "ch" value is a channel id whose maximum value is 30 + * (though the actual limit is hardware-dependent). + * + * The offset of an event register is computed by a macro that is supplied a + * parameter "ev". The "ev" value is an event id whose maximum value is 15 + * (though the actual limit is hardware-dependent). 
+ */ + +#define GSI_INTER_EE_SRC_CH_IRQ_OFFSET \ + GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(ee) \ + (0x0000c018 + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET \ + GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(ee) \ + (0x0000c01c + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_CH_IRQ_CLR_OFFSET \ + GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(ee) \ + (0x0000c028 + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_EV_CH_IRQ_CLR_OFFSET \ + GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \ + (0x0000c02c + 0x1000 * (ee)) + +#define GSI_CH_C_CNTXT_0_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_0_OFFSET(ch, ee) \ + (0x0001c000 + 0x4000 * (ee) + 0x80 * (ch)) +#define CHTYPE_PROTOCOL_FMASK GENMASK(2, 0) +#define CHTYPE_DIR_FMASK GENMASK(3, 3) +#define EE_FMASK GENMASK(7, 4) +#define CHID_FMASK GENMASK(12, 8) +/* The next field is present for GSI v2.0 and above */ +#define CHTYPE_PROTOCOL_MSB_FMASK GENMASK(13, 13) +#define ERINDEX_FMASK GENMASK(18, 14) +#define CHSTATE_FMASK GENMASK(23, 20) +#define ELEMENT_SIZE_FMASK GENMASK(31, 24) + +#define GSI_CH_C_CNTXT_1_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_1_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_1_OFFSET(ch, ee) \ + (0x0001c004 + 0x4000 * (ee) + 0x80 * (ch)) +#define R_LENGTH_FMASK GENMASK(15, 0) + +#define GSI_CH_C_CNTXT_2_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_2_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_2_OFFSET(ch, ee) \ + (0x0001c008 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_CNTXT_3_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_3_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_3_OFFSET(ch, ee) \ + (0x0001c00c + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_QOS_OFFSET(ch) \ + GSI_EE_N_CH_C_QOS_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_QOS_OFFSET(ch, ee) \ + (0x0001c05c + 0x4000 * (ee) + 0x80 * (ch)) +#define WRR_WEIGHT_FMASK GENMASK(3, 0) +#define MAX_PREFETCH_FMASK GENMASK(8, 8) +#define USE_DB_ENG_FMASK GENMASK(9, 9) +/* The next field is present for GSI v2.0 and above */ +#define USE_ESCAPE_BUF_ONLY_FMASK GENMASK(10, 10) + +#define GSI_CH_C_SCRATCH_0_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_0_OFFSET(ch, ee) \ + (0x0001c060 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_1_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_1_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_1_OFFSET(ch, ee) \ + (0x0001c064 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_2_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_2_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_2_OFFSET(ch, ee) \ + (0x0001c068 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_3_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_3_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_3_OFFSET(ch, ee) \ + (0x0001c06c + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_EV_CH_E_CNTXT_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_0_OFFSET(ev, ee) \ + (0x0001d000 + 0x4000 * (ee) + 0x80 * (ev)) +#define EV_CHTYPE_FMASK GENMASK(3, 0) +#define EV_EE_FMASK GENMASK(7, 4) +#define EV_EVCHID_FMASK GENMASK(15, 8) +#define EV_INTYPE_FMASK GENMASK(16, 16) +#define EV_CHSTATE_FMASK GENMASK(23, 20) +#define EV_ELEMENT_SIZE_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_E_CNTXT_1_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_1_OFFSET((ev), GSI_EE_AP) 
+#define GSI_EE_N_EV_CH_E_CNTXT_1_OFFSET(ev, ee) \ + (0x0001d004 + 0x4000 * (ee) + 0x80 * (ev)) +#define EV_R_LENGTH_FMASK GENMASK(15, 0) + +#define GSI_EV_CH_E_CNTXT_2_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_2_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_2_OFFSET(ev, ee) \ + (0x0001d008 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_3_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_3_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_3_OFFSET(ev, ee) \ + (0x0001d00c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_4_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_4_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_4_OFFSET(ev, ee) \ + (0x0001d010 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_8_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_8_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_8_OFFSET(ev, ee) \ + (0x0001d020 + 0x4000 * (ee) + 0x80 * (ev)) +#define MODT_FMASK GENMASK(15, 0) +#define MODC_FMASK GENMASK(23, 16) +#define MOD_CNT_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_E_CNTXT_9_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_9_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_9_OFFSET(ev, ee) \ + (0x0001d024 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_10_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_10_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_10_OFFSET(ev, ee) \ + (0x0001d028 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_11_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_11_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_11_OFFSET(ev, ee) \ + (0x0001d02c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_12_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_12_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_12_OFFSET(ev, ee) \ + (0x0001d030 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_13_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_13_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_13_OFFSET(ev, ee) \ + (0x0001d034 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_SCRATCH_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_SCRATCH_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_SCRATCH_0_OFFSET(ev, ee) \ + (0x0001d048 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_SCRATCH_1_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_SCRATCH_1_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_SCRATCH_1_OFFSET(ev, ee) \ + (0x0001d04c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_CH_C_DOORBELL_0_OFFSET(ch) \ + GSI_EE_N_CH_C_DOORBELL_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_DOORBELL_0_OFFSET(ch, ee) \ + (0x0001e000 + 0x4000 * (ee) + 0x08 * (ch)) + +#define GSI_EV_CH_E_DOORBELL_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_DOORBELL_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_DOORBELL_0_OFFSET(ev, ee) \ + (0x0001e100 + 0x4000 * (ee) + 0x08 * (ev)) + +#define GSI_GSI_STATUS_OFFSET \ + GSI_EE_N_GSI_STATUS_OFFSET(GSI_EE_AP) +#define GSI_EE_N_GSI_STATUS_OFFSET(ee) \ + (0x0001f000 + 0x4000 * (ee)) +#define ENABLED_FMASK GENMASK(0, 0) + +#define GSI_CH_CMD_OFFSET \ + GSI_EE_N_CH_CMD_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CH_CMD_OFFSET(ee) \ + (0x0001f008 + 0x4000 * (ee)) +#define CH_CHID_FMASK GENMASK(7, 0) +#define CH_OPCODE_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_CMD_OFFSET \ + GSI_EE_N_EV_CH_CMD_OFFSET(GSI_EE_AP) +#define GSI_EE_N_EV_CH_CMD_OFFSET(ee) \ + (0x0001f010 + 0x4000 * (ee)) +#define EV_CHID_FMASK GENMASK(7, 0) +#define EV_OPCODE_FMASK GENMASK(31, 24) + +#define GSI_GENERIC_CMD_OFFSET \ + GSI_EE_N_GENERIC_CMD_OFFSET(GSI_EE_AP) +#define GSI_EE_N_GENERIC_CMD_OFFSET(ee) \ + (0x0001f018 + 0x4000 * (ee)) +#define 
GENERIC_OPCODE_FMASK GENMASK(4, 0) +#define GENERIC_CHID_FMASK GENMASK(9, 5) +#define GENERIC_EE_FMASK GENMASK(13, 10) + +#define GSI_GSI_HW_PARAM_2_OFFSET \ + GSI_EE_N_GSI_HW_PARAM_2_OFFSET(GSI_EE_AP) +#define GSI_EE_N_GSI_HW_PARAM_2_OFFSET(ee) \ + (0x0001f040 + 0x4000 * (ee)) +#define IRAM_SIZE_FMASK GENMASK(2, 0) +#define IRAM_SIZE_ONE_KB_FVAL 0 +#define IRAM_SIZE_TWO_KB_FVAL 1 +/* The next two values are available for GSI v2.0 and above */ +#define IRAM_SIZE_TWO_N_HALF_KB_FVAL 2 +#define IRAM_SIZE_THREE_KB_FVAL 3 +#define NUM_CH_PER_EE_FMASK GENMASK(7, 3) +#define NUM_EV_PER_EE_FMASK GENMASK(12, 8) +#define GSI_CH_PEND_TRANSLATE_FMASK GENMASK(13, 13) +#define GSI_CH_FULL_LOGIC_FMASK GENMASK(14, 14) +/* Fields below are present for GSI v2.0 and above */ +#define GSI_USE_SDMA_FMASK GENMASK(15, 15) +#define GSI_SDMA_N_INT_FMASK GENMASK(18, 16) +#define GSI_SDMA_MAX_BURST_FMASK GENMASK(26, 19) +#define GSI_SDMA_N_IOVEC_FMASK GENMASK(29, 27) +/* Fields below are present for GSI v2.2 and above */ +#define GSI_USE_RD_WR_ENG_FMASK GENMASK(30, 30) +#define GSI_USE_INTER_EE_FMASK GENMASK(31, 31) + +#define GSI_CNTXT_TYPE_IRQ_OFFSET \ + GSI_EE_N_CNTXT_TYPE_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_TYPE_IRQ_OFFSET(ee) \ + (0x0001f080 + 0x4000 * (ee)) +#define CH_CTRL_FMASK GENMASK(0, 0) +#define EV_CTRL_FMASK GENMASK(1, 1) +#define GLOB_EE_FMASK GENMASK(2, 2) +#define IEOB_FMASK GENMASK(3, 3) +#define INTER_EE_CH_CTRL_FMASK GENMASK(4, 4) +#define INTER_EE_EV_CTRL_FMASK GENMASK(5, 5) +#define GENERAL_FMASK GENMASK(6, 6) + +#define GSI_CNTXT_TYPE_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFSET(ee) \ + (0x0001f088 + 0x4000 * (ee)) +#define MSK_CH_CTRL_FMASK GENMASK(0, 0) +#define MSK_EV_CTRL_FMASK GENMASK(1, 1) +#define MSK_GLOB_EE_FMASK GENMASK(2, 2) +#define MSK_IEOB_FMASK GENMASK(3, 3) +#define MSK_INTER_EE_CH_CTRL_FMASK GENMASK(4, 4) +#define MSK_INTER_EE_EV_CTRL_FMASK GENMASK(5, 5) +#define MSK_GENERAL_FMASK GENMASK(6, 6) +#define GSI_CNTXT_TYPE_IRQ_MSK_ALL GENMASK(6, 0) + +#define GSI_CNTXT_SRC_CH_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFSET(ee) \ + (0x0001f090 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFSET(ee) \ + (0x0001f094 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFSET(ee) \ + (0x0001f098 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET(ee) \ + (0x0001f09c + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFSET(ee) \ + (0x0001f0a0 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \ + (0x0001f0a4 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFSET(ee) \ + (0x0001f0b0 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET(ee) \ + (0x0001f0b8 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET \ + 
GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET(ee) \ + (0x0001f0c0 + 0x4000 * (ee)) + +#define GSI_CNTXT_GLOB_IRQ_STTS_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFSET(ee) \ + (0x0001f100 + 0x4000 * (ee)) +#define ERROR_INT_FMASK GENMASK(0, 0) +#define GP_INT1_FMASK GENMASK(1, 1) +#define GP_INT2_FMASK GENMASK(2, 2) +#define GP_INT3_FMASK GENMASK(3, 3) + +#define GSI_CNTXT_GLOB_IRQ_EN_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFSET(ee) \ + (0x0001f108 + 0x4000 * (ee)) +#define EN_ERROR_INT_FMASK GENMASK(0, 0) +#define EN_GP_INT1_FMASK GENMASK(1, 1) +#define EN_GP_INT2_FMASK GENMASK(2, 2) +#define EN_GP_INT3_FMASK GENMASK(3, 3) +#define GSI_CNTXT_GLOB_IRQ_ALL GENMASK(3, 0) + +#define GSI_CNTXT_GLOB_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFSET(ee) \ + (0x0001f110 + 0x4000 * (ee)) +#define CLR_ERROR_INT_FMASK GENMASK(0, 0) +#define CLR_GP_INT1_FMASK GENMASK(1, 1) +#define CLR_GP_INT2_FMASK GENMASK(2, 2) +#define CLR_GP_INT3_FMASK GENMASK(3, 3) + +#define GSI_CNTXT_GSI_IRQ_STTS_OFFSET \ + GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFSET(ee) \ + (0x0001f118 + 0x4000 * (ee)) +#define BREAK_POINT_FMASK GENMASK(0, 0) +#define BUS_ERROR_FMASK GENMASK(1, 1) +#define CMD_FIFO_OVRFLOW_FMASK GENMASK(2, 2) +#define MCS_STACK_OVRFLOW_FMASK GENMASK(3, 3) + +#define GSI_CNTXT_GSI_IRQ_EN_OFFSET \ + GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFSET(ee) \ + (0x0001f120 + 0x4000 * (ee)) +#define EN_BREAK_POINT_FMASK GENMASK(0, 0) +#define EN_BUS_ERROR_FMASK GENMASK(1, 1) +#define EN_CMD_FIFO_OVRFLOW_FMASK GENMASK(2, 2) +#define EN_MCS_STACK_OVRFLOW_FMASK GENMASK(3, 3) +#define GSI_CNTXT_GSI_IRQ_ALL GENMASK(3, 0) + +#define GSI_CNTXT_GSI_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFSET(ee) \ + (0x0001f128 + 0x4000 * (ee)) +#define CLR_BREAK_POINT_FMASK GENMASK(0, 0) +#define CLR_BUS_ERROR_FMASK GENMASK(1, 1) +#define CLR_CMD_FIFO_OVRFLOW_FMASK GENMASK(2, 2) +#define CLR_MCS_STACK_OVRFLOW_FMASK GENMASK(3, 3) + +#define GSI_CNTXT_INTSET_OFFSET \ + GSI_EE_N_CNTXT_INTSET_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_INTSET_OFFSET(ee) \ + (0x0001f180 + 0x4000 * (ee)) +#define INTYPE_FMASK GENMASK(0, 0) + +#define GSI_ERROR_LOG_OFFSET \ + GSI_EE_N_ERROR_LOG_OFFSET(GSI_EE_AP) +#define GSI_EE_N_ERROR_LOG_OFFSET(ee) \ + (0x0001f200 + 0x4000 * (ee)) +#define ERR_ARG3_FMASK GENMASK(3, 0) +#define ERR_ARG2_FMASK GENMASK(7, 4) +#define ERR_ARG1_FMASK GENMASK(11, 8) +#define ERR_CODE_FMASK GENMASK(15, 12) +#define ERR_VIRT_IDX_FMASK GENMASK(23, 19) +#define ERR_TYPE_FMASK GENMASK(27, 24) +#define ERR_EE_FMASK GENMASK(31, 28) + +#define GSI_ERROR_LOG_CLR_OFFSET \ + GSI_EE_N_ERROR_LOG_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_ERROR_LOG_CLR_OFFSET(ee) \ + (0x0001f210 + 0x4000 * (ee)) + +#define GSI_CNTXT_SCRATCH_0_OFFSET \ + GSI_EE_N_CNTXT_SCRATCH_0_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SCRATCH_0_OFFSET(ee) \ + (0x0001f400 + 0x4000 * (ee)) +#define INTER_EE_RESULT_FMASK GENMASK(2, 0) +#define GENERIC_EE_RESULT_FMASK GENMASK(7, 5) +#define GENERIC_EE_SUCCESS_FVAL 1 +#define GENERIC_EE_NO_RESOURCES_FVAL 7 +#define USB_MAX_PACKET_FMASK GENMASK(15, 15) /* 0: HS; 1: SS */ +#define MHI_BASE_CHANNEL_FMASK GENMASK(31, 24) + +#endif /* _GSI_REG_H_ */ From patchwork Fri Mar 6 04:28:22 2020 
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 190103 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B176FC3F2E4 for ; Fri, 6 Mar 2020 04:30:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 89766208CD for ; Fri, 6 Mar 2020 04:30:02 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="hGsIa/w9" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727196AbgCFEaC (ORCPT ); Thu, 5 Mar 2020 23:30:02 -0500 Received: from mail-yw1-f66.google.com ([209.85.161.66]:46129 "EHLO mail-yw1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727234AbgCFE2y (ORCPT ); Thu, 5 Mar 2020 23:28:54 -0500 Received: by mail-yw1-f66.google.com with SMTP id x5so215494ywb.13 for ; Thu, 05 Mar 2020 20:28:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=GSt5faLYca4AcJbso3LEBGMbkMJiDIuIJIY2653z1qs=; b=hGsIa/w9xyaXQuvVqqr1Tt9IVilT5o97JHORsVkBt9rUc7wYY80FtlyvbhTCVSUOuL TiJ/By1JoW5nQGhL50vOAwWo5e74CFB98LDdafw9LBZhr8rsqZTvZYTwoSDZFSvsjNx0 Nk4G15U2mPbnxiLCD5aSU7ypohfDieuHIwfC+8DL9um8gg5JQhHuLdpwN64rtSZIf+z3 +3e7Flk1T8wqhBBeUAXYFnZdd9rWk1SCukJxqt/+QZfar4+im3uHxwLp4LpTpeQTiqjT k9kfmq/7ifgktECKPelmF/APPxyJW4eUuhiMlMs2saL/eluZ3u4t4laHm5OsxYTLcm48 i4+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=GSt5faLYca4AcJbso3LEBGMbkMJiDIuIJIY2653z1qs=; b=IukJwYNMbhQ1AFmcIdqqBC2Yrz1Aqlb1Z0U/kOqgD7NrdJJzEzclwqRFDLVyKft1E7 4Z+3rxqxt32mPWT1u3HIE3lIXDjluJAwELF46VQlkW37kLw2wza5kzWDZdtatYRBHtuM hl9AGlBV3y6ehS1WEzZgKRoMaWFQcLi9xDyZwxS82l18qO+pOhJQ858yU+U4GjRMOEVi krpzThqm8KAGbVy4JYxLdaUHDSvYeymLtTP2TwjZgtA6aN2Eh7zsLwOAtJQS5FMxd/rx hMFPRbyBvEx5nWM7hC2PXL1NfBPrxDS0gur5FjqNlpFwx0cGQgTTDZVJHSlBdR3qa63k iMOg== X-Gm-Message-State: ANhLgQ33GNh5G4Ow4nokQBSr19J1BcGOAd6oNQcXeKwtgGMBgx/kc0tQ tH4EQDlQAHRJOwlqegS4sq8nlQ== X-Google-Smtp-Source: ADFU+vuVCmd1ZmbC0nZ0pKOZ/xTjMIMTHExzy6e43ZY3xCgHN7S0YlUHDZh2OLN854T744XCjvMYUw== X-Received: by 2002:a25:3b50:: with SMTP id i77mr1804839yba.523.1583468932275; Thu, 05 Mar 2020 20:28:52 -0800 (PST) Received: from presto.localdomain (c-73-185-129-58.hsd1.mn.comcast.net. 
[73.185.129.58]) by smtp.gmail.com with ESMTPSA id x2sm12581836ywa.32.2020.03.05.20.28.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 05 Mar 2020 20:28:51 -0800 (PST) From: Alex Elder To: David Miller , Arnd Bergmann Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 08/17] soc: qcom: ipa: IPA interface to GSI Date: Thu, 5 Mar 2020 22:28:22 -0600 Message-Id: <20200306042831.17827-9-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200306042831.17827-1-elder@linaro.org> References: <20200306042831.17827-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This patch provides interface functions supplied by the IPA layer that are called from the GSI layer. One function is called when a GSI transaction has completed. The others allow the GSI layer to inform the IPA layer when the hardware has been told it has new TREs to execute, and when the hardware has indicated transactions have completed. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_gsi.c | 54 +++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_gsi.h | 60 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 114 insertions(+) create mode 100644 drivers/net/ipa/ipa_gsi.c create mode 100644 drivers/net/ipa/ipa_gsi.h diff --git a/drivers/net/ipa/ipa_gsi.c b/drivers/net/ipa/ipa_gsi.c new file mode 100644 index 000000000000..dc4a5c2196ae --- /dev/null +++ b/drivers/net/ipa/ipa_gsi.c @@ -0,0 +1,54 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. 
+ */
+
+#include <linux/types.h>
+
+#include "gsi_trans.h"
+#include "ipa.h"
+#include "ipa_endpoint.h"
+#include "ipa_data.h"
+
+void ipa_gsi_trans_complete(struct gsi_trans *trans)
+{
+ struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+
+ ipa_endpoint_trans_complete(ipa->channel_map[trans->channel_id], trans);
+}
+
+void ipa_gsi_trans_release(struct gsi_trans *trans)
+{
+ struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+
+ ipa_endpoint_trans_release(ipa->channel_map[trans->channel_id], trans);
+}
+
+void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
+ u32 byte_count)
+{
+ struct ipa *ipa = container_of(gsi, struct ipa, gsi);
+ struct ipa_endpoint *endpoint;
+
+ endpoint = ipa->channel_map[channel_id];
+ if (endpoint->netdev)
+ netdev_sent_queue(endpoint->netdev, byte_count);
+}
+
+void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count,
+ u32 byte_count)
+{
+ struct ipa *ipa = container_of(gsi, struct ipa, gsi);
+ struct ipa_endpoint *endpoint;
+
+ endpoint = ipa->channel_map[channel_id];
+ if (endpoint->netdev)
+ netdev_completed_queue(endpoint->netdev, count, byte_count);
+}
+
+/* Indicate whether an endpoint config data entry is "empty" */
+bool ipa_gsi_endpoint_data_empty(const struct ipa_gsi_endpoint_data *data)
+{
+ return data->ee_id == GSI_EE_AP && !data->channel.tlv_count;
+}
diff --git a/drivers/net/ipa/ipa_gsi.h b/drivers/net/ipa/ipa_gsi.h new file mode 100644 index 000000000000..3cf18600c68e --- /dev/null +++ b/drivers/net/ipa/ipa_gsi.h @@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_GSI_TRANS_H_
+#define _IPA_GSI_TRANS_H_
+
+#include <linux/types.h>
+
+struct gsi_trans;
+
+/**
+ * ipa_gsi_trans_complete() - GSI transaction completion callback
+ * @trans: Transaction that has completed
+ *
+ * This is called from the GSI layer to notify the IPA layer that a
+ * transaction has completed.
+ */
+void ipa_gsi_trans_complete(struct gsi_trans *trans);
+
+/**
+ * ipa_gsi_trans_release() - GSI transaction release callback
+ * @trans: Transaction whose resources should be freed
+ *
+ * This is called from the GSI layer to notify the IPA layer that a
+ * transaction is about to be freed, so any resources associated
+ * with it should be released.
+ */
+void ipa_gsi_trans_release(struct gsi_trans *trans);
+
+/**
+ * ipa_gsi_channel_tx_queued() - GSI queued to hardware notification
+ * @gsi: GSI pointer
+ * @channel_id: Channel number
+ * @count: Number of transactions queued
+ * @byte_count: Number of bytes to transfer represented by transactions
+ *
+ * This is called from the GSI layer to notify the IPA layer that some
+ * number of transactions have been queued to hardware for execution.
+ */
+void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
+ u32 byte_count);
+/**
+ * ipa_gsi_channel_tx_completed() - GSI transaction completion callback
+ * @gsi: GSI pointer
+ * @channel_id: Channel number
+ * @count: Number of transactions completed since last report
+ * @byte_count: Number of bytes transferred represented by transactions
+ *
+ * This is called from the GSI layer to notify the IPA layer that the hardware
+ * has reported the completion of some number of transactions.
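+ *
+ * (Editor's note: together with ipa_gsi_channel_tx_queued(), this is what
+ * drives the netdev byte queue limit accounting in ipa_gsi.c -- roughly,
+ * queued reports map to netdev_sent_queue() and completed reports map to
+ * netdev_completed_queue().)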
+ */ +void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count, + u32 byte_count); + +bool ipa_gsi_endpoint_data_empty(const struct ipa_gsi_endpoint_data *data); + +#endif /* _IPA_GSI_TRANS_H_ */ From patchwork Fri Mar 6 04:28:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 190104 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D3DBEC3F2D9 for ; Fri, 6 Mar 2020 04:29:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 7DCED2073D for ; Fri, 6 Mar 2020 04:29:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="bB0hNbdU" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727501AbgCFE3u (ORCPT ); Thu, 5 Mar 2020 23:29:50 -0500 Received: from mail-yw1-f65.google.com ([209.85.161.65]:40150 "EHLO mail-yw1-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727302AbgCFE27 (ORCPT ); Thu, 5 Mar 2020 23:28:59 -0500 Received: by mail-yw1-f65.google.com with SMTP id t192so1078895ywe.7 for ; Thu, 05 Mar 2020 20:28:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=JjeH9m4qgkWZXplsg1h97DYvGGWmb9nEGzu4XNBBl/I=; b=bB0hNbdUoLtgtevamEs7Y9rRoSCPY+FgD8I+ERJ03onobS9rtxhle6y99Qzdzl94hb peLHTAqs6zovUhzUS60CZUB7HqvTSyK/DD3HUuOWk3iTS1Wf34h5iV0NL1mEtPyoMJ1f 81ml0FJ55TjnWqyeg3w+0wUkmlqm2lvttcTsLZ28TkCpU0/MsrPMt1yiuR+aWGJcvx+V A4hwFnd8tlxMDw3vzIMNj+vMDqKWSG5pmLXhziov/50KCndlHRWbEiBoJlYkM71tfN61 PftvaxWHahMxtoYt8o1OhgvdudaR+CtGPx3b6Rd+eYeaLu0fyan9msFAQDUduMt+R9I4 X9ZA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=JjeH9m4qgkWZXplsg1h97DYvGGWmb9nEGzu4XNBBl/I=; b=QTxk+oxaHidq5IvB93Ssh92c7vHJXNWXANJE5+hnAAKrpQPEBqmPwnF9OA0ho3Qi4i 5gvzFYuoVlSy7KoJa8RHJOUKRf61Xk6LgtTFixzPTp0OuHHj0C0L1DuWEjQGQTCBlQIt V0kT/7txmqynKMCLJ9HNp/+dJ3eJbihQGSWpdrhSKreDh5CFXEIpAvlbhCgvWsRsosmn IGdey1OWXJguCAUWofPlhm0XGQnNCf9D6FOeovYaqsXXXp/dsgK7i8h70GFH8Tudys79 1cZBsgP07BMu6k0taRuuHSdr+PpZ4j5E3kOTStpbeRUHMWYt2yaDYxixKy+uEyNU63Zk F2eQ== X-Gm-Message-State: ANhLgQ0WNytXsfoOp9ywhF1eZqQyFFFwouXji2Q/0lfE/BjiXowFu6XA S7hzKBwr0dliHI0HiBxzG4Q6Nw== X-Google-Smtp-Source: ADFU+vueAFjaBsSuI7/v0OjnVlU3UkhMtu7Zf2SG4BOuUFeeUG37juzfa62HWdxD5JZwztbDCbSpnA== X-Received: by 2002:a0d:d106:: with SMTP id t6mr2074868ywd.211.1583468938087; Thu, 05 Mar 2020 20:28:58 -0800 (PST) Received: from presto.localdomain (c-73-185-129-58.hsd1.mn.comcast.net. 
[73.185.129.58]) by smtp.gmail.com with ESMTPSA id x2sm12581836ywa.32.2020.03.05.20.28.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 05 Mar 2020 20:28:57 -0800 (PST) From: Alex Elder To: David Miller , Arnd Bergmann Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 11/17] soc: qcom: ipa: filter and routing tables Date: Thu, 5 Mar 2020 22:28:25 -0600 Message-Id: <20200306042831.17827-12-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200306042831.17827-1-elder@linaro.org> References: <20200306042831.17827-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This patch contains code implementing filter and routing tables for the IPA. A filter table allows rules to be used for filtering packets that depart the AP at an endpoint. A filter table entry contains the address of a set of rules to apply for each endpoint that supports filtering. A routing table allows packets to be routed to an endpoint based on packet metadata. It is also a table whose entries each contain the address of a set of routing rules to apply. Neither filtering nor routing is supported by the current driver. All table entries refer to rules that mean "no filtering" and "no routing." Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_table.c | 700 ++++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_table.h | 103 ++++++ 2 files changed, 803 insertions(+) create mode 100644 drivers/net/ipa/ipa_table.c create mode 100644 drivers/net/ipa/ipa_table.h diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c new file mode 100644 index 000000000000..9df2a3e78c98 --- /dev/null +++ b/drivers/net/ipa/ipa_table.c @@ -0,0 +1,700 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/bits.h>
+#include <linux/bitops.h>
+#include <linux/bitfield.h>
+#include <linux/io.h>
+#include <linux/build_bug.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+
+#include "ipa.h"
+#include "ipa_version.h"
+#include "ipa_endpoint.h"
+#include "ipa_table.h"
+#include "ipa_reg.h"
+#include "ipa_mem.h"
+#include "ipa_cmd.h"
+#include "gsi.h"
+#include "gsi_trans.h"
+
+/**
+ * DOC: IPA Filter and Route Tables
+ *
+ * The IPA has tables defined in its local shared memory that define filter
+ * and routing rules. Each entry in these tables contains a 64-bit DMA
+ * address that refers to DRAM (system memory) containing a rule definition.
+ * A rule consists of a contiguous block of 32-bit values terminated with
+ * 32 zero bits. A special "zero entry" rule consisting of 64 zero bits
+ * represents "no filtering" or "no routing," and is the reset value for
+ * filter or route table rules. Separate tables (both filter and route)
+ * are used for IPv4 and IPv6. Additionally, there can be hashed filter or
+ * route tables, which are used when a hash of message metadata matches.
+ * Hashed operation is not supported by all IPA hardware.
+ *
+ * Each filter rule is associated with an AP or modem TX endpoint, though
+ * not all TX endpoints support filtering.
The first 64-bit entry in a
+ * filter table is a bitmap indicating which endpoints have entries in
+ * the table. The low-order bit (bit 0) in this bitmap represents a
+ * special global filter, which applies to all traffic. This is not
+ * used in the current code. Bit 1, if set, indicates that there is an
+ * entry (i.e. a DMA address referring to a rule) for endpoint 0 in the
+ * table. Bit 2, if set, indicates there is an entry for endpoint 1,
+ * and so on. Space is set aside in IPA local memory to hold as many
+ * filter table entries as might be required, but typically they are not
+ * all used.
+ *
+ * The AP initializes all entries in a filter table to refer to a "zero
+ * entry". Once initialized, the modem and AP update the entries for
+ * endpoints they "own" directly. Currently the AP does not use the
+ * IPA filtering functionality.
+ *
+ * IPA Filter Table
+ * ----------------------
+ * endpoint bitmap | 0x0000000000000048 | Bits 3 and 6 set (endpoints 2 and 5)
+ * |--------------------|
+ * 1st endpoint | 0x000123456789abc0 | DMA address for modem endpoint 2 rule
+ * |--------------------|
+ * 2nd endpoint | 0x000123456789abf0 | DMA address for AP endpoint 5 rule
+ * |--------------------|
+ * (unused) | | (Unused space in filter table)
+ * |--------------------|
+ * . . .
+ * |--------------------|
+ * (unused) | | (Unused space in filter table)
+ * ----------------------
+ *
+ * The set of available route rules is divided about equally between the AP
+ * and modem. The AP initializes all entries in a route table to refer to
+ * a "zero entry". Once initialized, the modem and AP are responsible for
+ * updating their own entries. All entries in a route table are usable,
+ * though the AP currently does not use the IPA routing functionality.
+ *
+ * IPA Route Table
+ * ----------------------
+ * 1st modem route | 0x0001234500001100 | DMA address for first route rule
+ * |--------------------|
+ * 2nd modem route | 0x0001234500001140 | DMA address for second route rule
+ * |--------------------|
+ * . . .
+ * |--------------------|
+ * Last modem route| 0x0001234500002280 | DMA address for Nth route rule
+ * |--------------------|
+ * 1st AP route | 0x0001234500001100 | DMA address for route rule (N+1)
+ * |--------------------|
+ * 2nd AP route | 0x0001234500001140 | DMA address for next route rule
+ * |--------------------|
+ * . . .
+ * |--------------------|
+ * Last AP route | 0x0001234500002280 | DMA address for last route rule
+ * ----------------------
+ */
+
+/* IPA hardware constrains filter and route table alignment */
+#define IPA_TABLE_ALIGN 128 /* Minimum table alignment */
+
+/* Assignment of route table entries to the modem and AP */
+#define IPA_ROUTE_MODEM_MIN 0
+#define IPA_ROUTE_MODEM_COUNT 8
+
+#define IPA_ROUTE_AP_MIN IPA_ROUTE_MODEM_COUNT
+#define IPA_ROUTE_AP_COUNT \
+ (IPA_ROUTE_COUNT_MAX - IPA_ROUTE_MODEM_COUNT)
+
+/* Filter or route rules consist of a set of 32-bit values followed by a
+ * 32-bit all-zero rule list terminator. The "zero rule" is simply an
+ * all-zero rule followed by the list terminator.
+ */
+#define IPA_ZERO_RULE_SIZE (2 * sizeof(__le32))
+
+#ifdef IPA_VALIDATE
+
+/* Check things that can be validated at build time. */
+static void ipa_table_validate_build(void)
+{
+ /* IPA hardware accesses memory 128 bytes at a time. Addresses
+ * referred to by entries in filter and route tables must be
+ * aligned on 128-byte boundaries.
The only rule address
+ * ever used is the "zero rule", and it's aligned at the base
+ * of a coherent DMA allocation.
+ */
+ BUILD_BUG_ON(ARCH_DMA_MINALIGN % IPA_TABLE_ALIGN);
+
+ /* Filter and route tables contain DMA addresses that refer to
+ * filter or route rules. We use a fixed constant to represent
+ * the size of either type of table entry. Code in ipa_table_init()
+ * uses a pointer to __le64 to initialize table entries.
+ */
+ BUILD_BUG_ON(IPA_TABLE_ENTRY_SIZE != sizeof(dma_addr_t));
+ BUILD_BUG_ON(sizeof(dma_addr_t) != sizeof(__le64));
+
+ /* A "zero rule" is used to represent no filtering or no routing.
+ * It is a 64-bit block of zeroed memory. Code in ipa_table_init()
+ * assumes that it can be written using a pointer to __le64.
+ */
+ BUILD_BUG_ON(IPA_ZERO_RULE_SIZE != sizeof(__le64));
+
+ /* Impose a practical limit on the number of routes */
+ BUILD_BUG_ON(IPA_ROUTE_COUNT_MAX > 32);
+ /* The modem must be allotted at least one route table entry */
+ BUILD_BUG_ON(!IPA_ROUTE_MODEM_COUNT);
+ /* But it can't have more than what is available */
+ BUILD_BUG_ON(IPA_ROUTE_MODEM_COUNT > IPA_ROUTE_COUNT_MAX);
+}
+
+static bool
+ipa_table_valid_one(struct ipa *ipa, bool route, bool ipv6, bool hashed)
+{
+ struct device *dev = &ipa->pdev->dev;
+ const struct ipa_mem *mem;
+ u32 size;
+
+ if (route) {
+ if (ipv6)
+ mem = hashed ? &ipa->mem[IPA_MEM_V6_ROUTE_HASHED]
+ : &ipa->mem[IPA_MEM_V6_ROUTE];
+ else
+ mem = hashed ? &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]
+ : &ipa->mem[IPA_MEM_V4_ROUTE];
+ size = IPA_ROUTE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE;
+ } else {
+ if (ipv6)
+ mem = hashed ? &ipa->mem[IPA_MEM_V6_FILTER_HASHED]
+ : &ipa->mem[IPA_MEM_V6_FILTER];
+ else
+ mem = hashed ? &ipa->mem[IPA_MEM_V4_FILTER_HASHED]
+ : &ipa->mem[IPA_MEM_V4_FILTER];
+ size = (1 + IPA_FILTER_COUNT_MAX) * IPA_TABLE_ENTRY_SIZE;
+ }
+
+ if (!ipa_cmd_table_valid(ipa, mem, route, ipv6, hashed))
+ return false;
+
+ /* mem->size >= size would be sufficient, but we demand an exact match */
+ if (mem->size == size)
+ return true;
+
+ /* Hashed table regions can be zero size if hashing is not supported */
+ if (hashed && !mem->size)
+ return true;
+
+ dev_err(dev, "IPv%c %s%s table region size 0x%02x, expected 0x%02x\n",
+ ipv6 ? '6' : '4', hashed ? "hashed " : "",
+ route ? "route" : "filter", mem->size, size);
+
+ return false;
+}
+
+/* Verify the filter and route table memory regions are the expected size */
+bool ipa_table_valid(struct ipa *ipa)
+{
+ bool valid = true;
+
+ valid = valid && ipa_table_valid_one(ipa, false, false, false);
+ valid = valid && ipa_table_valid_one(ipa, false, false, true);
+ valid = valid && ipa_table_valid_one(ipa, false, true, false);
+ valid = valid && ipa_table_valid_one(ipa, false, true, true);
+ valid = valid && ipa_table_valid_one(ipa, true, false, false);
+ valid = valid && ipa_table_valid_one(ipa, true, false, true);
+ valid = valid && ipa_table_valid_one(ipa, true, true, false);
+ valid = valid && ipa_table_valid_one(ipa, true, true, true);
+
+ return valid;
+}
+
+bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_map)
+{
+ struct device *dev = &ipa->pdev->dev;
+ u32 count;
+
+ if (!filter_map) {
+ dev_err(dev, "at least one filtering endpoint is required\n");
+
+ return false;
+ }
+
+ count = hweight32(filter_map);
+ if (count > IPA_FILTER_COUNT_MAX) {
+ dev_err(dev, "too many filtering endpoints (%u, max %u)\n",
+ count, IPA_FILTER_COUNT_MAX);
+
+ return false;
+ }
+
+ return true;
+}
+
+#else /* !IPA_VALIDATE */
+static void ipa_table_validate_build(void)
+{
+}
+
+#endif /* !IPA_VALIDATE */
+
+/* Zero entry count means no table, so just return a 0 address */
+static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
+{
+ u32 skip;
+
+ if (!count)
+ return 0;
+
+/* assert(count <= max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX)); */
+
+ /* Skip over the zero rule and possibly the filter mask */
+ skip = filter_mask ? 1 : 2;
+
+ return ipa->table_addr + skip * sizeof(*ipa->table_virt);
+}
+
+static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
+ u16 first, u16 count, const struct ipa_mem *mem)
+{
+ struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+ dma_addr_t addr;
+ u32 offset;
+ u16 size;
+
+ /* Nothing to do if the table memory region is empty */
+ if (!mem->size)
+ return;
+
+ if (filter)
+ first++; /* skip over bitmap */
+
+ offset = mem->offset + first * IPA_TABLE_ENTRY_SIZE;
+ size = count * IPA_TABLE_ENTRY_SIZE;
+ addr = ipa_table_addr(ipa, false, count);
+
+ ipa_cmd_dma_shared_mem_add(trans, offset, size, addr, true);
+}
+
+/* Reset entries in a single filter table belonging to either the AP or
+ * modem to refer to the zero entry. The memory region supplied is one
+ * of the IPv4 or IPv6, non-hashed or hashed filter table regions.
+ */
+static int
+ipa_filter_reset_table(struct ipa *ipa, const struct ipa_mem *mem, bool modem)
+{
+ u32 ep_mask = ipa->filter_map;
+ u32 count = hweight32(ep_mask);
+ struct gsi_trans *trans;
+ enum gsi_ee_id ee_id;
+
+ if (!mem->size)
+ return 0;
+
+ trans = ipa_cmd_trans_alloc(ipa, count);
+ if (!trans) {
+ dev_err(&ipa->pdev->dev,
+ "no transaction for %s filter reset\n",
+ modem ? "modem" : "AP");
+ return -EBUSY;
+ }
+
+ ee_id = modem ? GSI_EE_MODEM : GSI_EE_AP;
+ while (ep_mask) {
+ u32 endpoint_id = __ffs(ep_mask);
+ struct ipa_endpoint *endpoint;
+
+ ep_mask ^= BIT(endpoint_id);
+
+ endpoint = &ipa->endpoint[endpoint_id];
+ if (endpoint->ee_id != ee_id)
+ continue;
+
+ ipa_table_reset_add(trans, true, endpoint_id, 1, mem);
+ }
+
+ gsi_trans_commit_wait(trans);
+
+ return 0;
+}
+
+/* Theoretically, each filter table could have more filter slots to
+ * update than the maximum number of commands in a transaction. So
+ * we do each table separately.
+ */
+static int ipa_filter_reset(struct ipa *ipa, bool modem)
+{
+ int ret;
+
+ ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V4_FILTER], modem);
+ if (ret)
+ return ret;
+
+ ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V4_FILTER_HASHED],
+ modem);
+ if (ret)
+ return ret;
+
+ ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V6_FILTER], modem);
+ if (ret)
+ return ret;
+
+ ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V6_FILTER_HASHED],
+ modem);
+
+ return ret;
+}
+
+/* The AP routes and modem routes are each contiguous within the
+ * table. We can update each table with a single command, and we
+ * won't exceed the per-transaction command limit.
+ */
+static int ipa_route_reset(struct ipa *ipa, bool modem)
+{
+ struct gsi_trans *trans;
+ u16 first;
+ u16 count;
+
+ trans = ipa_cmd_trans_alloc(ipa, 4);
+ if (!trans) {
+ dev_err(&ipa->pdev->dev,
+ "no transaction for %s route reset\n",
+ modem ? "modem" : "AP");
+ return -EBUSY;
+ }
+
+ if (modem) {
+ first = IPA_ROUTE_MODEM_MIN;
+ count = IPA_ROUTE_MODEM_COUNT;
+ } else {
+ first = IPA_ROUTE_AP_MIN;
+ count = IPA_ROUTE_AP_COUNT;
+ }
+
+ ipa_table_reset_add(trans, false, first, count,
+ &ipa->mem[IPA_MEM_V4_ROUTE]);
+ ipa_table_reset_add(trans, false, first, count,
+ &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]);
+
+ ipa_table_reset_add(trans, false, first, count,
+ &ipa->mem[IPA_MEM_V6_ROUTE]);
+ ipa_table_reset_add(trans, false, first, count,
+ &ipa->mem[IPA_MEM_V6_ROUTE_HASHED]);
+
+ gsi_trans_commit_wait(trans);
+
+ return 0;
+}
+
+void ipa_table_reset(struct ipa *ipa, bool modem)
+{
+ struct device *dev = &ipa->pdev->dev;
+ const char *ee_name;
+ int ret;
+
+ ee_name = modem ? "modem" : "AP";
+
+ /* Report errors, but reset filter and route tables */
+ ret = ipa_filter_reset(ipa, modem);
+ if (ret)
+ dev_err(dev, "error %d resetting filter table for %s\n",
+ ret, ee_name);
+
+ ret = ipa_route_reset(ipa, modem);
+ if (ret)
+ dev_err(dev, "error %d resetting route table for %s\n",
+ ret, ee_name);
+}
+
+int ipa_table_hash_flush(struct ipa *ipa)
+{
+ u32 offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
+ struct gsi_trans *trans;
+ u32 val;
+
+ /* IPA version 4.2 does not support hashed tables */
+ if (ipa->version == IPA_VERSION_4_2)
+ return 0;
+
+ trans = ipa_cmd_trans_alloc(ipa, 1);
+ if (!trans) {
+ dev_err(&ipa->pdev->dev, "no transaction for hash flush\n");
+ return -EBUSY;
+ }
+
+ val = IPV4_FILTER_HASH_FLUSH | IPV6_FILTER_HASH_FLUSH;
+ val |= IPV6_ROUTER_HASH_FLUSH | IPV4_ROUTER_HASH_FLUSH;
+
+ ipa_cmd_register_write_add(trans, offset, val, val, false);
+
+ gsi_trans_commit_wait(trans);
+
+ return 0;
+}
+
+static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
+ enum ipa_cmd_opcode opcode,
+ const struct ipa_mem *mem,
+ const struct ipa_mem *hash_mem)
+{
+ struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+ dma_addr_t hash_addr;
+ dma_addr_t addr;
+ u16 hash_count;
+ u16 hash_size;
+ u16 count;
+ u16 size;
+
+ /* The number of filtering endpoints determines the number of entries
+ * in the filter table. The hashed and non-hashed filter tables
+ * will have the same number of entries. The size of the route
+ * table region determines the number of entries it has.
+ */
+ if (filter) {
+ count = hweight32(ipa->filter_map);
+ hash_count = hash_mem->size ?
count : 0;
+ } else {
+ count = mem->size / IPA_TABLE_ENTRY_SIZE;
+ hash_count = hash_mem->size / IPA_TABLE_ENTRY_SIZE;
+ }
+ size = count * IPA_TABLE_ENTRY_SIZE;
+ hash_size = hash_count * IPA_TABLE_ENTRY_SIZE;
+
+ addr = ipa_table_addr(ipa, filter, count);
+ hash_addr = ipa_table_addr(ipa, filter, hash_count);
+
+ ipa_cmd_table_init_add(trans, opcode, size, mem->offset, addr,
+ hash_size, hash_mem->offset, hash_addr);
+}
+
+int ipa_table_setup(struct ipa *ipa)
+{
+ struct gsi_trans *trans;
+
+ trans = ipa_cmd_trans_alloc(ipa, 4);
+ if (!trans) {
+ dev_err(&ipa->pdev->dev, "no transaction for table setup\n");
+ return -EBUSY;
+ }
+
+ ipa_table_init_add(trans, false, IPA_CMD_IP_V4_ROUTING_INIT,
+ &ipa->mem[IPA_MEM_V4_ROUTE],
+ &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]);
+
+ ipa_table_init_add(trans, false, IPA_CMD_IP_V6_ROUTING_INIT,
+ &ipa->mem[IPA_MEM_V6_ROUTE],
+ &ipa->mem[IPA_MEM_V6_ROUTE_HASHED]);
+
+ ipa_table_init_add(trans, true, IPA_CMD_IP_V4_FILTER_INIT,
+ &ipa->mem[IPA_MEM_V4_FILTER],
+ &ipa->mem[IPA_MEM_V4_FILTER_HASHED]);
+
+ ipa_table_init_add(trans, true, IPA_CMD_IP_V6_FILTER_INIT,
+ &ipa->mem[IPA_MEM_V6_FILTER],
+ &ipa->mem[IPA_MEM_V6_FILTER_HASHED]);
+
+ gsi_trans_commit_wait(trans);
+
+ return 0;
+}
+
+void ipa_table_teardown(struct ipa *ipa)
+{
+ /* Nothing to do */ /* XXX Maybe reset the tables? */
+}
+
+/**
+ * ipa_filter_tuple_zero() - Zero an endpoint's hashed filter tuple
+ * @endpoint: Endpoint whose filter hash tuple should be zeroed
+ *
+ * Endpoint must be for the AP (not modem) and support filtering. Updates
+ * the filter hash values without changing route ones.
+ */
+static void ipa_filter_tuple_zero(struct ipa_endpoint *endpoint)
+{
+ u32 endpoint_id = endpoint->endpoint_id;
+ u32 offset;
+ u32 val;
+
+ offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(endpoint_id);
+
+ val = ioread32(endpoint->ipa->reg_virt + offset);
+
+ /* Zero all filter-related fields, preserving the rest */
+ val = u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
+
+ iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+static void ipa_filter_config(struct ipa *ipa, bool modem)
+{
+ enum gsi_ee_id ee_id = modem ? GSI_EE_MODEM : GSI_EE_AP;
+ u32 ep_mask = ipa->filter_map;
+
+ /* IPA version 4.2 has no hashed filter tables */
+ if (ipa->version == IPA_VERSION_4_2)
+ return;
+
+ while (ep_mask) {
+ u32 endpoint_id = __ffs(ep_mask);
+ struct ipa_endpoint *endpoint;
+
+ ep_mask ^= BIT(endpoint_id);
+
+ endpoint = &ipa->endpoint[endpoint_id];
+ if (endpoint->ee_id == ee_id)
+ ipa_filter_tuple_zero(endpoint);
+ }
+}
+
+static void ipa_filter_deconfig(struct ipa *ipa, bool modem)
+{
+ /* Nothing to do */
+}
+
+static bool ipa_route_id_modem(u32 route_id)
+{
+ return route_id >= IPA_ROUTE_MODEM_MIN &&
+ route_id <= IPA_ROUTE_MODEM_MIN + IPA_ROUTE_MODEM_COUNT - 1;
+}
+
+/**
+ * ipa_route_tuple_zero() - Zero a hashed route table entry tuple
+ * @ipa: IPA pointer
+ * @route_id: Route table entry whose hash tuple should be zeroed
+ *
+ * Updates the route hash values without changing filter ones.
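+ *
+ * (Editor's note: the filter and route hash tuples for a given ID live in
+ * the same ENDP_FILTER_ROUTER_HSH_CFG_N register; this helper zeroes only
+ * the route-related fields, just as ipa_filter_tuple_zero() above zeroes
+ * only the filter-related ones.)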
+ */
+static void ipa_route_tuple_zero(struct ipa *ipa, u32 route_id)
+{
+ u32 offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(route_id);
+ u32 val;
+
+ val = ioread32(ipa->reg_virt + offset);
+
+ /* Zero all route-related fields, preserving the rest */
+ val = u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
+
+ iowrite32(val, ipa->reg_virt + offset);
+}
+
+static void ipa_route_config(struct ipa *ipa, bool modem)
+{
+ u32 route_id;
+
+ /* IPA version 4.2 has no hashed route tables */
+ if (ipa->version == IPA_VERSION_4_2)
+ return;
+
+ for (route_id = 0; route_id < IPA_ROUTE_COUNT_MAX; route_id++)
+ if (ipa_route_id_modem(route_id) == modem)
+ ipa_route_tuple_zero(ipa, route_id);
+}
+
+static void ipa_route_deconfig(struct ipa *ipa, bool modem)
+{
+ /* Nothing to do */
+}
+
+void ipa_table_config(struct ipa *ipa)
+{
+ ipa_filter_config(ipa, false);
+ ipa_filter_config(ipa, true);
+ ipa_route_config(ipa, false);
+ ipa_route_config(ipa, true);
+}
+
+void ipa_table_deconfig(struct ipa *ipa)
+{
+ ipa_route_deconfig(ipa, true);
+ ipa_route_deconfig(ipa, false);
+ ipa_filter_deconfig(ipa, true);
+ ipa_filter_deconfig(ipa, false);
+}
+
+/*
+ * Initialize a coherent DMA allocation containing initialized filter and
+ * route table data. This is used when initializing or resetting the IPA
+ * filter or route table.
+ *
+ * The first entry in a filter table contains a bitmap indicating which
+ * endpoints contain entries in the table. In addition to that first entry,
+ * there are at most IPA_FILTER_COUNT_MAX entries that follow. Filter table
+ * entries are 64 bits wide, and (other than the bitmap) contain the DMA
+ * address of a filter rule. A "zero rule" indicates no filtering, and
+ * consists of 64 bits of zeroes. When a filter table is initialized (or
+ * reset) its entries are made to refer to the zero rule.
+ *
+ * Each entry in a route table is the DMA address of a routing rule. For
+ * routing there is also a 64-bit "zero rule" that means no routing, and
+ * when a route table is initialized or reset, its entries are made to refer
+ * to the zero rule. The zero rule is shared for route and filter tables.
+ *
+ * Note that the IPA hardware requires a filter or route rule address to be
+ * aligned on a 128-byte boundary. The coherent DMA buffer we allocate here
+ * has a minimum alignment, and we place the zero rule at the base of that
+ * allocated space. In ipa_table_init() we verify the minimum DMA allocation
+ * meets our requirement.
+ *
+ * +-------------------+
+ * --> | zero rule |
+ * / |-------------------|
+ * | | filter mask |
+ * |\ |-------------------|
+ * | ---- zero rule address | \
+ * |\ |-------------------| |
+ * | ---- zero rule address | | IPA_FILTER_COUNT_MAX
+ * | |-------------------| > or IPA_ROUTE_COUNT_MAX,
+ * | ... | whichever is greater
+ * \ |-------------------| |
+ * ---- zero rule address | /
+ * +-------------------+
+ */
+int ipa_table_init(struct ipa *ipa)
+{
+ u32 count = max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX);
+ struct device *dev = &ipa->pdev->dev;
+ dma_addr_t addr;
+ __le64 le_addr;
+ __le64 *virt;
+ size_t size;
+
+ ipa_table_validate_build();
+
+ size = IPA_ZERO_RULE_SIZE + (1 + count) * IPA_TABLE_ENTRY_SIZE;
+ virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
+ if (!virt)
+ return -ENOMEM;
+
+ ipa->table_virt = virt;
+ ipa->table_addr = addr;
+
+ /* First slot is the zero rule */
+ *virt++ = 0;
+
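+ /* (Editor's illustration: with the example bitmap shown in the DOC
+ * comment above -- endpoints 2 and 5 supporting filtering, so
+ * ipa->filter_map is 0x24 -- the value written just below is
+ * 0x24 << 1 == 0x48.)
+ */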
+ /* Next is the filter table bitmap. The "soft" bitmap value
+ * must be converted to the hardware representation by shifting
+ * it left one position. (Bit 0 represents global filtering,
+ * which is possible but not used.)
+ */
+ *virt++ = cpu_to_le64((u64)ipa->filter_map << 1);
+
+ /* All the rest contain the DMA address of the zero rule */
+ le_addr = cpu_to_le64(addr);
+ while (count--)
+ *virt++ = le_addr;
+
+ return 0;
+}
+
+void ipa_table_exit(struct ipa *ipa)
+{
+ u32 count = max_t(u32, 1 + IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX);
+ struct device *dev = &ipa->pdev->dev;
+ size_t size;
+
+ size = IPA_ZERO_RULE_SIZE + (1 + count) * IPA_TABLE_ENTRY_SIZE;
+
+ dma_free_coherent(dev, size, ipa->table_virt, ipa->table_addr);
+ ipa->table_addr = 0;
+ ipa->table_virt = NULL;
+}
diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h new file mode 100644 index 000000000000..64ea0221441a --- /dev/null +++ b/drivers/net/ipa/ipa_table.h @@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_TABLE_H_
+#define _IPA_TABLE_H_
+
+#include <linux/types.h>
+
+struct ipa;
+
+/* The size of a filter or route table entry */
+#define IPA_TABLE_ENTRY_SIZE sizeof(__le64) /* Holds a DMA address */
+
+/* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
+#define IPA_FILTER_COUNT_MAX 14
+
+/* The maximum number of route table entries (IPv4, IPv6; hashed or not) */
+#define IPA_ROUTE_COUNT_MAX 15
+
+#ifdef IPA_VALIDATE
+
+/**
+ * ipa_table_valid() - Validate route and filter table memory regions
+ * @ipa: IPA pointer
+ *
+ * Return: true if all regions are valid, false otherwise
+ */
+bool ipa_table_valid(struct ipa *ipa);
+
+/**
+ * ipa_filter_map_valid() - Validate a filter table endpoint bitmap
+ * @ipa: IPA pointer
+ * @filter_mask: Filter table endpoint bitmap to validate
+ *
+ * Return: true if the bitmap is valid, false otherwise
+ */
+bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask);
+
+#else /* !IPA_VALIDATE */
+
+static inline bool ipa_table_valid(struct ipa *ipa)
+{
+ return true;
+}
+
+static inline bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask)
+{
+ return true;
+}
+
+#endif /* !IPA_VALIDATE */
+
+/**
+ * ipa_table_reset() - Reset filter and route table entries to "none"
+ * @ipa: IPA pointer
+ * @modem: Whether to reset modem or AP entries
+ */
+void ipa_table_reset(struct ipa *ipa, bool modem);
+
+/**
+ * ipa_table_hash_flush() - Synchronize hashed filter and route updates
+ * @ipa: IPA pointer
+ */
+int ipa_table_hash_flush(struct ipa *ipa);
+
+/**
+ * ipa_table_setup() - Set up filter and route tables
+ * @ipa: IPA pointer
+ */
+int ipa_table_setup(struct ipa *ipa);
+
+/**
+ * ipa_table_teardown() - Inverse of ipa_table_setup()
+ * @ipa: IPA pointer
+ */
+void ipa_table_teardown(struct ipa *ipa);
+
+/**
+ * ipa_table_config() - Configure filter and route tables
+ * @ipa: IPA pointer
+ */
+void ipa_table_config(struct ipa *ipa);
+
+/**
+ * ipa_table_deconfig() - Inverse of ipa_table_config()
+ * @ipa: IPA pointer
+ */
+void ipa_table_deconfig(struct ipa *ipa);
+
+/**
+ * ipa_table_init() - Do early initialization of filter and route tables
+ * @ipa: IPA pointer
+ */
+int ipa_table_init(struct ipa *ipa);
+
+/**
+ * ipa_table_exit() - Inverse of ipa_table_init()
+ * @ipa: IPA pointer
+ */
+void ipa_table_exit(struct ipa *ipa);
+
+#endif /* _IPA_TABLE_H_ */
From patchwork Fri Mar 6 04:28:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 190107 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E43CFC3F2DD for ; Fri, 6 Mar 2020 04:29:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 951FA21741 for ; Fri, 6 Mar 2020 04:29:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="u+F4WutO" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727347AbgCFE3C (ORCPT ); Thu, 5 Mar 2020 23:29:02 -0500 Received: from mail-yw1-f65.google.com ([209.85.161.65]:43626 "EHLO mail-yw1-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727322AbgCFE3C (ORCPT ); Thu, 5 Mar 2020 23:29:02 -0500 Received: by mail-yw1-f65.google.com with SMTP id p69so1062734ywh.10 for ; Thu, 05 Mar 2020 20:29:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=IzoreZnPLlVH00cFIY4SNmfl6p/pO+mkW+KZWyCgFB8=; b=u+F4WutOIsIfpB1ZmpOkobURmDSpG0cF17vLJjxX7/dTuegS7pEvNYEPYMRbOsqWbY ZCoKSuuYWS1LkQGjHvdOG8Ljf7gWbmbOMCpQZZc42Ymj/5xnpRXKfnFHMkS8PS01el+J ctJkcy/I1q8oJydW7FFq0a7XXX+6JQX67hPT8WSDCbkPV8q6jukbZOZS283G5wPE6Ph9 FUUgriFkhPA8DSmdiYTMHgSGSPYKwkxIZ9rpreZkoxn/Jzx7P3baC7HOZqgEIumi/bPD Vy3kIW+iV/InALE2omsLdEUKtQ2DJYr5qI6h8FVvK2CZqSOPT5sPeav78GMgNJAx8qCi fURQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=IzoreZnPLlVH00cFIY4SNmfl6p/pO+mkW+KZWyCgFB8=; b=A7cCC9SkJMlGc8t0hkAwy2t9qqQJl7nR6GDxphDviIQ7YBGqklyT3uls6cWKjI+jzj qanVFO4cvw6Af80xamsTGPCCRrC+CzJdlEa3sgd8MRZSHGJCgbfUTOsYenBNaSmPPAlZ zojvFJUjlcd2ojbOrDYdhWQbe0A+3WeiVrk/M1d9fZR1eFW69LyuUVjRKXyVx/gcrJzP oT7qjyRaeRb/SM65yPtelTOCqHe/OKQkyXfMzXWHdUEGaz4lmnIi+tQ0cS06yPZIRRpa wIbdo3UIwJ54IDbuuSlDjb2Ki3c3r/5gEHDxBYjKdvfQIzLHj251PtbQ5wEWOPDbnAk5 c/ow== X-Gm-Message-State: ANhLgQ1fmiNiJbsII47+BM9KSu0GFPx08Bjg4bJ6y8NQlqEGyhdt+RYx BeLpnMAKc9WuN/5UAucgrAWaKw== X-Google-Smtp-Source: ADFU+vtfR+mq/xkP6/E2/WyffkwCF2aYuuiPLvstDrSVyIDMSpgw4DQCAwGDZxQkwZWjL6vO335ACw== X-Received: by 2002:a25:b49:: with SMTP id 70mr1959333ybl.138.1583468940060; Thu, 05 Mar 2020 20:29:00 -0800 (PST) Received: from presto.localdomain (c-73-185-129-58.hsd1.mn.comcast.net. 
[73.185.129.58]) by smtp.gmail.com with ESMTPSA id x2sm12581836ywa.32.2020.03.05.20.28.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 05 Mar 2020 20:28:59 -0800 (PST) From: Alex Elder To: David Miller , Arnd Bergmann Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 12/17] soc: qcom: ipa: immediate commands Date: Thu, 5 Mar 2020 22:28:26 -0600 Message-Id: <20200306042831.17827-13-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200306042831.17827-1-elder@linaro.org> References: <20200306042831.17827-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org One TX endpoint (per EE) is used for issuing immediate commands to the IPA. These commands request activities beyond simple data transfers to be done by the IPA hardware. For example, the IPA is able to manage routing packets among endpoints, and immediate commands are used to configure tables used for that routing. Immediate commands are built on top of GSI transactions. They are different from normal transfers (in that they use a special endpoint, and their "payload" is interpreted differently), so separate functions are used to issue immediate command transactions. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_cmd.c | 680 ++++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_cmd.h | 195 +++++++++++ 2 files changed, 875 insertions(+) create mode 100644 drivers/net/ipa/ipa_cmd.c create mode 100644 drivers/net/ipa/ipa_cmd.h diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c new file mode 100644 index 000000000000..d226b858742d --- /dev/null +++ b/drivers/net/ipa/ipa_cmd.c @@ -0,0 +1,680 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/bitfield.h>
+#include <linux/dma-direction.h>
+
+#include "gsi.h"
+#include "gsi_trans.h"
+#include "ipa.h"
+#include "ipa_endpoint.h"
+#include "ipa_table.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+
+/**
+ * DOC: IPA Immediate Commands
+ *
+ * The AP command TX endpoint is used to issue immediate commands to the IPA.
+ * An immediate command is generally used to request the IPA do something
+ * other than data transfer to another endpoint.
+ *
+ * Immediate commands are represented by GSI transactions just like other
+ * transfer requests, and each is represented by a single GSI TRE. Each
+ * immediate command has a well-defined format, having a payload of a known
+ * length. This allows the transfer element's length field to be used to
+ * hold an immediate command's opcode. The payload for a command resides
+ * in DRAM and is described by a single scatterlist entry in its transaction.
+ * Commands do not require a transaction completion callback. To commit
+ * an immediate command transaction, either gsi_trans_commit_wait() or
+ * gsi_trans_commit_wait_timeout() is used.
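+ *
+ * A minimal usage sketch (editor's illustration, following the pattern
+ * used by callers such as ipa_table_hash_flush() in ipa_table.c):
+ *
+ *	trans = ipa_cmd_trans_alloc(ipa, 1);
+ *	if (trans) {
+ *		ipa_cmd_register_write_add(trans, offset, val, mask, false);
+ *		gsi_trans_commit_wait(trans);
+ *	}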
+ */ + +/* Some commands can wait until indicated pipeline stages are clear */ +enum pipeline_clear_options { + pipeline_clear_hps = 0, + pipeline_clear_src_grp = 1, + pipeline_clear_full = 2, +}; + +/* IPA_CMD_IP_V{4,6}_{FILTER,ROUTING}_INIT */ + +struct ipa_cmd_hw_ip_fltrt_init { + __le64 hash_rules_addr; + __le64 flags; + __le64 nhash_rules_addr; +}; + +/* Field masks for ipa_cmd_hw_ip_fltrt_init structure fields */ +#define IP_FLTRT_FLAGS_HASH_SIZE_FMASK GENMASK_ULL(11, 0) +#define IP_FLTRT_FLAGS_HASH_ADDR_FMASK GENMASK_ULL(27, 12) +#define IP_FLTRT_FLAGS_NHASH_SIZE_FMASK GENMASK_ULL(39, 28) +#define IP_FLTRT_FLAGS_NHASH_ADDR_FMASK GENMASK_ULL(55, 40) + +/* IPA_CMD_HDR_INIT_LOCAL */ + +struct ipa_cmd_hw_hdr_init_local { + __le64 hdr_table_addr; + __le32 flags; + __le32 reserved; +}; + +/* Field masks for ipa_cmd_hw_hdr_init_local structure fields */ +#define HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK GENMASK(11, 0) +#define HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK GENMASK(27, 12) + +/* IPA_CMD_REGISTER_WRITE */ + +/* For IPA v4.0+, this opcode gets modified with pipeline clear options */ + +#define REGISTER_WRITE_OPCODE_SKIP_CLEAR_FMASK GENMASK(8, 8) +#define REGISTER_WRITE_OPCODE_CLEAR_OPTION_FMASK GENMASK(10, 9) + +struct ipa_cmd_register_write { + __le16 flags; /* Unused/reserved for IPA v3.5.1 */ + __le16 offset; + __le32 value; + __le32 value_mask; + __le32 clear_options; /* Unused/reserved for IPA v4.0+ */ +}; + +/* Field masks for ipa_cmd_register_write structure fields */ +/* The next field is present for IPA v4.0 and above */ +#define REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK GENMASK(14, 11) +/* The next field is present for IPA v3.5.1 only */ +#define REGISTER_WRITE_FLAGS_SKIP_CLEAR_FMASK GENMASK(15, 15) + +/* The next field and its values are present for IPA v3.5.1 only */ +#define REGISTER_WRITE_CLEAR_OPTIONS_FMASK GENMASK(1, 0) + +/* IPA_CMD_IP_PACKET_INIT */ + +struct ipa_cmd_ip_packet_init { + u8 dest_endpoint; + u8 reserved[7]; +}; + +/* Field masks for ipa_cmd_ip_packet_init dest_endpoint field */ +#define IPA_PACKET_INIT_DEST_ENDPOINT_FMASK GENMASK(4, 0) + +/* IPA_CMD_DMA_TASK_32B_ADDR */ + +/* This opcode gets modified with a DMA operation count */ + +#define DMA_TASK_32B_ADDR_OPCODE_COUNT_FMASK GENMASK(15, 8) + +struct ipa_cmd_hw_dma_task_32b_addr { + __le16 flags; + __le16 size; + __le32 addr; + __le16 packet_size; + u8 reserved[6]; +}; + +/* Field masks for ipa_cmd_hw_dma_task_32b_addr flags field */ +#define DMA_TASK_32B_ADDR_FLAGS_SW_RSVD_FMASK GENMASK(10, 0) +#define DMA_TASK_32B_ADDR_FLAGS_CMPLT_FMASK GENMASK(11, 11) +#define DMA_TASK_32B_ADDR_FLAGS_EOF_FMASK GENMASK(12, 12) +#define DMA_TASK_32B_ADDR_FLAGS_FLSH_FMASK GENMASK(13, 13) +#define DMA_TASK_32B_ADDR_FLAGS_LOCK_FMASK GENMASK(14, 14) +#define DMA_TASK_32B_ADDR_FLAGS_UNLOCK_FMASK GENMASK(15, 15) + +/* IPA_CMD_DMA_SHARED_MEM */ + +/* For IPA v4.0+, this opcode gets modified with pipeline clear options */ + +#define DMA_SHARED_MEM_OPCODE_SKIP_CLEAR_FMASK GENMASK(8, 8) +#define DMA_SHARED_MEM_OPCODE_CLEAR_OPTION_FMASK GENMASK(10, 9) + +struct ipa_cmd_hw_dma_mem_mem { + __le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */ + __le16 size; + __le16 local_addr; + __le16 flags; + __le64 system_addr; +}; + +/* Flag allowing atomic clear of target region after reading data (v4.0+)*/ +#define DMA_SHARED_MEM_CLEAR_AFTER_READ GENMASK(15, 15) + +/* Field masks for ipa_cmd_hw_dma_mem_mem structure fields */ +#define DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK GENMASK(0, 0) +/* The next two fields are present for IPA v3.5.1 
only. */
+#define DMA_SHARED_MEM_FLAGS_SKIP_CLEAR_FMASK		GENMASK(1, 1)
+#define DMA_SHARED_MEM_FLAGS_CLEAR_OPTIONS_FMASK	GENMASK(3, 2)
+
+/* IPA_CMD_IP_PACKET_TAG_STATUS */
+
+struct ipa_cmd_ip_packet_tag_status {
+	__le64 tag;
+};
+
+#define IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 16)
+
+/* Immediate command payload */
+union ipa_cmd_payload {
+	struct ipa_cmd_hw_ip_fltrt_init table_init;
+	struct ipa_cmd_hw_hdr_init_local hdr_init_local;
+	struct ipa_cmd_register_write register_write;
+	struct ipa_cmd_ip_packet_init ip_packet_init;
+	struct ipa_cmd_hw_dma_task_32b_addr dma_task_32b_addr;
+	struct ipa_cmd_hw_dma_mem_mem dma_shared_mem;
+	struct ipa_cmd_ip_packet_tag_status ip_packet_tag_status;
+};
+
+static void ipa_cmd_validate_build(void)
+{
+	/* The sizes of the filter and route tables need to fit into fields
+	 * in the ipa_cmd_hw_ip_fltrt_init structure.  Although hashed tables
+	 * might not be used, non-hashed and hashed tables have the same
+	 * maximum size.  IPv4 and IPv6 filter tables have the same number
+	 * of entries, as do IPv4 and IPv6 route tables.
+	 */
+#define TABLE_SIZE	(TABLE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE)
+#define TABLE_COUNT_MAX	max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX)
+	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK));
+	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
+#undef TABLE_COUNT_MAX
+#undef TABLE_SIZE
+}
+
+#ifdef IPA_VALIDATE
+
+/* Validate a memory region holding a table */
+bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+			 bool route, bool ipv6, bool hashed)
+{
+	struct device *dev = &ipa->pdev->dev;
+	u32 offset_max;
+
+	offset_max = hashed ? field_max(IP_FLTRT_FLAGS_HASH_ADDR_FMASK)
+			    : field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
+	if (mem->offset > offset_max ||
+	    ipa->mem_offset > offset_max - mem->offset) {
+		dev_err(dev, "IPv%c %s%s table region offset too large "
+			     "(0x%04x + 0x%04x > 0x%04x)\n",
+			ipv6 ? '6' : '4', hashed ? "hashed " : "",
+			route ? "route" : "filter",
+			ipa->mem_offset, mem->offset, offset_max);
+		return false;
+	}
+
+	if (mem->offset > ipa->mem_size ||
+	    mem->size > ipa->mem_size - mem->offset) {
+		dev_err(dev, "IPv%c %s%s table region out of range "
+			     "(0x%04x + 0x%04x > 0x%04x)\n",
+			ipv6 ? '6' : '4', hashed ? "hashed " : "",
+			route ? "route" : "filter",
+			mem->offset, mem->size, ipa->mem_size);
+		return false;
+	}
+
+	return true;
+}
+
+/* Validate the memory region that holds headers */
+static bool ipa_cmd_header_valid(struct ipa *ipa)
+{
+	const struct ipa_mem *mem = &ipa->mem[IPA_MEM_MODEM_HEADER];
+	struct device *dev = &ipa->pdev->dev;
+	u32 offset_max;
+	u32 size_max;
+	u32 size;
+
+	offset_max = field_max(HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
+	if (mem->offset > offset_max ||
+	    ipa->mem_offset > offset_max - mem->offset) {
+		dev_err(dev, "header table region offset too large "
+			     "(0x%04x + 0x%04x > 0x%04x)\n",
+			ipa->mem_offset, mem->offset, offset_max);
+		return false;
+	}
+
+	size_max = field_max(HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
+	size = ipa->mem[IPA_MEM_MODEM_HEADER].size;
+	size += ipa->mem[IPA_MEM_AP_HEADER].size;
+	if (size > size_max) {
+		dev_err(dev, "header table region size too large "
+			     "(0x%04x > 0x%04x)\n", size, size_max);
+		return false;
+	}
+
+	if (mem->offset > ipa->mem_size || size > ipa->mem_size - mem->offset) {
+		dev_err(dev, "header table region out of range "
+			     "(0x%04x + 0x%04x > 0x%04x)\n",
+			mem->offset, size, ipa->mem_size);
+		return false;
+	}
+
+	return true;
+}
+
+/* Indicate whether an offset can be used with a register_write command */
+static bool ipa_cmd_register_write_offset_valid(struct ipa *ipa,
+						const char *name, u32 offset)
+{
+	struct ipa_cmd_register_write *payload;
+	struct device *dev = &ipa->pdev->dev;
+	u32 offset_max;
+	u32 bit_count;
+
+	/* The maximum offset in a register_write immediate command depends
+	 * on the version of IPA.  IPA v3.5.1 supports a 16 bit offset, but
+	 * newer versions allow some additional high-order bits.
+	 */
+	bit_count = BITS_PER_BYTE * sizeof(payload->offset);
+	if (ipa->version != IPA_VERSION_3_5_1)
+		bit_count += hweight32(REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK);
+	BUILD_BUG_ON(bit_count > 32);
+	offset_max = ~0U >> (32 - bit_count);
+
+	if (offset > offset_max || ipa->mem_offset > offset_max - offset) {
+		dev_err(dev, "%s offset too large (0x%04x + 0x%04x > 0x%04x)\n",
+			name, ipa->mem_offset, offset, offset_max);
+		return false;
+	}
+
+	return true;
+}
+
+/* Check whether offsets passed to register_write are valid */
+static bool ipa_cmd_register_write_valid(struct ipa *ipa)
+{
+	const char *name;
+	u32 offset;
+
+	offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
+	name = "filter/route hash flush";
+	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
+		return false;
+
+	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT);
+	name = "maximal endpoint status";
+	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
+		return false;
+
+	return true;
+}
+
+bool ipa_cmd_data_valid(struct ipa *ipa)
+{
+	if (!ipa_cmd_header_valid(ipa))
+		return false;
+
+	if (!ipa_cmd_register_write_valid(ipa))
+		return false;
+
+	return true;
+}
+
+#endif /* IPA_VALIDATE */
+
+int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct device *dev = channel->gsi->dev;
+	int ret;
+
+	/* This is as good a place as any to validate build constants */
+	ipa_cmd_validate_build();
+
+	/* Even though command payloads are allocated one at a time,
+	 * a single transaction can require up to tlv_count of them,
+	 * so we treat them as if that many can be allocated at once.
+ */ + ret = gsi_trans_pool_init_dma(dev, &trans_info->cmd_pool, + sizeof(union ipa_cmd_payload), + tre_max, channel->tlv_count); + if (ret) + return ret; + + /* Each TRE needs a command info structure */ + ret = gsi_trans_pool_init(&trans_info->info_pool, + sizeof(struct ipa_cmd_info), + tre_max, channel->tlv_count); + if (ret) + gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool); + + return ret; +} + +void ipa_cmd_pool_exit(struct gsi_channel *channel) +{ + struct gsi_trans_info *trans_info = &channel->trans_info; + struct device *dev = channel->gsi->dev; + + gsi_trans_pool_exit(&trans_info->info_pool); + gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool); +} + +static union ipa_cmd_payload * +ipa_cmd_payload_alloc(struct ipa *ipa, dma_addr_t *addr) +{ + struct gsi_trans_info *trans_info; + struct ipa_endpoint *endpoint; + + endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]; + trans_info = &ipa->gsi.channel[endpoint->channel_id].trans_info; + + return gsi_trans_pool_alloc_dma(&trans_info->cmd_pool, addr); +} + +/* If hash_size is 0, hash_offset and hash_addr ignored. */ +void ipa_cmd_table_init_add(struct gsi_trans *trans, + enum ipa_cmd_opcode opcode, u16 size, u32 offset, + dma_addr_t addr, u16 hash_size, u32 hash_offset, + dma_addr_t hash_addr) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum dma_data_direction direction = DMA_TO_DEVICE; + struct ipa_cmd_hw_ip_fltrt_init *payload; + union ipa_cmd_payload *cmd_payload; + dma_addr_t payload_addr; + u64 val; + + /* Record the non-hash table offset and size */ + offset += ipa->mem_offset; + val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK); + val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK); + + /* The hash table offset and address are zero if its size is 0 */ + if (hash_size) { + /* Record the hash table offset and size */ + hash_offset += ipa->mem_offset; + val |= u64_encode_bits(hash_offset, + IP_FLTRT_FLAGS_HASH_ADDR_FMASK); + val |= u64_encode_bits(hash_size, + IP_FLTRT_FLAGS_HASH_SIZE_FMASK); + } + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->table_init; + + /* Fill in all offsets and sizes and the non-hash table address */ + if (hash_size) + payload->hash_rules_addr = cpu_to_le64(hash_addr); + payload->flags = cpu_to_le64(val); + payload->nhash_rules_addr = cpu_to_le64(addr); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +/* Initialize header space in IPA-local memory */ +void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size, + dma_addr_t addr) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL; + enum dma_data_direction direction = DMA_TO_DEVICE; + struct ipa_cmd_hw_hdr_init_local *payload; + union ipa_cmd_payload *cmd_payload; + dma_addr_t payload_addr; + u32 flags; + + offset += ipa->mem_offset; + + /* With this command we tell the IPA where in its local memory the + * header tables reside. The content of the buffer provided is + * also written via DMA into that space. The IPA hardware owns + * the table, but the AP must initialize it. 
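+	 *
+	 * A hypothetical example (values illustrative only): to set up a
+	 * 128-byte header table at offset 0 in IPA-local memory from a
+	 * zero-filled DMA buffer at addr, a caller would issue:
+	 *
+	 *	ipa_cmd_hdr_init_local_add(trans, 0, 128, addr);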
+ */ + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->hdr_init_local; + + payload->hdr_table_addr = cpu_to_le64(addr); + flags = u32_encode_bits(size, HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK); + flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK); + payload->flags = cpu_to_le32(flags); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value, + u32 mask, bool clear_full) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + struct ipa_cmd_register_write *payload; + union ipa_cmd_payload *cmd_payload; + u32 opcode = IPA_CMD_REGISTER_WRITE; + dma_addr_t payload_addr; + u32 clear_option; + u32 options; + u16 flags; + + /* pipeline_clear_src_grp is not used */ + clear_option = clear_full ? pipeline_clear_full : pipeline_clear_hps; + + if (ipa->version != IPA_VERSION_3_5_1) { + u16 offset_high; + u32 val; + + /* Opcode encodes pipeline clear options */ + /* SKIP_CLEAR is always 0 (don't skip pipeline clear) */ + val = u16_encode_bits(clear_option, + REGISTER_WRITE_OPCODE_CLEAR_OPTION_FMASK); + opcode |= val; + + /* Extract the high 4 bits from the offset */ + offset_high = (u16)u32_get_bits(offset, GENMASK(19, 16)); + offset &= (1 << 16) - 1; + + /* Extract the top 4 bits and encode it into the flags field */ + flags = u16_encode_bits(offset_high, + REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK); + options = 0; /* reserved */ + + } else { + flags = 0; /* SKIP_CLEAR flag is always 0 */ + options = u16_encode_bits(clear_option, + REGISTER_WRITE_CLEAR_OPTIONS_FMASK); + } + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->register_write; + + payload->flags = cpu_to_le16(flags); + payload->offset = cpu_to_le16((u16)offset); + payload->value = cpu_to_le32(value); + payload->value_mask = cpu_to_le32(mask); + payload->clear_options = cpu_to_le32(options); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + DMA_NONE, opcode); +} + +/* Skip IP packet processing on the next data transfer on a TX channel */ +static void ipa_cmd_ip_packet_init_add(struct gsi_trans *trans, u8 endpoint_id) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_INIT; + enum dma_data_direction direction = DMA_TO_DEVICE; + struct ipa_cmd_ip_packet_init *payload; + union ipa_cmd_payload *cmd_payload; + dma_addr_t payload_addr; + + /* assert(endpoint_id < + field_max(IPA_PACKET_INIT_DEST_ENDPOINT_FMASK)); */ + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->ip_packet_init; + + payload->dest_endpoint = u8_encode_bits(endpoint_id, + IPA_PACKET_INIT_DEST_ENDPOINT_FMASK); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +/* Use a 32-bit DMA command to zero a block of memory */ +void ipa_cmd_dma_task_32b_addr_add(struct gsi_trans *trans, u16 size, + dma_addr_t addr, bool toward_ipa) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum ipa_cmd_opcode opcode = IPA_CMD_DMA_TASK_32B_ADDR; + struct ipa_cmd_hw_dma_task_32b_addr *payload; + union ipa_cmd_payload *cmd_payload; + enum dma_data_direction direction; + dma_addr_t payload_addr; + u16 flags; + + /* assert(addr <= U32_MAX); */ + addr &= GENMASK_ULL(31, 0); + + /* The opcode encodes the number of DMA operations in the high byte */ + opcode |= u16_encode_bits(1, 
DMA_TASK_32B_ADDR_OPCODE_COUNT_FMASK); + + direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE; + + /* complete: 0 = don't interrupt; eof: 0 = don't assert eot */ + flags = DMA_TASK_32B_ADDR_FLAGS_FLSH_FMASK; + /* lock: 0 = don't lock endpoint; unlock: 0 = don't unlock */ + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->dma_task_32b_addr; + + payload->flags = cpu_to_le16(flags); + payload->size = cpu_to_le16(size); + payload->addr = cpu_to_le32((u32)addr); + payload->packet_size = cpu_to_le16(size); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +/* Use a DMA command to read or write a block of IPA-resident memory */ +void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset, u16 size, + dma_addr_t addr, bool toward_ipa) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM; + struct ipa_cmd_hw_dma_mem_mem *payload; + union ipa_cmd_payload *cmd_payload; + enum dma_data_direction direction; + dma_addr_t payload_addr; + u16 flags; + + /* size and offset must fit in 16 bit fields */ + /* assert(size > 0 && size <= U16_MAX); */ + /* assert(offset <= U16_MAX && ipa->mem_offset <= U16_MAX - offset); */ + + offset += ipa->mem_offset; + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->dma_shared_mem; + + /* payload->clear_after_read was reserved prior to IPA v4.0. It's + * never needed for current code, so it's 0 regardless of version. + */ + payload->size = cpu_to_le16(size); + payload->local_addr = cpu_to_le16(offset); + /* payload->flags: + * direction: 0 = write to IPA, 1 read from IPA + * Starting at v4.0 these are reserved; either way, all zero: + * pipeline clear: 0 = wait for pipeline clear (don't skip) + * clear_options: 0 = pipeline_clear_hps + * Instead, for v4.0+ these are encoded in the opcode. But again + * since both values are 0 we won't bother OR'ing them in. + */ + flags = toward_ipa ? 0 : DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK; + payload->flags = cpu_to_le16(flags); + payload->system_addr = cpu_to_le64(addr); + + direction = toward_ipa ? 
DMA_TO_DEVICE : DMA_FROM_DEVICE; + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +static void ipa_cmd_ip_tag_status_add(struct gsi_trans *trans, u64 tag) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_TAG_STATUS; + enum dma_data_direction direction = DMA_TO_DEVICE; + struct ipa_cmd_ip_packet_tag_status *payload; + union ipa_cmd_payload *cmd_payload; + dma_addr_t payload_addr; + + /* assert(tag <= field_max(IP_PACKET_TAG_STATUS_TAG_FMASK)); */ + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->ip_packet_tag_status; + + payload->tag = u64_encode_bits(tag, IP_PACKET_TAG_STATUS_TAG_FMASK); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +/* Issue a small command TX data transfer */ +static void ipa_cmd_transfer_add(struct gsi_trans *trans, u16 size) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum dma_data_direction direction = DMA_TO_DEVICE; + enum ipa_cmd_opcode opcode = IPA_CMD_NONE; + union ipa_cmd_payload *payload; + dma_addr_t payload_addr; + + /* assert(size <= sizeof(*payload)); */ + + /* Just transfer a zero-filled payload structure */ + payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +void ipa_cmd_tag_process_add(struct gsi_trans *trans) +{ + ipa_cmd_register_write_add(trans, 0, 0, 0, true); +#if 1 + /* Reference these functions to avoid a compile error */ + (void)ipa_cmd_ip_packet_init_add; + (void)ipa_cmd_ip_tag_status_add; + (void) ipa_cmd_transfer_add; +#else + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + struct gsi_endpoint *endpoint; + + endpoint = ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]; + ipa_cmd_ip_packet_init_add(trans, endpoint->endpoint_id); + + ipa_cmd_ip_tag_status_add(trans, 0xcba987654321); + + ipa_cmd_transfer_add(trans, 4); +#endif +} + +/* Returns the number of commands required for the tag process */ +u32 ipa_cmd_tag_process_count(void) +{ + return 4; +} + +static struct ipa_cmd_info * +ipa_cmd_info_alloc(struct ipa_endpoint *endpoint, u32 tre_count) +{ + struct gsi_channel *channel; + + channel = &endpoint->ipa->gsi.channel[endpoint->channel_id]; + + return gsi_trans_pool_alloc(&channel->trans_info.info_pool, tre_count); +} + +/* Allocate a transaction for the command TX endpoint */ +struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count) +{ + struct ipa_endpoint *endpoint; + struct gsi_trans *trans; + + endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]; + + trans = gsi_channel_trans_alloc(&ipa->gsi, endpoint->channel_id, + tre_count, DMA_NONE); + if (trans) + trans->info = ipa_cmd_info_alloc(endpoint, tre_count); + + return trans; +} diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h new file mode 100644 index 000000000000..4917525b3a47 --- /dev/null +++ b/drivers/net/ipa/ipa_cmd.h @@ -0,0 +1,195 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ +#ifndef _IPA_CMD_H_ +#define _IPA_CMD_H_ + +#include +#include + +struct sk_buff; +struct scatterlist; + +struct ipa; +struct ipa_mem; +struct gsi_trans; +struct gsi_channel; + +/** + * enum ipa_cmd_opcode: IPA immediate commands + * + * All immediate commands are issued using the AP command TX endpoint. 
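+ *
+ * (For IPA v4.0 and above, some opcodes are modified before being
+ * issued; for example, ipa_cmd_register_write_add() encodes a
+ * pipeline clear option into the register write opcode itself,
+ * roughly:
+ *
+ *	opcode |= u16_encode_bits(clear_option,
+ *				  REGISTER_WRITE_OPCODE_CLEAR_OPTION_FMASK);
+ *
+ * so the values below are starting points, not always the final
+ * opcode used.)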
+ * The numeric values here are the opcodes for IPA v3.5.1 hardware.
+ *
+ * IPA_CMD_NONE is a special (invalid) value that's used to indicate
+ * a request is *not* an immediate command.
+ */
+enum ipa_cmd_opcode {
+	IPA_CMD_NONE			= 0,
+	IPA_CMD_IP_V4_FILTER_INIT	= 3,
+	IPA_CMD_IP_V6_FILTER_INIT	= 4,
+	IPA_CMD_IP_V4_ROUTING_INIT	= 7,
+	IPA_CMD_IP_V6_ROUTING_INIT	= 8,
+	IPA_CMD_HDR_INIT_LOCAL		= 9,
+	IPA_CMD_REGISTER_WRITE		= 12,
+	IPA_CMD_IP_PACKET_INIT		= 16,
+	IPA_CMD_DMA_TASK_32B_ADDR	= 17,
+	IPA_CMD_DMA_SHARED_MEM		= 19,
+	IPA_CMD_IP_PACKET_TAG_STATUS	= 20,
+};
+
+/**
+ * struct ipa_cmd_info - information needed for an IPA immediate command
+ *
+ * @opcode:	The command opcode.
+ * @direction:	Direction of data transfer for DMA commands
+ */
+struct ipa_cmd_info {
+	enum ipa_cmd_opcode opcode;
+	enum dma_data_direction direction;
+};
+
+#ifdef IPA_VALIDATE
+
+/**
+ * ipa_cmd_table_valid() - Validate a memory region holding a table
+ * @ipa:	- IPA pointer
+ * @mem:	- IPA memory region descriptor
+ * @route:	- Whether the region holds a route or filter table
+ * @ipv6:	- Whether the table is for IPv6 or IPv4
+ * @hashed:	- Whether the table is hashed or non-hashed
+ *
+ * @Return:	true if region is valid, false otherwise
+ */
+bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+			 bool route, bool ipv6, bool hashed);
+
+/**
+ * ipa_cmd_data_valid() - Validate that command-related configuration is valid
+ * @ipa:	- IPA pointer
+ *
+ * @Return:	true if assumptions required for command are valid
+ */
+bool ipa_cmd_data_valid(struct ipa *ipa);
+
+#else /* !IPA_VALIDATE */
+
+static inline bool ipa_cmd_table_valid(struct ipa *ipa,
+				       const struct ipa_mem *mem, bool route,
+				       bool ipv6, bool hashed)
+{
+	return true;
+}
+
+static inline bool ipa_cmd_data_valid(struct ipa *ipa)
+{
+	return true;
+}
+
+#endif /* !IPA_VALIDATE */
+
+/**
+ * ipa_cmd_pool_init() - initialize command channel pools
+ * @channel:	AP->IPA command TX GSI channel pointer
+ * @tre_max:	Number of pool elements to allocate
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max);
+
+/**
+ * ipa_cmd_pool_exit() - Inverse of ipa_cmd_pool_init()
+ * @channel:	AP->IPA command TX GSI channel pointer
+ */
+void ipa_cmd_pool_exit(struct gsi_channel *channel);
+
+/**
+ * ipa_cmd_table_init_add() - Add table init command to a transaction
+ * @trans:	GSI transaction
+ * @opcode:	IPA immediate command opcode
+ * @size:	Size of non-hashed routing table memory
+ * @offset:	Offset in IPA shared memory of non-hashed routing table memory
+ * @addr:	DMA address of non-hashed table data to write
+ * @hash_size:	Size of hashed routing table memory
+ * @hash_offset: Offset in IPA shared memory of hashed routing table memory
+ * @hash_addr:	DMA address of hashed table data to write
+ *
+ * If hash_size is 0, hash_offset and hash_addr are ignored.
+ */
+void ipa_cmd_table_init_add(struct gsi_trans *trans, enum ipa_cmd_opcode opcode,
+			    u16 size, u32 offset, dma_addr_t addr,
+			    u16 hash_size, u32 hash_offset,
+			    dma_addr_t hash_addr);
+
+/**
+ * ipa_cmd_hdr_init_local_add() - Add a header init command to a transaction
+ * @trans:	GSI transaction
+ * @offset:	Offset of header memory in IPA local space
+ * @size:	Size of header memory
+ * @addr:	DMA address of buffer to be written from
+ *
+ * Defines and fills the location in IPA memory to use for headers.
+ */
+void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
+				dma_addr_t addr);
+
+/**
+ * ipa_cmd_register_write_add() - Add a register write command to a transaction
+ * @trans:	GSI transaction
+ * @offset:	Offset of register to be written
+ * @value:	Value to be written
+ * @mask:	Mask of bits in register to update with bits from value
+ * @clear_full: Pipeline clear option; true means full pipeline clear
+ */
+void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
+				u32 mask, bool clear_full);
+
+/**
+ * ipa_cmd_dma_task_32b_addr_add() - Add a 32-bit DMA command to a transaction
+ * @trans:	GSI transaction
+ * @size:	Number of bytes of memory to be transferred
+ * @addr:	DMA address of buffer to be read into or written from
+ * @toward_ipa:	true means write to IPA memory; false means read
+ */
+void ipa_cmd_dma_task_32b_addr_add(struct gsi_trans *trans, u16 size,
+				   dma_addr_t addr, bool toward_ipa);
+
+/**
+ * ipa_cmd_dma_shared_mem_add() - Add a DMA memory command to a transaction
+ * @trans:	GSI transaction
+ * @offset:	Offset of IPA memory to be read or written
+ * @size:	Number of bytes of memory to be transferred
+ * @addr:	DMA address of buffer to be read into or written from
+ * @toward_ipa:	true means write to IPA memory; false means read
+ */
+void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset,
+				u16 size, dma_addr_t addr, bool toward_ipa);
+
+/**
+ * ipa_cmd_tag_process_add() - Add IPA tag process commands to a transaction
+ * @trans:	GSI transaction
+ */
+void ipa_cmd_tag_process_add(struct gsi_trans *trans);
+
+/**
+ * ipa_cmd_tag_process_count() - Number of commands in a tag process
+ *
+ * @Return:	The number of elements to allocate in a transaction
+ *		to hold tag process commands
+ */
+u32 ipa_cmd_tag_process_count(void);
+
+/**
+ * ipa_cmd_trans_alloc() - Allocate a transaction for the command TX endpoint
+ * @ipa:	IPA pointer
+ * @tre_count:	Number of elements in the transaction
+ *
+ * @Return:	A GSI transaction structure, or a null pointer if all
+ *		available transactions are in use
+ */
+struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count);
+
+#endif /* _IPA_CMD_H_ */

From patchwork Fri Mar 6 04:28:28 2020
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 190105
From: Alex Elder
To: David Miller , Arnd Bergmann
Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 14/17] soc: qcom: ipa: AP/modem communications
Date: Thu, 5 Mar 2020 22:28:28 -0600
Message-Id: <20200306042831.17827-15-elder@linaro.org>
In-Reply-To: <20200306042831.17827-1-elder@linaro.org>
References: <20200306042831.17827-1-elder@linaro.org>

This patch implements two forms of out-of-band communication between
the AP and modem.

 - QMI is a mechanism that allows clients running on the AP to
   interact with services running on the modem (and vice-versa).
   The AP IPA driver uses QMI to communicate with the corresponding
   IPA driver resident on the modem, to agree on parameters used
   with the IPA hardware and to ensure both sides are ready before
   entering operational mode.

 - SMP2P is a more primitive mechanism available for the modem and
   AP to communicate with each other.  It provides a means for either
   the AP or modem to interrupt the other, and furthermore, to provide
   32 bits worth of information.
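   (The SMP2P code in this series drives such signaling through the
   qcom_smem_state interface.  As a rough sketch of the update pattern,
   where the bit names are illustrative assumptions rather than names
   taken from this patch, one bit carries a value and a second marks
   that value valid:

	mask = BIT(value_bit);
	qcom_smem_state_update_bits(state, mask, enabled ? mask : 0);
	mask = BIT(valid_bit);
	qcom_smem_state_update_bits(state, mask, mask);
   )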
   The IPA driver uses SMP2P to tell the modem what the state of the
   IPA clock was in the event of a crash.  This allows the modem to
   safely access the IPA hardware (or avoid doing so) when a crash
   occurs, for example, to access information within the IPA hardware.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/ipa_qmi.c     | 538 +++++++++++++++++++++++++++
 drivers/net/ipa/ipa_qmi.h     |  41 +++
 drivers/net/ipa/ipa_qmi_msg.c | 663 ++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_qmi_msg.h | 252 +++++++++++++
 drivers/net/ipa/ipa_smp2p.c   | 335 +++++++++++++++++
 drivers/net/ipa/ipa_smp2p.h   |  48 +++
 6 files changed, 1877 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_qmi.c
 create mode 100644 drivers/net/ipa/ipa_qmi.h
 create mode 100644 drivers/net/ipa/ipa_qmi_msg.c
 create mode 100644 drivers/net/ipa/ipa_qmi_msg.h
 create mode 100644 drivers/net/ipa/ipa_smp2p.c
 create mode 100644 drivers/net/ipa/ipa_smp2p.h

diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
new file mode 100644
index 000000000000..5090f0f923ad
--- /dev/null
+++ b/drivers/net/ipa/ipa_qmi.c
@@ -0,0 +1,538 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "ipa.h"
+#include "ipa_endpoint.h"
+#include "ipa_mem.h"
+#include "ipa_table.h"
+#include "ipa_modem.h"
+#include "ipa_qmi_msg.h"
+
+/**
+ * DOC: AP/Modem QMI Handshake
+ *
+ * The AP and modem perform a "handshake" at initialization time to ensure
+ * both sides know when everything is ready to begin operating.  The AP
+ * driver (this code) uses two QMI handles (endpoints) for this: a client
+ * using a service on the modem, and a server to service modem requests
+ * (and to supply an indication message from the AP).  Once the handshake
+ * is complete, the AP and modem may begin IPA operation.  This occurs
+ * only when the AP IPA driver, modem IPA driver, and IPA microcontroller
+ * are ready.
+ *
+ * The QMI service on the modem expects to receive an INIT_DRIVER request from
+ * the AP, which contains parameters used by the modem during initialization.
+ * The AP sends this request as soon as it knows the modem side service
+ * is available.  The modem responds to this request, and if this response
+ * contains a success result, the AP knows the modem IPA driver is ready.
+ *
+ * The modem is responsible for loading firmware on the IPA microcontroller.
+ * This occurs only during the initial modem boot.  The modem sends a
+ * separate DRIVER_INIT_COMPLETE request to the AP to report that the
+ * microcontroller is ready.  The AP may assume the microcontroller is
+ * ready, and remains so (even if the modem reboots), once it has received
+ * and responded to this request.
+ *
+ * There is one final exchange involved in the handshake.  It is required
+ * on the initial modem boot, but optional (though in practice it does
+ * occur) on subsequent boots.  The modem expects to receive a final
+ * INIT_COMPLETE indication message from the AP when it is about to begin
+ * its normal operation.  The AP will only send this message after it has
+ * received and responded to an INDICATION_REGISTER request from the modem.
+ *
+ * So in summary:
+ * - Whenever the AP learns the modem has booted and its IPA QMI service
+ *   is available, it sends an INIT_DRIVER request to the modem.  The
+ *   modem supplies a success response when it is ready to operate.
+ * - On the initial boot, the modem sets up the IPA microcontroller, and + * sends a DRIVER_INIT_COMPLETE request to the AP when this is done. + * - When the modem is ready to receive an INIT_COMPLETE indication from + * the AP, it sends an INDICATION_REGISTER request to the AP. + * - On the initial modem boot, everything is ready when: + * - AP has received a success response from its INIT_DRIVER request + * - AP has responded to a DRIVER_INIT_COMPLETE request + * - AP has responded to an INDICATION_REGISTER request from the modem + * - AP has sent an INIT_COMPLETE indication to the modem + * - On subsequent modem boots, everything is ready when: + * - AP has received a success response from its INIT_DRIVER request + * - AP has responded to a DRIVER_INIT_COMPLETE request + * - The INDICATION_REGISTER request and INIT_COMPLETE indication are + * optional for non-initial modem boots, and have no bearing on the + * determination of when things are "ready" + */ + +#define IPA_HOST_SERVICE_SVC_ID 0x31 +#define IPA_HOST_SVC_VERS 1 +#define IPA_HOST_SERVICE_INS_ID 1 + +#define IPA_MODEM_SERVICE_SVC_ID 0x31 +#define IPA_MODEM_SERVICE_INS_ID 2 +#define IPA_MODEM_SVC_VERS 1 + +#define QMI_INIT_DRIVER_TIMEOUT 60000 /* A minute in milliseconds */ + +/* Send an INIT_COMPLETE indication message to the modem */ +static void ipa_server_init_complete(struct ipa_qmi *ipa_qmi) +{ + struct ipa *ipa = container_of(ipa_qmi, struct ipa, qmi); + struct qmi_handle *qmi = &ipa_qmi->server_handle; + struct sockaddr_qrtr *sq = &ipa_qmi->modem_sq; + struct ipa_init_complete_ind ind = { }; + int ret; + + ind.status.result = QMI_RESULT_SUCCESS_V01; + ind.status.error = QMI_ERR_NONE_V01; + + ret = qmi_send_indication(qmi, sq, IPA_QMI_INIT_COMPLETE, + IPA_QMI_INIT_COMPLETE_IND_SZ, + ipa_init_complete_ind_ei, &ind); + if (ret) + dev_err(&ipa->pdev->dev, + "error %d sending init complete indication\n", ret); + else + ipa_qmi->indication_sent = true; +} + +/* If requested (and not already sent) send the INIT_COMPLETE indication */ +static void ipa_qmi_indication(struct ipa_qmi *ipa_qmi) +{ + if (!ipa_qmi->indication_requested) + return; + + if (ipa_qmi->indication_sent) + return; + + ipa_server_init_complete(ipa_qmi); +} + +/* Determine whether everything is ready to start normal operation. + * We know everything (else) is ready when we know the IPA driver on + * the modem is ready, and the microcontroller is ready. + * + * When the modem boots (or reboots), the handshake sequence starts + * with the AP sending the modem an INIT_DRIVER request. Within + * that request, the uc_loaded flag will be zero (false) for an + * initial boot, non-zero (true) for a subsequent (SSR) boot. + */ +static void ipa_qmi_ready(struct ipa_qmi *ipa_qmi) +{ + struct ipa *ipa = container_of(ipa_qmi, struct ipa, qmi); + int ret; + + /* We aren't ready until the modem and microcontroller are */ + if (!ipa_qmi->modem_ready || !ipa_qmi->uc_ready) + return; + + /* Send the indication message if it was requested */ + ipa_qmi_indication(ipa_qmi); + + /* The initial boot requires us to send the indication. */ + if (ipa_qmi->initial_boot) { + if (!ipa_qmi->indication_sent) + return; + + /* The initial modem boot completed successfully */ + ipa_qmi->initial_boot = false; + } + + /* We're ready. Start up normal operation */ + ipa = container_of(ipa_qmi, struct ipa, qmi); + ret = ipa_modem_start(ipa); + if (ret) + dev_err(&ipa->pdev->dev, "error %d starting modem\n", ret); +} + +/* All QMI clients from the modem node are gone (modem shut down or crashed). 
*/ +static void ipa_server_bye(struct qmi_handle *qmi, unsigned int node) +{ + struct ipa_qmi *ipa_qmi; + + ipa_qmi = container_of(qmi, struct ipa_qmi, server_handle); + + /* The modem client and server go away at the same time */ + memset(&ipa_qmi->modem_sq, 0, sizeof(ipa_qmi->modem_sq)); + + /* initial_boot doesn't change when modem reboots */ + /* uc_ready doesn't change when modem reboots */ + ipa_qmi->modem_ready = false; + ipa_qmi->indication_requested = false; + ipa_qmi->indication_sent = false; +} + +static struct qmi_ops ipa_server_ops = { + .bye = ipa_server_bye, +}; + +/* Callback function to handle an INDICATION_REGISTER request message from the + * modem. This informs the AP that the modem is now ready to receive the + * INIT_COMPLETE indication message. + */ +static void ipa_server_indication_register(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq, + struct qmi_txn *txn, + const void *decoded) +{ + struct ipa_indication_register_rsp rsp = { }; + struct ipa_qmi *ipa_qmi; + struct ipa *ipa; + int ret; + + ipa_qmi = container_of(qmi, struct ipa_qmi, server_handle); + ipa = container_of(ipa_qmi, struct ipa, qmi); + + rsp.rsp.result = QMI_RESULT_SUCCESS_V01; + rsp.rsp.error = QMI_ERR_NONE_V01; + + ret = qmi_send_response(qmi, sq, txn, IPA_QMI_INDICATION_REGISTER, + IPA_QMI_INDICATION_REGISTER_RSP_SZ, + ipa_indication_register_rsp_ei, &rsp); + if (!ret) { + ipa_qmi->indication_requested = true; + ipa_qmi_ready(ipa_qmi); /* We might be ready now */ + } else { + dev_err(&ipa->pdev->dev, + "error %d sending register indication response\n", ret); + } +} + +/* Respond to a DRIVER_INIT_COMPLETE request message from the modem. */ +static void ipa_server_driver_init_complete(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq, + struct qmi_txn *txn, + const void *decoded) +{ + struct ipa_driver_init_complete_rsp rsp = { }; + struct ipa_qmi *ipa_qmi; + struct ipa *ipa; + int ret; + + ipa_qmi = container_of(qmi, struct ipa_qmi, server_handle); + ipa = container_of(ipa_qmi, struct ipa, qmi); + + rsp.rsp.result = QMI_RESULT_SUCCESS_V01; + rsp.rsp.error = QMI_ERR_NONE_V01; + + ret = qmi_send_response(qmi, sq, txn, IPA_QMI_DRIVER_INIT_COMPLETE, + IPA_QMI_DRIVER_INIT_COMPLETE_RSP_SZ, + ipa_driver_init_complete_rsp_ei, &rsp); + if (!ret) { + ipa_qmi->uc_ready = true; + ipa_qmi_ready(ipa_qmi); /* We might be ready now */ + } else { + dev_err(&ipa->pdev->dev, + "error %d sending init complete response\n", ret); + } +} + +/* The server handles two request message types sent by the modem. */ +static struct qmi_msg_handler ipa_server_msg_handlers[] = { + { + .type = QMI_REQUEST, + .msg_id = IPA_QMI_INDICATION_REGISTER, + .ei = ipa_indication_register_req_ei, + .decoded_size = IPA_QMI_INDICATION_REGISTER_REQ_SZ, + .fn = ipa_server_indication_register, + }, + { + .type = QMI_REQUEST, + .msg_id = IPA_QMI_DRIVER_INIT_COMPLETE, + .ei = ipa_driver_init_complete_req_ei, + .decoded_size = IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ, + .fn = ipa_server_driver_init_complete, + }, +}; + +/* Handle an INIT_DRIVER response message from the modem. */ +static void ipa_client_init_driver(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq, + struct qmi_txn *txn, const void *decoded) +{ + txn->result = 0; /* IPA_QMI_INIT_DRIVER request was successful */ + complete(&txn->completion); +} + +/* The client handles one response message type sent by the modem. 
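+ * The response payload itself carries nothing the driver needs; its
+ * arrival is what matters.  The handler above simply marks the
+ * transaction complete:
+ *
+ *	txn->result = 0;
+ *	complete(&txn->completion);
+ *
+ * which is what unblocks qmi_txn_wait() in ipa_client_init_driver_work().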
*/ +static struct qmi_msg_handler ipa_client_msg_handlers[] = { + { + .type = QMI_RESPONSE, + .msg_id = IPA_QMI_INIT_DRIVER, + .ei = ipa_init_modem_driver_rsp_ei, + .decoded_size = IPA_QMI_INIT_DRIVER_RSP_SZ, + .fn = ipa_client_init_driver, + }, +}; + +/* Return a pointer to an init modem driver request structure, which contains + * configuration parameters for the modem. The modem may be started multiple + * times, but generally these parameters don't change so we can reuse the + * request structure once it's initialized. The only exception is the + * skip_uc_load field, which will be set only after the microcontroller has + * reported it has completed its initialization. + */ +static const struct ipa_init_modem_driver_req * +init_modem_driver_req(struct ipa_qmi *ipa_qmi) +{ + struct ipa *ipa = container_of(ipa_qmi, struct ipa, qmi); + static struct ipa_init_modem_driver_req req; + const struct ipa_mem *mem; + + /* The microcontroller is initialized on the first boot */ + req.skip_uc_load_valid = 1; + req.skip_uc_load = ipa->uc_loaded ? 1 : 0; + + /* We only have to initialize most of it once */ + if (req.platform_type_valid) + return &req; + + req.platform_type_valid = 1; + req.platform_type = IPA_QMI_PLATFORM_TYPE_MSM_ANDROID; + + mem = &ipa->mem[IPA_MEM_MODEM_HEADER]; + if (mem->size) { + req.hdr_tbl_info_valid = 1; + req.hdr_tbl_info.start = ipa->mem_offset + mem->offset; + req.hdr_tbl_info.end = req.hdr_tbl_info.start + mem->size - 1; + } + + mem = &ipa->mem[IPA_MEM_V4_ROUTE]; + req.v4_route_tbl_info_valid = 1; + req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset; + req.v4_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE; + + mem = &ipa->mem[IPA_MEM_V6_ROUTE]; + req.v6_route_tbl_info_valid = 1; + req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset; + req.v6_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE; + + mem = &ipa->mem[IPA_MEM_V4_FILTER]; + req.v4_filter_tbl_start_valid = 1; + req.v4_filter_tbl_start = ipa->mem_offset + mem->offset; + + mem = &ipa->mem[IPA_MEM_V6_FILTER]; + req.v6_filter_tbl_start_valid = 1; + req.v6_filter_tbl_start = ipa->mem_offset + mem->offset; + + mem = &ipa->mem[IPA_MEM_MODEM]; + if (mem->size) { + req.modem_mem_info_valid = 1; + req.modem_mem_info.start = ipa->mem_offset + mem->offset; + req.modem_mem_info.size = mem->size; + } + + req.ctrl_comm_dest_end_pt_valid = 1; + req.ctrl_comm_dest_end_pt = + ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]->endpoint_id; + + /* skip_uc_load_valid and skip_uc_load are set above */ + + mem = &ipa->mem[IPA_MEM_MODEM_PROC_CTX]; + if (mem->size) { + req.hdr_proc_ctx_tbl_info_valid = 1; + req.hdr_proc_ctx_tbl_info.start = + ipa->mem_offset + mem->offset; + req.hdr_proc_ctx_tbl_info.end = + req.hdr_proc_ctx_tbl_info.start + mem->size - 1; + } + + /* Nothing to report for the compression table (zip_tbl_info) */ + + mem = &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]; + if (mem->size) { + req.v4_hash_route_tbl_info_valid = 1; + req.v4_hash_route_tbl_info.start = + ipa->mem_offset + mem->offset; + req.v4_hash_route_tbl_info.count = + mem->size / IPA_TABLE_ENTRY_SIZE; + } + + mem = &ipa->mem[IPA_MEM_V6_ROUTE_HASHED]; + if (mem->size) { + req.v6_hash_route_tbl_info_valid = 1; + req.v6_hash_route_tbl_info.start = + ipa->mem_offset + mem->offset; + req.v6_hash_route_tbl_info.count = + mem->size / IPA_TABLE_ENTRY_SIZE; + } + + mem = &ipa->mem[IPA_MEM_V4_FILTER_HASHED]; + if (mem->size) { + req.v4_hash_filter_tbl_start_valid = 1; + req.v4_hash_filter_tbl_start = ipa->mem_offset + mem->offset; + } + + mem = 
&ipa->mem[IPA_MEM_V6_FILTER_HASHED];
+	if (mem->size) {
+		req.v6_hash_filter_tbl_start_valid = 1;
+		req.v6_hash_filter_tbl_start = ipa->mem_offset + mem->offset;
+	}
+
+	/* The stats fields are only used for IPA v4.0 and above */
+
+	if (ipa->version != IPA_VERSION_3_5_1) {
+		mem = &ipa->mem[IPA_MEM_STATS_QUOTA];
+		if (mem->size) {
+			req.hw_stats_quota_base_addr_valid = 1;
+			req.hw_stats_quota_base_addr =
+				ipa->mem_offset + mem->offset;
+			req.hw_stats_quota_size_valid = 1;
+			req.hw_stats_quota_size = ipa->mem_offset + mem->size;
+		}
+
+		mem = &ipa->mem[IPA_MEM_STATS_DROP];
+		if (mem->size) {
+			req.hw_stats_drop_base_addr_valid = 1;
+			req.hw_stats_drop_base_addr =
+				ipa->mem_offset + mem->offset;
+			req.hw_stats_drop_size_valid = 1;
+			req.hw_stats_drop_size = ipa->mem_offset + mem->size;
+		}
+	}
+
+	return &req;
+}
+
+/* Send an INIT_DRIVER request to the modem, and wait for it to complete. */
+static void ipa_client_init_driver_work(struct work_struct *work)
+{
+	unsigned long timeout = msecs_to_jiffies(QMI_INIT_DRIVER_TIMEOUT);
+	const struct ipa_init_modem_driver_req *req;
+	struct ipa_qmi *ipa_qmi;
+	struct qmi_handle *qmi;
+	struct qmi_txn txn;
+	struct device *dev;
+	struct ipa *ipa;
+	int ret;
+
+	ipa_qmi = container_of(work, struct ipa_qmi, init_driver_work);
+	qmi = &ipa_qmi->client_handle;
+
+	ipa = container_of(ipa_qmi, struct ipa, qmi);
+	dev = &ipa->pdev->dev;
+
+	ret = qmi_txn_init(qmi, &txn, NULL, NULL);
+	if (ret < 0) {
+		dev_err(dev, "error %d preparing init driver request\n", ret);
+		return;
+	}
+
+	/* Send the request, and if successful wait for its response */
+	req = init_modem_driver_req(ipa_qmi);
+	ret = qmi_send_request(qmi, &ipa_qmi->modem_sq, &txn,
+			       IPA_QMI_INIT_DRIVER, IPA_QMI_INIT_DRIVER_REQ_SZ,
+			       ipa_init_modem_driver_req_ei, req);
+	if (ret)
+		dev_err(dev, "error %d sending init driver request\n", ret);
+	else if ((ret = qmi_txn_wait(&txn, timeout)))
+		dev_err(dev, "error %d awaiting init driver response\n", ret);
+
+	if (!ret) {
+		ipa_qmi->modem_ready = true;
+		ipa_qmi_ready(ipa_qmi);		/* We might be ready now */
+	} else {
+		/* If any error occurs we need to cancel the transaction */
+		qmi_txn_cancel(&txn);
+	}
+}
+
+/* The modem server is now available.  We will send an INIT_DRIVER request
+ * to the modem, but can't wait for it to complete in this callback thread.
+ * Schedule a worker on the global workqueue to do that for us.
+ */
+static int
+ipa_client_new_server(struct qmi_handle *qmi, struct qmi_service *svc)
+{
+	struct ipa_qmi *ipa_qmi;
+
+	ipa_qmi = container_of(qmi, struct ipa_qmi, client_handle);
+
+	ipa_qmi->modem_sq.sq_family = AF_QIPCRTR;
+	ipa_qmi->modem_sq.sq_node = svc->node;
+	ipa_qmi->modem_sq.sq_port = svc->port;
+
+	schedule_work(&ipa_qmi->init_driver_work);
+
+	return 0;
+}
+
+static struct qmi_ops ipa_client_ops = {
+	.new_server	= ipa_client_new_server,
+};
+
+/* This is called by ipa_setup().  We can be informed via remoteproc that
+ * the modem has shut down, in which case this function will be called
+ * again to prepare for it coming back up again.
+ */
+int ipa_qmi_setup(struct ipa *ipa)
+{
+	struct ipa_qmi *ipa_qmi = &ipa->qmi;
+	int ret;
+
+	ipa_qmi->initial_boot = true;
+
+	/* The server handle is used to handle the DRIVER_INIT_COMPLETE
+	 * request on the first modem boot.  It also receives the
+	 * INDICATION_REGISTER request on the first boot and (optionally)
+	 * subsequent boots.  The INIT_COMPLETE indication message is
+	 * sent over the server handle if requested.
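+	 *
+	 * (qmi_handle_init() creates the underlying QRTR socket and
+	 * worker for a handle; qmi_add_server() below then publishes
+	 * the AP service -- service 0x31, version 1, instance 1 -- with
+	 * the name service so the modem can reach it.)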
+	 */
+	ret = qmi_handle_init(&ipa_qmi->server_handle,
+			      IPA_QMI_SERVER_MAX_RCV_SZ, &ipa_server_ops,
+			      ipa_server_msg_handlers);
+	if (ret)
+		return ret;
+
+	ret = qmi_add_server(&ipa_qmi->server_handle, IPA_HOST_SERVICE_SVC_ID,
+			     IPA_HOST_SVC_VERS, IPA_HOST_SERVICE_INS_ID);
+	if (ret)
+		goto err_server_handle_release;
+
+	/* The client handle is only used for sending an INIT_DRIVER request
+	 * to the modem, and receiving its response message.
+	 */
+	ret = qmi_handle_init(&ipa_qmi->client_handle,
+			      IPA_QMI_CLIENT_MAX_RCV_SZ, &ipa_client_ops,
+			      ipa_client_msg_handlers);
+	if (ret)
+		goto err_server_handle_release;
+
+	/* We need this ready before the service lookup is added */
+	INIT_WORK(&ipa_qmi->init_driver_work, ipa_client_init_driver_work);
+
+	ret = qmi_add_lookup(&ipa_qmi->client_handle, IPA_MODEM_SERVICE_SVC_ID,
+			     IPA_MODEM_SVC_VERS, IPA_MODEM_SERVICE_INS_ID);
+	if (ret)
+		goto err_client_handle_release;
+
+	return 0;
+
+err_client_handle_release:
+	/* Releasing the handle also removes registered lookups */
+	qmi_handle_release(&ipa_qmi->client_handle);
+	memset(&ipa_qmi->client_handle, 0, sizeof(ipa_qmi->client_handle));
+err_server_handle_release:
+	/* Releasing the handle also removes registered services */
+	qmi_handle_release(&ipa_qmi->server_handle);
+	memset(&ipa_qmi->server_handle, 0, sizeof(ipa_qmi->server_handle));
+
+	return ret;
+}
+
+void ipa_qmi_teardown(struct ipa *ipa)
+{
+	cancel_work_sync(&ipa->qmi.init_driver_work);
+
+	qmi_handle_release(&ipa->qmi.client_handle);
+	memset(&ipa->qmi.client_handle, 0, sizeof(ipa->qmi.client_handle));
+
+	qmi_handle_release(&ipa->qmi.server_handle);
+	memset(&ipa->qmi.server_handle, 0, sizeof(ipa->qmi.server_handle));
+}
diff --git a/drivers/net/ipa/ipa_qmi.h b/drivers/net/ipa/ipa_qmi.h
new file mode 100644
index 000000000000..3993687593d0
--- /dev/null
+++ b/drivers/net/ipa/ipa_qmi.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _IPA_QMI_H_
+#define _IPA_QMI_H_
+
+#include
+#include
+
+struct ipa;
+
+/**
+ * struct ipa_qmi - QMI state associated with an IPA
+ * @client_handle	- used to send QMI requests to the modem
+ * @server_handle	- used to handle QMI requests from the modem
+ * @modem_sq		- QRTR socket address of the modem QMI service
+ * @init_driver_work	- work structure used to send INIT_DRIVER requests
+ * @initial_boot	- true until the first modem boot has completed
+ * @uc_ready		- true when the microcontroller is ready
+ * @modem_ready		- true when the modem IPA driver is ready
+ * @indication_requested - true when the modem has requested the
+ *			  INIT_COMPLETE indication
+ * @indication_sent	- true when the INIT_COMPLETE indication has been sent
+ */
+struct ipa_qmi {
+	struct qmi_handle client_handle;
+	struct qmi_handle server_handle;
+
+	/* Information used for the client handle */
+	struct sockaddr_qrtr modem_sq;
+	struct work_struct init_driver_work;
+
+	/* Flags used in negotiating readiness */
+	bool initial_boot;
+	bool uc_ready;
+	bool modem_ready;
+	bool indication_requested;
+	bool indication_sent;
+};
+
+int ipa_qmi_setup(struct ipa *ipa);
+void ipa_qmi_teardown(struct ipa *ipa);
+
+#endif /* !_IPA_QMI_H_ */
diff --git a/drivers/net/ipa/ipa_qmi_msg.c b/drivers/net/ipa/ipa_qmi_msg.c
new file mode 100644
index 000000000000..03a1d0e55964
--- /dev/null
+++ b/drivers/net/ipa/ipa_qmi_msg.c
@@ -0,0 +1,663 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#include
+#include
+
+#include "ipa_qmi_msg.h"
+
+/* QMI message structure definition for struct ipa_indication_register_req */
+struct qmi_elem_info ipa_indication_register_req_ei[] = {
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     master_driver_init_complete_valid),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   master_driver_init_complete_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     master_driver_init_complete),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   master_driver_init_complete),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     data_usage_quota_reached_valid),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   data_usage_quota_reached_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     data_usage_quota_reached),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   data_usage_quota_reached),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     ipa_mhi_ready_ind_valid),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   ipa_mhi_ready_ind_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     ipa_mhi_ready_ind),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   ipa_mhi_ready_ind),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_indication_register_rsp */
+struct qmi_elem_info ipa_indication_register_rsp_ei[] = {
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_rsp,
+				     rsp),
+		.tlv_type	= 0x02,
+		.offset		= offsetof(struct ipa_indication_register_rsp,
+					   rsp),
+		.ei_array	= qmi_response_type_v01_ei,
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_driver_init_complete_req */
+struct qmi_elem_info ipa_driver_init_complete_req_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_driver_init_complete_req,
+				     status),
+		.tlv_type	= 0x01,
+		.offset		= offsetof(struct ipa_driver_init_complete_req,
+					   status),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_driver_init_complete_rsp */
+struct qmi_elem_info ipa_driver_init_complete_rsp_ei[] = {
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_driver_init_complete_rsp,
+				     rsp),
+		.tlv_type	= 0x02,
+		.offset		= offsetof(struct ipa_driver_init_complete_rsp,
+					   rsp),
+		.ei_array	= qmi_response_type_v01_ei,
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_init_complete_ind */
+struct qmi_elem_info ipa_init_complete_ind_ei[] = {
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_complete_ind,
+				     status),
+		.tlv_type	= 0x02,
+		.offset		= offsetof(struct ipa_init_complete_ind,
+					   status),
+		.ei_array	= qmi_response_type_v01_ei,
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_mem_bounds */
+struct qmi_elem_info ipa_mem_bounds_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_bounds, start),
+		.offset		= offsetof(struct ipa_mem_bounds, start),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_bounds, end),
+		.offset		= offsetof(struct ipa_mem_bounds, end),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_mem_array */
+struct qmi_elem_info ipa_mem_array_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_array, start),
+		.offset		= offsetof(struct ipa_mem_array, start),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_array, count),
+		.offset		= offsetof(struct ipa_mem_array, count),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_mem_range */
+struct qmi_elem_info ipa_mem_range_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_range, start),
+		.offset		= offsetof(struct ipa_mem_range, start),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_range, size),
+		.offset		= offsetof(struct ipa_mem_range, size),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_init_modem_driver_req */
+struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     platform_type_valid),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   platform_type_valid),
+	},
+	{
+		.data_type	= QMI_SIGNED_4_BYTE_ENUM,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     platform_type),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   platform_type),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hdr_tbl_info_valid),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hdr_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hdr_tbl_info),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hdr_tbl_info),
+		.ei_array	= ipa_mem_bounds_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_route_tbl_info_valid),
+		.tlv_type	= 0x12,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_route_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_route_tbl_info),
+		.tlv_type	= 0x12,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_route_tbl_info),
+		.ei_array	= ipa_mem_array_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_route_tbl_info_valid),
+		.tlv_type	= 0x13,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v6_route_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_route_tbl_info),
+		.tlv_type	= 0x13,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
v6_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_filter_tbl_start_valid), + .tlv_type = 0x14, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_filter_tbl_start), + .tlv_type = 0x14, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_filter_tbl_start_valid), + .tlv_type = 0x15, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_filter_tbl_start), + .tlv_type = 0x15, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + modem_mem_info_valid), + .tlv_type = 0x16, + .offset = offsetof(struct ipa_init_modem_driver_req, + modem_mem_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + modem_mem_info), + .tlv_type = 0x16, + .offset = offsetof(struct ipa_init_modem_driver_req, + modem_mem_info), + .ei_array = ipa_mem_range_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt_valid), + .tlv_type = 0x17, + .offset = offsetof(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt), + .tlv_type = 0x17, + .offset = offsetof(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + skip_uc_load_valid), + .tlv_type = 0x18, + .offset = offsetof(struct ipa_init_modem_driver_req, + skip_uc_load_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + skip_uc_load), + .tlv_type = 0x18, + .offset = offsetof(struct ipa_init_modem_driver_req, + skip_uc_load), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info_valid), + .tlv_type = 0x19, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info), + .tlv_type = 0x19, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info), + .ei_array = ipa_mem_bounds_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + zip_tbl_info_valid), + .tlv_type = 0x1a, + .offset = offsetof(struct ipa_init_modem_driver_req, + zip_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + zip_tbl_info), + .tlv_type = 0x1a, + .offset = offsetof(struct ipa_init_modem_driver_req, + zip_tbl_info), + 
.ei_array = ipa_mem_bounds_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info_valid), + .tlv_type = 0x1b, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info), + .tlv_type = 0x1b, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info_valid), + .tlv_type = 0x1c, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info), + .tlv_type = 0x1c, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start_valid), + .tlv_type = 0x1d, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start), + .tlv_type = 0x1d, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start_valid), + .tlv_type = 0x1e, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start), + .tlv_type = 0x1e, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_quota_base_addr_valid), + .tlv_type = 0x1f, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_quota_base_addr_valid), + }, + { + .data_type = QMI_SIGNED_4_BYTE_ENUM, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_quota_base_addr), + .tlv_type = 0x1f, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_quota_base_addr), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_quota_size_valid), + .tlv_type = 0x20, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_quota_size_valid), + }, + { + .data_type = QMI_SIGNED_4_BYTE_ENUM, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_quota_size), + .tlv_type = 0x20, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_quota_size), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_drop_base_addr_valid), + .tlv_type = 0x21, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_drop_base_addr_valid), + }, + { + .data_type = QMI_SIGNED_4_BYTE_ENUM, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_drop_base_addr), + .tlv_type = 0x21, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_drop_base_addr), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_drop_size_valid), + .tlv_type = 0x22, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_drop_size_valid), + }, + { + .data_type = QMI_SIGNED_4_BYTE_ENUM, + .elem_len = 1, + .elem_size = + sizeof_field(struct
ipa_init_modem_driver_req, + hw_stats_drop_size), + .tlv_type = 0x22, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_drop_size), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_init_modem_driver_rsp */ +struct qmi_elem_info ipa_init_modem_driver_rsp_ei[] = { + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + rsp), + .tlv_type = 0x02, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + rsp), + .ei_array = qmi_response_type_v01_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt_valid), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + default_end_pt_valid), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + default_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + default_end_pt), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + default_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending_valid), + .tlv_type = 0x12, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending), + .tlv_type = 0x12, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending), + }, + { + .data_type = QMI_EOTI, + }, +}; diff --git a/drivers/net/ipa/ipa_qmi_msg.h b/drivers/net/ipa/ipa_qmi_msg.h new file mode 100644 index 000000000000..cfac456cea0c --- /dev/null +++ b/drivers/net/ipa/ipa_qmi_msg.h @@ -0,0 +1,252 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _IPA_QMI_MSG_H_ +#define _IPA_QMI_MSG_H_ + +/* === Only "ipa_qmi" and "ipa_qmi_msg.c" should include this file === */ + +#include <linux/types.h> +#include <linux/soc/qcom/qmi.h> + +/* Request/response/indication QMI message ids used for IPA. Receiving + * end issues a response for requests; indications require no response. + */ +#define IPA_QMI_INDICATION_REGISTER 0x20 /* modem -> AP request */ +#define IPA_QMI_INIT_DRIVER 0x21 /* AP -> modem request */ +#define IPA_QMI_INIT_COMPLETE 0x22 /* AP -> modem indication */ +#define IPA_QMI_DRIVER_INIT_COMPLETE 0x35 /* modem -> AP request */ + +/* The maximum size required for message types. These sizes include + * the message data, along with type (1 byte) and length (2 byte) + * information for each field. The qmi_send_*() interfaces require + * the message size to be provided.
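+ * + * For example, an optional 1-byte field is encoded as one type byte, + * two length bytes, and one value byte, so the three optional fields + * of struct ipa_indication_register_req account for all 12 bytes of + * IPA_QMI_INDICATION_REGISTER_REQ_SZ.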
+ */ +#define IPA_QMI_INDICATION_REGISTER_REQ_SZ 12 /* -> server handle */ +#define IPA_QMI_INDICATION_REGISTER_RSP_SZ 7 /* <- server handle */ +#define IPA_QMI_INIT_DRIVER_REQ_SZ 162 /* client handle -> */ +#define IPA_QMI_INIT_DRIVER_RSP_SZ 25 /* client handle <- */ +#define IPA_QMI_INIT_COMPLETE_IND_SZ 7 /* <- server handle */ +#define IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ 4 /* -> server handle */ +#define IPA_QMI_DRIVER_INIT_COMPLETE_RSP_SZ 7 /* <- server handle */ + +/* Maximum size of messages we expect the AP to receive (max of above) */ +#define IPA_QMI_SERVER_MAX_RCV_SZ 12 +#define IPA_QMI_CLIENT_MAX_RCV_SZ 25 + +/* Request message for the IPA_QMI_INDICATION_REGISTER request */ +struct ipa_indication_register_req { + u8 master_driver_init_complete_valid; + u8 master_driver_init_complete; + u8 data_usage_quota_reached_valid; + u8 data_usage_quota_reached; + u8 ipa_mhi_ready_ind_valid; + u8 ipa_mhi_ready_ind; +}; + +/* The response to an IPA_QMI_INDICATION_REGISTER request consists only of + * a standard QMI response. + */ +struct ipa_indication_register_rsp { + struct qmi_response_type_v01 rsp; +}; + +/* Request message for the IPA_QMI_DRIVER_INIT_COMPLETE request */ +struct ipa_driver_init_complete_req { + u8 status; +}; + +/* The response to an IPA_QMI_DRIVER_INIT_COMPLETE request consists only + * of a standard QMI response. + */ +struct ipa_driver_init_complete_rsp { + struct qmi_response_type_v01 rsp; +}; + +/* The message for the IPA_QMI_INIT_COMPLETE_IND indication consists + * only of a standard QMI response. + */ +struct ipa_init_complete_ind { + struct qmi_response_type_v01 status; +}; + +/* The AP tells the modem its platform type. We assume Android. */ +enum ipa_platform_type { + IPA_QMI_PLATFORM_TYPE_INVALID = 0, /* Invalid */ + IPA_QMI_PLATFORM_TYPE_TN = 1, /* Data card */ + IPA_QMI_PLATFORM_TYPE_LE = 2, /* Data router */ + IPA_QMI_PLATFORM_TYPE_MSM_ANDROID = 3, /* Android MSM */ + IPA_QMI_PLATFORM_TYPE_MSM_WINDOWS = 4, /* Windows MSM */ + IPA_QMI_PLATFORM_TYPE_MSM_QNX_V01 = 5, /* QNX MSM */ +}; + +/* This defines the start and end offset of a range of memory. Both + * fields are offsets relative to the start of IPA shared memory. + * The end value is the last addressable byte *within* the range. + */ +struct ipa_mem_bounds { + u32 start; + u32 end; +}; + +/* This defines the location and size of an array. The start value + * is an offset relative to the start of IPA shared memory. The + * size of the array is implied by the number of entries (the entry + * size is assumed to be known). + */ +struct ipa_mem_array { + u32 start; + u32 count; +}; + +/* This defines the location and size of a range of memory. The + * start is an offset relative to the start of IPA shared memory. + * This differs from the ipa_mem_bounds structure in that the size + * (in bytes) of the memory region is specified rather than the + * offset of its last byte. + */ +struct ipa_mem_range { + u32 start; + u32 size; +}; + +/* The message for the IPA_QMI_INIT_DRIVER request contains information + * from the AP that affects modem initialization. + */ +struct ipa_init_modem_driver_req { + u8 platform_type_valid; + u32 platform_type; /* enum ipa_platform_type */ + + /* Modem header table information. This defines the IPA shared + * memory in which the modem may insert header table entries. + */ + u8 hdr_tbl_info_valid; + struct ipa_mem_bounds hdr_tbl_info; + + /* Routing table information. These define the location and size of + * non-hashable IPv4 and IPv6 routing tables.
The start values are + * offsets relative to the start of IPA shared memory. + */ + u8 v4_route_tbl_info_valid; + struct ipa_mem_array v4_route_tbl_info; + u8 v6_route_tbl_info_valid; + struct ipa_mem_array v6_route_tbl_info; + + /* Filter table information. These define the location of the + * non-hashable IPv4 and IPv6 filter tables. The start values are + * offsets relative to the start of IPA shared memory. + */ + u8 v4_filter_tbl_start_valid; + u32 v4_filter_tbl_start; + u8 v6_filter_tbl_start_valid; + u32 v6_filter_tbl_start; + + /* Modem memory information. This defines the location and + * size of memory available for the modem to use. + */ + u8 modem_mem_info_valid; + struct ipa_mem_range modem_mem_info; + + /* This defines the destination endpoint on the AP to which + * the modem driver can send control commands. Must be less + * than ipa_endpoint_max(). + */ + u8 ctrl_comm_dest_end_pt_valid; + u32 ctrl_comm_dest_end_pt; + + /* This defines whether the modem should load the microcontroller + * or not. It is unnecessary to reload it if the modem is being + * restarted. + * + * NOTE: this field is named "is_ssr_bootup" elsewhere. + */ + u8 skip_uc_load_valid; + u8 skip_uc_load; + + /* Processing context memory information. This defines the memory in + * which the modem may insert header processing context table entries. + */ + u8 hdr_proc_ctx_tbl_info_valid; + struct ipa_mem_bounds hdr_proc_ctx_tbl_info; + + /* Compression command memory information. This defines the memory + * in which the modem may insert compression/decompression commands. + */ + u8 zip_tbl_info_valid; + struct ipa_mem_bounds zip_tbl_info; + + /* Routing table information. These define the location and size + * of hashable IPv4 and IPv6 routing tables. The start values are + * offsets relative to the start of IPA shared memory. + */ + u8 v4_hash_route_tbl_info_valid; + struct ipa_mem_array v4_hash_route_tbl_info; + u8 v6_hash_route_tbl_info_valid; + struct ipa_mem_array v6_hash_route_tbl_info; + + /* Filter table information. These define the location and size + * of hashable IPv4 and IPv6 filter tables. The start values are + * offsets relative to the start of IPA shared memory. + */ + u8 v4_hash_filter_tbl_start_valid; + u32 v4_hash_filter_tbl_start; + u8 v6_hash_filter_tbl_start_valid; + u32 v6_hash_filter_tbl_start; + + /* Statistics information. These define the locations of the + * first and last statistics sub-regions. (IPA v4.0 and above) + */ + u8 hw_stats_quota_base_addr_valid; + u32 hw_stats_quota_base_addr; + u8 hw_stats_quota_size_valid; + u32 hw_stats_quota_size; + u8 hw_stats_drop_base_addr_valid; + u32 hw_stats_drop_base_addr; + u8 hw_stats_drop_size_valid; + u32 hw_stats_drop_size; +}; + +/* The response to an IPA_QMI_INIT_DRIVER request begins with a standard + * QMI response, but contains other information as well. Currently we + * simply wait for the INIT_DRIVER transaction to complete and + * ignore any other data that might be returned. + */ +struct ipa_init_modem_driver_rsp { + struct qmi_response_type_v01 rsp; + + /* This defines the destination endpoint on the modem to which + * the AP driver can send control commands. Must be less than + * ipa_endpoint_max(). + */ + u8 ctrl_comm_dest_end_pt_valid; + u32 ctrl_comm_dest_end_pt; + + /* This defines the default endpoint. The AP driver is not + * required to configure the hardware with this value. Must + * be less than ipa_endpoint_max().
+ */ + u8 default_end_pt_valid; + u32 default_end_pt; + + /* This defines whether a second handshake is required to complete + * initialization. + */ + u8 modem_driver_init_pending_valid; + u8 modem_driver_init_pending; +}; + +/* Message structure definitions defined in "ipa_qmi_msg.c" */ +extern struct qmi_elem_info ipa_indication_register_req_ei[]; +extern struct qmi_elem_info ipa_indication_register_rsp_ei[]; +extern struct qmi_elem_info ipa_driver_init_complete_req_ei[]; +extern struct qmi_elem_info ipa_driver_init_complete_rsp_ei[]; +extern struct qmi_elem_info ipa_init_complete_ind_ei[]; +extern struct qmi_elem_info ipa_mem_bounds_ei[]; +extern struct qmi_elem_info ipa_mem_array_ei[]; +extern struct qmi_elem_info ipa_mem_range_ei[]; +extern struct qmi_elem_info ipa_init_modem_driver_req_ei[]; +extern struct qmi_elem_info ipa_init_modem_driver_rsp_ei[]; + +#endif /* !_IPA_QMI_MSG_H_ */ diff --git a/drivers/net/ipa/ipa_smp2p.c b/drivers/net/ipa/ipa_smp2p.c new file mode 100644 index 000000000000..4d33aa7ebfbb --- /dev/null +++ b/drivers/net/ipa/ipa_smp2p.c @@ -0,0 +1,335 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ + +#include <linux/types.h> +#include <linux/device.h> +#include <linux/interrupt.h> +#include <linux/notifier.h> +#include <linux/soc/qcom/smem.h> +#include <linux/soc/qcom/smem_state.h> + +#include "ipa_smp2p.h" +#include "ipa.h" +#include "ipa_uc.h" +#include "ipa_clock.h" + +/** + * DOC: IPA SMP2P communication with the modem + * + * SMP2P is a primitive communication mechanism available between the AP and + * the modem. The IPA driver uses this for two purposes: to enable the modem + * to state that the GSI hardware is ready to use; and to communicate the + * state of the IPA clock in the event of a crash. + * + * GSI needs to have early initialization completed before it can be used. + * This initialization is done either by Trust Zone or by the modem. In the + * latter case, the modem uses an SMP2P interrupt to tell the AP IPA driver + * when the GSI is ready to use. + * + * The modem is also able to inquire about the current state of the IPA + * clock by triggering another SMP2P interrupt to the AP. We communicate + * whether the clock is enabled using two SMP2P state bits--one to + * indicate the clock state (on or off), and a second to indicate the + * clock state bit is valid. The modem will poll the valid bit until it + * is set, and at that time records whether the AP has the IPA clock enabled. + * + * Finally, if the AP kernel panics, we update the SMP2P state bits even if + * we never receive an interrupt from the modem requesting this.
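+ * + * The order of the state updates matters: ipa_smp2p_notify() below + * records the clock state in the enabled bit before setting the valid + * bit, so the modem can never observe a valid indication paired with + * a stale clock state.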
+ */ + +/** + * struct ipa_smp2p - IPA SMP2P information + * @ipa: IPA pointer + * @valid_state: SMEM state indicating enabled state is valid + * @enabled_state: SMEM state to indicate clock is enabled + * @valid_bit: Valid bit in 32-bit SMEM state mask + * @enabled_bit: Enabled bit in 32-bit SMEM state mask + * @clock_query_irq: IPA interrupt triggered by modem for clock query + * @setup_ready_irq: IPA interrupt triggered by modem to signal GSI ready + * @clock_on: Whether IPA clock is on + * @notified: Whether modem has been notified of clock state + * @disabled: Whether setup ready interrupt handling is disabled + * @mutex: Mutex protecting ready-interrupt/shutdown interlock + * @panic_notifier: Panic notifier structure + */ +struct ipa_smp2p { + struct ipa *ipa; + struct qcom_smem_state *valid_state; + struct qcom_smem_state *enabled_state; + u32 valid_bit; + u32 enabled_bit; + u32 clock_query_irq; + u32 setup_ready_irq; + bool clock_on; + bool notified; + bool disabled; + struct mutex mutex; + struct notifier_block panic_notifier; +}; + +/** + * ipa_smp2p_notify() - use SMP2P to tell modem about IPA clock state + * @smp2p: SMP2P information + * + * This is called either when the modem has requested it (by triggering + * the modem clock query IPA interrupt) or whenever the AP is shutting down + * (via a panic notifier). It sets the two SMP2P state bits--one saying + * whether the IPA clock is running, and the other indicating the first bit + * is valid. + */ +static void ipa_smp2p_notify(struct ipa_smp2p *smp2p) +{ + u32 value; + u32 mask; + + if (smp2p->notified) + return; + + smp2p->clock_on = ipa_clock_get_additional(smp2p->ipa); + + /* Signal whether the clock is enabled */ + mask = BIT(smp2p->enabled_bit); + value = smp2p->clock_on ?
mask : 0; + qcom_smem_state_update_bits(smp2p->enabled_state, mask, value); + + /* Now indicate that the enabled flag is valid */ + mask = BIT(smp2p->valid_bit); + value = mask; + qcom_smem_state_update_bits(smp2p->valid_state, mask, value); + + smp2p->notified = true; +} + +/* Threaded IRQ handler for modem "ipa-clock-query" SMP2P interrupt */ +static irqreturn_t ipa_smp2p_modem_clk_query_isr(int irq, void *dev_id) +{ + struct ipa_smp2p *smp2p = dev_id; + + ipa_smp2p_notify(smp2p); + + return IRQ_HANDLED; +} + +static int ipa_smp2p_panic_notifier(struct notifier_block *nb, + unsigned long action, void *data) +{ + struct ipa_smp2p *smp2p; + + smp2p = container_of(nb, struct ipa_smp2p, panic_notifier); + + ipa_smp2p_notify(smp2p); + + if (smp2p->clock_on) + ipa_uc_panic_notifier(smp2p->ipa); + + return NOTIFY_DONE; +} + +static int ipa_smp2p_panic_notifier_register(struct ipa_smp2p *smp2p) +{ + /* IPA panic handler needs to run before modem shuts down */ + smp2p->panic_notifier.notifier_call = ipa_smp2p_panic_notifier; + smp2p->panic_notifier.priority = INT_MAX; /* Do it early */ + + return atomic_notifier_chain_register(&panic_notifier_list, + &smp2p->panic_notifier); +} + +static void ipa_smp2p_panic_notifier_unregister(struct ipa_smp2p *smp2p) +{ + atomic_notifier_chain_unregister(&panic_notifier_list, + &smp2p->panic_notifier); +} + +/* Threaded IRQ handler for modem "ipa-setup-ready" SMP2P interrupt */ +static irqreturn_t ipa_smp2p_modem_setup_ready_isr(int irq, void *dev_id) +{ + struct ipa_smp2p *smp2p = dev_id; + + mutex_lock(&smp2p->mutex); + + if (!smp2p->disabled) { + int ret; + + ret = ipa_setup(smp2p->ipa); + if (ret) + dev_err(&smp2p->ipa->pdev->dev, + "error %d from ipa_setup()\n", ret); + smp2p->disabled = true; + } + + mutex_unlock(&smp2p->mutex); + + return IRQ_HANDLED; +} + +/* Initialize SMP2P interrupts */ +static int ipa_smp2p_irq_init(struct ipa_smp2p *smp2p, const char *name, + irq_handler_t handler) +{ + struct device *dev = &smp2p->ipa->pdev->dev; + unsigned int irq; + int ret; + + ret = platform_get_irq_byname(smp2p->ipa->pdev, name); + if (ret <= 0) { + dev_err(dev, "DT error %d getting \"%s\" IRQ property\n", + ret, name); + return ret ? 
: -EINVAL; + } + irq = ret; + + ret = request_threaded_irq(irq, NULL, handler, 0, name, smp2p); + if (ret) { + dev_err(dev, "error %d requesting \"%s\" IRQ\n", ret, name); + return ret; + } + + return irq; +} + +static void ipa_smp2p_irq_exit(struct ipa_smp2p *smp2p, u32 irq) +{ + free_irq(irq, smp2p); +} + +/* Drop the clock reference if it was taken in ipa_smp2p_notify() */ +static void ipa_smp2p_clock_release(struct ipa *ipa) +{ + if (!ipa->smp2p->clock_on) + return; + + ipa_clock_put(ipa); + ipa->smp2p->clock_on = false; +} + +/* Initialize the IPA SMP2P subsystem */ +int ipa_smp2p_init(struct ipa *ipa, bool modem_init) +{ + struct qcom_smem_state *enabled_state; + struct device *dev = &ipa->pdev->dev; + struct qcom_smem_state *valid_state; + struct ipa_smp2p *smp2p; + u32 enabled_bit; + u32 valid_bit; + int ret; + + valid_state = qcom_smem_state_get(dev, "ipa-clock-enabled-valid", + &valid_bit); + if (IS_ERR(valid_state)) + return PTR_ERR(valid_state); + if (valid_bit >= 32) /* BITS_PER_U32 */ + return -EINVAL; + + enabled_state = qcom_smem_state_get(dev, "ipa-clock-enabled", + &enabled_bit); + if (IS_ERR(enabled_state)) + return PTR_ERR(enabled_state); + if (enabled_bit >= 32) /* BITS_PER_U32 */ + return -EINVAL; + + smp2p = kzalloc(sizeof(*smp2p), GFP_KERNEL); + if (!smp2p) + return -ENOMEM; + + smp2p->ipa = ipa; + + /* These fields are needed by the clock query interrupt + * handler, so initialize them now. + */ + mutex_init(&smp2p->mutex); + smp2p->valid_state = valid_state; + smp2p->valid_bit = valid_bit; + smp2p->enabled_state = enabled_state; + smp2p->enabled_bit = enabled_bit; + + /* We have enough information saved to handle notifications */ + ipa->smp2p = smp2p; + + ret = ipa_smp2p_irq_init(smp2p, "ipa-clock-query", + ipa_smp2p_modem_clk_query_isr); + if (ret < 0) + goto err_null_smp2p; + smp2p->clock_query_irq = ret; + + ret = ipa_smp2p_panic_notifier_register(smp2p); + if (ret) + goto err_irq_exit; + + if (modem_init) { + /* Result will be non-zero (negative for error) */ + ret = ipa_smp2p_irq_init(smp2p, "ipa-setup-ready", + ipa_smp2p_modem_setup_ready_isr); + if (ret < 0) + goto err_notifier_unregister; + smp2p->setup_ready_irq = ret; + } + + return 0; + +err_notifier_unregister: + ipa_smp2p_panic_notifier_unregister(smp2p); +err_irq_exit: + ipa_smp2p_irq_exit(smp2p, smp2p->clock_query_irq); +err_null_smp2p: + ipa->smp2p = NULL; + mutex_destroy(&smp2p->mutex); + kfree(smp2p); + + return ret; +} + +void ipa_smp2p_exit(struct ipa *ipa) +{ + struct ipa_smp2p *smp2p = ipa->smp2p; + + if (smp2p->setup_ready_irq) + ipa_smp2p_irq_exit(smp2p, smp2p->setup_ready_irq); + ipa_smp2p_panic_notifier_unregister(smp2p); + ipa_smp2p_irq_exit(smp2p, smp2p->clock_query_irq); + /* We won't get notified any more; drop clock reference (if any) */ + ipa_smp2p_clock_release(ipa); + ipa->smp2p = NULL; + mutex_destroy(&smp2p->mutex); + kfree(smp2p); +} + +void ipa_smp2p_disable(struct ipa *ipa) +{ + struct ipa_smp2p *smp2p = ipa->smp2p; + + if (!smp2p->setup_ready_irq) + return; + + mutex_lock(&smp2p->mutex); + + smp2p->disabled = true; + + mutex_unlock(&smp2p->mutex); +} + +/* Reset state tracking whether we have notified the modem */ +void ipa_smp2p_notify_reset(struct ipa *ipa) +{ + struct ipa_smp2p *smp2p = ipa->smp2p; + u32 mask; + + if (!smp2p->notified) + return; + + ipa_smp2p_clock_release(ipa); + + /* Reset the clock enabled valid flag */ + mask = BIT(smp2p->valid_bit); + qcom_smem_state_update_bits(smp2p->valid_state, mask, 0); + + /* Mark the clock disabled for good measure... 
*/ + mask = BIT(smp2p->enabled_bit); + qcom_smem_state_update_bits(smp2p->enabled_state, mask, 0); + + smp2p->notified = false; +} diff --git a/drivers/net/ipa/ipa_smp2p.h b/drivers/net/ipa/ipa_smp2p.h new file mode 100644 index 000000000000..1f65cdc9d406 --- /dev/null +++ b/drivers/net/ipa/ipa_smp2p.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ +#ifndef _IPA_SMP2P_H_ +#define _IPA_SMP2P_H_ + +#include <linux/types.h> + +struct ipa; + +/** + * ipa_smp2p_init() - Initialize the IPA SMP2P subsystem + * @ipa: IPA pointer + * @modem_init: Whether the modem is responsible for GSI initialization + * + * Return: 0 if successful, or a negative error code + */ +int ipa_smp2p_init(struct ipa *ipa, bool modem_init); + +/** + * ipa_smp2p_exit() - Inverse of ipa_smp2p_init() + * @ipa: IPA pointer + */ +void ipa_smp2p_exit(struct ipa *ipa); + +/** + * ipa_smp2p_disable() - Prevent "ipa-setup-ready" interrupt handling + * @ipa: IPA pointer + * + * Prevent handling of the "setup ready" interrupt from the modem. + * This is used before initiating shutdown of the driver. + */ +void ipa_smp2p_disable(struct ipa *ipa); + +/** + * ipa_smp2p_notify_reset() - Reset modem notification state + * @ipa: IPA pointer + * + * If the modem crashes, it queries the IPA clock state. In cleaning + * up after such a crash this is used to reset some state maintained + * for managing this notification. + */ +void ipa_smp2p_notify_reset(struct ipa *ipa); + +#endif /* _IPA_SMP2P_H_ */ From patchwork Fri Mar 6 04:28:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 190106 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6DFBAC3F2DF for ; Fri, 6 Mar 2020 04:29:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 42572208C3 for ; Fri, 6 Mar 2020 04:29:16 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="sjVEnqNb" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726533AbgCFE3O (ORCPT ); Thu, 5 Mar 2020 23:29:14 -0500 Received: from mail-yw1-f52.google.com ([209.85.161.52]:45920 "EHLO mail-yw1-f52.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727433AbgCFE3L (ORCPT ); Thu, 5 Mar 2020 23:29:11 -0500 Received: by mail-yw1-f52.google.com with SMTP id d206so1049811ywa.12 for ; Thu, 05 Mar 2020 20:29:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+OwhcmpR7pb9yvMAW/3zddjXHmCNfQqId9+2KCqrSJ0=; b=sjVEnqNbTnwO2ySAIcfwaHGIihm2d0ARvf/F6V07mD/Axnj9d9GnoXRS2V9mVxm3W2 0TS3QpjPCK04d5r1+L/HC5wdj96wA3MIS/6VlekW8qXxENUuLtNEZ+b8IevwkjspC6un HOvuEZ5CW96Ob3kx8r7mxGlbq7+x6Zv8ATZTA8PZRmo/fy/qEBkXLRjAyH3mVPPdnYBC
52N667VdKut1mbrQyKWkxvtTprXMBsL/RCHD/LukuF28fDlbIkI5cGpA/Y2xWipqc+sq ZMmyBdtuZHQpXUWYZa9y86Wb+97peQc8sMc4k9kCbrNXX4jLYmICKhd0ynCdcRQmWbcw 29Gg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+OwhcmpR7pb9yvMAW/3zddjXHmCNfQqId9+2KCqrSJ0=; b=j5tJOzk8DoqkqbzXFPwo7CYncQ7RXOYqG7vY5eO1+e0eJpD1tyo2VZdi0FE5l8tU0s 3dsAB27hy+wOB+5J9rfC2Az4RwCjboTfvKaQYIR0c/prYETYD9DyTYL8NnfDm7rXz3OM j2NQi3r5sIjf5BYI8Qy/RtBT7qMC2mSQ0it1wu3d52ykvxaL3GcT8BgHsxbRhV+318Tm GuttbscfwPwnfJIGuGFRlfdd6kYU9UUqKoEwZEBlM8KgshnDzII0af091ZbcRTa16ajl i7RdUUtkCkX5zYBhQok7tIUjwxNF5r2FC5gw8k9YrUckQM7f9QqsoCMD9iJLPe0hzDyr OaEQ== X-Gm-Message-State: ANhLgQ1aR1Tx6B9fXcIPau4lVsR9AiPeSXY9Hy2OcL9Nu4i0ATye7poK NIxBDAayRm//E5uj7iygJMz0uA== X-Google-Smtp-Source: ADFU+vuavLb0hi1Kt/CtoekqDopx5Ou6PhT5pbkRZYLJyLf4bvqSt3M6Cw7E2PZn2Fxgjq2Y837RIg== X-Received: by 2002:a0d:d952:: with SMTP id b79mr2289846ywe.226.1583468949477; Thu, 05 Mar 2020 20:29:09 -0800 (PST) Received: from presto.localdomain (c-73-185-129-58.hsd1.mn.comcast.net. [73.185.129.58]) by smtp.gmail.com with ESMTPSA id x2sm12581836ywa.32.2020.03.05.20.29.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 05 Mar 2020 20:29:09 -0800 (PST) From: Alex Elder To: Bjorn Andersson , Andy Gross Cc: David Miller , Arnd Bergmann , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 17/17] arm64: dts: sdm845: add IPA information Date: Thu, 5 Mar 2020 22:28:31 -0600 Message-Id: <20200306042831.17827-18-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200306042831.17827-1-elder@linaro.org> References: <20200306042831.17827-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add IPA-related nodes and definitions to "sdm845.dtsi". 
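The nodes supply the SMP2P state entries and interrupts that the IPA driver looks up by name. As a rough sketch of the consumer side (mirroring calls used in "ipa_smp2p.c" earlier in this series, with illustrative variable names): irq = platform_get_irq_byname(pdev, "ipa-setup-ready"); and state = qcom_smem_state_get(dev, "ipa-clock-enabled", &bit);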
Signed-off-by: Alex Elder --- arch/arm64/boot/dts/qcom/sdm845.dtsi | 51 ++++++++++++++++++++++++++++ 1 file changed, 51 insertions(+) diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi index d42302b8889b..58fd1c611849 100644 --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi @@ -675,6 +675,17 @@ interrupt-controller; #interrupt-cells = <2>; }; + + ipa_smp2p_out: ipa-ap-to-modem { + qcom,entry-name = "ipa"; + #qcom,smem-state-cells = <1>; + }; + + ipa_smp2p_in: ipa-modem-to-ap { + qcom,entry-name = "ipa"; + interrupt-controller; + #interrupt-cells = <2>; + }; }; smp2p-slpi { @@ -1435,6 +1446,46 @@ }; }; + ipa@1e40000 { + compatible = "qcom,sdm845-ipa"; + + modem-init; + modem-remoteproc = <&mss_pil>; + + reg = <0 0x1e40000 0 0x7000>, + <0 0x1e47000 0 0x2000>, + <0 0x1e04000 0 0x2c000>; + reg-names = "ipa-reg", + "ipa-shared", + "gsi"; + + interrupts-extended = + <&intc 0 311 IRQ_TYPE_EDGE_RISING>, + <&intc 0 432 IRQ_TYPE_LEVEL_HIGH>, + <&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>, + <&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>; + interrupt-names = "ipa", + "gsi", + "ipa-clock-query", + "ipa-setup-ready"; + + clocks = <&rpmhcc RPMH_IPA_CLK>; + clock-names = "core"; + + interconnects = + <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>, + <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>, + <&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>; + interconnect-names = "memory", + "imem", + "config"; + + qcom,smem-states = <&ipa_smp2p_out 0>, + <&ipa_smp2p_out 1>; + qcom,smem-state-names = "ipa-clock-enabled-valid", + "ipa-clock-enabled"; + }; + tcsr_mutex_regs: syscon@1f40000 { compatible = "syscon"; reg = <0 0x01f40000 0 0x40000>;