From patchwork Fri May 31 03:53:34 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 165489
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
	ilias.apalodimas@linaro.org
Cc: evgreen@chromium.org, benchan@google.com, ejcaruso@google.com,
	cpratapa@codeaurora.org, syadagir@codeaurora.org,
	subashab@codeaurora.org, abhishek.esse@gmail.com,
	netdev@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-soc@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org
Subject: [PATCH v2 03/17] soc: qcom: ipa: main code
Date: Thu, 30 May 2019 22:53:34 -0500
Message-Id: <20190531035348.7194-4-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

This patch includes three source files that provide some basic "main
program" code for the IPA driver.  They are:

  - "ipa.h" defines the top-level IPA structure, which represents an
    IPA device throughout the code.
  - "ipa_main.c" contains the platform driver probe function, along
    with some general code used during initialization.
  - "ipa_reg.h" defines the offsets of the 32-bit registers used by
    the IPA device, along with masks that define the position and
    width of fields smaller than 32 bits located within these
    registers.

Each file includes some documentation that provides a little more
overview of how the code is organized and used.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa.h      | 131 ++++++
 drivers/net/ipa/ipa_main.c | 921 +++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_reg.h  | 279 +++++++++++
 3 files changed, 1331 insertions(+)
 create mode 100644 drivers/net/ipa/ipa.h
 create mode 100644 drivers/net/ipa/ipa_main.c
 create mode 100644 drivers/net/ipa/ipa_reg.h

-- 
2.20.1

diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
new file mode 100644
index 000000000000..c580254d1e0e
--- /dev/null
+++ b/drivers/net/ipa/ipa.h
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_H_
+#define _IPA_H_
+
+#include
+#include
+#include
+#include
+
+#include "gsi.h"
+#include "ipa_qmi.h"
+#include "ipa_endpoint.h"
+#include "ipa_interrupt.h"
+
+struct clk;
+struct icc_path;
+struct net_device;
+struct platform_device;
+
+struct ipa_clock;
+struct ipa_smp2p;
+struct ipa_interrupt;
+
+/**
+ * struct ipa - IPA information
+ * @gsi: Embedded GSI structure
+ * @pdev: Platform device
+ * @smp2p: SMP2P information
+ * @clock: IPA clocking information
+ * @suspend_ref: Whether a clock reference preventing suspend has been taken
+ * @route_virt: Virtual address of routing table
+ * @route_addr: DMA address for routing table
+ * @filter_virt: Virtual address of filter table
+ * @filter_addr: DMA address for filter table
+ * @interrupt: IPA Interrupt information
+ * @uc_loaded: Non-zero when microcontroller has reported it is ready
+ * @reg_phys: Physical address of IPA register space
+ * @reg_virt: Virtual address used for IPA register access
+ * @shared_phys: Physical address of memory space shared with modem
+ * @shared_virt: Virtual address of memory space shared with modem
+ * @shared_offset: Additional offset used for shared memory
+ * @wakeup: Wakeup source information
+ * @filter_support: Bit mask indicating endpoints that support filtering
+ * @initialized: Bit mask indicating endpoints initialized
+ * @set_up: Bit mask indicating endpoints set up
+ * @enabled: Bit mask indicating endpoints enabled
+ * @suspended: Bit mask indicating endpoints suspended
+ * @endpoint: Array of endpoint information
+ * @endpoint_map: Mapping of GSI channel to IPA endpoint information
+ * @command_endpoint: Endpoint used for command TX
+ * @default_endpoint: Endpoint used for default route RX
+ * @modem_netdev: Network device structure used for modem
+ * @setup_complete: Flag indicating whether setup stage has completed
+ * @qmi: QMI information
+ */
+struct ipa {
+	struct gsi gsi;
+	struct platform_device *pdev;
+	struct ipa_smp2p *smp2p;
+	struct ipa_clock *clock;
+	atomic_t suspend_ref;
+
+	void *route_virt;
+	dma_addr_t route_addr;
+	void *filter_virt;
+	dma_addr_t filter_addr;
+
+	struct ipa_interrupt *interrupt;
+	u32 uc_loaded;
+
+	phys_addr_t reg_phys;
+	void __iomem *reg_virt;
+	phys_addr_t shared_phys;
+	void *shared_virt;
+	u32 shared_offset;
+
+	struct wakeup_source wakeup;
+
+	/* Bit masks indicating endpoint state */
+	u32 filter_support;
+	u32 initialized;
+	u32 set_up;
+	u32 enabled;
+	u32 suspended;
+
+	struct ipa_endpoint endpoint[IPA_ENDPOINT_MAX];
+	struct ipa_endpoint *endpoint_map[GSI_CHANNEL_MAX];
+	struct ipa_endpoint *command_endpoint;	/* TX */
+	struct ipa_endpoint *default_endpoint;	/* Default route RX */
+
+	struct net_device *modem_netdev;
+	u32 setup_complete;
+
+	struct ipa_qmi qmi;
+};
+
+/**
+ * ipa_setup() - Perform IPA setup
+ * @ipa: IPA pointer
+ *
+ * IPA initialization is broken into stages: init; config; setup; and
+ * sometimes enable.  (These have inverses: exit, deconfig, teardown, and
+ * disable.)  Activities performed at the init stage can be done without
+ * requiring any access to hardware.  For IPA, activities performed at the
+ * config stage require the IPA clock to be running, because they involve
+ * access to IPA registers.  The setup stage is performed only after the
+ * GSI hardware is ready (more on this below).  And finally, IPA endpoints
+ * can be enabled once they're successfully set up.
+ *
+ * This function, @ipa_setup(), starts the setup stage.
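+ *
+ * A typical AP-driven bring-up thus proceeds: ipa_probe() performs the
+ * init stage, ipa_config() performs the config stage, and once the GSI
+ * hardware is ready, this function performs the setup stage.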
+ *
+ * In order for the GSI hardware to be functional it needs firmware to be
+ * loaded (in addition to some other low-level initialization).  This early
+ * GSI initialization can be done either by Trust Zone or by the modem.  If
+ * it's done by Trust Zone, the AP loads the GSI firmware and supplies it to
+ * Trust Zone to verify and install.  The AP knows when this completes, and
+ * whether it was successful.  In this case the AP proceeds to setup once it
+ * knows GSI is ready.
+ *
+ * If the modem performs early GSI initialization, the AP needs to know when
+ * this has occurred.  An SMP2P interrupt is used for this purpose, and
+ * receipt of that interrupt triggers the call to ipa_setup().
+ */
+int ipa_setup(struct ipa *ipa);
+
+#endif /* _IPA_H_ */
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
new file mode 100644
index 000000000000..bd3f258b3b02
--- /dev/null
+++ b/drivers/net/ipa/ipa_main.c
@@ -0,0 +1,921 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+#include "ipa_netdev.h"
+#include "ipa_smp2p.h"
+#include "ipa_uc.h"
+#include "ipa_interrupt.h"
+
+/**
+ * DOC: The IP Accelerator
+ *
+ * This driver supports the Qualcomm IP Accelerator (IPA), which is a
+ * networking component found in many Qualcomm SoCs.  The IPA is connected
+ * to the application processor (AP), but is also connected to (and
+ * partially controlled by) other "execution environments" (EEs), such as
+ * a modem.
+ *
+ * The IPA is the conduit between the AP and the modem that carries network
+ * traffic.  This driver presents a network interface representing the
+ * connection of the modem to external (e.g. LTE) networks.  The IPA can
+ * provide protocol checksum calculation, offloading this work from the AP.
+ * The IPA is able to provide additional functionality, including routing,
+ * filtering, and NAT support, but that more advanced functionality is not
+ * currently supported.
+ *
+ * Certain resources--including routing tables and filter tables--are still
+ * defined in this driver, because they must be initialized even when the
+ * advanced hardware features are not used.
+ *
+ * There are two distinct layers that implement the IPA hardware, and this
+ * is reflected in the organization of the driver.  The generic software
+ * interface (GSI) is an integral component of the IPA, providing a
+ * well-defined communication layer between the AP subsystem and the IPA
+ * core.  The GSI implements a set of "channels" used for communication
+ * between the AP and the IPA.
+ *
+ * The IPA layer uses GSI channels to implement its "endpoints".  And while
+ * a GSI channel carries data between the AP and the IPA, a pair of IPA
+ * endpoints is used to carry traffic between two EEs.  Specifically, the
+ * main modem network interface is implemented by two pairs of endpoints:
+ * a TX endpoint on the AP coupled with an RX endpoint on the modem, and
+ * another RX endpoint on the AP receiving data from a TX endpoint on the
+ * modem.
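+ *
+ * So, for example, traffic the AP sends toward the modem traverses an AP
+ * TX endpoint paired with a modem RX endpoint, with GSI channels moving
+ * the data between each EE and the IPA hardware itself.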
+ */
+
+#define IPA_TABLE_ALIGN		128	/* Minimum table alignment */
+#define IPA_TABLE_ENTRY_SIZE	sizeof(u64)	/* Holds a physical address */
+#define IPA_FILTER_SIZE		8	/* Filter descriptor size */
+#define IPA_ROUTE_SIZE		8	/* Route descriptor size */
+
+/* Backward compatibility register value to use for SDM845 */
+#define IPA_BCR_REG_VAL		0x0000003b
+
+/* The name of the main firmware file relative to /lib/firmware */
+#define IPA_FWS_PATH		"ipa_fws.mdt"
+#define IPA_PAS_ID		15
+
+/**
+ * ipa_filter_tuple_zero() - Zero an endpoint's filter tuple
+ * @endpoint: Endpoint whose filter tuple should be zeroed
+ *
+ * The endpoint must be for the AP (not the modem) and must support
+ * filtering.  Updates the filter mask values without changing the
+ * routing ones.
+ */
+static void ipa_filter_tuple_zero(struct ipa_endpoint *endpoint)
+{
+	enum ipa_endpoint_id endpoint_id = endpoint->endpoint_id;
+	u32 offset;
+	u32 val;
+
+	offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(endpoint_id);
+
+	val = ioread32(endpoint->ipa->reg_virt + offset);
+
+	/* Zero all filter-related fields, preserving the rest */
+	u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+static void ipa_filter_hash_tuple_config(struct ipa *ipa)
+{
+	u32 ep_mask = ipa->filter_support;
+
+	while (ep_mask) {
+		enum ipa_endpoint_id endpoint_id = __ffs(ep_mask);
+		struct ipa_endpoint *endpoint;
+
+		ep_mask ^= BIT(endpoint_id);
+
+		endpoint = &ipa->endpoint[endpoint_id];
+		if (endpoint->ee_id != GSI_EE_MODEM)
+			ipa_filter_tuple_zero(endpoint);
+	}
+}
+
+/**
+ * ipa_route_tuple_zero() - Zero a routing table entry tuple
+ * @ipa: IPA pointer
+ * @route_id: Identifier for routing table entry to be zeroed
+ *
+ * Updates the routing table values without changing the filtering ones.
+ */
+static void ipa_route_tuple_zero(struct ipa *ipa, u32 route_id)
+{
+	u32 offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(route_id);
+	u32 val;
+
+	val = ioread32(ipa->reg_virt + offset);
+
+	/* Zero all route-related fields, preserving the rest */
+	u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
+
+	iowrite32(val, ipa->reg_virt + offset);
+}
+
+static void ipa_route_hash_tuple_config(struct ipa *ipa)
+{
+	u32 route_mask;
+	u32 modem_mask;
+
+	BUILD_BUG_ON(!IPA_SMEM_MODEM_RT_COUNT);
+	BUILD_BUG_ON(IPA_SMEM_RT_COUNT < IPA_SMEM_MODEM_RT_COUNT);
+	BUILD_BUG_ON(IPA_SMEM_RT_COUNT >= BITS_PER_LONG);
+
+	/* Compute a mask representing non-modem routing table entries */
+	route_mask = GENMASK(IPA_SMEM_RT_COUNT - 1, 0);
+	modem_mask = GENMASK(IPA_SMEM_MODEM_RT_INDEX_MAX,
+			     IPA_SMEM_MODEM_RT_INDEX_MIN);
+	route_mask &= ~modem_mask;
+
+	while (route_mask) {
+		u32 route_id = __ffs(route_mask);
+
+		route_mask ^= BIT(route_id);
+
+		ipa_route_tuple_zero(ipa, route_id);
+	}
+}
+
+/**
+ * ipa_route_setup() - Initialize an empty routing table
+ * @ipa: IPA pointer
+ *
+ * Each entry in the routing table contains the DMA address of a route
+ * descriptor.  A special zero descriptor is allocated that represents "no
+ * route" and this function initializes all its entries to point at that
+ * zero route.  The zero route is allocated with the table, immediately
+ * past its end.
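+ *
+ * The resulting DMA allocation is thus laid out as IPA_SMEM_RT_COUNT
+ * eight-byte entries followed by the zero route descriptor, with every
+ * entry initially holding the DMA address of that zero route.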
+ *
+ * @Return: 0 if successful or -ENOMEM
+ */
+static int ipa_route_setup(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	u64 zero_route_addr;
+	dma_addr_t addr;
+	u32 route_id;
+	size_t size;
+	u64 *virt;
+
+	BUILD_BUG_ON(!IPA_ROUTE_SIZE);
+	BUILD_BUG_ON(sizeof(*virt) != IPA_TABLE_ENTRY_SIZE);
+
+	/* Allocate the routing table, with enough space at the end of the
+	 * table to hold the zero route descriptor.  Initialize all route
+	 * table entries to point to the zero route.
+	 */
+	size = IPA_SMEM_RT_COUNT * IPA_TABLE_ENTRY_SIZE;
+	virt = dma_alloc_coherent(dev, size + IPA_ROUTE_SIZE, &addr,
+				  GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+	ipa->route_virt = virt;
+	ipa->route_addr = addr;
+
+	/* Zero route is immediately after the route table */
+	zero_route_addr = addr + size;
+
+	for (route_id = 0; route_id < IPA_SMEM_RT_COUNT; route_id++)
+		*virt++ = zero_route_addr;
+
+	ipa_cmd_route_config_ipv4(ipa, size);
+	ipa_cmd_route_config_ipv6(ipa, size);
+
+	ipa_route_hash_tuple_config(ipa);
+
+	/* Configure default route for exception packets */
+	ipa_endpoint_default_route_setup(ipa->default_endpoint);
+
+	return 0;
+}
+
+/**
+ * ipa_route_teardown() - Inverse of ipa_route_setup().
+ * @ipa: IPA pointer
+ */
+static void ipa_route_teardown(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	size_t size;
+
+	ipa_endpoint_default_route_teardown(ipa->default_endpoint);
+
+	size = IPA_SMEM_RT_COUNT * IPA_TABLE_ENTRY_SIZE;
+	size += IPA_ROUTE_SIZE;
+
+	dma_free_coherent(dev, size, ipa->route_virt, ipa->route_addr);
+	ipa->route_virt = NULL;
+	ipa->route_addr = 0;
+}
+
+/**
+ * ipa_filter_setup() - Initialize an empty filter table
+ * @ipa: IPA pointer
+ *
+ * The filter table consists of a bitmask representing which endpoints
+ * support filtering, followed by one table entry for each set bit in the
+ * mask.  Each entry in the filter table contains the DMA address of a
+ * filter descriptor.  A special zero descriptor is allocated that
+ * represents "no filter" and this function initializes all its entries
+ * to point at that zero filter.  The zero filter is allocated with the
+ * table, immediately past its end.
+ *
+ * @Return: 0 if successful or a negative error code
+ */
+static int ipa_filter_setup(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	u64 zero_filter_addr;
+	u32 filter_count;
+	dma_addr_t addr;
+	size_t size;
+	u64 *virt;
+	u32 i;
+
+	BUILD_BUG_ON(!IPA_FILTER_SIZE);
+
+	/* Allocate the filter table, with an extra slot for the bitmap.
+	 * Also allocate enough space at the end of the table to hold the
+	 * zero filter descriptor.  Initialize all filter table entries to
+	 * point to that.
+	 */
+	filter_count = hweight32(ipa->filter_support);
+	size = (filter_count + 1) * IPA_TABLE_ENTRY_SIZE;
+	virt = dma_alloc_coherent(dev, size + IPA_FILTER_SIZE, &addr,
+				  GFP_KERNEL);
+	if (!virt)
+		goto err_clear_filter_support;
+	ipa->filter_virt = virt;
+	ipa->filter_addr = addr;
+
+	/* Zero filter is immediately after the filter table */
+	zero_filter_addr = addr + size;
+
+	/* Save the filter table bitmap.  The "soft" bitmap value must be
+	 * converted to the hardware representation by shifting it left one
+	 * position.  (Bit 0 represents global filtering, which is possible
+	 * but not used.)
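+	 * For example, if endpoints 1, 2, and 4 support filtering, the
+	 * "soft" bitmap is 0x16, and the value written below is 0x2c.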
+	 */
+	*virt++ = ipa->filter_support << 1;
+
+	/* Now point every entry in the table at the empty filter */
+	for (i = 0; i < filter_count; i++)
+		*virt++ = zero_filter_addr;
+
+	ipa_cmd_filter_config_ipv4(ipa, size);
+	ipa_cmd_filter_config_ipv6(ipa, size);
+
+	ipa_filter_hash_tuple_config(ipa);
+
+	return 0;
+
+err_clear_filter_support:
+	ipa->filter_support = 0;
+
+	return -ENOMEM;
+}
+
+/**
+ * ipa_filter_teardown() - Inverse of ipa_filter_setup().
+ * @ipa: IPA pointer
+ */
+static void ipa_filter_teardown(struct ipa *ipa)
+{
+	u32 filter_count = hweight32(ipa->filter_support);
+	struct device *dev = &ipa->pdev->dev;
+	size_t size;
+
+	size = (filter_count + 1) * IPA_TABLE_ENTRY_SIZE;
+	size += IPA_FILTER_SIZE;
+
+	dma_free_coherent(dev, size, ipa->filter_virt, ipa->filter_addr);
+	ipa->filter_virt = NULL;
+	ipa->filter_addr = 0;
+	ipa->filter_support = 0;
+}
+
+/**
+ * ipa_suspend_handler() - Handle the suspend interrupt
+ * @ipa: IPA pointer
+ * @interrupt_id: Interrupt type
+ *
+ * When in suspended state, the IPA can trigger a resume by sending a
+ * SUSPEND IPA interrupt.
+ */
+static void ipa_suspend_handler(struct ipa *ipa,
+				enum ipa_interrupt_id interrupt_id)
+{
+	/* Take a single clock reference to prevent suspend.  All
+	 * endpoints will be resumed as a result.  This reference will
+	 * be dropped when we get a power management suspend request.
+	 */
+	if (!atomic_xchg(&ipa->suspend_ref, 1))
+		ipa_clock_get(ipa->clock);
+
+	/* Acknowledge/clear the suspend interrupt on all endpoints */
+	ipa_interrupt_suspend_clear_all(ipa->interrupt);
+}
+
+/* Remoteproc callbacks for SSR events: prepare, start, stop, unprepare */
+int ipa_ssr_prepare(struct rproc_subdev *subdev)
+{
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_prepare);
+
+int ipa_ssr_start(struct rproc_subdev *subdev)
+{
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_start);
+
+void ipa_ssr_stop(struct rproc_subdev *subdev, bool crashed)
+{
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_stop);
+
+void ipa_ssr_unprepare(struct rproc_subdev *subdev)
+{
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_unprepare);
+
+/**
+ * ipa_setup() - Set up IPA hardware
+ * @ipa: IPA pointer
+ *
+ * Perform initialization that requires issuing immediate commands using
+ * the command TX endpoint.  This cannot be run until early initialization
+ * (including loading GSI firmware) is complete.
+ */
+int ipa_setup(struct ipa *ipa)
+{
+	struct ipa_endpoint *rx_endpoint;
+	struct ipa_endpoint *tx_endpoint;
+	int ret;
+
+	dev_dbg(&ipa->pdev->dev, "%s() started\n", __func__);
+
+	ret = gsi_setup(&ipa->gsi);
+	if (ret)
+		return ret;
+
+	ipa->interrupt = ipa_interrupt_setup(ipa);
+	if (IS_ERR(ipa->interrupt)) {
+		ret = PTR_ERR(ipa->interrupt);
+		goto err_gsi_teardown;
+	}
+	ipa_interrupt_add(ipa->interrupt, IPA_INTERRUPT_TX_SUSPEND,
+			  ipa_suspend_handler);
+
+	ipa_uc_setup(ipa);
+
+	ipa_endpoint_setup(ipa);
+
+	/* We need to use the AP command TX endpoint to perform other
+	 * initialization, so we set that up first.
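+	 * (ipa_smem_setup(), ipa_route_setup(), and ipa_filter_setup()
+	 * below all depend on this endpoint being enabled.)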
+ */ + ret = ipa_endpoint_enable_one(ipa->command_endpoint); + if (ret) + goto err_endpoint_teardown; + + ret = ipa_smem_setup(ipa); + if (ret) + goto err_command_disable; + + ret = ipa_route_setup(ipa); + if (ret) + goto err_smem_teardown; + + ret = ipa_filter_setup(ipa); + if (ret) + goto err_route_teardown; + + ret = ipa_endpoint_enable_one(ipa->default_endpoint); + if (ret) + goto err_filter_teardown; + + rx_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_MODEM_RX]; + tx_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_MODEM_TX]; + ipa->modem_netdev = ipa_netdev_setup(ipa, rx_endpoint, tx_endpoint); + if (IS_ERR(ipa->modem_netdev)) { + ret = PTR_ERR(ipa->modem_netdev); + goto err_default_disable; + } + + ipa->setup_complete = 1; + + dev_info(&ipa->pdev->dev, "IPA driver setup completed successfully\n"); + + return 0; + +err_default_disable: + ipa_endpoint_disable_one(ipa->default_endpoint); +err_filter_teardown: + ipa_filter_teardown(ipa); +err_route_teardown: + ipa_route_teardown(ipa); +err_smem_teardown: + ipa_smem_teardown(ipa); +err_command_disable: + ipa_endpoint_disable_one(ipa->command_endpoint); +err_endpoint_teardown: + ipa_endpoint_teardown(ipa); + ipa_uc_teardown(ipa); + ipa_interrupt_remove(ipa->interrupt, IPA_INTERRUPT_TX_SUSPEND); + ipa_interrupt_teardown(ipa->interrupt); +err_gsi_teardown: + gsi_teardown(&ipa->gsi); + + return ret; +} + +/** + * ipa_teardown() - Inverse of ipa_setup() + * @ipa: IPA pointer + */ +static void ipa_teardown(struct ipa *ipa) +{ + ipa_netdev_teardown(ipa->modem_netdev); + ipa_endpoint_disable_one(ipa->default_endpoint); + ipa_filter_teardown(ipa); + ipa_route_teardown(ipa); + ipa_smem_teardown(ipa); + ipa_endpoint_disable_one(ipa->command_endpoint); + ipa_endpoint_teardown(ipa); + ipa_uc_teardown(ipa); + ipa_interrupt_remove(ipa->interrupt, IPA_INTERRUPT_TX_SUSPEND); + ipa_interrupt_teardown(ipa->interrupt); + gsi_teardown(&ipa->gsi); +} + +/** + * ipa_hardware_config() - Primitive hardware initialization + * @ipa: IPA pointer + */ +static void ipa_hardware_config(struct ipa *ipa) +{ + u32 val; + + /* SDM845 has IPA version 3.5.1 */ + val = IPA_BCR_REG_VAL; + iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET); + + val = u32_encode_bits(8, GEN_QMB_0_MAX_WRITES_FMASK); + val |= u32_encode_bits(4, GEN_QMB_1_MAX_WRITES_FMASK); + iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_WRITES_OFFSET); + + val = u32_encode_bits(8, GEN_QMB_0_MAX_READS_FMASK); + val |= u32_encode_bits(12, GEN_QMB_1_MAX_READS_FMASK); + iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_READS_OFFSET); +} + +/** + * ipa_hardware_deconfig() - Inverse of ipa_hardware_config() + * @ipa: IPA pointer + * + * This restores the power-on reset values (even if they aren't different) + */ +static void ipa_hardware_deconfig(struct ipa *ipa) +{ + /* Values we program above are the same as the power-on reset values */ +} + +static void ipa_resource_config_src_one(struct ipa *ipa, + const struct ipa_resource_src *resource) +{ + u32 offset = IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET; + u32 stride = IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_STRIDE; + enum ipa_resource_type_src n = resource->type; + const struct ipa_resource_limits *xlimits; + const struct ipa_resource_limits *ylimits; + u32 val; + + xlimits = &resource->limits[IPA_RESOURCE_GROUP_LWA_DL]; + ylimits = &resource->limits[IPA_RESOURCE_GROUP_UL_DL]; + + val = u32_encode_bits(xlimits->min, X_MIN_LIM_FMASK); + val |= u32_encode_bits(xlimits->max, X_MAX_LIM_FMASK); + val |= u32_encode_bits(ylimits->min, Y_MIN_LIM_FMASK); + val |= u32_encode_bits(ylimits->max, 
Y_MAX_LIM_FMASK);
+
+	iowrite32(val, ipa->reg_virt + offset + n * stride);
+}
+
+static void ipa_resource_config_dst_one(struct ipa *ipa,
+					const struct ipa_resource_dst *resource)
+{
+	u32 offset = IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET;
+	u32 stride = IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_STRIDE;
+	enum ipa_resource_type_dst n = resource->type;
+	const struct ipa_resource_limits *xlimits;
+	const struct ipa_resource_limits *ylimits;
+	u32 val;
+
+	xlimits = &resource->limits[IPA_RESOURCE_GROUP_LWA_DL];
+	ylimits = &resource->limits[IPA_RESOURCE_GROUP_UL_DL];
+
+	val = u32_encode_bits(xlimits->min, X_MIN_LIM_FMASK);
+	val |= u32_encode_bits(xlimits->max, X_MAX_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->min, Y_MIN_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->max, Y_MAX_LIM_FMASK);
+
+	iowrite32(val, ipa->reg_virt + offset + n * stride);
+}
+
+static void
+ipa_resource_config(struct ipa *ipa, const struct ipa_resource_data *data)
+{
+	const struct ipa_resource_src *resource_src;
+	const struct ipa_resource_dst *resource_dst;
+	u32 i;
+
+	resource_src = data->resource_src;
+	resource_dst = data->resource_dst;
+
+	for (i = 0; i < data->resource_src_count; i++)
+		ipa_resource_config_src_one(ipa, &resource_src[i]);
+
+	for (i = 0; i < data->resource_dst_count; i++)
+		ipa_resource_config_dst_one(ipa, &resource_dst[i]);
+}
+
+static void ipa_resource_deconfig(struct ipa *ipa)
+{
+	/* Nothing to do */
+}
+
+static void ipa_idle_indication_cfg(struct ipa *ipa,
+				    u32 enter_idle_debounce_thresh,
+				    bool const_non_idle_enable)
+{
+	u32 val;
+
+	val = u32_encode_bits(enter_idle_debounce_thresh,
+			      ENTER_IDLE_DEBOUNCE_THRESH_FMASK);
+	if (const_non_idle_enable)
+		val |= CONST_NON_IDLE_ENABLE_FMASK;
+
+	iowrite32(val, ipa->reg_virt + IPA_REG_IDLE_INDICATION_CFG_OFFSET);
+}
+
+/**
+ * ipa_dcd_config() - Enable dynamic clock division on IPA
+ * @ipa: IPA pointer
+ *
+ * Configures when the IPA signals it is idle to the global clock
+ * controller, which can respond by scaling down the clock to save
+ * power.
+ */
+static void ipa_dcd_config(struct ipa *ipa)
+{
+	/* Recommended values for IPA 3.5 according to IPA HPG */
+	ipa_idle_indication_cfg(ipa, 256, false);
+}
+
+static void ipa_dcd_deconfig(struct ipa *ipa)
+{
+	/* Power-on reset values */
+	ipa_idle_indication_cfg(ipa, 0, true);
+}
+
+/**
+ * ipa_config() - Configure IPA hardware
+ * @ipa: IPA pointer
+ * @data: IPA configuration data
+ *
+ * Perform initialization requiring the IPA clock to be enabled.
+ */
+static int ipa_config(struct ipa *ipa, const struct ipa_data *data)
+{
+	u32 val;
+	int ret;
+
+	/* Get a clock reference to allow initialization.  This reference
+	 * is held after initialization completes, and won't get dropped
+	 * unless/until a system suspend request arrives.
+	 */
+	atomic_set(&ipa->suspend_ref, 1);
+	ipa_clock_get(ipa->clock);
+
+	ipa_hardware_config(ipa);
+
+	/* Ensure we support the number of endpoints supplied by hardware */
+	val = ioread32(ipa->reg_virt + IPA_REG_ENABLED_PIPES_OFFSET);
+	if (val > IPA_ENDPOINT_MAX) {
+		ret = -EINVAL;
+		goto err_hardware_deconfig;
+	}
+
+	ret = ipa_smem_config(ipa);
+	if (ret)
+		goto err_hardware_deconfig;
+
+	/* Assign resource limitations to each group */
+	ipa_resource_config(ipa, data->resource_data);
+
+	/* Note that enabling dynamic clock division must not be
+	 * attempted for IPA hardware versions prior to 3.5.
+	 */
+	ipa_dcd_config(ipa);
+
+	return 0;
+
+err_hardware_deconfig:
+	ipa_hardware_deconfig(ipa);
+	ipa_clock_put(ipa->clock);
+
+	return ret;
+}
+
+/**
+ * ipa_deconfig() - Inverse of ipa_config()
+ * @ipa: IPA pointer
+ */
+static void ipa_deconfig(struct ipa *ipa)
+{
+	ipa_dcd_deconfig(ipa);
+	ipa_resource_deconfig(ipa);
+	ipa_smem_deconfig(ipa);
+	ipa_hardware_deconfig(ipa);
+
+	ipa_clock_put(ipa->clock);
+}
+
+static int ipa_firmware_load(struct device *dev)
+{
+	const struct firmware *fw;
+	struct device_node *node;
+	struct resource res;
+	phys_addr_t phys;
+	ssize_t size;
+	void *virt;
+	int ret;
+
+	node = of_parse_phandle(dev->of_node, "memory-region", 0);
+	if (!node) {
+		dev_err(dev, "memory-region not specified\n");
+		return -EINVAL;
+	}
+
+	ret = of_address_to_resource(node, 0, &res);
+	if (ret)
+		return ret;
+
+	ret = request_firmware(&fw, IPA_FWS_PATH, dev);
+	if (ret)
+		return ret;
+
+	phys = res.start;
+	size = (size_t)resource_size(&res);
+	virt = memremap(phys, size, MEMREMAP_WC);
+	if (!virt) {
+		ret = -ENOMEM;
+		goto out_release_firmware;
+	}
+
+	ret = qcom_mdt_load(dev, fw, IPA_FWS_PATH, IPA_PAS_ID,
+			    virt, phys, size, NULL);
+	if (!ret)
+		ret = qcom_scm_pas_auth_and_reset(IPA_PAS_ID);
+
+	memunmap(virt);
+out_release_firmware:
+	release_firmware(fw);
+
+	return ret;
+}
+
+static const struct of_device_id ipa_match[] = {
+	{
+		.compatible	= "qcom,sdm845-ipa",
+		.data		= &ipa_data_sdm845,
+	},
+	{ },
+};
+
+/**
+ * ipa_probe() - IPA platform driver probe function
+ * @pdev: Platform device pointer
+ *
+ * @Return: 0 if successful, or a negative error code (possibly
+ *	    EPROBE_DEFER)
+ *
+ * This is the main entry point for the IPA driver.  When successful, it
+ * initializes the IPA hardware for use.
+ *
+ * Initialization proceeds in several stages.  The "init" stage involves
+ * activities that can be done without access to the IPA hardware.  The
+ * "config" stage requires the IPA clock to be active so IPA registers
+ * can be accessed, but does not require access to the GSI layer.  The
+ * "setup" stage requires access to GSI, and includes initialization
+ * that's performed by issuing IPA immediate commands.
+ */
+static int ipa_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	const struct ipa_data *data;
+	struct ipa *ipa;
+	bool modem_init;
+	int ret;
+
+	/* We assume we're working on 64-bit hardware */
+	BUILD_BUG_ON(!IS_ENABLED(CONFIG_64BIT));
+	BUILD_BUG_ON(ARCH_DMA_MINALIGN % IPA_TABLE_ALIGN);
+
+	data = of_device_get_match_data(dev);
+
+	modem_init = of_property_read_bool(dev->of_node, "modem-init");
+
+	/* If we need Trust Zone, make sure it's ready */
+	if (!modem_init)
+		if (!qcom_scm_is_available())
+			return -EPROBE_DEFER;
+
+	ipa = kzalloc(sizeof(*ipa), GFP_KERNEL);
+	if (!ipa)
+		return -ENOMEM;
+	ipa->pdev = pdev;
+	dev_set_drvdata(dev, ipa);
+
+	/* Initialize the clock and interconnects early.  They might
+	 * not be ready when we're probed, so this might return
+	 * -EPROBE_DEFER.
+ */ + atomic_set(&ipa->suspend_ref, 0); + + ipa->clock = ipa_clock_init(ipa); + if (IS_ERR(ipa->clock)) { + ret = PTR_ERR(ipa->clock); + goto err_free_ipa; + } + + ret = ipa_mem_init(ipa); + if (ret) + goto err_clock_exit; + + ret = gsi_init(&ipa->gsi, pdev, data->endpoint_data_count, + data->endpoint_data); + if (ret) + goto err_mem_exit; + + ipa->smp2p = ipa_smp2p_init(ipa, modem_init); + if (IS_ERR(ipa->smp2p)) { + ret = PTR_ERR(ipa->smp2p); + goto err_gsi_exit; + } + + ret = ipa_endpoint_init(ipa, data->endpoint_data_count, + data->endpoint_data); + if (ret) + goto err_smp2p_exit; + ipa->command_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_COMMAND_TX]; + ipa->default_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_LAN_RX]; + + /* Create a wakeup source. */ + wakeup_source_init(&ipa->wakeup, "ipa"); + + /* Proceed to real initialization */ + ret = ipa_config(ipa, data); + if (ret) + goto err_endpoint_exit; + + dev_info(dev, "IPA driver initialized"); + + /* If the modem is verifying and loading firmware, we're + * done. We will receive an SMP2P interrupt when it is OK + * to proceed with the setup phase (involving issuing + * immediate commands after GSI is initialized). + */ + if (modem_init) + return 0; + + /* Otherwise we need to load the firmware and have Trust + * Zone validate and install it. If that succeeds we can + * proceed with setup. + */ + ret = ipa_firmware_load(dev); + if (ret) + goto err_deconfig; + + ret = ipa_setup(ipa); + if (ret) + goto err_deconfig; + + return 0; + +err_deconfig: + ipa_deconfig(ipa); +err_endpoint_exit: + wakeup_source_remove(&ipa->wakeup); + ipa_endpoint_exit(ipa); +err_smp2p_exit: + ipa_smp2p_exit(ipa->smp2p); +err_gsi_exit: + gsi_exit(&ipa->gsi); +err_mem_exit: + ipa_mem_exit(ipa); +err_clock_exit: + ipa_clock_exit(ipa->clock); +err_free_ipa: + kfree(ipa); + + return ret; +} + +static int ipa_remove(struct platform_device *pdev) +{ + struct ipa *ipa = dev_get_drvdata(&pdev->dev); + + ipa_smp2p_disable(ipa->smp2p); + if (ipa->setup_complete) + ipa_teardown(ipa); + + ipa_deconfig(ipa); + wakeup_source_remove(&ipa->wakeup); + ipa_endpoint_exit(ipa); + ipa_smp2p_exit(ipa->smp2p); + ipa_mem_exit(ipa); + ipa_clock_exit(ipa->clock); + kfree(ipa); + + return 0; +} + +/** + * ipa_suspend() - Power management system suspend callback + * @dev: IPA device structure + * + * @Return: Zero + * + * Called by the PM framework when a system suspend operation is invoked. + */ +int ipa_suspend(struct device *dev) +{ + struct ipa *ipa = dev_get_drvdata(dev); + + ipa_clock_put(ipa->clock); + atomic_set(&ipa->suspend_ref, 0); + + return 0; +} + +/** + * ipa_resume() - Power management system resume callback + * @dev: IPA device structure + * + * @Return: Always returns 0 + * + * Called by the PM framework when a system resume operation is invoked. + */ +int ipa_resume(struct device *dev) +{ + struct ipa *ipa = dev_get_drvdata(dev); + + /* This clock reference will keep the IPA out of suspend + * until we get a power management suspend request. 
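+	 * The matching ipa_suspend() callback above drops this same
+	 * reference again.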
+	 */
+	atomic_set(&ipa->suspend_ref, 1);
+	ipa_clock_get(ipa->clock);
+
+	return 0;
+}
+
+static const struct dev_pm_ops ipa_pm_ops = {
+	.suspend_noirq	= ipa_suspend,
+	.resume_noirq	= ipa_resume,
+};
+
+static struct platform_driver ipa_driver = {
+	.probe	= ipa_probe,
+	.remove	= ipa_remove,
+	.driver	= {
+		.name		= "ipa",
+		.owner		= THIS_MODULE,
+		.pm		= &ipa_pm_ops,
+		.of_match_table	= ipa_match,
+	},
+};
+
+module_platform_driver(ipa_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Qualcomm IP Accelerator device driver");
diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
new file mode 100644
index 000000000000..8d04db6f7b00
--- /dev/null
+++ b/drivers/net/ipa/ipa_reg.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_REG_H_
+#define _IPA_REG_H_
+
+#include
+
+/**
+ * DOC: IPA Registers
+ *
+ * IPA registers are located within the "ipa" address space defined by
+ * Device Tree.  The offset of each register within that space is specified
+ * by symbols defined below.  The address space is mapped to virtual memory
+ * space in ipa_mem_init().  All IPA registers are 32 bits wide.
+ *
+ * Certain register types are duplicated for a number of instances of
+ * something.  For example, each IPA endpoint has a set of registers
+ * defining its configuration.  The offset to an endpoint's set of registers
+ * is computed based on a "base" offset, plus an additional "stride" offset
+ * that's dependent on the endpoint's ID.  For such registers, the offset
+ * is computed by a function-like macro that takes a parameter used in
+ * the computation.
+ *
+ * The offset of a register dependent on execution environment is computed
+ * by a macro that is supplied a parameter "ee".  The "ee" value is a member
+ * of the gsi_ee enumerated type.
+ *
+ * The offset of a register dependent on endpoint ID is computed by a macro
+ * that is supplied a parameter "ep".  The "ep" value must be less than
+ * IPA_ENDPOINT_MAX.
+ *
+ * The offset of registers related to hashed filter and router tables is
+ * computed by a macro that is supplied a parameter "er".  The "er"
+ * represents an endpoint ID for filters, or a route ID for routes.  For
+ * filters, the endpoint ID must be less than IPA_ENDPOINT_MAX, but is
+ * further restricted because not all endpoints support filtering.  For
+ * routes, the route ID must be less than IPA_SMEM_RT_COUNT.
+ *
+ * Some registers encode multiple fields within them.  For these, each
+ * field has a symbol below defining a mask that specifies both the
+ * position and width of the field within its register.
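+ *
+ * As an illustration (mirroring ipa_hardware_config() in ipa_main.c), a
+ * multi-field register is written by encoding each value with its field
+ * mask and writing the result to the register's mapped offset:
+ *
+ *	u32 val;
+ *
+ *	val = u32_encode_bits(8, GEN_QMB_0_MAX_WRITES_FMASK);
+ *	val |= u32_encode_bits(4, GEN_QMB_1_MAX_WRITES_FMASK);
+ *	iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_WRITES_OFFSET);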
+ */ + +#define IPA_REG_ENABLED_PIPES_OFFSET 0x00000038 + +#define IPA_REG_ROUTE_OFFSET 0x00000048 +#define ROUTE_DIS_FMASK GENMASK(0, 0) +#define ROUTE_DEF_PIPE_FMASK GENMASK(5, 1) +#define ROUTE_DEF_HDR_TABLE_FMASK GENMASK(6, 6) +#define ROUTE_DEF_HDR_OFST_FMASK GENMASK(16, 7) +#define ROUTE_FRAG_DEF_PIPE_FMASK GENMASK(21, 17) +#define ROUTE_DEF_RETAIN_HDR_FMASK GENMASK(24, 24) + +#define IPA_REG_SHARED_MEM_SIZE_OFFSET 0x00000054 +#define SHARED_MEM_SIZE_FMASK GENMASK(15, 0) +#define SHARED_MEM_BADDR_FMASK GENMASK(31, 16) + +#define IPA_REG_QSB_MAX_WRITES_OFFSET 0x00000074 +#define GEN_QMB_0_MAX_WRITES_FMASK GENMASK(3, 0) +#define GEN_QMB_1_MAX_WRITES_FMASK GENMASK(7, 4) + +#define IPA_REG_QSB_MAX_READS_OFFSET 0x00000078 +#define GEN_QMB_0_MAX_READS_FMASK GENMASK(3, 0) +#define GEN_QMB_1_MAX_READS_FMASK GENMASK(7, 4) + +#define IPA_REG_STATE_AGGR_ACTIVE_OFFSET 0x0000010c + +#define IPA_REG_BCR_OFFSET 0x000001d0 + +#define IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET 0x000001e8 + +#define IPA_REG_AGGR_FORCE_CLOSE_OFFSET 0x000001ec +#define PIPE_BITMAP_FMASK GENMASK(19, 0) + +#define IPA_REG_IDLE_INDICATION_CFG_OFFSET 0x00000220 +#define ENTER_IDLE_DEBOUNCE_THRESH_FMASK GENMASK(15, 0) +#define CONST_NON_IDLE_ENABLE_FMASK GENMASK(16, 16) + +#define IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET 0x00000400 +#define IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_STRIDE 0x0020 +#define IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET 0x00000500 +#define IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_STRIDE 0x0020 +#define X_MIN_LIM_FMASK GENMASK(5, 0) +#define X_MAX_LIM_FMASK GENMASK(13, 8) +#define Y_MIN_LIM_FMASK GENMASK(21, 16) +#define Y_MAX_LIM_FMASK GENMASK(29, 24) + +#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \ + (0x00000800 + 0x0070 * (ep)) +#define ENDP_SUSPEND_FMASK GENMASK(0, 0) +#define ENDP_DELAY_FMASK GENMASK(1, 1) + +#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \ + (0x00000808 + 0x0070 * (ep)) +#define FRAG_OFFLOAD_EN_FMASK GENMASK(0, 0) +#define CS_OFFLOAD_EN_FMASK GENMASK(2, 1) +#define CS_METADATA_HDR_OFFSET_FMASK GENMASK(6, 3) +#define CS_GEN_QMB_MASTER_SEL_FMASK GENMASK(8, 8) + +#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \ + (0x00000810 + 0x0070 * (ep)) +#define HDR_LEN_FMASK GENMASK(5, 0) +#define HDR_OFST_METADATA_VALID_FMASK GENMASK(6, 6) +#define HDR_OFST_METADATA_FMASK GENMASK(12, 7) +#define HDR_ADDITIONAL_CONST_LEN_FMASK GENMASK(18, 13) +#define HDR_OFST_PKT_SIZE_VALID_FMASK GENMASK(19, 19) +#define HDR_OFST_PKT_SIZE_FMASK GENMASK(25, 20) +#define HDR_A5_MUX_FMASK GENMASK(26, 26) +#define HDR_LEN_INC_DEAGG_HDR_FMASK GENMASK(27, 27) +#define HDR_METADATA_REG_VALID_FMASK GENMASK(28, 28) + +#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \ + (0x00000814 + 0x0070 * (ep)) +#define HDR_ENDIANNESS_FMASK GENMASK(0, 0) +#define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK GENMASK(1, 1) +#define HDR_TOTAL_LEN_OR_PAD_FMASK GENMASK(2, 2) +#define HDR_PAYLOAD_LEN_INC_PADDING_FMASK GENMASK(3, 3) +#define HDR_TOTAL_LEN_OR_PAD_OFFSET_FMASK GENMASK(9, 4) +#define HDR_PAD_TO_ALIGNMENT_FMASK GENMASK(13, 10) + +#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(ep) \ + (0x00000818 + 0x0070 * (ep)) + +#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \ + (0x00000824 + 0x0070 * (ep)) +#define AGGR_EN_FMASK GENMASK(1, 0) +#define AGGR_TYPE_FMASK GENMASK(4, 2) +#define AGGR_BYTE_LIMIT_FMASK GENMASK(9, 5) +#define AGGR_TIME_LIMIT_FMASK GENMASK(14, 10) +#define AGGR_PKT_LIMIT_FMASK GENMASK(20, 15) +#define AGGR_SW_EOF_ACTIVE_FMASK GENMASK(21, 21) +#define AGGR_FORCE_CLOSE_FMASK GENMASK(22, 22) +#define AGGR_HARD_BYTE_LIMIT_ENABLE_FMASK 
GENMASK(24, 24) + +#define IPA_REG_ENDP_INIT_MODE_N_OFFSET(ep) \ + (0x00000820 + 0x0070 * (ep)) +#define MODE_FMASK GENMASK(2, 0) +#define DEST_PIPE_INDEX_FMASK GENMASK(8, 4) +#define BYTE_THRESHOLD_FMASK GENMASK(27, 12) +#define PIPE_REPLICATION_EN_FMASK GENMASK(28, 28) +#define PAD_EN_FMASK GENMASK(29, 29) +#define HDR_FTCH_DISABLE_FMASK GENMASK(30, 30) + +#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(ep) \ + (0x00000834 + 0x0070 * (ep)) +#define DEAGGR_HDR_LEN_FMASK GENMASK(5, 0) +#define PACKET_OFFSET_VALID_FMASK GENMASK(7, 7) +#define PACKET_OFFSET_LOCATION_FMASK GENMASK(13, 8) +#define MAX_PACKET_LEN_FMASK GENMASK(31, 16) + +#define IPA_REG_ENDP_INIT_SEQ_N_OFFSET(ep) \ + (0x0000083c + 0x0070 * (ep)) +#define HPS_SEQ_TYPE_FMASK GENMASK(3, 0) +#define DPS_SEQ_TYPE_FMASK GENMASK(7, 4) +#define HPS_REP_SEQ_TYPE_FMASK GENMASK(11, 8) +#define DPS_REP_SEQ_TYPE_FMASK GENMASK(15, 12) + +#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \ + (0x00000840 + 0x0070 * (ep)) +#define STATUS_EN_FMASK GENMASK(0, 0) +#define STATUS_ENDP_FMASK GENMASK(5, 1) +#define STATUS_LOCATION_FMASK GENMASK(8, 8) +#define STATUS_PKT_SUPPRESS_FMASK GENMASK(9, 9) + +/* "er" is either an endpoint id (for filters) or a route id (for routes) */ +#define IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(er) \ + (0x0000085c + 0x0070 * (er)) +#define FILTER_HASH_MSK_SRC_ID_FMASK GENMASK(0, 0) +#define FILTER_HASH_MSK_SRC_IP_FMASK GENMASK(1, 1) +#define FILTER_HASH_MSK_DST_IP_FMASK GENMASK(2, 2) +#define FILTER_HASH_MSK_SRC_PORT_FMASK GENMASK(3, 3) +#define FILTER_HASH_MSK_DST_PORT_FMASK GENMASK(4, 4) +#define FILTER_HASH_MSK_PROTOCOL_FMASK GENMASK(5, 5) +#define FILTER_HASH_MSK_METADATA_FMASK GENMASK(6, 6) +#define FILTER_HASH_UNDEFINED1_FMASK GENMASK(15, 7) +#define IPA_REG_ENDP_FILTER_HASH_MSK_ALL GENMASK(15, 0) + +#define ROUTER_HASH_MSK_SRC_ID_FMASK GENMASK(16, 16) +#define ROUTER_HASH_MSK_SRC_IP_FMASK GENMASK(17, 17) +#define ROUTER_HASH_MSK_DST_IP_FMASK GENMASK(18, 18) +#define ROUTER_HASH_MSK_SRC_PORT_FMASK GENMASK(19, 19) +#define ROUTER_HASH_MSK_DST_PORT_FMASK GENMASK(20, 20) +#define ROUTER_HASH_MSK_PROTOCOL_FMASK GENMASK(21, 21) +#define ROUTER_HASH_MSK_METADATA_FMASK GENMASK(22, 22) +#define ROUTER_HASH_UNDEFINED2_FMASK GENMASK(31, 23) +#define IPA_REG_ENDP_ROUTER_HASH_MSK_ALL GENMASK(31, 16) + +#define IPA_REG_IRQ_STTS_OFFSET \ + IPA_REG_IRQ_STTS_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_STTS_EE_N_OFFSET(ee) \ + (0x00003008 + 0x1000 * (ee)) + +#define IPA_REG_IRQ_EN_OFFSET \ + IPA_REG_IRQ_EN_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_EN_EE_N_OFFSET(ee) \ + (0x0000300c + 0x1000 * (ee)) + +#define IPA_REG_IRQ_CLR_OFFSET \ + IPA_REG_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_CLR_EE_N_OFFSET(ee) \ + (0x00003010 + 0x1000 * (ee)) + +#define IPA_REG_IRQ_UC_OFFSET \ + IPA_REG_IRQ_UC_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_UC_EE_N_OFFSET(ee) \ + (0x0000301c + 0x1000 * (ee)) + +#define IPA_REG_IRQ_SUSPEND_INFO_OFFSET \ + IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(ee) \ + (0x00003030 + 0x1000 * (ee)) + +#define IPA_REG_SUSPEND_IRQ_EN_OFFSET \ + IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(ee) \ + (0x00003034 + 0x1000 * (ee)) + +#define IPA_REG_SUSPEND_IRQ_CLR_OFFSET \ + IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(ee) \ + (0x00003038 + 0x1000 * (ee)) + +/** enum ipa_cs_offload_en - checksum offload field in ENDP_INIT_CFG_N */ +enum ipa_cs_offload_en { + IPA_CS_OFFLOAD_NONE = 0, + 
IPA_CS_OFFLOAD_UL = 1,
+	IPA_CS_OFFLOAD_DL = 2,
+	IPA_CS_RSVD
+};
+
+/** enum ipa_aggr_en - aggregation enable field in ENDP_INIT_AGGR_N */
+enum ipa_aggr_en {
+	IPA_BYPASS_AGGR		= 0,
+	IPA_ENABLE_AGGR		= 1,
+	IPA_ENABLE_DEAGGR	= 2,
+};
+
+/** enum ipa_aggr_type - aggregation type field in ENDP_INIT_AGGR_N */
+enum ipa_aggr_type {
+	IPA_MBIM_16	= 0,
+	IPA_HDLC	= 1,
+	IPA_TLP		= 2,
+	IPA_RNDIS	= 3,
+	IPA_GENERIC	= 4,
+	IPA_QCMAP	= 6,
+};
+
+/** enum ipa_mode - mode field in ENDP_INIT_MODE_N */
+enum ipa_mode {
+	IPA_BASIC			= 0,
+	IPA_ENABLE_FRAMING_HDLC		= 1,
+	IPA_ENABLE_DEFRAMING_HDLC	= 2,
+	IPA_DMA				= 3,
+};
+
+/**
+ * enum ipa_seq_type - HPS and DPS sequencer type fields in ENDP_INIT_SEQ_N
+ * @IPA_SEQ_DMA_ONLY: only DMA is performed
+ * @IPA_SEQ_PKT_PROCESS_NO_DEC_UCP:
+ *	packet processing + no decipher + microcontroller (Ethernet Bridging)
+ * @IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP:
+ *	second packet processing pass + no decipher + microcontroller
+ * @IPA_SEQ_DMA_DEC: DMA + cipher/decipher
+ * @IPA_SEQ_DMA_COMP_DECOMP: DMA + compression/decompression
+ * @IPA_SEQ_INVALID: invalid sequencer type
+ */
+enum ipa_seq_type {
+	IPA_SEQ_DMA_ONLY			= 0x00,
+	IPA_SEQ_PKT_PROCESS_NO_DEC_UCP		= 0x02,
+	IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP	= 0x04,
+	IPA_SEQ_DMA_DEC				= 0x11,
+	IPA_SEQ_DMA_COMP_DECOMP			= 0x20,
+	IPA_SEQ_INVALID				= 0xff,
+};
+
+#endif /* _IPA_REG_H_ */

From patchwork Fri May 31 03:53:36 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 165490
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
	ilias.apalodimas@linaro.org
Cc: evgreen@chromium.org, benchan@google.com, ejcaruso@google.com,
	cpratapa@codeaurora.org, syadagir@codeaurora.org,
	subashab@codeaurora.org, abhishek.esse@gmail.com,
	netdev@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-soc@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org
Subject: [PATCH v2 05/17] soc: qcom: ipa: clocking, interrupts, and memory
Date: Thu, 30 May 2019 22:53:36 -0500
Message-Id: <20190531035348.7194-6-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

This patch incorporates three source files (and their headers).  They're
grouped into one patch mainly to keep the number and size of patches in
this series somewhat reasonable.

  - "ipa_clock.c" and "ipa_clock.h" implement clocking for the IPA
    device.  The IPA has a single core clock managed by the common
    clock framework.  In addition, the IPA has three buses whose
    bandwidth is managed by the Linux interconnect framework.  At this
    time the core clock and all three buses are either on or off; we
    don't yet do any more fine-grained management than that.  The core
    clock and interconnects are enabled and disabled as a unit, using
    a unified clock-like abstraction, ipa_clock_get()/ipa_clock_put().

  - "ipa_interrupt.c" and "ipa_interrupt.h" implement IPA interrupts.
    There are two hardware IRQs used by the IPA driver; this patch
    covers the IPA interrupt (the other is the GSI interrupt, described
    in a separate patch).  Several types of interrupt are handled by
    the IPA IRQ handler; these are not part of the data/fast path.

  - The IPA has a region of local memory that is accessible by the AP
    (and modem).  Within that region are areas with certain defined
    purposes.  "ipa_mem.c" and "ipa_mem.h" define those regions, and
    implement their initialization.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_clock.c     | 297 ++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_clock.h     |  52 ++++++
 drivers/net/ipa/ipa_interrupt.c | 279 ++++++++++++++++++++++++++
 drivers/net/ipa/ipa_interrupt.h |  53 ++++++
 drivers/net/ipa/ipa_mem.c       | 234 +++++++++++++++++++++
 drivers/net/ipa/ipa_mem.h       |  83 +++++++++
 6 files changed, 998 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_clock.c
 create mode 100644 drivers/net/ipa/ipa_clock.h
 create mode 100644 drivers/net/ipa/ipa_interrupt.c
 create mode 100644 drivers/net/ipa/ipa_interrupt.h
 create mode 100644 drivers/net/ipa/ipa_mem.c
 create mode 100644 drivers/net/ipa/ipa_mem.h

-- 
2.20.1

diff --git a/drivers/net/ipa/ipa_clock.c b/drivers/net/ipa/ipa_clock.c
new file mode 100644
index 000000000000..9ed12e8183ad
--- /dev/null
+++ b/drivers/net/ipa/ipa_clock.c
@@ -0,0 +1,297 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */ + +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_clock.h" +#include "ipa_netdev.h" + +/** + * DOC: IPA Clocking + * + * The "IPA Clock" manages both the IPA core clock and the interconnects + * (buses) the IPA depends on as a single logical entity. A reference count + * is incremented by "get" operations and decremented by "put" operations. + * Transitions of that count from 0 to 1 result in the clock and interconnects + * being enabled, and transitions of the count from 1 to 0 cause them to be + * disabled. We currently operate the core clock at a fixed clock rate, and + * all buses at a fixed average and peak bandwidth. As more advanced IPA + * features are enabled, we can will better use of clock and bus scaling. + * + * An IPA clock reference must be held for any access to IPA hardware. + */ + +#define IPA_CORE_CLOCK_RATE (75UL * 1000 * 1000) /* Hz */ + +/* Interconnect path bandwidths (each times 1000 bytes per second) */ +#define IPA_MEMORY_AVG (80 * 1000) /* 80 MBps */ +#define IPA_MEMORY_PEAK (600 * 1000) + +#define IPA_IMEM_AVG (80 * 1000) +#define IPA_IMEM_PEAK (350 * 1000) + +#define IPA_CONFIG_AVG (40 * 1000) +#define IPA_CONFIG_PEAK (40 * 1000) + +/** + * struct ipa_clock - IPA clocking information + * @core: IPA core clock + * @memory_path: Memory interconnect + * @imem_path: Internal memory interconnect + * @config_path: Configuration space interconnect + * @mutex; Protects clock enable/disable + * @count: Clocking reference count + */ +struct ipa_clock { + struct ipa *ipa; + atomic_t count; + struct mutex mutex; /* protects clock enable/disable */ + struct clk *core; + struct icc_path *memory_path; + struct icc_path *imem_path; + struct icc_path *config_path; +}; + +/* Initialize interconnects required for IPA operation */ +static int ipa_interconnect_init(struct ipa_clock *clock, struct device *dev) +{ + struct icc_path *path; + + path = of_icc_get(dev, "memory"); + if (IS_ERR(path)) + goto err_return; + clock->memory_path = path; + + path = of_icc_get(dev, "imem"); + if (IS_ERR(path)) + goto err_memory_path_put; + clock->imem_path = path; + + path = of_icc_get(dev, "config"); + if (IS_ERR(path)) + goto err_imem_path_put; + clock->config_path = path; + + return 0; + +err_imem_path_put: + icc_put(clock->imem_path); +err_memory_path_put: + icc_put(clock->memory_path); +err_return: + + return PTR_ERR(path); +} + +/* Inverse of ipa_interconnect_init() */ +static void ipa_interconnect_exit(struct ipa_clock *clock) +{ + icc_put(clock->config_path); + icc_put(clock->imem_path); + icc_put(clock->memory_path); +} + +/* Currently we only use one bandwidth level, so just "enable" interconnects */ +static int ipa_interconnect_enable(struct ipa_clock *clock) +{ + int ret; + + ret = icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK); + if (ret) + return ret; + + ret = icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK); + if (ret) + goto err_disable_memory_path; + + ret = icc_set_bw(clock->config_path, IPA_CONFIG_AVG, IPA_CONFIG_PEAK); + if (ret) + goto err_disable_imem_path; + + return 0; + +err_disable_imem_path: + (void)icc_set_bw(clock->imem_path, 0, 0); +err_disable_memory_path: + (void)icc_set_bw(clock->memory_path, 0, 0); + + return ret; +} + +/* To disable an interconnect, we just its bandwidth to 0 */ +static int ipa_interconnect_disable(struct ipa_clock *clock) +{ + int ret; + + ret = icc_set_bw(clock->memory_path, 0, 0); + if (ret) + return ret; + + ret = icc_set_bw(clock->imem_path, 0, 0); + if (ret) + goto 
+
+/* Turn on IPA clocks, including interconnects */
+static int ipa_clock_enable(struct ipa_clock *clock)
+{
+	int ret;
+
+	ret = ipa_interconnect_enable(clock);
+	if (ret)
+		return ret;
+
+	ret = clk_prepare_enable(clock->core);
+	if (ret)
+		ipa_interconnect_disable(clock);
+
+	return ret;
+}
+
+/* Inverse of ipa_clock_enable() */
+static void ipa_clock_disable(struct ipa_clock *clock)
+{
+	clk_disable_unprepare(clock->core);
+	(void)ipa_interconnect_disable(clock);
+}
+
+/* Get an IPA clock reference, but only if the reference count is
+ * already non-zero.  Returns true if the additional reference was
+ * added successfully, or false otherwise.
+ */
+bool ipa_clock_get_additional(struct ipa_clock *clock)
+{
+	return !!atomic_inc_not_zero(&clock->count);
+}
+
+/* Get an IPA clock reference.  If the reference count is non-zero, it is
+ * incremented and return is immediate.  Otherwise the mutex is taken and
+ * the count is checked again; if it is still zero, the clocks are enabled
+ * and the RX endpoints resumed before returning.  For the first reference,
+ * the count is intentionally not incremented until after these activities
+ * are complete.
+ */
+void ipa_clock_get(struct ipa_clock *clock)
+{
+	/* If the clock is running, just bump the reference count */
+	if (ipa_clock_get_additional(clock))
+		return;
+
+	/* Otherwise get the mutex and check again */
+	mutex_lock(&clock->mutex);
+
+	/* A reference might have been added before we got the mutex. */
+	if (!ipa_clock_get_additional(clock)) {
+		int ret;
+
+		ret = ipa_clock_enable(clock);
+		if (!WARN(ret, "error %d enabling IPA clock\n", ret)) {
+			struct ipa *ipa = clock->ipa;
+
+			if (ipa->command_endpoint)
+				ipa_endpoint_resume(ipa->command_endpoint);
+
+			if (ipa->default_endpoint)
+				ipa_endpoint_resume(ipa->default_endpoint);
+
+			if (ipa->modem_netdev)
+				ipa_netdev_resume(ipa->modem_netdev);
+
+			atomic_inc(&clock->count);
+		}
+	}
+
+	mutex_unlock(&clock->mutex);
+}
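Because ipa_clock_get() can block to power up the hardware, a caller in
atomic context would instead rely on ipa_clock_get_additional() and defer to
process context when no reference is already held.  A minimal sketch of that
pattern, assuming a hypothetical work item; none of this is part of the
patch itself:

	/* Illustrative fast-path use, e.g. from a hard IRQ handler */
	if (!ipa_clock_get_additional(ipa->clock)) {
		queue_work(system_wq, &ipa->clock_work);	/* assumed work item */
		return;
	}
	/* ...access hardware... */
	ipa_clock_put(ipa->clock);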
+
+/* Attempt to remove an IPA clock reference.  If this represents
+ * the last reference, suspend endpoints and disable the clock
+ * (and interconnects) under protection of a mutex.
+ */
+void ipa_clock_put(struct ipa_clock *clock)
+{
+	/* If this is not the last reference there's nothing more to do */
+	if (!atomic_dec_and_mutex_lock(&clock->count, &clock->mutex))
+		return;
+
+	if (clock->ipa->modem_netdev)
+		ipa_netdev_suspend(clock->ipa->modem_netdev);
+
+	if (clock->ipa->default_endpoint)
+		ipa_endpoint_suspend(clock->ipa->default_endpoint);
+
+	if (clock->ipa->command_endpoint)
+		ipa_endpoint_suspend(clock->ipa->command_endpoint);
+
+	ipa_clock_disable(clock);
+
+	mutex_unlock(&clock->mutex);
+}
+
+/* Initialize IPA clocking */
+struct ipa_clock *ipa_clock_init(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	struct ipa_clock *clock;
+	int ret;
+
+	clock = kzalloc(sizeof(*clock), GFP_KERNEL);
+	if (!clock)
+		return ERR_PTR(-ENOMEM);
+
+	clock->ipa = ipa;
+	clock->core = clk_get(dev, "core");
+	if (IS_ERR(clock->core)) {
+		ret = PTR_ERR(clock->core);
+		goto err_free_clock;
+	}
+
+	ret = clk_set_rate(clock->core, IPA_CORE_CLOCK_RATE);
+	if (ret)
+		goto err_clk_put;
+
+	ret = ipa_interconnect_init(clock, dev);
+	if (ret)
+		goto err_clk_put;
+
+	mutex_init(&clock->mutex);
+	atomic_set(&clock->count, 0);
+
+	return clock;
+
+err_clk_put:
+	clk_put(clock->core);
+err_free_clock:
+	kfree(clock);
+
+	return ERR_PTR(ret);
+}
+
+/* Inverse of ipa_clock_init() */
+void ipa_clock_exit(struct ipa_clock *clock)
+{
+	mutex_destroy(&clock->mutex);
+	ipa_interconnect_exit(clock);
+	clk_put(clock->core);
+	kfree(clock);
+}
diff --git a/drivers/net/ipa/ipa_clock.h b/drivers/net/ipa/ipa_clock.h
new file mode 100644
index 000000000000..f38c3face29a
--- /dev/null
+++ b/drivers/net/ipa/ipa_clock.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_CLOCK_H_
+#define _IPA_CLOCK_H_
+
+struct ipa;
+struct ipa_clock;
+
+/**
+ * ipa_clock_init() - Initialize IPA clocking
+ * @ipa:	IPA pointer
+ *
+ * @Return:	A pointer to an ipa_clock structure, or a pointer-coded error
+ */
+struct ipa_clock *ipa_clock_init(struct ipa *ipa);
+
+/**
+ * ipa_clock_exit() - Inverse of ipa_clock_init()
+ * @clock:	IPA clock pointer
+ */
+void ipa_clock_exit(struct ipa_clock *clock);
+
+/**
+ * ipa_clock_get() - Get an IPA clock reference
+ * @clock:	IPA clock pointer
+ *
+ * This call blocks if this is the first reference.
+ */
+void ipa_clock_get(struct ipa_clock *clock);
+
+/**
+ * ipa_clock_get_additional() - Get an IPA clock reference if not first
+ * @clock:	IPA clock pointer
+ *
+ * This returns immediately, and takes a reference only if it would not
+ * be the first one.
+ */
+bool ipa_clock_get_additional(struct ipa_clock *clock);
+
+/**
+ * ipa_clock_put() - Drop an IPA clock reference
+ * @clock:	IPA clock pointer
+ *
+ * This drops a clock reference.  If the last reference is being dropped,
+ * the clock is stopped and RX endpoints are suspended.  This call will
+ * not block unless the last reference is dropped.
+ */
+void ipa_clock_put(struct ipa_clock *clock);
+
+#endif /* _IPA_CLOCK_H_ */
diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
new file mode 100644
index 000000000000..5be6b3c762ed
--- /dev/null
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -0,0 +1,279 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+
+/* DOC: IPA Interrupts
+ *
+ * The IPA has an interrupt line distinct from the interrupt used by the
+ * GSI code.
+ * Whereas GSI interrupts are generally related to channel events (like
+ * transfer completions), IPA interrupts signal other events involving
+ * the IPA.  Some of the IPA interrupts come from a microcontroller
+ * embedded in the IPA.  Each IPA interrupt type can be both masked and
+ * acknowledged independently of the others.
+ *
+ * Two of the IPA interrupts are initiated by the microcontroller.  A third
+ * can be generated to signal the need for a wakeup/resume when an IPA
+ * endpoint has been suspended.  There are other IPA events defined, but at
+ * this time only these three are supported.
+ */
+
+#include
+#include
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_reg.h"
+#include "ipa_endpoint.h"
+#include "ipa_interrupt.h"
+
+/* Maximum number of bits in an IPA interrupt mask */
+#define IPA_INTERRUPT_MAX	(sizeof(u32) * BITS_PER_BYTE)
+
+struct ipa_interrupt_info {
+	ipa_irq_handler_t handler;
+	enum ipa_interrupt_id interrupt_id;
+};
+
+/**
+ * struct ipa_interrupt - IPA interrupt information
+ * @ipa:	IPA pointer
+ * @irq:	Linux IRQ number used for IPA interrupts
+ * @enabled:	Mask of enabled IPA interrupts
+ * @info:	Information for each IPA interrupt type
+ */
+struct ipa_interrupt {
+	struct ipa *ipa;
+	u32 irq;
+	u32 enabled;
+	struct ipa_interrupt_info info[IPA_INTERRUPT_MAX];
+};
+
+/* Map a logical interrupt number to a hardware IPA IRQ number */
+static const u32 ipa_interrupt_mapping[] = {
+	[IPA_INTERRUPT_UC_0]		= 2,
+	[IPA_INTERRUPT_UC_1]		= 3,
+	[IPA_INTERRUPT_TX_SUSPEND]	= 14,
+};
+
+static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 ipa_irq)
+{
+	return ipa_irq == ipa_interrupt_mapping[IPA_INTERRUPT_UC_0] ||
+	       ipa_irq == ipa_interrupt_mapping[IPA_INTERRUPT_UC_1];
+}
+
+static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 ipa_irq)
+{
+	struct ipa_interrupt_info *info = &interrupt->info[ipa_irq];
+	bool uc_irq = ipa_interrupt_uc(interrupt, ipa_irq);
+	struct ipa *ipa = interrupt->ipa;
+	u32 mask = BIT(ipa_irq);
+
+	/* For microcontroller interrupts, clear the interrupt right away,
+	 * "to avoid clearing unhandled interrupts."
+	 */
+	if (uc_irq)
+		iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+
+	if (info->handler)
+		info->handler(interrupt->ipa, info->interrupt_id);
+
+	/* Clearing the TX_SUSPEND interrupt also clears the register
+	 * that tells us which suspended endpoint(s) caused the interrupt,
+	 * so defer clearing until after the handler's been called.
+	 */
+	if (!uc_irq)
+		iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+}
+
+static void ipa_interrupt_process_all(struct ipa_interrupt *interrupt)
+{
+	struct ipa *ipa = interrupt->ipa;
+	u32 enabled = interrupt->enabled;
+	u32 mask;
+
+	/* The status register indicates which conditions are present,
+	 * including conditions whose interrupt is not enabled.  Handle
+	 * only the enabled ones.
+	 */
+	mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	while ((mask &= enabled)) {
+		do {
+			u32 ipa_irq = __ffs(mask);
+
+			mask ^= BIT(ipa_irq);
+
+			ipa_interrupt_process(interrupt, ipa_irq);
+		} while (mask);
+		mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	}
+}
+
+/* Threaded part of the IRQ handler */
+static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
+{
+	struct ipa_interrupt *interrupt = dev_id;
+
+	ipa_clock_get(interrupt->ipa->clock);
+
+	ipa_interrupt_process_all(interrupt);
+
+	ipa_clock_put(interrupt->ipa->clock);
+
+	return IRQ_HANDLED;
+}
+
+/* Hard part of the IRQ handler */
+static irqreturn_t ipa_isr(int irq, void *dev_id)
+{
+	struct ipa_interrupt *interrupt = dev_id;
+	struct ipa *ipa = interrupt->ipa;
+	u32 mask;
+
+	mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	if (mask & interrupt->enabled)
+		return IRQ_WAKE_THREAD;
+
+	/* Nothing in the mask was supposed to cause an interrupt */
+	iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+
+	dev_err(&ipa->pdev->dev, "%s: unexpected interrupt, mask 0x%08x\n",
+		__func__, mask);
+
+	return IRQ_HANDLED;
+}
+
+static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
+					  enum ipa_endpoint_id endpoint_id,
+					  bool enable)
+{
+	u32 offset = IPA_REG_SUSPEND_IRQ_EN_OFFSET;
+	u32 mask = BIT(endpoint_id);
+	u32 val;
+
+	val = ioread32(interrupt->ipa->reg_virt + offset);
+	if (enable)
+		val |= mask;
+	else
+		val &= ~mask;
+	iowrite32(val, interrupt->ipa->reg_virt + offset);
+}
+
+void ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt,
+				  enum ipa_endpoint_id endpoint_id)
+{
+	ipa_interrupt_suspend_control(interrupt, endpoint_id, true);
+}
+
+void ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt,
+				   enum ipa_endpoint_id endpoint_id)
+{
+	ipa_interrupt_suspend_control(interrupt, endpoint_id, false);
+}
+
+/* Clear the suspend interrupt for all endpoints that signaled it */
+void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt)
+{
+	struct ipa *ipa = interrupt->ipa;
+	u32 val;
+
+	val = ioread32(ipa->reg_virt + IPA_REG_IRQ_SUSPEND_INFO_OFFSET);
+	iowrite32(val, ipa->reg_virt + IPA_REG_SUSPEND_IRQ_CLR_OFFSET);
+}
+
+/**
+ * ipa_interrupt_simulate_suspend() - Simulate an IPA TX_SUSPEND interrupt
+ * @interrupt:	IPA interrupt structure
+ *
+ * This is needed to work around a problem that occurs if aggregation
+ * is active on an endpoint when its underlying channel is suspended.
+ */
+void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[IPA_INTERRUPT_TX_SUSPEND];
+
+	ipa_interrupt_process(interrupt, ipa_irq);
+}
+
+/**
+ * ipa_interrupt_add() - Register a handler for an IPA interrupt type
+ * @interrupt:		IPA interrupt structure
+ * @interrupt_id:	IPA interrupt type
+ * @handler:		The handler for that interrupt
+ *
+ * Adds a handler for an IPA interrupt and enables it.  IPA interrupt
+ * handlers are run in threaded interrupt context, so they are allowed
+ * to block.
+ */
+void ipa_interrupt_add(struct ipa_interrupt *interrupt,
+		       enum ipa_interrupt_id interrupt_id,
+		       ipa_irq_handler_t handler)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[interrupt_id];
+	struct ipa *ipa = interrupt->ipa;
+
+	interrupt->info[ipa_irq].handler = handler;
+	interrupt->info[ipa_irq].interrupt_id = interrupt_id;
+
+	/* Update the IPA interrupt mask to enable it */
+	interrupt->enabled |= BIT(ipa_irq);
+	iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+}
+
+/**
+ * ipa_interrupt_remove() - Remove the handler for an IPA interrupt type
+ * @interrupt:		IPA interrupt structure
+ * @interrupt_id:	IPA interrupt type
+ *
+ * Removes an IPA interrupt handler and disables it.
+ */
+void ipa_interrupt_remove(struct ipa_interrupt *interrupt,
+			  enum ipa_interrupt_id interrupt_id)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[interrupt_id];
+	struct ipa *ipa = interrupt->ipa;
+
+	/* Update the IPA interrupt mask to disable it */
+	interrupt->enabled &= ~BIT(ipa_irq);
+	iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+
+	interrupt->info[ipa_irq].handler = NULL;
+}
+
+/**
+ * ipa_interrupt_setup() - Set up the IPA interrupt framework
+ * @ipa:	IPA pointer
+ */
+struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa)
+{
+	struct ipa_interrupt *interrupt;
+	unsigned int irq;
+	int ret;
+
+	ret = platform_get_irq_byname(ipa->pdev, "ipa");
+	if (ret < 0)
+		return ERR_PTR(ret);
+	irq = ret;
+
+	interrupt = kzalloc(sizeof(*interrupt), GFP_KERNEL);
+	if (!interrupt)
+		return ERR_PTR(-ENOMEM);
+	interrupt->ipa = ipa;
+	interrupt->irq = irq;
+
+	/* Start with all IPA interrupts disabled */
+	iowrite32(0, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+
+	ret = request_threaded_irq(irq, ipa_isr, ipa_isr_thread, IRQF_ONESHOT,
+				   "ipa", interrupt);
+	if (ret)
+		goto err_free_interrupt;
+
+	return interrupt;
+
+err_free_interrupt:
+	kfree(interrupt);
+
+	return ERR_PTR(ret);
+}
+
+void ipa_interrupt_teardown(struct ipa_interrupt *interrupt)
+{
+	free_irq(interrupt->irq, interrupt);
+}
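To illustrate how the interface above is meant to be used, here is a minimal
sketch of registering and removing a handler for one of the microcontroller
interrupts.  The handler body and the location of the ipa_interrupt pointer
are assumptions for illustration only, not part of the patch:

	/* Hypothetical client of the IPA interrupt framework */
	static void ipa_uc_event_0(struct ipa *ipa,
				   enum ipa_interrupt_id interrupt_id)
	{
		dev_dbg(&ipa->pdev->dev, "microcontroller event 0\n");
	}

	ipa_interrupt_add(ipa->interrupt, IPA_INTERRUPT_UC_0, ipa_uc_event_0);
	/* ... */
	ipa_interrupt_remove(ipa->interrupt, IPA_INTERRUPT_UC_0);

Because handlers run in the threaded part of the handler, they may sleep
(taking the clock mutex, for example).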
diff --git a/drivers/net/ipa/ipa_interrupt.h b/drivers/net/ipa/ipa_interrupt.h
new file mode 100644
index 000000000000..6e452430c156
--- /dev/null
+++ b/drivers/net/ipa/ipa_interrupt.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_INTERRUPT_H_
+#define _IPA_INTERRUPT_H_
+
+#include
+#include
+
+struct ipa;
+struct ipa_interrupt;
+
+/**
+ * enum ipa_interrupt_id - IPA Interrupt Type
+ *
+ * Used to register handlers for IPA interrupts.
+ */
+enum ipa_interrupt_id {
+	IPA_INTERRUPT_UC_0,
+	IPA_INTERRUPT_UC_1,
+	IPA_INTERRUPT_TX_SUSPEND,
+};
+
+/**
+ * typedef ipa_irq_handler_t - IPA interrupt handler/callback type
+ * @ipa:		IPA pointer
+ * @interrupt_id:	Type of the interrupt that occurred
+ *
+ * Callback function registered by ipa_interrupt_add() to handle a specific
+ * interrupt type
+ */
+typedef void (*ipa_irq_handler_t)(struct ipa *ipa,
+				  enum ipa_interrupt_id interrupt_id);
+
+struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa);
+void ipa_interrupt_teardown(struct ipa_interrupt *interrupt);
+
+void ipa_interrupt_add(struct ipa_interrupt *interrupt,
+		       enum ipa_interrupt_id interrupt_id,
+		       ipa_irq_handler_t handler);
+void ipa_interrupt_remove(struct ipa_interrupt *interrupt,
+			  enum ipa_interrupt_id interrupt_id);
+
+void ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt,
+				  enum ipa_endpoint_id endpoint_id);
+void ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt,
+				   enum ipa_endpoint_id endpoint_id);
+void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt);
+void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt);
+
+#endif /* _IPA_INTERRUPT_H_ */
diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
new file mode 100644
index 000000000000..ad7e55aec31f
--- /dev/null
+++ b/drivers/net/ipa/ipa_mem.c
@@ -0,0 +1,234 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "ipa.h"
+#include "ipa_reg.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+
+/* "Canary" value placed between memory regions to detect overflow */
+#define IPA_SMEM_CANARY_VAL	cpu_to_le32(0xdeadbeef)
+
+/* Only used for IPA_SMEM_UC_EVENT_RING */
+static __always_inline void smem_set_canary(struct ipa *ipa, u32 offset)
+{
+	__le32 *cp = ipa->shared_virt + offset;
+
+	BUILD_BUG_ON(offset < sizeof(*cp));
+
+	*--cp = IPA_SMEM_CANARY_VAL;
+}
+
+static __always_inline void smem_set_canaries(struct ipa *ipa, u32 offset)
+{
+	__le32 *cp = ipa->shared_virt + offset;
+
+	/* IPA accesses memory at 8-byte aligned offsets, 8 bytes at a time */
+	BUILD_BUG_ON(offset % 8);
+	BUILD_BUG_ON(offset < 2 * sizeof(*cp));
+
+	*--cp = IPA_SMEM_CANARY_VAL;
+	*--cp = IPA_SMEM_CANARY_VAL;
+}
+
+/**
+ * ipa_smem_setup() - Set up IPA AP and modem shared memory areas
+ *
+ * Set up the IPA-local memory areas located in shared memory within
+ * the IPA.  This involves zero-filling each area (using DMA) and then
+ * telling the IPA where it's located.  We set up the regions for the
+ * header and processing context structures used by both the modem and
+ * the AP.
+ *
+ * The modem and AP header areas are contiguous, with the modem area
+ * located at the lower address.  The processing context memory areas
+ * for the modem and AP are also contiguous, with the modem at the base
+ * of the combined space.
+ *
+ * The modem portions are also zeroed in ipa_smem_zero_modem(); if the
+ * modem crashes and restarts via SSR these areas need to be
+ * re-initialized.
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int ipa_smem_setup(struct ipa *ipa)
+{
+	u32 offset;
+	u32 size;
+	int ret;
+
+	/* Alignments of some offsets are verified in smem_set_canaries() */
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_OFFSET % 8);
+	BUILD_BUG_ON(IPA_SMEM_MODEM_HDR_SIZE % 8);
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_SIZE % 8);
+
+	/* Initialize IPA-local header memory */
+	offset = IPA_SMEM_MODEM_HDR_OFFSET;
+	size = IPA_SMEM_MODEM_HDR_SIZE + IPA_SMEM_AP_HDR_SIZE;
+	ret = ipa_cmd_hdr_init_local(ipa, offset, size);
+	if (ret)
+		return ret;
+
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_PROC_CTX_OFFSET % 8);
+	BUILD_BUG_ON(IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE % 8);
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_PROC_CTX_SIZE % 8);
+
+	/* Zero the processing context IPA-local memory for the modem and AP */
+	offset = IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET;
+	size = IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE + IPA_SMEM_AP_HDR_PROC_CTX_SIZE;
+	ret = ipa_cmd_smem_dma_zero(ipa, offset, size);
+	if (ret)
+		return ret;
+
+	/* Tell the hardware where the processing context area is located */
+	iowrite32(ipa->shared_offset + offset,
+		  ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET);
+
+	return ret;
+}
+
+void ipa_smem_teardown(struct ipa *ipa)
+{
+	/* Nothing to do */
+}
+
+/**
+ * ipa_smem_config() - Configure IPA shared memory
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int ipa_smem_config(struct ipa *ipa)
+{
+	u32 size;
+	u32 val;
+
+	/* Check the advertised location and size of the shared memory area */
+	val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
+
+	/* The fields in the register are in 8 byte units */
+	ipa->shared_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
+	dev_dbg(&ipa->pdev->dev, "shared memory offset 0x%x bytes\n",
+		ipa->shared_offset);
+	if (WARN_ON(ipa->shared_offset))
+		return -EINVAL;
+
+	/* The code assumes a certain minimum shared memory area size */
+	size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
+	dev_dbg(&ipa->pdev->dev, "shared memory size 0x%x bytes\n", size);
+	if (WARN_ON(size < IPA_SMEM_SIZE))
+		return -EINVAL;
+
+	/* Now write "canary" values before each sub-section. */
+	smem_set_canaries(ipa, IPA_SMEM_V4_FLT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V4_FLT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_FLT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_FLT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V4_RT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V4_RT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_RT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_RT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_MODEM_HDR_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_MODEM_OFFSET);
+
+	/* Only one canary precedes the microcontroller ring */
+	BUILD_BUG_ON(IPA_SMEM_UC_EVENT_RING_OFFSET % 1024);
+	smem_set_canary(ipa, IPA_SMEM_UC_EVENT_RING_OFFSET);
+
+	return 0;
+}
+
+void ipa_smem_deconfig(struct ipa *ipa)
+{
+	/* Don't bother zeroing any of the shared memory on exit */
+}
+
+/**
+ * ipa_smem_zero_modem() - Zero modem IPA-local memory regions
+ *
+ * Zero regions of IPA-local memory used by the modem.  These are
+ * configured (and initially zeroed) by ipa_smem_setup(), but if
+ * the modem crashes and restarts via SSR we need to re-initialize
+ * them.
+ */
+int ipa_smem_zero_modem(struct ipa *ipa)
+{
+	int ret;
+
+	ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_MODEM_OFFSET,
+				    IPA_SMEM_MODEM_SIZE);
+	if (ret)
+		return ret;
+
+	ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_MODEM_HDR_OFFSET,
+				    IPA_SMEM_MODEM_HDR_SIZE);
+	if (ret)
+		return ret;
+
+	ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET,
+				    IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE);
+
+	return ret;
+}
+
+int ipa_mem_init(struct ipa *ipa)
+{
+	struct resource *res;
+	int ret;
+
+	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64));
+	if (ret)
+		return ret;
+
+	/* Set up IPA shared memory */
+	res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM,
+					   "ipa-shared");
+	if (!res)
+		return -ENODEV;
+
+	/* The code assumes a certain minimum shared memory area size */
+	if (WARN_ON(resource_size(res) < IPA_SMEM_SIZE))
+		return -EINVAL;
+
+	ipa->shared_virt = memremap(res->start, resource_size(res),
+				    MEMREMAP_WC);
+	if (!ipa->shared_virt)
+		return -ENOMEM;
+	ipa->shared_phys = res->start;
+
+	/* Set up IPA register memory */
+	res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM,
+					   "ipa-reg");
+	if (!res) {
+		ret = -ENODEV;
+		goto err_unmap_shared;
+	}
+
+	ipa->reg_virt = ioremap(res->start, resource_size(res));
+	if (!ipa->reg_virt) {
+		ret = -ENOMEM;
+		goto err_unmap_shared;
+	}
+	ipa->reg_phys = res->start;
+
+	return 0;
+
+err_unmap_shared:
+	memunmap(ipa->shared_virt);
+
+	return ret;
+}
+
+void ipa_mem_exit(struct ipa *ipa)
+{
+	iounmap(ipa->reg_virt);
+	memunmap(ipa->shared_virt);
+}
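The canary scheme is easiest to see with concrete numbers.  Using the region
offsets defined in "ipa_mem.h" below, the first filter table lives at offset
0x0288, so its two canary words occupy the eight bytes just below it.  This
sketch (illustrative only, not part of the patch) spells out what
smem_set_canaries() does for that region:

	/* What smem_set_canaries(ipa, IPA_SMEM_V4_FLT_HASH_OFFSET) writes */
	__le32 *cp = ipa->shared_virt + 0x0288;	/* IPA_SMEM_V4_FLT_HASH_OFFSET */

	*--cp = IPA_SMEM_CANARY_VAL;	/* canary at offset 0x0284 */
	*--cp = IPA_SMEM_CANARY_VAL;	/* canary at offset 0x0280 */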
diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
new file mode 100644
index 000000000000..179b62c958ed
--- /dev/null
+++ b/drivers/net/ipa/ipa_mem.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _IPA_MEM_H_
+#define _IPA_MEM_H_
+
+struct ipa;
+
+/**
+ * DOC: IPA Local Memory
+ *
+ * The IPA has a block of shared memory, divided into regions used for
+ * specific purposes.  The offset within the IPA address space of this shared
+ * memory block is defined by the IPA_SMEM_DIRECT_ACCESS_OFFSET register.
+ *
+ * The regions within the shared block are bounded by an offset and size found
+ * in the IPA_SHARED_MEM_SIZE register.  The first 128 bytes of the shared
+ * memory block are shared with the microcontroller, and the first 40 bytes of
+ * that contain a structure used to communicate between the microcontroller
+ * and the AP.
+ *
+ * There is a set of filter and routing tables, and each is given a 128 byte
+ * region in shared memory.  Each entry in a filter or route table is
+ * IPA_TABLE_ENTRY_SIZE, or 8 bytes.  The first "slot" of every table is
+ * filled with a "canary" value, and the table offsets defined below represent
+ * the location of the first real entry in each table after this.
+ *
+ * The number of filter table entries depends on the number of endpoints that
+ * support filtering.  The first non-canary slot of a filter table contains a
+ * bitmap, with each set bit indicating an endpoint containing an entry in the
+ * table.  Bit 0 is used to represent a global filter.
+ *
+ * About half of the routing table entries are reserved for modem use.
+ */
+
+/* The maximum number of filter table entries (IPv4, IPv6; hashed and not) */
+#define IPA_SMEM_FLT_COUNT			14
+
+/* The number of routing table entries (IPv4, IPv6; hashed and not) */
+#define IPA_SMEM_RT_COUNT			15
+
+/* Which routing table entries are for the modem */
+#define IPA_SMEM_MODEM_RT_COUNT			8
+#define IPA_SMEM_MODEM_RT_INDEX_MIN		0
+#define IPA_SMEM_MODEM_RT_INDEX_MAX \
+		(IPA_SMEM_MODEM_RT_INDEX_MIN + IPA_SMEM_MODEM_RT_COUNT - 1)
+
+/* Regions within the shared memory block.  Table sizes are 0x80 bytes. */
+#define IPA_SMEM_V4_FLT_HASH_OFFSET		0x0288
+#define IPA_SMEM_V4_FLT_NHASH_OFFSET		0x0308
+#define IPA_SMEM_V6_FLT_HASH_OFFSET		0x0388
+#define IPA_SMEM_V6_FLT_NHASH_OFFSET		0x0408
+#define IPA_SMEM_V4_RT_HASH_OFFSET		0x0488
+#define IPA_SMEM_V4_RT_NHASH_OFFSET		0x0508
+#define IPA_SMEM_V6_RT_HASH_OFFSET		0x0588
+#define IPA_SMEM_V6_RT_NHASH_OFFSET		0x0608
+#define IPA_SMEM_MODEM_HDR_OFFSET		0x0688
+#define IPA_SMEM_MODEM_HDR_SIZE			0x0140
+#define IPA_SMEM_AP_HDR_OFFSET			0x07c8
+#define IPA_SMEM_AP_HDR_SIZE			0x0000
+#define IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET	0x07d0
+#define IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE	0x0200
+#define IPA_SMEM_AP_HDR_PROC_CTX_OFFSET		0x09d0
+#define IPA_SMEM_AP_HDR_PROC_CTX_SIZE		0x0200
+#define IPA_SMEM_MODEM_OFFSET			0x0bd8
+#define IPA_SMEM_MODEM_SIZE			0x1024
+#define IPA_SMEM_UC_EVENT_RING_OFFSET		0x1c00	/* v3.5 and later */
+#define IPA_SMEM_SIZE				0x2000
+
+int ipa_smem_config(struct ipa *ipa);
+void ipa_smem_deconfig(struct ipa *ipa);
+
+int ipa_smem_setup(struct ipa *ipa);
+void ipa_smem_teardown(struct ipa *ipa);
+
+int ipa_smem_zero_modem(struct ipa *ipa);
+
+int ipa_mem_init(struct ipa *ipa);
+void ipa_mem_exit(struct ipa *ipa);
+
+#endif /* _IPA_MEM_H_ */
From patchwork Fri May 31 03:53:39 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 165499
Delivered-To: patch@linaro.org
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Cc: evgreen@chromium.org, benchan@google.com, ejcaruso@google.com, cpratapa@codeaurora.org, syadagir@codeaurora.org, subashab@codeaurora.org, abhishek.esse@gmail.com, netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, linux-soc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org
Subject: [PATCH v2 08/17] soc: qcom: ipa: GSI transactions
Date: Thu, 30 May 2019 22:53:39 -0500
Message-Id: <20190531035348.7194-9-elder@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>
MIME-Version: 1.0
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

This patch implements GSI transactions.  A GSI transaction is a
structure that represents a single request (consisting of one or
more TREs) sent to the GSI hardware.  The last TRE in a transaction
includes a flag requesting that the GSI interrupt the AP to notify
it that the request has completed.

TREs are executed and completed strictly in order.  For this reason,
the completion of a single TRE implies that all previous TREs (in
particular all of those "earlier" in a transaction) have completed.

Whenever there is a need to send a request (a set of TREs) to the
IPA, a GSI transaction is allocated, specifying the number of TREs
that will be required.  Details of the request (e.g. transfer offsets
and length) are represented in a Linux scatterlist array that is
incorporated in the transaction structure.

Once "filled," the transaction is committed.  The GSI transaction
layer performs all needed mapping (and unmapping) for DMA, and
issues the request to the hardware.  When the hardware signals that
the request has completed, a callback function allows for cleanup
or followup activity to be performed before the transaction is
freed.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/gsi_trans.c | 624 ++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/gsi_trans.h | 116 +++++++
 2 files changed, 740 insertions(+)
 create mode 100644 drivers/net/ipa/gsi_trans.c
 create mode 100644 drivers/net/ipa/gsi_trans.h

-- 
2.20.1
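The lifecycle described above is short enough to sketch end to end.  The
following is illustrative only (the buffer, length, and channel variables
are assumed, and error handling is abbreviated); it is not code from the
patch:

	/* Allocate a one-TRE transaction, describe the buffer, commit it */
	struct gsi_trans *trans;

	trans = gsi_channel_trans_alloc(gsi, channel_id, 1);
	if (!trans)
		return -EBUSY;		/* all TREs currently reserved */

	sg_init_one(&trans->sgl[0], buf, len);

	ret = gsi_trans_commit(trans, true);	/* maps for DMA, rings doorbell */
	if (ret)
		gsi_trans_free(trans);	/* commit failed; we still own it */

On success, ownership passes to the transaction core, and completion is
reported through the layer's callback (ipa_gsi_trans_complete()) before the
transaction is freed.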
diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c
new file mode 100644
index 000000000000..267e33093554
--- /dev/null
+++ b/drivers/net/ipa/gsi_trans.c
@@ -0,0 +1,624 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "gsi.h"
+#include "gsi_private.h"
+#include "gsi_trans.h"
+#include "ipa_gsi.h"
+#include "ipa_data.h"
+#include "ipa_cmd.h"
+
+/**
+ * DOC: GSI Transactions
+ *
+ * A GSI transaction abstracts the behavior of a GSI channel by representing
+ * everything about a related group of data transfers in a single structure.
+ * Most details of interaction with the GSI hardware are managed by the GSI
+ * transaction core, allowing users to simply describe transfers to be
+ * performed.  When a transaction has completed, a callback function
+ * (dependent on the type of endpoint associated with the channel) allows
+ * cleanup of resources associated with the transaction.
+ *
+ * To perform a data transfer (or a related set of them), a user of the GSI
+ * transaction interface allocates a transaction, indicating the number of
+ * TREs required (one per data transfer).  If sufficient TREs are available,
+ * they are reserved for use in the transaction and the allocation succeeds.
+ * This way exhaustion of the available TREs in a channel ring is detected
+ * as early as possible.  All resources required to complete a transaction
+ * are allocated at transaction allocation time.
+ *
+ * Transfers performed as part of a transaction are represented in an array
+ * of Linux scatterlist structures.  This array is allocated with the
+ * transaction, and its entries must be initialized using standard
+ * scatterlist functions (such as sg_init_one() or skb_to_sgvec()).
+ *
+ * Once a transaction's scatterlist structures have been initialized, the
+ * transaction is committed.  The GSI transaction layer is responsible for
+ * DMA mapping (and unmapping) memory described in the transaction's
+ * scatterlist array.  The only way committing a transaction fails is if
+ * this DMA mapping step returns an error.  Otherwise, ownership of the
+ * entire transaction is transferred to the GSI transaction core.  The GSI
+ * transaction code formats the content of the scatterlist array into the
+ * channel ring buffer and informs the hardware that new TREs are available
+ * to process.
+ *
+ * The last TRE in each transaction is marked to interrupt the AP when the
+ * GSI hardware has completed it.  Because transfers described by TREs are
+ * performed strictly in order, signaling the completion of just the last
+ * TRE in the transaction is sufficient to indicate the full transaction
+ * is complete.
+ *
+ * When a transaction is complete, ipa_gsi_trans_complete() is called by the
+ * GSI code into the IPA layer, allowing it to perform any final cleanup
+ * required before the transaction is freed.
+ */ + +/* gsi_tre->flags mask values (in CPU byte order) */ +#define GSI_TRE_FLAGS_CHAIN_FMASK GENMASK(0, 0) +#define GSI_TRE_FLAGS_IEOB_FMASK GENMASK(8, 8) +#define GSI_TRE_FLAGS_IEOT_FMASK GENMASK(9, 9) +#define GSI_TRE_FLAGS_BEI_FMASK GENMASK(10, 10) +#define GSI_TRE_FLAGS_TYPE_FMASK GENMASK(23, 16) + +/* Hardware values representing a transfer element type */ +enum gsi_tre_type { + GSI_RE_XFER = 0x2, + GSI_RE_IMMD_CMD = 0x3, + GSI_RE_NOP = 0x4, +}; + +/* Map a given ring entry index to the transaction associated with it */ +static void gsi_channel_trans_map(struct gsi_channel *channel, u32 index, + struct gsi_trans *trans) +{ + /* Note: index *must* be used modulo the ring count here */ + channel->trans_info.map[index % channel->tre_ring.count] = trans; +} + +/* Return the transaction mapped to a given ring entry */ +struct gsi_trans * +gsi_channel_trans_mapped(struct gsi_channel *channel, u32 index) +{ + /* Note: index *must* be used modulo the ring count here */ + return channel->trans_info.map[index % channel->tre_ring.count]; +} + +/* Return the oldest completed transaction for a channel (or null) */ +struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel) +{ + return list_first_entry_or_null(&channel->trans_info.complete, + struct gsi_trans, links); +} + +/* Move a transaction from the allocated list to the pending list */ +static void gsi_trans_move_pending(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + + spin_lock_bh(&trans_info->spinlock); + + list_move_tail(&trans->links, &trans_info->pending); + + spin_unlock_bh(&trans_info->spinlock); +} + +/* Move a transaction and all of its predecessors from the pending list + * to the completed list. + */ +void gsi_trans_move_complete(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + struct list_head list; + + spin_lock_bh(&trans_info->spinlock); + + /* Move this transaction and all predecessors to completed list */ + list_cut_position(&list, &trans_info->pending, &trans->links); + list_splice_tail(&list, &trans_info->complete); + + spin_unlock_bh(&trans_info->spinlock); +} + +/* Move a transaction from the completed list to the polled list */ +void gsi_trans_move_polled(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + + spin_lock_bh(&trans_info->spinlock); + + list_move_tail(&trans->links, &trans_info->polled); + + spin_unlock_bh(&trans_info->spinlock); +} + +/* Return the last (most recent) transaction allocated on a channel */ +struct gsi_trans *gsi_channel_trans_last(struct gsi *gsi, u32 channel_id) +{ + struct gsi_trans_info *trans_info; + struct gsi_trans *trans; + struct list_head *list; + + trans_info = &gsi->channel[channel_id].trans_info; + + spin_lock_bh(&trans_info->spinlock); + + /* Find the last list to which a transaction was added */ + if (!list_empty(&trans_info->alloc)) + list = &trans_info->alloc; + else if (!list_empty(&trans_info->pending)) + list = &trans_info->pending; + else if (!list_empty(&trans_info->complete)) + list = &trans_info->complete; + else if (!list_empty(&trans_info->polled)) + list = &trans_info->polled; + else + list = NULL; + + if (list) { + /* The last entry on this list is the last one allocated. 
+ * Grab a reference so it can be waited for. + */ + trans = list_last_entry(list, struct gsi_trans, links); + refcount_inc(&trans->refcount); + } else { + trans = NULL; + } + + spin_unlock_bh(&trans_info->spinlock); + + return trans; +} + +/* Reserve some number of TREs on a channel. Returns true if successful */ +static bool +gsi_trans_tre_reserve(struct gsi_trans_info *trans_info, u32 tre_count) +{ + int avail = atomic_read(&trans_info->tre_avail); + int new; + + do { + new = avail - (int)tre_count; + if (unlikely(new < 0)) + return false; + } while (!atomic_try_cmpxchg(&trans_info->tre_avail, &avail, new)); + + return true; +} + +/* Release previously-reserved TRE entries to a channel */ +static void +gsi_trans_tre_release(struct gsi_trans_info *trans_info, u32 tre_count) +{ + atomic_add(tre_count, &trans_info->tre_avail); +} + +/* Allocate a GSI transaction on a channel */ +struct gsi_trans * +gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id, u32 tre_count) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + struct gsi_trans_info *trans_info; + struct gsi_trans *trans; + u32 which; + + /* Caller should know the limit is gsi_channel_trans_max() */ + if (WARN_ON(tre_count > channel->data->tlv_count)) + return NULL; + + trans_info = &channel->trans_info; + + /* We reserve the TREs now, but consume them at commit time. + * If there aren't enough available, we're done. + */ + if (!gsi_trans_tre_reserve(trans_info, tre_count)) + return NULL; + + /* Allocate the transaction and initialize it */ + which = trans_info->pool_free++ % trans_info->pool_count; + trans = &trans_info->pool[which]; + + trans->gsi = gsi; + trans->channel_id = channel_id; + refcount_set(&trans->refcount, 1); + trans->tre_count = tre_count; + init_completion(&trans->completion); + + /* We're reusing, so make sure all fields are reinitialized */ + trans->dev = gsi->dev; + trans->result = 0; /* Success assumed unless overwritten */ + trans->data = NULL; + + /* Allocate the scatter/gather entries it will use. If what's + * needed would cross the end-of-pool boundary, allocate them + * from the beginning of the pool. + */ + if (tre_count > trans_info->sg_pool_count - trans_info->sg_pool_free) + trans_info->sg_pool_free = 0; + trans->sgl = &trans_info->sg_pool[trans_info->sg_pool_free]; + trans->sgc = tre_count; + trans_info->sg_pool_free += tre_count; + + spin_lock_bh(&trans_info->spinlock); + + list_add_tail(&trans->links, &trans_info->alloc); + + spin_unlock_bh(&trans_info->spinlock); + + return trans; +} + +/* Free a previously-allocated transaction (used only in case of error) */ +void gsi_trans_free(struct gsi_trans *trans) +{ + struct gsi_trans_info *trans_info; + + if (!refcount_dec_and_test(&trans->refcount)) + return; + + trans_info = &trans->gsi->channel[trans->channel_id].trans_info; + + spin_lock_bh(&trans_info->spinlock); + + list_del(&trans->links); + + spin_unlock_bh(&trans_info->spinlock); + + gsi_trans_tre_release(trans_info, trans->tre_count); +} + +/* Compute the length/opcode value to use for a TRE */ +static __le16 gsi_tre_len_opcode(enum ipa_cmd_opcode opcode, u32 len) +{ + return opcode == IPA_CMD_NONE ? cpu_to_le16((u16)len) + : cpu_to_le16((u16)opcode); +} + +/* Compute the flags value to use for a given TRE */ +static __le32 gsi_tre_flags(bool last_tre, bool bei, enum ipa_cmd_opcode opcode) +{ + enum gsi_tre_type tre_type; + u32 tre_flags; + + tre_type = opcode == IPA_CMD_NONE ? 
GSI_RE_XFER : GSI_RE_IMMD_CMD;
+	tre_flags = u32_encode_bits(tre_type, GSI_TRE_FLAGS_TYPE_FMASK);
+
+	/* Last TRE contains interrupt flags */
+	if (last_tre) {
+		/* All transactions end in a transfer completion interrupt */
+		tre_flags |= GSI_TRE_FLAGS_IEOT_FMASK;
+		/* Don't interrupt when outbound commands are acknowledged */
+		if (bei)
+			tre_flags |= GSI_TRE_FLAGS_BEI_FMASK;
+	} else {	/* All others indicate there's more to come */
+		tre_flags |= GSI_TRE_FLAGS_CHAIN_FMASK;
+	}
+
+	return cpu_to_le32(tre_flags);
+}
+
+static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
+			       u32 len, bool last_tre, bool bei,
+			       enum ipa_cmd_opcode opcode)
+{
+	struct gsi_tre tre;
+
+	tre.addr = cpu_to_le64(addr);
+	tre.len_opcode = gsi_tre_len_opcode(opcode, len);
+	tre.reserved = 0;
+	tre.flags = gsi_tre_flags(last_tre, bei, opcode);
+
+	/* ARM64 can write 16 bytes as a unit with a single instruction.
+	 * Doing the assignment this way is an attempt to make that happen.
+	 */
+	*dest_tre = tre;
+}
+
+/* Issue a command to read a single byte from a channel */
+int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct gsi_trans_info *trans_info;
+	struct gsi_ring *tre_ring;
+	struct gsi_tre *dest_tre;
+
+	trans_info = &channel->trans_info;
+
+	/* First reserve the TRE, if possible */
+	if (!gsi_trans_tre_reserve(trans_info, 1))
+		return -EBUSY;
+
+	/* Now allocate the next TRE, fill it, and tell the hardware */
+	tre_ring = &channel->tre_ring;
+
+	dest_tre = gsi_ring_virt(tre_ring, tre_ring->index);
+	gsi_trans_tre_fill(dest_tre, addr, 1, true, false, IPA_CMD_NONE);
+
+	tre_ring->index++;
+	gsi_channel_doorbell(channel);
+
+	return 0;
+}
+
+/* Mark a gsi_trans_read_byte() request done */
+void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	gsi_trans_tre_release(&channel->trans_info, 1);
+}
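A quick worked example of the flag encoding above may help.  For the last
TRE of an outbound transfer (opcode IPA_CMD_NONE, bei true), the type field
carries GSI_RE_XFER (0x2) in bits 16-23, and IEOT (bit 9) and BEI (bit 10)
are set, so the CPU-order value is 0x20000 | 0x200 | 0x400 = 0x20600 before
byte swapping.  A sketch of the same computation, for illustration only:

	u32 val;

	val = u32_encode_bits(GSI_RE_XFER, GSI_TRE_FLAGS_TYPE_FMASK); /* 0x20000 */
	val |= GSI_TRE_FLAGS_IEOT_FMASK;	/* 0x00200 */
	val |= GSI_TRE_FLAGS_BEI_FMASK;		/* 0x00400 */
	/* val == 0x20600; gsi_tre_flags(true, true, IPA_CMD_NONE)
	 * returns cpu_to_le32(val)
	 */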
+
+/**
+ * __gsi_trans_commit() - Common GSI transaction commit code
+ * @trans:	Transaction to commit
+ * @opcode:	Immediate command opcode, or IPA_CMD_NONE
+ * @ring_db:	Whether to tell the hardware about these queued transfers
+ *
+ * @Return:	0 if successful, or a negative error code
+ *
+ * Maps the transaction's scatterlist array for DMA, and returns -ENOMEM
+ * if that fails.  Formats channel ring TRE entries based on the content of
+ * the scatterlist.  Maps a transaction pointer to the last ring entry used
+ * for the transaction, so it can be recovered when it completes.  Moves
+ * the transaction to the pending list.  Finally, updates the channel ring
+ * pointer and optionally rings the doorbell.
+ */
+static int __gsi_trans_commit(struct gsi_trans *trans,
+			      enum ipa_cmd_opcode opcode, bool ring_db)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	struct gsi_ring *tre_ring = &channel->tre_ring;
+	enum dma_data_direction direction;
+	bool bei = channel->toward_ipa;
+	struct gsi_tre *dest_tre;
+	struct scatterlist *sg;
+	struct gsi_ring *ring;
+	u32 byte_count = 0;
+	u32 avail;
+	int ret;
+	u32 i;
+
+	direction = channel->toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+	ret = dma_map_sg(trans->dev, trans->sgl, trans->sgc, direction);
+	if (!ret)
+		return -ENOMEM;
+
+	ring = &channel->gsi->evt_ring[channel->evt_ring_id].ring;
+
+	/* Consume the entries.  If we cross the end of the ring while
+	 * filling them we'll switch to the beginning to finish.
+	 */
+	avail = ring->count - tre_ring->index % tre_ring->count;
+	dest_tre = gsi_ring_virt(tre_ring, tre_ring->index);
+	for_each_sg(trans->sgl, sg, trans->sgc, i) {
+		bool last_tre = i == trans->tre_count - 1;
+		dma_addr_t addr = sg_dma_address(sg);
+		u32 len = sg_dma_len(sg);
+
+		byte_count += len;
+		if (!avail--)
+			dest_tre = gsi_ring_virt(tre_ring, 0);
+
+		gsi_trans_tre_fill(dest_tre, addr, len, last_tre, bei, opcode);
+		dest_tre++;
+	}
+	tre_ring->index += trans->tre_count;
+
+	if (channel->toward_ipa) {
+		/* We record TX bytes when they are sent */
+		trans->len = byte_count;
+		trans->trans_count = channel->trans_count;
+		trans->byte_count = channel->byte_count;
+		channel->trans_count++;
+		channel->byte_count += byte_count;
+	}
+
+	/* Associate the last TRE with the transaction */
+	gsi_channel_trans_map(channel, tre_ring->index - 1, trans);
+
+	gsi_trans_move_pending(trans);
+
+	/* Ring doorbell if requested, or if all TREs are allocated */
+	if (ring_db || !atomic_read(&channel->trans_info.tre_avail)) {
+		/* Report what we're handing off to hardware for TX channels */
+		if (channel->toward_ipa)
+			gsi_channel_tx_queued(channel);
+		gsi_channel_doorbell(channel);
+	}
+
+	return 0;
+}
+
+/* Commit a GSI transaction */
+int gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
+{
+	return __gsi_trans_commit(trans, IPA_CMD_NONE, ring_db);
+}
+
+/* Commit a GSI command transaction and wait for it to complete */
+int gsi_trans_commit_command(struct gsi_trans *trans,
+			     enum ipa_cmd_opcode opcode)
+{
+	int ret;
+
+	refcount_inc(&trans->refcount);
+
+	ret = __gsi_trans_commit(trans, opcode, true);
+	if (ret)
+		goto out_free_trans;
+
+	wait_for_completion(&trans->completion);
+
+out_free_trans:
+	gsi_trans_free(trans);
+
+	return ret;
+}
+
+/* Commit a GSI command transaction, wait for it to complete, with timeout */
+int gsi_trans_commit_command_timeout(struct gsi_trans *trans,
+				     enum ipa_cmd_opcode opcode,
+				     unsigned long timeout)
+{
+	unsigned long timeout_jiffies = msecs_to_jiffies(timeout);
+	unsigned long remaining;
+	int ret;
+
+	refcount_inc(&trans->refcount);
+
+	ret = __gsi_trans_commit(trans, opcode, true);
+	if (ret)
+		goto out_free_trans;
+
+	remaining = wait_for_completion_timeout(&trans->completion,
+						timeout_jiffies);
+out_free_trans:
+	gsi_trans_free(trans);
+
+	return ret ? ret : remaining ? 0 : -ETIMEDOUT;
+}
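For the command channel, callers typically block until the hardware
responds; the timeout variant guards against a wedged device.  A hedged
sketch of how a caller might use it (the 500 ms value and the surrounding
context are assumptions for illustration, not taken from this patch):

	/* Commit an immediate command, giving the hardware 500 ms */
	ret = gsi_trans_commit_command_timeout(trans, opcode, 500);
	if (ret == -ETIMEDOUT)
		dev_err(trans->dev, "command timed out\n");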
+
+/* Complete a transaction: perform final cleanup and free it */
+void gsi_trans_complete(struct gsi_trans *trans)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	enum dma_data_direction direction;
+
+	direction = channel->toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+
+	dma_unmap_sg(trans->dev, trans->sgl, trans->sgc, direction);
+
+	ipa_gsi_trans_complete(trans);
+
+	complete(&trans->completion);
+
+	gsi_trans_free(trans);
+}
+
+/* Cancel a channel's pending transactions */
+void gsi_channel_trans_cancel_pending(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct gsi_trans *trans;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	list_for_each_entry(trans, &trans_info->pending, links)
+		trans->result = -ECANCELED;
+
+	list_splice_tail_init(&trans_info->pending, &trans_info->complete);
+
+	spin_unlock_bh(&trans_info->spinlock);
+
+	/* Schedule NAPI polling to complete the cancelled transactions */
+	napi_schedule(&channel->napi);
+}
+
+/* Initialize a channel's GSI transaction info */
+int gsi_channel_trans_init(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	u32 tre_count = channel->data->tre_count;
+
+	trans_info->map = kcalloc(tre_count, sizeof(*trans_info->map),
+				  GFP_KERNEL);
+	if (!trans_info->map)
+		return -ENOMEM;
+
+	/* We will never need more transactions than there are TRE
+	 * entries in the transfer ring.  For that reason, we can
+	 * preallocate an array of (at least) that many transactions,
+	 * and use a single free index to determine the next one
+	 * available for allocation.
+	 */
+	trans_info->pool_count = tre_count;
+	trans_info->pool = kcalloc(trans_info->pool_count,
+				   sizeof(*trans_info->pool), GFP_KERNEL);
+	if (!trans_info->pool)
+		goto err_free_map;
+	/* If we get extra memory from the allocator, use it */
+	trans_info->pool_count =
+		ksize(trans_info->pool) / sizeof(*trans_info->pool);
+	trans_info->pool_free = 0;
+
+	/* While transactions are allocated one at a time, a transaction
+	 * can have multiple TREs.  The number of TRE entries in a single
+	 * transaction is limited by the number of TLV FIFO entries the
+	 * channel has.  We reserve TREs when a transaction is allocated,
+	 * but we don't actually use/fill them until the transaction is
+	 * committed.
+	 *
+	 * A transaction uses a scatterlist array to represent the data
+	 * transfers implemented by the transaction.  Each scatterlist
+	 * element is used to fill a single TRE when the transaction is
+	 * committed.  As a result, we need the same number of scatterlist
+	 * elements as there are TREs in the transfer ring, and we can
+	 * preallocate them in a pool.
+	 *
+	 * If we allocate a few (tlv_count - 1) extra entries in our pool,
+	 * we can always satisfy requests without ever worrying about
+	 * straddling the end of the array.  If there aren't enough
+	 * entries starting at the free index, we just allocate free
+	 * entries from the beginning of the pool.
+	 */
+	trans_info->sg_pool_count = tre_count + channel->data->tlv_count - 1;
+	trans_info->sg_pool = kcalloc(trans_info->sg_pool_count,
+				      sizeof(*trans_info->sg_pool), GFP_KERNEL);
+	if (!trans_info->sg_pool)
+		goto err_free_pool;
+	/* Use any extra memory we get from the allocator */
+	trans_info->sg_pool_count =
+		ksize(trans_info->sg_pool) / sizeof(*trans_info->sg_pool);
+	trans_info->sg_pool_free = 0;
+
+	/* The tre_avail field limits the number of outstanding transactions.
+	 * In theory we should be able to use all of the TREs in the ring.
+	 * But in practice, doing that caused the hardware to report running
+	 * out of event ring slots for writing completion information.  So
+	 * give the poor hardware a break, and allow one less than the
+	 * maximum.
+	 */
+	atomic_set(&trans_info->tre_avail, tre_count - 1);
+
+	spin_lock_init(&trans_info->spinlock);
+	INIT_LIST_HEAD(&trans_info->alloc);
+	INIT_LIST_HEAD(&trans_info->pending);
+	INIT_LIST_HEAD(&trans_info->complete);
+	INIT_LIST_HEAD(&trans_info->polled);
+
+	return 0;
+
+err_free_pool:
+	kfree(trans_info->pool);
+err_free_map:
+	kfree(trans_info->map);
+
+	return -ENOMEM;
+}
+
+/* Inverse of gsi_channel_trans_init() */
+void gsi_channel_trans_exit(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+
+	kfree(trans_info->sg_pool);
+	kfree(trans_info->pool);
+	kfree(trans_info->map);
+}
diff --git a/drivers/net/ipa/gsi_trans.h b/drivers/net/ipa/gsi_trans.h
new file mode 100644
index 000000000000..2d5a199e4396
--- /dev/null
+++ b/drivers/net/ipa/gsi_trans.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _GSI_TRANS_H_
+#define _GSI_TRANS_H_
+
+#include
+#include
+#include
+
+struct scatterlist;
+struct device;
+
+struct gsi;
+struct gsi_trans;
+enum ipa_cmd_opcode;
+
+struct gsi_trans {
+	struct list_head links;		/* gsi_channel lists */
+
+	struct gsi *gsi;
+	u32 channel_id;
+
+	u32 tre_count;			/* # TREs requested */
+	u32 len;			/* total # bytes in sgl */
+	struct scatterlist *sgl;
+	u32 sgc;			/* # entries in sgl[] */
+
+	struct completion completion;
+	refcount_t refcount;
+
+	/* fields above are internal only */
+
+	struct device *dev;		/* Use this for DMA mapping */
+	long result;			/* RX count, 0, or error code */
+
+	u64 byte_count;			/* channel byte_count when committed */
+	u64 trans_count;		/* channel trans_count when committed */
+
+	void *data;
+};
+
+/**
+ * gsi_channel_trans_alloc() - Allocate a GSI transaction on a channel
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel the transaction is associated with
+ * @tre_count:	Number of elements in the transaction
+ *
+ * @Return:	A GSI transaction structure, or a null pointer if all
+ *		available transactions are in use
+ */
+struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
+					  u32 tre_count);
+
+/**
+ * gsi_trans_free() - Free a previously-allocated GSI transaction
+ * @trans:	Transaction to be freed
+ *
+ * Note: this should only be used in error paths, before the transaction is
+ * committed or in the event committing the transaction produces an error.
+ * Successfully committing a transaction passes ownership of the structure
+ * to the core transaction code.
+ */
+void gsi_trans_free(struct gsi_trans *trans);
+
+/**
+ * gsi_trans_commit() - Commit a GSI transaction
+ * @trans:	Transaction to commit
+ * @ring_db:	Whether to tell the hardware about these queued transfers
+ */
+int gsi_trans_commit(struct gsi_trans *trans, bool ring_db);
+
+/**
+ * gsi_trans_commit_command() - Commit a GSI command transaction and wait
+ *				for it to complete
+ * @trans:	Transaction to commit
+ * @opcode:	Immediate command opcode
+ */
+int gsi_trans_commit_command(struct gsi_trans *trans,
+			     enum ipa_cmd_opcode opcode);
+
+/**
+ * gsi_trans_commit_command_timeout() - Commit a GSI command transaction,
+ *					wait for it to complete, with timeout
+ * @trans:	Transaction to commit
+ * @opcode:	Immediate command opcode
+ * @timeout:	Timeout period (in milliseconds)
+ */
+int gsi_trans_commit_command_timeout(struct gsi_trans *trans,
+				     enum ipa_cmd_opcode opcode,
+				     unsigned long timeout);
+
+/**
+ * gsi_trans_read_byte() - Issue a single byte read TRE on a channel
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel on which to read a byte
+ * @addr:	DMA address into which to transfer the one byte
+ *
+ * This is not a transaction operation at all.  It's defined here because
+ * it needs to be done in coordination with other transaction activity.
+ */
+int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr);
+
+/**
+ * gsi_trans_read_byte_done() - Clean up after a single byte read TRE
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel on which byte was read
+ *
+ * This function needs to be called to signal that the work related
+ * to reading a byte initiated by gsi_trans_read_byte() is complete.
+ */
+void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id);
+
+#endif /* _GSI_TRANS_H_ */
From patchwork Fri May 31 03:53:47 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 165495
Delivered-To: patch@linaro.org
From patchwork Fri May 31 03:53:47 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 165495
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
	ilias.apalodimas@linaro.org
Cc: evgreen@chromium.org, benchan@google.com, ejcaruso@google.com,
	cpratapa@codeaurora.org, syadagir@codeaurora.org,
	subashab@codeaurora.org, abhishek.esse@gmail.com,
	netdev@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-soc@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org
Subject: [PATCH v2 16/17] arm64: dts: sdm845: add IPA information
Date: Thu, 30 May 2019 22:53:47 -0500
Message-Id: <20190531035348.7194-17-elder@linaro.org>
In-Reply-To: <20190531035348.7194-1-elder@linaro.org>
References: <20190531035348.7194-1-elder@linaro.org>

Add IPA-related nodes and definitions to "sdm845.dtsi".

Signed-off-by: Alex Elder
---
 arch/arm64/boot/dts/qcom/sdm845.dtsi | 51 ++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

-- 
2.20.1

diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
index fcb93300ca62..985479925af8 100644
--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include <dt-bindings/interconnect/qcom,sdm845.h>
 
 / {
 	interrupt-parent = <&intc>;
@@ -517,6 +518,17 @@
 			interrupt-controller;
 			#interrupt-cells = <2>;
 		};
+
+		ipa_smp2p_out: ipa-ap-to-modem {
+			qcom,entry-name = "ipa";
+			#qcom,smem-state-cells = <1>;
+		};
+
+		ipa_smp2p_in: ipa-modem-to-ap {
+			qcom,entry-name = "ipa";
+			interrupt-controller;
+			#interrupt-cells = <2>;
+		};
 	};
 
 	smp2p-slpi {
@@ -1268,6 +1280,45 @@
 		};
 	};
 
+	ipa@1e40000 {
+		compatible = "qcom,sdm845-ipa";
+
+		modem-init;
+
+		reg = <0 0x1e40000 0 0x7000>,
+		      <0 0x1e47000 0 0x2000>,
+		      <0 0x1e04000 0 0x2c000>;
+		reg-names = "ipa-reg",
+			    "ipa-shared",
+			    "gsi";
+
+		interrupts-extended =
+			<&intc 0 311 IRQ_TYPE_EDGE_RISING>,
+			<&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
+			<&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+			<&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
+		interrupt-names = "ipa",
+				  "gsi",
+				  "ipa-clock-query",
+				  "ipa-setup-ready";
+
+		clocks = <&rpmhcc RPMH_IPA_CLK>;
+		clock-names = "core";
+
+		interconnects =
+			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
+			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
+			<&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
+		interconnect-names = "memory",
+				     "imem",
+				     "config";
+
+		qcom,smem-states = <&ipa_smp2p_out 0>,
+				   <&ipa_smp2p_out 1>;
+		qcom,smem-state-names = "ipa-clock-enabled-valid",
+					"ipa-clock-enabled";
+	};
+
 	tcsr_mutex_regs: syscon@1f40000 {
 		compatible = "syscon";
 		reg = <0 0x01f40000 0 0x40000>;
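[Editorial note: the names this node declares ("ipa" interrupt, "core" clock,
"memory" interconnect, "ipa-clock-enabled" smem state) are what a driver
would look up at probe time.  The fragment below is a hypothetical sketch
using standard kernel APIs, not code from this series; error handling is
abbreviated.]

	#include <linux/clk.h>
	#include <linux/err.h>
	#include <linux/interconnect.h>
	#include <linux/platform_device.h>
	#include <linux/soc/qcom/smem_state.h>

	static int example_probe(struct platform_device *pdev)
	{
		struct qcom_smem_state *enabled_state;
		struct icc_path *memory_path;
		unsigned int enabled_bit;
		struct clk *core;
		int irq;

		/* "interrupt-names" entry from the node above */
		irq = platform_get_irq_byname(pdev, "ipa");
		if (irq < 0)
			return irq;

		/* "clock-names" entry */
		core = devm_clk_get(&pdev->dev, "core");
		if (IS_ERR(core))
			return PTR_ERR(core);

		/* "interconnect-names" entry */
		memory_path = of_icc_get(&pdev->dev, "memory");
		if (IS_ERR(memory_path))
			return PTR_ERR(memory_path);

		/* "qcom,smem-state-names" entry */
		enabled_state = qcom_smem_state_get(&pdev->dev, "ipa-clock-enabled",
						    &enabled_bit);
		if (IS_ERR(enabled_state))
			return PTR_ERR(enabled_state);

		return 0;
	}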