From patchwork Fri Mar 6 04:28:16 2020
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 222955
From: Alex Elder
To: Rob Herring, Mark Rutland, David Miller, Arnd Bergmann
Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams, Evan Green,
    Eric Caruso, Susheel Yadav Yadagiri, Chaitanya Pratapa,
    Subash Abhinov Kasiviswanathan, Ohad Ben-Cohen, Siddharth Gupta,
    netdev@vger.kernel.org, devicetree@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
    linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org, Rob Herring
Subject: [PATCH v2 02/17] dt-bindings: soc: qcom: add IPA bindings
Date: Thu, 5 Mar 2020 22:28:16 -0600
Message-Id: <20200306042831.17827-3-elder@linaro.org>
In-Reply-To: <20200306042831.17827-1-elder@linaro.org>
References: <20200306042831.17827-1-elder@linaro.org>

Add the binding definitions for the "qcom,ipa" device tree node.

Signed-off-by: Alex Elder
Reviewed-by: Rob Herring
---
 .../devicetree/bindings/net/qcom,ipa.yaml     | 192 ++++++++++++++++++
 1 file changed, 192 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/qcom,ipa.yaml

diff --git a/Documentation/devicetree/bindings/net/qcom,ipa.yaml b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
new file mode 100644
index 000000000000..91d08f2c7791
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
@@ -0,0 +1,192 @@
+# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/qcom,ipa.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm IP Accelerator (IPA)
+
+maintainers:
+  - Alex Elder
+
+description:
+  This binding describes the Qualcomm IPA.  The IPA is capable of offloading
+  certain network processing tasks (e.g. filtering, routing, and NAT) from
+  the main processor.
+
+  The IPA sits between multiple independent "execution environments,"
+  including the Application Processor (AP) and the modem.  The IPA presents
+  a Generic Software Interface (GSI) to each execution environment.
+  The GSI is an integral part of the IPA, but it is logically isolated
+  and has a distinct interrupt and a separately-defined address space.
+
+  See also soc/qcom/qcom,smp2p.txt and interconnect/interconnect.txt.
+
+  - |
+        --------             ---------
+        |      |             |       |
+        |  AP  +<---.   .----+ Modem |
+        |      +--. |   | .->+       |
+        |      |  | |   | |  |       |
+        --------  | |   | |  ---------
+                  v |   v |
+                --+-+---+-+--
+                |    GSI    |
+                |-----------|
+                |           |
+                |    IPA    |
+                |           |
+                -------------
+
+properties:
+  compatible:
+    const: "qcom,sdm845-ipa"
+
+  reg:
+    items:
+      - description: IPA registers
+      - description: IPA shared memory
+      - description: GSI registers
+
+  reg-names:
+    items:
+      - const: ipa-reg
+      - const: ipa-shared
+      - const: gsi
+
+  clocks:
+    maxItems: 1
+
+  clock-names:
+    const: core
+
+  interrupts:
+    items:
+      - description: IPA interrupt (hardware IRQ)
+      - description: GSI interrupt (hardware IRQ)
+      - description: Modem clock query interrupt (smp2p interrupt)
+      - description: Modem setup ready interrupt (smp2p interrupt)
+
+  interrupt-names:
+    items:
+      - const: ipa
+      - const: gsi
+      - const: ipa-clock-query
+      - const: ipa-setup-ready
+
+  interconnects:
+    items:
+      - description: Interconnect path between IPA and main memory
+      - description: Interconnect path between IPA and internal memory
+      - description: Interconnect path between IPA and the AP subsystem
+
+  interconnect-names:
+    items:
+      - const: memory
+      - const: imem
+      - const: config
+
+  qcom,smem-states:
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    description: State bits used by the AP to signal the modem.
+    items:
+      - description: Whether the "ipa-clock-enabled" state bit is valid
+      - description: Whether the IPA clock is enabled (if valid)
+
+  qcom,smem-state-names:
+    $ref: /schemas/types.yaml#/definitions/string-array
+    description: The names of the state bits used for SMP2P output
+    items:
+      - const: ipa-clock-enabled-valid
+      - const: ipa-clock-enabled
+
+  modem-init:
+    type: boolean
+    description:
+      If present, it indicates that the modem is responsible for
+      performing early IPA initialization, including loading and
+      validating firmware used by the GSI.
+
+  modem-remoteproc:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description:
+      This defines the phandle to the remoteproc node representing
+      the modem subsystem.  This is required so the IPA driver can
+      receive and act on notifications of modem up/down events.
+
+  memory-region:
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    maxItems: 1
+    description:
+      If present, a phandle for a reserved memory area that holds
+      the firmware passed to Trust Zone for authentication.  Required
+      when Trust Zone (not the modem) performs early initialization.
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - interrupts
+  - interconnects
+  - qcom,smem-states
+  - modem-remoteproc
+
+oneOf:
+  - required:
+      - modem-init
+  - required:
+      - memory-region
+
+examples:
+  - |
+        smp2p-mpss {
+                compatible = "qcom,smp2p";
+                ipa_smp2p_out: ipa-ap-to-modem {
+                        qcom,entry-name = "ipa";
+                        #qcom,smem-state-cells = <1>;
+                };
+
+                ipa_smp2p_in: ipa-modem-to-ap {
+                        qcom,entry-name = "ipa";
+                        interrupt-controller;
+                        #interrupt-cells = <2>;
+                };
+        };
+
+        ipa@1e40000 {
+                compatible = "qcom,sdm845-ipa";
+
+                modem-init;
+                modem-remoteproc = <&mss_pil>;
+
+                reg = <0 0x1e40000 0 0x7000>,
+                      <0 0x1e47000 0 0x2000>,
+                      <0 0x1e04000 0 0x2c000>;
+                reg-names = "ipa-reg",
+                            "ipa-shared",
+                            "gsi";
+
+                interrupts-extended = <&intc 0 311 IRQ_TYPE_EDGE_RISING>,
+                                      <&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
+                                      <&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+                                      <&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
+                interrupt-names = "ipa",
+                                  "gsi",
+                                  "ipa-clock-query",
+                                  "ipa-setup-ready";
+
+                clocks = <&rpmhcc RPMH_IPA_CLK>;
+                clock-names = "core";
+
+                interconnects =
+                        <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
+                        <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
+                        <&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
+                interconnect-names = "memory",
+                                     "imem",
+                                     "config";
+
+                qcom,smem-states = <&ipa_smp2p_out 0>,
+                                   <&ipa_smp2p_out 1>;
+                qcom,smem-state-names = "ipa-clock-enabled-valid",
+                                        "ipa-clock-enabled";
+        };
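The example above covers the modem-init case. For the Trust Zone case,
the oneOf constraint instead requires memory-region; a minimal sketch of
that variant follows (the reserved-memory node name, unit address, and
size are illustrative placeholders, not values taken from this patch):

        reserved-memory {
                #address-cells = <2>;
                #size-cells = <2>;
                ranges;

                /* Hypothetical carveout holding the GSI firmware image */
                ipa_fw_mem: memory@8b700000 {
                        reg = <0 0x8b700000 0 0x10000>;
                        no-map;
                };
        };

        ipa@1e40000 {
                compatible = "qcom,sdm845-ipa";

                /* No modem-init property: the AP loads the firmware and
                 * Trust Zone authenticates it, so the node points at the
                 * firmware carveout instead.
                 */
                memory-region = <&ipa_fw_mem>;
                modem-remoteproc = <&mss_pil>;

                /* reg, clocks, interrupts, interconnects, and SMP2P state
                 * properties as in the example above.
                 */
        };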
From patchwork Fri Mar 6 04:28:17 2020
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 222947
From: Alex Elder
To: David Miller, Arnd Bergmann
Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams, Evan Green,
    Eric Caruso, Susheel Yadav Yadagiri, Chaitanya Pratapa,
    Subash Abhinov Kasiviswanathan, Rob Herring, Mark Rutland,
    Ohad Ben-Cohen, Siddharth Gupta, netdev@vger.kernel.org,
    devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/17] soc: qcom: ipa: main code
Date: Thu, 5 Mar 2020 22:28:17 -0600
Message-Id: <20200306042831.17827-4-elder@linaro.org>
In-Reply-To: <20200306042831.17827-1-elder@linaro.org>
References: <20200306042831.17827-1-elder@linaro.org>

This patch includes the source files that represent some basic "main
program" code for the IPA driver.  They are:
  - "ipa.h" defines the top-level IPA structure which represents
    an IPA device throughout the code.
  - "ipa_main.c" contains the platform driver probe function, along
    with some general code used during initialization.
  - "ipa_reg.c" contains the code that maps (and unmaps) the IPA
    register space.
  - "ipa_reg.h" defines the offsets of the 32-bit registers used
    for the IPA device, along with masks that define the position
    and width of fields within these registers.
  - "ipa_version.h" defines some symbolic IPA version numbers.

Each file includes some documentation that provides a little more
overview of how the code is organized and used.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/ipa.h         | 148 ++++++
 drivers/net/ipa/ipa_main.c    | 954 ++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_reg.c     |  38 ++
 drivers/net/ipa/ipa_reg.h     | 476 +++++++++++++++++
 drivers/net/ipa/ipa_version.h |  23 +
 5 files changed, 1639 insertions(+)
 create mode 100644 drivers/net/ipa/ipa.h
 create mode 100644 drivers/net/ipa/ipa_main.c
 create mode 100644 drivers/net/ipa/ipa_reg.c
 create mode 100644 drivers/net/ipa/ipa_reg.h
 create mode 100644 drivers/net/ipa/ipa_version.h

diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
new file mode 100644
index 000000000000..23fb29889e5a
--- /dev/null
+++ b/drivers/net/ipa/ipa.h
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _IPA_H_
+#define _IPA_H_
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/notifier.h>
+#include <linux/pm_wakeup.h>
+
+#include "ipa_version.h"
+#include "gsi.h"
+#include "ipa_mem.h"
+#include "ipa_qmi.h"
+#include "ipa_endpoint.h"
+#include "ipa_interrupt.h"
+
+struct clk;
+struct icc_path;
+struct net_device;
+struct platform_device;
+
+struct ipa_clock;
+struct ipa_smp2p;
+struct ipa_interrupt;
+
+/**
+ * struct ipa - IPA information
+ * @gsi:		Embedded GSI structure
+ * @version:		IPA hardware version
+ * @pdev:		Platform device
+ * @modem_rproc:	Remoteproc handle for modem subsystem
+ * @smp2p:		SMP2P information
+ * @clock:		IPA clocking information
+ * @suspend_ref:	Whether clock reference preventing suspend taken
+ * @table_addr:		DMA address of filter/route table content
+ * @table_virt:		Virtual address of filter/route table content
+ * @interrupt:		IPA Interrupt information
+ * @uc_loaded:		true after microcontroller has reported it's ready
+ * @reg_addr:		DMA address used for IPA register access
+ * @reg_virt:		Virtual address used for IPA register access
+ * @mem_addr:		DMA address of IPA-local memory space
+ * @mem_virt:		Virtual address of IPA-local memory space
+ * @mem_offset:		Offset from @mem_virt used for access to IPA memory
+ * @mem_size:		Total size (bytes) of memory at @mem_virt
+ * @mem:		Array of IPA-local memory region descriptors
+ * @zero_addr:		DMA address of preallocated zero-filled memory
+ * @zero_virt:		Virtual address of preallocated zero-filled memory
+ * @zero_size:		Size (bytes) of preallocated zero-filled memory
+ * @wakeup_source:	Wakeup source information
+ * @available:		Bit mask indicating endpoints hardware supports
+ * @filter_map:		Bit mask indicating endpoints that support filtering
+ * @initialized:	Bit mask indicating endpoints initialized
+ * @set_up:		Bit mask indicating endpoints set up
+ * @enabled:		Bit mask indicating endpoints enabled
+ * @endpoint:		Array of endpoint information
+ * @channel_map:	Mapping of GSI channel to IPA endpoint
+ * @name_map:		Mapping of IPA endpoint name to IPA endpoint
+ * @setup_complete:	Flag indicating whether setup stage has completed
+ * @modem_state:	State of modem (stopped, running)
+ * @modem_netdev:	Network device structure used for modem
+ * @qmi:		QMI information
+ */
+struct ipa {
+	struct gsi gsi;
+	enum ipa_version version;
+	struct platform_device *pdev;
+	struct rproc *modem_rproc;
+	struct ipa_smp2p *smp2p;
+	struct ipa_clock *clock;
+	atomic_t suspend_ref;
+
+	dma_addr_t table_addr;
+	__le64 *table_virt;
+
+	struct ipa_interrupt *interrupt;
+	bool uc_loaded;
+
+	dma_addr_t reg_addr;
+	void __iomem *reg_virt;
+
+	dma_addr_t mem_addr;
+	void *mem_virt;
+	u32 mem_offset;
+	u32 mem_size;
+	const struct ipa_mem *mem;
+
+	dma_addr_t zero_addr;
+	void *zero_virt;
+	size_t zero_size;
+
+	struct wakeup_source *wakeup_source;
+
+	/* Bit masks indicating endpoint state */
+	u32 available;		/* supported by hardware */
+	u32 filter_map;
+	u32 initialized;
+	u32 set_up;
+	u32 enabled;
+
+	struct ipa_endpoint endpoint[IPA_ENDPOINT_MAX];
+	struct ipa_endpoint *channel_map[GSI_CHANNEL_COUNT_MAX];
+	struct ipa_endpoint *name_map[IPA_ENDPOINT_COUNT];
+
+	bool setup_complete;
+
+	atomic_t modem_state;		/* enum ipa_modem_state */
+	struct net_device *modem_netdev;
+	struct ipa_qmi qmi;
+};
+
+/**
+ * ipa_setup() - Perform IPA setup
+ * @ipa:	IPA pointer
+ *
+ * IPA initialization is broken into stages:  init; config; and setup.
+ * (These have inverses exit, deconfig, and teardown.)
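+ *
+ * As a concrete illustration (see ipa_probe() in ipa_main.c below):
+ * ipa_reg_init(), ipa_mem_init(), gsi_init(), ipa_endpoint_init() and
+ * ipa_table_init() make up the init stage; ipa_config() performs the
+ * config stage; and this function performs the setup stage.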
+ *
+ * Activities performed at the init stage can be done without requiring
+ * any access to IPA hardware.  Activities performed at the config stage
+ * require the IPA clock to be running, because they involve access
+ * to IPA registers.  The setup stage is performed only after the GSI
+ * hardware is ready (more on this below).  The setup stage allows
+ * the AP to perform more complex initialization by issuing "immediate
+ * commands" using a special interface to the IPA.
+ *
+ * This function, @ipa_setup(), starts the setup stage.
+ *
+ * In order for the GSI hardware to be functional it needs firmware to be
+ * loaded (in addition to some other low-level initialization).  This early
+ * GSI initialization can be done either by Trust Zone on the AP or by the
+ * modem.
+ *
+ * If it's done by Trust Zone, the AP loads the GSI firmware and supplies
+ * it to Trust Zone to verify and install.  When this completes, if
+ * verification was successful, the GSI layer is ready and ipa_setup()
+ * implements the setup phase of initialization.
+ *
+ * If the modem performs early GSI initialization, the AP needs to know
+ * when this has occurred.  An SMP2P interrupt is used for this purpose,
+ * and receipt of that interrupt triggers the call to ipa_setup().
+ */
+int ipa_setup(struct ipa *ipa);
+
+#endif /* _IPA_H_ */

diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
new file mode 100644
index 000000000000..d6e7f257e99d
--- /dev/null
+++ b/drivers/net/ipa/ipa_main.c
@@ -0,0 +1,954 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/atomic.h>
+#include <linux/bitfield.h>
+#include <linux/device.h>
+#include <linux/bug.h>
+#include <linux/io.h>
+#include <linux/firmware.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/remoteproc.h>
+#include <linux/qcom_scm.h>
+#include <linux/soc/qcom/mdt_loader.h>
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+#include "ipa_cmd.h"
+#include "ipa_reg.h"
+#include "ipa_mem.h"
+#include "ipa_table.h"
+#include "ipa_modem.h"
+#include "ipa_uc.h"
+#include "ipa_interrupt.h"
+#include "gsi_trans.h"
+
+/**
+ * DOC: The IP Accelerator
+ *
+ * This driver supports the Qualcomm IP Accelerator (IPA), which is a
+ * networking component found in many Qualcomm SoCs.  The IPA is connected
+ * to the application processor (AP), but is also connected (and partially
+ * controlled by) other "execution environments" (EEs), such as a modem.
+ *
+ * The IPA is the conduit between the AP and the modem that carries network
+ * traffic.  This driver presents a network interface representing the
+ * connection of the modem to external (e.g. LTE) networks.
+ *
+ * The IPA provides protocol checksum calculation, offloading this work
+ * from the AP.  The IPA offers additional functionality, including routing,
+ * filtering, and NAT support, but that more advanced functionality is not
+ * currently supported.  Despite that, some resources--including routing
+ * tables and filter tables--are defined in this driver because they must
+ * be initialized even when the advanced hardware features are not used.
+ *
+ * There are two distinct layers that implement the IPA hardware, and this
+ * is reflected in the organization of the driver.  The generic software
+ * interface (GSI) is an integral component of the IPA, providing a
+ * well-defined communication layer between the AP subsystem and the IPA
+ * core.  The GSI implements a set of "channels" used for communication
+ * between the AP and the IPA.
+ * + * The IPA layer uses GSI channels to implement its "endpoints". And while + * a GSI channel carries data between the AP and the IPA, a pair of IPA + * endpoints is used to carry traffic between two EEs. Specifically, the main + * modem network interface is implemented by two pairs of endpoints: a TX + * endpoint on the AP coupled with an RX endpoint on the modem; and another + * RX endpoint on the AP receiving data from a TX endpoint on the modem. + */ + +/* The name of the GSI firmware file relative to /lib/firmware */ +#define IPA_FWS_PATH "ipa_fws.mdt" +#define IPA_PAS_ID 15 + +/** + * ipa_suspend_handler() - Handle the suspend IPA interrupt + * @ipa: IPA pointer + * @irq_id: IPA interrupt type (unused) + * + * When in suspended state, the IPA can trigger a resume by sending a SUSPEND + * IPA interrupt. + */ +static void ipa_suspend_handler(struct ipa *ipa, enum ipa_irq_id irq_id) +{ + /* Take a a single clock reference to prevent suspend. All + * endpoints will be resumed as a result. This reference will + * be dropped when we get a power management suspend request. + */ + if (!atomic_xchg(&ipa->suspend_ref, 1)) + ipa_clock_get(ipa); + + /* Acknowledge/clear the suspend interrupt on all endpoints */ + ipa_interrupt_suspend_clear_all(ipa->interrupt); +} + +/** + * ipa_setup() - Set up IPA hardware + * @ipa: IPA pointer + * + * Perform initialization that requires issuing immediate commands on + * the command TX endpoint. If the modem is doing GSI firmware load + * and initialization, this function will be called when an SMP2P + * interrupt has been signaled by the modem. Otherwise it will be + * called from ipa_probe() after GSI firmware has been successfully + * loaded, authenticated, and started by Trust Zone. + */ +int ipa_setup(struct ipa *ipa) +{ + struct ipa_endpoint *exception_endpoint; + struct ipa_endpoint *command_endpoint; + int ret; + + /* IPA v4.0 and above don't use the doorbell engine. */ + ret = gsi_setup(&ipa->gsi, ipa->version == IPA_VERSION_3_5_1); + if (ret) + return ret; + + ipa->interrupt = ipa_interrupt_setup(ipa); + if (IS_ERR(ipa->interrupt)) { + ret = PTR_ERR(ipa->interrupt); + goto err_gsi_teardown; + } + ipa_interrupt_add(ipa->interrupt, IPA_IRQ_TX_SUSPEND, + ipa_suspend_handler); + + ipa_uc_setup(ipa); + + ipa_endpoint_setup(ipa); + + /* We need to use the AP command TX endpoint to perform other + * initialization, so we enable first. + */ + command_endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]; + ret = ipa_endpoint_enable_one(command_endpoint); + if (ret) + goto err_endpoint_teardown; + + ret = ipa_mem_setup(ipa); + if (ret) + goto err_command_disable; + + ret = ipa_table_setup(ipa); + if (ret) + goto err_mem_teardown; + + /* Enable the exception handling endpoint, and tell the hardware + * to use it by default. + */ + exception_endpoint = ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]; + ret = ipa_endpoint_enable_one(exception_endpoint); + if (ret) + goto err_table_teardown; + + ipa_endpoint_default_route_set(ipa, exception_endpoint->endpoint_id); + + /* We're all set. 
Now prepare for communication with the modem */ + ret = ipa_modem_setup(ipa); + if (ret) + goto err_default_route_clear; + + ipa->setup_complete = true; + + dev_info(&ipa->pdev->dev, "IPA driver setup completed successfully\n"); + + return 0; + +err_default_route_clear: + ipa_endpoint_default_route_clear(ipa); + ipa_endpoint_disable_one(exception_endpoint); +err_table_teardown: + ipa_table_teardown(ipa); +err_mem_teardown: + ipa_mem_teardown(ipa); +err_command_disable: + ipa_endpoint_disable_one(command_endpoint); +err_endpoint_teardown: + ipa_endpoint_teardown(ipa); + ipa_uc_teardown(ipa); + ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_TX_SUSPEND); + ipa_interrupt_teardown(ipa->interrupt); +err_gsi_teardown: + gsi_teardown(&ipa->gsi); + + return ret; +} + +/** + * ipa_teardown() - Inverse of ipa_setup() + * @ipa: IPA pointer + */ +static void ipa_teardown(struct ipa *ipa) +{ + struct ipa_endpoint *exception_endpoint; + struct ipa_endpoint *command_endpoint; + + ipa_modem_teardown(ipa); + ipa_endpoint_default_route_clear(ipa); + exception_endpoint = ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]; + ipa_endpoint_disable_one(exception_endpoint); + ipa_table_teardown(ipa); + ipa_mem_teardown(ipa); + command_endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]; + ipa_endpoint_disable_one(command_endpoint); + ipa_endpoint_teardown(ipa); + ipa_uc_teardown(ipa); + ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_TX_SUSPEND); + ipa_interrupt_teardown(ipa->interrupt); + gsi_teardown(&ipa->gsi); +} + +/* Configure QMB Core Master Port selection */ +static void ipa_hardware_config_comp(struct ipa *ipa) +{ + u32 val; + + /* Nothing to configure for IPA v3.5.1 */ + if (ipa->version == IPA_VERSION_3_5_1) + return; + + val = ioread32(ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET); + + if (ipa->version == IPA_VERSION_4_0) { + val &= ~IPA_QMB_SELECT_CONS_EN_FMASK; + val &= ~IPA_QMB_SELECT_PROD_EN_FMASK; + val &= ~IPA_QMB_SELECT_GLOBAL_EN_FMASK; + } else { + val |= GSI_MULTI_AXI_MASTERS_DIS_FMASK; + } + + val |= GSI_MULTI_INORDER_RD_DIS_FMASK; + val |= GSI_MULTI_INORDER_WR_DIS_FMASK; + + iowrite32(val, ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET); +} + +/* Configure DDR and PCIe max read/write QSB values */ +static void ipa_hardware_config_qsb(struct ipa *ipa) +{ + u32 val; + + /* QMB_0 represents DDR; QMB_1 represents PCIe (not present in 4.2) */ + val = u32_encode_bits(8, GEN_QMB_0_MAX_WRITES_FMASK); + if (ipa->version == IPA_VERSION_4_2) + val |= u32_encode_bits(0, GEN_QMB_1_MAX_WRITES_FMASK); + else + val |= u32_encode_bits(4, GEN_QMB_1_MAX_WRITES_FMASK); + iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_WRITES_OFFSET); + + if (ipa->version == IPA_VERSION_3_5_1) { + val = u32_encode_bits(8, GEN_QMB_0_MAX_READS_FMASK); + val |= u32_encode_bits(12, GEN_QMB_1_MAX_READS_FMASK); + } else { + val = u32_encode_bits(12, GEN_QMB_0_MAX_READS_FMASK); + if (ipa->version == IPA_VERSION_4_2) + val |= u32_encode_bits(0, GEN_QMB_1_MAX_READS_FMASK); + else + val |= u32_encode_bits(12, GEN_QMB_1_MAX_READS_FMASK); + /* GEN_QMB_0_MAX_READS_BEATS is 0 */ + /* GEN_QMB_1_MAX_READS_BEATS is 0 */ + } + iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_READS_OFFSET); +} + +static void ipa_idle_indication_cfg(struct ipa *ipa, + u32 enter_idle_debounce_thresh, + bool const_non_idle_enable) +{ + u32 offset; + u32 val; + + val = u32_encode_bits(enter_idle_debounce_thresh, + ENTER_IDLE_DEBOUNCE_THRESH_FMASK); + if (const_non_idle_enable) + val |= CONST_NON_IDLE_ENABLE_FMASK; + + offset = ipa_reg_idle_indication_cfg_offset(ipa->version); + iowrite32(val, 
ipa->reg_virt + offset); +} + +/** + * ipa_hardware_dcd_config() - Enable dynamic clock division on IPA + * + * Configures when the IPA signals it is idle to the global clock + * controller, which can respond by scalling down the clock to + * save power. + */ +static void ipa_hardware_dcd_config(struct ipa *ipa) +{ + /* Recommended values for IPA 3.5 according to IPA HPG */ + ipa_idle_indication_cfg(ipa, 256, false); +} + +static void ipa_hardware_dcd_deconfig(struct ipa *ipa) +{ + /* Power-on reset values */ + ipa_idle_indication_cfg(ipa, 0, true); +} + +/** + * ipa_hardware_config() - Primitive hardware initialization + * @ipa: IPA pointer + */ +static void ipa_hardware_config(struct ipa *ipa) +{ + u32 granularity; + u32 val; + + /* Fill in backward-compatibility register, based on version */ + val = ipa_reg_bcr_val(ipa->version); + iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET); + + if (ipa->version != IPA_VERSION_3_5_1) { + /* Enable open global clocks (hardware workaround) */ + val = GLOBAL_FMASK; + val |= GLOBAL_2X_CLK_FMASK; + iowrite32(val, ipa->reg_virt + IPA_REG_CLKON_CFG_OFFSET); + + /* Disable PA mask to allow HOLB drop (hardware workaround) */ + val = ioread32(ipa->reg_virt + IPA_REG_TX_CFG_OFFSET); + val &= ~PA_MASK_EN; + iowrite32(val, ipa->reg_virt + IPA_REG_TX_CFG_OFFSET); + } + + ipa_hardware_config_comp(ipa); + + /* Configure system bus limits */ + ipa_hardware_config_qsb(ipa); + + /* Configure aggregation granularity */ + val = ioread32(ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET); + granularity = ipa_aggr_granularity_val(IPA_AGGR_GRANULARITY); + val = u32_encode_bits(granularity, AGGR_GRANULARITY); + iowrite32(val, ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET); + + /* Disable hashed IPv4 and IPv6 routing and filtering for IPA v4.2 */ + if (ipa->version == IPA_VERSION_4_2) + iowrite32(0, ipa->reg_virt + IPA_REG_FILT_ROUT_HASH_EN_OFFSET); + + /* Enable dynamic clock division */ + ipa_hardware_dcd_config(ipa); +} + +/** + * ipa_hardware_deconfig() - Inverse of ipa_hardware_config() + * @ipa: IPA pointer + * + * This restores the power-on reset values (even if they aren't different) + */ +static void ipa_hardware_deconfig(struct ipa *ipa) +{ + /* Mostly we just leave things as we set them. */ + ipa_hardware_dcd_deconfig(ipa); +} + +#ifdef IPA_VALIDATION + +/* # IPA resources used based on version (see IPA_RESOURCE_GROUP_COUNT) */ +static int ipa_resource_group_count(struct ipa *ipa) +{ + switch (ipa->version) { + case IPA_VERSION_3_5_1: + return 3; + + case IPA_VERSION_4_0: + case IPA_VERSION_4_1: + return 4; + + case IPA_VERSION_4_2: + return 1; + + default: + return 0; + } +} + +static bool ipa_resource_limits_valid(struct ipa *ipa, + const struct ipa_resource_data *data) +{ + u32 group_count = ipa_resource_group_count(ipa); + u32 i; + u32 j; + + if (!group_count) + return false; + + /* Return an error if a non-zero resource group limit is specified + * for a resource not supported by hardware. 
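+	 * In other words, only the first ipa_resource_group_count()
+	 * entries in each limits[] array may be nonzero.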
+ */ + for (i = 0; i < data->resource_src_count; i++) { + const struct ipa_resource_src *resource; + + resource = &data->resource_src[i]; + for (j = group_count; j < IPA_RESOURCE_GROUP_COUNT; j++) + if (resource->limits[j].min || resource->limits[j].max) + return false; + } + + for (i = 0; i < data->resource_dst_count; i++) { + const struct ipa_resource_dst *resource; + + resource = &data->resource_dst[i]; + for (j = group_count; j < IPA_RESOURCE_GROUP_COUNT; j++) + if (resource->limits[j].min || resource->limits[j].max) + return false; + } + + return true; +} + +#else /* !IPA_VALIDATION */ + +static bool ipa_resource_limits_valid(struct ipa *ipa, + const struct ipa_resource_data *data) +{ + return true; +} + +#endif /* !IPA_VALIDATION */ + +static void +ipa_resource_config_common(struct ipa *ipa, u32 offset, + const struct ipa_resource_limits *xlimits, + const struct ipa_resource_limits *ylimits) +{ + u32 val; + + val = u32_encode_bits(xlimits->min, X_MIN_LIM_FMASK); + val |= u32_encode_bits(xlimits->max, X_MAX_LIM_FMASK); + val |= u32_encode_bits(ylimits->min, Y_MIN_LIM_FMASK); + val |= u32_encode_bits(ylimits->max, Y_MAX_LIM_FMASK); + + iowrite32(val, ipa->reg_virt + offset); +} + +static void ipa_resource_config_src_01(struct ipa *ipa, + const struct ipa_resource_src *resource) +{ + u32 offset = IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(resource->type); + + ipa_resource_config_common(ipa, offset, + &resource->limits[0], &resource->limits[1]); +} + +static void ipa_resource_config_src_23(struct ipa *ipa, + const struct ipa_resource_src *resource) +{ + u32 offset = IPA_REG_SRC_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(resource->type); + + ipa_resource_config_common(ipa, offset, + &resource->limits[2], &resource->limits[3]); +} + +static void ipa_resource_config_dst_01(struct ipa *ipa, + const struct ipa_resource_dst *resource) +{ + u32 offset = IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(resource->type); + + ipa_resource_config_common(ipa, offset, + &resource->limits[0], &resource->limits[1]); +} + +static void ipa_resource_config_dst_23(struct ipa *ipa, + const struct ipa_resource_dst *resource) +{ + u32 offset = IPA_REG_DST_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(resource->type); + + ipa_resource_config_common(ipa, offset, + &resource->limits[2], &resource->limits[3]); +} + +static int +ipa_resource_config(struct ipa *ipa, const struct ipa_resource_data *data) +{ + u32 i; + + if (!ipa_resource_limits_valid(ipa, data)) + return -EINVAL; + + for (i = 0; i < data->resource_src_count; i++) { + ipa_resource_config_src_01(ipa, &data->resource_src[i]); + ipa_resource_config_src_23(ipa, &data->resource_src[i]); + } + + for (i = 0; i < data->resource_dst_count; i++) { + ipa_resource_config_dst_01(ipa, &data->resource_dst[i]); + ipa_resource_config_dst_23(ipa, &data->resource_dst[i]); + } + + return 0; +} + +static void ipa_resource_deconfig(struct ipa *ipa) +{ + /* Nothing to do */ +} + +/** + * ipa_config() - Configure IPA hardware + * @ipa: IPA pointer + * + * Perform initialization requiring IPA clock to be enabled. + */ +static int ipa_config(struct ipa *ipa, const struct ipa_data *data) +{ + int ret; + + /* Get a clock reference to allow initialization. This reference + * is held after initialization completes, and won't get dropped + * unless/until a system suspend request arrives. 
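+	 * The reference is dropped by ipa_clock_put() in ipa_suspend().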
+ */ + atomic_set(&ipa->suspend_ref, 1); + ipa_clock_get(ipa); + + ipa_hardware_config(ipa); + + ret = ipa_endpoint_config(ipa); + if (ret) + goto err_hardware_deconfig; + + ret = ipa_mem_config(ipa); + if (ret) + goto err_endpoint_deconfig; + + ipa_table_config(ipa); + + /* Assign resource limitation to each group */ + ret = ipa_resource_config(ipa, data->resource_data); + if (ret) + goto err_table_deconfig; + + ret = ipa_modem_config(ipa); + if (ret) + goto err_resource_deconfig; + + return 0; + +err_resource_deconfig: + ipa_resource_deconfig(ipa); +err_table_deconfig: + ipa_table_deconfig(ipa); + ipa_mem_deconfig(ipa); +err_endpoint_deconfig: + ipa_endpoint_deconfig(ipa); +err_hardware_deconfig: + ipa_hardware_deconfig(ipa); + ipa_clock_put(ipa); + atomic_set(&ipa->suspend_ref, 0); + + return ret; +} + +/** + * ipa_deconfig() - Inverse of ipa_config() + * @ipa: IPA pointer + */ +static void ipa_deconfig(struct ipa *ipa) +{ + ipa_modem_deconfig(ipa); + ipa_resource_deconfig(ipa); + ipa_table_deconfig(ipa); + ipa_mem_deconfig(ipa); + ipa_endpoint_deconfig(ipa); + ipa_hardware_deconfig(ipa); + ipa_clock_put(ipa); + atomic_set(&ipa->suspend_ref, 0); +} + +static int ipa_firmware_load(struct device *dev) +{ + const struct firmware *fw; + struct device_node *node; + struct resource res; + phys_addr_t phys; + ssize_t size; + void *virt; + int ret; + + node = of_parse_phandle(dev->of_node, "memory-region", 0); + if (!node) { + dev_err(dev, "DT error getting \"memory-region\" property\n"); + return -EINVAL; + } + + ret = of_address_to_resource(node, 0, &res); + if (ret) { + dev_err(dev, "error %d getting \"memory-region\" resource\n", + ret); + return ret; + } + + ret = request_firmware(&fw, IPA_FWS_PATH, dev); + if (ret) { + dev_err(dev, "error %d requesting \"%s\"\n", ret, IPA_FWS_PATH); + return ret; + } + + phys = res.start; + size = (size_t)resource_size(&res); + virt = memremap(phys, size, MEMREMAP_WC); + if (!virt) { + dev_err(dev, "unable to remap firmware memory\n"); + ret = -ENOMEM; + goto out_release_firmware; + } + + ret = qcom_mdt_load(dev, fw, IPA_FWS_PATH, IPA_PAS_ID, + virt, phys, size, NULL); + if (ret) + dev_err(dev, "error %d loading \"%s\"\n", ret, IPA_FWS_PATH); + else if ((ret = qcom_scm_pas_auth_and_reset(IPA_PAS_ID))) + dev_err(dev, "error %d authenticating \"%s\"\n", ret, + IPA_FWS_PATH); + + memunmap(virt); +out_release_firmware: + release_firmware(fw); + + return ret; +} + +static const struct of_device_id ipa_match[] = { + { + .compatible = "qcom,sdm845-ipa", + .data = &ipa_data_sdm845, + }, + { + .compatible = "qcom,sc7180-ipa", + .data = &ipa_data_sc7180, + }, + { }, +}; +MODULE_DEVICE_TABLE(of, ipa_match); + +static phandle of_property_read_phandle(const struct device_node *np, + const char *name) +{ + struct property *prop; + int len = 0; + + prop = of_find_property(np, name, &len); + if (!prop || len != sizeof(__be32)) + return 0; + + return be32_to_cpup(prop->value); +} + +/* Check things that can be validated at build time. This just + * groups these things BUILD_BUG_ON() calls don't clutter the rest + * of the code. 
+ * */ +static void ipa_validate_build(void) +{ +#ifdef IPA_VALIDATE + /* We assume we're working on 64-bit hardware */ + BUILD_BUG_ON(!IS_ENABLED(CONFIG_64BIT)); + + /* Code assumes the EE ID for the AP is 0 (zeroed structure field) */ + BUILD_BUG_ON(GSI_EE_AP != 0); + + /* There's no point if we have no channels or event rings */ + BUILD_BUG_ON(!GSI_CHANNEL_COUNT_MAX); + BUILD_BUG_ON(!GSI_EVT_RING_COUNT_MAX); + + /* GSI hardware design limits */ + BUILD_BUG_ON(GSI_CHANNEL_COUNT_MAX > 32); + BUILD_BUG_ON(GSI_EVT_RING_COUNT_MAX > 31); + + /* The number of TREs in a transaction is limited by the channel's + * TLV FIFO size. A transaction structure uses 8-bit fields + * to represents the number of TREs it has allocated and used. + */ + BUILD_BUG_ON(GSI_TLV_MAX > U8_MAX); + + /* Exceeding 128 bytes makes the transaction pool *much* larger */ + BUILD_BUG_ON(sizeof(struct gsi_trans) > 128); + + /* This is used as a divisor */ + BUILD_BUG_ON(!IPA_AGGR_GRANULARITY); +#endif /* IPA_VALIDATE */ +} + +/** + * ipa_probe() - IPA platform driver probe function + * @pdev: Platform device pointer + * + * @Return: 0 if successful, or a negative error code (possibly + * EPROBE_DEFER) + * + * This is the main entry point for the IPA driver. Initialization proceeds + * in several stages: + * - The "init" stage involves activities that can be initialized without + * access to the IPA hardware. + * - The "config" stage requires the IPA clock to be active so IPA registers + * can be accessed, but does not require the use of IPA immediate commands. + * - The "setup" stage uses IPA immediate commands, and so requires the GSI + * layer to be initialized. + * + * A Boolean Device Tree "modem-init" property determines whether GSI + * initialization will be performed by the AP (Trust Zone) or the modem. + * If the AP does GSI initialization, the setup phase is entered after + * this has completed successfully. Otherwise the modem initializes + * the GSI layer and signals it has finished by sending an SMP2P interrupt + * to the AP; this triggers the start if IPA setup. + */ +static int ipa_probe(struct platform_device *pdev) +{ + struct wakeup_source *wakeup_source; + struct device *dev = &pdev->dev; + const struct ipa_data *data; + struct ipa_clock *clock; + struct rproc *rproc; + bool modem_alloc; + bool modem_init; + struct ipa *ipa; + phandle phandle; + bool prefetch; + int ret; + + ipa_validate_build(); + + /* If we need Trust Zone, make sure it's available */ + modem_init = of_property_read_bool(dev->of_node, "modem-init"); + if (!modem_init) + if (!qcom_scm_is_available()) + return -EPROBE_DEFER; + + /* We rely on remoteproc to tell us about modem state changes */ + phandle = of_property_read_phandle(dev->of_node, "modem-remoteproc"); + if (!phandle) { + dev_err(dev, "DT missing \"modem-remoteproc\" property\n"); + return -EINVAL; + } + + rproc = rproc_get_by_phandle(phandle); + if (!rproc) + return -EPROBE_DEFER; + + /* The clock and interconnects might not be ready when we're + * probed, so might return -EPROBE_DEFER. + */ + clock = ipa_clock_init(dev); + if (IS_ERR(clock)) { + ret = PTR_ERR(clock); + goto err_rproc_put; + } + + /* No more EPROBE_DEFER. Get our configuration data */ + data = of_device_get_match_data(dev); + if (!data) { + /* This is really IPA_VALIDATE (should never happen) */ + dev_err(dev, "matched hardware not supported\n"); + ret = -ENOTSUPP; + goto err_clock_exit; + } + + /* Create a wakeup source. 
*/ + wakeup_source = wakeup_source_register(dev, "ipa"); + if (!wakeup_source) { + /* The most likely reason for failure is memory exhaustion */ + ret = -ENOMEM; + goto err_clock_exit; + } + + /* Allocate and initialize the IPA structure */ + ipa = kzalloc(sizeof(*ipa), GFP_KERNEL); + if (!ipa) { + ret = -ENOMEM; + goto err_wakeup_source_unregister; + } + + ipa->pdev = pdev; + dev_set_drvdata(dev, ipa); + ipa->modem_rproc = rproc; + ipa->clock = clock; + atomic_set(&ipa->suspend_ref, 0); + ipa->wakeup_source = wakeup_source; + ipa->version = data->version; + + ret = ipa_reg_init(ipa); + if (ret) + goto err_kfree_ipa; + + ret = ipa_mem_init(ipa, data->mem_count, data->mem_data); + if (ret) + goto err_reg_exit; + + /* GSI v2.0+ (IPA v4.0+) uses prefetch for the command channel */ + prefetch = ipa->version != IPA_VERSION_3_5_1; + /* IPA v4.2 requires the AP to allocate channels for the modem */ + modem_alloc = ipa->version == IPA_VERSION_4_2; + + ret = gsi_init(&ipa->gsi, pdev, prefetch, data->endpoint_count, + data->endpoint_data, modem_alloc); + if (ret) + goto err_mem_exit; + + /* Result is a non-zero mask endpoints that support filtering */ + ipa->filter_map = ipa_endpoint_init(ipa, data->endpoint_count, + data->endpoint_data); + if (!ipa->filter_map) { + ret = -EINVAL; + goto err_gsi_exit; + } + + ret = ipa_table_init(ipa); + if (ret) + goto err_endpoint_exit; + + ret = ipa_modem_init(ipa, modem_init); + if (ret) + goto err_table_exit; + + ret = ipa_config(ipa, data); + if (ret) + goto err_modem_exit; + + dev_info(dev, "IPA driver initialized"); + + /* If the modem is doing early initialization, it will trigger a + * call to ipa_setup() call when it has finished. In that case + * we're done here. + */ + if (modem_init) + return 0; + + /* Otherwise we need to load the firmware and have Trust Zone validate + * and install it. If that succeeds we can proceed with setup. + */ + ret = ipa_firmware_load(dev); + if (ret) + goto err_deconfig; + + ret = ipa_setup(ipa); + if (ret) + goto err_deconfig; + + return 0; + +err_deconfig: + ipa_deconfig(ipa); +err_modem_exit: + ipa_modem_exit(ipa); +err_table_exit: + ipa_table_exit(ipa); +err_endpoint_exit: + ipa_endpoint_exit(ipa); +err_gsi_exit: + gsi_exit(&ipa->gsi); +err_mem_exit: + ipa_mem_exit(ipa); +err_reg_exit: + ipa_reg_exit(ipa); +err_kfree_ipa: + kfree(ipa); +err_wakeup_source_unregister: + wakeup_source_unregister(wakeup_source); +err_clock_exit: + ipa_clock_exit(clock); +err_rproc_put: + rproc_put(rproc); + + return ret; +} + +static int ipa_remove(struct platform_device *pdev) +{ + struct ipa *ipa = dev_get_drvdata(&pdev->dev); + struct rproc *rproc = ipa->modem_rproc; + struct ipa_clock *clock = ipa->clock; + struct wakeup_source *wakeup_source; + int ret; + + wakeup_source = ipa->wakeup_source; + + if (ipa->setup_complete) { + ret = ipa_modem_stop(ipa); + if (ret) + return ret; + + ipa_teardown(ipa); + } + + ipa_deconfig(ipa); + ipa_modem_exit(ipa); + ipa_table_exit(ipa); + ipa_endpoint_exit(ipa); + gsi_exit(&ipa->gsi); + ipa_mem_exit(ipa); + ipa_reg_exit(ipa); + kfree(ipa); + wakeup_source_unregister(wakeup_source); + ipa_clock_exit(clock); + rproc_put(rproc); + + return 0; +} + +/** + * ipa_suspend() - Power management system suspend callback + * @dev: IPA device structure + * + * @Return: Zero + * + * Called by the PM framework when a system suspend operation is invoked. 
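+ * It drops the IPA clock reference taken at config time (or in
+ * ipa_resume()), allowing the hardware to be suspended.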
+ */ +static int ipa_suspend(struct device *dev) +{ + struct ipa *ipa = dev_get_drvdata(dev); + + ipa_clock_put(ipa); + atomic_set(&ipa->suspend_ref, 0); + + return 0; +} + +/** + * ipa_resume() - Power management system resume callback + * @dev: IPA device structure + * + * @Return: Always returns 0 + * + * Called by the PM framework when a system resume operation is invoked. + */ +static int ipa_resume(struct device *dev) +{ + struct ipa *ipa = dev_get_drvdata(dev); + + /* This clock reference will keep the IPA out of suspend + * until we get a power management suspend request. + */ + atomic_set(&ipa->suspend_ref, 1); + ipa_clock_get(ipa); + + return 0; +} + +static const struct dev_pm_ops ipa_pm_ops = { + .suspend_noirq = ipa_suspend, + .resume_noirq = ipa_resume, +}; + +static struct platform_driver ipa_driver = { + .probe = ipa_probe, + .remove = ipa_remove, + .driver = { + .name = "ipa", + .owner = THIS_MODULE, + .pm = &ipa_pm_ops, + .of_match_table = ipa_match, + }, +}; + +module_platform_driver(ipa_driver); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("Qualcomm IP Accelerator device driver"); diff --git a/drivers/net/ipa/ipa_reg.c b/drivers/net/ipa/ipa_reg.c new file mode 100644 index 000000000000..e6147a1cd787 --- /dev/null +++ b/drivers/net/ipa/ipa_reg.c @@ -0,0 +1,38 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ + +#include + +#include "ipa.h" +#include "ipa_reg.h" + +int ipa_reg_init(struct ipa *ipa) +{ + struct device *dev = &ipa->pdev->dev; + struct resource *res; + + /* Setup IPA register memory */ + res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM, + "ipa-reg"); + if (!res) { + dev_err(dev, "DT error getting \"ipa-reg\" memory property\n"); + return -ENODEV; + } + + ipa->reg_virt = ioremap(res->start, resource_size(res)); + if (!ipa->reg_virt) { + dev_err(dev, "unable to remap \"ipa-reg\" memory\n"); + return -ENOMEM; + } + ipa->reg_addr = res->start; + + return 0; +} + +void ipa_reg_exit(struct ipa *ipa) +{ + iounmap(ipa->reg_virt); +} diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h new file mode 100644 index 000000000000..3b8106aa277a --- /dev/null +++ b/drivers/net/ipa/ipa_reg.h @@ -0,0 +1,476 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _IPA_REG_H_ +#define _IPA_REG_H_ + +#include + +#include "ipa_version.h" + +struct ipa; + +/** + * DOC: IPA Registers + * + * IPA registers are located within the "ipa-reg" address space defined by + * Device Tree. The offset of each register within that space is specified + * by symbols defined below. The address space is mapped to virtual memory + * space in ipa_mem_init(). All IPA registers are 32 bits wide. + * + * Certain register types are duplicated for a number of instances of + * something. For example, each IPA endpoint has an set of registers + * defining its configuration. The offset to an endpoint's set of registers + * is computed based on an "base" offset, plus an endpoint's ID multiplied + * and a "stride" value for the register. For such registers, the offset is + * computed by a function-like macro that takes a parameter used in the + * computation. + * + * Some register offsets depend on execution environment. For these an "ee" + * parameter is supplied to the offset macro. The "ee" value is a member of + * the gsi_ee enumerated type. 
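+ *
+ * For example, IPA_REG_IRQ_STTS_EE_N_OFFSET(ee) below expands to
+ * (0x00003008 + 0x1000 * (ee)); for the AP (GSI_EE_AP, which is 0)
+ * the interrupt status register is therefore at offset 0x00003008.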
+ * + * The offset of a register dependent on endpoint id is computed by a macro + * that is supplied a parameter "ep". The "ep" value is assumed to be less + * than the maximum endpoint value for the current hardware, and that will + * not exceed IPA_ENDPOINT_MAX. + * + * The offset of registers related to filter and route tables is computed + * by a macro that is supplied a parameter "er". The "er" represents an + * endpoint ID for filters, or a route ID for routes. For filters, the + * endpoint ID must be less than IPA_ENDPOINT_MAX, but is further restricted + * because not all endpoints support filtering. For routes, the route ID + * must be less than IPA_ROUTE_MAX. + * + * The offset of registers related to resource types is computed by a macro + * that is supplied a parameter "rt". The "rt" represents a resource type, + * which is is a member of the ipa_resource_type_src enumerated type for + * source endpoint resources or the ipa_resource_type_dst enumerated type + * for destination endpoint resources. + * + * Some registers encode multiple fields within them. For these, each field + * has a symbol below defining a field mask that encodes both the position + * and width of the field within its register. + * + * In some cases, different versions of IPA hardware use different offset or + * field mask values. In such cases an inline_function(ipa) is used rather + * than a MACRO to define the offset or field mask to use. + * + * Finally, some registers hold bitmasks representing endpoints. In such + * cases the @available field in the @ipa structure defines the "full" set + * of valid bits for the register. + */ + +#define IPA_REG_ENABLED_PIPES_OFFSET 0x00000038 + +#define IPA_REG_COMP_CFG_OFFSET 0x0000003c +#define ENABLE_FMASK GENMASK(0, 0) +#define GSI_SNOC_BYPASS_DIS_FMASK GENMASK(1, 1) +#define GEN_QMB_0_SNOC_BYPASS_DIS_FMASK GENMASK(2, 2) +#define GEN_QMB_1_SNOC_BYPASS_DIS_FMASK GENMASK(3, 3) +#define IPA_DCMP_FAST_CLK_EN_FMASK GENMASK(4, 4) +#define IPA_QMB_SELECT_CONS_EN_FMASK GENMASK(5, 5) +#define IPA_QMB_SELECT_PROD_EN_FMASK GENMASK(6, 6) +#define GSI_MULTI_INORDER_RD_DIS_FMASK GENMASK(7, 7) +#define GSI_MULTI_INORDER_WR_DIS_FMASK GENMASK(8, 8) +#define GEN_QMB_0_MULTI_INORDER_RD_DIS_FMASK GENMASK(9, 9) +#define GEN_QMB_1_MULTI_INORDER_RD_DIS_FMASK GENMASK(10, 10) +#define GEN_QMB_0_MULTI_INORDER_WR_DIS_FMASK GENMASK(11, 11) +#define GEN_QMB_1_MULTI_INORDER_WR_DIS_FMASK GENMASK(12, 12) +#define GEN_QMB_0_SNOC_CNOC_LOOP_PROT_DIS_FMASK GENMASK(13, 13) +#define GSI_SNOC_CNOC_LOOP_PROT_DISABLE_FMASK GENMASK(14, 14) +#define GSI_MULTI_AXI_MASTERS_DIS_FMASK GENMASK(15, 15) +#define IPA_QMB_SELECT_GLOBAL_EN_FMASK GENMASK(16, 16) +#define IPA_ATOMIC_FETCHER_ARB_LOCK_DIS_FMASK GENMASK(20, 17) + +#define IPA_REG_CLKON_CFG_OFFSET 0x00000044 +#define RX_FMASK GENMASK(0, 0) +#define PROC_FMASK GENMASK(1, 1) +#define TX_WRAPPER_FMASK GENMASK(2, 2) +#define MISC_FMASK GENMASK(3, 3) +#define RAM_ARB_FMASK GENMASK(4, 4) +#define FTCH_HPS_FMASK GENMASK(5, 5) +#define FTCH_DPS_FMASK GENMASK(6, 6) +#define HPS_FMASK GENMASK(7, 7) +#define DPS_FMASK GENMASK(8, 8) +#define RX_HPS_CMDQS_FMASK GENMASK(9, 9) +#define HPS_DPS_CMDQS_FMASK GENMASK(10, 10) +#define DPS_TX_CMDQS_FMASK GENMASK(11, 11) +#define RSRC_MNGR_FMASK GENMASK(12, 12) +#define CTX_HANDLER_FMASK GENMASK(13, 13) +#define ACK_MNGR_FMASK GENMASK(14, 14) +#define D_DCPH_FMASK GENMASK(15, 15) +#define H_DCPH_FMASK GENMASK(16, 16) +#define DCMP_FMASK GENMASK(17, 17) +#define NTF_TX_CMDQS_FMASK GENMASK(18, 18) +#define TX_0_FMASK 
GENMASK(19, 19) +#define TX_1_FMASK GENMASK(20, 20) +#define FNR_FMASK GENMASK(21, 21) +#define QSB2AXI_CMDQ_L_FMASK GENMASK(22, 22) +#define AGGR_WRAPPER_FMASK GENMASK(23, 23) +#define RAM_SLAVEWAY_FMASK GENMASK(24, 24) +#define QMB_FMASK GENMASK(25, 25) +#define WEIGHT_ARB_FMASK GENMASK(26, 26) +#define GSI_IF_FMASK GENMASK(27, 27) +#define GLOBAL_FMASK GENMASK(28, 28) +#define GLOBAL_2X_CLK_FMASK GENMASK(29, 29) + +#define IPA_REG_ROUTE_OFFSET 0x00000048 +#define ROUTE_DIS_FMASK GENMASK(0, 0) +#define ROUTE_DEF_PIPE_FMASK GENMASK(5, 1) +#define ROUTE_DEF_HDR_TABLE_FMASK GENMASK(6, 6) +#define ROUTE_DEF_HDR_OFST_FMASK GENMASK(16, 7) +#define ROUTE_FRAG_DEF_PIPE_FMASK GENMASK(21, 17) +#define ROUTE_DEF_RETAIN_HDR_FMASK GENMASK(24, 24) + +#define IPA_REG_SHARED_MEM_SIZE_OFFSET 0x00000054 +#define SHARED_MEM_SIZE_FMASK GENMASK(15, 0) +#define SHARED_MEM_BADDR_FMASK GENMASK(31, 16) + +#define IPA_REG_QSB_MAX_WRITES_OFFSET 0x00000074 +#define GEN_QMB_0_MAX_WRITES_FMASK GENMASK(3, 0) +#define GEN_QMB_1_MAX_WRITES_FMASK GENMASK(7, 4) + +#define IPA_REG_QSB_MAX_READS_OFFSET 0x00000078 +#define GEN_QMB_0_MAX_READS_FMASK GENMASK(3, 0) +#define GEN_QMB_1_MAX_READS_FMASK GENMASK(7, 4) +/* The next two fields are present for IPA v4.0 and above */ +#define GEN_QMB_0_MAX_READS_BEATS_FMASK GENMASK(23, 16) +#define GEN_QMB_1_MAX_READS_BEATS_FMASK GENMASK(31, 24) + +static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version) +{ + if (version == IPA_VERSION_3_5_1) + return 0x0000010c; + + return 0x000000b4; +} +/* ipa->available defines the valid bits in the STATE_AGGR_ACTIVE register */ + +/* The next register is present for IPA v4.2 and above */ +#define IPA_REG_FILT_ROUT_HASH_EN_OFFSET 0x00000148 +#define IPV6_ROUTER_HASH_EN GENMASK(0, 0) +#define IPV6_FILTER_HASH_EN GENMASK(4, 4) +#define IPV4_ROUTER_HASH_EN GENMASK(8, 8) +#define IPV4_FILTER_HASH_EN GENMASK(12, 12) + +static inline u32 ipa_reg_filt_rout_hash_flush_offset(enum ipa_version version) +{ + if (version == IPA_VERSION_3_5_1) + return 0x0000090; + + return 0x000014c; +} + +#define IPV6_ROUTER_HASH_FLUSH GENMASK(0, 0) +#define IPV6_FILTER_HASH_FLUSH GENMASK(4, 4) +#define IPV4_ROUTER_HASH_FLUSH GENMASK(8, 8) +#define IPV4_FILTER_HASH_FLUSH GENMASK(12, 12) + +#define IPA_REG_BCR_OFFSET 0x000001d0 +#define BCR_CMDQ_L_LACK_ONE_ENTRY BIT(0) +#define BCR_TX_NOT_USING_BRESP BIT(1) +#define BCR_SUSPEND_L2_IRQ BIT(3) +#define BCR_HOLB_DROP_L2_IRQ BIT(4) +#define BCR_DUAL_TX BIT(5) + +/* Backward compatibility register value to use for each version */ +static inline u32 ipa_reg_bcr_val(enum ipa_version version) +{ + if (version == IPA_VERSION_3_5_1) + return BCR_CMDQ_L_LACK_ONE_ENTRY | BCR_TX_NOT_USING_BRESP | + BCR_SUSPEND_L2_IRQ | BCR_HOLB_DROP_L2_IRQ | BCR_DUAL_TX; + + if (version == IPA_VERSION_4_0 || version == IPA_VERSION_4_1) + return BCR_CMDQ_L_LACK_ONE_ENTRY | BCR_SUSPEND_L2_IRQ | + BCR_HOLB_DROP_L2_IRQ | BCR_DUAL_TX; + + return 0x00000000; +} + + +#define IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET 0x000001e8 + +#define IPA_REG_AGGR_FORCE_CLOSE_OFFSET 0x000001ec +/* ipa->available defines the valid bits in the AGGR_FORCE_CLOSE register */ + +#define IPA_REG_COUNTER_CFG_OFFSET 0x000001f0 +#define AGGR_GRANULARITY GENMASK(8, 4) +/* Compute the value to use in the AGGR_GRANULARITY field representing + * the given number of microseconds (up to 1 millisecond). + * x = (32 * usec) / 1000 - 1 + */ +static inline u32 ipa_aggr_granularity_val(u32 microseconds) +{ + /* assert(microseconds >= 16); (?) 
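+	 * 16 is the smallest value whose encoding is non-negative
+	 * (DIV_ROUND_CLOSEST(32 * 16, 1000) - 1 == 0), and 1015 is
+	 * the largest whose encoding (31) still fits in the 5-bit
+	 * AGGR_GRANULARITY field.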
*/ + /* assert(microseconds <= 1015); */ + + return DIV_ROUND_CLOSEST(32 * microseconds, 1000) - 1; +} + +#define IPA_REG_TX_CFG_OFFSET 0x000001fc +/* The first three fields are present for IPA v3.5.1 only */ +#define TX0_PREFETCH_DISABLE GENMASK(0, 0) +#define TX1_PREFETCH_DISABLE GENMASK(1, 1) +#define PREFETCH_ALMOST_EMPTY_SIZE GENMASK(4, 2) +/* The next fields are present for IPA v4.0 and above */ +#define PREFETCH_ALMOST_EMPTY_SIZE_TX0 GENMASK(5, 2) +#define DMAW_SCND_OUTSD_PRED_THRESHOLD GENMASK(9, 6) +#define DMAW_SCND_OUTSD_PRED_EN GENMASK(10, 10) +#define DMAW_MAX_BEATS_256_DIS GENMASK(11, 11) +#define PA_MASK_EN GENMASK(12, 12) +#define PREFETCH_ALMOST_EMPTY_SIZE_TX1 GENMASK(16, 13) +/* The last two fields are present for IPA v4.2 and above */ +#define SSPND_PA_NO_START_STATE GENMASK(18, 18) +#define SSPND_PA_NO_BQ_STATE GENMASK(19, 19) + +#define IPA_REG_FLAVOR_0_OFFSET 0x00000210 +#define BAM_MAX_PIPES_FMASK GENMASK(4, 0) +#define BAM_MAX_CONS_PIPES_FMASK GENMASK(12, 8) +#define BAM_MAX_PROD_PIPES_FMASK GENMASK(20, 16) +#define BAM_PROD_LOWEST_FMASK GENMASK(27, 24) + +static inline u32 ipa_reg_idle_indication_cfg_offset(enum ipa_version version) +{ + if (version == IPA_VERSION_4_2) + return 0x00000240; + + return 0x00000220; +} + +#define ENTER_IDLE_DEBOUNCE_THRESH_FMASK GENMASK(15, 0) +#define CONST_NON_IDLE_ENABLE_FMASK GENMASK(16, 16) + +#define IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000400 + 0x0020 * (rt)) +#define IPA_REG_SRC_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000404 + 0x0020 * (rt)) +#define IPA_REG_SRC_RSRC_GRP_45_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000408 + 0x0020 * (rt)) +#define IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000500 + 0x0020 * (rt)) +#define IPA_REG_DST_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000504 + 0x0020 * (rt)) +#define IPA_REG_DST_RSRC_GRP_45_RSRC_TYPE_N_OFFSET(rt) \ + (0x00000508 + 0x0020 * (rt)) +#define X_MIN_LIM_FMASK GENMASK(5, 0) +#define X_MAX_LIM_FMASK GENMASK(13, 8) +#define Y_MIN_LIM_FMASK GENMASK(21, 16) +#define Y_MAX_LIM_FMASK GENMASK(29, 24) + +#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \ + (0x00000800 + 0x0070 * (ep)) +#define ENDP_SUSPEND_FMASK GENMASK(0, 0) +#define ENDP_DELAY_FMASK GENMASK(1, 1) + +#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \ + (0x00000808 + 0x0070 * (ep)) +#define FRAG_OFFLOAD_EN_FMASK GENMASK(0, 0) +#define CS_OFFLOAD_EN_FMASK GENMASK(2, 1) +#define CS_METADATA_HDR_OFFSET_FMASK GENMASK(6, 3) +#define CS_GEN_QMB_MASTER_SEL_FMASK GENMASK(8, 8) + +#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \ + (0x00000810 + 0x0070 * (ep)) +#define HDR_LEN_FMASK GENMASK(5, 0) +#define HDR_OFST_METADATA_VALID_FMASK GENMASK(6, 6) +#define HDR_OFST_METADATA_FMASK GENMASK(12, 7) +#define HDR_ADDITIONAL_CONST_LEN_FMASK GENMASK(18, 13) +#define HDR_OFST_PKT_SIZE_VALID_FMASK GENMASK(19, 19) +#define HDR_OFST_PKT_SIZE_FMASK GENMASK(25, 20) +#define HDR_A5_MUX_FMASK GENMASK(26, 26) +#define HDR_LEN_INC_DEAGG_HDR_FMASK GENMASK(27, 27) +#define HDR_METADATA_REG_VALID_FMASK GENMASK(28, 28) + +#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \ + (0x00000814 + 0x0070 * (ep)) +#define HDR_ENDIANNESS_FMASK GENMASK(0, 0) +#define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK GENMASK(1, 1) +#define HDR_TOTAL_LEN_OR_PAD_FMASK GENMASK(2, 2) +#define HDR_PAYLOAD_LEN_INC_PADDING_FMASK GENMASK(3, 3) +#define HDR_TOTAL_LEN_OR_PAD_OFFSET_FMASK GENMASK(9, 4) +#define HDR_PAD_TO_ALIGNMENT_FMASK GENMASK(13, 10) + +#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(ep) \ + (0x00000818 + 0x0070 * (ep)) + +#define 
IPA_REG_ENDP_INIT_MODE_N_OFFSET(ep) \ + (0x00000820 + 0x0070 * (ep)) +#define MODE_FMASK GENMASK(2, 0) +#define DEST_PIPE_INDEX_FMASK GENMASK(8, 4) +#define BYTE_THRESHOLD_FMASK GENMASK(27, 12) +#define PIPE_REPLICATION_EN_FMASK GENMASK(28, 28) +#define PAD_EN_FMASK GENMASK(29, 29) +#define HDR_FTCH_DISABLE_FMASK GENMASK(30, 30) + +#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \ + (0x00000824 + 0x0070 * (ep)) +#define AGGR_EN_FMASK GENMASK(1, 0) +#define AGGR_TYPE_FMASK GENMASK(4, 2) +#define AGGR_BYTE_LIMIT_FMASK GENMASK(9, 5) +#define AGGR_TIME_LIMIT_FMASK GENMASK(14, 10) +#define AGGR_PKT_LIMIT_FMASK GENMASK(20, 15) +#define AGGR_SW_EOF_ACTIVE_FMASK GENMASK(21, 21) +#define AGGR_FORCE_CLOSE_FMASK GENMASK(22, 22) +#define AGGR_HARD_BYTE_LIMIT_ENABLE_FMASK GENMASK(24, 24) + +#define IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(ep) \ + (0x0000082c + 0x0070 * (ep)) +#define HOL_BLOCK_EN_FMASK GENMASK(0, 0) + +/* The next register is valid only for RX (IPA producer) endpoints */ +#define IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(ep) \ + (0x00000830 + 0x0070 * (ep)) +/* The next fields are present for IPA v4.2 only */ +#define BASE_VALUE_FMASK GENMASK(4, 0) +#define SCALE_FMASK GENMASK(12, 8) + +#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(ep) \ + (0x00000834 + 0x0070 * (ep)) +#define DEAGGR_HDR_LEN_FMASK GENMASK(5, 0) +#define PACKET_OFFSET_VALID_FMASK GENMASK(7, 7) +#define PACKET_OFFSET_LOCATION_FMASK GENMASK(13, 8) +#define MAX_PACKET_LEN_FMASK GENMASK(31, 16) + +#define IPA_REG_ENDP_INIT_RSRC_GRP_N_OFFSET(ep) \ + (0x00000838 + 0x0070 * (ep)) +#define RSRC_GRP_FMASK GENMASK(1, 0) + +#define IPA_REG_ENDP_INIT_SEQ_N_OFFSET(ep) \ + (0x0000083c + 0x0070 * (ep)) +#define HPS_SEQ_TYPE_FMASK GENMASK(3, 0) +#define DPS_SEQ_TYPE_FMASK GENMASK(7, 4) +#define HPS_REP_SEQ_TYPE_FMASK GENMASK(11, 8) +#define DPS_REP_SEQ_TYPE_FMASK GENMASK(15, 12) + +#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \ + (0x00000840 + 0x0070 * (ep)) +#define STATUS_EN_FMASK GENMASK(0, 0) +#define STATUS_ENDP_FMASK GENMASK(5, 1) +#define STATUS_LOCATION_FMASK GENMASK(8, 8) +/* The next field is present for IPA v4.0 and above */ +#define STATUS_PKT_SUPPRESS_FMASK GENMASK(9, 9) + +/* "er" is either an endpoint id (for filters) or a route id (for routes) */ +#define IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(er) \ + (0x0000085c + 0x0070 * (er)) +#define FILTER_HASH_MSK_SRC_ID_FMASK GENMASK(0, 0) +#define FILTER_HASH_MSK_SRC_IP_FMASK GENMASK(1, 1) +#define FILTER_HASH_MSK_DST_IP_FMASK GENMASK(2, 2) +#define FILTER_HASH_MSK_SRC_PORT_FMASK GENMASK(3, 3) +#define FILTER_HASH_MSK_DST_PORT_FMASK GENMASK(4, 4) +#define FILTER_HASH_MSK_PROTOCOL_FMASK GENMASK(5, 5) +#define FILTER_HASH_MSK_METADATA_FMASK GENMASK(6, 6) +#define IPA_REG_ENDP_FILTER_HASH_MSK_ALL GENMASK(6, 0) + +#define ROUTER_HASH_MSK_SRC_ID_FMASK GENMASK(16, 16) +#define ROUTER_HASH_MSK_SRC_IP_FMASK GENMASK(17, 17) +#define ROUTER_HASH_MSK_DST_IP_FMASK GENMASK(18, 18) +#define ROUTER_HASH_MSK_SRC_PORT_FMASK GENMASK(19, 19) +#define ROUTER_HASH_MSK_DST_PORT_FMASK GENMASK(20, 20) +#define ROUTER_HASH_MSK_PROTOCOL_FMASK GENMASK(21, 21) +#define ROUTER_HASH_MSK_METADATA_FMASK GENMASK(22, 22) +#define IPA_REG_ENDP_ROUTER_HASH_MSK_ALL GENMASK(22, 16) + +#define IPA_REG_IRQ_STTS_OFFSET \ + IPA_REG_IRQ_STTS_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_STTS_EE_N_OFFSET(ee) \ + (0x00003008 + 0x1000 * (ee)) + +#define IPA_REG_IRQ_EN_OFFSET \ + IPA_REG_IRQ_EN_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_EN_EE_N_OFFSET(ee) \ + (0x0000300c + 0x1000 * (ee)) + +#define IPA_REG_IRQ_CLR_OFFSET \ + 
IPA_REG_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_CLR_EE_N_OFFSET(ee) \ + (0x00003010 + 0x1000 * (ee)) + +#define IPA_REG_IRQ_UC_OFFSET \ + IPA_REG_IRQ_UC_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_UC_EE_N_OFFSET(ee) \ + (0x0000301c + 0x1000 * (ee)) + +#define IPA_REG_IRQ_SUSPEND_INFO_OFFSET \ + IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(ee) \ + (0x00003030 + 0x1000 * (ee)) +/* ipa->available defines the valid bits in the SUSPEND_INFO register */ + +#define IPA_REG_SUSPEND_IRQ_EN_OFFSET \ + IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(ee) \ + (0x00003034 + 0x1000 * (ee)) +/* ipa->available defines the valid bits in the SUSPEND_IRQ_EN register */ + +#define IPA_REG_SUSPEND_IRQ_CLR_OFFSET \ + IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(ee) \ + (0x00003038 + 0x1000 * (ee)) +/* ipa->available defines the valid bits in the SUSPEND_IRQ_CLR register */ + +/** enum ipa_cs_offload_en - checksum offload field in ENDP_INIT_CFG_N */ +enum ipa_cs_offload_en { + IPA_CS_OFFLOAD_NONE = 0, + IPA_CS_OFFLOAD_UL = 1, + IPA_CS_OFFLOAD_DL = 2, + IPA_CS_RSVD +}; + +/** enum ipa_aggr_en - aggregation enable field in ENDP_INIT_AGGR_N */ +enum ipa_aggr_en { + IPA_BYPASS_AGGR = 0, + IPA_ENABLE_AGGR = 1, + IPA_ENABLE_DEAGGR = 2, +}; + +/** enum ipa_aggr_type - aggregation type field in ENDP_INIT_AGGR_N */ +enum ipa_aggr_type { + IPA_MBIM_16 = 0, + IPA_HDLC = 1, + IPA_TLP = 2, + IPA_RNDIS = 3, + IPA_GENERIC = 4, + IPA_COALESCE = 5, + IPA_QCMAP = 6, +}; + +/** enum ipa_mode - mode field in ENDP_INIT_MODE_N */ +enum ipa_mode { + IPA_BASIC = 0, + IPA_ENABLE_FRAMING_HDLC = 1, + IPA_ENABLE_DEFRAMING_HDLC = 2, + IPA_DMA = 3, +}; + +/** + * enum ipa_seq_type - HPS and DPS sequencer type fields in ENDP_INIT_SEQ_N + * @IPA_SEQ_DMA_ONLY: only DMA is performed + * @IPA_SEQ_PKT_PROCESS_NO_DEC_UCP: + * packet processing + no decipher + microcontroller (Ethernet Bridging) + * @IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP: + * second packet processing pass + no decipher + microcontroller + * @IPA_SEQ_DMA_DEC: DMA + cipher/decipher + * @IPA_SEQ_DMA_COMP_DECOMP: DMA + compression/decompression + * @IPA_SEQ_INVALID: invalid sequencer type + * + * The values defined here are broken into 4-bit nibbles that are written + * into fields of the INIT_SEQ_N endpoint registers. + */ +enum ipa_seq_type { + IPA_SEQ_DMA_ONLY = 0x0000, + IPA_SEQ_PKT_PROCESS_NO_DEC_UCP = 0x0002, + IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP = 0x0004, + IPA_SEQ_DMA_DEC = 0x0011, + IPA_SEQ_DMA_COMP_DECOMP = 0x0020, + IPA_SEQ_PKT_PROCESS_NO_DEC_NO_UCP_DMAP = 0x0806, + IPA_SEQ_INVALID = 0xffff, +}; + +int ipa_reg_init(struct ipa *ipa); +void ipa_reg_exit(struct ipa *ipa); + +#endif /* _IPA_REG_H_ */ diff --git a/drivers/net/ipa/ipa_version.h b/drivers/net/ipa/ipa_version.h new file mode 100644 index 000000000000..85449df0f512 --- /dev/null +++ b/drivers/net/ipa/ipa_version.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ +#ifndef _IPA_VERSION_H_ +#define _IPA_VERSION_H_ + +/** + * enum ipa_version + * + * Defines the version of IPA (and GSI) hardware present on the platform. + * It seems this might be better defined elsewhere, but having it here gets + * it where it's needed.
+ */ +enum ipa_version { + IPA_VERSION_3_5_1, /* GSI version 1.3.0 */ + IPA_VERSION_4_0, /* GSI version 2.0 */ + IPA_VERSION_4_1, /* GSI version 2.1 */ + IPA_VERSION_4_2, /* GSI version 2.2 */ +}; + +#endif /* _IPA_VERSION_H_ */
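A minimal sketch of how this version enumeration pairs with the version-dependent register helpers defined in "ipa_reg.h" above. The "version" and "reg_virt" fields assumed here follow later patches in this series and are not shown in this one; the function itself is illustrative, not part of any patch:

	/* Illustrative only: "version" and "reg_virt" are assumed fields */
	static void ipa_bcr_write(struct ipa *ipa)
	{
		u32 val = ipa_reg_bcr_val(ipa->version);

		iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET);
	}
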
From patchwork Fri Mar 6 04:28:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 222948 From: Alex Elder To: David Miller , Arnd Bergmann Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 06/17] soc: qcom: ipa: GSI headers Date: Thu, 5 Mar 2020 22:28:20 -0600 Message-Id: <20200306042831.17827-7-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200306042831.17827-1-elder@linaro.org> References: <20200306042831.17827-1-elder@linaro.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org The Generic Software Interface is a layer of the IPA driver that abstracts the underlying hardware. The next patch includes the main code for GSI (including some additional documentation). This patch just includes three GSI header files. - "gsi.h" is the top-level GSI header file. It defines "struct gsi", which is embedded within the IPA structure. The main abstraction implemented by the GSI code is the channel, and this header exposes several operations that can be performed on a GSI channel. - "gsi_private.h" exposes some definitions that are intended to be private, used only by the main GSI code and the GSI transaction code (defined in an upcoming patch). - Like "ipa_reg.h", "gsi_reg.h" defines the offsets of the 32-bit registers used by the GSI layer, along with masks that define the position and width of fields less than 32 bits located within these registers. Signed-off-by: Alex Elder --- drivers/net/ipa/gsi.h | 257 +++++++++++++++++++++ drivers/net/ipa/gsi_private.h | 118 ++++++++++ drivers/net/ipa/gsi_reg.h | 417 ++++++++++++++++++++++++++++++++++ 3 files changed, 792 insertions(+) create mode 100644 drivers/net/ipa/gsi.h create mode 100644 drivers/net/ipa/gsi_private.h create mode 100644 drivers/net/ipa/gsi_reg.h diff --git a/drivers/net/ipa/gsi.h b/drivers/net/ipa/gsi.h new file mode 100644 index 000000000000..0698ff1ae7a6 --- /dev/null +++ b/drivers/net/ipa/gsi.h @@ -0,0 +1,257 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _GSI_H_ +#define _GSI_H_ + +#include <linux/types.h> +#include <linux/spinlock.h> +#include <linux/mutex.h> +#include <linux/completion.h> +#include <linux/platform_device.h> +#include <linux/netdevice.h> + +/* Maximum number of channels and event rings supported by the driver */ +#define GSI_CHANNEL_COUNT_MAX 17 +#define GSI_EVT_RING_COUNT_MAX 13 + +/* Maximum TLV FIFO size for a channel; 64 here is arbitrary (and high) */ +#define GSI_TLV_MAX 64 + +struct device; +struct scatterlist; +struct platform_device; + +struct gsi; +struct gsi_trans; +struct gsi_channel_data; +struct ipa_gsi_endpoint_data; + +/* Execution environment IDs */ +enum gsi_ee_id { + GSI_EE_AP = 0, + GSI_EE_MODEM = 1, + GSI_EE_UC = 2, + GSI_EE_TZ = 3, +}; + +struct gsi_ring { + void *virt; /* ring array base address */ + dma_addr_t addr; /* primarily low 32 bits used */ + u32 count; /* number of elements in ring */ + + /* The ring index value indicates the next "open" entry in the ring.
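+ * (When an index value is used to locate a ring entry it is taken
+ * modulo the ring count; the transaction code later in this series
+ * notes this explicitly.)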
+ * + * A channel ring consists of TRE entries filled by the AP and passed + * to the hardware for processing. For a channel ring, the ring index + * identifies the next unused entry to be filled by the AP. + * + * An event ring consists of event structures filled by the hardware + * and passed to the AP. For event rings, the ring index identifies + * the next ring entry that is not known to have been filled by the + * hardware. + */ + u32 index; +}; + +/* Transactions use several resources that can be allocated dynamically + * but taken from a fixed-size pool. The number of elements required for + * the pool is limited by the total number of TREs that can be outstanding. + * + * If sufficient TREs are available to reserve for a transaction, + * allocation from these pools is guaranteed to succeed. Furthermore, + * these resources are implicitly freed whenever the TREs in the + * transaction they're associated with are released. + * + * The result of a pool allocation of multiple elements is always + * contiguous. + */ +struct gsi_trans_pool { + void *base; /* base address of element pool */ + u32 count; /* # elements in the pool */ + u32 free; /* next free element in pool (modulo) */ + u32 size; /* size (bytes) of an element */ + u32 max_alloc; /* max allocation request */ + dma_addr_t addr; /* DMA address if DMA pool (or 0) */ +}; + +struct gsi_trans_info { + atomic_t tre_avail; /* TREs available for allocation */ + struct gsi_trans_pool pool; /* transaction pool */ + struct gsi_trans_pool sg_pool; /* scatterlist pool */ + struct gsi_trans_pool cmd_pool; /* command payload DMA pool */ + struct gsi_trans_pool info_pool;/* command information pool */ + struct gsi_trans **map; /* TRE -> transaction map */ + + spinlock_t spinlock; /* protects updates to the lists */ + struct list_head alloc; /* allocated, not committed */ + struct list_head pending; /* committed, awaiting completion */ + struct list_head complete; /* completed, awaiting poll */ + struct list_head polled; /* returned by gsi_channel_poll_one() */ +}; + +/* Hardware values signifying the state of a channel */ +enum gsi_channel_state { + GSI_CHANNEL_STATE_NOT_ALLOCATED = 0x0, + GSI_CHANNEL_STATE_ALLOCATED = 0x1, + GSI_CHANNEL_STATE_STARTED = 0x2, + GSI_CHANNEL_STATE_STOPPED = 0x3, + GSI_CHANNEL_STATE_STOP_IN_PROC = 0x4, + GSI_CHANNEL_STATE_ERROR = 0xf, +}; + +/* We only care about channels between IPA and AP */ +struct gsi_channel { + struct gsi *gsi; + bool toward_ipa; + bool command; /* AP command TX channel or not */ + bool use_prefetch; /* use prefetch (else escape buf) */ + + u8 tlv_count; /* # entries in TLV FIFO */ + u16 tre_count; + u16 event_count; + + struct completion completion; /* signals channel state changes */ + enum gsi_channel_state state; + + struct gsi_ring tre_ring; + u32 evt_ring_id; + + u64 byte_count; /* total # bytes transferred */ + u64 trans_count; /* total # transactions */ + /* The following counts are used only for TX endpoints */ + u64 queued_byte_count; /* last reported queued byte count */ + u64 queued_trans_count; /* ...and queued trans count */ + u64 compl_byte_count; /* last reported completed byte count */ + u64 compl_trans_count; /* ...and completed trans count */ + + struct gsi_trans_info trans_info; + + struct napi_struct napi; +}; + +/* Hardware values signifying the state of an event ring */ +enum gsi_evt_ring_state { + GSI_EVT_RING_STATE_NOT_ALLOCATED = 0x0, + GSI_EVT_RING_STATE_ALLOCATED = 0x1, + GSI_EVT_RING_STATE_ERROR = 0xf, +}; + +struct gsi_evt_ring { + struct gsi_channel 
*channel; + struct completion completion; /* signals event ring state changes */ + enum gsi_evt_ring_state state; + struct gsi_ring ring; +}; + +struct gsi { + struct device *dev; /* Same as IPA device */ + struct net_device dummy_dev; /* needed for NAPI */ + void __iomem *virt; + u32 irq; + bool irq_wake_enabled; + u32 channel_count; + u32 evt_ring_count; + struct gsi_channel channel[GSI_CHANNEL_COUNT_MAX]; + struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX]; + u32 event_bitmap; + u32 event_enable_bitmap; + u32 modem_channel_bitmap; + struct completion completion; /* for global EE commands */ + struct mutex mutex; /* protects commands, programming */ +}; + +/** + * gsi_setup() - Set up the GSI subsystem + * @gsi: Address of GSI structure embedded in an IPA structure + * @db_enable: Whether to use the GSI doorbell engine + * + * @Return: 0 if successful, or a negative error code + * + * Performs initialization that must wait until the GSI hardware is + * ready (including firmware loaded). + */ +int gsi_setup(struct gsi *gsi, bool db_enable); + +/** + * gsi_teardown() - Tear down GSI subsystem + * @gsi: GSI address previously passed to a successful gsi_setup() call + */ +void gsi_teardown(struct gsi *gsi); + +/** + * gsi_channel_tre_max() - Channel maximum number of in-flight TREs + * @gsi: GSI pointer + * @channel_id: Channel whose limit is to be returned + * + * @Return: The maximum number of TREs outstanding on the channel + */ +u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction + * @gsi: GSI pointer + * @channel_id: Channel whose limit is to be returned + * + * @Return: The maximum TRE count per transaction on the channel + */ +u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_start() - Start an allocated GSI channel + * @gsi: GSI pointer + * @channel_id: Channel to start + * + * @Return: 0 if successful, or a negative error code + */ +int gsi_channel_start(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_stop() - Stop a started GSI channel + * @gsi: GSI pointer returned by gsi_setup() + * @channel_id: Channel to stop + * + * @Return: 0 if successful, or a negative error code + */ +int gsi_channel_stop(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_reset() - Reset an allocated GSI channel + * @gsi: GSI pointer + * @channel_id: Channel to be reset + * @db_enable: Whether doorbell engine should be enabled + * + * Reset a channel and reconfigure it. The @db_enable flag indicates + * whether the doorbell engine will be enabled following reconfiguration. + * + * GSI hardware relinquishes ownership of all pending receive buffer + * transactions and they will complete with their cancelled flag set. + */ +void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool db_enable); + +int gsi_channel_suspend(struct gsi *gsi, u32 channel_id, bool stop); +int gsi_channel_resume(struct gsi *gsi, u32 channel_id, bool start); + +/** + * gsi_init() - Initialize the GSI subsystem + * @gsi: Address of GSI structure embedded in an IPA structure + * @pdev: IPA platform device + * + * @Return: 0 if successful, or a negative error code + * + * Early stage initialization of the GSI subsystem, performing tasks + * that can be done before the GSI hardware is ready to use.
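+ *
+ * (Sketch of the expected call order, inferred from the descriptions
+ * above: gsi_init() at probe time; gsi_setup() once the GSI firmware
+ * is loaded; then gsi_channel_start() on each channel used.  Teardown
+ * reverses this with gsi_channel_stop(), gsi_teardown() and
+ * gsi_exit().)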
+ */ +int gsi_init(struct gsi *gsi, struct platform_device *pdev, bool prefetch, + u32 count, const struct ipa_gsi_endpoint_data *data, + bool modem_alloc); + +/** + * gsi_exit() - Exit the GSI subsystem + * @gsi: GSI address previously passed to a successful gsi_init() call + */ +void gsi_exit(struct gsi *gsi); + +#endif /* _GSI_H_ */ diff --git a/drivers/net/ipa/gsi_private.h b/drivers/net/ipa/gsi_private.h new file mode 100644 index 000000000000..b57d0198ebc1 --- /dev/null +++ b/drivers/net/ipa/gsi_private.h @@ -0,0 +1,118 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _GSI_PRIVATE_H_ +#define _GSI_PRIVATE_H_ + +/* === Only "gsi.c" and "gsi_trans.c" should include this file === */ + +#include <linux/types.h> + +struct gsi_trans; +struct gsi_ring; +struct gsi_channel; + +#define GSI_RING_ELEMENT_SIZE 16 /* bytes */ + +/* Return the entry that follows one provided in a transaction pool */ +void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element); + +/** + * gsi_trans_move_complete() - Mark a GSI transaction completed + * @trans: Transaction to mark completed + */ +void gsi_trans_move_complete(struct gsi_trans *trans); + +/** + * gsi_trans_move_polled() - Mark a transaction polled + * @trans: Transaction to update + */ +void gsi_trans_move_polled(struct gsi_trans *trans); + +/** + * gsi_trans_complete() - Complete a GSI transaction + * @trans: Transaction to complete + * + * Marks a transaction complete (including freeing it). + */ +void gsi_trans_complete(struct gsi_trans *trans); + +/** + * gsi_channel_trans_mapped() - Return a transaction mapped to a TRE index + * @channel: Channel associated with the transaction + * @index: Index of the TRE having a transaction + * + * @Return: The GSI transaction pointer associated with the TRE index + */ +struct gsi_trans *gsi_channel_trans_mapped(struct gsi_channel *channel, + u32 index); + +/** + * gsi_channel_trans_complete() - Return a channel's next completed transaction + * @channel: Channel whose next transaction is to be returned + * + * @Return: The next completed transaction, or NULL if nothing new + */ +struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel); + +/** + * gsi_channel_trans_cancel_pending() - Cancel pending transactions + * @channel: Channel whose pending transactions should be cancelled + * + * Cancel all pending transactions on a channel. These are transactions + * that have been committed but not yet completed. This is required when + * the channel gets reset. At that time all pending transactions will be + * marked as cancelled. + * + * NOTE: Transactions already complete at the time of this call are + * unaffected.
+ */ +void gsi_channel_trans_cancel_pending(struct gsi_channel *channel); + +/** + * gsi_channel_trans_init() - Initialize a channel's GSI transaction info + * @gsi: GSI pointer + * @channel_id: Channel number + * + * @Return: 0 if successful, or -ENOMEM on allocation failure + * + * Creates and sets up information for managing transactions on a channel + */ +int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_trans_exit() - Inverse of gsi_channel_trans_init() + * @channel: Channel whose transaction information is to be cleaned up + */ +void gsi_channel_trans_exit(struct gsi_channel *channel); + +/** + * gsi_channel_doorbell() - Ring a channel's doorbell + * @channel: Channel whose doorbell should be rung + * + * Rings a channel's doorbell to inform the GSI hardware that new + * transactions (TREs, really) are available for it to process. + */ +void gsi_channel_doorbell(struct gsi_channel *channel); + +/** + * gsi_ring_virt() - Return virtual address for a ring entry + * @ring: Ring whose address is to be translated + * @index: Index (slot number) of entry + */ +void *gsi_ring_virt(struct gsi_ring *ring, u32 index); + +/** + * gsi_channel_tx_queued() - Report the number of bytes queued to hardware + * @channel: Channel whose bytes have been queued + * + * This arranges for the number of transactions and bytes for + * transfer that have been queued to hardware to be reported. It + * passes this information up the network stack so it can be used to + * throttle transmissions. + */ +void gsi_channel_tx_queued(struct gsi_channel *channel); + +#endif /* _GSI_PRIVATE_H_ */ diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h new file mode 100644 index 000000000000..7613b9cc7cf6 --- /dev/null +++ b/drivers/net/ipa/gsi_reg.h @@ -0,0 +1,417 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _GSI_REG_H_ +#define _GSI_REG_H_ + +/* === Only "gsi.c" should include this file === */ + +#include <linux/bits.h> + +/** + * DOC: GSI Registers + * + * GSI registers are located within the "gsi" address space defined by Device + * Tree. The offset of each register within that space is specified by + * symbols defined below. The GSI address space is mapped to virtual memory + * space in gsi_init(). All GSI registers are 32 bits wide. + * + * Each register type is duplicated for a number of instances of something. + * For example, each GSI channel has its own set of registers defining its + * configuration. The offset to a channel's set of registers is computed + * based on a "base" offset plus an additional "stride" amount computed + * from the channel's ID. For such registers, the offset is computed by a + * function-like macro that takes a parameter used in the computation. + * + * The offset of a register dependent on execution environment is computed + * by a macro that is supplied a parameter "ee". The "ee" value is a member + * of the gsi_ee_id enumerated type. + * + * The offset of a channel register is computed by a macro that is supplied a + * parameter "ch". The "ch" value is a channel id whose maximum value is 30 + * (though the actual limit is hardware-dependent). + * + * The offset of an event register is computed by a macro that is supplied a + * parameter "ev". The "ev" value is an event id whose maximum value is 15 + * (though the actual limit is hardware-dependent).
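+ *
+ * As a worked example of the pattern (using macros defined below): for
+ * the AP execution environment (GSI_EE_AP, value 0) and channel 5,
+ * GSI_CH_C_CNTXT_0_OFFSET(5) evaluates to
+ * 0x0001c000 + 0x4000 * 0 + 0x80 * 5 = 0x0001c280.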
+ */ + +#define GSI_INTER_EE_SRC_CH_IRQ_OFFSET \ + GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(ee) \ + (0x0000c018 + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET \ + GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(ee) \ + (0x0000c01c + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_CH_IRQ_CLR_OFFSET \ + GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(ee) \ + (0x0000c028 + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_EV_CH_IRQ_CLR_OFFSET \ + GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \ + (0x0000c02c + 0x1000 * (ee)) + +#define GSI_CH_C_CNTXT_0_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_0_OFFSET(ch, ee) \ + (0x0001c000 + 0x4000 * (ee) + 0x80 * (ch)) +#define CHTYPE_PROTOCOL_FMASK GENMASK(2, 0) +#define CHTYPE_DIR_FMASK GENMASK(3, 3) +#define EE_FMASK GENMASK(7, 4) +#define CHID_FMASK GENMASK(12, 8) +/* The next field is present for GSI v2.0 and above */ +#define CHTYPE_PROTOCOL_MSB_FMASK GENMASK(13, 13) +#define ERINDEX_FMASK GENMASK(18, 14) +#define CHSTATE_FMASK GENMASK(23, 20) +#define ELEMENT_SIZE_FMASK GENMASK(31, 24) + +#define GSI_CH_C_CNTXT_1_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_1_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_1_OFFSET(ch, ee) \ + (0x0001c004 + 0x4000 * (ee) + 0x80 * (ch)) +#define R_LENGTH_FMASK GENMASK(15, 0) + +#define GSI_CH_C_CNTXT_2_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_2_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_2_OFFSET(ch, ee) \ + (0x0001c008 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_CNTXT_3_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_3_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_3_OFFSET(ch, ee) \ + (0x0001c00c + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_QOS_OFFSET(ch) \ + GSI_EE_N_CH_C_QOS_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_QOS_OFFSET(ch, ee) \ + (0x0001c05c + 0x4000 * (ee) + 0x80 * (ch)) +#define WRR_WEIGHT_FMASK GENMASK(3, 0) +#define MAX_PREFETCH_FMASK GENMASK(8, 8) +#define USE_DB_ENG_FMASK GENMASK(9, 9) +/* The next field is present for GSI v2.0 and above */ +#define USE_ESCAPE_BUF_ONLY_FMASK GENMASK(10, 10) + +#define GSI_CH_C_SCRATCH_0_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_0_OFFSET(ch, ee) \ + (0x0001c060 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_1_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_1_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_1_OFFSET(ch, ee) \ + (0x0001c064 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_2_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_2_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_2_OFFSET(ch, ee) \ + (0x0001c068 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_3_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_3_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_3_OFFSET(ch, ee) \ + (0x0001c06c + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_EV_CH_E_CNTXT_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_0_OFFSET(ev, ee) \ + (0x0001d000 + 0x4000 * (ee) + 0x80 * (ev)) +#define EV_CHTYPE_FMASK GENMASK(3, 0) +#define EV_EE_FMASK GENMASK(7, 4) +#define EV_EVCHID_FMASK GENMASK(15, 8) +#define EV_INTYPE_FMASK GENMASK(16, 16) +#define EV_CHSTATE_FMASK GENMASK(23, 20) +#define EV_ELEMENT_SIZE_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_E_CNTXT_1_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_1_OFFSET((ev), GSI_EE_AP) 
+#define GSI_EE_N_EV_CH_E_CNTXT_1_OFFSET(ev, ee) \ + (0x0001d004 + 0x4000 * (ee) + 0x80 * (ev)) +#define EV_R_LENGTH_FMASK GENMASK(15, 0) + +#define GSI_EV_CH_E_CNTXT_2_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_2_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_2_OFFSET(ev, ee) \ + (0x0001d008 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_3_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_3_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_3_OFFSET(ev, ee) \ + (0x0001d00c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_4_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_4_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_4_OFFSET(ev, ee) \ + (0x0001d010 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_8_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_8_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_8_OFFSET(ev, ee) \ + (0x0001d020 + 0x4000 * (ee) + 0x80 * (ev)) +#define MODT_FMASK GENMASK(15, 0) +#define MODC_FMASK GENMASK(23, 16) +#define MOD_CNT_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_E_CNTXT_9_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_9_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_9_OFFSET(ev, ee) \ + (0x0001d024 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_10_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_10_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_10_OFFSET(ev, ee) \ + (0x0001d028 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_11_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_11_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_11_OFFSET(ev, ee) \ + (0x0001d02c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_12_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_12_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_12_OFFSET(ev, ee) \ + (0x0001d030 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_13_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_13_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_13_OFFSET(ev, ee) \ + (0x0001d034 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_SCRATCH_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_SCRATCH_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_SCRATCH_0_OFFSET(ev, ee) \ + (0x0001d048 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_SCRATCH_1_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_SCRATCH_1_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_SCRATCH_1_OFFSET(ev, ee) \ + (0x0001d04c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_CH_C_DOORBELL_0_OFFSET(ch) \ + GSI_EE_N_CH_C_DOORBELL_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_DOORBELL_0_OFFSET(ch, ee) \ + (0x0001e000 + 0x4000 * (ee) + 0x08 * (ch)) + +#define GSI_EV_CH_E_DOORBELL_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_DOORBELL_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_DOORBELL_0_OFFSET(ev, ee) \ + (0x0001e100 + 0x4000 * (ee) + 0x08 * (ev)) + +#define GSI_GSI_STATUS_OFFSET \ + GSI_EE_N_GSI_STATUS_OFFSET(GSI_EE_AP) +#define GSI_EE_N_GSI_STATUS_OFFSET(ee) \ + (0x0001f000 + 0x4000 * (ee)) +#define ENABLED_FMASK GENMASK(0, 0) + +#define GSI_CH_CMD_OFFSET \ + GSI_EE_N_CH_CMD_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CH_CMD_OFFSET(ee) \ + (0x0001f008 + 0x4000 * (ee)) +#define CH_CHID_FMASK GENMASK(7, 0) +#define CH_OPCODE_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_CMD_OFFSET \ + GSI_EE_N_EV_CH_CMD_OFFSET(GSI_EE_AP) +#define GSI_EE_N_EV_CH_CMD_OFFSET(ee) \ + (0x0001f010 + 0x4000 * (ee)) +#define EV_CHID_FMASK GENMASK(7, 0) +#define EV_OPCODE_FMASK GENMASK(31, 24) + +#define GSI_GENERIC_CMD_OFFSET \ + GSI_EE_N_GENERIC_CMD_OFFSET(GSI_EE_AP) +#define GSI_EE_N_GENERIC_CMD_OFFSET(ee) \ + (0x0001f018 + 0x4000 * (ee)) +#define 
GENERIC_OPCODE_FMASK GENMASK(4, 0) +#define GENERIC_CHID_FMASK GENMASK(9, 5) +#define GENERIC_EE_FMASK GENMASK(13, 10) + +#define GSI_GSI_HW_PARAM_2_OFFSET \ + GSI_EE_N_GSI_HW_PARAM_2_OFFSET(GSI_EE_AP) +#define GSI_EE_N_GSI_HW_PARAM_2_OFFSET(ee) \ + (0x0001f040 + 0x4000 * (ee)) +#define IRAM_SIZE_FMASK GENMASK(2, 0) +#define IRAM_SIZE_ONE_KB_FVAL 0 +#define IRAM_SIZE_TWO_KB_FVAL 1 +/* The next two values are available for GSI v2.0 and above */ +#define IRAM_SIZE_TWO_N_HALF_KB_FVAL 2 +#define IRAM_SIZE_THREE_KB_FVAL 3 +#define NUM_CH_PER_EE_FMASK GENMASK(7, 3) +#define NUM_EV_PER_EE_FMASK GENMASK(12, 8) +#define GSI_CH_PEND_TRANSLATE_FMASK GENMASK(13, 13) +#define GSI_CH_FULL_LOGIC_FMASK GENMASK(14, 14) +/* Fields below are present for GSI v2.0 and above */ +#define GSI_USE_SDMA_FMASK GENMASK(15, 15) +#define GSI_SDMA_N_INT_FMASK GENMASK(18, 16) +#define GSI_SDMA_MAX_BURST_FMASK GENMASK(26, 19) +#define GSI_SDMA_N_IOVEC_FMASK GENMASK(29, 27) +/* Fields below are present for GSI v2.2 and above */ +#define GSI_USE_RD_WR_ENG_FMASK GENMASK(30, 30) +#define GSI_USE_INTER_EE_FMASK GENMASK(31, 31) + +#define GSI_CNTXT_TYPE_IRQ_OFFSET \ + GSI_EE_N_CNTXT_TYPE_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_TYPE_IRQ_OFFSET(ee) \ + (0x0001f080 + 0x4000 * (ee)) +#define CH_CTRL_FMASK GENMASK(0, 0) +#define EV_CTRL_FMASK GENMASK(1, 1) +#define GLOB_EE_FMASK GENMASK(2, 2) +#define IEOB_FMASK GENMASK(3, 3) +#define INTER_EE_CH_CTRL_FMASK GENMASK(4, 4) +#define INTER_EE_EV_CTRL_FMASK GENMASK(5, 5) +#define GENERAL_FMASK GENMASK(6, 6) + +#define GSI_CNTXT_TYPE_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFSET(ee) \ + (0x0001f088 + 0x4000 * (ee)) +#define MSK_CH_CTRL_FMASK GENMASK(0, 0) +#define MSK_EV_CTRL_FMASK GENMASK(1, 1) +#define MSK_GLOB_EE_FMASK GENMASK(2, 2) +#define MSK_IEOB_FMASK GENMASK(3, 3) +#define MSK_INTER_EE_CH_CTRL_FMASK GENMASK(4, 4) +#define MSK_INTER_EE_EV_CTRL_FMASK GENMASK(5, 5) +#define MSK_GENERAL_FMASK GENMASK(6, 6) +#define GSI_CNTXT_TYPE_IRQ_MSK_ALL GENMASK(6, 0) + +#define GSI_CNTXT_SRC_CH_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFSET(ee) \ + (0x0001f090 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFSET(ee) \ + (0x0001f094 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFSET(ee) \ + (0x0001f098 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET(ee) \ + (0x0001f09c + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFSET(ee) \ + (0x0001f0a0 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \ + (0x0001f0a4 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFSET(ee) \ + (0x0001f0b0 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET(ee) \ + (0x0001f0b8 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET \ + 
GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET(ee) \ + (0x0001f0c0 + 0x4000 * (ee)) + +#define GSI_CNTXT_GLOB_IRQ_STTS_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFSET(ee) \ + (0x0001f100 + 0x4000 * (ee)) +#define ERROR_INT_FMASK GENMASK(0, 0) +#define GP_INT1_FMASK GENMASK(1, 1) +#define GP_INT2_FMASK GENMASK(2, 2) +#define GP_INT3_FMASK GENMASK(3, 3) + +#define GSI_CNTXT_GLOB_IRQ_EN_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFSET(ee) \ + (0x0001f108 + 0x4000 * (ee)) +#define EN_ERROR_INT_FMASK GENMASK(0, 0) +#define EN_GP_INT1_FMASK GENMASK(1, 1) +#define EN_GP_INT2_FMASK GENMASK(2, 2) +#define EN_GP_INT3_FMASK GENMASK(3, 3) +#define GSI_CNTXT_GLOB_IRQ_ALL GENMASK(3, 0) + +#define GSI_CNTXT_GLOB_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFSET(ee) \ + (0x0001f110 + 0x4000 * (ee)) +#define CLR_ERROR_INT_FMASK GENMASK(0, 0) +#define CLR_GP_INT1_FMASK GENMASK(1, 1) +#define CLR_GP_INT2_FMASK GENMASK(2, 2) +#define CLR_GP_INT3_FMASK GENMASK(3, 3) + +#define GSI_CNTXT_GSI_IRQ_STTS_OFFSET \ + GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFSET(ee) \ + (0x0001f118 + 0x4000 * (ee)) +#define BREAK_POINT_FMASK GENMASK(0, 0) +#define BUS_ERROR_FMASK GENMASK(1, 1) +#define CMD_FIFO_OVRFLOW_FMASK GENMASK(2, 2) +#define MCS_STACK_OVRFLOW_FMASK GENMASK(3, 3) + +#define GSI_CNTXT_GSI_IRQ_EN_OFFSET \ + GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFSET(ee) \ + (0x0001f120 + 0x4000 * (ee)) +#define EN_BREAK_POINT_FMASK GENMASK(0, 0) +#define EN_BUS_ERROR_FMASK GENMASK(1, 1) +#define EN_CMD_FIFO_OVRFLOW_FMASK GENMASK(2, 2) +#define EN_MCS_STACK_OVRFLOW_FMASK GENMASK(3, 3) +#define GSI_CNTXT_GSI_IRQ_ALL GENMASK(3, 0) + +#define GSI_CNTXT_GSI_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFSET(ee) \ + (0x0001f128 + 0x4000 * (ee)) +#define CLR_BREAK_POINT_FMASK GENMASK(0, 0) +#define CLR_BUS_ERROR_FMASK GENMASK(1, 1) +#define CLR_CMD_FIFO_OVRFLOW_FMASK GENMASK(2, 2) +#define CLR_MCS_STACK_OVRFLOW_FMASK GENMASK(3, 3) + +#define GSI_CNTXT_INTSET_OFFSET \ + GSI_EE_N_CNTXT_INTSET_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_INTSET_OFFSET(ee) \ + (0x0001f180 + 0x4000 * (ee)) +#define INTYPE_FMASK GENMASK(0, 0) + +#define GSI_ERROR_LOG_OFFSET \ + GSI_EE_N_ERROR_LOG_OFFSET(GSI_EE_AP) +#define GSI_EE_N_ERROR_LOG_OFFSET(ee) \ + (0x0001f200 + 0x4000 * (ee)) +#define ERR_ARG3_FMASK GENMASK(3, 0) +#define ERR_ARG2_FMASK GENMASK(7, 4) +#define ERR_ARG1_FMASK GENMASK(11, 8) +#define ERR_CODE_FMASK GENMASK(15, 12) +#define ERR_VIRT_IDX_FMASK GENMASK(23, 19) +#define ERR_TYPE_FMASK GENMASK(27, 24) +#define ERR_EE_FMASK GENMASK(31, 28) + +#define GSI_ERROR_LOG_CLR_OFFSET \ + GSI_EE_N_ERROR_LOG_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_ERROR_LOG_CLR_OFFSET(ee) \ + (0x0001f210 + 0x4000 * (ee)) + +#define GSI_CNTXT_SCRATCH_0_OFFSET \ + GSI_EE_N_CNTXT_SCRATCH_0_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SCRATCH_0_OFFSET(ee) \ + (0x0001f400 + 0x4000 * (ee)) +#define INTER_EE_RESULT_FMASK GENMASK(2, 0) +#define GENERIC_EE_RESULT_FMASK GENMASK(7, 5) +#define GENERIC_EE_SUCCESS_FVAL 1 +#define GENERIC_EE_NO_RESOURCES_FVAL 7 +#define USB_MAX_PACKET_FMASK GENMASK(15, 15) /* 0: HS; 1: SS */ +#define MHI_BASE_CHANNEL_FMASK GENMASK(31, 24) + +#endif /* _GSI_REG_H_ */ From patchwork Fri Mar 6 04:28:22 2020 
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 222946
From: Alex Elder To: David Miller , Arnd Bergmann Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 08/17] soc: qcom: ipa: IPA interface to GSI Date: Thu, 5 Mar 2020 22:28:22 -0600 Message-Id: <20200306042831.17827-9-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200306042831.17827-1-elder@linaro.org> References: <20200306042831.17827-1-elder@linaro.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org This patch provides interface functions supplied by the IPA layer that are called from the GSI layer. One function is called when a GSI transaction has completed. The others allow the GSI layer to inform the IPA layer when the hardware has been told it has new TREs to execute, and when the hardware has indicated transactions have completed. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_gsi.c | 54 +++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_gsi.h | 60 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 114 insertions(+) create mode 100644 drivers/net/ipa/ipa_gsi.c create mode 100644 drivers/net/ipa/ipa_gsi.h diff --git a/drivers/net/ipa/ipa_gsi.c b/drivers/net/ipa/ipa_gsi.c new file mode 100644 index 000000000000..dc4a5c2196ae --- /dev/null +++ b/drivers/net/ipa/ipa_gsi.c @@ -0,0 +1,54 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd.
+ */ + +#include <linux/types.h> + +#include "gsi_trans.h" +#include "ipa.h" +#include "ipa_endpoint.h" +#include "ipa_data.h" + +void ipa_gsi_trans_complete(struct gsi_trans *trans) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + + ipa_endpoint_trans_complete(ipa->channel_map[trans->channel_id], trans); +} + +void ipa_gsi_trans_release(struct gsi_trans *trans) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + + ipa_endpoint_trans_release(ipa->channel_map[trans->channel_id], trans); +} + +void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count, + u32 byte_count) +{ + struct ipa *ipa = container_of(gsi, struct ipa, gsi); + struct ipa_endpoint *endpoint; + + endpoint = ipa->channel_map[channel_id]; + if (endpoint->netdev) + netdev_sent_queue(endpoint->netdev, byte_count); +} + +void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count, + u32 byte_count) +{ + struct ipa *ipa = container_of(gsi, struct ipa, gsi); + struct ipa_endpoint *endpoint; + + endpoint = ipa->channel_map[channel_id]; + if (endpoint->netdev) + netdev_completed_queue(endpoint->netdev, count, byte_count); +} + +/* Indicate whether an endpoint config data entry is "empty" */ +bool ipa_gsi_endpoint_data_empty(const struct ipa_gsi_endpoint_data *data) +{ + return data->ee_id == GSI_EE_AP && !data->channel.tlv_count; +} diff --git a/drivers/net/ipa/ipa_gsi.h b/drivers/net/ipa/ipa_gsi.h new file mode 100644 index 000000000000..3cf18600c68e --- /dev/null +++ b/drivers/net/ipa/ipa_gsi.h @@ -0,0 +1,60 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ +#ifndef _IPA_GSI_TRANS_H_ +#define _IPA_GSI_TRANS_H_ + +#include <linux/types.h> + +struct gsi_trans; + +/** + * ipa_gsi_trans_complete() - GSI transaction completion callback + * @trans: Transaction that has completed + * + * This is called from the GSI layer to notify the IPA layer that a + * transaction has completed. + */ +void ipa_gsi_trans_complete(struct gsi_trans *trans); + +/** + * ipa_gsi_trans_release() - GSI transaction release callback + * @trans: Transaction whose resources should be freed + * + * This is called from the GSI layer to notify the IPA layer that a + * transaction is about to be freed, so any resources associated + * with it should be released. + */ +void ipa_gsi_trans_release(struct gsi_trans *trans); + +/** + * ipa_gsi_channel_tx_queued() - GSI queued to hardware notification + * @gsi: GSI pointer + * @channel_id: Channel number + * @count: Number of transactions queued + * @byte_count: Number of bytes to transfer represented by transactions + * + * This is called from the GSI layer to notify the IPA layer that some + * number of transactions have been queued to hardware for execution. + */ +void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count, + u32 byte_count); + +/** + * ipa_gsi_channel_tx_completed() - GSI transaction completion callback + * @gsi: GSI pointer + * @channel_id: Channel number + * @count: Number of transactions completed since last report + * @byte_count: Number of bytes transferred represented by transactions + * + * This is called from the GSI layer to notify the IPA layer that the hardware + * has reported the completion of some number of transactions.
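+ *
+ * Together with ipa_gsi_channel_tx_queued(), this keeps byte queue
+ * limit (BQL) accounting balanced for the netdev: the bytes passed to
+ * netdev_sent_queue() when transactions are queued are matched by the
+ * netdev_completed_queue() call made on this path (see "ipa_gsi.c"
+ * above).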
+ */ +void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count, + u32 byte_count); + +bool ipa_gsi_endpoint_data_empty(const struct ipa_gsi_endpoint_data *data); + +#endif /* _IPA_GSI_TRANS_H_ */
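To make the last helper concrete: endpoint configuration arrays in this series may contain unused slots, which ipa_gsi_endpoint_data_empty() lets a caller identify and skip. A minimal sketch under that reading; ipa_endpoint_config_one() is a hypothetical consumer, not a function from these patches:

	/* Sketch only; ipa_endpoint_config_one() is hypothetical */
	static void ipa_endpoint_data_walk(const struct ipa_gsi_endpoint_data *data,
					   u32 count)
	{
		u32 i;

		for (i = 0; i < count; i++)
			if (!ipa_gsi_endpoint_data_empty(&data[i]))
				ipa_endpoint_config_one(&data[i]);
	}
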
From patchwork Fri Mar 6 04:28:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 222954 From: Alex Elder To: David Miller , Arnd Bergmann Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 09/17] soc: qcom: ipa: GSI transactions Date: Thu, 5 Mar 2020 22:28:23 -0600 Message-Id: <20200306042831.17827-10-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200306042831.17827-1-elder@linaro.org> References: <20200306042831.17827-1-elder@linaro.org> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org This patch implements GSI transactions. A GSI transaction is a structure that represents a single request (consisting of one or more TREs) sent to the GSI hardware. The last TRE in a transaction includes a flag requesting that the GSI interrupt the AP to notify that it has completed. TREs are executed and completed strictly in order. For this reason, the completion of a single TRE implies that all previous TREs (in particular all of those "earlier" in a transaction) have completed. Whenever there is a need to send a request (a set of TREs) to the IPA, a GSI transaction is allocated, specifying the number of TREs that will be required. Details of the request (e.g. transfer offsets and length) are represented in a Linux scatterlist array that is incorporated in the transaction structure. Once all commands (TREs) are added to a transaction it is committed. When the hardware signals that the request has completed, a callback function allows for cleanup or followup activity to be performed before the transaction is freed. Signed-off-by: Alex Elder --- drivers/net/ipa/gsi_trans.c | 786 ++++++++++++++++++++++++++++++++++++ drivers/net/ipa/gsi_trans.h | 226 +++++++++++ 2 files changed, 1012 insertions(+) create mode 100644 drivers/net/ipa/gsi_trans.c create mode 100644 drivers/net/ipa/gsi_trans.h diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c new file mode 100644 index 000000000000..2fd21d75367d --- /dev/null +++ b/drivers/net/ipa/gsi_trans.c @@ -0,0 +1,786 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ + +#include <linux/types.h> +#include <linux/bits.h> +#include <linux/bitfield.h> +#include <linux/refcount.h> +#include <linux/scatterlist.h> +#include <linux/dma-direction.h> + +#include "gsi.h" +#include "gsi_private.h" +#include "gsi_trans.h" +#include "ipa_gsi.h" +#include "ipa_data.h" +#include "ipa_cmd.h" + +/** + * DOC: GSI Transactions + * + * A GSI transaction abstracts the behavior of a GSI channel by representing + * everything about a related group of IPA commands in a single structure. + * (A "command" in this sense is either a data transfer or an IPA immediate + * command.) Most details of interaction with the GSI hardware are managed + * by the GSI transaction core, allowing users to simply describe commands + * to be performed. When a transaction has completed, a callback function + * (dependent on the type of endpoint associated with the channel) allows + * cleanup of resources associated with the transaction.
+ * + * To perform a command (or set of them), a user of the GSI transaction + * interface allocates a transaction, indicating the number of TREs required + * (one per command). If sufficient TREs are available, they are reserved + * for use in the transaction and the allocation succeeds. This way + * exhaustion of the available TREs in a channel ring is detected + * as early as possible. All resources required to complete a transaction + * are allocated at transaction allocation time. + * + * Commands performed as part of a transaction are represented in an array + * of Linux scatterlist structures. This array is allocated with the + * transaction, and its entries are initialized using standard scatterlist + * functions (such as sg_set_buf() or skb_to_sgvec()). + * + * Once a transaction's scatterlist structures have been initialized, the + * transaction is committed. The caller is responsible for mapping buffers + * for DMA if necessary, and this should be done *before* allocating + * the transaction. Between a successful allocation and commit of a + * transaction no errors should occur. + * + * Committing transfers ownership of the entire transaction to the GSI + * transaction core. The GSI transaction code formats the content of + * the scatterlist array into the channel ring buffer and informs the + * hardware that new TREs are available to process. + * + * The last TRE in each transaction is marked to interrupt the AP when the + * GSI hardware has completed it. Because transfers described by TREs are + * performed strictly in order, signaling the completion of just the last + * TRE in the transaction is sufficient to indicate the full transaction + * is complete. + * + * When a transaction is complete, ipa_gsi_trans_complete() is called by the + * GSI code into the IPA layer, allowing it to perform any final cleanup + * required before the transaction is freed. + */ + +/* Hardware values representing a transfer element type */ +enum gsi_tre_type { + GSI_RE_XFER = 0x2, + GSI_RE_IMMD_CMD = 0x3, +}; + +/* An entry in a channel ring */ +struct gsi_tre { + __le64 addr; /* DMA address */ + __le16 len_opcode; /* length in bytes or enum IPA_CMD_* */ + __le16 reserved; + __le32 flags; /* TRE_FLAGS_* */ +}; + +/* gsi_tre->flags mask values (in CPU byte order) */ +#define TRE_FLAGS_CHAIN_FMASK GENMASK(0, 0) +#define TRE_FLAGS_IEOB_FMASK GENMASK(8, 8) +#define TRE_FLAGS_IEOT_FMASK GENMASK(9, 9) +#define TRE_FLAGS_BEI_FMASK GENMASK(10, 10) +#define TRE_FLAGS_TYPE_FMASK GENMASK(23, 16) + +int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count, + u32 max_alloc) +{ + void *virt; + +#ifdef IPA_VALIDATE + if (!size || size % 8) + return -EINVAL; + if (count < max_alloc) + return -EINVAL; + if (!max_alloc) + return -EINVAL; +#endif /* IPA_VALIDATE */ + + /* By allocating a few extra entries in our pool (one less + * than the maximum number that will be requested in a + * single allocation), we can always satisfy requests without + * ever worrying about straddling the end of the pool array. + * If there aren't enough entries starting at the free index, + * we just allocate free entries from the beginning of the pool. 
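+ *
+ * (Illustration, with made-up numbers: count = 8 and max_alloc = 3
+ * give a pool of 10 entries.  If the free index is 8 and 3 entries
+ * are requested, only 2 remain at the end of the array, so the
+ * request is satisfied from index 0 instead; the trailing entries
+ * are skipped until the next pass through the pool.)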
+	 */
+	virt = kcalloc(count + max_alloc - 1, size, GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+
+	pool->base = virt;
+	/* If the allocator gave us any extra memory, use it */
+	pool->count = ksize(pool->base) / size;
+	pool->free = 0;
+	pool->max_alloc = max_alloc;
+	pool->size = size;
+	pool->addr = 0;		/* Only used for DMA pools */
+
+	return 0;
+}
+
+void gsi_trans_pool_exit(struct gsi_trans_pool *pool)
+{
+	kfree(pool->base);
+	memset(pool, 0, sizeof(*pool));
+}
+
+/* Home-grown DMA pool. This way we can preallocate and use the tre_count
+ * to guarantee allocations will succeed. Even though we specify max_alloc
+ * (and it can be more than one), we only allow allocation of a single
+ * element from a DMA pool.
+ */
+int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
+			    size_t size, u32 count, u32 max_alloc)
+{
+	size_t total_size;
+	dma_addr_t addr;
+	void *virt;
+
+#ifdef IPA_VALIDATE
+	if (!size || size % 8)
+		return -EINVAL;
+	if (count < max_alloc)
+		return -EINVAL;
+	if (!max_alloc)
+		return -EINVAL;
+#endif /* IPA_VALIDATE */
+
+	/* Don't let allocations cross a power-of-two boundary */
+	size = __roundup_pow_of_two(size);
+	total_size = (count + max_alloc - 1) * size;
+
+	/* The allocator will give us a power-of-2 number of pages. But we
+	 * can't guarantee that, so request it. That way we won't waste any
+	 * memory that would be available beyond the required space.
+	 * (PAGE_SIZE shifted left by the order yields the allocated size;
+	 * shifting the order itself by PAGE_SHIFT would be wrong.)
+	 */
+	total_size = PAGE_SIZE << get_order(total_size);
+
+	virt = dma_alloc_coherent(dev, total_size, &addr, GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+
+	pool->base = virt;
+	pool->count = total_size / size;
+	pool->free = 0;
+	pool->size = size;
+	pool->max_alloc = max_alloc;
+	pool->addr = addr;
+
+	return 0;
+}
+
+void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool)
+{
+	size_t total_size = pool->count * pool->size;
+
+	/* Free the whole allocated area, not just one element's worth */
+	dma_free_coherent(dev, total_size, pool->base, pool->addr);
+	memset(pool, 0, sizeof(*pool));
+}
+
+/* Return the byte offset of the next free entry in the pool */
+static u32 gsi_trans_pool_alloc_common(struct gsi_trans_pool *pool, u32 count)
+{
+	u32 offset;
+
+	/* assert(count > 0); */
+	/* assert(count <= pool->max_alloc); */
+
+	/* Allocate from beginning if wrap would occur */
+	if (count > pool->count - pool->free)
+		pool->free = 0;
+
+	offset = pool->free * pool->size;
+	pool->free += count;
+	memset(pool->base + offset, 0, count * pool->size);
+
+	return offset;
+}
+
+/* Allocate a contiguous block of zeroed entries from a pool */
+void *gsi_trans_pool_alloc(struct gsi_trans_pool *pool, u32 count)
+{
+	return pool->base + gsi_trans_pool_alloc_common(pool, count);
+}
+
+/* Allocate a single zeroed entry from a DMA pool */
+void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr)
+{
+	u32 offset = gsi_trans_pool_alloc_common(pool, 1);
+
+	*addr = pool->addr + offset;
+
+	return pool->base + offset;
+}
+
+/* Return the pool element that immediately follows the one given.
+ * This only works if elements are allocated one at a time.
+ */
+void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element)
+{
+	void *end = pool->base + pool->count * pool->size;
+
+	/* assert(element >= pool->base); */
+	/* assert(element < end); */
+	/* assert(pool->max_alloc == 1); */
+	element += pool->size;
+
+	return element < end ? element : pool->base;
+}
+
+/* Map a given ring entry index to the transaction associated with it */
+static void gsi_channel_trans_map(struct gsi_channel *channel, u32 index,
+				  struct gsi_trans *trans)
+{
+	/* Note: index *must* be used modulo the ring count here */
+	channel->trans_info.map[index % channel->tre_ring.count] = trans;
+}
+
+/* Return the transaction mapped to a given ring entry */
+struct gsi_trans *
+gsi_channel_trans_mapped(struct gsi_channel *channel, u32 index)
+{
+	/* Note: index *must* be used modulo the ring count here */
+	return channel->trans_info.map[index % channel->tre_ring.count];
+}
+
+/* Return the oldest completed transaction for a channel (or null) */
+struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel)
+{
+	return list_first_entry_or_null(&channel->trans_info.complete,
+					struct gsi_trans, links);
+}
+
+/* Move a transaction from the allocated list to the pending list */
+static void gsi_trans_move_pending(struct gsi_trans *trans)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	list_move_tail(&trans->links, &trans_info->pending);
+
+	spin_unlock_bh(&trans_info->spinlock);
+}
+
+/* Move a transaction and all of its predecessors from the pending list
+ * to the completed list.
+ */
+void gsi_trans_move_complete(struct gsi_trans *trans)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct list_head list;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	/* Move this transaction and all predecessors to completed list */
+	list_cut_position(&list, &trans_info->pending, &trans->links);
+	list_splice_tail(&list, &trans_info->complete);
+
+	spin_unlock_bh(&trans_info->spinlock);
+}
+
+/* Move a transaction from the completed list to the polled list */
+void gsi_trans_move_polled(struct gsi_trans *trans)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	list_move_tail(&trans->links, &trans_info->polled);
+
+	spin_unlock_bh(&trans_info->spinlock);
+}
+
+/* Reserve some number of TREs on a channel. Returns true if successful */
+static bool
+gsi_trans_tre_reserve(struct gsi_trans_info *trans_info, u32 tre_count)
+{
+	int avail = atomic_read(&trans_info->tre_avail);
+	int new;
+
+	do {
+		new = avail - (int)tre_count;
+		if (unlikely(new < 0))
+			return false;
+	} while (!atomic_try_cmpxchg(&trans_info->tre_avail, &avail, new));
+
+	return true;
+}
+
+/* Release previously-reserved TRE entries to a channel */
+static void
+gsi_trans_tre_release(struct gsi_trans_info *trans_info, u32 tre_count)
+{
+	atomic_add(tre_count, &trans_info->tre_avail);
+}
+
+/* Allocate a GSI transaction on a channel */
+struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
+					  u32 tre_count,
+					  enum dma_data_direction direction)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct gsi_trans_info *trans_info;
+	struct gsi_trans *trans;
+
+	/* assert(tre_count <= gsi_channel_trans_tre_max(gsi, channel_id)); */
+
+	trans_info = &channel->trans_info;
+
+	/* We reserve the TREs now, but consume them at commit time.
+	 * If there aren't enough available, we're done.
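+	 * The reservation itself (see gsi_trans_tre_reserve() above)
+	 * is a lock-free atomic_try_cmpxchg() loop: the available
+	 * count is re-read and the subtraction retried if another
+	 * thread updated the counter concurrently, so no spinlock is
+	 * needed on this path.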
+	 */
+	if (!gsi_trans_tre_reserve(trans_info, tre_count))
+		return NULL;
+
+	/* Allocate and initialize non-zero fields in the transaction */
+	trans = gsi_trans_pool_alloc(&trans_info->pool, 1);
+	trans->gsi = gsi;
+	trans->channel_id = channel_id;
+	trans->tre_count = tre_count;
+	init_completion(&trans->completion);
+
+	/* Allocate the scatterlist and (if requested) info entries. */
+	trans->sgl = gsi_trans_pool_alloc(&trans_info->sg_pool, tre_count);
+	sg_init_marker(trans->sgl, tre_count);
+
+	trans->direction = direction;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	list_add_tail(&trans->links, &trans_info->alloc);
+
+	spin_unlock_bh(&trans_info->spinlock);
+
+	refcount_set(&trans->refcount, 1);
+
+	return trans;
+}
+
+/* Free a previously-allocated transaction (used only in case of error) */
+void gsi_trans_free(struct gsi_trans *trans)
+{
+	struct gsi_trans_info *trans_info;
+
+	if (!refcount_dec_and_test(&trans->refcount))
+		return;
+
+	trans_info = &trans->gsi->channel[trans->channel_id].trans_info;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	list_del(&trans->links);
+
+	spin_unlock_bh(&trans_info->spinlock);
+
+	ipa_gsi_trans_release(trans);
+
+	/* Releasing the reserved TREs implicitly frees the sgl[] and
+	 * (if present) info[] arrays, plus the transaction itself.
+	 */
+	gsi_trans_tre_release(trans_info, trans->tre_count);
+}
+
+/* Add an immediate command to a transaction */
+void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
+		       dma_addr_t addr, enum dma_data_direction direction,
+		       enum ipa_cmd_opcode opcode)
+{
+	struct ipa_cmd_info *info;
+	u32 which = trans->used++;
+	struct scatterlist *sg;
+
+	/* assert(which < trans->tre_count); */
+
+	/* Set the page information for the buffer. We also need to fill in
+	 * the DMA address for the buffer (something dma_map_sg() normally
+	 * does).
+	 */
+	sg = &trans->sgl[which];
+
+	sg_set_buf(sg, buf, size);
+	sg_dma_address(sg) = addr;
+
+	info = &trans->info[which];
+	info->opcode = opcode;
+	info->direction = direction;
+}
+
+/* Add a page transfer to a transaction. It will fill the only TRE. */
+int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
+		       u32 offset)
+{
+	struct scatterlist *sg = &trans->sgl[0];
+	int ret;
+
+	/* assert(trans->tre_count == 1); */
+	/* assert(!trans->used); */
+
+	sg_set_page(sg, page, size, offset);
+	ret = dma_map_sg(trans->gsi->dev, sg, 1, trans->direction);
+	if (!ret)
+		return -ENOMEM;
+
+	trans->used++;	/* Transaction now owns the (DMA mapped) page */
+
+	return 0;
+}
+
+/* Add an SKB transfer to a transaction. No other TREs will be used. */
+int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb)
+{
+	struct scatterlist *sg = &trans->sgl[0];
+	u32 used;
+	int ret;
+
+	/* assert(trans->tre_count == 1); */
+	/* assert(!trans->used); */
+
+	/* skb->len will not be 0 (checked early) */
+	ret = skb_to_sgvec(skb, sg, 0, skb->len);
+	if (ret < 0)
+		return ret;
+	used = ret;
+
+	ret = dma_map_sg(trans->gsi->dev, sg, used, trans->direction);
+	if (!ret)
+		return -ENOMEM;
+
+	trans->used += used;	/* Transaction now owns the (DMA mapped) skb */
+
+	return 0;
+}
+
+/* Compute the length/opcode value to use for a TRE */
+static __le16 gsi_tre_len_opcode(enum ipa_cmd_opcode opcode, u32 len)
+{
+	return opcode == IPA_CMD_NONE ? cpu_to_le16((u16)len)
+				      : cpu_to_le16((u16)opcode);
+}
+
+/* Compute the flags value to use for a given TRE */
+static __le32 gsi_tre_flags(bool last_tre, bool bei, enum ipa_cmd_opcode opcode)
+{
+	enum gsi_tre_type tre_type;
+	u32 tre_flags;
+
+	tre_type = opcode == IPA_CMD_NONE ? GSI_RE_XFER : GSI_RE_IMMD_CMD;
+	tre_flags = u32_encode_bits(tre_type, TRE_FLAGS_TYPE_FMASK);
+
+	/* Last TRE contains interrupt flags */
+	if (last_tre) {
+		/* All transactions end in a transfer completion interrupt */
+		tre_flags |= TRE_FLAGS_IEOT_FMASK;
+		/* Don't interrupt when outbound commands are acknowledged */
+		if (bei)
+			tre_flags |= TRE_FLAGS_BEI_FMASK;
+	} else {	/* All others indicate there's more to come */
+		tre_flags |= TRE_FLAGS_CHAIN_FMASK;
+	}
+
+	return cpu_to_le32(tre_flags);
+}
+
+static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
+			       u32 len, bool last_tre, bool bei,
+			       enum ipa_cmd_opcode opcode)
+{
+	struct gsi_tre tre;
+
+	tre.addr = cpu_to_le64(addr);
+	tre.len_opcode = gsi_tre_len_opcode(opcode, len);
+	tre.reserved = 0;
+	tre.flags = gsi_tre_flags(last_tre, bei, opcode);
+
+	/* ARM64 can write 16 bytes as a unit with a single instruction.
+	 * Doing the assignment this way is an attempt to make that happen.
+	 */
+	*dest_tre = tre;
+}
+
+/**
+ * __gsi_trans_commit() - Common GSI transaction commit code
+ * @trans:	Transaction to commit
+ * @ring_db:	Whether to tell the hardware about these queued transfers
+ *
+ * Formats channel ring TRE entries based on the content of the scatterlist.
+ * Maps a transaction pointer to the last ring entry used for the transaction,
+ * so it can be recovered when it completes. Moves the transaction to the
+ * pending list. Finally, updates the channel ring pointer and optionally
+ * rings the doorbell.
+ */
+static void __gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	struct gsi_ring *ring = &channel->tre_ring;
+	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
+	bool bei = channel->toward_ipa;
+	struct ipa_cmd_info *info;
+	struct gsi_tre *dest_tre;
+	struct scatterlist *sg;
+	u32 byte_count = 0;
+	u32 avail;
+	u32 i;
+
+	/* assert(trans->used > 0); */
+
+	/* Consume the entries. If we cross the end of the ring while
+	 * filling them we'll switch to the beginning to finish.
+	 * If there is no info array we're doing a simple data
+	 * transfer request, whose opcode is IPA_CMD_NONE.
+	 */
+	info = trans->info ? &trans->info[0] : NULL;
+	avail = ring->count - ring->index % ring->count;
+	dest_tre = gsi_ring_virt(ring, ring->index);
+	for_each_sg(trans->sgl, sg, trans->used, i) {
+		bool last_tre = i == trans->used - 1;
+		dma_addr_t addr = sg_dma_address(sg);
+		u32 len = sg_dma_len(sg);
+
+		byte_count += len;
+		if (!avail--)
+			dest_tre = gsi_ring_virt(ring, 0);
+		if (info)
+			opcode = info++->opcode;
+
+		gsi_trans_tre_fill(dest_tre, addr, len, last_tre, bei, opcode);
+		dest_tre++;
+	}
+	ring->index += trans->used;
+
+	if (channel->toward_ipa) {
+		/* We record TX bytes when they are sent */
+		trans->len = byte_count;
+		trans->trans_count = channel->trans_count;
+		trans->byte_count = channel->byte_count;
+		channel->trans_count++;
+		channel->byte_count += byte_count;
+	}
+
+	/* Associate the last TRE with the transaction */
+	gsi_channel_trans_map(channel, ring->index - 1, trans);
+
+	gsi_trans_move_pending(trans);
+
+	/* Ring doorbell if requested, or if all TREs are allocated */
+	if (ring_db || !atomic_read(&channel->trans_info.tre_avail)) {
+		/* Report what we're handing off to hardware for TX channels */
+		if (channel->toward_ipa)
+			gsi_channel_tx_queued(channel);
+		gsi_channel_doorbell(channel);
+	}
+}
+
+/* Commit a GSI transaction */
+void gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
+{
+	if (trans->used)
+		__gsi_trans_commit(trans, ring_db);
+	else
+		gsi_trans_free(trans);
+}
+
+/* Commit a GSI transaction and wait for it to complete */
+void gsi_trans_commit_wait(struct gsi_trans *trans)
+{
+	if (!trans->used)
+		goto out_trans_free;
+
+	refcount_inc(&trans->refcount);
+
+	__gsi_trans_commit(trans, true);
+
+	wait_for_completion(&trans->completion);
+
+out_trans_free:
+	gsi_trans_free(trans);
+}
+
+/* Commit a GSI transaction and wait for it to complete, with timeout */
+int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
+				  unsigned long timeout)
+{
+	unsigned long timeout_jiffies = msecs_to_jiffies(timeout);
+	unsigned long remaining = 1;	/* In case of empty transaction */
+
+	if (!trans->used)
+		goto out_trans_free;
+
+	refcount_inc(&trans->refcount);
+
+	__gsi_trans_commit(trans, true);
+
+	remaining = wait_for_completion_timeout(&trans->completion,
+						timeout_jiffies);
+out_trans_free:
+	gsi_trans_free(trans);
+
+	return remaining ? 0 : -ETIMEDOUT;
+}
+
+/* Process the completion of a transaction; called while polling */
+void gsi_trans_complete(struct gsi_trans *trans)
+{
+	/* If the entire SGL was mapped when added, unmap it now */
+	if (trans->direction != DMA_NONE)
+		dma_unmap_sg(trans->gsi->dev, trans->sgl, trans->used,
+			     trans->direction);
+
+	ipa_gsi_trans_complete(trans);
+
+	complete(&trans->completion);
+
+	gsi_trans_free(trans);
+}
+
+/* Cancel a channel's pending transactions */
+void gsi_channel_trans_cancel_pending(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct gsi_trans *trans;
+	bool cancelled;
+
+	/* channel->gsi->mutex is held by caller */
+	spin_lock_bh(&trans_info->spinlock);
+
+	cancelled = !list_empty(&trans_info->pending);
+	list_for_each_entry(trans, &trans_info->pending, links)
+		trans->cancelled = true;
+
+	list_splice_tail_init(&trans_info->pending, &trans_info->complete);
+
+	spin_unlock_bh(&trans_info->spinlock);
+
+	/* Schedule NAPI polling to complete the cancelled transactions */
+	if (cancelled)
+		napi_schedule(&channel->napi);
+}
+
+/* Issue a command to read a single byte from a channel */
+int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct gsi_ring *ring = &channel->tre_ring;
+	struct gsi_trans_info *trans_info;
+	struct gsi_tre *dest_tre;
+
+	trans_info = &channel->trans_info;
+
+	/* First reserve the TRE, if possible */
+	if (!gsi_trans_tre_reserve(trans_info, 1))
+		return -EBUSY;
+
+	/* Now fill the reserved TRE and tell the hardware */
+
+	dest_tre = gsi_ring_virt(ring, ring->index);
+	gsi_trans_tre_fill(dest_tre, addr, 1, true, false, IPA_CMD_NONE);
+
+	ring->index++;
+	gsi_channel_doorbell(channel);
+
+	return 0;
+}
+
+/* Mark a gsi_trans_read_byte() request done */
+void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	gsi_trans_tre_release(&channel->trans_info, 1);
+}
+
+/* Initialize a channel's GSI transaction info */
+int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct gsi_trans_info *trans_info;
+	u32 tre_max;
+	int ret;
+
+	/* Ensure the size of a channel element is what's expected */
+	BUILD_BUG_ON(sizeof(struct gsi_tre) != GSI_RING_ELEMENT_SIZE);
+
+	/* The map array is used to determine what transaction is associated
+	 * with a TRE that the hardware reports has completed. We need one
+	 * map entry per TRE.
+	 */
+	trans_info = &channel->trans_info;
+	trans_info->map = kcalloc(channel->tre_count, sizeof(*trans_info->map),
+				  GFP_KERNEL);
+	if (!trans_info->map)
+		return -ENOMEM;
+
+	/* We can't use more TREs than there are available in the ring.
+	 * This limits the number of transactions that can be outstanding.
+	 * Worst case is one TRE per transaction (but we actually limit
+	 * it to something a little less than that). We allocate resources
+	 * for transactions (including transaction structures) based on
+	 * this maximum number.
+	 */
+	tre_max = gsi_channel_tre_max(channel->gsi, channel_id);
+
+	/* Transactions are allocated one at a time. */
+	ret = gsi_trans_pool_init(&trans_info->pool, sizeof(struct gsi_trans),
+				  tre_max, 1);
+	if (ret)
+		goto err_kfree;
+
+	/* A transaction uses a scatterlist array to represent the data
+	 * transfers implemented by the transaction. Each scatterlist
+	 * element is used to fill a single TRE when the transaction is
+	 * committed. So we need as many scatterlist elements as the
+	 * maximum number of TREs that can be outstanding.
+	 *
+	 * All TREs in a transaction must fit within the channel's TLV FIFO.
+	 * A transaction on a channel can allocate as many TREs as that but
+	 * no more.
+	 */
+	ret = gsi_trans_pool_init(&trans_info->sg_pool,
+				  sizeof(struct scatterlist),
+				  tre_max, channel->tlv_count);
+	if (ret)
+		goto err_trans_pool_exit;
+
+	/* Finally, the tre_avail field is what ultimately limits the number
+	 * of outstanding transactions and their resources. A transaction
+	 * allocation succeeds only if the TREs available are sufficient for
+	 * what the transaction might need. Transaction resource pools are
+	 * sized based on the maximum number of outstanding TREs, so there
+	 * will always be resources available if there are TREs available.
+	 */
+	atomic_set(&trans_info->tre_avail, tre_max);
+
+	spin_lock_init(&trans_info->spinlock);
+	INIT_LIST_HEAD(&trans_info->alloc);
+	INIT_LIST_HEAD(&trans_info->pending);
+	INIT_LIST_HEAD(&trans_info->complete);
+	INIT_LIST_HEAD(&trans_info->polled);
+
+	return 0;
+
+err_trans_pool_exit:
+	gsi_trans_pool_exit(&trans_info->pool);
+err_kfree:
+	kfree(trans_info->map);
+
+	dev_err(gsi->dev, "error %d initializing channel %u transactions\n",
+		ret, channel_id);
+
+	return ret;
+}
+
+/* Inverse of gsi_channel_trans_init() */
+void gsi_channel_trans_exit(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+
+	gsi_trans_pool_exit(&trans_info->sg_pool);
+	gsi_trans_pool_exit(&trans_info->pool);
+	kfree(trans_info->map);
+}
diff --git a/drivers/net/ipa/gsi_trans.h b/drivers/net/ipa/gsi_trans.h
new file mode 100644
index 000000000000..1477fc15b30a
--- /dev/null
+++ b/drivers/net/ipa/gsi_trans.h
@@ -0,0 +1,226 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _GSI_TRANS_H_
+#define _GSI_TRANS_H_
+
+#include <linux/types.h>
+#include <linux/refcount.h>
+#include <linux/completion.h>
+#include <linux/dma-direction.h>
+
+#include "ipa_cmd.h"
+
+struct scatterlist;
+struct device;
+struct sk_buff;
+
+struct gsi;
+struct gsi_trans;
+struct gsi_trans_pool;
+
+/**
+ * struct gsi_trans - a GSI transaction
+ *
+ * Most fields in this structure are for internal use by the transaction
+ * core code:
+ * @links:	Links for channel transaction lists by state
+ * @gsi:	GSI pointer
+ * @channel_id: Channel number transaction is associated with
+ * @cancelled:	If set by the core code, transaction was cancelled
+ * @tre_count:	Number of TREs reserved for this transaction
+ * @used:	Number of TREs *used* (could be less than tre_count)
+ * @len:	Total # of transfer bytes represented in sgl[] (set by core)
+ * @data:	Preserved but not touched by the core transaction code
+ * @sgl:	An array of scatter/gather entries managed by core code
+ * @info:	Array of command information structures (command channel)
+ * @direction:	DMA transfer direction (DMA_NONE for commands)
+ * @refcount:	Reference count used for destruction
+ * @completion: Completed when the transaction completes
+ * @byte_count: TX channel byte count recorded when transaction committed
+ * @trans_count: Channel transaction count when committed (for BQL accounting)
+ *
+ * The sizes used for some fields in this structure were chosen to ensure
+ * the full structure size is no larger than 128 bytes.
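+ *
+ * As an illustration (not part of this patch), a constraint like that
+ * could be enforced at build time with something like:
+ *
+ *	BUILD_BUG_ON(sizeof(struct gsi_trans) > 128);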
+ */
+struct gsi_trans {
+	struct list_head links;		/* gsi_channel lists */
+
+	struct gsi *gsi;
+	u8 channel_id;
+
+	bool cancelled;			/* true if transaction was cancelled */
+
+	u8 tre_count;			/* # TREs requested */
+	u8 used;			/* # entries used in sgl[] */
+	u32 len;			/* total # bytes across sgl[] */
+
+	void *data;
+	struct scatterlist *sgl;
+	struct ipa_cmd_info *info;	/* array of entries, or null */
+	enum dma_data_direction direction;
+
+	refcount_t refcount;
+	struct completion completion;
+
+	u64 byte_count;			/* channel byte_count when committed */
+	u64 trans_count;		/* channel trans_count when committed */
+};
+
+/**
+ * gsi_trans_pool_init() - Initialize a pool of structures for transactions
+ * @pool:	Pool pointer
+ * @size:	Size of elements in the pool
+ * @count:	Minimum number of elements in the pool
+ * @max_alloc:	Maximum number of elements allocated at a time from pool
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
+			u32 max_alloc);
+
+/**
+ * gsi_trans_pool_alloc() - Allocate one or more elements from a pool
+ * @pool:	Pool pointer
+ * @count:	Number of elements to allocate from the pool
+ *
+ * @Return:	Virtual address of element(s) allocated from the pool
+ */
+void *gsi_trans_pool_alloc(struct gsi_trans_pool *pool, u32 count);
+
+/**
+ * gsi_trans_pool_exit() - Inverse of gsi_trans_pool_init()
+ * @pool:	Pool pointer
+ */
+void gsi_trans_pool_exit(struct gsi_trans_pool *pool);
+
+/**
+ * gsi_trans_pool_init_dma() - Initialize a pool of DMA-able structures
+ * @dev:	Device used for DMA
+ * @pool:	Pool pointer
+ * @size:	Size of elements in the pool
+ * @count:	Minimum number of elements in the pool
+ * @max_alloc:	Maximum number of elements allocated at a time from pool
+ *
+ * @Return:	0 if successful, or a negative error code
+ *
+ * Structures in this pool reside in DMA-coherent memory.
+ */
+int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
+			    size_t size, u32 count, u32 max_alloc);
+
+/**
+ * gsi_trans_pool_alloc_dma() - Allocate an element from a DMA pool
+ * @pool:	DMA pool pointer
+ * @addr:	DMA address "handle" associated with the allocation
+ *
+ * @Return:	Virtual address of element allocated from the pool
+ *
+ * Only one element at a time may be allocated from a DMA pool.
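+ *
+ * A usage sketch (hypothetical caller):
+ *
+ *	dma_addr_t addr;
+ *
+ *	tre = gsi_trans_pool_alloc_dma(&pool, &addr);
+ *
+ * where "tre" receives the virtual address of the element and "addr"
+ * receives the matching DMA address to hand to hardware.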
+ */
+void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr);
+
+/**
+ * gsi_trans_pool_exit_dma() - Inverse of gsi_trans_pool_init_dma()
+ * @dev:	Device used for DMA
+ * @pool:	Pool pointer
+ */
+void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool);
+
+/**
+ * gsi_channel_trans_alloc() - Allocate a GSI transaction on a channel
+ * @gsi:	GSI pointer
+ * @channel_id: Channel the transaction is associated with
+ * @tre_count:	Number of elements in the transaction
+ * @direction:	DMA direction for entire SGL (or DMA_NONE)
+ *
+ * @Return:	A GSI transaction structure, or a null pointer if all
+ *		available transactions are in use
+ */
+struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
+					  u32 tre_count,
+					  enum dma_data_direction direction);
+
+/**
+ * gsi_trans_free() - Free a previously-allocated GSI transaction
+ * @trans:	Transaction to be freed
+ */
+void gsi_trans_free(struct gsi_trans *trans);
+
+/**
+ * gsi_trans_cmd_add() - Add an immediate command to a transaction
+ * @trans:	Transaction
+ * @buf:	Buffer pointer for command payload
+ * @size:	Number of bytes in buffer
+ * @addr:	DMA address for payload
+ * @direction:	Direction of DMA transfer (or DMA_NONE if none required)
+ * @opcode:	IPA immediate command opcode
+ */
+void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
+		       dma_addr_t addr, enum dma_data_direction direction,
+		       enum ipa_cmd_opcode opcode);
+
+/**
+ * gsi_trans_page_add() - Add a page transfer to a transaction
+ * @trans:	Transaction
+ * @page:	Page pointer
+ * @size:	Number of bytes (starting at offset) to transfer
+ * @offset:	Offset within page for start of transfer
+ *
+ * @Return:	0 if successful, or -ENOMEM if the page could not be mapped
+ */
+int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
+		       u32 offset);
+
+/**
+ * gsi_trans_skb_add() - Add a socket transfer to a transaction
+ * @trans:	Transaction
+ * @skb:	Socket buffer for transfer (outbound)
+ *
+ * @Return:	0, or -EMSGSIZE if socket data won't fit in transaction.
+ */
+int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb);
+
+/**
+ * gsi_trans_commit() - Commit a GSI transaction
+ * @trans:	Transaction to commit
+ * @ring_db:	Whether to tell the hardware about these queued transfers
+ */
+void gsi_trans_commit(struct gsi_trans *trans, bool ring_db);
+
+/**
+ * gsi_trans_commit_wait() - Commit a GSI transaction and wait for it
+ *			     to complete
+ * @trans:	Transaction to commit
+ */
+void gsi_trans_commit_wait(struct gsi_trans *trans);
+
+/**
+ * gsi_trans_commit_wait_timeout() - Commit a GSI transaction and wait for
+ *				     it to complete, with timeout
+ * @trans:	Transaction to commit
+ * @timeout:	Timeout period (in milliseconds)
+ *
+ * @Return:	0 if successful, or -ETIMEDOUT if the timeout expired
+ */
+int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
+				  unsigned long timeout);
+
+/**
+ * gsi_trans_read_byte() - Issue a single byte read TRE on a channel
+ * @gsi:	GSI pointer
+ * @channel_id: Channel on which to read a byte
+ * @addr:	DMA address into which to transfer the one byte
+ *
+ * This is not a transaction operation at all. It's defined here because
+ * it needs to be done in coordination with other transaction activity.
+ */
+int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr);
+
+/**
+ * gsi_trans_read_byte_done() - Clean up after a single byte read TRE
+ * @gsi:	GSI pointer
+ * @channel_id: Channel on which byte was read
+ *
+ * This function needs to be called to signal that the work related
+ * to reading a byte initiated by gsi_trans_read_byte() is complete.
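+ *
+ * A sketch of the expected pairing (hypothetical caller; how arrival of
+ * the byte is detected depends on the channel):
+ *
+ *	ret = gsi_trans_read_byte(gsi, channel_id, addr);
+ *	(... wait for the byte to arrive ...)
+ *	if (!ret)
+ *		gsi_trans_read_byte_done(gsi, channel_id);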
+ */
+void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id);
+
+#endif /* _GSI_TRANS_H_ */

From patchwork Fri Mar 6 04:28:25 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 222950
Received: from presto.localdomain (c-73-185-129-58.hsd1.mn.comcast.net.
[73.185.129.58]) by smtp.gmail.com with ESMTPSA id x2sm12581836ywa.32.2020.03.05.20.28.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 05 Mar 2020 20:28:57 -0800 (PST)
From: Alex Elder
To: David Miller , Arnd Bergmann
Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 11/17] soc: qcom: ipa: filter and routing tables
Date: Thu, 5 Mar 2020 22:28:25 -0600
Message-Id: <20200306042831.17827-12-elder@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200306042831.17827-1-elder@linaro.org>
References: <20200306042831.17827-1-elder@linaro.org>
MIME-Version: 1.0
Sender: netdev-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: netdev@vger.kernel.org

This patch contains code implementing filter and routing tables for the IPA. A filter table allows rules to be used for filtering packets that depart the AP at an endpoint. A filter table entry contains the address of a set of rules to apply for each endpoint that supports filtering.

A routing table allows packets to be routed to an endpoint based on packet metadata. It is also a table whose entries each contain the address of a set of routing rules to apply.

Neither filtering nor routing is supported by the current driver. All table entries refer to rules that mean "no filtering" and "no routing."

Signed-off-by: Alex Elder
---
 drivers/net/ipa/ipa_table.c | 700 ++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_table.h | 103 ++++++
 2 files changed, 803 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_table.c
 create mode 100644 drivers/net/ipa/ipa_table.h

diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
new file mode 100644
index 000000000000..9df2a3e78c98
--- /dev/null
+++ b/drivers/net/ipa/ipa_table.c
@@ -0,0 +1,700 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/bits.h>
+#include <linux/bitops.h>
+#include <linux/bitfield.h>
+#include <linux/io.h>
+#include <linux/build_bug.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+
+#include "ipa.h"
+#include "ipa_version.h"
+#include "ipa_endpoint.h"
+#include "ipa_table.h"
+#include "ipa_reg.h"
+#include "ipa_mem.h"
+#include "ipa_cmd.h"
+#include "gsi.h"
+#include "gsi_trans.h"
+
+/**
+ * DOC: IPA Filter and Route Tables
+ *
+ * The IPA has tables defined in its local shared memory that define filter
+ * and routing rules. Each entry in these tables contains a 64-bit DMA
+ * address that refers to DRAM (system memory) containing a rule definition.
+ * A rule consists of a contiguous block of 32-bit values terminated with
+ * 32 zero bits. A special "zero entry" rule consisting of 64 zero bits
+ * represents "no filtering" or "no routing," and is the reset value for
+ * filter or route table rules. Separate tables (both filter and route)
+ * are used for IPv4 and IPv6. Additionally, there can be hashed filter or
+ * route tables, which are used when a hash of message metadata matches.
+ * Hashed operation is not supported by all IPA hardware.
+ *
+ * Each filter rule is associated with an AP or modem TX endpoint, though
+ * not all TX endpoints support filtering. The first 64-bit entry in a
+ * filter table is a bitmap indicating which endpoints have entries in
+ * the table. The low-order bit (bit 0) in this bitmap represents a
+ * special global filter, which applies to all traffic. This is not
+ * used in the current code. Bit 1, if set, indicates that there is an
+ * entry (i.e. a DMA address referring to a rule) for endpoint 0 in the
+ * table. Bit 2, if set, indicates there is an entry for endpoint 1,
+ * and so on. Space is set aside in IPA local memory to hold as many
+ * filter table entries as might be required, but typically they are not
+ * all used.
+ *
+ * The AP initializes all entries in a filter table to refer to a "zero"
+ * entry. Once initialized the modem and AP update the entries for
+ * endpoints they "own" directly. Currently the AP does not use the
+ * IPA filtering functionality.
+ *
+ *                    IPA Filter Table
+ *                 ----------------------
+ * endpoint bitmap | 0x0000000000000048 | Bits 3 and 6 set (endpoints 2 and 5)
+ *                 |--------------------|
+ * 1st endpoint    | 0x000123456789abc0 | DMA address for modem endpoint 2 rule
+ *                 |--------------------|
+ * 2nd endpoint    | 0x000123456789abf0 | DMA address for AP endpoint 5 rule
+ *                 |--------------------|
+ * (unused)        |                    | (Unused space in filter table)
+ *                 |--------------------|
+ *                          . . .
+ *                 |--------------------|
+ * (unused)        |                    | (Unused space in filter table)
+ *                 ----------------------
+ *
+ * The set of available route rules is divided about equally between the AP
+ * and modem. The AP initializes all entries in a route table to refer to
+ * a "zero entry". Once initialized, the modem and AP are responsible for
+ * updating their own entries. All entries in a route table are usable,
+ * though the AP currently does not use the IPA routing functionality.
+ *
+ *                     IPA Route Table
+ *                 ----------------------
+ * 1st modem route | 0x0001234500001100 | DMA address for first route rule
+ *                 |--------------------|
+ * 2nd modem route | 0x0001234500001140 | DMA address for second route rule
+ *                 |--------------------|
+ *                          . . .
+ *                 |--------------------|
+ * Last modem route| 0x0001234500002280 | DMA address for Nth route rule
+ *                 |--------------------|
+ * 1st AP route    | 0x0001234500001100 | DMA address for route rule (N+1)
+ *                 |--------------------|
+ * 2nd AP route    | 0x0001234500001140 | DMA address for next route rule
+ *                 |--------------------|
+ *                          . . .
+ *                 |--------------------|
+ * Last AP route   | 0x0001234500002280 | DMA address for last route rule
+ *                 ----------------------
+ */
+
+/* IPA hardware constrains filter and route tables alignment */
+#define IPA_TABLE_ALIGN		128	/* Minimum table alignment */
+
+/* Assignment of route table entries to the modem and AP */
+#define IPA_ROUTE_MODEM_MIN	0
+#define IPA_ROUTE_MODEM_COUNT	8
+
+#define IPA_ROUTE_AP_MIN	IPA_ROUTE_MODEM_COUNT
+#define IPA_ROUTE_AP_COUNT \
+		(IPA_ROUTE_COUNT_MAX - IPA_ROUTE_MODEM_COUNT)
+
+/* Filter or route rules consist of a set of 32-bit values followed by a
+ * 32-bit all-zero rule list terminator. The "zero rule" is simply an
+ * all-zero rule followed by the list terminator.
+ */
+#define IPA_ZERO_RULE_SIZE	(2 * sizeof(__le32))
+
+#ifdef IPA_VALIDATE
+
+/* Check things that can be validated at build time. */
+static void ipa_table_validate_build(void)
+{
+	/* IPA hardware accesses memory 128 bytes at a time. Addresses
+	 * referred to by entries in filter and route tables must be
+	 * aligned on 128-byte boundaries. The only rule address
+	 * ever used is the "zero rule", and it's aligned at the base
+	 * of a coherent DMA allocation.
+	 */
+	BUILD_BUG_ON(ARCH_DMA_MINALIGN % IPA_TABLE_ALIGN);
+
+	/* Filter and route tables contain DMA addresses that refer to
+	 * filter or route rules. We use a fixed constant to represent
+	 * the size of either type of table entry. Code in ipa_table_init()
+	 * uses a pointer to __le64 to initialize table entries.
+	 */
+	BUILD_BUG_ON(IPA_TABLE_ENTRY_SIZE != sizeof(dma_addr_t));
+	BUILD_BUG_ON(sizeof(dma_addr_t) != sizeof(__le64));
+
+	/* A "zero rule" is used to represent no filtering or no routing.
+	 * It is a 64-bit block of zeroed memory. Code in ipa_table_init()
+	 * assumes that it can be written using a pointer to __le64.
+	 */
+	BUILD_BUG_ON(IPA_ZERO_RULE_SIZE != sizeof(__le64));
+
+	/* Impose a practical limit on the number of routes */
+	BUILD_BUG_ON(IPA_ROUTE_COUNT_MAX > 32);
+	/* The modem must be allotted at least one route table entry */
+	BUILD_BUG_ON(!IPA_ROUTE_MODEM_COUNT);
+	/* But it can't have more than what is available */
+	BUILD_BUG_ON(IPA_ROUTE_MODEM_COUNT > IPA_ROUTE_COUNT_MAX);
+}
+
+static bool
+ipa_table_valid_one(struct ipa *ipa, bool route, bool ipv6, bool hashed)
+{
+	struct device *dev = &ipa->pdev->dev;
+	const struct ipa_mem *mem;
+	u32 size;
+
+	if (route) {
+		if (ipv6)
+			mem = hashed ? &ipa->mem[IPA_MEM_V6_ROUTE_HASHED]
+				     : &ipa->mem[IPA_MEM_V6_ROUTE];
+		else
+			mem = hashed ? &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]
+				     : &ipa->mem[IPA_MEM_V4_ROUTE];
+		size = IPA_ROUTE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE;
+	} else {
+		if (ipv6)
+			mem = hashed ? &ipa->mem[IPA_MEM_V6_FILTER_HASHED]
+				     : &ipa->mem[IPA_MEM_V6_FILTER];
+		else
+			mem = hashed ? &ipa->mem[IPA_MEM_V4_FILTER_HASHED]
+				     : &ipa->mem[IPA_MEM_V4_FILTER];
+		size = (1 + IPA_FILTER_COUNT_MAX) * IPA_TABLE_ENTRY_SIZE;
+	}
+
+	if (!ipa_cmd_table_valid(ipa, mem, route, ipv6, hashed))
+		return false;
+
+	/* mem->size >= size is sufficient, but we'll demand more */
+	if (mem->size == size)
+		return true;
+
+	/* Hashed table regions can be zero size if hashing is not supported */
+	if (hashed && !mem->size)
+		return true;
+
+	dev_err(dev, "IPv%c %s%s table region size 0x%02x, expected 0x%02x\n",
+		ipv6 ? '6' : '4', hashed ? "hashed " : "",
+		route ? "route" : "filter", mem->size, size);
"route" : "filter", mem->size, size); + + return false; +} + +/* Verify the filter and route table memory regions are the expected size */ +bool ipa_table_valid(struct ipa *ipa) +{ + bool valid = true; + + valid = valid && ipa_table_valid_one(ipa, false, false, false); + valid = valid && ipa_table_valid_one(ipa, false, false, true); + valid = valid && ipa_table_valid_one(ipa, false, true, false); + valid = valid && ipa_table_valid_one(ipa, false, true, true); + valid = valid && ipa_table_valid_one(ipa, true, false, false); + valid = valid && ipa_table_valid_one(ipa, true, false, true); + valid = valid && ipa_table_valid_one(ipa, true, true, false); + valid = valid && ipa_table_valid_one(ipa, true, true, true); + + return valid; +} + +bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_map) +{ + struct device *dev = &ipa->pdev->dev; + u32 count; + + if (!filter_map) { + dev_err(dev, "at least one filtering endpoint is required\n"); + + return false; + } + + count = hweight32(filter_map); + if (count > IPA_FILTER_COUNT_MAX) { + dev_err(dev, "too many filtering endpoints (%u, max %u)\n", + count, IPA_FILTER_COUNT_MAX); + + return false; + } + + return true; +} + +#else /* !IPA_VALIDATE */ +static void ipa_table_validate_build(void) + +{ +} + +#endif /* !IPA_VALIDATE */ + +/* Zero entry count means no table, so just return a 0 address */ +static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count) +{ + u32 skip; + + if (!count) + return 0; + +/* assert(count <= max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX)); */ + + /* Skip over the zero rule and possibly the filter mask */ + skip = filter_mask ? 1 : 2; + + return ipa->table_addr + skip * sizeof(*ipa->table_virt); +} + +static void ipa_table_reset_add(struct gsi_trans *trans, bool filter, + u16 first, u16 count, const struct ipa_mem *mem) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + dma_addr_t addr; + u32 offset; + u16 size; + + /* Nothing to do if the table memory regions is empty */ + if (!mem->size) + return; + + if (filter) + first++; /* skip over bitmap */ + + offset = mem->offset + first * IPA_TABLE_ENTRY_SIZE; + size = count * IPA_TABLE_ENTRY_SIZE; + addr = ipa_table_addr(ipa, false, count); + + ipa_cmd_dma_shared_mem_add(trans, offset, size, addr, true); +} + +/* Reset entries in a single filter table belonging to either the AP or + * modem to refer to the zero entry. The memory region supplied will be + * for the IPv4 and IPv6 non-hashed and hashed filter tables. + */ +static int +ipa_filter_reset_table(struct ipa *ipa, const struct ipa_mem *mem, bool modem) +{ + u32 ep_mask = ipa->filter_map; + u32 count = hweight32(ep_mask); + struct gsi_trans *trans; + enum gsi_ee_id ee_id; + + if (!mem->size) + return 0; + + trans = ipa_cmd_trans_alloc(ipa, count); + if (!trans) { + dev_err(&ipa->pdev->dev, + "no transaction for %s filter reset\n", + modem ? "modem" : "AP"); + return -EBUSY; + } + + ee_id = modem ? GSI_EE_MODEM : GSI_EE_AP; + while (ep_mask) { + u32 endpoint_id = __ffs(ep_mask); + struct ipa_endpoint *endpoint; + + ep_mask ^= BIT(endpoint_id); + + endpoint = &ipa->endpoint[endpoint_id]; + if (endpoint->ee_id != ee_id) + continue; + + ipa_table_reset_add(trans, true, endpoint_id, 1, mem); + } + + gsi_trans_commit_wait(trans); + + return 0; +} + +/* Theoretically, each filter table could have more filter slots to + * update than the maximum number of commands in a transaction. So + * we do each table separately. 
+ */
+static int ipa_filter_reset(struct ipa *ipa, bool modem)
+{
+	int ret;
+
+	ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V4_FILTER], modem);
+	if (ret)
+		return ret;
+
+	ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V4_FILTER_HASHED],
+				     modem);
+	if (ret)
+		return ret;
+
+	ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V6_FILTER], modem);
+	if (ret)
+		return ret;
+
+	ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V6_FILTER_HASHED],
+				     modem);
+
+	return ret;
+}
+
+/* The AP routes and modem routes are each contiguous within the
+ * table. We can update each table with a single command, and we
+ * won't exceed the per-transaction command limit.
+ */
+static int ipa_route_reset(struct ipa *ipa, bool modem)
+{
+	struct gsi_trans *trans;
+	u16 first;
+	u16 count;
+
+	trans = ipa_cmd_trans_alloc(ipa, 4);
+	if (!trans) {
+		dev_err(&ipa->pdev->dev,
+			"no transaction for %s route reset\n",
+			modem ? "modem" : "AP");
+		return -EBUSY;
+	}
+
+	if (modem) {
+		first = IPA_ROUTE_MODEM_MIN;
+		count = IPA_ROUTE_MODEM_COUNT;
+	} else {
+		first = IPA_ROUTE_AP_MIN;
+		count = IPA_ROUTE_AP_COUNT;
+	}
+
+	ipa_table_reset_add(trans, false, first, count,
+			    &ipa->mem[IPA_MEM_V4_ROUTE]);
+	ipa_table_reset_add(trans, false, first, count,
+			    &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]);
+
+	ipa_table_reset_add(trans, false, first, count,
+			    &ipa->mem[IPA_MEM_V6_ROUTE]);
+	ipa_table_reset_add(trans, false, first, count,
+			    &ipa->mem[IPA_MEM_V6_ROUTE_HASHED]);
+
+	gsi_trans_commit_wait(trans);
+
+	return 0;
+}
+
+void ipa_table_reset(struct ipa *ipa, bool modem)
+{
+	struct device *dev = &ipa->pdev->dev;
+	const char *ee_name;
+	int ret;
+
+	ee_name = modem ? "modem" : "AP";
+
+	/* Report errors, but reset filter and route tables */
+	ret = ipa_filter_reset(ipa, modem);
+	if (ret)
+		dev_err(dev, "error %d resetting filter table for %s\n",
+			ret, ee_name);
+
+	ret = ipa_route_reset(ipa, modem);
+	if (ret)
+		dev_err(dev, "error %d resetting route table for %s\n",
+			ret, ee_name);
+}
+
+int ipa_table_hash_flush(struct ipa *ipa)
+{
+	u32 offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
+	struct gsi_trans *trans;
+	u32 val;
+
+	/* IPA version 4.2 does not support hashed tables */
+	if (ipa->version == IPA_VERSION_4_2)
+		return 0;
+
+	trans = ipa_cmd_trans_alloc(ipa, 1);
+	if (!trans) {
+		dev_err(&ipa->pdev->dev, "no transaction for hash flush\n");
+		return -EBUSY;
+	}
+
+	val = IPV4_FILTER_HASH_FLUSH | IPV6_FILTER_HASH_FLUSH;
+	val |= IPV6_ROUTER_HASH_FLUSH | IPV4_ROUTER_HASH_FLUSH;
+
+	ipa_cmd_register_write_add(trans, offset, val, val, false);
+
+	gsi_trans_commit_wait(trans);
+
+	return 0;
+}
+
+static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
+			       enum ipa_cmd_opcode opcode,
+			       const struct ipa_mem *mem,
+			       const struct ipa_mem *hash_mem)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	dma_addr_t hash_addr;
+	dma_addr_t addr;
+	u16 hash_count;
+	u16 hash_size;
+	u16 count;
+	u16 size;
+
+	/* The number of filtering endpoints determines number of entries
+	 * in the filter table. The hashed and non-hashed filter table
+	 * will have the same number of entries. The size of the route
+	 * table region determines the number of entries it has.
+	 */
+	if (filter) {
+		count = hweight32(ipa->filter_map);
+		hash_count = hash_mem->size ? count : 0;
+	} else {
+		count = mem->size / IPA_TABLE_ENTRY_SIZE;
+		hash_count = hash_mem->size / IPA_TABLE_ENTRY_SIZE;
+	}
+	size = count * IPA_TABLE_ENTRY_SIZE;
+	hash_size = hash_count * IPA_TABLE_ENTRY_SIZE;
+
+	addr = ipa_table_addr(ipa, filter, count);
+	hash_addr = ipa_table_addr(ipa, filter, hash_count);
+
+	ipa_cmd_table_init_add(trans, opcode, size, mem->offset, addr,
+			       hash_size, hash_mem->offset, hash_addr);
+}
+
+int ipa_table_setup(struct ipa *ipa)
+{
+	struct gsi_trans *trans;
+
+	trans = ipa_cmd_trans_alloc(ipa, 4);
+	if (!trans) {
+		dev_err(&ipa->pdev->dev, "no transaction for table setup\n");
+		return -EBUSY;
+	}
+
+	ipa_table_init_add(trans, false, IPA_CMD_IP_V4_ROUTING_INIT,
+			   &ipa->mem[IPA_MEM_V4_ROUTE],
+			   &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]);
+
+	ipa_table_init_add(trans, false, IPA_CMD_IP_V6_ROUTING_INIT,
+			   &ipa->mem[IPA_MEM_V6_ROUTE],
+			   &ipa->mem[IPA_MEM_V6_ROUTE_HASHED]);
+
+	ipa_table_init_add(trans, true, IPA_CMD_IP_V4_FILTER_INIT,
+			   &ipa->mem[IPA_MEM_V4_FILTER],
+			   &ipa->mem[IPA_MEM_V4_FILTER_HASHED]);
+
+	ipa_table_init_add(trans, true, IPA_CMD_IP_V6_FILTER_INIT,
+			   &ipa->mem[IPA_MEM_V6_FILTER],
+			   &ipa->mem[IPA_MEM_V6_FILTER_HASHED]);
+
+	gsi_trans_commit_wait(trans);
+
+	return 0;
+}
+
+void ipa_table_teardown(struct ipa *ipa)
+{
+	/* Nothing to do */	/* XXX Maybe reset the tables? */
+}
+
+/**
+ * ipa_filter_tuple_zero() - Zero an endpoint's hashed filter tuple
+ * @endpoint:	Endpoint whose filter hash tuple should be zeroed
+ *
+ * Endpoint must be for the AP (not modem) and support filtering. Updates
+ * the filter hash values without changing route ones.
+ */
+static void ipa_filter_tuple_zero(struct ipa_endpoint *endpoint)
+{
+	u32 endpoint_id = endpoint->endpoint_id;
+	u32 offset;
+	u32 val;
+
+	offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(endpoint_id);
+
+	val = ioread32(endpoint->ipa->reg_virt + offset);
+
+	/* Zero all filter-related fields, preserving the rest */
+	u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+static void ipa_filter_config(struct ipa *ipa, bool modem)
+{
+	enum gsi_ee_id ee_id = modem ? GSI_EE_MODEM : GSI_EE_AP;
+	u32 ep_mask = ipa->filter_map;
+
+	/* IPA version 4.2 has no hashed filter tables */
+	if (ipa->version == IPA_VERSION_4_2)
+		return;
+
+	while (ep_mask) {
+		u32 endpoint_id = __ffs(ep_mask);
+		struct ipa_endpoint *endpoint;
+
+		ep_mask ^= BIT(endpoint_id);
+
+		endpoint = &ipa->endpoint[endpoint_id];
+		if (endpoint->ee_id == ee_id)
+			ipa_filter_tuple_zero(endpoint);
+	}
+}
+
+static void ipa_filter_deconfig(struct ipa *ipa, bool modem)
+{
+	/* Nothing to do */
+}
+
+static bool ipa_route_id_modem(u32 route_id)
+{
+	return route_id >= IPA_ROUTE_MODEM_MIN &&
+		route_id <= IPA_ROUTE_MODEM_MIN + IPA_ROUTE_MODEM_COUNT - 1;
+}
+
+/**
+ * ipa_route_tuple_zero() - Zero a hashed route table entry tuple
+ * @route_id:	Route table entry whose hash tuple should be zeroed
+ *
+ * Updates the route hash values without changing filter ones.
+ */
+static void ipa_route_tuple_zero(struct ipa *ipa, u32 route_id)
+{
+	u32 offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(route_id);
+	u32 val;
+
+	val = ioread32(ipa->reg_virt + offset);
+
+	/* Zero all route-related fields, preserving the rest */
+	u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
+
+	iowrite32(val, ipa->reg_virt + offset);
+}
+
+static void ipa_route_config(struct ipa *ipa, bool modem)
+{
+	u32 route_id;
+
+	/* IPA version 4.2 has no hashed route tables */
+	if (ipa->version == IPA_VERSION_4_2)
+		return;
+
+	for (route_id = 0; route_id < IPA_ROUTE_COUNT_MAX; route_id++)
+		if (ipa_route_id_modem(route_id) == modem)
+			ipa_route_tuple_zero(ipa, route_id);
+}
+
+static void ipa_route_deconfig(struct ipa *ipa, bool modem)
+{
+	/* Nothing to do */
+}
+
+void ipa_table_config(struct ipa *ipa)
+{
+	ipa_filter_config(ipa, false);
+	ipa_filter_config(ipa, true);
+	ipa_route_config(ipa, false);
+	ipa_route_config(ipa, true);
+}
+
+void ipa_table_deconfig(struct ipa *ipa)
+{
+	ipa_route_deconfig(ipa, true);
+	ipa_route_deconfig(ipa, false);
+	ipa_filter_deconfig(ipa, true);
+	ipa_filter_deconfig(ipa, false);
+}
+
+/*
+ * Initialize a coherent DMA allocation containing initialized filter and
+ * route table data. This is used when initializing or resetting the IPA
+ * filter or route table.
+ *
+ * The first entry in a filter table contains a bitmap indicating which
+ * endpoints contain entries in the table. In addition to that first entry,
+ * there are at most IPA_FILTER_COUNT_MAX entries that follow. Filter table
+ * entries are 64 bits wide, and (other than the bitmap) contain the DMA
+ * address of a filter rule. A "zero rule" indicates no filtering, and
+ * consists of 64 bits of zeroes. When a filter table is initialized (or
+ * reset) its entries are made to refer to the zero rule.
+ *
+ * Each entry in a route table is the DMA address of a routing rule. For
+ * routing there is also a 64-bit "zero rule" that means no routing, and
+ * when a route table is initialized or reset, its entries are made to refer
+ * to the zero rule. The zero rule is shared for route and filter tables.
+ *
+ * Note that the IPA hardware requires a filter or route rule address to be
+ * aligned on a 128 byte boundary. The coherent DMA buffer we allocate here
+ * has a minimum alignment, and we place the zero rule at the base of that
+ * allocated space. In ipa_table_init() we verify the minimum DMA allocation
+ * meets our requirement.
+ *
+ *          +-------------------+
+ *      --> |     zero rule     |
+ *     /    |-------------------|
+ *     |    |    filter mask    |
+ *     |\   |-------------------|
+ *     | ---- zero rule address | \
+ *     |\   |-------------------| |
+ *     | ---- zero rule address | |   IPA_FILTER_COUNT_MAX
+ *     |    |-------------------|  >  or IPA_ROUTE_COUNT_MAX,
+ *     |          ...           |    whichever is greater
+ *      \   |-------------------| |
+ *       ---- zero rule address | /
+ *          +-------------------+
+ */
+int ipa_table_init(struct ipa *ipa)
+{
+	u32 count = max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX);
+	struct device *dev = &ipa->pdev->dev;
+	dma_addr_t addr;
+	__le64 le_addr;
+	__le64 *virt;
+	size_t size;
+
+	ipa_table_validate_build();
+
+	size = IPA_ZERO_RULE_SIZE + (1 + count) * IPA_TABLE_ENTRY_SIZE;
+	virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+
+	ipa->table_virt = virt;
+	ipa->table_addr = addr;
+
+	/* First slot is the zero rule */
+	*virt++ = 0;
+
+	/* Next is the filter table bitmap. The "soft" bitmap value
The "soft" bitmap value + * must be converted to the hardware representation by shifting + * it left one position. (Bit 0 repesents global filtering, + * which is possible but not used.) + */ + *virt++ = cpu_to_le64((u64)ipa->filter_map << 1); + + /* All the rest contain the DMA address of the zero rule */ + le_addr = cpu_to_le64(addr); + while (count--) + *virt++ = le_addr; + + return 0; +} + +void ipa_table_exit(struct ipa *ipa) +{ + u32 count = max_t(u32, 1 + IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX); + struct device *dev = &ipa->pdev->dev; + size_t size; + + size = IPA_ZERO_RULE_SIZE + (1 + count) * IPA_TABLE_ENTRY_SIZE; + + dma_free_coherent(dev, size, ipa->table_virt, ipa->table_addr); + ipa->table_addr = 0; + ipa->table_virt = NULL; +} diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h new file mode 100644 index 000000000000..64ea0221441a --- /dev/null +++ b/drivers/net/ipa/ipa_table.h @@ -0,0 +1,103 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ +#ifndef _IPA_TABLE_H_ +#define _IPA_TABLE_H_ + +#include + +struct ipa; + +/* The size of a filter or route table entry */ +#define IPA_TABLE_ENTRY_SIZE sizeof(__le64) /* Holds a physical address */ + +/* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */ +#define IPA_FILTER_COUNT_MAX 14 + +/* The maximum number of route table entries (IPv4, IPv6; hashed or not) */ +#define IPA_ROUTE_COUNT_MAX 15 + +#ifdef IPA_VALIDATE + +/** + * ipa_table_valid() - Validate route and filter table memory regions + * @ipa: IPA pointer + + * @Return: true if all regions are valid, false otherwise + */ +bool ipa_table_valid(struct ipa *ipa); + +/** + * ipa_filter_map_valid() - Validate a filter table endpoint bitmap + * @ipa: IPA pointer + * + * @Return: true if all regions are valid, false otherwise + */ +bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask); + +#else /* !IPA_VALIDATE */ + +static inline bool ipa_table_valid(struct ipa *ipa) +{ + return true; +} + +static inline bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask) +{ + return true; +} + +#endif /* !IPA_VALIDATE */ + +/** + * ipa_table_reset() - Reset filter and route tables entries to "none" + * @ipa: IPA pointer + * @modem: Whether to reset modem or AP entries + */ +void ipa_table_reset(struct ipa *ipa, bool modem); + +/** + * ipa_table_hash_flush() - Synchronize hashed filter and route updates + * @ipa: IPA pointer + */ +int ipa_table_hash_flush(struct ipa *ipa); + +/** + * ipa_table_setup() - Set up filter and route tables + * @ipa: IPA pointer + */ +int ipa_table_setup(struct ipa *ipa); + +/** + * ipa_table_teardown() - Inverse of ipa_table_setup() + * @ipa: IPA pointer + */ +void ipa_table_teardown(struct ipa *ipa); + +/** + * ipa_table_config() - Configure filter and route tables + * @ipa: IPA pointer + */ +void ipa_table_config(struct ipa *ipa); + +/** + * ipa_table_deconfig() - Inverse of ipa_table_config() + * @ipa: IPA pointer + */ +void ipa_table_deconfig(struct ipa *ipa); + +/** + * ipa_table_init() - Do early initialization of filter and route tables + * @ipa: IPA pointer + */ +int ipa_table_init(struct ipa *ipa); + +/** + * ipa_table_exit() - Inverse of ipa_table_init() + * @ipa: IPA pointer + */ +void ipa_table_exit(struct ipa *ipa); + +#endif /* _IPA_TABLE_H_ */ From patchwork Fri Mar 6 04:28:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
From patchwork Fri Mar 6 04:28:26 2020
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 222951
From: Alex Elder
To: David Miller, Arnd Bergmann
Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams, Evan Green,
 Eric Caruso, Susheel Yadav Yadagiri, Chaitanya Pratapa,
 Subash Abhinov Kasiviswanathan, Rob Herring, Mark Rutland,
 Ohad Ben-Cohen, Siddharth Gupta, netdev@vger.kernel.org,
 devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 12/17] soc: qcom: ipa: immediate commands
Date: Thu, 5 Mar 2020 22:28:26 -0600
Message-Id: <20200306042831.17827-13-elder@linaro.org>
In-Reply-To: <20200306042831.17827-1-elder@linaro.org>
References: <20200306042831.17827-1-elder@linaro.org>

One TX endpoint (per EE) is used for issuing immediate commands to
the IPA.  These commands request that the IPA hardware perform
activities beyond simple data transfers.  For example, the IPA is
able to manage routing packets among endpoints, and immediate
commands are used to configure the tables used for that routing.

Immediate commands are built on top of GSI transactions.  They are
different from normal transfers (in that they use a special endpoint,
and their "payload" is interpreted differently), so separate
functions are used to issue immediate command transactions.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/ipa_cmd.c | 680 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_cmd.h | 195 +++++++++++
 2 files changed, 875 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_cmd.c
 create mode 100644 drivers/net/ipa/ipa_cmd.h

diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
new file mode 100644
index 000000000000..d226b858742d
--- /dev/null
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -0,0 +1,680 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/bitfield.h>
+#include <linux/dma-direction.h>
+
+#include "gsi.h"
+#include "gsi_trans.h"
+#include "ipa.h"
+#include "ipa_endpoint.h"
+#include "ipa_table.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+
+/**
+ * DOC: IPA Immediate Commands
+ *
+ * The AP command TX endpoint is used to issue immediate commands to the IPA.
+ * An immediate command is generally used to request the IPA do something
+ * other than data transfer to another endpoint.
+ *
+ * Immediate commands are represented by GSI transactions just like other
+ * transfer requests; each is represented by a single GSI TRE.  Each
+ * immediate command has a well-defined format, having a payload of a known
+ * length.  This allows the transfer element's length field to be used to
+ * hold an immediate command's opcode.  The payload for a command resides
+ * in DRAM and is described by a single scatterlist entry in its
+ * transaction.  Commands do not require a transaction completion callback.
+ * To commit an immediate command transaction, either
+ * gsi_trans_commit_wait() or gsi_trans_commit_wait_timeout() is used.
+ */
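To make that flow concrete, here is a hypothetical caller of the functions
this patch adds (a sketch only; "offset", "value" and "mask" are
placeholders, and gsi_trans_commit_wait() comes from the earlier GSI
transaction patch in this series):

	struct gsi_trans *trans;

	/* Allocate a command transaction with room for one command */
	trans = ipa_cmd_trans_alloc(ipa, 1);
	if (!trans)
		return -EBUSY;

	/* Add a register write command, then commit and wait for it */
	ipa_cmd_register_write_add(trans, offset, value, mask, false);
	gsi_trans_commit_wait(trans);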
+
+/* Some commands can wait until indicated pipeline stages are clear */
+enum pipeline_clear_options {
+	pipeline_clear_hps	= 0,
+	pipeline_clear_src_grp	= 1,
+	pipeline_clear_full	= 2,
+};
+
+/* IPA_CMD_IP_V{4,6}_{FILTER,ROUTING}_INIT */
+
+struct ipa_cmd_hw_ip_fltrt_init {
+	__le64 hash_rules_addr;
+	__le64 flags;
+	__le64 nhash_rules_addr;
+};
+
+/* Field masks for ipa_cmd_hw_ip_fltrt_init structure fields */
+#define IP_FLTRT_FLAGS_HASH_SIZE_FMASK		GENMASK_ULL(11, 0)
+#define IP_FLTRT_FLAGS_HASH_ADDR_FMASK		GENMASK_ULL(27, 12)
+#define IP_FLTRT_FLAGS_NHASH_SIZE_FMASK		GENMASK_ULL(39, 28)
+#define IP_FLTRT_FLAGS_NHASH_ADDR_FMASK		GENMASK_ULL(55, 40)
+
+/* IPA_CMD_HDR_INIT_LOCAL */
+
+struct ipa_cmd_hw_hdr_init_local {
+	__le64 hdr_table_addr;
+	__le32 flags;
+	__le32 reserved;
+};
+
+/* Field masks for ipa_cmd_hw_hdr_init_local structure fields */
+#define HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK	GENMASK(11, 0)
+#define HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK	GENMASK(27, 12)
+
+/* IPA_CMD_REGISTER_WRITE */
+
+/* For IPA v4.0+, this opcode gets modified with pipeline clear options */
+
+#define REGISTER_WRITE_OPCODE_SKIP_CLEAR_FMASK		GENMASK(8, 8)
+#define REGISTER_WRITE_OPCODE_CLEAR_OPTION_FMASK	GENMASK(10, 9)
+
+struct ipa_cmd_register_write {
+	__le16 flags;		/* Unused/reserved for IPA v3.5.1 */
+	__le16 offset;
+	__le32 value;
+	__le32 value_mask;
+	__le32 clear_options;	/* Unused/reserved for IPA v4.0+ */
+};
+
+/* Field masks for ipa_cmd_register_write structure fields */
+/* The next field is present for IPA v4.0 and above */
+#define REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK	GENMASK(14, 11)
+/* The next field is present for IPA v3.5.1 only */
+#define REGISTER_WRITE_FLAGS_SKIP_CLEAR_FMASK	GENMASK(15, 15)
+
+/* The next field and its values are present for IPA v3.5.1 only */
+#define REGISTER_WRITE_CLEAR_OPTIONS_FMASK	GENMASK(1, 0)
+
+/* IPA_CMD_IP_PACKET_INIT */
+
+struct ipa_cmd_ip_packet_init {
+	u8 dest_endpoint;
+	u8 reserved[7];
+};
+
+/* Field masks for ipa_cmd_ip_packet_init dest_endpoint field */
+#define IPA_PACKET_INIT_DEST_ENDPOINT_FMASK	GENMASK(4, 0)
+
+/* IPA_CMD_DMA_TASK_32B_ADDR */
+
+/* This opcode gets modified with a DMA operation count */
+
+#define DMA_TASK_32B_ADDR_OPCODE_COUNT_FMASK	GENMASK(15, 8)
+
+struct ipa_cmd_hw_dma_task_32b_addr {
+	__le16 flags;
+	__le16 size;
+	__le32 addr;
+	__le16 packet_size;
+	u8 reserved[6];
+};
+
+/* Field masks for ipa_cmd_hw_dma_task_32b_addr flags field */
+#define DMA_TASK_32B_ADDR_FLAGS_SW_RSVD_FMASK	GENMASK(10, 0)
+#define DMA_TASK_32B_ADDR_FLAGS_CMPLT_FMASK	GENMASK(11, 11)
+#define DMA_TASK_32B_ADDR_FLAGS_EOF_FMASK	GENMASK(12, 12)
+#define DMA_TASK_32B_ADDR_FLAGS_FLSH_FMASK	GENMASK(13, 13)
+#define DMA_TASK_32B_ADDR_FLAGS_LOCK_FMASK	GENMASK(14, 14)
+#define DMA_TASK_32B_ADDR_FLAGS_UNLOCK_FMASK	GENMASK(15, 15)
+
+/* IPA_CMD_DMA_SHARED_MEM */
+
+/* For IPA v4.0+, this opcode gets modified with pipeline clear options */
+
+#define DMA_SHARED_MEM_OPCODE_SKIP_CLEAR_FMASK		GENMASK(8, 8)
+#define DMA_SHARED_MEM_OPCODE_CLEAR_OPTION_FMASK	GENMASK(10, 9)
+
+struct ipa_cmd_hw_dma_mem_mem {
+	__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
+	__le16 size;
+	__le16 local_addr;
+	__le16 flags;
+	__le64 system_addr;
+};
+
+/* Flag allowing atomic clear of target region after reading data (v4.0+) */
+#define DMA_SHARED_MEM_CLEAR_AFTER_READ		GENMASK(15, 15)
+
+/* Field masks for ipa_cmd_hw_dma_mem_mem structure fields */
+#define DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK	GENMASK(0, 0)
+/* The next two fields are present for IPA v3.5.1 only */
+#define DMA_SHARED_MEM_FLAGS_SKIP_CLEAR_FMASK	GENMASK(1, 1)
+#define DMA_SHARED_MEM_FLAGS_CLEAR_OPTIONS_FMASK GENMASK(3, 2)
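As an illustration of how these field masks are used (a sketch only;
u64_encode_bits() is the <linux/bitfield.h> helper, and the size and
offset values are made up):

	u64 val;

	/* Pack a 120-byte non-hashed table at IPA-local offset 0x1f0
	 * into the 64-bit flags word of a filter/route init command
	 */
	val = u64_encode_bits(0x1f0, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
	val |= u64_encode_bits(120, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
	/* 120 now occupies bits 39:28, and 0x1f0 occupies bits 55:40 */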
+
+/* IPA_CMD_IP_PACKET_TAG_STATUS */
+
+struct ipa_cmd_ip_packet_tag_status {
+	__le64 tag;
+};
+
+#define IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 16)
+
+/* Immediate command payload */
+union ipa_cmd_payload {
+	struct ipa_cmd_hw_ip_fltrt_init table_init;
+	struct ipa_cmd_hw_hdr_init_local hdr_init_local;
+	struct ipa_cmd_register_write register_write;
+	struct ipa_cmd_ip_packet_init ip_packet_init;
+	struct ipa_cmd_hw_dma_task_32b_addr dma_task_32b_addr;
+	struct ipa_cmd_hw_dma_mem_mem dma_shared_mem;
+	struct ipa_cmd_ip_packet_tag_status ip_packet_tag_status;
+};
+
+static void ipa_cmd_validate_build(void)
+{
+	/* The sizes of filter and route tables need to fit into fields
+	 * in the ipa_cmd_hw_ip_fltrt_init structure.  Although hashed
+	 * tables might not be used, non-hashed and hashed tables have
+	 * the same maximum size.  IPv4 and IPv6 filter tables have the
+	 * same number of entries, as do IPv4 and IPv6 route tables.
+	 */
+#define TABLE_SIZE	(TABLE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE)
+#define TABLE_COUNT_MAX	max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX)
+	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK));
+	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
+#undef TABLE_COUNT_MAX
+#undef TABLE_SIZE
+}
+
+#ifdef IPA_VALIDATE
+
+/* Validate a memory region holding a table */
+bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+			 bool route, bool ipv6, bool hashed)
+{
+	struct device *dev = &ipa->pdev->dev;
+	u32 offset_max;
+
+	offset_max = hashed ? field_max(IP_FLTRT_FLAGS_HASH_ADDR_FMASK)
+			    : field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
+	if (mem->offset > offset_max ||
+	    ipa->mem_offset > offset_max - mem->offset) {
+		dev_err(dev, "IPv%c %s%s table region offset too large "
+			"(0x%04x + 0x%04x > 0x%04x)\n",
+			ipv6 ? '6' : '4', hashed ? "hashed " : "",
+			route ? "route" : "filter",
+			ipa->mem_offset, mem->offset, offset_max);
+		return false;
+	}
+
+	if (mem->offset > ipa->mem_size ||
+	    mem->size > ipa->mem_size - mem->offset) {
+		dev_err(dev, "IPv%c %s%s table region out of range "
+			"(0x%04x + 0x%04x > 0x%04x)\n",
+			ipv6 ? '6' : '4', hashed ? "hashed " : "",
+			route ? "route" : "filter",
+			mem->offset, mem->size, ipa->mem_size);
+		return false;
+	}
+
+	return true;
+}
+
+/* Validate the memory region that holds headers */
+static bool ipa_cmd_header_valid(struct ipa *ipa)
+{
+	const struct ipa_mem *mem = &ipa->mem[IPA_MEM_MODEM_HEADER];
+	struct device *dev = &ipa->pdev->dev;
+	u32 offset_max;
+	u32 size_max;
+	u32 size;
+
+	offset_max = field_max(HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
+	if (mem->offset > offset_max ||
+	    ipa->mem_offset > offset_max - mem->offset) {
+		dev_err(dev, "header table region offset too large "
+			"(0x%04x + 0x%04x > 0x%04x)\n",
+			ipa->mem_offset, mem->offset, offset_max);
+		return false;
+	}
+
+	size_max = field_max(HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
+	size = ipa->mem[IPA_MEM_MODEM_HEADER].size;
+	size += ipa->mem[IPA_MEM_AP_HEADER].size;
+	if (size > size_max) {
+		dev_err(dev, "header table region size too large "
+			"(0x%04x > 0x%04x)\n", size, size_max);
+		return false;
+	}
+	if (mem->offset > ipa->mem_size || size > ipa->mem_size - mem->offset) {
+		dev_err(dev, "header table region out of range "
+			"(0x%04x + 0x%04x > 0x%04x)\n",
+			mem->offset, size, ipa->mem_size);
+		return false;
+	}
+
+	return true;
+}
+
+/* Indicate whether an offset can be used with a register_write command */
+static bool ipa_cmd_register_write_offset_valid(struct ipa *ipa,
+						const char *name, u32 offset)
+{
+	struct ipa_cmd_register_write *payload;
+	struct device *dev = &ipa->pdev->dev;
+	u32 offset_max;
+	u32 bit_count;
+
+	/* The maximum offset in a register_write immediate command depends
+	 * on the version of IPA.  IPA v3.5.1 supports a 16 bit offset, but
+	 * newer versions allow some additional high-order bits.
+	 */
+	bit_count = BITS_PER_BYTE * sizeof(payload->offset);
+	if (ipa->version != IPA_VERSION_3_5_1)
+		bit_count += hweight32(REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK);
+	BUILD_BUG_ON(bit_count > 32);
+	offset_max = ~0U >> (32 - bit_count);
+
+	if (offset > offset_max || ipa->mem_offset > offset_max - offset) {
+		dev_err(dev, "%s offset too large "
+			"(0x%04x + 0x%04x > 0x%04x)\n",
+			name, ipa->mem_offset, offset, offset_max);
+		return false;
+	}
+
+	return true;
+}
+
+/* Check whether offsets passed to register_write are valid */
+static bool ipa_cmd_register_write_valid(struct ipa *ipa)
+{
+	const char *name;
+	u32 offset;
+
+	offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
+	name = "filter/route hash flush";
+	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
+		return false;
+
+	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT);
+	name = "maximal endpoint status";
+	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
+		return false;
+
+	return true;
+}
+
+bool ipa_cmd_data_valid(struct ipa *ipa)
+{
+	if (!ipa_cmd_header_valid(ipa))
+		return false;
+
+	if (!ipa_cmd_register_write_valid(ipa))
+		return false;
+
+	return true;
+}
+
+#endif /* IPA_VALIDATE */
+
+int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct device *dev = channel->gsi->dev;
+	int ret;
+
+	/* This is as good a place as any to validate build constants */
+	ipa_cmd_validate_build();
+
+	/* Even though command payloads are allocated one at a time,
+	 * a single transaction can require up to tlv_count of them,
+	 * so we treat them as if that many can be allocated at once.
+	 */
+	ret = gsi_trans_pool_init_dma(dev, &trans_info->cmd_pool,
+				      sizeof(union ipa_cmd_payload),
+				      tre_max, channel->tlv_count);
+	if (ret)
+		return ret;
+
+	/* Each TRE needs a command info structure */
+	ret = gsi_trans_pool_init(&trans_info->info_pool,
+				  sizeof(struct ipa_cmd_info),
+				  tre_max, channel->tlv_count);
+	if (ret)
+		gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
+
+	return ret;
+}
+
+void ipa_cmd_pool_exit(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct device *dev = channel->gsi->dev;
+
+	gsi_trans_pool_exit(&trans_info->info_pool);
+	gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
+}
+
+static union ipa_cmd_payload *
+ipa_cmd_payload_alloc(struct ipa *ipa, dma_addr_t *addr)
+{
+	struct gsi_trans_info *trans_info;
+	struct ipa_endpoint *endpoint;
+
+	endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
+	trans_info = &ipa->gsi.channel[endpoint->channel_id].trans_info;
+
+	return gsi_trans_pool_alloc_dma(&trans_info->cmd_pool, addr);
+}
+
+/* If hash_size is 0, hash_offset and hash_addr are ignored. */
+void ipa_cmd_table_init_add(struct gsi_trans *trans,
+			    enum ipa_cmd_opcode opcode, u16 size, u32 offset,
+			    dma_addr_t addr, u16 hash_size, u32 hash_offset,
+			    dma_addr_t hash_addr)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum dma_data_direction direction = DMA_TO_DEVICE;
+	struct ipa_cmd_hw_ip_fltrt_init *payload;
+	union ipa_cmd_payload *cmd_payload;
+	dma_addr_t payload_addr;
+	u64 val;
+
+	/* Record the non-hash table offset and size */
+	offset += ipa->mem_offset;
+	val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
+	val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
+
+	/* The hash table offset and address are zero if its size is 0 */
+	if (hash_size) {
+		/* Record the hash table offset and size */
+		hash_offset += ipa->mem_offset;
+		val |= u64_encode_bits(hash_offset,
+				       IP_FLTRT_FLAGS_HASH_ADDR_FMASK);
+		val |= u64_encode_bits(hash_size,
+				       IP_FLTRT_FLAGS_HASH_SIZE_FMASK);
+	}
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->table_init;
+
+	/* Fill in all offsets and sizes and the non-hash table address */
+	if (hash_size)
+		payload->hash_rules_addr = cpu_to_le64(hash_addr);
+	payload->flags = cpu_to_le64(val);
+	payload->nhash_rules_addr = cpu_to_le64(addr);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
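A hypothetical call (not taken from the series; "trans", "route_offset"
and "hash_offset" are placeholders) might initialize an IPv4 route table
whose entries all point at the shared zero rule:

	u16 size = IPA_ROUTE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE;	/* 120 */

	ipa_cmd_table_init_add(trans, IPA_CMD_IP_V4_ROUTING_INIT, size,
			       route_offset, ipa->table_addr,
			       size, hash_offset, ipa->table_addr);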
+
+/* Initialize header space in IPA-local memory */
+void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
+				dma_addr_t addr)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL;
+	enum dma_data_direction direction = DMA_TO_DEVICE;
+	struct ipa_cmd_hw_hdr_init_local *payload;
+	union ipa_cmd_payload *cmd_payload;
+	dma_addr_t payload_addr;
+	u32 flags;
+
+	offset += ipa->mem_offset;
+
+	/* With this command we tell the IPA where in its local memory the
+	 * header tables reside.  The content of the buffer provided is
+	 * also written via DMA into that space.  The IPA hardware owns
+	 * the table, but the AP must initialize it.
+	 */
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->hdr_init_local;
+
+	payload->hdr_table_addr = cpu_to_le64(addr);
+	flags = u32_encode_bits(size, HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
+	flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
+	payload->flags = cpu_to_le32(flags);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
+
+void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
+				u32 mask, bool clear_full)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa_cmd_register_write *payload;
+	union ipa_cmd_payload *cmd_payload;
+	u32 opcode = IPA_CMD_REGISTER_WRITE;
+	dma_addr_t payload_addr;
+	u32 clear_option;
+	u32 options;
+	u16 flags;
+
+	/* pipeline_clear_src_grp is not used */
+	clear_option = clear_full ? pipeline_clear_full : pipeline_clear_hps;
+
+	if (ipa->version != IPA_VERSION_3_5_1) {
+		u16 offset_high;
+		u32 val;
+
+		/* Opcode encodes pipeline clear options */
+		/* SKIP_CLEAR is always 0 (don't skip pipeline clear) */
+		val = u16_encode_bits(clear_option,
+				      REGISTER_WRITE_OPCODE_CLEAR_OPTION_FMASK);
+		opcode |= val;
+
+		/* Extract the high 4 bits of the offset and encode them
+		 * into the flags field; only the low 16 bits go into the
+		 * payload's offset field.
+		 */
+		offset_high = (u16)u32_get_bits(offset, GENMASK(19, 16));
+		offset &= (1 << 16) - 1;
+		flags = u16_encode_bits(offset_high,
+					REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK);
+		options = 0;	/* reserved */
+	} else {
+		flags = 0;	/* SKIP_CLEAR flag is always 0 */
+		options = u16_encode_bits(clear_option,
+					  REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
+	}
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->register_write;
+
+	payload->flags = cpu_to_le16(flags);
+	payload->offset = cpu_to_le16((u16)offset);
+	payload->value = cpu_to_le32(value);
+	payload->value_mask = cpu_to_le32(mask);
+	payload->clear_options = cpu_to_le32(options);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  DMA_NONE, opcode);
+}
+
+/* Skip IP packet processing on the next data transfer on a TX channel */
+static void ipa_cmd_ip_packet_init_add(struct gsi_trans *trans, u8 endpoint_id)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_INIT;
+	enum dma_data_direction direction = DMA_TO_DEVICE;
+	struct ipa_cmd_ip_packet_init *payload;
+	union ipa_cmd_payload *cmd_payload;
+	dma_addr_t payload_addr;
+
+	/* assert(endpoint_id <
+		  field_max(IPA_PACKET_INIT_DEST_ENDPOINT_FMASK)); */
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->ip_packet_init;
+
+	payload->dest_endpoint =
+		u8_encode_bits(endpoint_id,
+			       IPA_PACKET_INIT_DEST_ENDPOINT_FMASK);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
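Concretely, for IPA v4.0+ the writable offset space used by
ipa_cmd_register_write_add() above is 20 bits: the payload's 16-bit offset
field plus the four OFFSET_HIGH flag bits.  This is the same 20-bit limit
that ipa_cmd_register_write_offset_valid() computes as offset_max.  A
sketch with a made-up register offset:

	u32 offset = 0x2345c;			/* hypothetical 20-bit offset */
	u32 offset_max = ~0U >> (32 - 20);	/* 0xfffff */
	u16 offset_high;

	offset_high = (u16)u32_get_bits(offset, GENMASK(19, 16));	/* 0x2 */
	offset &= (1 << 16) - 1;					/* 0x345c */
	/* 0x2 is encoded into flags bits 14:11; 0x345c goes into
	 * payload->offset
	 */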
+
+/* Use a 32-bit DMA command to zero a block of memory */
+void ipa_cmd_dma_task_32b_addr_add(struct gsi_trans *trans, u16 size,
+				   dma_addr_t addr, bool toward_ipa)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_TASK_32B_ADDR;
+	struct ipa_cmd_hw_dma_task_32b_addr *payload;
+	union ipa_cmd_payload *cmd_payload;
+	enum dma_data_direction direction;
+	dma_addr_t payload_addr;
+	u16 flags;
+
+	/* assert(addr <= U32_MAX); */
+	addr &= GENMASK_ULL(31, 0);
+
+	/* The opcode encodes the number of DMA operations in the high byte */
+	opcode |= u16_encode_bits(1, DMA_TASK_32B_ADDR_OPCODE_COUNT_FMASK);
+
+	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+
+	/* complete: 0 = don't interrupt; eof: 0 = don't assert eot */
+	flags = DMA_TASK_32B_ADDR_FLAGS_FLSH_FMASK;
+	/* lock: 0 = don't lock endpoint; unlock: 0 = don't unlock */
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->dma_task_32b_addr;
+
+	payload->flags = cpu_to_le16(flags);
+	payload->size = cpu_to_le16(size);
+	payload->addr = cpu_to_le32((u32)addr);
+	payload->packet_size = cpu_to_le16(size);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
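Working out the opcode modification for the single DMA operation used
above (a sketch of the arithmetic only):

	/* IPA_CMD_DMA_TASK_32B_ADDR is 17 (0x11); a count of 1 is
	 * encoded into bits 15:8 of the opcode
	 */
	u16 opcode = 0x11 | (1 << 8);	/* 0x0111 */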
+
+/* Use a DMA command to read or write a block of IPA-resident memory */
+void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset, u16 size,
+				dma_addr_t addr, bool toward_ipa)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM;
+	struct ipa_cmd_hw_dma_mem_mem *payload;
+	union ipa_cmd_payload *cmd_payload;
+	enum dma_data_direction direction;
+	dma_addr_t payload_addr;
+	u16 flags;
+
+	/* size and offset must fit in 16 bit fields */
+	/* assert(size > 0 && size <= U16_MAX); */
+	/* assert(offset <= U16_MAX && ipa->mem_offset <= U16_MAX - offset); */
+
+	offset += ipa->mem_offset;
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->dma_shared_mem;
+
+	/* payload->clear_after_read was reserved prior to IPA v4.0.  It's
+	 * never needed for current code, so it's 0 regardless of version.
+	 */
+	payload->size = cpu_to_le16(size);
+	payload->local_addr = cpu_to_le16(offset);
+	/* payload->flags:
+	 *   direction: 0 = write to IPA, 1 = read from IPA
+	 * Starting at v4.0 these are reserved; either way, all zero:
+	 *   pipeline clear: 0 = wait for pipeline clear (don't skip)
+	 *   clear_options: 0 = pipeline_clear_hps
+	 * Instead, for v4.0+ these are encoded in the opcode.  But again
+	 * since both values are 0 we won't bother OR'ing them in.
+	 */
+	flags = toward_ipa ? 0 : DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK;
+	payload->flags = cpu_to_le16(flags);
+	payload->system_addr = cpu_to_le64(addr);
+
+	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
+
+static void ipa_cmd_ip_tag_status_add(struct gsi_trans *trans, u64 tag)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_TAG_STATUS;
+	enum dma_data_direction direction = DMA_TO_DEVICE;
+	struct ipa_cmd_ip_packet_tag_status *payload;
+	union ipa_cmd_payload *cmd_payload;
+	dma_addr_t payload_addr;
+
+	/* assert(tag <= field_max(IP_PACKET_TAG_STATUS_TAG_FMASK)); */
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->ip_packet_tag_status;
+
+	/* The tag field is little-endian; convert after encoding */
+	payload->tag = cpu_to_le64(u64_encode_bits(tag,
+					IP_PACKET_TAG_STATUS_TAG_FMASK));
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
+
+/* Issue a small command TX data transfer */
+static void ipa_cmd_transfer_add(struct gsi_trans *trans, u16 size)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum dma_data_direction direction = DMA_TO_DEVICE;
+	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
+	union ipa_cmd_payload *payload;
+	dma_addr_t payload_addr;
+
+	/* assert(size <= sizeof(*payload)); */
+
+	/* Just transfer a zero-filled payload structure */
+	payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
+
+void ipa_cmd_tag_process_add(struct gsi_trans *trans)
+{
+	ipa_cmd_register_write_add(trans, 0, 0, 0, true);
+#if 1
+	/* Reference these functions to avoid a compile error */
+	(void)ipa_cmd_ip_packet_init_add;
+	(void)ipa_cmd_ip_tag_status_add;
+	(void)ipa_cmd_transfer_add;
+#else
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa_endpoint *endpoint;
+
+	endpoint = ipa->name_map[IPA_ENDPOINT_AP_LAN_RX];
+	ipa_cmd_ip_packet_init_add(trans, endpoint->endpoint_id);
+
+	ipa_cmd_ip_tag_status_add(trans, 0xcba987654321);
+
+	ipa_cmd_transfer_add(trans, 4);
+#endif
+}
+
+/* Returns the number of commands required for the tag process */
+u32 ipa_cmd_tag_process_count(void)
+{
+	return 4;
+}
+
+static struct ipa_cmd_info *
+ipa_cmd_info_alloc(struct ipa_endpoint *endpoint, u32 tre_count)
+{
+	struct gsi_channel *channel;
+
+	channel = &endpoint->ipa->gsi.channel[endpoint->channel_id];
+
+	return gsi_trans_pool_alloc(&channel->trans_info.info_pool, tre_count);
+}
+
+/* Allocate a transaction for the command TX endpoint */
+struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count)
+{
+	struct ipa_endpoint *endpoint;
+	struct gsi_trans *trans;
+
+	endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
+
+	trans = gsi_channel_trans_alloc(&ipa->gsi, endpoint->channel_id,
+					tre_count, DMA_NONE);
+	if (trans)
+		trans->info = ipa_cmd_info_alloc(endpoint, tre_count);
+
+	return trans;
+}
diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h
new file mode 100644
index 000000000000..4917525b3a47
--- /dev/null
+++ b/drivers/net/ipa/ipa_cmd.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_CMD_H_
+#define _IPA_CMD_H_
+
+#include <linux/types.h>
+#include <linux/dma-direction.h>
+
+struct sk_buff;
+struct scatterlist;
+
+struct ipa;
+struct ipa_mem;
+struct gsi_trans;
+struct gsi_channel;
+
+/**
+ * enum ipa_cmd_opcode: IPA immediate commands
+ *
+ * All immediate commands are issued using the AP command TX endpoint.
+ * The numeric values here are the opcodes for IPA v3.5.1 hardware.
+ *
+ * IPA_CMD_NONE is a special (invalid) value that's used to indicate
+ * a request is *not* an immediate command.
+ */
+enum ipa_cmd_opcode {
+	IPA_CMD_NONE			= 0,
+	IPA_CMD_IP_V4_FILTER_INIT	= 3,
+	IPA_CMD_IP_V6_FILTER_INIT	= 4,
+	IPA_CMD_IP_V4_ROUTING_INIT	= 7,
+	IPA_CMD_IP_V6_ROUTING_INIT	= 8,
+	IPA_CMD_HDR_INIT_LOCAL		= 9,
+	IPA_CMD_REGISTER_WRITE		= 12,
+	IPA_CMD_IP_PACKET_INIT		= 16,
+	IPA_CMD_DMA_TASK_32B_ADDR	= 17,
+	IPA_CMD_DMA_SHARED_MEM		= 19,
+	IPA_CMD_IP_PACKET_TAG_STATUS	= 20,
+};
+
+/**
+ * struct ipa_cmd_info - information needed for an IPA immediate command
+ *
+ * @opcode: The command opcode.
+ * @direction: Direction of data transfer for DMA commands
+ */
+struct ipa_cmd_info {
+	enum ipa_cmd_opcode opcode;
+	enum dma_data_direction direction;
+};
+
+#ifdef IPA_VALIDATE
+
+/**
+ * ipa_cmd_table_valid() - Validate a memory region holding a table
+ * @ipa: - IPA pointer
+ * @mem: - IPA memory region descriptor
+ * @route: - Whether the region holds a route or filter table
+ * @ipv6: - Whether the table is for IPv6 or IPv4
+ * @hashed: - Whether the table is hashed or non-hashed
+ *
+ * @Return: true if region is valid, false otherwise
+ */
+bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+			 bool route, bool ipv6, bool hashed);
+
+/**
+ * ipa_cmd_data_valid() - Validate command-related configuration
+ * @ipa: - IPA pointer
+ *
+ * @Return: true if assumptions required for command are valid
+ */
+bool ipa_cmd_data_valid(struct ipa *ipa);
+
+#else /* !IPA_VALIDATE */
+
+static inline bool ipa_cmd_table_valid(struct ipa *ipa,
+				       const struct ipa_mem *mem, bool route,
+				       bool ipv6, bool hashed)
+{
+	return true;
+}
+
+static inline bool ipa_cmd_data_valid(struct ipa *ipa)
+{
+	return true;
+}
+
+#endif /* !IPA_VALIDATE */
+
+/**
+ * ipa_cmd_pool_init() - initialize command channel pools
+ * @channel: AP->IPA command TX GSI channel pointer
+ * @tre_max: Number of pool elements to allocate
+ *
+ * @Return: 0 if successful, or a negative error code
+ */
+int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max);
+
+/**
+ * ipa_cmd_pool_exit() - Inverse of ipa_cmd_pool_init()
+ * @channel: AP->IPA command TX GSI channel pointer
+ */
+void ipa_cmd_pool_exit(struct gsi_channel *channel);
+
+/**
+ * ipa_cmd_table_init_add() - Add table init command to a transaction
+ * @trans: GSI transaction
+ * @opcode: IPA immediate command opcode
+ * @size: Size of non-hashed routing table memory
+ * @offset: Offset in IPA shared memory of non-hashed routing table memory
+ * @addr: DMA address of non-hashed table data to write
+ * @hash_size: Size of hashed routing table memory
+ * @hash_offset: Offset in IPA shared memory of hashed routing table memory
+ * @hash_addr: DMA address of hashed table data to write
+ *
+ * If hash_size is 0, hash_offset and hash_addr are ignored.
+ */
+void ipa_cmd_table_init_add(struct gsi_trans *trans,
+			    enum ipa_cmd_opcode opcode, u16 size, u32 offset,
+			    dma_addr_t addr, u16 hash_size, u32 hash_offset,
+			    dma_addr_t hash_addr);
+
+/**
+ * ipa_cmd_hdr_init_local_add() - Add a header init command to a transaction
+ * @trans: GSI transaction
+ * @offset: Offset of header memory in IPA local space
+ * @size: Size of header memory
+ * @addr: DMA address of buffer to be written from
+ *
+ * Defines and fills the location in IPA memory to use for headers.
+ */
+void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
+				dma_addr_t addr);
+
+/**
+ * ipa_cmd_register_write_add() - Add a register write command to a transaction
+ * @trans: GSI transaction
+ * @offset: Offset of register to be written
+ * @value: Value to be written
+ * @mask: Mask of bits in register to update with bits from value
+ * @clear_full: Pipeline clear option; true means full pipeline clear
+ */
+void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
+				u32 mask, bool clear_full);
+
+/**
+ * ipa_cmd_dma_task_32b_addr_add() - Add a 32-bit DMA command to a transaction
+ * @trans: GSI transaction
+ * @size: Number of bytes of memory to be transferred
+ * @addr: DMA address of buffer to be read into or written from
+ * @toward_ipa: true means write to IPA memory; false means read
+ */
+void ipa_cmd_dma_task_32b_addr_add(struct gsi_trans *trans, u16 size,
+				   dma_addr_t addr, bool toward_ipa);
+
+/**
+ * ipa_cmd_dma_shared_mem_add() - Add a DMA memory command to a transaction
+ * @trans: GSI transaction
+ * @offset: Offset of IPA memory to be read or written
+ * @size: Number of bytes of memory to be transferred
+ * @addr: DMA address of buffer to be read into or written from
+ * @toward_ipa: true means write to IPA memory; false means read
+ */
+void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset,
+				u16 size, dma_addr_t addr, bool toward_ipa);
+
+/**
+ * ipa_cmd_tag_process_add() - Add IPA tag process commands to a transaction
+ * @trans: GSI transaction
+ */
+void ipa_cmd_tag_process_add(struct gsi_trans *trans);
+
+/**
+ * ipa_cmd_tag_process_count() - Number of commands in a tag process
+ *
+ * @Return: The number of elements to allocate in a transaction
+ *	    to hold tag process commands
+ */
+u32 ipa_cmd_tag_process_count(void);
+
+/**
+ * ipa_cmd_trans_alloc() - Allocate a transaction for the command TX endpoint
+ * @ipa: IPA pointer
+ * @tre_count: Number of elements in the transaction
+ *
+ * @Return: A GSI transaction structure, or a null pointer if all
+ *	    available transactions are in use
+ */
+struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count);
+
+#endif /* _IPA_CMD_H_ */
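The relationship between ipa_cmd_tag_process_add() and
ipa_cmd_tag_process_count() deserves a sketch: the count is 4 because the
eventual tag sequence (the disabled #else branch in ipa_cmd.c) adds four
commands to one transaction.  Illustrative only; the endpoint and tag
values are placeholders:

	trans = ipa_cmd_trans_alloc(ipa, ipa_cmd_tag_process_count());
	ipa_cmd_register_write_add(trans, 0, 0, 0, true); /* full clear */
	ipa_cmd_ip_packet_init_add(trans, endpoint_id);
	ipa_cmd_ip_tag_status_add(trans, tag);
	ipa_cmd_transfer_add(trans, 4);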
From patchwork Fri Mar 6 04:28:29 2020
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 222949

From: Alex Elder
To: David Miller, Arnd Bergmann
Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams, Evan Green,
 Eric Caruso, Susheel Yadav Yadagiri, Chaitanya Pratapa,
 Subash Abhinov Kasiviswanathan, Rob Herring, Mark Rutland,
 Ohad Ben-Cohen, Siddharth Gupta, netdev@vger.kernel.org,
 devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 15/17] soc: qcom: ipa: support build of IPA code
Date: Thu, 5 Mar 2020 22:28:29 -0600
Message-Id: <20200306042831.17827-16-elder@linaro.org>
In-Reply-To: <20200306042831.17827-1-elder@linaro.org>
References: <20200306042831.17827-1-elder@linaro.org>

Add build and Kconfig support for the Qualcomm IPA driver.
Signed-off-by: Alex Elder
---
 drivers/net/Kconfig      |  2 ++
 drivers/net/Makefile     |  1 +
 drivers/net/ipa/Kconfig  | 19 +++++++++++++++++++
 drivers/net/ipa/Makefile | 12 ++++++++++++
 4 files changed, 34 insertions(+)
 create mode 100644 drivers/net/ipa/Kconfig
 create mode 100644 drivers/net/ipa/Makefile

diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 66e410e58c8e..02565bc2be8a 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -444,6 +444,8 @@ source "drivers/net/fddi/Kconfig"
 
 source "drivers/net/hippi/Kconfig"
 
+source "drivers/net/ipa/Kconfig"
+
 config NET_SB1000
 	tristate "General Instruments Surfboard 1000"
 	depends on PNP
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 65967246f240..94b60800887a 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -47,6 +47,7 @@ obj-$(CONFIG_ETHERNET) += ethernet/
 obj-$(CONFIG_FDDI) += fddi/
 obj-$(CONFIG_HIPPI) += hippi/
 obj-$(CONFIG_HAMRADIO) += hamradio/
+obj-$(CONFIG_QCOM_IPA) += ipa/
 obj-$(CONFIG_PLIP) += plip/
 obj-$(CONFIG_PPP) += ppp/
 obj-$(CONFIG_PPP_ASYNC) += ppp/
diff --git a/drivers/net/ipa/Kconfig b/drivers/net/ipa/Kconfig
new file mode 100644
index 000000000000..b8cb7cadbf75
--- /dev/null
+++ b/drivers/net/ipa/Kconfig
@@ -0,0 +1,19 @@
+config QCOM_IPA
+	tristate "Qualcomm IPA support"
+	depends on ARCH_QCOM && 64BIT && NET
+	select QCOM_QMI_HELPERS
+	select QCOM_MDT_LOADER
+	default QCOM_Q6V5_COMMON
+	help
+	  Choose Y or M here to include support for the Qualcomm
+	  IP Accelerator (IPA), a hardware block present in some
+	  Qualcomm SoCs.  The IPA is a programmable protocol processor
+	  that is capable of generic hardware handling of IP packets,
+	  including routing, filtering, and NAT.  Currently the IPA
+	  driver supports only basic transport of network traffic
+	  between the AP and modem, on the Qualcomm SDM845 SoC.
+
+	  Note that if selected, the selection type must match that
+	  of QCOM_Q6V5_COMMON (Y or M).
+
+	  If unsure, say N.
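Based on those dependencies, a kernel configuration fragment enabling the
driver as a module might look like this (a sketch; QCOM_QMI_HELPERS and
QCOM_MDT_LOADER are selected automatically by the entry above):

	CONFIG_ARCH_QCOM=y
	CONFIG_NET=y
	CONFIG_QCOM_IPA=m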
diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
new file mode 100644
index 000000000000..afe5df1e6eee
--- /dev/null
+++ b/drivers/net/ipa/Makefile
@@ -0,0 +1,12 @@
+# Un-comment the next line if you want to validate configuration data
+#ccflags-y += -DIPA_VALIDATE
+
+obj-$(CONFIG_QCOM_IPA)	+= ipa.o
+
+ipa-y	:=	ipa_main.o ipa_clock.o ipa_reg.o ipa_mem.o \
+		ipa_table.o ipa_interrupt.o gsi.o gsi_trans.o \
+		ipa_gsi.o ipa_smp2p.o ipa_uc.o \
+		ipa_endpoint.o ipa_cmd.o ipa_modem.o \
+		ipa_qmi.o ipa_qmi_msg.o
+
+ipa-y	+=	ipa_data-sdm845.o ipa_data-sc7180.o
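The commented-out ccflags line is how the IPA_VALIDATE-guarded checks in
ipa_cmd.[ch] and ipa_table.[ch] get enabled; with the comment removed, the
Makefile would read:

	# Validate configuration data at runtime
	ccflags-y += -DIPA_VALIDATE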
From patchwork Fri Mar 6 04:28:31 2020
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 222953

From: Alex Elder
To: Bjorn Andersson, Andy Gross
Cc: David Miller, Arnd Bergmann, Johannes Berg, Dan Williams, Evan Green,
 Eric Caruso, Susheel Yadav Yadagiri, Chaitanya Pratapa,
 Subash Abhinov Kasiviswanathan, Rob Herring, Mark Rutland,
 Ohad Ben-Cohen, Siddharth Gupta, netdev@vger.kernel.org,
 devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 17/17] arm64: dts: sdm845: add IPA information
Date: Thu, 5 Mar 2020 22:28:31 -0600
Message-Id: <20200306042831.17827-18-elder@linaro.org>
In-Reply-To: <20200306042831.17827-1-elder@linaro.org>
References: <20200306042831.17827-1-elder@linaro.org>

Add IPA-related nodes and definitions to "sdm845.dtsi".

Signed-off-by: Alex Elder
---
 arch/arm64/boot/dts/qcom/sdm845.dtsi | 51 ++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
index d42302b8889b..58fd1c611849 100644
--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
@@ -675,6 +675,17 @@
 			interrupt-controller;
 			#interrupt-cells = <2>;
 		};
+
+		ipa_smp2p_out: ipa-ap-to-modem {
+			qcom,entry-name = "ipa";
+			#qcom,smem-state-cells = <1>;
+		};
+
+		ipa_smp2p_in: ipa-modem-to-ap {
+			qcom,entry-name = "ipa";
+			interrupt-controller;
+			#interrupt-cells = <2>;
+		};
 	};
 
 	smp2p-slpi {
@@ -1435,6 +1446,46 @@
 		};
 	};
 
+	ipa@1e40000 {
+		compatible = "qcom,sdm845-ipa";
+
+		modem-init;
+		modem-remoteproc = <&mss_pil>;
+
+		reg = <0 0x1e40000 0 0x7000>,
+		      <0 0x1e47000 0 0x2000>,
+		      <0 0x1e04000 0 0x2c000>;
+		reg-names = "ipa-reg",
+			    "ipa-shared",
+			    "gsi";
+
+		interrupts-extended =
+				<&intc 0 311 IRQ_TYPE_EDGE_RISING>,
+				<&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
+				<&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+				<&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
+		interrupt-names = "ipa",
+				  "gsi",
+				  "ipa-clock-query",
+				  "ipa-setup-ready";
+
+		clocks = <&rpmhcc RPMH_IPA_CLK>;
+		clock-names = "core";
+
+		interconnects =
+			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
+			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
+			<&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
+		interconnect-names = "memory",
+				     "imem",
+				     "config";
+
+		qcom,smem-states = <&ipa_smp2p_out 0>,
+				   <&ipa_smp2p_out 1>;
+		qcom,smem-state-names = "ipa-clock-enabled-valid",
+					"ipa-clock-enabled";
+	};
+
 	tcsr_mutex_regs: syscon@1f40000 {
 		compatible = "syscon";
 		reg = <0 0x01f40000 0 0x40000>;