From patchwork Wed Nov 7 00:32:39 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 150354
From: Alex Elder
To: robh+dt@kernel.org, mark.rutland@arm.com, davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Cc: netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, syadagir@codeaurora.org, mjavid@codeaurora.org
Subject: [RFC PATCH 01/12] dt-bindings: soc: qcom: add IPA bindings
Date: Tue, 6 Nov 2018 18:32:39 -0600
Message-Id: <20181107003250.5832-2-elder@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181107003250.5832-1-elder@linaro.org>
References: <20181107003250.5832-1-elder@linaro.org>
X-Mailing-List: netdev@vger.kernel.org

Add the binding definitions for the "qcom,ipa" and "qcom,rmnet-ipa" device tree nodes.
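For a quick view of the shape of the two bindings, the nodes can be sketched together as follows. This is a condensed sketch only; the property values are the SDM845 ones from the qcom,ipa.txt example added by this patch, and most required qcom,ipa properties are elided:

```dts
/* Condensed sketch of the two nodes this binding defines */
ipa@1e00000 {
	compatible = "qcom,ipa-sdm845-modem_init";
	reg = <0x1e40000 0x34000>,	/* "ipa" register space */
	      <0x1e04000 0x2c000>;	/* "gsi" register space */
	/* interrupts-extended, clocks, interconnects, and the
	 * qcom,smem-states properties are required as well; see
	 * the full example in the binding document.
	 */
};

/* The rmnet-ipa node has no properties other than its compatible string */
qcom,rmnet-ipa {
	compatible = "qcom,rmnet-ipa";
};
```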
Signed-off-by: Alex Elder --- .../devicetree/bindings/soc/qcom/qcom,ipa.txt | 136 ++++++++++++++++++ .../bindings/soc/qcom/qcom,rmnet-ipa.txt | 15 ++ 2 files changed, 151 insertions(+) create mode 100644 Documentation/devicetree/bindings/soc/qcom/qcom,ipa.txt create mode 100644 Documentation/devicetree/bindings/soc/qcom/qcom,rmnet-ipa.txt -- 2.17.1 diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,ipa.txt b/Documentation/devicetree/bindings/soc/qcom/qcom,ipa.txt new file mode 100644 index 000000000000..d4d3d37df029 --- /dev/null +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,ipa.txt @@ -0,0 +1,136 @@ +Qualcomm IPA (IP Accelerator) Driver + +This binding describes the Qualcomm IPA. The IPA is capable of offloading +certain network processing tasks (e.g. filtering, routing, and NAT) from +the main processor. The IPA currently serves only as a network interface, +providing access to an LTE network available via a modem. + +The IPA sits between multiple independent "execution environments," +including the AP subsystem (APSS) and the modem. The IPA presents +a Generic Software Interface (GSI) to each execution environment. +The GSI is an integral part of the IPA, but it is logically isolated +and has a distinct interrupt and a separately-defined address space. + + ---------- ------------- --------- + | | |G| |G| | | + | APSS |===|S| IPA |S|===| Modem | + | | |I| |I| | | + ---------- ------------- --------- + +See also: + bindings/interrupt-controller/interrupts.txt + bindings/interconnect/interconnect.txt + bindings/soc/qcom/qcom,smp2p.txt + bindings/reserved-memory/reserved-memory.txt + bindings/clock/clock-bindings.txt + +All properties defined below are required unless otherwise noted. + +- compatible: + Must be one of the following compatible strings: + "qcom,ipa-sdm845-modem_init" + "qcom,ipa-sdm845-tz_init" + +- reg: + Resources specifying the physical address spaces of the IPA and GSI. + +- reg-names: + The names of the address space ranges defined by the "reg" property.
+ Must be "ipa" and "gsi". + +- interrupts-extended: + Specifies the IRQs used by the IPA. Four interrupts are required, + specifying: the IPA IRQ; the GSI IRQ; the clock query interrupt + from the modem; and the "ready for stage 2 initialization" + interrupt from the modem. The first two are hardware IRQs; the + third and fourth are SMP2P input interrupts. + +- interrupt-names: + The names of the interrupts defined by the "interrupts-extended" + property. Must be "ipa", "gsi", "ipa-clock-query", and + "ipa-post-init". + +- clocks: + Resource that defines the IPA core clock. + +- clock-names: + The name used for the IPA core clock. Must be "core". + +- interconnects: + Specifies the interconnects used by the IPA. Three paths are + required, specifying: the path from the IPA to memory; from + IPA to internal (SoC resident) memory; and between the AP + subsystem and IPA for register access. + +- interconnect-names: + The names of the interconnects defined by the "interconnects" + property. Must be "memory", "imem", and "config". + +- qcom,smem-states: + The state bits used for SMP2P output. Two cells must be specified. + The first indicates whether the value in the second bit is valid + (1 means valid). The second, if valid, defines whether the IPA + clock is enabled (1 means enabled). + +- qcom,smem-state-names: + The names of the state bits used for SMP2P output. These must be + "ipa-clock-enabled-valid" and "ipa-clock-enabled". + +- memory-region: + A phandle for a reserved memory area that holds the firmware passed + to Trust Zone for authentication. (Note: this is required + only for "qcom,ipa-sdm845-tz_init".) + += EXAMPLE + +The following example represents the IPA present in the SDM845 SoC. It +shows portions of the "modem-smp2p" node to indicate its relationship +with the interrupts and SMEM states used by the IPA. + + modem-smp2p { + compatible = "qcom,smp2p"; + . . .
+ ipa_smp2p_out: ipa-ap-to-modem { + qcom,entry-name = "ipa"; + #qcom,smem-state-cells = <1>; + }; + + ipa_smp2p_in: ipa-modem-to-ap { + qcom,entry-name = "ipa"; + interrupt-controller; + #interrupt-cells = <2>; + }; + }; + + ipa@1e00000 { + compatible = "qcom,ipa-sdm845-modem_init"; + + reg = <0x1e40000 0x34000>, + <0x1e04000 0x2c000>; + reg-names = "ipa", + "gsi"; + + interrupts-extended = <&intc 0 311 IRQ_TYPE_LEVEL_HIGH>, + <&intc 0 432 IRQ_TYPE_LEVEL_HIGH>, + <&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>, + <&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>; + interrupt-names = "ipa", + "gsi", + "ipa-clock-query", + "ipa-post-init"; + + clocks = <&rpmhcc RPMH_IPA_CLK>; + clock-names = "core"; + + interconnects = <&qnoc MASTER_IPA &qnoc SLAVE_EBI1>, + <&qnoc MASTER_IPA &qnoc SLAVE_IMEM>, + <&qnoc MASTER_APPSS_PROC &qnoc SLAVE_IPA_CFG>; + interconnect-names = "memory", + "imem", + "config"; + + qcom,smem-states = <&ipa_smp2p_out 0>, + <&ipa_smp2p_out 1>; + qcom,smem-state-names = "ipa-clock-enabled-valid", + "ipa-clock-enabled"; + }; diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,rmnet-ipa.txt b/Documentation/devicetree/bindings/soc/qcom/qcom,rmnet-ipa.txt new file mode 100644 index 000000000000..3d0b2aabefc7 --- /dev/null +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,rmnet-ipa.txt @@ -0,0 +1,15 @@ +Qualcomm IPA RMNet Driver + +This binding describes the IPA RMNet driver, which is used to +represent virtual interfaces available on the modem accessed via +the IPA. Other than the compatible string there are no properties +associated with this device. + +- compatible: + Must be "qcom,rmnet-ipa". 
+ += EXAMPLE + + qcom,rmnet-ipa { + compatible = "qcom,rmnet-ipa"; + };

From patchwork Wed Nov 7 00:32:43 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 150358
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Cc: netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, syadagir@codeaurora.org, mjavid@codeaurora.org, robh+dt@kernel.org, mark.rutland@arm.com
Subject: [RFC PATCH 05/12] soc: qcom: ipa: IPA interrupts and the microcontroller
Date: Tue, 6 Nov 2018 18:32:43 -0600
Message-Id: <20181107003250.5832-6-elder@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181107003250.5832-1-elder@linaro.org>
References: <20181107003250.5832-1-elder@linaro.org>
X-Mailing-List: netdev@vger.kernel.org

The IPA has an interrupt line distinct from the interrupt used by the GSI code. Whereas GSI interrupts are generally related to channel events (like transfer completions), IPA interrupts are related to other IPA events. When the IPA IRQ fires, an IPA interrupt status register indicates which IPA interrupt events are being signaled. IPA interrupts can be masked independently, and can also be independently enabled or disabled. The IPA has an embedded microcontroller that can be used for additional processing of messages passing through the IPA.
This feature is generally not used by the current code. Currently only three IPA interrupts are used: one to trigger a resume when in a suspended state; and two that allow the embedded microcontroller to signal events. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_interrupts.c | 307 ++++++++++++++++++++++++++++ drivers/net/ipa/ipa_uc.c | 336 +++++++++++++++++++++++++++++++ 2 files changed, 643 insertions(+) create mode 100644 drivers/net/ipa/ipa_interrupts.c create mode 100644 drivers/net/ipa/ipa_uc.c -- 2.17.1 diff --git a/drivers/net/ipa/ipa_interrupts.c b/drivers/net/ipa/ipa_interrupts.c new file mode 100644 index 000000000000..75cd81a1eab0 --- /dev/null +++ b/drivers/net/ipa/ipa_interrupts.c @@ -0,0 +1,307 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ + +/* + * DOC: IPA Interrupts + * + * The IPA has an interrupt line distinct from the interrupt used + * by the GSI code. Whereas GSI interrupts are generally related + * to channel events (like transfer completions), IPA interrupts are + * related to other IPA events. Some of the IPA + * interrupts come from a microcontroller embedded in the IPA. + * Each IPA interrupt type can be both masked and acknowledged + * independently of the others. + * + * Two of the IPA interrupts are initiated by the microcontroller. + * A third can be generated to signal the need for a wakeup/resume + * when the IPA has been suspended. The modem can cause this event + * to occur (for example, for an incoming call). There are other IPA + * events defined, but at this time only these three are supported.
+ */ + +#include +#include +#include + +#include "ipa_i.h" +#include "ipa_reg.h" + +struct ipa_interrupt_info { + ipa_irq_handler_t handler; + enum ipa_irq_type interrupt; +}; + +#define IPA_IRQ_NUM_MAX 32 /* Number of IRQ bits in IPA interrupt mask */ +static struct ipa_interrupt_info ipa_interrupt_info[IPA_IRQ_NUM_MAX]; + +static struct workqueue_struct *ipa_interrupt_wq; + +static void enable_tx_suspend_work_func(struct work_struct *work); +static DECLARE_DELAYED_WORK(tx_suspend_work, enable_tx_suspend_work_func); + +static const int ipa_irq_mapping[] = { + [IPA_INVALID_IRQ] = -1, + [IPA_UC_IRQ_0] = 2, + [IPA_UC_IRQ_1] = 3, + [IPA_TX_SUSPEND_IRQ] = 14, +}; + +/* IPA interrupt handlers are called in contexts that can block */ +static void ipa_interrupt_work_func(struct work_struct *work); +static DECLARE_WORK(ipa_interrupt_work, ipa_interrupt_work_func); + +/* Workaround disables TX_SUSPEND interrupt for this long */ +#define DISABLE_TX_SUSPEND_INTR_DELAY msecs_to_jiffies(5) + +/* Disable the IPA TX_SUSPEND interrupt, and arrange for it to be + * re-enabled again in 5 milliseconds. + * + * This is part of a hardware bug workaround. 
+ */ +static void ipa_tx_suspend_interrupt_wa(void) +{ + u32 val; + + val = ipa_read_reg_n(IPA_IRQ_EN_EE_N, IPA_EE_AP); + val &= ~BIT(ipa_irq_mapping[IPA_TX_SUSPEND_IRQ]); + ipa_write_reg_n(IPA_IRQ_EN_EE_N, IPA_EE_AP, val); + + queue_delayed_work(ipa_interrupt_wq, &tx_suspend_work, + DISABLE_TX_SUSPEND_INTR_DELAY); +} + +static void ipa_handle_interrupt(int irq_num) +{ + struct ipa_interrupt_info *intr_info = &ipa_interrupt_info[irq_num]; + u32 endpoints = 0; /* Only TX_SUSPEND uses its interrupt_data */ + + if (!intr_info->handler) + return; + + if (intr_info->interrupt == IPA_TX_SUSPEND_IRQ) { + /* Disable the suspend interrupt temporarily */ + ipa_tx_suspend_interrupt_wa(); + + /* Get and clear mask of endpoints signaling TX_SUSPEND */ + endpoints = ipa_read_reg_n(IPA_IRQ_SUSPEND_INFO_EE_N, + IPA_EE_AP); + ipa_write_reg_n(IPA_SUSPEND_IRQ_CLR_EE_N, IPA_EE_AP, endpoints); + } + + intr_info->handler(intr_info->interrupt, endpoints); +} + +static inline bool is_uc_irq(int irq_num) +{ + enum ipa_irq_type interrupt = ipa_interrupt_info[irq_num].interrupt; + + return interrupt == IPA_UC_IRQ_0 || interrupt == IPA_UC_IRQ_1; +} + +static void ipa_process_interrupts(void) +{ + while (true) { + u32 ipa_intr_mask; + u32 imask; /* one set bit */ + + /* Determine which interrupts have fired, then examine only + * those that are enabled. Note that a suspend interrupt + * bug forces us to re-read the enabled mask every time to + * avoid an endless loop.
+ */ + ipa_intr_mask = ipa_read_reg_n(IPA_IRQ_STTS_EE_N, IPA_EE_AP); + ipa_intr_mask &= ipa_read_reg_n(IPA_IRQ_EN_EE_N, IPA_EE_AP); + + if (!ipa_intr_mask) + break; + + do { + int i = __ffs(ipa_intr_mask); + bool uc_irq = is_uc_irq(i); + + imask = BIT(i); + + /* Clear uC interrupt before processing to avoid + * clearing unhandled interrupts + */ + if (uc_irq) + ipa_write_reg_n(IPA_IRQ_CLR_EE_N, IPA_EE_AP, + imask); + + ipa_handle_interrupt(i); + + /* Clear non-uC interrupt after processing + * to avoid clearing interrupt data + */ + if (!uc_irq) + ipa_write_reg_n(IPA_IRQ_CLR_EE_N, IPA_EE_AP, + imask); + } while ((ipa_intr_mask ^= imask)); + } +} + +static void ipa_interrupt_work_func(struct work_struct *work) +{ + ipa_client_add(); + + ipa_process_interrupts(); + + ipa_client_remove(); +} + +static irqreturn_t ipa_isr(int irq, void *ctxt) +{ + /* Schedule handling (if not already scheduled) */ + queue_work(ipa_interrupt_wq, &ipa_interrupt_work); + + return IRQ_HANDLED; +} + +/* Re-enable the IPA TX_SUSPEND interrupt after having been disabled + * for a moment by ipa_tx_suspend_interrupt_wa(). This is part of a + * workaround for a hardware bug. + */ +static void enable_tx_suspend_work_func(struct work_struct *work) +{ + u32 val; + + ipa_client_add(); + + val = ipa_read_reg_n(IPA_IRQ_EN_EE_N, IPA_EE_AP); + val |= BIT(ipa_irq_mapping[IPA_TX_SUSPEND_IRQ]); + ipa_write_reg_n(IPA_IRQ_EN_EE_N, IPA_EE_AP, val); + + ipa_process_interrupts(); + + ipa_client_remove(); +} + +/* Register SUSPEND_IRQ_EN_EE_N_ADDR for L2 interrupt. */ +static void tx_suspend_enable(void) +{ + enum ipa_client_type client; + u32 val = ~0; + + /* Compute the mask to use (bits set for all non-modem endpoints) */ + for (client = 0; client < IPA_CLIENT_MAX; client++) + if (ipa_modem_consumer(client) || ipa_modem_producer(client)) + val &= ~BIT(ipa_client_ep_id(client)); + + ipa_write_reg_n(IPA_SUSPEND_IRQ_EN_EE_N, IPA_EE_AP, val); +} + +/* Unregister SUSPEND_IRQ_EN_EE_N_ADDR for L2 interrupt. 
*/ +static void tx_suspend_disable(void) +{ + ipa_write_reg_n(IPA_SUSPEND_IRQ_EN_EE_N, IPA_EE_AP, 0); +} + +/** + * ipa_add_interrupt_handler() - Adds handler for an IPA interrupt + * @interrupt: IPA interrupt type + * @handler: The handler for that interrupt + * + * Adds a handler for an IPA interrupt type and enables it. IPA interrupt + * handlers are allowed to block (they aren't run in interrupt context). + */ +void ipa_add_interrupt_handler(enum ipa_irq_type interrupt, + ipa_irq_handler_t handler) +{ + int irq_num = ipa_irq_mapping[interrupt]; + struct ipa_interrupt_info *intr_info; + u32 val; + + intr_info = &ipa_interrupt_info[irq_num]; + intr_info->handler = handler; + intr_info->interrupt = interrupt; + + /* Enable the IPA interrupt */ + val = ipa_read_reg_n(IPA_IRQ_EN_EE_N, IPA_EE_AP); + val |= BIT(irq_num); + ipa_write_reg_n(IPA_IRQ_EN_EE_N, IPA_EE_AP, val); + + if (interrupt == IPA_TX_SUSPEND_IRQ) + tx_suspend_enable(); +} + +/** + * ipa_remove_interrupt_handler() - Removes handler for an IPA interrupt type + * @interrupt: IPA interrupt type + * + * Remove an IPA interrupt handler and disable it.
+ */ +void ipa_remove_interrupt_handler(enum ipa_irq_type interrupt) +{ + int irq_num = ipa_irq_mapping[interrupt]; + struct ipa_interrupt_info *intr_info; + u32 val; + + intr_info = &ipa_interrupt_info[irq_num]; + intr_info->handler = NULL; + intr_info->interrupt = IPA_INVALID_IRQ; + + if (interrupt == IPA_TX_SUSPEND_IRQ) + tx_suspend_disable(); + + /* Disable the interrupt */ + val = ipa_read_reg_n(IPA_IRQ_EN_EE_N, IPA_EE_AP); + val &= ~BIT(irq_num); + ipa_write_reg_n(IPA_IRQ_EN_EE_N, IPA_EE_AP, val); +} + +/** + * ipa_interrupts_init() - Initialize the IPA interrupts framework + */ +int ipa_interrupts_init(void) +{ + int ret; + + ret = request_irq(ipa_ctx->ipa_irq, ipa_isr, IRQF_TRIGGER_RISING, + "ipa", ipa_ctx->dev); + if (ret) + return ret; + + ipa_interrupt_wq = alloc_ordered_workqueue("ipa_interrupt_wq", 0); + if (ipa_interrupt_wq) + return 0; + + free_irq(ipa_ctx->ipa_irq, ipa_ctx->dev); + + return -ENOMEM; +} + +/** + * ipa_suspend_active_aggr_wa() - Emulate suspend interrupt + * @ep_id: Endpoint on which to emulate a suspend + * + * Emulate suspend IRQ to unsuspend a client suspended with an open + * aggregation frame. This is to work around a hardware issue + * where an IRQ is not generated as it should be when this occurs. 
+ */ +void ipa_suspend_active_aggr_wa(u32 ep_id) +{ + struct ipa_reg_aggr_force_close force_close; + struct ipa_interrupt_info *intr_info; + u32 clnt_mask; + int irq_num; + + irq_num = ipa_irq_mapping[IPA_TX_SUSPEND_IRQ]; + intr_info = &ipa_interrupt_info[irq_num]; + clnt_mask = BIT(ep_id); + + /* Nothing to do if the endpoint doesn't have aggregation open */ + if (!(ipa_read_reg(IPA_STATE_AGGR_ACTIVE) & clnt_mask)) + return; + + /* Force close aggregation */ + ipa_reg_aggr_force_close(&force_close, clnt_mask); + ipa_write_reg_fields(IPA_AGGR_FORCE_CLOSE, &force_close); + + /* Simulate suspend IRQ */ + ipa_assert(!in_interrupt()); + if (intr_info->handler) + intr_info->handler(intr_info->interrupt, clnt_mask); +} diff --git a/drivers/net/ipa/ipa_uc.c b/drivers/net/ipa/ipa_uc.c new file mode 100644 index 000000000000..2065e53f3601 --- /dev/null +++ b/drivers/net/ipa/ipa_uc.c @@ -0,0 +1,336 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ + +#include +#include + +#include "ipa_i.h" + +/** + * DOC: The IPA Embedded Microcontroller + * + * The IPA incorporates an embedded microcontroller that is able to + * do some additional handling/offloading of network activity. The + * current code makes essentially no use of the microcontroller. + * Despite not being used, the microcontroller still requires some + * initialization, and it needs to be notified in the event the AP + * crashes. The IPA embedded microcontroller represents another IPA + * execution environment (in addition to the AP subsystem and + * modem). 
+ */ + +/* Supports hardware interface version 0x2000 */ + +#define IPA_RAM_UC_SMEM_SIZE 128 /* Size of shared memory area */ + +/* Delay to allow the microcontroller to save state when crashing */ +#define IPA_SEND_DELAY 100 /* microseconds */ + +/* + * The IPA has an embedded microcontroller that is capable of doing + * more general-purpose processing, for example for handling certain + * exceptional conditions. When it has completed its boot sequence + * it signals the AP with an interrupt. At this time we don't use + * any of the microcontroller capabilities, but we do handle the + * "ready" interrupt. We also notify it (by sending it a special + * command) in the event of a crash. + * + * A 128 byte block of structured memory within the IPA SRAM is used + * to communicate between the AP and the microcontroller embedded in + * the IPA. + * + * To send a command to the microcontroller, the AP fills in the + * command opcode and command parameter fields in this area, then + * writes a register to signal to the microcontroller the command is + * available. When the microcontroller has executed the command, it + * writes response data to this shared area, then issues a response + * interrupt (microcontroller IRQ 1) to the AP. The response + * includes a "response operation" that indicates the completion, + * along with a "response parameter" which encodes the original + * command and the command's status (result). + * + * The shared area is also used to communicate events asynchronously + * from the microcontroller to the AP. Events are signaled using + * the event interrupt (microcontroller IRQ 0). The microcontroller + * fills in an "event operation" and "event parameter" before + * triggering the interrupt. + * + * Some additional information is also found in this shared area, + * but is currently unused by the IPA driver. + * + * All other space in the shared area is reserved, and must not be + * read or written by the AP.
+ */ + +/** struct ipa_uc_shared_area - AP/microcontroller shared memory area + * + * @command: command code (AP->microcontroller) + * @command_param: low 32 bits of command parameter (AP->microcontroller) + * @command_param_hi: high 32 bits of command parameter (AP->microcontroller) + * + * @response: response code (microcontroller->AP) + * @response_param: response parameter (microcontroller->AP) + * + * @event: event code (microcontroller->AP) + * @event_param: event parameter (microcontroller->AP) + * + * @first_error_address: address of first error-source on SNOC + * @hw_state: state of hardware (including error type information) + * @warning_counter: counter of non-fatal hardware errors + * @interface_version: hardware-reported interface version + */ +struct ipa_uc_shared_area { + u32 command : 8; /* enum ipa_uc_command */ + /* 3 reserved bytes */ + u32 command_param; + u32 command_param_hi; + + u32 response : 8; /* enum ipa_uc_response */ + /* 3 reserved bytes */ + u32 response_param; + + u32 event : 8; /* enum ipa_uc_event */ + /* 3 reserved bytes */ + u32 event_param; + + u32 first_error_address; + u32 hw_state : 8, + warning_counter : 8, + reserved : 16; + u32 interface_version : 16; + /* 2 reserved bytes */ +}; + +/** struct ipa_uc_ctx - IPA microcontroller context + * + * @uc_loaded: whether microcontroller has been loaded + * @shared: pointer to AP/microcontroller shared memory area + */ +struct ipa_uc_ctx { + bool uc_loaded; + struct ipa_uc_shared_area *shared; +} ipa_uc_ctx; + +/* + * Microcontroller event codes, error codes, commands, and responses + * to commands all encode both a "code" and a "feature" in their + * 8-bit numeric value. The top 3 bits represent the feature, and + * the bottom 5 bits represent the code. A "common" feature uses + * feature code 0, and at this time we only deal with common + * features. 
Because of this we can just ignore the feature bits + * and define the values of symbols in the following enumerated + * types by just their code values. + */ + +/** enum ipa_uc_event - common CPU events (microcontroller->AP) + * + * @IPA_UC_EVENT_NO_OP: no event present + * @IPA_UC_EVENT_ERROR: system error has been detected + * @IPA_UC_EVENT_LOG_INFO: logging information available + */ +enum ipa_uc_event { + IPA_UC_EVENT_NO_OP = 0, + IPA_UC_EVENT_ERROR = 1, + IPA_UC_EVENT_LOG_INFO = 2, +}; + +/** enum ipa_uc_error - common error types (microcontroller->AP) + * + * @IPA_UC_ERROR_NONE: no error + * @IPA_UC_ERROR_INVALID_DOORBELL: invalid data read from doorbell + * @IPA_UC_ERROR_DMA: unexpected DMA error + * @IPA_UC_ERROR_FATAL_SYSTEM: microcontroller has crashed and requires reset + * @IPA_UC_ERROR_INVALID_OPCODE: invalid opcode sent + * @IPA_UC_ERROR_INVALID_PARAMS: invalid params for the requested command + * @IPA_UC_ERROR_CONS_DISABLE_CMD_GSI_STOP: consumer endpoint stop failure + * @IPA_UC_ERROR_PROD_DISABLE_CMD_GSI_STOP: producer endpoint stop failure + * @IPA_UC_ERROR_CH_NOT_EMPTY: microcontroller GSI channel is not empty + */ +enum ipa_uc_error { + IPA_UC_ERROR_NONE = 0, + IPA_UC_ERROR_INVALID_DOORBELL = 1, + IPA_UC_ERROR_DMA = 2, + IPA_UC_ERROR_FATAL_SYSTEM = 3, + IPA_UC_ERROR_INVALID_OPCODE = 4, + IPA_UC_ERROR_INVALID_PARAMS = 5, + IPA_UC_ERROR_CONS_DISABLE_CMD_GSI_STOP = 6, + IPA_UC_ERROR_PROD_DISABLE_CMD_GSI_STOP = 7, + IPA_UC_ERROR_CH_NOT_EMPTY = 8, +}; + +/** enum ipa_uc_command - commands from the AP to the microcontroller + * + * @IPA_UC_COMMAND_NO_OP: no operation + * @IPA_UC_COMMAND_UPDATE_FLAGS: request to re-read configuration flags + * @IPA_UC_COMMAND_DEBUG_RUN_TEST: request to run hardware test + * @IPA_UC_COMMAND_DEBUG_GET_INFO: request to read internal debug information + * @IPA_UC_COMMAND_ERR_FATAL: AP system crash notification + * @IPA_UC_COMMAND_CLK_GATE: request hardware to enter clock gated state + * @IPA_UC_COMMAND_CLK_UNGATE: request
hardware to enter clock ungated state + * @IPA_UC_COMMAND_MEMCPY: request hardware to perform memcpy + * @IPA_UC_COMMAND_RESET_PIPE: request endpoint reset + * @IPA_UC_COMMAND_REG_WRITE: request a register be written + * @IPA_UC_COMMAND_GSI_CH_EMPTY: request to determine whether channel is empty + */ +enum ipa_uc_command { + IPA_UC_COMMAND_NO_OP = 0, + IPA_UC_COMMAND_UPDATE_FLAGS = 1, + IPA_UC_COMMAND_DEBUG_RUN_TEST = 2, + IPA_UC_COMMAND_DEBUG_GET_INFO = 3, + IPA_UC_COMMAND_ERR_FATAL = 4, + IPA_UC_COMMAND_CLK_GATE = 5, + IPA_UC_COMMAND_CLK_UNGATE = 6, + IPA_UC_COMMAND_MEMCPY = 7, + IPA_UC_COMMAND_RESET_PIPE = 8, + IPA_UC_COMMAND_REG_WRITE = 9, + IPA_UC_COMMAND_GSI_CH_EMPTY = 10, +}; + +/** enum ipa_uc_response - common hardware response codes + * + * @IPA_UC_RESPONSE_NO_OP: no operation + * @IPA_UC_RESPONSE_INIT_COMPLETED: microcontroller ready + * @IPA_UC_RESPONSE_CMD_COMPLETED: AP-issued command has completed + * @IPA_UC_RESPONSE_DEBUG_GET_INFO: get debug info + */ +enum ipa_uc_response { + IPA_UC_RESPONSE_NO_OP = 0, + IPA_UC_RESPONSE_INIT_COMPLETED = 1, + IPA_UC_RESPONSE_CMD_COMPLETED = 2, + IPA_UC_RESPONSE_DEBUG_GET_INFO = 3, +}; + +/** union ipa_uc_event_data - microcontroller->AP event data + * + * @error_type: ipa_uc_error error type value + * @raw32b: 32-bit register value (used when reading) + */ +union ipa_uc_event_data { + u8 error_type; /* enum ipa_uc_error */ + u32 raw32b; +} __packed; + +/** union ipa_uc_response_data - response to AP command + * + * @command: the AP issued command this is responding to + * @status: 0 for success indication, otherwise failure + * @raw32b: 32-bit register value (used when reading) + */ +union ipa_uc_response_data { + struct ipa_uc_response_param { + u8 command; /* enum ipa_uc_command */ + u8 status; /* enum ipa_uc_error */ + } params; + u32 raw32b; +} __packed; + +/** ipa_uc_loaded() - tell whether the microcontroller has been loaded + * + * Returns true if the microcontroller is loaded, false otherwise + */ +bool 
ipa_uc_loaded(void) +{ + return ipa_uc_ctx.uc_loaded; +} + +static void +ipa_uc_event_handler(enum ipa_irq_type interrupt, u32 interrupt_data) +{ + struct ipa_uc_shared_area *shared = ipa_uc_ctx.shared; + union ipa_uc_event_data event_param; + u8 event; + + event = shared->event; + event_param.raw32b = shared->event_param; + + /* General handling */ + if (event == IPA_UC_EVENT_ERROR) { + ipa_err("uC error type 0x%02x timestamp 0x%08x\n", + event_param.error_type, ipa_read_reg(IPA_TAG_TIMER)); + ipa_bug(); + } else { + ipa_err("unsupported uC event opcode=%u\n", event); + } +} + +static void +ipa_uc_response_hdlr(enum ipa_irq_type interrupt, u32 interrupt_data) +{ + struct ipa_uc_shared_area *shared = ipa_uc_ctx.shared; + union ipa_uc_response_data response_data; + u8 response; + + response = shared->response; + + /* An INIT_COMPLETED response message is sent to the AP by + * the microcontroller when it is operational. Other than + * this, the AP should only receive responses from the + * microcontroller when it has sent it a request message. + */ + if (response == IPA_UC_RESPONSE_INIT_COMPLETED) { + /* The proxy vote is held until uC is loaded to ensure that + * IPA_HW_2_CPU_RESPONSE_INIT_COMPLETED is received.
+ */ + ipa_proxy_clk_unvote(); + ipa_uc_ctx.uc_loaded = true; + } else if (response == IPA_UC_RESPONSE_CMD_COMPLETED) { + response_data.raw32b = shared->response_param; + ipa_err("uC command response code %u status %u\n", + response_data.params.command, + response_data.params.status); + } else { + ipa_err("Unsupported uC rsp opcode = %u\n", response); + } +} + +/** ipa_uc_init() - Initialize the microcontroller + * + * Returns pointer to microcontroller context on success, NULL otherwise + */ +struct ipa_uc_ctx *ipa_uc_init(phys_addr_t phys_addr) +{ + phys_addr += ipa_reg_n_offset(IPA_SRAM_DIRECT_ACCESS_N, 0); + ipa_uc_ctx.shared = ioremap(phys_addr, IPA_RAM_UC_SMEM_SIZE); + if (!ipa_uc_ctx.shared) + return NULL; + + ipa_add_interrupt_handler(IPA_UC_IRQ_0, ipa_uc_event_handler); + ipa_add_interrupt_handler(IPA_UC_IRQ_1, ipa_uc_response_hdlr); + + return &ipa_uc_ctx; +} + +/* Send a command to the microcontroller */ +static void send_uc_command(u32 command, u32 command_param) +{ + struct ipa_uc_shared_area *shared = ipa_uc_ctx.shared; + + shared->command = command; + shared->command_param = command_param; + shared->command_param_hi = 0; + shared->response = 0; + shared->response_param = 0; + + wmb(); /* ensure write to shared memory is done before triggering uc */ + + ipa_write_reg_n(IPA_IRQ_EE_UC_N, IPA_EE_AP, 0x1); +} + +void ipa_uc_panic_notifier(void) +{ + if (!ipa_uc_ctx.uc_loaded) + return; + + if (!ipa_client_add_additional()) + return; + + send_uc_command(IPA_UC_COMMAND_ERR_FATAL, 0); + + /* give uc enough time to save state */ + udelay(IPA_SEND_DELAY); + + ipa_client_remove(); +}

From patchwork Wed Nov 7 00:32:47 2018
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 150362
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Cc: netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, syadagir@codeaurora.org, mjavid@codeaurora.org, robh+dt@kernel.org, mark.rutland@arm.com
Subject: [RFC PATCH 09/12] soc: qcom: ipa: main IPA source file
Date: Tue, 6 Nov 2018 18:32:47 -0600
Message-Id: <20181107003250.5832-10-elder@linaro.org>
In-Reply-To: <20181107003250.5832-1-elder@linaro.org>
References: <20181107003250.5832-1-elder@linaro.org>

This patch includes "ipa_main.c", which consists mostly of the initialization code. The IPA is a hardware resource shared by multiple independent execution environments (currently, the AP and the modem). In some cases, initialization must be performed by only one of these. As an example, the AP must initialize some filter table data structures that are only used by the modem. (And in general, some initialization of IPA hardware is required regardless of whether it will be used.) There are two phases of IPA initialization. The first phase is triggered by the probe of the driver.
It involves setting up operating system resources, and doing some basic initialization of IPA memory resources using register and DMA access. The second phase involves configuration of the endpoints used, and this phase requires access to the GSI layer. However, the GSI layer requires firmware to be loaded before it can be used. So the second stage (in ipa_post_init()) only occurs after it is known that the firmware is loaded. The GSI firmware can be loaded in two ways: the modem can load it; or Trust Zone code running on the AP can load it. If the modem loads the firmware, it will send an SMP2P interrupt to the AP to signal that GSI firmware is loaded and the AP can proceed with its second stage of IPA initialization. If Trust Zone is responsible for loading the firmware, the IPA driver requests the firmware blob from the file system and passes the result via an SMC to Trust Zone to load and activate the GSI firmware. When that has completed successfully, the second stage of initialization can proceed. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_main.c | 1400 ++++++++++++++++++++++++++++++++++++ 1 file changed, 1400 insertions(+) create mode 100644 drivers/net/ipa/ipa_main.c -- 2.17.1 diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c new file mode 100644 index 000000000000..3d7c59177388 --- /dev/null +++ b/drivers/net/ipa/ipa_main.c @@ -0,0 +1,1400 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd.
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ipa_i.h" +#include "ipa_dma.h" +#include "ipahal.h" + +#define IPA_CORE_CLOCK_RATE (75UL * 1000 * 1000) + +/* The name of the main firmware file relative to /lib/firmware */ +#define IPA_FWS_PATH "ipa_fws.mdt" +#define IPA_PAS_ID 15 + +#define IPA_APPS_CMD_PROD_RING_COUNT 256 +#define IPA_APPS_LAN_CONS_RING_COUNT 256 + +/* Details of the initialization sequence are determined by who is + * responsible for doing some early IPA hardware initialization. + * The Device Tree compatible string defines what to expect. + */ +enum ipa_init_type { + ipa_undefined_init = 0, + ipa_tz_init, + ipa_modem_init, +}; + +struct ipa_match_data { + enum ipa_init_type init_type; +}; + +static void ipa_client_remove_deferred(struct work_struct *work); +static DECLARE_WORK(ipa_client_remove_work, ipa_client_remove_deferred); + +static struct ipa_context ipa_ctx_struct; +struct ipa_context *ipa_ctx = &ipa_ctx_struct; + +static int hdr_init_local_cmd(u32 offset, u32 size) +{ + struct ipa_desc desc = { }; + struct ipa_dma_mem mem; + void *payload; + int ret; + + if (ipa_dma_alloc(&mem, size, GFP_KERNEL)) + return -ENOMEM; + + offset += ipa_ctx->smem_offset; + + payload = ipahal_hdr_init_local_pyld(&mem, offset); + if (!payload) { + ret = -ENOMEM; + goto err_dma_free; + } + + desc.type = IPA_IMM_CMD_DESC; + desc.len_opcode = IPA_IMM_CMD_HDR_INIT_LOCAL; + desc.payload = payload; + + ret = ipa_send_cmd(&desc); + + ipahal_payload_free(payload); +err_dma_free: + ipa_dma_free(&mem); + + return ret; +} + +static int dma_shared_mem_zero_cmd(u32 offset, u32 size) +{ + struct ipa_desc desc = { }; + struct ipa_dma_mem mem; + void *payload; + int ret; + + ipa_assert(size > 0); + + if (ipa_dma_alloc(&mem, size, GFP_KERNEL)) + return -ENOMEM; + + 
offset += ipa_ctx->smem_offset; + + payload = ipahal_dma_shared_mem_write_pyld(&mem, offset); + if (!payload) { + ret = -ENOMEM; + goto err_dma_free; + } + + desc.type = IPA_IMM_CMD_DESC; + desc.len_opcode = IPA_IMM_CMD_DMA_SHARED_MEM; + desc.payload = payload; + + ret = ipa_send_cmd(&desc); + + ipahal_payload_free(payload); +err_dma_free: + ipa_dma_free(&mem); + + return ret; +} + +/** + * ipa_modem_smem_init() - Initialize modem general memory and header memory + */ +int ipa_modem_smem_init(void) +{ + int ret; + + ret = dma_shared_mem_zero_cmd(IPA_MEM_MODEM_OFST, IPA_MEM_MODEM_SIZE); + if (ret) + return ret; + + ret = dma_shared_mem_zero_cmd(IPA_MEM_MODEM_HDR_OFST, + IPA_MEM_MODEM_HDR_SIZE); + if (ret) + return ret; + + return dma_shared_mem_zero_cmd(IPA_MEM_MODEM_HDR_PROC_CTX_OFST, + IPA_MEM_MODEM_HDR_PROC_CTX_SIZE); +} + +static int ipa_ep_apps_cmd_prod_setup(void) +{ + enum ipa_client_type dst_client; + enum ipa_client_type client; + u32 channel_count; + u32 ep_id; + int ret; + + if (ipa_ctx->cmd_prod_ep_id != IPA_EP_ID_BAD) + return -EBUSY; + + client = IPA_CLIENT_APPS_CMD_PROD; + dst_client = IPA_CLIENT_APPS_LAN_CONS; + channel_count = IPA_APPS_CMD_PROD_RING_COUNT; + + ret = ipa_ep_alloc(client); + if (ret < 0) + return ret; + ep_id = ret; + + ipa_endp_init_mode_prod(ep_id, IPA_DMA, dst_client); + ipa_endp_init_seq_prod(ep_id); + ipa_endp_init_deaggr_prod(ep_id); + + ret = ipa_ep_setup(ep_id, channel_count, 2, 0, NULL, NULL); + if (ret) + ipa_ep_free(ep_id); + else + ipa_ctx->cmd_prod_ep_id = ep_id; + + return ret; +} + +/* Only used for IPA_MEM_UC_EVENT_RING_OFST, which must be 1KB aligned */ +static __always_inline void sram_set_canary(u32 *sram_mmio, u32 offset) +{ + BUILD_BUG_ON(offset < sizeof(*sram_mmio)); + BUILD_BUG_ON(offset % 1024); + + sram_mmio += offset / sizeof(*sram_mmio); + *--sram_mmio = IPA_MEM_CANARY_VAL; +} + +static __always_inline void sram_set_canaries(u32 *sram_mmio, u32 offset) +{ + BUILD_BUG_ON(offset < 2 * sizeof(*sram_mmio)); + 
BUILD_BUG_ON(offset % 8); + + sram_mmio += offset / sizeof(*sram_mmio); + *--sram_mmio = IPA_MEM_CANARY_VAL; + *--sram_mmio = IPA_MEM_CANARY_VAL; +} + +/** + * ipa_init_sram() - Initialize IPA local SRAM. + * + * Return: 0 if successful, or a negative error code + */ +static int ipa_init_sram(void) +{ + phys_addr_t phys_addr; + u32 *ipa_sram_mmio; + + phys_addr = ipa_ctx->ipa_phys; + phys_addr += ipa_reg_n_offset(IPA_SRAM_DIRECT_ACCESS_N, 0); + phys_addr += ipa_ctx->smem_offset; + + ipa_sram_mmio = ioremap(phys_addr, ipa_ctx->smem_size); + if (!ipa_sram_mmio) { + ipa_err("fail to ioremap IPA SRAM\n"); + return -ENOMEM; + } + + sram_set_canaries(ipa_sram_mmio, IPA_MEM_V4_FLT_HASH_OFST); + sram_set_canaries(ipa_sram_mmio, IPA_MEM_V4_FLT_NHASH_OFST); + sram_set_canaries(ipa_sram_mmio, IPA_MEM_V6_FLT_HASH_OFST); + sram_set_canaries(ipa_sram_mmio, IPA_MEM_V6_FLT_NHASH_OFST); + sram_set_canaries(ipa_sram_mmio, IPA_MEM_V4_RT_HASH_OFST); + sram_set_canaries(ipa_sram_mmio, IPA_MEM_V4_RT_NHASH_OFST); + sram_set_canaries(ipa_sram_mmio, IPA_MEM_V6_RT_HASH_OFST); + sram_set_canaries(ipa_sram_mmio, IPA_MEM_V6_RT_NHASH_OFST); + sram_set_canaries(ipa_sram_mmio, IPA_MEM_MODEM_HDR_OFST); + sram_set_canaries(ipa_sram_mmio, IPA_MEM_MODEM_HDR_PROC_CTX_OFST); + sram_set_canaries(ipa_sram_mmio, IPA_MEM_MODEM_OFST); + + /* Only one canary precedes the microcontroller ring */ + sram_set_canary(ipa_sram_mmio, IPA_MEM_UC_EVENT_RING_OFST); + + iounmap(ipa_sram_mmio); + + return 0; +} + +/** + * ipa_init_hdr() - Initialize IPA header block. 
+ * + * Return: 0 if successful, or a negative error code + */ +static int ipa_init_hdr(void) +{ + int ret; + + if (IPA_MEM_MODEM_HDR_SIZE) { + ret = hdr_init_local_cmd(IPA_MEM_MODEM_HDR_OFST, + IPA_MEM_MODEM_HDR_SIZE); + if (ret) + return ret; + } + + if (IPA_MEM_APPS_HDR_SIZE) { + BUILD_BUG_ON(IPA_MEM_APPS_HDR_OFST % 8); + ret = hdr_init_local_cmd(IPA_MEM_APPS_HDR_OFST, + IPA_MEM_APPS_HDR_SIZE); + if (ret) + return ret; + } + + if (IPA_MEM_MODEM_HDR_PROC_CTX_SIZE) { + ret = dma_shared_mem_zero_cmd(IPA_MEM_MODEM_HDR_PROC_CTX_OFST, + IPA_MEM_MODEM_HDR_PROC_CTX_SIZE); + if (ret) + return ret; + } + + if (IPA_MEM_APPS_HDR_PROC_CTX_SIZE) { + BUILD_BUG_ON(IPA_MEM_APPS_HDR_PROC_CTX_OFST % 8); + ret = dma_shared_mem_zero_cmd(IPA_MEM_APPS_HDR_PROC_CTX_OFST, + IPA_MEM_APPS_HDR_PROC_CTX_SIZE); + if (ret) + return ret; + } + + ipa_write_reg(IPA_LOCAL_PKT_PROC_CNTXT_BASE, + ipa_ctx->smem_offset + IPA_MEM_MODEM_HDR_PROC_CTX_OFST); + + return 0; +} + +/** + * ipa_init_rt4() - Initialize IPA routing block for IPv4. + * + * Return: 0 if successful, or a negative error code + */ +static int ipa_init_rt4(struct ipa_dma_mem *mem) +{ + struct ipa_desc desc = { }; + u32 nhash_offset; + u32 hash_offset; + void *payload; + int ret; + + hash_offset = ipa_ctx->smem_offset + IPA_MEM_V4_RT_HASH_OFST; + nhash_offset = ipa_ctx->smem_offset + IPA_MEM_V4_RT_NHASH_OFST; + payload = ipahal_ip_v4_routing_init_pyld(mem, hash_offset, + nhash_offset); + if (!payload) + return -ENOMEM; + + desc.type = IPA_IMM_CMD_DESC; + desc.len_opcode = IPA_IMM_CMD_IP_V4_ROUTING_INIT; + desc.payload = payload; + + ret = ipa_send_cmd(&desc); + + ipahal_payload_free(payload); + + return ret; +} + +/** + * ipa_init_rt6() - Initialize IPA routing block for IPv6. 
+ * + * Return: 0 if successful, or a negative error code + */ +static int ipa_init_rt6(struct ipa_dma_mem *mem) +{ + struct ipa_desc desc = { }; + u32 nhash_offset; + u32 hash_offset; + void *payload; + int ret; + + hash_offset = ipa_ctx->smem_offset + IPA_MEM_V6_RT_HASH_OFST; + nhash_offset = ipa_ctx->smem_offset + IPA_MEM_V6_RT_NHASH_OFST; + payload = ipahal_ip_v6_routing_init_pyld(mem, hash_offset, + nhash_offset); + if (!payload) + return -ENOMEM; + + desc.type = IPA_IMM_CMD_DESC; + desc.len_opcode = IPA_IMM_CMD_IP_V6_ROUTING_INIT; + desc.payload = payload; + + ret = ipa_send_cmd(&desc); + + ipahal_payload_free(payload); + + return ret; +} + +/** + * ipa_init_flt4() - Initialize IPA filtering block for IPv4. + * + * Return: 0 if successful, or a negative error code + */ +static int ipa_init_flt4(struct ipa_dma_mem *mem) +{ + struct ipa_desc desc = { }; + u32 nhash_offset; + u32 hash_offset; + void *payload; + int ret; + + hash_offset = ipa_ctx->smem_offset + IPA_MEM_V4_FLT_HASH_OFST; + nhash_offset = ipa_ctx->smem_offset + IPA_MEM_V4_FLT_NHASH_OFST; + payload = ipahal_ip_v4_filter_init_pyld(mem, hash_offset, + nhash_offset); + if (!payload) + return -ENOMEM; + + desc.type = IPA_IMM_CMD_DESC; + desc.len_opcode = IPA_IMM_CMD_IP_V4_FILTER_INIT; + desc.payload = payload; + + ret = ipa_send_cmd(&desc); + + ipahal_payload_free(payload); + + return ret; +} + +/** + * ipa_init_flt6() - Initialize IPA filtering block for IPv6. 
+ * + * Return: 0 if successful, or a negative error code + */ +static int ipa_init_flt6(struct ipa_dma_mem *mem) +{ + struct ipa_desc desc = { }; + u32 nhash_offset; + u32 hash_offset; + void *payload; + int ret; + + hash_offset = ipa_ctx->smem_offset + IPA_MEM_V6_FLT_HASH_OFST; + nhash_offset = ipa_ctx->smem_offset + IPA_MEM_V6_FLT_NHASH_OFST; + payload = ipahal_ip_v6_filter_init_pyld(mem, hash_offset, + nhash_offset); + if (!payload) + return -ENOMEM; + + desc.type = IPA_IMM_CMD_DESC; + desc.len_opcode = IPA_IMM_CMD_IP_V6_FILTER_INIT; + desc.payload = payload; + + ret = ipa_send_cmd(&desc); + + ipahal_payload_free(payload); + + return ret; +} + +static void ipa_setup_flt_hash_tuple(void) +{ + u32 ep_mask = ipa_ctx->filter_bitmap; + + while (ep_mask) { + u32 i = __ffs(ep_mask); + + ep_mask ^= BIT(i); + if (!ipa_is_modem_ep(i)) + ipa_set_flt_tuple_mask(i); + } +} + +static void ipa_setup_rt_hash_tuple(void) +{ + u32 route_mask; + u32 modem_mask; + + BUILD_BUG_ON(!IPA_MEM_MODEM_RT_COUNT); + BUILD_BUG_ON(IPA_MEM_RT_COUNT < IPA_MEM_MODEM_RT_COUNT); + + /* Compute a mask representing non-modem route table entries */ + route_mask = GENMASK(IPA_MEM_RT_COUNT - 1, 0); + modem_mask = GENMASK(IPA_MEM_MODEM_RT_INDEX_MAX, + IPA_MEM_MODEM_RT_INDEX_MIN); + route_mask &= ~modem_mask; + + while (route_mask) { + u32 i = __ffs(route_mask); + + route_mask ^= BIT(i); + ipa_set_rt_tuple_mask(i); + } +} + +static int ipa_ep_apps_lan_cons_setup(void) +{ + enum ipa_client_type client; + u32 rx_buffer_size; + u32 channel_count; + u32 aggr_count; + u32 aggr_bytes; + u32 aggr_size; + u32 ep_id; + int ret; + + client = IPA_CLIENT_APPS_LAN_CONS; + channel_count = IPA_APPS_LAN_CONS_RING_COUNT; + aggr_count = IPA_GENERIC_AGGR_PKT_LIMIT; + aggr_bytes = IPA_GENERIC_AGGR_BYTE_LIMIT; + + if (aggr_bytes > ipa_reg_aggr_max_byte_limit()) + return -EINVAL; + + if (aggr_count > ipa_reg_aggr_max_packet_limit()) + return -EINVAL; + + if (ipa_ctx->lan_cons_ep_id != IPA_EP_ID_BAD) + return -EBUSY; + + /* 
Compute the buffer size required to handle the requested + * aggregation byte limit. The aggr_byte_limit value is + * expressed as a number of KB, but we derive that value + * after computing the buffer size to use (in bytes). The + * buffer must be sufficient to hold one IPA_MTU-sized + * packet *after* the limit is reached. + * + * (Note that the rx_buffer_size value reflects only the + * space for data, not any standard metadata or headers.) + */ + rx_buffer_size = ipa_aggr_byte_limit_buf_size(aggr_bytes); + + /* Account for the extra IPA_MTU past the limit in the + * buffer, and convert the result to the KB units the + * aggr_byte_limit uses. + */ + aggr_size = (rx_buffer_size - IPA_MTU) / SZ_1K; + + ret = ipa_ep_alloc(client); + if (ret < 0) + return ret; + ep_id = ret; + + ipa_endp_init_hdr_cons(ep_id, IPA_LAN_RX_HEADER_LENGTH, 0, 0); + ipa_endp_init_hdr_ext_cons(ep_id, ilog2(sizeof(u32)), false); + ipa_endp_init_aggr_cons(ep_id, aggr_size, aggr_count, false); + ipa_endp_init_cfg_cons(ep_id, IPA_CS_OFFLOAD_DL); + ipa_endp_init_hdr_metadata_mask_cons(ep_id, 0x0); + ipa_endp_status_cons(ep_id, true); + + ret = ipa_ep_setup(ep_id, channel_count, 1, rx_buffer_size, + ipa_lan_rx_cb, NULL); + if (ret) + ipa_ep_free(ep_id); + else + ipa_ctx->lan_cons_ep_id = ep_id; + + return ret; +} + +static int ipa_ep_apps_setup(void) +{ + struct ipa_dma_mem mem; /* Empty table */ + int ret; + + /* CMD OUT (AP->IPA) */ + ret = ipa_ep_apps_cmd_prod_setup(); + if (ret < 0) + return ret; + + ipa_init_sram(); + ipa_init_hdr(); + + ret = ipahal_rt_generate_empty_img(IPA_MEM_RT_COUNT, &mem); + ipa_assert(!ret); + ipa_init_rt4(&mem); + ipa_init_rt6(&mem); + ipahal_free_empty_img(&mem); + + ret = ipahal_flt_generate_empty_img(ipa_ctx->filter_bitmap, &mem); + ipa_assert(!ret); + ipa_init_flt4(&mem); + ipa_init_flt6(&mem); + ipahal_free_empty_img(&mem); + + ipa_setup_flt_hash_tuple(); + ipa_setup_rt_hash_tuple(); + + /* LAN IN (IPA->AP) + * + * Even without supporting LAN traffic, we use 
the LAN consumer + * endpoint for receiving some information from the IPA. If we issue + * a tagged command, we arrange to be notified of its completion + * through this endpoint. In addition, we arrange for this endpoint + * to be used as the IPA's default route; the IPA will notify the AP + * of exceptions (unroutable packets, but other events as well) + * through this endpoint. + */ + ret = ipa_ep_apps_lan_cons_setup(); + if (ret < 0) + goto fail_flt_hash_tuple; + + ipa_cfg_default_route(IPA_CLIENT_APPS_LAN_CONS); + + return 0; + +fail_flt_hash_tuple: + ipa_ep_teardown(ipa_ctx->cmd_prod_ep_id); + ipa_ctx->cmd_prod_ep_id = IPA_EP_ID_BAD; + + return ret; +} + +static int ipa_clock_init(struct device *dev) +{ + struct clk *clk; + int ret; + + clk = clk_get(dev, "core"); + if (IS_ERR(clk)) + return PTR_ERR(clk); + + ret = clk_set_rate(clk, IPA_CORE_CLOCK_RATE); + if (ret) { + clk_put(clk); + return ret; + } + + ipa_ctx->core_clock = clk; + + return 0; +} + +static void ipa_clock_exit(void) +{ + clk_put(ipa_ctx->core_clock); + ipa_ctx->core_clock = NULL; +} + +/** + * ipa_enable_clks() - Turn on IPA clocks + */ +static void ipa_enable_clks(void) +{ + if (WARN_ON(ipa_interconnect_enable())) + return; + + if (WARN_ON(clk_prepare_enable(ipa_ctx->core_clock))) + ipa_interconnect_disable(); +} + +/** + * ipa_disable_clks() - Turn off IPA clocks + */ +static void ipa_disable_clks(void) +{ + clk_disable_unprepare(ipa_ctx->core_clock); + WARN_ON(ipa_interconnect_disable()); +} + +/* Add an IPA client under protection of the mutex. This is called + * for the first client, but a race could mean another caller gets + * the first reference. When the first reference is taken, IPA + * clocks are enabled and endpoints are resumed. A positive reference + * count means the endpoints are active; the first reference is not + * recorded until that is complete (and the mutex, not the atomic + * count, is what protects this).
+ */ +static void ipa_client_add_first(void) +{ + mutex_lock(&ipa_ctx->active_clients_mutex); + + /* A reference might have been added while awaiting the mutex. */ + if (!atomic_inc_not_zero(&ipa_ctx->active_clients_count)) { + ipa_enable_clks(); + ipa_ep_resume_all(); + atomic_inc(&ipa_ctx->active_clients_count); + } else { + ipa_assert(atomic_read(&ipa_ctx->active_clients_count) > 1); + } + + mutex_unlock(&ipa_ctx->active_clients_mutex); +} + +/* Attempt to add an IPA client reference, but only if this does not + * represent the initial reference. Returns true if the reference + * was taken, false otherwise. + */ +static bool ipa_client_add_not_first(void) +{ + return !!atomic_inc_not_zero(&ipa_ctx->active_clients_count); +} + +/* Add an IPA client, but only if the reference count is already + * non-zero. (This is used to avoid blocking.) Returns true if the + * additional reference was added successfully, or false otherwise. + */ +bool ipa_client_add_additional(void) +{ + return ipa_client_add_not_first(); +} + +/* Add an IPA client. If this is not the first client, the + * reference count is updated and return is immediate. Otherwise + * ipa_client_add_first() will safely add the first client, enabling + * clocks and setting up (resuming) endpoints before returning. + */ +void ipa_client_add(void) +{ + /* There's nothing more to do if this isn't the first reference */ + if (!ipa_client_add_not_first()) + ipa_client_add_first(); +} + +/* Remove an IPA client under protection of the mutex. This is + * called for the last remaining client, but a race could mean + * another caller gets an additional reference before the mutex + * is acquired. When the final reference is dropped, endpoints are + * suspended and IPA clocks disabled. + */ +static void ipa_client_remove_final(void) +{ + mutex_lock(&ipa_ctx->active_clients_mutex); + + /* A reference might have been removed while awaiting the mutex.
*/ + if (!atomic_dec_return(&ipa_ctx->active_clients_count)) { + ipa_ep_suspend_all(); + ipa_disable_clks(); + } + + mutex_unlock(&ipa_ctx->active_clients_mutex); +} + +/* Decrement the active clients reference count, and if the result + * is 0, suspend the endpoints and disable clocks. + * + * This function runs in work queue context, scheduled to run whenever + * the last reference would be dropped in ipa_client_remove(). + */ +static void ipa_client_remove_deferred(struct work_struct *work) +{ + ipa_client_remove_final(); +} + +/* Attempt to remove a client reference, but only if this is not the + * only reference remaining. Returns true if the reference was + * removed, or false if doing so would produce a zero reference + * count. + */ +static bool ipa_client_remove_not_final(void) +{ + return !!atomic_add_unless(&ipa_ctx->active_clients_count, -1, 1); +} + +/* Attempt to remove an IPA client reference. If this represents + * the last reference, arrange for ipa_client_remove_final() to be + * called in workqueue context, dropping the last reference under + * protection of the mutex. + */ +void ipa_client_remove(void) +{ + if (!ipa_client_remove_not_final()) + queue_work(ipa_ctx->power_mgmt_wq, &ipa_client_remove_work); +} + +/** ipa_inc_acquire_wakelock() - Increase the wakeup reference count, and + * acquire the wakelock if necessary + */ +void ipa_inc_acquire_wakelock(void) +{ + unsigned long flags; + + spin_lock_irqsave(&ipa_ctx->wakeup_lock, flags); + + ipa_ctx->wakeup_count++; + if (ipa_ctx->wakeup_count == 1) + __pm_stay_awake(&ipa_ctx->wakeup); + + spin_unlock_irqrestore(&ipa_ctx->wakeup_lock, flags); +} + +/** ipa_dec_release_wakelock() - Decrease the wakeup reference count + * + * If the resulting count is 0, release the wakelock.
+ */ +void ipa_dec_release_wakelock(void) +{ + unsigned long flags; + + spin_lock_irqsave(&ipa_ctx->wakeup_lock, flags); + + ipa_ctx->wakeup_count--; + if (ipa_ctx->wakeup_count == 0) + __pm_relax(&ipa_ctx->wakeup); + + spin_unlock_irqrestore(&ipa_ctx->wakeup_lock, flags); +} + +/** ipa_suspend_handler() - Handle the suspend interrupt + * @interrupt: Interrupt type + * @interrupt_data: Bitmask of endpoints signaling suspend + */ +static void ipa_suspend_handler(enum ipa_irq_type interrupt, u32 interrupt_data) +{ + u32 endpoints = interrupt_data; + + while (endpoints) { + enum ipa_client_type client; + u32 i = __ffs(endpoints); + + endpoints ^= BIT(i); + + if (!ipa_ctx->ep[i].allocated) + continue; + + client = ipa_ctx->ep[i].client; + if (!ipa_ap_consumer(client)) + continue; + + /* endpoint will be unsuspended by enabling IPA clocks */ + mutex_lock(&ipa_ctx->transport_pm.transport_pm_mutex); + if (!atomic_read(&ipa_ctx->transport_pm.dec_clients)) { + ipa_client_add(); + + atomic_set(&ipa_ctx->transport_pm.dec_clients, 1); + } + mutex_unlock(&ipa_ctx->transport_pm.transport_pm_mutex); + } +} + +/** + * ipa_init_interrupts() - Initialize IPA interrupts + */ +static int ipa_init_interrupts(void) +{ + int ret; + + ret = ipa_interrupts_init(); + if (ret) + return ret; + + ipa_add_interrupt_handler(IPA_TX_SUSPEND_IRQ, ipa_suspend_handler); + + return 0; +} + +static void ipa_freeze_clock_vote_and_notify_modem(void) +{ + u32 value; + u32 mask; + + if (ipa_ctx->smp2p_info.res_sent) + return; + + if (!ipa_ctx->smp2p_info.enabled_state) { + ipa_err("smp2p out gpio not assigned\n"); + return; + } + + ipa_ctx->smp2p_info.ipa_clk_on = ipa_client_add_additional(); + + /* Signal whether the clock is enabled */ + mask = BIT(ipa_ctx->smp2p_info.enabled_bit); + value = ipa_ctx->smp2p_info.ipa_clk_on ? 
+			mask : 0;
+	qcom_smem_state_update_bits(ipa_ctx->smp2p_info.enabled_state, mask,
+				    value);
+
+	/* Now indicate that the enabled flag is valid */
+	mask = BIT(ipa_ctx->smp2p_info.valid_bit);
+	value = mask;
+	qcom_smem_state_update_bits(ipa_ctx->smp2p_info.valid_state, mask,
+				    value);
+
+	ipa_ctx->smp2p_info.res_sent = true;
+}
+
+void ipa_reset_freeze_vote(void)
+{
+	u32 mask;
+
+	if (!ipa_ctx->smp2p_info.res_sent)
+		return;
+
+	if (ipa_ctx->smp2p_info.ipa_clk_on)
+		ipa_client_remove();
+
+	/* Reset the clock enabled valid flag */
+	mask = BIT(ipa_ctx->smp2p_info.valid_bit);
+	qcom_smem_state_update_bits(ipa_ctx->smp2p_info.valid_state, mask, 0);
+
+	/* Mark the clock disabled for good measure... */
+	mask = BIT(ipa_ctx->smp2p_info.enabled_bit);
+	qcom_smem_state_update_bits(ipa_ctx->smp2p_info.enabled_state, mask, 0);
+
+	ipa_ctx->smp2p_info.res_sent = false;
+	ipa_ctx->smp2p_info.ipa_clk_on = false;
+}
+
+static int
+ipa_panic_notifier(struct notifier_block *this, unsigned long event, void *ptr)
+{
+	ipa_freeze_clock_vote_and_notify_modem();
+	ipa_uc_panic_notifier();
+
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block ipa_panic_blk = {
+	.notifier_call = ipa_panic_notifier,
+	/* IPA panic handler needs to run before modem shuts down */
+	.priority = INT_MAX,
+};
+
+static void ipa_register_panic_hdlr(void)
+{
+	atomic_notifier_chain_register(&panic_notifier_list, &ipa_panic_blk);
+}
+
+/* Remoteproc callbacks for SSR events: prepare, start, stop, unprepare */
+int ipa_ssr_prepare(struct rproc_subdev *subdev)
+{
+	pr_info("======== SSR prepare received ========\n");
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_prepare);
+
+int ipa_ssr_start(struct rproc_subdev *subdev)
+{
+	pr_info("======== SSR start received ========\n");
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_start);
+
+void ipa_ssr_stop(struct rproc_subdev *subdev, bool crashed)
+{
+	pr_info("======== SSR stop received ========\n");
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_stop);
+
+void ipa_ssr_unprepare(struct
+		       rproc_subdev *subdev)
+{
+	pr_info("======== SSR unprepare received ========\n");
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_unprepare);
+
+/**
+ * ipa_post_init() - Initialize the IPA driver (part II).
+ *
+ * Perform initialization that requires interaction with IPA hardware.
+ */
+static void ipa_post_init(void)
+{
+	int ret;
+
+	ipa_debug("ipa_post_init() started\n");
+
+	ret = gsi_device_init(ipa_ctx->gsi);
+	if (ret) {
+		ipa_err("gsi register error %d\n", ret);
+		return;
+	}
+
+	/* Set up the AP-IPA endpoints */
+	if (ipa_ep_apps_setup()) {
+		ipa_err("failed to set up IPA-Apps endpoints\n");
+		gsi_device_exit(ipa_ctx->gsi);
+
+		return;
+	}
+
+	ipa_ctx->uc_ctx = ipa_uc_init(ipa_ctx->ipa_phys);
+	if (!ipa_ctx->uc_ctx)
+		ipa_err("microcontroller init failed\n");
+
+	ipa_register_panic_hdlr();
+
+	ipa_ctx->modem_clk_vote_valid = true;
+
+	if (ipa_wwan_init())
+		ipa_err("WWAN init failed (ignoring)\n");
+
+	dev_info(ipa_ctx->dev, "IPA driver initialization was successful\n");
+}
+
+/** ipa_pre_init() - Initialize the IPA driver (part I).
+ *
+ * Perform initialization that does not require access to IPA hardware.
+ */
+static int ipa_pre_init(void)
+{
+	int ret = 0;
+
+	/* Enable IPA clocks explicitly to allow initialization */
+	ipa_enable_clks();
+
+	ipa_init_hw();
+
+	ipa_ctx->ep_count = ipa_get_ep_count();
+	ipa_debug("ep_count %u\n", ipa_ctx->ep_count);
+	ipa_assert(ipa_ctx->ep_count <= IPA_EP_COUNT_MAX);
+
+	ipa_sram_settings_read();
+	if (ipa_ctx->smem_size < IPA_MEM_END_OFST) {
+		ipa_err("insufficient memory: %hu bytes available, need %u\n",
+			ipa_ctx->smem_size, IPA_MEM_END_OFST);
+		ret = -ENOMEM;
+		goto err_disable_clks;
+	}
+
+	mutex_init(&ipa_ctx->active_clients_mutex);
+	atomic_set(&ipa_ctx->active_clients_count, 1);
+
+	/* Create workqueue for power management */
+	ipa_ctx->power_mgmt_wq =
+		create_singlethread_workqueue("ipa_power_mgmt");
+	if (!ipa_ctx->power_mgmt_wq) {
+		ipa_err("failed to create power mgmt wq\n");
+		ret = -ENOMEM;
+		goto err_disable_clks;
+	}
+
+	mutex_init(&ipa_ctx->transport_pm.transport_pm_mutex);
+
+	/* Initialize the data path */
+	ipa_ctx->dp = ipa_dp_init();
+	if (!ipa_ctx->dp) {
+		ret = -ENOMEM;
+		goto err_destroy_pm_wq;
+	}
+
+	/* Allocate memory for the DMA_TASK workaround */
+	ret = ipa_gsi_dma_task_alloc();
+	if (ret)
+		goto err_dp_exit;
+
+	/* Create a wakeup source */
+	wakeup_source_init(&ipa_ctx->wakeup, "IPA_WS");
+	spin_lock_init(&ipa_ctx->wakeup_lock);
+
+	/* Note that enabling dynamic clock division must not be
+	 * attempted for IPA hardware versions prior to 3.5.
+	 */
+	ipa_enable_dcd();
+
+	/* Assign resource limits to each group */
+	ipa_set_resource_groups_min_max_limits();
+
+	ret = ipa_init_interrupts();
+	if (!ret)
+		return 0;	/* Success!
+			 */
+
+	ipa_err("IPA interrupt initialization failed\n");
+err_dp_exit:
+	ipa_dp_exit(ipa_ctx->dp);
+	ipa_ctx->dp = NULL;
+err_destroy_pm_wq:
+	destroy_workqueue(ipa_ctx->power_mgmt_wq);
+err_disable_clks:
+	ipa_disable_clks();
+
+	return ret;
+}
+
+static int ipa_firmware_load(struct device *dev)
+{
+	const struct firmware *fw;
+	struct device_node *node;
+	struct resource res;
+	phys_addr_t phys;
+	ssize_t size;
+	void *virt;
+	int ret;
+
+	ret = request_firmware(&fw, IPA_FWS_PATH, dev);
+	if (ret)
+		return ret;
+
+	node = of_parse_phandle(dev->of_node, "memory-region", 0);
+	if (!node) {
+		dev_err(dev, "memory-region not specified\n");
+		ret = -EINVAL;
+		goto out_release_firmware;
+	}
+
+	ret = of_address_to_resource(node, 0, &res);
+	if (ret)
+		goto out_release_firmware;
+
+	phys = res.start;
+	size = (size_t)resource_size(&res);
+	virt = memremap(phys, size, MEMREMAP_WC);
+	if (!virt) {
+		ret = -ENOMEM;
+		goto out_release_firmware;
+	}
+
+	ret = qcom_mdt_load(dev, fw, IPA_FWS_PATH, IPA_PAS_ID,
+			    virt, phys, size, NULL);
+	if (!ret)
+		ret = qcom_scm_pas_auth_and_reset(IPA_PAS_ID);
+
+	memunmap(virt);
+out_release_firmware:
+	release_firmware(fw);
+
+	return ret;
+}
+
+/* Threaded IRQ handler for modem "ipa-clock-query" SMP2P interrupt */
+static irqreturn_t ipa_smp2p_modem_clk_query_isr(int irq, void *ctxt)
+{
+	ipa_freeze_clock_vote_and_notify_modem();
+
+	return IRQ_HANDLED;
+}
+
+/* Threaded IRQ handler for modem "ipa-post-init" SMP2P interrupt */
+static irqreturn_t ipa_smp2p_modem_post_init_isr(int irq, void *ctxt)
+{
+	ipa_post_init();
+
+	return IRQ_HANDLED;
+}
+
+static int
+ipa_smp2p_irq_init(struct device *dev, const char *name, irq_handler_t handler)
+{
+	struct device_node *node = dev->of_node;
+	unsigned int irq;
+	int ret;
+
+	ret = of_irq_get_byname(node, name);
+	if (ret < 0)
+		return ret;
+	if (!ret)
+		return -EINVAL;	/* IRQ mapping failure */
+	irq = ret;
+
+	ret = devm_request_threaded_irq(dev, irq, NULL, handler, 0, name, dev);
+	if (ret)
+		return ret;
+
+	return irq;
+}
+
+static void
+ipa_smp2p_irq_exit(struct device *dev, unsigned int irq)
+{
+	devm_free_irq(dev, irq, dev);
+}
+
+static int ipa_smp2p_init(struct device *dev, bool modem_init)
+{
+	struct qcom_smem_state *enabled_state;
+	struct qcom_smem_state *valid_state;
+	struct device_node *node;
+	unsigned int enabled_bit;
+	unsigned int valid_bit;
+	unsigned int clock_irq;
+	int ret;
+
+	node = dev->of_node;
+
+	valid_state = qcom_smem_state_get(dev, "ipa-clock-enabled-valid",
+					  &valid_bit);
+	if (IS_ERR(valid_state))
+		return PTR_ERR(valid_state);
+
+	enabled_state = qcom_smem_state_get(dev, "ipa-clock-enabled",
+					    &enabled_bit);
+	if (IS_ERR(enabled_state)) {
+		ret = PTR_ERR(enabled_state);
+		ipa_err("error %d getting ipa-clock-enabled state\n", ret);
+
+		return ret;
+	}
+
+	ret = ipa_smp2p_irq_init(dev, "ipa-clock-query",
+				 ipa_smp2p_modem_clk_query_isr);
+	if (ret < 0)
+		return ret;
+	clock_irq = ret;
+
+	if (modem_init) {
+		/* Result will be non-zero (negative for error) */
+		ret = ipa_smp2p_irq_init(dev, "ipa-post-init",
+					 ipa_smp2p_modem_post_init_isr);
+		if (ret < 0) {
+			ipa_smp2p_irq_exit(dev, clock_irq);
+
+			return ret;
+		}
+	}
+
+	/* Success.  Record our smp2p information */
+	ipa_ctx->smp2p_info.valid_state = valid_state;
+	ipa_ctx->smp2p_info.valid_bit = valid_bit;
+	ipa_ctx->smp2p_info.enabled_state = enabled_state;
+	ipa_ctx->smp2p_info.enabled_bit = enabled_bit;
+	ipa_ctx->smp2p_info.clock_query_irq = clock_irq;
+	ipa_ctx->smp2p_info.post_init_irq = modem_init ?
+			ret : 0;
+
+	return 0;
+}
+
+static void ipa_smp2p_exit(struct device *dev)
+{
+	if (ipa_ctx->smp2p_info.post_init_irq)
+		ipa_smp2p_irq_exit(dev, ipa_ctx->smp2p_info.post_init_irq);
+	ipa_smp2p_irq_exit(dev, ipa_ctx->smp2p_info.clock_query_irq);
+
+	memset(&ipa_ctx->smp2p_info, 0, sizeof(ipa_ctx->smp2p_info));
+}
+
+static const struct ipa_match_data tz_init = {
+	.init_type = ipa_tz_init,
+};
+
+static const struct ipa_match_data modem_init = {
+	.init_type = ipa_modem_init,
+};
+
+static const struct of_device_id ipa_plat_drv_match[] = {
+	{
+		.compatible = "qcom,ipa-sdm845-tz_init",
+		.data = &tz_init,
+	},
+	{
+		.compatible = "qcom,ipa-sdm845-modem_init",
+		.data = &modem_init,
+	},
+	{}
+};
+
+static int ipa_plat_drv_probe(struct platform_device *pdev)
+{
+	const struct ipa_match_data *match_data;
+	struct resource *res;
+	struct device *dev;
+	bool modem_init;
+	int ret;
+
+	/* We assume we're working on 64-bit hardware */
+	BUILD_BUG_ON(!IS_ENABLED(CONFIG_64BIT));
+
+	dev = &pdev->dev;
+
+	match_data = of_device_get_match_data(dev);
+	modem_init = match_data->init_type == ipa_modem_init;
+
+	/* If we need Trust Zone, make sure it's ready */
+	if (!modem_init && !qcom_scm_is_available())
+		return -EPROBE_DEFER;
+
+	/* Initialize the smp2p driver early.  It might not be ready
+	 * when we're probed, so it might return -EPROBE_DEFER.
+	 */
+	ret = ipa_smp2p_init(dev, modem_init);
+	if (ret)
+		return ret;
+
+	/* Initialize the interconnect driver early too.  It might
+	 * also return -EPROBE_DEFER.
+ */
+	ret = ipa_interconnect_init(dev);
+	if (ret)
+		goto out_smp2p_exit;
+
+	ret = ipa_clock_init(dev);
+	if (ret)
+		goto err_interconnect_exit;
+
+	ipa_ctx->dev = dev;	/* Set early for ipa_err()/ipa_debug() */
+
+	/* Compute a bitmask representing which endpoints support filtering */
+	ipa_ctx->filter_bitmap = ipa_filter_bitmap_init();
+	ipa_debug("filter_bitmap 0x%08x\n", ipa_ctx->filter_bitmap);
+	if (!ipa_ctx->filter_bitmap) {
+		ret = -ENODEV;
+		goto err_clock_exit;
+	}
+
+	ret = platform_get_irq_byname(pdev, "ipa");
+	if (ret < 0)
+		goto err_clear_filter_bitmap;
+	ipa_ctx->ipa_irq = ret;
+
+	/* Get the IPA memory range */
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ipa");
+	if (!res) {
+		ret = -ENODEV;
+		goto err_clear_ipa_irq;
+	}
+
+	/* Set up IPA register access */
+	ret = ipa_reg_init(res->start, (size_t)resource_size(res));
+	if (ret)
+		goto err_clear_ipa_irq;
+	ipa_ctx->ipa_phys = res->start;
+
+	ipa_ctx->gsi = gsi_init(pdev);
+	if (IS_ERR(ipa_ctx->gsi)) {
+		ret = PTR_ERR(ipa_ctx->gsi);
+		goto err_clear_gsi;
+	}
+
+	ret = ipa_dma_init(dev, IPA_HW_TBL_SYSADDR_ALIGN);
+	if (ret)
+		goto err_clear_gsi;
+
+	ret = ipahal_init();
+	if (ret)
+		goto err_dma_exit;
+
+	ipa_ctx->cmd_prod_ep_id = IPA_EP_ID_BAD;
+	ipa_ctx->lan_cons_ep_id = IPA_EP_ID_BAD;
+
+	/* Proceed to real initialization */
+	ret = ipa_pre_init();
+	if (ret)
+		goto err_clear_dev;
+
+	/* If the modem is not verifying and loading firmware we need to
+	 * load it ourselves.  Only then can we proceed with the second
+	 * stage of IPA initialization.  If the modem is doing it, it
+	 * will send an SMP2P interrupt to signal this has been done,
+	 * and that will trigger the "post init".
+ */
+	if (!modem_init) {
+		ret = ipa_firmware_load(dev);
+		if (ret)
+			goto err_clear_dev;
+
+		/* Now we can proceed to stage two initialization */
+		ipa_post_init();
+	}
+
+	return 0;	/* Success */
+
+err_clear_dev:
+	ipa_ctx->lan_cons_ep_id = 0;
+	ipa_ctx->cmd_prod_ep_id = 0;
+	ipahal_exit();
+err_dma_exit:
+	ipa_dma_exit();
+err_clear_gsi:
+	ipa_ctx->gsi = NULL;
+	ipa_ctx->ipa_phys = 0;
+	ipa_reg_exit();
+err_clear_ipa_irq:
+	ipa_ctx->ipa_irq = 0;
+err_clear_filter_bitmap:
+	ipa_ctx->filter_bitmap = 0;
+err_clock_exit:
+	ipa_clock_exit();
+	ipa_ctx->dev = NULL;
+err_interconnect_exit:
+	ipa_interconnect_exit();
+out_smp2p_exit:
+	ipa_smp2p_exit(dev);
+
+	return ret;
+}
+
+static int ipa_plat_drv_remove(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+
+	ipa_ctx->dev = NULL;
+	ipahal_exit();
+	ipa_dma_exit();
+	ipa_ctx->gsi = NULL;		/* XXX ipa_gsi_exit() */
+	ipa_reg_exit();
+
+	ipa_ctx->ipa_phys = 0;
+
+	if (ipa_ctx->lan_cons_ep_id != IPA_EP_ID_BAD) {
+		ipa_ep_free(ipa_ctx->lan_cons_ep_id);
+		ipa_ctx->lan_cons_ep_id = IPA_EP_ID_BAD;
+	}
+	if (ipa_ctx->cmd_prod_ep_id != IPA_EP_ID_BAD) {
+		ipa_ep_free(ipa_ctx->cmd_prod_ep_id);
+		ipa_ctx->cmd_prod_ep_id = IPA_EP_ID_BAD;
+	}
+	ipa_ctx->ipa_irq = 0;		/* XXX Need to de-initialize? */
+	ipa_ctx->filter_bitmap = 0;
+	ipa_interconnect_exit();
+	ipa_smp2p_exit(dev);
+
+	return 0;
+}
+
+/**
+ * ipa_ap_suspend() - suspend callback for the PM framework
+ * @dev:	IPA device structure
+ *
+ * This callback is invoked by the PM framework when the AP enters
+ * system suspend.
+ *
+ * Return: 0 if successful, -EAGAIN if IPA is in use
+ */
+int ipa_ap_suspend(struct device *dev)
+{
+	u32 i;
+
+	/* If any tx/rx handler is in polling mode, fail to suspend */
+	for (i = 0; i < ipa_ctx->ep_count; i++) {
+		if (ipa_ctx->ep[i].sys && ipa_ep_polling(&ipa_ctx->ep[i])) {
+			ipa_err("EP %d is in polling state, do not suspend\n",
+				i);
+			return -EAGAIN;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * ipa_ap_resume() - resume callback for the PM framework
+ * @dev:	IPA device structure
+ *
+ * This callback is invoked by the PM framework when the AP resumes
+ * from system suspend.
+ *
+ * Return: Zero
+ */
+int ipa_ap_resume(struct device *dev)
+{
+	return 0;
+}
+
+static const struct dev_pm_ops ipa_pm_ops = {
+	.suspend_noirq = ipa_ap_suspend,
+	.resume_noirq = ipa_ap_resume,
+};
+
+static struct platform_driver ipa_plat_drv = {
+	.probe = ipa_plat_drv_probe,
+	.remove = ipa_plat_drv_remove,
+	.driver = {
+		.name = "ipa",
+		.owner = THIS_MODULE,
+		.pm = &ipa_pm_ops,
+		.of_match_table = ipa_plat_drv_match,
+	},
+};
+
+builtin_platform_driver(ipa_plat_drv);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("IPA HW device driver");