From patchwork Wed Nov 7 00:32:42 2018
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 150357
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Cc: netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, syadagir@codeaurora.org, mjavid@codeaurora.org, robh+dt@kernel.org, mark.rutland@arm.com
Subject: [RFC PATCH 04/12] soc: qcom: ipa: immediate commands
Date: Tue, 6 Nov 2018 18:32:42 -0600
Message-Id: <20181107003250.5832-5-elder@linaro.org>
In-Reply-To: <20181107003250.5832-1-elder@linaro.org>
References: <20181107003250.5832-1-elder@linaro.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

This patch contains (mostly) code implementing "immediate commands." (The source files are still named "ipahal" for historical reasons.)
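As an illustration (not part of this patch), here is a minimal sketch of how a payload built by these helpers is handed to the hardware. It is modeled on ipa_gsi_dma_task_inject() from a later patch in this series; the function name is made up, and the struct ipa_desc, IPA_IMM_CMD_DESC and ipa_send_cmd_timeout() interfaces are assumed from "ipa_utils.c":

static int example_send_dma_task(void *payload)
{
	struct ipa_desc desc = { };

	/* payload was built by ipahal_dma_task_32b_addr_pyld() and is
	 * eventually released with ipahal_payload_free()
	 */
	desc.type = IPA_IMM_CMD_DESC;
	desc.len_opcode = IPA_IMM_CMD_DMA_TASK_32B_ADDR; /* opcode, not a length */
	desc.payload = payload;

	/* Queued on the APPS_CMD_PROD channel like any other transfer */
	return ipa_send_cmd_timeout(&desc, IPA_GSI_DMA_TASK_TIMEOUT);
}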
One channel (APPS CMD_PROD) is used for sending commands *to* the IPA itself, rather than passing data through it. These immediate commands are issued to the IPA using the normal GSI queueing mechanism. And each command's completion is handled using the normal GSI transfer completion mechanisms. In addition to immediate commands, the "IPA HAL" includes code for interpreting status packets that are supplied to the IPA on consumer channels. Signed-off-by: Alex Elder --- drivers/net/ipa/ipahal.c | 541 +++++++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipahal.h | 253 ++++++++++++++++++ 2 files changed, 794 insertions(+) create mode 100644 drivers/net/ipa/ipahal.c create mode 100644 drivers/net/ipa/ipahal.h -- 2.17.1 diff --git a/drivers/net/ipa/ipahal.c b/drivers/net/ipa/ipahal.c new file mode 100644 index 000000000000..de00bcd54d4f --- /dev/null +++ b/drivers/net/ipa/ipahal.c @@ -0,0 +1,541 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ + +#include +#include +#include + +#include "ipahal.h" +#include "ipa_i.h" /* ipa_err() */ +#include "ipa_dma.h" + +/** + * DOC: IPA Immediate Commands + * + * The APPS_CMD_PROD channel is used to issue immediate commands to + * the IPA. An immediate command is generally used to request the + * IPA do something other than data transfer. + * + * An immediate command is represented by a GSI transfer element. + * Each immediate command has a well-defined format, with a known + * length. The transfer element's length field can therefore be + * used to hold a command's opcode. The "payload" of an immediate + * command contains additional information required for the command. + * It resides in DRAM and is referred to using the DMA memory data + * pointer (the same one used to refer to the data in a "normal" + * transfer). + * + * Immediate commands are issued to the IPA through the APPS_CMD_PROD + * channel using the normal GSI queueing mechanism. And each command's + * completion is handled using the normal GSI transfer completion + * mechanisms. + */ + +/** + * struct ipahal_context - HAL global context data + * @empty_fltrt_tbl: Empty table to be used for table initialization + */ +static struct ipahal_context { + struct ipa_dma_mem empty_fltrt_tbl; +} ipahal_ctx_struct; +static struct ipahal_context *ipahal_ctx = &ipahal_ctx_struct; + +/* enum ipa_pipeline_clear_option - Values for pipeline clear waiting options + * @IPAHAL_HPS_CLEAR: Wait for HPS clear. All queues except high priority queue + * shall not be serviced until HPS is clear of packets or immediate commands. + * The high priority Rx queue / Q6ZIP group shall still be serviced normally. + * + * @IPAHAL_SRC_GRP_CLEAR: Wait for originating source group to be clear + * (for no packet contexts allocated to the originating source group). + * The source group / Rx queue shall not be serviced until all previously + * allocated packet contexts are released. All other source groups/queues shall + * be serviced normally. + * + * @IPAHAL_FULL_PIPELINE_CLEAR: Wait for full pipeline to be clear. + * All groups / Rx queues shall not be serviced until IPA pipeline is fully + * clear. This should be used for debug only. + * + * The values assigned to these are assumed by the REGISTER_WRITE + * (struct ipa_imm_cmd_hw_register_write) and the DMA_SHARED_MEM + * (struct ipa_imm_cmd_hw_dma_shared_mem) immediate commands for + * IPA version 3 hardware. 
They are also used to modify the opcode + * used to implement these commands for IPA version 4 hardware. + */ +enum ipahal_pipeline_clear_option { + IPAHAL_HPS_CLEAR = 0, + IPAHAL_SRC_GRP_CLEAR = 1, + IPAHAL_FULL_PIPELINE_CLEAR = 2, +}; + +/* Immediate commands H/W structures */ + +/* struct ipa_imm_cmd_hw_ip_fltrt_init - IP_V*_FILTER_INIT/IP_V*_ROUTING_INIT + * command payload in H/W format. + * Inits IPv4/v6 routing or filter block. + * @hash_rules_addr: Addr in system mem where hashable flt/rt rules starts + * @hash_rules_size: Size in bytes of the hashable tbl to cpy to local mem + * @hash_local_addr: Addr in shared mem where hashable flt/rt tbl should + * be copied to + * @nhash_rules_size: Size in bytes of the non-hashable tbl to cpy to local mem + * @nhash_local_addr: Addr in shared mem where non-hashable flt/rt tbl should + * be copied to + * @rsvd: reserved + * @nhash_rules_addr: Addr in sys mem where non-hashable flt/rt tbl starts + */ +struct ipa_imm_cmd_hw_ip_fltrt_init { + u64 hash_rules_addr; + u64 hash_rules_size : 12, + hash_local_addr : 16, + nhash_rules_size : 12, + nhash_local_addr : 16, + rsvd : 8; + u64 nhash_rules_addr; +}; + +/* struct ipa_imm_cmd_hw_hdr_init_local - HDR_INIT_LOCAL command payload + * in H/W format. + * Inits hdr table within local mem with the hdrs and their length. + * @hdr_table_addr: Word address in sys mem where the table starts (SRC) + * @size_hdr_table: Size of the above (in bytes) + * @hdr_addr: header address in IPA sram (used as DST for memory copy) + * @rsvd: reserved + */ +struct ipa_imm_cmd_hw_hdr_init_local { + u64 hdr_table_addr; + u32 size_hdr_table : 12, + hdr_addr : 16, + rsvd : 4; +}; + +/* struct ipa_imm_cmd_hw_dma_shared_mem - DMA_SHARED_MEM command payload + * in H/W format. + * Perform mem copy into or out of the SW area of IPA local mem + * @sw_rsvd: Ignored by H/W. My be used by S/W + * @size: Size in bytes of data to copy. Expected size is up to 2K bytes + * @local_addr: Address in IPA local memory + * @direction: Read or write? + * 0: IPA write, Write to local address from system address + * 1: IPA read, Read from local address to system address + * @skip_pipeline_clear: 0 to wait until IPA pipeline is clear. 1 don't wait + * @pipeline_clear_options: options for pipeline to clear + * 0: HPS - no pkt inside HPS (not grp specific) + * 1: source group - The immediate cmd src grp does npt use any pkt ctxs + * 2: Wait until no pkt reside inside IPA pipeline + * 3: reserved + * @rsvd: reserved - should be set to zero + * @system_addr: Address in system memory + */ +struct ipa_imm_cmd_hw_dma_shared_mem { + u16 sw_rsvd; + u16 size; + u16 local_addr; + u16 direction : 1, + skip_pipeline_clear : 1, + pipeline_clear_options : 2, + rsvd : 12; + u64 system_addr; +}; + +/* struct ipa_imm_cmd_hw_dma_task_32b_addr - + * IPA_DMA_TASK_32B_ADDR command payload in H/W format. + * Used by clients using 32bit addresses. Used to perform DMA operation on + * multiple descriptors. + * The Opcode is dynamic, where it holds the number of buffer to process + * @sw_rsvd: Ignored by H/W. My be used by S/W + * @cmplt: Complete flag: When asserted IPA will interrupt SW when the entire + * DMA related data was completely xfered to its destination. + * @eof: Enf Of Frame flag: When asserted IPA will assert the EOT to the + * dest client. 
This is used used for aggr sequence + * @flsh: Flush flag: When asserted, pkt will go through the IPA blocks but + * will not be xfered to dest client but rather will be discarded + * @lock: Lock endpoint flag: When asserted, IPA will stop processing + * descriptors from other EPs in the same src grp (RX queue) + * @unlock: Unlock endpoint flag: When asserted, IPA will stop exclusively + * servicing current EP out of the src EPs of the grp (RX queue) + * @size1: Size of buffer1 data + * @addr1: Pointer to buffer1 data + * @packet_size: Total packet size. If a pkt send using multiple DMA_TASKs, + * only the first one needs to have this field set. It will be ignored + * in subsequent DMA_TASKs until the packet ends (EOT). First DMA_TASK + * must contain this field (2 or more buffers) or EOT. + */ +struct ipa_imm_cmd_hw_dma_task_32b_addr { + u16 sw_rsvd : 11, + cmplt : 1, + eof : 1, + flsh : 1, + lock : 1, + unlock : 1; + u16 size1; + u32 addr1; + u16 packet_size; + u16 rsvd1; + u32 rsvd2; +}; + +/* IPA Status packet H/W structures and info */ + +/* struct ipa_status_pkt_hw - IPA status packet payload in H/W format. + * This structure describes the status packet H/W structure for the + * following statuses: IPA_STATUS_PACKET, IPA_STATUS_DROPPED_PACKET, + * IPA_STATUS_SUSPENDED_PACKET. + * Other statuses types has different status packet structure. + * @status_opcode: The Type of the status (Opcode). + * @exception: (not bitmask) - the first exception that took place. + * In case of exception, src endp and pkt len are always valid. + * @status_mask: Bit mask specifying on which H/W blocks the pkt was processed. + * @pkt_len: Pkt payload len including hdr, include retained hdr if used. Does + * not include padding or checksum trailer len. + * @endp_src_idx: Source end point index. + * @rsvd1: reserved + * @endp_dest_idx: Destination end point index. + * Not valid in case of exception + * @rsvd2: reserved + * @metadata: meta data value used by packet + * @flt_local: Filter table location flag: Does matching flt rule belongs to + * flt tbl that resides in lcl memory? (if not, then system mem) + * @flt_hash: Filter hash hit flag: Does matching flt rule was in hash tbl? + * @flt_global: Global filter rule flag: Does matching flt rule belongs to + * the global flt tbl? (if not, then the per endp tables) + * @flt_ret_hdr: Retain header in filter rule flag: Does matching flt rule + * specifies to retain header? + * @flt_rule_id: The ID of the matching filter rule. This info can be combined + * with endp_src_idx to locate the exact rule. ID=0x3ff reserved to specify + * flt miss. In case of miss, all flt info to be ignored + * @rt_local: Route table location flag: Does matching rt rule belongs to + * rt tbl that resides in lcl memory? (if not, then system mem) + * @rt_hash: Route hash hit flag: Does matching rt rule was in hash tbl? + * @ucp: UC Processing flag. + * @rt_tbl_idx: Index of rt tbl that contains the rule on which was a match + * @rt_rule_id: The ID of the matching rt rule. This info can be combined + * with rt_tbl_idx to locate the exact rule. ID=0x3ff reserved to specify + * rt miss. In case of miss, all rt info to be ignored + * @nat_hit: NAT hit flag: Was their NAT hit? 
+ * @nat_entry_idx: Index of the NAT entry used of NAT processing + * @nat_type: Defines the type of the NAT operation (ignored for now) + * @tag_info: S/W defined value provided via immediate command + * @seq_num: Per source endp unique packet sequence number + * @time_of_day_ctr: running counter from IPA clock + * @hdr_local: Header table location flag: In header insertion, was the header + * taken from the table resides in local memory? (If no, then system mem) + * @hdr_offset: Offset of used header in the header table + * @frag_hit: Frag hit flag: Was their frag rule hit in H/W frag table? + * @frag_rule: Frag rule index in H/W frag table in case of frag hit + * @hw_specific: H/W specific reserved value + */ +#define IPA_RULE_ID_BITS 10 /* See ipahal_is_rule_miss_id() */ +struct ipa_pkt_status_hw { + u8 status_opcode; + u8 exception; + u16 status_mask; + u16 pkt_len; + u8 endp_src_idx : 5, + rsvd1 : 3; + u8 endp_dest_idx : 5, + rsvd2 : 3; + u32 metadata; + u16 flt_local : 1, + flt_hash : 1, + flt_global : 1, + flt_ret_hdr : 1, + flt_rule_id : IPA_RULE_ID_BITS, + rt_local : 1, + rt_hash : 1; + u16 ucp : 1, + rt_tbl_idx : 5, + rt_rule_id : IPA_RULE_ID_BITS; + u64 nat_hit : 1, + nat_entry_idx : 13, + nat_type : 2, + tag_info : 48; + u32 seq_num : 8, + time_of_day_ctr : 24; + u16 hdr_local : 1, + hdr_offset : 10, + frag_hit : 1, + frag_rule : 4; + u16 hw_specific; +}; + +void *ipahal_dma_shared_mem_write_pyld(struct ipa_dma_mem *mem, u32 offset) +{ + struct ipa_imm_cmd_hw_dma_shared_mem *data; + + ipa_assert(mem->size < 1 << 16); /* size is 16 bits wide */ + ipa_assert(offset < 1 << 16); /* local_addr is 16 bits wide */ + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return NULL; + + data->size = mem->size; + data->local_addr = offset; + data->direction = 0; /* 0 = write to IPA; 1 = read from IPA */ + data->skip_pipeline_clear = 0; + data->pipeline_clear_options = IPAHAL_HPS_CLEAR; + data->system_addr = mem->phys; + + return data; +} + +void *ipahal_hdr_init_local_pyld(struct ipa_dma_mem *mem, u32 offset) +{ + struct ipa_imm_cmd_hw_hdr_init_local *data; + + ipa_assert(mem->size < 1 << 12); /* size_hdr_table is 12 bits wide */ + ipa_assert(offset < 1 << 16); /* hdr_addr is 16 bits wide */ + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return NULL; + + data->hdr_table_addr = mem->phys; + data->size_hdr_table = mem->size; + data->hdr_addr = offset; + + return data; +} + +static void *fltrt_init_common(struct ipa_dma_mem *mem, u32 hash_offset, + u32 nhash_offset) +{ + struct ipa_imm_cmd_hw_ip_fltrt_init *data; + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return NULL; + + data->hash_rules_addr = (u64)mem->phys; + data->hash_rules_size = (u32)mem->size; + data->hash_local_addr = hash_offset; + data->nhash_rules_addr = (u64)mem->phys; + data->nhash_rules_size = (u32)mem->size; + data->nhash_local_addr = nhash_offset; + + return data; +} + +void *ipahal_ip_v4_routing_init_pyld(struct ipa_dma_mem *mem, u32 hash_offset, + u32 nhash_offset) +{ + return fltrt_init_common(mem, hash_offset, nhash_offset); +} + +void *ipahal_ip_v6_routing_init_pyld(struct ipa_dma_mem *mem, u32 hash_offset, + u32 nhash_offset) +{ + return fltrt_init_common(mem, hash_offset, nhash_offset); +} + +void *ipahal_ip_v4_filter_init_pyld(struct ipa_dma_mem *mem, u32 hash_offset, + u32 nhash_offset) +{ + return fltrt_init_common(mem, hash_offset, nhash_offset); +} + +void *ipahal_ip_v6_filter_init_pyld(struct ipa_dma_mem *mem, u32 hash_offset, + u32 nhash_offset) +{ + return 
fltrt_init_common(mem, hash_offset, nhash_offset); +} + +void *ipahal_dma_task_32b_addr_pyld(struct ipa_dma_mem *mem) +{ + struct ipa_imm_cmd_hw_dma_task_32b_addr *data; + + /* size1 and packet_size are both 16 bits wide */ + ipa_assert(mem->size < 1 << 16); + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return NULL; + + data->cmplt = 0; + data->eof = 0; + data->flsh = 1; + data->lock = 0; + data->unlock = 0; + data->size1 = mem->size; + data->addr1 = mem->phys; + data->packet_size = mem->size; + + return data; +} + +void ipahal_payload_free(void *payload) +{ + kfree(payload); +} + +/* IPA Packet Status Logic */ + +/* Maps an exception type returned in a ipa_pkt_status_hw structure + * to the ipahal_pkt_status_exception value that represents it in + * the exception field of a ipahal_pkt_status structure. Returns + * IPAHAL_PKT_STATUS_EXCEPTION_MAX for an unrecognized value. + */ +static enum ipahal_pkt_status_exception +exception_map(u8 exception, bool is_ipv6) +{ + switch (exception) { + case 0x00: return IPAHAL_PKT_STATUS_EXCEPTION_NONE; + case 0x01: return IPAHAL_PKT_STATUS_EXCEPTION_DEAGGR; + case 0x04: return IPAHAL_PKT_STATUS_EXCEPTION_IPTYPE; + case 0x08: return IPAHAL_PKT_STATUS_EXCEPTION_PACKET_LENGTH; + case 0x10: return IPAHAL_PKT_STATUS_EXCEPTION_FRAG_RULE_MISS; + case 0x20: return IPAHAL_PKT_STATUS_EXCEPTION_SW_FILT; + case 0x40: return is_ipv6 ? IPAHAL_PKT_STATUS_EXCEPTION_IPV6CT + : IPAHAL_PKT_STATUS_EXCEPTION_NAT; + default: return IPAHAL_PKT_STATUS_EXCEPTION_MAX; + } +} + +/* ipahal_pkt_status_get_size() - Get H/W size of packet status */ +u32 ipahal_pkt_status_get_size(void) +{ + return sizeof(struct ipa_pkt_status_hw); +} + +/* ipahal_pkt_status_parse() - Parse Packet Status payload to abstracted form + * @unparsed_status: Pointer to H/W format of the packet status as read from H/W + * @status: Pointer to pre-allocated buffer where the parsed info will be stored + */ +void ipahal_pkt_status_parse(const void *unparsed_status, + struct ipahal_pkt_status *status) +{ + const struct ipa_pkt_status_hw *hw_status = unparsed_status; + bool is_ipv6; + + status->status_opcode = + (enum ipahal_pkt_status_opcode)hw_status->status_opcode; + is_ipv6 = hw_status->status_mask & BIT(7) ? false : true; + /* If hardware status values change we may have to re-map this */ + status->status_mask = + (enum ipahal_pkt_status_mask)hw_status->status_mask; + status->exception = exception_map(hw_status->exception, is_ipv6); + status->pkt_len = hw_status->pkt_len; + status->endp_src_idx = hw_status->endp_src_idx; + status->endp_dest_idx = hw_status->endp_dest_idx; + status->metadata = hw_status->metadata; + status->rt_miss = ipahal_is_rule_miss_id(hw_status->rt_rule_id); +} + +int ipahal_init(void) +{ + struct ipa_dma_mem *mem = &ipahal_ctx->empty_fltrt_tbl; + + /* Set up an empty filter/route table entry in system + * memory. This will be used, for example, to delete a + * route safely. + */ + if (ipa_dma_alloc(mem, IPA_HW_TBL_WIDTH, GFP_KERNEL)) { + ipa_err("error allocating empty filter/route table\n"); + return -ENOMEM; + } + + return 0; +} + +void ipahal_exit(void) +{ + ipa_dma_free(&ipahal_ctx->empty_fltrt_tbl); +} + +/* Does the given rule ID represent a routing or filter rule miss? + * A rule miss is indicated as an all-1's value in the rt_rule_id + * or flt_rule_id field of the ipahal_pkt_status structure. 
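+ * With IPA_RULE_ID_BITS defined as 10 above, that all-1's miss value is 0x3ff.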
+ */ +bool ipahal_is_rule_miss_id(u32 id) +{ + BUILD_BUG_ON(IPA_RULE_ID_BITS < 2); + + return id == (1U << IPA_RULE_ID_BITS) - 1; +} + +/** + * ipahal_rt_generate_empty_img() - Generate empty route table header + * @route_count: Number of table entries + * @mem: DMA memory object representing the header structure + * + * Allocates and fills an "empty" route table header having the given + * number of entries. Each entry in the table contains the DMA address + * of a routing entry. + * + * This function initializes all entries to point at the preallocated + * empty routing entry in system RAM. + * + * Return: 0 if successful, or a negative error code otherwise + */ +int ipahal_rt_generate_empty_img(u32 route_count, struct ipa_dma_mem *mem) +{ + u64 addr; + int i; + + BUILD_BUG_ON(!IPA_HW_TBL_HDR_WIDTH); + + if (ipa_dma_alloc(mem, route_count * IPA_HW_TBL_HDR_WIDTH, GFP_KERNEL)) + return -ENOMEM; + + addr = (u64)ipahal_ctx->empty_fltrt_tbl.phys; + for (i = 0; i < route_count; i++) + put_unaligned(addr, mem->virt + i * IPA_HW_TBL_HDR_WIDTH); + + return 0; +} + +/** + * ipahal_flt_generate_empty_img() - Generate empty filter table header + * @filter_bitmap: Bitmap representing which endpoints support filtering + * @mem: DMA memory object representing the header structure + * + * Allocates and fills an "empty" filter table header based on the + * given filter bitmap. + * + * The first slot in a filter table header is a 64-bit bitmap whose + * set bits define which endpoints support filtering. Following + * this, each set bit in the mask has the DMA address of the filter + * used for the corresponding endpoint. + * + * This function initializes all endpoints that support filtering to + * point at the preallocated empty filter in system RAM. + * + * Note: the (software) bitmap here uses bit 0 to represent + * endpoint 0, bit 1 for endpoint 1, and so on. This is different + * from the hardware (which uses bit 1 to represent filter 0, etc.). + * + * Return: 0 if successful, or a negative error code + */ +int ipahal_flt_generate_empty_img(u64 filter_bitmap, struct ipa_dma_mem *mem) +{ + u32 filter_count = hweight32(filter_bitmap) + 1; + u64 addr; + int i; + + ipa_assert(filter_bitmap); + + if (ipa_dma_alloc(mem, filter_count * IPA_HW_TBL_HDR_WIDTH, GFP_KERNEL)) + return -ENOMEM; + + /* Save the endpoint bitmap in the first slot of the table. + * Convert it from software to hardware representation by + * shifting it left one position. + * XXX Does bit position 0 represent global? At IPA3, global + * XXX configuration is possible but not used. + */ + put_unaligned(filter_bitmap << 1, mem->virt); + + /* Point every entry in the table at the empty filter */ + addr = (u64)ipahal_ctx->empty_fltrt_tbl.phys; + for (i = 1; i < filter_count; i++) + put_unaligned(addr, mem->virt + i * IPA_HW_TBL_HDR_WIDTH); + + return 0; +} + +void ipahal_free_empty_img(struct ipa_dma_mem *mem) +{ + ipa_dma_free(mem); +} diff --git a/drivers/net/ipa/ipahal.h b/drivers/net/ipa/ipahal.h new file mode 100644 index 000000000000..940254940d90 --- /dev/null +++ b/drivers/net/ipa/ipahal.h @@ -0,0 +1,253 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ +#ifndef _IPAHAL_H_ +#define _IPAHAL_H_ + +#include + +#include "ipa_dma.h" + +/* The IPA implements offloaded packet filtering and routing + * capabilities. 
This is managed by programming IPA-resident + * tables of rules that define the processing that should be + * performed by the IPA and the conditions under which they + * should be applied. Aspects of these rules are constrained + * by things like table entry sizes and alignment requirements; + * all of these are in units of bytes. These definitions are + * subject to some constraints: + * - IPA_HW_TBL_WIDTH must be non-zero + * - IPA_HW_TBL_SYSADDR_ALIGN must be a non-zero power of 2 + * - IPA_HW_TBL_HDR_WIDTH must be non-zero + * + * Values could differ for different versions of IPA hardware. + * These values are for v3.5.1, found in the SDM845. + */ +#define IPA_HW_TBL_WIDTH 8 +#define IPA_HW_TBL_SYSADDR_ALIGN 128 +#define IPA_HW_TBL_HDR_WIDTH 8 + +/** + * ipahal_dma_shared_mem_write_pyld() - Write to shared memory command payload + * + * Return a pointer to the payload for a DMA shared memory write immediate + * command, or null if one can't be allocated. Result is dynamically + * allocated, and caller must ensure it gets released by providing it to + * ipahal_destroy_imm_cmd() when it is no longer needed. + * + * Return: Pointer to the immediate command payload, or NULL + */ +void *ipahal_dma_shared_mem_write_pyld(struct ipa_dma_mem *mem, u32 offset); + +/** + * ipahal_hdr_init_local_pyld() - Header initialization command payload + * mem: DMA buffer containing data for initialization + * offset: Where in location IPA local memory to write + * + * Return a pointer to the payload for a header init local immediate + * command, or null if one can't be allocated. Caller must ensure result + * gets released by providing it to ipahal_destroy_imm_cmd(). + * + * Return: Pointer to the immediate command payload, or NULL + */ +void *ipahal_hdr_init_local_pyld(struct ipa_dma_mem *mem, u32 offset); + +/** + * ipahal_ip_v4_routing_init_pyld() - IPv4 routing table initialization payload + * mem: The IPv4 routing table data to be written + * hash_offset: The location in IPA memory for a hashed routing table + * nhash_offset: The location in IPA memory for a non-hashed routing table + * + * Return a pointer to the payload for an IPv4 routing init immediate + * command, or null if one can't be allocated. Caller must ensure result + * gets released by providing it to ipahal_destroy_imm_cmd(). + * + * Return: Pointer to the immediate command payload, or NULL + */ +void *ipahal_ip_v4_routing_init_pyld(struct ipa_dma_mem *mem, + u32 hash_offset, u32 nhash_offset); + +/** + * ipahal_ip_v6_routing_init_pyld() - IPv6 routing table initialization payload + * mem: The IPv6 routing table data to be written + * hash_offset: The location in IPA memory for a hashed routing table + * nhash_offset: The location in IPA memory for a non-hashed routing table + * + * Return a pointer to the payload for an IPv4 routing init immediate + * command, or null if one can't be allocated. Caller must ensure result + * gets released by providing it to ipahal_destroy_imm_cmd(). 
+ * + * Return: Pointer to the immediate command payload, or NULL + */ +void *ipahal_ip_v6_routing_init_pyld(struct ipa_dma_mem *mem, + u32 hash_offset, u32 nhash_offset); + +/** + * ipahal_ip_v4_filter_init_pyld() - IPv4 filter table initialization payload + * mem: The IPv4 filter table data to be written + * hash_offset: The location in IPA memory for a hashed filter table + * nhash_offset: The location in IPA memory for a non-hashed filter table + * + * Return a pointer to the payload for an IPv4 filter init immediate + * command, or null if one can't be allocated. Caller must ensure result + * gets released by providing it to ipahal_destroy_imm_cmd(). + * + * Return: Pointer to the immediate command payload, or NULL + */ +void *ipahal_ip_v4_filter_init_pyld(struct ipa_dma_mem *mem, + u32 hash_offset, u32 nhash_offset); + +/** + * ipahal_ip_v6_filter_init_pyld() - IPv6 filter table initialization payload + * mem: The IPv6 filter table data to be written + * hash_offset: The location in IPA memory for a hashed filter table + * nhash_offset: The location in IPA memory for a non-hashed filter table + * + * Return a pointer to the payload for an IPv4 filter init immediate + * command, or null if one can't be allocated. Caller must ensure result + * gets released by providing it to ipahal_destroy_imm_cmd(). + * + * Return: Pointer to the immediate command payload, or NULL + */ +void *ipahal_ip_v6_filter_init_pyld(struct ipa_dma_mem *mem, + u32 hash_offset, u32 nhash_offset); + +/** + * ipahal_dma_task_32b_addr_pyld() - 32-bit DMA task command payload + * mem: DMA memory involved in the task + * + * Return a pointer to the payload for DMA task 32-bit address immediate + * command, or null if one can't be allocated. Caller must ensure result + * gets released by providing it to ipahal_destroy_imm_cmd(). + */ +void *ipahal_dma_task_32b_addr_pyld(struct ipa_dma_mem *mem); + +/** + * ipahal_payload_free() - Release an allocated immediate command payload + * @payload: Payload to be released + */ +void ipahal_payload_free(void *payload); + +/** + * enum ipahal_pkt_status_opcode - Packet Status Opcode + * @IPAHAL_STATUS_OPCODE_PACKET_2ND_PASS: Packet Status generated as part of + * IPA second processing pass for a packet (i.e. IPA XLAT processing for + * the translated packet). + * + * The values assigned here are assumed by ipa_pkt_status_parse() + * to match values returned in the status_opcode field of a + * ipa_pkt_status_hw structure inserted by the IPA in received + * buffer. + */ +enum ipahal_pkt_status_opcode { + IPAHAL_PKT_STATUS_OPCODE_PACKET = 0x01, + IPAHAL_PKT_STATUS_OPCODE_NEW_FRAG_RULE = 0x02, + IPAHAL_PKT_STATUS_OPCODE_DROPPED_PACKET = 0x04, + IPAHAL_PKT_STATUS_OPCODE_SUSPENDED_PACKET = 0x08, + IPAHAL_PKT_STATUS_OPCODE_LOG = 0x10, + IPAHAL_PKT_STATUS_OPCODE_DCMP = 0x20, + IPAHAL_PKT_STATUS_OPCODE_PACKET_2ND_PASS = 0x40, +}; + +/** + * enum ipahal_pkt_status_exception - Packet Status exception type + * @IPAHAL_PKT_STATUS_EXCEPTION_PACKET_LENGTH: formerly IHL exception. + * + * Note: IPTYPE, PACKET_LENGTH and PACKET_THRESHOLD exceptions means that + * partial / no IP processing took place and corresponding Status Mask + * fields should be ignored. Flt and rt info is not valid. + * + * NOTE:: Any change to this enum, need to change to + * ipahal_pkt_status_exception_to_str array as well. 
+ */ +enum ipahal_pkt_status_exception { + IPAHAL_PKT_STATUS_EXCEPTION_NONE = 0, + IPAHAL_PKT_STATUS_EXCEPTION_DEAGGR, + IPAHAL_PKT_STATUS_EXCEPTION_IPTYPE, + IPAHAL_PKT_STATUS_EXCEPTION_PACKET_LENGTH, + IPAHAL_PKT_STATUS_EXCEPTION_PACKET_THRESHOLD, + IPAHAL_PKT_STATUS_EXCEPTION_FRAG_RULE_MISS, + IPAHAL_PKT_STATUS_EXCEPTION_SW_FILT, + /* NAT and IPv6CT have the same value at HW. + * NAT for IPv4 and IPv6CT for IPv6 exceptions + */ + IPAHAL_PKT_STATUS_EXCEPTION_NAT, + IPAHAL_PKT_STATUS_EXCEPTION_IPV6CT, + IPAHAL_PKT_STATUS_EXCEPTION_MAX, +}; + +/** + * enum ipahal_pkt_status_mask - Packet Status bitmask values of + * the contained flags. This bitmask indicates flags on the properties of + * the packet as well as IPA processing it may had. + * @TAG_VALID: Flag specifying if TAG and TAG info valid? + * @CKSUM_PROCESS: CSUM block processing flag: Was pkt processed by csum block? + * If so, csum trailer exists + */ +enum ipahal_pkt_status_mask { + /* Other values are defined but are not specifically handled yet. */ + IPAHAL_PKT_STATUS_MASK_CKSUM_PROCESS = 0x0100, +}; + +/** + * struct ipahal_pkt_status - IPA status packet abstracted payload. + * @status_opcode: The type of status (Opcode). + * @exception: The first exception that took place. + * In case of exception, endp_src_idx and pkt_len are always valid. + * @status_mask: Bit mask for flags on several properties on the packet + * and processing it may passed at IPA. + * @pkt_len: Pkt pyld len including hdr and retained hdr if used. Does + * not include padding or checksum trailer len. + * @endp_src_idx: Source end point index. + * @endp_dest_idx: Destination end point index. + * Not valid in case of exception + * @metadata: meta data value used by packet + * @rt_miss: Routing miss flag: Was their a routing rule miss? + * + * This structure describes the status packet fields for the following + * status values: IPA_STATUS_PACKET, IPA_STATUS_DROPPED_PACKET, + * IPA_STATUS_SUSPENDED_PACKET. Other status types have different status + * packet structure. Note that the hardware supplies additional status + * information that is currently unused. + */ +struct ipahal_pkt_status { + enum ipahal_pkt_status_opcode status_opcode; + enum ipahal_pkt_status_exception exception; + enum ipahal_pkt_status_mask status_mask; + u32 pkt_len; + u8 endp_src_idx; + u8 endp_dest_idx; + u32 metadata; + bool rt_miss; +}; + +/** + * ipahal_pkt_status_get_size() - Get size of a hardware packet status + */ +u32 ipahal_pkt_status_get_size(void); + +/* ipahal_pkt_status_parse() - Parse packet status payload + * @unparsed_status: Packet status read from hardware + * @status: Buffer to hold parsed status information + */ +void ipahal_pkt_status_parse(const void *unparsed_status, + struct ipahal_pkt_status *status); + +int ipahal_init(void); +void ipahal_exit(void); + +/* Does the given ID represent rule miss? 
*/ +bool ipahal_is_rule_miss_id(u32 id); + +int ipahal_rt_generate_empty_img(u32 route_count, struct ipa_dma_mem *mem); +int ipahal_flt_generate_empty_img(u64 ep_bitmap, struct ipa_dma_mem *mem); + +/** + * ipahal_free_empty_img() - Free empty filter or route image + * @mem: DMA memory containing filter/route data + */ +void ipahal_free_empty_img(struct ipa_dma_mem *mem); + +#endif /* _IPAHAL_H_ */

From patchwork Wed Nov 7 00:32:46 2018
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 150361
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Cc: netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, syadagir@codeaurora.org, mjavid@codeaurora.org, robh+dt@kernel.org, mark.rutland@arm.com
Subject: [RFC PATCH 08/12] soc: qcom: ipa: utility functions
Date: Tue, 6 Nov 2018 18:32:46 -0600
Message-Id: <20181107003250.5832-9-elder@linaro.org>
In-Reply-To: <20181107003250.5832-1-elder@linaro.org>
References: <20181107003250.5832-1-elder@linaro.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

This patch contains "ipa_utils.c", which provides some miscellaneous supporting code. Included are: - Endpoint configuration. The IPA hardware has a fixed number of endpoints (pipes) available.
In some cases the AP and modem must agree on aspects of these. A key example is that they need to agree what particular endpoints are used for (the modem can't send messages to the AP command endpoint, for example). There is a static array that defines these parameters, and this is found in "ipa_utils.c". - Resource group configuration. The IPA has a number of internal resources it uses for its operation. These are configurable, and it is up to the AP to set these configuration values at initialization time. There are some static arrays that define these configuration values. Functions are also defined here to send those values to hardware. - Shared memory. The IPA uses a region of shared memory to hold various data structures. A function ipa_sram_settings_read() fetches the location and size of this shared memory. (The individual regions are currently initialized in "ipa_main.c".) - Endpoint configuration. Each endpoint (or channel) has a number of configurable properties. Functions found in this file are used to configure these properties. - Interconnect handling. The IPA driver depends on the interconnect framework to request that buses it uses be enabled when needed. This is not a complete list, but it covers much of the functionality found in "ipa_utils.c". Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_utils.c | 1035 +++++++++++++++++++++++++++++++++++ 1 file changed, 1035 insertions(+) create mode 100644 drivers/net/ipa/ipa_utils.c -- 2.17.1 diff --git a/drivers/net/ipa/ipa_utils.c b/drivers/net/ipa/ipa_utils.c new file mode 100644 index 000000000000..085b0218779b --- /dev/null +++ b/drivers/net/ipa/ipa_utils.c @@ -0,0 +1,1035 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. 
+ */ + +#include +#include +#include +#include +#include + +#include "ipa_dma.h" +#include "ipa_i.h" +#include "ipahal.h" + +/* Interconnect path bandwidths (each times 1000 bytes per second) */ +#define IPA_MEMORY_AVG 80000 +#define IPA_MEMORY_PEAK 600000 + +#define IPA_IMEM_AVG 80000 +#define IPA_IMEM_PEAK 350000 + +#define IPA_CONFIG_AVG 40000 +#define IPA_CONFIG_PEAK 40000 + +#define IPA_BCR_REG_VAL 0x0000003b + +#define IPA_GSI_DMA_TASK_TIMEOUT 15 /* milliseconds */ + +#define IPA_GSI_CHANNEL_STOP_SLEEP_MIN 1000 /* microseconds */ +#define IPA_GSI_CHANNEL_STOP_SLEEP_MAX 2000 /* microseconds */ + +#define QMB_MASTER_SELECT_DDR 0 + +enum ipa_rsrc_group { + IPA_RSRC_GROUP_LWA_DL, /* currently not used */ + IPA_RSRC_GROUP_UL_DL, + IPA_RSRC_GROUP_MAX, +}; + +enum ipa_rsrc_grp_type_src { + IPA_RSRC_GRP_TYPE_SRC_PKT_CONTEXTS, + IPA_RSRC_GRP_TYPE_SRS_DESCRIPTOR_LISTS, + IPA_RSRC_GRP_TYPE_SRC_DESCRIPTOR_BUFF, + IPA_RSRC_GRP_TYPE_SRC_HPS_DMARS, + IPA_RSRC_GRP_TYPE_SRC_ACK_ENTRIES, +}; + +enum ipa_rsrc_grp_type_dst { + IPA_RSRC_GRP_TYPE_DST_DATA_SECTORS, + IPA_RSRC_GRP_TYPE_DST_DPS_DMARS, +}; + +enum ipa_rsrc_grp_type_rx { + IPA_RSRC_GRP_TYPE_RX_HPS_CMDQ, + IPA_RSRC_GRP_TYPE_RX_MAX +}; + +struct rsrc_min_max { + u32 min; + u32 max; +}; + +/* IPA_HW_v3_5_1 */ +static const struct rsrc_min_max ipa_src_rsrc_grp[][IPA_RSRC_GROUP_MAX] = { + [IPA_RSRC_GRP_TYPE_SRC_PKT_CONTEXTS] = { + [IPA_RSRC_GROUP_LWA_DL] = { .min = 1, .max = 63, }, + [IPA_RSRC_GROUP_UL_DL] = { .min = 1, .max = 63, }, + }, + [IPA_RSRC_GRP_TYPE_SRS_DESCRIPTOR_LISTS] = { + [IPA_RSRC_GROUP_LWA_DL] = { .min = 10, .max = 10, }, + [IPA_RSRC_GROUP_UL_DL] = { .min = 10, .max = 10, }, + }, + [IPA_RSRC_GRP_TYPE_SRC_DESCRIPTOR_BUFF] = { + [IPA_RSRC_GROUP_LWA_DL] = { .min = 12, .max = 12, }, + [IPA_RSRC_GROUP_UL_DL] = { .min = 14, .max = 14, }, + }, + [IPA_RSRC_GRP_TYPE_SRC_HPS_DMARS] = { + [IPA_RSRC_GROUP_LWA_DL] = { .min = 0, .max = 63, }, + [IPA_RSRC_GROUP_UL_DL] = { .min = 0, .max = 63, }, + }, + [IPA_RSRC_GRP_TYPE_SRC_ACK_ENTRIES] = { + [IPA_RSRC_GROUP_LWA_DL] = { .min = 14, .max = 14, }, + [IPA_RSRC_GROUP_UL_DL] = { .min = 20, .max = 20, }, + }, +}; + +/* IPA_HW_v3_5_1 */ +static const struct rsrc_min_max ipa_dst_rsrc_grp[][IPA_RSRC_GROUP_MAX] = { + [IPA_RSRC_GRP_TYPE_DST_DATA_SECTORS] = { + [IPA_RSRC_GROUP_LWA_DL] = { .min = 4, .max = 4, }, + [IPA_RSRC_GROUP_UL_DL] = { .min = 4, .max = 4, }, + }, + [IPA_RSRC_GRP_TYPE_DST_DPS_DMARS] = { + [IPA_RSRC_GROUP_LWA_DL] = { .min = 2, .max = 63, }, + [IPA_RSRC_GROUP_UL_DL] = { .min = 1, .max = 63, }, + }, +}; + +/** + * struct ipa_gsi_ep_config - GSI endpoint configuration. + * @ep_id: IPA endpoint identifier. + * @channel_id: GSI channel number used for this endpoint. + * @tlv_count: The number of TLV (type-length-value) entries for the channel. + * @ee: Execution environment endpoint is associated with. + * + * Each GSI endpoint has a set of configuration parameters defined within + * entries in the ipa_ep_configuration[] array. Its @ep_id field uniquely + * defines the endpoint, and @channel_id defines which data channel (ring + * buffer) is used for the endpoint. 
+ * XXX TLV + * XXX ee is never used in the code + */ +struct ipa_gsi_ep_config { + u32 ep_id; + u32 channel_id; + u32 tlv_count; + u32 ee; +}; + +struct ipa_ep_configuration { + bool support_flt; + enum ipa_seq_type seq_type; + struct ipa_gsi_ep_config ipa_gsi_ep_info; +}; + +/* IPA_HW_v3_5_1 */ +/* clients not included in the list below are considered as invalid */ +static const struct ipa_ep_configuration ipa_ep_configuration[] = { + [IPA_CLIENT_WLAN1_PROD] = { + .support_flt = true, + .seq_type = IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 7, + .channel_id = 1, + .tlv_count = 8, + .ee = IPA_EE_UC, + }, + }, + [IPA_CLIENT_USB_PROD] = { + .support_flt = true, + .seq_type = IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 0, + .channel_id = 0, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_APPS_LAN_PROD] = { + .support_flt = false, + .seq_type = IPA_SEQ_PKT_PROCESS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 8, + .channel_id = 7, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_APPS_WAN_PROD] = { + .support_flt = true, + .seq_type = IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 2, + .channel_id = 3, + .tlv_count = 16, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_APPS_CMD_PROD] = { + .support_flt = false, + .seq_type = IPA_SEQ_DMA_ONLY, + .ipa_gsi_ep_info = { + .ep_id = 5, + .channel_id = 4, + .tlv_count = 20, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_Q6_LAN_PROD] = { + .support_flt = true, + .seq_type = IPA_SEQ_PKT_PROCESS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 3, + .channel_id = 0, + .tlv_count = 16, + .ee = IPA_EE_Q6, + }, + }, + [IPA_CLIENT_Q6_WAN_PROD] = { + .support_flt = true, + .seq_type = IPA_SEQ_PKT_PROCESS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 6, + .channel_id = 4, + .tlv_count = 12, + .ee = IPA_EE_Q6, + }, + }, + [IPA_CLIENT_Q6_CMD_PROD] = { + .support_flt = false, + .seq_type = IPA_SEQ_PKT_PROCESS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 4, + .channel_id = 1, + .tlv_count = 20, + .ee = IPA_EE_Q6, + }, + }, + [IPA_CLIENT_TEST_CONS] = { + .support_flt = true, + .seq_type = IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 14, + .channel_id = 5, + .tlv_count = 8, + .ee = IPA_EE_Q6, + }, + }, + [IPA_CLIENT_TEST1_CONS] = { + .support_flt = true, + .seq_type = IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 15, + .channel_id = 2, + .tlv_count = 8, + .ee = IPA_EE_UC, + }, + }, + /* Only for testing */ + [IPA_CLIENT_TEST_PROD] = { + .support_flt = true, + .seq_type = IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 0, + .channel_id = 0, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_TEST1_PROD] = { + .support_flt = true, + .seq_type = IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 0, + .channel_id = 0, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_TEST2_PROD] = { + .support_flt = true, + .seq_type = IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 2, + .channel_id = 3, + .tlv_count = 16, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_TEST3_PROD] = { + .support_flt = true, + .seq_type = IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 4, + .channel_id = 1, + .tlv_count = 20, + .ee = IPA_EE_Q6, + }, + }, + [IPA_CLIENT_TEST4_PROD] = { + .support_flt = true, + .seq_type = IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .ipa_gsi_ep_info = { + .ep_id = 1, + .channel_id = 0, + 
.tlv_count = 8, + .ee = IPA_EE_UC, + }, + }, + [IPA_CLIENT_WLAN1_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 16, + .channel_id = 3, + .tlv_count = 8, + .ee = IPA_EE_UC, + }, + }, + [IPA_CLIENT_WLAN2_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 18, + .channel_id = 9, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_WLAN3_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 19, + .channel_id = 10, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_USB_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 17, + .channel_id = 8, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_USB_DPL_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 11, + .channel_id = 2, + .tlv_count = 4, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_APPS_LAN_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 9, + .channel_id = 5, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_APPS_WAN_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 10, + .channel_id = 6, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_Q6_LAN_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 13, + .channel_id = 3, + .tlv_count = 8, + .ee = IPA_EE_Q6, + }, + }, + [IPA_CLIENT_Q6_WAN_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 12, + .channel_id = 2, + .tlv_count = 8, + .ee = IPA_EE_Q6, + }, + }, + /* Only for testing */ + [IPA_CLIENT_TEST2_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 18, + .channel_id = 9, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_TEST3_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 19, + .channel_id = 10, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, + [IPA_CLIENT_TEST4_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 11, + .channel_id = 2, + .tlv_count = 4, + .ee = IPA_EE_AP, + }, + }, +/* Dummy consumer (endpoint 31) is used in L2TP rt rule */ + [IPA_CLIENT_DUMMY_CONS] = { + .support_flt = false, + .seq_type = IPA_SEQ_INVALID, + .ipa_gsi_ep_info = { + .ep_id = 31, + .channel_id = 31, + .tlv_count = 8, + .ee = IPA_EE_AP, + }, + }, +}; + +/** ipa_client_ep_id() - provide endpoint mapping + * @client: client type + * + * Return value: endpoint mapping + */ +u32 ipa_client_ep_id(enum ipa_client_type client) +{ + return ipa_ep_configuration[client].ipa_gsi_ep_info.ep_id; +} + +u32 ipa_client_channel_id(enum ipa_client_type client) +{ + return ipa_ep_configuration[client].ipa_gsi_ep_info.channel_id; +} + +u32 ipa_client_tlv_count(enum ipa_client_type client) +{ + return ipa_ep_configuration[client].ipa_gsi_ep_info.tlv_count; +} + +enum ipa_seq_type ipa_endp_seq_type(u32 ep_id) +{ + return ipa_ep_configuration[ipa_ctx->ep[ep_id].client].seq_type; +} + +/** ipa_sram_settings_read() - Read SRAM settings from HW + * + * Returns: None + */ +void ipa_sram_settings_read(void) +{ + struct ipa_reg_shared_mem_size mem_size; + + ipa_read_reg_fields(IPA_SHARED_MEM_SIZE, &mem_size); + + /* reg fields are in 8B units */ + ipa_ctx->smem_offset = mem_size.shared_mem_baddr * 8; + 
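+ /* e.g. a shared_mem_baddr field value of 0x100 means the region starts at byte offset 0x800 */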
ipa_ctx->smem_size = mem_size.shared_mem_size * 8; + + ipa_debug("sram size 0x%x offset 0x%x\n", ipa_ctx->smem_size, + ipa_ctx->smem_offset); +} + +/** ipa_init_hw() - initialize HW */ +void ipa_init_hw(void) +{ + struct ipa_reg_qsb_max_writes max_writes; + struct ipa_reg_qsb_max_reads max_reads; + + /* SDM845 has IPA version 3.5.1 */ + ipa_write_reg(IPA_BCR, IPA_BCR_REG_VAL); + + ipa_reg_qsb_max_writes(&max_writes, 8, 4); + ipa_write_reg_fields(IPA_QSB_MAX_WRITES, &max_writes); + + ipa_reg_qsb_max_reads(&max_reads, 8, 12); + ipa_write_reg_fields(IPA_QSB_MAX_READS, &max_reads); +} + +/** ipa_filter_bitmap_init() - Initialize the bitmap + * that represents the End-points that supports filtering + */ +u32 ipa_filter_bitmap_init(void) +{ + enum ipa_client_type client; + u32 filter_bitmap = 0; + u32 count = 0; + + for (client = 0; client < IPA_CLIENT_MAX ; client++) { + const struct ipa_ep_configuration *ep_config; + + ep_config = &ipa_ep_configuration[client]; + if (!ep_config->support_flt) + continue; + if (++count > IPA_MEM_FLT_COUNT) + return 0; /* Too many filtering endpoints */ + + filter_bitmap |= BIT(ep_config->ipa_gsi_ep_info.ep_id); + } + + return filter_bitmap; +} + +/* In IPAv3 only endpoints 0-3 can be configured to deaggregation */ +bool ipa_endp_aggr_support(u32 ep_id) +{ + return ep_id < 4; +} + +/** ipa_endp_init_hdr_write() + * + * @ep_id: endpoint whose header config register should be written + */ +static void ipa_endp_init_hdr_write(u32 ep_id) +{ + struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id]; + + ipa_write_reg_n_fields(IPA_ENDP_INIT_HDR_N, ep_id, &ep->init_hdr); +} + +/** ipa_endp_init_hdr_ext_write() - write endpoint extended header register + * + * @ep_id: endpoint whose register should be written + */ +static void +ipa_endp_init_hdr_ext_write(u32 ep_id) +{ + struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id]; + + ipa_write_reg_n_fields(IPA_ENDP_INIT_HDR_EXT_N, ep_id, + &ep->hdr_ext); +} + +/** ipa_endp_init_aggr_write() write endpoint aggregation register + * + * @ep_id: endpoint whose aggregation config register should be written + */ +static void ipa_endp_init_aggr_write(u32 ep_id) +{ + struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id]; + + ipa_write_reg_n_fields(IPA_ENDP_INIT_AGGR_N, ep_id, &ep->init_aggr); +} + +/** ipa_endp_init_cfg_write() - write endpoint configuration register + * + * @ep_id: endpoint whose configuration register should be written + */ +static void ipa_endp_init_cfg_write(u32 ep_id) +{ + struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id]; + + ipa_write_reg_n_fields(IPA_ENDP_INIT_CFG_N, ep_id, &ep->init_cfg); +} + +/** ipa_endp_init_mode_write() - write endpoint mode register + * + * @ep_id: endpoint whose register should be written + */ +static void ipa_endp_init_mode_write(u32 ep_id) +{ + struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id]; + + ipa_write_reg_n_fields(IPA_ENDP_INIT_MODE_N, ep_id, + &ep->init_mode); +} + +/** ipa_endp_init_seq_write() - write endpoint sequencer register + * + * @ep_id: endpoint whose register should be written + */ +static void ipa_endp_init_seq_write(u32 ep_id) +{ + struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id]; + + ipa_write_reg_n_fields(IPA_ENDP_INIT_SEQ_N, ep_id, &ep->init_seq); +} + +/** ipa_endp_init_deaggr_write() - write endpoint deaggregation register + * + * @ep_id: endpoint whose register should be written + */ +void ipa_endp_init_deaggr_write(u32 ep_id) +{ + struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id]; + + ipa_write_reg_n_fields(IPA_ENDP_INIT_DEAGGR_N, ep_id, + &ep->init_deaggr); +} + +/** 
ipa_endp_init_hdr_metadata_mask_write() - endpoint metadata mask register + * + * @ep_id: endpoint whose register should be written + */ +static void ipa_endp_init_hdr_metadata_mask_write(u32 ep_id) +{ + struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id]; + + ipa_write_reg_n_fields(IPA_ENDP_INIT_HDR_METADATA_MASK_N, ep_id, + &ep->metadata_mask); +} + +/** ipa_endp_init_hdr_metadata_mask_write() - endpoint metadata mask register + * + * @ep_id: endpoint whose register should be written + */ +static void ipa_endp_status_write(u32 ep_id) +{ + struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id]; + + ipa_write_reg_n_fields(IPA_ENDP_STATUS_N, ep_id, &ep->status); +} + +/** ipa_cfg_ep - IPA end-point configuration + * @ep_id: [in] endpoint id assigned by IPA to client + * @dst: [in] destination client handle (ignored for consumer clients) + * + * This includes nat, IPv6CT, header, mode, aggregation and route settings and + * is a one shot API to configure the IPA end-point fully + * + * Returns: 0 on success, negative on failure + * + * Note: Should not be called from atomic context + */ +void ipa_cfg_ep(u32 ep_id) +{ + ipa_endp_init_hdr_write(ep_id); + ipa_endp_init_hdr_ext_write(ep_id); + + ipa_endp_init_aggr_write(ep_id); + ipa_endp_init_cfg_write(ep_id); + + if (ipa_producer(ipa_ctx->ep[ep_id].client)) { + ipa_endp_init_mode_write(ep_id); + ipa_endp_init_seq_write(ep_id); + ipa_endp_init_deaggr_write(ep_id); + } else { + ipa_endp_init_hdr_metadata_mask_write(ep_id); + } + + ipa_endp_status_write(ep_id); +} + +int ipa_interconnect_init(struct device *dev) +{ + struct icc_path *path; + + path = of_icc_get(dev, "memory"); + if (IS_ERR(path)) + goto err_return; + ipa_ctx->memory_path = path; + + path = of_icc_get(dev, "imem"); + if (IS_ERR(path)) + goto err_memory_path_put; + ipa_ctx->imem_path = path; + + path = of_icc_get(dev, "config"); + if (IS_ERR(path)) + goto err_imem_path_put; + ipa_ctx->config_path = path; + + return 0; + +err_imem_path_put: + icc_put(ipa_ctx->imem_path); + ipa_ctx->imem_path = NULL; +err_memory_path_put: + icc_put(ipa_ctx->memory_path); + ipa_ctx->memory_path = NULL; +err_return: + + return PTR_ERR(path); +} + +void ipa_interconnect_exit(void) +{ + icc_put(ipa_ctx->config_path); + ipa_ctx->config_path = NULL; + + icc_put(ipa_ctx->imem_path); + ipa_ctx->imem_path = NULL; + + icc_put(ipa_ctx->memory_path); + ipa_ctx->memory_path = NULL; +} + +/* Currently we only use bandwidth level, so just "enable" interconnects */ +int ipa_interconnect_enable(void) +{ + int ret; + + ret = icc_set(ipa_ctx->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK); + if (ret) + return ret; + + ret = icc_set(ipa_ctx->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK); + if (ret) + goto err_disable_memory_path; + + ret = icc_set(ipa_ctx->config_path, IPA_CONFIG_AVG, IPA_CONFIG_PEAK); + if (!ret) + return 0; /* Success */ + + (void)icc_set(ipa_ctx->imem_path, 0, 0); +err_disable_memory_path: + (void)icc_set(ipa_ctx->memory_path, 0, 0); + + return ret; +} + +/* To disable an interconnect, we just its bandwidth to 0 */ +int ipa_interconnect_disable(void) +{ + int ret; + + ret = icc_set(ipa_ctx->memory_path, 0, 0); + if (ret) + return ret; + + ret = icc_set(ipa_ctx->imem_path, 0, 0); + if (ret) + goto err_reenable_memory_path; + + ret = icc_set(ipa_ctx->config_path, 0, 0); + if (!ret) + return 0; /* Success */ + + /* Re-enable things in the event of an error */ + (void)icc_set(ipa_ctx->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK); +err_reenable_memory_path: + (void)icc_set(ipa_ctx->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK); 
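+ /* Bandwidth votes have been restored; report the original error */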
+ + return ret; +} + +/** ipa_proxy_clk_unvote() - called to remove IPA clock proxy vote + * + * Return value: none + */ +void ipa_proxy_clk_unvote(void) +{ + if (ipa_ctx->modem_clk_vote_valid) { + ipa_client_remove(); + ipa_ctx->modem_clk_vote_valid = false; + } +} + +/** ipa_proxy_clk_vote() - called to add IPA clock proxy vote + * + * Return value: none + */ +void ipa_proxy_clk_vote(void) +{ + if (!ipa_ctx->modem_clk_vote_valid) { + ipa_client_add(); + ipa_ctx->modem_clk_vote_valid = true; + } +} + +u32 ipa_get_ep_count(void) +{ + return ipa_read_reg(IPA_ENABLED_PIPES); +} + +/** ipa_is_modem_ep()- Checks if endpoint is owned by the modem + * + * @ep_id: endpoint identifier + * Return value: true if owned by modem, false otherwize + */ +bool ipa_is_modem_ep(u32 ep_id) +{ + int client_idx; + + for (client_idx = 0; client_idx < IPA_CLIENT_MAX; client_idx++) { + if (!ipa_modem_consumer(client_idx) && + !ipa_modem_producer(client_idx)) + continue; + if (ipa_client_ep_id(client_idx) == ep_id) + return true; + } + + return false; +} + +static void ipa_src_rsrc_grp_init(enum ipa_rsrc_grp_type_src n) +{ + struct ipa_reg_rsrc_grp_xy_rsrc_type_n limits; + const struct rsrc_min_max *x_limits; + const struct rsrc_min_max *y_limits; + + x_limits = &ipa_src_rsrc_grp[n][IPA_RSRC_GROUP_LWA_DL]; + y_limits = &ipa_src_rsrc_grp[n][IPA_RSRC_GROUP_UL_DL]; + ipa_reg_rsrc_grp_xy_rsrc_type_n(&limits, x_limits->min, x_limits->max, + y_limits->min, y_limits->max); + + ipa_write_reg_n_fields(IPA_SRC_RSRC_GRP_01_RSRC_TYPE_N, n, &limits); +} + +static void ipa_dst_rsrc_grp_init(enum ipa_rsrc_grp_type_dst n) +{ + struct ipa_reg_rsrc_grp_xy_rsrc_type_n limits; + const struct rsrc_min_max *x_limits; + const struct rsrc_min_max *y_limits; + + x_limits = &ipa_dst_rsrc_grp[n][IPA_RSRC_GROUP_LWA_DL]; + y_limits = &ipa_dst_rsrc_grp[n][IPA_RSRC_GROUP_UL_DL]; + ipa_reg_rsrc_grp_xy_rsrc_type_n(&limits, x_limits->min, x_limits->max, + y_limits->min, y_limits->max); + + ipa_write_reg_n_fields(IPA_DST_RSRC_GRP_01_RSRC_TYPE_N, n, &limits); +} + +void ipa_set_resource_groups_min_max_limits(void) +{ + ipa_src_rsrc_grp_init(IPA_RSRC_GRP_TYPE_SRC_PKT_CONTEXTS); + ipa_src_rsrc_grp_init(IPA_RSRC_GRP_TYPE_SRS_DESCRIPTOR_LISTS); + ipa_src_rsrc_grp_init(IPA_RSRC_GRP_TYPE_SRC_DESCRIPTOR_BUFF); + ipa_src_rsrc_grp_init(IPA_RSRC_GRP_TYPE_SRC_HPS_DMARS); + ipa_src_rsrc_grp_init(IPA_RSRC_GRP_TYPE_SRC_ACK_ENTRIES); + + ipa_dst_rsrc_grp_init(IPA_RSRC_GRP_TYPE_DST_DATA_SECTORS); + ipa_dst_rsrc_grp_init(IPA_RSRC_GRP_TYPE_DST_DPS_DMARS); +} + +static void ipa_gsi_poll_after_suspend(struct ipa_ep_context *ep) +{ + ipa_rx_switch_to_poll_mode(ep->sys); +} + +/* Suspend a consumer endpoint */ +static void ipa_ep_cons_suspend(enum ipa_client_type client) +{ + struct ipa_reg_endp_init_ctrl init_ctrl; + u32 ep_id = ipa_client_ep_id(client); + + ipa_reg_endp_init_ctrl(&init_ctrl, true); + ipa_write_reg_n_fields(IPA_ENDP_INIT_CTRL_N, ep_id, &init_ctrl); + + /* Due to a hardware bug, a client suspended with an open + * aggregation frame will not generate a SUSPEND IPA interrupt. + * We work around this by force-closing the aggregation frame, + * then simulating the arrival of such an interrupt. 
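+	 * The force-close and the simulated interrupt are both handled
+	 * by ipa_suspend_active_aggr_wa(), called below.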
+	 */
+	ipa_suspend_active_aggr_wa(ep_id);
+
+	ipa_gsi_poll_after_suspend(&ipa_ctx->ep[ep_id]);
+}
+
+void ipa_ep_suspend_all(void)
+{
+	ipa_ep_cons_suspend(IPA_CLIENT_APPS_WAN_CONS);
+	ipa_ep_cons_suspend(IPA_CLIENT_APPS_LAN_CONS);
+}
+
+/* Resume a suspended consumer endpoint */
+static void ipa_ep_cons_resume(enum ipa_client_type client)
+{
+	struct ipa_reg_endp_init_ctrl init_ctrl;
+	struct ipa_ep_context *ep;
+	u32 ep_id;
+
+	ep_id = ipa_client_ep_id(client);
+	ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_ctrl(&init_ctrl, false);
+	ipa_write_reg_n_fields(IPA_ENDP_INIT_CTRL_N, ep_id, &init_ctrl);
+
+	if (!ipa_ep_polling(ep))
+		gsi_channel_intr_enable(ipa_ctx->gsi, ep->channel_id);
+}
+
+void ipa_ep_resume_all(void)
+{
+	ipa_ep_cons_resume(IPA_CLIENT_APPS_LAN_CONS);
+	ipa_ep_cons_resume(IPA_CLIENT_APPS_WAN_CONS);
+}
+
+/** ipa_cfg_default_route() - configure the IPA default route
+ * @client: client whose endpoint is used as the default route
+ */
+void ipa_cfg_default_route(enum ipa_client_type client)
+{
+	struct ipa_reg_route route;
+
+	ipa_reg_route(&route, ipa_client_ep_id(client));
+	ipa_write_reg_fields(IPA_ROUTE, &route);
+}
+
+/* In certain cases we need to issue a command to reliably clear the
+ * IPA pipeline. Sending a 1-byte DMA task is sufficient, and this
+ * function preallocates a command to do just that. There are
+ * conditions (process context in KILL state) where DMA allocations
+ * can fail, and we need to be able to issue this command to put the
+ * hardware in a known state. By preallocating the command here we
+ * guarantee it can't fail for that reason.
+ */
+int ipa_gsi_dma_task_alloc(void)
+{
+	struct ipa_dma_mem *mem = &ipa_ctx->dma_task_info.mem;
+
+	if (ipa_dma_alloc(mem, IPA_GSI_CHANNEL_STOP_PKT_SIZE, GFP_KERNEL))
+		return -ENOMEM;
+
+	/* IPA_IMM_CMD_DMA_TASK_32B_ADDR */
+	ipa_ctx->dma_task_info.payload = ipahal_dma_task_32b_addr_pyld(mem);
+	if (!ipa_ctx->dma_task_info.payload) {
+		ipa_dma_free(mem);
+
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+void ipa_gsi_dma_task_free(void)
+{
+	struct ipa_dma_mem *mem = &ipa_ctx->dma_task_info.mem;
+
+	ipahal_payload_free(ipa_ctx->dma_task_info.payload);
+	ipa_ctx->dma_task_info.payload = NULL;
+	ipa_dma_free(mem);
+}
+
+/** ipa_gsi_dma_task_inject() - Send DMA_TASK to IPA for GSI stop channel
+ *
+ * Send a 1-byte DMA_TASK to IPA to unblock a GSI channel in STOP_IN_PROG.
+ * Return value: 0 on success, negative otherwise
+ */
+static int ipa_gsi_dma_task_inject(void)
+{
+	struct ipa_desc desc = { };
+
+	desc.type = IPA_IMM_CMD_DESC;
+	desc.len_opcode = IPA_IMM_CMD_DMA_TASK_32B_ADDR;
+	desc.payload = ipa_ctx->dma_task_info.payload;
+
+	return ipa_send_cmd_timeout(&desc, IPA_GSI_DMA_TASK_TIMEOUT);
+}
+
+/** ipa_stop_gsi_channel() - Stop a GSI channel in IPA
+ * @ep_id: Endpoint whose GSI channel should be stopped
+ *
+ * This function implements the sequence to stop a GSI channel
+ * in IPA. This function returns when the channel is in STOP state.
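+ * For a consumer endpoint the stop may need to be retried; between
+ * attempts a 1-byte DMA_TASK immediate command is injected to flush
+ * the IPA pipeline, and we sleep briefly before trying again.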
+ *
+ * Return value: 0 on success, negative otherwise
+ */
+int ipa_stop_gsi_channel(u32 ep_id)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+	int ret;
+	int i;
+
+	if (ipa_producer(ep->client))
+		return gsi_channel_stop(ipa_ctx->gsi, ep->channel_id);
+
+	for (i = 0; i < IPA_GSI_CHANNEL_STOP_MAX_RETRY; i++) {
+		ret = gsi_channel_stop(ipa_ctx->gsi, ep->channel_id);
+		if (ret != -EAGAIN && ret != -ETIMEDOUT)
+			return ret;
+
+		/* Send a 1B packet DMA_TASK to IPA and try again */
+		ret = ipa_gsi_dma_task_inject();
+		if (ret)
+			return ret;
+
+		/* sleep for short period to flush IPA */
+		usleep_range(IPA_GSI_CHANNEL_STOP_SLEEP_MIN,
+			     IPA_GSI_CHANNEL_STOP_SLEEP_MAX);
+	}
+
+	ipa_err("Failed to stop GSI channel with retries\n");
+
+	return -EFAULT;
+}
+
+/** ipa_enable_dcd() - enable dynamic clock division on IPA
+ *
+ * Return value: none
+ */
+void ipa_enable_dcd(void)
+{
+	struct ipa_reg_idle_indication_cfg indication;
+
+	/* recommended values for IPA 3.5 according to IPA HPG */
+	ipa_reg_idle_indication_cfg(&indication, 256, 0);
+	ipa_write_reg_fields(IPA_IDLE_INDICATION_CFG, &indication);
+}
+
+/** ipa_set_flt_tuple_mask() - Sets the flt tuple masking for the given
+ * endpoint. The endpoint must be for the AP (not modem) and support
+ * filtering. Updates the filtering masking values without changing the
+ * rt ones.
+ *
+ * @ep_id: filter endpoint to configure the tuple masking
+ */
+void ipa_set_flt_tuple_mask(u32 ep_id)
+{
+	struct ipa_ep_filter_router_hsh_cfg hsh_cfg;
+
+	ipa_read_reg_n_fields(IPA_ENDP_FILTER_ROUTER_HSH_CFG_N, ep_id,
+			      &hsh_cfg);
+
+	ipa_reg_hash_tuple(&hsh_cfg.flt);
+
+	ipa_write_reg_n_fields(IPA_ENDP_FILTER_ROUTER_HSH_CFG_N, ep_id,
+			       &hsh_cfg);
+}
+
+/** ipa_set_rt_tuple_mask() - Sets the rt tuple masking for the given table.
+ * The table index must be for an AP endpoint (not modem). This
+ * updates the routing masking values without changing the flt ones.
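+ * Like ipa_set_flt_tuple_mask(), this is a read-modify-write of the
+ * IPA_ENDP_FILTER_ROUTER_HSH_CFG_N register for the given index.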
+ *
+ * @tbl_idx: routing table index to configure the tuple masking
+ */
+void ipa_set_rt_tuple_mask(int tbl_idx)
+{
+	struct ipa_ep_filter_router_hsh_cfg hsh_cfg;
+
+	ipa_read_reg_n_fields(IPA_ENDP_FILTER_ROUTER_HSH_CFG_N, tbl_idx,
+			      &hsh_cfg);
+
+	ipa_reg_hash_tuple(&hsh_cfg.rt);
+
+	ipa_write_reg_n_fields(IPA_ENDP_FILTER_ROUTER_HSH_CFG_N, tbl_idx,
+			       &hsh_cfg);
+}
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("IPA HW device driver");

From patchwork Wed Nov 7 00:32:48 2018
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 150363
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
 ilias.apalodimas@linaro.org
Cc: netdev@vger.kernel.org, devicetree@vger.kernel.org,
 linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 syadagir@codeaurora.org, mjavid@codeaurora.org, robh+dt@kernel.org,
 mark.rutland@arm.com
Subject: [RFC PATCH 10/12] soc: qcom: ipa: data path
Date: Tue, 6 Nov 2018 18:32:48 -0600
Message-Id: <20181107003250.5832-11-elder@linaro.org>
In-Reply-To: <20181107003250.5832-1-elder@linaro.org>
References: <20181107003250.5832-1-elder@linaro.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

This patch contains "ipa_dp.c", which includes the bulk of the data
path code. There is an overview in the code of how things operate,
but there are already plans to rework this portion of the driver.
In particular: - Interrupt handling will be replaced with a threaded interrupt handler. Currently handling occurs in a combination of interrupt and workqueue context, and this requires locking and atomic operations for proper synchronization. - Currently, only receive endpoints use NAPI. Transmit completion interrupts are disabled, and are handled in batches by periodically scheduling an interrupting no-op request. The plan is to arrange for transmit requests to generate interrupts, and their completion will be processed with other completions in the NAPI poll function. This will also allow accurate feedback about packet sojourn time to be provided to queue limiting mechanisms. - Not all receive endpoints use NAPI. The plan is for *all* endpoints to use NAPI. And because all endpoints share a common GSI interrupt, a single NAPI structure will used to managing the processing for all completions on all endpoints. - Receive buffers are posted to the hardware by a workqueue function. Instead, the plan is to have this done by the NAPI poll routine. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_dp.c | 1994 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 1994 insertions(+) create mode 100644 drivers/net/ipa/ipa_dp.c -- 2.17.1 diff --git a/drivers/net/ipa/ipa_dp.c b/drivers/net/ipa/ipa_dp.c new file mode 100644 index 000000000000..c16ac74765b8 --- /dev/null +++ b/drivers/net/ipa/ipa_dp.c @@ -0,0 +1,1994 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ + +#include +#include +#include +#include +#include +#include + +#include "ipa_i.h" /* ipa_err() */ +#include "ipahal.h" +#include "ipa_dma.h" + +/** + * DOC: The IPA Data Path + * + * The IPA is used to transmit data between execution environments. + * The data path code uses functions and structures supplied by the + * GSI to interact with the IPA hardware. A packet to be transmitted + * or received is held in a socket buffer. Each has a "wrapper" + * structure associated with it. A GSI transfer request refers to + * the packet wrapper, and when queued to the hardware the packet + * wrapper is added to a list of outstanding requests for an endpoint + * (maintained in the head_desc_list in the endpoint's system context). + * When the GSI transfer completes, a callback function is provided + * the packet wrapper pointer, allowing it to be released after the + * received socket buffer has been passed up the stack, or a buffer + * whose data has been transmitted has been freed. + * + * Producer (PROD) endpoints are used to send data from the AP toward + * the IPA. The common function for sending data on producer endpoints + * is ipa_send(). It takes a system context and an array of IPA + * descriptors as arguments. Each descriptor is given a TX packet + * wrapper, and its content is translated into an equivalent GSI + * transfer element structure after its memory address is mapped for + * DMA. The GSI transfer element array is finally passed to the GSI + * layer using gsi_channel_queue(). + * + * The code provides a "no_intr" feature, allowing endpoints to have + * their transmit completions not produce an interrupt. (This + * behavior is used only for the modem producer.) In this case, a + * no-op request is generated every 200 milliseconds while transmit + * requests are outstanding. The no-op will generate an interrupt + * when it's complete, and its completion implies the completion of + * all transmit requests issued before it. 
The GSI will call + * ipa_gsi_irq_tx_notify_cb() in response to interrupts on a producer + * endpoint. + * + * Receive buffers are passed to consumer (CONS) channels to be + * available to hold incoming data. Arriving data is placed + * in these buffers, leading to events being generated on the event + * ring assciated with a channel. When an interrupt occurs on a + * consumer endpoint, the GSI layer calls ipa_gsi_irq_rx_notify_cb(). + * This causes the endpoint to switch to polling mode. The + * completion of a receive also leads to ipa_replenish_rx_cache() + * being called, to replace the consumed buffer. + * + * Consumer enpoints optionally use NAPI (only the modem consumer, + * WWAN_CONS, does currently). An atomic variable records whether + * the endpoint is in polling mode or not. This is needed because + * switching to polling mode is currently done in a workqueue. Once + * NAPI polling completes, and endpoint switches back to interrupt + * mode. + */ + +/** + * struct ipa_tx_pkt_wrapper - IPA transmit packet wrapper + * @type: type of descriptor + * @sys: Corresponding IPA sys context + * @mem: Memory buffer used by this packet + * @callback: IPA client provided callback + * @user1: Cookie1 for above callback + * @user2: Cookie2 for above callback + * @link: Links for the endpoint's sys->head_desc_list + * @cnt: Number of descriptors in request + * @done_work: Work structure used when complete + */ +struct ipa_tx_pkt_wrapper { + enum ipa_desc_type type; + struct ipa_sys_context *sys; + struct ipa_dma_mem mem; + void (*callback)(void *user1, int user2); + void *user1; + int user2; + struct list_head link; + u32 cnt; + struct work_struct done_work; +}; + +/** struct ipa_rx_pkt_wrapper - IPA Rx packet wrapper + * @link: Links for the endpoint's sys->head_desc_list + * @skb: Socket buffer containing the received packet + * @len: How many bytes are copied into skb's buffer + */ +struct ipa_rx_pkt_wrapper { + struct list_head link; + struct sk_buff *skb; + dma_addr_t dma_addr; +}; + +/** struct ipa_sys_context - IPA GPI endpoint context + * @len: The number of entries in @head_desc_list + * @tx: Details related to AP->IPA endpoints + * @rx: Details related to IPA->AP endpoints + * @ep: Associated endpoint + * @head_desc_list: List of packets + * @spinlock: Lock protecting the descriptor list + * @workqueue: Workqueue used for this endpoint + */ +struct ipa_sys_context { + u32 len; + union { + struct { /* Consumer endpoints only */ + u32 len_pending_xfer; + atomic_t curr_polling_state; + struct delayed_work switch_to_intr_work; /* sys->wq */ + void (*pyld_hdlr)(struct sk_buff *, + struct ipa_sys_context *); + u32 buff_sz; + u32 pool_sz; + struct sk_buff *prev_skb; + unsigned int len_rem; + unsigned int len_pad; /* APPS_LAN only */ + unsigned int len_partial; /* APPS_LAN only */ + bool drop_packet; /* APPS_LAN only */ + + struct work_struct work; /* sys->wq */ + struct delayed_work replenish_work; /* sys->wq */ + } rx; + struct { /* Producer endpoints only */ + /* no_intr/nop is APPS_WAN_PROD only */ + bool no_intr; + atomic_t nop_pending; + struct hrtimer nop_timer; + struct work_struct nop_work; /* sys->wq */ + } tx; + }; + + /* ordering is important - mutable fields go above */ + struct ipa_ep_context *ep; + struct list_head head_desc_list; /* contains len entries */ + spinlock_t spinlock; /* protects head_desc list */ + struct workqueue_struct *wq; + /* ordering is important - other immutable fields go below */ +}; + +/** + * struct ipa_dp - IPA data path information + * 
@tx_pkt_wrapper_cache: Tx packets cache + * @rx_pkt_wrapper_cache: Rx packets cache + */ +struct ipa_dp { + struct kmem_cache *tx_pkt_wrapper_cache; + struct kmem_cache *rx_pkt_wrapper_cache; +}; + +/** + * struct ipa_tag_completion - Reference counted completion object + * @comp: Completion when last reference is dropped + * @cnt: Reference count + */ +struct ipa_tag_completion { + struct completion comp; + atomic_t cnt; +}; + +#define CHANNEL_RESET_AGGR_RETRY_COUNT 3 +#define CHANNEL_RESET_DELAY 1 /* milliseconds */ + +#define IPA_QMAP_HEADER_LENGTH 4 + +#define IPA_WAN_AGGR_PKT_CNT 5 +#define POLLING_INACTIVITY_RX 40 +#define POLLING_MIN_SLEEP_RX 1010 /* microseconds */ +#define POLLING_MAX_SLEEP_RX 1050 /* microseconds */ + +#define IPA_RX_BUFFER_ORDER 1 /* Default RX buffer is 2^1 pages */ +#define IPA_RX_BUFFER_SIZE (1 << (IPA_RX_BUFFER_ORDER + PAGE_SHIFT)) + +/* The amount of RX buffer space consumed by standard skb overhead */ +#define IPA_RX_BUFFER_RESERVED \ + (IPA_RX_BUFFER_SIZE - SKB_MAX_ORDER(NET_SKB_PAD, IPA_RX_BUFFER_ORDER)) + +/* RX buffer space remaining after standard overhead is consumed */ +#define IPA_RX_BUFFER_AVAILABLE(X) ((X) - IPA_RX_BUFFER_RESERVED) + +#define IPA_RX_BUFF_CLIENT_HEADROOM 256 + +#define IPA_SIZE_DL_CSUM_META_TRAILER 8 + +#define IPA_REPL_XFER_THRESH 10 + +/* How long before sending an interrupting no-op to handle TX completions */ +#define IPA_TX_NOP_DELAY_NS (2 * 1000 * 1000) /* 2 msec */ + +static void ipa_rx_switch_to_intr_mode(struct ipa_sys_context *sys); + +static void ipa_replenish_rx_cache(struct ipa_sys_context *sys); +static void ipa_replenish_rx_work_func(struct work_struct *work); +static void ipa_wq_handle_rx(struct work_struct *work); +static void ipa_rx_common(struct ipa_sys_context *sys, u32 size); +static void ipa_cleanup_rx(struct ipa_sys_context *sys); +static int ipa_poll_gsi_pkt(struct ipa_sys_context *sys); + +static void ipa_tx_complete(struct ipa_tx_pkt_wrapper *tx_pkt) +{ + struct device *dev = ipa_ctx->dev; + + /* If DMA memory was mapped, unmap it */ + if (tx_pkt->mem.virt) { + if (tx_pkt->type == IPA_DATA_DESC_SKB_PAGED) + dma_unmap_page(dev, tx_pkt->mem.phys, + tx_pkt->mem.size, DMA_TO_DEVICE); + else + dma_unmap_single(dev, tx_pkt->mem.phys, + tx_pkt->mem.size, DMA_TO_DEVICE); + } + + if (tx_pkt->callback) + tx_pkt->callback(tx_pkt->user1, tx_pkt->user2); + + kmem_cache_free(ipa_ctx->dp->tx_pkt_wrapper_cache, tx_pkt); +} + +static void +ipa_wq_write_done_common(struct ipa_sys_context *sys, + struct ipa_tx_pkt_wrapper *tx_pkt) +{ + struct ipa_tx_pkt_wrapper *next_pkt; + int cnt; + int i; + + cnt = tx_pkt->cnt; + for (i = 0; i < cnt; i++) { + ipa_assert(!list_empty(&sys->head_desc_list)); + + spin_lock_bh(&sys->spinlock); + + next_pkt = list_next_entry(tx_pkt, link); + list_del(&tx_pkt->link); + sys->len--; + + spin_unlock_bh(&sys->spinlock); + + ipa_tx_complete(tx_pkt); + + tx_pkt = next_pkt; + } +} + +/** + * ipa_wq_write_done() - Work function executed when TX completes + * * @done_work: work_struct used by the work queue + */ +static void ipa_wq_write_done(struct work_struct *done_work) +{ + struct ipa_tx_pkt_wrapper *this_pkt; + struct ipa_tx_pkt_wrapper *tx_pkt; + struct ipa_sys_context *sys; + + tx_pkt = container_of(done_work, struct ipa_tx_pkt_wrapper, done_work); + sys = tx_pkt->sys; + spin_lock_bh(&sys->spinlock); + this_pkt = list_first_entry(&sys->head_desc_list, + struct ipa_tx_pkt_wrapper, link); + while (tx_pkt != this_pkt) { + spin_unlock_bh(&sys->spinlock); + ipa_wq_write_done_common(sys, this_pkt); + 
spin_lock_bh(&sys->spinlock); + this_pkt = list_first_entry(&sys->head_desc_list, + struct ipa_tx_pkt_wrapper, link); + } + spin_unlock_bh(&sys->spinlock); + ipa_wq_write_done_common(sys, tx_pkt); +} + +/** + * ipa_rx_poll() - Poll the rx packets from IPA hardware + * @ep_id: Endpoint to poll + * @weight: NAPI poll weight + * + * Return: The number of received packets. + */ +int ipa_rx_poll(u32 ep_id, int weight) +{ + struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id]; + static int total_cnt; + int cnt = 0; + + while (cnt < weight && ipa_ep_polling(ep)) { + int ret; + + ret = ipa_poll_gsi_pkt(ep->sys); + if (ret < 0) + break; + + ipa_rx_common(ep->sys, (u32)ret); + cnt += IPA_WAN_AGGR_PKT_CNT; + total_cnt++; + + /* Force switch back to interrupt mode if no more packets */ + if (!ep->sys->len || total_cnt >= ep->sys->rx.pool_sz) { + total_cnt = 0; + cnt--; + break; + } + } + + if (cnt < weight) { + ep->client_notify(ep->priv, IPA_CLIENT_COMP_NAPI, 0); + ipa_rx_switch_to_intr_mode(ep->sys); + + /* Matching enable is in ipa_gsi_irq_rx_notify_cb() */ + ipa_client_remove(); + } + + return cnt; +} + +/** + * ipa_send_nop() - Send an interrupting no-op request to a producer endpoint. + * @sys: System context for the endpoint + * + * Normally an interrupt is generated upon completion of every transfer + * performed by an endpoint, but a producer endpoint can be configured + * to avoid getting these interrupts. Instead, once a transfer has been + * initiated, a no-op is scheduled to be sent after a short delay. This + * no-op request will interrupt when it is complete, and in handling that + * interrupt, previously-completed transfers will be handled as well. If + * a no-op is already scheduled, another is not initiated (there's only + * one pending at a time). + */ +static bool ipa_send_nop(struct ipa_sys_context *sys) +{ + struct gsi_xfer_elem nop_xfer = { }; + struct ipa_tx_pkt_wrapper *nop_pkt; + u32 channel_id; + + nop_pkt = kmem_cache_zalloc(ipa_ctx->dp->tx_pkt_wrapper_cache, + GFP_KERNEL); + if (!nop_pkt) + return false; + + nop_pkt->type = IPA_DATA_DESC; + /* No-op packet uses no memory for data */ + INIT_WORK(&nop_pkt->done_work, ipa_wq_write_done); + nop_pkt->sys = sys; + nop_pkt->cnt = 1; + + nop_xfer.type = GSI_XFER_ELEM_NOP; + nop_xfer.flags = GSI_XFER_FLAG_EOT; + nop_xfer.user_data = nop_pkt; + + spin_lock_bh(&sys->spinlock); + list_add_tail(&nop_pkt->link, &sys->head_desc_list); + spin_unlock_bh(&sys->spinlock); + + channel_id = sys->ep->channel_id; + if (!gsi_channel_queue(ipa_ctx->gsi, channel_id, 1, &nop_xfer, true)) + return true; /* Success */ + + spin_lock_bh(&sys->spinlock); + list_del(&nop_pkt->link); + spin_unlock_bh(&sys->spinlock); + + kmem_cache_free(ipa_ctx->dp->tx_pkt_wrapper_cache, nop_pkt); + + return false; +} + +/** + * ipa_send_nop_work() - Work function for sending a no-op request + * nop_work: Work structure for the request + * + * Try to send the no-op request. If it fails, arrange to try again. + */ +static void ipa_send_nop_work(struct work_struct *nop_work) +{ + struct ipa_sys_context *sys; + + sys = container_of(nop_work, struct ipa_sys_context, tx.nop_work); + + /* If sending a no-op request fails, schedule another try */ + if (!ipa_send_nop(sys)) + queue_work(sys->wq, nop_work); +} + +/** + * ipa_nop_timer_expiry() - Timer function to schedule a no-op request + * @timer: High-resolution timer structure + * + * The delay before sending the no-op request is implemented by a + * high resolution timer, which will call this in interrupt context. 
+ * Arrange to send the no-op in workqueue context when it expires. + */ +static enum hrtimer_restart ipa_nop_timer_expiry(struct hrtimer *timer) +{ + struct ipa_sys_context *sys; + + sys = container_of(timer, struct ipa_sys_context, tx.nop_timer); + atomic_set(&sys->tx.nop_pending, 0); + queue_work(sys->wq, &sys->tx.nop_work); + + return HRTIMER_NORESTART; +} + +static void ipa_nop_timer_schedule(struct ipa_sys_context *sys) +{ + ktime_t time; + + if (atomic_xchg(&sys->tx.nop_pending, 1)) + return; + + time = ktime_set(0, IPA_TX_NOP_DELAY_NS); + hrtimer_start(&sys->tx.nop_timer, time, HRTIMER_MODE_REL); +} + +/** + * ipa_no_intr_init() - Configure endpoint point for no-op requests + * @prod_ep_id: Endpoint that will use interrupting no-ops + * + * For some producer endpoints we don't interrupt on completions. + * Instead we schedule an interrupting NOP command to be issued on + * the endpoint after a short delay (if one is not already scheduled). + * When the NOP completes it signals all preceding transfers have + * completed also. + */ +void ipa_no_intr_init(u32 prod_ep_id) +{ + struct ipa_ep_context *ep = &ipa_ctx->ep[prod_ep_id]; + + INIT_WORK(&ep->sys->tx.nop_work, ipa_send_nop_work); + atomic_set(&ep->sys->tx.nop_pending, 0); + hrtimer_init(&ep->sys->tx.nop_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + ep->sys->tx.nop_timer.function = ipa_nop_timer_expiry; + ep->sys->tx.no_intr = true; +} + +/** + * ipa_send() - Send descriptors to hardware as a single transaction + * @sys: System context for endpoint + * @num_desc: Number of descriptors + * @desc: Transfer descriptors to send + * + * Return: 0 iff successful, or a negative error code. + */ +static int +ipa_send(struct ipa_sys_context *sys, u32 num_desc, struct ipa_desc *desc) +{ + struct ipa_tx_pkt_wrapper *tx_pkt; + struct ipa_tx_pkt_wrapper *first; + struct ipa_tx_pkt_wrapper *next; + struct gsi_xfer_elem *xfer_elem; + LIST_HEAD(pkt_list); + int ret; + int i; + + ipa_assert(num_desc); + ipa_assert(num_desc <= ipa_client_tlv_count(sys->ep->client)); + + xfer_elem = kcalloc(num_desc, sizeof(*xfer_elem), GFP_ATOMIC); + if (!xfer_elem) + return -ENOMEM; + + /* Within loop, all errors are allocation or DMA mapping */ + ret = -ENOMEM; + first = NULL; + for (i = 0; i < num_desc; i++) { + dma_addr_t phys; + + tx_pkt = kmem_cache_zalloc(ipa_ctx->dp->tx_pkt_wrapper_cache, + GFP_ATOMIC); + if (!tx_pkt) + goto err_unwind; + + if (!first) + first = tx_pkt; + + if (desc[i].type == IPA_DATA_DESC_SKB_PAGED) + phys = skb_frag_dma_map(ipa_ctx->dev, desc[i].payload, + 0, desc[i].len_opcode, + DMA_TO_DEVICE); + else + phys = dma_map_single(ipa_ctx->dev, desc[i].payload, + desc[i].len_opcode, + DMA_TO_DEVICE); + if (dma_mapping_error(ipa_ctx->dev, phys)) { + ipa_err("dma mapping error on descriptor\n"); + kmem_cache_free(ipa_ctx->dp->tx_pkt_wrapper_cache, + tx_pkt); + goto err_unwind; + } + + tx_pkt->type = desc[i].type; + tx_pkt->sys = sys; + tx_pkt->mem.virt = desc[i].payload; + tx_pkt->mem.phys = phys; + tx_pkt->mem.size = desc[i].len_opcode; + tx_pkt->callback = desc[i].callback; + tx_pkt->user1 = desc[i].user1; + tx_pkt->user2 = desc[i].user2; + list_add_tail(&tx_pkt->link, &pkt_list); + + xfer_elem[i].addr = tx_pkt->mem.phys; + if (desc[i].type == IPA_IMM_CMD_DESC) + xfer_elem[i].type = GSI_XFER_ELEM_IMME_CMD; + else + xfer_elem[i].type = GSI_XFER_ELEM_DATA; + xfer_elem[i].len_opcode = desc[i].len_opcode; + if (i < num_desc - 1) + xfer_elem[i].flags = GSI_XFER_FLAG_CHAIN; + } + + /* Fill in extra fields in the first TX packet */ + first->cnt = 
num_desc; + INIT_WORK(&first->done_work, ipa_wq_write_done); + + /* Fill in extra fields in the last transfer element */ + if (!sys->tx.no_intr) { + xfer_elem[num_desc - 1].flags = GSI_XFER_FLAG_EOT; + xfer_elem[num_desc - 1].flags |= GSI_XFER_FLAG_BEI; + } + xfer_elem[num_desc - 1].user_data = first; + + spin_lock_bh(&sys->spinlock); + + list_splice_tail_init(&pkt_list, &sys->head_desc_list); + ret = gsi_channel_queue(ipa_ctx->gsi, sys->ep->channel_id, num_desc, + xfer_elem, true); + if (ret) + list_cut_end(&pkt_list, &sys->head_desc_list, &first->link); + + spin_unlock_bh(&sys->spinlock); + + kfree(xfer_elem); + + if (!ret) { + if (sys->tx.no_intr) + ipa_nop_timer_schedule(sys); + return 0; + } +err_unwind: + list_for_each_entry_safe(tx_pkt, next, &pkt_list, link) { + list_del(&tx_pkt->link); + tx_pkt->callback = NULL; /* Avoid doing the callback */ + ipa_tx_complete(tx_pkt); + } + + return ret; +} + +/** + * ipa_send_cmd_timeout_complete() - Command completion callback + * @user1: Opaque value carried by the command + * @ignored: Second opaque value (ignored) + * + * Schedule a completion to signal that a command is done. Free the + * tag_completion structure if its reference count reaches zero. + */ +static void ipa_send_cmd_timeout_complete(void *user1, int ignored) +{ + struct ipa_tag_completion *comp = user1; + + complete(&comp->comp); + if (!atomic_dec_return(&comp->cnt)) + kfree(comp); +} + +/** + * ipa_send_cmd_timeout() - Send an immediate command with timeout + * @desc: descriptor structure + * @timeout: milliseconds to wait (or 0 to wait indefinitely) + * + * Send an immediate command, and wait for it to complete. If + * timeout is non-zero it indicates the number of milliseconds to + * wait to receive the acknowledgment from the hardware before + * timing out. If 0 is supplied, wait will not time out. + * + * Return: 0 if successful, or a negative error code + */ +int ipa_send_cmd_timeout(struct ipa_desc *desc, u32 timeout) +{ + struct ipa_tag_completion *comp; + unsigned long timeout_jiffies; + struct ipa_ep_context *ep; + int ret; + + comp = kzalloc(sizeof(*comp), GFP_KERNEL); + if (!comp) + return -ENOMEM; + + /* The reference count is decremented both here and in ack + * callback. Whichever reaches 0 frees the structure. 
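+	 * This way the structure can be freed safely whether the command
+	 * completes, times out, or fails to be sent at all.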
+ */ + atomic_set(&comp->cnt, 2); + init_completion(&comp->comp); + + /* Fill in the callback info (the sole descriptor is the last) */ + desc->callback = ipa_send_cmd_timeout_complete; + desc->user1 = comp; + + ep = &ipa_ctx->ep[ipa_client_ep_id(IPA_CLIENT_APPS_CMD_PROD)]; + ret = ipa_send(ep->sys, 1, desc); + if (ret) { + /* Callback won't run; drop reference on its behalf */ + atomic_dec(&comp->cnt); + goto out; + } + + timeout_jiffies = msecs_to_jiffies(timeout); + if (!timeout_jiffies) { + wait_for_completion(&comp->comp); + } else if (!wait_for_completion_timeout(&comp->comp, timeout_jiffies)) { + ret = -ETIMEDOUT; + ipa_err("command timed out\n"); + } +out: + if (!atomic_dec_return(&comp->cnt)) + kfree(comp); + + return ret; +} + +/** + * ipa_handle_rx_core() - Core packet reception handling + * @sys: System context for endpoint receiving packets + * + * Return: The number of packets processed, or a negative error code + */ +static int ipa_handle_rx_core(struct ipa_sys_context *sys) +{ + int cnt; + + /* Stop if the endpoint leaves polling state */ + cnt = 0; + while (ipa_ep_polling(sys->ep)) { + int ret = ipa_poll_gsi_pkt(sys); + + if (ret < 0) + break; + + ipa_rx_common(sys, (u32)ret); + + cnt++; + } + + return cnt; +} + +/** + * ipa_rx_switch_to_intr_mode() - Switch from polling to interrupt mode + * @sys: System context for endpoint switching mode + */ +static void ipa_rx_switch_to_intr_mode(struct ipa_sys_context *sys) +{ + if (!atomic_xchg(&sys->rx.curr_polling_state, 0)) { + ipa_err("already in intr mode\n"); + queue_delayed_work(sys->wq, &sys->rx.switch_to_intr_work, + msecs_to_jiffies(1)); + return; + } + ipa_dec_release_wakelock(); + gsi_channel_intr_enable(ipa_ctx->gsi, sys->ep->channel_id); +} + +void ipa_rx_switch_to_poll_mode(struct ipa_sys_context *sys) +{ + if (atomic_xchg(&sys->rx.curr_polling_state, 1)) + return; + gsi_channel_intr_disable(ipa_ctx->gsi, sys->ep->channel_id); + ipa_inc_acquire_wakelock(); + queue_work(sys->wq, &sys->rx.work); +} + +/** + * ipa_handle_rx() - Handle packet reception. 
+ * @sys: System context for endpoint receiving packets + */ +static void ipa_handle_rx(struct ipa_sys_context *sys) +{ + int inactive_cycles = 0; + int cnt; + + ipa_client_add(); + do { + cnt = ipa_handle_rx_core(sys); + if (cnt == 0) + inactive_cycles++; + else + inactive_cycles = 0; + + usleep_range(POLLING_MIN_SLEEP_RX, POLLING_MAX_SLEEP_RX); + + /* if endpoint is out of buffers there is no point polling for + * completed descs; release the worker so delayed work can + * run in a timely manner + */ + if (sys->len - sys->rx.len_pending_xfer == 0) + break; + + } while (inactive_cycles <= POLLING_INACTIVITY_RX); + + ipa_rx_switch_to_intr_mode(sys); + ipa_client_remove(); +} + +static void ipa_switch_to_intr_rx_work_func(struct work_struct *work) +{ + struct delayed_work *dwork = to_delayed_work(work); + struct ipa_sys_context *sys; + + sys = container_of(dwork, struct ipa_sys_context, + rx.switch_to_intr_work); + + /* For NAPI, interrupt mode is done in ipa_rx_poll context */ + ipa_assert(!sys->ep->napi_enabled); + + ipa_handle_rx(sys); +} + +static struct ipa_sys_context *ipa_ep_sys_create(enum ipa_client_type client) +{ + const unsigned int wq_flags = WQ_MEM_RECLAIM | WQ_UNBOUND; + struct ipa_sys_context *sys; + + /* Caller will zero all "mutable" fields; we fill in the rest */ + sys = kmalloc(sizeof(*sys), GFP_KERNEL); + if (!sys) + return NULL; + + sys->wq = alloc_workqueue("ipawq%u", wq_flags, 1, (u32)client); + if (!sys->wq) { + kfree(sys); + return NULL; + } + + /* Caller assigns sys->ep = ep */ + INIT_LIST_HEAD(&sys->head_desc_list); + spin_lock_init(&sys->spinlock); + + return sys; +} + +/** + * ipa_tx_dp_complete() - Transmit complete callback + * @user1: Caller-supplied pointer value + * @user2: Caller-supplied integer value + * + * Calls the endpoint's client_notify function if it exists; + * otherwise just frees the socket buffer (supplied in user1). + */ +static void ipa_tx_dp_complete(void *user1, int user2) +{ + struct sk_buff *skb = user1; + int ep_id = user2; + + if (ipa_ctx->ep[ep_id].client_notify) { + unsigned long data; + void *priv; + + priv = ipa_ctx->ep[ep_id].priv; + data = (unsigned long)skb; + ipa_ctx->ep[ep_id].client_notify(priv, IPA_WRITE_DONE, data); + } else { + dev_kfree_skb_any(skb); + } +} + +/** + * ipa_tx_dp() - Transmit a socket buffer for APPS_WAN_PROD + * @client: IPA client that is sending packets (WAN producer) + * @skb: The socket buffer to send + * + * Returns: 0 if successful, or a negative error code + */ +int ipa_tx_dp(enum ipa_client_type client, struct sk_buff *skb) +{ + struct ipa_desc _desc = { }; /* Used for common case */ + struct ipa_desc *desc; + u32 tlv_count; + int data_idx; + u32 nr_frags; + u32 ep_id; + int ret; + u32 f; + + if (!skb->len) + return -EINVAL; + + ep_id = ipa_client_ep_id(client); + + /* Make sure source endpoint's TLV FIFO has enough entries to + * hold the linear portion of the skb and all its frags. + * If not, see if we can linearize it before giving up. + */ + nr_frags = skb_shinfo(skb)->nr_frags; + tlv_count = ipa_client_tlv_count(client); + if (1 + nr_frags > tlv_count) { + if (skb_linearize(skb)) + return -ENOMEM; + nr_frags = 0; + } + if (nr_frags) { + desc = kcalloc(1 + nr_frags, sizeof(*desc), GFP_ATOMIC); + if (!desc) + return -ENOMEM; + } else { + desc = &_desc; /* Default, linear case */ + } + + /* Fill in the IPA request descriptors--one for the linear + * data in the skb, one each for each of its fragments. 
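+	 * The callback that frees the skb is attached only to the last
+	 * descriptor, so the buffer is released once the whole set of
+	 * transfers has completed.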
+ */ + data_idx = 0; + desc[data_idx].payload = skb->data; + desc[data_idx].len_opcode = skb_headlen(skb); + desc[data_idx].type = IPA_DATA_DESC_SKB; + for (f = 0; f < nr_frags; f++) { + data_idx++; + desc[data_idx].payload = &skb_shinfo(skb)->frags[f]; + desc[data_idx].type = IPA_DATA_DESC_SKB_PAGED; + desc[data_idx].len_opcode = + skb_frag_size(desc[data_idx].payload); + } + + /* Have the skb be freed after the last descriptor completes. */ + desc[data_idx].callback = ipa_tx_dp_complete; + desc[data_idx].user1 = skb; + desc[data_idx].user2 = ep_id; + + ret = ipa_send(ipa_ctx->ep[ep_id].sys, data_idx + 1, desc); + + if (nr_frags) + kfree(desc); + + return ret; +} + +static void ipa_wq_handle_rx(struct work_struct *work) +{ + struct ipa_sys_context *sys; + + sys = container_of(work, struct ipa_sys_context, rx.work); + + if (sys->ep->napi_enabled) { + ipa_client_add(); + sys->ep->client_notify(sys->ep->priv, IPA_CLIENT_START_POLL, 0); + } else { + ipa_handle_rx(sys); + } +} + +static int +queue_rx_cache(struct ipa_sys_context *sys, struct ipa_rx_pkt_wrapper *rx_pkt) +{ + struct gsi_xfer_elem gsi_xfer_elem; + bool ring_doorbell; + int ret; + + /* Don't bother zeroing this; we fill all fields */ + gsi_xfer_elem.addr = rx_pkt->dma_addr; + gsi_xfer_elem.len_opcode = sys->rx.buff_sz; + gsi_xfer_elem.flags = GSI_XFER_FLAG_EOT; + gsi_xfer_elem.flags |= GSI_XFER_FLAG_EOB; + gsi_xfer_elem.type = GSI_XFER_ELEM_DATA; + gsi_xfer_elem.user_data = rx_pkt; + + /* Doorbell is expensive; only ring it when a batch is queued */ + ring_doorbell = sys->rx.len_pending_xfer++ >= IPA_REPL_XFER_THRESH; + + ret = gsi_channel_queue(ipa_ctx->gsi, sys->ep->channel_id, + 1, &gsi_xfer_elem, ring_doorbell); + if (ret) + return ret; + + if (ring_doorbell) + sys->rx.len_pending_xfer = 0; + + return 0; +} + +/** + * ipa_replenish_rx_cache() - Replenish the Rx packets cache. + * @sys: System context for IPA->AP endpoint + * + * Allocate RX packet wrapper structures with maximal socket buffers + * for an endpoint. These are supplied to the hardware, which fills + * them with incoming data. 
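+ *
+ * Buffers are queued to the hardware in batches: queue_rx_cache() only
+ * rings the channel doorbell once at least IPA_REPL_XFER_THRESH
+ * transfers are pending.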
+ */ +static void ipa_replenish_rx_cache(struct ipa_sys_context *sys) +{ + struct ipa_rx_pkt_wrapper *rx_pkt; + struct device *dev = ipa_ctx->dev; + u32 rx_len_cached = sys->len; + + while (rx_len_cached < sys->rx.pool_sz) { + gfp_t flag = GFP_NOWAIT | __GFP_NOWARN; + void *ptr; + int ret; + + rx_pkt = kmem_cache_zalloc(ipa_ctx->dp->rx_pkt_wrapper_cache, + flag); + if (!rx_pkt) + goto fail_kmem_cache_alloc; + + INIT_LIST_HEAD(&rx_pkt->link); + + rx_pkt->skb = __dev_alloc_skb(sys->rx.buff_sz, flag); + if (!rx_pkt->skb) { + ipa_err("failed to alloc skb\n"); + goto fail_skb_alloc; + } + ptr = skb_put(rx_pkt->skb, sys->rx.buff_sz); + rx_pkt->dma_addr = dma_map_single(dev, ptr, sys->rx.buff_sz, + DMA_FROM_DEVICE); + if (dma_mapping_error(dev, rx_pkt->dma_addr)) { + ipa_err("dma_map_single failure %p for %p\n", + (void *)rx_pkt->dma_addr, ptr); + goto fail_dma_mapping; + } + + list_add_tail(&rx_pkt->link, &sys->head_desc_list); + rx_len_cached = ++sys->len; + + ret = queue_rx_cache(sys, rx_pkt); + if (ret) + goto fail_provide_rx_buffer; + } + + return; + +fail_provide_rx_buffer: + list_del(&rx_pkt->link); + rx_len_cached = --sys->len; + dma_unmap_single(dev, rx_pkt->dma_addr, sys->rx.buff_sz, + DMA_FROM_DEVICE); +fail_dma_mapping: + dev_kfree_skb_any(rx_pkt->skb); +fail_skb_alloc: + kmem_cache_free(ipa_ctx->dp->rx_pkt_wrapper_cache, rx_pkt); +fail_kmem_cache_alloc: + if (rx_len_cached - sys->rx.len_pending_xfer == 0) + queue_delayed_work(sys->wq, &sys->rx.replenish_work, + msecs_to_jiffies(1)); +} + +static void ipa_replenish_rx_work_func(struct work_struct *work) +{ + struct delayed_work *dwork = to_delayed_work(work); + struct ipa_sys_context *sys; + + sys = container_of(dwork, struct ipa_sys_context, rx.replenish_work); + ipa_client_add(); + ipa_replenish_rx_cache(sys); + ipa_client_remove(); +} + +/** ipa_cleanup_rx() - release RX queue resources */ +static void ipa_cleanup_rx(struct ipa_sys_context *sys) +{ + struct ipa_rx_pkt_wrapper *rx_pkt; + struct ipa_rx_pkt_wrapper *r; + + list_for_each_entry_safe(rx_pkt, r, &sys->head_desc_list, link) { + list_del(&rx_pkt->link); + dma_unmap_single(ipa_ctx->dev, rx_pkt->dma_addr, + sys->rx.buff_sz, DMA_FROM_DEVICE); + dev_kfree_skb_any(rx_pkt->skb); + kmem_cache_free(ipa_ctx->dp->rx_pkt_wrapper_cache, rx_pkt); + } +} + +static struct sk_buff *ipa_skb_copy_for_client(struct sk_buff *skb, int len) +{ + struct sk_buff *skb2; + + skb2 = __dev_alloc_skb(len + IPA_RX_BUFF_CLIENT_HEADROOM, GFP_KERNEL); + if (likely(skb2)) { + /* Set the data pointer */ + skb_reserve(skb2, IPA_RX_BUFF_CLIENT_HEADROOM); + memcpy(skb2->data, skb->data, len); + skb2->len = len; + skb_set_tail_pointer(skb2, len); + } + + return skb2; +} + +static struct sk_buff *ipa_join_prev_skb(struct sk_buff *prev_skb, + struct sk_buff *skb, unsigned int len) +{ + struct sk_buff *skb2; + + skb2 = skb_copy_expand(prev_skb, 0, len, GFP_KERNEL); + if (likely(skb2)) + memcpy(skb_put(skb2, len), skb->data, len); + else + ipa_err("copy expand failed\n"); + dev_kfree_skb_any(prev_skb); + + return skb2; +} + +static bool ipa_status_opcode_supported(enum ipahal_pkt_status_opcode opcode) +{ + return opcode == IPAHAL_PKT_STATUS_OPCODE_PACKET || + opcode == IPAHAL_PKT_STATUS_OPCODE_DROPPED_PACKET || + opcode == IPAHAL_PKT_STATUS_OPCODE_SUSPENDED_PACKET || + opcode == IPAHAL_PKT_STATUS_OPCODE_PACKET_2ND_PASS; +} + +static void +ipa_lan_rx_pyld_hdlr(struct sk_buff *skb, struct ipa_sys_context *sys) +{ + struct ipahal_pkt_status status; + struct sk_buff *skb2; + unsigned long unused; + unsigned int align; + 
unsigned int used; + unsigned char *buf; + u32 pkt_status_sz; + int pad_len_byte; + u32 ep_id; + int len; + int len2; + + pkt_status_sz = ipahal_pkt_status_get_size(); + used = *(unsigned int *)skb->cb; + align = ALIGN(used, 32); + unused = IPA_RX_BUFFER_SIZE - used; + + ipa_assert(skb->len); + + if (sys->rx.len_partial) { + buf = skb_push(skb, sys->rx.len_partial); + memcpy(buf, sys->rx.prev_skb->data, sys->rx.len_partial); + sys->rx.len_partial = 0; + dev_kfree_skb_any(sys->rx.prev_skb); + sys->rx.prev_skb = NULL; + goto begin; + } + + /* this endpoint has TX comp (status only) + mux-ed LAN RX data + * (status+data) + */ + if (sys->rx.len_rem) { + if (sys->rx.len_rem <= skb->len) { + if (sys->rx.prev_skb) { + skb2 = skb_copy_expand(sys->rx.prev_skb, 0, + sys->rx.len_rem, + GFP_KERNEL); + if (likely(skb2)) { + memcpy(skb_put(skb2, sys->rx.len_rem), + skb->data, sys->rx.len_rem); + skb_trim(skb2, + skb2->len - sys->rx.len_pad); + skb2->truesize = skb2->len + + sizeof(struct sk_buff); + if (sys->rx.drop_packet) + dev_kfree_skb_any(skb2); + else + sys->ep->client_notify( + sys->ep->priv, + IPA_RECEIVE, + (unsigned long)(skb2)); + } else { + ipa_err("copy expand failed\n"); + } + dev_kfree_skb_any(sys->rx.prev_skb); + } + skb_pull(skb, sys->rx.len_rem); + sys->rx.prev_skb = NULL; + sys->rx.len_rem = 0; + sys->rx.len_pad = 0; + } else { + if (sys->rx.prev_skb) { + skb2 = ipa_join_prev_skb(sys->rx.prev_skb, skb, + skb->len); + dev_kfree_skb_any(sys->rx.prev_skb); + sys->rx.prev_skb = skb2; + } + sys->rx.len_rem -= skb->len; + return; + } + } + +begin: + while (skb->len) { + sys->rx.drop_packet = false; + + if (skb->len < pkt_status_sz) { + WARN_ON(sys->rx.prev_skb); + sys->rx.prev_skb = skb_copy(skb, GFP_KERNEL); + sys->rx.len_partial = skb->len; + return; + } + + ipahal_pkt_status_parse(skb->data, &status); + + if (!ipa_status_opcode_supported(status.status_opcode)) { + ipa_err("unsupported opcode(%d)\n", + status.status_opcode); + skb_pull(skb, pkt_status_sz); + continue; + } + + if (status.pkt_len == 0) { + skb_pull(skb, pkt_status_sz); + continue; + } + + if (status.endp_dest_idx == (sys->ep - ipa_ctx->ep)) { + /* RX data */ + ep_id = status.endp_src_idx; + + /* A packet which is received back to the AP after + * there was no route match. 
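+			 * Such a packet is dropped below (route miss with
+			 * no exception reported).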
+ */ + + if (status.exception == + IPAHAL_PKT_STATUS_EXCEPTION_NONE && + status.rt_miss) + sys->rx.drop_packet = true; + if (skb->len == pkt_status_sz && + status.exception == + IPAHAL_PKT_STATUS_EXCEPTION_NONE) { + WARN_ON(sys->rx.prev_skb); + sys->rx.prev_skb = skb_copy(skb, GFP_KERNEL); + sys->rx.len_partial = skb->len; + return; + } + + pad_len_byte = ((status.pkt_len + 3) & ~3) - + status.pkt_len; + + len = status.pkt_len + pad_len_byte + + IPA_SIZE_DL_CSUM_META_TRAILER; + + if (status.exception == + IPAHAL_PKT_STATUS_EXCEPTION_DEAGGR) { + sys->rx.drop_packet = true; + } + + len2 = min(status.pkt_len + pkt_status_sz, skb->len); + skb2 = ipa_skb_copy_for_client(skb, len2); + if (likely(skb2)) { + if (skb->len < len + pkt_status_sz) { + sys->rx.prev_skb = skb2; + sys->rx.len_rem = len - skb->len + + pkt_status_sz; + sys->rx.len_pad = pad_len_byte; + skb_pull(skb, skb->len); + } else { + skb_trim(skb2, status.pkt_len + + pkt_status_sz); + if (sys->rx.drop_packet) { + dev_kfree_skb_any(skb2); + } else { + skb2->truesize = + skb2->len + + sizeof(struct sk_buff) + + (ALIGN(len + + pkt_status_sz, 32) * + unused / align); + sys->ep->client_notify( + sys->ep->priv, + IPA_RECEIVE, + (unsigned long)(skb2)); + } + skb_pull(skb, len + pkt_status_sz); + } + } else { + ipa_err("fail to alloc skb\n"); + if (skb->len < len) { + sys->rx.prev_skb = NULL; + sys->rx.len_rem = len - skb->len + + pkt_status_sz; + sys->rx.len_pad = pad_len_byte; + skb_pull(skb, skb->len); + } else { + skb_pull(skb, len + pkt_status_sz); + } + } + } else { + skb_pull(skb, pkt_status_sz); + } + } +} + +static void +ipa_wan_rx_handle_splt_pyld(struct sk_buff *skb, struct ipa_sys_context *sys) +{ + struct sk_buff *skb2; + + if (sys->rx.len_rem <= skb->len) { + if (sys->rx.prev_skb) { + skb2 = ipa_join_prev_skb(sys->rx.prev_skb, skb, + sys->rx.len_rem); + if (likely(skb2)) { + skb_pull(skb2, ipahal_pkt_status_get_size()); + skb2->truesize = skb2->len + + sizeof(struct sk_buff); + sys->ep->client_notify(sys->ep->priv, + IPA_RECEIVE, + (unsigned long)skb2); + } + } + skb_pull(skb, sys->rx.len_rem); + sys->rx.prev_skb = NULL; + sys->rx.len_rem = 0; + } else { + if (sys->rx.prev_skb) { + skb2 = ipa_join_prev_skb(sys->rx.prev_skb, skb, + skb->len); + sys->rx.prev_skb = skb2; + } + sys->rx.len_rem -= skb->len; + skb_pull(skb, skb->len); + } +} + +static void +ipa_wan_rx_pyld_hdlr(struct sk_buff *skb, struct ipa_sys_context *sys) +{ + struct ipahal_pkt_status status; + unsigned char *skb_data; + struct sk_buff *skb2; + u16 pkt_len_with_pad; + unsigned long unused; + unsigned int align; + unsigned int used; + u32 pkt_status_sz; + int frame_len; + u32 qmap_hdr; + int checksum; + int ep_id; + + used = *(unsigned int *)skb->cb; + align = ALIGN(used, 32); + unused = IPA_RX_BUFFER_SIZE - used; + + ipa_assert(skb->len); + + if (ipa_ctx->ipa_client_apps_wan_cons_agg_gro) { + sys->ep->client_notify(sys->ep->priv, IPA_RECEIVE, + (unsigned long)(skb)); + return; + } + + /* payload splits across 2 buff or more, + * take the start of the payload from rx.prev_skb + */ + if (sys->rx.len_rem) + ipa_wan_rx_handle_splt_pyld(skb, sys); + + pkt_status_sz = ipahal_pkt_status_get_size(); + while (skb->len) { + u32 status_mask; + + if (skb->len < pkt_status_sz) { + ipa_err("status straddles buffer\n"); + WARN_ON(1); + goto bail; + } + ipahal_pkt_status_parse(skb->data, &status); + skb_data = skb->data; + + if (!ipa_status_opcode_supported(status.status_opcode) || + status.status_opcode == + IPAHAL_PKT_STATUS_OPCODE_SUSPENDED_PACKET) { + ipa_err("unsupported 
opcode(%d)\n", + status.status_opcode); + skb_pull(skb, pkt_status_sz); + continue; + } + + if (status.endp_dest_idx >= ipa_ctx->ep_count || + status.endp_src_idx >= ipa_ctx->ep_count || + status.pkt_len > IPA_GENERIC_AGGR_BYTE_LIMIT) { + ipa_err("status fields invalid\n"); + WARN_ON(1); + goto bail; + } + if (status.pkt_len == 0) { + skb_pull(skb, pkt_status_sz); + continue; + } + ep_id = ipa_client_ep_id(IPA_CLIENT_APPS_WAN_CONS); + if (status.endp_dest_idx != ep_id) { + ipa_err("expected endp_dest_idx %d received %d\n", + ep_id, status.endp_dest_idx); + WARN_ON(1); + goto bail; + } + /* RX data */ + if (skb->len == pkt_status_sz) { + ipa_err("Ins header in next buffer\n"); + WARN_ON(1); + goto bail; + } + qmap_hdr = *(u32 *)(skb_data + pkt_status_sz); + + /* Take the pkt_len_with_pad from the last 2 bytes of the QMAP + * header + */ + /*QMAP is BE: convert the pkt_len field from BE to LE*/ + pkt_len_with_pad = ntohs((qmap_hdr >> 16) & 0xffff); + /*get the CHECKSUM_PROCESS bit*/ + status_mask = status.status_mask; + checksum = status_mask & IPAHAL_PKT_STATUS_MASK_CKSUM_PROCESS; + + frame_len = pkt_status_sz + IPA_QMAP_HEADER_LENGTH + + pkt_len_with_pad; + if (checksum) + frame_len += IPA_DL_CHECKSUM_LENGTH; + + skb2 = skb_clone(skb, GFP_ATOMIC); + if (likely(skb2)) { + /* the len of actual data is smaller than expected + * payload split across 2 buff + */ + if (skb->len < frame_len) { + sys->rx.prev_skb = skb2; + sys->rx.len_rem = frame_len - skb->len; + skb_pull(skb, skb->len); + } else { + skb_trim(skb2, frame_len); + skb_pull(skb2, pkt_status_sz); + skb2->truesize = skb2->len + + sizeof(struct sk_buff) + + (ALIGN(frame_len, 32) * + unused / align); + sys->ep->client_notify(sys->ep->priv, + IPA_RECEIVE, + (unsigned long)(skb2)); + skb_pull(skb, frame_len); + } + } else { + ipa_err("fail to clone\n"); + if (skb->len < frame_len) { + sys->rx.prev_skb = NULL; + sys->rx.len_rem = frame_len - skb->len; + skb_pull(skb, skb->len); + } else { + skb_pull(skb, frame_len); + } + } + } +bail: + dev_kfree_skb_any(skb); +} + +void ipa_lan_rx_cb(void *priv, enum ipa_dp_evt_type evt, unsigned long data) +{ + struct sk_buff *rx_skb = (struct sk_buff *)data; + struct ipahal_pkt_status status; + struct ipa_ep_context *ep; + u32 pkt_status_size; + u32 metadata; + u32 ep_id; + + pkt_status_size = ipahal_pkt_status_get_size(); + + ipa_assert(rx_skb->len >= pkt_status_size); + + ipahal_pkt_status_parse(rx_skb->data, &status); + ep_id = status.endp_src_idx; + metadata = status.metadata; + ep = &ipa_ctx->ep[ep_id]; + if (ep_id >= ipa_ctx->ep_count || !ep->allocated || + !ep->client_notify) { + ipa_err("drop endpoint=%u allocated=%s client_notify=%p\n", + ep_id, ep->allocated ? 
"true" : "false", + ep->client_notify); + dev_kfree_skb_any(rx_skb); + return; + } + + /* Consume the status packet, and if no exception, the header */ + skb_pull(rx_skb, pkt_status_size); + if (status.exception == IPAHAL_PKT_STATUS_EXCEPTION_NONE) + skb_pull(rx_skb, IPA_LAN_RX_HEADER_LENGTH); + + /* Metadata Info + * ------------------------------------------ + * | 3 | 2 | 1 | 0 | + * | fw_desc | vdev_id | qmap mux id | Resv | + * ------------------------------------------ + */ + *(u16 *)rx_skb->cb = ((metadata >> 16) & 0xffff); + + ep->client_notify(ep->priv, IPA_RECEIVE, (unsigned long)rx_skb); +} + +static void ipa_rx_common(struct ipa_sys_context *sys, u32 size) +{ + struct ipa_rx_pkt_wrapper *rx_pkt; + struct sk_buff *rx_skb; + + ipa_assert(!list_empty(&sys->head_desc_list)); + + spin_lock_bh(&sys->spinlock); + + rx_pkt = list_first_entry(&sys->head_desc_list, + struct ipa_rx_pkt_wrapper, link); + list_del(&rx_pkt->link); + sys->len--; + + spin_unlock_bh(&sys->spinlock); + + rx_skb = rx_pkt->skb; + dma_unmap_single(ipa_ctx->dev, rx_pkt->dma_addr, sys->rx.buff_sz, + DMA_FROM_DEVICE); + + skb_trim(rx_skb, size); + + *(unsigned int *)rx_skb->cb = rx_skb->len; + rx_skb->truesize = size + sizeof(struct sk_buff); + + sys->rx.pyld_hdlr(rx_skb, sys); + kmem_cache_free(ipa_ctx->dp->rx_pkt_wrapper_cache, rx_pkt); + ipa_replenish_rx_cache(sys); +} + +/** + * ipa_aggr_byte_limit_buf_size() + * @byte_limit: Desired limit (in bytes) for aggregation + * + * Compute the buffer size required to support a requested aggregation + * byte limit. Aggregration will close when *more* than the configured + * number of bytes have been added to an aggregation frame. Our + * buffers therefore need to to be big enough to receive one complete + * packet once the configured byte limit has been consumed. + * + * An incoming packet can have as much as IPA_MTU of data in it, but + * the buffer also needs to be large enough to accomodate the standard + * socket buffer overhead (NET_SKB_PAD of headroom, plus an implied + * skb_shared_info structure at the end). + * + * So we compute the required buffer size by adding the standard + * socket buffer overhead and MTU to the requested size. We round + * that down to a power of 2 in an effort to avoid fragmentation due + * to unaligned buffer sizes. + * + * After accounting for all of this, we return the number of bytes + * of buffer space the IPA hardware will know is available to hold + * received data (without any overhead). + * + * Return: The computes size of buffer space available + */ +u32 ipa_aggr_byte_limit_buf_size(u32 byte_limit) +{ + /* Account for one additional packet, including overhead */ + byte_limit += IPA_RX_BUFFER_RESERVED; + byte_limit += IPA_MTU; + + /* Convert this size to a nearby power-of-2. We choose one + * that's *less than* the limit we seek--so we start by + * subracting 1. The highest set bit in that is used to + * compute the power of 2. + * + * XXX Why is this *less than* and not possibly equal? + */ + byte_limit = 1 << __fls(byte_limit - 1); + + /* Given that size, figure out how much buffer space that + * leaves us for received data. 
+	 */
+	return IPA_RX_BUFFER_AVAILABLE(byte_limit);
+}
+
+void ipa_gsi_irq_tx_notify_cb(void *xfer_data)
+{
+	struct ipa_tx_pkt_wrapper *tx_pkt = xfer_data;
+
+	queue_work(tx_pkt->sys->wq, &tx_pkt->done_work);
+}
+
+void ipa_gsi_irq_rx_notify_cb(void *chan_data, u16 count)
+{
+	struct ipa_sys_context *sys = chan_data;
+
+	sys->ep->bytes_xfered_valid = true;
+	sys->ep->bytes_xfered = count;
+
+	ipa_rx_switch_to_poll_mode(sys);
+}
+
+static int ipa_gsi_setup_channel(struct ipa_ep_context *ep, u32 channel_count,
+				 u32 evt_ring_mult)
+{
+	u32 channel_id = ipa_client_channel_id(ep->client);
+	u32 tlv_count = ipa_client_tlv_count(ep->client);
+	bool from_ipa = ipa_consumer(ep->client);
+	bool moderation;
+	bool priority;
+	int ret;
+
+	priority = ep->client == IPA_CLIENT_APPS_CMD_PROD;
+	moderation = !ep->sys->tx.no_intr;
+
+	ret = gsi_channel_alloc(ipa_ctx->gsi, channel_id, channel_count,
+				from_ipa, priority, evt_ring_mult, moderation,
+				ep->sys);
+	if (ret)
+		return ret;
+	ep->channel_id = channel_id;
+
+	gsi_channel_scratch_write(ipa_ctx->gsi, ep->channel_id, tlv_count);
+
+	ret = gsi_channel_start(ipa_ctx->gsi, ep->channel_id);
+	if (ret)
+		gsi_channel_free(ipa_ctx->gsi, ep->channel_id);
+
+	return ret;
+}
+
+void ipa_endp_init_hdr_cons(u32 ep_id, u32 header_size,
+			    u32 metadata_offset, u32 length_offset)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_hdr_cons(&ep->init_hdr, header_size, metadata_offset,
+				   length_offset);
+}
+
+void ipa_endp_init_hdr_prod(u32 ep_id, u32 header_size,
+			    u32 metadata_offset, u32 length_offset)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_hdr_prod(&ep->init_hdr, header_size, metadata_offset,
+				   length_offset);
+}
+
+void
+ipa_endp_init_hdr_ext_cons(u32 ep_id, u32 pad_align, bool pad_included)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_hdr_ext_cons(&ep->hdr_ext, pad_align, pad_included);
+}
+
+void ipa_endp_init_hdr_ext_prod(u32 ep_id, u32 pad_align)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_hdr_ext_prod(&ep->hdr_ext, pad_align);
+}
+
+void
+ipa_endp_init_aggr_cons(u32 ep_id, u32 size, u32 count, bool close_on_eof)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_aggr_cons(&ep->init_aggr, size, count, close_on_eof);
+}
+
+void ipa_endp_init_aggr_prod(u32 ep_id, enum ipa_aggr_en aggr_en,
+			     enum ipa_aggr_type aggr_type)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_aggr_prod(&ep->init_aggr, aggr_en, aggr_type);
+}
+
+void ipa_endp_init_cfg_cons(u32 ep_id, enum ipa_cs_offload_en offload_type)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_cfg_cons(&ep->init_cfg, offload_type);
+}
+
+void ipa_endp_init_cfg_prod(u32 ep_id, enum ipa_cs_offload_en offload_type,
+			    u32 metadata_offset)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_cfg_prod(&ep->init_cfg, offload_type,
+				   metadata_offset);
+}
+
+void ipa_endp_init_hdr_metadata_mask_cons(u32 ep_id, u32 mask)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_hdr_metadata_mask_cons(&ep->metadata_mask, mask);
+}
+
+void ipa_endp_init_hdr_metadata_mask_prod(u32 ep_id)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_hdr_metadata_mask_prod(&ep->metadata_mask);
+}
+
+void ipa_endp_status_cons(u32 ep_id, bool enable)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_status_cons(&ep->status, enable);
+}
+
+void
+ipa_endp_status_prod(u32 ep_id, bool enable,
+		     enum ipa_client_type status_client)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+	u32 status_ep_id;
+
+	status_ep_id = ipa_client_ep_id(status_client);
+
+	ipa_reg_endp_status_prod(&ep->status, enable, status_ep_id);
+}
+
+/* Note that the mode setting is not valid for consumer endpoints */
+void ipa_endp_init_mode_cons(u32 ep_id)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_mode_cons(&ep->init_mode);
+}
+
+void ipa_endp_init_mode_prod(u32 ep_id, enum ipa_mode mode,
+			     enum ipa_client_type dst_client)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+	u32 dst_ep_id;
+
+	dst_ep_id = ipa_client_ep_id(dst_client);
+
+	ipa_reg_endp_init_mode_prod(&ep->init_mode, mode, dst_ep_id);
+}
+
+/* XXX The sequencer setting seems not to be valid for consumer endpoints */
+void ipa_endp_init_seq_cons(u32 ep_id)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_seq_cons(&ep->init_seq);
+}
+
+void ipa_endp_init_seq_prod(u32 ep_id)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+	u32 seq_type;
+
+	seq_type = (u32)ipa_endp_seq_type(ep_id);
+
+	ipa_reg_endp_init_seq_prod(&ep->init_seq, seq_type);
+}
+
+/* XXX The deaggr setting seems not to be valid for consumer endpoints */
+void ipa_endp_init_deaggr_cons(u32 ep_id)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_deaggr_cons(&ep->init_deaggr);
+}
+
+void ipa_endp_init_deaggr_prod(u32 ep_id)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_reg_endp_init_deaggr_prod(&ep->init_deaggr);
+}
+
+int ipa_ep_alloc(enum ipa_client_type client)
+{
+	u32 ep_id = ipa_client_ep_id(client);
+	struct ipa_sys_context *sys;
+	struct ipa_ep_context *ep;
+
+	ep = &ipa_ctx->ep[ep_id];
+
+	ipa_assert(!ep->allocated);
+
+	/* Reuse the endpoint's sys pointer if it is initialized */
+	sys = ep->sys;
+	if (!sys) {
+		sys = ipa_ep_sys_create(client);
+		if (!sys)
+			return -ENOMEM;
+		sys->ep = ep;
+	}
+
+	/* Zero the "mutable" part of the system context */
+	memset(sys, 0, offsetof(struct ipa_sys_context, ep));
+
+	/* Initialize the endpoint context */
+	memset(ep, 0, sizeof(*ep));
+	ep->sys = sys;
+	ep->client = client;
+	ep->allocated = true;
+
+	return ep_id;
+}
+
+void ipa_ep_free(u32 ep_id)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+
+	ipa_assert(ep->allocated);
+
+	ep->allocated = false;
+}
+
+/**
+ * ipa_ep_setup() - Set up an IPA endpoint
+ * @ep_id: Endpoint to set up
+ * @channel_count: Number of transfer elements in the channel
+ * @evt_ring_mult: Used to determine number of elements in event ring
+ * @rx_buffer_size: Receive buffer size to use (or 0 for TX endpoints)
+ * @client_notify: Notify function to call on completion
+ * @priv: Value supplied to the notify function
+ *
+ * Return: 0 if successful, or a negative error code
+ */
+int ipa_ep_setup(u32 ep_id, u32 channel_count, u32 evt_ring_mult,
+		 u32 rx_buffer_size,
+		 void (*client_notify)(void *priv, enum ipa_dp_evt_type type,
+				       unsigned long data),
+		 void *priv)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+	int ret;
+
+	if (ipa_consumer(ep->client)) {
+		atomic_set(&ep->sys->rx.curr_polling_state, 0);
+		INIT_DELAYED_WORK(&ep->sys->rx.switch_to_intr_work,
+				  ipa_switch_to_intr_rx_work_func);
+		if (ep->client == IPA_CLIENT_APPS_LAN_CONS)
+			ep->sys->rx.pyld_hdlr = ipa_lan_rx_pyld_hdlr;
+		else
+			ep->sys->rx.pyld_hdlr = ipa_wan_rx_pyld_hdlr;
+		ep->sys->rx.buff_sz = rx_buffer_size;
+		ep->sys->rx.pool_sz = IPA_GENERIC_RX_POOL_SZ;
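+		/* Editorial note (not in the original patch): consumer
+		 * (RX) endpoints also get a work item for handling
+		 * receive completions (ipa_wq_handle_rx) and a delayed
+		 * work item for replenishing the receive buffer pool
+		 * (ipa_replenish_rx_work_func), both initialized just
+		 * below.
+		 */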
+		INIT_WORK(&ep->sys->rx.work, ipa_wq_handle_rx);
+		INIT_DELAYED_WORK(&ep->sys->rx.replenish_work,
+				  ipa_replenish_rx_work_func);
+	}
+
+	ep->client_notify = client_notify;
+	ep->priv = priv;
+	ep->napi_enabled = ep->client == IPA_CLIENT_APPS_WAN_CONS;
+
+	ipa_client_add();
+
+	ipa_cfg_ep(ep_id);
+
+	ret = ipa_gsi_setup_channel(ep, channel_count, evt_ring_mult);
+	if (ret)
+		goto err_client_remove;
+
+	if (ipa_consumer(ep->client))
+		ipa_replenish_rx_cache(ep->sys);
+err_client_remove:
+	ipa_client_remove();
+
+	return ret;
+}
+
+/**
+ * ipa_channel_reset_aggr() - Reset a channel with aggregation active
+ * @ep_id: Endpoint on which the reset is performed
+ *
+ * If aggregation is active on a channel when a reset is performed,
+ * a special sequence of actions must be taken.  This is a workaround
+ * for a hardware limitation.
+ *
+ * Return: 0 if successful, or a negative error code.
+ */
+static int ipa_channel_reset_aggr(u32 ep_id)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+	struct ipa_reg_aggr_force_close force_close;
+	struct ipa_reg_endp_init_ctrl init_ctrl;
+	struct gsi_xfer_elem xfer_elem = { };
+	struct ipa_dma_mem dma_byte;
+	int aggr_active_bitmap = 0;
+	bool ep_suspended = false;
+	int ret;
+	int i;
+
+	ipa_reg_aggr_force_close(&force_close, BIT(ep_id));
+	ipa_write_reg_fields(IPA_AGGR_FORCE_CLOSE, &force_close);
+
+	/* Reset channel */
+	ret = gsi_channel_reset(ipa_ctx->gsi, ep->channel_id);
+	if (ret)
+		return ret;
+
+	/* Turn off the doorbell engine.  We're going to poll until
+	 * we know aggregation isn't active.
+	 */
+	gsi_channel_config(ipa_ctx->gsi, ep->channel_id, false);
+
+	ipa_read_reg_n_fields(IPA_ENDP_INIT_CTRL_N, ep_id, &init_ctrl);
+	if (init_ctrl.endp_suspend) {
+		ep_suspended = true;
+		ipa_reg_endp_init_ctrl(&init_ctrl, false);
+		ipa_write_reg_n_fields(IPA_ENDP_INIT_CTRL_N, ep_id, &init_ctrl);
+	}
+
+	/* Start the channel and put a one-byte descriptor on it */
+	ret = gsi_channel_start(ipa_ctx->gsi, ep->channel_id);
+	if (ret)
+		goto out_suspend_again;
+
+	if (ipa_dma_alloc(&dma_byte, 1, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto err_stop_channel;
+	}
+
+	xfer_elem.addr = dma_byte.phys;
+	xfer_elem.len_opcode = 1;	/* = dma_byte.size; */
+	xfer_elem.flags = GSI_XFER_FLAG_EOT;
+	xfer_elem.type = GSI_XFER_ELEM_DATA;
+
+	ret = gsi_channel_queue(ipa_ctx->gsi, ep->channel_id, 1, &xfer_elem,
+				true);
+	if (ret)
+		goto err_dma_free;
+
+	/* Wait for the aggregation frame to be closed */
+	for (i = 0; i < CHANNEL_RESET_AGGR_RETRY_COUNT; i++) {
+		aggr_active_bitmap = ipa_read_reg(IPA_STATE_AGGR_ACTIVE);
+		if (!(aggr_active_bitmap & BIT(ep_id)))
+			break;
+		msleep(CHANNEL_RESET_DELAY);
+	}
+	ipa_bug_on(aggr_active_bitmap & BIT(ep_id));
+
+	ipa_dma_free(&dma_byte);
+
+	ret = ipa_stop_gsi_channel(ep_id);
+	if (ret)
+		goto out_suspend_again;
+
+	/* Reset the channel.  If successful we need to sleep for 1
+	 * msec to complete the GSI channel reset sequence.  Either
+	 * way we finish by suspending the channel again (if necessary)
+	 * and re-enabling its doorbell engine.
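+	 *
+	 * (Editorial summary of the workaround above: force-close the
+	 * aggregation frame, reset the channel, disable its doorbell,
+	 * clear any suspend, restart the channel, queue a one-byte
+	 * transfer, poll IPA_STATE_AGGR_ACTIVE until the endpoint's
+	 * bit clears, stop the channel, then reset it once more.)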
+	 */
+	ret = gsi_channel_reset(ipa_ctx->gsi, ep->channel_id);
+	if (!ret)
+		msleep(CHANNEL_RESET_DELAY);
+	goto out_suspend_again;
+
+err_dma_free:
+	ipa_dma_free(&dma_byte);
+err_stop_channel:
+	ipa_stop_gsi_channel(ep_id);
+out_suspend_again:
+	if (ep_suspended) {
+		ipa_reg_endp_init_ctrl(&init_ctrl, true);
+		ipa_write_reg_n_fields(IPA_ENDP_INIT_CTRL_N, ep_id, &init_ctrl);
+	}
+	/* Turn on the doorbell engine again */
+	gsi_channel_config(ipa_ctx->gsi, ep->channel_id, true);
+
+	return ret;
+}
+
+static void ipa_reset_gsi_channel(u32 ep_id)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+	u32 aggr_active_bitmap = 0;
+
+	/* For consumer endpoints, a hardware limitation prevents us
+	 * from issuing a channel reset if aggregation is active.
+	 * Check for this case, and if detected, perform a special
+	 * reset sequence.  Otherwise just do a "normal" reset.
+	 */
+	if (ipa_consumer(ep->client))
+		aggr_active_bitmap = ipa_read_reg(IPA_STATE_AGGR_ACTIVE);
+
+	if (aggr_active_bitmap & BIT(ep_id)) {
+		ipa_bug_on(ipa_channel_reset_aggr(ep_id));
+	} else {
+		/* In case the reset follows stop, need to wait 1 msec */
+		msleep(CHANNEL_RESET_DELAY);
+		ipa_bug_on(gsi_channel_reset(ipa_ctx->gsi, ep->channel_id));
+	}
+}
+
+/**
+ * ipa_ep_teardown() - Tear down an endpoint
+ * @ep_id: The endpoint to tear down
+ */
+void ipa_ep_teardown(u32 ep_id)
+{
+	struct ipa_ep_context *ep = &ipa_ctx->ep[ep_id];
+	int empty;
+	int ret;
+	int i;
+
+	if (ep->napi_enabled) {
+		do {
+			usleep_range(95, 105);
+		} while (ipa_ep_polling(ep));
+	}
+
+	if (ipa_producer(ep->client)) {
+		do {
+			spin_lock_bh(&ep->sys->spinlock);
+			empty = list_empty(&ep->sys->head_desc_list);
+			spin_unlock_bh(&ep->sys->spinlock);
+			if (!empty)
+				usleep_range(95, 105);
+			else
+				break;
+		} while (1);
+	}
+
+	if (ipa_consumer(ep->client))
+		cancel_delayed_work_sync(&ep->sys->rx.replenish_work);
+	flush_workqueue(ep->sys->wq);
+	/* channel stop might fail on timeout if IPA is busy */
+	for (i = 0; i < IPA_GSI_CHANNEL_STOP_MAX_RETRY; i++) {
+		ret = ipa_stop_gsi_channel(ep_id);
+		if (!ret)
+			break;
+		ipa_bug_on(ret != -EAGAIN && ret != -ETIMEDOUT);
+	}
+
+	ipa_reset_gsi_channel(ep_id);
+	gsi_channel_free(ipa_ctx->gsi, ep->channel_id);
+
+	if (ipa_consumer(ep->client))
+		ipa_cleanup_rx(ep->sys);
+
+	ipa_ep_free(ep_id);
+}
+
+static int ipa_poll_gsi_pkt(struct ipa_sys_context *sys)
+{
+	if (sys->ep->bytes_xfered_valid) {
+		sys->ep->bytes_xfered_valid = false;
+
+		return (int)sys->ep->bytes_xfered;
+	}
+
+	return gsi_channel_poll(ipa_ctx->gsi, sys->ep->channel_id);
+}
+
+bool ipa_ep_polling(struct ipa_ep_context *ep)
+{
+	ipa_assert(ipa_consumer(ep->client));
+
+	return !!atomic_read(&ep->sys->rx.curr_polling_state);
+}
+
+struct ipa_dp *ipa_dp_init(void)
+{
+	struct kmem_cache *cache;
+	struct ipa_dp *dp;
+
+	dp = kzalloc(sizeof(*dp), GFP_KERNEL);
+	if (!dp)
+		return NULL;
+
+	cache = kmem_cache_create("IPA_TX_PKT_WRAPPER",
+				  sizeof(struct ipa_tx_pkt_wrapper),
+				  0, 0, NULL);
+	if (!cache) {
+		kfree(dp);
+		return NULL;
+	}
+	dp->tx_pkt_wrapper_cache = cache;
+
+	cache = kmem_cache_create("IPA_RX_PKT_WRAPPER",
+				  sizeof(struct ipa_rx_pkt_wrapper),
+				  0, 0, NULL);
+	if (!cache) {
+		kmem_cache_destroy(dp->tx_pkt_wrapper_cache);
+		kfree(dp);
+		return NULL;
+	}
+	dp->rx_pkt_wrapper_cache = cache;
+
+	return dp;
+}
+
+void ipa_dp_exit(struct ipa_dp *dp)
+{
+	kmem_cache_destroy(dp->rx_pkt_wrapper_cache);
+	kmem_cache_destroy(dp->tx_pkt_wrapper_cache);
+	kfree(dp);
+}
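+
+/* Editorial sketch (not part of the original patch): one plausible
+ * pairing of the data path lifecycle calls defined above, assuming a
+ * caller that already knows its client type, channel sizing, and
+ * notify callback; names other than the functions defined in this
+ * file are hypothetical:
+ *
+ *	struct ipa_dp *dp = ipa_dp_init();
+ *	int ep_id = ipa_ep_alloc(IPA_CLIENT_APPS_LAN_CONS);
+ *
+ *	ret = ipa_ep_setup(ep_id, channel_count, evt_ring_mult,
+ *			   rx_buffer_size, client_notify, priv);
+ *	...
+ *	ipa_ep_teardown(ep_id);	(frees the endpoint via ipa_ep_free())
+ *	ipa_dp_exit(dp);
+ */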