From patchwork Wed Nov 7 00:32:40 2018
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 150355
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
	ilias.apalodimas@linaro.org
Cc: netdev@vger.kernel.org, devicetree@vger.kernel.org,
	linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	syadagir@codeaurora.org, mjavid@codeaurora.org, robh+dt@kernel.org,
	mark.rutland@arm.com
Subject: [RFC PATCH 02/12] soc: qcom: ipa: DMA helpers
Date: Tue, 6 Nov 2018 18:32:40 -0600
Message-Id: <20181107003250.5832-3-elder@linaro.org>
In-Reply-To: <20181107003250.5832-1-elder@linaro.org>
References: <20181107003250.5832-1-elder@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

This patch includes code implementing the IPA DMA module, which defines
a structure to represent a DMA allocation for the IPA device.  It's used
throughout the IPA code.
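The interface is small: ipa_dma_init() records the IPA device and checks
that DMA allocations will satisfy the required alignment, after which
ipa_dma_alloc() and ipa_dma_free() manage coherent buffers described by
struct ipa_dma_mem, and ipa_dma_phys_to_virt() converts an address the
hardware reports back into a pointer within such a buffer.  A minimal
usage sketch follows; it is illustrative only (not part of the patch),
and the device pointer, alignment, buffer size, and offset are assumed
values:

	struct ipa_dma_mem mem;
	void *virt;
	int ret;

	ret = ipa_dma_init(dev, 8);	/* require 8-byte aligned DMA */
	if (ret)
		return ret;

	ret = ipa_dma_alloc(&mem, PAGE_SIZE, GFP_KERNEL);
	if (ret)
		return ret;

	/* Map a hardware-reported DMA address back to a virtual pointer */
	virt = ipa_dma_phys_to_virt(&mem, mem.phys + 16);

	ipa_dma_free(&mem);
	ipa_dma_exit();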
Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_dma.c | 61 +++++++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_dma.h | 61 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 122 insertions(+) create mode 100644 drivers/net/ipa/ipa_dma.c create mode 100644 drivers/net/ipa/ipa_dma.h -- 2.17.1 diff --git a/drivers/net/ipa/ipa_dma.c b/drivers/net/ipa/ipa_dma.c new file mode 100644 index 000000000000..dfde59e5072a --- /dev/null +++ b/drivers/net/ipa/ipa_dma.c @@ -0,0 +1,61 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ + +#include +#include +#include +#include + +#include "ipa_dma.h" + +static struct device *ipa_dma_dev; + +int ipa_dma_init(struct device *dev, u32 align) +{ + int ret; + + /* Ensure DMA addresses will have the alignment we require */ + if (dma_get_cache_alignment() % align) + return -ENOTSUPP; + + ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)); + if (!ret) + ipa_dma_dev = dev; + + return ret; +} + +void ipa_dma_exit(void) +{ + ipa_dma_dev = NULL; +} + +int ipa_dma_alloc(struct ipa_dma_mem *mem, size_t size, gfp_t gfp) +{ + dma_addr_t phys; + void *virt; + + virt = dma_zalloc_coherent(ipa_dma_dev, size, &phys, gfp); + if (!virt) + return -ENOMEM; + + mem->virt = virt; + mem->phys = phys; + mem->size = size; + + return 0; +} + +void ipa_dma_free(struct ipa_dma_mem *mem) +{ + dma_free_coherent(ipa_dma_dev, mem->size, mem->virt, mem->phys); + memset(mem, 0, sizeof(*mem)); +} + +void *ipa_dma_phys_to_virt(struct ipa_dma_mem *mem, dma_addr_t phys) +{ + return mem->virt + (phys - mem->phys); +} diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h new file mode 100644 index 000000000000..e211dbd9d4ec --- /dev/null +++ b/drivers/net/ipa/ipa_dma.h @@ -0,0 +1,61 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ +#ifndef _IPA_DMA_H_ +#define _IPA_DMA_H_ + +#include +#include + +/** + * struct ipa_dma_mem - IPA allocated DMA memory descriptor + * @virt: host virtual base address of allocated DMA memory + * @phys: bus physical base address of DMA memory + * @size: size (bytes) of DMA memory + */ +struct ipa_dma_mem { + void *virt; + dma_addr_t phys; + size_t size; +}; + +/** + * ipa_dma_init() - Initialize IPA DMA system. + * @dev: IPA device structure + * @align: Hardware required alignment for DMA memory + * + * Returns: 0 if successful, or a negative error code. + */ +int ipa_dma_init(struct device *dev, u32 align); + +/** + * ipa_dma_exit() - shut down/clean up IPA DMA system + */ +void ipa_dma_exit(void); + +/** + * ipa_dma_alloc() - allocate a DMA buffer, describe it in mem struct + * @mem: Memory structure to fill with allocation information. + * @size: Size of DMA buffer to allocate. + * @gfp: Allocation mode. 
+ */
+int ipa_dma_alloc(struct ipa_dma_mem *mem, size_t size, gfp_t gfp);
+
+/**
+ * ipa_dma_free() - free a previously-allocated DMA buffer
+ * @mem: Information about DMA allocation to free
+ */
+void ipa_dma_free(struct ipa_dma_mem *mem);
+
+/**
+ * ipa_dma_phys_to_virt() - return the virtual equivalent of a DMA address
+ * @mem: DMA allocation information
+ * @phys: Physical address to convert
+ *
+ * Return: Virtual address corresponding to the given physical address
+ */
+void *ipa_dma_phys_to_virt(struct ipa_dma_mem *mem, dma_addr_t phys);
+
+#endif /* !_IPA_DMA_H_ */

From patchwork Wed Nov 7 00:32:41 2018
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 150356
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
	ilias.apalodimas@linaro.org
Cc: netdev@vger.kernel.org, devicetree@vger.kernel.org,
	linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	syadagir@codeaurora.org, mjavid@codeaurora.org, robh+dt@kernel.org,
	mark.rutland@arm.com
Subject: [RFC PATCH 03/12] soc: qcom: ipa: generic software interface
Date: Tue, 6 Nov 2018 18:32:41 -0600
Message-Id: <20181107003250.5832-4-elder@linaro.org>
In-Reply-To: <20181107003250.5832-1-elder@linaro.org>
References: <20181107003250.5832-1-elder@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

This patch contains the code supporting the Generic Software Interface
(GSI) used by the IPA.
Although the GSI is an integral part of the IPA, it provides a well-defined layer between the AP subsystem (or, for that matter, the modem) and the IPA core. The GSI code presents an abstract interface through which commands and data transfers can be queued to be implemented on a channel. A hardware independent gsi_xfer_elem structure describes a single transfer, and an array of these can be queued on a channel. The information in the gsi_xfer_elem is converted by the GSI layer into the specific layout required by the hardware. A channel has an associated event ring, through which completion of channel commands can be signaled. GSI channel commands are completed in order, and may optionally generate an interrupt on completion. Signed-off-by: Alex Elder --- drivers/net/ipa/gsi.c | 1685 +++++++++++++++++++++++++++++++++++++ drivers/net/ipa/gsi.h | 195 +++++ drivers/net/ipa/gsi_reg.h | 563 +++++++++++++ 3 files changed, 2443 insertions(+) create mode 100644 drivers/net/ipa/gsi.c create mode 100644 drivers/net/ipa/gsi.h create mode 100644 drivers/net/ipa/gsi_reg.h -- 2.17.1 diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c new file mode 100644 index 000000000000..348ee1fc1bf5 --- /dev/null +++ b/drivers/net/ipa/gsi.c @@ -0,0 +1,1685 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "gsi.h" +#include "gsi_reg.h" +#include "ipa_dma.h" +#include "ipa_i.h" /* ipa_err() */ + +/** + * DOC: The Role of GSI in IPA Operation + * + * The generic software interface (GSI) is an integral component of + * the IPA, providing a well-defined layer between the AP subsystem + * (or, for that matter, the modem) and the IPA core:: + * + * ---------- ------------- --------- + * | | |G| |G| | | + * | APSS |===|S| IPA |S|===| Modem | + * | | |I| |I| | | + * ---------- ------------- --------- + * + * In the above diagram, the APSS and Modem represent "execution + * environments" (EEs), which are independent operating environments + * that use the IPA for data transfer. + * + * Each EE uses a set of unidirectional GSI "channels," which allow + * transfer of data to or from the IPA. A channel is implemented as a + * ring buffer, with a DRAM-resident array of "transfer elements" (TREs) + * available to describe transfers to or from other EEs through the IPA. + * A transfer element can also contain an immediate command, requesting + * the IPA perform actions other than data transfer. + * + * Each transfer element refers to a block of data--also located DRAM. + * After writing one or more TREs to a channel, the writer (either the + * IPA or an EE) writes a doorbell register to inform the receiving side + * how many elements have been written. Writing to a doorbell register + * triggers an interrupt on the receiver. + * + * Each channel has a GSI "event ring" associated with it. An event + * ring is implemented very much like a channel ring, but is always + * directed from the IPA to an EE. The IPA notifies an EE (such as + * the AP) about channel events by adding an entry to the event ring + * associated with the channel; when it writes the event ring's + * doorbell register the EE will be interrupted. + * + * A transfer element has a set of flags. 
One flag indicates whether + * the completion of the transfer operation generates a channel event. + * Another flag allows transfer elements to be chained together, + * forming a single logical transaction. These flags are used to + * control whether and when interrupts are generated to signal + * completion of a channel transfer. + * + * Elements in channel and event rings are completed (or consumed) + * strictly in order. Completion of one entry implies the completion + * of all preceding entries. A single completion interrupt can + * therefore be used to communicate the completion of many transfers. + */ + +#define GSI_RING_ELEMENT_SIZE 16 /* bytes (channel or event ring) */ + +#define GSI_CHAN_MAX 14 +#define GSI_EVT_RING_MAX 10 + +/* Delay period if interrupt moderation is in effect */ +#define IPA_GSI_EVT_RING_INT_MODT (32 * 1) /* 1ms under 32KHz clock */ + +#define GSI_CMD_TIMEOUT msecs_to_jiffies(5 * MSEC_PER_SEC) + +#define GSI_MHI_ER_START 10 /* First reserved event number */ +#define GSI_MHI_ER_END 16 /* Last reserved event number */ + +#define GSI_RESET_WA_MIN_SLEEP 1000 /* microseconds */ +#define GSI_RESET_WA_MAX_SLEEP 2000 /* microseconds */ + +#define GSI_MAX_PREFETCH 0 /* 0 means 1 segment; 1 means 2 */ + +#define GSI_ISR_MAX_ITER 50 + +/* Hardware values from the error log register code field */ +enum gsi_err_code { + GSI_INVALID_TRE_ERR = 0x1, + GSI_OUT_OF_BUFFERS_ERR = 0x2, + GSI_OUT_OF_RESOURCES_ERR = 0x3, + GSI_UNSUPPORTED_INTER_EE_OP_ERR = 0x4, + GSI_EVT_RING_EMPTY_ERR = 0x5, + GSI_NON_ALLOCATED_EVT_ACCESS_ERR = 0x6, + GSI_HWO_1_ERR = 0x8, +}; + +/* Hardware values used when programming an event ring context */ +enum gsi_evt_chtype { + GSI_EVT_CHTYPE_MHI_EV = 0x0, + GSI_EVT_CHTYPE_XHCI_EV = 0x1, + GSI_EVT_CHTYPE_GPI_EV = 0x2, + GSI_EVT_CHTYPE_XDCI_EV = 0x3, +}; + +/* Hardware values used when programming a channel context */ +enum gsi_channel_protocol { + GSI_CHANNEL_PROTOCOL_MHI = 0x0, + GSI_CHANNEL_PROTOCOL_XHCI = 0x1, + GSI_CHANNEL_PROTOCOL_GPI = 0x2, + GSI_CHANNEL_PROTOCOL_XDCI = 0x3, +}; + +/* Hardware values returned in a transfer completion event structure */ +enum gsi_channel_evt { + GSI_CHANNEL_EVT_INVALID = 0x0, + GSI_CHANNEL_EVT_SUCCESS = 0x1, + GSI_CHANNEL_EVT_EOT = 0x2, + GSI_CHANNEL_EVT_OVERFLOW = 0x3, + GSI_CHANNEL_EVT_EOB = 0x4, + GSI_CHANNEL_EVT_OOB = 0x5, + GSI_CHANNEL_EVT_DB_MODE = 0x6, + GSI_CHANNEL_EVT_UNDEFINED = 0x10, + GSI_CHANNEL_EVT_RE_ERROR = 0x11, +}; + +/* Hardware values signifying the state of an event ring */ +enum gsi_evt_ring_state { + GSI_EVT_RING_STATE_NOT_ALLOCATED = 0x0, + GSI_EVT_RING_STATE_ALLOCATED = 0x1, + GSI_EVT_RING_STATE_ERROR = 0xf, +}; + +/* Hardware values signifying the state of a channel */ +enum gsi_channel_state { + GSI_CHANNEL_STATE_NOT_ALLOCATED = 0x0, + GSI_CHANNEL_STATE_ALLOCATED = 0x1, + GSI_CHANNEL_STATE_STARTED = 0x2, + GSI_CHANNEL_STATE_STOPPED = 0x3, + GSI_CHANNEL_STATE_STOP_IN_PROC = 0x4, + GSI_CHANNEL_STATE_ERROR = 0xf, +}; + +struct gsi_ring { + spinlock_t slock; /* protects wp, rp updates */ + struct ipa_dma_mem mem; + u64 wp; + u64 rp; + u64 wp_local; + u64 rp_local; + u64 end; /* physical addr past last element */ +}; + +struct gsi_channel { + bool from_ipa; /* true: IPA->AP; false: AP->IPA */ + bool priority; /* Does hardware give this channel priority? 
*/ + enum gsi_channel_state state; + struct gsi_ring ring; + void *notify_data; + void **user_data; + struct gsi_evt_ring *evt_ring; + struct mutex mutex; /* protects channel_scratch updates */ + struct completion compl; + atomic_t poll_mode; + u32 tlv_count; /* # slots in TLV */ +}; + +struct gsi_evt_ring { + bool moderation; + enum gsi_evt_ring_state state; + struct gsi_ring ring; + struct completion compl; + struct gsi_channel *channel; +}; + +struct ch_debug_stats { + unsigned long ch_allocate; + unsigned long ch_start; + unsigned long ch_stop; + unsigned long ch_reset; + unsigned long ch_de_alloc; + unsigned long ch_db_stop; + unsigned long cmd_completed; +}; + +struct gsi { + void __iomem *base; + struct device *dev; + u32 phys; + unsigned int irq; + bool irq_wake_enabled; + spinlock_t slock; /* protects global register updates */ + struct mutex mutex; /* protects 1-at-a-time commands, evt_bmap */ + atomic_t channel_count; + atomic_t evt_ring_count; + struct gsi_channel channel[GSI_CHAN_MAX]; + struct ch_debug_stats ch_dbg[GSI_CHAN_MAX]; + struct gsi_evt_ring evt_ring[GSI_EVT_RING_MAX]; + unsigned long evt_bmap; + u32 channel_max; + u32 evt_ring_max; +}; + +/* Hardware values representing a transfer element type */ +enum gsi_re_type { + GSI_RE_XFER = 0x2, + GSI_RE_IMMD_CMD = 0x3, + GSI_RE_NOP = 0x4, +}; + +struct gsi_tre { + u64 buffer_ptr; + u16 buf_len; + u16 rsvd1; + u8 chain : 1, + rsvd4 : 7; + u8 ieob : 1, + ieot : 1, + bei : 1, + rsvd3 : 5; + u8 re_type; + u8 rsvd2; +} __packed; + +struct gsi_xfer_compl_evt { + u64 xfer_ptr; + u16 len; + u8 rsvd1; + u8 code; /* see gsi_channel_evt */ + u16 rsvd; + u8 type; + u8 chid; +} __packed; + +/* Hardware values from the error log register error type field */ +enum gsi_err_type { + GSI_ERR_TYPE_GLOB = 0x1, + GSI_ERR_TYPE_CHAN = 0x2, + GSI_ERR_TYPE_EVT = 0x3, +}; + +struct gsi_log_err { + u8 arg3 : 4, + arg2 : 4; + u8 arg1 : 4, + code : 4; + u8 rsvd : 3, + virt_idx : 5; + u8 err_type : 4, + ee : 4; +} __packed; + +/* Hardware values repreasenting a channel immediate command opcode */ +enum gsi_ch_cmd_opcode { + GSI_CH_ALLOCATE = 0x0, + GSI_CH_START = 0x1, + GSI_CH_STOP = 0x2, + GSI_CH_RESET = 0x9, + GSI_CH_DE_ALLOC = 0xa, + GSI_CH_DB_STOP = 0xb, +}; + +/* Hardware values repreasenting an event ring immediate command opcode */ +enum gsi_evt_ch_cmd_opcode { + GSI_EVT_ALLOCATE = 0x0, + GSI_EVT_RESET = 0x9, + GSI_EVT_DE_ALLOC = 0xa, +}; + +/** gsi_gpi_channel_scratch - GPI protocol SW config area of channel scratch + * + * @max_outstanding_tre: Used for the prefetch management sequence by the + * sequencer. Defines the maximum number of allowed + * outstanding TREs in IPA/GSI (in Bytes). RE engine + * prefetch will be limited by this configuration. It + * is suggested to configure this value to IPA_IF + * channel TLV queue size times element size. To disable + * the feature in doorbell mode (DB Mode=1). Maximum + * outstanding TREs should be set to 64KB + * (or any value larger or equal to ring length . RLEN) + * @outstanding_threshold: Used for the prefetch management sequence by the + * sequencer. Defines the threshold (in Bytes) as to when + * to update the channel doorbell. Should be smaller than + * Maximum outstanding TREs. value. It is suggested to + * configure this value to 2 * element size. 
+ */ +struct gsi_gpi_channel_scratch { + u64 rsvd1; + u16 rsvd2; + u16 max_outstanding_tre; + u16 rsvd3; + u16 outstanding_threshold; +} __packed; + +/** gsi_channel_scratch - channel scratch SW config area */ +union gsi_channel_scratch { + struct gsi_gpi_channel_scratch gpi; + struct { + u32 word1; + u32 word2; + u32 word3; + u32 word4; + } data; +} __packed; + +/* Read a value from the given offset into the I/O space defined in + * the GSI context. + */ +static u32 gsi_readl(struct gsi *gsi, u32 offset) +{ + return readl(gsi->base + offset); +} + +/* Write the provided value to the given offset into the I/O space + * defined in the GSI context. + */ +static void gsi_writel(struct gsi *gsi, u32 v, u32 offset) +{ + writel(v, gsi->base + offset); +} + +static void +_gsi_irq_control_event(struct gsi *gsi, u32 evt_ring_id, bool enable) +{ + u32 mask = BIT(evt_ring_id); + u32 val; + + val = gsi_readl(gsi, GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFS); + if (enable) + val |= mask; + else + val &= ~mask; + gsi_writel(gsi, val, GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFS); +} + +static void gsi_irq_disable_event(struct gsi *gsi, u32 evt_ring_id) +{ + _gsi_irq_control_event(gsi, evt_ring_id, false); +} + +static void gsi_irq_enable_event(struct gsi *gsi, u32 evt_ring_id) +{ + _gsi_irq_control_event(gsi, evt_ring_id, true); +} + +static void _gsi_irq_control_all(struct gsi *gsi, bool enable) +{ + u32 val = enable ? ~0 : 0; + + /* Inter EE commands / interrupt are no supported. */ + gsi_writel(gsi, val, GSI_CNTXT_TYPE_IRQ_MSK_OFFS); + gsi_writel(gsi, val, GSI_CNTXT_SRC_CH_IRQ_MSK_OFFS); + gsi_writel(gsi, val, GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFS); + gsi_writel(gsi, val, GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFS); + gsi_writel(gsi, val, GSI_CNTXT_GLOB_IRQ_EN_OFFS); + /* Never enable GSI_BREAK_POINT */ + val &= ~FIELD_PREP(EN_BREAK_POINT_FMASK, 1); + gsi_writel(gsi, val, GSI_CNTXT_GSI_IRQ_EN_OFFS); +} + +static void gsi_irq_disable_all(struct gsi *gsi) +{ + _gsi_irq_control_all(gsi, false); +} + +static void gsi_irq_enable_all(struct gsi *gsi) +{ + _gsi_irq_control_all(gsi, true); +} + +static u32 gsi_channel_id(struct gsi *gsi, struct gsi_channel *channel) +{ + return (u32)(channel - &gsi->channel[0]); +} + +static u32 gsi_evt_ring_id(struct gsi *gsi, struct gsi_evt_ring *evt_ring) +{ + return (u32)(evt_ring - &gsi->evt_ring[0]); +} + +static enum gsi_channel_state gsi_channel_state(struct gsi *gsi, u32 channel_id) +{ + u32 val = gsi_readl(gsi, GSI_CH_C_CNTXT_0_OFFS(channel_id)); + + return (enum gsi_channel_state)FIELD_GET(CHSTATE_FMASK, val); +} + +static enum gsi_evt_ring_state +gsi_evt_ring_state(struct gsi *gsi, u32 evt_ring_id) +{ + u32 val = gsi_readl(gsi, GSI_EV_CH_E_CNTXT_0_OFFS(evt_ring_id)); + + return (enum gsi_evt_ring_state)FIELD_GET(EV_CHSTATE_FMASK, val); +} + +static void gsi_isr_chan_ctrl(struct gsi *gsi) +{ + u32 channel_mask; + + channel_mask = gsi_readl(gsi, GSI_CNTXT_SRC_CH_IRQ_OFFS); + gsi_writel(gsi, channel_mask, GSI_CNTXT_SRC_CH_IRQ_CLR_OFFS); + + ipa_assert(!(channel_mask & ~GENMASK(gsi->channel_max - 1, 0))); + + while (channel_mask) { + struct gsi_channel *channel; + int i = __ffs(channel_mask); + + channel = &gsi->channel[i]; + channel->state = gsi_channel_state(gsi, i); + + complete(&channel->compl); + + channel_mask ^= BIT(i); + } +} + +static void gsi_isr_evt_ctrl(struct gsi *gsi) +{ + u32 evt_mask; + + evt_mask = gsi_readl(gsi, GSI_CNTXT_SRC_EV_CH_IRQ_OFFS); + gsi_writel(gsi, evt_mask, GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFS); + + ipa_assert(!(evt_mask & ~GENMASK(gsi->evt_ring_max - 1, 0))); + + while (evt_mask) 
{ + struct gsi_evt_ring *evt_ring; + int i = __ffs(evt_mask); + + evt_ring = &gsi->evt_ring[i]; + evt_ring->state = gsi_evt_ring_state(gsi, i); + + complete(&evt_ring->compl); + + evt_mask ^= BIT(i); + } +} + +static void +gsi_isr_glob_chan_err(struct gsi *gsi, u32 err_ee, u32 channel_id, u32 code) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + + if (err_ee != IPA_EE_AP) + ipa_bug_on(code != GSI_UNSUPPORTED_INTER_EE_OP_ERR); + + if (WARN_ON(channel_id >= gsi->channel_max)) { + ipa_err("unexpected channel_id %u\n", channel_id); + return; + } + + switch (code) { + case GSI_INVALID_TRE_ERR: + ipa_err("got INVALID_TRE_ERR\n"); + channel->state = gsi_channel_state(gsi, channel_id); + ipa_bug_on(channel->state != GSI_CHANNEL_STATE_ERROR); + break; + case GSI_OUT_OF_BUFFERS_ERR: + ipa_err("got OUT_OF_BUFFERS_ERR\n"); + break; + case GSI_OUT_OF_RESOURCES_ERR: + ipa_err("got OUT_OF_RESOURCES_ERR\n"); + complete(&channel->compl); + break; + case GSI_UNSUPPORTED_INTER_EE_OP_ERR: + ipa_err("got UNSUPPORTED_INTER_EE_OP_ERR\n"); + break; + case GSI_NON_ALLOCATED_EVT_ACCESS_ERR: + ipa_err("got NON_ALLOCATED_EVT_ACCESS_ERR\n"); + break; + case GSI_HWO_1_ERR: + ipa_err("got HWO_1_ERR\n"); + break; + default: + ipa_err("unexpected channel error code %u\n", code); + ipa_bug(); + } +} + +static void +gsi_isr_glob_evt_err(struct gsi *gsi, u32 err_ee, u32 evt_ring_id, u32 code) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + + if (err_ee != IPA_EE_AP) + ipa_bug_on(code != GSI_UNSUPPORTED_INTER_EE_OP_ERR); + + if (WARN_ON(evt_ring_id >= gsi->evt_ring_max)) { + ipa_err("unexpected evt_ring_id %u\n", evt_ring_id); + return; + } + + switch (code) { + case GSI_OUT_OF_BUFFERS_ERR: + ipa_err("got OUT_OF_BUFFERS_ERR\n"); + break; + case GSI_OUT_OF_RESOURCES_ERR: + ipa_err("got OUT_OF_RESOURCES_ERR\n"); + complete(&evt_ring->compl); + break; + case GSI_UNSUPPORTED_INTER_EE_OP_ERR: + ipa_err("got UNSUPPORTED_INTER_EE_OP_ERR\n"); + break; + case GSI_EVT_RING_EMPTY_ERR: + ipa_err("got EVT_RING_EMPTY_ERR\n"); + break; + default: + ipa_err("unexpected event error code %u\n", code); + ipa_bug(); + } +} + +static void gsi_isr_glob_err(struct gsi *gsi, u32 err) +{ + struct gsi_log_err *log = (struct gsi_log_err *)&err; + + ipa_err("log err_type %u ee %u idx %u\n", log->err_type, log->ee, + log->virt_idx); + ipa_err("log code 0x%1x arg1 0x%1x arg2 0x%1x arg3 0x%1x\n", log->code, + log->arg1, log->arg2, log->arg3); + + ipa_bug_on(log->err_type == GSI_ERR_TYPE_GLOB); + + switch (log->err_type) { + case GSI_ERR_TYPE_CHAN: + gsi_isr_glob_chan_err(gsi, log->ee, log->virt_idx, log->code); + break; + case GSI_ERR_TYPE_EVT: + gsi_isr_glob_evt_err(gsi, log->ee, log->virt_idx, log->code); + break; + default: + WARN_ON(1); + } +} + +static void gsi_isr_glob_ee(struct gsi *gsi) +{ + u32 val; + + val = gsi_readl(gsi, GSI_CNTXT_GLOB_IRQ_STTS_OFFS); + + if (val & ERROR_INT_FMASK) { + u32 err = gsi_readl(gsi, GSI_ERROR_LOG_OFFS); + + gsi_writel(gsi, 0, GSI_ERROR_LOG_OFFS); + gsi_writel(gsi, ~0, GSI_ERROR_LOG_CLR_OFFS); + + gsi_isr_glob_err(gsi, err); + } + + if (val & EN_GP_INT1_FMASK) + ipa_err("unexpected GP INT1 received\n"); + + ipa_bug_on(val & EN_GP_INT2_FMASK); + ipa_bug_on(val & EN_GP_INT3_FMASK); + + gsi_writel(gsi, val, GSI_CNTXT_GLOB_IRQ_CLR_OFFS); +} + +static void ring_wp_local_inc(struct gsi_ring *ring) +{ + ring->wp_local += GSI_RING_ELEMENT_SIZE; + if (ring->wp_local == ring->end) + ring->wp_local = ring->mem.phys; +} + +static void ring_rp_local_inc(struct gsi_ring *ring) +{ + ring->rp_local 
+= GSI_RING_ELEMENT_SIZE; + if (ring->rp_local == ring->end) + ring->rp_local = ring->mem.phys; +} + +static u16 ring_rp_local_index(struct gsi_ring *ring) +{ + return (u16)(ring->rp_local - ring->mem.phys) / GSI_RING_ELEMENT_SIZE; +} + +static u16 ring_wp_local_index(struct gsi_ring *ring) +{ + return (u16)(ring->wp_local - ring->mem.phys) / GSI_RING_ELEMENT_SIZE; +} + +static void channel_xfer_cb(struct gsi_channel *channel, u16 count) +{ + void *xfer_data; + + if (!channel->from_ipa) { + u16 ring_rp_local = ring_rp_local_index(&channel->ring); + + xfer_data = channel->user_data[ring_rp_local];; + ipa_gsi_irq_tx_notify_cb(xfer_data); + } else { + ipa_gsi_irq_rx_notify_cb(channel->notify_data, count); + } +} + +static u16 gsi_channel_process(struct gsi *gsi, struct gsi_xfer_compl_evt *evt, + bool callback) +{ + struct gsi_channel *channel; + u32 channel_id = (u32)evt->chid; + + ipa_assert(channel_id < gsi->channel_max); + + /* Event tells us the last completed channel ring element */ + channel = &gsi->channel[channel_id]; + channel->ring.rp_local = evt->xfer_ptr; + + if (callback) { + if (evt->code == GSI_CHANNEL_EVT_EOT) + channel_xfer_cb(channel, evt->len); + else + ipa_err("ch %u unexpected %sX event id %hhu\n", + channel_id, channel->from_ipa ? "R" : "T", + evt->code); + } + + /* Record that we've processed this channel ring element. */ + ring_rp_local_inc(&channel->ring); + channel->ring.rp = channel->ring.rp_local; + + return evt->len; +} + +static void +gsi_evt_ring_doorbell(struct gsi *gsi, struct gsi_evt_ring *evt_ring) +{ + u32 evt_ring_id = gsi_evt_ring_id(gsi, evt_ring); + u32 val; + + /* The doorbell 0 and 1 registers store the low-order and + * high-order 32 bits of the event ring doorbell register, + * respectively. LSB (doorbell 0) must be written last. + */ + val = evt_ring->ring.wp_local >> 32; + gsi_writel(gsi, val, GSI_EV_CH_E_DOORBELL_1_OFFS(evt_ring_id)); + + val = evt_ring->ring.wp_local & GENMASK(31, 0); + gsi_writel(gsi, val, GSI_EV_CH_E_DOORBELL_0_OFFS(evt_ring_id)); +} + +static void gsi_channel_doorbell(struct gsi *gsi, struct gsi_channel *channel) +{ + u32 channel_id = gsi_channel_id(gsi, channel); + u32 val; + + /* allocate new events for this channel first + * before submitting the new TREs. + * for TO_GSI channels the event ring doorbell is rang as part of + * interrupt handling. + */ + if (channel->from_ipa) + gsi_evt_ring_doorbell(gsi, channel->evt_ring); + channel->ring.wp = channel->ring.wp_local; + + /* The doorbell 0 and 1 registers store the low-order and + * high-order 32 bits of the channel ring doorbell register, + * respectively. LSB (doorbell 0) must be written last. 
+ */ + val = channel->ring.wp_local >> 32; + gsi_writel(gsi, val, GSI_CH_C_DOORBELL_1_OFFS(channel_id)); + val = channel->ring.wp_local & GENMASK(31, 0); + gsi_writel(gsi, val, GSI_CH_C_DOORBELL_0_OFFS(channel_id)); +} + +static void gsi_event_handle(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + unsigned long flags; + bool check_again; + + spin_lock_irqsave(&evt_ring->ring.slock, flags); + + do { + u32 val = gsi_readl(gsi, GSI_EV_CH_E_CNTXT_4_OFFS(evt_ring_id)); + + evt_ring->ring.rp = evt_ring->ring.rp & GENMASK_ULL(63, 32); + evt_ring->ring.rp |= val; + + check_again = false; + while (evt_ring->ring.rp_local != evt_ring->ring.rp) { + struct gsi_xfer_compl_evt *evt; + + if (atomic_read(&evt_ring->channel->poll_mode)) { + check_again = false; + break; + } + check_again = true; + + evt = ipa_dma_phys_to_virt(&evt_ring->ring.mem, + evt_ring->ring.rp_local); + (void)gsi_channel_process(gsi, evt, true); + + ring_rp_local_inc(&evt_ring->ring); + ring_wp_local_inc(&evt_ring->ring); /* recycle */ + } + + gsi_evt_ring_doorbell(gsi, evt_ring); + } while (check_again); + + spin_unlock_irqrestore(&evt_ring->ring.slock, flags); +} + +static void gsi_isr_ioeb(struct gsi *gsi) +{ + u32 evt_mask; + + evt_mask = gsi_readl(gsi, GSI_CNTXT_SRC_IEOB_IRQ_OFFS); + evt_mask &= gsi_readl(gsi, GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFS); + gsi_writel(gsi, evt_mask, GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFS); + + ipa_assert(!(evt_mask & ~GENMASK(gsi->evt_ring_max - 1, 0))); + + while (evt_mask) { + u32 i = (u32)__ffs(evt_mask); + + gsi_event_handle(gsi, i); + + evt_mask ^= BIT(i); + } +} + +static void gsi_isr_inter_ee_chan_ctrl(struct gsi *gsi) +{ + u32 channel_mask; + + channel_mask = gsi_readl(gsi, GSI_INTER_EE_SRC_CH_IRQ_OFFS); + gsi_writel(gsi, channel_mask, GSI_INTER_EE_SRC_CH_IRQ_CLR_OFFS); + + ipa_assert(!(channel_mask & ~GENMASK(gsi->channel_max - 1, 0))); + + while (channel_mask) { + int i = __ffs(channel_mask); + + /* not currently expected */ + ipa_err("ch %d was inter-EE changed\n", i); + channel_mask ^= BIT(i); + } +} + +static void gsi_isr_inter_ee_evt_ctrl(struct gsi *gsi) +{ + u32 evt_mask; + + evt_mask = gsi_readl(gsi, GSI_INTER_EE_SRC_EV_CH_IRQ_OFFS); + gsi_writel(gsi, evt_mask, GSI_INTER_EE_SRC_EV_CH_IRQ_CLR_OFFS); + + ipa_assert(!(evt_mask & ~GENMASK(gsi->evt_ring_max - 1, 0))); + + while (evt_mask) { + u32 i = (u32)__ffs(evt_mask); + + /* not currently expected */ + ipa_err("evt %d was inter-EE changed\n", i); + evt_mask ^= BIT(i); + } +} + +static void gsi_isr_general(struct gsi *gsi) +{ + u32 val; + + val = gsi_readl(gsi, GSI_CNTXT_GSI_IRQ_STTS_OFFS); + + ipa_bug_on(val & CLR_MCS_STACK_OVRFLOW_FMASK); + ipa_bug_on(val & CLR_CMD_FIFO_OVRFLOW_FMASK); + ipa_bug_on(val & CLR_BUS_ERROR_FMASK); + + if (val & CLR_BREAK_POINT_FMASK) + ipa_err("got breakpoint\n"); + + gsi_writel(gsi, val, GSI_CNTXT_GSI_IRQ_CLR_OFFS); +} + +/* Returns a bitmask of pending GSI interrupts */ +static u32 gsi_isr_type(struct gsi *gsi) +{ + return gsi_readl(gsi, GSI_CNTXT_TYPE_IRQ_OFFS); +} + +static irqreturn_t gsi_isr(int irq, void *dev_id) +{ + struct gsi *gsi = dev_id; + u32 type; + u32 cnt; + + cnt = 0; + while ((type = gsi_isr_type(gsi))) { + do { + u32 single = BIT(__ffs(type)); + + switch (single) { + case CH_CTRL_FMASK: + gsi_isr_chan_ctrl(gsi); + break; + case EV_CTRL_FMASK: + gsi_isr_evt_ctrl(gsi); + break; + case GLOB_EE_FMASK: + gsi_isr_glob_ee(gsi); + break; + case IEOB_FMASK: + gsi_isr_ioeb(gsi); + break; + case INTER_EE_CH_CTRL_FMASK: + gsi_isr_inter_ee_chan_ctrl(gsi); + 
break; + case INTER_EE_EV_CTRL_FMASK: + gsi_isr_inter_ee_evt_ctrl(gsi); + break; + case GENERAL_FMASK: + gsi_isr_general(gsi); + break; + default: + WARN(true, "%s: unrecognized type 0x%08x\n", + __func__, single); + break; + } + type ^= single; + } while (type); + + ipa_bug_on(++cnt > GSI_ISR_MAX_ITER); + } + + return IRQ_HANDLED; +} + +static u32 gsi_channel_max(struct gsi *gsi) +{ + u32 val = gsi_readl(gsi, GSI_GSI_HW_PARAM_2_OFFS); + + return FIELD_GET(NUM_CH_PER_EE_FMASK, val); +} + +static u32 gsi_evt_ring_max(struct gsi *gsi) +{ + u32 val = gsi_readl(gsi, GSI_GSI_HW_PARAM_2_OFFS); + + return FIELD_GET(NUM_EV_PER_EE_FMASK, val); +} + +/* Zero bits in an event bitmap represent event numbers available + * for allocation. Initialize the map so all events supported by + * the hardware are available; then preclude any reserved events + * from allocation. + */ +static u32 gsi_evt_bmap_init(u32 evt_ring_max) +{ + u32 evt_bmap = GENMASK(BITS_PER_LONG - 1, evt_ring_max); + + return evt_bmap | GENMASK(GSI_MHI_ER_END, GSI_MHI_ER_START); +} + +int gsi_device_init(struct gsi *gsi) +{ + u32 evt_ring_max; + u32 channel_max; + u32 val; + int ret; + + val = gsi_readl(gsi, GSI_GSI_STATUS_OFFS); + if (!(val & ENABLED_FMASK)) { + ipa_err("manager EE has not enabled GSI, GSI un-usable\n"); + return -EIO; + } + + channel_max = gsi_channel_max(gsi); + ipa_debug("channel_max %u\n", channel_max); + ipa_assert(channel_max <= GSI_CHAN_MAX); + + evt_ring_max = gsi_evt_ring_max(gsi); + ipa_debug("evt_ring_max %u\n", evt_ring_max); + ipa_assert(evt_ring_max <= GSI_EVT_RING_MAX); + + ret = request_irq(gsi->irq, gsi_isr, IRQF_TRIGGER_HIGH, "gsi", gsi); + if (ret) { + ipa_err("failed to register isr for %u\n", gsi->irq); + return -EIO; + } + + ret = enable_irq_wake(gsi->irq); + if (ret) + ipa_err("error %d enabling gsi wake irq\n", ret); + gsi->irq_wake_enabled = !ret; + gsi->channel_max = channel_max; + gsi->evt_ring_max = evt_ring_max; + gsi->evt_bmap = gsi_evt_bmap_init(evt_ring_max); + + /* Enable all IPA interrupts */ + gsi_irq_enable_all(gsi); + + /* Writing 1 indicates IRQ interrupts; 0 would be MSI */ + gsi_writel(gsi, 1, GSI_CNTXT_INTSET_OFFS); + + /* Initialize the error log */ + gsi_writel(gsi, 0, GSI_ERROR_LOG_OFFS); + + return 0; +} + +void gsi_device_exit(struct gsi *gsi) +{ + ipa_assert(!atomic_read(&gsi->channel_count)); + ipa_assert(!atomic_read(&gsi->evt_ring_count)); + + /* Don't bother clearing the error log again (ERROR_LOG) or + * setting the interrupt type again (INTSET). + */ + gsi_irq_disable_all(gsi); + + /* Clean up everything else set up by gsi_device_init() */ + gsi->evt_bmap = 0; + gsi->evt_ring_max = 0; + gsi->channel_max = 0; + if (gsi->irq_wake_enabled) { + (void)disable_irq_wake(gsi->irq); + gsi->irq_wake_enabled = false; + } + free_irq(gsi->irq, gsi); + gsi->irq = 0; +} + +static void gsi_evt_ring_program(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + u32 int_modt; + u32 int_modc; + u64 phys; + u32 val; + + phys = evt_ring->ring.mem.phys; + int_modt = evt_ring->moderation ? 
IPA_GSI_EVT_RING_INT_MODT : 0; + int_modc = 1; /* moderation always comes from channel*/ + + val = FIELD_PREP(EV_CHTYPE_FMASK, GSI_EVT_CHTYPE_GPI_EV); + val |= FIELD_PREP(EV_INTYPE_FMASK, 1); + val |= FIELD_PREP(EV_ELEMENT_SIZE_FMASK, GSI_RING_ELEMENT_SIZE); + gsi_writel(gsi, val, GSI_EV_CH_E_CNTXT_0_OFFS(evt_ring_id)); + + val = FIELD_PREP(EV_R_LENGTH_FMASK, (u32)evt_ring->ring.mem.size); + gsi_writel(gsi, val, GSI_EV_CH_E_CNTXT_1_OFFS(evt_ring_id)); + + /* The context 2 and 3 registers store the low-order and + * high-order 32 bits of the address of the event ring, + * respectively. + */ + val = phys & GENMASK(31, 0); + gsi_writel(gsi, val, GSI_EV_CH_E_CNTXT_2_OFFS(evt_ring_id)); + + val = phys >> 32; + gsi_writel(gsi, val, GSI_EV_CH_E_CNTXT_3_OFFS(evt_ring_id)); + + val = FIELD_PREP(MODT_FMASK, int_modt); + val |= FIELD_PREP(MODC_FMASK, int_modc); + gsi_writel(gsi, val, GSI_EV_CH_E_CNTXT_8_OFFS(evt_ring_id)); + + /* No MSI write data, and MSI address high and low address is 0 */ + gsi_writel(gsi, 0, GSI_EV_CH_E_CNTXT_9_OFFS(evt_ring_id)); + gsi_writel(gsi, 0, GSI_EV_CH_E_CNTXT_10_OFFS(evt_ring_id)); + gsi_writel(gsi, 0, GSI_EV_CH_E_CNTXT_11_OFFS(evt_ring_id)); + + /* We don't need to get event read pointer updates */ + gsi_writel(gsi, 0, GSI_EV_CH_E_CNTXT_12_OFFS(evt_ring_id)); + gsi_writel(gsi, 0, GSI_EV_CH_E_CNTXT_13_OFFS(evt_ring_id)); +} + +static void gsi_ring_init(struct gsi_ring *ring) +{ + ring->wp_local = ring->wp = ring->mem.phys; + ring->rp_local = ring->rp = ring->mem.phys; +} + +static int gsi_ring_alloc(struct gsi_ring *ring, u32 count) +{ + size_t size = roundup_pow_of_two(count * GSI_RING_ELEMENT_SIZE); + + /* Hardware requires a power-of-2 ring size (and alignment) */ + if (ipa_dma_alloc(&ring->mem, size, GFP_KERNEL)) + return -ENOMEM; + ipa_assert(!(ring->mem.phys % size)); + + ring->end = ring->mem.phys + size; + spin_lock_init(&ring->slock); + + return 0; +} + +static void gsi_ring_free(struct gsi_ring *ring) +{ + ipa_dma_free(&ring->mem); + memset(ring, 0, sizeof(*ring)); +} + +static void gsi_evt_ring_prime(struct gsi *gsi, struct gsi_evt_ring *evt_ring) +{ + unsigned long flags; + + spin_lock_irqsave(&evt_ring->ring.slock, flags); + memset(evt_ring->ring.mem.virt, 0, evt_ring->ring.mem.size); + evt_ring->ring.wp_local = evt_ring->ring.end - GSI_RING_ELEMENT_SIZE; + gsi_evt_ring_doorbell(gsi, evt_ring); + spin_unlock_irqrestore(&evt_ring->ring.slock, flags); +} + +/* Issue a GSI command by writing a value to a register, then wait + * for completion to be signaled. Returns true if successful or + * false if a timeout occurred. Note that the register offset is + * first, value to write is second (reverse of writel() order). 
+ */ +static bool command(struct gsi *gsi, u32 reg, u32 val, struct completion *compl) +{ + bool ret; + + gsi_writel(gsi, val, reg); + ret = !!wait_for_completion_timeout(compl, GSI_CMD_TIMEOUT); + if (!ret) + ipa_err("command timeout\n"); + + return ret; +} + +/* Issue an event ring command and wait for it to complete */ +static bool evt_ring_command(struct gsi *gsi, u32 evt_ring_id, + enum gsi_evt_ch_cmd_opcode op) +{ + struct completion *compl = &gsi->evt_ring[evt_ring_id].compl; + u32 val; + + reinit_completion(compl); + + val = FIELD_PREP(EV_CHID_FMASK, evt_ring_id); + val |= FIELD_PREP(EV_OPCODE_FMASK, (u32)op); + + return command(gsi, GSI_EV_CH_CMD_OFFS, val, compl); +} + +/* Issue a channel command and wait for it to complete */ +static bool +channel_command(struct gsi *gsi, u32 channel_id, enum gsi_ch_cmd_opcode op) +{ + struct completion *compl = &gsi->channel[channel_id].compl; + u32 val; + + reinit_completion(compl); + + val = FIELD_PREP(CH_CHID_FMASK, channel_id); + val |= FIELD_PREP(CH_OPCODE_FMASK, (u32)op); + + return command(gsi, GSI_CH_CMD_OFFS, val, compl); +} + +/* Note: only GPI interfaces, IRQ interrupts are currently supported */ +static int gsi_evt_ring_alloc(struct gsi *gsi, u32 ring_count, bool moderation) +{ + struct gsi_evt_ring *evt_ring; + unsigned long flags; + u32 evt_ring_id; + u32 val; + int ret; + + /* Get the mutex to allocate from the bitmap and issue a command */ + mutex_lock(&gsi->mutex); + + /* Start by allocating the event id to use */ + ipa_assert(gsi->evt_bmap != ~0UL); + evt_ring_id = (u32)ffz(gsi->evt_bmap); + gsi->evt_bmap |= BIT(evt_ring_id); + + evt_ring = &gsi->evt_ring[evt_ring_id]; + + ret = gsi_ring_alloc(&evt_ring->ring, ring_count); + if (ret) + goto err_free_bmap; + + init_completion(&evt_ring->compl); + + if (!evt_ring_command(gsi, evt_ring_id, GSI_EVT_ALLOCATE)) { + ret = -ETIMEDOUT; + goto err_free_ring; + } + + if (evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) { + ipa_err("evt_ring_id %u allocation failed state %u\n", + evt_ring_id, evt_ring->state); + ret = -ENOMEM; + goto err_free_ring; + } + atomic_inc(&gsi->evt_ring_count); + + evt_ring->moderation = moderation; + + gsi_evt_ring_program(gsi, evt_ring_id); + gsi_ring_init(&evt_ring->ring); + gsi_evt_ring_prime(gsi, evt_ring); + + mutex_unlock(&gsi->mutex); + + spin_lock_irqsave(&gsi->slock, flags); + + /* Enable the event interrupt (clear it first in case pending) */ + val = BIT(evt_ring_id); + gsi_writel(gsi, val, GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFS); + gsi_irq_enable_event(gsi, evt_ring_id); + + spin_unlock_irqrestore(&gsi->slock, flags); + + return evt_ring_id; + +err_free_ring: + gsi_ring_free(&evt_ring->ring); + memset(evt_ring, 0, sizeof(*evt_ring)); +err_free_bmap: + ipa_assert(gsi->evt_bmap & BIT(evt_ring_id)); + gsi->evt_bmap &= ~BIT(evt_ring_id); + + mutex_unlock(&gsi->mutex); + + return ret; +} + +static void gsi_evt_ring_scratch_zero(struct gsi *gsi, u32 evt_ring_id) +{ + gsi_writel(gsi, 0, GSI_EV_CH_E_SCRATCH_0_OFFS(evt_ring_id)); + gsi_writel(gsi, 0, GSI_EV_CH_E_SCRATCH_1_OFFS(evt_ring_id)); +} + +static void gsi_evt_ring_dealloc(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + bool completed; + + ipa_bug_on(evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED); + + mutex_lock(&gsi->mutex); + + completed = evt_ring_command(gsi, evt_ring_id, GSI_EVT_RESET); + ipa_bug_on(!completed); + ipa_bug_on(evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED); + + gsi_evt_ring_program(gsi, evt_ring_id); + 
gsi_ring_init(&evt_ring->ring); + gsi_evt_ring_scratch_zero(gsi, evt_ring_id); + gsi_evt_ring_prime(gsi, evt_ring); + + completed = evt_ring_command(gsi, evt_ring_id, GSI_EVT_DE_ALLOC); + ipa_bug_on(!completed); + + ipa_bug_on(evt_ring->state != GSI_EVT_RING_STATE_NOT_ALLOCATED); + + ipa_assert(gsi->evt_bmap & BIT(evt_ring_id)); + gsi->evt_bmap &= ~BIT(evt_ring_id); + + mutex_unlock(&gsi->mutex); + + evt_ring->moderation = false; + gsi_ring_free(&evt_ring->ring); + memset(evt_ring, 0, sizeof(*evt_ring)); + + atomic_dec(&gsi->evt_ring_count); +} + +static void gsi_channel_program(struct gsi *gsi, u32 channel_id, + u32 evt_ring_id, bool doorbell_enable) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + u32 low_weight; + u32 val; + + val = FIELD_PREP(CHTYPE_PROTOCOL_FMASK, GSI_CHANNEL_PROTOCOL_GPI); + val |= FIELD_PREP(CHTYPE_DIR_FMASK, channel->from_ipa ? 0 : 1); + val |= FIELD_PREP(ERINDEX_FMASK, evt_ring_id); + val |= FIELD_PREP(ELEMENT_SIZE_FMASK, GSI_RING_ELEMENT_SIZE); + gsi_writel(gsi, val, GSI_CH_C_CNTXT_0_OFFS(channel_id)); + + val = FIELD_PREP(R_LENGTH_FMASK, channel->ring.mem.size); + gsi_writel(gsi, val, GSI_CH_C_CNTXT_1_OFFS(channel_id)); + + /* The context 2 and 3 registers store the low-order and + * high-order 32 bits of the address of the channel ring, + * respectively. + */ + val = channel->ring.mem.phys & GENMASK(31, 0); + gsi_writel(gsi, val, GSI_CH_C_CNTXT_2_OFFS(channel_id)); + + val = channel->ring.mem.phys >> 32; + gsi_writel(gsi, val, GSI_CH_C_CNTXT_3_OFFS(channel_id)); + + low_weight = channel->priority ? FIELD_MAX(WRR_WEIGHT_FMASK) : 0; + val = FIELD_PREP(WRR_WEIGHT_FMASK, low_weight); + val |= FIELD_PREP(MAX_PREFETCH_FMASK, GSI_MAX_PREFETCH); + val |= FIELD_PREP(USE_DB_ENG_FMASK, doorbell_enable ? 1 : 0); + gsi_writel(gsi, val, GSI_CH_C_QOS_OFFS(channel_id)); +} + +int gsi_channel_alloc(struct gsi *gsi, u32 channel_id, u32 channel_count, + bool from_ipa, bool priority, u32 evt_ring_mult, + bool moderation, void *notify_data) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + u32 evt_ring_count; + u32 evt_ring_id; + void **user_data; + int ret; + + evt_ring_count = channel_count * evt_ring_mult; + ret = gsi_evt_ring_alloc(gsi, evt_ring_count, moderation); + if (ret < 0) + return ret; + evt_ring_id = (u32)ret; + + ret = gsi_ring_alloc(&channel->ring, channel_count); + if (ret) + goto err_evt_ring_free; + + user_data = kcalloc(channel_count, sizeof(void *), GFP_KERNEL); + if (!user_data) { + ret = -ENOMEM; + goto err_ring_free; + } + + mutex_init(&channel->mutex); + init_completion(&channel->compl); + atomic_set(&channel->poll_mode, 0); /* Initially in callback mode */ + channel->from_ipa = from_ipa; + channel->notify_data = notify_data; + + mutex_lock(&gsi->mutex); + + if (!channel_command(gsi, channel_id, GSI_CH_ALLOCATE)) { + ret = -ETIMEDOUT; + goto err_mutex_unlock; + } + if (channel->state != GSI_CHANNEL_STATE_ALLOCATED) { + ret = -EIO; + goto err_mutex_unlock; + } + + gsi->ch_dbg[channel_id].ch_allocate++; + + mutex_unlock(&gsi->mutex); + + channel->evt_ring = &gsi->evt_ring[evt_ring_id]; + channel->evt_ring->channel = channel; + channel->priority = priority; + + gsi_channel_program(gsi, channel_id, evt_ring_id, true); + gsi_ring_init(&channel->ring); + + channel->user_data = user_data; + atomic_inc(&gsi->channel_count); + + return 0; + +err_mutex_unlock: + mutex_unlock(&gsi->mutex); + kfree(user_data); +err_ring_free: + gsi_ring_free(&channel->ring); +err_evt_ring_free: + gsi_evt_ring_dealloc(gsi, evt_ring_id); + + return ret; +} + 
+static void __gsi_channel_scratch_write(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + struct gsi_gpi_channel_scratch *gpi; + union gsi_channel_scratch scr = { }; + u32 val; + + gpi = &scr.gpi; + /* See comments above definition of gsi_gpi_channel_scratch */ + gpi->max_outstanding_tre = channel->tlv_count * GSI_RING_ELEMENT_SIZE; + gpi->outstanding_threshold = 2 * GSI_RING_ELEMENT_SIZE; + + val = scr.data.word1; + gsi_writel(gsi, val, GSI_CH_C_SCRATCH_0_OFFS(channel_id)); + + val = scr.data.word2; + gsi_writel(gsi, val, GSI_CH_C_SCRATCH_1_OFFS(channel_id)); + + val = scr.data.word3; + gsi_writel(gsi, val, GSI_CH_C_SCRATCH_2_OFFS(channel_id)); + + /* We must preserve the upper 16 bits of the last scratch + * register. The next sequence assumes those bits remain + * unchanged between the read and the write. + */ + val = gsi_readl(gsi, GSI_CH_C_SCRATCH_3_OFFS(channel_id)); + val = (scr.data.word4 & GENMASK(31, 16)) | (val & GENMASK(15, 0)); + gsi_writel(gsi, val, GSI_CH_C_SCRATCH_3_OFFS(channel_id)); +} + +void gsi_channel_scratch_write(struct gsi *gsi, u32 channel_id, u32 tlv_count) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + + channel->tlv_count = tlv_count; + + mutex_lock(&channel->mutex); + + __gsi_channel_scratch_write(gsi, channel_id); + + mutex_unlock(&channel->mutex); +} + +int gsi_channel_start(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + + if (channel->state != GSI_CHANNEL_STATE_ALLOCATED && + channel->state != GSI_CHANNEL_STATE_STOP_IN_PROC && + channel->state != GSI_CHANNEL_STATE_STOPPED) { + ipa_err("bad state %d\n", channel->state); + return -ENOTSUPP; + } + + mutex_lock(&gsi->mutex); + + gsi->ch_dbg[channel_id].ch_start++; + + if (!channel_command(gsi, channel_id, GSI_CH_START)) { + mutex_unlock(&gsi->mutex); + return -ETIMEDOUT; + } + if (channel->state != GSI_CHANNEL_STATE_STARTED) { + ipa_err("channel %u unexpected state %u\n", channel_id, + channel->state); + ipa_bug(); + } + + mutex_unlock(&gsi->mutex); + + return 0; +} + +int gsi_channel_stop(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + int ret; + + if (channel->state == GSI_CHANNEL_STATE_STOPPED) + return 0; + + if (channel->state != GSI_CHANNEL_STATE_STARTED && + channel->state != GSI_CHANNEL_STATE_STOP_IN_PROC && + channel->state != GSI_CHANNEL_STATE_ERROR) { + ipa_err("bad state %d\n", channel->state); + return -ENOTSUPP; + } + + mutex_lock(&gsi->mutex); + + gsi->ch_dbg[channel_id].ch_stop++; + + if (!channel_command(gsi, channel_id, GSI_CH_STOP)) { + /* check channel state here in case the channel is stopped but + * the interrupt was not handled yet. 
+ */ + channel->state = gsi_channel_state(gsi, channel_id); + if (channel->state == GSI_CHANNEL_STATE_STOPPED) { + ret = 0; + goto free_lock; + } + ret = -ETIMEDOUT; + goto free_lock; + } + + if (channel->state != GSI_CHANNEL_STATE_STOPPED && + channel->state != GSI_CHANNEL_STATE_STOP_IN_PROC) { + ipa_err("channel %u unexpected state %u\n", channel_id, + channel->state); + ret = -EBUSY; + goto free_lock; + } + + if (channel->state == GSI_CHANNEL_STATE_STOP_IN_PROC) { + ipa_err("channel %u busy try again\n", channel_id); + ret = -EAGAIN; + goto free_lock; + } + + ret = 0; + +free_lock: + mutex_unlock(&gsi->mutex); + + return ret; +} + +int gsi_channel_reset(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + u32 evt_ring_id; + bool reset_done; + + if (channel->state != GSI_CHANNEL_STATE_STOPPED) { + ipa_err("bad state %d\n", channel->state); + return -ENOTSUPP; + } + + evt_ring_id = gsi_evt_ring_id(gsi, channel->evt_ring); + reset_done = false; + mutex_lock(&gsi->mutex); +reset: + + gsi->ch_dbg[channel_id].ch_reset++; + + if (!channel_command(gsi, channel_id, GSI_CH_RESET)) { + mutex_unlock(&gsi->mutex); + return -ETIMEDOUT; + } + + if (channel->state != GSI_CHANNEL_STATE_ALLOCATED) { + ipa_err("channel_id %u unexpected state %u\n", channel_id, + channel->state); + ipa_bug(); + } + + /* workaround: reset GSI producers again */ + if (channel->from_ipa && !reset_done) { + usleep_range(GSI_RESET_WA_MIN_SLEEP, GSI_RESET_WA_MAX_SLEEP); + reset_done = true; + goto reset; + } + + gsi_channel_program(gsi, channel_id, evt_ring_id, true); + gsi_ring_init(&channel->ring); + + /* restore scratch */ + __gsi_channel_scratch_write(gsi, channel_id); + + mutex_unlock(&gsi->mutex); + + return 0; +} + +void gsi_channel_free(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + u32 evt_ring_id; + bool completed; + + ipa_bug_on(channel->state != GSI_CHANNEL_STATE_ALLOCATED); + + evt_ring_id = gsi_evt_ring_id(gsi, channel->evt_ring); + mutex_lock(&gsi->mutex); + + gsi->ch_dbg[channel_id].ch_de_alloc++; + + completed = channel_command(gsi, channel_id, GSI_CH_DE_ALLOC); + ipa_bug_on(!completed); + + ipa_bug_on(channel->state != GSI_CHANNEL_STATE_NOT_ALLOCATED); + + mutex_unlock(&gsi->mutex); + + kfree(channel->user_data); + gsi_ring_free(&channel->ring); + + gsi_evt_ring_dealloc(gsi, evt_ring_id); + + memset(channel, 0, sizeof(*channel)); + + atomic_dec(&gsi->channel_count); +} + +static u16 __gsi_query_ring_free_re(struct gsi_ring *ring) +{ + u64 delta; + + if (ring->wp_local < ring->rp_local) + delta = ring->rp_local - ring->wp_local; + else + delta = ring->end - ring->wp_local + ring->rp_local; + + return (u16)(delta / GSI_RING_ELEMENT_SIZE - 1); +} + +int gsi_channel_queue(struct gsi *gsi, u32 channel_id, u16 num_xfers, + struct gsi_xfer_elem *xfer, bool ring_db) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + unsigned long flags; + u32 i; + + spin_lock_irqsave(&channel->evt_ring->ring.slock, flags); + + if (num_xfers > __gsi_query_ring_free_re(&channel->ring)) { + spin_unlock_irqrestore(&channel->evt_ring->ring.slock, flags); + ipa_err("no space for %u-element transfer on ch %u\n", + num_xfers, channel_id); + + return -ENOSPC; + } + + for (i = 0; i < num_xfers; i++) { + struct gsi_tre *tre_ptr; + u16 idx = ring_wp_local_index(&channel->ring); + + channel->user_data[idx] = xfer[i].user_data; + + tre_ptr = ipa_dma_phys_to_virt(&channel->ring.mem, + channel->ring.wp_local); + + tre_ptr->buffer_ptr = 
xfer[i].addr; + tre_ptr->buf_len = xfer[i].len_opcode; + tre_ptr->bei = xfer[i].flags & GSI_XFER_FLAG_BEI ? 1 : 0; + tre_ptr->ieot = xfer[i].flags & GSI_XFER_FLAG_EOT ? 1 : 0; + tre_ptr->ieob = xfer[i].flags & GSI_XFER_FLAG_EOB ? 1 : 0; + tre_ptr->chain = xfer[i].flags & GSI_XFER_FLAG_CHAIN ? 1 : 0; + + if (xfer[i].type == GSI_XFER_ELEM_DATA) + tre_ptr->re_type = GSI_RE_XFER; + else if (xfer[i].type == GSI_XFER_ELEM_IMME_CMD) + tre_ptr->re_type = GSI_RE_IMMD_CMD; + else if (xfer[i].type == GSI_XFER_ELEM_NOP) + tre_ptr->re_type = GSI_RE_NOP; + else + ipa_bug_on("invalid xfer type"); + + ring_wp_local_inc(&channel->ring); + } + + wmb(); /* Ensure TRE is set before ringing doorbell */ + + if (ring_db) + gsi_channel_doorbell(gsi, channel); + + spin_unlock_irqrestore(&channel->evt_ring->ring.slock, flags); + + return 0; +} + +int gsi_channel_poll(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + struct gsi_evt_ring *evt_ring; + unsigned long flags; + u32 evt_ring_id; + int size; + + evt_ring = channel->evt_ring; + evt_ring_id = gsi_evt_ring_id(gsi, evt_ring); + + spin_lock_irqsave(&evt_ring->ring.slock, flags); + + /* update rp to see if we have anything new to process */ + if (evt_ring->ring.rp == evt_ring->ring.rp_local) { + u32 val; + + val = gsi_readl(gsi, GSI_EV_CH_E_CNTXT_4_OFFS(evt_ring_id)); + evt_ring->ring.rp = channel->ring.rp & GENMASK_ULL(63, 32); + evt_ring->ring.rp |= val; + } + + if (evt_ring->ring.rp != evt_ring->ring.rp_local) { + struct gsi_xfer_compl_evt *evt; + + evt = ipa_dma_phys_to_virt(&evt_ring->ring.mem, + evt_ring->ring.rp_local); + size = gsi_channel_process(gsi, evt, false); + + ring_rp_local_inc(&evt_ring->ring); + ring_wp_local_inc(&evt_ring->ring); /* recycle element */ + } else { + size = -ENOENT; + } + + spin_unlock_irqrestore(&evt_ring->ring.slock, flags); + + return size; +} + +static void gsi_channel_mode_set(struct gsi *gsi, u32 channel_id, bool polling) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + unsigned long flags; + u32 evt_ring_id; + + evt_ring_id = gsi_evt_ring_id(gsi, channel->evt_ring); + + spin_lock_irqsave(&gsi->slock, flags); + + if (polling) + gsi_irq_disable_event(gsi, evt_ring_id); + else + gsi_irq_enable_event(gsi, evt_ring_id); + atomic_set(&channel->poll_mode, polling ?
1 : 0); + + spin_unlock_irqrestore(&gsi->slock, flags); +} + +void gsi_channel_intr_enable(struct gsi *gsi, u32 channel_id) +{ + gsi_channel_mode_set(gsi, channel_id, false); +} + +void gsi_channel_intr_disable(struct gsi *gsi, u32 channel_id) +{ + gsi_channel_mode_set(gsi, channel_id, true); +} + +void gsi_channel_config(struct gsi *gsi, u32 channel_id, bool doorbell_enable) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + u32 evt_ring_id; + + evt_ring_id = gsi_evt_ring_id(gsi, channel->evt_ring); + + mutex_lock(&channel->mutex); + + gsi_channel_program(gsi, channel_id, evt_ring_id, doorbell_enable); + gsi_ring_init(&channel->ring); + + /* restore scratch */ + __gsi_channel_scratch_write(gsi, channel_id); + mutex_unlock(&channel->mutex); +} + +/* Initialize GSI driver */ +struct gsi *gsi_init(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct resource *res; + resource_size_t size; + struct gsi *gsi; + int irq; + + /* Get GSI memory range and map it */ + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "gsi"); + if (!res) { + ipa_err("missing \"gsi\" property in DTB\n"); + return ERR_PTR(-EINVAL); + } + + size = resource_size(res); + if (res->start > U32_MAX || size > U32_MAX) { + ipa_err("\"gsi\" values out of range\n"); + return ERR_PTR(-EINVAL); + } + + /* Get IPA GSI IRQ number */ + irq = platform_get_irq_byname(pdev, "gsi"); + if (irq < 0) { + ipa_err("failed to get gsi IRQ!\n"); + return ERR_PTR(irq); + } + + gsi = kzalloc(sizeof(*gsi), GFP_KERNEL); + if (!gsi) + return ERR_PTR(-ENOMEM); + + gsi->base = devm_ioremap_nocache(dev, res->start, size); + if (!gsi->base) { + kfree(gsi); + + return ERR_PTR(-ENOMEM); + } + gsi->dev = dev; + gsi->phys = (u32)res->start; + gsi->irq = irq; + spin_lock_init(&gsi->slock); + mutex_init(&gsi->mutex); + atomic_set(&gsi->channel_count, 0); + atomic_set(&gsi->evt_ring_count, 0); + + return gsi; +} diff --git a/drivers/net/ipa/gsi.h b/drivers/net/ipa/gsi.h new file mode 100644 index 000000000000..497f67cc6f80 --- /dev/null +++ b/drivers/net/ipa/gsi.h @@ -0,0 +1,195 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ +#ifndef _GSI_H_ +#define _GSI_H_ + +#include +#include + +#define GSI_RING_ELEMENT_SIZE 16 /* bytes (channel or event ring) */ + +/** + * enum gsi_xfer_flag - Transfer element flag values. + * @GSI_XFER_FLAG_CHAIN: Not the last element in a transaction. + * @GSI_XFER_FLAG_EOB: Generate event interrupt when complete. + * @GSI_XFER_FLAG_EOT: Interrupt on end of transfer condition. + * @GSI_XFER_FLAG_BEI: Block (do not generate) event interrupt. + * + * Normally an event generated by completion of a transfer will cause + * the AP to be interrupted; the BEI flag prevents that. + */ +enum gsi_xfer_flag { + GSI_XFER_FLAG_CHAIN = BIT(1), + GSI_XFER_FLAG_EOB = BIT(2), + GSI_XFER_FLAG_EOT = BIT(3), + GSI_XFER_FLAG_BEI = BIT(4), +}; + +/** + * enum gsi_xfer_elem_type - Transfer element type. + * @GSI_XFER_ELEM_DATA: Element represents a data transfer. + * @GSI_XFER_ELEM_IMME_CMD: Element contains an immediate command. + * @GSI_XFER_ELEM_NOP: Element contains a no-op command. + */ +enum gsi_xfer_elem_type { + GSI_XFER_ELEM_DATA, + GSI_XFER_ELEM_IMME_CMD, + GSI_XFER_ELEM_NOP, +}; + +/** + * struct gsi_xfer_elem - Description of a single transfer. + * @addr: Physical address of a buffer for data or immediate commands.
+ * @len_opcode: Length of the data buffer, or enum ipahal_imm_cmd opcode + * @flags: Flags for the transfer + * @type: Command type (immediate command, data transfer, or NOP) + * @user_data: Data maintained for (but unused by) the transfer element. + */ +struct gsi_xfer_elem { + u64 addr; + u16 len_opcode; + enum gsi_xfer_flag flags; + enum gsi_xfer_elem_type type; + void *user_data; +}; + +struct gsi; + +/** + * gsi_init() - Initialize GSI subsystem + * @pdev: IPA platform device, to look up resources + * + * This stage of initialization can occur before the GSI firmware + * has been loaded. + * + * Return: GSI pointer to provide to other GSI functions. + */ +struct gsi *gsi_init(struct platform_device *pdev); + +/** + * gsi_device_init() - Initialize a GSI device + * @gsi: GSI pointer returned by gsi_init() + * + * Initialize a GSI device. + * + * @Return: 0 if successful or a negative error code otherwise. + */ +int gsi_device_init(struct gsi *gsi); + +/** + * gsi_device_exit() - De-initialize a GSI device + * @gsi: GSI pointer returned by gsi_init() + * + * This is the inverse of gsi_device_init(). + */ +void gsi_device_exit(struct gsi *gsi); + +/** + * gsi_channel_alloc() - Allocate a GSI channel + * @gsi: GSI pointer returned by gsi_init() + * @channel_id: Channel to allocate + * @channel_count: Number of transfer element slots in the channel + * @from_ipa: Direction of data transfer (true: IPA->AP; false: AP->IPA) + * @priority: Whether this channel will be given priority + * @evt_ring_mult: Factor to use to get the number of elements in the + * event ring associated with this channel + * @moderation: Whether interrupt moderation should be enabled + * @notify_data: Pointer value to supply with notifications that + * occur because of events on this channel + * + * @Return: 0 if successful, or a negative error code. + */ +int gsi_channel_alloc(struct gsi *gsi, u32 channel_id, u32 channel_count, + bool from_ipa, bool priority, u32 evt_ring_mult, + bool moderation, void *notify_data); + +/** + * gsi_channel_scratch_write() - Write channel scratch area + * @gsi: GSI pointer returned by gsi_init() + * @channel_id: Channel whose scratch area should be written + * @tlv_count: The number of type-length-value (TLV) entries the channel uses + */ +void gsi_channel_scratch_write(struct gsi *gsi, u32 channel_id, u32 tlv_count); + +/** + * gsi_channel_start() - Make a channel operational + * @gsi: GSI pointer returned by gsi_init() + * @channel_id: Channel to start + * + * @Return: 0 if successful, or a negative error code. + */ +int gsi_channel_start(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_stop() - Stop an operational channel + * @gsi: GSI pointer returned by gsi_init() + * @channel_id: Channel to stop + * + * @Return: 0 if successful, or a negative error code. + */ +int gsi_channel_stop(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_reset() - Reset a channel, to recover from error state + * @gsi: GSI pointer returned by gsi_init() + * @channel_id: Channel to be reset + * + * @Return: 0 if successful, or a negative error code.
+ */ +int gsi_channel_reset(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_free() - Release a previously-allocated channel + * @gsi: GSI pointer returned by gsi_init() + * @channel_id: Channel to be freed + */ +void gsi_channel_free(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_config() - Configure a channel + * @gsi: GSI pointer returned by gsi_init() + * @channel_id: Channel to be configured + * @doorbell_enable: Whether to enable hardware doorbell engine + */ +void gsi_channel_config(struct gsi *gsi, u32 channel_id, bool doorbell_enable); + +/** + * gsi_channel_poll() - Poll for a single completion on a channel + * @gsi: GSI pointer returned by gsi_init() + * @channel_id: Channel to be polled + * + * @Return: Byte transfer count if successful, or a negative error code + */ +int gsi_channel_poll(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_intr_enable() - Enable interrupts on a channel + * @gsi: GSI pointer returned by gsi_init() + * @channel_id: Channel whose interrupts should be enabled + */ +void gsi_channel_intr_enable(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_intr_disable() - Disable interrupts on a channel + * @gsi: GSI pointer returned by gsi_init() + * @channel_id: Channel whose interrupts should be disabled + */ +void gsi_channel_intr_disable(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_queue() - Queue transfer requests on a channel + * @gsi: GSI pointer returned by gsi_init() + * @channel_id: Channel on which transfers should be queued + * @num_xfers: Number of transfer descriptors in the @xfer array + * @xfer: Array of transfer descriptors + * @ring_db: Whether to tell the hardware about these queued transfers + * + * @Return: 0 if successful, or a negative error code + */ +int gsi_channel_queue(struct gsi *gsi, u32 channel_id, u16 num_xfers, + struct gsi_xfer_elem *xfer, bool ring_db); + +#endif /* _GSI_H_ */ diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h new file mode 100644 index 000000000000..fe5f98ef3840 --- /dev/null +++ b/drivers/net/ipa/gsi_reg.h @@ -0,0 +1,563 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ +#ifndef __GSI_REG_H__ +#define __GSI_REG_H__ + +/* The maximum allowed value of "n" for any N-parameterized macro below + * is 3. The N value comes from the ipa_ees enumerated type. + * + * For GSI_INST_RAM_I_OFFS(), the "i" value supplied is an instruction + * offset (where each instruction is 32 bits wide). The maximum offset + * value is 4095. + * + * Macros parameterized by (data) channel number supply a parameter "c". + * The maximum value of "c" is 30 (but the limit is hardware-dependent). + * + * Macros parameterized by event channel number supply a parameter "e". + * The maximum value of "e" is 15 (but the limit is hardware-dependent). + * + * For any K-parameterized macros, the "k" value will represent either an + * event ring id or a (data) channel id. 15 is the maximum value of + * "k" for event rings; otherwise the maximum is 30. 
+ */ +#define GSI_CFG_OFFS 0x00000000 +#define GSI_ENABLE_FMASK 0x00000001 +#define MCS_ENABLE_FMASK 0x00000002 +#define DOUBLE_MCS_CLK_FREQ_FMASK 0x00000004 +#define UC_IS_MCS_FMASK 0x00000008 +#define PWR_CLPS_FMASK 0x00000010 +#define BP_MTRIX_DISABLE_FMASK 0x00000020 + +#define GSI_MCS_CFG_OFFS 0x0000b000 +#define MCS_CFG_ENABLE_FMASK 0x00000001 + +#define GSI_PERIPH_BASE_ADDR_LSB_OFFS 0x00000018 + +#define GSI_PERIPH_BASE_ADDR_MSB_OFFS 0x0000001c + +#define GSI_IC_DISABLE_CHNL_BCK_PRS_LSB_OFFS 0x000000a0 +#define CHNL_REE_INT_FMASK 0x00000007 +#define CHNL_EV_ENG_INT_FMASK 0x00000040 +#define CHNL_INT_END_INT_FMASK 0x00001000 +#define CHNL_CSR_INT_FMASK 0x00fc0000 +#define CHNL_TLV_INT_FMASK 0x3f000000 + +#define GSI_IC_DISABLE_CHNL_BCK_PRS_MSB_OFFS 0x000000a4 +#define CHNL_TIMER_INT_FMASK 0x00000001 +#define CHNL_DB_ENG_INT_FMASK 0x00000040 +#define CHNL_RD_WR_INT_FMASK 0x00003000 +#define CHNL_UCONTROLLER_INT_FMASK 0x00fc0000 + +#define GSI_IC_GEN_EVNT_BCK_PRS_LSB_OFFS 0x000000a8 +#define EVT_REE_INT_FMASK 0x00000007 +#define EVT_EV_ENG_INT_FMASK 0x00000040 +#define EVT_INT_END_INT_FMASK 0x00001000 +#define EVT_CSR_INT_FMASK 0x00fc0000 +#define EVT_TLV_INT_FMASK 0x3f000000 + +#define GSI_IC_GEN_EVNT_BCK_PRS_MSB_OFFS 0x000000ac +#define EVT_TIMER_INT_FMASK 0x00000001 +#define EVT_DB_ENG_INT_FMASK 0x00000040 +#define EVT_RD_WR_INT_FMASK 0x00003000 +#define EVT_UCONTROLLER_INT_FMASK 0x00fc0000 + +#define GSI_IC_GEN_INT_BCK_PRS_LSB_OFFS 0x000000b0 +#define INT_REE_INT_FMASK 0x00000007 +#define INT_EV_ENG_INT_FMASK 0x00000040 +#define INT_INT_END_INT_FMASK 0x00001000 +#define INT_CSR_INT_FMASK 0x00fc0000 +#define INT_TLV_INT_FMASK 0x3f000000 + +#define GSI_IC_GEN_INT_BCK_PRS_MSB_OFFS 0x000000b4 +#define INT_TIMER_INT_FMASK 0x00000001 +#define INT_DB_ENG_INT_FMASK 0x00000040 +#define INT_RD_WR_INT_FMASK 0x00003000 +#define INT_UCONTROLLER_INT_FMASK 0x00fc0000 + +#define GSI_IC_STOP_INT_MOD_BCK_PRS_LSB_OFFS 0x000000b8 +#define REE_INT_FMASK 0x00000007 +#define EV_ENG_INT_FMASK 0x00000040 +#define INT_END_INT_FMASK 0x00001000 +#define CSR_INT_FMASK 0x00fc0000 +#define TLV_INT_FMASK 0x3f000000 + +#define GSI_IC_STOP_INT_MOD_BCK_PRS_MSB_OFFS 0x000000bc +#define TIMER_INT_FMASK 0x00000001 +#define DB_ENG_INT_FMASK 0x00000040 +#define RD_WR_INT_FMASK 0x00003000 +#define UCONTROLLER_INT_FMASK 0x00fc0000 + +#define GSI_IC_PROCESS_DESC_BCK_PRS_LSB_OFFS 0x000000c0 +#define DESC_REE_INT_FMASK 0x00000007 +#define DESC_EV_ENG_INT_FMASK 0x00000040 +#define DESC_INT_END_INT_FMASK 0x00001000 +#define DESC_CSR_INT_FMASK 0x00fc0000 +#define DESC_TLV_INT_FMASK 0x3f000000 + +#define GSI_IC_PROCESS_DESC_BCK_PRS_MSB_OFFS 0x000000c4 +#define DESC_TIMER_INT_FMASK 0x00000001 +#define DESC_DB_ENG_INT_FMASK 0x00000040 +#define DESC_RD_WR_INT_FMASK 0x00003000 +#define DESC_UCONTROLLER_INT_FMASK 0x00fc0000 + +#define GSI_IC_TLV_STOP_BCK_PRS_LSB_OFFS 0x000000c8 +#define STOP_REE_INT_FMASK 0x00000007 +#define STOP_EV_ENG_INT_FMASK 0x00000040 +#define STOP_INT_END_INT_FMASK 0x00001000 +#define STOP_CSR_INT_FMASK 0x00fc0000 +#define STOP_TLV_INT_FMASK 0x3f000000 + +#define GSI_IC_TLV_STOP_BCK_PRS_MSB_OFFS 0x000000cc +#define STOP_TIMER_INT_FMASK 0x00000001 +#define STOP_DB_ENG_INT_FMASK 0x00000040 +#define STOP_RD_WR_INT_FMASK 0x00003000 +#define STOP_UCONTROLLER_INT_FMASK 0x00fc0000 + +#define GSI_IC_TLV_RESET_BCK_PRS_LSB_OFFS 0x000000d0 +#define RST_REE_INT_FMASK 0x00000007 +#define RST_EV_ENG_INT_FMASK 0x00000040 +#define RST_INT_END_INT_FMASK 0x00001000 +#define RST_CSR_INT_FMASK 0x00fc0000 +#define RST_TLV_INT_FMASK 
0x3f000000 + +#define GSI_IC_TLV_RESET_BCK_PRS_MSB_OFFS 0x000000d4 +#define RST_TIMER_INT_FMASK 0x00000001 +#define RST_DB_ENG_INT_FMASK 0x00000040 +#define RST_RD_WR_INT_FMASK 0x00003000 +#define RST_UCONTROLLER_INT_FMASK 0x00fc0000 + +#define GSI_IC_RGSTR_TIMER_BCK_PRS_LSB_OFFS 0x000000d8 +#define TMR_REE_INT_FMASK 0x00000007 +#define TMR_EV_ENG_INT_FMASK 0x00000040 +#define TMR_INT_END_INT_FMASK 0x00001000 +#define TMR_CSR_INT_FMASK 0x00fc0000 +#define TMR_TLV_INT_FMASK 0x3f000000 + +#define GSI_IC_RGSTR_TIMER_BCK_PRS_MSB_OFFS 0x000000dc +#define TMR_TIMER_INT_FMASK 0x00000001 +#define TMR_DB_ENG_INT_FMASK 0x00000040 +#define TMR_RD_WR_INT_FMASK 0x00003000 +#define TMR_UCONTROLLER_INT_FMASK 0x00fc0000 + +#define GSI_IC_READ_BCK_PRS_LSB_OFFS 0x000000e0 +#define RD_REE_INT_FMASK 0x00000007 +#define RD_EV_ENG_INT_FMASK 0x00000040 +#define RD_INT_END_INT_FMASK 0x00001000 +#define RD_CSR_INT_FMASK 0x00fc0000 +#define RD_TLV_INT_FMASK 0x3f000000 + +#define GSI_IC_READ_BCK_PRS_MSB_OFFS 0x000000e4 +#define RD_TIMER_INT_FMASK 0x00000001 +#define RD_DB_ENG_INT_FMASK 0x00000040 +#define RD_RD_WR_INT_FMASK 0x00003000 +#define RD_UCONTROLLER_INT_FMASK 0x00fc0000 + +#define GSI_IC_WRITE_BCK_PRS_LSB_OFFS 0x000000e8 +#define WR_REE_INT_FMASK 0x00000007 +#define WR_EV_ENG_INT_FMASK 0x00000040 +#define WR_INT_END_INT_FMASK 0x00001000 +#define WR_CSR_INT_FMASK 0x00fc0000 +#define WR_TLV_INT_FMASK 0x3f000000 + +#define GSI_IC_WRITE_BCK_PRS_MSB_OFFS 0x000000ec +#define WR_TIMER_INT_FMASK 0x00000001 +#define WR_DB_ENG_INT_FMASK 0x00000040 +#define WR_RD_WR_INT_FMASK 0x00003000 +#define WR_UCONTROLLER_INT_FMASK 0x00fc0000 + +#define GSI_IC_UCONTROLLER_GPR_BCK_PRS_LSB_OFFS 0x000000f0 +#define UC_REE_INT_FMASK 0x00000007 +#define UC_EV_ENG_INT_FMASK 0x00000040 +#define UC_INT_END_INT_FMASK 0x00001000 +#define UC_CSR_INT_FMASK 0x00fc0000 +#define UC_TLV_INT_FMASK 0x3f000000 + +#define GSI_IC_UCONTROLLER_GPR_BCK_PRS_MSB_OFFS 0x000000f4 +#define UC_TIMER_INT_FMASK 0x00000001 +#define UC_DB_ENG_INT_FMASK 0x00000040 +#define UC_RD_WR_INT_FMASK 0x00003000 +#define UC_UCONTROLLER_INT_FMASK 0x00fc0000 + +#define GSI_IRAM_PTR_CH_CMD_OFFS 0x00000400 +#define CMD_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_EE_GENERIC_CMD_OFFS 0x00000404 +#define EE_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_CH_DB_OFFS 0x00000418 +#define CH_DB_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_EV_DB_OFFS 0x0000041c +#define EV_DB_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_NEW_RE_OFFS 0x00000420 +#define NEW_RE_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_CH_DIS_COMP_OFFS 0x00000424 +#define DIS_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_CH_EMPTY_OFFS 0x00000428 +#define EMPTY_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_EVENT_GEN_COMP_OFFS 0x0000042c +#define EVT_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_PERIPH_IF_TLV_IN_0_OFFS 0x00000430 +#define IN_0_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_PERIPH_IF_TLV_IN_2_OFFS 0x00000434 +#define IN_2_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_PERIPH_IF_TLV_IN_1_OFFS 0x00000438 +#define IN_1_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_TIMER_EXPIRED_OFFS 0x0000043c +#define TMR_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_WRITE_ENG_COMP_OFFS 0x00000440 +#define WR_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_READ_ENG_COMP_OFFS 0x00000444 +#define RD_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_UC_GP_INT_OFFS 0x00000448 +#define UC_IRAM_PTR_FMASK 0x00000fff + +#define GSI_IRAM_PTR_INT_MOD_STOPPED_OFFS 0x0000044c +#define 
STOP_IRAM_PTR_FMASK 0x00000fff + +/* Max value of I for the GSI_INST_RAM_I_OFFS() is 4095 */ +#define GSI_INST_RAM_I_OFFS(i) (0x00004000 + 0x0004 * (i)) +#define INST_BYTE_0_FMASK 0x000000ff +#define INST_BYTE_1_FMASK 0x0000ff00 +#define INST_BYTE_2_FMASK 0x00ff0000 +#define INST_BYTE_3_FMASK 0xff000000 + +#define GSI_CH_C_CNTXT_0_OFFS(c) \ + GSI_EE_N_CH_C_CNTXT_0_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_0_OFFS(c, n) \ + (0x0001c000 + 0x4000 * (n) + 0x80 * (c)) +#define CHTYPE_PROTOCOL_FMASK 0x00000007 +#define CHTYPE_DIR_FMASK 0x00000008 +#define EE_FMASK 0x000000f0 +#define CHID_FMASK 0x00001f00 +#define ERINDEX_FMASK 0x0007c000 +#define CHSTATE_FMASK 0x00f00000 +#define ELEMENT_SIZE_FMASK 0xff000000 + +#define GSI_CH_C_CNTXT_1_OFFS(c) \ + GSI_EE_N_CH_C_CNTXT_1_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_1_OFFS(c, n) \ + (0x0001c004 + 0x4000 * (n) + 0x80 * (c)) +#define R_LENGTH_FMASK 0x0000ffff + +#define GSI_CH_C_CNTXT_2_OFFS(c) \ + GSI_EE_N_CH_C_CNTXT_2_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_2_OFFS(c, n) \ + (0x0001c008 + 0x4000 * (n) + 0x80 * (c)) + +#define GSI_CH_C_CNTXT_3_OFFS(c) \ + GSI_EE_N_CH_C_CNTXT_3_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_3_OFFS(c, n) \ + (0x0001c00c + 0x4000 * (n) + 0x80 * (c)) + +#define GSI_CH_C_CNTXT_4_OFFS(c) \ + GSI_EE_N_CH_C_CNTXT_4_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_4_OFFS(c, n) \ + (0x0001c010 + 0x4000 * (n) + 0x80 * (c)) + +#define GSI_CH_C_CNTXT_6_OFFS(c) \ + GSI_EE_N_CH_C_CNTXT_6_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_6_OFFS(c, n) \ + (0x0001c018 + 0x4000 * (n) + 0x80 * (c)) + +#define GSI_CH_C_QOS_OFFS(c) GSI_EE_N_CH_C_QOS_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_QOS_OFFS(c, n) (0x0001c05c + 0x4000 * (n) + 0x80 * (c)) +#define WRR_WEIGHT_FMASK 0x0000000f +#define MAX_PREFETCH_FMASK 0x00000100 +#define USE_DB_ENG_FMASK 0x00000200 + +#define GSI_CH_C_SCRATCH_0_OFFS(c) \ + GSI_EE_N_CH_C_SCRATCH_0_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_0_OFFS(c, n) \ + (0x0001c060 + 0x4000 * (n) + 0x80 * (c)) + +#define GSI_CH_C_SCRATCH_1_OFFS(c) \ + GSI_EE_N_CH_C_SCRATCH_1_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_1_OFFS(c, n) \ + (0x0001c064 + 0x4000 * (n) + 0x80 * (c)) + +#define GSI_CH_C_SCRATCH_2_OFFS(c) \ + GSI_EE_N_CH_C_SCRATCH_2_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_2_OFFS(c, n) \ + (0x0001c068 + 0x4000 * (n) + 0x80 * (c)) + +#define GSI_CH_C_SCRATCH_3_OFFS(c) \ + GSI_EE_N_CH_C_SCRATCH_3_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_3_OFFS(c, n) \ + (0x0001c06c + 0x4000 * (n) + 0x80 * (c)) + +#define GSI_EV_CH_E_CNTXT_0_OFFS(e) \ + GSI_EE_N_EV_CH_E_CNTXT_0_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_0_OFFS(e, n) \ + (0x0001d000 + 0x4000 * (n) + 0x80 * (e)) +#define EV_CHTYPE_FMASK 0x0000000f +#define EV_EE_FMASK 0x000000f0 +#define EV_EVCHID_FMASK 0x0000ff00 +#define EV_INTYPE_FMASK 0x00010000 +#define EV_CHSTATE_FMASK 0x00f00000 +#define EV_ELEMENT_SIZE_FMASK 0xff000000 + +#define GSI_EV_CH_E_CNTXT_1_OFFS(e) \ + GSI_EE_N_EV_CH_E_CNTXT_1_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_1_OFFS(e, n) \ + (0x0001d004 + 0x4000 * (n) + 0x80 * (e)) +#define EV_R_LENGTH_FMASK 0x0000ffff + +#define GSI_EV_CH_E_CNTXT_2_OFFS(e) \ + GSI_EE_N_EV_CH_E_CNTXT_2_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_2_OFFS(e, n) \ + (0x0001d008 + 0x4000 * (n) + 0x80 * (e)) + +#define GSI_EV_CH_E_CNTXT_3_OFFS(e) \ + GSI_EE_N_EV_CH_E_CNTXT_3_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_3_OFFS(e, n) \ + (0x0001d00c + 0x4000 * (n) + 0x80 * (e)) + +#define GSI_EV_CH_E_CNTXT_4_OFFS(e) 
\ + GSI_EE_N_EV_CH_E_CNTXT_4_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_4_OFFS(e, n) \ + (0x0001d010 + 0x4000 * (n) + 0x80 * (e)) + +#define GSI_EV_CH_E_CNTXT_8_OFFS(e) \ + GSI_EE_N_EV_CH_E_CNTXT_8_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_8_OFFS(e, n) \ + (0x0001d020 + 0x4000 * (n) + 0x80 * (e)) +#define MODT_FMASK 0x0000ffff +#define MODC_FMASK 0x00ff0000 +#define MOD_CNT_FMASK 0xff000000 + +#define GSI_EV_CH_E_CNTXT_9_OFFS(e) \ + GSI_EE_N_EV_CH_E_CNTXT_9_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_9_OFFS(e, n) \ + (0x0001d024 + 0x4000 * (n) + 0x80 * (e)) + +#define GSI_EV_CH_E_CNTXT_10_OFFS(e) \ + GSI_EE_N_EV_CH_E_CNTXT_10_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_10_OFFS(e, n) \ + (0x0001d028 + 0x4000 * (n) + 0x80 * (e)) + +#define GSI_EV_CH_E_CNTXT_11_OFFS(e) \ + GSI_EE_N_EV_CH_E_CNTXT_11_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_11_OFFS(e, n) \ + (0x0001d02c + 0x4000 * (n) + 0x80 * (e)) + +#define GSI_EV_CH_E_CNTXT_12_OFFS(e) \ + GSI_EE_N_EV_CH_E_CNTXT_12_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_12_OFFS(e, n) \ + (0x0001d030 + 0x4000 * (n) + 0x80 * (e)) + +#define GSI_EV_CH_E_CNTXT_13_OFFS(e) \ + GSI_EE_N_EV_CH_E_CNTXT_13_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_13_OFFS(e, n) \ + (0x0001d034 + 0x4000 * (n) + 0x80 * (e)) + +#define GSI_EV_CH_E_SCRATCH_0_OFFS(e) \ + GSI_EE_N_EV_CH_E_SCRATCH_0_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_SCRATCH_0_OFFS(e, n) \ + (0x0001d048 + 0x4000 * (n) + 0x80 * (e)) + +#define GSI_EV_CH_E_SCRATCH_1_OFFS(e) \ + GSI_EE_N_EV_CH_E_SCRATCH_1_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_SCRATCH_1_OFFS(e, n) \ + (0x0001d04c + 0x4000 * (n) + 0x80 * (e)) + +#define GSI_CH_C_DOORBELL_0_OFFS(c) \ + GSI_EE_N_CH_C_DOORBELL_0_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_DOORBELL_0_OFFS(c, n) \ + (0x0001e000 + 0x4000 * (n) + 0x08 * (c)) + +#define GSI_CH_C_DOORBELL_1_OFFS(c) \ + GSI_EE_N_CH_C_DOORBELL_1_OFFS(c, IPA_EE_AP) +#define GSI_EE_N_CH_C_DOORBELL_1_OFFS(c, n) \ + (0x0001e004 + 0x4000 * (n) + 0x08 * (c)) + +#define GSI_EV_CH_E_DOORBELL_0_OFFS(e) \ + GSI_EE_N_EV_CH_E_DOORBELL_0_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_DOORBELL_0_OFFS(e, n) \ + (0x0001e100 + 0x4000 * (n) + 0x08 * (e)) + +#define GSI_EV_CH_E_DOORBELL_1_OFFS(e) \ + GSI_EE_N_EV_CH_E_DOORBELL_1_OFFS(e, IPA_EE_AP) +#define GSI_EE_N_EV_CH_E_DOORBELL_1_OFFS(e, n) \ + (0x0001e104 + 0x4000 * (n) + 0x08 * (e)) + +#define GSI_GSI_STATUS_OFFS GSI_EE_N_GSI_STATUS_OFFS(IPA_EE_AP) +#define GSI_EE_N_GSI_STATUS_OFFS(n) (0x0001f000 + 0x4000 * (n)) +#define ENABLED_FMASK 0x00000001 + +#define GSI_CH_CMD_OFFS GSI_EE_N_CH_CMD_OFFS(IPA_EE_AP) +#define GSI_EE_N_CH_CMD_OFFS(n) (0x0001f008 + 0x4000 * (n)) +#define CH_CHID_FMASK 0x000000ff +#define CH_OPCODE_FMASK 0xff000000 + +#define GSI_EV_CH_CMD_OFFS GSI_EE_N_EV_CH_CMD_OFFS(IPA_EE_AP) +#define GSI_EE_N_EV_CH_CMD_OFFS(n) (0x0001f010 + 0x4000 * (n)) +#define EV_CHID_FMASK 0x000000ff +#define EV_OPCODE_FMASK 0xff000000 + +#define GSI_GSI_HW_PARAM_2_OFFS GSI_EE_N_GSI_HW_PARAM_2_OFFS(IPA_EE_AP) +#define GSI_EE_N_GSI_HW_PARAM_2_OFFS(n) (0x0001f040 + 0x4000 * (n)) +#define IRAM_SIZE_FMASK 0x00000007 +#define NUM_CH_PER_EE_FMASK 0x000000f8 +#define NUM_EV_PER_EE_FMASK 0x00001f00 +#define GSI_CH_PEND_TRANSLATE_FMASK 0x00002000 +#define GSI_CH_FULL_LOGIC_FMASK 0x00004000 +#define IRAM_SIZE_ONE_KB_FVAL 0 +#define IRAM_SIZE_TWO_KB_FVAL 1 + +#define GSI_GSI_SW_VERSION_OFFS GSI_EE_N_GSI_SW_VERSION_OFFS(IPA_EE_AP) +#define GSI_EE_N_GSI_SW_VERSION_OFFS(n) (0x0001f044 + 0x4000 * (n)) +#define STEP_FMASK 0x0000ffff 
+#define MINOR_FMASK 0x0fff0000 +#define MAJOR_FMASK 0xf0000000 + +#define GSI_GSI_MCS_CODE_VER_OFFS \ + GSI_EE_N_GSI_MCS_CODE_VER_OFFS(IPA_EE_AP) +#define GSI_EE_N_GSI_MCS_CODE_VER_OFFS(n) (0x0001f048 + 0x4000 * (n)) + +#define GSI_CNTXT_TYPE_IRQ_OFFS GSI_EE_N_CNTXT_TYPE_IRQ_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_TYPE_IRQ_OFFS(n) (0x0001f080 + 0x4000 * (n)) +#define CH_CTRL_FMASK 0x00000001 +#define EV_CTRL_FMASK 0x00000002 +#define GLOB_EE_FMASK 0x00000004 +#define IEOB_FMASK 0x00000008 +#define INTER_EE_CH_CTRL_FMASK 0x00000010 +#define INTER_EE_EV_CTRL_FMASK 0x00000020 +#define GENERAL_FMASK 0x00000040 + +#define GSI_CNTXT_TYPE_IRQ_MSK_OFFS \ + GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFS(n) (0x0001f088 + 0x4000 * (n)) +#define MSK_CH_CTRL_FMASK 0x00000001 +#define MSK_EV_CTRL_FMASK 0x00000002 +#define MSK_GLOB_EE_FMASK 0x00000004 +#define MSK_IEOB_FMASK 0x00000008 +#define MSK_INTER_EE_CH_CTRL_FMASK 0x00000010 +#define MSK_INTER_EE_EV_CTRL_FMASK 0x00000020 +#define MSK_GENERAL_FMASK 0x00000040 + +#define GSI_CNTXT_SRC_CH_IRQ_OFFS \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFS(n) (0x0001f090 + 0x4000 * (n)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_OFFS \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFS(n) (0x0001f094 + 0x4000 * (n)) + +#define GSI_CNTXT_SRC_CH_IRQ_MSK_OFFS \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFS(n) (0x0001f098 + 0x4000 * (n)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFS \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFS(n) (0x0001f09c + 0x4000 * (n)) + +#define GSI_CNTXT_SRC_CH_IRQ_CLR_OFFS \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFS(n) (0x0001f0a0 + 0x4000 * (n)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFS \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFS(n) (0x0001f0a4 + 0x4000 * (n)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_OFFS \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFS(n) (0x0001f0b0 + 0x4000 * (n)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFS \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFS(n) (0x0001f0b8 + 0x4000 * (n)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFS \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFS(n) (0x0001f0c0 + 0x4000 * (n)) + +#define GSI_CNTXT_GLOB_IRQ_STTS_OFFS \ + GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFS(n) (0x0001f100 + 0x4000 * (n)) +#define ERROR_INT_FMASK 0x00000001 +#define GP_INT1_FMASK 0x00000002 +#define GP_INT2_FMASK 0x00000004 +#define GP_INT3_FMASK 0x00000008 + +#define GSI_CNTXT_GLOB_IRQ_EN_OFFS \ + GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFS(n) (0x0001f108 + 0x4000 * (n)) +#define EN_ERROR_INT_FMASK 0x00000001 +#define EN_GP_INT1_FMASK 0x00000002 +#define EN_GP_INT2_FMASK 0x00000004 +#define EN_GP_INT3_FMASK 0x00000008 + +#define GSI_CNTXT_GLOB_IRQ_CLR_OFFS \ + GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFS(n) (0x0001f110 + 0x4000 * (n)) +#define CLR_ERROR_INT_FMASK 0x00000001 +#define CLR_GP_INT1_FMASK 0x00000002 +#define CLR_GP_INT2_FMASK 0x00000004 +#define CLR_GP_INT3_FMASK 0x00000008 + +#define GSI_CNTXT_GSI_IRQ_STTS_OFFS \ + 
GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFS(n) (0x0001f118 + 0x4000 * (n)) +#define BREAK_POINT_FMASK 0x00000001 +#define BUS_ERROR_FMASK 0x00000002 +#define CMD_FIFO_OVRFLOW_FMASK 0x00000004 +#define MCS_STACK_OVRFLOW_FMASK 0x00000008 + +#define GSI_CNTXT_GSI_IRQ_EN_OFFS \ + GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFS(n) (0x0001f120 + 0x4000 * (n)) +#define EN_BREAK_POINT_FMASK 0x00000001 +#define EN_BUS_ERROR_FMASK 0x00000002 +#define EN_CMD_FIFO_OVRFLOW_FMASK 0x00000004 +#define EN_MCS_STACK_OVRFLOW_FMASK 0x00000008 + +#define GSI_CNTXT_GSI_IRQ_CLR_OFFS \ + GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFS(IPA_EE_AP) +#define GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFS(n) (0x0001f128 + 0x4000 * (n)) +#define CLR_BREAK_POINT_FMASK 0x00000001 +#define CLR_BUS_ERROR_FMASK 0x00000002 +#define CLR_CMD_FIFO_OVRFLOW_FMASK 0x00000004 +#define CLR_MCS_STACK_OVRFLOW_FMASK 0x00000008 + +#define GSI_EE_N_CNTXT_INTSET_OFFS(n) (0x0001f180 + 0x4000 * (n)) +#define INTYPE_FMASK 0x00000001 +#define GSI_CNTXT_INTSET_OFFS GSI_EE_N_CNTXT_INTSET_OFFS(IPA_EE_AP) + +#define GSI_ERROR_LOG_OFFS GSI_EE_N_ERROR_LOG_OFFS(IPA_EE_AP) +#define GSI_EE_N_ERROR_LOG_OFFS(n) (0x0001f200 + 0x4000 * (n)) + +#define GSI_ERROR_LOG_CLR_OFFS GSI_EE_N_ERROR_LOG_CLR_OFFS(IPA_EE_AP) +#define GSI_EE_N_ERROR_LOG_CLR_OFFS(n) (0x0001f210 + 0x4000 * (n)) + +#define GSI_INTER_EE_SRC_CH_IRQ_OFFS \ + GSI_INTER_EE_N_SRC_CH_IRQ_OFFS(IPA_EE_AP) +#define GSI_INTER_EE_N_SRC_CH_IRQ_OFFS(n) (0x0000c018 + 0x1000 * (n)) + +#define GSI_INTER_EE_SRC_EV_CH_IRQ_OFFS \ + GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFS(IPA_EE_AP) +#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFS(n) (0x0000c01c + 0x1000 * (n)) + +#define GSI_INTER_EE_SRC_CH_IRQ_CLR_OFFS \ + GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFS(IPA_EE_AP) +#define GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFS(n) (0x0000c028 + 0x1000 * (n)) + +#define GSI_INTER_EE_SRC_EV_CH_IRQ_CLR_OFFS \ + GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFS(IPA_EE_AP) +#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFS(n) (0x0000c02c + 0x1000 * (n)) + +#endif /* __GSI_REG_H__ */ From patchwork Wed Nov 7 00:32:45 2018 X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 150360
From: Alex Elder To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org Cc: netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, syadagir@codeaurora.org, mjavid@codeaurora.org, robh+dt@kernel.org, mark.rutland@arm.com Subject: [RFC PATCH 07/12] soc: qcom: ipa: IPA register abstraction Date: Tue,
6 Nov 2018 18:32:45 -0600 Message-Id: <20181107003250.5832-8-elder@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20181107003250.5832-1-elder@linaro.org> References: <20181107003250.5832-1-elder@linaro.org> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org (Much of the following is copied from text in "ipa_reg.c". Please see that for the more complete explanation.) The IPA code abstracts the details of its 32-bit registers, allowing access to them to be done generically. The original motivation for this was that the field width and/or position for values stored in some registers differed for different versions of IPA hardware. Abstracting access this way allows code that uses such registers to be simpler, describing how register fields are used without proliferating special-case code that is dependent on hardware version. Each IPA register has a name, which is one of the values in the "ipa_reg" enumerated type (e.g., IPA_ENABLED_PIPES). The offset (memory address) of the register having a given name is maintained internal to the "ipa_reg" module. Some registers hold one or more fields that are less than 32 bits wide. Each of these registers has a data structure that breaks out those fields into individual (32-bit) values. These field structures allow the register contents to be defined in a hardware independent way. Such registers have a pair of functions associated with them to "construct" (when writing) and "parse" (when reading) the fields found within them, using the register's fields structure. This allows the content of these registers to be read in a generic way. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_reg.c | 972 ++++++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_reg.h | 614 ++++++++++++++++++++++++ 2 files changed, 1586 insertions(+) create mode 100644 drivers/net/ipa/ipa_reg.c create mode 100644 drivers/net/ipa/ipa_reg.h -- 2.17.1 diff --git a/drivers/net/ipa/ipa_reg.c b/drivers/net/ipa/ipa_reg.c new file mode 100644 index 000000000000..5e0aa6163235 --- /dev/null +++ b/drivers/net/ipa/ipa_reg.c @@ -0,0 +1,972 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. 
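+ * + * A minimal usage sketch of the construct/write pattern described in the patch text (illustrative only: the exact ipa_write_reg_n_fields() signature, the IPA_ENDP_INIT_CTRL_N enumerator, and the ep_id endpoint number are assumed here; the helper names follow the descriptor table comments at the end of this file): + * + *	struct ipa_reg_endp_init_ctrl init_ctrl; + * + *	ipa_reg_endp_init_ctrl(&init_ctrl, true);	/- suspend the endpoint + *	ipa_write_reg_n_fields(IPA_ENDP_INIT_CTRL_N, ep_id, &init_ctrl); + * + * ipa_reg_endp_init_ctrl() only fills in the hardware-independent field structure; the construct function registered for the register builds the 32-bit value that is actually written.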
+ */ + +#include +#include +#include + +#include "ipa_reg.h" + +/* I/O remapped base address of IPA register space */ +static void __iomem *ipa_reg_virt; + +/* struct ipa_reg_desc - descriptor for an abstracted hardware register + * + * @construct - fn to construct the register value from its field structure + * @parse - function to parse register field values into its field structure + * @offset - register offset relative to base address + * @n_ofst - size multiplier for "N-parameterized" registers + */ +struct ipa_reg_desc { + u32 (*construct)(enum ipa_reg reg, const void *fields); + void (*parse)(enum ipa_reg reg, void *fields, u32 val); + u32 offset; + u16 n_ofst; +}; + +/* IPA_ROUTE register */ + +void ipa_reg_route(struct ipa_reg_route *route, u32 ep_id) +{ + route->route_dis = 0; + route->route_def_pipe = ep_id; + route->route_def_hdr_table = 1; + route->route_def_hdr_ofst = 0; + route->route_frag_def_pipe = ep_id; + route->route_def_retain_hdr = 1; +} + +#define ROUTE_DIS_FMASK 0x00000001 +#define ROUTE_DEF_PIPE_FMASK 0x0000003e +#define ROUTE_DEF_HDR_TABLE_FMASK 0x00000040 +#define ROUTE_DEF_HDR_OFST_FMASK 0x0001ff80 +#define ROUTE_FRAG_DEF_PIPE_FMASK 0x003e0000 +#define ROUTE_DEF_RETAIN_HDR_FMASK 0x01000000 + +static u32 ipa_reg_construct_route(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_route *route = fields; + u32 val; + + val = FIELD_PREP(ROUTE_DIS_FMASK, route->route_dis); + val |= FIELD_PREP(ROUTE_DEF_PIPE_FMASK, route->route_def_pipe); + val |= FIELD_PREP(ROUTE_DEF_HDR_TABLE_FMASK, + route->route_def_hdr_table); + val |= FIELD_PREP(ROUTE_DEF_HDR_OFST_FMASK, route->route_def_hdr_ofst); + val |= FIELD_PREP(ROUTE_FRAG_DEF_PIPE_FMASK, + route->route_frag_def_pipe); + val |= FIELD_PREP(ROUTE_DEF_RETAIN_HDR_FMASK, + route->route_def_retain_hdr); + + return val; +} + +/* IPA_ENDP_INIT_HDR_N register */ + +static void +ipa_reg_endp_init_hdr_common(struct ipa_reg_endp_init_hdr *init_hdr) +{ + init_hdr->hdr_additional_const_len = 0; /* XXX description? */ + init_hdr->hdr_a5_mux = 0; /* XXX description? */ + init_hdr->hdr_len_inc_deagg_hdr = 0; /* XXX description? */ + init_hdr->hdr_metadata_reg_valid = 0; /* XXX description? 
*/ +} + +void ipa_reg_endp_init_hdr_cons(struct ipa_reg_endp_init_hdr *init_hdr, + u32 header_size, u32 metadata_offset, + u32 length_offset) +{ + init_hdr->hdr_len = header_size; + init_hdr->hdr_ofst_metadata_valid = 1; + init_hdr->hdr_ofst_metadata = metadata_offset; /* XXX ignored */ + init_hdr->hdr_ofst_pkt_size_valid = 1; + init_hdr->hdr_ofst_pkt_size = length_offset; + + ipa_reg_endp_init_hdr_common(init_hdr); +} + +void ipa_reg_endp_init_hdr_prod(struct ipa_reg_endp_init_hdr *init_hdr, + u32 header_size, u32 metadata_offset, + u32 length_offset) +{ + init_hdr->hdr_len = header_size; + init_hdr->hdr_ofst_metadata_valid = 1; + init_hdr->hdr_ofst_metadata = metadata_offset; + init_hdr->hdr_ofst_pkt_size_valid = 1; + init_hdr->hdr_ofst_pkt_size = length_offset; /* XXX ignored */ + + ipa_reg_endp_init_hdr_common(init_hdr); +} + +#define HDR_LEN_FMASK 0x0000003f +#define HDR_OFST_METADATA_VALID_FMASK 0x00000040 +#define HDR_OFST_METADATA_FMASK 0x00001f80 +#define HDR_ADDITIONAL_CONST_LEN_FMASK 0x0007e000 +#define HDR_OFST_PKT_SIZE_VALID_FMASK 0x00080000 +#define HDR_OFST_PKT_SIZE_FMASK 0x03f00000 +#define HDR_A5_MUX_FMASK 0x04000000 +#define HDR_LEN_INC_DEAGG_HDR_FMASK 0x08000000 +#define HDR_METADATA_REG_VALID_FMASK 0x10000000 + +static u32 +ipa_reg_construct_endp_init_hdr_n(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_endp_init_hdr *init_hdr = fields; + u32 val; + + val = FIELD_PREP(HDR_LEN_FMASK, init_hdr->hdr_len); + val |= FIELD_PREP(HDR_OFST_METADATA_VALID_FMASK, + init_hdr->hdr_ofst_metadata_valid); + val |= FIELD_PREP(HDR_OFST_METADATA_FMASK, init_hdr->hdr_ofst_metadata); + val |= FIELD_PREP(HDR_ADDITIONAL_CONST_LEN_FMASK, + init_hdr->hdr_additional_const_len); + val |= FIELD_PREP(HDR_OFST_PKT_SIZE_VALID_FMASK, + init_hdr->hdr_ofst_pkt_size_valid); + val |= FIELD_PREP(HDR_OFST_PKT_SIZE_FMASK, + init_hdr->hdr_ofst_pkt_size); + val |= FIELD_PREP(HDR_A5_MUX_FMASK, init_hdr->hdr_a5_mux); + val |= FIELD_PREP(HDR_LEN_INC_DEAGG_HDR_FMASK, + init_hdr->hdr_len_inc_deagg_hdr); + val |= FIELD_PREP(HDR_METADATA_REG_VALID_FMASK, + init_hdr->hdr_metadata_reg_valid); + + return val; +} + +/* IPA_ENDP_INIT_HDR_EXT_N register */ + +void ipa_reg_endp_init_hdr_ext_common(struct ipa_reg_endp_init_hdr_ext *hdr_ext) +{ + hdr_ext->hdr_endianness = 1; /* big endian */ + hdr_ext->hdr_total_len_or_pad_valid = 1; + hdr_ext->hdr_total_len_or_pad = 0; /* pad */ + hdr_ext->hdr_total_len_or_pad_offset = 0; /* XXX description? */ +} + +void ipa_reg_endp_init_hdr_ext_cons(struct ipa_reg_endp_init_hdr_ext *hdr_ext, + u32 pad_align, bool pad_included) +{ + hdr_ext->hdr_payload_len_inc_padding = pad_included ? 
1 : 0; + hdr_ext->hdr_pad_to_alignment = pad_align; + + ipa_reg_endp_init_hdr_ext_common(hdr_ext); +} + +void ipa_reg_endp_init_hdr_ext_prod(struct ipa_reg_endp_init_hdr_ext *hdr_ext, + u32 pad_align) +{ + hdr_ext->hdr_payload_len_inc_padding = 0; + hdr_ext->hdr_pad_to_alignment = pad_align; /* XXX ignored */ + + ipa_reg_endp_init_hdr_ext_common(hdr_ext); +} + +#define HDR_ENDIANNESS_FMASK 0x00000001 +#define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK 0x00000002 +#define HDR_TOTAL_LEN_OR_PAD_FMASK 0x00000004 +#define HDR_PAYLOAD_LEN_INC_PADDING_FMASK 0x00000008 +#define HDR_TOTAL_LEN_OR_PAD_OFFSET_FMASK 0x000003f0 +#define HDR_PAD_TO_ALIGNMENT_FMASK 0x00003c00 + +static u32 +ipa_reg_construct_endp_init_hdr_ext_n(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_endp_init_hdr_ext *init_hdr_ext = fields; + u32 val; + + /* 0 = little endian; 1 = big endian */ + val = FIELD_PREP(HDR_ENDIANNESS_FMASK, 1); + val |= FIELD_PREP(HDR_TOTAL_LEN_OR_PAD_VALID_FMASK, + init_hdr_ext->hdr_total_len_or_pad_valid); + val |= FIELD_PREP(HDR_TOTAL_LEN_OR_PAD_FMASK, + init_hdr_ext->hdr_total_len_or_pad); + val |= FIELD_PREP(HDR_PAYLOAD_LEN_INC_PADDING_FMASK, + init_hdr_ext->hdr_payload_len_inc_padding); + val |= FIELD_PREP(HDR_TOTAL_LEN_OR_PAD_OFFSET_FMASK, 0); + val |= FIELD_PREP(HDR_PAD_TO_ALIGNMENT_FMASK, + init_hdr_ext->hdr_pad_to_alignment); + + return val; +} + +/* IPA_ENDP_INIT_AGGR_N register */ + +static void +ipa_reg_endp_init_aggr_common(struct ipa_reg_endp_init_aggr *init_aggr) +{ + init_aggr->aggr_force_close = 0; /* XXX description? */ + init_aggr->aggr_hard_byte_limit_en = 0; /* XXX ignored for PROD? */ +} + +void ipa_reg_endp_init_aggr_cons(struct ipa_reg_endp_init_aggr *init_aggr, + u32 byte_limit, u32 packet_limit, + bool close_on_eof) +{ + init_aggr->aggr_en = IPA_ENABLE_AGGR; + init_aggr->aggr_type = IPA_GENERIC; + init_aggr->aggr_byte_limit = byte_limit; + init_aggr->aggr_time_limit = IPA_AGGR_TIME_LIMIT_DEFAULT; + init_aggr->aggr_pkt_limit = packet_limit; + init_aggr->aggr_sw_eof_active = close_on_eof ? 1 : 0; + + ipa_reg_endp_init_aggr_common(init_aggr); +} + +void ipa_reg_endp_init_aggr_prod(struct ipa_reg_endp_init_aggr *init_aggr, + enum ipa_aggr_en aggr_en, + enum ipa_aggr_type aggr_type) +{ + init_aggr->aggr_en = (u32)aggr_en; + init_aggr->aggr_type = aggr_en == IPA_BYPASS_AGGR ? 
0 : (u32)aggr_type; + init_aggr->aggr_byte_limit = 0; /* ignored */ + init_aggr->aggr_time_limit = 0; /* ignored */ + init_aggr->aggr_pkt_limit = 0; /* ignored */ + init_aggr->aggr_sw_eof_active = 0; /* ignored */ + + ipa_reg_endp_init_aggr_common(init_aggr); +} + +#define AGGR_EN_FMASK 0x00000003 +#define AGGR_TYPE_FMASK 0x0000001c +#define AGGR_BYTE_LIMIT_FMASK 0x000003e0 +#define AGGR_TIME_LIMIT_FMASK 0x00007c00 +#define AGGR_PKT_LIMIT_FMASK 0x001f8000 +#define AGGR_SW_EOF_ACTIVE_FMASK 0x00200000 +#define AGGR_FORCE_CLOSE_FMASK 0x00400000 +#define AGGR_HARD_BYTE_LIMIT_ENABLE_FMASK 0x01000000 + +static u32 +ipa_reg_construct_endp_init_aggr_n(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_endp_init_aggr *init_aggr = fields; + u32 val; + + val = FIELD_PREP(AGGR_EN_FMASK, init_aggr->aggr_en); + val |= FIELD_PREP(AGGR_TYPE_FMASK, init_aggr->aggr_type); + val |= FIELD_PREP(AGGR_BYTE_LIMIT_FMASK, init_aggr->aggr_byte_limit); + val |= FIELD_PREP(AGGR_TIME_LIMIT_FMASK, init_aggr->aggr_time_limit); + val |= FIELD_PREP(AGGR_PKT_LIMIT_FMASK, init_aggr->aggr_pkt_limit); + val |= FIELD_PREP(AGGR_SW_EOF_ACTIVE_FMASK, + init_aggr->aggr_sw_eof_active); + val |= FIELD_PREP(AGGR_FORCE_CLOSE_FMASK, init_aggr->aggr_force_close); + val |= FIELD_PREP(AGGR_HARD_BYTE_LIMIT_ENABLE_FMASK, + init_aggr->aggr_hard_byte_limit_en); + + return val; +} + +static void +ipa_reg_parse_endp_init_aggr_n(enum ipa_reg reg, void *fields, u32 val) +{ + struct ipa_reg_endp_init_aggr *init_aggr = fields; + + memset(init_aggr, 0, sizeof(*init_aggr)); + + init_aggr->aggr_en = FIELD_GET(AGGR_EN_FMASK, val); + init_aggr->aggr_type = FIELD_GET(AGGR_TYPE_FMASK, val); + init_aggr->aggr_byte_limit = FIELD_GET(AGGR_BYTE_LIMIT_FMASK, val); + init_aggr->aggr_time_limit = FIELD_GET(AGGR_TIME_LIMIT_FMASK, val); + init_aggr->aggr_pkt_limit = FIELD_GET(AGGR_PKT_LIMIT_FMASK, val); + init_aggr->aggr_sw_eof_active = + FIELD_GET(AGGR_SW_EOF_ACTIVE_FMASK, val); + init_aggr->aggr_force_close = FIELD_GET(AGGR_FORCE_CLOSE_FMASK, val); + init_aggr->aggr_hard_byte_limit_en = + FIELD_GET(AGGR_HARD_BYTE_LIMIT_ENABLE_FMASK, val); +} + +/* IPA_AGGR_FORCE_CLOSE register */ + +void ipa_reg_aggr_force_close(struct ipa_reg_aggr_force_close *force_close, + u32 pipe_bitmap) +{ + force_close->pipe_bitmap = pipe_bitmap; +} + +#define PIPE_BITMAP_FMASK 0x000fffff + +static u32 +ipa_reg_construct_aggr_force_close(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_aggr_force_close *force_close = fields; + + return FIELD_PREP(PIPE_BITMAP_FMASK, force_close->pipe_bitmap); +} + +/* IPA_ENDP_INIT_MODE_N register */ + +static void +ipa_reg_endp_init_mode_common(struct ipa_reg_endp_init_mode *init_mode) +{ + init_mode->byte_threshold = 0; /* XXX description? */ + init_mode->pipe_replication_en = 0; /* XXX description? */ + init_mode->pad_en = 0; /* XXX description? */ + init_mode->hdr_ftch_disable = 0; /* XXX description? */ +} + +/* IPA_ENDP_INIT_MODE is not valid for consumer pipes */ +void ipa_reg_endp_init_mode_cons(struct ipa_reg_endp_init_mode *init_mode) +{ + init_mode->mode = 0; /* ignored */ + init_mode->dest_pipe_index = 0; /* ignored */ + + ipa_reg_endp_init_mode_common(init_mode); +} + +void ipa_reg_endp_init_mode_prod(struct ipa_reg_endp_init_mode *init_mode, + enum ipa_mode mode, u32 dest_endp) +{ + init_mode->mode = mode; + init_mode->dest_pipe_index = mode == IPA_DMA ?
dest_endp : 0; + + ipa_reg_endp_init_mode_common(init_mode); +} + +#define MODE_FMASK 0x00000007 +#define DEST_PIPE_INDEX_FMASK 0x000001f0 +#define BYTE_THRESHOLD_FMASK 0x0ffff000 +#define PIPE_REPLICATION_EN_FMASK 0x10000000 +#define PAD_EN_FMASK 0x20000000 +#define HDR_FTCH_DISABLE_FMASK 0x40000000 + +static u32 +ipa_reg_construct_endp_init_mode_n(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_endp_init_mode *init_mode = fields; + u32 val; + + val = FIELD_PREP(MODE_FMASK, init_mode->mode); + val |= FIELD_PREP(DEST_PIPE_INDEX_FMASK, init_mode->dest_pipe_index); + val |= FIELD_PREP(BYTE_THRESHOLD_FMASK, init_mode->byte_threshold); + val |= FIELD_PREP(PIPE_REPLICATION_EN_FMASK, + init_mode->pipe_replication_en); + val |= FIELD_PREP(PAD_EN_FMASK, init_mode->pad_en); + val |= FIELD_PREP(HDR_FTCH_DISABLE_FMASK, init_mode->hdr_ftch_disable); + + return val; +} + +/* IPA_ENDP_INIT_CTRL_N register */ + +void +ipa_reg_endp_init_ctrl(struct ipa_reg_endp_init_ctrl *init_ctrl, bool suspend) +{ + init_ctrl->endp_suspend = suspend ? 1 : 0; + init_ctrl->endp_delay = 0; +} + +#define ENDP_SUSPEND_FMASK 0x00000001 +#define ENDP_DELAY_FMASK 0x00000002 + +static u32 +ipa_reg_construct_endp_init_ctrl_n(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_endp_init_ctrl *init_ctrl = fields; + u32 val; + + val = FIELD_PREP(ENDP_SUSPEND_FMASK, init_ctrl->endp_suspend); + val |= FIELD_PREP(ENDP_DELAY_FMASK, init_ctrl->endp_delay); + + return val; +} + +static void +ipa_reg_parse_endp_init_ctrl_n(enum ipa_reg reg, void *fields, u32 val) +{ + struct ipa_reg_endp_init_ctrl *init_ctrl = fields; + + memset(init_ctrl, 0, sizeof(*init_ctrl)); + + init_ctrl->endp_suspend = FIELD_GET(ENDP_SUSPEND_FMASK, val); + init_ctrl->endp_delay = FIELD_GET(ENDP_DELAY_FMASK, val); +} + +/* IPA_ENDP_INIT_DEAGGR_N register */ + +static void +ipa_reg_endp_init_deaggr_common(struct ipa_reg_endp_init_deaggr *init_deaggr) +{ + init_deaggr->deaggr_hdr_len = 0; /* XXX description? */ + init_deaggr->packet_offset_valid = 0; /* XXX description? */ + init_deaggr->packet_offset_location = 0; /* XXX description? */ + init_deaggr->max_packet_len = 0; /* XXX description? */ +} + +/* XXX The deaggr setting seems not to be valid for consumer endpoints */ +void +ipa_reg_endp_init_deaggr_cons(struct ipa_reg_endp_init_deaggr *init_deaggr) +{ + ipa_reg_endp_init_deaggr_common(init_deaggr); +} + +void +ipa_reg_endp_init_deaggr_prod(struct ipa_reg_endp_init_deaggr *init_deaggr) +{ + ipa_reg_endp_init_deaggr_common(init_deaggr); +} + +#define DEAGGR_HDR_LEN_FMASK 0x0000003f +#define PACKET_OFFSET_VALID_FMASK 0x00000080 +#define PACKET_OFFSET_LOCATION_FMASK 0x00003f00 +#define MAX_PACKET_LEN_FMASK 0xffff0000 + +static u32 +ipa_reg_construct_endp_init_deaggr_n(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_endp_init_deaggr *init_deaggr = fields; + u32 val; + + /* fields value is completely ignored (can be NULL) */ + val = FIELD_PREP(DEAGGR_HDR_LEN_FMASK, init_deaggr->deaggr_hdr_len); + val |= FIELD_PREP(PACKET_OFFSET_VALID_FMASK, + init_deaggr->packet_offset_valid); + val |= FIELD_PREP(PACKET_OFFSET_LOCATION_FMASK, + init_deaggr->packet_offset_location); + val |= FIELD_PREP(MAX_PACKET_LEN_FMASK, + init_deaggr->max_packet_len); + + return val; +} + +/* IPA_ENDP_INIT_SEQ_N register */ + +static void +ipa_reg_endp_init_seq_common(struct ipa_reg_endp_init_seq *init_seq) +{ + init_seq->dps_seq_type = 0; /* XXX description? */ + init_seq->hps_rep_seq_type = 0; /* XXX description? 
*/ + init_seq->dps_rep_seq_type = 0; /* XXX description? */ +} + +void ipa_reg_endp_init_seq_cons(struct ipa_reg_endp_init_seq *init_seq) +{ + init_seq->hps_seq_type = 0; /* ignored */ + + ipa_reg_endp_init_seq_common(init_seq); +} + +void ipa_reg_endp_init_seq_prod(struct ipa_reg_endp_init_seq *init_seq, + enum ipa_seq_type seq_type) +{ + init_seq->hps_seq_type = (u32)seq_type; + + ipa_reg_endp_init_seq_common(init_seq); +} + +#define HPS_SEQ_TYPE_FMASK 0x0000000f +#define DPS_SEQ_TYPE_FMASK 0x000000f0 +#define HPS_REP_SEQ_TYPE_FMASK 0x00000f00 +#define DPS_REP_SEQ_TYPE_FMASK 0x0000f000 + +static u32 +ipa_reg_construct_endp_init_seq_n(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_endp_init_seq *init_seq = fields; + u32 val; + + val = FIELD_PREP(HPS_SEQ_TYPE_FMASK, init_seq->hps_seq_type); + val |= FIELD_PREP(DPS_SEQ_TYPE_FMASK, init_seq->dps_seq_type); + val |= FIELD_PREP(HPS_REP_SEQ_TYPE_FMASK, init_seq->hps_rep_seq_type); + val |= FIELD_PREP(DPS_REP_SEQ_TYPE_FMASK, init_seq->dps_rep_seq_type); + + return val; +} + +/* IPA_ENDP_INIT_CFG_N register */ + +static void +ipa_reg_endp_init_cfg_common(struct ipa_reg_endp_init_cfg *init_cfg) +{ + init_cfg->frag_offload_en = 0; /* XXX description? */ + init_cfg->cs_gen_qmb_master_sel = 0; /* XXX description? */ +} + +void ipa_reg_endp_init_cfg_cons(struct ipa_reg_endp_init_cfg *init_cfg, + enum ipa_cs_offload_en offload_type) +{ + init_cfg->cs_offload_en = offload_type; + init_cfg->cs_metadata_hdr_offset = 0; /* ignored */ + + ipa_reg_endp_init_cfg_common(init_cfg); +} + +void ipa_reg_endp_init_cfg_prod(struct ipa_reg_endp_init_cfg *init_cfg, + enum ipa_cs_offload_en offload_type, + u32 metadata_offset) +{ + init_cfg->cs_offload_en = offload_type; + init_cfg->cs_metadata_hdr_offset = metadata_offset; + + ipa_reg_endp_init_cfg_common(init_cfg); +} + +#define FRAG_OFFLOAD_EN_FMASK 0x00000001 +#define CS_OFFLOAD_EN_FMASK 0x00000006 +#define CS_METADATA_HDR_OFFSET_FMASK 0x00000078 +#define CS_GEN_QMB_MASTER_SEL_FMASK 0x00000100 + +static u32 +ipa_reg_construct_endp_init_cfg_n(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_endp_init_cfg *init_cfg = fields; + u32 val; + + val = FIELD_PREP(FRAG_OFFLOAD_EN_FMASK, init_cfg->frag_offload_en); + val |= FIELD_PREP(CS_OFFLOAD_EN_FMASK, init_cfg->cs_offload_en); + val |= FIELD_PREP(CS_METADATA_HDR_OFFSET_FMASK, + init_cfg->cs_metadata_hdr_offset); + val |= FIELD_PREP(CS_GEN_QMB_MASTER_SEL_FMASK, + init_cfg->cs_gen_qmb_master_sel); + + return val; +} + +/* IPA_ENDP_INIT_HDR_METADATA_MASK_N register */ + +void ipa_reg_endp_init_hdr_metadata_mask_cons( + struct ipa_reg_endp_init_hdr_metadata_mask *metadata_mask, + u32 mask) +{ + metadata_mask->metadata_mask = mask; +} + +/* IPA_ENDP_INIT_HDR_METADATA_MASK is not valid for producer pipes */ +void ipa_reg_endp_init_hdr_metadata_mask_prod( + struct ipa_reg_endp_init_hdr_metadata_mask *metadata_mask) +{ + metadata_mask->metadata_mask = 0; /* ignored */ +} + + +#define METADATA_MASK_FMASK 0xffffffff + +static u32 ipa_reg_construct_endp_init_hdr_metadata_mask_n(enum ipa_reg reg, + const void *fields) +{ + const struct ipa_reg_endp_init_hdr_metadata_mask *metadata_mask; + + metadata_mask = fields; + + return FIELD_PREP(METADATA_MASK_FMASK, metadata_mask->metadata_mask); +} + +/* IPA_SHARED_MEM_SIZE register (read-only) */ + +#define SHARED_MEM_SIZE_FMASK 0x0000ffff +#define SHARED_MEM_BADDR_FMASK 0xffff0000 + +static void +ipa_reg_parse_shared_mem_size(enum ipa_reg reg, void *fields, u32 val) +{ + struct ipa_reg_shared_mem_size 
*mem_size = fields; + + memset(mem_size, 0, sizeof(*mem_size)); + + mem_size->shared_mem_size = FIELD_GET(SHARED_MEM_SIZE_FMASK, val); + mem_size->shared_mem_baddr = FIELD_GET(SHARED_MEM_BADDR_FMASK, val); +} + +/* IPA_ENDP_STATUS_N register */ + +static void ipa_reg_endp_status_common(struct ipa_reg_endp_status *endp_status) +{ + endp_status->status_pkt_suppress = 0; /* XXX description? */ +} + +void ipa_reg_endp_status_cons(struct ipa_reg_endp_status *endp_status, + bool enable) +{ + endp_status->status_en = enable ? 1 : 0; + endp_status->status_endp = 0; /* ignored */ + endp_status->status_location = 0; /* before packet data */ + + ipa_reg_endp_status_common(endp_status); +} + +void ipa_reg_endp_status_prod(struct ipa_reg_endp_status *endp_status, + bool enable, u32 endp) +{ + endp_status->status_en = enable ? 1 : 0; + endp_status->status_endp = endp; + endp_status->status_location = 0; /* ignored */ + + ipa_reg_endp_status_common(endp_status); +} + +#define STATUS_EN_FMASK 0x00000001 +#define STATUS_ENDP_FMASK 0x0000003e +#define STATUS_LOCATION_FMASK 0x00000100 +#define STATUS_PKT_SUPPRESS_FMASK 0x00000200 + +static u32 ipa_reg_construct_endp_status_n(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_endp_status *endp_status = fields; + u32 val; + + val = FIELD_PREP(STATUS_EN_FMASK, endp_status->status_en); + val |= FIELD_PREP(STATUS_ENDP_FMASK, endp_status->status_endp); + val |= FIELD_PREP(STATUS_LOCATION_FMASK, endp_status->status_location); + val |= FIELD_PREP(STATUS_PKT_SUPPRESS_FMASK, 0); + + return val; +} + +/* IPA_ENDP_FILTER_ROUTER_HSH_CFG_N register */ + +void ipa_reg_hash_tuple(struct ipa_reg_hash_tuple *tuple) +{ + tuple->src_id = 0; /* pipe number in flt, table index in rt */ + tuple->src_ip = 0; + tuple->dst_ip = 0; + tuple->src_port = 0; + tuple->dst_port = 0; + tuple->protocol = 0; + tuple->metadata = 0; + tuple->undefined = 0; +} + +#define FILTER_HASH_MSK_SRC_ID_FMASK 0x00000001 +#define FILTER_HASH_MSK_SRC_IP_FMASK 0x00000002 +#define FILTER_HASH_MSK_DST_IP_FMASK 0x00000004 +#define FILTER_HASH_MSK_SRC_PORT_FMASK 0x00000008 +#define FILTER_HASH_MSK_DST_PORT_FMASK 0x00000010 +#define FILTER_HASH_MSK_PROTOCOL_FMASK 0x00000020 +#define FILTER_HASH_MSK_METADATA_FMASK 0x00000040 +#define FILTER_HASH_UNDEFINED1_FMASK 0x0000ff80 + +#define ROUTER_HASH_MSK_SRC_ID_FMASK 0x00010000 +#define ROUTER_HASH_MSK_SRC_IP_FMASK 0x00020000 +#define ROUTER_HASH_MSK_DST_IP_FMASK 0x00040000 +#define ROUTER_HASH_MSK_SRC_PORT_FMASK 0x00080000 +#define ROUTER_HASH_MSK_DST_PORT_FMASK 0x00100000 +#define ROUTER_HASH_MSK_PROTOCOL_FMASK 0x00200000 +#define ROUTER_HASH_MSK_METADATA_FMASK 0x00400000 +#define ROUTER_HASH_UNDEFINED2_FMASK 0xff800000 + +static u32 ipa_reg_construct_hash_cfg_n(enum ipa_reg reg, const void *fields) +{ + const struct ipa_ep_filter_router_hsh_cfg *hsh_cfg = fields; + u32 val; + + val = FIELD_PREP(FILTER_HASH_MSK_SRC_ID_FMASK, hsh_cfg->flt.src_id); + val |= FIELD_PREP(FILTER_HASH_MSK_SRC_IP_FMASK, hsh_cfg->flt.src_ip); + val |= FIELD_PREP(FILTER_HASH_MSK_DST_IP_FMASK, hsh_cfg->flt.dst_ip); + val |= FIELD_PREP(FILTER_HASH_MSK_SRC_PORT_FMASK, + hsh_cfg->flt.src_port); + val |= FIELD_PREP(FILTER_HASH_MSK_DST_PORT_FMASK, + hsh_cfg->flt.dst_port); + val |= FIELD_PREP(FILTER_HASH_MSK_PROTOCOL_FMASK, + hsh_cfg->flt.protocol); + val |= FIELD_PREP(FILTER_HASH_MSK_METADATA_FMASK, + hsh_cfg->flt.metadata); + val |= FIELD_PREP(FILTER_HASH_UNDEFINED1_FMASK, hsh_cfg->flt.undefined); + + val |= FIELD_PREP(ROUTER_HASH_MSK_SRC_ID_FMASK, hsh_cfg->rt.src_id); + val |= 
FIELD_PREP(ROUTER_HASH_MSK_SRC_IP_FMASK, hsh_cfg->rt.src_ip); + val |= FIELD_PREP(ROUTER_HASH_MSK_DST_IP_FMASK, hsh_cfg->rt.dst_ip); + val |= FIELD_PREP(ROUTER_HASH_MSK_SRC_PORT_FMASK, hsh_cfg->rt.src_port); + val |= FIELD_PREP(ROUTER_HASH_MSK_DST_PORT_FMASK, hsh_cfg->rt.dst_port); + val |= FIELD_PREP(ROUTER_HASH_MSK_PROTOCOL_FMASK, hsh_cfg->rt.protocol); + val |= FIELD_PREP(ROUTER_HASH_MSK_METADATA_FMASK, hsh_cfg->rt.metadata); + val |= FIELD_PREP(ROUTER_HASH_UNDEFINED2_FMASK, hsh_cfg->rt.undefined); + + return val; +} + +static void ipa_reg_parse_hash_cfg_n(enum ipa_reg reg, void *fields, u32 val) +{ + struct ipa_ep_filter_router_hsh_cfg *hsh_cfg = fields; + + memset(hsh_cfg, 0, sizeof(*hsh_cfg)); + + hsh_cfg->flt.src_id = FIELD_GET(FILTER_HASH_MSK_SRC_ID_FMASK, val); + hsh_cfg->flt.src_ip = FIELD_GET(FILTER_HASH_MSK_SRC_IP_FMASK, val); + hsh_cfg->flt.dst_ip = FIELD_GET(FILTER_HASH_MSK_DST_IP_FMASK, val); + hsh_cfg->flt.src_port = FIELD_GET(FILTER_HASH_MSK_SRC_PORT_FMASK, val); + hsh_cfg->flt.dst_port = FIELD_GET(FILTER_HASH_MSK_DST_PORT_FMASK, val); + hsh_cfg->flt.protocol = FIELD_GET(FILTER_HASH_MSK_PROTOCOL_FMASK, val); + hsh_cfg->flt.metadata = FIELD_GET(FILTER_HASH_MSK_METADATA_FMASK, val); + hsh_cfg->flt.undefined = FIELD_GET(FILTER_HASH_UNDEFINED1_FMASK, val); + + hsh_cfg->rt.src_id = FIELD_GET(ROUTER_HASH_MSK_SRC_ID_FMASK, val); + hsh_cfg->rt.src_ip = FIELD_GET(ROUTER_HASH_MSK_SRC_IP_FMASK, val); + hsh_cfg->rt.dst_ip = FIELD_GET(ROUTER_HASH_MSK_DST_IP_FMASK, val); + hsh_cfg->rt.src_port = FIELD_GET(ROUTER_HASH_MSK_SRC_PORT_FMASK, val); + hsh_cfg->rt.dst_port = FIELD_GET(ROUTER_HASH_MSK_DST_PORT_FMASK, val); + hsh_cfg->rt.protocol = FIELD_GET(ROUTER_HASH_MSK_PROTOCOL_FMASK, val); + hsh_cfg->rt.metadata = FIELD_GET(ROUTER_HASH_MSK_METADATA_FMASK, val); + hsh_cfg->rt.undefined = FIELD_GET(ROUTER_HASH_UNDEFINED2_FMASK, val); +} + +/* IPA_RSRC_GRP_XY_RSRC_TYPE_N register(s) */ + +void +ipa_reg_rsrc_grp_xy_rsrc_type_n(struct ipa_reg_rsrc_grp_xy_rsrc_type_n *limits, + u32 x_min, u32 x_max, u32 y_min, u32 y_max) +{ + limits->x_min = x_min; + limits->x_max = x_max; + limits->y_min = y_min; + limits->y_max = y_max; +} + +#define X_MIN_LIM_FMASK 0x0000003f +#define X_MAX_LIM_FMASK 0x00003f00 +#define Y_MIN_LIM_FMASK 0x003f0000 +#define Y_MAX_LIM_FMASK 0x3f000000 + +static u32 +ipa_reg_construct_rsrg_grp_xy_rsrc_type_n(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_rsrc_grp_xy_rsrc_type_n *limits = fields; + u32 val; + + val = FIELD_PREP(X_MIN_LIM_FMASK, limits->x_min); + val |= FIELD_PREP(X_MAX_LIM_FMASK, limits->x_max); + + /* DST_23 register has only X fields at ipa V3_5 */ + if (reg == IPA_DST_RSRC_GRP_23_RSRC_TYPE_N) + return val; + + val |= FIELD_PREP(Y_MIN_LIM_FMASK, limits->y_min); + val |= FIELD_PREP(Y_MAX_LIM_FMASK, limits->y_max); + + return val; +} + +/* IPA_QSB_MAX_WRITES register */ + +void ipa_reg_qsb_max_writes(struct ipa_reg_qsb_max_writes *max_writes, + u32 qmb_0_max_writes, u32 qmb_1_max_writes) +{ + max_writes->qmb_0_max_writes = qmb_0_max_writes; + max_writes->qmb_1_max_writes = qmb_1_max_writes; +} + +#define GEN_QMB_0_MAX_WRITES_FMASK 0x0000000f +#define GEN_QMB_1_MAX_WRITES_FMASK 0x000000f0 + +static u32 +ipa_reg_construct_qsb_max_writes(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_qsb_max_writes *max_writes = fields; + u32 val; + + val = FIELD_PREP(GEN_QMB_0_MAX_WRITES_FMASK, + max_writes->qmb_0_max_writes); + val |= FIELD_PREP(GEN_QMB_1_MAX_WRITES_FMASK, + max_writes->qmb_1_max_writes); + + return val; +} + +/* IPA_QSB_MAX_READS
register */ + +void ipa_reg_qsb_max_reads(struct ipa_reg_qsb_max_reads *max_reads, + u32 qmb_0_max_reads, u32 qmb_1_max_reads) +{ + max_reads->qmb_0_max_reads = qmb_0_max_reads; + max_reads->qmb_1_max_reads = qmb_1_max_reads; +} + +#define GEN_QMB_0_MAX_READS_FMASK 0x0000000f +#define GEN_QMB_1_MAX_READS_FMASK 0x000000f0 + +static u32 ipa_reg_construct_qsb_max_reads(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_qsb_max_reads *max_reads = fields; + u32 val; + + val = FIELD_PREP(GEN_QMB_0_MAX_READS_FMASK, max_reads->qmb_0_max_reads); + val |= FIELD_PREP(GEN_QMB_1_MAX_READS_FMASK, + max_reads->qmb_1_max_reads); + + return val; +} + +/* IPA_IDLE_INDICATION_CFG register */ + +void ipa_reg_idle_indication_cfg(struct ipa_reg_idle_indication_cfg *indication, + u32 debounce_thresh, bool non_idle_enable) +{ + indication->enter_idle_debounce_thresh = debounce_thresh; + indication->const_non_idle_enable = non_idle_enable; +} + +#define ENTER_IDLE_DEBOUNCE_THRESH_FMASK 0x0000ffff +#define CONST_NON_IDLE_ENABLE_FMASK 0x00010000 + +static u32 +ipa_reg_construct_idle_indication_cfg(enum ipa_reg reg, const void *fields) +{ + const struct ipa_reg_idle_indication_cfg *indication_cfg; + u32 val; + + indication_cfg = fields; + + val = FIELD_PREP(ENTER_IDLE_DEBOUNCE_THRESH_FMASK, + indication_cfg->enter_idle_debounce_thresh); + val |= FIELD_PREP(CONST_NON_IDLE_ENABLE_FMASK, + indication_cfg->const_non_idle_enable); + + return val; +} + +/* The entries in the following table have the following constraints: + * - 0 is not a valid offset (it represents an unused entry). It is + * a bug for code to attempt to access a register which has an + * undefined (zero) offset value. + * - If a construct function is supplied, the register must be + * written using ipa_write_reg_n_fields() (or its wrapper + * function ipa_write_reg_fields()). + * - Generally, if a parse function is supplied, the register should + * read using ipa_read_reg_n_fields() (or ipa_read_reg_fields()). + * (Currently some debug code reads some registers directly, without + * parsing.) 
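+ *
+ * As a rough illustration (not additional driver code), a table entry
+ * declared below with one of these macros, for example
+ *	reg_obj_cfunc(IPA_ROUTE, route, 0x00000048, 0x0000)
+ * expands to an initializer along these lines:
+ *	[IPA_ROUTE] = {
+ *		.construct	= ipa_reg_construct_route,
+ *		.parse		= NULL,
+ *		.offset		= 0x00000048,
+ *		.n_ofst		= 0x0000,
+ *	}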
+ */ +#define cfunc(f) ipa_reg_construct_ ## f +#define pfunc(f) ipa_reg_parse_ ## f +#define reg_obj_common(id, cf, pf, o, n) \ + [id] = { \ + .construct = cf, \ + .parse = pf, \ + .offset = o, \ + .n_ofst = n, \ + } +#define reg_obj_cfunc(id, f, o, n) \ + reg_obj_common(id, cfunc(f), NULL, o, n) +#define reg_obj_pfunc(id, f, o, n) \ + reg_obj_common(id, NULL, pfunc(f), o, n) +#define reg_obj_both(id, f, o, n) \ + reg_obj_common(id, cfunc(f), pfunc(f), o, n) +#define reg_obj_nofunc(id, o, n) \ + reg_obj_common(id, NULL, NULL, o, n) + +/* IPAv3.5.1 */ +static const struct ipa_reg_desc ipa_reg[] = { + reg_obj_cfunc(IPA_ROUTE, route, 0x00000048, 0x0000), + reg_obj_nofunc(IPA_IRQ_STTS_EE_N, 0x00003008, 0x1000), + reg_obj_nofunc(IPA_IRQ_EN_EE_N, 0x0000300c, 0x1000), + reg_obj_nofunc(IPA_IRQ_CLR_EE_N, 0x00003010, 0x1000), + reg_obj_nofunc(IPA_IRQ_SUSPEND_INFO_EE_N, 0x00003030, 0x1000), + reg_obj_nofunc(IPA_SUSPEND_IRQ_EN_EE_N, 0x00003034, 0x1000), + reg_obj_nofunc(IPA_SUSPEND_IRQ_CLR_EE_N, 0x00003038, 0x1000), + reg_obj_nofunc(IPA_BCR, 0x000001d0, 0x0000), + reg_obj_nofunc(IPA_ENABLED_PIPES, 0x00000038, 0x0000), + reg_obj_nofunc(IPA_TAG_TIMER, 0x00000060, 0x0000), + reg_obj_nofunc(IPA_STATE_AGGR_ACTIVE, 0x0000010c, 0x0000), + reg_obj_cfunc(IPA_ENDP_INIT_HDR_N, + endp_init_hdr_n, 0x00000810, 0x0070), + reg_obj_cfunc(IPA_ENDP_INIT_HDR_EXT_N, + endp_init_hdr_ext_n, 0x00000814, 0x0070), + reg_obj_both(IPA_ENDP_INIT_AGGR_N, + endp_init_aggr_n, 0x00000824, 0x0070), + reg_obj_cfunc(IPA_AGGR_FORCE_CLOSE, + aggr_force_close, 0x000001ec, 0x0000), + reg_obj_cfunc(IPA_ENDP_INIT_MODE_N, + endp_init_mode_n, 0x00000820, 0x0070), + reg_obj_both(IPA_ENDP_INIT_CTRL_N, + endp_init_ctrl_n, 0x00000800, 0x0070), + reg_obj_cfunc(IPA_ENDP_INIT_DEAGGR_N, + endp_init_deaggr_n, 0x00000834, 0x0070), + reg_obj_cfunc(IPA_ENDP_INIT_SEQ_N, + endp_init_seq_n, 0x0000083c, 0x0070), + reg_obj_cfunc(IPA_ENDP_INIT_CFG_N, + endp_init_cfg_n, 0x00000808, 0x0070), + reg_obj_nofunc(IPA_IRQ_EE_UC_N, 0x0000301c, 0x1000), + reg_obj_cfunc(IPA_ENDP_INIT_HDR_METADATA_MASK_N, + endp_init_hdr_metadata_mask_n, 0x00000818, 0x0070), + reg_obj_pfunc(IPA_SHARED_MEM_SIZE, + shared_mem_size, 0x00000054, 0x0000), + reg_obj_nofunc(IPA_SRAM_DIRECT_ACCESS_N, 0x00007000, 0x0004), + reg_obj_nofunc(IPA_LOCAL_PKT_PROC_CNTXT_BASE, 0x000001e8, 0x0000), + reg_obj_cfunc(IPA_ENDP_STATUS_N, + endp_status_n, 0x00000840, 0x0070), + reg_obj_both(IPA_ENDP_FILTER_ROUTER_HSH_CFG_N, + hash_cfg_n, 0x0000085c, 0x0070), + reg_obj_cfunc(IPA_SRC_RSRC_GRP_01_RSRC_TYPE_N, + rsrg_grp_xy_rsrc_type_n, 0x00000400, 0x0020), + reg_obj_cfunc(IPA_SRC_RSRC_GRP_23_RSRC_TYPE_N, + rsrg_grp_xy_rsrc_type_n, 0x00000404, 0x0020), + reg_obj_cfunc(IPA_DST_RSRC_GRP_01_RSRC_TYPE_N, + rsrg_grp_xy_rsrc_type_n, 0x00000500, 0x0020), + reg_obj_cfunc(IPA_DST_RSRC_GRP_23_RSRC_TYPE_N, + rsrg_grp_xy_rsrc_type_n, 0x00000504, 0x0020), + reg_obj_cfunc(IPA_QSB_MAX_WRITES, + qsb_max_writes, 0x00000074, 0x0000), + reg_obj_cfunc(IPA_QSB_MAX_READS, + qsb_max_reads, 0x00000078, 0x0000), + reg_obj_cfunc(IPA_IDLE_INDICATION_CFG, + idle_indication_cfg, 0x00000220, 0x0000), +}; + +#undef reg_obj_nofunc +#undef reg_obj_both +#undef reg_obj_pfunc +#undef reg_obj_cfunc +#undef reg_obj_common +#undef pfunc +#undef cfunc + +int ipa_reg_init(phys_addr_t phys_addr, size_t size) +{ + ipa_reg_virt = ioremap(phys_addr, size); + + return ipa_reg_virt ? 
0 : -ENOMEM; +} + +void ipa_reg_exit(void) +{ + iounmap(ipa_reg_virt); + ipa_reg_virt = NULL; +} + +/* Get the offset of an "n parameterized" register */ +u32 ipa_reg_n_offset(enum ipa_reg reg, u32 n) +{ + return ipa_reg[reg].offset + n * ipa_reg[reg].n_ofst; +} + +/* ipa_read_reg_n() - Get an "n parameterized" register's value */ +u32 ipa_read_reg_n(enum ipa_reg reg, u32 n) +{ + return ioread32(ipa_reg_virt + ipa_reg_n_offset(reg, n)); +} + +/* ipa_write_reg_n() - Write a raw value to an "n parameterized" register */ +void ipa_write_reg_n(enum ipa_reg reg, u32 n, u32 val) +{ + iowrite32(val, ipa_reg_virt + ipa_reg_n_offset(reg, n)); +} + +/* ipa_read_reg_n_fields() - Parse value of an "n parameterized" register */ +void ipa_read_reg_n_fields(enum ipa_reg reg, u32 n, void *fields) +{ + u32 val = ipa_read_reg_n(reg, n); + + ipa_reg[reg].parse(reg, fields, val); +} + +/* ipa_write_reg_n_fields() - Construct a value to write to an "n + * parameterized" register + */ +void ipa_write_reg_n_fields(enum ipa_reg reg, u32 n, const void *fields) +{ + u32 val = ipa_reg[reg].construct(reg, fields); + + ipa_write_reg_n(reg, n, val); +} + +/* Maximum representable aggregation byte limit value (in bytes) */ +u32 ipa_reg_aggr_max_byte_limit(void) +{ + return FIELD_MAX(AGGR_BYTE_LIMIT_FMASK) * SZ_1K; +} + +/* Maximum representable aggregation packet limit value */ +u32 ipa_reg_aggr_max_packet_limit(void) +{ + return FIELD_MAX(AGGR_PKT_LIMIT_FMASK); +} diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h new file mode 100644 index 000000000000..fb7c1ab6408c --- /dev/null +++ b/drivers/net/ipa/ipa_reg.h @@ -0,0 +1,614 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ +#ifndef _IPA_REG_H_ +#define _IPA_REG_H_ + +/** + * DOC: The IPA Register Abstraction + * + * The IPA code abstracts the details of its 32-bit registers, allowing access + * to them to be done generically. The original motivation for this was that + * the field width and/or position for values stored in some registers differed + * for different versions of IPA hardware. Abstracting access this way allows + * code that uses such registers to be simpler, describing how register fields + * are used without proliferating special-case code that is dependent on + * hardware version. + * + * Each IPA register has a name, which is one of the values in the "ipa_reg" + * enumerated type (e.g., IPA_ENABLED_PIPES). The offset (memory address) of + * the register having a given name is maintained internal to the "ipa_reg" + * module. + * + * For simple registers that hold a single 32-bit value, two functions provide + * access to the register: + * u32 ipa_read_reg(enum ipa_reg reg); + * void ipa_write_reg(enum ipa_reg reg, u32 val); + * + * Some registers are "N-parameterized." This means there is a set of + * registers having identical format, and each is accessed by supplying + * the "N" value to select which register is intended. The names for + * N-parameterized registers have an "_N" suffix (e.g. IPA_IRQ_STTS_EE_N). + * Details of computing the offset for such registers are maintained internal + * to the "ipa_reg" module.
For simple registers holding a single 32-bit + * value, these functions provide access to N-parameterized registers: + * u32 ipa_read_reg_n(enum ipa_reg reg, u32 n); + * void ipa_write_reg_n(enum ipa_reg reg, u32 n, u32 val); + * + * Some registers contain fields less than 32 bits wide (call these "field + * registers"). For each such register a "field structure" is defined to + * represent the values of the individual fields within the register. The + * name of the structure matches the name of the register (in lower case). + * For example, the individual fields in the IPA_ROUTE register are represented + * by the field structure named ipa_reg_route. + * + * Each field register has a function used to fill in its corresponding + * field structure with particular values. Parameters to this function + * supply values to assign. In many cases only a few such parameters + * are required, because some field values are invariant. The name of + * this sort of function is derived from the structure name, so for example + * ipa_reg_route() is used to initialize an ipa_reg_route structure. + * Field registers associated with endpoints often use different fields + * or different values dependent on whether the endpoint is a producer or + * consumer. In these cases separate functions are used to initialize + * the field structure (for example ipa_reg_endp_init_hdr_cons() and + * ipa_reg_endp_init_hdr_prod()). + * + * The position and width of fields within a register are defined (in + * "ipa_reg.c") using field masks, and the names of the members in the field + * structure associated with such registers match the names of the bit masks + * that define the fields. (E.g., ipa_reg_route->route_dis is used to + * represent the field defined by the ROUTE_DIS field mask.) + * + * "Field registers" are accessed using these functions: + * void ipa_read_reg_fields(enum ipa_reg reg, void *fields); + * void ipa_write_reg_fields(enum ipa_reg reg, const void *fields); + * The "fields" parameter in both cases is the address of the "field structure" + * associated with the register being accessed. When reading, the structure + * is filled by ipa_read_reg_fields() with values found in the register's + * fields. (All fields will be filled; there is no need for the caller to + * initialize the passed-in structure before the call.) When writing, the + * caller initializes the structure with all values that should be written to + * the fields in the register. 
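+ *
+ * As a brief illustrative sketch (the endpoint ID "ep_id" here is
+ * hypothetical), writing the IPA_ROUTE register using its field
+ * structure might look like:
+ *
+ *	struct ipa_reg_route route;
+ *
+ *	ipa_reg_route(&route, ep_id);
+ *	ipa_write_reg_fields(IPA_ROUTE, &route);
+ *
+ * where ipa_reg_route() fills in the field structure and
+ * ipa_write_reg_fields() packs those fields and writes the result to
+ * the register.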
+ * + * "Field registers" can also be N-parameterized, in which case they are + * accessed using these functions: + * void ipa_read_reg_n_fields(enum ipa_reg reg, u32 n, void *fields); + * void ipa_write_reg_n_fields(enum ipa_reg reg, u32 n, + * const void *fields); + */ + +/* Register names */ +enum ipa_reg { + IPA_ROUTE, + IPA_IRQ_STTS_EE_N, + IPA_IRQ_EN_EE_N, + IPA_IRQ_CLR_EE_N, + IPA_IRQ_SUSPEND_INFO_EE_N, + IPA_SUSPEND_IRQ_EN_EE_N, + IPA_SUSPEND_IRQ_CLR_EE_N, + IPA_BCR, + IPA_ENABLED_PIPES, + IPA_TAG_TIMER, + IPA_STATE_AGGR_ACTIVE, + IPA_ENDP_INIT_HDR_N, + IPA_ENDP_INIT_HDR_EXT_N, + IPA_ENDP_INIT_AGGR_N, + IPA_AGGR_FORCE_CLOSE, + IPA_ENDP_INIT_MODE_N, + IPA_ENDP_INIT_CTRL_N, + IPA_ENDP_INIT_DEAGGR_N, + IPA_ENDP_INIT_SEQ_N, + IPA_ENDP_INIT_CFG_N, + IPA_IRQ_EE_UC_N, + IPA_ENDP_INIT_HDR_METADATA_MASK_N, + IPA_SHARED_MEM_SIZE, + IPA_SRAM_DIRECT_ACCESS_N, + IPA_LOCAL_PKT_PROC_CNTXT_BASE, + IPA_ENDP_STATUS_N, + IPA_ENDP_FILTER_ROUTER_HSH_CFG_N, + IPA_SRC_RSRC_GRP_01_RSRC_TYPE_N, + IPA_SRC_RSRC_GRP_23_RSRC_TYPE_N, + IPA_DST_RSRC_GRP_01_RSRC_TYPE_N, + IPA_DST_RSRC_GRP_23_RSRC_TYPE_N, + IPA_QSB_MAX_WRITES, + IPA_QSB_MAX_READS, + IPA_IDLE_INDICATION_CFG, +}; + +/** + * struct ipa_reg_route - IPA_ROUTE field structure + * @route_dis: route disable + * @route_def_pipe: route default pipe + * @route_def_hdr_table: route default header table + * @route_def_hdr_ofst: route default header offset table + * @route_frag_def_pipe: Default pipe to route fragmented exception + * packets and frag new rule statues, if source pipe does not have + * a notification status pipe defined. + * @route_def_retain_hdr: default value of retain header. It is used + * when no rule was hit + */ +struct ipa_reg_route { + u32 route_dis; + u32 route_def_pipe; + u32 route_def_hdr_table; + u32 route_def_hdr_ofst; + u32 route_frag_def_pipe; + u32 route_def_retain_hdr; +}; + +/** + * ipa_reg_endp_init_hdr - ENDP_INIT_HDR_N field structure + * + * @hdr_len: + * @hdr_ofst_metadata_valid: + * @hdr_ofst_metadata: + * @hdr_additional_const_len: + * @hdr_ofst_pkt_size_valid: + * @hdr_ofst_pkt_size: + * @hdr_a5_mux: + * @hdr_len_inc_deagg_hdr: + * @hdr_metadata_reg_valid: +*/ +struct ipa_reg_endp_init_hdr { + u32 hdr_len; + u32 hdr_ofst_metadata_valid; + u32 hdr_ofst_metadata; + u32 hdr_additional_const_len; + u32 hdr_ofst_pkt_size_valid; + u32 hdr_ofst_pkt_size; + u32 hdr_a5_mux; + u32 hdr_len_inc_deagg_hdr; + u32 hdr_metadata_reg_valid; +}; + +/** + * ipa_reg_endp_init_hdr_ext - IPA_ENDP_INIT_HDR_EXT_N field structure + * + * @hdr_endianness: + * @hdr_total_len_or_pad_valid: + * @hdr_total_len_or_pad: + * @hdr_payload_len_inc_padding: + * @hdr_total_len_or_pad_offset: + * @hdr_pad_to_alignment: + */ +struct ipa_reg_endp_init_hdr_ext { + u32 hdr_endianness; /* 0 = little endian; 1 = big endian */ + u32 hdr_total_len_or_pad_valid; + u32 hdr_total_len_or_pad; /* 0 = pad; 1 = total_len */ + u32 hdr_payload_len_inc_padding; + u32 hdr_total_len_or_pad_offset; + u32 hdr_pad_to_alignment; +}; + +/** + * enum ipa_aggr_en - aggregation setting type in IPA end-point + */ +enum ipa_aggr_en { + IPA_BYPASS_AGGR = 0, + IPA_ENABLE_AGGR = 1, + IPA_ENABLE_DEAGGR = 2, +}; + +/** + * enum ipa_aggr_type - type of aggregation in IPA end-point + */ +enum ipa_aggr_type { + IPA_MBIM_16 = 0, + IPA_HDLC = 1, + IPA_TLP = 2, + IPA_RNDIS = 3, + IPA_GENERIC = 4, + IPA_QCMAP = 6, +}; + +#define IPA_AGGR_TIME_LIMIT_DEFAULT 1 /* XXX units? 
*/ + +/** + * struct ipa_reg_endp_init_aggr - IPA_ENDP_INIT_AGGR_N field structure + * @aggr_en: bypass aggregation, enable aggregation, or deaggregation + * (enum ipa_aggr_en) + * @aggr_type: type of aggregation (enum ipa_aggr_type) + * @aggr_byte_limit: aggregated byte limit in KB, or no limit if 0 + * (producer pipes only) + * @aggr_time_limit: time limit before close of aggregation, or + * aggregation disabled if 0 (producer pipes only) + * @aggr_pkt_limit: packet limit before closing aggregation, or no + * limit if 0 (producer pipes only) XXX units + * @aggr_sw_eof_active: whether EOF closes aggregation--in addition to + * hardware aggregation configuration (producer + * pipes configured for generic aggregation only) + * @aggr_force_close: whether to force a close XXX verify/when + * @aggr_hard_byte_limit_en: whether aggregation frames close *before* + * byte count has crossed limit, rather than + * after XXX producer only? + */ +struct ipa_reg_endp_init_aggr { + u32 aggr_en; /* enum ipa_aggr_en */ + u32 aggr_type; /* enum ipa_aggr_type */ + u32 aggr_byte_limit; + u32 aggr_time_limit; + u32 aggr_pkt_limit; + u32 aggr_sw_eof_active; + u32 aggr_force_close; + u32 aggr_hard_byte_limit_en; +}; + +/** + * struct ipa_reg_aggr_force_close - IPA_AGGR_FORCE_CLOSE field structure + * @pipe_bitmap: bitmap of pipes on which aggregation should be closed + */ +struct ipa_reg_aggr_force_close { + u32 pipe_bitmap; +}; + +/** + * enum ipa_mode - mode setting type in IPA end-point + * @IPA_BASIC: basic mode + * @IPA_ENABLE_FRAMING_HDLC: not currently supported + * @IPA_ENABLE_DEFRAMING_HDLC: not currently supported + * @IPA_DMA: data arriving at the IPA does not go through the IPA logic + * blocks; this allows the IPA to work as a DMA engine for specific pipes. + */ +enum ipa_mode { + IPA_BASIC = 0, + IPA_ENABLE_FRAMING_HDLC = 1, + IPA_ENABLE_DEFRAMING_HDLC = 2, + IPA_DMA = 3, +}; + +/** + * struct ipa_reg_endp_init_mode - IPA_ENDP_INIT_MODE_N field structure + * + * @mode: endpoint mode setting (enum ipa_mode) + * @dest_pipe_index: destination pipe to which output packets + * will be routed. Valid for DMA mode only and for Input + * Pipes only (IPA Consumer) + * @byte_threshold: + * @pipe_replication_en: + * @pad_en: + * @hdr_ftch_disable: + */ +struct ipa_reg_endp_init_mode { + u32 mode; /* enum ipa_mode */ + u32 dest_pipe_index; + u32 byte_threshold; + u32 pipe_replication_en; + u32 pad_en; + u32 hdr_ftch_disable; +}; + +/** + * struct ipa_reg_endp_init_ctrl - IPA_ENDP_INIT_CTRL_N field structure + * + * @endp_suspend: 0 - ENDP is enabled, 1 - ENDP is suspended (disabled). + * Valid for PROD Endpoints + * @endp_delay: 0 - ENDP is free-running, 1 - ENDP is delayed. + * SW controls the data flow of an endpoint using this bit.
+ * Valid for CONS Endpoints + */ +struct ipa_reg_endp_init_ctrl { + u32 endp_suspend; + u32 endp_delay; +}; + +/** + * struct ipa_reg_endp_init_deaggr - IPA_ENDP_INIT_DEAGGR_N field structure + * + * @deaggr_hdr_len: + * @packet_offset_valid: + * @packet_offset_location: + * @max_packet_len: + */ +struct ipa_reg_endp_init_deaggr { + u32 deaggr_hdr_len; + u32 packet_offset_valid; + u32 packet_offset_location; + u32 max_packet_len; +}; + +/* HPS, DPS sequencers types */ +enum ipa_seq_type { + IPA_SEQ_DMA_ONLY = 0x00, + /* Packet Processing + no decipher + uCP (for Ethernet Bridging) */ + IPA_SEQ_PKT_PROCESS_NO_DEC_UCP = 0x02, + /* 2 Packet Processing pass + no decipher + uCP */ + IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP = 0x04, + /* DMA + DECIPHER/CIPHER */ + IPA_SEQ_DMA_DEC = 0x11, + /* COMP/DECOMP */ + IPA_SEQ_DMA_COMP_DECOMP = 0x20, + /* Invalid sequencer type */ + IPA_SEQ_INVALID = 0xff, +}; + +/** + * struct ipa_ep_init_seq - IPA_ENDP_INIT_SEQ_N field structure + * @hps_seq_type: type of HPS sequencer (enum ipa_hps_dps_sequencer_type) + * @dps_seq_type: type of DPS sequencer (enum ipa_hps_dps_sequencer_type) + */ +struct ipa_reg_endp_init_seq { + u32 hps_seq_type; + u32 dps_seq_type; + u32 hps_rep_seq_type; + u32 dps_rep_seq_type; +}; + +/** + * enum ipa_cs_offload_en - checksum offload setting + */ +enum ipa_cs_offload_en { + IPA_CS_OFFLOAD_NONE = 0, + IPA_CS_OFFLOAD_UL = 1, + IPA_CS_OFFLOAD_DL = 2, + IPA_CS_RSVD +}; + +/** + * struct ipa_reg_endp_init_cfg - IPA_ENDP_INIT_CFG_N field structure + * @frag_offload_en: + * @cs_offload_en: type of offloading (enum ipa_cs_offload) + * @cs_metadata_hdr_offset: offload (in 4-byte words) within header + * where 4-byte checksum metadata begins. Valid only for consumer + * pipes. + * @cs_gen_qmb_master_sel: + */ +struct ipa_reg_endp_init_cfg { + u32 frag_offload_en; + u32 cs_offload_en; /* enum ipa_cs_offload_en */ + u32 cs_metadata_hdr_offset; + u32 cs_gen_qmb_master_sel; +}; + +/** + * struct ipa_reg_endp_init_hdr_metadata_mask - + * IPA_ENDP_INIT_HDR_METADATA_MASK_N field structure + * @metadata_mask: mask specifying metadata bits to write + * + * Valid for producer pipes only. + */ +struct ipa_reg_endp_init_hdr_metadata_mask { + u32 metadata_mask; +}; + +/** + * struct ipa_reg_shared_mem_size - SHARED_MEM_SIZE field structure + * @shared_mem_size: Available size [in 8Bytes] of SW partition within + * IPA shared memory. + * @shared_mem_baddr: Offset of SW partition within IPA + * shared memory[in 8Bytes]. To get absolute address of SW partition, + * add this offset to IPA_SRAM_DIRECT_ACCESS_N baddr. + */ +struct ipa_reg_shared_mem_size { + u32 shared_mem_size; + u32 shared_mem_baddr; +}; + +/** + * struct ipa_reg_endp_status - IPA_ENDP_STATUS_N field structure + * @status_en: Determines if end point supports Status Indications. SW should + * set this bit in order to enable Statuses. Output Pipe - send + * Status indications only if bit is set. Input Pipe - forward Status + * indication to STATUS_ENDP only if bit is set. Valid for Input + * and Output Pipes (IPA Consumer and Producer) + * @status_endp: Statuses generated for this endpoint will be forwarded to the + * specified Status End Point. Status endpoint needs to be + * configured with STATUS_EN=1 Valid only for Input Pipes (IPA + * Consumer) + * @status_location: Location of PKT-STATUS on destination pipe. + * If set to 0 (default), PKT-STATUS will be appended before the packet + * for this endpoint. If set to 1, PKT-STATUS will be appended after the + * packet for this endpoint. 
Valid only for Output Pipes (IPA Producer) + * @status_pkt_suppress: + */ +struct ipa_reg_endp_status { + u32 status_en; + u32 status_endp; + u32 status_location; + u32 status_pkt_suppress; +}; + +/** + * struct ipa_hash_tuple - structure used to group filter and route fields in + * struct ipa_ep_filter_router_hsh_cfg + * @src_id: pipe number for flt, table index for rt + * @src_ip_addr: IP source address + * @dst_ip_addr: IP destination address + * @src_port: L4 source port + * @dst_port: L4 destination port + * @protocol: IP protocol field + * @meta_data: packet meta-data + * + * Each field is a Boolean value, indicating whether that particular value + * should be used for filtering or routing. + * + */ +struct ipa_reg_hash_tuple { + u32 src_id; /* pipe number in flt, table index in rt */ + u32 src_ip; + u32 dst_ip; + u32 src_port; + u32 dst_port; + u32 protocol; + u32 metadata; + u32 undefined; +}; + +/** + * struct ipa_ep_filter_router_hsh_cfg - IPA_ENDP_FILTER_ROUTER_HSH_CFG_N + * field structure + * @flt: Hash tuple info for filtering + * @undefined1: + * @rt: Hash tuple info for routing + * @undefined2: + * @undefinedX: Undefined/Unused bit fields set of the register + */ +struct ipa_ep_filter_router_hsh_cfg { + struct ipa_reg_hash_tuple flt; + struct ipa_reg_hash_tuple rt; +}; + +/** + * struct ipa_reg_rsrc_grp_xy_rsrc_type_n - + * IPA_{SRC,DST}_RSRC_GRP_{02}{13}Y_RSRC_TYPE_N field structure + * @x_min - first group min value + * @x_max - first group max value + * @y_min - second group min value + * @y_max - second group max value + * + * This field structure is used for accessing the following registers: + * IPA_SRC_RSRC_GRP_01_RSRC_TYPE_N IPA_SRC_RSRC_GRP_23_RSRC_TYPE_N + * IPA_DST_RSRC_GRP_01_RSRC_TYPE_N IPA_DST_RSRC_GRP_23_RSRC_TYPE_N + * + */ +struct ipa_reg_rsrc_grp_xy_rsrc_type_n { + u32 x_min; + u32 x_max; + u32 y_min; + u32 y_max; +}; + +/** + * struct ipa_reg_qsb_max_writes - IPA_QSB_MAX_WRITES field register + * @qmb_0_max_writes: Max number of outstanding writes for GEN_QMB_0 + * @qmb_1_max_writes: Max number of outstanding writes for GEN_QMB_1 + */ +struct ipa_reg_qsb_max_writes { + u32 qmb_0_max_writes; + u32 qmb_1_max_writes; +}; + +/** + * struct ipa_reg_qsb_max_reads - IPA_QSB_MAX_READS field register + * @qmb_0_max_reads: Max number of outstanding reads for GEN_QMB_0 + * @qmb_1_max_reads: Max number of outstanding reads for GEN_QMB_1 + */ +struct ipa_reg_qsb_max_reads { + u32 qmb_0_max_reads; + u32 qmb_1_max_reads; +}; + +/** struct ipa_reg_idle_indication_cfg - IPA_IDLE_INDICATION_CFG field register + * @enter_idle_debounce_thresh: configure the debounce threshold + * @const_non_idle_enable: enable the asserting of the IDLE value and DCD + */ +struct ipa_reg_idle_indication_cfg { + u32 enter_idle_debounce_thresh; + u32 const_non_idle_enable; +}; + +/* Initialize the IPA register subsystem */ +int ipa_reg_init(phys_addr_t phys_addr, size_t size); +void ipa_reg_exit(void); + +void ipa_reg_route(struct ipa_reg_route *route, u32 ep_id); +void ipa_reg_endp_init_hdr_cons(struct ipa_reg_endp_init_hdr *init_hdr, + u32 header_size, u32 metadata_offset, + u32 length_offset); +void ipa_reg_endp_init_hdr_prod(struct ipa_reg_endp_init_hdr *init_hdr, + u32 header_size, u32 metadata_offset, + u32 length_offset); +void ipa_reg_endp_init_hdr_ext_cons(struct ipa_reg_endp_init_hdr_ext *hdr_ext, + u32 pad_align, bool pad_included); +void ipa_reg_endp_init_hdr_ext_prod(struct ipa_reg_endp_init_hdr_ext *hdr_ext, + u32 pad_align); +void ipa_reg_endp_init_aggr_cons(struct 
ipa_reg_endp_init_aggr *init_aggr, + u32 byte_limit, u32 packet_limit, + bool close_on_eof); +void ipa_reg_endp_init_aggr_prod(struct ipa_reg_endp_init_aggr *init_aggr, + enum ipa_aggr_en aggr_en, + enum ipa_aggr_type aggr_type); +void ipa_reg_aggr_force_close(struct ipa_reg_aggr_force_close *force_close, + u32 pipe_bitmap); +void ipa_reg_endp_init_mode_cons(struct ipa_reg_endp_init_mode *init_mode); +void ipa_reg_endp_init_mode_prod(struct ipa_reg_endp_init_mode *init_mode, + enum ipa_mode mode, u32 dest_endp); +void ipa_reg_endp_init_cfg_cons(struct ipa_reg_endp_init_cfg *init_cfg, + enum ipa_cs_offload_en offload_type); +void ipa_reg_endp_init_cfg_prod(struct ipa_reg_endp_init_cfg *init_cfg, + enum ipa_cs_offload_en offload_type, + u32 metadata_offset); +void ipa_reg_endp_init_ctrl(struct ipa_reg_endp_init_ctrl *init_ctrl, + bool suspend); +void ipa_reg_endp_init_deaggr_cons( + struct ipa_reg_endp_init_deaggr *init_deaggr); +void ipa_reg_endp_init_deaggr_prod( + struct ipa_reg_endp_init_deaggr *init_deaggr); +void ipa_reg_endp_init_seq_cons(struct ipa_reg_endp_init_seq *init_seq); +void ipa_reg_endp_init_seq_prod(struct ipa_reg_endp_init_seq *init_seq, + enum ipa_seq_type seq_type); +void ipa_reg_endp_init_hdr_metadata_mask_cons( + struct ipa_reg_endp_init_hdr_metadata_mask *metadata_mask, + u32 mask); +void ipa_reg_endp_init_hdr_metadata_mask_prod( + struct ipa_reg_endp_init_hdr_metadata_mask *metadata_mask); +void ipa_reg_endp_status_cons(struct ipa_reg_endp_status *endp_status, + bool enable); +void ipa_reg_endp_status_prod(struct ipa_reg_endp_status *endp_status, + bool enable, u32 endp); + +void ipa_reg_hash_tuple(struct ipa_reg_hash_tuple *tuple); + +void ipa_reg_rsrc_grp_xy_rsrc_type_n( + struct ipa_reg_rsrc_grp_xy_rsrc_type_n *limits, + u32 x_min, u32 x_max, u32 y_min, u32 y_max); + +void ipa_reg_qsb_max_writes(struct ipa_reg_qsb_max_writes *max_writes, + u32 qmb_0_max_writes, u32 qmb_1_max_writes); +void ipa_reg_qsb_max_reads(struct ipa_reg_qsb_max_reads *max_reads, + u32 qmb_0_max_reads, u32 qmb_1_max_reads); + +void ipa_reg_idle_indication_cfg(struct ipa_reg_idle_indication_cfg *indication, + u32 debounce_thresh, bool non_idle_enable); + +/* Get the offset of an n-parameterized register */ +u32 ipa_reg_n_offset(enum ipa_reg reg, u32 n); + +/* Get the offset of a register */ +static inline u32 ipa_reg_offset(enum ipa_reg reg) +{ + return ipa_reg_n_offset(reg, 0); +} + +/* ipa_read_reg_n() - Get the raw value of n-parameterized register */ +u32 ipa_read_reg_n(enum ipa_reg reg, u32 n); + +/* ipa_write_reg_n() - Write a raw value to an n-param register */ +void ipa_write_reg_n(enum ipa_reg reg, u32 n, u32 val); + +/* ipa_read_reg_n_fields() - Get the parsed value of an n-param register */ +void ipa_read_reg_n_fields(enum ipa_reg reg, u32 n, void *fields); + +/* ipa_write_reg_n_fields() - Write a parsed value to an n-param register */ +void ipa_write_reg_n_fields(enum ipa_reg reg, u32 n, const void *fields); + +/* ipa_read_reg() - Get the raw value from a register */ +static inline u32 ipa_read_reg(enum ipa_reg reg) +{ + return ipa_read_reg_n(reg, 0); +} + +/* ipa_write_reg() - Write a raw value to a register*/ +static inline void ipa_write_reg(enum ipa_reg reg, u32 val) +{ + ipa_write_reg_n(reg, 0, val); +} + +/* ipa_read_reg_fields() - Get the parsed value of a register */ +static inline void ipa_read_reg_fields(enum ipa_reg reg, void *fields) +{ + ipa_read_reg_n_fields(reg, 0, fields); +} + +/* ipa_write_reg_fields() - Write a parsed value to a register */ +static inline void 
ipa_write_reg_fields(enum ipa_reg reg, const void *fields) +{ + ipa_write_reg_n_fields(reg, 0, fields); +} + +u32 ipa_reg_aggr_max_byte_limit(void); +u32 ipa_reg_aggr_max_packet_limit(void); + +#endif /* _IPA_REG_H_ */ From patchwork Wed Nov 7 00:32:49 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 150364 Delivered-To: patch@linaro.org Received: by 2002:a2e:299d:0:0:0:0:0 with SMTP id p29-v6csp4548626ljp; Tue, 6 Nov 2018 16:33:45 -0800 (PST) X-Google-Smtp-Source: AJdET5cveJM35aFRREGosl8dG8dRge9NYxEg3cLLYJ/aTD3zgtlast+uMA53Rw/5SlacB5IfGcQt X-Received: by 2002:a62:5343:: with SMTP id h64-v6mr28224468pfb.226.1541550825747; Tue, 06 Nov 2018 16:33:45 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1541550825; cv=none; d=google.com; s=arc-20160816; b=cDFmVZ9ea2Rovn/cYKVYgWVoUSgfVeZSytwrQYh/Tm056HQTeyzAcMCp1a8guByhnd gPq2UeZkBUXwfgh5hmp0feC1XrcbXIBQmn6SyzKtCOnMsOEf97i2FEgymbsFtIOQaA8j WPKNytTV4WOgEG+MmO8XImGP4TTGy3sKNW523PK2P7+akpz39r825bU/Y4PwMw5TAgA/ KOwg+sFwzyfIoJGjUKi0BMhrB49nfCqCJajlLn8gVPDS680iV948tx+Yg+F0kDKdaGXP fzw+CHJx0vk7MzKFxC2nDDr6+7/CNh0nfwY9OEl3S4Glwzt/st44keLbfvxAwBqtKcuG 0XUA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature; bh=y7CaTTa4nb0tf4qOqTHqu8UAGVAco9SAQMvrUTafhKk=; b=XaPdVdTIaA5AJ9leRHvUv/CHlVzWP8VezpTlrTTQYqGFOlPRgITJEuIydKfxTTUNe1 Nc2DOaOrdC8iTc2U57qwXDzqCkXLqbciv1Sr9RBv6w02lu3cg47c+jGwnIuLVkWABG6A GF7mRrkXu7DvF3KW8PpQSyW1oTgy2d9cngkxuThqboCxidlTKlLDRYHp3Cy6oeuC0mvK rJqKPbMzVgQHlsdWlfgBhUuuEKmqc0mvKOjmwHthECDkD7QW05gRoyQ/CYPY4n10BHKt wu72kFVF7AhnF1/bSjSpSYoYdYhFbcW0VZX/mcUpfQhNJx/wRWIgZG9h1KmteKmV51+v NAgg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=Ug6ozgBx; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id w18-v6si50330095pfg.70.2018.11.06.16.33.45; Tue, 06 Nov 2018 16:33:45 -0800 (PST) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=Ug6ozgBx; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389261AbeKGKBg (ORCPT + 32 others); Wed, 7 Nov 2018 05:01:36 -0500 Received: from mail-it1-f194.google.com ([209.85.166.194]:34946 "EHLO mail-it1-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389208AbeKGKBg (ORCPT ); Wed, 7 Nov 2018 05:01:36 -0500 Received: by mail-it1-f194.google.com with SMTP id v11so14310707itj.0 for ; Tue, 06 Nov 2018 16:33:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=y7CaTTa4nb0tf4qOqTHqu8UAGVAco9SAQMvrUTafhKk=; b=Ug6ozgBx1wWiThxiUZjnaj9UVKA70KosvxOQcTN8B9woSU5z0bdhVZbYJbe/+NRNBb Qxq+HPqFSCvREJjTygxae0wpJhAvxDuQWrX9El4y+XZW9wY9UTAgVabtNxcDIq72kWSy DHb3FKlIgAbSeB350keBdzPXXpC+QunO5CXvs= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=y7CaTTa4nb0tf4qOqTHqu8UAGVAco9SAQMvrUTafhKk=; b=DfEpMNHfT81Pf7GBqCw4k8tL3rdBppc8wIv5diwEHY5iirOPHtxUgnBzM4niKIIU1n 4fLDwCkopFtKh8VP3fx4uWbASFmSMwWXxHt7h9uxN1gFRJG3mnt9pJXNCbDdmN7Vm37u xcP3Ht8kbw9+r1DJNhu7ZWmA3YW73V6ngoVy6q8gwlHbw31dUQSrdtOz/LxzEOePsXxv kTrQsklWeiVZGL+FQH35i5mmSYoxNFhvqXEM+jyS0sNYMxJb66snuqFZcum7qbVrKq1v 7qkb0dCZRdUIVJcZRo+51KOtPdZLblqjKX2q2MH3fOh/XNQ9ursBLiWM49l+UaqLGvCA Ryrw== X-Gm-Message-State: AGRZ1gJzT8Mq0Q72UZSrP0Qjbel7b/P+BFNQHDtsHWAbLUvVUWeRXPLn s05GozToM7IgGLN/DlkVkF95Lw== X-Received: by 2002:a24:ee8a:: with SMTP id b132-v6mr114169iti.17.1541550820212; Tue, 06 Nov 2018 16:33:40 -0800 (PST) Received: from shibby.gateway.innflux.com ([66.228.239.218]) by smtp.gmail.com with ESMTPSA id e184-v6sm1061128ite.9.2018.11.06.16.33.37 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 06 Nov 2018 16:33:39 -0800 (PST) From: Alex Elder To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org Cc: netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, syadagir@codeaurora.org, mjavid@codeaurora.org, robh+dt@kernel.org, mark.rutland@arm.com Subject: [RFC PATCH 11/12] soc: qcom: ipa: IPA rmnet interface Date: Tue, 6 Nov 2018 18:32:49 -0600 Message-Id: <20181107003250.5832-12-elder@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20181107003250.5832-1-elder@linaro.org> References: <20181107003250.5832-1-elder@linaro.org> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The IPA uses "rmnet" as a way to present remote network resources as if they were local to the AP. 
IPA interfaces representing networks accessible via the modem are represented as rmnet interfaces, implemented by the "rmnet data driver" found here: drivers/net/ethernet/qualcomm/rmnet/ The IPA is able to perform aggregation of packets, as well as checksum offload. These options (plus others, such as configuring MTU size) are configurable using an ioctl interface. In addition, rmnet devices support multiplexing. TX packets are handed to the data path layer, and when their transmission is complete the notification callback will be called. The data path code posts RX packets to the hardware, and when they are filled they are supplied here by a receive notification callback. The IPA driver currently does not support the modem shutting down (or crashing). But the rmnet_ipa device roughly represents the availability of networks reachable by the modem. If the modem is operational, an ipa_rmnet network device will be available. Modem operation is managed by the remoteproc subsystem. Note: This portion of the driver will be heavily affected by planned rework on the data path code. Signed-off-by: Alex Elder --- drivers/net/ipa/msm_rmnet.h | 120 +++++ drivers/net/ipa/rmnet_config.h | 31 ++ drivers/net/ipa/rmnet_ipa.c | 805 +++++++++++++++++++++++++++++++++ 3 files changed, 956 insertions(+) create mode 100644 drivers/net/ipa/msm_rmnet.h create mode 100644 drivers/net/ipa/rmnet_config.h create mode 100644 drivers/net/ipa/rmnet_ipa.c -- 2.17.1 diff --git a/drivers/net/ipa/msm_rmnet.h b/drivers/net/ipa/msm_rmnet.h new file mode 100644 index 000000000000..042380fd53fb --- /dev/null +++ b/drivers/net/ipa/msm_rmnet.h @@ -0,0 +1,120 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ +#ifndef _MSM_RMNET_H_ +#define _MSM_RMNET_H_ + +/* Bitmap macros for RmNET driver operation mode. 
*/ +#define RMNET_MODE_NONE 0x00 +#define RMNET_MODE_LLP_ETH 0x01 +#define RMNET_MODE_LLP_IP 0x02 +#define RMNET_MODE_QOS 0x04 + +/* IOCTL commands + * Values chosen to not conflict with other drivers in the ecosystem + */ + +#define RMNET_IOCTL_SET_LLP_ETHERNET 0x000089f1 /* Set Ethernet protocol */ +#define RMNET_IOCTL_SET_LLP_IP 0x000089f2 /* Set RAWIP protocol */ +#define RMNET_IOCTL_GET_LLP 0x000089f3 /* Get link protocol */ +#define RMNET_IOCTL_SET_QOS_ENABLE 0x000089f4 /* Set QoS header enabled */ +#define RMNET_IOCTL_SET_QOS_DISABLE 0x000089f5 /* Set QoS header disabled*/ +#define RMNET_IOCTL_GET_QOS 0x000089f6 /* Get QoS header state */ +#define RMNET_IOCTL_GET_OPMODE 0x000089f7 /* Get operation mode */ +#define RMNET_IOCTL_OPEN 0x000089f8 /* Open transport port */ +#define RMNET_IOCTL_CLOSE 0x000089f9 /* Close transport port */ +#define RMNET_IOCTL_FLOW_ENABLE 0x000089fa /* Flow enable */ +#define RMNET_IOCTL_FLOW_DISABLE 0x000089fb /* Flow disable */ +#define RMNET_IOCTL_FLOW_SET_HNDL 0x000089fc /* Set flow handle */ +#define RMNET_IOCTL_EXTENDED 0x000089fd /* Extended IOCTLs */ + +/* RmNet Data Required IOCTLs */ +#define RMNET_IOCTL_GET_SUPPORTED_FEATURES 0x0000 /* Get features */ +#define RMNET_IOCTL_SET_MRU 0x0001 /* Set MRU */ +#define RMNET_IOCTL_GET_MRU 0x0002 /* Get MRU */ +#define RMNET_IOCTL_GET_EPID 0x0003 /* Get endpoint ID */ +#define RMNET_IOCTL_GET_DRIVER_NAME 0x0004 /* Get driver name */ +#define RMNET_IOCTL_ADD_MUX_CHANNEL 0x0005 /* Add MUX ID */ +#define RMNET_IOCTL_SET_EGRESS_DATA_FORMAT 0x0006 /* Set EDF */ +#define RMNET_IOCTL_SET_INGRESS_DATA_FORMAT 0x0007 /* Set IDF */ +#define RMNET_IOCTL_SET_AGGREGATION_COUNT 0x0008 /* Set agg count */ +#define RMNET_IOCTL_GET_AGGREGATION_COUNT 0x0009 /* Get agg count */ +#define RMNET_IOCTL_SET_AGGREGATION_SIZE 0x000a /* Set agg size */ +#define RMNET_IOCTL_GET_AGGREGATION_SIZE 0x000b /* Get agg size */ +#define RMNET_IOCTL_FLOW_CONTROL 0x000c /* Do flow control */ +#define RMNET_IOCTL_GET_DFLT_CONTROL_CHANNEL 0x000d /* For legacy use */ +#define RMNET_IOCTL_GET_HWSW_MAP 0x000e /* Get HW/SW map */ +#define RMNET_IOCTL_SET_RX_HEADROOM 0x000f /* RX Headroom */ +#define RMNET_IOCTL_GET_EP_PAIR 0x0010 /* Endpoint pair */ +#define RMNET_IOCTL_SET_QOS_VERSION 0x0011 /* 8/6 byte QoS hdr*/ +#define RMNET_IOCTL_GET_QOS_VERSION 0x0012 /* 8/6 byte QoS hdr*/ +#define RMNET_IOCTL_GET_SUPPORTED_QOS_MODES 0x0013 /* Get QoS modes */ +#define RMNET_IOCTL_SET_SLEEP_STATE 0x0014 /* Set sleep state */ +#define RMNET_IOCTL_SET_XLAT_DEV_INFO 0x0015 /* xlat dev name */ +#define RMNET_IOCTL_DEREGISTER_DEV 0x0016 /* Dereg a net dev */ +#define RMNET_IOCTL_GET_SG_SUPPORT 0x0017 /* Query sg support*/ + +/* Return values for the RMNET_IOCTL_GET_SUPPORTED_FEATURES IOCTL */ +#define RMNET_IOCTL_FEAT_NOTIFY_MUX_CHANNEL BIT(0) +#define RMNET_IOCTL_FEAT_SET_EGRESS_DATA_FORMAT BIT(1) +#define RMNET_IOCTL_FEAT_SET_INGRESS_DATA_FORMAT BIT(2) + +/* Input values for the RMNET_IOCTL_SET_EGRESS_DATA_FORMAT IOCTL */ +#define RMNET_IOCTL_EGRESS_FORMAT_AGGREGATION BIT(2) +#define RMNET_IOCTL_EGRESS_FORMAT_CHECKSUM BIT(4) + +/* Input values for the RMNET_IOCTL_SET_INGRESS_DATA_FORMAT IOCTL */ +#define RMNET_IOCTL_INGRESS_FORMAT_CHECKSUM BIT(4) +#define RMNET_IOCTL_INGRESS_FORMAT_AGG_DATA BIT(5) + +/* User space may not have this defined. 
*/ +#ifndef IFNAMSIZ +#define IFNAMSIZ 16 +#endif + +struct rmnet_ioctl_extended_s { + u32 extended_ioctl; + union { + u32 data; /* Generic data field for most extended IOCTLs */ + + /* Return values for + * RMNET_IOCTL_GET_DRIVER_NAME + * RMNET_IOCTL_GET_DFLT_CONTROL_CHANNEL + */ + char if_name[IFNAMSIZ]; + + /* Input values for the RMNET_IOCTL_ADD_MUX_CHANNEL IOCTL */ + struct { + u32 mux_id; + char vchannel_name[IFNAMSIZ]; + } rmnet_mux_val; + + /* Input values for the RMNET_IOCTL_FLOW_CONTROL IOCTL */ + struct { + u8 flow_mode; + u8 mux_id; + } flow_control_prop; + + /* Return values for RMNET_IOCTL_GET_EP_PAIR */ + struct { + u32 consumer_pipe_num; + u32 producer_pipe_num; + } ipa_ep_pair; + + struct { + u32 __data; /* Placeholder for legacy data*/ + u32 agg_size; + u32 agg_count; + } ingress_format; + } u; +}; + +struct rmnet_ioctl_data_s { + union { + u32 operation_mode; + u32 tcm_handle; + } u; +}; +#endif /* _MSM_RMNET_H_ */ diff --git a/drivers/net/ipa/rmnet_config.h b/drivers/net/ipa/rmnet_config.h new file mode 100644 index 000000000000..3b9a549ca1bd --- /dev/null +++ b/drivers/net/ipa/rmnet_config.h @@ -0,0 +1,31 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ +#ifndef _RMNET_CONFIG_H_ +#define _RMNET_CONFIG_H_ + +#include + +/* XXX We want to use struct rmnet_map_header, but that's currently defined in + * XXX drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h + * XXX We also want to use RMNET_MAP_GET_CD_BIT(Y), defined in the same file. + */ +struct rmnet_map_header_s { +#ifndef RMNET_USE_BIG_ENDIAN_STRUCTS + u8 pad_len : 6, + reserved_bit : 1, + cd_bit : 1; +#else + u8 cd_bit : 1, + reserved_bit : 1, + pad_len : 6; +#endif /* RMNET_USE_BIG_ENDIAN_STRUCTS */ + u8 mux_id; + u16 pkt_len; +} __aligned(1); + +#define RMNET_MAP_GET_CD_BIT(Y) (((struct rmnet_map_header_s *)Y->data)->cd_bit) + +#endif /* _RMNET_CONFIG_H_ */ diff --git a/drivers/net/ipa/rmnet_ipa.c b/drivers/net/ipa/rmnet_ipa.c new file mode 100644 index 000000000000..7006afe3a5ea --- /dev/null +++ b/drivers/net/ipa/rmnet_ipa.c @@ -0,0 +1,805 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ + +/* WWAN Transport Network Driver. 
*/ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "msm_rmnet.h" +#include "rmnet_config.h" +#include "ipa_qmi.h" +#include "ipa_i.h" + +#define DRIVER_NAME "wwan_ioctl" +#define IPA_WWAN_DEV_NAME "rmnet_ipa%d" + +#define MUX_CHANNEL_MAX 10 /* max mux channels */ + +#define NAPI_WEIGHT 60 + +#define WWAN_DATA_LEN 2000 +#define HEADROOM_FOR_QMAP 8 /* for mux header */ +#define TAILROOM 0 /* for padding by mux layer */ + +#define DEFAULT_OUTSTANDING_HIGH 128 +#define DEFAULT_OUTSTANDING_HIGH_CTL (DEFAULT_OUTSTANDING_HIGH + 32) +#define DEFAULT_OUTSTANDING_LOW 64 + +#define IPA_APPS_WWAN_CONS_RING_COUNT 256 +#define IPA_APPS_WWAN_PROD_RING_COUNT 512 + +static int ipa_rmnet_poll(struct napi_struct *napi, int budget); + +/** struct ipa_wwan_private - WWAN private data + * @net: network interface struct implemented by this driver + * @stats: iface statistics + * @outstanding_high: number of outstanding packets allowed + * @outstanding_low: number of outstanding packets which shall cause + * + * WWAN private - holds all relevant info about WWAN driver + */ +struct ipa_wwan_private { + struct net_device_stats stats; + atomic_t outstanding_pkts; + int outstanding_high_ctl; + int outstanding_high; + int outstanding_low; + struct napi_struct napi; +}; + +struct rmnet_ipa_context { + struct net_device *dev; + struct mutex mux_id_mutex; /* protects mux_id[] */ + u32 mux_id_count; + u32 mux_id[MUX_CHANNEL_MAX]; + u32 wan_prod_ep_id; + u32 wan_cons_ep_id; + struct mutex ep_setup_mutex; /* endpoint setup/teardown */ +}; + +static bool initialized; /* Avoid duplicate initialization */ + +static struct rmnet_ipa_context rmnet_ipa_ctx_struct; +static struct rmnet_ipa_context *rmnet_ipa_ctx = &rmnet_ipa_ctx_struct; + +/** wwan_open() - Opens the wwan network interface */ +static int ipa_wwan_open(struct net_device *dev) +{ + struct ipa_wwan_private *wwan_ptr = netdev_priv(dev); + + napi_enable(&wwan_ptr->napi); + netif_start_queue(dev); + + return 0; +} + +/** ipa_wwan_stop() - Stops the wwan network interface. */ +static int ipa_wwan_stop(struct net_device *dev) +{ + netif_stop_queue(dev); + + return 0; +} + +/** ipa_wwan_xmit() - Transmits an skb. + * + * @skb: skb to be transmitted + * @dev: network device + * + * Return codes: + * NETDEV_TX_OK: Success + * NETDEV_TX_BUSY: Error while transmitting the skb. Try again later + */ +static int ipa_wwan_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct ipa_wwan_private *wwan_ptr = netdev_priv(dev); + unsigned int skb_len; + int outstanding; + + if (skb->protocol != htons(ETH_P_MAP)) { + dev_kfree_skb_any(skb); + dev->stats.tx_dropped++; + return NETDEV_TX_OK; + } + + /* Control packets are sent even if queue is stopped. We + * always honor the data and control high-water marks. + */ + outstanding = atomic_read(&wwan_ptr->outstanding_pkts); + if (!RMNET_MAP_GET_CD_BIT(skb)) { /* Data packet? */ + if (netif_queue_stopped(dev)) + return NETDEV_TX_BUSY; + if (outstanding >= wwan_ptr->outstanding_high) + return NETDEV_TX_BUSY; + } else if (outstanding >= wwan_ptr->outstanding_high_ctl) { + return NETDEV_TX_BUSY; + } + + /* both data packets and commands will be routed to + * IPA_CLIENT_Q6_WAN_CONS based on status configuration. 
+ */ + skb_len = skb->len; + if (ipa_tx_dp(IPA_CLIENT_APPS_WAN_PROD, skb)) + return NETDEV_TX_BUSY; + + atomic_inc(&wwan_ptr->outstanding_pkts); + dev->stats.tx_packets++; + dev->stats.tx_bytes += skb_len; + + return NETDEV_TX_OK; +} + +/** apps_ipa_tx_complete_notify() - Tx completion notify + * + * @priv: driver context + * @evt: event type + * @data: data provided with event + * + * Check that the packet is the one we sent and release it. + * This function will be called in deferred context in the IPA workqueue. + */ +static void apps_ipa_tx_complete_notify(void *priv, enum ipa_dp_evt_type evt, + unsigned long data) +{ + struct ipa_wwan_private *wwan_ptr; + struct net_device *dev = priv; + struct sk_buff *skb; + + skb = (struct sk_buff *)data; + + if (dev != rmnet_ipa_ctx->dev) { + dev_kfree_skb_any(skb); + return; + } + + if (evt != IPA_WRITE_DONE) { + ipa_err("unsupported evt on Tx callback, Drop the packet\n"); + dev_kfree_skb_any(skb); + dev->stats.tx_dropped++; + return; + } + + wwan_ptr = netdev_priv(dev); + atomic_dec(&wwan_ptr->outstanding_pkts); + __netif_tx_lock_bh(netdev_get_tx_queue(dev, 0)); + if (netif_queue_stopped(dev) && + atomic_read(&wwan_ptr->outstanding_pkts) < + wwan_ptr->outstanding_low) { + netif_wake_queue(dev); + } + + __netif_tx_unlock_bh(netdev_get_tx_queue(dev, 0)); + dev_kfree_skb_any(skb); +} + +/** apps_ipa_packet_receive_notify() - Rx notify + * + * @priv: driver context + * @evt: event type + * @data: data provided with event + * + * IPA will pass a packet to the Linux network stack with skb->data + */ +static void apps_ipa_packet_receive_notify(void *priv, enum ipa_dp_evt_type evt, + unsigned long data) +{ + struct ipa_wwan_private *wwan_ptr; + struct net_device *dev = priv; + + wwan_ptr = netdev_priv(dev); + if (evt == IPA_RECEIVE) { + struct sk_buff *skb = (struct sk_buff *)data; + int ret; + unsigned int packet_len = skb->len; + + skb->dev = rmnet_ipa_ctx->dev; + skb->protocol = htons(ETH_P_MAP); + + ret = netif_receive_skb(skb); + if (ret) { + pr_err_ratelimited("fail on netif_receive_skb\n"); + dev->stats.rx_dropped++; + } + dev->stats.rx_packets++; + dev->stats.rx_bytes += packet_len; + } else if (evt == IPA_CLIENT_START_POLL) { + napi_schedule(&wwan_ptr->napi); + } else if (evt == IPA_CLIENT_COMP_NAPI) { + napi_complete(&wwan_ptr->napi); + } else { + ipa_err("Invalid evt %d received in wan_ipa_receive\n", evt); + } +} + +/** handle_ingress_format() - Ingress data format configuration */ +static int handle_ingress_format(struct net_device *dev, + struct rmnet_ioctl_extended_s *in) +{ + enum ipa_cs_offload_en offload_type; + enum ipa_client_type client; + u32 metadata_offset; + u32 rx_buffer_size; + u32 channel_count; + u32 length_offset; + u32 header_size; + bool aggr_active; + u32 aggr_bytes; + u32 aggr_count; + u32 aggr_size; /* in KB */ + u32 ep_id; + int ret; + + client = IPA_CLIENT_APPS_WAN_CONS; + channel_count = IPA_APPS_WWAN_CONS_RING_COUNT; + header_size = sizeof(struct rmnet_map_header_s); + metadata_offset = offsetof(struct rmnet_map_header_s, mux_id); + length_offset = offsetof(struct rmnet_map_header_s, pkt_len); + offload_type = IPA_CS_OFFLOAD_NONE; + aggr_bytes = IPA_GENERIC_AGGR_BYTE_LIMIT; + aggr_count = IPA_GENERIC_AGGR_PKT_LIMIT; + aggr_active = false; + + if (in->u.data & RMNET_IOCTL_INGRESS_FORMAT_CHECKSUM) + offload_type = IPA_CS_OFFLOAD_DL; + + if (in->u.data & RMNET_IOCTL_INGRESS_FORMAT_AGG_DATA) { + aggr_bytes = in->u.ingress_format.agg_size; + aggr_count = in->u.ingress_format.agg_count; + aggr_active = true; + } + + if (aggr_bytes >
ipa_reg_aggr_max_byte_limit()) + return -EINVAL; + + if (aggr_count > ipa_reg_aggr_max_packet_limit()) + return -EINVAL; + + /* Compute the buffer size required to handle the requested + * aggregation byte limit. The aggr_byte_limit value is + * expressed as a number of KB, but we derive that value + * after computing the buffer size to use (in bytes). The + * buffer must be sufficient to hold one IPA_MTU-sized + * packet *after* the limit is reached. + * + * (Note that the rx_buffer_size value reflects only the + * space for data, not any standard metadata or headers.) + */ + rx_buffer_size = ipa_aggr_byte_limit_buf_size(aggr_bytes); + + /* Account for the extra IPA_MTU past the limit in the + * buffer, and convert the result to the KB units the + * aggr_byte_limit uses. + */ + aggr_size = (rx_buffer_size - IPA_MTU) / SZ_1K; + + mutex_lock(&rmnet_ipa_ctx->ep_setup_mutex); + + if (rmnet_ipa_ctx->wan_cons_ep_id != IPA_EP_ID_BAD) { + ret = -EBUSY; + goto out_unlock; + } + + ret = ipa_ep_alloc(client); + if (ret < 0) + goto out_unlock; + ep_id = ret; + + /* Record our endpoint configuration parameters */ + ipa_endp_init_hdr_cons(ep_id, header_size, metadata_offset, + length_offset); + ipa_endp_init_hdr_ext_cons(ep_id, 0, true); + ipa_endp_init_aggr_cons(ep_id, aggr_size, aggr_count, true); + ipa_endp_init_cfg_cons(ep_id, offload_type); + ipa_endp_init_hdr_metadata_mask_cons(ep_id, 0xff000000); + ipa_endp_status_cons(ep_id, !aggr_active); + + ipa_ctx->ipa_client_apps_wan_cons_agg_gro = aggr_active; + + ret = ipa_ep_setup(ep_id, channel_count, 1, rx_buffer_size, + apps_ipa_packet_receive_notify, dev); + if (ret) + ipa_ep_free(ep_id); + else + rmnet_ipa_ctx->wan_cons_ep_id = ep_id; +out_unlock: + mutex_unlock(&rmnet_ipa_ctx->ep_setup_mutex); + + return ret; +} + +/** handle_egress_format() - Egress data format configuration */ +static int handle_egress_format(struct net_device *dev, + struct rmnet_ioctl_extended_s *e) +{ + enum ipa_cs_offload_en offload_type; + enum ipa_client_type dst_client; + enum ipa_client_type client; + enum ipa_aggr_type aggr_type; + enum ipa_aggr_en aggr_en; + u32 channel_count; + u32 length_offset; + u32 header_align; + u32 header_offset; + u32 header_size; + u32 ep_id; + int ret; + + client = IPA_CLIENT_APPS_WAN_PROD; + dst_client = IPA_CLIENT_APPS_LAN_CONS; + channel_count = IPA_APPS_WWAN_PROD_RING_COUNT; + header_size = sizeof(struct rmnet_map_header_s); + offload_type = IPA_CS_OFFLOAD_NONE; + aggr_en = IPA_BYPASS_AGGR; + aggr_type = 0; /* ignored if BYPASS */ + header_offset = 0; + length_offset = 0; + header_align = 0; + + if (e->u.data & RMNET_IOCTL_EGRESS_FORMAT_CHECKSUM) { + offload_type = IPA_CS_OFFLOAD_UL; + header_offset = sizeof(struct rmnet_map_header_s) / 4; + header_size += sizeof(u32); + } + + if (e->u.data & RMNET_IOCTL_EGRESS_FORMAT_AGGREGATION) { + aggr_en = IPA_ENABLE_DEAGGR; + aggr_type = IPA_QCMAP; + length_offset = offsetof(struct rmnet_map_header_s, pkt_len); + header_align = ilog2(sizeof(u32)); + } + + mutex_lock(&rmnet_ipa_ctx->ep_setup_mutex); + + if (rmnet_ipa_ctx->wan_prod_ep_id != IPA_EP_ID_BAD) { + ret = -EBUSY; + goto out_unlock; + } + + ret = ipa_ep_alloc(client); + if (ret < 0) + goto out_unlock; + ep_id = ret; + + if (aggr_en == IPA_ENABLE_DEAGGR && !ipa_endp_aggr_support(ep_id)) { + ret = -ENOTSUPP; + goto out_unlock; + } + + /* We really do want 0 metadata offset */ + ipa_endp_init_hdr_prod(ep_id, header_size, 0, length_offset); + ipa_endp_init_hdr_ext_prod(ep_id, header_align); + ipa_endp_init_mode_prod(ep_id, IPA_BASIC, 
dst_client); + ipa_endp_init_aggr_prod(ep_id, aggr_en, aggr_type); + ipa_endp_init_cfg_prod(ep_id, offload_type, header_offset); + ipa_endp_init_seq_prod(ep_id); + ipa_endp_init_deaggr_prod(ep_id); + /* Enable source notification status for exception packets + * (i.e. QMAP commands) to be routed to modem. + */ + ipa_endp_status_prod(ep_id, true, IPA_CLIENT_Q6_WAN_CONS); + + /* Use a deferred interrupting no-op to reduce completion interrupts */ + ipa_no_intr_init(ep_id); + + ret = ipa_ep_setup(ep_id, channel_count, 1, 0, + apps_ipa_tx_complete_notify, dev); + if (ret) + ipa_ep_free(ep_id); + else + rmnet_ipa_ctx->wan_prod_ep_id = ep_id; + +out_unlock: + mutex_unlock(&rmnet_ipa_ctx->ep_setup_mutex); + + return ret; +} + +/** ipa_wwan_add_mux_channel() - add a mux_id */ +static int ipa_wwan_add_mux_channel(u32 mux_id) +{ + int ret; + u32 i; + + mutex_lock(&rmnet_ipa_ctx->mux_id_mutex); + + if (rmnet_ipa_ctx->mux_id_count >= MUX_CHANNEL_MAX) { + ret = -EFAULT; + goto out; + } + + for (i = 0; i < rmnet_ipa_ctx->mux_id_count; i++) + if (mux_id == rmnet_ipa_ctx->mux_id[i]) + break; + + /* Record the mux_id if it hasn't already been seen */ + if (i == rmnet_ipa_ctx->mux_id_count) + rmnet_ipa_ctx->mux_id[rmnet_ipa_ctx->mux_id_count++] = mux_id; + ret = 0; +out: + mutex_unlock(&rmnet_ipa_ctx->mux_id_mutex); + + return ret; +} + +/** ipa_wwan_ioctl_extended() - rmnet extended I/O control */ +static int ipa_wwan_ioctl_extended(struct net_device *dev, void __user *data) +{ + struct rmnet_ioctl_extended_s edata = { }; + size_t size = sizeof(edata); + + if (copy_from_user(&edata, data, size)) + return -EFAULT; + + switch (edata.extended_ioctl) { + case RMNET_IOCTL_GET_SUPPORTED_FEATURES: /* Get features */ + edata.u.data = RMNET_IOCTL_FEAT_NOTIFY_MUX_CHANNEL; + edata.u.data |= RMNET_IOCTL_FEAT_SET_EGRESS_DATA_FORMAT; + edata.u.data |= RMNET_IOCTL_FEAT_SET_INGRESS_DATA_FORMAT; + goto copy_out; + + case RMNET_IOCTL_GET_EPID: /* Get endpoint ID */ + edata.u.data = 1; + goto copy_out; + + case RMNET_IOCTL_GET_DRIVER_NAME: /* Get driver name */ + memcpy(&edata.u.if_name, rmnet_ipa_ctx->dev->name, IFNAMSIZ); + goto copy_out; + + case RMNET_IOCTL_ADD_MUX_CHANNEL: /* Add MUX ID */ + return ipa_wwan_add_mux_channel(edata.u.rmnet_mux_val.mux_id); + + case RMNET_IOCTL_SET_EGRESS_DATA_FORMAT: /* Egress data format */ + return handle_egress_format(dev, &edata) ? -EFAULT : 0; + + case RMNET_IOCTL_SET_INGRESS_DATA_FORMAT: /* Ingress format */ + return handle_ingress_format(dev, &edata) ? 
-EFAULT : 0; + + case RMNET_IOCTL_GET_EP_PAIR: /* Get endpoint pair */ + edata.u.ipa_ep_pair.consumer_pipe_num = + ipa_client_ep_id(IPA_CLIENT_APPS_WAN_PROD); + edata.u.ipa_ep_pair.producer_pipe_num = + ipa_client_ep_id(IPA_CLIENT_APPS_WAN_CONS); + goto copy_out; + + case RMNET_IOCTL_GET_SG_SUPPORT: /* Get SG support */ + edata.u.data = 1; /* Scatter/gather is always supported */ + goto copy_out; + + /* Unsupported requests */ + case RMNET_IOCTL_SET_MRU: /* Set MRU */ + case RMNET_IOCTL_GET_MRU: /* Get MRU */ + case RMNET_IOCTL_GET_AGGREGATION_COUNT: /* Get agg count */ + case RMNET_IOCTL_SET_AGGREGATION_COUNT: /* Set agg count */ + case RMNET_IOCTL_GET_AGGREGATION_SIZE: /* Get agg size */ + case RMNET_IOCTL_SET_AGGREGATION_SIZE: /* Set agg size */ + case RMNET_IOCTL_FLOW_CONTROL: /* Do flow control */ + case RMNET_IOCTL_GET_DFLT_CONTROL_CHANNEL: /* For legacy use */ + case RMNET_IOCTL_GET_HWSW_MAP: /* Get HW/SW map */ + case RMNET_IOCTL_SET_RX_HEADROOM: /* Set RX Headroom */ + case RMNET_IOCTL_SET_QOS_VERSION: /* Set 8/6 byte QoS */ + case RMNET_IOCTL_GET_QOS_VERSION: /* Get 8/6 byte QoS */ + case RMNET_IOCTL_GET_SUPPORTED_QOS_MODES: /* Get QoS modes */ + case RMNET_IOCTL_SET_SLEEP_STATE: /* Set sleep state */ + case RMNET_IOCTL_SET_XLAT_DEV_INFO: /* xlat dev name */ + case RMNET_IOCTL_DEREGISTER_DEV: /* Deregister netdev */ + return -ENOTSUPP; /* Defined, but unsupported command */ + + default: + return -EINVAL; /* Invalid (unrecognized) command */ + } + +copy_out: + return copy_to_user(data, &edata, size) ? -EFAULT : 0; +} + +/** ipa_wwan_ioctl() - I/O control for wwan network driver */ +static int ipa_wwan_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) +{ + struct rmnet_ioctl_data_s ioctl_data = { }; + void __user *data; + size_t size; + + data = ifr->ifr_ifru.ifru_data; + size = sizeof(ioctl_data); + + switch (cmd) { + /* These features are implied; alternatives are not supported */ + case RMNET_IOCTL_SET_LLP_IP: /* RAW IP protocol */ + case RMNET_IOCTL_SET_QOS_DISABLE: /* QoS header disabled */ + return 0; + + /* These features are not supported; use alternatives */ + case RMNET_IOCTL_SET_LLP_ETHERNET: /* Ethernet protocol */ + case RMNET_IOCTL_SET_QOS_ENABLE: /* QoS header enabled */ + case RMNET_IOCTL_GET_OPMODE: /* Get operation mode */ + case RMNET_IOCTL_FLOW_ENABLE: /* Flow enable */ + case RMNET_IOCTL_FLOW_DISABLE: /* Flow disable */ + case RMNET_IOCTL_FLOW_SET_HNDL: /* Set flow handle */ + return -ENOTSUPP; + + case RMNET_IOCTL_GET_LLP: /* Get link protocol */ + ioctl_data.u.operation_mode = RMNET_MODE_LLP_IP; + goto copy_out; + + case RMNET_IOCTL_GET_QOS: /* Get QoS header state */ + ioctl_data.u.operation_mode = RMNET_MODE_NONE; + goto copy_out; + + case RMNET_IOCTL_OPEN: /* Open transport port */ + case RMNET_IOCTL_CLOSE: /* Close transport port */ + return 0; + + case RMNET_IOCTL_EXTENDED: /* Extended IOCTLs */ + return ipa_wwan_ioctl_extended(dev, data); + + default: + return -EINVAL; + } + +copy_out: + return copy_to_user(data, &ioctl_data, size) ? 
-EFAULT : 0; +} + +static const struct net_device_ops ipa_wwan_ops_ip = { + .ndo_open = ipa_wwan_open, + .ndo_stop = ipa_wwan_stop, + .ndo_start_xmit = ipa_wwan_xmit, + .ndo_do_ioctl = ipa_wwan_ioctl, +}; + +/** wwan_setup() - Setup the wwan network driver */ +static void ipa_wwan_setup(struct net_device *dev) +{ + dev->netdev_ops = &ipa_wwan_ops_ip; + ether_setup(dev); + dev->header_ops = NULL; /* No header (override ether_setup() value) */ + dev->type = ARPHRD_RAWIP; + dev->hard_header_len = 0; + dev->max_mtu = WWAN_DATA_LEN; + dev->mtu = dev->max_mtu; + dev->addr_len = 0; + dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST); + dev->needed_headroom = HEADROOM_FOR_QMAP; + dev->needed_tailroom = TAILROOM; + dev->watchdog_timeo = msecs_to_jiffies(10 * MSEC_PER_SEC); +} + +/** ipa_wwan_probe() - Network probe function */ +static int ipa_wwan_probe(struct platform_device *pdev) +{ + struct ipa_wwan_private *wwan_ptr; + struct net_device *dev; + int ret; + + mutex_init(&rmnet_ipa_ctx->ep_setup_mutex); + mutex_init(&rmnet_ipa_ctx->mux_id_mutex); + + /* Mark client handles bad until we initialize them */ + rmnet_ipa_ctx->wan_prod_ep_id = IPA_EP_ID_BAD; + rmnet_ipa_ctx->wan_cons_ep_id = IPA_EP_ID_BAD; + + ret = ipa_modem_smem_init(); + if (ret) + goto err_clear_ctx; + + /* start A7 QMI service/client */ + ipa_qmi_init(); + + /* initialize wan-driver netdev */ + dev = alloc_netdev(sizeof(struct ipa_wwan_private), + IPA_WWAN_DEV_NAME, + NET_NAME_UNKNOWN, + ipa_wwan_setup); + if (!dev) { + ipa_err("no memory for netdev\n"); + ret = -ENOMEM; + goto err_clear_ctx; + } + rmnet_ipa_ctx->dev = dev; + wwan_ptr = netdev_priv(dev); + wwan_ptr->outstanding_high_ctl = DEFAULT_OUTSTANDING_HIGH_CTL; + wwan_ptr->outstanding_high = DEFAULT_OUTSTANDING_HIGH; + wwan_ptr->outstanding_low = DEFAULT_OUTSTANDING_LOW; + atomic_set(&wwan_ptr->outstanding_pkts, 0); + + /* Enable SG support in netdevice. 
*/ + dev->hw_features |= NETIF_F_SG; + + netif_napi_add(dev, &wwan_ptr->napi, ipa_rmnet_poll, NAPI_WEIGHT); + ret = register_netdev(dev); + if (ret) { + ipa_err("unable to register ipa_netdev %d rc=%d\n", 0, ret); + goto err_napi_del; + } + + /* offline charging mode */ + ipa_proxy_clk_unvote(); + + /* Till the system is suspended, we keep the clock open */ + ipa_client_add(); + + initialized = true; + + return 0; + +err_napi_del: + netif_napi_del(&wwan_ptr->napi); + free_netdev(dev); +err_clear_ctx: + memset(&rmnet_ipa_ctx_struct, 0, sizeof(rmnet_ipa_ctx_struct)); + + return ret; +} + +static int ipa_wwan_remove(struct platform_device *pdev) +{ + struct ipa_wwan_private *wwan_ptr = netdev_priv(rmnet_ipa_ctx->dev); + + dev_info(&pdev->dev, "rmnet_ipa started deinitialization\n"); + + mutex_lock(&rmnet_ipa_ctx->ep_setup_mutex); + + ipa_client_add(); + + if (rmnet_ipa_ctx->wan_cons_ep_id != IPA_EP_ID_BAD) { + ipa_ep_teardown(rmnet_ipa_ctx->wan_cons_ep_id); + rmnet_ipa_ctx->wan_cons_ep_id = IPA_EP_ID_BAD; + } + + if (rmnet_ipa_ctx->wan_prod_ep_id != IPA_EP_ID_BAD) { + ipa_ep_teardown(rmnet_ipa_ctx->wan_prod_ep_id); + rmnet_ipa_ctx->wan_prod_ep_id = IPA_EP_ID_BAD; + } + + ipa_client_remove(); + + netif_napi_del(&wwan_ptr->napi); + mutex_unlock(&rmnet_ipa_ctx->ep_setup_mutex); + unregister_netdev(rmnet_ipa_ctx->dev); + + if (rmnet_ipa_ctx->dev) + free_netdev(rmnet_ipa_ctx->dev); + rmnet_ipa_ctx->dev = NULL; + + mutex_destroy(&rmnet_ipa_ctx->mux_id_mutex); + mutex_destroy(&rmnet_ipa_ctx->ep_setup_mutex); + + initialized = false; + + dev_info(&pdev->dev, "rmnet_ipa completed deinitialization\n"); + + return 0; +} + +/** rmnet_ipa_ap_suspend() - suspend callback for runtime_pm + * @dev: pointer to device + * + * This callback will be invoked by the runtime_pm framework when an AP suspend + * operation is invoked, usually by pressing a suspend button. + * + * Returns -EAGAIN to runtime_pm framework in case there are pending packets + * in the Tx queue. This will postpone the suspend operation until all the + * pending packets will be transmitted. + * + * In case there are no packets to send, releases the WWAN0_PROD entity. + * As an outcome, the number of IPA active clients should be decremented + * until IPA clocks can be gated. + */ +static int rmnet_ipa_ap_suspend(struct device *dev) +{ + struct net_device *netdev = rmnet_ipa_ctx->dev; + struct ipa_wwan_private *wwan_ptr; + int ret; + + if (!netdev) { + ipa_err("netdev is NULL.\n"); + ret = 0; + goto bail; + } + + netif_tx_lock_bh(netdev); + wwan_ptr = netdev_priv(netdev); + if (!wwan_ptr) { + ipa_err("wwan_ptr is NULL.\n"); + ret = 0; + goto unlock_and_bail; + } + + /* Do not allow A7 to suspend in case there are outstanding packets */ + if (atomic_read(&wwan_ptr->outstanding_pkts) != 0) { + ret = -EAGAIN; + goto unlock_and_bail; + } + + /* Make sure that there is no Tx operation ongoing */ + netif_stop_queue(netdev); + + ret = 0; + ipa_client_remove(); + +unlock_and_bail: + netif_tx_unlock_bh(netdev); +bail: + + return ret; +} + +/** rmnet_ipa_ap_resume() - resume callback for runtime_pm + * @dev: pointer to device + * + * This callback will be invoked by the runtime_pm framework when an AP resume + * operation is invoked. + * + * Enables the network interface queue and returns success to the + * runtime_pm framework. 
+ */ +static int rmnet_ipa_ap_resume(struct device *dev) +{ + struct net_device *netdev = rmnet_ipa_ctx->dev; + + ipa_client_add(); + if (netdev) + netif_wake_queue(netdev); + + return 0; +} + +static const struct of_device_id rmnet_ipa_dt_match[] = { + {.compatible = "qcom,rmnet-ipa"}, + {}, +}; +MODULE_DEVICE_TABLE(of, rmnet_ipa_dt_match); + +static const struct dev_pm_ops rmnet_ipa_pm_ops = { + .suspend_noirq = rmnet_ipa_ap_suspend, + .resume_noirq = rmnet_ipa_ap_resume, +}; + +static struct platform_driver rmnet_ipa_driver = { + .driver = { + .name = "rmnet_ipa", + .owner = THIS_MODULE, + .pm = &rmnet_ipa_pm_ops, + .of_match_table = rmnet_ipa_dt_match, + }, + .probe = ipa_wwan_probe, + .remove = ipa_wwan_remove, +}; + +int ipa_wwan_init(void) +{ + if (initialized) + return 0; + + return platform_driver_register(&rmnet_ipa_driver); +} + +void ipa_wwan_cleanup(void) +{ + platform_driver_unregister(&rmnet_ipa_driver); + memset(&rmnet_ipa_ctx_struct, 0, sizeof(rmnet_ipa_ctx_struct)); +} + +static int ipa_rmnet_poll(struct napi_struct *napi, int budget) +{ + return ipa_rx_poll(rmnet_ipa_ctx->wan_cons_ep_id, budget); +} + +MODULE_DESCRIPTION("WWAN Network Interface"); +MODULE_LICENSE("GPL v2");
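For a sense of how the extended ioctls above are reached from user space, a minimal sketch follows; it asks the driver for checksum offload and aggregation on the egress (TX) path. The RMNET_IOCTL_* values and struct rmnet_ioctl_extended_s are assumed to come from the rmnet ioctl header shared with this driver, which is not part of this series, and the helper name here is only illustrative.

#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
/* RMNET_IOCTL_* and struct rmnet_ioctl_extended_s are assumed to come
 * from the rmnet ioctl header used with this driver (not shown here).
 */

/* Hypothetical helper: request checksum offload and aggregation on the
 * egress (TX) path of the named rmnet device.
 */
static int set_egress_format(int sock, const char *ifname)
{
	struct rmnet_ioctl_extended_s edata = { 0 };
	struct ifreq ifr = { 0 };

	edata.extended_ioctl = RMNET_IOCTL_SET_EGRESS_DATA_FORMAT;
	edata.u.data = RMNET_IOCTL_EGRESS_FORMAT_CHECKSUM |
		       RMNET_IOCTL_EGRESS_FORMAT_AGGREGATION;

	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_ifru.ifru_data = (void *)&edata;

	/* Reaches ipa_wwan_ioctl(), which calls handle_egress_format() */
	return ioctl(sock, RMNET_IOCTL_EXTENDED, &ifr);
}

Any ordinary AF_INET datagram socket can serve as the ioctl target; the kernel looks up the device named in ifr_name and, assuming RMNET_IOCTL_EXTENDED sits in the SIOCDEVPRIVATE range as the rmnet interface traditionally does, dispatches the request to the driver's ndo_do_ioctl handler.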
From patchwork Wed Nov 7 00:32:50 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 150365 From: Alex Elder To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org Cc: netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, syadagir@codeaurora.org, mjavid@codeaurora.org, robh+dt@kernel.org, mark.rutland@arm.com Subject: [RFC PATCH 12/12] soc: qcom: ipa: build and "ipa_i.h" Date: Tue, 6 Nov 2018 18:32:50 -0600 Message-Id: <20181107003250.5832-13-elder@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20181107003250.5832-1-elder@linaro.org> References: <20181107003250.5832-1-elder@linaro.org> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org This last IPA code patch includes "Kconfig" for kernel configuration and a "Makefile" to build the code. The main configuration option enabling the code to be built is "CONFIG_IPA". 
A second one, "CONFIG_IPA_ASSERT", is on by default if CONFIG_IPA is selected, but can be disabled to cause ipa_assert() calls to not be included. Finally, "ipa_i.h" is a sort of dumping ground header file, declaring things used and needed throughout the code. (I expect much of this can be doled out into separate headers.) Signed-off-by: Alex Elder --- drivers/net/ipa/Kconfig | 30 ++ drivers/net/ipa/Makefile | 7 + drivers/net/ipa/ipa_i.h | 573 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 610 insertions(+) create mode 100644 drivers/net/ipa/Kconfig create mode 100644 drivers/net/ipa/Makefile create mode 100644 drivers/net/ipa/ipa_i.h -- 2.17.1 diff --git a/drivers/net/ipa/Kconfig b/drivers/net/ipa/Kconfig new file mode 100644 index 000000000000..f8ea9363f532 --- /dev/null +++ b/drivers/net/ipa/Kconfig @@ -0,0 +1,30 @@ +config IPA + tristate "Qualcomm IPA support" + depends on NET + select QCOM_QMI_HELPERS + select QCOM_MDT_LOADER + default n + help + Choose Y here to include support for the Qualcomm IP + Accelerator (IPA), a hardware block present in some + Qualcomm SoCs. The IPA is a programmable protocol + processor that is capable of generic hardware handling + of IP packets, including routing, filtering, and NAT. + Currently the IPA driver supports only basic transport + of network traffic between the AP and modem, on the + Qualcomm SDM845 SoC. + + If unsure, say N. + +config IPA_ASSERT + bool "Enable IPA assertions" + depends on IPA + default y + help + Incorporate IPA assertion verification in the build. This + cause various design assumptions to be checked at runtime, + generating a report (and a crash) if any assumed condition + does not hold. You may wish to disable this to avoid the + overhead of checking. + + If unsure doubt, say "Y" here. diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile new file mode 100644 index 000000000000..6b1de4ab2dad --- /dev/null +++ b/drivers/net/ipa/Makefile @@ -0,0 +1,7 @@ +obj-$(CONFIG_IPA) += ipa.o + +ipa-y := ipa_main.o ipa_dp.o \ + ipa_utils.o ipa_interrupts.o \ + ipa_uc.o gsi.o rmnet_ipa.o \ + ipa_qmi.o ipa_qmi_msg.o \ + ipahal.o ipa_reg.o ipa_dma.o diff --git a/drivers/net/ipa/ipa_i.h b/drivers/net/ipa/ipa_i.h new file mode 100644 index 000000000000..efbb2cb7177f --- /dev/null +++ b/drivers/net/ipa/ipa_i.h @@ -0,0 +1,573 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018 Linaro Ltd. + */ +#ifndef _IPA_I_H_ +#define _IPA_I_H_ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ipa_dma.h" +#include "ipa_reg.h" +#include "gsi.h" + +#define IPA_MTU 1500 + +#define IPA_EP_COUNT_MAX 31 +#define IPA_LAN_RX_HEADER_LENGTH 0 +#define IPA_DL_CHECKSUM_LENGTH 8 +#define IPA_GENERIC_RX_POOL_SZ 192 + +#define IPA_GENERIC_AGGR_BYTE_LIMIT (6 * SZ_1K) /* bytes */ +#define IPA_GENERIC_AGGR_TIME_LIMIT 1 /* milliseconds */ +#define IPA_GENERIC_AGGR_PKT_LIMIT 0 + +#define IPA_MAX_STATUS_STAT_NUM 30 + +/* An explicitly bad endpoint identifier value */ +#define IPA_EP_ID_BAD (~(u32)0) + +#define IPA_MEM_CANARY_VAL 0xdeadbeef + +#define IPA_GSI_CHANNEL_STOP_MAX_RETRY 10 +#define IPA_GSI_CHANNEL_STOP_PKT_SIZE 1 + +/** + * DOC: + * The IPA has a block of shared memory, divided into regions used for + * specific purposes. Values below define this layout (i.e., the + * sizes and locations of all these regions). 
One or two "canary" + * values sit between some regions, as a check for erroneous writes + * outside a region. There are combinations of and routing tables, + * covering IPv4 and IPv6, and for each of those, hashed and + * non-hashed variants. About half of routing table entries are + * reserved for modem use. + */ + +/* The maximum number of filter table entries (IPv4, IPv6; hashed and not) */ +#define IPA_MEM_FLT_COUNT 14 + +/* The number of routing table entries (IPv4, IPv6; hashed and not) */ +#define IPA_MEM_RT_COUNT 15 + + /* Which routing table entries are for the modem */ +#define IPA_MEM_MODEM_RT_COUNT 8 +#define IPA_MEM_MODEM_RT_INDEX_MIN 0 +#define IPA_MEM_MODEM_RT_INDEX_MAX \ + (IPA_MEM_MODEM_RT_INDEX_MIN + IPA_MEM_MODEM_RT_COUNT - 1) + +#define IPA_MEM_V4_FLT_HASH_OFST 0x288 +#define IPA_MEM_V4_FLT_NHASH_OFST 0x308 +#define IPA_MEM_V6_FLT_HASH_OFST 0x388 +#define IPA_MEM_V6_FLT_NHASH_OFST 0x408 +#define IPA_MEM_V4_RT_HASH_OFST 0x488 +#define IPA_MEM_V4_RT_NHASH_OFST 0x508 +#define IPA_MEM_V6_RT_HASH_OFST 0x588 +#define IPA_MEM_V6_RT_NHASH_OFST 0x608 +#define IPA_MEM_MODEM_HDR_OFST 0x688 +#define IPA_MEM_MODEM_HDR_SIZE 0x140 +#define IPA_MEM_APPS_HDR_OFST 0x7c8 +#define IPA_MEM_APPS_HDR_SIZE 0x0 +#define IPA_MEM_MODEM_HDR_PROC_CTX_OFST 0x7d0 +#define IPA_MEM_MODEM_HDR_PROC_CTX_SIZE 0x200 +#define IPA_MEM_APPS_HDR_PROC_CTX_OFST 0x9d0 +#define IPA_MEM_APPS_HDR_PROC_CTX_SIZE 0x200 +#define IPA_MEM_MODEM_OFST 0xbd8 +#define IPA_MEM_MODEM_SIZE 0x1024 +#define IPA_MEM_END_OFST 0x2000 +#define IPA_MEM_UC_EVENT_RING_OFST 0x1c00 /* v3.5 and later */ + +#define ipa_debug(fmt, args...) dev_dbg(ipa_ctx->dev, fmt, ## args) +#define ipa_err(fmt, args...) dev_err(ipa_ctx->dev, fmt, ## args) + +#define ipa_bug() \ + do { \ + ipa_err("an unrecoverable error has occurred\n"); \ + BUG(); \ + } while (0) + +#define ipa_bug_on(condition) \ + do { \ + if (condition) { \ + ipa_err("ipa_bug_on(%s) failed!\n", #condition); \ + ipa_bug(); \ + } \ + } while (0) + +#ifdef CONFIG_IPA_ASSERT + +/* Communicate a condition assumed by the code. This is intended as + * an informative statement about something that should always be true. + * + * N.B.: Conditions asserted must not incorporate code with side-effects + * that are necessary for correct execution. And an assertion + * failure should not be expected to force a crash (because all + * assertion code is optionally compiled out). + */ +#define ipa_assert(cond) \ + do { \ + if (!(cond)) { \ + ipa_err("ipa_assert(%s) failed!\n", #cond); \ + ipa_bug(); \ + } \ + } while (0) +#else /* !CONFIG_IPA_ASSERT */ + +#define ipa_assert(expr) ((void)0) + +#endif /* !CONFIG_IPA_ASSERT */ + +enum ipa_ees { + IPA_EE_AP = 0, + IPA_EE_Q6 = 1, + IPA_EE_UC = 2, +}; + +/** + * enum ipa_client_type - names for the various IPA "clients" + * + * These are from the perspective of the clients, e.g. HSIC1_PROD + * means HSIC client is the producer and IPA is the consumer. + * PROD clients are always even, and CONS clients are always odd. 
+ */ +enum ipa_client_type { + IPA_CLIENT_WLAN1_PROD = 10, + IPA_CLIENT_WLAN1_CONS = 11, + + IPA_CLIENT_WLAN2_CONS = 13, + + IPA_CLIENT_WLAN3_CONS = 15, + + IPA_CLIENT_USB_PROD = 18, + IPA_CLIENT_USB_CONS = 19, + + IPA_CLIENT_USB_DPL_CONS = 27, + + IPA_CLIENT_APPS_LAN_PROD = 32, + IPA_CLIENT_APPS_LAN_CONS = 33, + + IPA_CLIENT_APPS_WAN_PROD = 34, + IPA_CLIENT_APPS_WAN_CONS = 35, + + IPA_CLIENT_APPS_CMD_PROD = 36, + + IPA_CLIENT_Q6_LAN_PROD = 50, + IPA_CLIENT_Q6_LAN_CONS = 51, + + IPA_CLIENT_Q6_WAN_PROD = 52, + IPA_CLIENT_Q6_WAN_CONS = 53, + + IPA_CLIENT_Q6_CMD_PROD = 54, + + IPA_CLIENT_TEST_PROD = 62, + IPA_CLIENT_TEST_CONS = 63, + + IPA_CLIENT_TEST1_PROD = 64, + IPA_CLIENT_TEST1_CONS = 65, + + IPA_CLIENT_TEST2_PROD = 66, + IPA_CLIENT_TEST2_CONS = 67, + + IPA_CLIENT_TEST3_PROD = 68, + IPA_CLIENT_TEST3_CONS = 69, + + IPA_CLIENT_TEST4_PROD = 70, + IPA_CLIENT_TEST4_CONS = 71, + + IPA_CLIENT_DUMMY_CONS = 73, + + IPA_CLIENT_MAX, +}; + +static inline bool ipa_producer(enum ipa_client_type client) +{ + return !((u32)client & 1); /* Even numbers are producers */ +} + +static inline bool ipa_consumer(enum ipa_client_type client) +{ + return !ipa_producer(client); +} + +static inline bool ipa_modem_consumer(enum ipa_client_type client) +{ + return client == IPA_CLIENT_Q6_LAN_CONS || + client == IPA_CLIENT_Q6_WAN_CONS; +} + +static inline bool ipa_modem_producer(enum ipa_client_type client) +{ + return client == IPA_CLIENT_Q6_LAN_PROD || + client == IPA_CLIENT_Q6_WAN_PROD || + client == IPA_CLIENT_Q6_CMD_PROD; +} + +static inline bool ipa_ap_consumer(enum ipa_client_type client) +{ + return client == IPA_CLIENT_APPS_LAN_CONS || + client == IPA_CLIENT_APPS_WAN_CONS; +} + +/** + * enum ipa_irq_type - IPA Interrupt Type + * + * Used to register handlers for IPA interrupts. 
+ */ +enum ipa_irq_type { + IPA_INVALID_IRQ = 0, + IPA_UC_IRQ_0, + IPA_UC_IRQ_1, + IPA_TX_SUSPEND_IRQ, + IPA_IRQ_MAX +}; + +/** + * typedef ipa_irq_handler_t - irq handler/callback type + * @param ipa_irq_type - interrupt type + * @param interrupt_data - interrupt information data + * + * Callback function registered by ipa_add_interrupt_handler() to + * handle a specific interrupt type + */ +typedef void (*ipa_irq_handler_t)(enum ipa_irq_type interrupt, + u32 interrupt_data); + +/** + * struct ipa_tx_suspend_irq_data - Interrupt data for IPA_TX_SUSPEND_IRQ + * @endpoints: Bitmask of endpoints which cause IPA_TX_SUSPEND_IRQ interrupt + */ +struct ipa_tx_suspend_irq_data { + u32 endpoints; +}; + +/** + * enum ipa_dp_evt_type - Data path event type + */ +enum ipa_dp_evt_type { + IPA_RECEIVE, + IPA_WRITE_DONE, + IPA_CLIENT_START_POLL, + IPA_CLIENT_COMP_NAPI, +}; + +typedef void (*ipa_notify_cb)(void *priv, enum ipa_dp_evt_type evt, + unsigned long data); + +/** + * struct ipa_ep_context - IPA end point context + * @allocated: True when the endpoint has been allocated + * @client: Client associated with the endpoint + * @channel_id: EP's GSI channel + * @evt_ring_id: EP's GSI channel event ring + * @priv: Pointer supplied when client_notify is called + * notified for new data avail + * @client_notify: Function called for event notification + * @napi_enabled: Endpoint uses NAPI + */ +struct ipa_ep_context { + bool allocated; + enum ipa_client_type client; + u32 channel_id; + u32 evt_ring_id; + bool bytes_xfered_valid; + u16 bytes_xfered; + + struct ipa_reg_endp_init_hdr init_hdr; + struct ipa_reg_endp_init_hdr_ext hdr_ext; + struct ipa_reg_endp_init_mode init_mode; + struct ipa_reg_endp_init_aggr init_aggr; + struct ipa_reg_endp_init_cfg init_cfg; + struct ipa_reg_endp_init_seq init_seq; + struct ipa_reg_endp_init_deaggr init_deaggr; + struct ipa_reg_endp_init_hdr_metadata_mask metadata_mask; + struct ipa_reg_endp_status status; + + void (*client_notify)(void *priv, enum ipa_dp_evt_type evt, + unsigned long data); + void *priv; + bool napi_enabled; + struct ipa_sys_context *sys; +}; + +/** + * enum ipa_desc_type - IPA decriptor type + */ +enum ipa_desc_type { + IPA_DATA_DESC, + IPA_DATA_DESC_SKB, + IPA_DATA_DESC_SKB_PAGED, + IPA_IMM_CMD_DESC, +}; + +/** + * struct ipa_desc - IPA descriptor + * @type: Type of data in the descriptor + * @len_opcode: Length of the payload, or opcode for immediate commands + * @payload: Points to descriptor payload (e.g., socket buffer) + * @callback: Completion callback + * @user1: Pointer data supplied to callback + * @user2: Integer data supplied with callback + */ +struct ipa_desc { + enum ipa_desc_type type; + u16 len_opcode; + void *payload; + void (*callback)(void *user1, int user2); + void *user1; + int user2; +}; + +/** + * enum ipahal_imm_cmd: IPA immediate commands + * + * All immediate commands are issued using the APPS_CMD_PROD + * endpoint. The numeric values here are the opcodes for IPA v3.5.1 + * hardware + */ +enum ipahal_imm_cmd { + IPA_IMM_CMD_IP_V4_FILTER_INIT = 3, + IPA_IMM_CMD_IP_V6_FILTER_INIT = 4, + IPA_IMM_CMD_IP_V4_ROUTING_INIT = 7, + IPA_IMM_CMD_IP_V6_ROUTING_INIT = 8, + IPA_IMM_CMD_HDR_INIT_LOCAL = 9, + IPA_IMM_CMD_DMA_TASK_32B_ADDR = 17, + IPA_IMM_CMD_DMA_SHARED_MEM = 19, +}; + +/** + * struct ipa_transport_pm - Transport power management data + * @dec_clients: ? + * @transport_pm_mutex: Mutex to protect the transport_pm functionality. 
+ */ +struct ipa_transport_pm { + atomic_t dec_clients; + struct mutex transport_pm_mutex; /* XXX comment this */ +}; + +struct ipa_smp2p_info { + struct qcom_smem_state *valid_state; + struct qcom_smem_state *enabled_state; + unsigned int valid_bit; + unsigned int enabled_bit; + unsigned int clock_query_irq; + unsigned int post_init_irq; + bool ipa_clk_on; + bool res_sent; +}; + +struct ipa_dma_task_info { + struct ipa_dma_mem mem; + void *payload; +}; + +/** + * struct ipa_context - IPA context + * @filter_bitmap: End-points supporting filtering bitmap + * @ipa_irq: IRQ number used for IPA + * @ipa_phys: Physical address of IPA register memory + * @gsi: Pointer to GSI structure + * @dev: IPA device structure + * @ep: Endpoint array + * @dp: Data path information + * @smem_size: Size of shared memory + * @smem_offset: Offset of the usable area in shared memory + * @active_clients_mutex: Used when active clients count changes from/to 0 + * @active_clients_count: Active client count + * @power_mgmt_wq: Workqueue for power management + * @transport_pm: Transport power management related information + * @cmd_prod_ep_id: Endpoint for APPS_CMD_PROD + * @lan_cons_ep_id: Endpoint for APPS_LAN_CONS + * @memory_path: Path for memory interconnect + * @imem_path: Path for internal memory interconnect + * @config_path: Path for configuration interconnect + * @modem_clk_vote_valid: Whether proxy clock vote is held for modem + * @ep_count: Number of endpoints available in hardware + * @uc_ctx: Microcontroller context + * @wakeup_lock: Lock protecting updates to wakeup_count + * @wakeup_count: Count of times wakelock is acquired + * @wakeup: Wakeup source + * @ipa_client_apps_wan_cons_agg_gro: APPS_WAN_CONS generic receive offload + * @smp2p_info: Information related to SMP2P + * @dma_task_info: Preallocated DMA task + */ +struct ipa_context { + u32 filter_bitmap; + u32 ipa_irq; + phys_addr_t ipa_phys; + struct gsi *gsi; + struct device *dev; + + struct ipa_ep_context ep[IPA_EP_COUNT_MAX]; + struct ipa_dp *dp; + u32 smem_size; + u16 smem_offset; + struct mutex active_clients_mutex; /* count changes from/to 0 */ + atomic_t active_clients_count; + struct workqueue_struct *power_mgmt_wq; + struct ipa_transport_pm transport_pm; + u32 cmd_prod_ep_id; + u32 lan_cons_ep_id; + struct icc_path *memory_path; + struct icc_path *imem_path; + struct icc_path *config_path; + struct clk *core_clock; + bool modem_clk_vote_valid; + u32 ep_count; + + struct ipa_uc_ctx *uc_ctx; + + spinlock_t wakeup_lock; /* protects updates to wakeup_count */ + u32 wakeup_count; + struct wakeup_source wakeup; + + /* RMNET_IOCTL_INGRESS_FORMAT_AGG_DATA */ + bool ipa_client_apps_wan_cons_agg_gro; + /* M-release support to know client endpoint */ + struct ipa_smp2p_info smp2p_info; + struct ipa_dma_task_info dma_task_info; +}; + +extern struct ipa_context *ipa_ctx; + +int ipa_wwan_init(void); +void ipa_wwan_cleanup(void); + +int ipa_stop_gsi_channel(u32 ep_id); + +void ipa_cfg_ep(u32 ep_id); + +int ipa_tx_dp(enum ipa_client_type dst, struct sk_buff *skb); + +bool ipa_endp_aggr_support(u32 ep_id); +enum ipa_seq_type ipa_endp_seq_type(u32 ep_id); + +void ipa_endp_init_hdr_cons(u32 ep_id, u32 header_size, + u32 metadata_offset, u32 length_offset); +void ipa_endp_init_hdr_prod(u32 ep_id, u32 header_size, + u32 metadata_offset, u32 length_offset); +void ipa_endp_init_hdr_ext_cons(u32 ep_id, u32 pad_align, + bool pad_included); +void ipa_endp_init_hdr_ext_prod(u32 ep_id, u32 pad_align); +void ipa_endp_init_mode_cons(u32 ep_id); +void 
ipa_endp_init_mode_prod(u32 ep_id, enum ipa_mode mode, + enum ipa_client_type dst_client); +void ipa_endp_init_aggr_cons(u32 ep_id, u32 size, u32 count, + bool close_on_eof); +void ipa_endp_init_aggr_prod(u32 ep_id, enum ipa_aggr_en aggr_en, + enum ipa_aggr_type aggr_type); +void ipa_endp_init_cfg_cons(u32 ep_id, + enum ipa_cs_offload_en offload_type); +void ipa_endp_init_cfg_prod(u32 ep_id, enum ipa_cs_offload_en offload_type, + u32 metadata_offset); +void ipa_endp_init_seq_cons(u32 ep_id); +void ipa_endp_init_seq_prod(u32 ep_id); +void ipa_endp_init_deaggr_cons(u32 ep_id); +void ipa_endp_init_deaggr_prod(u32 ep_id); +void ipa_endp_init_hdr_metadata_mask_cons(u32 ep_id, u32 mask); +void ipa_endp_init_hdr_metadata_mask_prod(u32 ep_id); +void ipa_endp_status_cons(u32 ep_id, bool enable); +void ipa_endp_status_prod(u32 ep_id, bool enable, + enum ipa_client_type client); +int ipa_ep_alloc(enum ipa_client_type client); +void ipa_ep_free(u32 ep_id); + +void ipa_no_intr_init(u32 prod_ep_id); + +int ipa_ep_setup(u32 ep_id, u32 channel_count, u32 evt_ring_mult, + u32 rx_buffer_size, + void (*client_notify)(void *priv, enum ipa_dp_evt_type type, + unsigned long data), + void *priv); + +void ipa_ep_teardown(u32 ep_id); + +void ipa_rx_switch_to_poll_mode(struct ipa_sys_context *sys); + +void ipa_add_interrupt_handler(enum ipa_irq_type interrupt, + ipa_irq_handler_t handler); + +void ipa_remove_interrupt_handler(enum ipa_irq_type interrupt); + +void ipa_proxy_clk_vote(void); +void ipa_proxy_clk_unvote(void); + +u32 ipa_filter_bitmap_init(void); + +bool ipa_is_modem_ep(u32 ep_id); + +u32 ipa_client_ep_id(enum ipa_client_type client); +u32 ipa_client_channel_id(enum ipa_client_type client); +u32 ipa_client_tlv_count(enum ipa_client_type client); + +void ipa_init_hw(void); + +int ipa_interconnect_init(struct device *dev); +void ipa_interconnect_exit(void); + +int ipa_interconnect_enable(void); +int ipa_interconnect_disable(void); + +int ipa_send_cmd_timeout(struct ipa_desc *desc, u32 timeout); +static inline int ipa_send_cmd(struct ipa_desc *desc) +{ + return ipa_send_cmd_timeout(desc, 0); +} + +void ipa_client_add(void); +bool ipa_client_add_additional(void); +void ipa_client_remove(void); + +u32 ipa_aggr_byte_limit_buf_size(u32 byte_limit); + +void ipa_cfg_default_route(enum ipa_client_type client); + +int ipa_interrupts_init(void); + +void ipa_suspend_active_aggr_wa(u32 ep_id); +void ipa_lan_rx_cb(void *priv, enum ipa_dp_evt_type evt, unsigned long data); + +void ipa_sram_settings_read(void); + +int ipa_modem_smem_init(void); + +struct ipa_uc_ctx *ipa_uc_init(phys_addr_t phys_addr); +bool ipa_uc_loaded(void); +void ipa_uc_panic_notifier(void); + +u32 ipa_get_ep_count(void); +int ipa_ap_suspend(struct device *dev); +int ipa_ap_resume(struct device *dev); +void ipa_set_resource_groups_min_max_limits(void); +void ipa_ep_suspend_all(void); +void ipa_ep_resume_all(void); +void ipa_inc_acquire_wakelock(void); +void ipa_dec_release_wakelock(void); +int ipa_rx_poll(u32 ep_id, int budget); +void ipa_reset_freeze_vote(void); +void ipa_enable_dcd(void); + +int ipa_gsi_dma_task_alloc(void); +void ipa_gsi_dma_task_free(void); + +void ipa_set_flt_tuple_mask(u32 ep_id); +void ipa_set_rt_tuple_mask(int tbl_idx); + +void ipa_gsi_irq_rx_notify_cb(void *chan_data, u16 count); +void ipa_gsi_irq_tx_notify_cb(void *xfer_data); + +bool ipa_ep_polling(struct ipa_ep_context *ep); + +struct ipa_dp *ipa_dp_init(void); +void ipa_dp_exit(struct ipa_dp *dp); + +#endif /* _IPA_I_H_ */
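As a rough illustration of how the declarations above fit together, here is a minimal sketch of the alloc/configure/setup sequence a caller goes through, modeled on handle_egress_format() in rmnet_ipa.c. The ring count, the zero header parameters, and the subset of ipa_endp_init_*() calls shown are placeholders rather than values any real endpoint uses; a real caller programs the full set of endpoint registers held in struct ipa_ep_context before starting the endpoint.

#include "ipa_i.h"

/* Hypothetical example: bring up a producer (TX) endpoint directed at
 * the LAN consumer, using the API declared above.  Error handling
 * mirrors handle_egress_format(): a failed setup releases the endpoint.
 */
static int example_prod_ep_bringup(void *priv,
				   void (*notify)(void *priv,
						  enum ipa_dp_evt_type evt,
						  unsigned long data))
{
	u32 ep_id;
	int ret;

	ret = ipa_ep_alloc(IPA_CLIENT_APPS_WAN_PROD);
	if (ret < 0)
		return ret;
	ep_id = ret;

	/* Record the endpoint configuration (placeholder values; only a
	 * subset of the ipa_endp_init_*() calls is shown here).
	 */
	ipa_endp_init_hdr_prod(ep_id, 0, 0, 0);
	ipa_endp_init_mode_prod(ep_id, IPA_BASIC, IPA_CLIENT_APPS_LAN_CONS);
	ipa_endp_init_aggr_prod(ep_id, IPA_BYPASS_AGGR, 0);

	/* Allocate the GSI channel and event ring, then start the
	 * endpoint.  The ring count of 8 is a placeholder, and the 0
	 * buffer size reflects that a producer needs no receive buffers.
	 */
	ret = ipa_ep_setup(ep_id, 8, 1, 0, notify, priv);
	if (ret)
		ipa_ep_free(ep_id);

	return ret;
}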