From patchwork Fri Feb 28 22:41:48 2020
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 190174
Received: from localhost.localdomain (c-73-185-129-58.hsd1.mn.comcast.net.
[73.185.129.58]) by smtp.gmail.com with ESMTPSA id d188sm4637830ywe.50.2020.02.28.14.42.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 28 Feb 2020 14:42:11 -0800 (PST)
From: Alex Elder
To: Bjorn Andersson , Ohad Ben-Cohen , Arnd Bergmann , David Miller
Cc: Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 01/17] remoteproc: add IPA notification to q6v5 driver
Date: Fri, 28 Feb 2020 16:41:48 -0600
Message-Id: <20200228224204.17746-2-elder@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200228224204.17746-1-elder@linaro.org>
References: <20200228224204.17746-1-elder@linaro.org>
MIME-Version: 1.0
Sender: linux-arm-msm-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org

Set up a subdev in the q6v5 modem remoteproc driver that generates event notifications for the IPA driver to use for initialization and recovery following a modem shutdown or crash.

A pair of new functions provides a way for the IPA driver to register and deregister a notification callback function that will be called whenever modem events (about to boot, running, about to shut down, etc.) occur. A void pointer value (provided by the IPA driver at registration time) and an event type are supplied to the callback function.

One event, MODEM_REMOVING, is signaled whenever the q6v5 driver is about to remove the notification subdevice. It requires that the IPA driver deregister its callback.

This subdevice is used only by the modem subsystem (MSS) driver, so the code that adds the new subdev and allows registration and deregistration of the notifier is found in "qcom_q6v5_mss.c".

Signed-off-by: Alex Elder
---
NOTE: This was developed last year. Another proposal that addresses the same need in a more general way has since been posted. For now I'm simply including this in my IPA patch series to satisfy the need. If/when the other proposal lands upstream it won't be hard to adapt the IPA driver to use it.

					-Alex

 drivers/remoteproc/Kconfig | 6 ++
 drivers/remoteproc/Makefile | 1 +
 drivers/remoteproc/qcom_q6v5_ipa_notify.c | 85 +++++++++++++++++++
 drivers/remoteproc/qcom_q6v5_mss.c | 42 ++++++++-
 .../linux/remoteproc/qcom_q6v5_ipa_notify.h | 82 ++++++++++++++++++
 5 files changed, 214 insertions(+), 2 deletions(-)
 create mode 100644 drivers/remoteproc/qcom_q6v5_ipa_notify.c
 create mode 100644 include/linux/remoteproc/qcom_q6v5_ipa_notify.h

diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig index de3862c15fcc..80c3cac60fbe 100644 --- a/drivers/remoteproc/Kconfig +++ b/drivers/remoteproc/Kconfig @@ -167,6 +167,12 @@ config QCOM_Q6V5_WCSS Say y here to support the Qualcomm Peripheral Image Loader for the Hexagon V5 based WCSS remote processors.
+config QCOM_Q6V5_IPA_NOTIFY + tristate + depends on IPA + depends on QCOM_Q6V5_MSS + default IPA + config QCOM_SYSMON tristate "Qualcomm sysmon driver" depends on RPMSG diff --git a/drivers/remoteproc/Makefile b/drivers/remoteproc/Makefile index e30a1b15fbac..0effd3825035 100644 --- a/drivers/remoteproc/Makefile +++ b/drivers/remoteproc/Makefile @@ -21,6 +21,7 @@ obj-$(CONFIG_QCOM_Q6V5_ADSP) += qcom_q6v5_adsp.o obj-$(CONFIG_QCOM_Q6V5_MSS) += qcom_q6v5_mss.o obj-$(CONFIG_QCOM_Q6V5_PAS) += qcom_q6v5_pas.o obj-$(CONFIG_QCOM_Q6V5_WCSS) += qcom_q6v5_wcss.o +obj-$(CONFIG_QCOM_Q6V5_IPA_NOTIFY) += qcom_q6v5_ipa_notify.o obj-$(CONFIG_QCOM_SYSMON) += qcom_sysmon.o obj-$(CONFIG_QCOM_WCNSS_PIL) += qcom_wcnss_pil.o qcom_wcnss_pil-y += qcom_wcnss.o diff --git a/drivers/remoteproc/qcom_q6v5_ipa_notify.c b/drivers/remoteproc/qcom_q6v5_ipa_notify.c new file mode 100644 index 000000000000..e1c10a128bfd --- /dev/null +++ b/drivers/remoteproc/qcom_q6v5_ipa_notify.c @@ -0,0 +1,85 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Qualcomm IPA notification subdev support + * + * Copyright (C) 2019 Linaro Ltd. + */ + +#include +#include +#include +#include + +static void +ipa_notify_common(struct rproc_subdev *subdev, enum qcom_rproc_event event) +{ + struct qcom_rproc_ipa_notify *ipa_notify; + qcom_ipa_notify_t notify; + + ipa_notify = container_of(subdev, struct qcom_rproc_ipa_notify, subdev); + notify = ipa_notify->notify; + if (notify) + notify(ipa_notify->data, event); +} + +static int ipa_notify_prepare(struct rproc_subdev *subdev) +{ + ipa_notify_common(subdev, MODEM_STARTING); + + return 0; +} + +static int ipa_notify_start(struct rproc_subdev *subdev) +{ + ipa_notify_common(subdev, MODEM_RUNNING); + + return 0; +} + +static void ipa_notify_stop(struct rproc_subdev *subdev, bool crashed) + +{ + ipa_notify_common(subdev, crashed ? 
MODEM_CRASHED : MODEM_STOPPING); +} + +static void ipa_notify_unprepare(struct rproc_subdev *subdev) +{ + ipa_notify_common(subdev, MODEM_OFFLINE); +} + +static void ipa_notify_removing(struct rproc_subdev *subdev) +{ + ipa_notify_common(subdev, MODEM_REMOVING); +} + +/* Register the IPA notification subdevice with the Q6V5 MSS remoteproc */ +void qcom_add_ipa_notify_subdev(struct rproc *rproc, + struct qcom_rproc_ipa_notify *ipa_notify) +{ + ipa_notify->notify = NULL; + ipa_notify->data = NULL; + ipa_notify->subdev.prepare = ipa_notify_prepare; + ipa_notify->subdev.start = ipa_notify_start; + ipa_notify->subdev.stop = ipa_notify_stop; + ipa_notify->subdev.unprepare = ipa_notify_unprepare; + + rproc_add_subdev(rproc, &ipa_notify->subdev); +} +EXPORT_SYMBOL_GPL(qcom_add_ipa_notify_subdev); + +/* Remove the IPA notification subdevice */ +void qcom_remove_ipa_notify_subdev(struct rproc *rproc, + struct qcom_rproc_ipa_notify *ipa_notify) +{ + struct rproc_subdev *subdev = &ipa_notify->subdev; + + ipa_notify_removing(subdev); + + rproc_remove_subdev(rproc, subdev); + ipa_notify->notify = NULL; /* Make it obvious */ +} +EXPORT_SYMBOL_GPL(qcom_remove_ipa_notify_subdev); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("Qualcomm IPA notification remoteproc subdev"); diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c index 97093f4f58e1..ac60588ebe5a 100644 --- a/drivers/remoteproc/qcom_q6v5_mss.c +++ b/drivers/remoteproc/qcom_q6v5_mss.c @@ -22,6 +22,7 @@ #include #include #include +#include "linux/remoteproc/qcom_q6v5_ipa_notify.h" #include #include #include @@ -201,6 +202,7 @@ struct q6v5 { struct qcom_rproc_glink glink_subdev; struct qcom_rproc_subdev smd_subdev; struct qcom_rproc_ssr ssr_subdev; + struct qcom_rproc_ipa_notify ipa_notify_subdev; struct qcom_sysmon *sysmon; bool need_mem_protection; bool has_alt_reset; @@ -1540,6 +1542,39 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc) return 0; } +#if IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) + +/* Register IPA notification function */ +int qcom_register_ipa_notify(struct rproc *rproc, qcom_ipa_notify_t notify, + void *data) +{ + struct qcom_rproc_ipa_notify *ipa_notify; + struct q6v5 *qproc = rproc->priv; + + if (!notify) + return -EINVAL; + + ipa_notify = &qproc->ipa_notify_subdev; + if (ipa_notify->notify) + return -EBUSY; + + ipa_notify->notify = notify; + ipa_notify->data = data; + + return 0; +} +EXPORT_SYMBOL_GPL(qcom_register_ipa_notify); + +/* Deregister IPA notification function */ +void qcom_deregister_ipa_notify(struct rproc *rproc) +{ + struct q6v5 *qproc = rproc->priv; + + qproc->ipa_notify_subdev.notify = NULL; +} +EXPORT_SYMBOL_GPL(qcom_deregister_ipa_notify); +#endif /* !IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) */ + static int q6v5_probe(struct platform_device *pdev) { const struct rproc_hexagon_res *desc; @@ -1664,10 +1699,11 @@ static int q6v5_probe(struct platform_device *pdev) qcom_add_glink_subdev(rproc, &qproc->glink_subdev); qcom_add_smd_subdev(rproc, &qproc->smd_subdev); qcom_add_ssr_subdev(rproc, &qproc->ssr_subdev, "mpss"); + qcom_add_ipa_notify_subdev(rproc, &qproc->ipa_notify_subdev); qproc->sysmon = qcom_add_sysmon_subdev(rproc, "modem", 0x12); if (IS_ERR(qproc->sysmon)) { ret = PTR_ERR(qproc->sysmon); - goto remove_ssr_subdev; + goto remove_ipa_subdev; } ret = rproc_add(rproc); @@ -1678,7 +1714,8 @@ static int q6v5_probe(struct platform_device *pdev) remove_sysmon_subdev: qcom_remove_sysmon_subdev(qproc->sysmon); -remove_ssr_subdev: +remove_ipa_subdev: + 
qcom_remove_ipa_notify_subdev(qproc->rproc, &qproc->ipa_notify_subdev); qcom_remove_ssr_subdev(qproc->rproc, &qproc->ssr_subdev); qcom_remove_smd_subdev(qproc->rproc, &qproc->smd_subdev); qcom_remove_glink_subdev(qproc->rproc, &qproc->glink_subdev); @@ -1700,6 +1737,7 @@ static int q6v5_remove(struct platform_device *pdev) rproc_del(rproc); qcom_remove_sysmon_subdev(qproc->sysmon); + qcom_remove_ipa_notify_subdev(rproc, &qproc->ipa_notify_subdev); qcom_remove_ssr_subdev(rproc, &qproc->ssr_subdev); qcom_remove_smd_subdev(rproc, &qproc->smd_subdev); qcom_remove_glink_subdev(rproc, &qproc->glink_subdev); diff --git a/include/linux/remoteproc/qcom_q6v5_ipa_notify.h b/include/linux/remoteproc/qcom_q6v5_ipa_notify.h new file mode 100644 index 000000000000..0820edc0ab7d --- /dev/null +++ b/include/linux/remoteproc/qcom_q6v5_ipa_notify.h @@ -0,0 +1,82 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (C) 2019 Linaro Ltd. */ + +#ifndef __QCOM_Q6V5_IPA_NOTIFY_H__ +#define __QCOM_Q6V5_IPA_NOTIFY_H__ + +#if IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) + +#include + +enum qcom_rproc_event { + MODEM_STARTING = 0, /* Modem is about to be started */ + MODEM_RUNNING = 1, /* Startup complete; modem is operational */ + MODEM_STOPPING = 2, /* Modem is about to shut down */ + MODEM_CRASHED = 3, /* Modem has crashed (implies stopping) */ + MODEM_OFFLINE = 4, /* Modem is now offline */ + MODEM_REMOVING = 5, /* Modem is about to be removed */ +}; + +typedef void (*qcom_ipa_notify_t)(void *data, enum qcom_rproc_event event); + +struct qcom_rproc_ipa_notify { + struct rproc_subdev subdev; + + qcom_ipa_notify_t notify; + void *data; +}; + +/** + * qcom_add_ipa_notify_subdev() - Register IPA notification subdevice + * @rproc: rproc handle + * @ipa_notify: IPA notification subdevice handle + * + * Register the @ipa_notify subdevice with the @rproc so modem events + * can be sent to IPA when they occur. + * + * This is defined in "qcom_q6v5_ipa_notify.c". + */ +void qcom_add_ipa_notify_subdev(struct rproc *rproc, + struct qcom_rproc_ipa_notify *ipa_notify); + +/** + * qcom_remove_ipa_notify_subdev() - Remove IPA SSR subdevice + * @rproc: rproc handle + * @ipa_notify: IPA notification subdevice handle + * + * This is defined in "qcom_q6v5_ipa_notify.c". + */ +void qcom_remove_ipa_notify_subdev(struct rproc *rproc, + struct qcom_rproc_ipa_notify *ipa_notify); + +/** + * qcom_register_ipa_notify() - Register IPA notification function + * @rproc: Remote processor handle + * @notify: Non-null IPA notification callback function pointer + * @data: Data supplied to IPA notification callback function + * + * @Return: 0 if successful, or a negative error code otherwise + * + * This is defined in "qcom_q6v5_mss.c". + */ +int qcom_register_ipa_notify(struct rproc *rproc, qcom_ipa_notify_t notify, + void *data); +/** + * qcom_deregister_ipa_notify() - Deregister IPA notification function + * @rproc: Remote processor handle + * + * This is defined in "qcom_q6v5_mss.c". 
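+ *
+ * Illustrative use of the register/deregister pair from the IPA driver's
+ * point of view (the callback name and data pointer here are hypothetical,
+ * not part of this patch):
+ *
+ *	ret = qcom_register_ipa_notify(rproc, ipa_modem_notify, ipa);
+ *	if (ret)
+ *		return ret;
+ *	...
+ *	qcom_deregister_ipa_notify(rproc);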
+ */ void qcom_deregister_ipa_notify(struct rproc *rproc); + +#else /* !IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) */ + +struct qcom_rproc_ipa_notify { /* empty */ }; + +#define qcom_add_ipa_notify_subdev(rproc, ipa_notify) /* no-op */ +#define qcom_remove_ipa_notify_subdev(rproc, ipa_notify) /* no-op */ + +#endif /* !IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) */ + +#endif /* !__QCOM_Q6V5_IPA_NOTIFY_H__ */
From patchwork Fri Feb 28 22:41:49 2020
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 190182
Received: from localhost.localdomain (c-73-185-129-58.hsd1.mn.comcast.net.
[73.185.129.58]) by smtp.gmail.com with ESMTPSA id d188sm4637830ywe.50.2020.02.28.14.42.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 28 Feb 2020 14:42:13 -0800 (PST) From: Alex Elder To: Rob Herring , Mark Rutland , Arnd Bergmann , David Miller Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org, Rob Herring Subject: [PATCH 02/17] dt-bindings: soc: qcom: add IPA bindings Date: Fri, 28 Feb 2020 16:41:49 -0600 Message-Id: <20200228224204.17746-3-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200228224204.17746-1-elder@linaro.org> References: <20200228224204.17746-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add the binding definitions for the "qcom,ipa" device tree node. Signed-off-by: Alex Elder Reviewed-by: Rob Herring --- NOTE: Rob, you signed off on this last year. I made a single change to it (which you suggested): the license is now dual GPL 2.0/BSD 2-clause. If you have any objection to including your sign-off please say so. -Alex .../devicetree/bindings/net/qcom,ipa.yaml | 192 ++++++++++++++++++ 1 file changed, 192 insertions(+) create mode 100644 Documentation/devicetree/bindings/net/qcom,ipa.yaml diff --git a/Documentation/devicetree/bindings/net/qcom,ipa.yaml b/Documentation/devicetree/bindings/net/qcom,ipa.yaml new file mode 100644 index 000000000000..91d08f2c7791 --- /dev/null +++ b/Documentation/devicetree/bindings/net/qcom,ipa.yaml @@ -0,0 +1,192 @@ +# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/net/qcom,ipa.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Qualcomm IP Accelerator (IPA) + +maintainers: + - Alex Elder + +description: + This binding describes the Qualcomm IPA. The IPA is capable of offloading + certain network processing tasks (e.g. filtering, routing, and NAT) from + the main processor. + + The IPA sits between multiple independent "execution environments," + including the Application Processor (AP) and the modem. The IPA presents + a Generic Software Interface (GSI) to each execution environment. + The GSI is an integral part of the IPA, but it is logically isolated + and has a distinct interrupt and a separately-defined address space. + + See also soc/qcom/qcom,smp2p.txt and interconnect/interconnect.txt. + + - | + -------- --------- + | | | | + | AP +<---. .----+ Modem | + | +--. 
| | .->+ | + | | | | | | | | + -------- | | | | --------- + v | v | + --+-+---+-+-- + | GSI | + |-----------| + | | + | IPA | + | | + ------------- + +properties: + compatible: + const: "qcom,sdm845-ipa" + + reg: + items: + - description: IPA registers + - description: IPA shared memory + - description: GSI registers + + reg-names: + items: + - const: ipa-reg + - const: ipa-shared + - const: gsi + + clocks: + maxItems: 1 + + clock-names: + const: core + + interrupts: + items: + - description: IPA interrupt (hardware IRQ) + - description: GSI interrupt (hardware IRQ) + - description: Modem clock query interrupt (smp2p interrupt) + - description: Modem setup ready interrupt (smp2p interrupt) + + interrupt-names: + items: + - const: ipa + - const: gsi + - const: ipa-clock-query + - const: ipa-setup-ready + + interconnects: + items: + - description: Interconnect path between IPA and main memory + - description: Interconnect path between IPA and internal memory + - description: Interconnect path between IPA and the AP subsystem + + interconnect-names: + items: + - const: memory + - const: imem + - const: config + + qcom,smem-states: + $ref: /schemas/types.yaml#/definitions/phandle-array + description: State bits used by the AP to signal the modem. + items: + - description: Whether the "ipa-clock-enabled" state bit is valid + - description: Whether the IPA clock is enabled (if valid) + + qcom,smem-state-names: + $ref: /schemas/types.yaml#/definitions/string-array + description: The names of the state bits used for SMP2P output + items: + - const: ipa-clock-enabled-valid + - const: ipa-clock-enabled + + modem-init: + type: boolean + description: + If present, it indicates that the modem is responsible for + performing early IPA initialization, including loading and + validating firmware used by the GSI. + + modem-remoteproc: + $ref: /schemas/types.yaml#/definitions/phandle + description: + This defines the phandle to the remoteproc node representing + the modem subsystem. This is required so the IPA driver can + receive and act on notifications of modem up/down events. + + memory-region: + $ref: /schemas/types.yaml#/definitions/phandle-array + maxItems: 1 + description: + If present, a phandle for a reserved memory area that holds + the firmware passed to Trust Zone for authentication. Required + when Trust Zone (not the modem) performs early initialization.
+ +required: + - compatible + - reg + - clocks + - interrupts + - interconnects + - qcom,smem-states + - modem-remoteproc + +oneOf: + - required: + - modem-init + - required: + - memory-region + +examples: + - | + smp2p-mpss { + compatible = "qcom,smp2p"; + ipa_smp2p_out: ipa-ap-to-modem { + qcom,entry-name = "ipa"; + #qcom,smem-state-cells = <1>; + }; + + ipa_smp2p_in: ipa-modem-to-ap { + qcom,entry-name = "ipa"; + interrupt-controller; + #interrupt-cells = <2>; + }; + }; + ipa@1e40000 { + compatible = "qcom,sdm845-ipa"; + + modem-init; + modem-remoteproc = <&mss_pil>; + + reg = <0 0x1e40000 0 0x7000>, + <0 0x1e47000 0 0x2000>, + <0 0x1e04000 0 0x2c000>; + reg-names = "ipa-reg", + "ipa-shared", + "gsi"; + + interrupts-extended = <&intc 0 311 IRQ_TYPE_EDGE_RISING>, + <&intc 0 432 IRQ_TYPE_LEVEL_HIGH>, + <&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>, + <&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>; + interrupt-names = "ipa", + "gsi", + "ipa-clock-query", + "ipa-setup-ready"; + + clocks = <&rpmhcc RPMH_IPA_CLK>; + clock-names = "core"; + + interconnects = + <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>, + <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>, + <&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>; + interconnect-names = "memory", + "imem", + "config"; + + qcom,smem-states = <&ipa_smp2p_out 0>, + <&ipa_smp2p_out 1>; + qcom,smem-state-names = "ipa-clock-enabled-valid", + "ipa-clock-enabled"; + };
From patchwork Fri Feb 28 22:41:52 2020
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 190175
X-Google-DKIM-Signature: v=1; a=rsa-sha256;
c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=dy6rQ4eMObqJkHXDR4c+2Co/ufIH9Vkxn2UrqymA1DE=; b=mhW+YS/+0KaAjR8PNI9eR5iXzPaErD8diWpoGces7uLzRoQM0RIsIg+toS0L0gTKpA VF9BxpxRBC19FtvJqqJVjc3tvuj/Mzhu3I6ASYHjGCLFAG8FRPpFRs23Wu3X6sjnGq1U l769JXi2oZIoSVDd7bqRaeBzW9VA7NWR99rbOsr6BxCf1HvvD1Nd9Y+1XF6kCzG1pz9a HK6NCB57TIULTDF4FNNh88mD4fcZ64gLH4o0t6Bmvqguq7pzAAJ70K1SZIEIqMJWeFUY 2eyFf1HgTwbsebSihKIhmX2N6SvcEQEdwzjT2mN1ffToV6/e6OKtYC4shcaT6i4W71vz xw+A== X-Gm-Message-State: APjAAAVW67k7OLgkyNPTyxdxIqIi+vMcTRnuro0T3b+PfdlQaKpFJaOW Ctom9g3FduiC+/ewp/+FBvLmxg== X-Google-Smtp-Source: APXvYqxY4dMbBw5dk+AKermRHzX5ZAHb4drnCIZ+Vk2yT4rc10jOenkx14/qy2SIAtrOs8FL/5KxXg== X-Received: by 2002:a5b:54b:: with SMTP id r11mr5992335ybp.17.1582929740245; Fri, 28 Feb 2020 14:42:20 -0800 (PST) Received: from localhost.localdomain (c-73-185-129-58.hsd1.mn.comcast.net. [73.185.129.58]) by smtp.gmail.com with ESMTPSA id d188sm4637830ywe.50.2020.02.28.14.42.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 28 Feb 2020 14:42:19 -0800 (PST) From: Alex Elder To: Arnd Bergmann , David Miller Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 05/17] soc: qcom: ipa: clocking, interrupts, and memory Date: Fri, 28 Feb 2020 16:41:52 -0600 Message-Id: <20200228224204.17746-6-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200228224204.17746-1-elder@linaro.org> References: <20200228224204.17746-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This patch incorporates three source files (and their headers). They're grouped into one patch mainly for the purpose of making the number and size of patches in this series somewhat reasonable. - "ipa_clock.c" and "ipa_clock.h" implement clocking for the IPA device. The IPA has a single core clock managed by the common clock framework. In addition, the IPA has three buses whose bandwidth is managed by the Linux interconnect framework. At this time the core clock and all three buses are either on or off; we don't yet do any more fine-grained management than that. The core clock and interconnects are enabled and disabled as a unit, using a unified clock-like abstraction, ipa_clock_get()/ipa_clock_put(). - "ipa_interrupt.c" and "ipa_interrupt.h" implement IPA interrupts. There are two hardware IRQs used by the IPA driver (the other is the GSI interrupt, described in a separate patch). Several types of interrupt are handled by the IPA IRQ handler; these are not part of data/fast path. - The IPA has a region of local memory that is accessible by the AP (and modem). Within that region are areas with certain defined purposes. "ipa_mem.c" and "ipa_mem.h" define those regions, and implement their initialization. 
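The get/put abstraction described above is what the rest of the driver uses to bracket any access to the IPA hardware. A minimal sketch of that usage pattern, assuming the ipa_clock_get()/ipa_clock_put() calls added by this patch (the helper ipa_example_read() is hypothetical and not part of the series):

	/* Illustrative only: hold a clock/interconnect reference across a
	 * register access.  The first reference enables the core clock and
	 * interconnects; dropping the last reference disables them again.
	 */
	static u32 ipa_example_read(struct ipa *ipa, u32 offset)
	{
		u32 val;

		ipa_clock_get(ipa);
		val = ioread32(ipa->reg_virt + offset);
		ipa_clock_put(ipa);

		return val;
	}
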
Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_clock.c | 313 +++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_clock.h | 53 ++++++ drivers/net/ipa/ipa_interrupt.c | 253 +++++++++++++++++++++++++ drivers/net/ipa/ipa_interrupt.h | 117 ++++++++++++ drivers/net/ipa/ipa_mem.c | 314 ++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_mem.h | 90 +++++++++ 6 files changed, 1140 insertions(+) create mode 100644 drivers/net/ipa/ipa_clock.c create mode 100644 drivers/net/ipa/ipa_clock.h create mode 100644 drivers/net/ipa/ipa_interrupt.c create mode 100644 drivers/net/ipa/ipa_interrupt.h create mode 100644 drivers/net/ipa/ipa_mem.c create mode 100644 drivers/net/ipa/ipa_mem.h diff --git a/drivers/net/ipa/ipa_clock.c b/drivers/net/ipa/ipa_clock.c new file mode 100644 index 000000000000..a60ffb801285 --- /dev/null +++ b/drivers/net/ipa/ipa_clock.c @@ -0,0 +1,313 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ + +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_clock.h" +#include "ipa_modem.h" + +/** + * DOC: IPA Clocking + * + * The "IPA Clock" manages both the IPA core clock and the interconnects + * (buses) the IPA depends on as a single logical entity. A reference count + * is incremented by "get" operations and decremented by "put" operations. + * Transitions of that count from 0 to 1 result in the clock and interconnects + * being enabled, and transitions of the count from 1 to 0 cause them to be + * disabled. We currently operate the core clock at a fixed clock rate, and + * all buses at a fixed average and peak bandwidth. As more advanced IPA + * features are enabled, we can make better use of clock and bus scaling. + * + * An IPA clock reference must be held for any access to IPA hardware. 
+ */ + +#define IPA_CORE_CLOCK_RATE (75UL * 1000 * 1000) /* Hz */ + +/* Interconnect path bandwidths (each times 1000 bytes per second) */ +#define IPA_MEMORY_AVG (80 * 1000) /* 80 MBps */ +#define IPA_MEMORY_PEAK (600 * 1000) + +#define IPA_IMEM_AVG (80 * 1000) +#define IPA_IMEM_PEAK (350 * 1000) + +#define IPA_CONFIG_AVG (40 * 1000) +#define IPA_CONFIG_PEAK (40 * 1000) + +/** + * struct ipa_clock - IPA clocking information + * @count: Clocking reference count + * @mutex; Protects clock enable/disable + * @core: IPA core clock + * @memory_path: Memory interconnect + * @imem_path: Internal memory interconnect + * @config_path: Configuration space interconnect + */ +struct ipa_clock { + atomic_t count; + struct mutex mutex; /* protects clock enable/disable */ + struct clk *core; + struct icc_path *memory_path; + struct icc_path *imem_path; + struct icc_path *config_path; +}; + +static struct icc_path * +ipa_interconnect_init_one(struct device *dev, const char *name) +{ + struct icc_path *path; + + path = of_icc_get(dev, name); + if (IS_ERR(path)) + dev_err(dev, "error %d getting memory interconnect\n", + PTR_ERR(path)); + + return path; +} + +/* Initialize interconnects required for IPA operation */ +static int ipa_interconnect_init(struct ipa_clock *clock, struct device *dev) +{ + struct icc_path *path; + + path = ipa_interconnect_init_one(dev, "memory"); + if (IS_ERR(path)) + goto err_return; + clock->memory_path = path; + + path = ipa_interconnect_init_one(dev, "imem"); + if (IS_ERR(path)) + goto err_memory_path_put; + clock->imem_path = path; + + path = ipa_interconnect_init_one(dev, "config"); + if (IS_ERR(path)) + goto err_imem_path_put; + clock->config_path = path; + + return 0; + +err_imem_path_put: + icc_put(clock->imem_path); +err_memory_path_put: + icc_put(clock->memory_path); +err_return: + return PTR_ERR(path); +} + +/* Inverse of ipa_interconnect_init() */ +static void ipa_interconnect_exit(struct ipa_clock *clock) +{ + icc_put(clock->config_path); + icc_put(clock->imem_path); + icc_put(clock->memory_path); +} + +/* Currently we only use one bandwidth level, so just "enable" interconnects */ +static int ipa_interconnect_enable(struct ipa *ipa) +{ + struct ipa_clock *clock = ipa->clock; + int ret; + + ret = icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK); + if (ret) + return ret; + + ret = icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK); + if (ret) + goto err_memory_path_disable; + + ret = icc_set_bw(clock->config_path, IPA_CONFIG_AVG, IPA_CONFIG_PEAK); + if (ret) + goto err_imem_path_disable; + + return 0; + +err_imem_path_disable: + (void)icc_set_bw(clock->imem_path, 0, 0); +err_memory_path_disable: + (void)icc_set_bw(clock->memory_path, 0, 0); + + return ret; +} + +/* To disable an interconnect, we just its bandwidth to 0 */ +static int ipa_interconnect_disable(struct ipa *ipa) +{ + struct ipa_clock *clock = ipa->clock; + int ret; + + ret = icc_set_bw(clock->memory_path, 0, 0); + if (ret) + return ret; + + ret = icc_set_bw(clock->imem_path, 0, 0); + if (ret) + goto err_memory_path_reenable; + + ret = icc_set_bw(clock->config_path, 0, 0); + if (ret) + goto err_imem_path_reenable; + + return 0; + +err_imem_path_reenable: + (void)icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK); +err_memory_path_reenable: + (void)icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK); + + return ret; +} + +/* Turn on IPA clocks, including interconnects */ +static int ipa_clock_enable(struct ipa *ipa) +{ + int ret; + + ret = 
ipa_interconnect_enable(ipa); + if (ret) + return ret; + + ret = clk_prepare_enable(ipa->clock->core); + if (ret) + ipa_interconnect_disable(ipa); + + return ret; +} + +/* Inverse of ipa_clock_enable() */ +static void ipa_clock_disable(struct ipa *ipa) +{ + clk_disable_unprepare(ipa->clock->core); + (void)ipa_interconnect_disable(ipa); +} + +/* Get an IPA clock reference, but only if the reference count is + * already non-zero. Returns true if the additional reference was + * added successfully, or false otherwise. + */ +bool ipa_clock_get_additional(struct ipa *ipa) +{ + return !!atomic_inc_not_zero(&ipa->clock->count); +} + +/* Get an IPA clock reference. If the reference count is non-zero, it is + * incremented and return is immediate. Otherwise it is checked again + * under protection of the mutex, and if appropriate the clock (and + * interconnects) are enabled suspended endpoints (if any) are resumed + * before returning. + * + * Incrementing the reference count is intentionally deferred until + * after the clock is running and endpoints are resumed. + */ +void ipa_clock_get(struct ipa *ipa) +{ + struct ipa_clock *clock = ipa->clock; + int ret; + + /* If the clock is running, just bump the reference count */ + if (ipa_clock_get_additional(ipa)) + return; + + /* Otherwise get the mutex and check again */ + mutex_lock(&clock->mutex); + + /* A reference might have been added before we got the mutex. */ + if (ipa_clock_get_additional(ipa)) + goto out_mutex_unlock; + + ret = ipa_clock_enable(ipa); + if (ret) { + dev_err(&ipa->pdev->dev, "error %d enabling IPA clock\n", ret); + goto out_mutex_unlock; + } + + ipa_endpoint_resume(ipa); + + atomic_inc(&clock->count); + +out_mutex_unlock: + mutex_unlock(&clock->mutex); +} + +/* Attempt to remove an IPA clock reference. If this represents the last + * reference, suspend endpoints and disable the clock (and interconnects) + * under protection of a mutex. 
+ */ +void ipa_clock_put(struct ipa *ipa) +{ + struct ipa_clock *clock = ipa->clock; + + /* If this is not the last reference there's nothing more to do */ + if (!atomic_dec_and_mutex_lock(&clock->count, &clock->mutex)) + return; + + ipa_endpoint_suspend(ipa); + + ipa_clock_disable(ipa); + + mutex_unlock(&clock->mutex); +} + +/* Initialize IPA clocking */ +struct ipa_clock *ipa_clock_init(struct device *dev) +{ + struct ipa_clock *clock; + struct clk *clk; + int ret; + + clk = clk_get(dev, "core"); + if (IS_ERR(clk)) { + dev_err(dev, "error %d getting core clock\n", PTR_ERR(clk)); + return ERR_CAST(clk); + } + + ret = clk_set_rate(clk, IPA_CORE_CLOCK_RATE); + if (ret) { + dev_err(dev, "error setting core clock rate to %u\n", + ret, IPA_CORE_CLOCK_RATE); + goto err_clk_put; + } + + clock = kzalloc(sizeof(*clock), GFP_KERNEL); + if (!clock) { + ret = -ENOMEM; + goto err_clk_put; + } + clock->core = clk; + + ret = ipa_interconnect_init(clock, dev); + if (ret) + goto err_kfree; + + mutex_init(&clock->mutex); + atomic_set(&clock->count, 0); + + return clock; + +err_kfree: + kfree(clock); +err_clk_put: + clk_put(clk); + + return ERR_PTR(ret); +} + +/* Inverse of ipa_clock_init() */ +void ipa_clock_exit(struct ipa_clock *clock) +{ + struct clk *clk = clock->core; + + WARN_ON(atomic_read(&clock->count) != 0); + mutex_destroy(&clock->mutex); + ipa_interconnect_exit(clock); + kfree(clock); + clk_put(clk); +} diff --git a/drivers/net/ipa/ipa_clock.h b/drivers/net/ipa/ipa_clock.h new file mode 100644 index 000000000000..bc52b35e6bb2 --- /dev/null +++ b/drivers/net/ipa/ipa_clock.h @@ -0,0 +1,53 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _IPA_CLOCK_H_ +#define _IPA_CLOCK_H_ + +struct device; + +struct ipa; + +/** + * ipa_clock_init() - Initialize IPA clocking + * @dev: IPA device + * + * @Return: A pointer to an ipa_clock structure, or a pointer-coded error + */ +struct ipa_clock *ipa_clock_init(struct device *dev); + +/** + * ipa_clock_exit() - Inverse of ipa_clock_init() + * @clock: IPA clock pointer + */ +void ipa_clock_exit(struct ipa_clock *clock); + +/** + * ipa_clock_get() - Get an IPA clock reference + * @ipa: IPA pointer + * + * This call blocks if this is the first reference. + */ +void ipa_clock_get(struct ipa *ipa); + +/** + * ipa_clock_get_additional() - Get an IPA clock reference if not first + * @ipa: IPA pointer + * + * This returns immediately, and only takes a reference if not the first + */ +bool ipa_clock_get_additional(struct ipa *ipa); + +/** + * ipa_clock_put() - Drop an IPA clock reference + * @ipa: IPA pointer + * + * This drops a clock reference. If the last reference is being dropped, + * the clock is stopped and RX endpoints are suspended. This call will + * not block unless the last reference is dropped. + */ +void ipa_clock_put(struct ipa *ipa); + +#endif /* _IPA_CLOCK_H_ */ diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c new file mode 100644 index 000000000000..90353987c45f --- /dev/null +++ b/drivers/net/ipa/ipa_interrupt.c @@ -0,0 +1,253 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ + +/* DOC: IPA Interrupts + * + * The IPA has an interrupt line distinct from the interrupt used by the GSI + * code. 
Whereas GSI interrupts are generally related to channel events (like + * transfer completions), IPA interrupts are related to other events related + * to the IPA. Some of the IPA interrupts come from a microcontroller + * embedded in the IPA. Each IPA interrupt type can be both masked and + * acknowledged independent of the others. + * + * Two of the IPA interrupts are initiated by the microcontroller. A third + * can be generated to signal the need for a wakeup/resume when an IPA + * endpoint has been suspended. There are other IPA events, but at this + * time only these three are supported. + */ + +#include +#include + +#include "ipa.h" +#include "ipa_clock.h" +#include "ipa_reg.h" +#include "ipa_endpoint.h" +#include "ipa_interrupt.h" + +/** + * struct ipa_interrupt - IPA interrupt information + * @ipa: IPA pointer + * @irq: Linux IRQ number used for IPA interrupts + * @enabled: Mask indicating which interrupts are enabled + * @handler: Array of handlers indexed by IPA interrupt ID + */ +struct ipa_interrupt { + struct ipa *ipa; + u32 irq; + u32 enabled; + ipa_irq_handler_t handler[IPA_IRQ_COUNT]; +}; + +/* Returns true if the interrupt type is associated with the microcontroller */ +static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 irq_id) +{ + return irq_id == IPA_IRQ_UC_0 || irq_id == IPA_IRQ_UC_1; +} + +/* Process a particular interrupt type that has been received */ +static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 irq_id) +{ + bool uc_irq = ipa_interrupt_uc(interrupt, irq_id); + struct ipa *ipa = interrupt->ipa; + u32 mask = BIT(irq_id); + + /* For microcontroller interrupts, clear the interrupt right away, + * "to avoid clearing unhandled interrupts." + */ + if (uc_irq) + iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET); + + if (irq_id < IPA_IRQ_COUNT && interrupt->handler[irq_id]) + interrupt->handler[irq_id](interrupt->ipa, irq_id); + + /* Clearing the SUSPEND_TX interrupt also clears the register + * that tells us which suspended endpoint(s) caused the interrupt, + * so defer clearing until after the handler has been called. + */ + if (!uc_irq) + iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET); +} + +/* Process all IPA interrupt types that have been signaled */ +static void ipa_interrupt_process_all(struct ipa_interrupt *interrupt) +{ + struct ipa *ipa = interrupt->ipa; + u32 enabled = interrupt->enabled; + u32 mask; + + /* The status register indicates which conditions are present, + * including conditions whose interrupt is not enabled. Handle + * only the enabled ones. 
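+ *
+ * As a hypothetical example, if the status register read 0x0000400c while
+ * only IPA_IRQ_UC_0 (bit 2) and IPA_IRQ_TX_SUSPEND (bit 14) were enabled,
+ * the loop below would service bits 2 and 14, ignore bit 3 (IPA_IRQ_UC_1,
+ * masked here), and then re-read the status register.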
+ */ + mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET); + while ((mask &= enabled)) { + do { + u32 irq_id = __ffs(mask); + + mask ^= BIT(irq_id); + + ipa_interrupt_process(interrupt, irq_id); + } while (mask); + mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET); + } +} + +/* Threaded part of the IPA IRQ handler */ +static irqreturn_t ipa_isr_thread(int irq, void *dev_id) +{ + struct ipa_interrupt *interrupt = dev_id; + + ipa_clock_get(interrupt->ipa); + + ipa_interrupt_process_all(interrupt); + + ipa_clock_put(interrupt->ipa); + + return IRQ_HANDLED; +} + +/* Hard part (i.e., "real" IRQ handler) of the IRQ handler */ +static irqreturn_t ipa_isr(int irq, void *dev_id) +{ + struct ipa_interrupt *interrupt = dev_id; + struct ipa *ipa = interrupt->ipa; + u32 mask; + + mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET); + if (mask & interrupt->enabled) + return IRQ_WAKE_THREAD; + + /* Nothing in the mask was supposed to cause an interrupt */ + iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET); + + dev_err(&ipa->pdev->dev, "%s: unexpected interrupt, mask 0x%08x\n", + __func__, mask); + + return IRQ_HANDLED; +} + +/* Common function used to enable/disable TX_SUSPEND for an endpoint */ +static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt, + u32 endpoint_id, bool enable) +{ + struct ipa *ipa = interrupt->ipa; + u32 mask = BIT(endpoint_id); + u32 val; + + /* assert(mask & ipa->available); */ + val = ioread32(ipa->reg_virt + IPA_REG_SUSPEND_IRQ_EN_OFFSET); + if (enable) + val |= mask; + else + val &= ~mask; + iowrite32(val, ipa->reg_virt + IPA_REG_SUSPEND_IRQ_EN_OFFSET); +} + +/* Enable TX_SUSPEND for an endpoint */ +void +ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt, u32 endpoint_id) +{ + ipa_interrupt_suspend_control(interrupt, endpoint_id, true); +} + +/* Disable TX_SUSPEND for an endpoint */ +void +ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt, u32 endpoint_id) +{ + ipa_interrupt_suspend_control(interrupt, endpoint_id, false); +} + +/* Clear the suspend interrupt for all endpoints that signaled it */ +void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt) +{ + struct ipa *ipa = interrupt->ipa; + u32 val; + + val = ioread32(ipa->reg_virt + IPA_REG_IRQ_SUSPEND_INFO_OFFSET); + iowrite32(val, ipa->reg_virt + IPA_REG_SUSPEND_IRQ_CLR_OFFSET); +} + +/* Simulate arrival of an IPA TX_SUSPEND interrupt */ +void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt) +{ + ipa_interrupt_process(interrupt, IPA_IRQ_TX_SUSPEND); +} + +/* Add a handler for an IPA interrupt */ +void ipa_interrupt_add(struct ipa_interrupt *interrupt, + enum ipa_irq_id ipa_irq, ipa_irq_handler_t handler) +{ + struct ipa *ipa = interrupt->ipa; + + /* assert(ipa_irq < IPA_IRQ_COUNT); */ + interrupt->handler[ipa_irq] = handler; + + /* Update the IPA interrupt mask to enable it */ + interrupt->enabled |= BIT(ipa_irq); + iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET); +} + +/* Remove the handler for an IPA interrupt type */ +void +ipa_interrupt_remove(struct ipa_interrupt *interrupt, enum ipa_irq_id ipa_irq) +{ + struct ipa *ipa = interrupt->ipa; + + /* assert(ipa_irq < IPA_IRQ_COUNT); */ + /* Update the IPA interrupt mask to disable it */ + interrupt->enabled &= ~BIT(ipa_irq); + iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET); + + interrupt->handler[ipa_irq] = NULL; +} + +/* Set up the IPA interrupt framework */ +struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa) +{ + struct device 
*dev = &ipa->pdev->dev; + struct ipa_interrupt *interrupt; + unsigned int irq; + int ret; + + ret = platform_get_irq_byname(ipa->pdev, "ipa"); + if (ret <= 0) { + dev_err(dev, "DT error %d getting \"ipa\" IRQ property\n", + ret); + return ERR_PTR(ret ? : -EINVAL); + } + irq = ret; + + interrupt = kzalloc(sizeof(*interrupt), GFP_KERNEL); + if (!interrupt) + return ERR_PTR(-ENOMEM); + interrupt->ipa = ipa; + interrupt->irq = irq; + + /* Start with all IPA interrupts disabled */ + iowrite32(0, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET); + + ret = request_threaded_irq(irq, ipa_isr, ipa_isr_thread, IRQF_ONESHOT, + "ipa", interrupt); + if (ret) { + dev_err(dev, "error %d requesting \"ipa\" IRQ\n", ret); + goto err_kfree; + } + + return interrupt; + +err_kfree: + kfree(interrupt); + + return ERR_PTR(ret); +} + +/* Tear down the IPA interrupt framework */ +void ipa_interrupt_teardown(struct ipa_interrupt *interrupt) +{ + free_irq(interrupt->irq, interrupt); + kfree(interrupt); +} diff --git a/drivers/net/ipa/ipa_interrupt.h b/drivers/net/ipa/ipa_interrupt.h new file mode 100644 index 000000000000..d4f4c1c9f0b1 --- /dev/null +++ b/drivers/net/ipa/ipa_interrupt.h @@ -0,0 +1,117 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _IPA_INTERRUPT_H_ +#define _IPA_INTERRUPT_H_ + +#include +#include + +struct ipa; +struct ipa_interrupt; + +/** + * enum ipa_irq_id - IPA interrupt type + * @IPA_IRQ_UC_0: Microcontroller event interrupt + * @IPA_IRQ_UC_1: Microcontroller response interrupt + * @IPA_IRQ_TX_SUSPEND: Data ready interrupt + * + * The data ready interrupt is signaled if data has arrived that is destined + * for an AP RX endpoint whose underlying GSI channel is suspended/stopped. + */ +enum ipa_irq_id { + IPA_IRQ_UC_0 = 2, + IPA_IRQ_UC_1 = 3, + IPA_IRQ_TX_SUSPEND = 14, + IPA_IRQ_COUNT, /* Number of interrupt types (not an index) */ +}; + +/** + * typedef ipa_irq_handler_t - IPA interrupt handler function type + * @ipa: IPA pointer + * @irq_id: interrupt type + * + * Callback function registered by ipa_interrupt_add() to handle a specific + * IPA interrupt type + */ +typedef void (*ipa_irq_handler_t)(struct ipa *ipa, enum ipa_irq_id irq_id); + +/** + * ipa_interrupt_add() - Register a handler for an IPA interrupt type + * @irq_id: IPA interrupt type + * @handler: Handler function for the interrupt + * + * Add a handler for an IPA interrupt and enable it. IPA interrupt + * handlers are run in threaded interrupt context, so are allowed to + * block. + */ +void ipa_interrupt_add(struct ipa_interrupt *interrupt, enum ipa_irq_id irq_id, + ipa_irq_handler_t handler); + +/** + * ipa_interrupt_remove() - Remove the handler for an IPA interrupt type + * @interrupt: IPA interrupt structure + * @irq_id: IPA interrupt type + * + * Remove an IPA interrupt handler and disable it. + */ +void ipa_interrupt_remove(struct ipa_interrupt *interrupt, + enum ipa_irq_id irq_id); + +/** + * ipa_interrupt_suspend_enable - Enable TX_SUSPEND for an endpoint + * @interrupt: IPA interrupt structure + * @endpoint_id: Endpoint whose interrupt should be enabled + * + * Note: The "TX" in the name is from the perspective of the IPA hardware. + * A TX_SUSPEND interrupt arrives on an AP RX enpoint when packet data can't + * be delivered to the endpoint because it is suspended (or its underlying + * channel is stopped). 
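+ *
+ * Illustrative pairing (not taken from this series): enable the interrupt
+ * while the endpoint is suspended, then disable it again on resume:
+ *
+ *	ipa_interrupt_suspend_enable(interrupt, endpoint_id);
+ *	... endpoint remains suspended; TX_SUSPEND may be signaled ...
+ *	ipa_interrupt_suspend_disable(interrupt, endpoint_id);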
+ */ +void ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt, + u32 endpoint_id); + +/** + * ipa_interrupt_suspend_disable - Disable TX_SUSPEND for an endpoint + * @interrupt: IPA interrupt structure + * @endpoint_id: Endpoint whose interrupt should be disabled + */ +void ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt, + u32 endpoint_id); + +/** + * ipa_interrupt_suspend_clear_all - clear all suspend interrupts + * @interrupt: IPA interrupt structure + * + * Clear the TX_SUSPEND interrupt for all endpoints that signaled it. + */ +void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt); + +/** + * ipa_interrupt_simulate_suspend() - Simulate TX_SUSPEND IPA interrupt + * @interrupt: IPA interrupt structure + * + * This calls the TX_SUSPEND interrupt handler, as if such an interrupt + * had been signaled. This is needed to work around a hardware quirk + * that occurs if aggregation is active on an endpoint when its underlying + * channel is suspended. + */ +void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt); + +/** + * ipa_interrupt_setup() - Set up the IPA interrupt framework + * @ipa: IPA pointer + * + * @Return: Pointer to IPA SMP2P info, or a pointer-coded error + */ +struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa); + +/** + * ipa_interrupt_teardown() - Tear down the IPA interrupt framework + * @interrupt: IPA interrupt structure + */ +void ipa_interrupt_teardown(struct ipa_interrupt *interrupt); + +#endif /* _IPA_INTERRUPT_H_ */ diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c new file mode 100644 index 000000000000..42d2c29d9f0c --- /dev/null +++ b/drivers/net/ipa/ipa_mem.c @@ -0,0 +1,314 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ + +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_reg.h" +#include "ipa_cmd.h" +#include "ipa_mem.h" +#include "ipa_data.h" +#include "ipa_table.h" +#include "gsi_trans.h" + +/* "Canary" value placed between memory regions to detect overflow */ +#define IPA_MEM_CANARY_VAL cpu_to_le32(0xdeadbeef) + +/* Add an immediate command to a transaction that zeroes a memory region */ +static void +ipa_mem_zero_region_add(struct gsi_trans *trans, const struct ipa_mem *mem) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + dma_addr_t addr = ipa->zero_addr; + + if (!mem->size) + return; + + ipa_cmd_dma_shared_mem_add(trans, mem->offset, mem->size, addr, true); +} + +/** + * ipa_mem_setup() - Set up IPA AP and modem shared memory areas + * + * Set up the shared memory regions in IPA local memory. This involves + * zero-filling memory regions, and in the case of header memory, telling + * the IPA where it's located. + * + * This function performs the initial setup of this memory. If the modem + * crashes, its regions are re-zeroed in ipa_mem_zero_modem(). + * + * The AP informs the modem where its portions of memory are located + * in a QMI exchange that occurs at modem startup. + * + * @Return: 0 if successful, or a negative error code + */ +int ipa_mem_setup(struct ipa *ipa) +{ + dma_addr_t addr = ipa->zero_addr; + struct gsi_trans *trans; + u32 offset; + u16 size; + + /* Get a transaction to define the header memory region and to zero + * the processing context and modem memory regions. 
+ */ + trans = ipa_cmd_trans_alloc(ipa, 4); + if (!trans) { + dev_err(&ipa->pdev->dev, "no transaction for memory setup\n"); + return -EBUSY; + } + + /* Initialize IPA-local header memory. The modem and AP header + * regions are contiguous, and initialized together. + */ + offset = ipa->mem[IPA_MEM_MODEM_HEADER].offset; + size = ipa->mem[IPA_MEM_MODEM_HEADER].size; + size += ipa->mem[IPA_MEM_AP_HEADER].size; + + ipa_cmd_hdr_init_local_add(trans, offset, size, addr); + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM_PROC_CTX]); + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_AP_PROC_CTX]); + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM]); + + gsi_trans_commit_wait(trans); + + /* Tell the hardware where the processing context area is located */ + iowrite32(ipa->mem_offset + offset, + ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET); + + return 0; +} + +void ipa_mem_teardown(struct ipa *ipa) +{ + /* Nothing to do */ +} + +#ifdef IPA_VALIDATE + +static bool ipa_mem_valid(struct ipa *ipa, enum ipa_mem_id mem_id) +{ + const struct ipa_mem *mem = &ipa->mem[mem_id]; + struct device *dev = &ipa->pdev->dev; + u16 size_multiple; + + /* Other than modem memory, sizes must be a multiple of 8 */ + size_multiple = mem_id == IPA_MEM_MODEM ? 4 : 8; + if (mem->size % size_multiple) + dev_err(dev, "region %u size not a multiple of %u bytes\n", + mem_id, size_multiple); + else if (mem->offset % 8) + dev_err(dev, "region %u offset not 8-byte aligned\n", mem_id); + else if (mem->offset < mem->canary_count * sizeof(__le32)) + dev_err(dev, "region %u offset too small for %hu canaries\n", + mem_id, mem->canary_count); + else if (mem->offset + mem->size > ipa->mem_size) + dev_err(dev, "region %u ends beyond memory limit (0x%08x)\n", + mem_id, ipa->mem_size); + else + return true; + + return false; +} + +#else /* !IPA_VALIDATE */ + +static bool ipa_mem_valid(struct ipa *ipa, enum ipa_mem_id mem_id) +{ + return true; +} + +#endif /*! IPA_VALIDATE */ + +/** + * ipa_mem_config() - Configure IPA shared memory + * + * @Return: 0 if successful, or a negative error code + */ +int ipa_mem_config(struct ipa *ipa) +{ + struct device *dev = &ipa->pdev->dev; + enum ipa_mem_id mem_id; + dma_addr_t addr; + u32 mem_size; + void *virt; + u32 val; + + /* Check the advertised location and size of the shared memory area */ + val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET); + + /* The fields in the register are in 8 byte units */ + ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK); + /* Make sure the end is within the region's mapped space */ + mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK); + + /* If the sizes don't match, issue a warning */ + if (ipa->mem_offset + mem_size > ipa->mem_size) { + dev_warn(dev, "ignoring larger reported memory size: 0x%08x\n", + mem_size); + } else if (ipa->mem_offset + mem_size < ipa->mem_size) { + dev_warn(dev, "limiting IPA memory size to 0x%08x\n", + mem_size); + ipa->mem_size = mem_size; + } + + /* Prealloc DMA memory for zeroing regions */ + virt = dma_alloc_coherent(dev, IPA_MEM_MAX, &addr, GFP_KERNEL); + if (!virt) + return -ENOMEM; + ipa->zero_addr = addr; + ipa->zero_virt = virt; + ipa->zero_size = IPA_MEM_MAX; + + /* Verify each defined memory region is valid, and if indicated + * for the region, write "canary" values in the space prior to + * the region's base address. 
+ */ + for (mem_id = 0; mem_id < IPA_MEM_COUNT; mem_id++) { + const struct ipa_mem *mem = &ipa->mem[mem_id]; + u16 canary_count; + __le32 *canary; + + /* Validate all regions (even undefined ones) */ + if (!ipa_mem_valid(ipa, mem_id)) + goto err_dma_free; + + /* Skip over undefined regions */ + if (!mem->offset && !mem->size) + continue; + + canary_count = mem->canary_count; + if (!canary_count) + continue; + + /* Write canary values in the space before the region */ + canary = ipa->mem_virt + ipa->mem_offset + mem->offset; + do + *--canary = IPA_MEM_CANARY_VAL; + while (--canary_count); + } + + /* Make sure filter and route table memory regions are valid */ + if (!ipa_table_valid(ipa)) + goto err_dma_free; + + /* Validate memory-related properties relevant to immediate commands */ + if (!ipa_cmd_data_valid(ipa)) + goto err_dma_free; + + /* Verify the microcontroller ring alignment (0 is OK too) */ + if (ipa->mem[IPA_MEM_UC_EVENT_RING].offset % 1024) { + dev_err(dev, "microcontroller ring not 1024-byte aligned\n"); + goto err_dma_free; + } + + return 0; + +err_dma_free: + dma_free_coherent(dev, IPA_MEM_MAX, ipa->zero_virt, ipa->zero_addr); + + return -EINVAL; +} + +/* Inverse of ipa_mem_config() */ +void ipa_mem_deconfig(struct ipa *ipa) +{ + struct device *dev = &ipa->pdev->dev; + + dma_free_coherent(dev, ipa->zero_size, ipa->zero_virt, ipa->zero_addr); + ipa->zero_size = 0; + ipa->zero_virt = NULL; + ipa->zero_addr = 0; +} + +/** + * ipa_mem_zero_modem() - Zero IPA-local memory regions owned by the modem + * + * Zero regions of IPA-local memory used by the modem. These are configured + * (and initially zeroed) by ipa_mem_setup(), but if the modem crashes and + * restarts via SSR we need to re-initialize them. A QMI message tells the + * modem where to find regions of IPA local memory it needs to know about + * (these included). + */ +int ipa_mem_zero_modem(struct ipa *ipa) +{ + struct gsi_trans *trans; + + /* Get a transaction to zero the modem memory, modem header, + * and modem processing context regions. 
+ */ + trans = ipa_cmd_trans_alloc(ipa, 3); + if (!trans) { + dev_err(&ipa->pdev->dev, + "no transaction to zero modem memory\n"); + return -EBUSY; + } + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM_HEADER]); + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM_PROC_CTX]); + + ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM]); + + gsi_trans_commit_wait(trans); + + return 0; +} + +/* Perform memory region-related initialization */ +int ipa_mem_init(struct ipa *ipa, u32 count, const struct ipa_mem *mem) +{ + struct device *dev = &ipa->pdev->dev; + struct resource *res; + int ret; + + if (count > IPA_MEM_COUNT) { + dev_err(dev, "to many memory regions (%u > %u)\n", + count, IPA_MEM_COUNT); + return -EINVAL; + } + + ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64)); + if (ret) { + dev_err(dev, "error %d setting DMA mask\n", ret); + return ret; + } + + res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM, + "ipa-shared"); + if (!res) { + dev_err(dev, + "DT error getting \"ipa-shared\" memory property\n"); + return -ENODEV; + } + + ipa->mem_virt = memremap(res->start, resource_size(res), MEMREMAP_WC); + if (!ipa->mem_virt) { + dev_err(dev, "unable to remap \"ipa-shared\" memory\n"); + return -ENOMEM; + } + + ipa->mem_addr = res->start; + ipa->mem_size = resource_size(res); + + /* The ipa->mem[] array is indexed by enum ipa_mem_id values */ + ipa->mem = mem; + + return 0; +} + +/* Inverse of ipa_mem_init() */ +void ipa_mem_exit(struct ipa *ipa) +{ + memunmap(ipa->mem_virt); +} diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h new file mode 100644 index 000000000000..065cb499ebe5 --- /dev/null +++ b/drivers/net/ipa/ipa_mem.h @@ -0,0 +1,90 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ +#ifndef _IPA_MEM_H_ +#define _IPA_MEM_H_ + +struct ipa; + +/** + * DOC: IPA Local Memory + * + * The IPA has a block of shared memory, divided into regions used for + * specific purposes. + * + * The regions within the shared block are bounded by an offset (relative to + * the "ipa-shared" memory range) and size found in the IPA_SHARED_MEM_SIZE + * register. + * + * Each region is optionally preceded by one or more 32-bit "canary" values. + * These are meant to detect out-of-range writes (if they become corrupted). + * A given region (such as a filter or routing table) has the same number + * of canaries for all IPA hardware versions. Still, the number used is + * defined in the config data, allowing for generic handling of regions. + * + * The set of memory regions is defined in configuration data. 
They are + * subject to these constraints: + * - a zero offset and zero size represents and undefined region + * - a region's offset is defined to be *past* all "canary" values + * - offset must be large enough to account for all canaries + * - a region's size may be zero, but may still have canaries + * - all offsets must be 8-byte aligned + * - most sizes must be a multiple of 8 + * - modem memory size must be a multiple of 4 + * - the microcontroller ring offset must be a multiple of 1024 + */ + +/* The maximum allowed size for any memory region */ +#define IPA_MEM_MAX (2 * PAGE_SIZE) + +/* IPA-resident memory region ids */ +enum ipa_mem_id { + IPA_MEM_UC_SHARED, /* 0 canaries */ + IPA_MEM_UC_INFO, /* 0 canaries */ + IPA_MEM_V4_FILTER_HASHED, /* 2 canaries */ + IPA_MEM_V4_FILTER, /* 2 canaries */ + IPA_MEM_V6_FILTER_HASHED, /* 2 canaries */ + IPA_MEM_V6_FILTER, /* 2 canaries */ + IPA_MEM_V4_ROUTE_HASHED, /* 2 canaries */ + IPA_MEM_V4_ROUTE, /* 2 canaries */ + IPA_MEM_V6_ROUTE_HASHED, /* 2 canaries */ + IPA_MEM_V6_ROUTE, /* 2 canaries */ + IPA_MEM_MODEM_HEADER, /* 2 canaries */ + IPA_MEM_AP_HEADER, /* 0 canaries */ + IPA_MEM_MODEM_PROC_CTX, /* 2 canaries */ + IPA_MEM_AP_PROC_CTX, /* 0 canaries */ + IPA_MEM_PDN_CONFIG, /* 2 canaries (IPA v4.0 and above) */ + IPA_MEM_STATS_QUOTA, /* 2 canaries (IPA v4.0 and above) */ + IPA_MEM_STATS_TETHERING, /* 0 canaries (IPA v4.0 and above) */ + IPA_MEM_STATS_DROP, /* 0 canaries (IPA v4.0 and above) */ + IPA_MEM_MODEM, /* 0 canaries */ + IPA_MEM_UC_EVENT_RING, /* 1 canary */ + IPA_MEM_COUNT, /* Number of regions (not an index) */ +}; + +/** + * struct ipa_mem - IPA local memory region description + * @offset: offset in IPA memory space to base of the region + * @size: size in bytes base of the region + * @canary_count # 32-bit "canary" values that precede region + */ +struct ipa_mem { + u32 offset; + u16 size; + u16 canary_count; +}; + +int ipa_mem_config(struct ipa *ipa); +void ipa_mem_deconfig(struct ipa *ipa); + +int ipa_mem_setup(struct ipa *ipa); +void ipa_mem_teardown(struct ipa *ipa); + +int ipa_mem_zero_modem(struct ipa *ipa); + +int ipa_mem_init(struct ipa *ipa, u32 count, const struct ipa_mem *mem); +void ipa_mem_exit(struct ipa *ipa); + +#endif /* _IPA_MEM_H_ */ From patchwork Fri Feb 28 22:41:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 190176 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 225AAC3F2D3 for ; Fri, 28 Feb 2020 22:43:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id BC318246AC for ; Fri, 28 Feb 2020 22:43:41 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="zBkwyrIR" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726961AbgB1Wnl (ORCPT ); Fri, 28 Feb 2020 17:43:41 -0500 Received: from mail-yw1-f65.google.com ([209.85.161.65]:37969 "EHLO mail-yw1-f65.google.com" 
rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727093AbgB1Wm3 (ORCPT ); Fri, 28 Feb 2020 17:42:29 -0500 Received: by mail-yw1-f65.google.com with SMTP id 10so4946358ywv.5 for ; Fri, 28 Feb 2020 14:42:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=DXUsvnza60UtTVHeA0SEYDL54izSmT+eKj1dJg7901k=; b=zBkwyrIRT2VQCZrKqeo8hD9SffR31dj0Xbxd9aLZ7HKZcXJsB58rN5SCvX6QnMidtW BQWnkaXKqqZC4JvT5FNxmkdOHb7jS81Hl0QVGbuDrgCCWK68fmxVXZ40KuQubRtYQ/BS 3MZCy3xcKv3xW6EPq4RejKNlxs5pDtv2KaUis0S3Kk8K3gUclbkEdQkMtm4yACtfCtH/ dld+T2Sv2dnOOJHgSBlkhnEcYwKteutPgP6ixEyDMAgJGw0hiwL+qzvIriJDJN0duv6s lQcpMdYHD7vXnAaEh++oBNwsOvuBEiHKJNT/kzNoK0rex3DQ0x62e71/iJYxGes0X+Gw U0GA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=DXUsvnza60UtTVHeA0SEYDL54izSmT+eKj1dJg7901k=; b=IQFF9RCf34LYEtwo0f/8tZJtute6y0VS/F7bnAKG3q8WY+XbWdBvS5+lJcT1ozT5oF xhlTXAvHR43vXcsJZJmUp6ekUt2ISgDiogVkpF79cAT1p2DRVAJYE2K+L6QqjKgWGy9p q3k+XC9665GF5guoDKNiqI7sgQcR2QQA/UAo4g0nGDMm+RCRLDKF2TKX+Zj2CAwWePiM BuexWkLC8gLCbDocw8/GAZ/dqzPWfUvBn5k0PRaaemIb/2NY+WiQxDguo/LAI6SGG2/k rXu5I/Pi7qO+00Hb54aPiZJiyKXDzMcQL1iv+3PHaaXOHD9CammS4YUWYd4jKtu7Np+T Efeg== X-Gm-Message-State: APjAAAUchRtJOJlmcCVOtrPt9zRF+2GPCwYGmZOiVPWef27mOVzAMnAs Jg28FKbEZdBQYr2fWXmYN/JdQA== X-Google-Smtp-Source: APXvYqxyW4soXdzZPdyegb6d75oLvQrGzzrMdTb6rvzdcmDgTMLwgv+oZ34S4Ou1z2FMlgnutz6b2g== X-Received: by 2002:a81:4858:: with SMTP id v85mr6689328ywa.288.1582929745070; Fri, 28 Feb 2020 14:42:25 -0800 (PST) Received: from localhost.localdomain (c-73-185-129-58.hsd1.mn.comcast.net. [73.185.129.58]) by smtp.gmail.com with ESMTPSA id d188sm4637830ywe.50.2020.02.28.14.42.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 28 Feb 2020 14:42:24 -0800 (PST) From: Alex Elder To: Arnd Bergmann , David Miller Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 07/17] soc: qcom: ipa: the generic software interface Date: Fri, 28 Feb 2020 16:41:54 -0600 Message-Id: <20200228224204.17746-8-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200228224204.17746-1-elder@linaro.org> References: <20200228224204.17746-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This patch includes "gsi.c", which implements the generic software interface (GSI) for IPA. The generic software interface abstracts channels, which provide a means of transferring data either from the AP to the IPA, or from the IPA to the AP. A ring buffer of "transfer elements" (TREs) is used to describe data transfers to perform. The AP writes a doorbell register associated with a channel to let it know it has added new entries (for an AP->IPA channel) or has finished processing entries (for an IPA->AP channel). 
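Very roughly, the TX commit path pairs TRE filling with that doorbell write. A grossly simplified, hypothetical sketch using helpers this patch adds (the function name is invented, and the TRE filling itself is elided):

static void example_tx_commit(struct gsi_channel *channel)
{
	/* ...TREs have been written at channel->tre_ring.index and the
	 * index advanced past them...
	 */

	/* Record what has been handed to hardware since the last doorbell */
	gsi_channel_tx_queued(channel);

	/* Write the DMA address of the first unfilled TRE slot to the
	 * channel doorbell register (see gsi_channel_doorbell()).
	 */
	gsi_channel_doorbell(channel);
}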
Each channel also has an event ring buffer, used by the IPA to communicate information about events related to a channel (for example, the completion of TREs). The IPA writes its own doorbell register, which triggers an interrupt on the AP, to signal that new event information has arrived. Signed-off-by: Alex Elder --- drivers/net/ipa/gsi.c | 2097 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 2097 insertions(+) create mode 100644 drivers/net/ipa/gsi.c diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c new file mode 100644 index 000000000000..f48d74f44592 --- /dev/null +++ b/drivers/net/ipa/gsi.c @@ -0,0 +1,2097 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "gsi.h" +#include "gsi_reg.h" +#include "gsi_private.h" +#include "gsi_trans.h" +#include "ipa_gsi.h" +#include "ipa_data.h" + +/** + * DOC: The IPA Generic Software Interface + * + * The generic software interface (GSI) is an integral component of the IPA, + * providing a well-defined communication layer between the AP subsystem + * and the IPA core. The modem uses the GSI layer as well. + * + * -------- --------- + * | | | | + * | AP +<---. .----+ Modem | + * | +--. | | .->+ | + * | | | | | | | | + * -------- | | | | --------- + * v | v | + * --+-+---+-+-- + * | GSI | + * |-----------| + * | | + * | IPA | + * | | + * ------------- + * + * In the above diagram, the AP and Modem represent "execution environments" + * (EEs), which are independent operating environments that use the IPA for + * data transfer. + * + * Each EE uses a set of unidirectional GSI "channels," which allow transfer + * of data to or from the IPA. A channel is implemented as a ring buffer, + * with a DRAM-resident array of "transfer elements" (TREs) available to + * describe transfers to or from other EEs through the IPA. A transfer + * element can also contain an immediate command, requesting the IPA perform + * actions other than data transfer. + * + * Each TRE refers to a block of data--also located DRAM. After writing one + * or more TREs to a channel, the writer (either the IPA or an EE) writes a + * doorbell register to inform the receiving side how many elements have + * been written. + * + * Each channel has a GSI "event ring" associated with it. An event ring + * is implemented very much like a channel ring, but is always directed from + * the IPA to an EE. The IPA notifies an EE (such as the AP) about channel + * events by adding an entry to the event ring associated with the channel. + * The GSI then writes its doorbell for the event ring, causing the target + * EE to be interrupted. Each entry in an event ring contains a pointer + * to the channel TRE whose completion the event represents. + * + * Each TRE in a channel ring has a set of flags. One flag indicates whether + * the completion of the transfer operation generates an entry (and possibly + * an interrupt) in the channel's event ring. Other flags allow transfer + * elements to be chained together, forming a single logical transaction. + * TRE flags are used to control whether and when interrupts are generated + * to signal completion of channel transfers. + * + * Elements in channel and event rings are completed (or consumed) strictly + * in order. Completion of one entry implies the completion of all preceding + * entries. 
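A small worked illustration of that ordering guarantee (indices invented):

/* The AP has processed events up through ring index 3; the next
 * interrupt reports that index 8 is now the first unfilled entry.
 * Because completion is strictly in order, entries 3..7 are all
 * known to be complete, so one interrupt retires five transfers.
 */
u32 old_index = 3, new_index = 8, ring_count = 16;
u32 completed = (new_index - old_index) % ring_count;	/* 5 */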
A single completion interrupt can therefore communicate the + * completion of many transfers. + * + * Note that all GSI registers are little-endian, which is the assumed + * endianness of I/O space accesses. The accessor functions perform byte + * swapping if needed (i.e., for a big endian CPU). + */ + +/* Delay period for interrupt moderation (in 32KHz IPA internal timer ticks) */ +#define GSI_EVT_RING_INT_MODT (32 * 1) /* 1ms under 32KHz clock */ + +#define GSI_CMD_TIMEOUT 5 /* seconds */ + +#define GSI_MHI_EVENT_ID_START 10 /* 1st event id reserved for MHI */ +#define GSI_MHI_EVENT_ID_END 16 /* Last event id reserved for MHI */ + +#define GSI_ISR_MAX_ITER 50 /* Detect interrupt storms */ + +/* An entry in an event ring */ +struct gsi_event { + __le64 xfer_ptr; + __le16 len; + u8 reserved1; + u8 code; + __le16 reserved2; + u8 type; + u8 chid; +}; + +/* Hardware values from the error log register error code field */ +enum gsi_err_code { + GSI_INVALID_TRE_ERR = 0x1, + GSI_OUT_OF_BUFFERS_ERR = 0x2, + GSI_OUT_OF_RESOURCES_ERR = 0x3, + GSI_UNSUPPORTED_INTER_EE_OP_ERR = 0x4, + GSI_EVT_RING_EMPTY_ERR = 0x5, + GSI_NON_ALLOCATED_EVT_ACCESS_ERR = 0x6, + GSI_HWO_1_ERR = 0x8, +}; + +/* Hardware values from the error log register error type field */ +enum gsi_err_type { + GSI_ERR_TYPE_GLOB = 0x1, + GSI_ERR_TYPE_CHAN = 0x2, + GSI_ERR_TYPE_EVT = 0x3, +}; + +/* Hardware values used when programming an event ring */ +enum gsi_evt_chtype { + GSI_EVT_CHTYPE_MHI_EV = 0x0, + GSI_EVT_CHTYPE_XHCI_EV = 0x1, + GSI_EVT_CHTYPE_GPI_EV = 0x2, + GSI_EVT_CHTYPE_XDCI_EV = 0x3, +}; + +/* Hardware values used when programming a channel */ +enum gsi_channel_protocol { + GSI_CHANNEL_PROTOCOL_MHI = 0x0, + GSI_CHANNEL_PROTOCOL_XHCI = 0x1, + GSI_CHANNEL_PROTOCOL_GPI = 0x2, + GSI_CHANNEL_PROTOCOL_XDCI = 0x3, +}; + +/* Hardware values representing an event ring immediate command opcode */ +enum gsi_evt_cmd_opcode { + GSI_EVT_ALLOCATE = 0x0, + GSI_EVT_RESET = 0x9, + GSI_EVT_DE_ALLOC = 0xa, +}; + +/* Hardware values representing a generic immediate command opcode */ +enum gsi_generic_cmd_opcode { + GSI_GENERIC_HALT_CHANNEL = 0x1, + GSI_GENERIC_ALLOCATE_CHANNEL = 0x2, +}; + +/* Hardware values representing a channel immediate command opcode */ +enum gsi_ch_cmd_opcode { + GSI_CH_ALLOCATE = 0x0, + GSI_CH_START = 0x1, + GSI_CH_STOP = 0x2, + GSI_CH_RESET = 0x9, + GSI_CH_DE_ALLOC = 0xa, +}; + +/** gsi_channel_scratch_gpi - GPI protocol scratch register + * @max_outstanding_tre: + * Defines the maximum number of TREs allowed in a single transaction + * on a channel (in bytes). This determines the amount of prefetch + * performed by the hardware. We configure this to equal the size of + * the TLV FIFO for the channel. + * @outstanding_threshold: + * Defines the threshold (in bytes) determining when the sequencer + * should update the channel doorbell. We configure this to equal + * the size of two TREs. + */ +struct gsi_channel_scratch_gpi { + u64 reserved1; + u16 reserved2; + u16 max_outstanding_tre; + u16 reserved3; + u16 outstanding_threshold; +}; + +/** gsi_channel_scratch - channel scratch configuration area + * + * The exact interpretation of this register is protocol-specific. + * We only use GPI channels; see struct gsi_channel_scratch_gpi, above. + */ +union gsi_channel_scratch { + struct gsi_channel_scratch_gpi gpi; + struct { + u32 word1; + u32 word2; + u32 word3; + u32 word4; + } data; +}; + +/* Check things that can be validated at build time. 
*/ +static void gsi_validate_build(void) +{ + /* This is used as a divisor */ + BUILD_BUG_ON(!GSI_RING_ELEMENT_SIZE); + + /* Code assumes the size of channel and event ring element are + * the same (and fixed). Make sure the size of an event ring + * element is what's expected. + */ + BUILD_BUG_ON(sizeof(struct gsi_event) != GSI_RING_ELEMENT_SIZE); + + /* Hardware requires a 2^n ring size. We ensure the number of + * elements in an event ring is a power of 2 elsewhere; this + * ensure the elements themselves meet the requirement. + */ + BUILD_BUG_ON(!is_power_of_2(GSI_RING_ELEMENT_SIZE)); + + /* The channel element size must fit in this field */ + BUILD_BUG_ON(GSI_RING_ELEMENT_SIZE > field_max(ELEMENT_SIZE_FMASK)); + + /* The event ring element size must fit in this field */ + BUILD_BUG_ON(GSI_RING_ELEMENT_SIZE > field_max(EV_ELEMENT_SIZE_FMASK)); +} + +/* Return the channel id associated with a given channel */ +static u32 gsi_channel_id(struct gsi_channel *channel) +{ + return channel - &channel->gsi->channel[0]; +} + +static void gsi_irq_ieob_enable(struct gsi *gsi, u32 evt_ring_id) +{ + u32 val; + + gsi->event_enable_bitmap |= BIT(evt_ring_id); + val = gsi->event_enable_bitmap; + iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET); +} + +static void gsi_isr_ieob_clear(struct gsi *gsi, u32 mask) +{ + iowrite32(mask, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET); +} + +static void gsi_irq_ieob_disable(struct gsi *gsi, u32 evt_ring_id) +{ + u32 val; + + gsi->event_enable_bitmap &= ~BIT(evt_ring_id); + val = gsi->event_enable_bitmap; + iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET); +} + +/* Enable all GSI_interrupt types */ +static void gsi_irq_enable(struct gsi *gsi) +{ + u32 val; + + /* We don't use inter-EE channel or event interrupts */ + val = GSI_CNTXT_TYPE_IRQ_MSK_ALL; + val &= ~MSK_INTER_EE_CH_CTRL_FMASK; + val &= ~MSK_INTER_EE_EV_CTRL_FMASK; + iowrite32(val, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET); + + val = GENMASK(gsi->channel_count - 1, 0); + iowrite32(val, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET); + + val = GENMASK(gsi->evt_ring_count - 1, 0); + iowrite32(val, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET); + + /* Each IEOB interrupt is enabled (later) as needed by channels */ + iowrite32(0, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET); + + val = GSI_CNTXT_GLOB_IRQ_ALL; + iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET); + + /* Never enable GSI_BREAK_POINT */ + val = GSI_CNTXT_GSI_IRQ_ALL & ~EN_BREAK_POINT_FMASK; + iowrite32(val, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET); +} + +/* Disable all GSI_interrupt types */ +static void gsi_irq_disable(struct gsi *gsi) +{ + iowrite32(0, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET); + iowrite32(0, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET); + iowrite32(0, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET); + iowrite32(0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET); + iowrite32(0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET); + iowrite32(0, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET); +} + +/* Return the virtual address associated with a ring index */ +void *gsi_ring_virt(struct gsi_ring *ring, u32 index) +{ + /* Note: index *must* be used modulo the ring count here */ + return ring->virt + (index % ring->count) * GSI_RING_ELEMENT_SIZE; +} + +/* Return the 32-bit DMA address associated with a ring index */ +static u32 gsi_ring_addr(struct gsi_ring *ring, u32 index) +{ + return (ring->addr & GENMASK(31, 0)) + index * GSI_RING_ELEMENT_SIZE; +} + +/* Return the ring index of a 32-bit ring offset */ 
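A quick worked example of this ring arithmetic, covering gsi_ring_virt(), gsi_ring_addr() and gsi_ring_index() (defined next). Addresses are invented, and GSI_RING_ELEMENT_SIZE is taken to be 16, matching sizeof(struct gsi_event) per the BUILD_BUG_ON above:

/* A ring of 8 elements whose DMA address is 0x40001000:
 *
 *   gsi_ring_addr(ring, 3)           == 0x40001000 + 3 * 16 == 0x40001030
 *   gsi_ring_index(ring, 0x40001030) == (0x40001030 - 0x40001000) / 16 == 3
 *   gsi_ring_virt(ring, 11)          == ring->virt + (11 % 8) * 16 (element 3)
 *
 * Note that gsi_ring_addr() does not reduce its index modulo the ring
 * count, so callers pass "index % ring->count" where wrapping matters
 * (see the doorbell helpers later in this file).
 */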
+static u32 gsi_ring_index(struct gsi_ring *ring, u32 offset) +{ + return (offset - gsi_ring_addr(ring, 0)) / GSI_RING_ELEMENT_SIZE; +} + +/* Issue a GSI command by writing a value to a register, then wait for + * completion to be signaled. Reports an error if the command times out. + * (Timeout is not expected, and suggests broken hardware.) + */ +static int +gsi_command(struct gsi *gsi, u32 reg, u32 val, struct completion *completion) +{ + reinit_completion(completion); + + iowrite32(val, gsi->virt + reg); + if (!wait_for_completion_timeout(completion, GSI_CMD_TIMEOUT * HZ)) + return -ETIMEDOUT; + + return 0; +} + +/* Return the hardware's notion of the current state of an event ring */ +static enum gsi_evt_ring_state +gsi_evt_ring_state(struct gsi *gsi, u32 evt_ring_id) +{ + u32 val; + + val = ioread32(gsi->virt + GSI_EV_CH_E_CNTXT_0_OFFSET(evt_ring_id)); + + return u32_get_bits(val, EV_CHSTATE_FMASK); +} + +/* Return whether an event ring's state is valid for an operation */ +static bool gsi_evt_ring_state_valid(struct gsi *gsi, u32 evt_ring_id, + enum gsi_evt_cmd_opcode opcode) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + struct device *dev = evt_ring->channel->gsi->dev; + enum gsi_evt_ring_state state = evt_ring->state; + bool valid; + + switch (opcode) { + case GSI_EVT_ALLOCATE: + valid = state == GSI_EVT_RING_STATE_NOT_ALLOCATED; + break; + + case GSI_EVT_RESET: + valid = state == GSI_EVT_RING_STATE_ALLOCATED || + state == GSI_EVT_RING_STATE_ERROR; + break; + + case GSI_EVT_DE_ALLOC: + valid = state == GSI_EVT_RING_STATE_ALLOCATED; + break; + + default: + dev_err(dev, + "event ring %u unrecognized state %u for opcode %u\n", + evt_ring_id, state, opcode); + return false; + } + + if (!valid) + dev_err(dev, + "event ring %u unexpected state %u for opcode %u\n", + evt_ring_id, state, opcode); + + return valid; +} + +/* Issue an event ring command and wait for it to complete */ +static int evt_ring_command(struct gsi *gsi, u32 evt_ring_id, + enum gsi_evt_cmd_opcode opcode) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + struct completion *completion = &evt_ring->completion; + u32 val; + int ret; + + if (!gsi_evt_ring_state_valid(gsi, evt_ring_id, opcode)) + return -EINVAL; + + val = u32_encode_bits(evt_ring_id, EV_CHID_FMASK); + val |= u32_encode_bits(opcode, EV_OPCODE_FMASK); + + ret = gsi_command(gsi, GSI_EV_CH_CMD_OFFSET, val, completion); + if (ret) + dev_err(gsi->dev, + "error %d issuing command %u to event ring %u\n", + ret, opcode, evt_ring_id); + + return ret; +} + +/* Allocate an event ring in NOT_ALLOCATED state */ +static int gsi_evt_ring_alloc_command(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + int ret; + + /* Get initial event ring state */ + evt_ring->state = gsi_evt_ring_state(gsi, evt_ring_id); + + ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_ALLOCATE); + if (!ret && evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) { + dev_err(gsi->dev, "bad event ring state (%u) after alloc\n", + evt_ring->state); + ret = -EIO; + } + + return ret; +} + +/* Reset a GSI event ring in ALLOCATED or ERROR state. 
*/ +static void gsi_evt_ring_reset_command(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + int ret; + + ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_RESET); + if (!ret && evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) + dev_err(gsi->dev, "bad event ring state (%u) after reset\n", + evt_ring->state); +} + +/* Issue a hardware de-allocation request for an allocated event ring */ +static void gsi_evt_ring_de_alloc_command(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + int ret; + + ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_DE_ALLOC); + if (!ret && evt_ring->state != GSI_EVT_RING_STATE_NOT_ALLOCATED) + dev_err(gsi->dev, "bad event ring state (%u) after dealloc\n", + evt_ring->state); +} + +/* Return the hardware's notion of the current state of a channel */ +static enum gsi_channel_state +gsi_channel_state(struct gsi *gsi, u32 channel_id) +{ + u32 val; + + val = ioread32(gsi->virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id)); + + return u32_get_bits(val, CHSTATE_FMASK); +} + +/* Return whether a channel's state is valid for an operation */ +static bool gsi_channel_state_valid(struct gsi *gsi, u32 channel_id, + enum gsi_ch_cmd_opcode opcode) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + enum gsi_channel_state state = channel->state; + struct device *dev = channel->gsi->dev; + bool valid; + + switch (opcode) { + case GSI_CH_ALLOCATE: + valid = state == GSI_CHANNEL_STATE_NOT_ALLOCATED; + break; + + case GSI_CH_START: + valid = state == GSI_CHANNEL_STATE_ALLOCATED || + state == GSI_CHANNEL_STATE_STOP_IN_PROC || + state == GSI_CHANNEL_STATE_STOPPED; + break; + + case GSI_CH_STOP: + valid = state == GSI_CHANNEL_STATE_STARTED || + state == GSI_CHANNEL_STATE_STOP_IN_PROC || + state == GSI_CHANNEL_STATE_ERROR; + break; + + case GSI_CH_RESET: + valid = state == GSI_CHANNEL_STATE_STOPPED; + break; + + case GSI_CH_DE_ALLOC: + valid = state == GSI_CHANNEL_STATE_ALLOCATED; + break; + + default: + dev_err(dev, + "channel %u unrecognized state %u for opcode %u\n", + channel_id, state, opcode); + return false; + } + + if (!valid) + dev_err(dev, "channel %u unexpected state %u for opcode %u\n", + channel_id, state, opcode); + + return valid; +} + +/* Issue a channel command and wait for it to complete */ +static int +gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode opcode) +{ + struct completion *completion = &channel->completion; + u32 channel_id = gsi_channel_id(channel); + u32 val; + int ret; + + if (!gsi_channel_state_valid(channel->gsi, channel_id, opcode)) + return -EINVAL; + + val = u32_encode_bits(channel_id, CH_CHID_FMASK); + val |= u32_encode_bits(opcode, CH_OPCODE_FMASK); + + ret = gsi_command(channel->gsi, GSI_CH_CMD_OFFSET, val, completion); + if (ret) + dev_err(channel->gsi->dev, + "error %d issuing command %u to channel %u\n", + ret, opcode, channel_id); + + return ret; +} + +/* Allocate GSI channel in NOT_ALLOCATED state */ +static int gsi_channel_alloc_command(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + int ret; + + /* Get initial channel state */ + channel->state = gsi_channel_state(gsi, channel_id); + + ret = gsi_channel_command(channel, GSI_CH_ALLOCATE); + if (!ret && channel->state != GSI_CHANNEL_STATE_ALLOCATED) { + dev_err(gsi->dev, "bad channel state (%u) after alloc\n", + channel->state); + ret = -EIO; + } + + return ret; +} + +/* Start an ALLOCATED channel */ +static int 
gsi_channel_start_command(struct gsi_channel *channel) +{ + int ret; + + ret = gsi_channel_command(channel, GSI_CH_START); + if (!ret && channel->state != GSI_CHANNEL_STATE_STARTED) { + dev_err(channel->gsi->dev, + "bad channel state (%u) after start\n", + channel->state); + ret = -EIO; + } + + return ret; +} + +/* Stop a GSI channel in STARTED state */ +static int gsi_channel_stop_command(struct gsi_channel *channel) +{ + int ret; + + ret = gsi_channel_command(channel, GSI_CH_STOP); + if (ret || channel->state == GSI_CHANNEL_STATE_STOPPED) + return ret; + + /* We may have to try again if stop is in progress */ + if (channel->state == GSI_CHANNEL_STATE_STOP_IN_PROC) + return -EAGAIN; + + dev_err(channel->gsi->dev, "bad channel state (%u) after stop\n", + channel->state); + + return -EIO; +} + +/* Reset a GSI channel in ALLOCATED or ERROR state. */ +static void gsi_channel_reset_command(struct gsi_channel *channel) +{ + int ret; + + msleep(1); /* A short delay is required before a RESET command */ + + ret = gsi_channel_command(channel, GSI_CH_RESET); + if (!ret && channel->state != GSI_CHANNEL_STATE_ALLOCATED) + dev_err(channel->gsi->dev, + "bad channel state (%u) after reset\n", + channel->state); +} + +/* Deallocate an ALLOCATED GSI channel */ +static void gsi_channel_de_alloc_command(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + int ret; + + ret = gsi_channel_command(channel, GSI_CH_DE_ALLOC); + if (!ret && channel->state != GSI_CHANNEL_STATE_NOT_ALLOCATED) + dev_err(gsi->dev, "bad channel state (%u) after dealloc\n", + channel->state); +} + +/* Ring an event ring doorbell, reporting the last entry processed by the AP. + * The index argument (modulo the ring count) is the first unfilled entry, so + * we supply one less than that with the doorbell. Update the event ring + * index field with the value provided. + */ +static void gsi_evt_ring_doorbell(struct gsi *gsi, u32 evt_ring_id, u32 index) +{ + struct gsi_ring *ring = &gsi->evt_ring[evt_ring_id].ring; + u32 val; + + ring->index = index; /* Next unused entry */ + + /* Note: index *must* be used modulo the ring count here */ + val = gsi_ring_addr(ring, (index - 1) % ring->count); + iowrite32(val, gsi->virt + GSI_EV_CH_E_DOORBELL_0_OFFSET(evt_ring_id)); +} + +/* Program an event ring for use */ +static void gsi_evt_ring_program(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + size_t size = evt_ring->ring.count * GSI_RING_ELEMENT_SIZE; + u32 val; + + val = u32_encode_bits(GSI_EVT_CHTYPE_GPI_EV, EV_CHTYPE_FMASK); + val |= EV_INTYPE_FMASK; + val |= u32_encode_bits(GSI_RING_ELEMENT_SIZE, EV_ELEMENT_SIZE_FMASK); + iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_0_OFFSET(evt_ring_id)); + + val = u32_encode_bits(size, EV_R_LENGTH_FMASK); + iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_1_OFFSET(evt_ring_id)); + + /* The context 2 and 3 registers store the low-order and + * high-order 32 bits of the address of the event ring, + * respectively. 
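For example (address invented), a 64-bit event ring address splits across the two registers like this:

/* evt_ring->ring.addr == 0x0000000812345000:
 *
 *   addr & GENMASK(31, 0)  == 0x12345000  ->  context 2 register
 *   addr >> 32             == 0x00000008  ->  context 3 register
 */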
+ */ + val = evt_ring->ring.addr & GENMASK(31, 0); + iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_2_OFFSET(evt_ring_id)); + + val = evt_ring->ring.addr >> 32; + iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_3_OFFSET(evt_ring_id)); + + /* Enable interrupt moderation by setting the moderation delay */ + val = u32_encode_bits(GSI_EVT_RING_INT_MODT, MODT_FMASK); + val |= u32_encode_bits(1, MODC_FMASK); /* comes from channel */ + iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_8_OFFSET(evt_ring_id)); + + /* No MSI write data, and MSI address high and low address is 0 */ + iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_9_OFFSET(evt_ring_id)); + iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_10_OFFSET(evt_ring_id)); + iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_11_OFFSET(evt_ring_id)); + + /* We don't need to get event read pointer updates */ + iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_12_OFFSET(evt_ring_id)); + iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_13_OFFSET(evt_ring_id)); + + /* Finally, tell the hardware we've completed event 0 (arbitrary) */ + gsi_evt_ring_doorbell(gsi, evt_ring_id, 0); +} + +/* Return the last (most recent) transaction completed on a channel. */ +static struct gsi_trans *gsi_channel_trans_last(struct gsi_channel *channel) +{ + struct gsi_trans_info *trans_info = &channel->trans_info; + struct gsi_trans *trans; + + spin_lock_bh(&trans_info->spinlock); + + if (!list_empty(&trans_info->complete)) + trans = list_last_entry(&trans_info->complete, + struct gsi_trans, links); + else if (!list_empty(&trans_info->polled)) + trans = list_last_entry(&trans_info->polled, + struct gsi_trans, links); + else + trans = NULL; + + /* Caller will wait for this, so take a reference */ + if (trans) + refcount_inc(&trans->refcount); + + spin_unlock_bh(&trans_info->spinlock); + + return trans; +} + +/* Wait for transaction activity on a channel to complete */ +static void gsi_channel_trans_quiesce(struct gsi_channel *channel) +{ + struct gsi_trans *trans; + + /* Get the last transaction, and wait for it to complete */ + trans = gsi_channel_trans_last(channel); + if (trans) { + wait_for_completion(&trans->completion); + gsi_trans_free(trans); + } +} + +/* Stop channel activity. Transactions may not be allocated until thawed. */ +static void gsi_channel_freeze(struct gsi_channel *channel) +{ + gsi_channel_trans_quiesce(channel); + + napi_disable(&channel->napi); + + gsi_irq_ieob_disable(channel->gsi, channel->evt_ring_id); +} + +/* Allow transactions to be used on the channel again. 
*/ +static void gsi_channel_thaw(struct gsi_channel *channel) +{ + gsi_irq_ieob_enable(channel->gsi, channel->evt_ring_id); + + napi_enable(&channel->napi); +} + +/* Program a channel for use */ +static void gsi_channel_program(struct gsi_channel *channel, bool doorbell) +{ + size_t size = channel->tre_ring.count * GSI_RING_ELEMENT_SIZE; + u32 channel_id = gsi_channel_id(channel); + union gsi_channel_scratch scr = { }; + struct gsi_channel_scratch_gpi *gpi; + struct gsi *gsi = channel->gsi; + u32 wrr_weight = 0; + u32 val; + + /* Arbitrarily pick TRE 0 as the first channel element to use */ + channel->tre_ring.index = 0; + + /* We program all channels to use GPI protocol */ + val = u32_encode_bits(GSI_CHANNEL_PROTOCOL_GPI, CHTYPE_PROTOCOL_FMASK); + if (channel->toward_ipa) + val |= CHTYPE_DIR_FMASK; + val |= u32_encode_bits(channel->evt_ring_id, ERINDEX_FMASK); + val |= u32_encode_bits(GSI_RING_ELEMENT_SIZE, ELEMENT_SIZE_FMASK); + iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id)); + + val = u32_encode_bits(size, R_LENGTH_FMASK); + iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_1_OFFSET(channel_id)); + + /* The context 2 and 3 registers store the low-order and + * high-order 32 bits of the address of the channel ring, + * respectively. + */ + val = channel->tre_ring.addr & GENMASK(31, 0); + iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_2_OFFSET(channel_id)); + + val = channel->tre_ring.addr >> 32; + iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_3_OFFSET(channel_id)); + + /* Command channel gets low weighted round-robin priority */ + if (channel->command) + wrr_weight = field_max(WRR_WEIGHT_FMASK); + val = u32_encode_bits(wrr_weight, WRR_WEIGHT_FMASK); + + /* Max prefetch is 1 segment (do not set MAX_PREFETCH_FMASK) */ + + /* Enable the doorbell engine if requested */ + if (doorbell) + val |= USE_DB_ENG_FMASK; + + if (!channel->use_prefetch) + val |= USE_ESCAPE_BUF_ONLY_FMASK; + + iowrite32(val, gsi->virt + GSI_CH_C_QOS_OFFSET(channel_id)); + + /* Now update the scratch registers for GPI protocol */ + gpi = &scr.gpi; + gpi->max_outstanding_tre = gsi_channel_trans_tre_max(gsi, channel_id) * + GSI_RING_ELEMENT_SIZE; + gpi->outstanding_threshold = 2 * GSI_RING_ELEMENT_SIZE; + + val = scr.data.word1; + iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_0_OFFSET(channel_id)); + + val = scr.data.word2; + iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_1_OFFSET(channel_id)); + + val = scr.data.word3; + iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_2_OFFSET(channel_id)); + + /* We must preserve the upper 16 bits of the last scratch register. + * The next sequence assumes those bits remain unchanged between the + * read and the write. + */ + val = ioread32(gsi->virt + GSI_CH_C_SCRATCH_3_OFFSET(channel_id)); + val = (scr.data.word4 & GENMASK(31, 16)) | (val & GENMASK(15, 0)); + iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_3_OFFSET(channel_id)); + + /* All done! 
*/
+}
+
+static void gsi_channel_deprogram(struct gsi_channel *channel)
+{
+	/* Nothing to do */
+}
+
+/* Start an allocated GSI channel */
+int gsi_channel_start(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	u32 evt_ring_id = channel->evt_ring_id;
+	int ret;
+
+	mutex_lock(&gsi->mutex);
+
+	ret = gsi_channel_start_command(channel);
+
+	mutex_unlock(&gsi->mutex);
+
+	/* Clear the channel's event ring interrupt in case it's pending */
+	gsi_isr_ieob_clear(gsi, BIT(evt_ring_id));
+
+	gsi_channel_thaw(channel);
+
+	return ret;
+}
+
+/* Stop a started channel */
+int gsi_channel_stop(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	int ret;
+
+	gsi_channel_freeze(channel);
+
+	/* Channel could have entered STOPPED state since last call if the
+	 * STOP command timed out. We won't stop a channel if stopping it
+	 * was successful previously (so we still want the freeze above).
+	 */
+	if (channel->state == GSI_CHANNEL_STATE_STOPPED)
+		return 0;
+
+	mutex_lock(&gsi->mutex);
+
+	ret = gsi_channel_stop_command(channel);
+
+	mutex_unlock(&gsi->mutex);
+
+	/* Thaw the channel if we need to retry (or on error) */
+	if (ret)
+		gsi_channel_thaw(channel);
+
+	return ret;
+}
+
+/* Reset and reconfigure a channel (possibly leaving doorbell disabled) */
+void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool db_enable)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	mutex_lock(&gsi->mutex);
+
+	/* Due to a hardware quirk we need to reset RX channels twice. */
+	gsi_channel_reset_command(channel);
+	if (!channel->toward_ipa)
+		gsi_channel_reset_command(channel);
+
+	gsi_channel_program(channel, db_enable);
+	gsi_channel_trans_cancel_pending(channel);
+
+	mutex_unlock(&gsi->mutex);
+}
+
+/* Stop a STARTED channel for suspend (only stop if RX and requested) */
+int gsi_channel_suspend(struct gsi *gsi, u32 channel_id, bool stop)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	if (stop && !channel->toward_ipa)
+		return gsi_channel_stop(gsi, channel_id);
+
+	gsi_channel_freeze(channel);
+
+	return 0;
+}
+
+/* Resume a suspended channel (starting will be requested if STOPPED) */
+int gsi_channel_resume(struct gsi *gsi, u32 channel_id, bool start)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	if (start && !channel->toward_ipa)
+		return gsi_channel_start(gsi, channel_id);
+
+	gsi_channel_thaw(channel);
+
+	return 0;
+}
+
+/**
+ * gsi_channel_tx_queued() - Report queued TX transfers for a channel
+ * @channel: Channel for which to report
+ *
+ * Report to the network stack the number of bytes and transactions that
+ * have been queued to hardware since last call. This and the next function
+ * supply information used by the network stack for throttling.
+ *
+ * For each channel we track the number of transactions used and bytes of
+ * data those transactions represent. We also track what those values are
+ * each time this function is called. Subtracting the two tells us
+ * the number of bytes and transactions that have been added between
+ * successive calls.
+ *
+ * Calling this each time we ring the channel doorbell allows us to
+ * provide accurate information to the network stack about how much
+ * work we've given the hardware at any point in time.
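A worked example of that bookkeeping (figures invented):

/* Since the previous doorbell the running totals have advanced:
 *
 *   channel->byte_count           10000 -> 14000
 *   channel->trans_count             40 ->    43
 *   channel->queued_byte_count    10000  (snapshot taken at last call)
 *   channel->queued_trans_count      40
 *
 * This call therefore reports 4000 bytes and 3 transactions via
 * ipa_gsi_channel_tx_queued(), then takes fresh snapshots.
 */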
+ */ +void gsi_channel_tx_queued(struct gsi_channel *channel) +{ + u32 trans_count; + u32 byte_count; + + byte_count = channel->byte_count - channel->queued_byte_count; + trans_count = channel->trans_count - channel->queued_trans_count; + channel->queued_byte_count = channel->byte_count; + channel->queued_trans_count = channel->trans_count; + + ipa_gsi_channel_tx_queued(channel->gsi, gsi_channel_id(channel), + trans_count, byte_count); +} + +/** + * gsi_channel_tx_update() - Report completed TX transfers + * @channel: Channel that has completed transmitting packets + * @trans: Last transation known to be complete + * + * Compute the number of transactions and bytes that have been transferred + * over a TX channel since the given transaction was committed. Report this + * information to the network stack. + * + * At the time a transaction is committed, we record its channel's + * committed transaction and byte counts *in the transaction*. + * Completions are signaled by the hardware with an interrupt, and + * we can determine the latest completed transaction at that time. + * + * The difference between the byte/transaction count recorded in + * the transaction and the count last time we recorded a completion + * tells us exactly how much data has been transferred between + * completions. + * + * Calling this each time we learn of a newly-completed transaction + * allows us to provide accurate information to the network stack + * about how much work has been completed by the hardware at a given + * point in time. + */ +static void +gsi_channel_tx_update(struct gsi_channel *channel, struct gsi_trans *trans) +{ + u64 byte_count = trans->byte_count + trans->len; + u64 trans_count = trans->trans_count + 1; + + byte_count -= channel->compl_byte_count; + channel->compl_byte_count += byte_count; + trans_count -= channel->compl_trans_count; + channel->compl_trans_count += trans_count; + + ipa_gsi_channel_tx_completed(channel->gsi, gsi_channel_id(channel), + trans_count, byte_count); +} + +/* Channel control interrupt handler */ +static void gsi_isr_chan_ctrl(struct gsi *gsi) +{ + u32 channel_mask; + + channel_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_CH_IRQ_OFFSET); + iowrite32(channel_mask, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET); + + while (channel_mask) { + u32 channel_id = __ffs(channel_mask); + struct gsi_channel *channel; + + channel_mask ^= BIT(channel_id); + + channel = &gsi->channel[channel_id]; + channel->state = gsi_channel_state(gsi, channel_id); + + complete(&channel->completion); + } +} + +/* Event ring control interrupt handler */ +static void gsi_isr_evt_ctrl(struct gsi *gsi) +{ + u32 event_mask; + + event_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_OFFSET); + iowrite32(event_mask, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET); + + while (event_mask) { + u32 evt_ring_id = __ffs(event_mask); + struct gsi_evt_ring *evt_ring; + + event_mask ^= BIT(evt_ring_id); + + evt_ring = &gsi->evt_ring[evt_ring_id]; + evt_ring->state = gsi_evt_ring_state(gsi, evt_ring_id); + + complete(&evt_ring->completion); + } +} + +/* Global channel error interrupt handler */ +static void +gsi_isr_glob_chan_err(struct gsi *gsi, u32 err_ee, u32 channel_id, u32 code) +{ + if (code == GSI_OUT_OF_RESOURCES_ERR) { + dev_err(gsi->dev, "channel %u out of resources\n", channel_id); + complete(&gsi->channel[channel_id].completion); + return; + } + + /* Report, but otherwise ignore all other error codes */ + dev_err(gsi->dev, "channel %u global error ee 0x%08x code 0x%08x\n", + channel_id, err_ee, code); 
+} + +/* Global event error interrupt handler */ +static void +gsi_isr_glob_evt_err(struct gsi *gsi, u32 err_ee, u32 evt_ring_id, u32 code) +{ + if (code == GSI_OUT_OF_RESOURCES_ERR) { + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + u32 channel_id = gsi_channel_id(evt_ring->channel); + + complete(&evt_ring->completion); + dev_err(gsi->dev, "evt_ring for channel %u out of resources\n", + channel_id); + return; + } + + /* Report, but otherwise ignore all other error codes */ + dev_err(gsi->dev, "event ring %u global error ee %u code 0x%08x\n", + evt_ring_id, err_ee, code); +} + +/* Global error interrupt handler */ +static void gsi_isr_glob_err(struct gsi *gsi) +{ + enum gsi_err_type type; + enum gsi_err_code code; + u32 which; + u32 val; + u32 ee; + + /* Get the logged error, then reinitialize the log */ + val = ioread32(gsi->virt + GSI_ERROR_LOG_OFFSET); + iowrite32(0, gsi->virt + GSI_ERROR_LOG_OFFSET); + iowrite32(~0, gsi->virt + GSI_ERROR_LOG_CLR_OFFSET); + + ee = u32_get_bits(val, ERR_EE_FMASK); + which = u32_get_bits(val, ERR_VIRT_IDX_FMASK); + type = u32_get_bits(val, ERR_TYPE_FMASK); + code = u32_get_bits(val, ERR_CODE_FMASK); + + if (type == GSI_ERR_TYPE_CHAN) + gsi_isr_glob_chan_err(gsi, ee, which, code); + else if (type == GSI_ERR_TYPE_EVT) + gsi_isr_glob_evt_err(gsi, ee, which, code); + else /* type GSI_ERR_TYPE_GLOB should be fatal */ + dev_err(gsi->dev, "unexpected global error 0x%08x\n", type); +} + +/* Generic EE interrupt handler */ +static void gsi_isr_gp_int1(struct gsi *gsi) +{ + u32 result; + u32 val; + + val = ioread32(gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET); + result = u32_get_bits(val, GENERIC_EE_RESULT_FMASK); + if (result != GENERIC_EE_SUCCESS_FVAL) + dev_err(gsi->dev, "global INT1 generic result %u\n", result); + + complete(&gsi->completion); +} +/* Inter-EE interrupt handler */ +static void gsi_isr_glob_ee(struct gsi *gsi) +{ + u32 val; + + val = ioread32(gsi->virt + GSI_CNTXT_GLOB_IRQ_STTS_OFFSET); + + if (val & ERROR_INT_FMASK) + gsi_isr_glob_err(gsi); + + iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_CLR_OFFSET); + + val &= ~ERROR_INT_FMASK; + + if (val & EN_GP_INT1_FMASK) { + val ^= EN_GP_INT1_FMASK; + gsi_isr_gp_int1(gsi); + } + + if (val) + dev_err(gsi->dev, "unexpected global interrupt 0x%08x\n", val); +} + +/* I/O completion interrupt event */ +static void gsi_isr_ieob(struct gsi *gsi) +{ + u32 event_mask; + + event_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_OFFSET); + gsi_isr_ieob_clear(gsi, event_mask); + + while (event_mask) { + u32 evt_ring_id = __ffs(event_mask); + + event_mask ^= BIT(evt_ring_id); + + gsi_irq_ieob_disable(gsi, evt_ring_id); + napi_schedule(&gsi->evt_ring[evt_ring_id].channel->napi); + } +} + +/* General event interrupts represent serious problems, so report them */ +static void gsi_isr_general(struct gsi *gsi) +{ + struct device *dev = gsi->dev; + u32 val; + + val = ioread32(gsi->virt + GSI_CNTXT_GSI_IRQ_STTS_OFFSET); + iowrite32(val, gsi->virt + GSI_CNTXT_GSI_IRQ_CLR_OFFSET); + + if (val) + dev_err(dev, "unexpected general interrupt 0x%08x\n", val); +} + +/** + * gsi_isr() - Top level GSI interrupt service routine + * @irq: Interrupt number (ignored) + * @dev_id: GSI pointer supplied to request_irq() + * + * This is the main handler function registered for the GSI IRQ. Each type + * of interrupt has a separate handler function that is called from here. 
+ */ +static irqreturn_t gsi_isr(int irq, void *dev_id) +{ + struct gsi *gsi = dev_id; + u32 intr_mask; + u32 cnt = 0; + + while ((intr_mask = ioread32(gsi->virt + GSI_CNTXT_TYPE_IRQ_OFFSET))) { + /* intr_mask contains bitmask of pending GSI interrupts */ + do { + u32 gsi_intr = BIT(__ffs(intr_mask)); + + intr_mask ^= gsi_intr; + + switch (gsi_intr) { + case CH_CTRL_FMASK: + gsi_isr_chan_ctrl(gsi); + break; + case EV_CTRL_FMASK: + gsi_isr_evt_ctrl(gsi); + break; + case GLOB_EE_FMASK: + gsi_isr_glob_ee(gsi); + break; + case IEOB_FMASK: + gsi_isr_ieob(gsi); + break; + case GENERAL_FMASK: + gsi_isr_general(gsi); + break; + default: + dev_err(gsi->dev, + "%s: unrecognized type 0x%08x\n", + __func__, gsi_intr); + break; + } + } while (intr_mask); + + if (++cnt > GSI_ISR_MAX_ITER) { + dev_err(gsi->dev, "interrupt flood\n"); + break; + } + } + + return IRQ_HANDLED; +} + +/* Return the transaction associated with a transfer completion event */ +static struct gsi_trans *gsi_event_trans(struct gsi_channel *channel, + struct gsi_event *event) +{ + u32 tre_offset; + u32 tre_index; + + /* Event xfer_ptr records the TRE it's associated with */ + tre_offset = le64_to_cpu(event->xfer_ptr) & GENMASK(31, 0); + tre_index = gsi_ring_index(&channel->tre_ring, tre_offset); + + return gsi_channel_trans_mapped(channel, tre_index); +} + +/** + * gsi_evt_ring_rx_update() - Record lengths of received data + * @evt_ring: Event ring associated with channel that received packets + * @index: Event index in ring reported by hardware + * + * Events for RX channels contain the actual number of bytes received into + * the buffer. Every event has a transaction associated with it, and here + * we update transactions to record their actual received lengths. + * + * This function is called whenever we learn that the GSI hardware has filled + * new events since the last time we checked. The ring's index field tells + * the first entry in need of processing. The index provided is the + * first *unfilled* event in the ring (following the last filled one). + * + * Events are sequential within the event ring, and transactions are + * sequential within the transaction pool. + * + * Note that @index always refers to an element *within* the event ring. + */ +static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index) +{ + struct gsi_channel *channel = evt_ring->channel; + struct gsi_ring *ring = &evt_ring->ring; + struct gsi_trans_info *trans_info; + struct gsi_event *event_done; + struct gsi_event *event; + struct gsi_trans *trans; + u32 byte_count = 0; + u32 old_index; + u32 event_avail; + + trans_info = &channel->trans_info; + + /* We'll start with the oldest un-processed event. RX channels + * replenish receive buffers in single-TRE transactions, so we + * can just map that event to its transaction. Transactions + * associated with completion events are consecutive. + */ + old_index = ring->index; + event = gsi_ring_virt(ring, old_index); + trans = gsi_event_trans(channel, event); + + /* Compute the number of events to process before we wrap, + * and determine when we'll be done processing events. 
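A worked example of the wrap handling (counts invented):

/* Suppose ring->count == 256, old_index == 250, and the hardware
 * reports that index 4 is now the first unfilled entry:
 *
 *   event_avail = 256 - 250 % 256 = 6
 *
 * The loop below advances through events 250..255 with event++, then
 * wraps to gsi_ring_virt(ring, 0) and continues with events 0..3,
 * stopping when it reaches event_done == gsi_ring_virt(ring, 4)
 * (ten transactions updated in all).
 */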
+ */ + event_avail = ring->count - old_index % ring->count; + event_done = gsi_ring_virt(ring, index); + do { + trans->len = __le16_to_cpu(event->len); + byte_count += trans->len; + + /* Move on to the next event and transaction */ + if (--event_avail) + event++; + else + event = gsi_ring_virt(ring, 0); + trans = gsi_trans_pool_next(&trans_info->pool, trans); + } while (event != event_done); + + /* We record RX bytes when they are received */ + channel->byte_count += byte_count; + channel->trans_count++; +} + +/* Initialize a ring, including allocating DMA memory for its entries */ +static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count) +{ + size_t size = count * GSI_RING_ELEMENT_SIZE; + struct device *dev = gsi->dev; + dma_addr_t addr; + + /* Hardware requires a 2^n ring size, with alignment equal to size */ + ring->virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL); + if (ring->virt && addr % size) { + dma_free_coherent(dev, size, ring->virt, ring->addr); + dev_err(dev, "unable to alloc 0x%zx-aligned ring buffer\n", + size); + return -EINVAL; /* Not a good error value, but distinct */ + } else if (!ring->virt) { + return -ENOMEM; + } + ring->addr = addr; + ring->count = count; + + return 0; +} + +/* Free a previously-allocated ring */ +static void gsi_ring_free(struct gsi *gsi, struct gsi_ring *ring) +{ + size_t size = ring->count * GSI_RING_ELEMENT_SIZE; + + dma_free_coherent(gsi->dev, size, ring->virt, ring->addr); +} + +/* Allocate an available event ring id */ +static int gsi_evt_ring_id_alloc(struct gsi *gsi) +{ + u32 evt_ring_id; + + if (gsi->event_bitmap == ~0U) { + dev_err(gsi->dev, "event rings exhausted\n"); + return -ENOSPC; + } + + evt_ring_id = ffz(gsi->event_bitmap); + gsi->event_bitmap |= BIT(evt_ring_id); + + return (int)evt_ring_id; +} + +/* Free a previously-allocated event ring id */ +static void gsi_evt_ring_id_free(struct gsi *gsi, u32 evt_ring_id) +{ + gsi->event_bitmap &= ~BIT(evt_ring_id); +} + +/* Ring a channel doorbell, reporting the first un-filled entry */ +void gsi_channel_doorbell(struct gsi_channel *channel) +{ + struct gsi_ring *tre_ring = &channel->tre_ring; + u32 channel_id = gsi_channel_id(channel); + struct gsi *gsi = channel->gsi; + u32 val; + + /* Note: index *must* be used modulo the ring count here */ + val = gsi_ring_addr(tre_ring, tre_ring->index % tre_ring->count); + iowrite32(val, gsi->virt + GSI_CH_C_DOORBELL_0_OFFSET(channel_id)); +} + +/* Consult hardware, move any newly completed transactions to completed list */ +static void gsi_channel_update(struct gsi_channel *channel) +{ + u32 evt_ring_id = channel->evt_ring_id; + struct gsi *gsi = channel->gsi; + struct gsi_evt_ring *evt_ring; + struct gsi_trans *trans; + struct gsi_ring *ring; + u32 offset; + u32 index; + + evt_ring = &gsi->evt_ring[evt_ring_id]; + ring = &evt_ring->ring; + + /* See if there's anything new to process; if not, we're done. Note + * that index always refers to an entry *within* the event ring. + */ + offset = GSI_EV_CH_E_CNTXT_4_OFFSET(evt_ring_id); + index = gsi_ring_index(ring, ioread32(gsi->virt + offset)); + if (index == ring->index % ring->count) + return; + + /* Get the transaction for the latest completed event. Take a + * reference to keep it from completing before we give the events + * for this and previous transactions back to the hardware. 
+ */
+	trans = gsi_event_trans(channel, gsi_ring_virt(ring, index - 1));
+	refcount_inc(&trans->refcount);
+
+	/* For RX channels, update each completed transaction with the number
+	 * of bytes that were actually received. For TX channels, report
+	 * the number of transactions and bytes this completion represents
+	 * up the network stack.
+	 */
+	if (channel->toward_ipa)
+		gsi_channel_tx_update(channel, trans);
+	else
+		gsi_evt_ring_rx_update(evt_ring, index);
+
+	gsi_trans_move_complete(trans);
+
+	/* Tell the hardware we've handled these events */
+	gsi_evt_ring_doorbell(channel->gsi, channel->evt_ring_id, index);
+
+	gsi_trans_free(trans);
+}
+
+/**
+ * gsi_channel_poll_one() - Return a single completed transaction on a channel
+ * @channel: Channel to be polled
+ *
+ * @Return: Transaction pointer, or null if none are available
+ *
+ * This function returns the first entry on a channel's completed transaction
+ * list. If that list is empty, the hardware is consulted to determine
+ * whether any new transactions have completed. If so, they're moved to the
+ * completed list and the new first entry is returned. If there are no more
+ * completed transactions, a null pointer is returned.
+ */
+static struct gsi_trans *gsi_channel_poll_one(struct gsi_channel *channel)
+{
+	struct gsi_trans *trans;
+
+	/* Get the first transaction from the completed list */
+	trans = gsi_channel_trans_complete(channel);
+	if (!trans) {
+		/* List is empty; see if there's more to do */
+		gsi_channel_update(channel);
+		trans = gsi_channel_trans_complete(channel);
+	}
+
+	if (trans)
+		gsi_trans_move_polled(trans);
+
+	return trans;
+}
+
+/**
+ * gsi_channel_poll() - NAPI poll function for a channel
+ * @napi: NAPI structure for the channel
+ * @budget: Budget supplied by NAPI core
+ *
+ * @Return: Number of items polled (<= budget)
+ *
+ * Single transactions completed by hardware are polled until either
+ * the budget is exhausted, or there are no more. Each transaction
+ * polled is passed to gsi_trans_complete(), to perform remaining
+ * completion processing and retire/free the transaction.
+ */
+static int gsi_channel_poll(struct napi_struct *napi, int budget)
+{
+	struct gsi_channel *channel;
+	int count = 0;
+
+	channel = container_of(napi, struct gsi_channel, napi);
+	while (count < budget) {
+		struct gsi_trans *trans;
+
+		count++;
+		trans = gsi_channel_poll_one(channel);
+		if (!trans)
+			break;
+		gsi_trans_complete(trans);
+	}
+
+	if (count < budget) {
+		napi_complete(&channel->napi);
+		gsi_irq_ieob_enable(channel->gsi, channel->evt_ring_id);
+	}
+
+	return count;
+}
+
+/* The event bitmap represents which event ids are available for allocation.
+ * Set bits are not available, clear bits can be used. This function
+ * initializes the map so all events supported by the hardware are available,
+ * then precludes any reserved events from being allocated.
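A worked example of the resulting bitmap (evt_ring_max invented):

/* With evt_ring_max == 20 on the 32-bit event bitmap:
 *
 *   GENMASK(BITS_PER_LONG - 1, 20)  marks ids 20 and up as in use
 *   GENMASK(16, 10)                 marks the MHI-reserved ids 10..16
 *
 * event_bitmap == 0xfff1fc00, leaving ids 0..9 and 17..19 for
 * gsi_evt_ring_id_alloc() to hand out (lowest clear bit via ffz()).
 */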
+ */ +static u32 gsi_event_bitmap_init(u32 evt_ring_max) +{ + u32 event_bitmap = GENMASK(BITS_PER_LONG - 1, evt_ring_max); + + event_bitmap |= GENMASK(GSI_MHI_EVENT_ID_END, GSI_MHI_EVENT_ID_START); + + return event_bitmap; +} + +/* Setup function for event rings */ +static void gsi_evt_ring_setup(struct gsi *gsi) +{ + /* Nothing to do */ +} + +/* Inverse of gsi_evt_ring_setup() */ +static void gsi_evt_ring_teardown(struct gsi *gsi) +{ + /* Nothing to do */ +} + +/* Setup function for a single channel */ +static int gsi_channel_setup_one(struct gsi *gsi, u32 channel_id, + bool db_enable) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + u32 evt_ring_id = channel->evt_ring_id; + int ret; + + if (!channel->gsi) + return 0; /* Ignore uninitialized channels */ + + ret = gsi_evt_ring_alloc_command(gsi, evt_ring_id); + if (ret) + return ret; + + gsi_evt_ring_program(gsi, evt_ring_id); + + ret = gsi_channel_alloc_command(gsi, channel_id); + if (ret) + goto err_evt_ring_de_alloc; + + gsi_channel_program(channel, db_enable); + + if (channel->toward_ipa) + netif_tx_napi_add(&gsi->dummy_dev, &channel->napi, + gsi_channel_poll, NAPI_POLL_WEIGHT); + else + netif_napi_add(&gsi->dummy_dev, &channel->napi, + gsi_channel_poll, NAPI_POLL_WEIGHT); + + return 0; + +err_evt_ring_de_alloc: + /* We've done nothing with the event ring yet so don't reset */ + gsi_evt_ring_de_alloc_command(gsi, evt_ring_id); + + return ret; +} + +/* Inverse of gsi_channel_setup_one() */ +static void gsi_channel_teardown_one(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + u32 evt_ring_id = channel->evt_ring_id; + + if (!channel->gsi) + return; /* Ignore uninitialized channels */ + + netif_napi_del(&channel->napi); + + gsi_channel_deprogram(channel); + gsi_channel_de_alloc_command(gsi, channel_id); + gsi_evt_ring_reset_command(gsi, evt_ring_id); + gsi_evt_ring_de_alloc_command(gsi, evt_ring_id); +} + +static int gsi_generic_command(struct gsi *gsi, u32 channel_id, + enum gsi_generic_cmd_opcode opcode) +{ + struct completion *completion = &gsi->completion; + u32 val; + u32 ret; + + val = u32_encode_bits(opcode, GENERIC_OPCODE_FMASK); + val |= u32_encode_bits(channel_id, GENERIC_CHID_FMASK); + val |= u32_encode_bits(GSI_EE_MODEM, GENERIC_EE_FMASK); + + ret = gsi_command(gsi, GSI_GENERIC_CMD_OFFSET, val, completion); + if (ret) + dev_err(gsi->dev, + "error %d issuing generic command %u for channel %u\n", + ret, opcode, channel_id); + + return ret; +} + +static int gsi_modem_channel_alloc(struct gsi *gsi, u32 channel_id) +{ + return gsi_generic_command(gsi, channel_id, + GSI_GENERIC_ALLOCATE_CHANNEL); +} + +static void gsi_modem_channel_halt(struct gsi *gsi, u32 channel_id) +{ + int ret; + + ret = gsi_generic_command(gsi, channel_id, GSI_GENERIC_HALT_CHANNEL); + if (ret) + dev_err(gsi->dev, "error %d halting modem channel %u\n", + channel_id); +} + +/* Setup function for channels */ +static int gsi_channel_setup(struct gsi *gsi, bool db_enable) +{ + u32 channel_id = 0; + u32 mask; + int ret; + + gsi_evt_ring_setup(gsi); + gsi_irq_enable(gsi); + + mutex_lock(&gsi->mutex); + + do { + ret = gsi_channel_setup_one(gsi, channel_id, db_enable); + if (ret) + goto err_unwind; + } while (++channel_id < gsi->channel_count); + + /* Make sure no channels were defined that hardware does not support */ + while (channel_id < GSI_CHANNEL_COUNT_MAX) { + struct gsi_channel *channel = &gsi->channel[channel_id++]; + + if (!channel->gsi) + continue; /* Ignore uninitialized channels */ + + 
dev_err(gsi->dev, "channel %u not supported by hardware\n", + channel_id - 1); + channel_id = gsi->channel_count; + goto err_unwind; + } + + /* Allocate modem channels if necessary */ + mask = gsi->modem_channel_bitmap; + while (mask) { + u32 modem_channel_id = __ffs(mask); + + ret = gsi_modem_channel_alloc(gsi, modem_channel_id); + if (ret) + goto err_unwind_modem; + + /* Clear bit from mask only after success (for unwind) */ + mask ^= BIT(modem_channel_id); + } + + mutex_unlock(&gsi->mutex); + + return 0; + +err_unwind_modem: + /* Compute which modem channels need to be deallocated */ + mask ^= gsi->modem_channel_bitmap; + while (mask) { + u32 channel_id = __fls(mask); + + mask ^= BIT(channel_id); + + gsi_modem_channel_halt(gsi, channel_id); + } + +err_unwind: + while (channel_id--) + gsi_channel_teardown_one(gsi, channel_id); + + mutex_unlock(&gsi->mutex); + + gsi_irq_disable(gsi); + gsi_evt_ring_teardown(gsi); + + return ret; +} + +/* Inverse of gsi_channel_setup() */ +static void gsi_channel_teardown(struct gsi *gsi) +{ + u32 mask = gsi->modem_channel_bitmap; + u32 channel_id; + + mutex_lock(&gsi->mutex); + + while (mask) { + u32 channel_id = __fls(mask); + + mask ^= BIT(channel_id); + + gsi_modem_channel_halt(gsi, channel_id); + } + + channel_id = gsi->channel_count - 1; + do + gsi_channel_teardown_one(gsi, channel_id); + while (channel_id--); + + mutex_unlock(&gsi->mutex); + + gsi_irq_disable(gsi); + gsi_evt_ring_teardown(gsi); +} + +/* Setup function for GSI. GSI firmware must be loaded and initialized */ +int gsi_setup(struct gsi *gsi, bool db_enable) +{ + u32 val; + + /* Here is where we first touch the GSI hardware */ + val = ioread32(gsi->virt + GSI_GSI_STATUS_OFFSET); + if (!(val & ENABLED_FMASK)) { + dev_err(gsi->dev, "GSI has not been enabled\n"); + return -EIO; + } + + val = ioread32(gsi->virt + GSI_GSI_HW_PARAM_2_OFFSET); + + gsi->channel_count = u32_get_bits(val, NUM_CH_PER_EE_FMASK); + if (!gsi->channel_count) { + dev_err(gsi->dev, "GSI reports zero channels supported\n"); + return -EINVAL; + } + if (gsi->channel_count > GSI_CHANNEL_COUNT_MAX) { + dev_warn(gsi->dev, + "limiting to %u channels (hardware supports %u)\n", + GSI_CHANNEL_COUNT_MAX, gsi->channel_count); + gsi->channel_count = GSI_CHANNEL_COUNT_MAX; + } + + gsi->evt_ring_count = u32_get_bits(val, NUM_EV_PER_EE_FMASK); + if (!gsi->evt_ring_count) { + dev_err(gsi->dev, "GSI reports zero event rings supported\n"); + return -EINVAL; + } + if (gsi->evt_ring_count > GSI_EVT_RING_COUNT_MAX) { + dev_warn(gsi->dev, + "limiting to %u event rings (hardware supports %u)\n", + GSI_EVT_RING_COUNT_MAX, gsi->evt_ring_count); + gsi->evt_ring_count = GSI_EVT_RING_COUNT_MAX; + } + + /* Initialize the error log */ + iowrite32(0, gsi->virt + GSI_ERROR_LOG_OFFSET); + + /* Writing 1 indicates IRQ interrupts; 0 would be MSI */ + iowrite32(1, gsi->virt + GSI_CNTXT_INTSET_OFFSET); + + return gsi_channel_setup(gsi, db_enable); +} + +/* Inverse of gsi_setup() */ +void gsi_teardown(struct gsi *gsi) +{ + gsi_channel_teardown(gsi); +} + +/* Initialize a channel's event ring */ +static int gsi_channel_evt_ring_init(struct gsi_channel *channel) +{ + struct gsi *gsi = channel->gsi; + struct gsi_evt_ring *evt_ring; + int ret; + + ret = gsi_evt_ring_id_alloc(gsi); + if (ret < 0) + return ret; + channel->evt_ring_id = ret; + + evt_ring = &gsi->evt_ring[channel->evt_ring_id]; + evt_ring->channel = channel; + + ret = gsi_ring_alloc(gsi, &evt_ring->ring, channel->event_count); + if (!ret) + return 0; /* Success! 
*/ + + dev_err(gsi->dev, "error %d allocating channel %u event ring\n", + ret, gsi_channel_id(channel)); + + gsi_evt_ring_id_free(gsi, channel->evt_ring_id); + + return ret; +} + +/* Inverse of gsi_channel_evt_ring_init() */ +static void gsi_channel_evt_ring_exit(struct gsi_channel *channel) +{ + u32 evt_ring_id = channel->evt_ring_id; + struct gsi *gsi = channel->gsi; + struct gsi_evt_ring *evt_ring; + + evt_ring = &gsi->evt_ring[evt_ring_id]; + gsi_ring_free(gsi, &evt_ring->ring); + gsi_evt_ring_id_free(gsi, evt_ring_id); +} + +/* Init function for event rings */ +static void gsi_evt_ring_init(struct gsi *gsi) +{ + u32 evt_ring_id = 0; + + gsi->event_bitmap = gsi_event_bitmap_init(GSI_EVT_RING_COUNT_MAX); + gsi->event_enable_bitmap = 0; + do + init_completion(&gsi->evt_ring[evt_ring_id].completion); + while (++evt_ring_id < GSI_EVT_RING_COUNT_MAX); +} + +/* Inverse of gsi_evt_ring_init() */ +static void gsi_evt_ring_exit(struct gsi *gsi) +{ + /* Nothing to do */ +} + +static bool gsi_channel_data_valid(struct gsi *gsi, + const struct ipa_gsi_endpoint_data *data) +{ +#ifdef IPA_VALIDATION + u32 channel_id = data->channel_id; + struct device *dev = gsi->dev; + + /* Make sure channel ids are in the range driver supports */ + if (channel_id >= GSI_CHANNEL_COUNT_MAX) { + dev_err(dev, "bad channel id %u (must be less than %u)\n", + channel_id, GSI_CHANNEL_COUNT_MAX); + return false; + } + + if (data->ee_id != GSI_EE_AP && data->ee_id != GSI_EE_MODEM) { + dev_err(dev, "bad EE id %u (AP or modem)\n", data->ee_id); + return false; + } + + if (!data->channel.tlv_count || + data->channel.tlv_count > GSI_TLV_MAX) { + dev_err(dev, "channel %u bad tlv_count %u (must be 1..%u)\n", + channel_id, data->channel.tlv_count, GSI_TLV_MAX); + return false; + } + + /* We have to allow at least one maximally-sized transaction to + * be outstanding (which would use tlv_count TREs). Given how + * gsi_channel_tre_max() is computed, tre_count has to be almost + * twice the TLV FIFO size to satisfy this requirement. 
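The checks that follow enforce this. Requiring that one maximally-sized transaction (tlv_count TREs) fit within the per-channel limit of tre_count - (tlv_count - 1) outstanding TREs reduces to tre_count >= 2 * tlv_count - 1. A small standalone sketch applying the same constraints to made-up counts (not taken from any real configuration data):

#include <stdbool.h>
#include <stdio.h>

static bool is_power_of_2(unsigned int n)
{
	return n && !(n & (n - 1));
}

/* Mirror the per-channel count constraints described above */
static bool counts_valid(unsigned int tlv_count, unsigned int tre_count,
			 unsigned int event_count)
{
	if (tre_count < 2 * tlv_count - 1)
		return false;	/* one full-size transaction wouldn't fit */
	if (!is_power_of_2(tre_count) || !is_power_of_2(event_count))
		return false;	/* ring sizes must be powers of two */

	return true;
}

int main(void)
{
	/* tlv_count 8 needs tre_count >= 15, so 16 is the smallest valid ring */
	printf("8/16/16: %s\n", counts_valid(8, 16, 16) ? "ok" : "bad");
	printf("8/8/16:  %s\n", counts_valid(8, 8, 16) ? "ok" : "bad");

	return 0;
}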
+ */ + if (data->channel.tre_count < 2 * data->channel.tlv_count - 1) { + dev_err(dev, "channel %u TLV count %u exceeds TRE count %u\n", + channel_id, data->channel.tlv_count, + data->channel.tre_count); + return false; + } + + if (!is_power_of_2(data->channel.tre_count)) { + dev_err(dev, "channel %u bad tre_count %u (not power of 2)\n", + channel_id, data->channel.tre_count); + return false; + } + + if (!is_power_of_2(data->channel.event_count)) { + dev_err(dev, "channel %u bad event_count %u (not power of 2)\n", + channel_id, data->channel.event_count); + return false; + } +#endif /* IPA_VALIDATION */ + + return true; +} + +/* Init function for a single channel */ +static int gsi_channel_init_one(struct gsi *gsi, + const struct ipa_gsi_endpoint_data *data, + bool command, bool prefetch) +{ + struct gsi_channel *channel; + u32 tre_count; + int ret; + + if (!gsi_channel_data_valid(gsi, data)) + return -EINVAL; + + /* Worst case we need an event for every outstanding TRE */ + if (data->channel.tre_count > data->channel.event_count) { + dev_warn(gsi->dev, "channel %u limited to %u TREs\n", + data->channel_id, data->channel.tre_count); + tre_count = data->channel.event_count; + } else { + tre_count = data->channel.tre_count; + } + + channel = &gsi->channel[data->channel_id]; + memset(channel, 0, sizeof(*channel)); + + channel->gsi = gsi; + channel->toward_ipa = data->toward_ipa; + channel->command = command; + channel->use_prefetch = command && prefetch; + channel->tlv_count = data->channel.tlv_count; + channel->tre_count = tre_count; + channel->event_count = data->channel.event_count; + init_completion(&channel->completion); + + ret = gsi_channel_evt_ring_init(channel); + if (ret) + goto err_clear_gsi; + + ret = gsi_ring_alloc(gsi, &channel->tre_ring, data->channel.tre_count); + if (ret) { + dev_err(gsi->dev, "error %d allocating channel %u ring\n", + ret, data->channel_id); + goto err_channel_evt_ring_exit; + } + + ret = gsi_channel_trans_init(gsi, data->channel_id); + if (ret) + goto err_ring_free; + + if (command) { + u32 tre_max = gsi_channel_tre_max(gsi, data->channel_id); + + ret = ipa_cmd_pool_init(channel, tre_max); + } + if (!ret) + return 0; /* Success! 
*/ + + gsi_channel_trans_exit(channel); +err_ring_free: + gsi_ring_free(gsi, &channel->tre_ring); +err_channel_evt_ring_exit: + gsi_channel_evt_ring_exit(channel); +err_clear_gsi: + channel->gsi = NULL; /* Mark it not (fully) initialized */ + + return ret; +} + +/* Inverse of gsi_channel_init_one() */ +static void gsi_channel_exit_one(struct gsi_channel *channel) +{ + if (!channel->gsi) + return; /* Ignore uninitialized channels */ + + if (channel->command) + ipa_cmd_pool_exit(channel); + gsi_channel_trans_exit(channel); + gsi_ring_free(channel->gsi, &channel->tre_ring); + gsi_channel_evt_ring_exit(channel); +} + +/* Init function for channels */ +static int gsi_channel_init(struct gsi *gsi, bool prefetch, u32 count, + const struct ipa_gsi_endpoint_data *data, + bool modem_alloc) +{ + int ret = 0; + u32 i; + + gsi_evt_ring_init(gsi); + + /* The endpoint data array is indexed by endpoint name */ + for (i = 0; i < count; i++) { + bool command = i == IPA_ENDPOINT_AP_COMMAND_TX; + + if (ipa_gsi_endpoint_data_empty(&data[i])) + continue; /* Skip over empty slots */ + + /* Mark modem channels to be allocated (hardware workaround) */ + if (data[i].ee_id == GSI_EE_MODEM) { + if (modem_alloc) + gsi->modem_channel_bitmap |= + BIT(data[i].channel_id); + continue; + } + + ret = gsi_channel_init_one(gsi, &data[i], command, prefetch); + if (ret) + goto err_unwind; + } + + return ret; + +err_unwind: + while (i--) { + if (ipa_gsi_endpoint_data_empty(&data[i])) + continue; + if (modem_alloc && data[i].ee_id == GSI_EE_MODEM) { + gsi->modem_channel_bitmap &= ~BIT(data[i].channel_id); + continue; + } + gsi_channel_exit_one(&gsi->channel[data->channel_id]); + } + gsi_evt_ring_exit(gsi); + + return ret; +} + +/* Inverse of gsi_channel_init() */ +static void gsi_channel_exit(struct gsi *gsi) +{ + u32 channel_id = GSI_CHANNEL_COUNT_MAX - 1; + + do + gsi_channel_exit_one(&gsi->channel[channel_id]); + while (channel_id--); + gsi->modem_channel_bitmap = 0; + + gsi_evt_ring_exit(gsi); +} + +/* Init function for GSI. GSI hardware does not need to be "ready" */ +int gsi_init(struct gsi *gsi, struct platform_device *pdev, bool prefetch, + u32 count, const struct ipa_gsi_endpoint_data *data, + bool modem_alloc) +{ + struct resource *res; + resource_size_t size; + unsigned int irq; + int ret; + + gsi_validate_build(); + + gsi->dev = &pdev->dev; + + /* The GSI layer performs NAPI on all endpoints. NAPI requires a + * network device structure, but the GSI layer does not have one, + * so we must create a dummy network device for this purpose. + */ + init_dummy_netdev(&gsi->dummy_dev); + + /* Get the GSI IRQ and request for it to wake the system */ + ret = platform_get_irq_byname(pdev, "gsi"); + if (ret <= 0) { + dev_err(gsi->dev, + "DT error %d getting \"gsi\" IRQ property\n", ret); + return ret ? 
: -EINVAL; + } + irq = ret; + + ret = request_irq(irq, gsi_isr, 0, "gsi", gsi); + if (ret) { + dev_err(gsi->dev, "error %d requesting \"gsi\" IRQ\n", ret); + return ret; + } + gsi->irq = irq; + + ret = enable_irq_wake(gsi->irq); + if (ret) + dev_warn(gsi->dev, "error %d enabling gsi wake irq\n", ret); + gsi->irq_wake_enabled = !ret; + + /* Get GSI memory range and map it */ + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "gsi"); + if (!res) { + dev_err(gsi->dev, + "DT error getting \"gsi\" memory property\n"); + ret = -ENODEV; + goto err_disable_irq_wake; + } + + size = resource_size(res); + if (res->start > U32_MAX || size > U32_MAX - res->start) { + dev_err(gsi->dev, "DT memory resource \"gsi\" out of range\n"); + ret = -EINVAL; + goto err_disable_irq_wake; + } + + gsi->virt = ioremap(res->start, size); + if (!gsi->virt) { + dev_err(gsi->dev, "unable to remap \"gsi\" memory\n"); + ret = -ENOMEM; + goto err_disable_irq_wake; + } + + ret = gsi_channel_init(gsi, prefetch, count, data, modem_alloc); + if (ret) + goto err_iounmap; + + mutex_init(&gsi->mutex); + init_completion(&gsi->completion); + + return 0; + +err_iounmap: + iounmap(gsi->virt); +err_disable_irq_wake: + if (gsi->irq_wake_enabled) + (void)disable_irq_wake(gsi->irq); + free_irq(gsi->irq, gsi); + + return ret; +} + +/* Inverse of gsi_init() */ +void gsi_exit(struct gsi *gsi) +{ + mutex_destroy(&gsi->mutex); + gsi_channel_exit(gsi); + if (gsi->irq_wake_enabled) + (void)disable_irq_wake(gsi->irq); + free_irq(gsi->irq, gsi); + iounmap(gsi->virt); +} + +/* The maximum number of outstanding TREs on a channel. This limits + * a channel's maximum number of transactions outstanding (worst case + * is one TRE per transaction). + * + * The absolute limit is the number of TREs in the channel's TRE ring, + * and in theory we should be able use all of them. But in practice, + * doing that led to the hardware reporting exhaustion of event ring + * slots for writing completion information. So the hardware limit + * would be (tre_count - 1). + * + * We reduce it a bit further though. Transaction resource pools are + * sized to be a little larger than this maximum, to allow resource + * allocations to always be contiguous. The number of entries in a + * TRE ring buffer is a power of 2, and the extra resources in a pool + * tends to nearly double the memory allocated for it. Reducing the + * maximum number of outstanding TREs allows the number of entries in + * a pool to avoid crossing that power-of-2 boundary, and this can + * substantially reduce pool memory requirements. The number we + * reduce it by matches the number added in gsi_trans_pool_init(). 
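Working the arithmetic with illustrative numbers (and assuming a pool whose max_alloc equals the channel's tlv_count, as gsi_channel_trans_init() sets up for the scatterlist pool later in this series): with tre_count = 256 and tlv_count = 8, the limit is 256 - 7 = 249 outstanding TREs, and a pool sized for 249 entries plus max_alloc - 1 = 7 extras holds exactly 256 entries, landing on the power-of-two boundary rather than just past it.

#include <stdio.h>

int main(void)
{
	unsigned int tre_count = 256, tlv_count = 8;	/* illustrative values */
	unsigned int tre_max = tre_count - (tlv_count - 1);
	unsigned int pool_entries = tre_max + (tlv_count - 1);	/* count + max_alloc - 1 */

	/* Prints "tre_max 249, pool entries 256" */
	printf("tre_max %u, pool entries %u\n", tre_max, pool_entries);

	return 0;
}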
+ */ +u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + + /* Hardware limit is channel->tre_count - 1 */ + return channel->tre_count - (channel->tlv_count - 1); +} + +/* Returns the maximum number of TREs in a single transaction for a channel */ +u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + + return channel->tlv_count; +} From patchwork Fri Feb 28 22:41:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 190177 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0469BC3F2D5 for ; Fri, 28 Feb 2020 22:43:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id B0A2A246AC for ; Fri, 28 Feb 2020 22:43:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="JQKCeE1s" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726822AbgB1Wnf (ORCPT ); Fri, 28 Feb 2020 17:43:35 -0500 Received: from mail-yw1-f66.google.com ([209.85.161.66]:42426 "EHLO mail-yw1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727115AbgB1Wmb (ORCPT ); Fri, 28 Feb 2020 17:42:31 -0500 Received: by mail-yw1-f66.google.com with SMTP id n127so4916073ywd.9 for ; Fri, 28 Feb 2020 14:42:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=b47MKvxD2msol8/MWCxj/Lo4whOdBr+UIw9UckJBD2M=; b=JQKCeE1sINUaZmgL6fmXVTYxY6xBba3Wp0U/KnreSRt3pdxW4xtJl/tG/ljszRe7W6 LtrgQOGZqGn1MhU6ABB2xJ+eg7B4WmTUdi7HwFEbI8o7pC7jkPpFBl1TgD7ZPLorugIn N7uepY6HvisdOPEgir6sAp6wZqcZf/zPI6TAwB5cJE2kYZclSULVURK1cZNSVyNMC3Wa a9m5K9oI6fnD9e05dzulfR+7sj5cDpf9KquLxH8ek7M7c85FEJchaU4Q6ZAl8mq98I8B 9jbZcYlUHeW4U/+zg4VPpKTqIgZ7hR45uqbOWyTfTDfVtlAqALjC0gyBxd9Sg/tkCxHW YKHg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=b47MKvxD2msol8/MWCxj/Lo4whOdBr+UIw9UckJBD2M=; b=md9NxBUqUjKDQrjwgeb1izeqPJ0FAZFVDdmP7wjChFuNGzrxCo+8tx8TrByXvCxk6p MkeUe2/vtlJ6OUauWPQHHZm+sBxU26bljmGDqXozq4kjy4Q81Uy2qABkPNzyYGr1Nj3J XqmWySI005T4uwmONVWlu5Qk/OaQFMj4qGcEUjBhxUE1NzdPPf/L+izntHFkmtNHG6DY Ovd8kjX2R/+ONANKZdDGSWVcAK3abIg96tEEejnnSreCg70LnH5uYKdBNtNItO1E7pJ8 GzwCURwACGrlJdz0wGm8uMGwyUzwxtNkGAe7Q5E2fOtSzGB31GPz0umYgYq1cvSnLf3u EzMA== X-Gm-Message-State: APjAAAU32zQcg84gCxg4Y9jMVL9NoR5yWUZA2Q+n172wo7U1GbI0t5DM lDCE1L0C0FJNmuhX6DTv5omVjw== X-Google-Smtp-Source: APXvYqwEicRswfTvRY7H/iQHyxdQZGBArwSG6Pw6q5WZLDi4wTf5h1vi8PiJ6vnKZZRE0kPvDXU42Q== X-Received: by 2002:a81:8803:: with SMTP id y3mr6985802ywf.338.1582929749187; Fri, 28 Feb 2020 14:42:29 -0800 (PST) Received: from localhost.localdomain 
(c-73-185-129-58.hsd1.mn.comcast.net. [73.185.129.58]) by smtp.gmail.com with ESMTPSA id d188sm4637830ywe.50.2020.02.28.14.42.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 28 Feb 2020 14:42:28 -0800 (PST) From: Alex Elder To: Arnd Bergmann , David Miller Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 09/17] soc: qcom: ipa: GSI transactions Date: Fri, 28 Feb 2020 16:41:56 -0600 Message-Id: <20200228224204.17746-10-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200228224204.17746-1-elder@linaro.org> References: <20200228224204.17746-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This patch implements GSI transactions. A GSI transaction is a structure that represents a single request (consisting of one or more TREs) sent to the GSI hardware. The last TRE in a transaction includes a flag requesting that the GSI interrupt the AP to notify that it has completed. TREs are executed and completed strictly in order. For this reason, the completion of a single TRE implies that all previous TREs (in particular all of those "earlier" in a transaction) have completed. Whenever there is a need to send a request (a set of TREs) to the IPA, a GSI transaction is allocated, specifying the number of TREs that will be required. Details of the request (e.g. transfer offsets and length) are represented by in a Linux scatterlist array that is incorporated in the transaction structure. Once all commands (TREs) are added to a transaction it is committed. When the hardware signals that the request has completed, a callback function allows for cleanup or followup activity to be performed before the transaction is freed. Signed-off-by: Alex Elder --- drivers/net/ipa/gsi_trans.c | 786 ++++++++++++++++++++++++++++++++++++ drivers/net/ipa/gsi_trans.h | 226 +++++++++++ 2 files changed, 1012 insertions(+) create mode 100644 drivers/net/ipa/gsi_trans.c create mode 100644 drivers/net/ipa/gsi_trans.h diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c new file mode 100644 index 000000000000..09fbbfbcca1b --- /dev/null +++ b/drivers/net/ipa/gsi_trans.c @@ -0,0 +1,786 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ + +#include +#include +#include +#include +#include +#include + +#include "gsi.h" +#include "gsi_private.h" +#include "gsi_trans.h" +#include "ipa_gsi.h" +#include "ipa_data.h" +#include "ipa_cmd.h" + +/** + * DOC: GSI Transactions + * + * A GSI transaction abstracts the behavior of a GSI channel by representing + * everything about a related group of IPA commands in a single structure. + * (A "command" in this sense is either a data transfer or an IPA immediate + * command.) Most details of interaction with the GSI hardware are managed + * by the GSI transaction core, allowing users to simply describe commands + * to be performed. 
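The description above notes that TREs complete strictly in order, so an interrupt for the last TRE of a transaction implies everything queued before it has finished as well. A toy model of that retirement rule, assuming nothing about the real hardware beyond in-order completion (a plain array stands in for the driver's pending list):

#include <stdio.h>

#define QUEUE_LEN 8

/* Pending transaction ids, oldest first; 0 marks an empty slot */
static unsigned int pending[QUEUE_LEN] = { 1, 2, 3 };

/* Completion of transaction "id" retires it and every older one */
static void complete_up_to(unsigned int id)
{
	unsigned int i;

	for (i = 0; i < QUEUE_LEN && pending[i] && pending[i] <= id; i++) {
		printf("retiring transaction %u\n", pending[i]);
		pending[i] = 0;
	}
}

int main(void)
{
	/* One interrupt for transaction 2 retires 1 and 2; 3 stays pending */
	complete_up_to(2);

	return 0;
}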
When a transaction has completed a callback function + * (dependent on the type of endpoint associated with the channel) allows + * cleanup of resources associated with the transaction. + * + * To perform a command (or set of them), a user of the GSI transaction + * interface allocates a transaction, indicating the number of TREs required + * (one per command). If sufficient TREs are available, they are reserved + * for use in the transaction and the allocation succeeds. This way + * exhaustion of the available TREs in a channel ring is detected + * as early as possible. All resources required to complete a transaction + * are allocated at transaction allocation time. + * + * Commands performed as part of a transaction are represented in an array + * of Linux scatterlist structures. This array is allocated with the + * transaction, and its entries are initialized using standard scatterlist + * functions (such as sg_set_buf() or skb_to_sgvec()). + * + * Once a transaction's scatterlist structures have been initialized, the + * transaction is committed. The caller is responsible for mapping buffers + * for DMA if necessary, and this should be done *before* allocating + * the transaction. Between a successful allocation and commit of a + * transaction no errors should occur. + * + * Committing transfers ownership of the entire transaction to the GSI + * transaction core. The GSI transaction code formats the content of + * the scatterlist array into the channel ring buffer and informs the + * hardware that new TREs are available to process. + * + * The last TRE in each transaction is marked to interrupt the AP when the + * GSI hardware has completed it. Because transfers described by TREs are + * performed strictly in order, signaling the completion of just the last + * TRE in the transaction is sufficient to indicate the full transaction + * is complete. + * + * When a transaction is complete, ipa_gsi_trans_complete() is called by the + * GSI code into the IPA layer, allowing it to perform any final cleanup + * required before the transaction is freed. + */ + +/* Hardware values representing a transfer element type */ +enum gsi_tre_type { + GSI_RE_XFER = 0x2, + GSI_RE_IMMD_CMD = 0x3, +}; + +/* An entry in a channel ring */ +struct gsi_tre { + __le64 addr; /* DMA address */ + __le16 len_opcode; /* length in bytes or enum IPA_CMD_* */ + __le16 reserved; + __le32 flags; /* TRE_FLAGS_* */ +}; + +/* gsi_tre->flags mask values (in CPU byte order) */ +#define TRE_FLAGS_CHAIN_FMASK GENMASK(0, 0) +#define TRE_FLAGS_IEOB_FMASK GENMASK(8, 8) +#define TRE_FLAGS_IEOT_FMASK GENMASK(9, 9) +#define TRE_FLAGS_BEI_FMASK GENMASK(10, 10) +#define TRE_FLAGS_TYPE_FMASK GENMASK(23, 16) + +int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count, + u32 max_alloc) +{ + void *virt; + +#ifdef IPA_VALIDATE + if (!size || size % 8) + return -EINVAL; + if (count < max_alloc) + return -EINVAL; + if (!max_alloc) + return -EINVAL; +#endif /* IPA_VALIDATE */ + + /* By allocating a few extra entries in our pool (one less + * than the maximum number that will be requested in a + * single allocation), we can always satisfy requests without + * ever worrying about straddling the end of the pool array. + * If there aren't enough entries starting at the free index, + * we just allocate free entries from the beginning of the pool. 
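A standalone model of the over-allocation trick just described: the array holds count + max_alloc - 1 entries, so a request that would run off the end simply restarts at the beginning and never has to straddle it. Plain calloc() stands in for the kernel allocation; nothing here is taken from the driver beyond the policy itself.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct pool {
	char *base;
	size_t size;		/* bytes per element */
	unsigned int count;	/* elements, including the extras */
	unsigned int max_alloc;	/* largest single request */
	unsigned int free;	/* index of the next free element */
};

static int pool_init(struct pool *p, size_t size, unsigned int count,
		     unsigned int max_alloc)
{
	p->base = calloc(count + max_alloc - 1, size);
	if (!p->base)
		return -1;
	p->size = size;
	p->count = count + max_alloc - 1;
	p->max_alloc = max_alloc;
	p->free = 0;

	return 0;
}

static void *pool_alloc(struct pool *p, unsigned int count)
{
	void *ptr;

	/* Restart at the beginning rather than straddle the end */
	if (count > p->count - p->free)
		p->free = 0;

	ptr = p->base + (size_t)p->free * p->size;
	p->free += count;
	memset(ptr, 0, (size_t)count * p->size);

	return ptr;
}

int main(void)
{
	struct pool p;
	unsigned int i;

	if (pool_init(&p, 64, 16, 4))
		return 1;

	for (i = 0; i < 17; i++)	/* walk most of the way to the end */
		(void)pool_alloc(&p, 1);

	/* Only 2 of the 19 slots remain, so a 4-entry request wraps to offset 0 */
	printf("4-entry request at offset %td\n",
	       (char *)pool_alloc(&p, 4) - p.base);

	free(p.base);

	return 0;
}

Entries are never explicitly handed back; as with the driver's pools, the TRE reservation scheme is what guarantees a recycled slot is no longer in use.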
+ */ + virt = kcalloc(count + max_alloc - 1, size, GFP_KERNEL); + if (!virt) + return -ENOMEM; + + pool->base = virt; + /* If the allocator gave us any extra memory, use it */ + pool->count = ksize(pool->base) / size; + pool->free = 0; + pool->max_alloc = max_alloc; + pool->size = size; + pool->addr = 0; /* Only used for DMA pools */ + + return 0; +} + +void gsi_trans_pool_exit(struct gsi_trans_pool *pool) +{ + kfree(pool->base); + memset(pool, 0, sizeof(*pool)); +} + +/* Allocate the requested number of (zeroed) entries from the pool */ +/* Home-grown DMA pool. This way we can preallocate and use the tre_count + * to guarantee allocations will succeed. Even though we specify max_alloc + * (and it can be more than one), we only allow allocation of a single + * element from a DMA pool. + */ +int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool, + size_t size, u32 count, u32 max_alloc) +{ + size_t total_size; + dma_addr_t addr; + void *virt; + +#ifdef IPA_VALIDATE + if (!size || size % 8) + return -EINVAL; + if (count < max_alloc) + return -EINVAL; + if (!max_alloc) + return -EINVAL; +#endif /* IPA_VALIDATE */ + + /* Don't let allocations cross a power-of-two boundary */ + size = __roundup_pow_of_two(size); + total_size = (count + max_alloc - 1) * size; + + /* The allocator will give us a power-of-2 number of pages. But we + * can't guarantee that, so request it. That way we won't waste any + * memory that would be available beyond the required space. + */ + total_size = get_order(total_size) << PAGE_SHIFT; + + virt = dma_alloc_coherent(dev, total_size, &addr, GFP_KERNEL); + if (!virt) + return -ENOMEM; + + pool->base = virt; + pool->count = total_size / size; + pool->free = 0; + pool->size = size; + pool->max_alloc = max_alloc; + pool->addr = addr; + + return 0; +} + +void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool) +{ + dma_free_coherent(dev, pool->size, pool->base, pool->addr); + memset(pool, 0, sizeof(*pool)); +} + +/* Return the byte offset of the next free entry in the pool */ +static u32 gsi_trans_pool_alloc_common(struct gsi_trans_pool *pool, u32 count) +{ + u32 offset; + + /* assert(count > 0); */ + /* assert(count <= pool->max_alloc); */ + + /* Allocate from beginning if wrap would occur */ + if (count > pool->count - pool->free) + pool->free = 0; + + offset = pool->free * pool->size; + pool->free += count; + memset(pool->base + offset, 0, count * pool->size); + + return offset; +} + +/* Allocate a contiguous block of zeroed entries from a pool */ +void *gsi_trans_pool_alloc(struct gsi_trans_pool *pool, u32 count) +{ + return pool->base + gsi_trans_pool_alloc_common(pool, count); +} + +/* Allocate a single zeroed entry from a DMA pool */ +void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr) +{ + u32 offset = gsi_trans_pool_alloc_common(pool, 1); + + *addr = pool->addr + offset; + + return pool->base + offset; +} + +/* Return the pool element that immediately follows the one given. + * This only works done if elements are allocated one at a time. + */ +void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element) +{ + void *end = pool->base + pool->count * pool->size; + + /* assert(element >= pool->base); */ + /* assert(element < end); */ + /* assert(pool->max_alloc == 1); */ + element += pool->size; + + return element < end ? 
element : pool->base; +} + +/* Map a given ring entry index to the transaction associated with it */ +static void gsi_channel_trans_map(struct gsi_channel *channel, u32 index, + struct gsi_trans *trans) +{ + /* Note: index *must* be used modulo the ring count here */ + channel->trans_info.map[index % channel->tre_ring.count] = trans; +} + +/* Return the transaction mapped to a given ring entry */ +struct gsi_trans * +gsi_channel_trans_mapped(struct gsi_channel *channel, u32 index) +{ + /* Note: index *must* be used modulo the ring count here */ + return channel->trans_info.map[index % channel->tre_ring.count]; +} + +/* Return the oldest completed transaction for a channel (or null) */ +struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel) +{ + return list_first_entry_or_null(&channel->trans_info.complete, + struct gsi_trans, links); +} + +/* Move a transaction from the allocated list to the pending list */ +static void gsi_trans_move_pending(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + + spin_lock_bh(&trans_info->spinlock); + + list_move_tail(&trans->links, &trans_info->pending); + + spin_unlock_bh(&trans_info->spinlock); +} + +/* Move a transaction and all of its predecessors from the pending list + * to the completed list. + */ +void gsi_trans_move_complete(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + struct list_head list; + + spin_lock_bh(&trans_info->spinlock); + + /* Move this transaction and all predecessors to completed list */ + list_cut_position(&list, &trans_info->pending, &trans->links); + list_splice_tail(&list, &trans_info->complete); + + spin_unlock_bh(&trans_info->spinlock); +} + +/* Move a transaction from the completed list to the polled list */ +void gsi_trans_move_polled(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + + spin_lock_bh(&trans_info->spinlock); + + list_move_tail(&trans->links, &trans_info->polled); + + spin_unlock_bh(&trans_info->spinlock); +} + +/* Reserve some number of TREs on a channel. Returns true if successful */ +static bool +gsi_trans_tre_reserve(struct gsi_trans_info *trans_info, u32 tre_count) +{ + int avail = atomic_read(&trans_info->tre_avail); + int new; + + do { + new = avail - (int)tre_count; + if (unlikely(new < 0)) + return false; + } while (!atomic_try_cmpxchg(&trans_info->tre_avail, &avail, new)); + + return true; +} + +/* Release previously-reserved TRE entries to a channel */ +static void +gsi_trans_tre_release(struct gsi_trans_info *trans_info, u32 tre_count) +{ + atomic_add(tre_count, &trans_info->tre_avail); +} + +/* Allocate a GSI transaction on a channel */ +struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id, + u32 tre_count, + enum dma_data_direction direction) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + struct gsi_trans_info *trans_info; + struct gsi_trans *trans; + + /* assert(tre_count <= gsi_channel_trans_tre_max(gsi, channel_id)); */ + + trans_info = &channel->trans_info; + + /* We reserve the TREs now, but consume them at commit time. + * If there aren't enough available, we're done. 
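The reservation above is a lock-free compare-and-swap loop on the count of available TREs, so concurrent allocators either get their TREs or fail cleanly. A userspace equivalent using C11 atomics, with atomic_compare_exchange_weak() standing in for the kernel's atomic_try_cmpxchg() and an arbitrary channel limit:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int tre_avail = 16;	/* illustrative per-channel limit */

static bool tre_reserve(unsigned int tre_count)
{
	int avail = atomic_load(&tre_avail);
	int new;

	do {
		new = avail - (int)tre_count;
		if (new < 0)
			return false;	/* not enough TREs available */
	} while (!atomic_compare_exchange_weak(&tre_avail, &avail, new));

	return true;
}

static void tre_release(unsigned int tre_count)
{
	atomic_fetch_add(&tre_avail, (int)tre_count);
}

int main(void)
{
	printf("reserve 10: %d\n", tre_reserve(10));	/* succeeds, 6 left */
	printf("reserve 10: %d\n", tre_reserve(10));	/* fails, only 6 left */
	tre_release(10);
	printf("reserve 10: %d\n", tre_reserve(10));	/* succeeds again */

	return 0;
}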
+ */ + if (!gsi_trans_tre_reserve(trans_info, tre_count)) + return NULL; + + /* Allocate and initialize non-zero fields in the the transaction */ + trans = gsi_trans_pool_alloc(&trans_info->pool, 1); + trans->gsi = gsi; + trans->channel_id = channel_id; + trans->tre_count = tre_count; + init_completion(&trans->completion); + + /* Allocate the scatterlist and (if requested) info entries. */ + trans->sgl = gsi_trans_pool_alloc(&trans_info->sg_pool, tre_count); + sg_init_marker(trans->sgl, tre_count); + + trans->direction = direction; + + spin_lock_bh(&trans_info->spinlock); + + list_add_tail(&trans->links, &trans_info->alloc); + + spin_unlock_bh(&trans_info->spinlock); + + refcount_set(&trans->refcount, 1); + + return trans; +} + +/* Free a previously-allocated transaction (used only in case of error) */ +void gsi_trans_free(struct gsi_trans *trans) +{ + struct gsi_trans_info *trans_info; + + if (!refcount_dec_and_test(&trans->refcount)) + return; + + trans_info = &trans->gsi->channel[trans->channel_id].trans_info; + + spin_lock_bh(&trans_info->spinlock); + + list_del(&trans->links); + + spin_unlock_bh(&trans_info->spinlock); + + ipa_gsi_trans_release(trans); + + /* Releasing the reserved TREs implicitly frees the sgl[] and + * (if present) info[] arrays, plus the transaction itself. + */ + gsi_trans_tre_release(trans_info, trans->tre_count); +} + +/* Add an immediate command to a transaction */ +void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size, + dma_addr_t addr, enum dma_data_direction direction, + enum ipa_cmd_opcode opcode) +{ + struct ipa_cmd_info *info; + u32 which = trans->used++; + struct scatterlist *sg; + + /* assert(which < trans->tre_count); */ + + /* Set the page information for the buffer. We also need to fill in + * the DMA address for the buffer (something dma_map_sg() normally + * does). + */ + sg = &trans->sgl[which]; + + sg_set_buf(sg, buf, size); + sg_dma_address(sg) = addr; + + info = &trans->info[which]; + info->opcode = opcode; + info->direction = direction; +} + +/* Add a page transfer to a transaction. It will fill the only TRE. */ +int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size, + u32 offset) +{ + struct scatterlist *sg = &trans->sgl[0]; + int ret; + + /* assert(trans->tre_count == 1); */ + /* assert(!trans->used); */ + + sg_set_page(sg, page, size, offset); + ret = dma_map_sg(trans->gsi->dev, sg, 1, trans->direction); + if (!ret) + return -ENOMEM; + + trans->used++; /* Transaction now owns the (DMA mapped) page */ + + return 0; +} + +/* Add an SKB transfer to a transaction. No other TREs will be used. */ +int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb) +{ + struct scatterlist *sg = &trans->sgl[0]; + u32 used; + int ret; + + /* assert(trans->tre_count == 1); */ + /* assert(!trans->used); */ + + /* skb->len will not be 0 (checked early) */ + ret = skb_to_sgvec(skb, sg, 0, skb->len); + if (ret < 0) + return ret; + used = ret; + + ret = dma_map_sg(trans->gsi->dev, sg, used, trans->direction); + if (!ret) + return -ENOMEM; + + trans->used += used; /* Transaction now owns the (DMA mapped) skb */ + + return 0; +} + +/* Compute the length/opcode value to use for a TRE */ +static __le16 gsi_tre_len_opcode(enum ipa_cmd_opcode opcode, u32 len) +{ + return opcode == IPA_CMD_NONE ? 
cpu_to_le16((u16)len) + : cpu_to_le16((u16)opcode); +} + +/* Compute the flags value to use for a given TRE */ +static __le32 gsi_tre_flags(bool last_tre, bool bei, enum ipa_cmd_opcode opcode) +{ + enum gsi_tre_type tre_type; + u32 tre_flags; + + tre_type = opcode == IPA_CMD_NONE ? GSI_RE_XFER : GSI_RE_IMMD_CMD; + tre_flags = u32_encode_bits(tre_type, TRE_FLAGS_TYPE_FMASK); + + /* Last TRE contains interrupt flags */ + if (last_tre) { + /* All transactions end in a transfer completion interrupt */ + tre_flags |= TRE_FLAGS_IEOT_FMASK; + /* Don't interrupt when outbound commands are acknowledged */ + if (bei) + tre_flags |= TRE_FLAGS_BEI_FMASK; + } else { /* All others indicate there's more to come */ + tre_flags |= TRE_FLAGS_CHAIN_FMASK; + } + + return cpu_to_le32(tre_flags); +} + +static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr, + u32 len, bool last_tre, bool bei, + enum ipa_cmd_opcode opcode) +{ + struct gsi_tre tre; + + tre.addr = cpu_to_le64(addr); + tre.len_opcode = gsi_tre_len_opcode(opcode, len); + tre.reserved = 0; + tre.flags = gsi_tre_flags(last_tre, bei, opcode); + + /* ARM64 can write 16 bytes as a unit with a single instruction. + * Doing the assignment this way is an attempt to make that happen. + */ + *dest_tre = tre; +} + +/** + * __gsi_trans_commit() - Common GSI transaction commit code + * @trans: Transaction to commit + * @ring_db: Whether to tell the hardware about these queued transfers + * + * Formats channel ring TRE entries based on the content of the scatterlist. + * Maps a transaction pointer to the last ring entry used for the transaction, + * so it can be recovered when it completes. Moves the transaction to the + * pending list. Finally, updates the channel ring pointer and optionally + * rings the doorbell. + */ +static void __gsi_trans_commit(struct gsi_trans *trans, bool ring_db) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_ring *ring = &channel->tre_ring; + enum ipa_cmd_opcode opcode = IPA_CMD_NONE; + bool bei = channel->toward_ipa; + struct ipa_cmd_info *info; + struct gsi_tre *dest_tre; + struct scatterlist *sg; + u32 byte_count = 0; + u32 avail; + u32 i; + + /* assert(trans->used > 0); */ + + /* Consume the entries. If we cross the end of the ring while + * filling them we'll switch to the beginning to finish. + * If there is no info array we're doing a simple data + * transfer request, whose opcode is IPA_CMD_NONE. + */ + info = trans->info ? 
&trans->info[0] : NULL; + avail = ring->count - ring->index % ring->count; + dest_tre = gsi_ring_virt(ring, ring->index); + for_each_sg(trans->sgl, sg, trans->used, i) { + bool last_tre = i == trans->used - 1; + dma_addr_t addr = sg_dma_address(sg); + u32 len = sg_dma_len(sg); + + byte_count += len; + if (!avail--) + dest_tre = gsi_ring_virt(ring, 0); + if (info) + opcode = info++->opcode; + + gsi_trans_tre_fill(dest_tre, addr, len, last_tre, bei, opcode); + dest_tre++; + } + ring->index += trans->used; + + if (channel->toward_ipa) { + /* We record TX bytes when they are sent */ + trans->len = byte_count; + trans->trans_count = channel->trans_count; + trans->byte_count = channel->byte_count; + channel->trans_count++; + channel->byte_count += byte_count; + } + + /* Associate the last TRE with the transaction */ + gsi_channel_trans_map(channel, ring->index - 1, trans); + + gsi_trans_move_pending(trans); + + /* Ring doorbell if requested, or if all TREs are allocated */ + if (ring_db || !atomic_read(&channel->trans_info.tre_avail)) { + /* Report what we're handing off to hardware for TX channels */ + if (channel->toward_ipa) + gsi_channel_tx_queued(channel); + gsi_channel_doorbell(channel); + } +} + +/* Commit a GSI transaction */ +void gsi_trans_commit(struct gsi_trans *trans, bool ring_db) +{ + if (trans->used) + __gsi_trans_commit(trans, ring_db); + else + gsi_trans_free(trans); +} + +/* Commit a GSI transaction and wait for it to complete */ +void gsi_trans_commit_wait(struct gsi_trans *trans) +{ + if (!trans->used) + goto out_trans_free; + + refcount_inc(&trans->refcount); + + __gsi_trans_commit(trans, true); + + wait_for_completion(&trans->completion); + +out_trans_free: + gsi_trans_free(trans); +} + +/* Commit a GSI transaction and wait for it to complete, with timeout */ +int gsi_trans_commit_wait_timeout(struct gsi_trans *trans, + unsigned long timeout) +{ + unsigned long timeout_jiffies = msecs_to_jiffies(timeout); + unsigned long remaining = 1; /* In case of empty transaction */ + + if (!trans->used) + goto out_trans_free; + + refcount_inc(&trans->refcount); + + __gsi_trans_commit(trans, true); + + remaining = wait_for_completion_timeout(&trans->completion, + timeout_jiffies); +out_trans_free: + gsi_trans_free(trans); + + return remaining ? 
0 : -ETIMEDOUT; +} + +/* Process the completion of a transaction; called while polling */ +void gsi_trans_complete(struct gsi_trans *trans) +{ + /* If the entire SGL was mapped when added, unmap it now */ + if (trans->direction != DMA_NONE) + dma_unmap_sg(trans->gsi->dev, trans->sgl, trans->used, + trans->direction); + + ipa_gsi_trans_complete(trans); + + complete(&trans->completion); + + gsi_trans_free(trans); +} + +/* Cancel a channel's pending transactions */ +void gsi_channel_trans_cancel_pending(struct gsi_channel *channel) +{ + struct gsi_trans_info *trans_info = &channel->trans_info; + struct gsi_trans *trans; + bool cancelled; + + /* channel->gsi->mutex is held by caller */ + spin_lock_bh(&trans_info->spinlock); + + cancelled = !list_empty(&trans_info->pending); + list_for_each_entry(trans, &trans_info->pending, links) + trans->cancelled = true; + + list_splice_tail_init(&trans_info->pending, &trans_info->complete); + + spin_unlock_bh(&trans_info->spinlock); + + /* Schedule NAPI polling to complete the cancelled transactions */ + if (cancelled) + napi_schedule(&channel->napi); +} + +/* Issue a command to read a single byte from a channel */ +int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + struct gsi_ring *ring = &channel->tre_ring; + struct gsi_trans_info *trans_info; + struct gsi_tre *dest_tre; + + trans_info = &channel->trans_info; + + /* First reserve the TRE, if possible */ + if (!gsi_trans_tre_reserve(trans_info, 1)) + return -EBUSY; + + /* Now fill the the reserved TRE and tell the hardware */ + + dest_tre = gsi_ring_virt(ring, ring->index); + gsi_trans_tre_fill(dest_tre, addr, 1, true, false, IPA_CMD_NONE); + + ring->index++; + gsi_channel_doorbell(channel); + + return 0; +} + +/* Mark a gsi_trans_read_byte() request done */ +void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + + gsi_trans_tre_release(&channel->trans_info, 1); +} + +/* Initialize a channel's GSI transaction info */ +int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + struct gsi_trans_info *trans_info; + u32 tre_max; + int ret; + + /* Ensure the size of a channel element is what's expected */ + BUILD_BUG_ON(sizeof(struct gsi_tre) != GSI_RING_ELEMENT_SIZE); + + /* The map array is used to determine what transaction is associated + * with a TRE that the hardware reports has completed. We need one + * map entry per TRE. + */ + trans_info = &channel->trans_info; + trans_info->map = kcalloc(channel->tre_count, sizeof(*trans_info->map), + GFP_KERNEL); + if (!trans_info->map) + return -ENOMEM; + + /* We can't use more TREs than there are available in the ring. + * This limits the number of transactions that can be oustanding. + * Worst case is one TRE per transaction (but we actually limit + * it to something a little less than that). We allocate resources + * for transactions (including transaction structures) based on + * this maximum number. + */ + tre_max = gsi_channel_tre_max(channel->gsi, channel_id); + + /* Transactions are allocated one at a time. */ + ret = gsi_trans_pool_init(&trans_info->pool, sizeof(struct gsi_trans), + tre_max, 1); + if (ret) + goto err_kfree; + + /* A transaction uses a scatterlist array to represent the data + * transfers implemented by the transaction. Each scatterlist + * element is used to fill a single TRE when the transaction is + * committed. 
So we need as many scatterlist elements as the + * maximum number of TREs that can be outstanding. + * + * All TREs in a transaction must fit within the channel's TLV FIFO. + * A transaction on a channel can allocate as many TREs as that but + * no more. + */ + ret = gsi_trans_pool_init(&trans_info->sg_pool, + sizeof(struct scatterlist), + tre_max, channel->tlv_count); + if (ret) + goto err_trans_pool_exit; + + /* Finally, the tre_avail field is what ultimately limits the number + * of outstanding transactions and their resources. A transaction + * allocation succeeds only if the TREs available are sufficient for + * what the transaction might need. Transaction resource pools are + * sized based on the maximum number of outstanding TREs, so there + * will always be resources available if there are TREs available. + */ + atomic_set(&trans_info->tre_avail, tre_max); + + spin_lock_init(&trans_info->spinlock); + INIT_LIST_HEAD(&trans_info->alloc); + INIT_LIST_HEAD(&trans_info->pending); + INIT_LIST_HEAD(&trans_info->complete); + INIT_LIST_HEAD(&trans_info->polled); + + return 0; + +err_trans_pool_exit: + gsi_trans_pool_exit(&trans_info->pool); +err_kfree: + kfree(trans_info->map); + + dev_err(gsi->dev, "error %d initializing channel %u transactions\n", + channel_id); + + return ret; +} + +/* Inverse of gsi_channel_trans_init() */ +void gsi_channel_trans_exit(struct gsi_channel *channel) +{ + struct gsi_trans_info *trans_info = &channel->trans_info; + + gsi_trans_pool_exit(&trans_info->sg_pool); + gsi_trans_pool_exit(&trans_info->pool); + kfree(trans_info->map); +} diff --git a/drivers/net/ipa/gsi_trans.h b/drivers/net/ipa/gsi_trans.h new file mode 100644 index 000000000000..1477fc15b30a --- /dev/null +++ b/drivers/net/ipa/gsi_trans.h @@ -0,0 +1,226 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ +#ifndef _GSI_TRANS_H_ +#define _GSI_TRANS_H_ + +#include +#include +#include +#include + +#include "ipa_cmd.h" + +struct scatterlist; +struct device; +struct sk_buff; + +struct gsi; +struct gsi_trans; +struct gsi_trans_pool; + +/** + * struct gsi_trans - a GSI transaction + * + * Most fields in this structure for internal use by the transaction core code: + * @links: Links for channel transaction lists by state + * @gsi: GSI pointer + * @channel_id: Channel number transaction is associated with + * @cancelled: If set by the core code, transaction was cancelled + * @tre_count: Number of TREs reserved for this transaction + * @used: Number of TREs *used* (could be less than tre_count) + * @len: Total # of transfer bytes represented in sgl[] (set by core) + * @data: Preserved but not touched by the core transaction code + * @sgl: An array of scatter/gather entries managed by core code + * @info: Array of command information structures (command channel) + * @direction: DMA transfer direction (DMA_NONE for commands) + * @refcount: Reference count used for destruction + * @completion: Completed when the transaction completes + * @byte_count: TX channel byte count recorded when transaction committed + * @trans_count: Channel transaction count when committed (for BQL accounting) + * + * The size used for some fields in this structure were chosen to ensure + * the full structure size is no larger than 128 bytes. 
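One way to hold a structure to a budget like the 128 bytes mentioned above is a compile-time assertion, so a field change that grows the structure fails the build. The structure below is a stand-in with made-up fields, not the real gsi_trans layout:

#include <assert.h>
#include <stdint.h>

struct toy_trans {
	uint8_t channel_id;	/* narrow types keep the structure small */
	uint8_t tre_count;
	uint8_t used;
	uint32_t len;
	void *data;
	void *sgl;
	uint64_t byte_count;
	uint64_t trans_count;
};

/* Fails the build if the stand-in structure outgrows the budget */
static_assert(sizeof(struct toy_trans) <= 128, "toy_trans exceeds 128 bytes");

int main(void)
{
	return 0;
}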
+ */ +struct gsi_trans { + struct list_head links; /* gsi_channel lists */ + + struct gsi *gsi; + u8 channel_id; + + bool cancelled; /* true if transaction was cancelled */ + + u8 tre_count; /* # TREs requested */ + u8 used; /* # entries used in sgl[] */ + u32 len; /* total # bytes across sgl[] */ + + void *data; + struct scatterlist *sgl; + struct ipa_cmd_info *info; /* array of entries, or null */ + enum dma_data_direction direction; + + refcount_t refcount; + struct completion completion; + + u64 byte_count; /* channel byte_count when committed */ + u64 trans_count; /* channel trans_count when committed */ +}; + +/** + * gsi_trans_pool_init() - Initialize a pool of structures for transactions + * @gsi: GSI pointer + * @size: Size of elements in the pool + * @count: Minimum number of elements in the pool + * @max_alloc: Maximum number of elements allocated at a time from pool + * + * @Return: 0 if successful, or a negative error code + */ +int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count, + u32 max_alloc); + +/** + * gsi_trans_pool_alloc() - Allocate one or more elements from a pool + * @pool: Pool pointer + * @count: Number of elements to allocate from the pool + * + * @Return: Virtual address of element(s) allocated from the pool + */ +void *gsi_trans_pool_alloc(struct gsi_trans_pool *pool, u32 count); + +/** + * gsi_trans_pool_exit() - Inverse of gsi_trans_pool_init() + * @pool: Pool pointer + */ +void gsi_trans_pool_exit(struct gsi_trans_pool *pool); + +/** + * gsi_trans_pool_init_dma() - Initialize a pool of DMA-able structures + * @dev: Device used for DMA + * @pool: Pool pointer + * @size: Size of elements in the pool + * @count: Minimum number of elements in the pool + * @max_alloc: Maximum number of elements allocated at a time from pool + * + * @Return: 0 if successful, or a negative error code + * + * Structures in this pool reside in DMA-coherent memory. + */ +int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool, + size_t size, u32 count, u32 max_alloc); + +/** + * gsi_trans_pool_alloc_dma() - Allocate an element from a DMA pool + * @pool: DMA pool pointer + * @addr: DMA address "handle" associated with the allocation + * + * @Return: Virtual address of element allocated from the pool + * + * Only one element at a time may be allocated from a DMA pool. 
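A sketch of the single-element DMA pool idea under simplifying assumptions: the element size is rounded up to a power of two so no entry crosses one, the total is rounded up to a whole power-of-two number of pages, and each allocation returns both the CPU pointer and the matching bus address. Plain calloc() stands in for dma_alloc_coherent(), and the base bus address is made up.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

static size_t roundup_pow_of_two(size_t n)
{
	size_t r = 1;

	while (r < n)
		r <<= 1;

	return r;
}

struct dma_pool {
	char *base;		/* CPU address of the coherent area */
	uint64_t addr;		/* bus address of the same area */
	size_t size;		/* element size, a power of two */
	unsigned int count;
	unsigned int free;
};

static int dma_pool_init(struct dma_pool *p, size_t size, unsigned int count)
{
	size_t pages, total;

	p->size = roundup_pow_of_two(size);
	pages = ((size_t)count * p->size + PAGE_SIZE - 1) / PAGE_SIZE;
	total = roundup_pow_of_two(pages) * PAGE_SIZE;

	p->base = calloc(1, total);	/* stand-in for dma_alloc_coherent() */
	if (!p->base)
		return -1;
	p->addr = 0x80000000ULL;	/* made-up bus address */
	p->count = (unsigned int)(total / p->size);	/* use any extra space */
	p->free = 0;

	return 0;
}

/* One element per call; returns the CPU pointer and fills in the bus address */
static void *dma_pool_alloc_one(struct dma_pool *p, uint64_t *addr)
{
	size_t offset = (size_t)(p->free++ % p->count) * p->size;

	*addr = p->addr + offset;

	return p->base + offset;
}

int main(void)
{
	struct dma_pool p;
	uint64_t addr;
	void *virt;

	if (dma_pool_init(&p, 24, 100))
		return 1;

	virt = dma_pool_alloc_one(&p, &addr);
	printf("virt %p, bus 0x%llx\n", virt, (unsigned long long)addr);

	free(p.base);

	return 0;
}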
+ */ +void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr); + +/** + * gsi_trans_pool_exit() - Inverse of gsi_trans_pool_init() + * @pool: Pool pointer + */ +void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool); + +/** + * gsi_channel_trans_alloc() - Allocate a GSI transaction on a channel + * @gsi: GSI pointer + * @channel_id: Channel the transaction is associated with + * @tre_count: Number of elements in the transaction + * @direction: DMA direction for entire SGL (or DMA_NONE) + * + * @Return: A GSI transaction structure, or a null pointer if all + * available transactions are in use + */ +struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id, + u32 tre_count, + enum dma_data_direction direction); + +/** + * gsi_trans_free() - Free a previously-allocated GSI transaction + * @trans: Transaction to be freed + */ +void gsi_trans_free(struct gsi_trans *trans); + +/** + * gsi_trans_cmd_add() - Add an immediate command to a transaction + * @trans: Transaction + * @buf: Buffer pointer for command payload + * @size: Number of bytes in buffer + * @addr: DMA address for payload + * @direction: Direction of DMA transfer (or DMA_NONE if none required) + * @opcode: IPA immediate command opcode + */ +void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size, + dma_addr_t addr, enum dma_data_direction direction, + enum ipa_cmd_opcode opcode); + +/** + * gsi_trans_page_add() - Add a page transfer to a transaction + * @trans: Transaction + * @page: Page pointer + * @size: Number of bytes (starting at offset) to transfer + * @offset: Offset within page for start of transfer + */ +int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size, + u32 offset); + +/** + * gsi_trans_skb_add() - Add a socket transfer to a transaction + * @trans: Transaction + * @skb: Socket buffer for transfer (outbound) + * + * @Return: 0, or -EMSGSIZE if socket data won't fit in transaction. + */ +int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb); + +/** + * gsi_trans_commit() - Commit a GSI transaction + * @trans: Transaction to commit + * @ring_db: Whether to tell the hardware about these queued transfers + */ +void gsi_trans_commit(struct gsi_trans *trans, bool ring_db); + +/** + * gsi_trans_commit_wait() - Commit a GSI transaction and wait for it + * to complete + * @trans: Transaction to commit + */ +void gsi_trans_commit_wait(struct gsi_trans *trans); + +/** + * gsi_trans_commit_wait_timeout() - Commit a GSI transaction and wait for + * it to complete, with timeout + * @trans: Transaction to commit + * @timeout: Timeout period (in milliseconds) + */ +int gsi_trans_commit_wait_timeout(struct gsi_trans *trans, + unsigned long timeout); + +/** + * gsi_trans_read_byte() - Issue a single byte read TRE on a channel + * @gsi: GSI pointer + * @channel_id: Channel on which to read a byte + * @addr: DMA address into which to transfer the one byte + * + * This is not a transaction operation at all. It's defined here because + * it needs to be done in coordination with other transaction activity. + */ +int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr); + +/** + * gsi_trans_read_byte_done() - Clean up after a single byte read TRE + * @gsi: GSI pointer + * @channel_id: Channel on which byte was read + * + * This function needs to be called to signal that the work related + * to reading a byte initiated by gsi_trans_read_byte() is complete. 
+ */ +void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id); + +#endif /* _GSI_TRANS_H_ */ From patchwork Fri Feb 28 22:41:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 190181 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A167DC3F2CD for ; Fri, 28 Feb 2020 22:42:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 4047F2469F for ; Fri, 28 Feb 2020 22:42:36 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="sjTeUbP9" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727174AbgB1Wme (ORCPT ); Fri, 28 Feb 2020 17:42:34 -0500 Received: from mail-yw1-f54.google.com ([209.85.161.54]:37993 "EHLO mail-yw1-f54.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727144AbgB1Wmd (ORCPT ); Fri, 28 Feb 2020 17:42:33 -0500 Received: by mail-yw1-f54.google.com with SMTP id 10so4946580ywv.5 for ; Fri, 28 Feb 2020 14:42:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=/vLFgITHO46tQ4Pw4XuNCTfPDwjorL+Dk9e9duKCHA0=; b=sjTeUbP9rz4S4vk6Xv4/qrmO3vvGJZ0C6GNSb4FMn83dzTtIGXWL+DZxVGFSjTaVjz zRKbB9nMjTp0H/fi9S3EZXtPwUDiNIEsTxREQpLUrNyCGiaZu0zN0GZjLYbd533mLAm1 0MCd014kvUf73v8O1ERSCT3/Z26/tIPkcaOj0L1uBub9x02wssn2N7KxlsQiJ+zQ0jry tVpuaudzkVN2YB2XO73fJZsrEzX+pMghLz/kJX6VE5p+rKR5g2XqXzP0gDhqfDyn0ifK B2MlkoT9zwzsj2IPoCKrjn9uZhIoTzd+b3dyuF0+VlhX25PMWX0fFgTbk2EV92tpXzN1 qwIg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=/vLFgITHO46tQ4Pw4XuNCTfPDwjorL+Dk9e9duKCHA0=; b=VDUsbVg7WKiO2VL99jCaiFMhogZr45rk3xH0paVjHP16Ij7UeEq9pAhq8weAihB8QW OtWh7ueq4ZzaTE26PCq+NSooVGmpeWfFpq9crhZid5HkpYFIYIaihElFI7QnCERc0qK5 qG9Fs6wg5daluvnyw3XHmcaVLxn/JXCpeWFd4czO0+AclsGzY310vn6GBQt3R1PtKBuR 7npQRrQYpajEa26nKGxs70qpl5yHlPEBdBPItbHqYzb0p2EN8twPkahkbJ2fYcFaHXbP e+SCtQ0mkniY1zcWHBYEg+XYv1RNuuY+ohWCi9NBfr9UXkje1FLgDP1U5QPH49CxNwRk 2uxQ== X-Gm-Message-State: APjAAAUAb3pj5T2yFBa9HKIXLZlnmOUN6ZYPmQHGhoWbLmJPnl0nZVD2 TNHFEVtKcwCoxXorP8ShDYHAnA== X-Google-Smtp-Source: APXvYqy5lc/5/HJUEEM/eaTT6Zl5gyZ2GD67R2KtFeK3qa3T7ev0VtSImVNBvuRXmKhLt5dJS9qVVQ== X-Received: by 2002:a25:4e08:: with SMTP id c8mr5733226ybb.329.1582929751255; Fri, 28 Feb 2020 14:42:31 -0800 (PST) Received: from localhost.localdomain (c-73-185-129-58.hsd1.mn.comcast.net. 
[73.185.129.58]) by smtp.gmail.com with ESMTPSA id d188sm4637830ywe.50.2020.02.28.14.42.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 28 Feb 2020 14:42:30 -0800 (PST) From: Alex Elder To: Arnd Bergmann , David Miller Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 10/17] soc: qcom: ipa: IPA endpoints Date: Fri, 28 Feb 2020 16:41:57 -0600 Message-Id: <20200228224204.17746-11-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200228224204.17746-1-elder@linaro.org> References: <20200228224204.17746-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This patch includes the code implementing an IPA endpoint. This is the primary abstraction implemented by the IPA. An endpoint is one end of a network connection between two entities physically connected to the IPA. Specifically, the AP and the modem implement endpoints, and an (AP endpoint, modem endpoint) pair implements the transfer of network data in one direction between the AP and modem. Endpoints are built on top of GSI channels, but IPA endpoints represent the higher-level functionality that the IPA provides. Data can be sent through a GSI channel, but it is the IPA endpoint that represents what is on the "other end" to receive that data. Other functionality, including aggregation, checksum offload and (at some future date) IP routing and filtering are all associated with the IPA endpoint. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_endpoint.c | 1706 ++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_endpoint.h | 110 ++ 2 files changed, 1816 insertions(+) create mode 100644 drivers/net/ipa/ipa_endpoint.c create mode 100644 drivers/net/ipa/ipa_endpoint.h diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c new file mode 100644 index 000000000000..1ec5b48da6c8 --- /dev/null +++ b/drivers/net/ipa/ipa_endpoint.c @@ -0,0 +1,1706 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. 
+ */ + +#include +#include +#include +#include +#include +#include +#include + +#include "gsi.h" +#include "gsi_trans.h" +#include "ipa.h" +#include "ipa_data.h" +#include "ipa_endpoint.h" +#include "ipa_cmd.h" +#include "ipa_mem.h" +#include "ipa_modem.h" +#include "ipa_table.h" +#include "ipa_gsi.h" + +#define atomic_dec_not_zero(v) atomic_add_unless((v), -1, 0) + +#define IPA_REPLENISH_BATCH 16 + +#define IPA_RX_BUFFER_SIZE (PAGE_SIZE << IPA_RX_BUFFER_ORDER) +#define IPA_RX_BUFFER_ORDER 1 /* 8KB endpoint RX buffers (2 pages) */ + +/* The amount of RX buffer space consumed by standard skb overhead */ +#define IPA_RX_BUFFER_OVERHEAD (PAGE_SIZE - SKB_MAX_ORDER(NET_SKB_PAD, 0)) + +#define IPA_ENDPOINT_STOP_RETRY_MAX 10 +#define IPA_ENDPOINT_STOP_RX_SIZE 1 /* bytes */ + +#define IPA_ENDPOINT_RESET_AGGR_RETRY_MAX 3 +#define IPA_AGGR_TIME_LIMIT_DEFAULT 1000 /* microseconds */ + +#define ENDPOINT_STOP_DMA_TIMEOUT 15 /* milliseconds */ + +/** enum ipa_status_opcode - status element opcode hardware values */ +enum ipa_status_opcode { + IPA_STATUS_OPCODE_PACKET = 0x01, + IPA_STATUS_OPCODE_NEW_FRAG_RULE = 0x02, + IPA_STATUS_OPCODE_DROPPED_PACKET = 0x04, + IPA_STATUS_OPCODE_SUSPENDED_PACKET = 0x08, + IPA_STATUS_OPCODE_LOG = 0x10, + IPA_STATUS_OPCODE_DCMP = 0x20, + IPA_STATUS_OPCODE_PACKET_2ND_PASS = 0x40, +}; + +/** enum ipa_status_exception - status element exception type */ +enum ipa_status_exception { + /* 0 means no exception */ + IPA_STATUS_EXCEPTION_DEAGGR = 0x01, + IPA_STATUS_EXCEPTION_IPTYPE = 0x04, + IPA_STATUS_EXCEPTION_PACKET_LENGTH = 0x08, + IPA_STATUS_EXCEPTION_FRAG_RULE_MISS = 0x10, + IPA_STATUS_EXCEPTION_SW_FILT = 0x20, + /* The meaning of the next value depends on whether the IP version */ + IPA_STATUS_EXCEPTION_NAT = 0x40, /* IPv4 */ + IPA_STATUS_EXCEPTION_IPV6CT = IPA_STATUS_EXCEPTION_NAT, +}; + +/* Status element provided by hardware */ +struct ipa_status { + u8 opcode; /* enum ipa_status_opcode */ + u8 exception; /* enum ipa_status_exception */ + __le16 mask; + __le16 pkt_len; + u8 endp_src_idx; + u8 endp_dst_idx; + __le32 metadata; + __le32 flags1; + __le64 flags2; + __le32 flags3; + __le32 flags4; +}; + +/* Field masks for struct ipa_status structure fields */ + +#define IPA_STATUS_SRC_IDX_FMASK GENMASK(4, 0) + +#define IPA_STATUS_DST_IDX_FMASK GENMASK(4, 0) + +#define IPA_STATUS_FLAGS1_FLT_LOCAL_FMASK GENMASK(0, 0) +#define IPA_STATUS_FLAGS1_FLT_HASH_FMASK GENMASK(1, 1) +#define IPA_STATUS_FLAGS1_FLT_GLOBAL_FMASK GENMASK(2, 2) +#define IPA_STATUS_FLAGS1_FLT_RET_HDR_FMASK GENMASK(3, 3) +#define IPA_STATUS_FLAGS1_FLT_RULE_ID_FMASK GENMASK(13, 4) +#define IPA_STATUS_FLAGS1_RT_LOCAL_FMASK GENMASK(14, 14) +#define IPA_STATUS_FLAGS1_RT_HASH_FMASK GENMASK(15, 15) +#define IPA_STATUS_FLAGS1_UCP_FMASK GENMASK(16, 16) +#define IPA_STATUS_FLAGS1_RT_TBL_IDX_FMASK GENMASK(21, 17) +#define IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK GENMASK(31, 22) + +#define IPA_STATUS_FLAGS2_NAT_HIT_FMASK GENMASK_ULL(0, 0) +#define IPA_STATUS_FLAGS2_NAT_ENTRY_IDX_FMASK GENMASK_ULL(13, 1) +#define IPA_STATUS_FLAGS2_NAT_TYPE_FMASK GENMASK_ULL(15, 14) +#define IPA_STATUS_FLAGS2_TAG_INFO_FMASK GENMASK_ULL(63, 16) + +#define IPA_STATUS_FLAGS3_SEQ_NUM_FMASK GENMASK(7, 0) +#define IPA_STATUS_FLAGS3_TOD_CTR_FMASK GENMASK(31, 8) + +#define IPA_STATUS_FLAGS4_HDR_LOCAL_FMASK GENMASK(0, 0) +#define IPA_STATUS_FLAGS4_HDR_OFFSET_FMASK GENMASK(10, 1) +#define IPA_STATUS_FLAGS4_FRAG_HIT_FMASK GENMASK(11, 11) +#define IPA_STATUS_FLAGS4_FRAG_RULE_FMASK GENMASK(15, 12) +#define IPA_STATUS_FLAGS4_HW_SPECIFIC_FMASK GENMASK(31, 16) + 
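The status element and the field masks above are decoded with the kernel's bitfield helpers, exactly as the receive path later in this file does with u32_get_bits() and le32_get_bits(). The following is a minimal sketch of that decoding; ipa_status_decode_example() is a hypothetical helper shown only for illustration (it assumes <linux/bitfield.h> and the printk helpers are available, which this driver already relies on):

static void ipa_status_decode_example(const struct ipa_status *status)
{
	u32 dst_endpoint_id;
	u32 rt_rule_id;
	u16 pkt_len;

	/* Single-byte fields are decoded from the raw value */
	dst_endpoint_id = u32_get_bits(status->endp_dst_idx,
				       IPA_STATUS_DST_IDX_FMASK);

	/* Little-endian fields use the le32/le16 helper variants */
	rt_rule_id = le32_get_bits(status->flags1,
				   IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK);
	pkt_len = le16_to_cpu(status->pkt_len);

	pr_debug("status: dst endpoint %u, rt rule %u, %u byte packet\n",
		 dst_endpoint_id, rt_rule_id, pkt_len);
}

A routing rule ID equal to field_max(IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK) is how ipa_status_drop_packet() below recognizes a packet that missed every routing rule.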
+#ifdef IPA_VALIDATE + +static void ipa_endpoint_validate_build(void) +{ + /* The aggregation byte limit defines the point at which an + * aggregation window will close. It is programmed into the + * IPA hardware as a number of KB. We don't use "hard byte + * limit" aggregation, which means that we need to supply + * enough space in a receive buffer to hold a complete MTU + * plus normal skb overhead *after* that aggregation byte + * limit has been crossed. + * + * This check just ensures we don't define a receive buffer + * size that would exceed what we can represent in the field + * that is used to program its size. + */ + BUILD_BUG_ON(IPA_RX_BUFFER_SIZE > + field_max(AGGR_BYTE_LIMIT_FMASK) * SZ_1K + + IPA_MTU + IPA_RX_BUFFER_OVERHEAD); + + /* I honestly don't know where this requirement comes from. But + * it holds, and if we someday need to loosen the constraint we + * can try to track it down. + */ + BUILD_BUG_ON(sizeof(struct ipa_status) % 4); +} + +static bool ipa_endpoint_data_valid_one(struct ipa *ipa, u32 count, + const struct ipa_gsi_endpoint_data *all_data, + const struct ipa_gsi_endpoint_data *data) +{ + const struct ipa_gsi_endpoint_data *other_data; + struct device *dev = &ipa->pdev->dev; + enum ipa_endpoint_name other_name; + + if (ipa_gsi_endpoint_data_empty(data)) + return true; + + if (!data->toward_ipa) { + if (data->endpoint.filter_support) { + dev_err(dev, "filtering not supported for " + "RX endpoint %u\n", + data->endpoint_id); + return false; + } + + return true; /* Nothing more to check for RX */ + } + + if (data->endpoint.config.status_enable) { + other_name = data->endpoint.config.tx.status_endpoint; + if (other_name >= count) { + dev_err(dev, "status endpoint name %u out of range " + "for endpoint %u\n", + other_name, data->endpoint_id); + return false; + } + + /* Status endpoint must be defined... */ + other_data = &all_data[other_name]; + if (ipa_gsi_endpoint_data_empty(other_data)) { + dev_err(dev, "DMA endpoint name %u undefined " + "for endpoint %u\n", + other_name, data->endpoint_id); + return false; + } + + /* ...and has to be an RX endpoint... */ + if (other_data->toward_ipa) { + dev_err(dev, + "status endpoint for endpoint %u not RX\n", + data->endpoint_id); + return false; + } + + /* ...and if it's to be an AP endpoint... */ + if (other_data->ee_id == GSI_EE_AP) { + /* ...make sure it has status enabled. 
*/ + if (!other_data->endpoint.config.status_enable) { + dev_err(dev, + "status not enabled for endpoint %u\n", + other_data->endpoint_id); + return false; + } + } + } + + if (data->endpoint.config.dma_mode) { + other_name = data->endpoint.config.dma_endpoint; + if (other_name >= count) { + dev_err(dev, "DMA endpoint name %u out of range " + "for endpoint %u\n", + other_name, data->endpoint_id); + return false; + } + + other_data = &all_data[other_name]; + if (ipa_gsi_endpoint_data_empty(other_data)) { + dev_err(dev, "DMA endpoint name %u undefined " + "for endpoint %u\n", + other_name, data->endpoint_id); + return false; + } + } + + return true; +} + +static bool ipa_endpoint_data_valid(struct ipa *ipa, u32 count, + const struct ipa_gsi_endpoint_data *data) +{ + const struct ipa_gsi_endpoint_data *dp = data; + struct device *dev = &ipa->pdev->dev; + enum ipa_endpoint_name name; + + ipa_endpoint_validate_build(); + + if (count > IPA_ENDPOINT_COUNT) { + dev_err(dev, "too many endpoints specified (%u > %u)\n", + count, IPA_ENDPOINT_COUNT); + return false; + } + + /* Make sure needed endpoints have defined data */ + if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_COMMAND_TX])) { + dev_err(dev, "command TX endpoint not defined\n"); + return false; + } + if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_LAN_RX])) { + dev_err(dev, "LAN RX endpoint not defined\n"); + return false; + } + if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_MODEM_TX])) { + dev_err(dev, "AP->modem TX endpoint not defined\n"); + return false; + } + if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_MODEM_RX])) { + dev_err(dev, "AP<-modem RX endpoint not defined\n"); + return false; + } + + for (name = 0; name < count; name++, dp++) + if (!ipa_endpoint_data_valid_one(ipa, count, data, dp)) + return false; + + return true; +} + +#else /* !IPA_VALIDATE */ + +static bool ipa_endpoint_data_valid(struct ipa *ipa, u32 count, + const struct ipa_gsi_endpoint_data *data) +{ + return true; +} + +#endif /* !IPA_VALIDATE */ + +/* Allocate a transaction to use on a non-command endpoint */ +static struct gsi_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint, + u32 tre_count) +{ + struct gsi *gsi = &endpoint->ipa->gsi; + u32 channel_id = endpoint->channel_id; + enum dma_data_direction direction; + + direction = endpoint->toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE; + + return gsi_channel_trans_alloc(gsi, channel_id, tre_count, direction); +} + +/* suspend_delay represents suspend for RX, delay for TX endpoints. + * Note that suspend is not supported starting with IPA v4.0. + */ +static int +ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay) +{ + u32 offset = IPA_REG_ENDP_INIT_CTRL_N_OFFSET(endpoint->endpoint_id); + struct ipa *ipa = endpoint->ipa; + u32 mask; + u32 val; + + /* assert(ipa->version == IPA_VERSION_3_5_1 */ + mask = endpoint->toward_ipa ? 
ENDP_DELAY_FMASK : ENDP_SUSPEND_FMASK; + + val = ioread32(ipa->reg_virt + offset); + if (suspend_delay == !!(val & mask)) + return -EALREADY; /* Already set to desired state */ + + val ^= mask; + iowrite32(val, ipa->reg_virt + offset); + + return 0; +} + +/* Enable or disable delay or suspend mode on all modem endpoints */ +void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable) +{ + bool support_suspend; + u32 endpoint_id; + + /* DELAY mode doesn't work right on IPA v4.2 */ + if (ipa->version == IPA_VERSION_4_2) + return; + + /* Only IPA v3.5.1 supports SUSPEND mode on RX endpoints */ + support_suspend = ipa->version == IPA_VERSION_3_5_1; + + for (endpoint_id = 0; endpoint_id < IPA_ENDPOINT_MAX; endpoint_id++) { + struct ipa_endpoint *endpoint = &ipa->endpoint[endpoint_id]; + + if (endpoint->ee_id != GSI_EE_MODEM) + continue; + + /* Set TX delay mode, or for IPA v3.5.1 RX suspend mode */ + if (endpoint->toward_ipa || support_suspend) + (void)ipa_endpoint_init_ctrl(endpoint, enable); + } +} + +/* Reset all modem endpoints to use the default exception endpoint */ +int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa) +{ + u32 initialized = ipa->initialized; + struct gsi_trans *trans; + u32 count; + + /* We need one command per modem TX endpoint. We can get an upper + * bound on that by assuming all initialized endpoints are modem->IPA. + * That won't happen, and we could be more precise, but this is fine + * for now. We need to end the transactio with a "tag process." + */ + count = hweight32(initialized) + ipa_cmd_tag_process_count(); + trans = ipa_cmd_trans_alloc(ipa, count); + if (!trans) { + dev_err(&ipa->pdev->dev, + "no transaction to reset modem exception endpoints\n"); + return -EBUSY; + } + + while (initialized) { + u32 endpoint_id = __ffs(initialized); + struct ipa_endpoint *endpoint; + u32 offset; + + initialized ^= BIT(endpoint_id); + + /* We only reset modem TX endpoints */ + endpoint = &ipa->endpoint[endpoint_id]; + if (!(endpoint->ee_id == GSI_EE_MODEM && endpoint->toward_ipa)) + continue; + + offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id); + + /* Value written is 0, and all bits are updated. That + * means status is disabled on the endpoint, and as a + * result all other fields in the register are ignored. 
+ */ + ipa_cmd_register_write_add(trans, offset, 0, ~0, false); + } + + ipa_cmd_tag_process_add(trans); + + /* XXX This should have a 1 second timeout */ + gsi_trans_commit_wait(trans); + + return 0; +} + +static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_CFG_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + /* FRAG_OFFLOAD_EN is 0 */ + if (endpoint->data->checksum) { + if (endpoint->toward_ipa) { + u32 checksum_offset; + + val |= u32_encode_bits(IPA_CS_OFFLOAD_UL, + CS_OFFLOAD_EN_FMASK); + /* Checksum header offset is in 4-byte units */ + checksum_offset = sizeof(struct rmnet_map_header); + checksum_offset /= sizeof(u32); + val |= u32_encode_bits(checksum_offset, + CS_METADATA_HDR_OFFSET_FMASK); + } else { + val |= u32_encode_bits(IPA_CS_OFFLOAD_DL, + CS_OFFLOAD_EN_FMASK); + } + } else { + val |= u32_encode_bits(IPA_CS_OFFLOAD_NONE, + CS_OFFLOAD_EN_FMASK); + } + /* CS_GEN_QMB_MASTER_SEL is 0 */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_HDR_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + if (endpoint->data->qmap) { + size_t header_size = sizeof(struct rmnet_map_header); + + if (endpoint->toward_ipa && endpoint->data->checksum) + header_size += sizeof(struct rmnet_map_ul_csum_header); + + val |= u32_encode_bits(header_size, HDR_LEN_FMASK); + /* metadata is the 4 byte rmnet_map header itself */ + val |= HDR_OFST_METADATA_VALID_FMASK; + val |= u32_encode_bits(0, HDR_OFST_METADATA_FMASK); + /* HDR_ADDITIONAL_CONST_LEN is 0; (IPA->AP only) */ + if (!endpoint->toward_ipa) { + u32 size_offset = offsetof(struct rmnet_map_header, + pkt_len); + + val |= HDR_OFST_PKT_SIZE_VALID_FMASK; + val |= u32_encode_bits(size_offset, + HDR_OFST_PKT_SIZE_FMASK); + } + /* HDR_A5_MUX is 0 */ + /* HDR_LEN_INC_DEAGG_HDR is 0 */ + /* HDR_METADATA_REG_VALID is 0; (AP->IPA only) */ + } + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(endpoint->endpoint_id); + u32 pad_align = endpoint->data->rx.pad_align; + u32 val = 0; + + val |= HDR_ENDIANNESS_FMASK; /* big endian */ + val |= HDR_TOTAL_LEN_OR_PAD_VALID_FMASK; + /* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */ + /* HDR_PAYLOAD_LEN_INC_PADDING is 0 */ + /* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */ + if (!endpoint->toward_ipa) + val |= u32_encode_bits(pad_align, HDR_PAD_TO_ALIGNMENT_FMASK); + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +/** + * Generate a metadata mask value that will select only the mux_id + * field in an rmnet_map header structure. The mux_id is at offset + * 1 byte from the beginning of the structure, but the metadata + * value is treated as a 4-byte unit. So this mask must be computed + * with endianness in mind. Note that ipa_endpoint_init_hdr_metadata_mask() + * will convert this value to the proper byte order. + * + * Marked __always_inline because this is really computing a + * constant value. 
+ */ +static __always_inline __be32 ipa_rmnet_mux_id_metadata_mask(void) +{ + size_t mux_id_offset = offsetof(struct rmnet_map_header, mux_id); + u32 mux_id_mask = 0; + u8 *bytes; + + bytes = (u8 *)&mux_id_mask; + bytes[mux_id_offset] = 0xff; /* mux_id is 1 byte */ + + return cpu_to_be32(mux_id_mask); +} + +static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint) +{ + u32 endpoint_id = endpoint->endpoint_id; + u32 val = 0; + u32 offset; + + offset = IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(endpoint_id); + + if (!endpoint->toward_ipa && endpoint->data->qmap) + val = ipa_rmnet_mux_id_metadata_mask(); + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_MODE_N_OFFSET(endpoint->endpoint_id); + u32 val; + + if (endpoint->toward_ipa && endpoint->data->dma_mode) { + enum ipa_endpoint_name name = endpoint->data->dma_endpoint; + u32 dma_endpoint_id; + + dma_endpoint_id = endpoint->ipa->name_map[name]->endpoint_id; + + val = u32_encode_bits(IPA_DMA, MODE_FMASK); + val |= u32_encode_bits(dma_endpoint_id, DEST_PIPE_INDEX_FMASK); + } else { + val = u32_encode_bits(IPA_BASIC, MODE_FMASK); + } + /* Other bitfields unspecified (and 0) */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +/* Compute the aggregation size value to use for a given buffer size */ +static u32 ipa_aggr_size_kb(u32 rx_buffer_size) +{ + /* We don't use "hard byte limit" aggregation, so we define the + * aggregation limit such that our buffer has enough space *after* + * that limit to receive a full MTU of data, plus overhead. + */ + rx_buffer_size -= IPA_MTU + IPA_RX_BUFFER_OVERHEAD; + + return rx_buffer_size / SZ_1K; +} + +static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_AGGR_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + if (endpoint->data->aggregation) { + if (!endpoint->toward_ipa) { + u32 aggr_size = ipa_aggr_size_kb(IPA_RX_BUFFER_SIZE); + u32 limit; + + val |= u32_encode_bits(IPA_ENABLE_AGGR, AGGR_EN_FMASK); + val |= u32_encode_bits(IPA_GENERIC, AGGR_TYPE_FMASK); + val |= u32_encode_bits(aggr_size, + AGGR_BYTE_LIMIT_FMASK); + limit = IPA_AGGR_TIME_LIMIT_DEFAULT; + val |= u32_encode_bits(limit / IPA_AGGR_GRANULARITY, + AGGR_TIME_LIMIT_FMASK); + val |= u32_encode_bits(0, AGGR_PKT_LIMIT_FMASK); + if (endpoint->data->rx.aggr_close_eof) + val |= AGGR_SW_EOF_ACTIVE_FMASK; + /* AGGR_HARD_BYTE_LIMIT_ENABLE is 0 */ + } else { + val |= u32_encode_bits(IPA_ENABLE_DEAGGR, + AGGR_EN_FMASK); + val |= u32_encode_bits(IPA_QCMAP, AGGR_TYPE_FMASK); + /* other fields ignored */ + } + /* AGGR_FORCE_CLOSE is 0 */ + } else { + val |= u32_encode_bits(IPA_BYPASS_AGGR, AGGR_EN_FMASK); + /* other fields ignored */ + } + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +/* A return value of 0 indicates an error */ +static u32 ipa_reg_init_hol_block_timer_val(struct ipa *ipa, u32 microseconds) +{ + u32 scale; + u32 base; + u32 val; + + if (!microseconds) + return 0; /* invalid delay */ + + /* Timer is represented in units of clock ticks. 
*/ + if (ipa->version < IPA_VERSION_4_2) + return microseconds; /* XXX Needs to be computed */ + + /* IPA v4.2 represents the tick count as base * scale */ + scale = 1; /* XXX Needs to be computed */ + if (scale > field_max(SCALE_FMASK)) + return 0; /* scale too big */ + + base = DIV_ROUND_CLOSEST(microseconds, scale); + if (base > field_max(BASE_VALUE_FMASK)) + return 0; /* microseconds too big */ + + val = u32_encode_bits(scale, SCALE_FMASK); + val |= u32_encode_bits(base, BASE_VALUE_FMASK); + + return val; +} + +static int ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint, + u32 microseconds) +{ + u32 endpoint_id = endpoint->endpoint_id; + struct ipa *ipa = endpoint->ipa; + u32 offset; + u32 val; + + /* XXX We'll fix this when the register definition is clear */ + if (microseconds) { + struct device *dev = &ipa->pdev->dev; + + dev_err(dev, "endpoint %u non-zero HOLB period (ignoring)\n", + endpoint_id); + microseconds = 0; + } + + if (microseconds) { + val = ipa_reg_init_hol_block_timer_val(ipa, microseconds); + if (!val) + return -EINVAL; + } else { + val = 0; /* timeout is immediate */ + } + offset = IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(endpoint_id); + iowrite32(val, ipa->reg_virt + offset); + + return 0; +} + +static void +ipa_endpoint_init_hol_block_enable(struct ipa_endpoint *endpoint, bool enable) +{ + u32 endpoint_id = endpoint->endpoint_id; + u32 offset; + u32 val; + + val = u32_encode_bits(enable ? 1 : 0, HOL_BLOCK_EN_FMASK); + offset = IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(endpoint_id); + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa) +{ + u32 i; + + for (i = 0; i < IPA_ENDPOINT_MAX; i++) { + struct ipa_endpoint *endpoint = &ipa->endpoint[i]; + + if (endpoint->ee_id != GSI_EE_MODEM) + continue; + + (void)ipa_endpoint_init_hol_block_timer(endpoint, 0); + ipa_endpoint_init_hol_block_enable(endpoint, true); + } +} + +static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + /* DEAGGR_HDR_LEN is 0 */ + /* PACKET_OFFSET_VALID is 0 */ + /* PACKET_OFFSET_LOCATION is ignored (not valid) */ + /* MAX_PACKET_LEN is 0 (not enforced) */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_SEQ_N_OFFSET(endpoint->endpoint_id); + u32 seq_type = endpoint->seq_type; + u32 val = 0; + + val |= u32_encode_bits(seq_type & 0xf, HPS_SEQ_TYPE_FMASK); + val |= u32_encode_bits((seq_type >> 4) & 0xf, DPS_SEQ_TYPE_FMASK); + /* HPS_REP_SEQ_TYPE is 0 */ + /* DPS_REP_SEQ_TYPE is 0 */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +/** + * ipa_endpoint_skb_tx() - Transmit a socket buffer + * @endpoint: Endpoint pointer + * @skb: Socket buffer to send + * + * Returns: 0 if successful, or a negative error code + */ +int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb) +{ + struct gsi_trans *trans; + u32 nr_frags; + int ret; + + /* Make sure source endpoint's TLV FIFO has enough entries to + * hold the linear portion of the skb and all its fragments. + * If not, see if we can linearize it before giving up. 
+ */ + nr_frags = skb_shinfo(skb)->nr_frags; + if (1 + nr_frags > endpoint->trans_tre_max) { + if (skb_linearize(skb)) + return -E2BIG; + nr_frags = 0; + } + + trans = ipa_endpoint_trans_alloc(endpoint, 1 + nr_frags); + if (!trans) + return -EBUSY; + + ret = gsi_trans_skb_add(trans, skb); + if (ret) + goto err_trans_free; + trans->data = skb; /* transaction owns skb now */ + + gsi_trans_commit(trans, !netdev_xmit_more()); + + return 0; + +err_trans_free: + gsi_trans_free(trans); + + return -ENOMEM; +} + +static void ipa_endpoint_status(struct ipa_endpoint *endpoint) +{ + u32 endpoint_id = endpoint->endpoint_id; + struct ipa *ipa = endpoint->ipa; + u32 val = 0; + u32 offset; + + offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id); + + if (endpoint->data->status_enable) { + val |= STATUS_EN_FMASK; + if (endpoint->toward_ipa) { + enum ipa_endpoint_name name; + u32 status_endpoint_id; + + name = endpoint->data->tx.status_endpoint; + status_endpoint_id = ipa->name_map[name]->endpoint_id; + + val |= u32_encode_bits(status_endpoint_id, + STATUS_ENDP_FMASK); + } + /* STATUS_LOCATION is 0 (status element precedes packet) */ + /* The next field is present for IPA v4.0 and above */ + /* STATUS_PKT_SUPPRESS_FMASK is 0 */ + } + + iowrite32(val, ipa->reg_virt + offset); +} + +static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint) +{ + struct gsi_trans *trans; + bool doorbell = false; + struct page *page; + u32 offset; + u32 len; + int ret; + + page = dev_alloc_pages(IPA_RX_BUFFER_ORDER); + if (!page) + return -ENOMEM; + + trans = ipa_endpoint_trans_alloc(endpoint, 1); + if (!trans) + goto err_free_pages; + + /* Offset the buffer to make space for skb headroom */ + offset = NET_SKB_PAD; + len = IPA_RX_BUFFER_SIZE - offset; + + ret = gsi_trans_page_add(trans, page, len, offset); + if (ret) + goto err_trans_free; + trans->data = page; /* transaction owns page now */ + + if (++endpoint->replenish_ready == IPA_REPLENISH_BATCH) { + doorbell = true; + endpoint->replenish_ready = 0; + } + + gsi_trans_commit(trans, doorbell); + + return 0; + +err_trans_free: + gsi_trans_free(trans); +err_free_pages: + __free_pages(page, IPA_RX_BUFFER_ORDER); + + return -ENOMEM; +} + +/** + * ipa_endpoint_replenish() - Replenish the Rx packets cache. + * + * Allocate RX packet wrapper structures with maximal socket buffers + * for an endpoint. These are supplied to the hardware, which fills + * them with incoming data. + */ +static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one) +{ + struct gsi *gsi; + u32 backlog; + + if (!endpoint->replenish_enabled) { + if (add_one) + atomic_inc(&endpoint->replenish_saved); + return; + } + + if (add_one) + atomic_inc(&endpoint->replenish_backlog); + + while (atomic_dec_not_zero(&endpoint->replenish_backlog)) + if (ipa_endpoint_replenish_one(endpoint)) + goto try_again_later; + + return; + +try_again_later: + /* The last one didn't succeed, so fix the backlog */ + backlog = atomic_inc_return(&endpoint->replenish_backlog); + + /* Whenever a receive buffer transaction completes we'll try to + * replenish again. It's unlikely, but if we fail to supply even + * one buffer, nothing will trigger another replenish attempt. + * Receive buffer transactions use one TRE, so schedule work to + * try replenishing again if our backlog is *all* available TREs. 
+ */ + gsi = &endpoint->ipa->gsi; + if (backlog == gsi_channel_tre_max(gsi, endpoint->channel_id)) + schedule_delayed_work(&endpoint->replenish_work, + msecs_to_jiffies(1)); +} + +static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint) +{ + struct gsi *gsi = &endpoint->ipa->gsi; + u32 max_backlog; + u32 saved; + + endpoint->replenish_enabled = true; + while ((saved = atomic_xchg(&endpoint->replenish_saved, 0))) + atomic_add(saved, &endpoint->replenish_backlog); + + /* Start replenishing if hardware currently has no buffers */ + max_backlog = gsi_channel_tre_max(gsi, endpoint->channel_id); + if (atomic_read(&endpoint->replenish_backlog) == max_backlog) { + ipa_endpoint_replenish(endpoint, false); + return; + } +} + +static void ipa_endpoint_replenish_disable(struct ipa_endpoint *endpoint) +{ + u32 backlog; + + endpoint->replenish_enabled = false; + while ((backlog = atomic_xchg(&endpoint->replenish_backlog, 0))) + atomic_add(backlog, &endpoint->replenish_saved); +} + +static void ipa_endpoint_replenish_work(struct work_struct *work) +{ + struct delayed_work *dwork = to_delayed_work(work); + struct ipa_endpoint *endpoint; + + endpoint = container_of(dwork, struct ipa_endpoint, replenish_work); + + ipa_endpoint_replenish(endpoint, false); +} + +static void ipa_endpoint_skb_copy(struct ipa_endpoint *endpoint, + void *data, u32 len, u32 extra) +{ + struct sk_buff *skb; + + skb = __dev_alloc_skb(len, GFP_ATOMIC); + if (skb) { + skb_put(skb, len); + memcpy(skb->data, data, len); + skb->truesize += extra; + } + + /* Now receive it, or drop it if there's no netdev */ + if (endpoint->netdev) + ipa_modem_skb_rx(endpoint->netdev, skb); + else if (skb) + dev_kfree_skb_any(skb); +} + +static bool ipa_endpoint_skb_build(struct ipa_endpoint *endpoint, + struct page *page, u32 len) +{ + struct sk_buff *skb; + + /* Nothing to do if there's no netdev */ + if (!endpoint->netdev) + return false; + + /* assert(len <= SKB_WITH_OVERHEAD(IPA_RX_BUFFER_SIZE-NET_SKB_PAD)); */ + skb = build_skb(page_address(page), IPA_RX_BUFFER_SIZE); + if (skb) { + /* Reserve the headroom and account for the data */ + skb_reserve(skb, NET_SKB_PAD); + skb_put(skb, len); + } + + /* Receive the buffer (or record drop if unable to build it) */ + ipa_modem_skb_rx(endpoint->netdev, skb); + + return skb != NULL; +} + +/* The format of a packet status element is the same for several status + * types (opcodes). 
The NEW_FRAG_RULE, LOG, DCMP (decompression) types + * aren't currently supported + */ +static bool ipa_status_format_packet(enum ipa_status_opcode opcode) +{ + switch (opcode) { + case IPA_STATUS_OPCODE_PACKET: + case IPA_STATUS_OPCODE_DROPPED_PACKET: + case IPA_STATUS_OPCODE_SUSPENDED_PACKET: + case IPA_STATUS_OPCODE_PACKET_2ND_PASS: + return true; + default: + return false; + } +} + +static bool ipa_endpoint_status_skip(struct ipa_endpoint *endpoint, + const struct ipa_status *status) +{ + u32 endpoint_id; + + if (!ipa_status_format_packet(status->opcode)) + return true; + if (!status->pkt_len) + return true; + endpoint_id = u32_get_bits(status->endp_dst_idx, + IPA_STATUS_DST_IDX_FMASK); + if (endpoint_id != endpoint->endpoint_id) + return true; + + return false; /* Don't skip this packet, process it */ +} + +/* Return whether the status indicates the packet should be dropped */ +static bool ipa_status_drop_packet(const struct ipa_status *status) +{ + u32 val; + + /* Deaggregation exceptions we drop; others we consume */ + if (status->exception) + return status->exception == IPA_STATUS_EXCEPTION_DEAGGR; + + /* Drop the packet if it fails to match a routing rule; otherwise no */ + val = le32_get_bits(status->flags1, IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK); + + return val == field_max(IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK); +} + +static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint, + struct page *page, u32 total_len) +{ + void *data = page_address(page) + NET_SKB_PAD; + u32 unused = IPA_RX_BUFFER_SIZE - total_len; + u32 resid = total_len; + + while (resid) { + const struct ipa_status *status = data; + u32 align; + u32 len; + + if (resid < sizeof(*status)) { + dev_err(&endpoint->ipa->pdev->dev, + "short message (%u bytes < %zu byte status)\n", + resid, sizeof(*status)); + break; + } + + /* Skip over status packets that lack packet data */ + if (ipa_endpoint_status_skip(endpoint, status)) { + data += sizeof(*status); + resid -= sizeof(*status); + continue; + } + + /* Compute the amount of buffer space consumed by the + * packet, including the status element. If the hardware + * is configured to pad packet data to an aligned boundary, + * account for that. And if checksum offload is is enabled + * a trailer containing computed checksum information will + * be appended. + */ + align = endpoint->data->rx.pad_align ? : 1; + len = le16_to_cpu(status->pkt_len); + len = sizeof(*status) + ALIGN(len, align); + if (endpoint->data->checksum) + len += sizeof(struct rmnet_map_dl_csum_trailer); + + /* Charge the new packet with a proportional fraction of + * the unused space in the original receive buffer. + * XXX Charge a proportion of the *whole* receive buffer? 
+ */ + if (!ipa_status_drop_packet(status)) { + u32 extra = unused * len / total_len; + void *data2 = data + sizeof(*status); + u32 len2 = le16_to_cpu(status->pkt_len); + + /* Client receives only packet data (no status) */ + ipa_endpoint_skb_copy(endpoint, data2, len2, extra); + } + + /* Consume status and the full packet it describes */ + data += len; + resid -= len; + } +} + +/* Complete a TX transaction, command or from ipa_endpoint_skb_tx() */ +static void ipa_endpoint_tx_complete(struct ipa_endpoint *endpoint, + struct gsi_trans *trans) +{ +} + +/* Complete transaction initiated in ipa_endpoint_replenish_one() */ +static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint, + struct gsi_trans *trans) +{ + struct page *page; + + ipa_endpoint_replenish(endpoint, true); + + if (trans->cancelled) + return; + + /* Parse or build a socket buffer using the actual received length */ + page = trans->data; + if (endpoint->data->status_enable) + ipa_endpoint_status_parse(endpoint, page, trans->len); + else if (ipa_endpoint_skb_build(endpoint, page, trans->len)) + trans->data = NULL; /* Pages have been consumed */ +} + +void ipa_endpoint_trans_complete(struct ipa_endpoint *endpoint, + struct gsi_trans *trans) +{ + if (endpoint->toward_ipa) + ipa_endpoint_tx_complete(endpoint, trans); + else + ipa_endpoint_rx_complete(endpoint, trans); +} + +void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint, + struct gsi_trans *trans) +{ + if (endpoint->toward_ipa) { + struct ipa *ipa = endpoint->ipa; + + /* Nothing to do for command transactions */ + if (endpoint != ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]) { + struct sk_buff *skb = trans->data; + + if (skb) + dev_kfree_skb_any(skb); + } + } else { + struct page *page = trans->data; + + if (page) + __free_pages(page, IPA_RX_BUFFER_ORDER); + } +} + +void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id) +{ + u32 val; + + /* ROUTE_DIS is 0 */ + val = u32_encode_bits(endpoint_id, ROUTE_DEF_PIPE_FMASK); + val |= ROUTE_DEF_HDR_TABLE_FMASK; + val |= u32_encode_bits(0, ROUTE_DEF_HDR_OFST_FMASK); + val |= u32_encode_bits(endpoint_id, ROUTE_FRAG_DEF_PIPE_FMASK); + val |= ROUTE_DEF_RETAIN_HDR_FMASK; + + iowrite32(val, ipa->reg_virt + IPA_REG_ROUTE_OFFSET); +} + +void ipa_endpoint_default_route_clear(struct ipa *ipa) +{ + ipa_endpoint_default_route_set(ipa, 0); +} + +static bool ipa_endpoint_aggr_active(struct ipa_endpoint *endpoint) +{ + u32 mask = BIT(endpoint->endpoint_id); + struct ipa *ipa = endpoint->ipa; + u32 offset; + u32 val; + + /* assert(mask & ipa->available); */ + offset = ipa_reg_state_aggr_active_offset(ipa->version); + val = ioread32(ipa->reg_virt + offset); + + return !!(val & mask); +} + +static void ipa_endpoint_force_close(struct ipa_endpoint *endpoint) +{ + u32 mask = BIT(endpoint->endpoint_id); + struct ipa *ipa = endpoint->ipa; + + /* assert(mask & ipa->available); */ + iowrite32(mask, ipa->reg_virt + IPA_REG_AGGR_FORCE_CLOSE_OFFSET); +} + +/** + * ipa_endpoint_reset_rx_aggr() - Reset RX endpoint with aggregation active + * @endpoint: Endpoint to be reset + * + * If aggregation is active on an RX endpoint when a reset is performed + * on its underlying GSI channel, a special sequence of actions must be + * taken to ensure the IPA pipeline is properly cleared. 
+ * + * @Return: 0 if successful, or a negative error code + */ +static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint) +{ + struct device *dev = &endpoint->ipa->pdev->dev; + struct ipa *ipa = endpoint->ipa; + bool endpoint_suspended = false; + struct gsi *gsi = &ipa->gsi; + dma_addr_t addr; + bool db_enable; + u32 retries; + u32 len = 1; + void *virt; + int ret; + + virt = kzalloc(len, GFP_KERNEL); + if (!virt) + return -ENOMEM; + + addr = dma_map_single(dev, virt, len, DMA_FROM_DEVICE); + if (dma_mapping_error(dev, addr)) { + ret = -ENOMEM; + goto out_kfree; + } + + /* Force close aggregation before issuing the reset */ + ipa_endpoint_force_close(endpoint); + + /* Reset and reconfigure the channel with the doorbell engine + * disabled. Then poll until we know aggregation is no longer + * active. We'll re-enable the doorbell (if appropriate) when + * we reset again below. + */ + gsi_channel_reset(gsi, endpoint->channel_id, false); + + /* Make sure the channel isn't suspended */ + if (endpoint->ipa->version == IPA_VERSION_3_5_1) + if (!ipa_endpoint_init_ctrl(endpoint, false)) + endpoint_suspended = true; + + /* Start channel and do a 1 byte read */ + ret = gsi_channel_start(gsi, endpoint->channel_id); + if (ret) + goto out_suspend_again; + + ret = gsi_trans_read_byte(gsi, endpoint->channel_id, addr); + if (ret) + goto err_endpoint_stop; + + /* Wait for aggregation to be closed on the channel */ + retries = IPA_ENDPOINT_RESET_AGGR_RETRY_MAX; + do { + if (!ipa_endpoint_aggr_active(endpoint)) + break; + msleep(1); + } while (retries--); + + /* Check one last time */ + if (ipa_endpoint_aggr_active(endpoint)) + dev_err(dev, "endpoint %u still active during reset\n", + endpoint->endpoint_id); + + gsi_trans_read_byte_done(gsi, endpoint->channel_id); + + ret = ipa_endpoint_stop(endpoint); + if (ret) + goto out_suspend_again; + + /* Finally, reset and reconfigure the channel again (re-enabling the + * the doorbell engine if appropriate). Sleep for 1 millisecond to + * complete the channel reset sequence. Finish by suspending the + * channel again (if necessary). + */ + db_enable = ipa->version == IPA_VERSION_3_5_1; + gsi_channel_reset(gsi, endpoint->channel_id, db_enable); + + msleep(1); + + goto out_suspend_again; + +err_endpoint_stop: + ipa_endpoint_stop(endpoint); +out_suspend_again: + if (endpoint_suspended) + (void)ipa_endpoint_init_ctrl(endpoint, true); + dma_unmap_single(dev, addr, len, DMA_FROM_DEVICE); +out_kfree: + kfree(virt); + + return ret; +} + +static void ipa_endpoint_reset(struct ipa_endpoint *endpoint) +{ + u32 channel_id = endpoint->channel_id; + struct ipa *ipa = endpoint->ipa; + bool db_enable; + bool special; + int ret = 0; + + /* On IPA v3.5.1, if an RX endpoint is reset while aggregation + * is active, we need to handle things specially to recover. + * All other cases just need to reset the underlying GSI channel. + * + * IPA v3.5.1 enables the doorbell engine. Newer versions do not. 
+ */ + db_enable = ipa->version == IPA_VERSION_3_5_1; + special = !endpoint->toward_ipa && endpoint->data->aggregation; + if (special && ipa_endpoint_aggr_active(endpoint)) + ret = ipa_endpoint_reset_rx_aggr(endpoint); + else + gsi_channel_reset(&ipa->gsi, channel_id, db_enable); + + if (ret) + dev_err(&ipa->pdev->dev, + "error %d resetting channel %u for endpoint %u\n", + ret, endpoint->channel_id, endpoint->endpoint_id); +} + +static int ipa_endpoint_stop_rx_dma(struct ipa *ipa) +{ + u16 size = IPA_ENDPOINT_STOP_RX_SIZE; + struct gsi_trans *trans; + dma_addr_t addr; + int ret; + + trans = ipa_cmd_trans_alloc(ipa, 1); + if (!trans) { + dev_err(&ipa->pdev->dev, + "no transaction for RX endpoint STOP workaround\n"); + return -EBUSY; + } + + /* Read into the highest part of the zero memory area */ + addr = ipa->zero_addr + ipa->zero_size - size; + + ipa_cmd_dma_task_32b_addr_add(trans, size, addr, false); + + ret = gsi_trans_commit_wait_timeout(trans, ENDPOINT_STOP_DMA_TIMEOUT); + if (ret) + gsi_trans_free(trans); + + return ret; +} + +/** + * ipa_endpoint_stop() - Stops a GSI channel in IPA + * @client: Client whose endpoint should be stopped + * + * This function implements the sequence to stop a GSI channel + * in IPA. This function returns when the channel is is STOP state. + * + * Return value: 0 on success, negative otherwise + */ +int ipa_endpoint_stop(struct ipa_endpoint *endpoint) +{ + u32 retries = IPA_ENDPOINT_STOP_RETRY_MAX; + int ret; + + do { + struct ipa *ipa = endpoint->ipa; + struct gsi *gsi = &ipa->gsi; + + ret = gsi_channel_stop(gsi, endpoint->channel_id); + if (ret != -EAGAIN) + break; + + if (endpoint->toward_ipa) + continue; + + /* For IPA v3.5.1, send a DMA read task and check again */ + if (ipa->version == IPA_VERSION_3_5_1) { + ret = ipa_endpoint_stop_rx_dma(ipa); + if (ret) + break; + } + + msleep(1); + } while (retries--); + + return retries ? ret : -EIO; +} + +static void ipa_endpoint_program(struct ipa_endpoint *endpoint) +{ + struct device *dev = &endpoint->ipa->pdev->dev; + int ret; + + if (endpoint->toward_ipa) { + bool delay_mode = endpoint->data->tx.delay; + + ret = ipa_endpoint_init_ctrl(endpoint, delay_mode); + /* Endpoint is expected to not be in delay mode */ + if (!ret != delay_mode) { + dev_warn(dev, + "TX endpoint %u was %sin delay mode\n", + endpoint->endpoint_id, + delay_mode ? "already " : ""); + } + ipa_endpoint_init_hdr_ext(endpoint); + ipa_endpoint_init_aggr(endpoint); + ipa_endpoint_init_deaggr(endpoint); + ipa_endpoint_init_seq(endpoint); + } else { + if (endpoint->ipa->version == IPA_VERSION_3_5_1) { + if (!ipa_endpoint_init_ctrl(endpoint, false)) + dev_warn(dev, + "RX endpoint %u was suspended\n", + endpoint->endpoint_id); + } + ipa_endpoint_init_hdr_ext(endpoint); + ipa_endpoint_init_aggr(endpoint); + } + ipa_endpoint_init_cfg(endpoint); + ipa_endpoint_init_hdr(endpoint); + ipa_endpoint_init_hdr_metadata_mask(endpoint); + ipa_endpoint_init_mode(endpoint); + ipa_endpoint_status(endpoint); +} + +int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint) +{ + struct ipa *ipa = endpoint->ipa; + struct gsi *gsi = &ipa->gsi; + int ret; + + ret = gsi_channel_start(gsi, endpoint->channel_id); + if (ret) { + dev_err(&ipa->pdev->dev, + "error %d starting %cX channel %u for endpoint %u\n", + endpoint->channel_id, endpoint->toward_ipa ? 
'T' : 'R', + endpoint->endpoint_id); + return ret; + } + + if (!endpoint->toward_ipa) { + ipa_interrupt_suspend_enable(ipa->interrupt, + endpoint->endpoint_id); + ipa_endpoint_replenish_enable(endpoint); + } + + ipa->enabled |= BIT(endpoint->endpoint_id); + + return 0; +} + +void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint) +{ + u32 mask = BIT(endpoint->endpoint_id); + struct ipa *ipa = endpoint->ipa; + int ret; + + if (!(endpoint->ipa->enabled & mask)) + return; + + endpoint->ipa->enabled ^= mask; + + if (!endpoint->toward_ipa) { + ipa_endpoint_replenish_disable(endpoint); + ipa_interrupt_suspend_disable(ipa->interrupt, + endpoint->endpoint_id); + } + + /* Note that if stop fails, the channel's state is not well-defined */ + ret = ipa_endpoint_stop(endpoint); + if (ret) + dev_err(&ipa->pdev->dev, + "error %d attempting to stop endpoint %u\n", ret, + endpoint->endpoint_id); +} + +/** + * ipa_endpoint_suspend_aggr() - Emulate suspend interrupt + * @endpoint_id: Endpoint on which to emulate a suspend + * + * Emulate suspend IPA interrupt to unsuspend an endpoint suspended + * with an open aggregation frame. This is to work around a hardware + * issue in IPA version 3.5.1 where the suspend interrupt will not be + * generated when it should be. + */ +static void ipa_endpoint_suspend_aggr(struct ipa_endpoint *endpoint) +{ + struct ipa *ipa = endpoint->ipa; + + /* assert(ipa->version == IPA_VERSION_3_5_1); */ + + if (!endpoint->data->aggregation) + return; + + /* Nothing to do if the endpoint doesn't have aggregation open */ + if (!ipa_endpoint_aggr_active(endpoint)) + return; + + /* Force close aggregation */ + ipa_endpoint_force_close(endpoint); + + ipa_interrupt_simulate_suspend(ipa->interrupt); +} + +void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint) +{ + struct device *dev = &endpoint->ipa->pdev->dev; + struct gsi *gsi = &endpoint->ipa->gsi; + bool stop_channel; + int ret; + + if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id))) + return; + + if (!endpoint->toward_ipa) + ipa_endpoint_replenish_disable(endpoint); + + /* IPA v3.5.1 doesn't use channel stop for suspend */ + stop_channel = endpoint->ipa->version != IPA_VERSION_3_5_1; + if (!endpoint->toward_ipa && !stop_channel) { + /* Due to a hardware bug, a client suspended with an open + * aggregation frame will not generate a SUSPEND IPA + * interrupt. We work around this by force-closing the + * aggregation frame, then simulating the arrival of such + * an interrupt. 
+ */ + WARN_ON(ipa_endpoint_init_ctrl(endpoint, true)); + ipa_endpoint_suspend_aggr(endpoint); + } + + ret = gsi_channel_suspend(gsi, endpoint->channel_id, stop_channel); + if (ret) + dev_err(dev, "error %d suspending channel %u\n", ret, + endpoint->channel_id); +} + +void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint) +{ + struct device *dev = &endpoint->ipa->pdev->dev; + struct gsi *gsi = &endpoint->ipa->gsi; + bool start_channel; + int ret; + + if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id))) + return; + + /* IPA v3.5.1 doesn't use channel start for resume */ + start_channel = endpoint->ipa->version != IPA_VERSION_3_5_1; + if (!endpoint->toward_ipa && !start_channel) + WARN_ON(ipa_endpoint_init_ctrl(endpoint, false)); + + ret = gsi_channel_resume(gsi, endpoint->channel_id, start_channel); + if (ret) + dev_err(dev, "error %d resuming channel %u\n", ret, + endpoint->channel_id); + else if (!endpoint->toward_ipa) + ipa_endpoint_replenish_enable(endpoint); +} + +void ipa_endpoint_suspend(struct ipa *ipa) +{ + if (ipa->modem_netdev) + ipa_modem_suspend(ipa->modem_netdev); + + ipa_endpoint_suspend_one(ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]); + ipa_endpoint_suspend_one(ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]); +} + +void ipa_endpoint_resume(struct ipa *ipa) +{ + ipa_endpoint_resume_one(ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]); + ipa_endpoint_resume_one(ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]); + + if (ipa->modem_netdev) + ipa_modem_resume(ipa->modem_netdev); +} + +static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint) +{ + struct gsi *gsi = &endpoint->ipa->gsi; + u32 channel_id = endpoint->channel_id; + + /* Only AP endpoints get set up */ + if (endpoint->ee_id != GSI_EE_AP) + return; + + endpoint->trans_tre_max = gsi_channel_trans_tre_max(gsi, channel_id); + if (!endpoint->toward_ipa) { + /* RX transactions require a single TRE, so the maximum + * backlog is the same as the maximum outstanding TREs. + */ + endpoint->replenish_enabled = false; + atomic_set(&endpoint->replenish_saved, + gsi_channel_tre_max(gsi, endpoint->channel_id)); + atomic_set(&endpoint->replenish_backlog, 0); + INIT_DELAYED_WORK(&endpoint->replenish_work, + ipa_endpoint_replenish_work); + } + + ipa_endpoint_program(endpoint); + + endpoint->ipa->set_up |= BIT(endpoint->endpoint_id); +} + +static void ipa_endpoint_teardown_one(struct ipa_endpoint *endpoint) +{ + endpoint->ipa->set_up &= ~BIT(endpoint->endpoint_id); + + if (!endpoint->toward_ipa) + cancel_delayed_work_sync(&endpoint->replenish_work); + + ipa_endpoint_reset(endpoint); +} + +void ipa_endpoint_setup(struct ipa *ipa) +{ + u32 initialized = ipa->initialized; + + ipa->set_up = 0; + while (initialized) { + u32 endpoint_id = __ffs(initialized); + + initialized ^= BIT(endpoint_id); + + ipa_endpoint_setup_one(&ipa->endpoint[endpoint_id]); + } +} + +void ipa_endpoint_teardown(struct ipa *ipa) +{ + u32 set_up = ipa->set_up; + + while (set_up) { + u32 endpoint_id = __fls(set_up); + + set_up ^= BIT(endpoint_id); + + ipa_endpoint_teardown_one(&ipa->endpoint[endpoint_id]); + } + ipa->set_up = 0; +} + +int ipa_endpoint_config(struct ipa *ipa) +{ + struct device *dev = &ipa->pdev->dev; + u32 initialized; + u32 rx_base; + u32 rx_mask; + u32 tx_mask; + int ret = 0; + u32 max; + u32 val; + + /* Find out about the endpoints supplied by the hardware, and ensure + * the highest one doesn't exceed the number we support. 
+ */ + val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET); + + /* Our RX is an IPA producer */ + rx_base = u32_get_bits(val, BAM_PROD_LOWEST_FMASK); + max = rx_base + u32_get_bits(val, BAM_MAX_PROD_PIPES_FMASK); + if (max > IPA_ENDPOINT_MAX) { + dev_err(dev, "too many endpoints (%u > %u)\n", + max, IPA_ENDPOINT_MAX); + return -EINVAL; + } + rx_mask = GENMASK(max - 1, rx_base); + + /* Our TX is an IPA consumer */ + max = u32_get_bits(val, BAM_MAX_CONS_PIPES_FMASK); + tx_mask = GENMASK(max - 1, 0); + + ipa->available = rx_mask | tx_mask; + + /* Check for initialized endpoints not supported by the hardware */ + if (ipa->initialized & ~ipa->available) { + dev_err(dev, "unavailable endpoint id(s) 0x%08x\n", + ipa->initialized & ~ipa->available); + ret = -EINVAL; /* Report other errors too */ + } + + initialized = ipa->initialized; + while (initialized) { + u32 endpoint_id = __ffs(initialized); + struct ipa_endpoint *endpoint; + + initialized ^= BIT(endpoint_id); + + /* Make sure it's pointing in the right direction */ + endpoint = &ipa->endpoint[endpoint_id]; + if ((endpoint_id < rx_base) != !!endpoint->toward_ipa) { + dev_err(dev, "endpoint id %u wrong direction\n", + endpoint_id); + ret = -EINVAL; + } + } + + return ret; +} + +void ipa_endpoint_deconfig(struct ipa *ipa) +{ + ipa->available = 0; /* Nothing more to do */ +} + +static void ipa_endpoint_init_one(struct ipa *ipa, enum ipa_endpoint_name name, + const struct ipa_gsi_endpoint_data *data) +{ + struct ipa_endpoint *endpoint; + + endpoint = &ipa->endpoint[data->endpoint_id]; + + if (data->ee_id == GSI_EE_AP) + ipa->channel_map[data->channel_id] = endpoint; + ipa->name_map[name] = endpoint; + + endpoint->ipa = ipa; + endpoint->ee_id = data->ee_id; + endpoint->seq_type = data->endpoint.seq_type; + endpoint->channel_id = data->channel_id; + endpoint->endpoint_id = data->endpoint_id; + endpoint->toward_ipa = data->toward_ipa; + endpoint->data = &data->endpoint.config; + + ipa->initialized |= BIT(endpoint->endpoint_id); +} + +void ipa_endpoint_exit_one(struct ipa_endpoint *endpoint) +{ + endpoint->ipa->initialized &= ~BIT(endpoint->endpoint_id); + + memset(endpoint, 0, sizeof(*endpoint)); +} + +void ipa_endpoint_exit(struct ipa *ipa) +{ + u32 initialized = ipa->initialized; + + while (initialized) { + u32 endpoint_id = __fls(initialized); + + initialized ^= BIT(endpoint_id); + + ipa_endpoint_exit_one(&ipa->endpoint[endpoint_id]); + } + memset(ipa->name_map, 0, sizeof(ipa->name_map)); + memset(ipa->channel_map, 0, sizeof(ipa->channel_map)); +} + +/* Returns a bitmask of endpoints that support filtering, or 0 on error */ +u32 ipa_endpoint_init(struct ipa *ipa, u32 count, + const struct ipa_gsi_endpoint_data *data) +{ + enum ipa_endpoint_name name; + u32 filter_map; + + if (!ipa_endpoint_data_valid(ipa, count, data)) + return 0; /* Error */ + + ipa->initialized = 0; + + filter_map = 0; + for (name = 0; name < count; name++, data++) { + if (ipa_gsi_endpoint_data_empty(data)) + continue; /* Skip over empty slots */ + + ipa_endpoint_init_one(ipa, name, data); + + if (data->endpoint.filter_support) + filter_map |= BIT(data->endpoint_id); + } + + if (!ipa_filter_map_valid(ipa, filter_map)) + goto err_endpoint_exit; + + return filter_map; /* Non-zero bitmask */ + +err_endpoint_exit: + ipa_endpoint_exit(ipa); + + return 0; /* Error */ +} diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h new file mode 100644 index 000000000000..4b336a1f759d --- /dev/null +++ b/drivers/net/ipa/ipa_endpoint.h @@ -0,0 +1,110 @@ +/* 
SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ +#ifndef _IPA_ENDPOINT_H_ +#define _IPA_ENDPOINT_H_ + +#include +#include +#include + +#include "gsi.h" +#include "ipa_reg.h" + +struct net_device; +struct sk_buff; + +struct ipa; +struct ipa_gsi_endpoint_data; + +/* Non-zero granularity of counter used to implement aggregation timeout */ +#define IPA_AGGR_GRANULARITY 500 /* microseconds */ + +#define IPA_MTU ETH_DATA_LEN + +enum ipa_endpoint_name { + IPA_ENDPOINT_AP_MODEM_TX = 0, + IPA_ENDPOINT_MODEM_LAN_TX, + IPA_ENDPOINT_MODEM_COMMAND_TX, + IPA_ENDPOINT_AP_COMMAND_TX, + IPA_ENDPOINT_MODEM_AP_TX, + IPA_ENDPOINT_AP_LAN_RX, + IPA_ENDPOINT_AP_MODEM_RX, + IPA_ENDPOINT_MODEM_AP_RX, + IPA_ENDPOINT_MODEM_LAN_RX, + IPA_ENDPOINT_COUNT, /* Number of names (not an index) */ +}; + +#define IPA_ENDPOINT_MAX 32 /* Max supported by driver */ + +/** + * struct ipa_endpoint - IPA endpoint information + * @client: Client associated with the endpoint + * @channel_id: EP's GSI channel + * @evt_ring_id: EP's GSI channel event ring + */ +struct ipa_endpoint { + struct ipa *ipa; + enum ipa_seq_type seq_type; + enum gsi_ee_id ee_id; + u32 channel_id; + u32 endpoint_id; + bool toward_ipa; + const struct ipa_endpoint_config_data *data; + + u32 trans_tre_max; /* maximum descriptors per transaction */ + u32 evt_ring_id; + + /* Net device this endpoint is associated with, if any */ + struct net_device *netdev; + + /* Receive buffer replenishing for RX endpoints */ + bool replenish_enabled; + u32 replenish_ready; + atomic_t replenish_saved; + atomic_t replenish_backlog; + struct delayed_work replenish_work; /* global wq */ +}; + +void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa); + +void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable); + +int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa); + +int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb); + +int ipa_endpoint_stop(struct ipa_endpoint *endpoint); + +void ipa_endpoint_exit_one(struct ipa_endpoint *endpoint); + +int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint); +void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint); + +void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint); +void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint); + +void ipa_endpoint_suspend(struct ipa *ipa); +void ipa_endpoint_resume(struct ipa *ipa); + +void ipa_endpoint_setup(struct ipa *ipa); +void ipa_endpoint_teardown(struct ipa *ipa); + +int ipa_endpoint_config(struct ipa *ipa); +void ipa_endpoint_deconfig(struct ipa *ipa); + +void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id); +void ipa_endpoint_default_route_clear(struct ipa *ipa); + +u32 ipa_endpoint_init(struct ipa *ipa, u32 count, + const struct ipa_gsi_endpoint_data *data); +void ipa_endpoint_exit(struct ipa *ipa); + +void ipa_endpoint_trans_complete(struct ipa_endpoint *ipa, + struct gsi_trans *trans); +void ipa_endpoint_trans_release(struct ipa_endpoint *ipa, + struct gsi_trans *trans); + +#endif /* _IPA_ENDPOINT_H_ */ From patchwork Fri Feb 28 22:41:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 190180 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, 
HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5A1A3C3F2D8 for ; Fri, 28 Feb 2020 22:42:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 14023246AC for ; Fri, 28 Feb 2020 22:42:40 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="O1c0BIMs" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727169AbgB1Wmj (ORCPT ); Fri, 28 Feb 2020 17:42:39 -0500 Received: from mail-yw1-f65.google.com ([209.85.161.65]:33636 "EHLO mail-yw1-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727198AbgB1Wmi (ORCPT ); Fri, 28 Feb 2020 17:42:38 -0500 Received: by mail-yw1-f65.google.com with SMTP id j186so4986181ywe.0 for ; Fri, 28 Feb 2020 14:42:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=xQPHP7soRrVFXmFvhGpBfXcKcLLCCcQxwvCnU1L5Jk4=; b=O1c0BIMsHQ+tMXjvNszfTcqe/u1zb8o2ZmZzyjMJU/JO1xrcwSaRhhUiPmrxp2GXnx t/2zr8nBqxe1p7zvNb6p3xaDHSyza2Z6Ki0NwyyAuW6bd9jQv46eBAH8Y24mHP/aps1F Ke47KoVnAIgb4d0JS6TCo5bO9d/J+cQRO00RVNRW9wikV/O7rHLQx8fQFpRQaWVzyMEn Lh8O6wIc0yBHuUqojncsmhxBxB5nDPtPLTr9jhSYget8I3GITzHRL1Se5XYkHzWNMfYx UjLzVE3GzULubmk0hdFfCIg6hqHYH2d0l6NilwIVL4u6jDlYviAE/qjUTNhOzDCxHWzA dczQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=xQPHP7soRrVFXmFvhGpBfXcKcLLCCcQxwvCnU1L5Jk4=; b=EBgJ6vTDWUP3Q1Y/Ex/lhUvJzE4lvckILbPT1TIfloHW495Ybr6SkPWrzl9UlSs4Wj 5BY5zWaK9xTI/CG0xdnbPhSa5+PpicuL2gVAHbELnE6btDaJegbGFLUw0Q7wyvb9vs/B Aha0R7Up3FUkW3FLf9j2WWSskqrnE1d8Wbx9KP3407l2hrSleWVVCeE3V3vWH1ziZYGP FIXZ5rRUOfJr02+7Kr35w4yQgPaYDtg8BUl6l1H6af1YSK4mJWcU3sG4fbmigNphXGir 91R4pq5BnSKwdCStseMiDVMuCFiRAd42m8ofUFA624K+650NNBmKCSNBfWLX+hYvOjDY RJOA== X-Gm-Message-State: APjAAAXkTHIrBKmb1f5Fed2TdFzQT/1LjgMrYlqZtYM6VXkgGhJV3cP+ g8qafIEjrxAgqyxWKSOc8XR7IQ== X-Google-Smtp-Source: APXvYqzpsoO247v8zO1irTwx28HbZqkNiDCg/69RdpAPmNyYfCtVUuUuQzX8BmHay/N5ttOerOO5kQ== X-Received: by 2002:a0d:c981:: with SMTP id l123mr6281826ywd.284.1582929755456; Fri, 28 Feb 2020 14:42:35 -0800 (PST) Received: from localhost.localdomain (c-73-185-129-58.hsd1.mn.comcast.net. 
[73.185.129.58]) by smtp.gmail.com with ESMTPSA id d188sm4637830ywe.50.2020.02.28.14.42.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 28 Feb 2020 14:42:34 -0800 (PST) From: Alex Elder To: Arnd Bergmann , David Miller Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 12/17] soc: qcom: ipa: immediate commands Date: Fri, 28 Feb 2020 16:41:59 -0600 Message-Id: <20200228224204.17746-13-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200228224204.17746-1-elder@linaro.org> References: <20200228224204.17746-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org One TX endpoint (per EE) is used for issuing immediate commands to the IPA. These commands request activites beyond simple data transfers to be done by the IPA hardware. For example, the IPA is able to manage routing packets among endpoints, and immediate commands are used to configure tables used for that routing. Immediate commands are built on top of GSI transactions. They are different from normal transfers (in that they use a special endpoint, and their "payload" is interpreted differently), so separate functions are used to issue immediate command transactions. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_cmd.c | 680 ++++++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_cmd.h | 195 +++++++++++ 2 files changed, 875 insertions(+) create mode 100644 drivers/net/ipa/ipa_cmd.c create mode 100644 drivers/net/ipa/ipa_cmd.h diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c new file mode 100644 index 000000000000..e14a384f2886 --- /dev/null +++ b/drivers/net/ipa/ipa_cmd.c @@ -0,0 +1,680 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ + +#include +#include +#include +#include +#include + +#include "gsi.h" +#include "gsi_trans.h" +#include "ipa.h" +#include "ipa_endpoint.h" +#include "ipa_table.h" +#include "ipa_cmd.h" +#include "ipa_mem.h" + +/** + * DOC: IPA Immediate Commands + * + * The AP command TX endpoint is used to issue immediate commands to the IPA. + * An immediate command is generally used to request the IPA do something + * other than data transfer to another endpoint. + * + * Immediate commands are represented by GSI transactions just like other + * transfer requests, represented by a single GSI TRE. Each immediate + * command has a well-defined format, having a payload of a known length. + * This allows the transfer element's length field to be used to hold an + * immediate command's opcode. The payload for a command resides in DRAM + * and is described by a single scatterlist entry in its transaction. + * Commands do not require a transaction completion callback. To commit + * an immediate command transaction, either gsi_trans_commit_wait() or + * gsi_trans_commit_wait_timeout() is used. 
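+ *
+ * For illustration only (hypothetical caller, with ipa, offset, value and
+ * mask assumed to be in scope), issuing a single register write as an
+ * immediate command looks roughly like this:
+ *
+ *	trans = ipa_cmd_trans_alloc(ipa, 1);
+ *	if (trans) {
+ *		ipa_cmd_register_write_add(trans, offset, value, mask, false);
+ *		gsi_trans_commit_wait(trans);
+ *	}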
+ */ + +/* Some commands can wait until indicated pipeline stages are clear */ +enum pipeline_clear_options { + pipeline_clear_hps = 0, + pipeline_clear_src_grp = 1, + pipeline_clear_full = 2, +}; + +/* IPA_CMD_IP_V{4,6}_{FILTER,ROUTING}_INIT */ + +struct ipa_cmd_hw_ip_fltrt_init { + __le64 hash_rules_addr; + __le64 flags; + __le64 nhash_rules_addr; +}; + +/* Field masks for ipa_cmd_hw_ip_fltrt_init structure fields */ +#define IP_FLTRT_FLAGS_HASH_SIZE_FMASK GENMASK_ULL(11, 0) +#define IP_FLTRT_FLAGS_HASH_ADDR_FMASK GENMASK_ULL(27, 12) +#define IP_FLTRT_FLAGS_NHASH_SIZE_FMASK GENMASK_ULL(39, 28) +#define IP_FLTRT_FLAGS_NHASH_ADDR_FMASK GENMASK_ULL(55, 40) + +/* IPA_CMD_HDR_INIT_LOCAL */ + +struct ipa_cmd_hw_hdr_init_local { + __le64 hdr_table_addr; + __le32 flags; + __le32 reserved; +}; + +/* Field masks for ipa_cmd_hw_hdr_init_local structure fields */ +#define HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK GENMASK(11, 0) +#define HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK GENMASK(27, 12) + +/* IPA_CMD_REGISTER_WRITE */ + +/* For IPA v4.0+, this opcode gets modified with pipeline clear options */ + +#define REGISTER_WRITE_OPCODE_SKIP_CLEAR_FMASK GENMASK(8, 8) +#define REGISTER_WRITE_OPCODE_CLEAR_OPTION_FMASK GENMASK(10, 9) + +struct ipa_cmd_register_write { + __le16 flags; /* Unused/reserved for IPA v3.5.1 */ + __le16 offset; + __le32 value; + __le32 value_mask; + __le32 clear_options; /* Unused/reserved for IPA v4.0+ */ +}; + +/* Field masks for ipa_cmd_register_write structure fields */ +/* The next field is present for IPA v4.0 and above */ +#define REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK GENMASK(14, 11) +/* The next field is present for IPA v3.5.1 only */ +#define REGISTER_WRITE_FLAGS_SKIP_CLEAR_FMASK GENMASK(15, 15) + +/* The next field and its values are present for IPA v3.5.1 only */ +#define REGISTER_WRITE_CLEAR_OPTIONS_FMASK GENMASK(1, 0) + +/* IPA_CMD_IP_PACKET_INIT */ + +struct ipa_cmd_ip_packet_init { + u8 dest_endpoint; + u8 reserved[7]; +}; + +/* Field masks for ipa_cmd_ip_packet_init dest_endpoint field */ +#define IPA_PACKET_INIT_DEST_ENDPOINT_FMASK GENMASK(4, 0) + +/* IPA_CMD_DMA_TASK_32B_ADDR */ + +/* This opcode gets modified with a DMA operation count */ + +#define DMA_TASK_32B_ADDR_OPCODE_COUNT_FMASK GENMASK(15, 8) + +struct ipa_cmd_hw_dma_task_32b_addr { + __le16 flags; + __le16 size; + __le32 addr; + __le16 packet_size; + u8 reserved[6]; +}; + +/* Field masks for ipa_cmd_hw_dma_task_32b_addr flags field */ +#define DMA_TASK_32B_ADDR_FLAGS_SW_RSVD_FMASK GENMASK(10, 0) +#define DMA_TASK_32B_ADDR_FLAGS_CMPLT_FMASK GENMASK(11, 11) +#define DMA_TASK_32B_ADDR_FLAGS_EOF_FMASK GENMASK(12, 12) +#define DMA_TASK_32B_ADDR_FLAGS_FLSH_FMASK GENMASK(13, 13) +#define DMA_TASK_32B_ADDR_FLAGS_LOCK_FMASK GENMASK(14, 14) +#define DMA_TASK_32B_ADDR_FLAGS_UNLOCK_FMASK GENMASK(15, 15) + +/* IPA_CMD_DMA_SHARED_MEM */ + +/* For IPA v4.0+, this opcode gets modified with pipeline clear options */ + +#define DMA_SHARED_MEM_OPCODE_SKIP_CLEAR_FMASK GENMASK(8, 8) +#define DMA_SHARED_MEM_OPCODE_CLEAR_OPTION_FMASK GENMASK(10, 9) + +struct ipa_cmd_hw_dma_mem_mem { + __le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */ + __le16 size; + __le16 local_addr; + __le16 flags; + __le64 system_addr; +}; + +/* Flag allowing atomic clear of target region after reading data (v4.0+)*/ +#define DMA_SHARED_MEM_CLEAR_AFTER_READ GENMASK(15, 15) + +/* Field masks for ipa_cmd_hw_dma_mem_mem structure fields */ +#define DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK GENMASK(0, 0) +/* The next two fields are present for IPA v3.5.1 
only. */ +#define DMA_SHARED_MEM_FLAGS_SKIP_CLEAR_FMASK GENMASK(1, 1) +#define DMA_SHARED_MEM_FLAGS_CLEAR_OPTIONS_FMASK GENMASK(3, 2) + +/* IPA_CMD_IP_PACKET_TAG_STATUS */ + +struct ipa_cmd_ip_packet_tag_status { + __le64 tag; +}; + +#define IP_PACKET_TAG_STATUS_TAG_FMASK GENMASK(63, 16) + +/* Immediate command payload */ +union ipa_cmd_payload { + struct ipa_cmd_hw_ip_fltrt_init table_init; + struct ipa_cmd_hw_hdr_init_local hdr_init_local; + struct ipa_cmd_register_write register_write; + struct ipa_cmd_ip_packet_init ip_packet_init; + struct ipa_cmd_hw_dma_task_32b_addr dma_task_32b_addr; + struct ipa_cmd_hw_dma_mem_mem dma_shared_mem; + struct ipa_cmd_ip_packet_tag_status ip_packet_tag_status; +}; + +static void ipa_cmd_validate_build(void) +{ + /* The sizes of a filter and route tables need to fit into fields + * in the ipa_cmd_hw_ip_fltrt_init structure. Although hashed tables + * might not be used, non-hashed and hashed tables have the same + * maximum size. IPv4 and IPv6 filter tables have the same number + * of entries, as and IPv4 and IPv6 route tables have the same number + * of entries. + */ +#define TABLE_SIZE (TABLE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE) +#define TABLE_COUNT_MAX max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX) + BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK)); + BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK)); +#undef TABLE_COUNT_MAX +#undef TABLE_SIZE +} + +#ifdef IPA_VALIDATE + +/* Validate a memory region holding a table */ +bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem, + bool route, bool ipv6, bool hashed) +{ + struct device *dev = &ipa->pdev->dev; + u32 offset_max; + + offset_max = hashed ? field_max(IP_FLTRT_FLAGS_HASH_ADDR_FMASK) + : field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK); + if (mem->offset > offset_max || + ipa->mem_offset > offset_max - mem->offset) { + dev_err(dev, "IPv%c %s%s table region offset too large " + "(0x%04x + 0x%04x > 0x%04x)\n", + ipv6 ? '6' : '4', hashed ? "hashed " : "", + route ? "route" : "filter", + ipa->mem_offset, mem->offset, offset_max); + return false; + } + + if (mem->offset > ipa->mem_size || + mem->size > ipa->mem_size - mem->offset) { + dev_err(dev, "IPv%c %s%s table region out of range " + "(0x%04x + 0x%04x > 0x%04x)\n", + ipv6 ? '6' : '4', hashed ? "hashed " : "", + route ? 
"route" : "filter", + mem->offset, mem->size, ipa->mem_size); + return false; + } + + return true; +} + +/* Validate the memory region that holds headers */ +static bool ipa_cmd_header_valid(struct ipa *ipa) +{ + const struct ipa_mem *mem = &ipa->mem[IPA_MEM_MODEM_HEADER]; + struct device *dev = &ipa->pdev->dev; + u32 offset_max; + u32 size_max; + u32 size; + + offset_max = field_max(HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK); + if (mem->offset > offset_max || + ipa->mem_offset > offset_max - mem->offset) { + dev_err(dev, "header table region offset too large " + "(0x%04x + 0x%04x > 0x%04x)\n", + ipa->mem_offset + mem->offset, offset_max); + return false; + } + + size_max = field_max(HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK); + size = ipa->mem[IPA_MEM_MODEM_HEADER].size; + size += ipa->mem[IPA_MEM_AP_HEADER].size; + if (mem->offset > ipa->mem_size || size > ipa->mem_size - mem->offset) { + dev_err(dev, "header table region out of range " + "(0x%04x + 0x%04x > 0x%04x)\n", + mem->offset, size, ipa->mem_size); + return false; + } + + return true; +} + +/* Indicate whether an offset can be used with a register_write command */ +static bool ipa_cmd_register_write_offset_valid(struct ipa *ipa, + const char *name, u32 offset) +{ + struct ipa_cmd_register_write *payload; + struct device *dev = &ipa->pdev->dev; + u32 offset_max; + u32 bit_count; + + /* The maximum offset in a register_write immediate command depends + * on the version of IPA. IPA v3.5.1 supports a 16 bit offset, but + * newer versions allow some additional high-order bits. + */ + bit_count = BITS_PER_BYTE * sizeof(payload->offset); + if (ipa->version != IPA_VERSION_3_5_1) + bit_count += hweight32(REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK); + BUILD_BUG_ON(bit_count > 32); + offset_max = ~0 >> (32 - bit_count); + + if (offset > offset_max || ipa->mem_offset > offset_max - offset) { + dev_err(dev, "%s offset too large 0x%04x + 0x%04x > 0x%04x)\n", + ipa->mem_offset + offset, offset_max); + return false; + } + + return true; +} + +/* Check whether offsets passed to register_write are valid */ +static bool ipa_cmd_register_write_valid(struct ipa *ipa) +{ + const char *name; + u32 offset; + + offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version); + name = "filter/route hash flush"; + if (!ipa_cmd_register_write_offset_valid(ipa, name, offset)) + return false; + + offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT); + name = "maximal endpoint status"; + if (!ipa_cmd_register_write_offset_valid(ipa, name, offset)) + return false; + + return true; +} + +bool ipa_cmd_data_valid(struct ipa *ipa) +{ + if (!ipa_cmd_header_valid(ipa)) + return false; + + if (!ipa_cmd_register_write_valid(ipa)) + return false; + + return true; +} + +#endif /* IPA_VALIDATE */ + +int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max) +{ + struct gsi_trans_info *trans_info = &channel->trans_info; + struct device *dev = channel->gsi->dev; + int ret; + + /* This is as good a place as any to validate build constants */ + ipa_cmd_validate_build(); + + /* Even though command payloads are allocated one at a time, + * a single transaction can require up to tlv_count of them, + * so we treat them as if that many can be allocated at once. 
+ */ + ret = gsi_trans_pool_init_dma(dev, &trans_info->cmd_pool, + sizeof(union ipa_cmd_payload), + tre_max, channel->tlv_count); + if (ret) + return ret; + + /* Each TRE needs a command info structure */ + ret = gsi_trans_pool_init(&trans_info->info_pool, + sizeof(struct ipa_cmd_info), + tre_max, channel->tlv_count); + if (ret) + gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool); + + return ret; +} + +void ipa_cmd_pool_exit(struct gsi_channel *channel) +{ + struct gsi_trans_info *trans_info = &channel->trans_info; + struct device *dev = channel->gsi->dev; + + gsi_trans_pool_exit(&trans_info->info_pool); + gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool); +} + +static union ipa_cmd_payload * +ipa_cmd_payload_alloc(struct ipa *ipa, dma_addr_t *addr) +{ + struct gsi_trans_info *trans_info; + struct ipa_endpoint *endpoint; + + endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]; + trans_info = &ipa->gsi.channel[endpoint->channel_id].trans_info; + + return gsi_trans_pool_alloc_dma(&trans_info->cmd_pool, addr); +} + +/* If hash_size is 0, hash_offset and hash_addr ignored. */ +void ipa_cmd_table_init_add(struct gsi_trans *trans, + enum ipa_cmd_opcode opcode, u16 size, u32 offset, + dma_addr_t addr, u16 hash_size, u32 hash_offset, + dma_addr_t hash_addr) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum dma_data_direction direction = DMA_TO_DEVICE; + struct ipa_cmd_hw_ip_fltrt_init *payload; + union ipa_cmd_payload *cmd_payload; + dma_addr_t payload_addr; + u64 val; + + /* Record the non-hash table offset and size */ + offset += ipa->mem_offset; + val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK); + val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK); + + /* The hash table offset and address are zero if its size is 0 */ + if (hash_size) { + /* Record the hash table offset and size */ + hash_offset += ipa->mem_offset; + val |= u64_encode_bits(hash_offset, + IP_FLTRT_FLAGS_HASH_ADDR_FMASK); + val |= u64_encode_bits(hash_size, + IP_FLTRT_FLAGS_HASH_SIZE_FMASK); + } + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->table_init; + + /* Fill in all offsets and sizes and the non-hash table address */ + if (hash_size) + payload->hash_rules_addr = cpu_to_le64(hash_addr); + payload->flags = cpu_to_le64(val); + payload->nhash_rules_addr = cpu_to_le64(addr); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +/* Initialize header space in IPA-local memory */ +void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size, + dma_addr_t addr) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL; + enum dma_data_direction direction = DMA_TO_DEVICE; + struct ipa_cmd_hw_hdr_init_local *payload; + union ipa_cmd_payload *cmd_payload; + dma_addr_t payload_addr; + u32 flags; + + offset += ipa->mem_offset; + + /* With this command we tell the IPA where in its local memory the + * header tables reside. The content of the buffer provided is + * also written via DMA into that space. The IPA hardware owns + * the table, but the AP must initialize it. 
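+	 *
+	 * The 32-bit flags word built below packs the table size into
+	 * bits 0-11 and the IPA-local offset into bits 12-27, per the
+	 * HDR_INIT_LOCAL_FLAGS_* field masks defined earlier in this file.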
+ */ + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->hdr_init_local; + + payload->hdr_table_addr = cpu_to_le64(addr); + flags = u32_encode_bits(size, HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK); + flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK); + payload->flags = cpu_to_le32(flags); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value, + u32 mask, bool clear_full) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + struct ipa_cmd_register_write *payload; + union ipa_cmd_payload *cmd_payload; + u32 opcode = IPA_CMD_REGISTER_WRITE; + dma_addr_t payload_addr; + u32 clear_option; + u32 options; + u16 flags; + + /* pipeline_clear_src_grp is not used */ + clear_option = clear_full ? pipeline_clear_full : pipeline_clear_hps; + + if (ipa->version != IPA_VERSION_3_5_1) { + u16 offset_high; + u32 val; + + /* Opcode encodes pipeline clear options */ + /* SKIP_CLEAR is always 0 (don't skip pipeline clear) */ + val = u16_encode_bits(clear_option, + REGISTER_WRITE_OPCODE_CLEAR_OPTION_FMASK); + opcode |= val; + + /* Extract the high 4 bits from the offset */ + offset_high = (u16)u32_get_bits(offset, GENMASK(19, 16)); + offset &= (1 << 16) - 1; + + /* Extract the top 4 bits and encode it into the flags field */ + flags = u16_encode_bits(offset_high, + REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK); + options = 0; /* reserved */ + + } else { + flags = 0; /* SKIP_CLEAR flag is always 0 */ + options = u16_encode_bits(clear_option, + REGISTER_WRITE_CLEAR_OPTIONS_FMASK); + } + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->register_write; + + payload->flags = cpu_to_le16(flags); + payload->offset = cpu_to_le16((u16)offset); + payload->value = cpu_to_le32(value); + payload->value_mask = cpu_to_le32(mask); + payload->clear_options = cpu_to_le32(options); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + DMA_NONE, opcode); +} + +/* Skip IP packet processing on the next data transfer on a TX channel */ +static void ipa_cmd_ip_packet_init_add(struct gsi_trans *trans, u8 endpoint_id) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_INIT; + enum dma_data_direction direction = DMA_TO_DEVICE; + struct ipa_cmd_ip_packet_init *payload; + union ipa_cmd_payload *cmd_payload; + dma_addr_t payload_addr; + + /* assert(endpoint_id < + field_max(IPA_PACKET_INIT_DEST_ENDPOINT_FMASK)); */ + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->ip_packet_init; + + payload->dest_endpoint = u8_encode_bits(endpoint_id, + IPA_PACKET_INIT_DEST_ENDPOINT_FMASK); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +/* Use a 32-bit DMA command to zero a block of memory */ +void ipa_cmd_dma_task_32b_addr_add(struct gsi_trans *trans, u16 size, + dma_addr_t addr, bool toward_ipa) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum ipa_cmd_opcode opcode = IPA_CMD_DMA_TASK_32B_ADDR; + struct ipa_cmd_hw_dma_task_32b_addr *payload; + union ipa_cmd_payload *cmd_payload; + enum dma_data_direction direction; + dma_addr_t payload_addr; + u16 flags; + + /* assert(addr <= U32_MAX); */ + addr &= GENMASK_ULL(31, 0); + + /* The opcode encodes the number of DMA operations in the high byte */ + opcode |= u16_encode_bits(1, 
DMA_TASK_32B_ADDR_OPCODE_COUNT_FMASK); + + direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE; + + /* complete: 0 = don't interrupt; eof: 0 = don't assert eot */ + flags = DMA_TASK_32B_ADDR_FLAGS_FLSH_FMASK; + /* lock: 0 = don't lock endpoint; unlock: 0 = don't unlock */ + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->dma_task_32b_addr; + + payload->flags = cpu_to_le16(flags); + payload->size = cpu_to_le16(size); + payload->addr = cpu_to_le32((u32)addr); + payload->packet_size = cpu_to_le16(size); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +/* Use a DMA command to read or write a block of IPA-resident memory */ +void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset, u16 size, + dma_addr_t addr, bool toward_ipa) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM; + struct ipa_cmd_hw_dma_mem_mem *payload; + union ipa_cmd_payload *cmd_payload; + enum dma_data_direction direction; + dma_addr_t payload_addr; + u16 flags; + + /* size and offset must fit in 16 bit fields */ + /* assert(size > 0 && size <= U16_MAX); */ + /* assert(offset <= U16_MAX && ipa->mem_offset <= U16_MAX - offset); */ + + offset += ipa->mem_offset; + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->dma_shared_mem; + + /* payload->clear_after_read was reserved prior to IPA v4.0. It's + * never needed for current code, so it's 0 regardless of version. + */ + payload->size = cpu_to_le16(size); + payload->local_addr = cpu_to_le16(offset); + /* payload->flags: + * direction: 0 = write to IPA, 1 read from IPA + * Starting at v4.0 these are reserved; either way, all zero: + * pipeline clear: 0 = wait for pipeline clear (don't skip) + * clear_options: 0 = pipeline_clear_hps + * Instead, for v4.0+ these are encoded in the opcode. But again + * since both values are 0 we won't bother OR'ing them in. + */ + flags = toward_ipa ? 0 : DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK; + payload->flags = cpu_to_le16(flags); + payload->system_addr = cpu_to_le64(addr); + + direction = toward_ipa ? 
DMA_TO_DEVICE : DMA_FROM_DEVICE; + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +static void ipa_cmd_ip_tag_status_add(struct gsi_trans *trans, u64 tag) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_TAG_STATUS; + enum dma_data_direction direction = DMA_TO_DEVICE; + struct ipa_cmd_ip_packet_tag_status *payload; + union ipa_cmd_payload *cmd_payload; + dma_addr_t payload_addr; + + /* assert(tag <= field_max(IP_PACKET_TAG_STATUS_TAG_FMASK)); */ + + cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + payload = &cmd_payload->ip_packet_tag_status; + + payload->tag = u64_encode_bits(tag, IP_PACKET_TAG_STATUS_TAG_FMASK); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +/* Issue a small command TX data transfer */ +static void ipa_cmd_transfer_add(struct gsi_trans *trans, u16 size) +{ + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + enum dma_data_direction direction = DMA_TO_DEVICE; + enum ipa_cmd_opcode opcode = IPA_CMD_NONE; + union ipa_cmd_payload *payload; + dma_addr_t payload_addr; + + /* assert(size <= sizeof(*payload)); */ + + /* Just transfer a zero-filled payload structure */ + payload = ipa_cmd_payload_alloc(ipa, &payload_addr); + + gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr, + direction, opcode); +} + +void ipa_cmd_tag_process_add(struct gsi_trans *trans) +{ + ipa_cmd_register_write_add(trans, 0, 0, 0, true); +#if 1 + /* Reference these functions to avoid a compile error */ + (void)ipa_cmd_ip_packet_init_add; + (void)ipa_cmd_ip_tag_status_add; + (void) ipa_cmd_transfer_add; +#else + struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi); + struct gsi_endpoint *endpoint; + + endpoint = ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]; + ipa_cmd_ip_packet_init_add(trans, endpoint->endpoint_id); + + ipa_cmd_ip_tag_status_add(trans, 0xcba987654321); + + ipa_cmd_transfer_add(trans, 4); +#endif +} + +/* Returns the number of commands required for the tag process */ +u32 ipa_cmd_tag_process_count(void) +{ + return 4; +} + +static struct ipa_cmd_info * +ipa_cmd_info_alloc(struct ipa_endpoint *endpoint, u32 tre_count) +{ + struct gsi_channel *channel; + + channel = &endpoint->ipa->gsi.channel[endpoint->channel_id]; + + return gsi_trans_pool_alloc(&channel->trans_info.info_pool, tre_count); +} + +/* Allocate a transaction for the command TX endpoint */ +struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count) +{ + struct ipa_endpoint *endpoint; + struct gsi_trans *trans; + + endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]; + + trans = gsi_channel_trans_alloc(&ipa->gsi, endpoint->channel_id, + tre_count, DMA_NONE); + if (trans) + trans->info = ipa_cmd_info_alloc(endpoint, tre_count); + + return trans; +} diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h new file mode 100644 index 000000000000..4917525b3a47 --- /dev/null +++ b/drivers/net/ipa/ipa_cmd.h @@ -0,0 +1,195 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ +#ifndef _IPA_CMD_H_ +#define _IPA_CMD_H_ + +#include +#include + +struct sk_buff; +struct scatterlist; + +struct ipa; +struct ipa_mem; +struct gsi_trans; +struct gsi_channel; + +/** + * enum ipa_cmd_opcode: IPA immediate commands + * + * All immediate commands are issued using the AP command TX endpoint. 
+ * The numeric values here are the opcodes for IPA v3.5.1 hardware. + * + * IPA_CMD_NONE is a special (invalid) value that's used to indicate + * a request is *not* an immediate command. + */ +enum ipa_cmd_opcode { + IPA_CMD_NONE = 0, + IPA_CMD_IP_V4_FILTER_INIT = 3, + IPA_CMD_IP_V6_FILTER_INIT = 4, + IPA_CMD_IP_V4_ROUTING_INIT = 7, + IPA_CMD_IP_V6_ROUTING_INIT = 8, + IPA_CMD_HDR_INIT_LOCAL = 9, + IPA_CMD_REGISTER_WRITE = 12, + IPA_CMD_IP_PACKET_INIT = 16, + IPA_CMD_DMA_TASK_32B_ADDR = 17, + IPA_CMD_DMA_SHARED_MEM = 19, + IPA_CMD_IP_PACKET_TAG_STATUS = 20, +}; + +/** + * struct ipa_cmd_info - information needed for an IPA immediate command + * + * @opcode: The command opcode. + * @direction: Direction of data transfer for DMA commands + */ +struct ipa_cmd_info { + enum ipa_cmd_opcode opcode; + enum dma_data_direction direction; +}; + + +#ifdef IPA_VALIDATE + +/** + * ipa_cmd_table_valid() - Validate a memory region holding a table + * @ipa: - IPA pointer + * @mem: - IPA memory region descriptor + * @route: - Whether the region holds a route or filter table + * @ipv6: - Whether the table is for IPv6 or IPv4 + * @hashed: - Whether the table is hashed or non-hashed + * + * @Return: true if region is valid, false otherwise + */ +bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem, + bool route, bool ipv6, bool hashed); + +/** + * ipa_cmd_data_valid() - Validate command-realted configuration is valid + * @ipa: - IPA pointer + * + * @Return: true if assumptions required for command are valid + */ +bool ipa_cmd_data_valid(struct ipa *ipa); + +#else /* !IPA_VALIDATE */ + +static inline bool ipa_cmd_table_valid(struct ipa *ipa, + const struct ipa_mem *mem, bool route, + bool ipv6, bool hashed) +{ + return true; +} + +static inline bool ipa_cmd_data_valid(struct ipa *ipa) +{ + return true; +} + +#endif /* !IPA_VALIDATE */ + +/** + * ipa_cmd_pool_init() - initialize command channel pools + * @channel: AP->IPA command TX GSI channel pointer + * @tre_count: Number of pool elements to allocate + * + * @Return: 0 if successful, or a negative error code + */ +int ipa_cmd_pool_init(struct gsi_channel *gsi_channel, u32 tre_count); + +/** + * ipa_cmd_pool_exit() - Inverse of ipa_cmd_pool_init() + * @channel: AP->IPA command TX GSI channel pointer + */ +void ipa_cmd_pool_exit(struct gsi_channel *channel); + +/** + * ipa_cmd_table_init_add() - Add table init command to a transaction + * @trans: GSI transaction + * @opcode: IPA immediate command opcode + * @size: Size of non-hashed routing table memory + * @offset: Offset in IPA shared memory of non-hashed routing table memory + * @addr: DMA address of non-hashed table data to write + * @hash_size: Size of hashed routing table memory + * @hash_offset: Offset in IPA shared memory of hashed routing table memory + * @hash_addr: DMA address of hashed table data to write + * + * If hash_size is 0, hash_offset and hash_addr are ignored. + */ +void ipa_cmd_table_init_add(struct gsi_trans *trans, enum ipa_cmd_opcode opcode, + u16 size, u32 offset, dma_addr_t addr, + u16 hash_size, u32 hash_offset, + dma_addr_t hash_addr); + +/** + * ipa_cmd_hdr_init_local_add() - Add a header init command to a transaction + * @ipa: IPA structure + * @offset: Offset of header memory in IPA local space + * @size: Size of header memory + * @addr: DMA address of buffer to be written from + * + * Defines and fills the location in IPA memory to use for headers. 
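+ *
+ * The buffer at @addr is copied by the IPA hardware into the local
+ * memory region described by @offset and @size.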
+ */ +void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size, + dma_addr_t addr); + +/** + * ipa_cmd_register_write_add() - Add a register write command to a transaction + * @trans: GSI transaction + * @offset: Offset of register to be written + * @value: Value to be written + * @mask: Mask of bits in register to update with bits from value + * @clear_full: Pipeline clear option; true means full pipeline clear + */ +void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value, + u32 mask, bool clear_full); + +/** + * ipa_cmd_dma_task_32b_addr_add() - Add a 32-bit DMA command to a transaction + * @trans: GSi transaction + * @size: Number of bytes to be memory to be transferred + * @addr: DMA address of buffer to be read into or written from + * @toward_ipa: true means write to IPA memory; false means read + */ +void ipa_cmd_dma_task_32b_addr_add(struct gsi_trans *trans, u16 size, + dma_addr_t addr, bool toward_ipa); + +/** + * ipa_cmd_dma_shared_mem_add() - Add a DMA memory command to a transaction + * @trans: GSI transaction + * @offset: Offset of IPA memory to be read or written + * @size: Number of bytes of memory to be transferred + * @addr: DMA address of buffer to be read into or written from + * @toward_ipa: true means write to IPA memory; false means read + */ +void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset, + u16 size, dma_addr_t addr, bool toward_ipa); + +/** + * ipa_cmd_tag_process_add() - Add IPA tag process commands to a transaction + * @trans: GSI transaction + */ +void ipa_cmd_tag_process_add(struct gsi_trans *trans); + +/** + * ipa_cmd_tag_process_add_count() - Number of commands in a tag process + * + * @Return: The number of elements to allocate in a transaction + * to hold tag process commands + */ +u32 ipa_cmd_tag_process_count(void); + +/** + * ipa_cmd_trans_alloc() - Allocate a transaction for the command TX endpoint + * @ipa: IPA pointer + * @tre_count: Number of elements in the transaction + * + * @Return: A GSI transaction structure, or a null pointer if all + * available transactions are in use + */ +struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count); + +#endif /* _IPA_CMD_H_ */ From patchwork Fri Feb 28 22:42:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 190178 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D8DC5C3F2CD for ; Fri, 28 Feb 2020 22:43:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 74D64246AC for ; Fri, 28 Feb 2020 22:43:10 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="Ta/YO6T1" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727228AbgB1WnJ (ORCPT ); Fri, 28 Feb 2020 17:43:09 -0500 Received: from mail-yw1-f66.google.com ([209.85.161.66]:42447 "EHLO mail-yw1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with 
ESMTP id S1727247AbgB1Wmm (ORCPT ); Fri, 28 Feb 2020 17:42:42 -0500 Received: by mail-yw1-f66.google.com with SMTP id n127so4916478ywd.9 for ; Fri, 28 Feb 2020 14:42:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=LjIqQIN/2szfQ75PSKeiupXgUSHFFM6xFTZ/Ss1UYRs=; b=Ta/YO6T1NPlZvYv1cYbv2Nr4GNxZVpu6+76Xeyv9KrDQAt5BKMYdy7kH9JjS6fp9UD Xftzafzy9mrwwN4AeYn88X/zSuZkHzkSO34b16AjRa9kSs0LEQ/FLvcmMqSqxQu4fxgZ RLTwlcdbcFuPrlGKCqDABIDzv6HMHmS0q3qWvDdkPQuJ5RfhvbhZQgDdwFZarbv6CJbI MlFHK8WfkN+F5n1nzm6M467zfYPYzXIm9hxicbeKBXnMeKZnzIm3onJk6BwfSTBcNqeX zmkC6qZncyWcUtp5/BqCGN5iKJiII4a4q4OVLpMD+vYPdR2uZw920/uhPmXGpwM+h8nB oCNw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=LjIqQIN/2szfQ75PSKeiupXgUSHFFM6xFTZ/Ss1UYRs=; b=NRLSxkd9sH9OZBYYHB/tzbRMdG8xVO265LVA/suLEbKTu1snLBG1kNEVptF9RYZi0C PeLgUGNVYhq201C4V0+cdfZ7w7iNPfqDIGqI1k8eHLQM1HxY7QA+GsmdTU0QWRzbr/oc fSTyBV4cq9mdiciK/V2i7u3po/5dRRTz1Ec6HBDX+FhxCTyV61mhHcxB2R4slX6wNaT1 vQtm7J+S1HtGHnhHxc0ijFqy7+jKHOn0wVDEcgTlJi3efH3qCxY0ud+ZQCwj+mxqHp4T QDk+GgybgOjjx5nSPKS1sj8Avnhdunf51AcoMjiI4Y0eIu6ComkwDaW/jmoIX8XCSC2a wJQA== X-Gm-Message-State: APjAAAUn0RfCKh4erVCRd1XiLyRr8nnbDGwAK5Rupo6OUcKG4syOR8ER 3rV4HyX2Pa9jTyzXQ9qO+gI4rA== X-Google-Smtp-Source: APXvYqxFLchq6m0IAMU/2eOgIYeHfnAji6ya0cl/yJkFBw20vrMMTcxXcQ16AuCLqbmuWNNEOECsNA== X-Received: by 2002:a81:4b42:: with SMTP id y63mr7079567ywa.502.1582929759643; Fri, 28 Feb 2020 14:42:39 -0800 (PST) Received: from localhost.localdomain (c-73-185-129-58.hsd1.mn.comcast.net. [73.185.129.58]) by smtp.gmail.com with ESMTPSA id d188sm4637830ywe.50.2020.02.28.14.42.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 28 Feb 2020 14:42:39 -0800 (PST) From: Alex Elder To: Arnd Bergmann , David Miller Cc: Bjorn Andersson , Andy Gross , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 14/17] soc: qcom: ipa: AP/modem communications Date: Fri, 28 Feb 2020 16:42:01 -0600 Message-Id: <20200228224204.17746-15-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200228224204.17746-1-elder@linaro.org> References: <20200228224204.17746-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org This patch implements two forms of out-of-band communication between the AP and modem. - QMI is a mechanism that allows clients running on the AP interact with services running on the modem (and vice-versa). The AP IPA driver uses QMI to communicate with the corresponding IPA driver resident on the modem, to agree on parameters used with the IPA hardware and to ensure both sides are ready before entering operational mode. - SMP2P is a more primitive mechanism available for the modem and AP to communicate with each other. It provides a means for either the AP or modem to interrupt the other, and furthermore, to provide 32 bits worth of information. 
The IPA driver uses SMP2P to tell the modem what the state of the IPA clock was in the event of a crash. This allows the modem to safely access the IPA hardware (or avoid doing so) when a crash occurs, for example, to access information within the IPA hardware. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_qmi.c | 538 +++++++++++++++++++++++++++ drivers/net/ipa/ipa_qmi.h | 41 +++ drivers/net/ipa/ipa_qmi_msg.c | 663 ++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_qmi_msg.h | 252 +++++++++++++ drivers/net/ipa/ipa_smp2p.c | 335 +++++++++++++++++ drivers/net/ipa/ipa_smp2p.h | 48 +++ 6 files changed, 1877 insertions(+) create mode 100644 drivers/net/ipa/ipa_qmi.c create mode 100644 drivers/net/ipa/ipa_qmi.h create mode 100644 drivers/net/ipa/ipa_qmi_msg.c create mode 100644 drivers/net/ipa/ipa_qmi_msg.h create mode 100644 drivers/net/ipa/ipa_smp2p.c create mode 100644 drivers/net/ipa/ipa_smp2p.h diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c new file mode 100644 index 000000000000..5090f0f923ad --- /dev/null +++ b/drivers/net/ipa/ipa_qmi.c @@ -0,0 +1,538 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ + +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_endpoint.h" +#include "ipa_mem.h" +#include "ipa_table.h" +#include "ipa_modem.h" +#include "ipa_qmi_msg.h" + +/** + * DOC: AP/Modem QMI Handshake + * + * The AP and modem perform a "handshake" at initialization time to ensure + * both sides know when everything is ready to begin operating. The AP + * driver (this code) uses two QMI handles (endpoints) for this; a client + * using a service on the modem, and server to service modem requests (and + * to supply an indication message from the AP). Once the handshake is + * complete, the AP and modem may begin IPA operation. This occurs + * only when the AP IPA driver, modem IPA driver, and IPA microcontroller + * are ready. + * + * The QMI service on the modem expects to receive an INIT_DRIVER request from + * the AP, which contains parameters used by the modem during initialization. + * The AP sends this request as soon as it is knows the modem side service + * is available. The modem responds to this request, and if this response + * contains a success result, the AP knows the modem IPA driver is ready. + * + * The modem is responsible for loading firmware on the IPA microcontroller. + * This occurs only during the initial modem boot. The modem sends a + * separate DRIVER_INIT_COMPLETE request to the AP to report that the + * microcontroller is ready. The AP may assume the microcontroller is + * ready and remain so (even if the modem reboots) once it has received + * and responded to this request. + * + * There is one final exchange involved in the handshake. It is required + * on the initial modem boot, but optional (but in practice does occur) on + * subsequent boots. The modem expects to receive a final INIT_COMPLETE + * indication message from the AP when it is about to begin its normal + * operation. The AP will only send this message after it has received + * and responded to an INDICATION_REGISTER request from the modem. + * + * So in summary: + * - Whenever the AP learns the modem has booted and its IPA QMI service + * is available, it sends an INIT_DRIVER request to the modem. The + * modem supplies a success response when it is ready to operate. 
+ * - On the initial boot, the modem sets up the IPA microcontroller, and + * sends a DRIVER_INIT_COMPLETE request to the AP when this is done. + * - When the modem is ready to receive an INIT_COMPLETE indication from + * the AP, it sends an INDICATION_REGISTER request to the AP. + * - On the initial modem boot, everything is ready when: + * - AP has received a success response from its INIT_DRIVER request + * - AP has responded to a DRIVER_INIT_COMPLETE request + * - AP has responded to an INDICATION_REGISTER request from the modem + * - AP has sent an INIT_COMPLETE indication to the modem + * - On subsequent modem boots, everything is ready when: + * - AP has received a success response from its INIT_DRIVER request + * - AP has responded to a DRIVER_INIT_COMPLETE request + * - The INDICATION_REGISTER request and INIT_COMPLETE indication are + * optional for non-initial modem boots, and have no bearing on the + * determination of when things are "ready" + */ + +#define IPA_HOST_SERVICE_SVC_ID 0x31 +#define IPA_HOST_SVC_VERS 1 +#define IPA_HOST_SERVICE_INS_ID 1 + +#define IPA_MODEM_SERVICE_SVC_ID 0x31 +#define IPA_MODEM_SERVICE_INS_ID 2 +#define IPA_MODEM_SVC_VERS 1 + +#define QMI_INIT_DRIVER_TIMEOUT 60000 /* A minute in milliseconds */ + +/* Send an INIT_COMPLETE indication message to the modem */ +static void ipa_server_init_complete(struct ipa_qmi *ipa_qmi) +{ + struct ipa *ipa = container_of(ipa_qmi, struct ipa, qmi); + struct qmi_handle *qmi = &ipa_qmi->server_handle; + struct sockaddr_qrtr *sq = &ipa_qmi->modem_sq; + struct ipa_init_complete_ind ind = { }; + int ret; + + ind.status.result = QMI_RESULT_SUCCESS_V01; + ind.status.error = QMI_ERR_NONE_V01; + + ret = qmi_send_indication(qmi, sq, IPA_QMI_INIT_COMPLETE, + IPA_QMI_INIT_COMPLETE_IND_SZ, + ipa_init_complete_ind_ei, &ind); + if (ret) + dev_err(&ipa->pdev->dev, + "error %d sending init complete indication\n", ret); + else + ipa_qmi->indication_sent = true; +} + +/* If requested (and not already sent) send the INIT_COMPLETE indication */ +static void ipa_qmi_indication(struct ipa_qmi *ipa_qmi) +{ + if (!ipa_qmi->indication_requested) + return; + + if (ipa_qmi->indication_sent) + return; + + ipa_server_init_complete(ipa_qmi); +} + +/* Determine whether everything is ready to start normal operation. + * We know everything (else) is ready when we know the IPA driver on + * the modem is ready, and the microcontroller is ready. + * + * When the modem boots (or reboots), the handshake sequence starts + * with the AP sending the modem an INIT_DRIVER request. Within + * that request, the uc_loaded flag will be zero (false) for an + * initial boot, non-zero (true) for a subsequent (SSR) boot. + */ +static void ipa_qmi_ready(struct ipa_qmi *ipa_qmi) +{ + struct ipa *ipa = container_of(ipa_qmi, struct ipa, qmi); + int ret; + + /* We aren't ready until the modem and microcontroller are */ + if (!ipa_qmi->modem_ready || !ipa_qmi->uc_ready) + return; + + /* Send the indication message if it was requested */ + ipa_qmi_indication(ipa_qmi); + + /* The initial boot requires us to send the indication. */ + if (ipa_qmi->initial_boot) { + if (!ipa_qmi->indication_sent) + return; + + /* The initial modem boot completed successfully */ + ipa_qmi->initial_boot = false; + } + + /* We're ready. Start up normal operation */ + ipa = container_of(ipa_qmi, struct ipa, qmi); + ret = ipa_modem_start(ipa); + if (ret) + dev_err(&ipa->pdev->dev, "error %d starting modem\n", ret); +} + +/* All QMI clients from the modem node are gone (modem shut down or crashed). 
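+ * The handshake state that must be re-established on the next modem boot
+ * is reset here; uc_ready and initial_boot deliberately survive a modem
+ * reboot (the microcontroller only needs to be loaded once).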
*/ +static void ipa_server_bye(struct qmi_handle *qmi, unsigned int node) +{ + struct ipa_qmi *ipa_qmi; + + ipa_qmi = container_of(qmi, struct ipa_qmi, server_handle); + + /* The modem client and server go away at the same time */ + memset(&ipa_qmi->modem_sq, 0, sizeof(ipa_qmi->modem_sq)); + + /* initial_boot doesn't change when modem reboots */ + /* uc_ready doesn't change when modem reboots */ + ipa_qmi->modem_ready = false; + ipa_qmi->indication_requested = false; + ipa_qmi->indication_sent = false; +} + +static struct qmi_ops ipa_server_ops = { + .bye = ipa_server_bye, +}; + +/* Callback function to handle an INDICATION_REGISTER request message from the + * modem. This informs the AP that the modem is now ready to receive the + * INIT_COMPLETE indication message. + */ +static void ipa_server_indication_register(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq, + struct qmi_txn *txn, + const void *decoded) +{ + struct ipa_indication_register_rsp rsp = { }; + struct ipa_qmi *ipa_qmi; + struct ipa *ipa; + int ret; + + ipa_qmi = container_of(qmi, struct ipa_qmi, server_handle); + ipa = container_of(ipa_qmi, struct ipa, qmi); + + rsp.rsp.result = QMI_RESULT_SUCCESS_V01; + rsp.rsp.error = QMI_ERR_NONE_V01; + + ret = qmi_send_response(qmi, sq, txn, IPA_QMI_INDICATION_REGISTER, + IPA_QMI_INDICATION_REGISTER_RSP_SZ, + ipa_indication_register_rsp_ei, &rsp); + if (!ret) { + ipa_qmi->indication_requested = true; + ipa_qmi_ready(ipa_qmi); /* We might be ready now */ + } else { + dev_err(&ipa->pdev->dev, + "error %d sending register indication response\n", ret); + } +} + +/* Respond to a DRIVER_INIT_COMPLETE request message from the modem. */ +static void ipa_server_driver_init_complete(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq, + struct qmi_txn *txn, + const void *decoded) +{ + struct ipa_driver_init_complete_rsp rsp = { }; + struct ipa_qmi *ipa_qmi; + struct ipa *ipa; + int ret; + + ipa_qmi = container_of(qmi, struct ipa_qmi, server_handle); + ipa = container_of(ipa_qmi, struct ipa, qmi); + + rsp.rsp.result = QMI_RESULT_SUCCESS_V01; + rsp.rsp.error = QMI_ERR_NONE_V01; + + ret = qmi_send_response(qmi, sq, txn, IPA_QMI_DRIVER_INIT_COMPLETE, + IPA_QMI_DRIVER_INIT_COMPLETE_RSP_SZ, + ipa_driver_init_complete_rsp_ei, &rsp); + if (!ret) { + ipa_qmi->uc_ready = true; + ipa_qmi_ready(ipa_qmi); /* We might be ready now */ + } else { + dev_err(&ipa->pdev->dev, + "error %d sending init complete response\n", ret); + } +} + +/* The server handles two request message types sent by the modem. */ +static struct qmi_msg_handler ipa_server_msg_handlers[] = { + { + .type = QMI_REQUEST, + .msg_id = IPA_QMI_INDICATION_REGISTER, + .ei = ipa_indication_register_req_ei, + .decoded_size = IPA_QMI_INDICATION_REGISTER_REQ_SZ, + .fn = ipa_server_indication_register, + }, + { + .type = QMI_REQUEST, + .msg_id = IPA_QMI_DRIVER_INIT_COMPLETE, + .ei = ipa_driver_init_complete_req_ei, + .decoded_size = IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ, + .fn = ipa_server_driver_init_complete, + }, +}; + +/* Handle an INIT_DRIVER response message from the modem. */ +static void ipa_client_init_driver(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq, + struct qmi_txn *txn, const void *decoded) +{ + txn->result = 0; /* IPA_QMI_INIT_DRIVER request was successful */ + complete(&txn->completion); +} + +/* The client handles one response message type sent by the modem. 
*/ +static struct qmi_msg_handler ipa_client_msg_handlers[] = { + { + .type = QMI_RESPONSE, + .msg_id = IPA_QMI_INIT_DRIVER, + .ei = ipa_init_modem_driver_rsp_ei, + .decoded_size = IPA_QMI_INIT_DRIVER_RSP_SZ, + .fn = ipa_client_init_driver, + }, +}; + +/* Return a pointer to an init modem driver request structure, which contains + * configuration parameters for the modem. The modem may be started multiple + * times, but generally these parameters don't change so we can reuse the + * request structure once it's initialized. The only exception is the + * skip_uc_load field, which will be set only after the microcontroller has + * reported it has completed its initialization. + */ +static const struct ipa_init_modem_driver_req * +init_modem_driver_req(struct ipa_qmi *ipa_qmi) +{ + struct ipa *ipa = container_of(ipa_qmi, struct ipa, qmi); + static struct ipa_init_modem_driver_req req; + const struct ipa_mem *mem; + + /* The microcontroller is initialized on the first boot */ + req.skip_uc_load_valid = 1; + req.skip_uc_load = ipa->uc_loaded ? 1 : 0; + + /* We only have to initialize most of it once */ + if (req.platform_type_valid) + return &req; + + req.platform_type_valid = 1; + req.platform_type = IPA_QMI_PLATFORM_TYPE_MSM_ANDROID; + + mem = &ipa->mem[IPA_MEM_MODEM_HEADER]; + if (mem->size) { + req.hdr_tbl_info_valid = 1; + req.hdr_tbl_info.start = ipa->mem_offset + mem->offset; + req.hdr_tbl_info.end = req.hdr_tbl_info.start + mem->size - 1; + } + + mem = &ipa->mem[IPA_MEM_V4_ROUTE]; + req.v4_route_tbl_info_valid = 1; + req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset; + req.v4_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE; + + mem = &ipa->mem[IPA_MEM_V6_ROUTE]; + req.v6_route_tbl_info_valid = 1; + req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset; + req.v6_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE; + + mem = &ipa->mem[IPA_MEM_V4_FILTER]; + req.v4_filter_tbl_start_valid = 1; + req.v4_filter_tbl_start = ipa->mem_offset + mem->offset; + + mem = &ipa->mem[IPA_MEM_V6_FILTER]; + req.v6_filter_tbl_start_valid = 1; + req.v6_filter_tbl_start = ipa->mem_offset + mem->offset; + + mem = &ipa->mem[IPA_MEM_MODEM]; + if (mem->size) { + req.modem_mem_info_valid = 1; + req.modem_mem_info.start = ipa->mem_offset + mem->offset; + req.modem_mem_info.size = mem->size; + } + + req.ctrl_comm_dest_end_pt_valid = 1; + req.ctrl_comm_dest_end_pt = + ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]->endpoint_id; + + /* skip_uc_load_valid and skip_uc_load are set above */ + + mem = &ipa->mem[IPA_MEM_MODEM_PROC_CTX]; + if (mem->size) { + req.hdr_proc_ctx_tbl_info_valid = 1; + req.hdr_proc_ctx_tbl_info.start = + ipa->mem_offset + mem->offset; + req.hdr_proc_ctx_tbl_info.end = + req.hdr_proc_ctx_tbl_info.start + mem->size - 1; + } + + /* Nothing to report for the compression table (zip_tbl_info) */ + + mem = &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]; + if (mem->size) { + req.v4_hash_route_tbl_info_valid = 1; + req.v4_hash_route_tbl_info.start = + ipa->mem_offset + mem->offset; + req.v4_hash_route_tbl_info.count = + mem->size / IPA_TABLE_ENTRY_SIZE; + } + + mem = &ipa->mem[IPA_MEM_V6_ROUTE_HASHED]; + if (mem->size) { + req.v6_hash_route_tbl_info_valid = 1; + req.v6_hash_route_tbl_info.start = + ipa->mem_offset + mem->offset; + req.v6_hash_route_tbl_info.count = + mem->size / IPA_TABLE_ENTRY_SIZE; + } + + mem = &ipa->mem[IPA_MEM_V4_FILTER_HASHED]; + if (mem->size) { + req.v4_hash_filter_tbl_start_valid = 1; + req.v4_hash_filter_tbl_start = ipa->mem_offset + mem->offset; + } + + mem = 
&ipa->mem[IPA_MEM_V6_FILTER_HASHED]; + if (mem->size) { + req.v6_hash_filter_tbl_start_valid = 1; + req.v6_hash_filter_tbl_start = ipa->mem_offset + mem->offset; + } + + /* None of the stats fields are valid (IPA v4.0 and above) */ + + if (ipa->version != IPA_VERSION_3_5_1) { + mem = &ipa->mem[IPA_MEM_STATS_QUOTA]; + if (mem->size) { + req.hw_stats_quota_base_addr_valid = 1; + req.hw_stats_quota_base_addr = + ipa->mem_offset + mem->offset; + req.hw_stats_quota_size_valid = 1; + req.hw_stats_quota_size = ipa->mem_offset + mem->size; + } + + mem = &ipa->mem[IPA_MEM_STATS_DROP]; + if (mem->size) { + req.hw_stats_drop_base_addr_valid = 1; + req.hw_stats_drop_base_addr = + ipa->mem_offset + mem->offset; + req.hw_stats_drop_size_valid = 1; + req.hw_stats_drop_size = ipa->mem_offset + mem->size; + } + } + + return &req; +} + +/* Send an INIT_DRIVER request to the modem, and wait for it to complete. */ +static void ipa_client_init_driver_work(struct work_struct *work) +{ + unsigned long timeout = msecs_to_jiffies(QMI_INIT_DRIVER_TIMEOUT); + const struct ipa_init_modem_driver_req *req; + struct ipa_qmi *ipa_qmi; + struct qmi_handle *qmi; + struct qmi_txn txn; + struct device *dev; + struct ipa *ipa; + int ret; + + ipa_qmi = container_of(work, struct ipa_qmi, init_driver_work); + qmi = &ipa_qmi->client_handle, + + ipa = container_of(ipa_qmi, struct ipa, qmi); + dev = &ipa->pdev->dev; + + ret = qmi_txn_init(qmi, &txn, NULL, NULL); + if (ret < 0) { + dev_err(dev, "error %d preparing init driver request\n", ret); + return; + } + + /* Send the request, and if successful wait for its response */ + req = init_modem_driver_req(ipa_qmi); + ret = qmi_send_request(qmi, &ipa_qmi->modem_sq, &txn, + IPA_QMI_INIT_DRIVER, IPA_QMI_INIT_DRIVER_REQ_SZ, + ipa_init_modem_driver_req_ei, req); + if (ret) + dev_err(dev, "error %d sending init driver request\n", ret); + else if ((ret = qmi_txn_wait(&txn, timeout))) + dev_err(dev, "error %d awaiting init driver response\n", ret); + + if (!ret) { + ipa_qmi->modem_ready = true; + ipa_qmi_ready(ipa_qmi); /* We might be ready now */ + } else { + /* If any error occurs we need to cancel the transaction */ + qmi_txn_cancel(&txn); + } +} + +/* The modem server is now available. We will send an INIT_DRIVER request + * to the modem, but can't wait for it to complete in this callback thread. + * Schedule a worker on the global workqueue to do that for us. + */ +static int +ipa_client_new_server(struct qmi_handle *qmi, struct qmi_service *svc) +{ + struct ipa_qmi *ipa_qmi; + + ipa_qmi = container_of(qmi, struct ipa_qmi, client_handle); + + ipa_qmi->modem_sq.sq_family = AF_QIPCRTR; + ipa_qmi->modem_sq.sq_node = svc->node; + ipa_qmi->modem_sq.sq_port = svc->port; + + schedule_work(&ipa_qmi->init_driver_work); + + return 0; +} + +static struct qmi_ops ipa_client_ops = { + .new_server = ipa_client_new_server, +}; + +/* This is called by ipa_setup(). We can be informed via remoteproc that + * the modem has shut down, in which case this function will be called + * again to prepare for it coming back up again. + */ +int ipa_qmi_setup(struct ipa *ipa) +{ + struct ipa_qmi *ipa_qmi = &ipa->qmi; + int ret; + + ipa_qmi->initial_boot = true; + + /* The server handle is used to handle the DRIVER_INIT_COMPLETE + * request on the first modem boot. It also receives the + * INDICATION_REGISTER request on the first boot and (optionally) + * subsequent boots. The INIT_COMPLETE indication message is + * sent over the server handle if requested. 
+ */ + ret = qmi_handle_init(&ipa_qmi->server_handle, + IPA_QMI_SERVER_MAX_RCV_SZ, &ipa_server_ops, + ipa_server_msg_handlers); + if (ret) + return ret; + + ret = qmi_add_server(&ipa_qmi->server_handle, IPA_HOST_SERVICE_SVC_ID, + IPA_HOST_SVC_VERS, IPA_HOST_SERVICE_INS_ID); + if (ret) + goto err_server_handle_release; + + /* The client handle is only used for sending an INIT_DRIVER request + * to the modem, and receiving its response message. + */ + ret = qmi_handle_init(&ipa_qmi->client_handle, + IPA_QMI_CLIENT_MAX_RCV_SZ, &ipa_client_ops, + ipa_client_msg_handlers); + if (ret) + goto err_server_handle_release; + + /* We need this ready before the service lookup is added */ + INIT_WORK(&ipa_qmi->init_driver_work, ipa_client_init_driver_work); + + ret = qmi_add_lookup(&ipa_qmi->client_handle, IPA_MODEM_SERVICE_SVC_ID, + IPA_MODEM_SVC_VERS, IPA_MODEM_SERVICE_INS_ID); + if (ret) + goto err_client_handle_release; + + return 0; + +err_client_handle_release: + /* Releasing the handle also removes registered lookups */ + qmi_handle_release(&ipa_qmi->client_handle); + memset(&ipa_qmi->client_handle, 0, sizeof(ipa_qmi->client_handle)); +err_server_handle_release: + /* Releasing the handle also removes registered services */ + qmi_handle_release(&ipa_qmi->server_handle); + memset(&ipa_qmi->server_handle, 0, sizeof(ipa_qmi->server_handle)); + + return ret; +} + +void ipa_qmi_teardown(struct ipa *ipa) +{ + cancel_work_sync(&ipa->qmi.init_driver_work); + + qmi_handle_release(&ipa->qmi.client_handle); + memset(&ipa->qmi.client_handle, 0, sizeof(ipa->qmi.client_handle)); + + qmi_handle_release(&ipa->qmi.server_handle); + memset(&ipa->qmi.server_handle, 0, sizeof(ipa->qmi.server_handle)); +} diff --git a/drivers/net/ipa/ipa_qmi.h b/drivers/net/ipa/ipa_qmi.h new file mode 100644 index 000000000000..3993687593d0 --- /dev/null +++ b/drivers/net/ipa/ipa_qmi.h @@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _IPA_QMI_H_ +#define _IPA_QMI_H_ + +#include +#include + +struct ipa; + +/** + * struct ipa_qmi - QMI state associated with an IPA + * @client_handle - used to send an QMI requests to the modem + * @server_handle - used to handle QMI requests from the modem + * @initialized - whether QMI initialization has completed + * @indication_register_received - tracks modem request receipt + * @init_driver_response_received - tracks modem response receipt + */ +struct ipa_qmi { + struct qmi_handle client_handle; + struct qmi_handle server_handle; + + /* Information used for the client handle */ + struct sockaddr_qrtr modem_sq; + struct work_struct init_driver_work; + + /* Flags used in negotiating readiness */ + bool initial_boot; + bool uc_ready; + bool modem_ready; + bool indication_requested; + bool indication_sent; +}; + +int ipa_qmi_setup(struct ipa *ipa); +void ipa_qmi_teardown(struct ipa *ipa); + +#endif /* !_IPA_QMI_H_ */ diff --git a/drivers/net/ipa/ipa_qmi_msg.c b/drivers/net/ipa/ipa_qmi_msg.c new file mode 100644 index 000000000000..03a1d0e55964 --- /dev/null +++ b/drivers/net/ipa/ipa_qmi_msg.c @@ -0,0 +1,663 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. 
+ */ +#include +#include + +#include "ipa_qmi_msg.h" + +/* QMI message structure definition for struct ipa_indication_register_req */ +struct qmi_elem_info ipa_indication_register_req_ei[] = { + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_req, + master_driver_init_complete_valid), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_indication_register_req, + master_driver_init_complete_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_req, + master_driver_init_complete), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_indication_register_req, + master_driver_init_complete), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_req, + data_usage_quota_reached_valid), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_indication_register_req, + data_usage_quota_reached_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_req, + data_usage_quota_reached), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_indication_register_req, + data_usage_quota_reached), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_req, + ipa_mhi_ready_ind_valid), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_indication_register_req, + ipa_mhi_ready_ind_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_req, + ipa_mhi_ready_ind), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_indication_register_req, + ipa_mhi_ready_ind), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_indication_register_rsp */ +struct qmi_elem_info ipa_indication_register_rsp_ei[] = { + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_rsp, + rsp), + .tlv_type = 0x02, + .offset = offsetof(struct ipa_indication_register_rsp, + rsp), + .ei_array = qmi_response_type_v01_ei, + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_driver_init_complete_req */ +struct qmi_elem_info ipa_driver_init_complete_req_ei[] = { + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_driver_init_complete_req, + status), + .tlv_type = 0x01, + .offset = offsetof(struct ipa_driver_init_complete_req, + status), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_driver_init_complete_rsp */ +struct qmi_elem_info ipa_driver_init_complete_rsp_ei[] = { + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_driver_init_complete_rsp, + rsp), + .tlv_type = 0x02, + .offset = offsetof(struct ipa_driver_init_complete_rsp, + rsp), + .ei_array = qmi_response_type_v01_ei, + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_init_complete_ind */ +struct qmi_elem_info ipa_init_complete_ind_ei[] = { + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_complete_ind, + status), + .tlv_type = 0x02, + .offset = offsetof(struct ipa_init_complete_ind, + status), + .ei_array = qmi_response_type_v01_ei, + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_mem_bounds */ +struct qmi_elem_info ipa_mem_bounds_ei[] = { + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_bounds, start), + .offset = offsetof(struct ipa_mem_bounds, start), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_bounds, end), + .offset = offsetof(struct ipa_mem_bounds, end), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_mem_array */ +struct qmi_elem_info ipa_mem_array_ei[] = { + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_array, start), + .offset = offsetof(struct ipa_mem_array, start), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_array, count), + .offset = offsetof(struct ipa_mem_array, count), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_mem_range */ +struct qmi_elem_info ipa_mem_range_ei[] = { + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_range, start), + .offset = offsetof(struct ipa_mem_range, start), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_range, size), + .offset = offsetof(struct ipa_mem_range, size), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_init_modem_driver_req */ +struct qmi_elem_info ipa_init_modem_driver_req_ei[] = { + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + platform_type_valid), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_init_modem_driver_req, + platform_type_valid), + }, + { + .data_type = QMI_SIGNED_4_BYTE_ENUM, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + platform_type), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_init_modem_driver_req, + platform_type), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_tbl_info_valid), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_tbl_info), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_tbl_info), + .ei_array = ipa_mem_bounds_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_route_tbl_info_valid), + .tlv_type = 0x12, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_route_tbl_info), + .tlv_type = 0x12, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_route_tbl_info_valid), + .tlv_type = 0x13, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_route_tbl_info), + .tlv_type = 0x13, + .offset = offsetof(struct ipa_init_modem_driver_req, + 
v6_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_filter_tbl_start_valid), + .tlv_type = 0x14, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_filter_tbl_start), + .tlv_type = 0x14, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_filter_tbl_start_valid), + .tlv_type = 0x15, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_filter_tbl_start), + .tlv_type = 0x15, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + modem_mem_info_valid), + .tlv_type = 0x16, + .offset = offsetof(struct ipa_init_modem_driver_req, + modem_mem_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + modem_mem_info), + .tlv_type = 0x16, + .offset = offsetof(struct ipa_init_modem_driver_req, + modem_mem_info), + .ei_array = ipa_mem_range_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt_valid), + .tlv_type = 0x17, + .offset = offsetof(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt), + .tlv_type = 0x17, + .offset = offsetof(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + skip_uc_load_valid), + .tlv_type = 0x18, + .offset = offsetof(struct ipa_init_modem_driver_req, + skip_uc_load_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + skip_uc_load), + .tlv_type = 0x18, + .offset = offsetof(struct ipa_init_modem_driver_req, + skip_uc_load), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info_valid), + .tlv_type = 0x19, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info), + .tlv_type = 0x19, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info), + .ei_array = ipa_mem_bounds_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + zip_tbl_info_valid), + .tlv_type = 0x1a, + .offset = offsetof(struct ipa_init_modem_driver_req, + zip_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + zip_tbl_info), + .tlv_type = 0x1a, + .offset = offsetof(struct ipa_init_modem_driver_req, + zip_tbl_info), + 
.ei_array = ipa_mem_bounds_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info_valid), + .tlv_type = 0x1b, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info), + .tlv_type = 0x1b, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info_valid), + .tlv_type = 0x1c, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info), + .tlv_type = 0x1c, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start_valid), + .tlv_type = 0x1d, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start), + .tlv_type = 0x1d, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start_valid), + .tlv_type = 0x1e, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start), + .tlv_type = 0x1e, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_quota_base_addr_valid), + .tlv_type = 0x1f, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_quota_base_addr_valid), + }, + { + .data_type = QMI_SIGNED_4_BYTE_ENUM, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_quota_base_addr), + .tlv_type = 0x1f, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_quota_base_addr), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_quota_size_valid), + .tlv_type = 0x1f, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_quota_size_valid), + }, + { + .data_type = QMI_SIGNED_4_BYTE_ENUM, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_quota_size), + .tlv_type = 0x1f, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_quota_size), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hw_stats_drop_size_valid), + .tlv_type = 0x1f, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_drop_size_valid), + }, + { + .data_type = QMI_SIGNED_4_BYTE_ENUM, + .elem_len = 1, + .elem_size = + sizeof_field(struct 
ipa_init_modem_driver_req, + hw_stats_drop_size), + .tlv_type = 0x1f, + .offset = offsetof(struct ipa_init_modem_driver_req, + hw_stats_drop_size), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_init_modem_driver_rsp */ +struct qmi_elem_info ipa_init_modem_driver_rsp_ei[] = { + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + rsp), + .tlv_type = 0x02, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + rsp), + .ei_array = qmi_response_type_v01_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt_valid), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + default_end_pt_valid), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + default_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + default_end_pt), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + default_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending_valid), + .tlv_type = 0x12, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending), + .tlv_type = 0x12, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending), + }, + { + .data_type = QMI_EOTI, + }, +}; diff --git a/drivers/net/ipa/ipa_qmi_msg.h b/drivers/net/ipa/ipa_qmi_msg.h new file mode 100644 index 000000000000..cfac456cea0c --- /dev/null +++ b/drivers/net/ipa/ipa_qmi_msg.h @@ -0,0 +1,252 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2020 Linaro Ltd. + */ +#ifndef _IPA_QMI_MSG_H_ +#define _IPA_QMI_MSG_H_ + +/* === Only "ipa_qmi" and "ipa_qmi_msg.c" should include this file === */ + +#include +#include + +/* Request/response/indication QMI message ids used for IPA. Receiving + * end issues a response for requests; indications require no response. + */ +#define IPA_QMI_INDICATION_REGISTER 0x20 /* modem -> AP request */ +#define IPA_QMI_INIT_DRIVER 0x21 /* AP -> modem request */ +#define IPA_QMI_INIT_COMPLETE 0x22 /* AP -> modem indication */ +#define IPA_QMI_DRIVER_INIT_COMPLETE 0x35 /* modem -> AP request */ + +/* The maximum size required for message types. These sizes include + * the message data, along with type (1 byte) and length (2 byte) + * information for each field. The qmi_send_*() interfaces require + * the message size to be provided. 
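 *
 * (Illustrative note, not part of this patch: the element-info arrays and
 * the _SZ values below are what get handed to the QMI core when a message
 * is sent.  Assuming an ipa_qmi pointer plus local req and txn variables,
 * sending the INIT_DRIVER request would look roughly like:
 *
 *	ret = qmi_send_request(&ipa_qmi->client_handle, &ipa_qmi->modem_sq,
 *			       &txn, IPA_QMI_INIT_DRIVER,
 *			       IPA_QMI_INIT_DRIVER_REQ_SZ,
 *			       ipa_init_modem_driver_req_ei, &req);
 *
 * The size passed in bounds the buffer used to encode the request.)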
+ */ +#define IPA_QMI_INDICATION_REGISTER_REQ_SZ 12 /* -> server handle */ +#define IPA_QMI_INDICATION_REGISTER_RSP_SZ 7 /* <- server handle */ +#define IPA_QMI_INIT_DRIVER_REQ_SZ 162 /* client handle -> */ +#define IPA_QMI_INIT_DRIVER_RSP_SZ 25 /* client handle <- */ +#define IPA_QMI_INIT_COMPLETE_IND_SZ 7 /* <- server handle */ +#define IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ 4 /* -> server handle */ +#define IPA_QMI_DRIVER_INIT_COMPLETE_RSP_SZ 7 /* <- server handle */ + +/* Maximum size of messages we expect the AP to receive (max of above) */ +#define IPA_QMI_SERVER_MAX_RCV_SZ 8 +#define IPA_QMI_CLIENT_MAX_RCV_SZ 25 + +/* Request message for the IPA_QMI_INDICATION_REGISTER request */ +struct ipa_indication_register_req { + u8 master_driver_init_complete_valid; + u8 master_driver_init_complete; + u8 data_usage_quota_reached_valid; + u8 data_usage_quota_reached; + u8 ipa_mhi_ready_ind_valid; + u8 ipa_mhi_ready_ind; +}; + +/* The response to a IPA_QMI_INDICATION_REGISTER request consists only of + * a standard QMI response. + */ +struct ipa_indication_register_rsp { + struct qmi_response_type_v01 rsp; +}; + +/* Request message for the IPA_QMI_DRIVER_INIT_COMPLETE request */ +struct ipa_driver_init_complete_req { + u8 status; +}; + +/* The response to a IPA_QMI_DRIVER_INIT_COMPLETE request consists only + * of a standard QMI response. + */ +struct ipa_driver_init_complete_rsp { + struct qmi_response_type_v01 rsp; +}; + +/* The message for the IPA_QMI_INIT_COMPLETE_IND indication consists + * only of a standard QMI response. + */ +struct ipa_init_complete_ind { + struct qmi_response_type_v01 status; +}; + +/* The AP tells the modem its platform type. We assume Android. */ +enum ipa_platform_type { + IPA_QMI_PLATFORM_TYPE_INVALID = 0, /* Invalid */ + IPA_QMI_PLATFORM_TYPE_TN = 1, /* Data card */ + IPA_QMI_PLATFORM_TYPE_LE = 2, /* Data router */ + IPA_QMI_PLATFORM_TYPE_MSM_ANDROID = 3, /* Android MSM */ + IPA_QMI_PLATFORM_TYPE_MSM_WINDOWS = 4, /* Windows MSM */ + IPA_QMI_PLATFORM_TYPE_MSM_QNX_V01 = 5, /* QNX MSM */ +}; + +/* This defines the start and end offset of a range of memory. Both + * fields are offsets relative to the start of IPA shared memory. + * The end value is the last addressable byte *within* the range. + */ +struct ipa_mem_bounds { + u32 start; + u32 end; +}; + +/* This defines the location and size of an array. The start value + * is an offset relative to the start of IPA shared memory. The + * size of the array is implied by the number of entries (the entry + * size is assumed to be known). + */ +struct ipa_mem_array { + u32 start; + u32 count; +}; + +/* This defines the location and size of a range of memory. The + * start is an offset relative to the start of IPA shared memory. + * This differs from the ipa_mem_bounds structure in that the size + * (in bytes) of the memory region is specified rather than the + * offset of its last byte. + */ +struct ipa_mem_range { + u32 start; + u32 size; +}; + +/* The message for the IPA_QMI_INIT_DRIVER request contains information + * from the AP that affects modem initialization. + */ +struct ipa_init_modem_driver_req { + u8 platform_type_valid; + u32 platform_type; /* enum ipa_platform_type */ + + /* Modem header table information. This defines the IPA shared + * memory in which the modem may insert header table entries. + */ + u8 hdr_tbl_info_valid; + struct ipa_mem_bounds hdr_tbl_info; + + /* Routing table information. These define the location and size of + * non-hashable IPv4 and IPv6 filter tables. 
The start values are + * offsets relative to the start of IPA shared memory. + */ + u8 v4_route_tbl_info_valid; + struct ipa_mem_array v4_route_tbl_info; + u8 v6_route_tbl_info_valid; + struct ipa_mem_array v6_route_tbl_info; + + /* Filter table information. These define the location of the + * non-hashable IPv4 and IPv6 filter tables. The start values are + * offsets relative to the start of IPA shared memory. + */ + u8 v4_filter_tbl_start_valid; + u32 v4_filter_tbl_start; + u8 v6_filter_tbl_start_valid; + u32 v6_filter_tbl_start; + + /* Modem memory information. This defines the location and + * size of memory available for the modem to use. + */ + u8 modem_mem_info_valid; + struct ipa_mem_range modem_mem_info; + + /* This defines the destination endpoint on the AP to which + * the modem driver can send control commands. Must be less + * than ipa_endpoint_max(). + */ + u8 ctrl_comm_dest_end_pt_valid; + u32 ctrl_comm_dest_end_pt; + + /* This defines whether the modem should load the microcontroller + * or not. It is unnecessary to reload it if the modem is being + * restarted. + * + * NOTE: this field is named "is_ssr_bootup" elsewhere. + */ + u8 skip_uc_load_valid; + u8 skip_uc_load; + + /* Processing context memory information. This defines the memory in + * which the modem may insert header processing context table entries. + */ + u8 hdr_proc_ctx_tbl_info_valid; + struct ipa_mem_bounds hdr_proc_ctx_tbl_info; + + /* Compression command memory information. This defines the memory + * in which the modem may insert compression/decompression commands. + */ + u8 zip_tbl_info_valid; + struct ipa_mem_bounds zip_tbl_info; + + /* Routing table information. These define the location and size + * of hashable IPv4 and IPv6 filter tables. The start values are + * offsets relative to the start of IPA shared memory. + */ + u8 v4_hash_route_tbl_info_valid; + struct ipa_mem_array v4_hash_route_tbl_info; + u8 v6_hash_route_tbl_info_valid; + struct ipa_mem_array v6_hash_route_tbl_info; + + /* Filter table information. These define the location and size + * of hashable IPv4 and IPv6 filter tables. The start values are + * offsets relative to the start of IPA shared memory. + */ + u8 v4_hash_filter_tbl_start_valid; + u32 v4_hash_filter_tbl_start; + u8 v6_hash_filter_tbl_start_valid; + u32 v6_hash_filter_tbl_start; + + /* Statistics information. These define the locations of the + * first and last statistics sub-regions. (IPA v4.0 and above) + */ + u8 hw_stats_quota_base_addr_valid; + u32 hw_stats_quota_base_addr; + u8 hw_stats_quota_size_valid; + u32 hw_stats_quota_size; + u8 hw_stats_drop_base_addr_valid; + u32 hw_stats_drop_base_addr; + u8 hw_stats_drop_size_valid; + u32 hw_stats_drop_size; +}; + +/* The response to a IPA_QMI_INIT_DRIVER request begins with a standard + * QMI response, but contains other information as well. Currently we + * simply wait for the the INIT_DRIVER transaction to complete and + * ignore any other data that might be returned. + */ +struct ipa_init_modem_driver_rsp { + struct qmi_response_type_v01 rsp; + + /* This defines the destination endpoint on the modem to which + * the AP driver can send control commands. Must be less than + * ipa_endpoint_max(). + */ + u8 ctrl_comm_dest_end_pt_valid; + u32 ctrl_comm_dest_end_pt; + + /* This defines the default endpoint. The AP driver is not + * required to configure the hardware with this value. Must + * be less than ipa_endpoint_max(). 
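 *
 * (Illustrative note, not part of this patch: as with every optional
 * field in these messages, a consumer is expected to check the _valid
 * flag before using the value, e.g.
 *
 *	if (rsp->default_end_pt_valid)
 *		endpoint_id = rsp->default_end_pt;
 *
 * where rsp and endpoint_id are hypothetical locals.)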
+ */ + u8 default_end_pt_valid; + u32 default_end_pt; + + /* This defines whether a second handshake is required to complete + * initialization. + */ + u8 modem_driver_init_pending_valid; + u8 modem_driver_init_pending; +}; + +/* Message structure definitions defined in "ipa_qmi_msg.c" */ +extern struct qmi_elem_info ipa_indication_register_req_ei[]; +extern struct qmi_elem_info ipa_indication_register_rsp_ei[]; +extern struct qmi_elem_info ipa_driver_init_complete_req_ei[]; +extern struct qmi_elem_info ipa_driver_init_complete_rsp_ei[]; +extern struct qmi_elem_info ipa_init_complete_ind_ei[]; +extern struct qmi_elem_info ipa_mem_bounds_ei[]; +extern struct qmi_elem_info ipa_mem_array_ei[]; +extern struct qmi_elem_info ipa_mem_range_ei[]; +extern struct qmi_elem_info ipa_init_modem_driver_req_ei[]; +extern struct qmi_elem_info ipa_init_modem_driver_rsp_ei[]; + +#endif /* !_IPA_QMI_MSG_H_ */ diff --git a/drivers/net/ipa/ipa_smp2p.c b/drivers/net/ipa/ipa_smp2p.c new file mode 100644 index 000000000000..4d33aa7ebfbb --- /dev/null +++ b/drivers/net/ipa/ipa_smp2p.c @@ -0,0 +1,335 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ + +#include +#include +#include +#include +#include +#include + +#include "ipa_smp2p.h" +#include "ipa.h" +#include "ipa_uc.h" +#include "ipa_clock.h" + +/** + * DOC: IPA SMP2P communication with the modem + * + * SMP2P is a primitive communication mechanism available between the AP and + * the modem. The IPA driver uses this for two purposes: to enable the modem + * to state that the GSI hardware is ready to use; and to communicate the + * state of the IPA clock in the event of a crash. + * + * GSI needs to have early initialization completed before it can be used. + * This initialization is done either by Trust Zone or by the modem. In the + * latter case, the modem uses an SMP2P interrupt to tell the AP IPA driver + * when the GSI is ready to use. + * + * The modem is also able to inquire about the current state of the IPA + * clock by trigging another SMP2P interrupt to the AP. We communicate + * whether the clock is enabled using two SMP2P state bits--one to + * indicate the clock state (on or off), and a second to indicate the + * clock state bit is valid. The modem will poll the valid bit until it + * is set, and at that time records whether the AP has the IPA clock enabled. + * + * Finally, if the AP kernel panics, we update the SMP2P state bits even if + * we never receive an interrupt from the modem requesting this. 
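 *
 * (Editorial cross-reference, not part of this patch: the SMP2P resources
 * used here are supplied by devicetree.  In the sdm845.dtsi patch at the
 * end of this series they appear as the "ipa-clock-query" and
 * "ipa-setup-ready" entries of interrupt-names, and as the
 * "ipa-clock-enabled-valid" and "ipa-clock-enabled" entries of
 * qcom,smem-state-names; ipa_smp2p_init() below looks them up by name.)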
+ */ + +/** + * struct ipa_smp2p - IPA SMP2P information + * @ipa: IPA pointer + * @valid_state: SMEM state indicating enabled state is valid + * @enabled_state: SMEM state to indicate clock is enabled + * @valid_bit: Valid bit in 32-bit SMEM state mask + * @enabled_bit: Enabled bit in 32-bit SMEM state mask + * @clock_query_irq: IPA interrupt triggered by modem for clock query + * @setup_ready_irq: IPA interrupt triggered by modem to signal GSI ready + * @clock_on: Whether IPA clock is on + * @notified: Whether modem has been notified of clock state + * @disabled: Whether setup ready interrupt handling is disabled + * @mutex: Mutex protecting the ready-interrupt/shutdown interlock + * @panic_notifier: Panic notifier structure + */ +struct ipa_smp2p { + struct ipa *ipa; + struct qcom_smem_state *valid_state; + struct qcom_smem_state *enabled_state; + u32 valid_bit; + u32 enabled_bit; + u32 clock_query_irq; + u32 setup_ready_irq; + bool clock_on; + bool notified; + bool disabled; + struct mutex mutex; + struct notifier_block panic_notifier; +}; + +/** + * ipa_smp2p_notify() - use SMP2P to tell modem about IPA clock state + * @smp2p: SMP2P information + * + * This is called either when the modem has requested it (by triggering + * the modem clock query IPA interrupt) or whenever the AP is shutting down + * (via a panic notifier). It sets the two SMP2P state bits--one saying + * whether the IPA clock is running, and the other indicating the first bit + * is valid. + */ +static void ipa_smp2p_notify(struct ipa_smp2p *smp2p) +{ + u32 value; + u32 mask; + + if (smp2p->notified) + return; + + smp2p->clock_on = ipa_clock_get_additional(smp2p->ipa); + + /* Signal whether the clock is enabled */ + mask = BIT(smp2p->enabled_bit); + value = smp2p->clock_on ? 
mask : 0; + qcom_smem_state_update_bits(smp2p->enabled_state, mask, value); + + /* Now indicate that the enabled flag is valid */ + mask = BIT(smp2p->valid_bit); + value = mask; + qcom_smem_state_update_bits(smp2p->valid_state, mask, value); + + smp2p->notified = true; +} + +/* Threaded IRQ handler for modem "ipa-clock-query" SMP2P interrupt */ +static irqreturn_t ipa_smp2p_modem_clk_query_isr(int irq, void *dev_id) +{ + struct ipa_smp2p *smp2p = dev_id; + + ipa_smp2p_notify(smp2p); + + return IRQ_HANDLED; +} + +static int ipa_smp2p_panic_notifier(struct notifier_block *nb, + unsigned long action, void *data) +{ + struct ipa_smp2p *smp2p; + + smp2p = container_of(nb, struct ipa_smp2p, panic_notifier); + + ipa_smp2p_notify(smp2p); + + if (smp2p->clock_on) + ipa_uc_panic_notifier(smp2p->ipa); + + return NOTIFY_DONE; +} + +static int ipa_smp2p_panic_notifier_register(struct ipa_smp2p *smp2p) +{ + /* IPA panic handler needs to run before modem shuts down */ + smp2p->panic_notifier.notifier_call = ipa_smp2p_panic_notifier; + smp2p->panic_notifier.priority = INT_MAX; /* Do it early */ + + return atomic_notifier_chain_register(&panic_notifier_list, + &smp2p->panic_notifier); +} + +static void ipa_smp2p_panic_notifier_unregister(struct ipa_smp2p *smp2p) +{ + atomic_notifier_chain_unregister(&panic_notifier_list, + &smp2p->panic_notifier); +} + +/* Threaded IRQ handler for modem "ipa-setup-ready" SMP2P interrupt */ +static irqreturn_t ipa_smp2p_modem_setup_ready_isr(int irq, void *dev_id) +{ + struct ipa_smp2p *smp2p = dev_id; + + mutex_lock(&smp2p->mutex); + + if (!smp2p->disabled) { + int ret; + + ret = ipa_setup(smp2p->ipa); + if (ret) + dev_err(&smp2p->ipa->pdev->dev, + "error %d from ipa_setup()\n", ret); + smp2p->disabled = true; + } + + mutex_unlock(&smp2p->mutex); + + return IRQ_HANDLED; +} + +/* Initialize SMP2P interrupts */ +static int ipa_smp2p_irq_init(struct ipa_smp2p *smp2p, const char *name, + irq_handler_t handler) +{ + struct device *dev = &smp2p->ipa->pdev->dev; + unsigned int irq; + int ret; + + ret = platform_get_irq_byname(smp2p->ipa->pdev, name); + if (ret <= 0) { + dev_err(dev, "DT error %d getting \"%s\" IRQ property\n", + ret, name); + return ret ? 
: -EINVAL; + } + irq = ret; + + ret = request_threaded_irq(irq, NULL, handler, 0, name, smp2p); + if (ret) { + dev_err(dev, "error %d requesting \"%s\" IRQ\n", ret, name); + return ret; + } + + return irq; +} + +static void ipa_smp2p_irq_exit(struct ipa_smp2p *smp2p, u32 irq) +{ + free_irq(irq, smp2p); +} + +/* Drop the clock reference if it was taken in ipa_smp2p_notify() */ +static void ipa_smp2p_clock_release(struct ipa *ipa) +{ + if (!ipa->smp2p->clock_on) + return; + + ipa_clock_put(ipa); + ipa->smp2p->clock_on = false; +} + +/* Initialize the IPA SMP2P subsystem */ +int ipa_smp2p_init(struct ipa *ipa, bool modem_init) +{ + struct qcom_smem_state *enabled_state; + struct device *dev = &ipa->pdev->dev; + struct qcom_smem_state *valid_state; + struct ipa_smp2p *smp2p; + u32 enabled_bit; + u32 valid_bit; + int ret; + + valid_state = qcom_smem_state_get(dev, "ipa-clock-enabled-valid", + &valid_bit); + if (IS_ERR(valid_state)) + return PTR_ERR(valid_state); + if (valid_bit >= 32) /* BITS_PER_U32 */ + return -EINVAL; + + enabled_state = qcom_smem_state_get(dev, "ipa-clock-enabled", + &enabled_bit); + if (IS_ERR(enabled_state)) + return PTR_ERR(enabled_state); + if (enabled_bit >= 32) /* BITS_PER_U32 */ + return -EINVAL; + + smp2p = kzalloc(sizeof(*smp2p), GFP_KERNEL); + if (!smp2p) + return -ENOMEM; + + smp2p->ipa = ipa; + + /* These fields are needed by the clock query interrupt + * handler, so initialize them now. + */ + mutex_init(&smp2p->mutex); + smp2p->valid_state = valid_state; + smp2p->valid_bit = valid_bit; + smp2p->enabled_state = enabled_state; + smp2p->enabled_bit = enabled_bit; + + /* We have enough information saved to handle notifications */ + ipa->smp2p = smp2p; + + ret = ipa_smp2p_irq_init(smp2p, "ipa-clock-query", + ipa_smp2p_modem_clk_query_isr); + if (ret < 0) + goto err_null_smp2p; + smp2p->clock_query_irq = ret; + + ret = ipa_smp2p_panic_notifier_register(smp2p); + if (ret) + goto err_irq_exit; + + if (modem_init) { + /* Result will be non-zero (negative for error) */ + ret = ipa_smp2p_irq_init(smp2p, "ipa-setup-ready", + ipa_smp2p_modem_setup_ready_isr); + if (ret < 0) + goto err_notifier_unregister; + smp2p->setup_ready_irq = ret; + } + + return 0; + +err_notifier_unregister: + ipa_smp2p_panic_notifier_unregister(smp2p); +err_irq_exit: + ipa_smp2p_irq_exit(smp2p, smp2p->clock_query_irq); +err_null_smp2p: + ipa->smp2p = NULL; + mutex_destroy(&smp2p->mutex); + kfree(smp2p); + + return ret; +} + +void ipa_smp2p_exit(struct ipa *ipa) +{ + struct ipa_smp2p *smp2p = ipa->smp2p; + + if (smp2p->setup_ready_irq) + ipa_smp2p_irq_exit(smp2p, smp2p->setup_ready_irq); + ipa_smp2p_panic_notifier_unregister(smp2p); + ipa_smp2p_irq_exit(smp2p, smp2p->clock_query_irq); + /* We won't get notified any more; drop clock reference (if any) */ + ipa_smp2p_clock_release(ipa); + ipa->smp2p = NULL; + mutex_destroy(&smp2p->mutex); + kfree(smp2p); +} + +void ipa_smp2p_disable(struct ipa *ipa) +{ + struct ipa_smp2p *smp2p = ipa->smp2p; + + if (!smp2p->setup_ready_irq) + return; + + mutex_lock(&smp2p->mutex); + + smp2p->disabled = true; + + mutex_unlock(&smp2p->mutex); +} + +/* Reset state tracking whether we have notified the modem */ +void ipa_smp2p_notify_reset(struct ipa *ipa) +{ + struct ipa_smp2p *smp2p = ipa->smp2p; + u32 mask; + + if (!smp2p->notified) + return; + + ipa_smp2p_clock_release(ipa); + + /* Reset the clock enabled valid flag */ + mask = BIT(smp2p->valid_bit); + qcom_smem_state_update_bits(smp2p->valid_state, mask, 0); + + /* Mark the clock disabled for good measure... 
*/ + mask = BIT(smp2p->enabled_bit); + qcom_smem_state_update_bits(smp2p->enabled_state, mask, 0); + + smp2p->notified = false; +} diff --git a/drivers/net/ipa/ipa_smp2p.h b/drivers/net/ipa/ipa_smp2p.h new file mode 100644 index 000000000000..1f65cdc9d406 --- /dev/null +++ b/drivers/net/ipa/ipa_smp2p.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019-2020 Linaro Ltd. + */ +#ifndef _IPA_SMP2P_H_ +#define _IPA_SMP2P_H_ + +#include + +struct ipa; + +/** + * ipa_smp2p_init() - Initialize the IPA SMP2P subsystem + * @ipa: IPA pointer + * @modem_init: Whether the modem is responsible for GSI initialization + * + * @Return: 0 if successful, or a negative error code + * + */ +int ipa_smp2p_init(struct ipa *ipa, bool modem_init); + +/** + * ipa_smp2p_exit() - Inverse of ipa_smp2p_init() + * @ipa: IPA pointer + */ +void ipa_smp2p_exit(struct ipa *ipa); + +/** + * ipa_smp2p_disable() - Prevent "ipa-setup-ready" interrupt handling + * @IPA: IPA pointer + * + * Prevent handling of the "setup ready" interrupt from the modem. + * This is used before initiating shutdown of the driver. + */ +void ipa_smp2p_disable(struct ipa *ipa); + +/** + * ipa_smp2p_notify_reset() - Reset modem notification state + * @ipa: IPA pointer + * + * If the modem crashes it queries the IPA clock state. In cleaning + * up after such a crash this is used to reset some state maintained + * for managing this notification. + */ +void ipa_smp2p_notify_reset(struct ipa *ipa); + +#endif /* _IPA_SMP2P_H_ */ From patchwork Fri Feb 28 22:42:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 190179 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 78818C3F2D7 for ; Fri, 28 Feb 2020 22:42:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 4F60F246B0 for ; Fri, 28 Feb 2020 22:42:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="aFJmacyo" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726917AbgB1Wm6 (ORCPT ); Fri, 28 Feb 2020 17:42:58 -0500 Received: from mail-yw1-f52.google.com ([209.85.161.52]:43826 "EHLO mail-yw1-f52.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727305AbgB1Wmq (ORCPT ); Fri, 28 Feb 2020 17:42:46 -0500 Received: by mail-yw1-f52.google.com with SMTP id u78so128939ywf.10 for ; Fri, 28 Feb 2020 14:42:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+OwhcmpR7pb9yvMAW/3zddjXHmCNfQqId9+2KCqrSJ0=; b=aFJmacyoYMfOrWc7Py00c+Je5msNK68dvGgpMGjv4b6l1knIu9+IIbetY2GBb/i0g1 Uy3tDaBgLQY5nCv31krfzrDL4NajPGA4s9pswLqtwoaQiN4T5XLmg5PblTbCuhiJX7F/ pDJFFYgTCHKHp/WzP+RBmbLrvosSAcU23o0bS5bBZA+zAjSOiaoAlATN6BnmKSx+OZhX 
ugr0CXu1V78lqqzVGow1mo0qSv0ypxJzAtrO+51fw6J4XlsIVY7BgYaK0cm93NDumSsi B0ukVTvkOtKfCwanqirrfraMtRgF2Ys2Wr4czjjqhnlZlnFGG3GkPZKbPXqufz0cPYPX 6IOg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+OwhcmpR7pb9yvMAW/3zddjXHmCNfQqId9+2KCqrSJ0=; b=F1Eiq3DZYMaQilKrCG4Jyj7NMvDcueGukCwPxVDIwFIeC9MW5tI7o8nONrWUsSy7Dc C5cWSRVuDq9fV9GgWxfTATslEvol12iS+EcJMVIJajPR88aOw78P3lBRSenb+hDnJIdi I5SJw9qMi+BVCdxfYWJdSAhx+ggk5UA8khK9pFylSVJK4e4urCFxlJXUZrXs1QYFERAD MMFhGK+6EjPt7YZpRETXAEoeUU4xraF6KZBh+jphPizSGAU/R0AV0V1RaCa2ogtbaR65 tkWWMHhq+gg+VsRS5q968bIsQ5HaYUv3Pq46jRoN0mdZnDAExzNY8tsf4Vhf+JfZi0Nn IGJg== X-Gm-Message-State: APjAAAXd6xLEu5KVEUObSqaqZwgie0uPYStKHSwmM3QcNNqZe/a0RvUF ZmBdoPAs43YKRTejh7c/A8ORJw== X-Google-Smtp-Source: APXvYqy/xVwiXn9ASJWR47pESZIiWVOz3fDE0idfoGWN5WK6UN8CA60IZR+AvIQawo8+inA2mbi/cQ== X-Received: by 2002:a81:82c5:: with SMTP id s188mr6860426ywf.59.1582929765490; Fri, 28 Feb 2020 14:42:45 -0800 (PST) Received: from localhost.localdomain (c-73-185-129-58.hsd1.mn.comcast.net. [73.185.129.58]) by smtp.gmail.com with ESMTPSA id d188sm4637830ywe.50.2020.02.28.14.42.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 28 Feb 2020 14:42:44 -0800 (PST) From: Alex Elder To: Bjorn Andersson , Andy Gross Cc: Arnd Bergmann , David Miller , Johannes Berg , Dan Williams , Evan Green , Eric Caruso , Susheel Yadav Yadagiri , Chaitanya Pratapa , Subash Abhinov Kasiviswanathan , Rob Herring , Mark Rutland , Ohad Ben-Cohen , Siddharth Gupta , netdev@vger.kernel.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 17/17] arm64: dts: sdm845: add IPA information Date: Fri, 28 Feb 2020 16:42:04 -0600 Message-Id: <20200228224204.17746-18-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200228224204.17746-1-elder@linaro.org> References: <20200228224204.17746-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add IPA-related nodes and definitions to "sdm845.dtsi". 
Signed-off-by: Alex Elder --- arch/arm64/boot/dts/qcom/sdm845.dtsi | 51 ++++++++++++++++++++++++++++ 1 file changed, 51 insertions(+) diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi index d42302b8889b..58fd1c611849 100644 --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi @@ -675,6 +675,17 @@ interrupt-controller; #interrupt-cells = <2>; }; + + ipa_smp2p_out: ipa-ap-to-modem { + qcom,entry-name = "ipa"; + #qcom,smem-state-cells = <1>; + }; + + ipa_smp2p_in: ipa-modem-to-ap { + qcom,entry-name = "ipa"; + interrupt-controller; + #interrupt-cells = <2>; + }; }; smp2p-slpi { @@ -1435,6 +1446,46 @@ }; }; + ipa@1e40000 { + compatible = "qcom,sdm845-ipa"; + + modem-init; + modem-remoteproc = <&mss_pil>; + + reg = <0 0x1e40000 0 0x7000>, + <0 0x1e47000 0 0x2000>, + <0 0x1e04000 0 0x2c000>; + reg-names = "ipa-reg", + "ipa-shared", + "gsi"; + + interrupts-extended = + <&intc 0 311 IRQ_TYPE_EDGE_RISING>, + <&intc 0 432 IRQ_TYPE_LEVEL_HIGH>, + <&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>, + <&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>; + interrupt-names = "ipa", + "gsi", + "ipa-clock-query", + "ipa-setup-ready"; + + clocks = <&rpmhcc RPMH_IPA_CLK>; + clock-names = "core"; + + interconnects = + <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>, + <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>, + <&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>; + interconnect-names = "memory", + "imem", + "config"; + + qcom,smem-states = <&ipa_smp2p_out 0>, + <&ipa_smp2p_out 1>; + qcom,smem-state-names = "ipa-clock-enabled-valid", + "ipa-clock-enabled"; + }; + tcsr_mutex_regs: syscon@1f40000 { compatible = "syscon"; reg = <0 0x01f40000 0 0x40000>;
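
For reference, here is a minimal sketch (illustrative only; the function name and local variables are hypothetical, not part of the series) of how the named interrupt and SMEM-state resources in the IPA node above reach the driver, using the same platform_get_irq_byname() and qcom_smem_state_get() lookups that ipa_smp2p_init() performs earlier in this series:

	#include <linux/err.h>
	#include <linux/platform_device.h>
	#include <linux/soc/qcom/smem_state.h>
	#include <linux/types.h>

	/* Sketch: look up the DT resources named in the "ipa" node */
	static int ipa_dt_lookup_sketch(struct platform_device *pdev)
	{
		struct qcom_smem_state *valid_state;
		u32 valid_bit;
		int irq;

		/* Entries in "interrupt-names" are retrieved by name */
		irq = platform_get_irq_byname(pdev, "ipa-clock-query");
		if (irq < 0)
			return irq;

		/* Entries in "qcom,smem-state-names" work the same way */
		valid_state = qcom_smem_state_get(&pdev->dev,
						  "ipa-clock-enabled-valid",
						  &valid_bit);
		if (IS_ERR(valid_state))
			return PTR_ERR(valid_state);

		/* The real driver records these in its ipa_smp2p structure */
		return 0;
	}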