From patchwork Sun May 12 01:24:51 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163969
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
    ilias.apalodimas@linaro.org, kvalo@codeaurora.org,
    johannes@sipsolutions.net, andy.shevchenko@gmail.com
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
    benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
    linux-kernel@vger.kernel.org, Alex Elder <elder@linaro.org>
Subject: [PATCH 01/18] bitfield.h: add FIELD_MAX() and field_max()
Date: Sat, 11 May 2019 20:24:51 -0500
Message-Id: <20190512012508.10608-2-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

Define FIELD_MAX(), which supplies the maximum value that can be
represented in a field. Define field_max() as well, to go along with
the lower-case forms of the field mask functions.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 include/linux/bitfield.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

--
2.20.1

diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
index 3f1ef4450a7c..cf4f06774520 100644
--- a/include/linux/bitfield.h
+++ b/include/linux/bitfield.h
@@ -63,6 +63,19 @@
 		 (1ULL << __bf_shf(_mask))); \
 	})
 
+/**
+ * FIELD_MAX() - produce the maximum value representable by a field
+ * @_mask: shifted mask defining the field's length and position
+ *
+ * FIELD_MAX() returns the maximum value that can be held in the field
+ * specified by @_mask.
+ */
+#define FIELD_MAX(_mask)						\
+	({								\
+		__BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_MAX: ");	\
+		(typeof(_mask))((_mask) >> __bf_shf(_mask));		\
+	})
+
 /**
  * FIELD_FIT() - check if value fits in the field
  * @_mask: shifted mask defining the field's length and position
@@ -118,6 +131,7 @@ static __always_inline u64 field_mask(u64 field)
 {
 	return field / field_multiplier(field);
 }
+#define field_max(field)	((typeof(field))field_mask(field))
 #define ____MAKE_OP(type,base,to,from)					\
 static __always_inline __##type type##_encode_bits(base v, base field)	\
 {									\
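[For illustration only, not part of the series: a minimal sketch of how
the two new macros might be used. The EXAMPLE_FMASK name and the example
function are hypothetical.]

	#include <linux/bitfield.h>
	#include <linux/bits.h>

	#define EXAMPLE_FMASK	GENMASK(5, 3)	/* a 3-bit field at bits 3..5 */

	static u32 example_field_limits(void)
	{
		/* FIELD_MAX() works on a constant mask; yields 0x7 here */
		u32 max = FIELD_MAX(EXAMPLE_FMASK);

		/* field_max() is the lower-case form taking a mask value */
		return max == field_max((u32)EXAMPLE_FMASK) ? max : 0;
	}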
From patchwork Sun May 12 01:24:52 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163986
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
    ilias.apalodimas@linaro.org, subashab@codeaurora.org,
    stranche@codeaurora.org, yuehaibing@huawei.com, joe@perches.com
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
    benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
    linux-kernel@vger.kernel.org, Alex Elder <elder@linaro.org>
Subject: [PATCH 02/18] soc: qcom: create "include/soc/qcom/rmnet.h"
Date: Sat, 11 May 2019 20:24:52 -0500
Message-Id: <20190512012508.10608-3-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

The IPA driver requires some (but not all) symbols defined in
"drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h". Create a new public
header file "include/soc/qcom/rmnet.h" and move the needed definitions
there.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 .../ethernet/qualcomm/rmnet/rmnet_handlers.c  |  1 +
 .../net/ethernet/qualcomm/rmnet/rmnet_map.h   | 24 ------------
 .../qualcomm/rmnet/rmnet_map_command.c        |  1 +
 .../ethernet/qualcomm/rmnet/rmnet_map_data.c  |  1 +
 .../net/ethernet/qualcomm/rmnet/rmnet_vnd.c   |  1 +
 include/soc/qcom/rmnet.h                      | 38 +++++++++++++++++++
 6 files changed, 42 insertions(+), 24 deletions(-)
 create mode 100644 include/soc/qcom/rmnet.h

--
2.20.1

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index 11167abe5934..3aa79b7ed539 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include <soc/qcom/rmnet.h>
 #include "rmnet_private.h"
 #include "rmnet_config.h"
 #include "rmnet_vnd.h"
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
index 884f1f52dcc2..39d0be99a771 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
@@ -39,30 +39,6 @@ enum rmnet_map_commands {
 	RMNET_MAP_COMMAND_ENUM_LENGTH
 };
 
-struct rmnet_map_header {
-	u8 pad_len:6;
-	u8 reserved_bit:1;
-	u8 cd_bit:1;
-	u8 mux_id;
-	__be16 pkt_len;
-} __aligned(1);
-
-struct rmnet_map_dl_csum_trailer {
-	u8 reserved1;
-	u8 valid:1;
-	u8 reserved2:7;
-	u16 csum_start_offset;
-	u16 csum_length;
-	__be16 csum_value;
-} __aligned(1);
-
-struct rmnet_map_ul_csum_header {
-	__be16 csum_start_offset;
-	u16 csum_insert_offset:14;
-	u16 udp_ip4_ind:1;
-	u16 csum_enabled:1;
-} __aligned(1);
-
 #define RMNET_MAP_GET_MUX_ID(Y) (((struct rmnet_map_header *) \
 				 (Y)->data)->mux_id)
 #define RMNET_MAP_GET_CD_BIT(Y) (((struct rmnet_map_header *) \
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c
index f6cf59aee212..54b86a8be570 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c
@@ -11,6 +11,7 @@
  */
 
 #include
+#include <soc/qcom/rmnet.h>
 #include "rmnet_config.h"
 #include "rmnet_map.h"
 #include "rmnet_private.h"
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
index 57a9c314a665..e3fb4035820c 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include <soc/qcom/rmnet.h>
 #include "rmnet_config.h"
 #include "rmnet_map.h"
 #include "rmnet_private.h"
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
index d11c16aeb19a..b8df36e827d4 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include <soc/qcom/rmnet.h>
 #include "rmnet_config.h"
 #include "rmnet_handlers.h"
 #include "rmnet_private.h"
diff --git a/include/soc/qcom/rmnet.h b/include/soc/qcom/rmnet.h
new file mode 100644
index 000000000000..80dcd6e68c3d
--- /dev/null
+++ b/include/soc/qcom/rmnet.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _SOC_QCOM_RMNET_H_
+#define _SOC_QCOM_RMNET_H_
+
+#include <linux/types.h>
+
+/* Header structure that precedes packets in ETH_P_MAP protocol */
+struct rmnet_map_header {
+	u8 pad_len		: 6;
+	u8 reserved_bit		: 1;
+	u8 cd_bit		: 1;
+	u8 mux_id;
+	__be16 pkt_len;
+} __aligned(1);
+
+/* Checksum offload metadata header for outbound packets */
+struct rmnet_map_ul_csum_header {
+	__be16 csum_start_offset;
+	u16 csum_insert_offset	: 14;
+	u16 udp_ip4_ind		: 1;
+	u16 csum_enabled	: 1;
+} __aligned(1);
+
+/* Checksum offload metadata trailer for inbound packets */
+struct rmnet_map_dl_csum_trailer {
+	u8 reserved1;
+	u8 valid		: 1;
+	u8 reserved2		: 7;
+	u16 csum_start_offset;
+	u16 csum_length;
+	__be16 csum_value;
+} __aligned(1);
+
+#endif /* _SOC_QCOM_RMNET_H_ */
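[For illustration only, not part of the series: a minimal sketch of
reading fields from the MAP header at the front of a received buffer,
following the struct layout above. The function name and "buf" argument
are hypothetical.]

	#include <linux/types.h>
	#include <asm/byteorder.h>
	#include <soc/qcom/rmnet.h>

	/* Return the payload length of a MAP data packet, or 0 for a
	 * command packet. pkt_len is big-endian and counts any trailing
	 * padding, which pad_len lets us strip off.
	 */
	static u16 example_map_payload_len(const void *buf)
	{
		const struct rmnet_map_header *map = buf;

		if (map->cd_bit)	/* command, not data */
			return 0;

		return be16_to_cpu(map->pkt_len) - map->pad_len;
	}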
From patchwork Sun May 12 01:24:53 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163985
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
    ilias.apalodimas@linaro.org, robh+dt@kernel.org, mark.rutland@arm.com,
    andy.gross@linaro.org, david.brown@linaro.org
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
    benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
    linux-kernel@vger.kernel.org, Alex Elder <elder@linaro.org>
Subject: [PATCH 03/18] dt-bindings: soc: qcom: add IPA bindings
Date: Sat, 11 May 2019 20:24:53 -0500
Message-Id: <20190512012508.10608-4-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

Add the binding definitions for the "qcom,ipa" device tree node.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 .../devicetree/bindings/net/qcom,ipa.txt      | 164 ++++++++++++++++++
 1 file changed, 164 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/qcom,ipa.txt

--
2.20.1

diff --git a/Documentation/devicetree/bindings/net/qcom,ipa.txt b/Documentation/devicetree/bindings/net/qcom,ipa.txt
new file mode 100644
index 000000000000..2705e198f12e
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/qcom,ipa.txt
@@ -0,0 +1,164 @@
+Qualcomm IP Accelerator (IPA)
+
+This binding describes the Qualcomm IPA. The IPA is capable of offloading
+certain network processing tasks (e.g. filtering, routing, and NAT) from
+the main processor.
+
+The IPA sits between multiple independent "execution environments,"
+including the Application Processor (AP) and the modem. The IPA presents
+a Generic Software Interface (GSI) to each execution environment.
+The GSI is an integral part of the IPA, but it is logically isolated
+and has a distinct interrupt and a separately-defined address space.
+
+         --------             ---------
+        |        |           |         |
+        |   AP   +<---.  .----+  Modem |
+        |        +--. |  | .->+        |
+        |        |  | |  | |  |        |
+         --------   | |  | |   ---------
+                    v |  v |
+                  --+-+---+-+--
+                 |     GSI     |
+                 |-------------|
+                 |             |
+                 |     IPA     |
+                 |             |
+                  -------------
+
+See also:
+  bindings/interrupt-controller/interrupts.txt
+  bindings/interconnect/interconnect.txt
+  bindings/soc/qcom/qcom,smp2p.txt
+  bindings/reserved-memory/reserved-memory.txt
+  bindings/clock/clock-bindings.txt
+
+All properties defined below are required, except "modem-init".
+
+- compatible:
+	Must be "qcom,sdm845-ipa".
+
+- modem-init:
+	This Boolean property is optional. If present, it indicates that
+	the modem is responsible for performing early IPA initialization,
+	including loading and validating firmware used by the GSI.
+	Otherwise this early initialization is performed by Trust Zone.
+
+- reg:
+	Resources specifying the physical address spaces of the IPA and GSI.
+
+- reg-names:
+	The names of the three address space ranges defined by the "reg"
+	property. Must be:
+		"ipa-reg"
+		"ipa-shared"
+		"gsi"
+
+- interrupts:
+	Specifies the IRQs used by the IPA. Four interrupts are required,
+	specifying: the IPA IRQ; the GSI IRQ; the clock query interrupt
+	from the modem; and the "ready for setup" interrupt from the modem.
+	The first two are hardware IRQs; the third and fourth are SMP2P
+	input interrupts.
+
+- interrupt-names:
+	The names of the interrupts defined by the "interrupts"
+	property. Must be:
+		"ipa"
+		"gsi"
+		"ipa-clock-query"
+		"ipa-setup-ready"
+
+- clocks:
+	Resource that defines the IPA core clock.
+
+- clock-names:
+	The name used for the IPA core clock. Must be "core".
+
+- interconnects:
+	Specifies the interconnects used by the IPA. Three interconnects
+	are required, specifying: the path from the IPA to memory; from
+	IPA to internal (SoC resident) memory; and between the AP subsystem
+	and IPA for register access.
+
+- interconnect-names:
+	The names of the interconnects defined by the "interconnects"
+	property. Must be:
+		"memory"
+		"imem"
+		"config"
+
+- qcom,smem-states:
+	The state bits used for SMP2P output. Two states must be specified.
+	The first indicates whether the value in the second bit is valid
+	(1 means valid). The second, if valid, defines whether the IPA
+	clock is enabled (1 means enabled).
+
+- qcom,smem-state-names:
+	The names of the state bits used for SMP2P output. Must be:
+		"ipa-clock-enabled-valid"
+		"ipa-clock-enabled"
+
+- memory-region:
+	A phandle for a reserved memory area that holds the firmware passed
+	to Trust Zone for authentication. (Note: this is required
+	only when Trust Zone performs early initialization; that is,
+	it is required if "modem-init" is not defined.)
+
+= EXAMPLE
+
+The following example represents the IPA present in the SDM845 SoC. It
+shows portions of the "modem-smp2p" node to indicate its relationship
+with the interrupts and SMEM states used by the IPA.
+
+	smp2p-mpss {
+		compatible = "qcom,smp2p";
+		. . .
+		ipa_smp2p_out: ipa-ap-to-modem {
+			qcom,entry-name = "ipa";
+			#qcom,smem-state-cells = <1>;
+		};
+
+		ipa_smp2p_in: ipa-modem-to-ap {
+			qcom,entry-name = "ipa";
+			interrupt-controller;
+			#interrupt-cells = <2>;
+		};
+	};
+
+	ipa@1e40000 {
+		compatible = "qcom,sdm845-ipa";
+
+		modem-init;
+
+		reg = <0 0x1e40000 0 0x7000>,
+		      <0 0x1e47000 0 0x2000>,
+		      <0 0x1e04000 0 0x2c000>;
+		reg-names = "ipa-reg",
+			    "ipa-shared",
+			    "gsi";
+
+		interrupts-extended = <&intc 0 311 IRQ_TYPE_EDGE_RISING>,
+				      <&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
+				      <&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+				      <&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
+		interrupt-names = "ipa",
+				  "gsi",
+				  "ipa-clock-query",
+				  "ipa-setup-ready";
+
+		clocks = <&rpmhcc RPMH_IPA_CLK>;
+		clock-names = "core";
+
+		interconnects =
+			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
+			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
+			<&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
+		interconnect-names = "memory",
+				     "imem",
+				     "config";
+
+		qcom,smem-states = <&ipa_smp2p_out 0>,
+				   <&ipa_smp2p_out 1>;
+		qcom,smem-state-names = "ipa-clock-enabled-valid",
+					"ipa-clock-enabled";
+	};
From patchwork Sun May 12 01:24:54 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163970
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
    ilias.apalodimas@linaro.org
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
    benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
    linux-kernel@vger.kernel.org, Alex Elder <elder@linaro.org>
Subject: [PATCH 04/18] soc: qcom: ipa: main code
Date: Sat, 11 May 2019 20:24:54 -0500
Message-Id: <20190512012508.10608-5-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

This patch includes three source files that represent some basic "main
program" code for the IPA driver. They are:

 - "ipa.h" defines the top-level IPA structure which represents an IPA
   device throughout the code.
 - "ipa_main.c" contains the platform driver probe function, along with
   some general code used during initialization.
 - "ipa_reg.h" defines the offsets of the 32-bit registers used for the
   IPA device, along with masks that define the position and width of
   fields smaller than 32 bits located within these registers.

Each file includes some documentation that provides a little more
overview of how the code is organized and used.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa.h      | 131 ++++++
 drivers/net/ipa/ipa_main.c | 920 +++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_reg.h  | 279 +++++++++++
 3 files changed, 1330 insertions(+)
 create mode 100644 drivers/net/ipa/ipa.h
 create mode 100644 drivers/net/ipa/ipa_main.c
 create mode 100644 drivers/net/ipa/ipa_reg.h

--
2.20.1

diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
new file mode 100644
index 000000000000..c580254d1e0e
--- /dev/null
+++ b/drivers/net/ipa/ipa.h
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_H_
+#define _IPA_H_
+
+#include
+#include
+#include
+#include
+
+#include "gsi.h"
+#include "ipa_qmi.h"
+#include "ipa_endpoint.h"
+#include "ipa_interrupt.h"
+
+struct clk;
+struct icc_path;
+struct net_device;
+struct platform_device;
+
+struct ipa_clock;
+struct ipa_smp2p;
+struct ipa_interrupt;
+
+/**
+ * struct ipa - IPA information
+ * @gsi: Embedded GSI structure
+ * @pdev: Platform device
+ * @smp2p: SMP2P information
+ * @clock: IPA clocking information
+ * @suspend_ref: Whether a clock reference preventing suspend has been taken
+ * @route_virt: Virtual address of routing table
+ * @route_addr: DMA address for routing table
+ * @filter_virt: Virtual address of filter table
+ * @filter_addr: DMA address for filter table
+ * @interrupt: IPA interrupt information
+ * @uc_loaded: Non-zero when microcontroller has reported it's ready
+ * @reg_phys: Physical address of IPA register space
+ * @reg_virt: Virtual address used for IPA register access
+ * @shared_phys: Physical address of memory space shared with modem
+ * @shared_virt: Virtual address of memory space shared with modem
+ * @shared_offset: Additional offset used for shared memory
+ * @wakeup: Wakeup source information
+ * @filter_support: Bit mask indicating endpoints that support filtering
+ * @initialized: Bit mask indicating endpoints initialized
+ * @set_up: Bit mask indicating endpoints set up
+ * @enabled: Bit mask indicating endpoints enabled
+ * @suspended: Bit mask indicating endpoints suspended
+ * @endpoint: Array of endpoint information
+ * @endpoint_map: Mapping of GSI channel to IPA endpoint information
+ * @command_endpoint: Endpoint used for command TX
+ * @default_endpoint: Endpoint used for default route RX
+ * @modem_netdev: Network device structure used for modem
+ * @setup_complete: Flag indicating whether setup stage has completed
+ * @qmi: QMI information
+ */
+struct ipa {
+	struct gsi gsi;
+	struct platform_device *pdev;
+	struct ipa_smp2p *smp2p;
+	struct ipa_clock *clock;
+	atomic_t suspend_ref;
+
+	void *route_virt;
+	dma_addr_t route_addr;
+	void *filter_virt;
+	dma_addr_t filter_addr;
+
+	struct ipa_interrupt *interrupt;
+	u32 uc_loaded;
+
+	phys_addr_t reg_phys;
+	void __iomem *reg_virt;
+	phys_addr_t shared_phys;
+	void *shared_virt;
+	u32 shared_offset;
+
+	struct wakeup_source wakeup;
+
+	/* Bit masks indicating endpoint state */
+	u32 filter_support;
+	u32 initialized;
+	u32 set_up;
+	u32 enabled;
+	u32 suspended;
+
+	struct ipa_endpoint endpoint[IPA_ENDPOINT_MAX];
+	struct ipa_endpoint *endpoint_map[GSI_CHANNEL_MAX];
+	struct ipa_endpoint *command_endpoint;	/* TX */
+	struct ipa_endpoint *default_endpoint;	/* Default route RX */
+
+	struct net_device *modem_netdev;
+	u32 setup_complete;
+
+	struct ipa_qmi qmi;
+};
+
+/**
+ * ipa_setup() - Perform IPA setup
+ * @ipa: IPA pointer
+ *
+ * IPA initialization is broken into stages: init; config; setup; and
+ * sometimes enable. (These have inverses: exit, deconfig, teardown, and
+ * disable.) Activities performed at the init stage can be done without
+ * requiring any access to hardware. For IPA, activities performed at the
+ * config stage require the IPA clock to be running, because they involve
+ * access to IPA registers. The setup stage is performed only after the
+ * GSI hardware is ready (more on this below). And finally, IPA endpoints
+ * can be enabled once they're successfully set up.
+ *
+ * This function, ipa_setup(), starts the setup stage.
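+ *
+ * (Editor's illustration, not part of the original patch: a typical
+ * successful bring-up thus runs ipa_probe() -> ipa_config(), then
+ * firmware loading, then ipa_setup(); when "modem-init" is in effect
+ * the final step instead waits for the modem's SMP2P interrupt.)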
+ *
+ * In order for the GSI hardware to be functional it needs firmware to be
+ * loaded (in addition to some other low-level initialization). This early
+ * GSI initialization can be done either by Trust Zone or by the modem. If
+ * it's done by Trust Zone, the AP loads the GSI firmware and supplies it to
+ * Trust Zone to verify and install. The AP knows when this completes, and
+ * whether it was successful. In this case the AP proceeds to setup once it
+ * knows GSI is ready.
+ *
+ * If the modem performs early GSI initialization, the AP needs to know when
+ * this has occurred. An SMP2P interrupt is used for this purpose, and
+ * receipt of that interrupt triggers the call to ipa_setup().
+ */
+int ipa_setup(struct ipa *ipa);
+
+#endif /* _IPA_H_ */
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
new file mode 100644
index 000000000000..4dc5ead7bab0
--- /dev/null
+++ b/drivers/net/ipa/ipa_main.c
@@ -0,0 +1,920 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+#include "ipa_netdev.h"
+#include "ipa_smp2p.h"
+#include "ipa_uc.h"
+#include "ipa_interrupt.h"
+
+/**
+ * DOC: The IP Accelerator
+ *
+ * This driver supports the Qualcomm IP Accelerator (IPA), which is a
+ * networking component found in many Qualcomm SoCs. The IPA is connected
+ * to the application processor (AP), but is also connected to (and
+ * partially controlled by) other "execution environments" (EEs), such as
+ * a modem.
+ *
+ * The IPA is the conduit between the AP and the modem that carries network
+ * traffic. This driver presents a network interface representing the
+ * connection of the modem to external (e.g. LTE) networks. The IPA can
+ * provide protocol checksum calculation, offloading this work from the AP.
+ * The IPA is able to provide additional functionality, including routing,
+ * filtering, and NAT support, but that more advanced functionality is not
+ * currently supported.
+ *
+ * Certain resources--including routing tables and filter tables--are still
+ * defined in this driver, because they must be initialized even when the
+ * advanced hardware features are not used.
+ *
+ * There are two distinct layers that implement the IPA hardware, and this
+ * is reflected in the organization of the driver. The generic software
+ * interface (GSI) is an integral component of the IPA, providing a
+ * well-defined communication layer between the AP subsystem and the IPA
+ * core. The GSI implements a set of "channels" used for communication
+ * between the AP and the IPA.
+ *
+ * The IPA layer uses GSI channels to implement its "endpoints". And while
+ * a GSI channel carries data between the AP and the IPA, a pair of IPA
+ * endpoints is used to carry traffic between two EEs. Specifically, the
+ * main modem network interface is implemented by two pairs of endpoints:
+ * a TX endpoint on the AP coupled with an RX endpoint on the modem, and
+ * another RX endpoint on the AP receiving data from a TX endpoint on the
+ * modem.
+ */
+
+#define IPA_TABLE_ALIGN		128	/* Minimum table alignment */
+#define IPA_TABLE_ENTRY_SIZE	sizeof(u64)	/* Holds a physical address */
+#define IPA_FILTER_SIZE		8	/* Filter descriptor size */
+#define IPA_ROUTE_SIZE		8	/* Route descriptor size */
+
+/* Backward compatibility register value to use for SDM845 */
+#define IPA_BCR_REG_VAL		0x0000003b
+
+/* The name of the main firmware file relative to /lib/firmware */
+#define IPA_FWS_PATH		"ipa_fws.mdt"
+#define IPA_PAS_ID		15
+
+/**
+ * ipa_filter_tuple_zero() - Zero an endpoint's filter tuple
+ * @endpoint: Endpoint whose filter tuple should be zeroed
+ *
+ * The endpoint must be for the AP (not the modem) and support filtering.
+ * Updates the filter mask values without changing the routing ones.
+ */
+static void ipa_filter_tuple_zero(struct ipa_endpoint *endpoint)
+{
+	enum ipa_endpoint_id endpoint_id = endpoint->endpoint_id;
+	u32 offset;
+	u32 val;
+
+	offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(endpoint_id);
+
+	val = ioread32(endpoint->ipa->reg_virt + offset);
+
+	/* Zero all filter-related fields, preserving the rest */
+	u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+static void ipa_filter_hash_tuple_config(struct ipa *ipa)
+{
+	u32 ep_mask = ipa->filter_support;
+
+	while (ep_mask) {
+		enum ipa_endpoint_id endpoint_id = __ffs(ep_mask);
+		struct ipa_endpoint *endpoint;
+
+		ep_mask ^= BIT(endpoint_id);
+
+		endpoint = &ipa->endpoint[endpoint_id];
+		if (endpoint->ee_id != GSI_EE_MODEM)
+			ipa_filter_tuple_zero(endpoint);
+	}
+}
+
+/**
+ * ipa_route_tuple_zero() - Zero a routing table entry tuple
+ * @ipa: IPA pointer
+ * @route_id: Identifier for routing table entry to be zeroed
+ *
+ * Updates the routing table values without changing the filtering ones.
+ */
+static void ipa_route_tuple_zero(struct ipa *ipa, u32 route_id)
+{
+	u32 offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(route_id);
+	u32 val;
+
+	val = ioread32(ipa->reg_virt + offset);
+
+	/* Zero all route-related fields, preserving the rest */
+	u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
+
+	iowrite32(val, ipa->reg_virt + offset);
+}
+
+static void ipa_route_hash_tuple_config(struct ipa *ipa)
+{
+	u32 route_mask;
+	u32 modem_mask;
+
+	BUILD_BUG_ON(!IPA_SMEM_MODEM_RT_COUNT);
+	BUILD_BUG_ON(IPA_SMEM_RT_COUNT < IPA_SMEM_MODEM_RT_COUNT);
+	BUILD_BUG_ON(IPA_SMEM_RT_COUNT >= BITS_PER_LONG);
+
+	/* Compute a mask representing non-modem routing table entries */
+	route_mask = GENMASK(IPA_SMEM_RT_COUNT - 1, 0);
+	modem_mask = GENMASK(IPA_SMEM_MODEM_RT_INDEX_MAX,
+			     IPA_SMEM_MODEM_RT_INDEX_MIN);
+	route_mask &= ~modem_mask;
+
+	while (route_mask) {
+		u32 route_id = __ffs(route_mask);
+
+		route_mask ^= BIT(route_id);
+
+		ipa_route_tuple_zero(ipa, route_id);
+	}
+}
+
+/**
+ * ipa_route_setup() - Initialize an empty routing table
+ * @ipa: IPA pointer
+ *
+ * Each entry in the routing table contains the DMA address of a route
+ * descriptor. A special zero descriptor is allocated that represents "no
+ * route" and this function initializes all its entries to point at that
+ * zero route. The zero route is allocated with the table, immediately past
+ * its end.
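+ *
+ * (Editor's illustration, not part of the original patch: with a
+ * hypothetical IPA_SMEM_RT_COUNT of 8, the table occupies 8 * 8 = 64
+ * bytes, the zero route descriptor sits at addr + 64, and all eight
+ * entries initially hold that address.)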
+ *
+ * Return: 0 if successful, or -ENOMEM
+ */
+static int ipa_route_setup(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	u64 zero_route_addr;
+	dma_addr_t addr;
+	u32 route_id;
+	size_t size;
+	u64 *virt;
+
+	BUILD_BUG_ON(!IPA_ROUTE_SIZE);
+	BUILD_BUG_ON(sizeof(*virt) != IPA_TABLE_ENTRY_SIZE);
+
+	/* Allocate the routing table, with enough space at the end of the
+	 * table to hold the zero route descriptor. Initialize all routing
+	 * table entries to point to the zero route.
+	 */
+	size = IPA_SMEM_RT_COUNT * IPA_TABLE_ENTRY_SIZE;
+	virt = dma_alloc_coherent(dev, size + IPA_ROUTE_SIZE, &addr,
+				  GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+	ipa->route_virt = virt;
+	ipa->route_addr = addr;
+
+	/* The zero route is immediately after the route table */
+	zero_route_addr = addr + size;
+
+	for (route_id = 0; route_id < IPA_SMEM_RT_COUNT; route_id++)
+		*virt++ = zero_route_addr;
+
+	ipa_cmd_route_config_ipv4(ipa, size);
+	ipa_cmd_route_config_ipv6(ipa, size);
+
+	ipa_route_hash_tuple_config(ipa);
+
+	/* Configure default route for exception packets */
+	ipa_endpoint_default_route_setup(ipa->default_endpoint);
+
+	return 0;
+}
+
+/**
+ * ipa_route_teardown() - Inverse of ipa_route_setup()
+ * @ipa: IPA pointer
+ */
+static void ipa_route_teardown(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	size_t size;
+
+	ipa_endpoint_default_route_teardown(ipa->default_endpoint);
+
+	size = IPA_SMEM_RT_COUNT * IPA_TABLE_ENTRY_SIZE;
+	size += IPA_ROUTE_SIZE;
+
+	dma_free_coherent(dev, size, ipa->route_virt, ipa->route_addr);
+	ipa->route_virt = NULL;
+	ipa->route_addr = 0;
+}
+
+/**
+ * ipa_filter_setup() - Initialize an empty filter table
+ * @ipa: IPA pointer
+ *
+ * The filter table consists of a bitmask representing which endpoints
+ * support filtering, followed by one table entry for each set bit in the
+ * mask. Each entry in the filter table contains the DMA address of a
+ * filter descriptor. A special zero descriptor is allocated that
+ * represents "no filter" and this function initializes all its entries
+ * to point at that zero filter. The zero filter is allocated with the
+ * table, immediately past its end.
+ *
+ * Return: 0 if successful, or a negative error code
+ */
+static int ipa_filter_setup(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	u64 zero_filter_addr;
+	u32 filter_count;
+	dma_addr_t addr;
+	size_t size;
+	u64 *virt;
+	u32 i;
+
+	BUILD_BUG_ON(!IPA_FILTER_SIZE);
+
+	/* Allocate the filter table, with an extra slot for the bitmap.
+	 * Also allocate enough space at the end of the table to hold the
+	 * zero filter descriptor. Initialize all filter table entries to
+	 * point to that.
+	 */
+	filter_count = hweight32(ipa->filter_support);
+	size = (filter_count + 1) * IPA_TABLE_ENTRY_SIZE;
+	virt = dma_alloc_coherent(dev, size + IPA_FILTER_SIZE, &addr,
+				  GFP_KERNEL);
+	if (!virt)
+		goto err_clear_filter_support;
+	ipa->filter_virt = virt;
+	ipa->filter_addr = addr;
+
+	/* The zero filter is immediately after the filter table */
+	zero_filter_addr = addr + size;
+
+	/* Save the filter table bitmap. The "soft" bitmap value must be
+	 * converted to the hardware representation by shifting it left one
+	 * position. (Bit 0 represents global filtering, which is possible
+	 * but not used.)
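+	 * (Editor's illustration, not part of the original patch: if
+	 * endpoints 0 and 2 support filtering, filter_support is 0x5
+	 * and the value stored here is 0xa, leaving hardware bit 0,
+	 * the unused global filter, clear.)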
+	 */
+	*virt++ = ipa->filter_support << 1;
+
+	/* Now point every entry in the table at the empty filter */
+	for (i = 0; i < filter_count; i++)
+		*virt++ = zero_filter_addr;
+
+	ipa_cmd_filter_config_ipv4(ipa, size);
+	ipa_cmd_filter_config_ipv6(ipa, size);
+
+	ipa_filter_hash_tuple_config(ipa);
+
+	return 0;
+
+err_clear_filter_support:
+	ipa->filter_support = 0;
+
+	return -ENOMEM;
+}
+
+/**
+ * ipa_filter_teardown() - Inverse of ipa_filter_setup()
+ * @ipa: IPA pointer
+ */
+static void ipa_filter_teardown(struct ipa *ipa)
+{
+	u32 filter_count = hweight32(ipa->filter_support);
+	struct device *dev = &ipa->pdev->dev;
+	size_t size;
+
+	size = (filter_count + 1) * IPA_TABLE_ENTRY_SIZE;
+	size += IPA_FILTER_SIZE;
+
+	dma_free_coherent(dev, size, ipa->filter_virt, ipa->filter_addr);
+	ipa->filter_virt = NULL;
+	ipa->filter_addr = 0;
+	ipa->filter_support = 0;
+}
+
+/**
+ * ipa_suspend_handler() - Handle the suspend IPA interrupt
+ * @ipa: IPA pointer
+ * @interrupt_id: Interrupt type
+ *
+ * When in a suspended state, the IPA can trigger a resume by sending a
+ * SUSPEND IPA interrupt.
+ */
+static void ipa_suspend_handler(struct ipa *ipa,
+				enum ipa_interrupt_id interrupt_id)
+{
+	/* Take a single clock reference to prevent suspend. All
+	 * endpoints will be resumed as a result. This reference will
+	 * be dropped when we get a power management suspend request.
+	 */
+	if (!atomic_xchg(&ipa->suspend_ref, 1))
+		ipa_clock_get(ipa->clock);
+
+	/* Acknowledge/clear the suspend interrupt on all endpoints */
+	ipa_interrupt_suspend_clear_all(ipa->interrupt);
+}
+
+/* Remoteproc callbacks for SSR events: prepare, start, stop, unprepare */
+int ipa_ssr_prepare(struct rproc_subdev *subdev)
+{
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_prepare);
+
+int ipa_ssr_start(struct rproc_subdev *subdev)
+{
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_start);
+
+void ipa_ssr_stop(struct rproc_subdev *subdev, bool crashed)
+{
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_stop);
+
+void ipa_ssr_unprepare(struct rproc_subdev *subdev)
+{
+}
+EXPORT_SYMBOL_GPL(ipa_ssr_unprepare);
+
+/**
+ * ipa_setup() - Set up IPA hardware
+ * @ipa: IPA pointer
+ *
+ * Perform initialization that requires issuing immediate commands using
+ * the command TX endpoint. This cannot be run until early initialization
+ * (including loading GSI firmware) is complete.
+ */
+int ipa_setup(struct ipa *ipa)
+{
+	struct ipa_endpoint *rx_endpoint;
+	struct ipa_endpoint *tx_endpoint;
+	int ret;
+
+	dev_dbg(&ipa->pdev->dev, "%s() started\n", __func__);
+
+	ret = gsi_setup(&ipa->gsi);
+	if (ret)
+		return ret;
+
+	ipa->interrupt = ipa_interrupt_setup(ipa);
+	if (IS_ERR(ipa->interrupt)) {
+		ret = PTR_ERR(ipa->interrupt);
+		goto err_gsi_teardown;
+	}
+	ipa_interrupt_add(ipa->interrupt, IPA_INTERRUPT_TX_SUSPEND,
+			  ipa_suspend_handler);
+
+	ipa_uc_setup(ipa);
+
+	ipa_endpoint_setup(ipa);
+
+	/* We need to use the AP command TX endpoint to perform other
+	 * initialization, so we set that up first.
+	 */
+	ret = ipa_endpoint_enable_one(ipa->command_endpoint);
+	if (ret)
+		goto err_endpoint_teardown;
+
+	ret = ipa_smem_setup(ipa);
+	if (ret)
+		goto err_command_disable;
+
+	ret = ipa_route_setup(ipa);
+	if (ret)
+		goto err_smem_teardown;
+
+	ret = ipa_filter_setup(ipa);
+	if (ret)
+		goto err_route_teardown;
+
+	ret = ipa_endpoint_enable_one(ipa->default_endpoint);
+	if (ret)
+		goto err_filter_teardown;
+
+	rx_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_MODEM_RX];
+	tx_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_MODEM_TX];
+	ipa->modem_netdev = ipa_netdev_setup(ipa, rx_endpoint, tx_endpoint);
+	if (IS_ERR(ipa->modem_netdev)) {
+		ret = PTR_ERR(ipa->modem_netdev);
+		goto err_default_disable;
+	}
+
+	ipa->setup_complete = 1;
+
+	dev_info(&ipa->pdev->dev, "IPA driver setup completed successfully\n");
+
+	return 0;
+
+err_default_disable:
+	ipa_endpoint_disable_one(ipa->default_endpoint);
+err_filter_teardown:
+	ipa_filter_teardown(ipa);
+err_route_teardown:
+	ipa_route_teardown(ipa);
+err_smem_teardown:
+	ipa_smem_teardown(ipa);
+err_command_disable:
+	ipa_endpoint_disable_one(ipa->command_endpoint);
+err_endpoint_teardown:
+	ipa_endpoint_teardown(ipa);
+	ipa_uc_teardown(ipa);
+	ipa_interrupt_remove(ipa->interrupt, IPA_INTERRUPT_TX_SUSPEND);
+	ipa_interrupt_teardown(ipa->interrupt);
+err_gsi_teardown:
+	gsi_teardown(&ipa->gsi);
+
+	return ret;
+}
+
+/**
+ * ipa_teardown() - Inverse of ipa_setup()
+ * @ipa: IPA pointer
+ */
+static void ipa_teardown(struct ipa *ipa)
+{
+	ipa_netdev_teardown(ipa->modem_netdev);
+	ipa_endpoint_disable_one(ipa->default_endpoint);
+	ipa_filter_teardown(ipa);
+	ipa_route_teardown(ipa);
+	ipa_smem_teardown(ipa);
+	ipa_endpoint_disable_one(ipa->command_endpoint);
+	ipa_endpoint_teardown(ipa);
+	ipa_uc_teardown(ipa);
+	ipa_interrupt_remove(ipa->interrupt, IPA_INTERRUPT_TX_SUSPEND);
+	ipa_interrupt_teardown(ipa->interrupt);
+	gsi_teardown(&ipa->gsi);
+}
+
+/**
+ * ipa_hardware_config() - Primitive hardware initialization
+ * @ipa: IPA pointer
+ */
+static void ipa_hardware_config(struct ipa *ipa)
+{
+	u32 val;
+
+	/* SDM845 has IPA version 3.5.1 */
+	val = IPA_BCR_REG_VAL;
+	iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET);
+
+	val = u32_encode_bits(8, GEN_QMB_0_MAX_WRITES_FMASK);
+	val |= u32_encode_bits(4, GEN_QMB_1_MAX_WRITES_FMASK);
+	iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_WRITES_OFFSET);
+
+	val = u32_encode_bits(8, GEN_QMB_0_MAX_READS_FMASK);
+	val |= u32_encode_bits(12, GEN_QMB_1_MAX_READS_FMASK);
+	iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_READS_OFFSET);
+}
+
+/**
+ * ipa_hardware_deconfig() - Inverse of ipa_hardware_config()
+ * @ipa: IPA pointer
+ *
+ * This restores the power-on reset values (even if they aren't different).
+ */
+static void ipa_hardware_deconfig(struct ipa *ipa)
+{
+	/* Values we program above are the same as the power-on reset values */
+}
+
+static void ipa_resource_config_src_one(struct ipa *ipa,
+					const struct ipa_resource_src *resource)
+{
+	u32 offset = IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET;
+	u32 stride = IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_STRIDE;
+	enum ipa_resource_type_src n = resource->type;
+	const struct ipa_resource_limits *xlimits;
+	const struct ipa_resource_limits *ylimits;
+	u32 val;
+
+	xlimits = &resource->limits[IPA_RESOURCE_GROUP_LWA_DL];
+	ylimits = &resource->limits[IPA_RESOURCE_GROUP_UL_DL];
+
+	val = u32_encode_bits(xlimits->min, X_MIN_LIM_FMASK);
+	val |= u32_encode_bits(xlimits->max, X_MAX_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->min, Y_MIN_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->max, Y_MAX_LIM_FMASK);
+
+	iowrite32(val, ipa->reg_virt + offset + n * stride);
+}
+
+static void ipa_resource_config_dst_one(struct ipa *ipa,
+					const struct ipa_resource_dst *resource)
+{
+	u32 offset = IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET;
+	u32 stride = IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_STRIDE;
+	enum ipa_resource_type_dst n = resource->type;
+	const struct ipa_resource_limits *xlimits;
+	const struct ipa_resource_limits *ylimits;
+	u32 val;
+
+	xlimits = &resource->limits[IPA_RESOURCE_GROUP_LWA_DL];
+	ylimits = &resource->limits[IPA_RESOURCE_GROUP_UL_DL];
+
+	val = u32_encode_bits(xlimits->min, X_MIN_LIM_FMASK);
+	val |= u32_encode_bits(xlimits->max, X_MAX_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->min, Y_MIN_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->max, Y_MAX_LIM_FMASK);
+
+	iowrite32(val, ipa->reg_virt + offset + n * stride);
+}
+
+static void
+ipa_resource_config(struct ipa *ipa, const struct ipa_resource_data *data)
+{
+	const struct ipa_resource_src *resource_src;
+	const struct ipa_resource_dst *resource_dst;
+	u32 i;
+
+	resource_src = data->resource_src;
+	resource_dst = data->resource_dst;
+
+	for (i = 0; i < data->resource_src_count; i++)
+		ipa_resource_config_src_one(ipa, &resource_src[i]);
+
+	for (i = 0; i < data->resource_dst_count; i++)
+		ipa_resource_config_dst_one(ipa, &resource_dst[i]);
+}
+
+static void ipa_resource_deconfig(struct ipa *ipa)
+{
+	/* Nothing to do */
+}
+
+static void ipa_idle_indication_cfg(struct ipa *ipa,
+				    u32 enter_idle_debounce_thresh,
+				    bool const_non_idle_enable)
+{
+	u32 val;
+
+	val = u32_encode_bits(enter_idle_debounce_thresh,
+			      ENTER_IDLE_DEBOUNCE_THRESH_FMASK);
+	if (const_non_idle_enable)
+		val |= CONST_NON_IDLE_ENABLE_FMASK;
+
+	iowrite32(val, ipa->reg_virt + IPA_REG_IDLE_INDICATION_CFG_OFFSET);
+}
+
+/**
+ * ipa_dcd_config() - Enable dynamic clock division on IPA
+ * @ipa: IPA pointer
+ *
+ * Configures when the IPA signals it is idle to the global clock
+ * controller, which can respond by scaling down the clock to
+ * save power.
+ */
+static void ipa_dcd_config(struct ipa *ipa)
+{
+	/* Recommended values for IPA 3.5 according to the IPA HPG */
+	ipa_idle_indication_cfg(ipa, 256, false);
+}
+
+static void ipa_dcd_deconfig(struct ipa *ipa)
+{
+	/* Power-on reset values */
+	ipa_idle_indication_cfg(ipa, 0, true);
+}
+
+/**
+ * ipa_config() - Configure IPA hardware
+ * @ipa: IPA pointer
+ * @data: IPA configuration data
+ *
+ * Perform initialization requiring the IPA clock to be enabled.
+ */
+static int ipa_config(struct ipa *ipa, const struct ipa_data *data)
+{
+	u32 val;
+	int ret;
+
+	/* Get a clock reference to allow initialization. This reference
+	 * is held after initialization completes, and won't get dropped
+	 * unless/until a system suspend request arrives.
+	 */
+	atomic_set(&ipa->suspend_ref, 1);
+	ipa_clock_get(ipa->clock);
+
+	ipa_hardware_config(ipa);
+
+	/* Ensure we support the number of endpoints supplied by hardware */
+	val = ioread32(ipa->reg_virt + IPA_REG_ENABLED_PIPES_OFFSET);
+	if (val > IPA_ENDPOINT_MAX) {
+		ret = -EINVAL;
+		goto err_hardware_deconfig;
+	}
+
+	ret = ipa_smem_config(ipa);
+	if (ret)
+		goto err_hardware_deconfig;
+
+	/* Assign resource limitations to each group */
+	ipa_resource_config(ipa, data->resource_data);
+
+	/* Note that enabling dynamic clock division must not be
+	 * attempted for IPA hardware versions prior to 3.5.
+ */ + ipa_dcd_config(ipa); + + return 0; + +err_hardware_deconfig: + ipa_hardware_deconfig(ipa); + ipa_clock_put(ipa->clock); + + return ret; +} + +/** + * ipa_deconfig() - Inverse of ipa_config() + * @ipa: IPA pointer + */ +static void ipa_deconfig(struct ipa *ipa) +{ + ipa_dcd_deconfig(ipa); + ipa_resource_deconfig(ipa); + ipa_smem_deconfig(ipa); + ipa_hardware_deconfig(ipa); + + ipa_clock_put(ipa->clock); +} + +static int ipa_firmware_load(struct device *dev) +{ + const struct firmware *fw; + struct device_node *node; + struct resource res; + phys_addr_t phys; + ssize_t size; + void *virt; + int ret; + + node = of_parse_phandle(dev->of_node, "memory-region", 0); + if (!node) { + dev_err(dev, "memory-region not specified\n"); + return -EINVAL; + } + + ret = of_address_to_resource(node, 0, &res); + if (ret) + return ret; + + ret = request_firmware(&fw, IPA_FWS_PATH, dev); + if (ret) + return ret; + + phys = res.start; + size = (size_t)resource_size(&res); + virt = memremap(phys, size, MEMREMAP_WC); + if (!virt) { + ret = -ENOMEM; + goto out_release_firmware; + } + + ret = qcom_mdt_load(dev, fw, IPA_FWS_PATH, IPA_PAS_ID, + virt, phys, size, NULL); + if (!ret) + ret = qcom_scm_pas_auth_and_reset(IPA_PAS_ID); + + memunmap(virt); +out_release_firmware: + release_firmware(fw); + + return ret; +} + +static const struct of_device_id ipa_match[] = { + { + .compatible = "qcom,sdm845-ipa", + .data = &ipa_data_sdm845, + }, + { }, +}; + +/** + * ipa_probe() - IPA platform driver probe function + * @pdev: Platform device pointer + * + * @Return: 0 if successful, or a negative error code (possibly + * EPROBE_DEFER) + * + * This is the main entry point for the IPA driver. When successful, it + * initializes the IPA hardware for use. + * + * Initialization proceeds in several stages. The "init" stage involves + * activities that can be performed without access to the IPA hardware. + * The "config" stage requires the IPA clock to be active so IPA registers + * can be accessed, but does not require access to the GSI layer. The + * "setup" stage requires access to GSI, and includes initialization that's + * performed by issuing IPA immediate commands. + */ +static int ipa_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + const struct ipa_data *data; + struct ipa *ipa; + bool modem_init; + int ret; + + /* We assume we're working on 64-bit hardware */ + BUILD_BUG_ON(!IS_ENABLED(CONFIG_64BIT)); + BUILD_BUG_ON(ARCH_DMA_MINALIGN % IPA_TABLE_ALIGN); + + data = of_device_get_match_data(dev); + + modem_init = of_property_read_bool(dev->of_node, "modem-init"); + + /* If we need Trust Zone, make sure it's ready */ + if (!modem_init) + if (!qcom_scm_is_available()) + return -EPROBE_DEFER; + + ipa = kzalloc(sizeof(*ipa), GFP_KERNEL); + if (!ipa) + return -ENOMEM; + ipa->pdev = pdev; + dev_set_drvdata(dev, ipa); + + /* Initialize the clock and interconnects early. They might + * not be ready when we're probed, so might return -EPROBE_DEFER.
+ */ + atomic_set(&ipa->suspend_ref, 0); + + ipa->clock = ipa_clock_init(ipa); + if (IS_ERR(ipa->clock)) { + ret = PTR_ERR(ipa->clock); + goto err_free_ipa; + } + + ret = ipa_mem_init(ipa); + if (ret) + goto err_clock_exit; + + ret = gsi_init(&ipa->gsi, pdev, data->endpoint_data_count, + data->endpoint_data); + if (ret) + goto err_mem_exit; + + ipa->smp2p = ipa_smp2p_init(ipa, modem_init); + if (IS_ERR(ipa->smp2p)) { + ret = PTR_ERR(ipa->smp2p); + goto err_gsi_exit; + } + + ret = ipa_endpoint_init(ipa, data->endpoint_data_count, + data->endpoint_data); + if (ret) + goto err_smp2p_exit; + ipa->command_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_COMMAND_TX]; + ipa->default_endpoint = &ipa->endpoint[IPA_ENDPOINT_AP_LAN_RX]; + + /* Create a wakeup source. */ + wakeup_source_init(&ipa->wakeup, "ipa"); + + /* Proceed to real initialization */ + ret = ipa_config(ipa, data); + if (ret) + goto err_endpoint_exit; + + dev_info(dev, "IPA driver initialized"); + + /* If the modem is verifying and loading firmware, we're + * done. We will receive an SMP2P interrupt when it is OK + * to proceed with the setup phase (involving issuing + * immediate commands after GSI is initialized). + */ + if (modem_init) + return 0; + + /* Otherwise we need to load the firmware and have Trust + * Zone validate and install it. If that succeeds we can + * proceed with setup. + */ + ret = ipa_firmware_load(dev); + if (ret) + goto err_deconfig; + + ret = ipa_setup(ipa); + if (ret) + goto err_deconfig; + + return 0; + +err_deconfig: + ipa_deconfig(ipa); +err_endpoint_exit: + wakeup_source_remove(&ipa->wakeup); + ipa_endpoint_exit(ipa); +err_smp2p_exit: + ipa_smp2p_exit(ipa->smp2p); +err_gsi_exit: + gsi_exit(&ipa->gsi); +err_mem_exit: + ipa_mem_exit(ipa); +err_clock_exit: + ipa_clock_exit(ipa->clock); +err_free_ipa: + kfree(ipa); + + return ret; +} + +static int ipa_remove(struct platform_device *pdev) +{ + struct ipa *ipa = dev_get_drvdata(&pdev->dev); + + ipa_smp2p_disable(ipa->smp2p); + if (ipa->setup_complete) + ipa_teardown(ipa); + + ipa_deconfig(ipa); + wakeup_source_remove(&ipa->wakeup); + ipa_endpoint_exit(ipa); + ipa_smp2p_exit(ipa->smp2p); + ipa_mem_exit(ipa); + ipa_clock_exit(ipa->clock); + kfree(ipa); + + return 0; +} + +/** + * ipa_suspend() - Power management system suspend callback + * @dev: IPA device structure + * + * Return: Zero + * + * Called by the PM framework when a system suspend operation is invoked. + */ +int ipa_suspend(struct device *dev) +{ + struct ipa *ipa = dev_get_drvdata(dev); + + ipa_clock_put(ipa->clock); + atomic_set(&ipa->suspend_ref, 0); + + return 0; +} + +/** + * ipa_resume() - Power management system resume callback + * @dev: IPA device structure + * + * Return: Always returns 0 + * + * Called by the PM framework when a system resume operation is invoked. + */ +int ipa_resume(struct device *dev) +{ + struct ipa *ipa = dev_get_drvdata(dev); + + /* This clock reference will keep the IPA out of suspend + * until we get a power management suspend request. 
+ */ + atomic_set(&ipa->suspend_ref, 1); + ipa_clock_get(ipa->clock); + + return 0; +} + +static const struct dev_pm_ops ipa_pm_ops = { + .suspend_noirq = ipa_suspend, + .resume_noirq = ipa_resume, +}; + +static struct platform_driver ipa_driver = { + .probe = ipa_probe, + .remove = ipa_remove, + .driver = { + .name = "ipa", + .owner = THIS_MODULE, + .pm = &ipa_pm_ops, + .of_match_table = ipa_match, + }, +}; + +module_platform_driver(ipa_driver); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("Qualcomm IP Accelerator device driver"); diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h new file mode 100644 index 000000000000..8d04db6f7b00 --- /dev/null +++ b/drivers/net/ipa/ipa_reg.h @@ -0,0 +1,279 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ +#ifndef _IPA_REG_H_ +#define _IPA_REG_H_ + +#include + +/** + * DOC: IPA Registers + * + * IPA registers are located within the "ipa" address space defined by + * Device Tree. The offset of each register within that space is specified + * by symbols defined below. The address space is mapped to virtual memory + * space in ipa_mem_init(). All IPA registers are 32 bits wide. + * + * Certain register types occur in multiple instances. For example, each + * IPA endpoint has a set of registers defining its configuration. The + * offset to an endpoint's set of registers is computed based on a "base" + * offset plus an additional "stride" offset that's dependent on the + * endpoint's ID. For such registers, the offset is computed by a + * function-like macro that takes a parameter used in the computation. + * + * The offset of a register dependent on execution environment is computed + * by a macro that is supplied a parameter "ee". The "ee" value is a member + * of the gsi_ee enumerated type. + * + * The offset of a register dependent on endpoint id is computed by a macro + * that is supplied a parameter "ep". The "ep" value must be less than + * IPA_ENDPOINT_MAX. + * + * The offset of registers related to hashed filter and router tables is + * computed by a macro that is supplied a parameter "er". The "er" represents + * an endpoint ID for filters, or a route ID for routes. For filters, the + * endpoint ID must be less than IPA_ENDPOINT_MAX, but is further restricted + * because not all endpoints support filtering. For routes, the route ID + * must be less than IPA_SMEM_RT_COUNT. + * + * Some registers encode multiple fields within them. For these, each field + * has a symbol below defining a mask that specifies both the position and + * width of the field within its register.
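+ * + * As an example, using definitions that appear below, the ENDP_INIT_CTRL + * register for endpoint 5 is at offset 0x00000800 + 5 * 0x0070 = 0x00000a30, + * which is what IPA_REG_ENDP_INIT_CTRL_N_OFFSET(5) evaluates to.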
+ */ + +#define IPA_REG_ENABLED_PIPES_OFFSET 0x00000038 + +#define IPA_REG_ROUTE_OFFSET 0x00000048 +#define ROUTE_DIS_FMASK GENMASK(0, 0) +#define ROUTE_DEF_PIPE_FMASK GENMASK(5, 1) +#define ROUTE_DEF_HDR_TABLE_FMASK GENMASK(6, 6) +#define ROUTE_DEF_HDR_OFST_FMASK GENMASK(16, 7) +#define ROUTE_FRAG_DEF_PIPE_FMASK GENMASK(21, 17) +#define ROUTE_DEF_RETAIN_HDR_FMASK GENMASK(24, 24) + +#define IPA_REG_SHARED_MEM_SIZE_OFFSET 0x00000054 +#define SHARED_MEM_SIZE_FMASK GENMASK(15, 0) +#define SHARED_MEM_BADDR_FMASK GENMASK(31, 16) + +#define IPA_REG_QSB_MAX_WRITES_OFFSET 0x00000074 +#define GEN_QMB_0_MAX_WRITES_FMASK GENMASK(3, 0) +#define GEN_QMB_1_MAX_WRITES_FMASK GENMASK(7, 4) + +#define IPA_REG_QSB_MAX_READS_OFFSET 0x00000078 +#define GEN_QMB_0_MAX_READS_FMASK GENMASK(3, 0) +#define GEN_QMB_1_MAX_READS_FMASK GENMASK(7, 4) + +#define IPA_REG_STATE_AGGR_ACTIVE_OFFSET 0x0000010c + +#define IPA_REG_BCR_OFFSET 0x000001d0 + +#define IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET 0x000001e8 + +#define IPA_REG_AGGR_FORCE_CLOSE_OFFSET 0x000001ec +#define PIPE_BITMAP_FMASK GENMASK(19, 0) + +#define IPA_REG_IDLE_INDICATION_CFG_OFFSET 0x00000220 +#define ENTER_IDLE_DEBOUNCE_THRESH_FMASK GENMASK(15, 0) +#define CONST_NON_IDLE_ENABLE_FMASK GENMASK(16, 16) + +#define IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET 0x00000400 +#define IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_STRIDE 0x0020 +#define IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET 0x00000500 +#define IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_STRIDE 0x0020 +#define X_MIN_LIM_FMASK GENMASK(5, 0) +#define X_MAX_LIM_FMASK GENMASK(13, 8) +#define Y_MIN_LIM_FMASK GENMASK(21, 16) +#define Y_MAX_LIM_FMASK GENMASK(29, 24) + +#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \ + (0x00000800 + 0x0070 * (ep)) +#define ENDP_SUSPEND_FMASK GENMASK(0, 0) +#define ENDP_DELAY_FMASK GENMASK(1, 1) + +#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \ + (0x00000808 + 0x0070 * (ep)) +#define FRAG_OFFLOAD_EN_FMASK GENMASK(0, 0) +#define CS_OFFLOAD_EN_FMASK GENMASK(2, 1) +#define CS_METADATA_HDR_OFFSET_FMASK GENMASK(6, 3) +#define CS_GEN_QMB_MASTER_SEL_FMASK GENMASK(8, 8) + +#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \ + (0x00000810 + 0x0070 * (ep)) +#define HDR_LEN_FMASK GENMASK(5, 0) +#define HDR_OFST_METADATA_VALID_FMASK GENMASK(6, 6) +#define HDR_OFST_METADATA_FMASK GENMASK(12, 7) +#define HDR_ADDITIONAL_CONST_LEN_FMASK GENMASK(18, 13) +#define HDR_OFST_PKT_SIZE_VALID_FMASK GENMASK(19, 19) +#define HDR_OFST_PKT_SIZE_FMASK GENMASK(25, 20) +#define HDR_A5_MUX_FMASK GENMASK(26, 26) +#define HDR_LEN_INC_DEAGG_HDR_FMASK GENMASK(27, 27) +#define HDR_METADATA_REG_VALID_FMASK GENMASK(28, 28) + +#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \ + (0x00000814 + 0x0070 * (ep)) +#define HDR_ENDIANNESS_FMASK GENMASK(0, 0) +#define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK GENMASK(1, 1) +#define HDR_TOTAL_LEN_OR_PAD_FMASK GENMASK(2, 2) +#define HDR_PAYLOAD_LEN_INC_PADDING_FMASK GENMASK(3, 3) +#define HDR_TOTAL_LEN_OR_PAD_OFFSET_FMASK GENMASK(9, 4) +#define HDR_PAD_TO_ALIGNMENT_FMASK GENMASK(13, 10) + +#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(ep) \ + (0x00000818 + 0x0070 * (ep)) + +#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \ + (0x00000824 + 0x0070 * (ep)) +#define AGGR_EN_FMASK GENMASK(1, 0) +#define AGGR_TYPE_FMASK GENMASK(4, 2) +#define AGGR_BYTE_LIMIT_FMASK GENMASK(9, 5) +#define AGGR_TIME_LIMIT_FMASK GENMASK(14, 10) +#define AGGR_PKT_LIMIT_FMASK GENMASK(20, 15) +#define AGGR_SW_EOF_ACTIVE_FMASK GENMASK(21, 21) +#define AGGR_FORCE_CLOSE_FMASK GENMASK(22, 22) +#define AGGR_HARD_BYTE_LIMIT_ENABLE_FMASK 
GENMASK(24, 24) + +#define IPA_REG_ENDP_INIT_MODE_N_OFFSET(ep) \ + (0x00000820 + 0x0070 * (ep)) +#define MODE_FMASK GENMASK(2, 0) +#define DEST_PIPE_INDEX_FMASK GENMASK(8, 4) +#define BYTE_THRESHOLD_FMASK GENMASK(27, 12) +#define PIPE_REPLICATION_EN_FMASK GENMASK(28, 28) +#define PAD_EN_FMASK GENMASK(29, 29) +#define HDR_FTCH_DISABLE_FMASK GENMASK(30, 30) + +#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(ep) \ + (0x00000834 + 0x0070 * (ep)) +#define DEAGGR_HDR_LEN_FMASK GENMASK(5, 0) +#define PACKET_OFFSET_VALID_FMASK GENMASK(7, 7) +#define PACKET_OFFSET_LOCATION_FMASK GENMASK(13, 8) +#define MAX_PACKET_LEN_FMASK GENMASK(31, 16) + +#define IPA_REG_ENDP_INIT_SEQ_N_OFFSET(ep) \ + (0x0000083c + 0x0070 * (ep)) +#define HPS_SEQ_TYPE_FMASK GENMASK(3, 0) +#define DPS_SEQ_TYPE_FMASK GENMASK(7, 4) +#define HPS_REP_SEQ_TYPE_FMASK GENMASK(11, 8) +#define DPS_REP_SEQ_TYPE_FMASK GENMASK(15, 12) + +#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \ + (0x00000840 + 0x0070 * (ep)) +#define STATUS_EN_FMASK GENMASK(0, 0) +#define STATUS_ENDP_FMASK GENMASK(5, 1) +#define STATUS_LOCATION_FMASK GENMASK(8, 8) +#define STATUS_PKT_SUPPRESS_FMASK GENMASK(9, 9) + +/* "er" is either an endpoint id (for filters) or a route id (for routes) */ +#define IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(er) \ + (0x0000085c + 0x0070 * (er)) +#define FILTER_HASH_MSK_SRC_ID_FMASK GENMASK(0, 0) +#define FILTER_HASH_MSK_SRC_IP_FMASK GENMASK(1, 1) +#define FILTER_HASH_MSK_DST_IP_FMASK GENMASK(2, 2) +#define FILTER_HASH_MSK_SRC_PORT_FMASK GENMASK(3, 3) +#define FILTER_HASH_MSK_DST_PORT_FMASK GENMASK(4, 4) +#define FILTER_HASH_MSK_PROTOCOL_FMASK GENMASK(5, 5) +#define FILTER_HASH_MSK_METADATA_FMASK GENMASK(6, 6) +#define FILTER_HASH_UNDEFINED1_FMASK GENMASK(15, 7) +#define IPA_REG_ENDP_FILTER_HASH_MSK_ALL GENMASK(15, 0) + +#define ROUTER_HASH_MSK_SRC_ID_FMASK GENMASK(16, 16) +#define ROUTER_HASH_MSK_SRC_IP_FMASK GENMASK(17, 17) +#define ROUTER_HASH_MSK_DST_IP_FMASK GENMASK(18, 18) +#define ROUTER_HASH_MSK_SRC_PORT_FMASK GENMASK(19, 19) +#define ROUTER_HASH_MSK_DST_PORT_FMASK GENMASK(20, 20) +#define ROUTER_HASH_MSK_PROTOCOL_FMASK GENMASK(21, 21) +#define ROUTER_HASH_MSK_METADATA_FMASK GENMASK(22, 22) +#define ROUTER_HASH_UNDEFINED2_FMASK GENMASK(31, 23) +#define IPA_REG_ENDP_ROUTER_HASH_MSK_ALL GENMASK(31, 16) + +#define IPA_REG_IRQ_STTS_OFFSET \ + IPA_REG_IRQ_STTS_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_STTS_EE_N_OFFSET(ee) \ + (0x00003008 + 0x1000 * (ee)) + +#define IPA_REG_IRQ_EN_OFFSET \ + IPA_REG_IRQ_EN_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_EN_EE_N_OFFSET(ee) \ + (0x0000300c + 0x1000 * (ee)) + +#define IPA_REG_IRQ_CLR_OFFSET \ + IPA_REG_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_CLR_EE_N_OFFSET(ee) \ + (0x00003010 + 0x1000 * (ee)) + +#define IPA_REG_IRQ_UC_OFFSET \ + IPA_REG_IRQ_UC_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_UC_EE_N_OFFSET(ee) \ + (0x0000301c + 0x1000 * (ee)) + +#define IPA_REG_IRQ_SUSPEND_INFO_OFFSET \ + IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(ee) \ + (0x00003030 + 0x1000 * (ee)) + +#define IPA_REG_SUSPEND_IRQ_EN_OFFSET \ + IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(ee) \ + (0x00003034 + 0x1000 * (ee)) + +#define IPA_REG_SUSPEND_IRQ_CLR_OFFSET \ + IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP) +#define IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(ee) \ + (0x00003038 + 0x1000 * (ee)) + +/** enum ipa_cs_offload_en - checksum offload field in ENDP_INIT_CFG_N */ +enum ipa_cs_offload_en { + IPA_CS_OFFLOAD_NONE = 0, + 
IPA_CS_OFFLOAD_UL = 1, + IPA_CS_OFFLOAD_DL = 2, + IPA_CS_RSVD +}; + +/** enum ipa_aggr_en - aggregation enable field in ENDP_INIT_AGGR_N */ +enum ipa_aggr_en { + IPA_BYPASS_AGGR = 0, + IPA_ENABLE_AGGR = 1, + IPA_ENABLE_DEAGGR = 2, +}; + +/** enum ipa_aggr_type - aggregation type field in ENDP_INIT_AGGR_N */ +enum ipa_aggr_type { + IPA_MBIM_16 = 0, + IPA_HDLC = 1, + IPA_TLP = 2, + IPA_RNDIS = 3, + IPA_GENERIC = 4, + IPA_QCMAP = 6, +}; + +/** enum ipa_mode - mode field in ENDP_INIT_MODE_N */ +enum ipa_mode { + IPA_BASIC = 0, + IPA_ENABLE_FRAMING_HDLC = 1, + IPA_ENABLE_DEFRAMING_HDLC = 2, + IPA_DMA = 3, +}; + +/** + * enum ipa_seq_type - HPS and DPS sequencer type fields in ENDP_INIT_SEQ_N + * @IPA_SEQ_DMA_ONLY: only DMA is performed + * @IPA_SEQ_PKT_PROCESS_NO_DEC_UCP: + * packet processing + no decipher + microcontroller (Ethernet Bridging) + * @IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP: + * second packet processing pass + no decipher + microcontroller + * @IPA_SEQ_DMA_DEC: DMA + cipher/decipher + * @IPA_SEQ_DMA_COMP_DECOMP: DMA + compression/decompression + * @IPA_SEQ_INVALID: invalid sequencer type + */ +enum ipa_seq_type { + IPA_SEQ_DMA_ONLY = 0x00, + IPA_SEQ_PKT_PROCESS_NO_DEC_UCP = 0x02, + IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP = 0x04, + IPA_SEQ_DMA_DEC = 0x11, + IPA_SEQ_DMA_COMP_DECOMP = 0x20, + IPA_SEQ_INVALID = 0xff, +}; + +#endif /* _IPA_REG_H_ */ From patchwork Sun May 12 01:24:55 2019 X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 163971
From: Alex Elder To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org, benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com, linux-kernel@vger.kernel.org, Alex Elder Subject: [PATCH 05/18] soc: qcom: ipa: configuration data Date: Sat, 11 May 2019 20:24:55 -0500 Message-Id: <20190512012508.10608-6-elder@linaro.org> In-Reply-To: <20190512012508.10608-1-elder@linaro.org> References: <20190512012508.10608-1-elder@linaro.org> This patch defines configuration data that is used to specify some of the details of IPA hardware supported by the driver. It is built as Device Tree match data, discovered at boot time. Initially the driver only supports the Qualcomm SDM845 SoC. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_data-sdm845.c | 245 +++++++++++++++++++++++++++ drivers/net/ipa/ipa_data.h | 267 ++++++++++++++++++++++++++++++ 2 files changed, 512 insertions(+) create mode 100644 drivers/net/ipa/ipa_data-sdm845.c create mode 100644 drivers/net/ipa/ipa_data.h -- 2.20.1 diff --git a/drivers/net/ipa/ipa_data-sdm845.c b/drivers/net/ipa/ipa_data-sdm845.c new file mode 100644 index 000000000000..62c0f25f5161 --- /dev/null +++ b/drivers/net/ipa/ipa_data-sdm845.c @@ -0,0 +1,245 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019 Linaro Ltd. + */ + +#include + +#include "gsi.h" +#include "ipa_data.h" +#include "ipa_endpoint.h" + +/* Differentiate Boolean from numerical options */ +#define NO 0 +#define YES 1 + +/* Endpoint configuration for the SDM845 SoC.
*/ +static const struct gsi_ipa_endpoint_data gsi_ipa_endpoint_data[] = { + { + .ee_id = GSI_EE_AP, + .channel_id = 4, + .endpoint_id = IPA_ENDPOINT_AP_COMMAND_TX, + .toward_ipa = YES, + .channel = { + .tlv_count = 20, + .wrr_priority = YES, + .tre_count = 256, + .event_count = 512, + }, + .endpoint = { + .seq_type = IPA_SEQ_DMA_ONLY, + .config = { + .dma_mode = YES, + .dma_endpoint = IPA_ENDPOINT_AP_LAN_RX, + }, + }, + }, + { + .ee_id = GSI_EE_AP, + .channel_id = 5, + .endpoint_id = IPA_ENDPOINT_AP_LAN_RX, + .toward_ipa = NO, + .channel = { + .tlv_count = 8, + .tre_count = 256, + .event_count = 256, + }, + .endpoint = { + .seq_type = IPA_SEQ_INVALID, + .config = { + .checksum = YES, + .aggregation = YES, + .status_enable = YES, + .rx = { + .pad_align = ilog2(sizeof(u32)), + }, + }, + }, + }, + { + .ee_id = GSI_EE_AP, + .channel_id = 3, + .endpoint_id = IPA_ENDPOINT_AP_MODEM_TX, + .toward_ipa = YES, + .channel = { + .tlv_count = 16, + .tre_count = 512, + .event_count = 512, + }, + .endpoint = { + .support_flt = YES, + .seq_type = + IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP, + .config = { + .checksum = YES, + .qmap = YES, + .status_enable = YES, + .tx = { + .delay = YES, + .status_endpoint = + IPA_ENDPOINT_MODEM_AP_RX, + }, + }, + }, + }, + { + .ee_id = GSI_EE_AP, + .channel_id = 6, + .endpoint_id = IPA_ENDPOINT_AP_MODEM_RX, + .toward_ipa = NO, + .channel = { + .tlv_count = 8, + .tre_count = 256, + .event_count = 256, + }, + .endpoint = { + .seq_type = IPA_SEQ_INVALID, + .config = { + .checksum = YES, + .qmap = YES, + .aggregation = YES, + .rx = { + .aggr_close_eof = YES, + }, + }, + }, + }, + { + .ee_id = GSI_EE_MODEM, + .channel_id = 1, + .endpoint_id = IPA_ENDPOINT_MODEM_COMMAND_TX, + .toward_ipa = YES, + .endpoint = { + .seq_type = IPA_SEQ_PKT_PROCESS_NO_DEC_UCP, + }, + }, + { + .ee_id = GSI_EE_MODEM, + .channel_id = 0, + .endpoint_id = IPA_ENDPOINT_MODEM_LAN_TX, + .toward_ipa = YES, + .endpoint = { + .support_flt = YES, + }, + }, + { + .ee_id = GSI_EE_MODEM, + .channel_id = 3, + .endpoint_id = IPA_ENDPOINT_MODEM_LAN_RX, + .toward_ipa = NO, + }, + { + .ee_id = GSI_EE_MODEM, + .channel_id = 4, + .endpoint_id = IPA_ENDPOINT_MODEM_AP_TX, + .toward_ipa = YES, + .endpoint = { + .support_flt = YES, + }, + }, + { + .ee_id = GSI_EE_MODEM, + .channel_id = 2, + .endpoint_id = IPA_ENDPOINT_MODEM_AP_RX, + .toward_ipa = NO, + }, +}; + +static const struct ipa_resource_src ipa_resource_src[] = { + { + .type = IPA_RESOURCE_TYPE_SRC_PKT_CONTEXTS, + .limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 1, + .max = 63, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 1, + .max = 63, + }, + }, + { + .type = IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_LISTS, + .limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 10, + .max = 10, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 10, + .max = 10, + }, + }, + { + .type = IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_BUFF, + .limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 12, + .max = 12, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 14, + .max = 14, + }, + }, + { + .type = IPA_RESOURCE_TYPE_SRC_HPS_DMARS, + .limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 0, + .max = 63, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 0, + .max = 63, + }, + }, + { + .type = IPA_RESOURCE_TYPE_SRC_ACK_ENTRIES, + .limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 14, + .max = 14, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 20, + .max = 20, + }, + }, +}; + +static const struct ipa_resource_dst ipa_resource_dst[] = { + { + .type = IPA_RESOURCE_TYPE_DST_DATA_SECTORS, + 
.limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 4, + .max = 4, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 4, + .max = 4, + }, + }, + { + .type = IPA_RESOURCE_TYPE_DST_DPS_DMARS, + .limits[IPA_RESOURCE_GROUP_LWA_DL] = { + .min = 2, + .max = 63, + }, + .limits[IPA_RESOURCE_GROUP_UL_DL] = { + .min = 1, + .max = 63, + }, + }, +}; + +/* Resource configuration for the SDM845 SoC. */ +static const struct ipa_resource_data ipa_resource_data = { + .resource_src = ipa_resource_src, + .resource_src_count = ARRAY_SIZE(ipa_resource_src), + .resource_dst = ipa_resource_dst, + .resource_dst_count = ARRAY_SIZE(ipa_resource_dst), +}; + +/* Configuration data for the SDM845 SoC. */ +const struct ipa_data ipa_data_sdm845 = { + .endpoint_data = gsi_ipa_endpoint_data, + .endpoint_data_count = ARRAY_SIZE(gsi_ipa_endpoint_data), + .resource_data = &ipa_resource_data, +}; diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h new file mode 100644 index 000000000000..f7669f73efc3 --- /dev/null +++ b/drivers/net/ipa/ipa_data.h @@ -0,0 +1,267 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019 Linaro Ltd. + */ +#ifndef _IPA_DATA_H_ +#define _IPA_DATA_H_ + +#include + +#include "ipa_endpoint.h" + +/** + * DOC: IPA/GSI Configuration Data + * + * Boot-time configuration data is used to define the configuration of the + * IPA and GSI resources to use for a given platform. This data is supplied + * via the Device Tree match table, associated with a particular compatible + * string. The data defines information about resources, endpoints, and + * channels. For endpoints and channels, the configuration data defines how + * these hardware entities are initially configured, but in almost all cases, + * this configuration never changes. + * + * Resources are data structures used internally by the IPA hardware. The + * configuration data defines the number (or limits of the number) of various + * types of these resources. + * + * Endpoint configuration data defines properties of both IPA endpoints and + * GSI channels. A channel is a GSI construct, and represents a single + * communication path between the IPA and a particular execution environment + * (EE), such as the AP or Modem. Each EE has a set of channels associated + * with it, and each channel has an ID unique for that EE. Only GSI channels + * associated with the AP are of concern to this driver. + * + * An endpoint is an IPA construct representing a single channel anywhere + * within the system. As such, an IPA endpoint ID maps directly to an + * (EE, channel_id) pair. Generally, this driver is concerned only with + * endpoints associated with the AP; however, this will change when support + * for routing (etc.) is added. IPA endpoint and GSI channel configuration + * data are defined together, establishing the endpoint_id->(EE, channel_id) + * mapping. + * + * Endpoint configuration data consists of three parts: properties that + * are common to IPA and GSI (EE ID, channel ID, endpoint ID, and direction); + * properties associated with the GSI channel; and properties associated with + * the IPA endpoint.
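+ * + * As an example, in the SDM845 configuration data in this patch, endpoint + * IPA_ENDPOINT_AP_COMMAND_TX maps to (GSI_EE_AP, channel 4), while + * IPA_ENDPOINT_MODEM_LAN_RX maps to (GSI_EE_MODEM, channel 3).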
+ */ + +/** + * struct gsi_channel_data - GSI channel configuration data + * @tlv_count: number of entries in channel's TLV FIFO + * @wrr_priority: whether channel gets priority (AP command TX only) + * @tre_count: number of TREs in the channel ring + * @event_count: number of slots in the associated event ring + * + * A GSI channel is a unidirectional means of transferring data to or from + * (and through) the IPA. A GSI channel has a fixed number of "transfer + * elements" (TREs) that specify individual commands. Commands are + * provided to a GSI channel, and when they complete the GSI generates + * an event (and an interrupt) to signal their completion. These event + * structures are managed in a fixed-size event ring. + * + * Each GSI channel is fed by a FIFO of type/length/value (TLV) structures, + * and the number of entries in this FIFO limits the number of TREs that can + * be included in a single transaction. + * + * The GSI does weighted round-robin servicing of its channels, and it's + * possible to adjust a channel's priority of service. Only the AP command + * TX channel specifies that it should get priority. + */ +struct gsi_channel_data { + u32 tlv_count; + + u32 wrr_priority; + u32 tre_count; + u32 event_count; +}; + +/** + * struct ipa_endpoint_tx_data - configuration data for TX endpoints + * @delay: whether endpoint starts in delay mode + * @status_endpoint: endpoint to which status elements are sent + * + * Delay mode prevents an endpoint from transmitting anything, even if + * commands have been presented to the hardware. Once the endpoint exits + * delay mode, queued transfer commands are sent. + * + * The @status_endpoint is only valid if the endpoint's @status_enable + * flag is set. + */ +struct ipa_endpoint_tx_data { + u32 delay; + enum ipa_endpoint_id status_endpoint; +}; + +/** + * struct ipa_endpoint_rx_data - configuration data for RX endpoints + * @pad_align: power-of-2 boundary to which packet payload is aligned + * @aggr_close_eof: whether aggregation closes on end-of-frame + * + * With each packet it transfers, the IPA hardware can perform certain + * transformations of its packet data. One of these is adding pad bytes + * to the end of the packet data so the result ends on a power-of-2 boundary. + * + * It is also able to aggregate multiple packets into a single receive buffer. + * Aggregation is "open" while a buffer is being filled, and "closes" when + * certain criteria are met. One of those criteria is the sender indicating + * a "frame" consisting of several transfers has ended.
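+ * + * As an example, the SDM845 AP LAN RX endpoint defined in this patch uses + * a @pad_align of ilog2(sizeof(u32)), so its received payloads are padded + * to a 4-byte boundary.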
+ */ +struct ipa_endpoint_rx_data { + u32 pad_align; + u32 aggr_close_eof; +}; + +/** + * struct ipa_endpoint_config_data - IPA endpoint hardware configuration + * @checksum: whether checksum offload is enabled + * @qmap: whether endpoint uses QMAP protocol + * @aggregation: whether endpoint supports aggregation + * @dma_mode: whether endpoint operates in DMA mode + * @dma_endpoint: peer endpoint, if operating in DMA mode + * @status_enable: whether status elements are generated for endpoint + * @tx: TX-specific endpoint information (see above) + * @rx: RX-specific endpoint information (see above) + */ +struct ipa_endpoint_config_data { + u32 checksum; + u32 qmap; + u32 aggregation; + u32 dma_mode; + enum ipa_endpoint_id dma_endpoint; + u32 status_enable; + union { + struct ipa_endpoint_tx_data tx; + struct ipa_endpoint_rx_data rx; + }; +}; + +/** + * struct ipa_endpoint_data - IPA endpoint configuration data + * @support_flt: whether endpoint supports filtering + * @seq_type: hardware sequencer type used for endpoint + * @config: hardware configuration (see above) + * + * Not all endpoints support the IPA filtering capability. A filter table + * defines the filters to apply for those endpoints that support it. The + * AP is responsible for initializing this table, and it must include entries + * for non-AP endpoints. For this reason we define *all* endpoints used + * in the system, and indicate whether they support filtering. + * + * The remaining endpoint configuration data applies only to AP endpoints. + * The IPA hardware is implemented by sequencers, and the AP must program + * the type(s) of these sequencers at initialization time. The remaining + * endpoint configuration data (struct ipa_endpoint_config_data) is defined + * above. + */ +struct ipa_endpoint_data { + u32 support_flt; + /* The rest are specified only for AP endpoints */ + enum ipa_seq_type seq_type; + struct ipa_endpoint_config_data config; +}; + +/** + * struct gsi_ipa_endpoint_data - GSI channel/IPA endpoint data + * @ee_id: GSI execution environment ID + * @channel_id: GSI channel ID + * @endpoint_id: IPA endpoint ID + * @toward_ipa: direction of data transfer + * @channel: GSI channel configuration data (see above) + * @endpoint: IPA endpoint configuration data (see above) + */ +struct gsi_ipa_endpoint_data { + u32 ee_id; + u32 channel_id; + enum ipa_endpoint_id endpoint_id; + u32 toward_ipa; + + struct gsi_channel_data channel; + struct ipa_endpoint_data endpoint; +}; + +/** enum ipa_resource_group - IPA resource group */ +enum ipa_resource_group { + IPA_RESOURCE_GROUP_LWA_DL, /* currently not used */ + IPA_RESOURCE_GROUP_UL_DL, + IPA_RESOURCE_GROUP_MAX, +}; + +/** enum ipa_resource_type_src - source resource types */ +enum ipa_resource_type_src { + IPA_RESOURCE_TYPE_SRC_PKT_CONTEXTS, + IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_LISTS, + IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_BUFF, + IPA_RESOURCE_TYPE_SRC_HPS_DMARS, + IPA_RESOURCE_TYPE_SRC_ACK_ENTRIES, +}; + +/** enum ipa_resource_type_dst - destination resource types */ +enum ipa_resource_type_dst { + IPA_RESOURCE_TYPE_DST_DATA_SECTORS, + IPA_RESOURCE_TYPE_DST_DPS_DMARS, +}; + +/** + * struct ipa_resource_limits - minimum and maximum resource counts + * @min: minimum number of resources of a given type + * @max: maximum number of resources of a given type + */ +struct ipa_resource_limits { + u32 min; + u32 max; +}; + +/** + * struct ipa_resource_src - source endpoint group resource usage + * @type: source group resource type + * @limits: array of limits to use for each resource group + */ +struct ipa_resource_src { + enum
ipa_resource_type_src type; + struct ipa_resource_limits limits[IPA_RESOURCE_GROUP_MAX]; +}; + +/** + * struct ipa_resource_dst - destination endpoint group resource usage + * @type: destination group resource type + * @limits: array of limits to use for each resource group + */ +struct ipa_resource_dst { + enum ipa_resource_type_dst type; + struct ipa_resource_limits limits[IPA_RESOURCE_GROUP_MAX]; +}; + +/** + * struct ipa_resource_data - IPA resource configuration data + * @resource_src: source endpoint group resources + * @resource_src_count: number of entries in the resource_src array + * @resource_dst: destination endpoint group resources + * @resource_dst_count: number of entries in the resource_dst array + * + * In order to manage quality of service between endpoints, certain resources + * required for operation are allocated to groups of endpoints. Generally + * this information is invisible to the AP, but the AP is responsible for + * programming it at initialization time, so we specify it here. + */ +struct ipa_resource_data { + const struct ipa_resource_src *resource_src; + u32 resource_src_count; + const struct ipa_resource_dst *resource_dst; + u32 resource_dst_count; +}; + +/** + * struct ipa_data - combined IPA/GSI configuration data + * @resource_data: IPA resource configuration data + * @endpoint_data: IPA endpoint/GSI channel data + * @endpoint_data_count: number of entries in endpoint_data array + */ +struct ipa_data { + const struct ipa_resource_data *resource_data; + const struct gsi_ipa_endpoint_data *endpoint_data; + u32 endpoint_data_count; /* # entries in endpoint_data[] */ +}; + +extern const struct ipa_data ipa_data_sdm845; + +#endif /* _IPA_DATA_H_ */ From patchwork Sun May 12 01:24:56 2019 X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 163984
From: Alex Elder To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org, benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com, linux-kernel@vger.kernel.org, Alex Elder Subject: [PATCH 06/18] soc: qcom: ipa: clocking, interrupts, and memory Date: Sat, 11 May 2019 20:24:56 -0500 Message-Id: <20190512012508.10608-7-elder@linaro.org> In-Reply-To: <20190512012508.10608-1-elder@linaro.org> References: <20190512012508.10608-1-elder@linaro.org> This patch incorporates three source files (and their headers). They're grouped into one patch mainly for the purpose of making the number and size of patches in this series somewhat reasonable. - "ipa_clock.c" and "ipa_clock.h" implement clocking for the IPA device. The IPA has a single core clock managed by the common clock framework. In addition, the IPA has three buses whose bandwidth is managed by the Linux interconnect framework. At this time the core clock and all three buses are either on or off; we don't yet do any more fine-grained management than that. The core clock and interconnects are enabled and disabled as a unit, using a unified clock-like abstraction, ipa_clock_get()/ipa_clock_put(). - "ipa_interrupt.c" and "ipa_interrupt.h" implement IPA interrupts. There are two hardware IRQs used by the IPA driver (the other is the GSI interrupt, described in a separate patch). Several types of interrupt are handled by the IPA IRQ handler; these are not part of data/fast path. - The IPA has a region of local memory that is accessible by the AP (and modem). Within that region are areas with certain defined purposes. "ipa_mem.c" and "ipa_mem.h" define those regions, and implement their initialization. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_clock.c | 291 ++++++++++++++++++++++++++++ drivers/net/ipa/ipa_clock.h | 52 ++++++ drivers/net/ipa/ipa_interrupt.c | 279 ++++++++++++++++++++++++++ drivers/net/ipa/ipa_interrupt.h | 53 ++++++ drivers/net/ipa/ipa_mem.c | 237 ++++++++++++++++++++ drivers/net/ipa/ipa_mem.h | 82 +++++++++ 6 files changed, 994 insertions(+) create mode 100644 drivers/net/ipa/ipa_clock.c create mode 100644 drivers/net/ipa/ipa_clock.h create mode 100644 drivers/net/ipa/ipa_interrupt.c create mode 100644 drivers/net/ipa/ipa_interrupt.h create mode 100644 drivers/net/ipa/ipa_mem.c create mode 100644 drivers/net/ipa/ipa_mem.h -- 2.20.1 diff --git a/drivers/net/ipa/ipa_clock.c b/drivers/net/ipa/ipa_clock.c new file mode 100644 index 000000000000..686f7ac2ce15 --- /dev/null +++ b/drivers/net/ipa/ipa_clock.c @@ -0,0 +1,291 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ + +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_clock.h" +#include "ipa_netdev.h" + +/** + * DOC: IPA Clocking + * + * The "IPA Clock" manages both the IPA core clock and the interconnects + * (buses) the IPA depends on as a single logical entity.
A reference count + * is incremented by "get" operations and decremented by "put" operations. + * Transitions of that count from 0 to 1 result in the clock and interconnects + * being enabled, and transitions of the count from 1 to 0 cause them to be + * disabled. We currently operate the core clock at a fixed clock rate, and + * all buses at a fixed average and peak bandwidth. As more advanced IPA + * features are enabled, we will make better use of clock and bus scaling. + * + * An IPA clock reference must be held for any access to IPA hardware. + */ + +#define IPA_CORE_CLOCK_RATE (75UL * 1000 * 1000) /* Hz */ + +/* Interconnect path bandwidths (each times 1000 bytes per second) */ +#define IPA_MEMORY_AVG (80 * 1000) /* 80 MBps */ +#define IPA_MEMORY_PEAK (600 * 1000) + +#define IPA_IMEM_AVG (80 * 1000) +#define IPA_IMEM_PEAK (350 * 1000) + +#define IPA_CONFIG_AVG (40 * 1000) +#define IPA_CONFIG_PEAK (40 * 1000) + +/** + * struct ipa_clock - IPA clocking information + * @ipa: IPA pointer + * @count: Clocking reference count + * @mutex: Protects clock enable/disable + * @core: IPA core clock + * @memory_path: Memory interconnect + * @imem_path: Internal memory interconnect + * @config_path: Configuration space interconnect + */ +struct ipa_clock { + struct ipa *ipa; + atomic_t count; + struct mutex mutex; /* protects clock enable/disable */ + struct clk *core; + struct icc_path *memory_path; + struct icc_path *imem_path; + struct icc_path *config_path; +}; + +/* Initialize interconnects required for IPA operation */ +static int ipa_interconnect_init(struct ipa_clock *clock, struct device *dev) +{ + struct icc_path *path; + + path = of_icc_get(dev, "memory"); + if (IS_ERR(path)) + goto err_return; + clock->memory_path = path; + + path = of_icc_get(dev, "imem"); + if (IS_ERR(path)) + goto err_memory_path_put; + clock->imem_path = path; + + path = of_icc_get(dev, "config"); + if (IS_ERR(path)) + goto err_imem_path_put; + clock->config_path = path; + + return 0; + +err_imem_path_put: + icc_put(clock->imem_path); +err_memory_path_put: + icc_put(clock->memory_path); +err_return: + + return PTR_ERR(path); +} + +/* Inverse of ipa_interconnect_init() */ +static void ipa_interconnect_exit(struct ipa_clock *clock) +{ + icc_put(clock->config_path); + icc_put(clock->imem_path); + icc_put(clock->memory_path); +} + +/* Currently we only use one bandwidth level, so just "enable" interconnects */ +static int ipa_interconnect_enable(struct ipa_clock *clock) +{ + int ret; + + ret = icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK); + if (ret) + return ret; + + ret = icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK); + if (ret) + goto err_disable_memory_path; + + ret = icc_set_bw(clock->config_path, IPA_CONFIG_AVG, IPA_CONFIG_PEAK); + if (ret) + goto err_disable_imem_path; + + return 0; + +err_disable_imem_path: + (void)icc_set_bw(clock->imem_path, 0, 0); +err_disable_memory_path: + (void)icc_set_bw(clock->memory_path, 0, 0); + + return ret; +} + +/* To disable an interconnect, we just set its bandwidth to 0 */ +static int ipa_interconnect_disable(struct ipa_clock *clock) +{ + int ret; + + ret = icc_set_bw(clock->memory_path, 0, 0); + if (ret) + return ret; + + ret = icc_set_bw(clock->imem_path, 0, 0); + if (ret) + goto err_reenable_memory_path; + + ret = icc_set_bw(clock->config_path, 0, 0); + if (ret) + goto err_reenable_imem_path; + + return 0; + +err_reenable_imem_path: + (void)icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK); +err_reenable_memory_path: + (void)icc_set_bw(clock->memory_path,
IPA_MEMORY_AVG, IPA_MEMORY_PEAK); + + return ret; +} + +/* Turn on IPA clocks, including interconnects */ +static int ipa_clock_enable(struct ipa_clock *clock) +{ + int ret; + + ret = ipa_interconnect_enable(clock); + if (ret) + return ret; + + ret = clk_prepare_enable(clock->core); + if (ret) + ipa_interconnect_disable(clock); + + return ret; +} + +/* Inverse of ipa_clock_enable() */ +static void ipa_clock_disable(struct ipa_clock *clock) +{ + clk_disable_unprepare(clock->core); + (void)ipa_interconnect_disable(clock); +} + +/* Get an IPA clock reference, but only if the reference count is + * already non-zero. Returns true if the additional reference was + * added successfully, or false otherwise. + */ +bool ipa_clock_get_additional(struct ipa_clock *clock) +{ + return !!atomic_inc_not_zero(&clock->count); +} + +/* Get an IPA clock reference. If the reference count is non-zero, it is + * incremented and return is immediate. Otherwise the count is checked + * again under protection of the mutex, and if it is still zero we enable + * clocks and resume RX endpoints before returning. For the first + * reference, the count is intentionally not incremented until after these + * activities are complete. + */ +void ipa_clock_get(struct ipa_clock *clock) +{ + /* If the clock is running, just bump the reference count */ + if (ipa_clock_get_additional(clock)) + return; + + /* Otherwise get the mutex and check again */ + mutex_lock(&clock->mutex); + + /* A reference might have been added before we got the mutex. */ + if (!ipa_clock_get_additional(clock)) { + int ret; + + ret = ipa_clock_enable(clock); + if (!WARN(ret, "error %d enabling IPA clock\n", ret)) { + struct ipa *ipa = clock->ipa; + + if (ipa->default_endpoint) + ipa_endpoint_resume(ipa->default_endpoint); + + if (ipa->modem_netdev) + ipa_netdev_resume(ipa->modem_netdev); + + atomic_inc(&clock->count); + } + } + + mutex_unlock(&clock->mutex); +} + +/* Attempt to remove an IPA clock reference. If this represents + * the last reference, suspend endpoints and disable the clock + * (and interconnects) under protection of a mutex.
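+ * + * Callers bracket IPA hardware access with a get/put pair; the threaded + * IRQ handler in ipa_interrupt.c, for example, does: + * + *	ipa_clock_get(ipa->clock); + *	...access IPA registers... + *	ipa_clock_put(ipa->clock);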
+ */ +void ipa_clock_put(struct ipa_clock *clock) +{ + /* If this is not the last reference there's nothing more to do */ + if (!atomic_dec_and_mutex_lock(&clock->count, &clock->mutex)) + return; + + if (clock->ipa->modem_netdev) + ipa_netdev_suspend(clock->ipa->modem_netdev); + + if (clock->ipa->default_endpoint) + ipa_endpoint_suspend(clock->ipa->default_endpoint); + + ipa_clock_disable(clock); + + mutex_unlock(&clock->mutex); +} + +/* Initialize IPA clocking */ +struct ipa_clock *ipa_clock_init(struct ipa *ipa) +{ + struct device *dev = &ipa->pdev->dev; + struct ipa_clock *clock; + int ret; + + clock = kzalloc(sizeof(*clock), GFP_KERNEL); + if (!clock) + return ERR_PTR(-ENOMEM); + + clock->ipa = ipa; + clock->core = clk_get(dev, "core"); + if (IS_ERR(clock->core)) { + ret = PTR_ERR(clock->core); + goto err_free_clock; + } + + ret = clk_set_rate(clock->core, IPA_CORE_CLOCK_RATE); + if (ret) + goto err_clk_put; + + ret = ipa_interconnect_init(clock, dev); + if (ret) + goto err_clk_put; + + mutex_init(&clock->mutex); + atomic_set(&clock->count, 0); + + return clock; + +err_clk_put: + clk_put(clock->core); +err_free_clock: + kfree(clock); + + return ERR_PTR(ret); +} + +/* Inverse of ipa_clock_init() */ +void ipa_clock_exit(struct ipa_clock *clock) +{ + mutex_destroy(&clock->mutex); + ipa_interconnect_exit(clock); + clk_put(clock->core); + kfree(clock); +} diff --git a/drivers/net/ipa/ipa_clock.h b/drivers/net/ipa/ipa_clock.h new file mode 100644 index 000000000000..f38c3face29a --- /dev/null +++ b/drivers/net/ipa/ipa_clock.h @@ -0,0 +1,52 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ +#ifndef _IPA_CLOCK_H_ +#define _IPA_CLOCK_H_ + +struct ipa; +struct ipa_clock; + +/** + * ipa_clock_init() - Initialize IPA clocking + * @ipa: IPA pointer + * + * @Return: A pointer to an ipa_clock structure, or a pointer-coded error + */ +struct ipa_clock *ipa_clock_init(struct ipa *ipa); + +/** + * ipa_clock_exit() - Inverse of ipa_clock_init() + * @clock: IPA clock pointer + */ +void ipa_clock_exit(struct ipa_clock *clock); + +/** + * ipa_clock_get() - Get an IPA clock reference + * @clock: IPA clock pointer + * + * This call blocks if this is the first reference. + */ +void ipa_clock_get(struct ipa_clock *clock); + +/** + * ipa_clock_get_additional() - Get an IPA clock reference if not first + * @clock: IPA clock pointer + * + * This returns immediately, and only takes a reference if not the first + */ +bool ipa_clock_get_additional(struct ipa_clock *clock); + +/** + * ipa_clock_put() - Drop an IPA clock reference + * @clock: IPA clock pointer + * + * This drops a clock reference. If the last reference is being dropped, + * the clock is stopped and RX endpoints are suspended. This call will + * not block unless the last reference is dropped. + */ +void ipa_clock_put(struct ipa_clock *clock); + +#endif /* _IPA_CLOCK_H_ */ diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c new file mode 100644 index 000000000000..5be6b3c762ed --- /dev/null +++ b/drivers/net/ipa/ipa_interrupt.c @@ -0,0 +1,279 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ + +/* DOC: IPA Interrupts + * + * The IPA has an interrupt line distinct from the interrupt used by the GSI + * code. 
Whereas GSI interrupts are generally related to channel events (like + * transfer completions), IPA interrupts signal other events involving + * the IPA. Some of the IPA interrupts come from a microcontroller + * embedded in the IPA. Each IPA interrupt type can be both masked and + * acknowledged independently of the others. + * + * Two of the IPA interrupts are initiated by the microcontroller. A third + * can be generated to signal the need for a wakeup/resume when an IPA + * endpoint has been suspended. There are other IPA events defined, but at + * this time only these three are supported. + */ + +#include +#include + +#include "ipa.h" +#include "ipa_clock.h" +#include "ipa_reg.h" +#include "ipa_endpoint.h" +#include "ipa_interrupt.h" + +/* Maximum number of bits in an IPA interrupt mask */ +#define IPA_INTERRUPT_MAX (sizeof(u32) * BITS_PER_BYTE) + +struct ipa_interrupt_info { + ipa_irq_handler_t handler; + enum ipa_interrupt_id interrupt_id; +}; + +/** + * struct ipa_interrupt - IPA interrupt information + * @ipa: IPA pointer + * @irq: Linux IRQ number used for IPA interrupts + * @enabled: Mask of currently enabled IPA interrupts + * @info: Information for each IPA interrupt type + */ +struct ipa_interrupt { + struct ipa *ipa; + u32 irq; + u32 enabled; + struct ipa_interrupt_info info[IPA_INTERRUPT_MAX]; +}; + +/* Map a logical interrupt number to a hardware IPA IRQ number */ +static const u32 ipa_interrupt_mapping[] = { + [IPA_INTERRUPT_UC_0] = 2, + [IPA_INTERRUPT_UC_1] = 3, + [IPA_INTERRUPT_TX_SUSPEND] = 14, +}; + +static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 ipa_irq) +{ + return ipa_irq == ipa_interrupt_mapping[IPA_INTERRUPT_UC_0] || + ipa_irq == ipa_interrupt_mapping[IPA_INTERRUPT_UC_1]; +} + +static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 ipa_irq) +{ + struct ipa_interrupt_info *info = &interrupt->info[ipa_irq]; + bool uc_irq = ipa_interrupt_uc(interrupt, ipa_irq); + struct ipa *ipa = interrupt->ipa; + u32 mask = BIT(ipa_irq); + + /* For microcontroller interrupts, clear the interrupt right away, + * "to avoid clearing unhandled interrupts." + */ + if (uc_irq) + iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET); + + if (info->handler) + info->handler(interrupt->ipa, info->interrupt_id); + + /* Clearing the SUSPEND_TX interrupt also clears the register + * that tells us which suspended endpoint(s) caused the interrupt, + * so defer clearing until after the handler's been called. + */ + if (!uc_irq) + iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET); +} + +static void ipa_interrupt_process_all(struct ipa_interrupt *interrupt) +{ + struct ipa *ipa = interrupt->ipa; + u32 enabled = interrupt->enabled; + u32 mask; + + /* The status register indicates which conditions are present, + * including conditions whose interrupt is not enabled. Handle + * only the enabled ones.
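+ * + * For example, if the status register reads 0x4004 (bits 2 and 14, + * i.e. UC_0 and TX_SUSPEND pending) but only TX_SUSPEND is enabled, + * only IPA IRQ 14 is processed here.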
+	 */
+	mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	while ((mask &= enabled)) {
+		do {
+			u32 ipa_irq = __ffs(mask);
+
+			mask ^= BIT(ipa_irq);
+
+			ipa_interrupt_process(interrupt, ipa_irq);
+		} while (mask);
+		mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	}
+}
+
+/* Threaded part of the IRQ handler */
+static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
+{
+	struct ipa_interrupt *interrupt = dev_id;
+
+	ipa_clock_get(interrupt->ipa->clock);
+
+	ipa_interrupt_process_all(interrupt);
+
+	ipa_clock_put(interrupt->ipa->clock);
+
+	return IRQ_HANDLED;
+}
+
+/* Hard (top half) part of the IRQ handler */
+static irqreturn_t ipa_isr(int irq, void *dev_id)
+{
+	struct ipa_interrupt *interrupt = dev_id;
+	struct ipa *ipa = interrupt->ipa;
+	u32 mask;
+
+	mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	if (mask & interrupt->enabled)
+		return IRQ_WAKE_THREAD;
+
+	/* Nothing in the mask was supposed to cause an interrupt */
+	iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+
+	dev_err(&ipa->pdev->dev, "%s: unexpected interrupt, mask 0x%08x\n",
+		__func__, mask);
+
+	return IRQ_HANDLED;
+}
+
+static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
+					  enum ipa_endpoint_id endpoint_id,
+					  bool enable)
+{
+	u32 offset = IPA_REG_SUSPEND_IRQ_EN_OFFSET;
+	u32 mask = BIT(endpoint_id);
+	u32 val;
+
+	val = ioread32(interrupt->ipa->reg_virt + offset);
+	if (enable)
+		val |= mask;
+	else
+		val &= ~mask;
+	iowrite32(val, interrupt->ipa->reg_virt + offset);
+}
+
+void ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt,
+				  enum ipa_endpoint_id endpoint_id)
+{
+	ipa_interrupt_suspend_control(interrupt, endpoint_id, true);
+}
+
+void ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt,
+				   enum ipa_endpoint_id endpoint_id)
+{
+	ipa_interrupt_suspend_control(interrupt, endpoint_id, false);
+}
+
+/* Clear the suspend interrupt for all endpoints that signaled it */
+void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt)
+{
+	struct ipa *ipa = interrupt->ipa;
+	u32 val;
+
+	val = ioread32(ipa->reg_virt + IPA_REG_IRQ_SUSPEND_INFO_OFFSET);
+	iowrite32(val, ipa->reg_virt + IPA_REG_SUSPEND_IRQ_CLR_OFFSET);
+}
+
+/**
+ * ipa_interrupt_simulate_suspend() - Simulate a TX_SUSPEND IPA interrupt
+ * @interrupt:	IPA interrupt structure
+ *
+ * This is needed to work around a problem that occurs if aggregation
+ * is active on an endpoint when its underlying channel is suspended.
+ */
+void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[IPA_INTERRUPT_TX_SUSPEND];
+
+	ipa_interrupt_process(interrupt, ipa_irq);
+}
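+
+/* A minimal usage sketch (illustrative, not part of the driver): a
+ * client holding the ipa_interrupt pointer returned by
+ * ipa_interrupt_setup() registers a TX_SUSPEND handler along these
+ * lines; the handler name here is hypothetical:
+ *
+ *	static void example_tx_suspend(struct ipa *ipa,
+ *				       enum ipa_interrupt_id interrupt_id)
+ *	{
+ *		... respond to the wakeup/resume request ...
+ *	}
+ *
+ *	ipa_interrupt_add(interrupt, IPA_INTERRUPT_TX_SUSPEND,
+ *			  example_tx_suspend);
+ */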
+
+/**
+ * ipa_interrupt_add() - Add a handler for an IPA interrupt
+ * @interrupt:		IPA interrupt structure
+ * @interrupt_id:	IPA interrupt type
+ * @handler:		The handler for that interrupt
+ *
+ * Add a handler for an IPA interrupt and enable it.  IPA interrupt
+ * handlers are run in threaded interrupt context, so are allowed to
+ * block.
+ */
+void ipa_interrupt_add(struct ipa_interrupt *interrupt,
+		       enum ipa_interrupt_id interrupt_id,
+		       ipa_irq_handler_t handler)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[interrupt_id];
+	struct ipa *ipa = interrupt->ipa;
+
+	interrupt->info[ipa_irq].handler = handler;
+	interrupt->info[ipa_irq].interrupt_id = interrupt_id;
+
+	/* Update the IPA interrupt mask to enable it */
+	interrupt->enabled |= BIT(ipa_irq);
+	iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+}
+
+/**
+ * ipa_interrupt_remove() - Remove the handler for an IPA interrupt type
+ * @interrupt:		IPA interrupt structure
+ * @interrupt_id:	IPA interrupt type
+ *
+ * Remove an IPA interrupt handler and disable it.
+ */
+void ipa_interrupt_remove(struct ipa_interrupt *interrupt,
+			  enum ipa_interrupt_id interrupt_id)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[interrupt_id];
+	struct ipa *ipa = interrupt->ipa;
+
+	/* Update the IPA interrupt mask to disable it */
+	interrupt->enabled &= ~BIT(ipa_irq);
+	iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+
+	interrupt->info[ipa_irq].handler = NULL;
+}
+
+/**
+ * ipa_interrupt_setup() - Set up the IPA interrupt framework
+ * @ipa:	IPA pointer
+ */
+struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa)
+{
+	struct ipa_interrupt *interrupt;
+	unsigned int irq;
+	int ret;
+
+	ret = platform_get_irq_byname(ipa->pdev, "ipa");
+	if (ret < 0)
+		return ERR_PTR(ret);
+	irq = ret;
+
+	interrupt = kzalloc(sizeof(*interrupt), GFP_KERNEL);
+	if (!interrupt)
+		return ERR_PTR(-ENOMEM);
+	interrupt->ipa = ipa;
+	interrupt->irq = irq;
+
+	/* Start with all IPA interrupts disabled */
+	iowrite32(0, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+
+	ret = request_threaded_irq(irq, ipa_isr, ipa_isr_thread, IRQF_ONESHOT,
+				   "ipa", interrupt);
+	if (ret)
+		goto err_free_interrupt;
+
+	return interrupt;
+
+err_free_interrupt:
+	kfree(interrupt);
+
+	return ERR_PTR(ret);
+}
+
+void ipa_interrupt_teardown(struct ipa_interrupt *interrupt)
+{
+	free_irq(interrupt->irq, interrupt);
+	kfree(interrupt);
+}
diff --git a/drivers/net/ipa/ipa_interrupt.h b/drivers/net/ipa/ipa_interrupt.h
new file mode 100644
index 000000000000..6e452430c156
--- /dev/null
+++ b/drivers/net/ipa/ipa_interrupt.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_INTERRUPT_H_
+#define _IPA_INTERRUPT_H_
+
+#include
+#include
+
+struct ipa;
+struct ipa_interrupt;
+
+/**
+ * enum ipa_interrupt_id - IPA interrupt type
+ *
+ * Used to register handlers for IPA interrupts.
+ */
+enum ipa_interrupt_id {
+	IPA_INTERRUPT_UC_0,
+	IPA_INTERRUPT_UC_1,
+	IPA_INTERRUPT_TX_SUSPEND,
+};
+
+/**
+ * typedef ipa_irq_handler_t - IPA interrupt handler/callback type
+ * @ipa:		IPA pointer
+ * @interrupt_id:	IPA interrupt type
+ *
+ * Callback function registered by ipa_interrupt_add() to handle a specific
+ * interrupt type
+ */
+typedef void (*ipa_irq_handler_t)(struct ipa *ipa,
+				  enum ipa_interrupt_id interrupt_id);
+
+struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa);
+void ipa_interrupt_teardown(struct ipa_interrupt *interrupt);
+
+void ipa_interrupt_add(struct ipa_interrupt *interrupt,
+		       enum ipa_interrupt_id interrupt_id,
+		       ipa_irq_handler_t handler);
+void ipa_interrupt_remove(struct ipa_interrupt *interrupt,
+			  enum ipa_interrupt_id interrupt_id);
+
+void ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt,
+				  enum ipa_endpoint_id endpoint_id);
+void ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt,
+				   enum ipa_endpoint_id endpoint_id);
+void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt);
+void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt);
+
+#endif /* _IPA_INTERRUPT_H_ */
diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
new file mode 100644
index 000000000000..dc4190ddc9db
--- /dev/null
+++ b/drivers/net/ipa/ipa_mem.c
@@ -0,0 +1,237 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "ipa.h"
+#include "ipa_reg.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+
+/* "Canary" value placed between memory regions to detect overflow */
+#define IPA_SMEM_CANARY_VAL	cpu_to_le32(0xdeadbeef)
+
+/* Only used for IPA_SMEM_UC_EVENT_RING, which gets a single canary */
+static __always_inline void smem_set_canary(struct ipa *ipa, u32 offset)
+{
+	__le32 *cp = ipa->shared_virt + offset;
+
+	BUILD_BUG_ON(offset < sizeof(*cp));
+
+	*--cp = IPA_SMEM_CANARY_VAL;
+}
+
+static __always_inline void smem_set_canaries(struct ipa *ipa, u32 offset)
+{
+	__le32 *cp = ipa->shared_virt + offset;
+
+	BUILD_BUG_ON(offset < 2 * sizeof(*cp));
+	BUILD_BUG_ON(offset % 8);
+
+	*--cp = IPA_SMEM_CANARY_VAL;
+	*--cp = IPA_SMEM_CANARY_VAL;
+}
+
+/* Initialize AP-owned areas in the shared memory by zero-filling them */
+static int ipa_smem_zero_ap(struct ipa *ipa)
+{
+	int ret = 0;
+
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_OFFSET % 8);
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_PROC_CTX_OFFSET % 8);
+
+	if (IPA_SMEM_AP_HDR_SIZE) {
+		ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_AP_HDR_OFFSET,
+					    IPA_SMEM_AP_HDR_SIZE);
+		if (ret)
+			return ret;
+	}
+
+	if (IPA_SMEM_AP_HDR_PROC_CTX_SIZE)
+		ret = ipa_cmd_smem_dma_zero(ipa,
+					    IPA_SMEM_AP_HDR_PROC_CTX_OFFSET,
+					    IPA_SMEM_AP_HDR_PROC_CTX_SIZE);
+
+	return ret;
+}
+
+/**
+ * ipa_smem_setup() - Set up IPA AP and modem shared memory areas
+ *
+ * Configure the shared memory areas used for AP and modem header
+ * structures.  Zero the AP areas; the modem areas will be zeroed
+ * each time the modem comes up.
+ * + * Return: 0 if successful, or a negative error code + */ +int ipa_smem_setup(struct ipa *ipa) +{ + int ret = 0; + u32 val; + + if (IPA_SMEM_AP_HDR_SIZE) { + ret = ipa_cmd_hdr_init_local(ipa, IPA_SMEM_AP_HDR_OFFSET, + IPA_SMEM_AP_HDR_SIZE); + if (ret) + return ret; + } + + if (IPA_SMEM_MODEM_HDR_SIZE) + ret = ipa_cmd_hdr_init_local(ipa, IPA_SMEM_MODEM_HDR_OFFSET, + IPA_SMEM_MODEM_HDR_SIZE); + + val = ipa->shared_offset + IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET; + iowrite32(val, ipa->reg_virt + + IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET); + + /* Modem smem regions will be zeroed whenever modem comes up */ + ipa_smem_zero_ap(ipa); + + return ret; +} + +void ipa_smem_teardown(struct ipa *ipa) +{ + /* Nothing to do */ +} + +/** + * ipa_smem_config() - Configure IPA shared memory + * + * Return: 0 if successful, or a negative error code + */ +int ipa_smem_config(struct ipa *ipa) +{ + u32 size; + u32 val; + + /* Check the advertised location and size of the shared memory area */ + val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET); + + /* The fields in the register are in 8 byte units */ + ipa->shared_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK); + dev_dbg(&ipa->pdev->dev, "shared memory offset 0x%x bytes\n", + ipa->shared_offset); + if (WARN_ON(ipa->shared_offset)) + return -EINVAL; + + /* The code assumes a certain minimum shared memory area size */ + size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK); + dev_dbg(&ipa->pdev->dev, "shared memory size 0x%x bytes\n", size); + if (WARN_ON(size < IPA_SMEM_SIZE)) + return -EINVAL; + + /* Now write "canary" values before each sub-section. */ + smem_set_canaries(ipa, IPA_SMEM_V4_FLT_HASH_OFFSET); + smem_set_canaries(ipa, IPA_SMEM_V4_FLT_NHASH_OFFSET); + smem_set_canaries(ipa, IPA_SMEM_V6_FLT_HASH_OFFSET); + smem_set_canaries(ipa, IPA_SMEM_V6_FLT_NHASH_OFFSET); + smem_set_canaries(ipa, IPA_SMEM_V4_RT_HASH_OFFSET); + smem_set_canaries(ipa, IPA_SMEM_V4_RT_NHASH_OFFSET); + smem_set_canaries(ipa, IPA_SMEM_V6_RT_HASH_OFFSET); + smem_set_canaries(ipa, IPA_SMEM_V6_RT_NHASH_OFFSET); + smem_set_canaries(ipa, IPA_SMEM_MODEM_HDR_OFFSET); + smem_set_canaries(ipa, IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET); + smem_set_canaries(ipa, IPA_SMEM_MODEM_OFFSET); + + /* Only one canary precedes the microcontroller ring */ + BUILD_BUG_ON(IPA_SMEM_UC_EVENT_RING_OFFSET % 1024); + smem_set_canary(ipa, IPA_SMEM_UC_EVENT_RING_OFFSET); + + return 0; +} + +void ipa_smem_deconfig(struct ipa *ipa) +{ + /* Don't bother zeroing any of the shared memory on exit */ +} + +/** + * ipa_smem_zero_modem() - Initialize modem general memory and header memory + */ +int ipa_smem_zero_modem(struct ipa *ipa) +{ + int ret = 0; + + if (IPA_SMEM_MODEM_SIZE) { + ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_MODEM_OFFSET, + IPA_SMEM_MODEM_SIZE); + if (ret) + return ret; + } + + if (IPA_SMEM_MODEM_HDR_SIZE) { + ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_MODEM_HDR_OFFSET, + IPA_SMEM_MODEM_HDR_SIZE); + if (ret) + return ret; + } + + if (IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE) + ret = ipa_cmd_smem_dma_zero(ipa, + IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET, + IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE); + + return ret; +} + +int ipa_mem_init(struct ipa *ipa) +{ + struct resource *res; + int ret; + + ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64)); + if (ret) + return ret; + + /* Set up IPA shared memory */ + res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM, + "ipa-shared"); + if (!res) + return -ENODEV; + + /* The code assumes a certain minimum shared memory area size */ + if 
(WARN_ON(resource_size(res) < IPA_SMEM_SIZE))
+		return -EINVAL;
+
+	ipa->shared_virt = memremap(res->start, resource_size(res),
+				    MEMREMAP_WC);
+	if (!ipa->shared_virt)
+		return -ENOMEM;
+	ipa->shared_phys = res->start;
+
+	/* Set up IPA register memory */
+	res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM,
+					   "ipa-reg");
+	if (!res) {
+		ret = -ENODEV;
+		goto err_unmap_shared;
+	}
+
+	ipa->reg_virt = ioremap(res->start, resource_size(res));
+	if (!ipa->reg_virt) {
+		ret = -ENOMEM;
+		goto err_unmap_shared;
+	}
+	ipa->reg_phys = res->start;
+
+	return 0;
+
+err_unmap_shared:
+	memunmap(ipa->shared_virt);
+
+	return ret;
+}
+
+void ipa_mem_exit(struct ipa *ipa)
+{
+	iounmap(ipa->reg_virt);
+	memunmap(ipa->shared_virt);
+}
diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
new file mode 100644
index 000000000000..53f32c32ac06
--- /dev/null
+++ b/drivers/net/ipa/ipa_mem.h
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _IPA_MEM_H_
+#define _IPA_MEM_H_
+
+struct ipa;
+
+/**
+ * DOC: IPA Shared Memory
+ *
+ * The IPA has a block of shared memory, divided into regions used for
+ * specific purposes.  The offset within the IPA address space of this shared
+ * memory block is defined by the IPA_SMEM_DIRECT_ACCESS_OFFSET register.
+ *
+ * The regions within the shared block are bounded by an offset and size found
+ * in the IPA_SHARED_MEM_SIZE register.  The first 128 bytes of the shared
+ * memory block are shared with the microcontroller, and the first 40 bytes of
+ * that contain a structure used to communicate between the microcontroller
+ * and the AP.
+ *
+ * There is a set of filter and routing tables, and each is given a 128 byte
+ * region in shared memory.  Each entry in a filter or route table is
+ * IPA_TABLE_ENTRY_SIZE, or 8 bytes.  The first "slot" of every table is
+ * filled with a "canary" value, and the table offsets defined below represent
+ * the location of the first real entry in each table after this.
+ *
+ * The number of filter table entries depends on the number of endpoints that
+ * support filtering.  The first non-canary slot of a filter table contains a
+ * bitmap, with each set bit indicating an endpoint containing an entry in the
+ * table.  Bit 0 is used to represent a global filter.
+ *
+ * About half of the routing table entries are reserved for modem use.
+ */
+
+/* The maximum number of filter table entries (IPv4, IPv6; hashed and not) */
+#define IPA_SMEM_FLT_COUNT	14
+
+/* The number of routing table entries (IPv4, IPv6; hashed and not) */
+#define IPA_SMEM_RT_COUNT	15
+
+/* Which routing table entries are for the modem */
+#define IPA_SMEM_MODEM_RT_COUNT		8
+#define IPA_SMEM_MODEM_RT_INDEX_MIN	0
+#define IPA_SMEM_MODEM_RT_INDEX_MAX \
+		(IPA_SMEM_MODEM_RT_INDEX_MIN + IPA_SMEM_MODEM_RT_COUNT - 1)
+
+/* Regions within the shared memory block.  Table sizes are 0x80 bytes.
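+ *
+ * As a concrete example of the layout (following smem_set_canaries() in
+ * "ipa_mem.c", which writes two canary values immediately below the offset
+ * it is given): the canaries preceding the IPv4 hashed filter table land
+ * at offsets 0x0280 and 0x0284, just below that table's base offset
+ * 0x0288 defined here.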
+ */
+#define IPA_SMEM_V4_FLT_HASH_OFFSET		0x0288
+#define IPA_SMEM_V4_FLT_NHASH_OFFSET		0x0308
+#define IPA_SMEM_V6_FLT_HASH_OFFSET		0x0388
+#define IPA_SMEM_V6_FLT_NHASH_OFFSET		0x0408
+#define IPA_SMEM_V4_RT_HASH_OFFSET		0x0488
+#define IPA_SMEM_V4_RT_NHASH_OFFSET		0x0508
+#define IPA_SMEM_V6_RT_HASH_OFFSET		0x0588
+#define IPA_SMEM_V6_RT_NHASH_OFFSET		0x0608
+#define IPA_SMEM_MODEM_HDR_OFFSET		0x0688
+#define IPA_SMEM_MODEM_HDR_SIZE			0x0140
+#define IPA_SMEM_AP_HDR_OFFSET			0x07c8
+#define IPA_SMEM_AP_HDR_SIZE			0x0000
+#define IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET	0x07d0
+#define IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE	0x0200
+#define IPA_SMEM_AP_HDR_PROC_CTX_OFFSET		0x09d0
+#define IPA_SMEM_AP_HDR_PROC_CTX_SIZE		0x0200
+#define IPA_SMEM_MODEM_OFFSET			0x0bd8
+#define IPA_SMEM_MODEM_SIZE			0x1024
+#define IPA_SMEM_UC_EVENT_RING_OFFSET		0x1c00	/* v3.5 and later */
+#define IPA_SMEM_SIZE				0x2000
+
+int ipa_smem_config(struct ipa *ipa);
+void ipa_smem_deconfig(struct ipa *ipa);
+
+int ipa_smem_setup(struct ipa *ipa);
+void ipa_smem_teardown(struct ipa *ipa);
+
+int ipa_smem_zero_modem(struct ipa *ipa);
+
+int ipa_mem_init(struct ipa *ipa);
+void ipa_mem_exit(struct ipa *ipa);
+
+#endif /* _IPA_MEM_H_ */
From patchwork Sun May 12 01:24:57 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163983
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
 ilias.apalodimas@linaro.org
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
 benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
 linux-kernel@vger.kernel.org, Alex Elder <elder@linaro.org>
Subject: [PATCH 07/18] soc: qcom: ipa: GSI headers
Date: Sat, 11 May 2019 20:24:57 -0500
Message-Id: <20190512012508.10608-8-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

The Generic Software Interface is a layer of the IPA driver that
abstracts the underlying hardware.  The next patch includes the main
code for GSI (including some additional documentation).  This patch
just includes three GSI header files.

  - "gsi.h" is the top-level GSI header file.  There is one of these
    associated with the IPA structure; in fact, it is embedded within
    the IPA structure.  (Were it not embedded this way, many of the
    definitions and structures defined here could be private to the
    GSI code.)  The main abstraction implemented by the GSI code is
    the channel, and this header exposes several operations that can
    be performed on a GSI channel.

  - "gsi_private.h" exposes some definitions that are intended to be
    private, used only by the main GSI code and the GSI transaction
    code (defined in an upcoming patch).

  - Like "ipa_reg.h", "gsi_reg.h" defines the offsets of the 32-bit
    registers used by the GSI layer, along with masks that define the
    position and width of fields less than 32 bits located within
    these registers.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/gsi.h         | 241 ++++++++++++++++++++++
 drivers/net/ipa/gsi_private.h | 147 +++++++++++++
 drivers/net/ipa/gsi_reg.h     | 376 ++++++++++++++++++++++++++++++++++
 3 files changed, 764 insertions(+)
 create mode 100644 drivers/net/ipa/gsi.h
 create mode 100644 drivers/net/ipa/gsi_private.h
 create mode 100644 drivers/net/ipa/gsi_reg.h

-- 
2.20.1

diff --git a/drivers/net/ipa/gsi.h b/drivers/net/ipa/gsi.h
new file mode 100644
index 000000000000..8194a4110a40
--- /dev/null
+++ b/drivers/net/ipa/gsi.h
@@ -0,0 +1,241 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */ +#ifndef _GSI_H_ +#define _GSI_H_ + +#include +#include +#include +#include +#include +#include + +#define GSI_CHANNEL_MAX 14 +#define GSI_EVT_RING_MAX 10 + +struct device; +struct scatterlist; +struct platform_device; + +struct gsi; +struct gsi_trans; +struct gsi_channel_data; +struct gsi_ipa_endpoint_data; + +/* Execution environment IDs */ +enum gsi_ee_id { + GSI_EE_AP = 0, + GSI_EE_MODEM = 1, + GSI_EE_UC = 2, + GSI_EE_TZ = 3, +}; + +/* Channel operation statistics, aggregated across all channels */ +struct gsi_channel_stats { + u64 allocate; + u64 start; + u64 stop; + u64 reset; + u64 free; +}; + +struct gsi_ring { + spinlock_t spinlock; /* protects wp, rp updates */ + void *virt; + dma_addr_t addr; + size_t size; + u32 base; /* low 32 bits of addr */ + u32 wp; + u32 wp_local; + u32 rp_local; + u32 end; /* offset past last element */ +}; + +struct gsi_trans_info { + struct gsi_trans **map; /* TRE -> transaction map */ + u32 pool_count; /* # transactions in the pool */ + struct gsi_trans *pool; /* transaction allocation pool */ + u32 pool_free; /* next free transaction in pool */ + u32 sg_pool_count; /* # SGs in the allocation pool */ + struct scatterlist *sg_pool; /* SG allocation pool */ + u32 sg_pool_free; /* next free SG pool entry */ + + spinlock_t spinlock; /* protects updates to the rest */ + u32 tre_avail; /* unallocated TREs in ring */ + struct list_head alloc; /* allocated, not committed */ + struct list_head pending; /* committed, awaiting completion */ + struct list_head complete; /* completed, awaiting poll */ + struct list_head polled; /* returned by gsi_channel_poll_one() */ +}; + +/* Hardware values signifying the state of a channel */ +enum gsi_channel_state { + GSI_CHANNEL_STATE_NOT_ALLOCATED = 0x0, + GSI_CHANNEL_STATE_ALLOCATED = 0x1, + GSI_CHANNEL_STATE_STARTED = 0x2, + GSI_CHANNEL_STATE_STOPPED = 0x3, + GSI_CHANNEL_STATE_STOP_IN_PROC = 0x4, + GSI_CHANNEL_STATE_ERROR = 0xf, +}; + +/* We only care about channels between IPA and AP */ +struct gsi_channel { + struct gsi *gsi; + u32 toward_ipa; /* 0: IPA->AP; 1: AP->IPA */ + + const struct gsi_channel_data *data; /* initialization data */ + + struct completion completion; /* signals channel state changes */ + enum gsi_channel_state state; + + struct gsi_ring tre_ring; + u32 evt_ring_id; + + u64 byte_count; /* total # bytes transferred */ + u64 trans_count; /* total # transactions */ + u64 doorbell_byte_count; /* TX byte_count at last doorbell */ + u64 doorbell_trans_count; /* TX trans_count at last doorbell */ + + struct gsi_trans_info trans_info; + + struct napi_struct napi; +}; + +/* Hardware values signifying the state of an event ring */ +enum gsi_evt_ring_state { + GSI_EVT_RING_STATE_NOT_ALLOCATED = 0x0, + GSI_EVT_RING_STATE_ALLOCATED = 0x1, + GSI_EVT_RING_STATE_ERROR = 0xf, +}; + +struct gsi_evt_ring { + struct gsi_channel *channel; + struct completion completion; /* signals event ring state changes */ + enum gsi_evt_ring_state state; + struct gsi_ring ring; +}; + +struct gsi { + struct device *dev; /* Same as IPA device */ + struct net_device dummy_dev; /* needed for NAPI */ + void __iomem *virt; + u32 irq; + u32 irq_wake_enabled; /* 1: irq wake was enabled */ + struct gsi_channel channel[GSI_CHANNEL_MAX]; + struct gsi_channel_stats channel_stats; + struct gsi_evt_ring evt_ring[GSI_EVT_RING_MAX]; + u32 event_bitmap; + u32 event_enable_bitmap; + spinlock_t spinlock; /* global register updates */ + struct mutex mutex; /* protects commands, programming */ +}; + +/** + * gsi_setup() - Set up the GSI subsystem + * 
@gsi: Address of GSI structure embedded in an IPA structure + * + * @Return: 0 if successful, or a negative error code + * + * Performs initialization that must wait until the GSI hardware is + * ready (including firmware loaded). + */ +int gsi_setup(struct gsi *gsi); + +/** + * gsi_teardown() - Tear down GSI subsystem + * @gsi: GSI address previously passed to a successful gsi_setup() call + */ +void gsi_teardown(struct gsi *gsi); + +/** + * gsi_channel_trans_max() - Channel maximum number of transactions + * @gsi: GSI pointer + * @channel_id: Channel whose limit is to be returned + * + * @Return: The maximum number of pending transactions on the channel + */ +u32 gsi_channel_trans_max(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_trans_tre_max() - Return the maximum TREs per transaction + * @gsi: GSI pointer + * @channel_id: Channel whose limit is to be returned + * + * @Return: The maximum TRE count per transaction on the channel + */ +u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_trans_quiesce() - Wait for channel transactions to complete + * @gsi: GSI pointer + * @channel_id: Channel to quiesce + * + * Wait for all of a channel's currently-allocated transactions to + * be committed, complete, and be freed. + * + * NOTE: Assumes no new transactions will be issued before it returns. + */ +void gsi_channel_trans_quiesce(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_start() - Make a GSI channel operational + * @gsi: GSI pointer + * @channel_id: Channel to start + * + * @Return: 0 if successful, or a negative error code + */ +int gsi_channel_start(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_stop() - Stop an operational GSI channel + * @gsi: GSI pointer returned by gsi_setup() + * @channel_id: Channel to stop + * + * @Return: 0 if successful, or a negative error code + */ +int gsi_channel_stop(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_reset() - Reset a GSI channel + * @gsi: GSI pointer + * @channel_id: Channel to be reset + * + * @Return: 0 if successful, or a negative error code + * + * GSI hardware relinquishes ownership of all pending receive buffer + * transactions as a result of reset. They will be completed with + * result code -ECANCELED. + */ +int gsi_channel_reset(struct gsi *gsi, u32 channel_id); + +/** + * gsi_channel_config() - Configure a GSI channel + * @gsi: GSI pointer + * @channel_id: Channel to be configured + * @doorbell_enable: Whether to enable hardware doorbell engine + */ +void gsi_channel_config(struct gsi *gsi, u32 channel_id, bool doorbell_enable); + +/** + * gsi_init() - Initialize the GSI subsystem + * @gsi: Address of GSI structure embedded in an IPA structure + * @pdev: IPA platform device + * + * @Return: 0 if successful, or a negative error code + * + * Early stage initialization of the GSI subsystem, performing tasks + * that can be done before the GSI hardware is ready to use. + */ +int gsi_init(struct gsi *gsi, struct platform_device *pdev, u32 data_count, + const struct gsi_ipa_endpoint_data *data); + +/** + * gsi_exit() - Exit the GSI subsystem + * @gsi: GSI address previously passed to a successful gsi_init() call + */ +void gsi_exit(struct gsi *gsi); + +#endif /* _GSI_H_ */ diff --git a/drivers/net/ipa/gsi_private.h b/drivers/net/ipa/gsi_private.h new file mode 100644 index 000000000000..31a665fb7756 --- /dev/null +++ b/drivers/net/ipa/gsi_private.h @@ -0,0 +1,147 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2015-2018, The Linux Foundation. 
All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _GSI_PRIVATE_H_
+#define _GSI_PRIVATE_H_
+
+#include
+
+/* === NOTE: Only "gsi.c" and "gsi_trans.c" should include this file === */
+
+struct gsi_trans;
+struct gsi_ring;
+struct gsi_channel;
+
+/* An entry in an event ring */
+struct gsi_xfer_compl_evt {
+	__le64 xfer_ptr;
+	__le16 len;
+	u8 rsvd1;
+	u8 code;
+	__le16 rsvd;
+	u8 type;
+	u8 chid;
+} __packed;
+
+/* An entry in a channel ring */
+struct gsi_tre {
+	__le64 addr;		/* DMA address */
+	__le16 len_opcode;	/* length in bytes or enum IPA_CMD_* */
+	__le16 reserved;
+	__le32 flags;		/* GSI_TRE_FLAGS_* */
+};
+
+/**
+ * gsi_trans_move_complete() - Mark a GSI transaction completed
+ * @trans:	Transaction to mark complete
+ */
+void gsi_trans_move_complete(struct gsi_trans *trans);
+
+/**
+ * gsi_trans_move_polled() - Mark a transaction polled
+ * @trans:	Transaction to update
+ */
+void gsi_trans_move_polled(struct gsi_trans *trans);
+
+/**
+ * gsi_trans_complete() - Complete a GSI transaction
+ * @trans:	Transaction to complete
+ *
+ * Marks a transaction complete (including freeing it).
+ */
+void gsi_trans_complete(struct gsi_trans *trans);
+
+/**
+ * gsi_channel_trans_mapped() - Return a transaction mapped to a TRE index
+ * @channel:	Channel associated with the transaction
+ * @index:	Index of the TRE having a transaction
+ *
+ * @Return:	The GSI transaction pointer associated with the TRE index
+ */
+struct gsi_trans *gsi_channel_trans_mapped(struct gsi_channel *channel,
+					   u32 index);
+
+/**
+ * gsi_channel_trans_complete() - Return a channel's next completed transaction
+ * @channel:	Channel whose next transaction is to be returned
+ *
+ * @Return:	The next completed transaction, or NULL if nothing new
+ */
+struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel);
+
+/**
+ * gsi_channel_trans_cancel_pending() - Cancel pending transactions
+ * @channel:	Channel whose pending transactions should be cancelled
+ *
+ * Cancel all pending transactions on a channel.  These are
+ * transactions that have been committed but not yet completed.  This
+ * is required when the channel gets reset.  At that time all
+ * pending transactions will be completed with a result -ECANCELED.
+ *
+ * NOTE:  Transactions already complete at the time of this call are
+ *	  unaffected.
+ */
+void gsi_channel_trans_cancel_pending(struct gsi_channel *channel);
+
+/**
+ * gsi_channel_trans_init() - Initialize a channel's GSI transaction info
+ * @channel:	The channel whose transaction info is to be set up
+ *
+ * @Return:	0 if successful, or -ENOMEM on allocation failure
+ *
+ * Creates and sets up information for managing transactions on a channel
+ */
+int gsi_channel_trans_init(struct gsi_channel *channel);
+
+/**
+ * gsi_channel_trans_exit() - Inverse of gsi_channel_trans_init()
+ * @channel:	Channel whose transaction information is to be cleaned up
+ */
+void gsi_channel_trans_exit(struct gsi_channel *channel);
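+
+/* Taken together with the list heads in struct gsi_trans_info ("gsi.h"),
+ * the calls above imply the following transaction lifecycle:
+ *
+ *	allocated -> committed (pending) -> complete -> polled -> freed
+ *
+ * gsi_channel_trans_cancel_pending() short-circuits this for transactions
+ * still pending when a channel is reset; they complete with -ECANCELED.
+ */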
+
+/**
+ * gsi_channel_doorbell() - Ring a channel's doorbell
+ * @channel:	Channel whose doorbell should be rung
+ *
+ * Rings a channel's doorbell to inform the GSI hardware that new
+ * transactions (TREs, really) are available for it to process.
+ */
+void gsi_channel_doorbell(struct gsi_channel *channel);
+
+/**
+ * gsi_ring_virt() - Return virtual address for a 32-bit ring offset
+ * @ring:	Ring whose offset is to be translated
+ * @offset:	32-bit ring "hardware" offset (low 32 bits of DMA address)
+ */
+void *gsi_ring_virt(struct gsi_ring *ring, u32 offset);
+
+/**
+ * gsi_ring_index() - Return index for a 32-bit ring offset
+ * @ring:	Ring whose offset is to be translated
+ * @offset:	32-bit ring "hardware" offset (low 32 bits of DMA address)
+ */
+u32 gsi_ring_index(struct gsi_ring *ring, u32 offset);
+
+/**
+ * gsi_ring_wp_local_add() - Advance a ring's local write pointer
+ * @ring:	Ring whose local write pointer is to be advanced
+ * @val:	Number of ring slots to add to the local write pointer
+ */
+void gsi_ring_wp_local_add(struct gsi_ring *ring, u32 val);
+
+/**
+ * gsi_event_handle() - Handle the arrival of an event
+ * @gsi:	GSI pointer
+ * @evt_ring_id: Event ring for which an event has occurred
+ *
+ * This is normally used in IRQ handling when an IEOB interrupt
+ * arrives.  But it is also used when cancelling transactions
+ * following a channel reset.
+ */
+void gsi_event_handle(struct gsi *gsi, u32 evt_ring_id);
+
+u32 gsi_channel_id(struct gsi_channel *channel);
+
+#endif /* _GSI_PRIVATE_H_ */
diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h
new file mode 100644
index 000000000000..969215ae1580
--- /dev/null
+++ b/drivers/net/ipa/gsi_reg.h
@@ -0,0 +1,376 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _GSI_REG_H_
+#define _GSI_REG_H_
+
+#include
+
+/* === NOTE: Only "gsi.c" should include this file === */
+
+/**
+ * DOC: GSI Registers
+ *
+ * GSI registers are located within the "gsi" address space defined by Device
+ * Tree.  The offset of each register within that space is specified by
+ * symbols defined below.  The GSI address space is mapped to virtual memory
+ * space in gsi_init().  All GSI registers are 32 bits wide.
+ *
+ * Each register type is duplicated for a number of instances of something.
+ * For example, each GSI channel has its own set of registers defining its
+ * configuration.  The offset to a channel's set of registers is computed
+ * from a "base" offset plus an additional "stride" amount computed from
+ * the channel's ID.  For such registers, the offset is computed by a
+ * function-like macro that takes a parameter used in the computation.
+ *
+ * The offset of a register dependent on execution environment is computed
+ * by a macro that is supplied a parameter "ee".  The "ee" value is a member
+ * of the gsi_ee_id enumerated type.
+ *
+ * The offset of a channel register is computed by a macro that is supplied a
+ * parameter "ch".  The "ch" value is a channel id whose maximum value is 30
+ * (though the actual limit is hardware-dependent).
+ *
+ * The offset of an event register is computed by a macro that is supplied a
+ * parameter "ev".  The "ev" value is an event id whose maximum value is 15
+ * (though the actual limit is hardware-dependent).
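+ *
+ * As a worked example of this scheme, the AP's view of the CNTXT_0
+ * register for channel 5 is located (per the macros below, and with
+ * GSI_EE_AP == 0) at:
+ *
+ *	GSI_CH_C_CNTXT_0_OFFSET(5)
+ *		= GSI_EE_N_CH_C_CNTXT_0_OFFSET(5, GSI_EE_AP)
+ *		= 0x0001c000 + 0x4000 * 0 + 0x80 * 5
+ *		= 0x0001c280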
+ */ + +#define GSI_INTER_EE_SRC_CH_IRQ_OFFSET \ + GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(ee) \ + (0x0000c018 + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET \ + GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(ee) \ + (0x0000c01c + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_CH_IRQ_CLR_OFFSET \ + GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(ee) \ + (0x0000c028 + 0x1000 * (ee)) + +#define GSI_INTER_EE_SRC_EV_CH_IRQ_CLR_OFFSET \ + GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \ + (0x0000c02c + 0x1000 * (ee)) + +#define GSI_CH_C_CNTXT_0_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_0_OFFSET(ch, ee) \ + (0x0001c000 + 0x4000 * (ee) + 0x80 * (ch)) +#define CHTYPE_PROTOCOL_FMASK GENMASK(2, 0) +#define CHTYPE_DIR_FMASK GENMASK(3, 3) +#define EE_FMASK GENMASK(7, 4) +#define CHID_FMASK GENMASK(12, 8) +#define ERINDEX_FMASK GENMASK(18, 14) +#define CHSTATE_FMASK GENMASK(23, 20) +#define ELEMENT_SIZE_FMASK GENMASK(31, 24) + +#define GSI_CH_C_CNTXT_1_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_1_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_1_OFFSET(ch, ee) \ + (0x0001c004 + 0x4000 * (ee) + 0x80 * (ch)) +#define R_LENGTH_FMASK GENMASK(15, 0) + +#define GSI_CH_C_CNTXT_2_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_2_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_2_OFFSET(ch, ee) \ + (0x0001c008 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_CNTXT_3_OFFSET(ch) \ + GSI_EE_N_CH_C_CNTXT_3_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_CNTXT_3_OFFSET(ch, ee) \ + (0x0001c00c + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_QOS_OFFSET(ch) \ + GSI_EE_N_CH_C_QOS_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_QOS_OFFSET(ch, ee) \ + (0x0001c05c + 0x4000 * (ee) + 0x80 * (ch)) +#define WRR_WEIGHT_FMASK GENMASK(3, 0) +#define MAX_PREFETCH_FMASK GENMASK(8, 8) +#define USE_DB_ENG_FMASK GENMASK(9, 9) + +#define GSI_CH_C_SCRATCH_0_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_0_OFFSET(ch, ee) \ + (0x0001c060 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_1_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_1_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_1_OFFSET(ch, ee) \ + (0x0001c064 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_2_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_2_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_2_OFFSET(ch, ee) \ + (0x0001c068 + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_CH_C_SCRATCH_3_OFFSET(ch) \ + GSI_EE_N_CH_C_SCRATCH_3_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_SCRATCH_3_OFFSET(ch, ee) \ + (0x0001c06c + 0x4000 * (ee) + 0x80 * (ch)) + +#define GSI_EV_CH_E_CNTXT_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_0_OFFSET(ev, ee) \ + (0x0001d000 + 0x4000 * (ee) + 0x80 * (ev)) +#define EV_CHTYPE_FMASK GENMASK(3, 0) +#define EV_EE_FMASK GENMASK(7, 4) +#define EV_EVCHID_FMASK GENMASK(15, 8) +#define EV_INTYPE_FMASK GENMASK(16, 16) +#define EV_CHSTATE_FMASK GENMASK(23, 20) +#define EV_ELEMENT_SIZE_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_E_CNTXT_1_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_1_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_1_OFFSET(ev, ee) \ + (0x0001d004 + 0x4000 * (ee) + 0x80 * (ev)) +#define EV_R_LENGTH_FMASK GENMASK(15, 0) + +#define GSI_EV_CH_E_CNTXT_2_OFFSET(ev) \ + 
GSI_EE_N_EV_CH_E_CNTXT_2_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_2_OFFSET(ev, ee) \ + (0x0001d008 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_3_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_3_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_3_OFFSET(ev, ee) \ + (0x0001d00c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_4_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_4_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_4_OFFSET(ev, ee) \ + (0x0001d010 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_8_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_8_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_8_OFFSET(ev, ee) \ + (0x0001d020 + 0x4000 * (ee) + 0x80 * (ev)) +#define MODT_FMASK GENMASK(15, 0) +#define MODC_FMASK GENMASK(23, 16) +#define MOD_CNT_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_E_CNTXT_9_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_9_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_9_OFFSET(ev, ee) \ + (0x0001d024 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_10_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_10_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_10_OFFSET(ev, ee) \ + (0x0001d028 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_11_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_11_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_11_OFFSET(ev, ee) \ + (0x0001d02c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_12_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_12_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_12_OFFSET(ev, ee) \ + (0x0001d030 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_CNTXT_13_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_CNTXT_13_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_CNTXT_13_OFFSET(ev, ee) \ + (0x0001d034 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_SCRATCH_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_SCRATCH_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_SCRATCH_0_OFFSET(ev, ee) \ + (0x0001d048 + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_EV_CH_E_SCRATCH_1_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_SCRATCH_1_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_SCRATCH_1_OFFSET(ev, ee) \ + (0x0001d04c + 0x4000 * (ee) + 0x80 * (ev)) + +#define GSI_CH_C_DOORBELL_0_OFFSET(ch) \ + GSI_EE_N_CH_C_DOORBELL_0_OFFSET((ch), GSI_EE_AP) +#define GSI_EE_N_CH_C_DOORBELL_0_OFFSET(ch, ee) \ + (0x0001e000 + 0x4000 * (ee) + 0x08 * (ch)) + +#define GSI_EV_CH_E_DOORBELL_0_OFFSET(ev) \ + GSI_EE_N_EV_CH_E_DOORBELL_0_OFFSET((ev), GSI_EE_AP) +#define GSI_EE_N_EV_CH_E_DOORBELL_0_OFFSET(ev, ee) \ + (0x0001e100 + 0x4000 * (ee) + 0x08 * (ev)) + +#define GSI_GSI_STATUS_OFFSET \ + GSI_EE_N_GSI_STATUS_OFFSET(GSI_EE_AP) +#define GSI_EE_N_GSI_STATUS_OFFSET(ee) \ + (0x0001f000 + 0x4000 * (ee)) +#define ENABLED_FMASK GENMASK(0, 0) + +#define GSI_CH_CMD_OFFSET \ + GSI_EE_N_CH_CMD_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CH_CMD_OFFSET(ee) \ + (0x0001f008 + 0x4000 * (ee)) +#define CH_CHID_FMASK GENMASK(7, 0) +#define CH_OPCODE_FMASK GENMASK(31, 24) + +#define GSI_EV_CH_CMD_OFFSET \ + GSI_EE_N_EV_CH_CMD_OFFSET(GSI_EE_AP) +#define GSI_EE_N_EV_CH_CMD_OFFSET(ee) \ + (0x0001f010 + 0x4000 * (ee)) +#define EV_CHID_FMASK GENMASK(7, 0) +#define EV_OPCODE_FMASK GENMASK(31, 24) + +#define GSI_GSI_HW_PARAM_2_OFFSET \ + GSI_EE_N_GSI_HW_PARAM_2_OFFSET(GSI_EE_AP) +#define GSI_EE_N_GSI_HW_PARAM_2_OFFSET(ee) \ + (0x0001f040 + 0x4000 * (ee)) +#define IRAM_SIZE_FMASK GENMASK(2, 0) +#define NUM_CH_PER_EE_FMASK GENMASK(7, 3) +#define NUM_EV_PER_EE_FMASK GENMASK(12, 8) +#define GSI_CH_PEND_TRANSLATE_FMASK GENMASK(13, 13) +#define 
GSI_CH_FULL_LOGIC_FMASK GENMASK(14, 14) +#define IRAM_SIZE_ONE_KB_FVAL 0 +#define IRAM_SIZE_TWO_KB_FVAL 1 + +#define GSI_CNTXT_TYPE_IRQ_OFFSET \ + GSI_EE_N_CNTXT_TYPE_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_TYPE_IRQ_OFFSET(ee) \ + (0x0001f080 + 0x4000 * (ee)) +#define CH_CTRL_FMASK GENMASK(0, 0) +#define EV_CTRL_FMASK GENMASK(1, 1) +#define GLOB_EE_FMASK GENMASK(2, 2) +#define IEOB_FMASK GENMASK(3, 3) +#define INTER_EE_CH_CTRL_FMASK GENMASK(4, 4) +#define INTER_EE_EV_CTRL_FMASK GENMASK(5, 5) +#define GENERAL_FMASK GENMASK(6, 6) + +#define GSI_CNTXT_TYPE_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFSET(ee) \ + (0x0001f088 + 0x4000 * (ee)) +#define MSK_CH_CTRL_FMASK GENMASK(0, 0) +#define MSK_EV_CTRL_FMASK GENMASK(1, 1) +#define MSK_GLOB_EE_FMASK GENMASK(2, 2) +#define MSK_IEOB_FMASK GENMASK(3, 3) +#define MSK_INTER_EE_CH_CTRL_FMASK GENMASK(4, 4) +#define MSK_INTER_EE_EV_CTRL_FMASK GENMASK(5, 5) +#define MSK_GENERAL_FMASK GENMASK(6, 6) +#define GSI_CNTXT_TYPE_IRQ_MSK_ALL GENMASK(6, 0) + +#define GSI_CNTXT_SRC_CH_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFSET(ee) \ + (0x0001f090 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFSET(ee) \ + (0x0001f094 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFSET(ee) \ + (0x0001f098 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET(ee) \ + (0x0001f09c + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFSET(ee) \ + (0x0001f0a0 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \ + (0x0001f0a4 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_OFFSET \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFSET(ee) \ + (0x0001f0b0 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET(ee) \ + (0x0001f0b8 + 0x4000 * (ee)) + +#define GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET(ee) \ + (0x0001f0c0 + 0x4000 * (ee)) + +#define GSI_CNTXT_GLOB_IRQ_STTS_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFSET(ee) \ + (0x0001f100 + 0x4000 * (ee)) +#define ERROR_INT_FMASK GENMASK(0, 0) +#define GP_INT1_FMASK GENMASK(1, 1) +#define GP_INT2_FMASK GENMASK(2, 2) +#define GP_INT3_FMASK GENMASK(3, 3) + +#define GSI_CNTXT_GLOB_IRQ_EN_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFSET(ee) \ + (0x0001f108 + 0x4000 * (ee)) +#define EN_ERROR_INT_FMASK GENMASK(0, 0) +#define EN_GP_INT1_FMASK GENMASK(1, 1) +#define EN_GP_INT2_FMASK GENMASK(2, 2) +#define EN_GP_INT3_FMASK GENMASK(3, 3) +#define GSI_CNTXT_GLOB_IRQ_ALL GENMASK(3, 0) + +#define GSI_CNTXT_GLOB_IRQ_CLR_OFFSET \ + GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFSET(GSI_EE_AP) +#define GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFSET(ee) \ + (0x0001f110 + 0x4000 * (ee)) 
+#define CLR_ERROR_INT_FMASK		GENMASK(0, 0)
+#define CLR_GP_INT1_FMASK		GENMASK(1, 1)
+#define CLR_GP_INT2_FMASK		GENMASK(2, 2)
+#define CLR_GP_INT3_FMASK		GENMASK(3, 3)
+
+#define GSI_CNTXT_GSI_IRQ_STTS_OFFSET \
+			GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFSET(ee) \
+			(0x0001f118 + 0x4000 * (ee))
+#define BREAK_POINT_FMASK		GENMASK(0, 0)
+#define BUS_ERROR_FMASK			GENMASK(1, 1)
+#define CMD_FIFO_OVRFLOW_FMASK		GENMASK(2, 2)
+#define MCS_STACK_OVRFLOW_FMASK		GENMASK(3, 3)
+
+#define GSI_CNTXT_GSI_IRQ_EN_OFFSET \
+			GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFSET(ee) \
+			(0x0001f120 + 0x4000 * (ee))
+#define EN_BREAK_POINT_FMASK		GENMASK(0, 0)
+#define EN_BUS_ERROR_FMASK		GENMASK(1, 1)
+#define EN_CMD_FIFO_OVRFLOW_FMASK	GENMASK(2, 2)
+#define EN_MCS_STACK_OVRFLOW_FMASK	GENMASK(3, 3)
+#define GSI_CNTXT_GSI_IRQ_ALL		GENMASK(3, 0)
+
+#define GSI_CNTXT_GSI_IRQ_CLR_OFFSET \
+			GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFSET(ee) \
+			(0x0001f128 + 0x4000 * (ee))
+#define CLR_BREAK_POINT_FMASK		GENMASK(0, 0)
+#define CLR_BUS_ERROR_FMASK		GENMASK(1, 1)
+#define CLR_CMD_FIFO_OVRFLOW_FMASK	GENMASK(2, 2)
+#define CLR_MCS_STACK_OVRFLOW_FMASK	GENMASK(3, 3)
+
+#define GSI_CNTXT_INTSET_OFFSET \
+			GSI_EE_N_CNTXT_INTSET_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_INTSET_OFFSET(ee) \
+			(0x0001f180 + 0x4000 * (ee))
+#define INTYPE_FMASK			GENMASK(0, 0)
+
+#define GSI_ERROR_LOG_OFFSET \
+			GSI_EE_N_ERROR_LOG_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_ERROR_LOG_OFFSET(ee) \
+			(0x0001f200 + 0x4000 * (ee))
+
+#define GSI_ERROR_LOG_CLR_OFFSET \
+			GSI_EE_N_ERROR_LOG_CLR_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_ERROR_LOG_CLR_OFFSET(ee) \
+			(0x0001f210 + 0x4000 * (ee))
+
+#endif /* _GSI_REG_H_ */
From patchwork Sun May 12 01:24:58 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163973
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
 ilias.apalodimas@linaro.org
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
 benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
 linux-kernel@vger.kernel.org, Alex Elder <elder@linaro.org>
Subject: [PATCH 08/18] soc: qcom: ipa: the generic software interface
Date: Sat, 11 May 2019 20:24:58 -0500
Message-Id: <20190512012508.10608-9-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

This patch includes "gsi.c", which implements the generic software
interface (GSI) for IPA.  The generic software interface abstracts
channels, which provide a means of transferring data either from the
AP to the IPA, or from the IPA to the AP.  A ring buffer of "transfer
elements" (TREs) is used to describe data transfers to perform.  The
AP writes a doorbell register associated with a channel to let it know
it has added new entries (for an AP->IPA channel) or has finished
processing entries (for an IPA->AP channel).

Each channel also has an event ring buffer, used by the IPA to
communicate information about events related to a channel (for
example, the completion of TREs).  The IPA writes its own doorbell
register, which triggers an interrupt on the AP, to signal that new
event information has arrived.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/gsi.c | 1741 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1741 insertions(+)
 create mode 100644 drivers/net/ipa/gsi.c

-- 
2.20.1

diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
new file mode 100644
index 000000000000..e9dd40c058c6
--- /dev/null
+++ b/drivers/net/ipa/gsi.c
@@ -0,0 +1,1741 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "gsi.h"
+#include "gsi_reg.h"
+#include "gsi_private.h"
+#include "gsi_trans.h"
+#include "ipa_gsi.h"
+#include "ipa_data.h"
+
+/**
+ * DOC: The IPA Generic Software Interface
+ *
+ * The generic software interface (GSI) is an integral component of the IPA,
+ * providing a well-defined communication layer between the AP subsystem
+ * and the IPA core.  The modem uses the GSI layer as well.
+ *
+ *	  --------	       ---------
+ *	  |      |	       |       |
+ *	  |  AP  +<---.   .----+ Modem |
+ *	  |      +--. |   | .->+       |
+ *	  |      |  | |   | |  |       |
+ *	  --------  | |   | |  ---------
+ *		    v |   v |
+ *		  --+-+---+-+--
+ *		  |    GSI    |
+ *		  |-----------|
+ *		  |           |
+ *		  |    IPA    |
+ *		  |           |
+ *		  -------------
+ *
+ * In the above diagram, the AP and Modem represent "execution environments"
+ * (EEs), which are independent operating environments that use the IPA for
+ * data transfer.
+ *
+ * Each EE uses a set of unidirectional GSI "channels," which allow transfer
+ * of data to or from the IPA.  A channel is implemented as a ring buffer,
+ * with a DRAM-resident array of "transfer elements" (TREs) available to
+ * describe transfers to or from other EEs through the IPA.
+ * A transfer element can also contain an immediate command, requesting
+ * the IPA perform actions other than data transfer.
+ *
+ * Each TRE refers to a block of data, which is also located in DRAM.
+ * After writing one or more TREs to a channel, the writer (either the
+ * IPA or an EE) writes a doorbell register to inform the receiving side
+ * how many elements have been written.  Writing to a doorbell register
+ * triggers an interrupt on the receiver.
+ *
+ * Each channel has a GSI "event ring" associated with it.  An event
+ * ring is implemented very much like a channel ring, but is always
+ * directed from the IPA to an EE.  The IPA notifies an EE (such as the
+ * AP) about channel events by adding an entry to the event ring
+ * associated with the channel; when it writes the event ring's doorbell
+ * register the EE is interrupted.  Each entry in an event ring contains
+ * a pointer to the channel TRE whose completion the event represents.
+ *
+ * Each TRE in a channel ring has a set of flags.  One flag indicates
+ * whether the completion of the transfer operation generates an entry
+ * (and possibly an interrupt) in the channel's event ring.  Other flags
+ * allow transfer elements to be chained together, forming a single
+ * logical transaction.  TRE flags are used to control whether and when
+ * interrupts are generated to signal completion of channel transfers.
+ *
+ * Elements in channel and event rings are completed (or consumed)
+ * strictly in order.  Completion of one entry implies the completion of
+ * all preceding entries.  A single completion interrupt can therefore
+ * communicate the completion of many transfers.
+ *
+ * Note that all GSI registers are little-endian, which is the assumed
+ * endianness of I/O space accesses.  The accessor functions perform
+ * byte swapping if needed (i.e., for a big endian CPU).
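+ *
+ * To make the flag usage concrete, here is an editor's sketch (not part
+ * of the original patch) of how a two-buffer transfer might be queued,
+ * using the CHAIN and IEOT flags defined later in this series (see
+ * gsi_trans.c):
+ *
+ *      TRE[n]:   addr = buffer 0, len = len0, flags = CHAIN
+ *      TRE[n+1]: addr = buffer 1, len = len1, flags = IEOT
+ *
+ * Only the second TRE requests an interrupt; because TREs complete
+ * strictly in order, its completion event implies both buffers have
+ * been handled.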
+ */ + +/* Delay period for interrupt moderation (in 32KHz IPA timer ticks) */ +#define IPA_GSI_EVT_RING_INT_MODT (32 * 1) /* 1ms under 32KHz clock */ + +#define GSI_CMD_TIMEOUT 5 /* seconds */ + +#define GSI_MHI_ER_START 10 /* First reserved event number */ +#define GSI_MHI_ER_END 16 /* Last reserved event number */ + +#define GSI_RESET_WA_MIN_SLEEP 1000 /* microseconds */ +#define GSI_RESET_WA_MAX_SLEEP 2000 /* microseconds */ + +#define GSI_ISR_MAX_ITER 50 + +/* Hardware values from the error log register error code field */ +enum gsi_err_code { + GSI_INVALID_TRE_ERR = 0x1, + GSI_OUT_OF_BUFFERS_ERR = 0x2, + GSI_OUT_OF_RESOURCES_ERR = 0x3, + GSI_UNSUPPORTED_INTER_EE_OP_ERR = 0x4, + GSI_EVT_RING_EMPTY_ERR = 0x5, + GSI_NON_ALLOCATED_EVT_ACCESS_ERR = 0x6, + GSI_HWO_1_ERR = 0x8, +}; + +/* Hardware values from the error log register error type field */ +enum gsi_err_type { + GSI_ERR_TYPE_GLOB = 0x1, + GSI_ERR_TYPE_CHAN = 0x2, + GSI_ERR_TYPE_EVT = 0x3, +}; + +/* Fields in an error log register at GSI_ERROR_LOG_OFFSET */ +#define GSI_LOG_ERR_ARG3_FMASK GENMASK(3, 0) +#define GSI_LOG_ERR_ARG2_FMASK GENMASK(7, 4) +#define GSI_LOG_ERR_ARG1_FMASK GENMASK(11, 8) +#define GSI_LOG_ERR_CODE_FMASK GENMASK(15, 12) +#define GSI_LOG_ERR_VIRT_IDX_FMASK GENMASK(23, 19) +#define GSI_LOG_ERR_TYPE_FMASK GENMASK(27, 24) +#define GSI_LOG_ERR_EE_FMASK GENMASK(31, 28) + +/* Hardware values used when programming an event ring */ +enum gsi_evt_chtype { + GSI_EVT_CHTYPE_MHI_EV = 0x0, + GSI_EVT_CHTYPE_XHCI_EV = 0x1, + GSI_EVT_CHTYPE_GPI_EV = 0x2, + GSI_EVT_CHTYPE_XDCI_EV = 0x3, +}; + +/* Hardware values used when programming a channel */ +enum gsi_channel_protocol { + GSI_CHANNEL_PROTOCOL_MHI = 0x0, + GSI_CHANNEL_PROTOCOL_XHCI = 0x1, + GSI_CHANNEL_PROTOCOL_GPI = 0x2, + GSI_CHANNEL_PROTOCOL_XDCI = 0x3, +}; + +/* Hardware values representing an event ring immediate command opcode */ +enum gsi_evt_ch_cmd_opcode { + GSI_EVT_ALLOCATE = 0x0, + GSI_EVT_RESET = 0x9, + GSI_EVT_DE_ALLOC = 0xa, +}; + +/* Hardware values representing a channel immediate command opcode */ +enum gsi_ch_cmd_opcode { + GSI_CH_ALLOCATE = 0x0, + GSI_CH_START = 0x1, + GSI_CH_STOP = 0x2, + GSI_CH_RESET = 0x9, + GSI_CH_DE_ALLOC = 0xa, + GSI_CH_DB_STOP = 0xb, +}; + +/** gsi_gpi_channel_scratch - GPI protocol scratch register + * + * @max_outstanding_tre: + * Defines the maximum number of TREs allowed in a single transaction + * on a channel (in Bytes). This determines the amount of prefetch + * performed by the hardware. We configure this to equal the size of + * the TLV FIFO for the channel. + * @outstanding_threshold: + * Defines the threshold (in Bytes) determining when the sequencer + * should update the channel doorbell. We configure this to equal + * the size of two TREs. + */ +struct gsi_gpi_channel_scratch { + u64 rsvd1; + u16 rsvd2; + u16 max_outstanding_tre; + u16 rsvd3; + u16 outstanding_threshold; +} __packed; + +/** gsi_channel_scratch - channel scratch configuration area + * + * The exact interpretation of this register is protocol-specific. + * We only use GPI channels; see struct gsi_gpi_channel_scratch, above. 
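+ *
+ * As a worked example (editor's illustration, with assumed values): if
+ * a channel's TLV FIFO holds 8 TREs and a TRE is 16 bytes, the scratch
+ * area would be programmed with max_outstanding_tre = 8 * 16 = 128 and
+ * outstanding_threshold = 2 * 16 = 32.  The actual values come from the
+ * channel data (tlv_count) and sizeof(struct gsi_tre); see
+ * gsi_channel_scratch_write() below.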
+ */
+union gsi_channel_scratch {
+        struct gsi_gpi_channel_scratch gpi;
+        struct {
+                u32 word1;
+                u32 word2;
+                u32 word3;
+                u32 word4;
+        } data;
+} __packed;
+
+/* Enable or disable an event interrupt */
+static void
+_gsi_irq_control_event(struct gsi *gsi, u32 evt_ring_id, bool enable)
+{
+        u32 mask = BIT(evt_ring_id);
+        u32 val;
+
+        if (enable)
+                gsi->event_enable_bitmap |= mask;
+        else
+                gsi->event_enable_bitmap &= ~mask;
+
+        val = gsi->event_enable_bitmap;
+        iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
+}
+
+static void gsi_irq_enable_event(struct gsi *gsi, u32 evt_ring_id)
+{
+        _gsi_irq_control_event(gsi, evt_ring_id, true);
+}
+
+static void gsi_irq_disable_event(struct gsi *gsi, u32 evt_ring_id)
+{
+        _gsi_irq_control_event(gsi, evt_ring_id, false);
+}
+
+/* Enable or disable all interrupt types */
+static void _gsi_irq_control_all(struct gsi *gsi, bool enable)
+{
+        u32 val;
+
+        /* Inter-EE commands / interrupts are not supported */
+        val = enable ? GSI_CNTXT_TYPE_IRQ_MSK_ALL : 0;
+        iowrite32(val, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET);
+
+        val = enable ? GENMASK(GSI_CHANNEL_MAX - 1, 0) : 0;
+        iowrite32(val, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET);
+
+        val = enable ? GENMASK(GSI_EVT_RING_MAX - 1, 0) : 0;
+        iowrite32(val, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET);
+
+        /* IEOB interrupts are managed individually */
+        val = enable ? gsi->event_enable_bitmap : 0;
+        iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
+
+        val = enable ? GSI_CNTXT_GLOB_IRQ_ALL : 0;
+        iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET);
+
+        /* Never enable GSI_BREAK_POINT */
+        val = enable ? GSI_CNTXT_GSI_IRQ_ALL & ~EN_BREAK_POINT_FMASK : 0;
+        iowrite32(val, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET);
+}
+
+static void gsi_irq_disable_all(struct gsi *gsi)
+{
+        _gsi_irq_control_all(gsi, false);
+}
+
+static void gsi_irq_enable_all(struct gsi *gsi)
+{
+        _gsi_irq_control_all(gsi, true);
+}
+
+/* Return the channel id associated with a given channel */
+u32 gsi_channel_id(struct gsi_channel *channel)
+{
+        return channel - &channel->gsi->channel[0];
+}
+
+/* Return the hardware's notion of the current state of a channel */
+static enum gsi_channel_state gsi_channel_state(struct gsi_channel *channel)
+{
+        u32 channel_id = gsi_channel_id(channel);
+        struct gsi *gsi = channel->gsi;
+        u32 val;
+
+        val = ioread32(gsi->virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id));
+
+        return u32_get_bits(val, CHSTATE_FMASK);
+}
+
+/* Return the hardware's notion of the current state of an event ring */
+static enum gsi_evt_ring_state
+gsi_evt_ring_state(struct gsi *gsi, u32 evt_ring_id)
+{
+        u32 val = ioread32(gsi->virt + GSI_EV_CH_E_CNTXT_0_OFFSET(evt_ring_id));
+
+        return u32_get_bits(val, EV_CHSTATE_FMASK);
+}
+
+/* Channel control interrupt handler */
+static void gsi_isr_chan_ctrl(struct gsi *gsi)
+{
+        u32 channel_mask;
+
+        channel_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_CH_IRQ_OFFSET);
+        iowrite32(channel_mask, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET);
+
+        while (channel_mask) {
+                u32 channel_id = __ffs(channel_mask);
+                struct gsi_channel *channel;
+
+                channel_mask ^= BIT(channel_id);
+
+                channel = &gsi->channel[channel_id];
+                channel->state = gsi_channel_state(channel);
+
+                complete(&channel->completion);
+        }
+}
+
+static void gsi_isr_evt_ctrl(struct gsi *gsi)
+{
+        u32 evt_mask;
+
+        evt_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_OFFSET);
+        iowrite32(evt_mask, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET);
+
+        while (evt_mask) {
+                u32 evt_ring_id = __ffs(evt_mask);
+
struct gsi_evt_ring *evt_ring; + + evt_mask ^= BIT(evt_ring_id); + + evt_ring = &gsi->evt_ring[evt_ring_id]; + evt_ring->state = gsi_evt_ring_state(gsi, evt_ring_id); + + complete(&evt_ring->completion); + } +} + +static void +gsi_isr_glob_chan_err(struct gsi *gsi, u32 err_ee, u32 channel_id, u32 code) +{ + if (code == GSI_OUT_OF_RESOURCES_ERR) { + dev_err(gsi->dev, "channel %u out of resources\n", channel_id); + complete(&gsi->channel[channel_id].completion); + return; + } + + /* Report, but otherwise ignore all other error codes */ + WARN(true, "channel %u global error ee 0x%08x code 0x%08x\n", + channel_id, err_ee, code); +} + +static void +gsi_isr_glob_evt_err(struct gsi *gsi, u32 err_ee, u32 evt_ring_id, u32 code) +{ + if (code == GSI_OUT_OF_RESOURCES_ERR) { + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + u32 channel_id = gsi_channel_id(evt_ring->channel); + + complete(&evt_ring->completion); + dev_err(gsi->dev, "evt_ring for channel %u out of resources\n", + channel_id); + return; + } + + /* Report, but otherwise ignore all other error codes */ + WARN(true, "event ring 0x%08x global error ee %u code 0x%08x\n", + evt_ring_id, err_ee, code); +} + +static void gsi_isr_glob_err(struct gsi *gsi) +{ + enum gsi_err_type type; + enum gsi_err_code code; + u32 which; + u32 val; + u32 ee; + + /* Get the logged error, then reinitialize the log */ + val = ioread32(gsi->virt + GSI_ERROR_LOG_OFFSET); + iowrite32(0, gsi->virt + GSI_ERROR_LOG_OFFSET); + iowrite32(~0, gsi->virt + GSI_ERROR_LOG_CLR_OFFSET); + + ee = u32_get_bits(val, GSI_LOG_ERR_EE_FMASK); + which = u32_get_bits(val, GSI_LOG_ERR_VIRT_IDX_FMASK); + type = u32_get_bits(val, GSI_LOG_ERR_TYPE_FMASK); + code = u32_get_bits(val, GSI_LOG_ERR_CODE_FMASK); + + if (type == GSI_ERR_TYPE_CHAN) + gsi_isr_glob_chan_err(gsi, ee, which, code); + else if (type == GSI_ERR_TYPE_EVT) + gsi_isr_glob_evt_err(gsi, ee, which, code); + else /* type GSI_ERR_TYPE_GLOB should be fatal */ + WARN(true, "unexpected global error 0x%08x\n", type); +} + +static void gsi_isr_glob_ee(struct gsi *gsi) +{ + u32 val; + + val = ioread32(gsi->virt + GSI_CNTXT_GLOB_IRQ_STTS_OFFSET); + + if (val & ERROR_INT_FMASK) + gsi_isr_glob_err(gsi); + + iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_CLR_OFFSET); + + val ^= ERROR_INT_FMASK; + + if (val & EN_GP_INT1_FMASK) + dev_err(gsi->dev, "unexpected global INT1\n"); + val ^= EN_GP_INT1_FMASK; + + WARN(val, "unexpected global interrupt 0x%08x\n", val); +} + +/* Returns true if the interrupt state (enabled or not) changed */ +static bool gsi_channel_intr(struct gsi_channel *channel, bool enable) +{ + u32 evt_ring_id = channel->evt_ring_id; + struct gsi *gsi = channel->gsi; + u32 mask = BIT(evt_ring_id); + unsigned long flags; + bool different; + u32 enabled; + + spin_lock_irqsave(&gsi->spinlock, flags); + + enabled = gsi->event_enable_bitmap & mask; + different = enable == !enabled; + + if (different) { + if (enabled) + gsi_irq_disable_event(channel->gsi, evt_ring_id); + else + gsi_irq_enable_event(channel->gsi, evt_ring_id); + } + + spin_unlock_irqrestore(&gsi->spinlock, flags); + + return different; +} + +/* This function is almost always called in interrupt context, + * meaning the interrupt is enabled. The request to disable + * the interrupt here will therefore "succeed", that is, it + * will disable an enabled interrupt. + * + * However, this function is also called when cancelling pending + * transactions, and when that occurs it's possible interrupts are + * already disabled. 
For that reason we only schedule NAPI if we + * actually caused interrupts to become disabled. + */ +void gsi_event_handle(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + struct gsi_channel *channel = evt_ring->channel; + + if (gsi_channel_intr(channel, false)) + napi_schedule(&channel->napi); +} + +static void gsi_isr_ieob(struct gsi *gsi) +{ + u32 evt_mask; + + evt_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_OFFSET); + evt_mask &= ioread32(gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET); + iowrite32(evt_mask, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET); + + while (evt_mask) { + u32 evt_ring_id = __ffs(evt_mask); + + evt_mask ^= BIT(evt_ring_id); + + gsi_event_handle(gsi, evt_ring_id); + } +} + +static void gsi_isr_inter_ee_chan_ctrl(struct gsi *gsi) +{ + u32 channel_mask; + + channel_mask = ioread32(gsi->virt + GSI_INTER_EE_SRC_CH_IRQ_OFFSET); + iowrite32(channel_mask, gsi->virt + GSI_INTER_EE_SRC_CH_IRQ_CLR_OFFSET); + + while (channel_mask) { + u32 channel_id = __ffs(channel_mask); + + /* not currently expected */ + dev_err(gsi->dev, "ch %u inter-EE interrupt\n", channel_id); + channel_mask ^= BIT(channel_id); + } +} + +static void gsi_isr_inter_ee_evt_ctrl(struct gsi *gsi) +{ + u32 evt_mask; + + evt_mask = ioread32(gsi->virt + GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET); + iowrite32(evt_mask, gsi->virt + GSI_INTER_EE_SRC_EV_CH_IRQ_CLR_OFFSET); + + while (evt_mask) { + u32 evt_ring_id = __ffs(evt_mask); + + /* not currently expected */ + dev_err(gsi->dev, "evt %u inter-EE interrupt\n", evt_ring_id); + evt_mask ^= BIT(evt_ring_id); + } +} + +static void gsi_isr_general(struct gsi *gsi) +{ + u32 val; + + val = ioread32(gsi->virt + GSI_CNTXT_GSI_IRQ_STTS_OFFSET); + iowrite32(val, gsi->virt + GSI_CNTXT_GSI_IRQ_CLR_OFFSET); + + if (val & CLR_BREAK_POINT_FMASK) + dev_err(gsi->dev, "breakpoint!\n"); + val ^= CLR_BREAK_POINT_FMASK; + + WARN(val, "unexpected general interrupt 0x%08x\n", val); +} + +/** + * gsi_isr() - Top level GSI interrupt service routine + * @irq: Interrupt number (ignored) + * @dev_id: Device id pointer supplied to request_irq() + * + * This is the main handler function registered for the GSI IRQ. The + * GSI pointer is supplied as the "device id" value when the handler + * is registered, and is provided here. Each type of interrupt has a + * separate handler function that is called from here. 
+ */ +static irqreturn_t gsi_isr(int irq, void *dev_id) +{ + struct gsi *gsi = dev_id; + u32 intr_mask; + u32 cnt = 0; + + while ((intr_mask = ioread32(gsi->virt + GSI_CNTXT_TYPE_IRQ_OFFSET))) { + /* intr_mask contains bitmask of pending GSI interrupts */ + do { + u32 gsi_intr = BIT(__ffs(intr_mask)); + + intr_mask ^= gsi_intr; + + switch (gsi_intr) { + case CH_CTRL_FMASK: + gsi_isr_chan_ctrl(gsi); + break; + case EV_CTRL_FMASK: + gsi_isr_evt_ctrl(gsi); + break; + case GLOB_EE_FMASK: + gsi_isr_glob_ee(gsi); + break; + case IEOB_FMASK: + gsi_isr_ieob(gsi); + break; + case INTER_EE_CH_CTRL_FMASK: + gsi_isr_inter_ee_chan_ctrl(gsi); + break; + case INTER_EE_EV_CTRL_FMASK: + gsi_isr_inter_ee_evt_ctrl(gsi); + break; + case GENERAL_FMASK: + gsi_isr_general(gsi); + break; + default: + WARN(true, "%s: unrecognized type 0x%08x\n", + __func__, gsi_intr); + break; + } + } while (intr_mask); + + if (WARN(++cnt > GSI_ISR_MAX_ITER, "interrupt flood\n")) + break; + } + + return IRQ_HANDLED; +} + +/* Return the virtual address associated with a 32-bit ring offset */ +void *gsi_ring_virt(struct gsi_ring *ring, u32 offset) +{ + return ring->virt + (offset - ring->base); +} + +/* Return the ring index of a 32-bit ring offset */ +u32 ring_index(struct gsi_ring *ring, u32 offset) +{ + /* Code assumes channel and event ring elements are the same size */ + BUILD_BUG_ON(sizeof(struct gsi_tre) != + sizeof(struct gsi_xfer_compl_evt)); + + return (offset - ring->base) / sizeof(struct gsi_tre); +} + +/* Return the 32-bit ring offset that precedes the one at the given offset */ +static u32 ring_prev(struct gsi_ring *ring, u32 offset) +{ + if (offset == ring->base) + offset = ring->end; + + return offset - sizeof(struct gsi_tre); +} + +/* Advance a ring's local write pointer by the given number of slots */ +void gsi_ring_wp_local_add(struct gsi_ring *ring, u32 val) +{ + ring->wp_local += val * sizeof(struct gsi_tre); + if (ring->wp_local >= ring->end) + ring->wp_local -= ring->size; +} + +/* Advance a ring's local read pointer by the given number of slots */ +static void gsi_ring_rp_local_add(struct gsi_ring *ring, u32 val) +{ + ring->rp_local += val * sizeof(struct gsi_tre); + if (ring->rp_local == ring->end) + ring->rp_local -= ring->size; +} + +static void __gsi_evt_tx_update(struct gsi_evt_ring *evt_ring, u32 rp) +{ + struct gsi_channel *channel = evt_ring->channel; + struct gsi_ring *ring = &evt_ring->ring; + struct gsi_xfer_compl_evt *evt; + struct gsi_trans *first_trans; + struct gsi_trans *last_trans; + u32 trans_count; + u32 byte_count; + u32 tre_offset; + u32 tre_index; + + /* Get the first (oldest) un-processed event */ + evt = gsi_ring_virt(ring, ring->rp_local); + /* Get the TRE offset from that, and its associated transaction */ + tre_offset = le64_to_cpu(evt->xfer_ptr) & GENMASK(31, 0); + tre_index = ring_index(&channel->tre_ring, tre_offset); + first_trans = gsi_channel_trans_mapped(channel, tre_index); + + /* Get the last (newest) un-processed event */ + evt = gsi_ring_virt(ring, ring_prev(ring, rp)); + /* Get the TRE offset from that, and its associated transaction */ + tre_offset = le64_to_cpu(evt->xfer_ptr) & GENMASK(31, 0); + tre_index = ring_index(&channel->tre_ring, tre_offset); + last_trans = gsi_channel_trans_mapped(channel, tre_index); + + /* Report the total number of transactions and bytes that have + * been transferred, *including* the last one. 
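+         *
+         * (Editor's example: if first_trans->trans_count is 10 and
+         * last_trans->trans_count is 14, five transactions completed.
+         * The byte counts are per-transaction snapshots of running
+         * channel totals, recorded before each transaction's own
+         * length, so last_trans->len must be added back in below.)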
+         */
+        trans_count = last_trans->trans_count - first_trans->trans_count + 1;
+        byte_count = last_trans->byte_count - first_trans->byte_count;
+        byte_count += last_trans->len;
+
+        ipa_gsi_channel_tx_completed(channel->gsi, gsi_channel_id(channel),
+                                     trans_count, byte_count);
+}
+
+/**
+ * __gsi_evt_rx_update() - Record lengths of received data
+ * @evt_ring: Event ring associated with channel that received packets
+ * @rp: Last event in the ring associated with a completed request
+ *
+ * Events for RX channels contain the actual number of bytes received into
+ * the buffer.  Every event has a transaction associated with it, and here
+ * we update each transaction's result code to record the received length.
+ *
+ * This function is called whenever we learn that the GSI hardware has filled
+ * new events since the last time we checked.  We need to update transaction
+ * lengths for events starting at the ring's rp_local up to (and including)
+ * the ring offset supplied as an argument.
+ *
+ * Events are sequential within the event ring, and transactions are
+ * sequential within the transaction pool.  We compute the first event's
+ * transaction pointer; the next event's transaction will just be the next
+ * one in the transaction pool.
+ *
+ * Note that @rp always points to an element *within* the event ring.
+ */
+static void __gsi_evt_rx_update(struct gsi_evt_ring *evt_ring, u32 rp)
+{
+        struct gsi_channel *channel = evt_ring->channel;
+        struct gsi_ring *ring = &evt_ring->ring;
+        struct gsi_xfer_compl_evt *evt_last;
+        struct gsi_xfer_compl_evt *evt_end;
+        struct gsi_trans_info *trans_info;
+        struct gsi_xfer_compl_evt *evt;
+        struct gsi_trans *trans_end;
+        struct gsi_trans *trans;
+        u32 byte_count = 0;
+        u32 tre_offset;
+        u32 tre_index;
+
+        /* Start with the first un-processed event */
+        evt = gsi_ring_virt(ring, ring->rp_local);
+        evt_last = gsi_ring_virt(ring, rp);
+        evt_end = gsi_ring_virt(ring, ring->end);
+
+        /* Event xfer_ptr records the TRE it's associated with */
+        tre_offset = le64_to_cpu(evt->xfer_ptr) & GENMASK(31, 0);
+        tre_index = ring_index(&channel->tre_ring, tre_offset);
+        /* Get the transaction mapped to the first unprocessed event */
+        trans = gsi_channel_trans_mapped(channel, tre_index);
+        trans_info = &channel->trans_info;
+        trans_end = &trans_info->pool[trans_info->pool_count];
+
+        do {
+                trans->len = __le16_to_cpu(evt->len);
+                trans->result = __le16_to_cpu(evt->len);
+                byte_count += trans->result;
+                if (++evt == evt_end)
+                        evt = gsi_ring_virt(&evt_ring->ring, ring->base);
+                if (++trans == trans_end)
+                        trans = &trans_info->pool[0];
+        } while (evt != evt_last);
+
+        /* We record RX bytes when they are received */
+        channel->byte_count += byte_count;
+        channel->trans_count++;
+}
+
+static void
+gsi_evt_ring_doorbell(struct gsi *gsi, u32 evt_ring_id)
+{
+        struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+        u32 val;
+
+        /* We only need to write the lower 32 bits */
+        val = evt_ring->ring.wp_local;
+        iowrite32(val, gsi->virt + GSI_EV_CH_E_DOORBELL_0_OFFSET(evt_ring_id));
+}
+
+static u32 gsi_channel_max(struct gsi *gsi)
+{
+        u32 val = ioread32(gsi->virt + GSI_GSI_HW_PARAM_2_OFFSET);
+
+        return u32_get_bits(val, NUM_CH_PER_EE_FMASK);
+}
+
+static u32 gsi_evt_ring_max(struct gsi *gsi)
+{
+        u32 val = ioread32(gsi->virt + GSI_GSI_HW_PARAM_2_OFFSET);
+
+        return u32_get_bits(val, NUM_EV_PER_EE_FMASK);
+}
+
+/* Issue a GSI command by writing a value to a register, then wait for
+ * completion to be signaled.  The function returns no value; if the
+ * command does not complete within a fixed timeout, a warning is issued.
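+ *
+ * (Editor's note: the completion is signaled from interrupt context via
+ * gsi_isr(); for example, a channel command completes when
+ * gsi_isr_chan_ctrl() reads the channel's new state and calls
+ * complete() on the channel's completion structure.)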
+ */ +static void +gsi_command(struct gsi *gsi, u32 reg, u32 val, struct completion *completion) +{ + unsigned long ret; + + reinit_completion(completion); + + iowrite32(val, gsi->virt + reg); + ret = wait_for_completion_timeout(completion, GSI_CMD_TIMEOUT * HZ); + WARN(!ret, "%s timeout reg 0x%08x val 0x%08x\n", __func__, reg, val); +} + +/* Issue an event ring command and wait for it to complete */ +static void evt_ring_command(struct gsi *gsi, u32 evt_ring_id, + enum gsi_evt_ch_cmd_opcode op) +{ + struct completion *completion = &gsi->evt_ring[evt_ring_id].completion; + u32 val = 0; + + val |= u32_encode_bits(evt_ring_id, EV_CHID_FMASK); + val |= u32_encode_bits(op, EV_OPCODE_FMASK); + + gsi_command(gsi, GSI_EV_CH_CMD_OFFSET, val, completion); +} + +/* Issue a channel command and wait for it to complete */ +static void +gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode op) +{ + u32 channel_id = gsi_channel_id(channel); + u32 val = 0; + + val |= u32_encode_bits(channel_id, CH_CHID_FMASK); + val |= u32_encode_bits(op, CH_OPCODE_FMASK); + + gsi_command(channel->gsi, GSI_CH_CMD_OFFSET, val, &channel->completion); +} + +static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count) +{ + size_t size = roundup_pow_of_two(count * sizeof(struct gsi_tre)); + dma_addr_t addr; + + /* Hardware requires a power-of-2 ring size (and alignment) */ + ring->virt = dma_alloc_coherent(gsi->dev, size, &addr, GFP_KERNEL); + if (!ring->virt) + return -ENOMEM; + ring->addr = addr; + ring->base = addr & GENMASK(31, 0); + ring->size = size; + ring->end = ring->base + size; + spin_lock_init(&ring->spinlock); + + return 0; +} + +static void gsi_ring_free(struct gsi *gsi, struct gsi_ring *ring) +{ + dma_free_coherent(gsi->dev, ring->size, ring->virt, ring->addr); + memset(ring, 0, sizeof(*ring)); +} + +static void gsi_evt_ring_prime(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + struct gsi_ring *ring = &evt_ring->ring; + unsigned long flags; + + spin_lock_irqsave(&ring->spinlock, flags); + + memset(ring->virt, 0, ring->size); + /* Point the write pointer at the last element */ + ring->wp_local = ring_prev(ring, ring->base); + gsi_evt_ring_doorbell(gsi, evt_ring_id); + + spin_unlock_irqrestore(&ring->spinlock, flags); +} + +static void gsi_evt_ring_program(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + u32 val = 0; + + BUILD_BUG_ON(sizeof(struct gsi_xfer_compl_evt) > + field_max(EV_ELEMENT_SIZE_FMASK)); + + val |= u32_encode_bits(GSI_EVT_CHTYPE_GPI_EV, EV_CHTYPE_FMASK); + val |= EV_INTYPE_FMASK; + val |= u32_encode_bits(sizeof(struct gsi_xfer_compl_evt), + EV_ELEMENT_SIZE_FMASK); + iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_0_OFFSET(evt_ring_id)); + + val = u32_encode_bits(evt_ring->ring.size, EV_R_LENGTH_FMASK); + iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_1_OFFSET(evt_ring_id)); + + /* The context 2 and 3 registers store the low-order and + * high-order 32 bits of the address of the event ring, + * respectively. 
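+         *
+         * (Editor's example with a hypothetical address: a ring at DMA
+         * address 0x0000000123456000 would be programmed with 0x23456000
+         * in context register 2 and 0x1 in context register 3.)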
+ */ + val = evt_ring->ring.base; + iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_2_OFFSET(evt_ring_id)); + + val = evt_ring->ring.addr >> 32; + iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_3_OFFSET(evt_ring_id)); + + /* Enable interrupt moderation by setting the moderation delay */ + val = u32_encode_bits(IPA_GSI_EVT_RING_INT_MODT, MODT_FMASK); + val |= u32_encode_bits(1, MODC_FMASK); /* comes from channel */ + iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_8_OFFSET(evt_ring_id)); + + /* No MSI write data, and MSI address high and low address is 0 */ + iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_9_OFFSET(evt_ring_id)); + iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_10_OFFSET(evt_ring_id)); + iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_11_OFFSET(evt_ring_id)); + + /* We don't need to get event read pointer updates */ + iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_12_OFFSET(evt_ring_id)); + iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_13_OFFSET(evt_ring_id)); +} + +static void gsi_ring_init(struct gsi_ring *ring) +{ + ring->wp = ring->base; + ring->wp_local = ring->base; + ring->rp_local = ring->base; +} + +static void gsi_evt_ring_scratch_zero(struct gsi *gsi, u32 evt_ring_id) +{ + iowrite32(0, gsi->virt + GSI_EV_CH_E_SCRATCH_0_OFFSET(evt_ring_id)); + iowrite32(0, gsi->virt + GSI_EV_CH_E_SCRATCH_1_OFFSET(evt_ring_id)); +} + +static int gsi_evt_ring_alloc_hw(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + unsigned long flags; + u32 val; + + evt_ring_command(gsi, evt_ring_id, GSI_EVT_ALLOCATE); + + if (evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) { + dev_err(gsi->dev, "evt_ring_id %u allocation bad state %u\n", + evt_ring_id, evt_ring->state); + return -EIO; + } + + gsi_evt_ring_program(gsi, evt_ring_id); + gsi_ring_init(&evt_ring->ring); + gsi_evt_ring_prime(gsi, evt_ring_id); + + spin_lock_irqsave(&gsi->spinlock, flags); + + /* Enable the event interrupt (clear it first in case pending) */ + val = BIT(evt_ring_id); + iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET); + gsi_irq_enable_event(gsi, evt_ring_id); + + spin_unlock_irqrestore(&gsi->spinlock, flags); + + return 0; +} + +static void gsi_evt_ring_free_hw(struct gsi *gsi, u32 evt_ring_id) +{ + struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; + unsigned long flags; + + spin_lock_irqsave(&gsi->spinlock, flags); + + /* Disable the event interrupt */ + gsi_irq_disable_event(gsi, evt_ring_id); + + spin_unlock_irqrestore(&gsi->spinlock, flags); + + evt_ring_command(gsi, evt_ring_id, GSI_EVT_RESET); + + gsi_evt_ring_program(gsi, evt_ring_id); + gsi_ring_init(&evt_ring->ring); + gsi_evt_ring_scratch_zero(gsi, evt_ring_id); + gsi_evt_ring_prime(gsi, evt_ring_id); + + evt_ring_command(gsi, evt_ring_id, GSI_EVT_DE_ALLOC); +} + +static int gsi_evt_ring_id_alloc(struct gsi *gsi) +{ + u32 evt_ring_id; + + if (gsi->event_bitmap == ~0U) + return -ENOSPC; + + evt_ring_id = ffz(gsi->event_bitmap); + gsi->event_bitmap |= BIT(evt_ring_id); + + return (int)evt_ring_id; +} + +static void gsi_evt_ring_id_free(struct gsi *gsi, u32 evt_ring_id) +{ + gsi->event_bitmap &= ~BIT(evt_ring_id); +} + +void gsi_channel_doorbell(struct gsi_channel *channel) +{ + u32 channel_id = gsi_channel_id(channel); + struct gsi *gsi = channel->gsi; + u32 val; + + channel->tre_ring.wp = channel->tre_ring.wp_local; + + /* We only need to write the lower 32 bits */ + val = channel->tre_ring.wp_local; + iowrite32(val, gsi->virt + GSI_CH_C_DOORBELL_0_OFFSET(channel_id)); +} + +static void 
__gsi_evt_ring_update(struct gsi *gsi, u32 evt_ring_id)
+{
+        struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+        u32 offset = GSI_EV_CH_E_CNTXT_4_OFFSET(evt_ring_id);
+        struct gsi_channel *channel = evt_ring->channel;
+        struct gsi_ring *tre_ring = &channel->tre_ring;
+        struct gsi_ring *ring = &evt_ring->ring;
+        u32 rp = ioread32(gsi->virt + offset);
+        struct gsi_xfer_compl_evt *evt;
+        struct gsi_trans *trans;
+        u32 tre_offset;
+        u32 tre_index;
+        u32 rp_last;
+
+        /* If we have nothing new to process we're done */
+        if (ring->rp_local == rp)
+                return;
+
+        /* Extract information from the newly-completed events.  For TX
+         * channels, report the number of transferred bytes they represent.
+         * For RX channels, update each transaction with the number of bytes
+         * actually received.
+         */
+        if (channel->toward_ipa)
+                __gsi_evt_tx_update(evt_ring, rp);
+        else
+                __gsi_evt_rx_update(evt_ring, rp);
+
+        /* Get the TRE pointer from the latest completion event, and get
+         * the transaction associated with that.  Move all new transactions
+         * up to and including that one to the completed list.
+         */
+        rp_last = ring_prev(ring, rp);
+        evt = gsi_ring_virt(ring, rp_last);
+        tre_offset = le64_to_cpu(evt->xfer_ptr) & GENMASK(31, 0);
+        tre_index = ring_index(tre_ring, tre_offset);
+        trans = gsi_channel_trans_mapped(channel, tre_index);
+        gsi_trans_move_complete(trans);
+
+        /* We need nothing more from these TREs, so consume them */
+        tre_ring->rp_local = tre_offset;
+        gsi_ring_rp_local_add(tre_ring, 1);
+
+        /* Record that we're caught up on these events, and give the
+         * completed ones back to the hardware for reuse.
+         */
+        ring->rp_local = rp;
+        ring->wp_local = rp_last;
+        gsi_evt_ring_doorbell(channel->gsi, channel->evt_ring_id);
+}
+
+/* Consult hardware, move any newly completed transactions to completed list */
+static void gsi_channel_update(struct gsi_channel *channel)
+{
+        struct gsi_evt_ring *evt_ring;
+        unsigned long flags;
+
+        evt_ring = &channel->gsi->evt_ring[channel->evt_ring_id];
+
+        spin_lock_irqsave(&evt_ring->ring.spinlock, flags);
+
+        __gsi_evt_ring_update(channel->gsi, channel->evt_ring_id);
+
+        spin_unlock_irqrestore(&evt_ring->ring.spinlock, flags);
+}
+
+/**
+ * gsi_channel_poll_one() - Return a single completed transaction on a channel
+ * @channel: Channel to be polled
+ *
+ * @Return: Transaction pointer, or null if none are available
+ *
+ * This function returns the first entry on a channel's completed
+ * transaction list.  If that list is empty, the hardware is consulted
+ * to determine whether any new transactions have completed.  If so,
+ * they're moved to the completed list and the new first entry is
+ * returned.  If there are no more completed transactions, a null
+ * pointer is returned.
+ */
+static struct gsi_trans *gsi_channel_poll_one(struct gsi_channel *channel)
+{
+        struct gsi_trans *trans;
+
+        /* Get the first transaction from the completed list */
+        trans = gsi_channel_trans_complete(channel);
+        if (!trans) {
+                /* List is empty; see if there's more to do */
+                gsi_channel_update(channel);
+                trans = gsi_channel_trans_complete(channel);
+        }
+
+        if (trans)
+                gsi_trans_move_polled(trans);
+
+        return trans;
+}
+
+/**
+ * gsi_channel_poll() - NAPI poll function for a channel
+ * @napi: NAPI structure for the channel
+ * @budget: Budget supplied by NAPI core
+ *
+ * @Return: Number of items polled (<= budget)
+ *
+ * Single transactions completed by hardware are polled until either
+ * the budget is exhausted, or there are no more.
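+ * (Editor's note: per standard NAPI practice, the poll function below
+ * calls napi_complete() and re-enables the channel interrupt only when
+ * it has processed fewer transactions than the budget allows.)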
+ * Each transaction polled is passed to gsi_trans_complete(), to perform
+ * remaining completion processing and retire/free the transaction.
+ */
+static int gsi_channel_poll(struct napi_struct *napi, int budget)
+{
+        struct gsi_channel *channel;
+        int count = 0;
+
+        channel = container_of(napi, struct gsi_channel, napi);
+        while (count < budget) {
+                struct gsi_trans *trans;
+
+                count++;
+                trans = gsi_channel_poll_one(channel);
+                if (!trans)
+                        break;
+                gsi_trans_complete(trans);
+        }
+
+        if (count < budget) {
+                napi_complete(&channel->napi);
+                (void)gsi_channel_intr(channel, true);
+        }
+
+        return count;
+}
+
+/* The event bitmap represents which event ids are available for
+ * allocation.  Set bits are not available, clear bits can be used.
+ * This function initializes the map so all events supported by the
+ * hardware are available, then precludes any reserved events from
+ * being allocated.
+ */
+static u32 gsi_event_bitmap_init(u32 evt_ring_max)
+{
+        u32 event_bitmap = GENMASK(BITS_PER_LONG - 1, evt_ring_max);
+
+        return event_bitmap | GENMASK(GSI_MHI_ER_END, GSI_MHI_ER_START);
+}
+
+/* Setup function for event rings */
+static int gsi_evt_ring_setup(struct gsi *gsi)
+{
+        u32 evt_ring_max;
+        u32 evt_ring_id;
+
+        evt_ring_max = gsi_evt_ring_max(gsi);
+        dev_dbg(gsi->dev, "evt_ring_max %u\n", evt_ring_max);
+        if (evt_ring_max != GSI_EVT_RING_MAX)
+                return -EIO;
+
+        for (evt_ring_id = 0; evt_ring_id < GSI_EVT_RING_MAX; evt_ring_id++) {
+                struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+
+                evt_ring->state = gsi_evt_ring_state(gsi, evt_ring_id);
+                if (evt_ring->state != GSI_EVT_RING_STATE_NOT_ALLOCATED)
+                        return -EIO;
+        }
+
+        /* Enable all event interrupts */
+        gsi_irq_enable_all(gsi);
+
+        return 0;
+}
+
+/* Inverse of gsi_evt_ring_setup() */
+static void gsi_evt_ring_teardown(struct gsi *gsi)
+{
+        gsi_irq_disable_all(gsi);
+}
+
+static void gsi_channel_scratch_write(struct gsi_channel *channel)
+{
+        u32 channel_id = gsi_channel_id(channel);
+        struct gsi_gpi_channel_scratch *gpi;
+        union gsi_channel_scratch scr = { };
+        struct gsi *gsi = channel->gsi;
+        u32 val;
+
+        /* See comments above definition of gsi_gpi_channel_scratch */
+        gpi = &scr.gpi;
+        gpi->max_outstanding_tre = channel->data->tlv_count *
+                                        sizeof(struct gsi_tre);
+        gpi->outstanding_threshold = 2 * sizeof(struct gsi_tre);
+
+        val = scr.data.word1;
+        iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_0_OFFSET(channel_id));
+
+        val = scr.data.word2;
+        iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_1_OFFSET(channel_id));
+
+        val = scr.data.word3;
+        iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_2_OFFSET(channel_id));
+
+        /* Only the high-order 16 bits of the last scratch register (the
+         * outstanding threshold) are ours to write; its low-order 16
+         * bits must be preserved.  The next sequence assumes those bits
+         * remain unchanged between the read and the write.
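+         *
+         * (Editor's example with hypothetical values: if the register
+         * reads 0xaaaa5555 and scr.data.word4 is 0x00200000, the value
+         * written back is 0x00200000 | 0x00005555 = 0x00205555.)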
+ */ + val = ioread32(gsi->virt + GSI_CH_C_SCRATCH_3_OFFSET(channel_id)); + val = (scr.data.word4 & GENMASK(31, 16)) | (val & GENMASK(15, 0)); + iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_3_OFFSET(channel_id)); +} + +static void gsi_channel_program(struct gsi_channel *channel, bool doorbell) +{ + u32 channel_id = gsi_channel_id(channel); + struct gsi *gsi = channel->gsi; + u32 wrr_weight = 0; + u32 val = 0; + + BUILD_BUG_ON(sizeof(struct gsi_tre) > field_max(ELEMENT_SIZE_FMASK)); + + val |= u32_encode_bits(GSI_CHANNEL_PROTOCOL_GPI, CHTYPE_PROTOCOL_FMASK); + if (channel->toward_ipa) + val |= CHTYPE_DIR_FMASK; + val |= u32_encode_bits(channel->evt_ring_id, ERINDEX_FMASK); + val |= u32_encode_bits(sizeof(struct gsi_tre), ELEMENT_SIZE_FMASK); + iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id)); + + val = u32_encode_bits(channel->tre_ring.size, R_LENGTH_FMASK); + iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_1_OFFSET(channel_id)); + + /* The context 2 and 3 registers store the low-order and + * high-order 32 bits of the address of the channel ring, + * respectively. + */ + val = channel->tre_ring.addr & GENMASK(31, 0); + iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_2_OFFSET(channel_id)); + + val = channel->tre_ring.addr >> 32; + iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_3_OFFSET(channel_id)); + + if (channel->data->wrr_priority) + wrr_weight = field_max(WRR_WEIGHT_FMASK); + val = u32_encode_bits(wrr_weight, WRR_WEIGHT_FMASK); + + /* Max prefetch is 1 segment (do not set MAX_PREFETCH_FMASK) */ + if (doorbell) + val |= USE_DB_ENG_FMASK; + iowrite32(val, gsi->virt + GSI_CH_C_QOS_OFFSET(channel_id)); +} + +static void +__gsi_channel_config(struct gsi_channel *channel, bool doorbell_enable) +{ + gsi_channel_program(channel, doorbell_enable); + gsi_ring_init(&channel->tre_ring); + gsi_channel_scratch_write(channel); +} + +void gsi_channel_config(struct gsi *gsi, u32 channel_id, bool doorbell_enable) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + + mutex_lock(&gsi->mutex); + + __gsi_channel_config(channel, doorbell_enable); + + mutex_unlock(&gsi->mutex); +} + +/* Setup function for a single channel */ +static int gsi_channel_setup_one(struct gsi_channel *channel) +{ + struct gsi *gsi = channel->gsi; + int ret; + + if (!gsi) + return 0; /* Ignore uninitialized channels */ + + channel->state = gsi_channel_state(channel); + if (channel->state != GSI_CHANNEL_STATE_NOT_ALLOCATED) + return -EIO; + + mutex_lock(&gsi->mutex); + + ret = gsi_evt_ring_alloc_hw(gsi, channel->evt_ring_id); + if (ret) { + mutex_unlock(&gsi->mutex); + + return ret; + } + + gsi_channel_command(channel, GSI_CH_ALLOCATE); + ret = channel->state == GSI_CHANNEL_STATE_ALLOCATED ? 
0 : -EIO; + if (ret) { + gsi_evt_ring_free_hw(gsi, channel->evt_ring_id); + mutex_unlock(&gsi->mutex); + + return ret; + } + + __gsi_channel_config(channel, true); + + mutex_unlock(&gsi->mutex); + + gsi->channel_stats.allocate++; + + if (channel->toward_ipa) + netif_tx_napi_add(&gsi->dummy_dev, &channel->napi, + gsi_channel_poll, NAPI_POLL_WEIGHT); + else + netif_napi_add(&gsi->dummy_dev, &channel->napi, + gsi_channel_poll, NAPI_POLL_WEIGHT); + + return 0; +} + +/* Inverse of gsi_channel_setup_one() */ +static void gsi_channel_teardown_one(struct gsi_channel *channel) +{ + struct gsi *gsi = channel->gsi; + + if (!gsi) + return; + + netif_napi_del(&channel->napi); + + mutex_lock(&gsi->mutex); + + gsi_channel_command(channel, GSI_CH_DE_ALLOC); + + gsi->channel_stats.free++; + + gsi_evt_ring_free_hw(gsi, channel->evt_ring_id); + + mutex_unlock(&gsi->mutex); + + gsi_channel_trans_exit(channel); +} + +/* Setup function for channels */ +static int gsi_channel_setup(struct gsi *gsi) +{ + u32 channel_max; + u32 channel_id; + int ret; + + channel_max = gsi_channel_max(gsi); + dev_dbg(gsi->dev, "channel_max %u\n", channel_max); + if (channel_max != GSI_CHANNEL_MAX) + return -EIO; + + ret = gsi_evt_ring_setup(gsi); + if (ret) + return ret; + + for (channel_id = 0; channel_id < GSI_CHANNEL_MAX; channel_id++) { + ret = gsi_channel_setup_one(&gsi->channel[channel_id]); + if (ret) + goto err_unwind; + } + + return 0; + +err_unwind: + while (channel_id--) + gsi_channel_teardown_one(&gsi->channel[channel_id]); + gsi_evt_ring_teardown(gsi); + + return ret; +} + +/* Inverse of gsi_channel_setup() */ +static void gsi_channel_teardown(struct gsi *gsi) +{ + u32 channel_id; + + for (channel_id = 0; channel_id < GSI_CHANNEL_MAX; channel_id++) { + struct gsi_channel *channel = &gsi->channel[channel_id]; + + gsi_channel_teardown_one(channel); + } + + gsi_evt_ring_teardown(gsi); +} + +/* Setup function for GSI. 
GSI firmware must be loaded and initialized */ +int gsi_setup(struct gsi *gsi) +{ + u32 val; + + /* Here is where we first touch the GSI hardware */ + val = ioread32(gsi->virt + GSI_GSI_STATUS_OFFSET); + if (!(val & ENABLED_FMASK)) { + dev_err(gsi->dev, "GSI has not been enabled\n"); + return -EIO; + } + + /* Initialize the error log */ + iowrite32(0, gsi->virt + GSI_ERROR_LOG_OFFSET); + + /* Writing 1 indicates IRQ interrupts; 0 would be MSI */ + iowrite32(1, gsi->virt + GSI_CNTXT_INTSET_OFFSET); + + return gsi_channel_setup(gsi); +} + +/* Inverse of gsi_setup() */ +void gsi_teardown(struct gsi *gsi) +{ + gsi_channel_teardown(gsi); +} + +/* Initialize a channel's event ring */ +static int gsi_channel_evt_ring_init(struct gsi_channel *channel) +{ + struct gsi *gsi = channel->gsi; + struct gsi_evt_ring *evt_ring; + int ret; + + ret = gsi_evt_ring_id_alloc(gsi); + if (ret < 0) + return ret; + channel->evt_ring_id = ret; + + evt_ring = &gsi->evt_ring[channel->evt_ring_id]; + evt_ring->channel = channel; + + ret = gsi_ring_alloc(gsi, &evt_ring->ring, channel->data->event_count); + if (ret) + goto err_free_evt_ring_id; + + return 0; + +err_free_evt_ring_id: + gsi_evt_ring_id_free(gsi, channel->evt_ring_id); + + return ret; +} + +/* Inverse of gsi_channel_evt_ring_init() */ +static void gsi_channel_evt_ring_exit(struct gsi_channel *channel) +{ + struct gsi *gsi = channel->gsi; + struct gsi_evt_ring *evt_ring; + + evt_ring = &gsi->evt_ring[channel->evt_ring_id]; + gsi_ring_free(gsi, &evt_ring->ring); + + gsi_evt_ring_id_free(gsi, channel->evt_ring_id); +} + +/* Init function for event rings */ +static void gsi_evt_ring_init(struct gsi *gsi) +{ + u32 evt_ring_id; + + BUILD_BUG_ON(GSI_EVT_RING_MAX >= BITS_PER_LONG); + + gsi->event_bitmap = gsi_event_bitmap_init(GSI_EVT_RING_MAX); + gsi->event_enable_bitmap = 0; + for (evt_ring_id = 0; evt_ring_id < GSI_EVT_RING_MAX; evt_ring_id++) + init_completion(&gsi->evt_ring[evt_ring_id].completion); +} + +/* Inverse of gsi_evt_ring_init() */ +static void gsi_evt_ring_exit(struct gsi *gsi) +{ + /* Nothing to do */ +} + +/* Init function for a single channel */ +static int +gsi_channel_init_one(struct gsi *gsi, const struct gsi_ipa_endpoint_data *data) +{ + struct gsi_channel *channel; + int ret; + + if (data->ee_id != GSI_EE_AP) + return 0; /* Ignore non-AP channels */ + + if (data->channel_id >= GSI_CHANNEL_MAX) + return -EIO; + channel = &gsi->channel[data->channel_id]; + + channel->gsi = gsi; + channel->toward_ipa = data->toward_ipa; + channel->data = &data->channel; + + init_completion(&channel->completion); + + ret = gsi_channel_evt_ring_init(channel); + if (ret) + return ret; + + ret = gsi_ring_alloc(gsi, &channel->tre_ring, channel->data->tre_count); + if (ret) + goto err_channel_evt_ring_exit; + + ret = gsi_channel_trans_init(channel); + if (ret) + goto err_ring_free; + + return 0; + +err_ring_free: + gsi_ring_free(gsi, &channel->tre_ring); +err_channel_evt_ring_exit: + gsi_channel_evt_ring_exit(channel); + + return ret; +} + +/* Inverse of gsi_channel_init_one() */ +static void gsi_channel_exit_one(struct gsi_channel *channel) +{ + gsi_channel_trans_exit(channel); + gsi_ring_free(channel->gsi, &channel->tre_ring); + gsi_channel_evt_ring_exit(channel); +} + +/* Init function for channels */ +static int gsi_channel_init(struct gsi *gsi, u32 data_count, + const struct gsi_ipa_endpoint_data *data) +{ + int ret = 0; + u32 i; + + gsi_evt_ring_init(gsi); + for (i = 0; i < data_count; i++) { + ret = gsi_channel_init_one(gsi, &data[i]); + if (ret) + break; + } 
+
+        return ret;
+}
+
+/* Inverse of gsi_channel_init() */
+static void gsi_channel_exit(struct gsi *gsi)
+{
+        u32 channel_id;
+
+        for (channel_id = 0; channel_id < GSI_CHANNEL_MAX; channel_id++)
+                gsi_channel_exit_one(&gsi->channel[channel_id]);
+        gsi_evt_ring_exit(gsi);
+}
+
+/* Init function for GSI.  GSI hardware does not need to be "ready" */
+int gsi_init(struct gsi *gsi, struct platform_device *pdev, u32 data_count,
+             const struct gsi_ipa_endpoint_data *data)
+{
+        struct resource *res;
+        resource_size_t size;
+        unsigned int irq;
+        int ret;
+
+        gsi->dev = &pdev->dev;
+        init_dummy_netdev(&gsi->dummy_dev);
+
+        /* Get GSI memory range and map it */
+        res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "gsi");
+        if (!res)
+                return -ENXIO;
+
+        size = resource_size(res);
+        if (res->start > U32_MAX || size > U32_MAX - res->start)
+                return -EINVAL;
+
+        gsi->virt = ioremap_nocache(res->start, size);
+        if (!gsi->virt)
+                return -ENOMEM;
+
+        ret = platform_get_irq_byname(pdev, "gsi");
+        if (ret < 0)
+                goto err_unmap_virt;
+        irq = ret;
+
+        ret = request_irq(irq, gsi_isr, 0, "gsi", gsi);
+        if (ret)
+                goto err_unmap_virt;
+        gsi->irq = irq;
+
+        ret = enable_irq_wake(gsi->irq);
+        if (ret)
+                dev_err(gsi->dev, "error %d enabling gsi wake irq\n", ret);
+        gsi->irq_wake_enabled = ret ? 0 : 1;
+
+        spin_lock_init(&gsi->spinlock);
+        mutex_init(&gsi->mutex);
+
+        ret = gsi_channel_init(gsi, data_count, data);
+        if (ret)
+                goto err_mutex_destroy;
+
+        return 0;
+
+err_mutex_destroy:
+        mutex_destroy(&gsi->mutex);
+        if (gsi->irq_wake_enabled)
+                (void)disable_irq_wake(gsi->irq);
+        free_irq(gsi->irq, gsi);
+err_unmap_virt:
+        iounmap(gsi->virt);
+
+        return ret;
+}
+
+/* Inverse of gsi_init() */
+void gsi_exit(struct gsi *gsi)
+{
+        gsi_channel_exit(gsi);
+
+        mutex_destroy(&gsi->mutex);
+        if (gsi->irq_wake_enabled)
+                (void)disable_irq_wake(gsi->irq);
+        free_irq(gsi->irq, gsi);
+        iounmap(gsi->virt);
+}
+
+/* Returns the maximum number of pending transactions on a channel */
+u32 gsi_channel_trans_max(struct gsi *gsi, u32 channel_id)
+{
+        struct gsi_channel *channel = &gsi->channel[channel_id];
+
+        return channel->data->tre_count;
+}
+
+/* Returns the maximum number of TREs in a single transaction for a channel */
+u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id)
+{
+        struct gsi_channel *channel = &gsi->channel[channel_id];
+
+        return channel->data->tlv_count;
+}
+
+/* Wait for all transaction activity on a channel to complete */
+void gsi_channel_trans_quiesce(struct gsi *gsi, u32 channel_id)
+{
+        struct gsi_channel *channel = &gsi->channel[channel_id];
+        struct gsi_trans_info *trans_info;
+        struct gsi_trans *trans = NULL;
+        struct gsi_evt_ring *evt_ring;
+        struct list_head *list;
+        unsigned long flags;
+
+        trans_info = &channel->trans_info;
+        evt_ring = &channel->gsi->evt_ring[channel->evt_ring_id];
+
+        spin_lock_irqsave(&evt_ring->ring.spinlock, flags);
+
+        /* Find the last list to which a transaction was added */
+        if (!list_empty(&trans_info->alloc))
+                list = &trans_info->alloc;
+        else if (!list_empty(&trans_info->pending))
+                list = &trans_info->pending;
+        else if (!list_empty(&trans_info->complete))
+                list = &trans_info->complete;
+        else if (!list_empty(&trans_info->polled))
+                list = &trans_info->polled;
+        else
+                list = NULL;
+
+        if (list) {
+                /* The last entry on this list is the last one allocated.
+                 * Grab a reference so we can wait for it.
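+                 * (Editor's note: taking this reference keeps the
+                 * transaction structure valid after the lock is dropped,
+                 * even if the completion path releases its own reference
+                 * first; gsi_trans_free() only frees the transaction when
+                 * the refcount drops to zero.)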
+                 */
+                trans = list_last_entry(list, struct gsi_trans, links);
+                refcount_inc(&trans->refcount);
+        }
+
+        spin_unlock_irqrestore(&evt_ring->ring.spinlock, flags);
+
+        /* If there is one, wait for it to complete */
+        if (trans) {
+                wait_for_completion(&trans->completion);
+                gsi_trans_free(trans);
+        }
+}
+
+/* Make a channel operational */
+int gsi_channel_start(struct gsi *gsi, u32 channel_id)
+{
+        struct gsi_channel *channel = &gsi->channel[channel_id];
+
+        if (channel->state != GSI_CHANNEL_STATE_ALLOCATED &&
+            channel->state != GSI_CHANNEL_STATE_STOP_IN_PROC &&
+            channel->state != GSI_CHANNEL_STATE_STOPPED) {
+                dev_err(gsi->dev, "channel %u bad state %u\n", channel_id,
+                        (u32)channel->state);
+                return -ENOTSUPP;
+        }
+
+        napi_enable(&channel->napi);
+
+        mutex_lock(&gsi->mutex);
+
+        gsi_channel_command(channel, GSI_CH_START);
+
+        mutex_unlock(&gsi->mutex);
+
+        gsi->channel_stats.start++;
+
+        return 0;
+}
+
+/* Stop an operational channel */
+int gsi_channel_stop(struct gsi *gsi, u32 channel_id)
+{
+        struct gsi_channel *channel = &gsi->channel[channel_id];
+        int ret;
+
+        if (channel->state == GSI_CHANNEL_STATE_STOPPED)
+                return 0;
+
+        if (channel->state != GSI_CHANNEL_STATE_STARTED &&
+            channel->state != GSI_CHANNEL_STATE_STOP_IN_PROC &&
+            channel->state != GSI_CHANNEL_STATE_ERROR) {
+                dev_err(gsi->dev, "channel %u bad state %u\n", channel_id,
+                        (u32)channel->state);
+                return -ENOTSUPP;
+        }
+
+        gsi_channel_trans_quiesce(gsi, channel_id);
+
+        mutex_lock(&gsi->mutex);
+
+        gsi_channel_command(channel, GSI_CH_STOP);
+
+        mutex_unlock(&gsi->mutex);
+
+        if (channel->state == GSI_CHANNEL_STATE_STOPPED)
+                ret = 0;
+        else if (channel->state == GSI_CHANNEL_STATE_STOP_IN_PROC)
+                ret = -EAGAIN;
+        else
+                ret = -EIO;
+
+        gsi->channel_stats.stop++;
+
+        if (!ret)
+                napi_disable(&channel->napi);
+
+        return ret;
+}
+
+/* Reset a GSI channel */
+int gsi_channel_reset(struct gsi *gsi, u32 channel_id)
+{
+        struct gsi_channel *channel = &gsi->channel[channel_id];
+
+        if (channel->state != GSI_CHANNEL_STATE_STOPPED) {
+                dev_err(gsi->dev, "channel %u bad state %u\n", channel_id,
+                        (u32)channel->state);
+                return -ENOTSUPP;
+        }
+
+        /* In case the reset follows a stop, we need to wait 1 msec */
+        usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
+
+        mutex_lock(&gsi->mutex);
+
+        gsi_channel_command(channel, GSI_CH_RESET);
+
+        /* Workaround: reset RX channels a second time */
+        if (!channel->toward_ipa) {
+                usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
+                gsi_channel_command(channel, GSI_CH_RESET);
+        }
+
+        __gsi_channel_config(channel, true);
+
+        /* Cancel pending transactions before the channel is started again */
+        gsi_channel_trans_cancel_pending(channel);
+
+        mutex_unlock(&gsi->mutex);
+
+        gsi->channel_stats.reset++;
+
+        return 0;
+}

From patchwork Sun May 12 01:24:59 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163972
From: Alex Elder <elder@linaro.org>
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
        ilias.apalodimas@linaro.org
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
        benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
        linux-kernel@vger.kernel.org, Alex Elder <elder@linaro.org>
Subject: [PATCH 09/18] soc: qcom: ipa: GSI transactions
Date: Sat, 11 May 2019 20:24:59 -0500
Message-Id: <20190512012508.10608-10-elder@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>
MIME-Version: 1.0
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

This patch implements GSI transactions.  A GSI transaction is a
structure that represents a single request (consisting of one or more
TREs) sent to the GSI hardware.  The last TRE in a transaction includes
a flag requesting that the GSI interrupt the AP to notify it that the
transaction has completed.

TREs are executed and completed strictly in order.  For this reason,
the completion of a single TRE implies that all previous TREs (in
particular all of those "earlier" in a transaction) have completed.

Whenever there is a need to send a request (a set of TREs) to the IPA,
a GSI transaction is allocated, specifying the number of TREs that
will be required.  Details of the request (e.g. transfer offsets and
length) are represented in a Linux scatterlist array that is
incorporated in the transaction structure.

Once "filled," the transaction is committed.  The GSI transaction
layer performs all needed mapping (and unmapping) for DMA, and issues
the request to the hardware.  When the hardware signals that the
request has completed, a function in the IPA layer is called, allowing
for cleanup or followup activity to be performed before the
transaction is freed.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/gsi_trans.c | 604 ++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/gsi_trans.h | 106 +++++++
 2 files changed, 710 insertions(+)
 create mode 100644 drivers/net/ipa/gsi_trans.c
 create mode 100644 drivers/net/ipa/gsi_trans.h

-- 
2.20.1

diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c
new file mode 100644
index 000000000000..e35e8369d5d0
--- /dev/null
+++ b/drivers/net/ipa/gsi_trans.c
@@ -0,0 +1,604 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "gsi.h"
+#include "gsi_private.h"
+#include "gsi_trans.h"
+#include "ipa_gsi.h"
+#include "ipa_data.h"
+#include "ipa_cmd.h"
+
+/**
+ * DOC: GSI Transactions
+ *
+ * A GSI transaction abstracts the behavior of a GSI channel by representing
+ * everything about a related group of data transfers in a single structure.
+ * Most details of interaction with the GSI hardware are managed by the GSI + * transaction core, allowing users to simply describe transfers to be + * performed and optionally supply a callback function to run once the set + * of transfers has been completed. + * + * To perform a data transfer (or a related set of them), a user of the GSI + * transaction interface allocates a transaction, indicating the number of + * TREs required (one per data transfer). If sufficient TREs are available, + * they are reserved for use in the transaction and the allocation succeeds. + * This way exhaustion of the available TREs in a channel ring is detected + * as early as possible. All resources required to complete a transaction + * are allocated at transaction allocation time. + * + * Transfers performed as part of a transaction are represented in an array + * of Linux scatterlist structures. This array is allocated with the + * transaction, and its entries must be initialized using standard + * scatterlist functions (such as sg_init_one() or skb_to_sgvec()). The + * user must supply the total number of bytes represented by all transfers + * in the transaction. + * + * Once a transaction has been prepared, it is committed. The GSI transaction + * layer is responsible for DMA mapping (and unmapping) memory described in + * the transaction's scatterlist array. The only way committing a transaction + * fails is if this DMA mapping step returns an error. Otherwise, ownership + * of the entire transaction is transferred to the GSI transaction core. The + * GSI transaction code formats the content of the scatterlist array into the + * channel ring buffer and informs the hardware that new TREs are available + * to process. + * + * The last TRE in each transaction is marked to interrupt the AP when the + * GSI hardware has completed it. Because transfers described by TREs are + * performed strictly in order, signaling the completion of just the last + * TRE in the transaction is sufficient to indicate the full transaction + * is complete. + * + * When a transaction is complete, ipa_gsi_trans_complete() is called by the + * GSI code into the IPA layer, allowing it to perform any final cleanup + * required before the transaction is freed. + * + * It is possible to await the completion of a transaction; only immediate + * commands currently use this functionality. 
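+ *
+ * A typical single-buffer transfer might look like this (an editor's
+ * sketch; gsi_channel_trans_alloc() and sg_init_one() appear in this
+ * series, but the commit entry point and its name are assumptions,
+ * since committing is not shown in this excerpt):
+ *
+ *      trans = gsi_channel_trans_alloc(gsi, channel_id, 1);
+ *      if (!trans)
+ *              return -EBUSY;          (no TREs available)
+ *      sg_init_one(trans->sgl, buf, len);
+ *      gsi_trans_commit(trans);        (assumed commit entry point)
+ *
+ * When the hardware completes the TRE, the GSI layer unmaps the
+ * scatterlist and calls ipa_gsi_trans_complete() before the
+ * transaction is freed.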
+ */ + +/* gsi_tre->flags mask values (in CPU byte order) */ +#define GSI_TRE_FLAGS_CHAIN_FMASK GENMASK(0, 0) +#define GSI_TRE_FLAGS_IEOB_FMASK GENMASK(8, 8) +#define GSI_TRE_FLAGS_IEOT_FMASK GENMASK(9, 9) +#define GSI_TRE_FLAGS_BEI_FMASK GENMASK(10, 10) +#define GSI_TRE_FLAGS_TYPE_FMASK GENMASK(23, 16) + +/* Hardware values representing a transfer element type */ +enum gsi_tre_type { + GSI_RE_XFER = 0x2, + GSI_RE_IMMD_CMD = 0x3, + GSI_RE_NOP = 0x4, +}; + +/* Map a given ring entry index to the transaction associated with it */ +static void gsi_channel_trans_map(struct gsi_channel *channel, u32 index, + struct gsi_trans *trans) +{ + channel->trans_info.map[index] = trans; +} + +/* Return the transaction mapped to a given ring entry */ +struct gsi_trans * +gsi_channel_trans_mapped(struct gsi_channel *channel, u32 index) +{ + return channel->trans_info.map[index]; +} + +/* Return the oldest completed transaction for a channel (or null) */ +struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel) +{ + return list_first_entry_or_null(&channel->trans_info.complete, + struct gsi_trans, links); +} + +/* Move a transaction from the allocated list to the pending list */ +static void gsi_trans_move_pending(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + unsigned long flags; + + spin_lock_irqsave(&trans_info->spinlock, flags); + + list_move_tail(&trans->links, &trans_info->pending); + + spin_unlock_irqrestore(&trans_info->spinlock, flags); +} + +/* Move a transaction and all of its predecessors from the pending list + * to the completed list. + */ +void gsi_trans_move_complete(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info; + struct list_head list; + unsigned long flags; + + trans_info = &channel->trans_info; + + spin_lock_irqsave(&trans_info->spinlock, flags); + + /* Move this transaction and all predecessors to completed list */ + list_cut_position(&list, &trans_info->pending, &trans->links); + list_splice_tail(&list, &trans_info->complete); + + spin_unlock_irqrestore(&trans_info->spinlock, flags); +} + +/* Move a transaction from the completed list to the polled list */ +void gsi_trans_move_polled(struct gsi_trans *trans) +{ + struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + unsigned long flags; + + spin_lock_irqsave(&trans_info->spinlock, flags); + + list_move_tail(&trans->links, &trans_info->polled); + + spin_unlock_irqrestore(&trans_info->spinlock, flags); +} + +/* Allocate a GSI transaction on a channel */ +struct gsi_trans * +gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id, u32 tre_count) +{ + struct gsi_channel *channel = &gsi->channel[channel_id]; + struct gsi_trans_info *trans_info = &channel->trans_info; + struct gsi_trans *trans = NULL; + unsigned long flags; + + /* Caller should know the limit is gsi_channel_trans_max() */ + if (WARN_ON(tre_count > channel->data->tlv_count)) + return NULL; + + spin_lock_irqsave(&trans_info->spinlock, flags); + + if (trans_info->tre_avail >= tre_count) { + u32 avail; + + /* Allocate the transaction */ + if (trans_info->pool_free == trans_info->pool_count) + trans_info->pool_free = 0; + trans = &trans_info->pool[trans_info->pool_free++]; + + /* Allocate the scatter/gather entries it will use. 
If + * what's needed would cross the end-of-pool boundary, + * allocate them from the beginning. + */ + avail = trans_info->sg_pool_count - trans_info->sg_pool_free; + if (tre_count > avail) + trans_info->sg_pool_free = 0; + trans->sgl = &trans_info->sg_pool[trans_info->sg_pool_free]; + trans->sgc = tre_count; + trans_info->sg_pool_free += tre_count; + + /* We reserve the TREs now, but consume them at commit time */ + trans_info->tre_avail -= tre_count; + + list_add_tail(&trans->links, &trans_info->alloc); + } + + spin_unlock_irqrestore(&trans_info->spinlock, flags); + + if (!trans) + return NULL; + + trans->gsi = gsi; + trans->channel_id = channel_id; + refcount_set(&trans->refcount, 1); + trans->tre_count = tre_count; + init_completion(&trans->completion); + + /* We're reusing, so make sure all fields are reinitialized */ + trans->dev = gsi->dev; + trans->result = 0; /* Success assumed unless overwritten */ + trans->data = NULL; + + return trans; +} + +/* Free a previously-allocated transaction (used only in case of error) */ +void gsi_trans_free(struct gsi_trans *trans) +{ + struct gsi_trans_info *trans_info; + struct gsi_channel *channel; + unsigned long flags; + + if (!refcount_dec_and_test(&trans->refcount)) + return; + + channel = &trans->gsi->channel[trans->channel_id]; + trans_info = &channel->trans_info; + + spin_lock_irqsave(&trans_info->spinlock, flags); + + list_del(&trans->links); + trans_info->tre_avail += trans->tre_count; + + spin_unlock_irqrestore(&trans_info->spinlock, flags); +} + +/* Compute the length/opcode value to use for a TRE */ +static __le16 gsi_tre_len_opcode(enum ipa_cmd_opcode opcode, u32 len) +{ + return opcode == IPA_CMD_NONE ? cpu_to_le16((u16)len) + : cpu_to_le16((u16)opcode); +} + +/* Compute the flags value to use for a given TRE */ +static __le32 gsi_tre_flags(bool last_tre, bool bei, enum ipa_cmd_opcode opcode) +{ + enum gsi_tre_type tre_type; + u32 tre_flags; + + tre_type = opcode == IPA_CMD_NONE ? 
GSI_RE_XFER : GSI_RE_IMMD_CMD;
+	tre_flags = u32_encode_bits(tre_type, GSI_TRE_FLAGS_TYPE_FMASK);
+
+	/* Last TRE contains interrupt flags */
+	if (last_tre) {
+		/* All transactions end in a transfer completion interrupt */
+		tre_flags |= GSI_TRE_FLAGS_IEOT_FMASK;
+		/* Don't interrupt when outbound commands are acknowledged */
+		if (bei)
+			tre_flags |= GSI_TRE_FLAGS_BEI_FMASK;
+	} else {	/* All others indicate there's more to come */
+		tre_flags |= GSI_TRE_FLAGS_CHAIN_FMASK;
+	}
+
+	return cpu_to_le32(tre_flags);
+}
+
+static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
+			       u32 len, bool last_tre, bool bei,
+			       enum ipa_cmd_opcode opcode)
+{
+	struct gsi_tre tre;
+
+	tre.addr = cpu_to_le64(addr);
+	tre.len_opcode = gsi_tre_len_opcode(opcode, len);
+	tre.reserved = 0;
+	tre.flags = gsi_tre_flags(last_tre, bei, opcode);
+
+	*dest_tre = tre;	/* Write TRE as a single (16-byte) unit */
+}
+
+int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct gsi_trans_info *trans_info;
+	struct gsi_evt_ring *evt_ring;
+	struct gsi_tre *dest_tre;
+	bool exhausted = false;
+	unsigned long flags;
+	u32 wp_local;
+
+	/* assert(!channel->toward_ipa); */
+
+	trans_info = &channel->trans_info;
+	spin_lock_irqsave(&trans_info->spinlock, flags);
+
+	if (trans_info->tre_avail)
+		trans_info->tre_avail--;
+	else
+		exhausted = true;
+
+	spin_unlock_irqrestore(&trans_info->spinlock, flags);
+
+	if (exhausted)
+		return -EBUSY;
+
+	evt_ring = &gsi->evt_ring[channel->evt_ring_id];
+	spin_lock_irqsave(&evt_ring->ring.spinlock, flags);
+
+	wp_local = channel->tre_ring.wp_local;
+	if (wp_local == channel->tre_ring.end)
+		wp_local = channel->tre_ring.base;
+	dest_tre = gsi_ring_virt(&channel->tre_ring, wp_local);
+
+	gsi_trans_tre_fill(dest_tre, addr, 1, true, false, IPA_CMD_NONE);
+
+	gsi_ring_wp_local_add(&channel->tre_ring, 1);
+
+	gsi_channel_doorbell(channel);
+
+	spin_unlock_irqrestore(&evt_ring->ring.spinlock, flags);
+
+	return 0;
+}
+
+/**
+ * __gsi_trans_commit() - Common GSI transaction commit code
+ * @trans: Transaction to commit
+ * @opcode: Immediate command opcode, or IPA_CMD_NONE
+ * @ring_db: Whether to tell the hardware about these queued transfers
+ *
+ * @Return: 0 if successful, or a negative error code
+ *
+ * Maps the transaction's scatterlist array for DMA, and returns -ENOMEM
+ * if that fails.  Formats channel ring TRE entries based on the content of
+ * the scatterlist.  Records the transaction pointer in the map entry for
+ * the last ring entry used for the transaction so it can be recovered when
+ * it completes, and moves the transaction to the pending list.  Updates the
+ * channel ring pointer and optionally rings the doorbell.
+ */
+static int __gsi_trans_commit(struct gsi_trans *trans,
+			      enum ipa_cmd_opcode opcode, bool ring_db)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	enum dma_data_direction direction;
+	bool bei = channel->toward_ipa;
+	struct gsi_evt_ring *evt_ring;
+	struct scatterlist *sg;
+	struct gsi_tre *dest_tre;
+	u32 queued_trans_count;
+	u32 queued_byte_count;
+	unsigned long flags;
+	u32 byte_count = 0;
+	u32 wp_local;
+	u32 index;
+	u32 avail;
+	int ret;
+	u32 i;
+
+	direction = channel->toward_ipa ?
DMA_TO_DEVICE : DMA_FROM_DEVICE; + ret = dma_map_sg(trans->dev, trans->sgl, trans->sgc, direction); + if (!ret) + return -ENOMEM; + + evt_ring = &channel->gsi->evt_ring[channel->evt_ring_id]; + + spin_lock_irqsave(&evt_ring->ring.spinlock, flags); + + /* We'll consume the entries available at the end of the ring, + * switching to the beginning to finish if necessary. + */ + wp_local = channel->tre_ring.wp_local; + dest_tre = gsi_ring_virt(&channel->tre_ring, wp_local); + + avail = (channel->tre_ring.end - wp_local) / sizeof(*dest_tre); + + for_each_sg(trans->sgl, sg, trans->sgc, i) { + bool last_tre = i == trans->tre_count - 1; + dma_addr_t addr = sg_dma_address(sg); + u32 len = sg_dma_len(sg); + + byte_count += len; + if (!avail--) + dest_tre = gsi_ring_virt(&channel->tre_ring, + channel->tre_ring.base); + + gsi_trans_tre_fill(dest_tre, addr, len, last_tre, bei, opcode); + dest_tre++; + } + + if (channel->toward_ipa) { + /* We record TX bytes when they are sent */ + trans->len = byte_count; + trans->trans_count = channel->trans_count; + trans->byte_count = channel->byte_count; + channel->trans_count++; + channel->byte_count += byte_count; + } + + /* Advance the write pointer; record info for last used element */ + gsi_ring_wp_local_add(&channel->tre_ring, trans->tre_count - 1); + index = ring_index(&channel->tre_ring, channel->tre_ring.wp_local); + gsi_channel_trans_map(channel, index, trans); + gsi_ring_wp_local_add(&channel->tre_ring, 1); + + gsi_trans_move_pending(trans); + + /* Ring doorbell if requested, or if all TREs are allocated */ + if (ring_db || !channel->trans_info.tre_avail) { + /* Report what we're handing off to hardware for TX channels */ + if (channel->toward_ipa) { + queued_trans_count = channel->trans_count - + channel->doorbell_trans_count; + queued_byte_count = channel->byte_count - + channel->doorbell_byte_count; + channel->doorbell_trans_count = channel->trans_count; + channel->doorbell_byte_count = channel->byte_count; + + ipa_gsi_channel_tx_queued(trans->gsi, + gsi_channel_id(channel), + queued_trans_count, + queued_byte_count); + } + + gsi_channel_doorbell(channel); + } + + spin_unlock_irqrestore(&evt_ring->ring.spinlock, flags); + + return 0; +} + +/* Commit a GSI transaction */ +int gsi_trans_commit(struct gsi_trans *trans, bool ring_db) +{ + return __gsi_trans_commit(trans, IPA_CMD_NONE, ring_db); +} + +/* Commit a GSI command transaction and wait for it to complete */ +int gsi_trans_commit_command(struct gsi_trans *trans, + enum ipa_cmd_opcode opcode) +{ + int ret; + + refcount_inc(&trans->refcount); + + ret = __gsi_trans_commit(trans, opcode, true); + if (ret) + goto out_free_trans; + + wait_for_completion(&trans->completion); + +out_free_trans: + gsi_trans_free(trans); + + return ret; +} + +/* Commit a GSI command transaction, wait for it to complete, with timeout */ +int gsi_trans_commit_command_timeout(struct gsi_trans *trans, + enum ipa_cmd_opcode opcode, + unsigned long timeout) +{ + unsigned long timeout_jiffies = msecs_to_jiffies(timeout); + unsigned long remaining; + int ret; + + refcount_inc(&trans->refcount); + + ret = __gsi_trans_commit(trans, opcode, true); + if (ret) + goto out_free_trans; + + remaining = wait_for_completion_timeout(&trans->completion, + timeout_jiffies); +out_free_trans: + gsi_trans_free(trans); + + return ret ? ret : remaining ? 
0 : -ETIMEDOUT;
+}
+
+/* Complete a transaction: unmap its DMA, notify the IPA layer, signal
+ * anyone awaiting its completion, and drop the allocation reference
+ */
+void gsi_trans_complete(struct gsi_trans *trans)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	enum dma_data_direction direction;
+
+	direction = channel->toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+
+	dma_unmap_sg(trans->dev, trans->sgl, trans->sgc, direction);
+
+	ipa_gsi_trans_complete(trans);
+
+	complete(&trans->completion);
+
+	gsi_trans_free(trans);
+}
+
+/* Cancel a channel's pending transactions */
+void gsi_channel_trans_cancel_pending(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	u32 evt_ring_id = channel->evt_ring_id;
+	struct gsi *gsi = channel->gsi;
+	struct gsi_evt_ring *evt_ring;
+	struct gsi_trans *trans;
+	unsigned long flags;
+
+	evt_ring = &gsi->evt_ring[evt_ring_id];
+
+	spin_lock_irqsave(&evt_ring->ring.spinlock, flags);
+
+	list_for_each_entry(trans, &trans_info->pending, links)
+		trans->result = -ECANCELED;
+
+	list_splice_tail_init(&trans_info->pending, &trans_info->complete);
+
+	spin_unlock_irqrestore(&evt_ring->ring.spinlock, flags);
+
+	spin_lock_irqsave(&gsi->spinlock, flags);
+
+	if (gsi->event_enable_bitmap & BIT(evt_ring_id))
+		gsi_event_handle(gsi, evt_ring_id);
+
+	spin_unlock_irqrestore(&gsi->spinlock, flags);
+}
+
+/* Initialize a channel's GSI transaction info */
+int gsi_channel_trans_init(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	u32 tre_count = channel->data->tre_count;
+
+	trans_info->map = kcalloc(tre_count, sizeof(*trans_info->map),
+				  GFP_KERNEL);
+	if (!trans_info->map)
+		return -ENOMEM;
+
+	/* We will never need more transactions than there are TRE
+	 * entries in the transfer ring.  For that reason, we can
+	 * preallocate an array of (at least) that many transactions,
+	 * and use a single free index to determine the next one
+	 * available for allocation.
+	 */
+	trans_info->pool_count = tre_count;
+	trans_info->pool = kcalloc(trans_info->pool_count,
+				   sizeof(*trans_info->pool), GFP_KERNEL);
+	if (!trans_info->pool)
+		goto err_free_map;
+	/* If we get extra memory from the allocator, use it */
+	trans_info->pool_count =
+		ksize(trans_info->pool) / sizeof(*trans_info->pool);
+	trans_info->pool_free = 0;
+
+	/* While transactions are allocated one at a time, a transaction
+	 * can have multiple TREs.  The number of TRE entries in a single
+	 * transaction is limited by the number of TLV FIFO entries the
+	 * channel has.  We reserve TREs when a transaction is allocated,
+	 * but we don't actually use them until the transaction is
+	 * committed.
+	 *
+	 * A transaction uses a scatterlist array to represent the data
+	 * transfers implemented by the transaction.  Each scatterlist
+	 * element is used to fill a single TRE when the transaction is
+	 * committed.  As a result, we need the same number of scatterlist
+	 * elements as there are TREs in the transfer ring, and we can
+	 * preallocate them in a pool.
+	 *
+	 * If we allocate a few (tlv_count - 1) extra entries in our pool
+	 * we can always satisfy requests without ever worrying about
+	 * straddling the end of the array.  If there aren't enough
+	 * entries starting at the free index, we just allocate free
+	 * entries from the beginning of the pool.
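+	 *
+	 * (Editor's illustration, not from the patch: with tre_count = 256
+	 * and tlv_count = 8, the pool holds 256 + 8 - 1 = 263 entries.
+	 * A single transaction needs at most 8 consecutive entries, so
+	 * whenever fewer than 8 remain before the end of the array the
+	 * allocator can wrap to the beginning without ever splitting one
+	 * transaction's entries across the boundary.)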
+	 */
+	trans_info->sg_pool_count = tre_count + channel->data->tlv_count - 1;
+	trans_info->sg_pool = kcalloc(trans_info->sg_pool_count,
+				      sizeof(*trans_info->sg_pool), GFP_KERNEL);
+	if (!trans_info->sg_pool)
+		goto err_free_pool;
+	/* Use any extra memory we get from the allocator */
+	trans_info->sg_pool_count =
+		ksize(trans_info->sg_pool) / sizeof(*trans_info->sg_pool);
+	trans_info->sg_pool_free = 0;
+
+	spin_lock_init(&trans_info->spinlock);
+	trans_info->tre_avail = tre_count;	/* maximum active */
+	INIT_LIST_HEAD(&trans_info->alloc);
+	INIT_LIST_HEAD(&trans_info->pending);
+	INIT_LIST_HEAD(&trans_info->complete);
+	INIT_LIST_HEAD(&trans_info->polled);
+
+	return 0;

+err_free_pool:
+	kfree(trans_info->pool);
+err_free_map:
+	kfree(trans_info->map);
+
+	return -ENOMEM;
+}
+
+/* Inverse of gsi_channel_trans_init() */
+void gsi_channel_trans_exit(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+
+	kfree(trans_info->sg_pool);
+	kfree(trans_info->pool);
+	kfree(trans_info->map);
+}
diff --git a/drivers/net/ipa/gsi_trans.h b/drivers/net/ipa/gsi_trans.h
new file mode 100644
index 000000000000..160902b077a7
--- /dev/null
+++ b/drivers/net/ipa/gsi_trans.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _GSI_TRANS_H_
+#define _GSI_TRANS_H_
+
+#include
+#include
+#include
+
+struct scatterlist;
+struct device;
+
+struct gsi;
+struct gsi_trans;
+enum ipa_cmd_opcode;
+
+struct gsi_trans {
+	struct list_head links;		/* gsi_channel lists */
+
+	struct gsi *gsi;
+	u32 channel_id;
+
+	u32 tre_count;			/* # TREs requested */
+	u32 len;			/* total # bytes in sgl */
+	struct scatterlist *sgl;
+	u32 sgc;			/* # entries in sgl[] */
+
+	struct completion completion;
+	refcount_t refcount;
+
+	/* fields above are internal only */
+
+	struct device *dev;		/* Use this for DMA mapping */
+	long result;			/* RX count, 0, or error code */
+
+	u64 byte_count;			/* channel byte_count when committed */
+	u64 trans_count;		/* channel trans_count when committed */
+
+	void *data;
+};
+
+/**
+ * gsi_channel_trans_alloc() - Allocate a GSI transaction on a channel
+ * @gsi: GSI pointer
+ * @channel_id: Channel the transaction is associated with
+ * @tre_count: Number of elements in the transaction
+ *
+ * @Return: A GSI transaction structure, or a null pointer if all
+ *	    available transactions are in use
+ */
+struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
+					  u32 tre_count);
+
+/**
+ * gsi_trans_free() - Free a previously-allocated GSI transaction
+ * @trans: Transaction to be freed
+ *
+ * Note: this should only be used in error paths, before the transaction is
+ * committed or in the event committing the transaction produces an error.
+ * Successfully committing a transaction passes ownership of the structure
+ * to the core transaction code.
+ */
+void gsi_trans_free(struct gsi_trans *trans);
+
+/**
+ * gsi_trans_commit() - Commit a GSI transaction
+ * @trans: Transaction to commit
+ * @ring_db: Whether to tell the hardware about these queued transfers
+ */
+int gsi_trans_commit(struct gsi_trans *trans, bool ring_db);
+
+/**
+ * gsi_trans_commit_command() - Commit a GSI command transaction and wait
+ *				for it to complete
+ * @trans: Transaction to commit
+ * @opcode: Immediate command opcode
+ */
+int gsi_trans_commit_command(struct gsi_trans *trans,
+			     enum ipa_cmd_opcode opcode);
+
+/**
+ * gsi_trans_commit_command_timeout() - Commit a GSI command transaction,
+ *					wait for it to complete, with timeout
+ * @trans: Transaction to commit
+ * @opcode: Immediate command opcode
+ * @timeout: Timeout period (in milliseconds)
+ */
+int gsi_trans_commit_command_timeout(struct gsi_trans *trans,
+				     enum ipa_cmd_opcode opcode,
+				     unsigned long timeout);
+
+/**
+ * gsi_trans_read_byte() - Issue a single byte read TRE on a channel
+ * @gsi: GSI pointer
+ * @channel_id: Channel on which to read a byte
+ * @addr: DMA address into which to transfer the one byte
+ *
+ * This is not a transaction operation at all.  It's defined here because
+ * it needs to be done in coordination with other transaction activity.
+ */
+int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr);
+
+#endif /* _GSI_TRANS_H_ */
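[Editor's illustration -- not part of the patch.  A minimal sketch of how
an immediate command might use the interface declared above; "trans" and
"opcode" stand for a previously allocated command transaction and a valid
immediate command opcode:]

	int ret;

	/* Commit the command; wait at most 50 ms for its completion */
	ret = gsi_trans_commit_command_timeout(trans, opcode, 50);
	if (ret == -ETIMEDOUT)
		dev_err(trans->dev, "IPA immediate command timed out\n");

Both command variants take a reference on the transaction before
committing it, so the structure remains valid until the completion (or
timeout) is observed and that reference is dropped.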
From patchwork Sun May 12 01:25:00 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163974
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org, benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com, linux-kernel@vger.kernel.org, Alex Elder
Subject: [PATCH 10/18] soc: qcom: ipa: IPA interface to GSI
Date: Sat, 11 May 2019 20:25:00 -0500
Message-Id: <20190512012508.10608-11-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

This patch provides interface functions supplied by the IPA layer
that are called from the GSI layer.  One function is called when a
GSI transaction has completed.  The others allow the GSI layer to
inform the IPA layer when transactions have been supplied to
hardware, and when the hardware has indicated transactions have
completed.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/ipa_gsi.c | 48 ++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_gsi.h | 49 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 97 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_gsi.c
 create mode 100644 drivers/net/ipa/ipa_gsi.h

--
2.20.1

diff --git a/drivers/net/ipa/ipa_gsi.c b/drivers/net/ipa/ipa_gsi.c
new file mode 100644
index 000000000000..c4e6c96d1676
--- /dev/null
+++ b/drivers/net/ipa/ipa_gsi.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+
+#include "gsi_trans.h"
+#include "ipa.h"
+#include "ipa_endpoint.h"
+
+void ipa_gsi_trans_complete(struct gsi_trans *trans)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa_endpoint *endpoint;
+
+	endpoint = ipa->endpoint_map[trans->channel_id];
+	if (endpoint == ipa->command_endpoint)
+		return;		/* Nothing to do for commands */
+
+	if (endpoint->toward_ipa)
+		ipa_endpoint_skb_tx_callback(trans);
+	else
+		ipa_endpoint_rx_callback(trans);
+}
+
+void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
+			       u32 byte_count)
+{
+	struct ipa *ipa = container_of(gsi, struct ipa, gsi);
+	struct ipa_endpoint *endpoint;
+
+	endpoint = ipa->endpoint_map[channel_id];
+	if (endpoint->netdev)
+		netdev_sent_queue(endpoint->netdev, byte_count);
+}
+
+void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count,
+				  u32 byte_count)
+{
+	struct ipa *ipa = container_of(gsi, struct ipa, gsi);
+	struct ipa_endpoint *endpoint;
+
+	endpoint = ipa->endpoint_map[channel_id];
+	if (endpoint->netdev)
+		netdev_completed_queue(endpoint->netdev, count, byte_count);
+}
diff --git a/drivers/net/ipa/ipa_gsi.h b/drivers/net/ipa/ipa_gsi.h
new file mode 100644
index 000000000000..72adb520da40
--- /dev/null
+++ b/drivers/net/ipa/ipa_gsi.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _IPA_GSI_TRANS_H_
+#define _IPA_GSI_TRANS_H_
+
+#include
+
+struct gsi;
+struct gsi_trans;
+
+/**
+ * ipa_gsi_trans_complete() - GSI transaction completion callback
+ * @trans: Transaction that has completed
+ *
+ * This is called from the GSI layer to notify the IPA layer that a
+ * transaction has completed.
+ */
+void ipa_gsi_trans_complete(struct gsi_trans *trans);
+
+/**
+ * ipa_gsi_channel_tx_queued() - GSI queued to hardware notification
+ * @gsi: GSI pointer
+ * @channel_id: Channel number
+ * @count: Number of transactions queued
+ * @byte_count: Number of bytes to transfer represented by transactions
+ *
+ * This is called from the GSI layer to notify the IPA layer that some
+ * number of transactions have been queued to hardware for execution.
+ */
+void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
+			       u32 byte_count);
+
+/**
+ * ipa_gsi_channel_tx_completed() - GSI transaction completion callback
+ * @gsi: GSI pointer
+ * @channel_id: Channel number
+ * @count: Number of transactions completed since last report
+ * @byte_count: Number of bytes transferred represented by transactions
+ *
+ * This is called from the GSI layer to notify the IPA layer that the
+ * hardware has reported the completion of some number of transactions.
+ */
+void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count,
+				  u32 byte_count);
+
+#endif /* _IPA_GSI_TRANS_H_ */
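[Editor's note -- not part of the patch.  The two TX notifications above
drive the kernel's byte queue limits (BQL) accounting: bytes reported
when TREs are handed to hardware must eventually be matched by bytes
reported complete, or BQL throttles the transmit queue.  Schematically,
with illustrative variable names:]

	/* When transfers are handed to hardware (tx_queued)... */
	netdev_sent_queue(ndev, queued_bytes);

	/* ...and when hardware reports them done (tx_completed) */
	netdev_completed_queue(ndev, completed_packets, completed_bytes);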
From patchwork Sun May 12 01:25:01 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163982
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org, benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com, linux-kernel@vger.kernel.org, Alex Elder
Subject: [PATCH 11/18] soc: qcom: ipa: IPA endpoints
Date: Sat, 11 May 2019 20:25:01 -0500
Message-Id: <20190512012508.10608-12-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

This patch includes the code implementing an IPA endpoint.  This is
the primary abstraction implemented by the IPA.  An endpoint is one
end of a network connection between two entities physically
connected to the IPA.  Specifically, the AP and the modem implement
endpoints, and an (AP endpoint, modem endpoint) pair implements the
transfer of network data in one direction between the AP and modem.

Endpoints are built on top of GSI channels, but IPA endpoints
represent the higher-level functionality that the IPA provides.
Data can be sent through a GSI channel, but it is the IPA endpoint
that represents what is on the "other end" to receive that data.
Other functionality, including aggregation, checksum offload, and
(at some future date) IP routing and filtering, is also associated
with the IPA endpoint.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/ipa_endpoint.c | 1253 ++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_endpoint.h |   96 +++
 2 files changed, 1349 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_endpoint.c
 create mode 100644 drivers/net/ipa/ipa_endpoint.h

--
2.20.1

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
new file mode 100644
index 000000000000..5dc10abc6480
--- /dev/null
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -0,0 +1,1253 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */ + +#include +#include +#include +#include +#include + +#include "gsi.h" +#include "gsi_trans.h" +#include "ipa.h" +#include "ipa_data.h" +#include "ipa_endpoint.h" +#include "ipa_cmd.h" +#include "ipa_mem.h" +#include "ipa_netdev.h" + +#define atomic_dec_not_zero(v) atomic_add_unless((v), -1, 0) + +#define IPA_REPLENISH_BATCH 16 + +#define IPA_RX_BUFFER_SIZE (PAGE_SIZE << IPA_RX_BUFFER_ORDER) +#define IPA_RX_BUFFER_ORDER 1 /* 8KB endpoint RX buffers (2 pages) */ + +/* The amount of RX buffer space consumed by standard skb overhead */ +#define IPA_RX_BUFFER_OVERHEAD (PAGE_SIZE - SKB_MAX_ORDER(NET_SKB_PAD, 0)) + +#define IPA_ENDPOINT_STOP_RETRY_MAX 10 +#define IPA_ENDPOINT_STOP_RX_SIZE 1 /* bytes */ + +#define IPA_ENDPOINT_RESET_AGGR_RETRY_MAX 3 +#define IPA_AGGR_TIME_LIMIT_DEFAULT 1 /* milliseconds */ + +/** enum ipa_status_opcode - status element opcode hardware values */ +enum ipa_status_opcode { + IPA_STATUS_OPCODE_PACKET = 0x01, + IPA_STATUS_OPCODE_NEW_FRAG_RULE = 0x02, + IPA_STATUS_OPCODE_DROPPED_PACKET = 0x04, + IPA_STATUS_OPCODE_SUSPENDED_PACKET = 0x08, + IPA_STATUS_OPCODE_LOG = 0x10, + IPA_STATUS_OPCODE_DCMP = 0x20, + IPA_STATUS_OPCODE_PACKET_2ND_PASS = 0x40, +}; + +/** enum ipa_status_exception - status element exception type */ +enum ipa_status_exception { + IPA_STATUS_EXCEPTION_NONE, + IPA_STATUS_EXCEPTION_DEAGGR, + IPA_STATUS_EXCEPTION_IPTYPE, + IPA_STATUS_EXCEPTION_PACKET_LENGTH, + IPA_STATUS_EXCEPTION_PACKET_THRESHOLD, + IPA_STATUS_EXCEPTION_FRAG_RULE_MISS, + IPA_STATUS_EXCEPTION_SW_FILT, + IPA_STATUS_EXCEPTION_NAT, + IPA_STATUS_EXCEPTION_IPV6CT, + IPA_STATUS_EXCEPTION_MAX, +}; + +/** + * struct ipa_status - Abstracted IPA status element + * @opcode: Status element type + * @exception: The first exception that took place + * @pkt_len: Payload length + * @dst_endpoint: Destination endpoint + * @metadata: 32-bit metadata value used by packet + * @rt_miss: Flag; if 1, indicates there was a routing rule miss + * + * Note that the hardware status element supplies additional information + * that is currently unused. 
+ */ +struct ipa_status { + enum ipa_status_opcode opcode; + enum ipa_status_exception exception; + u32 pkt_len; + u32 dst_endpoint; + u32 metadata; + u32 rt_miss; +}; + +/* Field masks for struct ipa_status_raw structure fields */ + +#define IPA_STATUS_SRC_IDX_FMASK GENMASK(4, 0) + +#define IPA_STATUS_DST_IDX_FMASK GENMASK(4, 0) + +#define IPA_STATUS_FLAGS1_FLT_LOCAL_FMASK GENMASK(0, 0) +#define IPA_STATUS_FLAGS1_FLT_HASH_FMASK GENMASK(1, 1) +#define IPA_STATUS_FLAGS1_FLT_GLOBAL_FMASK GENMASK(2, 2) +#define IPA_STATUS_FLAGS1_FLT_RET_HDR_FMASK GENMASK(3, 3) +#define IPA_STATUS_FLAGS1_FLT_RULE_ID_FMASK GENMASK(13, 4) +#define IPA_STATUS_FLAGS1_RT_LOCAL_FMASK GENMASK(14, 14) +#define IPA_STATUS_FLAGS1_RT_HASH_FMASK GENMASK(15, 15) +#define IPA_STATUS_FLAGS1_UCP_FMASK GENMASK(16, 16) +#define IPA_STATUS_FLAGS1_RT_TBL_IDX_FMASK GENMASK(21, 17) +#define IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK GENMASK(31, 22) + +#define IPA_STATUS_FLAGS2_NAT_HIT_FMASK GENMASK_ULL(0, 0) +#define IPA_STATUS_FLAGS2_NAT_ENTRY_IDX_FMASK GENMASK_ULL(13, 1) +#define IPA_STATUS_FLAGS2_NAT_TYPE_FMASK GENMASK_ULL(15, 14) +#define IPA_STATUS_FLAGS2_TAG_INFO_FMASK GENMASK_ULL(63, 16) + +#define IPA_STATUS_FLAGS3_SEQ_NUM_FMASK GENMASK(7, 0) +#define IPA_STATUS_FLAGS3_TOD_CTR_FMASK GENMASK(31, 8) + +#define IPA_STATUS_FLAGS4_HDR_LOCAL_FMASK GENMASK(0, 0) +#define IPA_STATUS_FLAGS4_HDR_OFFSET_FMASK GENMASK(10, 1) +#define IPA_STATUS_FLAGS4_FRAG_HIT_FMASK GENMASK(11, 11) +#define IPA_STATUS_FLAGS4_FRAG_RULE_FMASK GENMASK(15, 12) +#define IPA_STATUS_FLAGS4_HW_SPECIFIC_FMASK GENMASK(31, 16) + +/* Status element provided by hardware */ +struct ipa_status_raw { + u8 opcode; + u8 exception; + u16 mask; + u16 pkt_len; + u8 endp_src_idx; /* Only bottom 5 bits valid */ + u8 endp_dst_idx; /* Only bottom 5 bits valid */ + u32 metadata; + u32 flags1; + u64 flags2; + u32 flags3; + u32 flags4; +}; + +static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint); + +/* suspend_delay represents suspend for RX, delay for TX endpoints */ +bool ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay) +{ + u32 offset = IPA_REG_ENDP_INIT_CTRL_N_OFFSET(endpoint->endpoint_id); + u32 mask; + u32 val; + + mask = endpoint->toward_ipa ? 
ENDP_DELAY_FMASK : ENDP_SUSPEND_FMASK; + + val = ioread32(endpoint->ipa->reg_virt + offset); + if (suspend_delay == !!(val & mask)) + return false; /* Already set to desired state */ + + val ^= mask; + iowrite32(val, endpoint->ipa->reg_virt + offset); + + return true; +} + +static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_CFG_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + /* FRAG_OFFLOAD_EN is 0 */ + if (endpoint->data->config.checksum) { + if (endpoint->toward_ipa) { + u32 checksum_offset; + + val |= u32_encode_bits(IPA_CS_OFFLOAD_UL, + CS_OFFLOAD_EN_FMASK); + /* Checksum header offset is in 4-byte units */ + checksum_offset = sizeof(struct rmnet_map_header); + checksum_offset /= sizeof(u32); + val |= u32_encode_bits(checksum_offset, + CS_METADATA_HDR_OFFSET_FMASK); + } else { + val |= u32_encode_bits(IPA_CS_OFFLOAD_DL, + CS_OFFLOAD_EN_FMASK); + } + } else { + val |= u32_encode_bits(IPA_CS_OFFLOAD_NONE, + CS_OFFLOAD_EN_FMASK); + } + /* CS_GEN_QMB_MASTER_SEL is 0 */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_HDR_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + if (endpoint->data->config.qmap) { + size_t header_size = sizeof(struct rmnet_map_header); + + if (endpoint->toward_ipa && endpoint->data->config.checksum) + header_size += sizeof(struct rmnet_map_ul_csum_header); + + val |= u32_encode_bits(header_size, HDR_LEN_FMASK); + /* metadata is the 4 byte rmnet_map header itself */ + val |= HDR_OFST_METADATA_VALID_FMASK; + val |= u32_encode_bits(0, HDR_OFST_METADATA_FMASK); + /* HDR_ADDITIONAL_CONST_LEN is 0; (IPA->AP only) */ + if (!endpoint->toward_ipa) { + u32 size_offset = offsetof(struct rmnet_map_header, + pkt_len); + + val |= HDR_OFST_PKT_SIZE_VALID_FMASK; + val |= u32_encode_bits(size_offset, + HDR_OFST_PKT_SIZE_FMASK); + } + /* HDR_A5_MUX is 0 */ + /* HDR_LEN_INC_DEAGG_HDR is 0 */ + /* HDR_METADATA_REG_VALID is 0; (AP->IPA only) */ + } + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(endpoint->endpoint_id); + u32 pad_align = endpoint->data->config.rx.pad_align; + u32 val = 0; + + val |= HDR_ENDIANNESS_FMASK; /* big endian */ + val |= HDR_TOTAL_LEN_OR_PAD_VALID_FMASK; + /* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */ + /* HDR_PAYLOAD_LEN_INC_PADDING is 0 */ + /* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */ + if (!endpoint->toward_ipa) + val |= u32_encode_bits(pad_align, HDR_PAD_TO_ALIGNMENT_FMASK); + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +/** + * Generate a metadata mask value that will select only the mux_id + * field in an rmnet_map header structure. The mux_id is at offset + * 1 byte from the beginning of the structure, but the metadata + * value is treated as a 4-byte unit. So this mask must be computed + * with endianness in mind. Note that ipa_endpoint_init_hdr_metadata_mask() + * will convert this value to the proper byte order. + * + * Marked __always_inline because this is really computing a + * constant value. 
+ */ +static __always_inline __be32 ipa_rmnet_mux_id_metadata_mask(void) +{ + size_t mux_id_offset = offsetof(struct rmnet_map_header, mux_id); + u32 mux_id_mask = 0; + u8 *bytes; + + bytes = (u8 *)&mux_id_mask; + bytes[mux_id_offset] = 0xff; /* mux_id is 1 byte */ + + return cpu_to_be32(mux_id_mask); +} + +static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint) +{ + u32 endpoint_id = endpoint->endpoint_id; + u32 val = 0; + u32 offset; + + offset = IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(endpoint_id); + + if (!endpoint->toward_ipa && endpoint->data->config.qmap) + val = ipa_rmnet_mux_id_metadata_mask(); + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +/* Compute the aggregation size value to use for a given buffer size */ +static u32 ipa_aggr_size_kb(u32 rx_buffer_size) +{ + BUILD_BUG_ON(IPA_RX_BUFFER_SIZE > + field_max(AGGR_BYTE_LIMIT_FMASK) * SZ_1K + + IPA_MTU + IPA_RX_BUFFER_OVERHEAD); + + /* Because we don't have the "hard byte limit" enabled, we + * need to make sure there's enough space in the buffer to + * receive a complete MTU (plus normal skb overhead) beyond + * the aggregated size limit we specify. + */ + rx_buffer_size -= IPA_MTU + IPA_RX_BUFFER_OVERHEAD; + + return rx_buffer_size / SZ_1K; +} + +static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint) +{ + const struct ipa_endpoint_config_data *config = &endpoint->data->config; + u32 offset = IPA_REG_ENDP_INIT_AGGR_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + if (config->aggregation) { + if (!endpoint->toward_ipa) { + u32 aggr_size = ipa_aggr_size_kb(IPA_RX_BUFFER_SIZE); + + val |= u32_encode_bits(IPA_ENABLE_AGGR, AGGR_EN_FMASK); + val |= u32_encode_bits(IPA_GENERIC, AGGR_TYPE_FMASK); + val |= u32_encode_bits(aggr_size, + AGGR_BYTE_LIMIT_FMASK); + val |= u32_encode_bits(IPA_AGGR_TIME_LIMIT_DEFAULT, + AGGR_TIME_LIMIT_FMASK); + val |= u32_encode_bits(0, AGGR_PKT_LIMIT_FMASK); + if (config->rx.aggr_close_eof) + val |= AGGR_SW_EOF_ACTIVE_FMASK; + /* AGGR_HARD_BYTE_LIMIT_ENABLE is 0 */ + } else { + val |= u32_encode_bits(IPA_ENABLE_DEAGGR, + AGGR_EN_FMASK); + val |= u32_encode_bits(IPA_QCMAP, AGGR_TYPE_FMASK); + /* other fields ignored */ + } + /* AGGR_FORCE_CLOSE is 0 */ + } else { + val |= u32_encode_bits(IPA_BYPASS_AGGR, AGGR_EN_FMASK); + /* other fields ignored */ + } + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_MODE_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + if (endpoint->toward_ipa && endpoint->data->config.dma_mode) { + u32 dma_endpoint_id = endpoint->data->config.dma_endpoint; + + val |= u32_encode_bits(IPA_DMA, MODE_FMASK); + val |= u32_encode_bits(dma_endpoint_id, DEST_PIPE_INDEX_FMASK); + } else { + val |= u32_encode_bits(IPA_BASIC, MODE_FMASK); + } + /* Other bitfields unspecified (and 0) */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(endpoint->endpoint_id); + u32 val = 0; + + /* DEAGGR_HDR_LEN is 0 */ + /* PACKET_OFFSET_VALID is 0 */ + /* PACKET_OFFSET_LOCATION is ignored (not valid) */ + /* MAX_PACKET_LEN is 0 (not enforced) */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint) +{ + u32 offset = IPA_REG_ENDP_INIT_SEQ_N_OFFSET(endpoint->endpoint_id); + u32 seq_type = endpoint->data->seq_type; + u32 val = 0; + + val |= 
u32_encode_bits(seq_type & 0xf, HPS_SEQ_TYPE_FMASK); + val |= u32_encode_bits((seq_type >> 4) & 0xf, DPS_SEQ_TYPE_FMASK); + /* HPS_REP_SEQ_TYPE is 0 */ + /* DPS_REP_SEQ_TYPE is 0 */ + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +/* Complete transaction initiated in ipa_endpoint_skb_tx() */ +void ipa_endpoint_skb_tx_callback(struct gsi_trans *trans) +{ + struct sk_buff *skb = trans->data; + + dev_kfree_skb_any(skb); +} + +/** + * ipa_endpoint_skb_tx() - Transmit a socket buffer + * @endpoint: Endpoint pointer + * @skb: Socket buffer to send + * + * Returns: 0 if successful, or a negative error code + */ +int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb) +{ + struct gsi_trans *trans; + bool doorbell; + u32 nr_frags; + int ret; + + /* Make sure source endpoint's TLV FIFO has enough entries to + * hold the linear portion of the skb and all its fragments. + * If not, see if we can linearize it before giving up. + */ + nr_frags = skb_shinfo(skb)->nr_frags; + if (1 + nr_frags > endpoint->trans_tre_max) { + if (skb_linearize(skb)) + return -ENOMEM; + nr_frags = 0; + } + + trans = gsi_channel_trans_alloc(&endpoint->ipa->gsi, + endpoint->channel_id, nr_frags + 1); + if (!trans) + return -EBUSY; + trans->data = skb; + + ret = skb_to_sgvec(skb, trans->sgl, 0, skb->len); + if (ret < 0) + goto err_trans_free; + trans->sgc = ret; + + /* doorbell = __netdev_sent_queue(skb->dev, ->len, ->xmit_more); */ + doorbell = !skb->xmit_more; + ret = gsi_trans_commit(trans, doorbell); + if (ret) + goto err_trans_free; + return 0; + +err_trans_free: + gsi_trans_free(trans); + + return -ENOMEM; +} + +static void ipa_endpoint_status(struct ipa_endpoint *endpoint) +{ + const struct ipa_endpoint_config_data *config = &endpoint->data->config; + enum ipa_endpoint_id endpoint_id = endpoint->endpoint_id; + u32 val = 0; + u32 offset; + + offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id); + + if (endpoint->data->config.status_enable) { + val |= STATUS_EN_FMASK; + if (endpoint->toward_ipa) { + u32 status_endpoint_id = config->tx.status_endpoint; + + val |= u32_encode_bits(status_endpoint_id, + STATUS_ENDP_FMASK); + } + /* STATUS_LOCATION is 0 (status element precedes packet) */ + /* STATUS_PKT_SUPPRESS_FMASK */ + } + + iowrite32(val, endpoint->ipa->reg_virt + offset); +} + +static void ipa_endpoint_skb_copy(struct ipa_endpoint *endpoint, + void *data, u32 len, u32 extra) +{ + struct sk_buff *skb; + + skb = __dev_alloc_skb(len, GFP_ATOMIC); + if (skb) { + skb_put(skb, len); + memcpy(skb->data, data, len); + skb->truesize += extra; + } + + /* Now receive it, or drop it if there's no netdev */ + if (endpoint->netdev) + ipa_netdev_skb_rx(endpoint->netdev, skb); + else if (skb) + dev_kfree_skb_any(skb); +} + +static void ipa_endpoint_skb_build(struct ipa_endpoint *endpoint, + struct page *page, u32 len) +{ + struct sk_buff *skb; + + /* assert(len <= SKB_WITH_OVERHEAD(IPA_RX_BUFFER_SIZE-NET_SKB_PAD)); */ + skb = build_skb(page_address(page), IPA_RX_BUFFER_SIZE); + if (skb) { + /* Reserve the headroom and account for the data */ + skb_reserve(skb, NET_SKB_PAD); + skb_put(skb, len); + } + + /* Now receive it, or drop it if there's no netdev */ + if (endpoint->netdev) + ipa_netdev_skb_rx(endpoint->netdev, skb); + else if (skb) + dev_kfree_skb_any(skb); + + /* If no socket buffer took the pages, free them */ + if (!skb) + __free_pages(page, IPA_RX_BUFFER_ORDER); +} + +/* Maps an exception type returned in a ipa_status_raw structure + * to the ipa_status_exception value that represents it in + * the 
exception field of a ipa_status structure. Returns + * IPA_STATUS_EXCEPTION_MAX for an unrecognized value. + */ +static enum ipa_status_exception exception_map(u8 exception, bool is_ipv6) +{ + switch (exception) { + case 0x00: return IPA_STATUS_EXCEPTION_NONE; + case 0x01: return IPA_STATUS_EXCEPTION_DEAGGR; + case 0x04: return IPA_STATUS_EXCEPTION_IPTYPE; + case 0x08: return IPA_STATUS_EXCEPTION_PACKET_LENGTH; + case 0x10: return IPA_STATUS_EXCEPTION_FRAG_RULE_MISS; + case 0x20: return IPA_STATUS_EXCEPTION_SW_FILT; + case 0x40: return is_ipv6 ? IPA_STATUS_EXCEPTION_IPV6CT + : IPA_STATUS_EXCEPTION_NAT; + default: return IPA_STATUS_EXCEPTION_MAX; + } +} + +/* A rule miss is indicated as an all-1's value in the rt_rule_id + * or flt_rule_id field of the ipa_status structure. + */ +static bool ipa_rule_miss_id(u32 id) +{ + return id == field_max(IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK); +} + +size_t ipa_status_parse(struct ipa_status *status, void *data, u32 count) +{ + const struct ipa_status_raw *status_raw = data; + bool is_ipv6; + u32 val; + + BUILD_BUG_ON(sizeof(*status_raw) % 4); + if (WARN_ON(count < sizeof(*status_raw))) + return 0; + + status->opcode = status_raw->opcode; + is_ipv6 = status_raw->mask & BIT(7) ? false : true; + status->exception = exception_map(status_raw->exception, is_ipv6); + status->pkt_len = status_raw->pkt_len; + val = u32_get_bits(status_raw->endp_dst_idx, IPA_STATUS_DST_IDX_FMASK); + status->dst_endpoint = val; + status->metadata = status_raw->metadata; + val = u32_get_bits(status_raw->flags1, + IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK); + status->rt_miss = ipa_rule_miss_id(val) ? 1 : 0; + + return sizeof(*status_raw); +} + +/* The format of a packet status element is the same for several status + * types (opcodes). The NEW_FRAG_RULE, LOG, DCMP (decompression) types + * aren't currently supported + */ +static bool ipa_status_format_packet(enum ipa_status_opcode opcode) +{ + switch (opcode) { + case IPA_STATUS_OPCODE_PACKET: + case IPA_STATUS_OPCODE_DROPPED_PACKET: + case IPA_STATUS_OPCODE_SUSPENDED_PACKET: + case IPA_STATUS_OPCODE_PACKET_2ND_PASS: + return true; + default: + return false; + } +} + +static bool ipa_endpoint_status_skip(struct ipa_endpoint *endpoint, + struct ipa_status *status) +{ + if (!ipa_status_format_packet(status->opcode)) + return true; + if (!status->pkt_len) + return true; + if (status->dst_endpoint != endpoint->endpoint_id) + return true; + + return false; /* Don't skip this packet, process it */ +} + +static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint, + struct page *page, u32 total_len) +{ + void *data = page_address(page) + NET_SKB_PAD; + u32 unused = IPA_RX_BUFFER_SIZE - total_len; + u32 resid = total_len; + + while (resid) { + struct ipa_status status; + bool drop_packet = false; + size_t status_size; + u32 align; + u32 len; + + status_size = ipa_status_parse(&status, data, resid); + + /* Skip over status packets that lack packet data */ + if (ipa_endpoint_status_skip(endpoint, &status)) { + data += status_size; + resid -= status_size; + continue; + } + + /* Packet data follows the status structure. Unless + * the packet failed to match a routing rule, or it + * had a deaggregation exception, we'll consume it. + */ + if (status.exception == IPA_STATUS_EXCEPTION_NONE) { + if (status.rt_miss) + drop_packet = true; + } else if (status.exception == IPA_STATUS_EXCEPTION_DEAGGR) { + drop_packet = true; + } + + /* Compute the amount of buffer space consumed by the + * packet, including the status element. 
If the hardware is configured to pad packet data to an aligned boundary, account for that.  And if checksum offload is enabled, a trailer containing computed checksum information will be appended.
+		 */
+		align = endpoint->data->config.rx.pad_align ? : 1;
+		len = status_size + ALIGN(status.pkt_len, align);
+		if (endpoint->data->config.checksum)
+			len += sizeof(struct rmnet_map_dl_csum_trailer);
+
+		/* Charge the new packet with a proportional fraction of
+		 * the unused space in the original receive buffer.
+		 * XXX Charge a proportion of the *whole* receive buffer?
+		 */
+		if (!drop_packet) {
+			u32 extra = unused * len / total_len;
+			void *data2 = data + status_size;
+			u32 len2 = status.pkt_len;
+
+			/* Client receives only packet data (no status) */
+			ipa_endpoint_skb_copy(endpoint, data2, len2, extra);
+		}
+
+		/* Consume status and the full packet it describes */
+		data += len;
+		resid -= len;
+	}
+
+	__free_pages(page, IPA_RX_BUFFER_ORDER);
+}
+
+/* Complete transaction initiated in ipa_endpoint_replenish_one() */
+void ipa_endpoint_rx_callback(struct gsi_trans *trans)
+{
+	struct page *page = trans->data;
+	struct ipa_endpoint *endpoint;
+	struct ipa *ipa;
+	u32 len;
+
+	ipa = container_of(trans->gsi, struct ipa, gsi);
+	endpoint = ipa->endpoint_map[trans->channel_id];
+
+	atomic_inc(&endpoint->replenish_backlog);
+	ipa_endpoint_replenish(endpoint);
+
+	if (trans->result == -ECANCELED) {
+		__free_pages(page, IPA_RX_BUFFER_ORDER);
+		return;
+	}
+
+	/* Record the actual received length and build a socket buffer */
+	/* assert(trans_result > 0); */
+	len = trans->result;
+
+	if (endpoint->data->config.status_enable)
+		ipa_endpoint_status_parse(endpoint, page, len);
+	else
+		ipa_endpoint_skb_build(endpoint, page, len);
+}
+
+static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
+{
+	struct gsi_trans *trans;
+	bool doorbell = false;
+	struct page *page;
+	u32 offset;
+	u32 len;
+
+	page = dev_alloc_pages(IPA_RX_BUFFER_ORDER);
+	if (!page)
+		return -ENOMEM;
+	offset = NET_SKB_PAD;
+	len = IPA_RX_BUFFER_SIZE - offset;
+
+	trans = gsi_channel_trans_alloc(&endpoint->ipa->gsi,
+					endpoint->channel_id, 1);
+	if (!trans)
+		goto err_page_free;
+	trans->data = page;
+
+	/* Set up and map a scatterlist entry representing the buffer */
+	sg_init_table(trans->sgl, trans->sgc);
+	sg_set_page(trans->sgl, page, len, offset);
+
+	if (++endpoint->replenish_ready == IPA_REPLENISH_BATCH) {
+		doorbell = true;
+		endpoint->replenish_ready = 0;
+	}
+
+	if (!gsi_trans_commit(trans, doorbell))
+		return 0;
+
+	gsi_trans_free(trans);	/* commit failed; we still own it */
+err_page_free:
+	__free_pages(page, IPA_RX_BUFFER_ORDER);
+
+	return -ENOMEM;
+}
+
+/**
+ * ipa_endpoint_replenish() - Replenish the Rx packets cache.
+ * @endpoint: Endpoint to be replenished
+ *
+ * Allocate RX packet wrapper structures with maximal socket buffers
+ * for an endpoint.  These are supplied to the hardware, which fills
+ * them with incoming data.
+ */
+static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
+{
+	u32 backlog;
+
+	while (atomic_dec_not_zero(&endpoint->replenish_backlog))
+		if (ipa_endpoint_replenish_one(endpoint))
+			goto try_again_later;
+
+	return;
+
+try_again_later:
+	/* The last one didn't succeed, so fix the backlog */
+	backlog = atomic_inc_return(&endpoint->replenish_backlog);
+
+	/* Whenever a receive buffer transaction completes we'll try to
+	 * replenish again.  It's unlikely, but if we fail to supply even
+	 * one buffer, nothing will trigger another replenish attempt.
+	 * If this happens, schedule work to try again.
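+	 *
+	 * (Editor's note: the backlog equals replenish_max only when no
+	 * receive buffers remain posted to the hardware, so no completion
+	 * callback is coming that would call ipa_endpoint_replenish();
+	 * the delayed work is then the only way a retry can happen.)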
+	 */
+	if (backlog == endpoint->replenish_max)
+		schedule_delayed_work(&endpoint->replenish_work,
+				      msecs_to_jiffies(1));
+}
+
+static void ipa_endpoint_replenish_work(struct work_struct *work)
+{
+	struct delayed_work *dwork = to_delayed_work(work);
+	struct ipa_endpoint *endpoint;
+
+	endpoint = container_of(dwork, struct ipa_endpoint, replenish_work);
+
+	ipa_endpoint_replenish(endpoint);
+}
+
+static bool ipa_endpoint_set_up(struct ipa_endpoint *endpoint)
+{
+	struct ipa *ipa = endpoint->ipa;
+
+	return ipa && (ipa->set_up & BIT(endpoint->endpoint_id));
+}
+
+static void ipa_endpoint_default_route_set(struct ipa *ipa,
+					   enum ipa_endpoint_id endpoint_id)
+{
+	u32 val;
+
+	/* ROUTE_DIS is 0 */
+	val = u32_encode_bits(endpoint_id, ROUTE_DEF_PIPE_FMASK);
+	val |= ROUTE_DEF_HDR_TABLE_FMASK;
+	val |= u32_encode_bits(0, ROUTE_DEF_HDR_OFST_FMASK);
+	val |= u32_encode_bits(endpoint_id, ROUTE_FRAG_DEF_PIPE_FMASK);
+	val |= ROUTE_DEF_RETAIN_HDR_FMASK;
+
+	iowrite32(val, ipa->reg_virt + IPA_REG_ROUTE_OFFSET);
+}
+
+/**
+ * ipa_endpoint_default_route_setup() - Configure the IPA default route
+ * @endpoint: Endpoint to which default-route exceptions should be directed
+ */
+void ipa_endpoint_default_route_setup(struct ipa_endpoint *endpoint)
+{
+	ipa_endpoint_default_route_set(endpoint->ipa, endpoint->endpoint_id);
+}
+
+/**
+ * ipa_endpoint_default_route_teardown() -
+ *	Inverse of ipa_endpoint_default_route_setup()
+ * @endpoint: Endpoint whose default route should be torn down
+ */
+void ipa_endpoint_default_route_teardown(struct ipa_endpoint *endpoint)
+{
+	ipa_endpoint_default_route_set(endpoint->ipa, 0);
+}
+
+/**
+ * ipa_endpoint_stop() - Stop a GSI channel in IPA
+ * @endpoint: Endpoint whose GSI channel should be stopped
+ *
+ * This function implements the sequence to stop a GSI channel
+ * in IPA.  It returns when the channel is in the STOPPED state.
+ *
+ * Return: 0 on success, or a negative error code otherwise
+ */
+int ipa_endpoint_stop(struct ipa_endpoint *endpoint)
+{
+	struct device *dev = &endpoint->ipa->pdev->dev;
+	size_t size = IPA_ENDPOINT_STOP_RX_SIZE;
+	struct gsi *gsi = &endpoint->ipa->gsi;
+	void *virt = NULL;
+	dma_addr_t addr;
+	int ret;
+	int i;
+
+	/* An RX endpoint might not stop right away.  In that case we issue
+	 * a small (1-byte) DMA command, delay for a bit (1-2 milliseconds),
+	 * and try again.  Allocate the DMA buffer in case this is needed.
+ */ + if (!endpoint->toward_ipa) { + virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL); + if (!virt) + return -ENOMEM; + } + + for (i = 0; i < IPA_ENDPOINT_STOP_RETRY_MAX; i++) { + ret = gsi_channel_stop(gsi, endpoint->channel_id); + if (ret != -EAGAIN) + break; + + if (endpoint->toward_ipa) + continue; + + /* Send a 1 byte 32-bit DMA task and try again after a delay */ + ret = ipa_cmd_dma_task_32(endpoint->ipa, size, addr); + if (ret) + break; + + usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC); + } + if (i >= IPA_ENDPOINT_STOP_RETRY_MAX) + ret = -EIO; + + if (!endpoint->toward_ipa) + dma_free_coherent(dev, size, virt, addr); + + return ret; +} + +bool ipa_endpoint_enabled(struct ipa_endpoint *endpoint) +{ + return !!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)); +} + +int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint) +{ + struct ipa *ipa = endpoint->ipa; + int ret; + + if (WARN_ON(!ipa_endpoint_set_up(endpoint))) + return -EINVAL; + + ret = gsi_channel_start(&ipa->gsi, endpoint->channel_id); + if (ret) + return ret; + + ipa_interrupt_suspend_enable(ipa->interrupt, endpoint->endpoint_id); + + if (!endpoint->toward_ipa) + ipa_endpoint_replenish(endpoint); + + ipa->enabled |= BIT(endpoint->endpoint_id); + + return 0; +} + +void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint) +{ + struct ipa *ipa = endpoint->ipa; + int ret; + + if (WARN_ON(!ipa_endpoint_enabled(endpoint))) + return; + + ipa_interrupt_suspend_disable(ipa->interrupt, endpoint->endpoint_id); + + ret = ipa_endpoint_stop(endpoint); + WARN(ret, "error %d attempting to stop endpoint %u\n", ret, + endpoint->endpoint_id); + + if (!ret) + endpoint->ipa->enabled &= ~BIT(endpoint->endpoint_id); +} + +static bool ipa_endpoint_aggr_active(struct ipa_endpoint *endpoint) +{ + u32 mask = BIT(endpoint->endpoint_id); + struct ipa *ipa = endpoint->ipa; + u32 val; + + val = ioread32(ipa->reg_virt + IPA_REG_STATE_AGGR_ACTIVE_OFFSET); + + return !!(val & mask); +} + +static void ipa_endpoint_force_close(struct ipa_endpoint *endpoint) +{ + u32 mask = BIT(endpoint->endpoint_id); + struct ipa *ipa = endpoint->ipa; + u32 val; + + val = u32_encode_bits(mask, PIPE_BITMAP_FMASK); + iowrite32(val, ipa->reg_virt + IPA_REG_AGGR_FORCE_CLOSE_OFFSET); +} + +/** + * ipa_endpoint_reset_rx_aggr() - Reset RX endpoint with aggregation active + * @endpoint: Endpoint to be reset + * + * If aggregation is active on an RX endpoint when a reset is performed + * on its underlying GSI channel, a special sequence of actions must be + * taken to ensure the IPA pipeline is properly cleared. + * + * Return: 0 if successful, or a negative error code + */ +static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint) +{ + struct device *dev = &endpoint->ipa->pdev->dev; + struct ipa *ipa = endpoint->ipa; + bool endpoint_suspended = false; + struct gsi *gsi = &ipa->gsi; + dma_addr_t addr; + u32 len = 1; + void *virt; + int ret; + int i; + + virt = kzalloc(len, GFP_KERNEL); + if (!virt) + return -ENOMEM; + + addr = dma_map_single(dev, virt, len, DMA_FROM_DEVICE); + if (dma_mapping_error(dev, addr)) { + ret = -ENOMEM; + goto out_free_virt; + } + + /* Force close aggregation before issuing the reset */ + ipa_endpoint_force_close(endpoint); + + ret = gsi_channel_reset(gsi, endpoint->channel_id); + if (ret) + goto out_unmap_addr; + + /* Reconfigure the channel with the doorbell engine disabled, + * and poll until we know aggregation isn't active any more. 
+ */ + gsi_channel_config(gsi, endpoint->channel_id, false); + + if (ipa_endpoint_init_ctrl(endpoint, false)) + endpoint_suspended = true; + + /* Start channel and do a 1 byte read */ + ret = gsi_channel_start(gsi, endpoint->channel_id); + if (ret) + goto out_suspend_again; + + ret = gsi_trans_read_byte(gsi, endpoint->channel_id, addr); + if (ret) + goto err_stop_channel; + + /* Wait for aggregation to be closed on the channel */ + for (i = 0; i < IPA_ENDPOINT_RESET_AGGR_RETRY_MAX; i++) { + if (!ipa_endpoint_aggr_active(endpoint)) + break; + usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC); + } + WARN_ON(ipa_endpoint_aggr_active(endpoint)); + + ret = ipa_endpoint_stop(endpoint); + if (ret) + goto out_suspend_again; + + /* Finally, reset the channel again, and sleep for 1 millisecond + * to complete the channel reset sequence. + * + * Finish by suspending the channel again (if necessary) and + * re-enabling its doorbell engine. + */ + ret = gsi_channel_reset(gsi, endpoint->channel_id); + + usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC); + + goto out_suspend_again; + +err_stop_channel: + ipa_endpoint_stop(endpoint); +out_suspend_again: + if (endpoint_suspended) + (void)ipa_endpoint_init_ctrl(endpoint, true); + + /* Reconfigure the channel, with doorbell engine enabled again */ + gsi_channel_config(gsi, endpoint->channel_id, true); +out_unmap_addr: + dma_unmap_single(dev, addr, len, DMA_FROM_DEVICE); +out_free_virt: + kfree(virt); + + return ret; +} + +static void ipa_endpoint_reset(struct ipa_endpoint *endpoint) +{ + u32 channel_id = endpoint->channel_id; + struct ipa *ipa = endpoint->ipa; + struct gsi *gsi = &ipa->gsi; + int ret; + + /* For TX endpoints, or RX endpoints without aggregation active, + * we only need to reset the underlying GSI channel. + */ + if (!endpoint->toward_ipa && endpoint->data->config.aggregation) { + if (ipa_endpoint_aggr_active(endpoint)) + ret = ipa_endpoint_reset_rx_aggr(endpoint); + else + ret = gsi_channel_reset(gsi, channel_id); + } else { + ret = gsi_channel_reset(gsi, channel_id); + } + WARN(ret, "error %d attempting to reset channel %u\n", ret, + endpoint->channel_id); +} + +static bool ipa_endpoint_suspended(struct ipa_endpoint *endpoint) +{ + return !!(endpoint->ipa->suspended & BIT(endpoint->endpoint_id)); +} + +/** + * ipa_endpoint_suspend_aggr() - Emulate suspend interrupt + * @endpoint_id: Endpoint on which to emulate a suspend + * + * Emulate suspend IPA interrupt to unsuspend an endpoint suspended + * with an open aggregation frame. This is to work around a hardware + * issue where the suspend interrupt will not be generated when it + * should be. + */ +static void ipa_endpoint_suspend_aggr(struct ipa_endpoint *endpoint) +{ + struct ipa *ipa = endpoint->ipa; + + /* Nothing to do if the endpoint doesn't have aggregation open */ + if (!ipa_endpoint_aggr_active(endpoint)) + return; + + /* Force close aggregation */ + ipa_endpoint_force_close(endpoint); + + ipa_interrupt_simulate_suspend(ipa->interrupt); +} + +/* Suspend an RX endpoint */ +void ipa_endpoint_suspend(struct ipa_endpoint *endpoint) +{ + struct gsi *gsi = &endpoint->ipa->gsi; + + if (!ipa_endpoint_enabled(endpoint)) + return; + + if (!ipa_endpoint_init_ctrl(endpoint, true)) + return; + + /* Due to a hardware bug, a client suspended with an open + * aggregation frame will not generate a SUSPEND IPA interrupt. + * We work around this by force-closing the aggregation frame, + * then simulating the arrival of such an interrupt. 
+ */ + if (endpoint->data->config.aggregation) + ipa_endpoint_suspend_aggr(endpoint); + + gsi_channel_trans_quiesce(gsi, endpoint->channel_id); + + endpoint->ipa->suspended |= BIT(endpoint->endpoint_id); +} + +/* Resume a suspended RX endpoint */ +void ipa_endpoint_resume(struct ipa_endpoint *endpoint) +{ + if (!ipa_endpoint_suspended(endpoint)) + return; + + if (!ipa_endpoint_init_ctrl(endpoint, false)) + return; + + endpoint->ipa->suspended &= ~BIT(endpoint->endpoint_id); +} + +static void ipa_endpoint_program(struct ipa_endpoint *endpoint) +{ + if (endpoint->toward_ipa) { + bool delay_mode = !!endpoint->data->config.tx.delay; + + (void)ipa_endpoint_init_ctrl(endpoint, delay_mode); + ipa_endpoint_init_hdr_ext(endpoint); + ipa_endpoint_init_aggr(endpoint); + ipa_endpoint_init_deaggr(endpoint); + ipa_endpoint_init_seq(endpoint); + } else { + (void)ipa_endpoint_init_ctrl(endpoint, false); + ipa_endpoint_init_hdr_ext(endpoint); + ipa_endpoint_init_aggr(endpoint); + } + ipa_endpoint_init_cfg(endpoint); + ipa_endpoint_init_hdr(endpoint); + ipa_endpoint_init_hdr_metadata_mask(endpoint); + ipa_endpoint_init_mode(endpoint); + ipa_endpoint_status(endpoint); +} + +static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint) +{ + struct gsi *gsi = &endpoint->ipa->gsi; + u32 channel_id = endpoint->channel_id; + + /* Only AP endpoints get configured */ + if (endpoint->ee_id != GSI_EE_AP) + return; + + endpoint->trans_tre_max = gsi_channel_trans_tre_max(gsi, channel_id); + if (!endpoint->toward_ipa) { + endpoint->replenish_max = + gsi_channel_trans_max(gsi, channel_id); + atomic_set(&endpoint->replenish_backlog, + endpoint->replenish_max); + INIT_DELAYED_WORK(&endpoint->replenish_work, + ipa_endpoint_replenish_work); + } + + ipa_endpoint_program(endpoint); + + endpoint->ipa->set_up |= BIT(endpoint->endpoint_id); +} + +static void ipa_endpoint_teardown_one(struct ipa_endpoint *endpoint) +{ + if (!endpoint->toward_ipa) + cancel_delayed_work_sync(&endpoint->replenish_work); + + ipa_endpoint_reset(endpoint); + + endpoint->ipa->set_up &= ~BIT(endpoint->endpoint_id); +} + +void ipa_endpoint_setup(struct ipa *ipa) +{ + u32 initialized = ipa->initialized; + + ipa->set_up = 0; + while (initialized) { + enum ipa_endpoint_id endpoint_id = __ffs(initialized); + + initialized ^= BIT(endpoint_id); + + ipa_endpoint_setup_one(&ipa->endpoint[endpoint_id]); + } +} + +void ipa_endpoint_teardown(struct ipa *ipa) +{ + u32 set_up = ipa->set_up; + + while (set_up) { + enum ipa_endpoint_id endpoint_id = __fls(set_up); + + set_up ^= BIT(endpoint_id); + + ipa_endpoint_teardown_one(&ipa->endpoint[endpoint_id]); + } +} + +static int ipa_endpoint_init_one(struct ipa *ipa, + const struct gsi_ipa_endpoint_data *data) +{ + struct ipa_endpoint *endpoint; + + if (data->endpoint_id >= IPA_ENDPOINT_MAX) + return -EIO; + endpoint = &ipa->endpoint[data->endpoint_id]; + + if (data->ee_id == GSI_EE_AP) + ipa->endpoint_map[data->channel_id] = endpoint; + + endpoint->ipa = ipa; + endpoint->ee_id = data->ee_id; + endpoint->channel_id = data->channel_id; + endpoint->endpoint_id = data->endpoint_id; + endpoint->toward_ipa = data->toward_ipa; + endpoint->data = &data->endpoint; + + if (endpoint->data->support_flt) + ipa->filter_support |= BIT(endpoint->endpoint_id); + + ipa->initialized |= BIT(endpoint->endpoint_id); + + return 0; +} + +void ipa_endpoint_exit_one(struct ipa_endpoint *endpoint) +{ + endpoint->ipa->initialized &= ~BIT(endpoint->endpoint_id); +} + +int ipa_endpoint_init(struct ipa *ipa, u32 data_count, + const struct 
gsi_ipa_endpoint_data *data)
+{
+	u32 initialized;
+	int ret;
+	u32 i;
+
+	ipa->initialized = 0;
+
+	ipa->filter_support = 0;
+	for (i = 0; i < data_count; i++) {
+		ret = ipa_endpoint_init_one(ipa, &data[i]);
+		if (ret)
+			goto err_endpoint_unwind;
+	}
+	dev_dbg(&ipa->pdev->dev, "initialized 0x%08x\n", ipa->initialized);
+
+	/* Verify the bitmap of endpoints that support filtering. */
+	dev_dbg(&ipa->pdev->dev, "filter_support 0x%08x\n",
+		ipa->filter_support);
+	ret = -EINVAL;
+	if (!ipa->filter_support)
+		goto err_endpoint_unwind;
+	if (hweight32(ipa->filter_support) > IPA_SMEM_FLT_COUNT)
+		goto err_endpoint_unwind;
+
+	return 0;
+
+err_endpoint_unwind:
+	initialized = ipa->initialized;
+	while (initialized) {
+		enum ipa_endpoint_id endpoint_id = __fls(initialized);
+
+		initialized ^= BIT(endpoint_id);
+
+		ipa_endpoint_exit_one(&ipa->endpoint[endpoint_id]);
+	}
+
+	return ret;
+}
+
+void ipa_endpoint_exit(struct ipa *ipa)
+{
+	u32 initialized = ipa->initialized;
+
+	while (initialized) {
+		enum ipa_endpoint_id endpoint_id = __fls(initialized);
+
+		initialized ^= BIT(endpoint_id);
+
+		ipa_endpoint_exit_one(&ipa->endpoint[endpoint_id]);
+	}
+}
diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
new file mode 100644
index 000000000000..39fb02f524bc
--- /dev/null
+++ b/drivers/net/ipa/ipa_endpoint.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _IPA_ENDPOINT_H_
+#define _IPA_ENDPOINT_H_
+
+#include
+#include
+#include
+
+#include "gsi.h"
+#include "ipa_reg.h"
+
+struct net_device;
+struct sk_buff;
+
+struct ipa;
+struct gsi_ipa_endpoint_data;
+
+#define IPA_MTU			ETH_DATA_LEN
+
+enum ipa_endpoint_id {
+	IPA_ENDPOINT_INVALID		= 0,
+	IPA_ENDPOINT_AP_MODEM_TX	= 2,
+	IPA_ENDPOINT_MODEM_LAN_TX	= 3,
+	IPA_ENDPOINT_MODEM_COMMAND_TX	= 4,
+	IPA_ENDPOINT_AP_COMMAND_TX	= 5,
+	IPA_ENDPOINT_MODEM_AP_TX	= 6,
+	IPA_ENDPOINT_AP_LAN_RX		= 9,
+	IPA_ENDPOINT_AP_MODEM_RX	= 10,
+	IPA_ENDPOINT_MODEM_AP_RX	= 12,
+	IPA_ENDPOINT_MODEM_LAN_RX	= 13,
+};
+
+#define IPA_ENDPOINT_MAX	32	/* Max supported */
+
+/**
+ * struct ipa_endpoint - IPA endpoint information
+ * @ipa: IPA the endpoint belongs to
+ * @ee_id: Execution environment that uses the endpoint
+ * @channel_id: Endpoint's GSI channel
+ * @endpoint_id: IPA endpoint number
+ * @evt_ring_id: Endpoint's GSI channel event ring
+ */
+struct ipa_endpoint {
+	struct ipa *ipa;
+	enum ipa_seq_type seq_type;
+	enum gsi_ee_id ee_id;
+	u32 channel_id;
+	enum ipa_endpoint_id endpoint_id;
+	u32 toward_ipa;		/* Boolean */
+	const struct ipa_endpoint_data *data;
+
+	u32 trans_tre_max;	/* maximum descriptors per transaction */
+	u32 evt_ring_id;
+
+	/* Net device this endpoint is associated with, if any */
+	struct net_device *netdev;
+
+	/* Receive buffer replenishing for RX endpoints */
+	u32 replenish_max;
+	u32 replenish_ready;
+	atomic_t replenish_backlog;
+	struct delayed_work replenish_work;	/* global wq */
+};
+
+bool ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay);
+
+int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb);
+
+int ipa_endpoint_stop(struct ipa_endpoint *endpoint);
+
+void ipa_endpoint_exit_one(struct ipa_endpoint *endpoint);
+
+bool ipa_endpoint_enabled(struct ipa_endpoint *endpoint);
+int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint);
+void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint);
+
+void ipa_endpoint_default_route_setup(struct ipa_endpoint *endpoint);
+void ipa_endpoint_default_route_teardown(struct ipa_endpoint *endpoint);
+
+void ipa_endpoint_suspend(struct ipa_endpoint *endpoint);
+void ipa_endpoint_resume(struct ipa_endpoint *endpoint);
+
+void ipa_endpoint_setup(struct ipa *ipa);
+void ipa_endpoint_teardown(struct ipa *ipa);
+
+int ipa_endpoint_init(struct ipa *ipa, u32 data_count,
+		      const struct gsi_ipa_endpoint_data *data);
+void ipa_endpoint_exit(struct ipa *ipa);
+
+void ipa_endpoint_skb_tx_callback(struct gsi_trans *trans);
+void ipa_endpoint_rx_callback(struct gsi_trans *trans);
+
+#endif /* _IPA_ENDPOINT_H_ */
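For readers tracing the receive path above: each receive buffer holds a
sequence of status elements, each optionally followed by packet data padded
to an aligned boundary, which is exactly what the ipa_endpoint_status_parse()
loop accounts for.  The standalone sketch below (plain C, compilable outside
the kernel) walks a buffer with the same accounting; the demo_status record
is a simplified stand-in for illustration, not the real ipa_status_raw
hardware layout.

/* Standalone sketch (not driver code): walk a buffer laid out as a
 * sequence of fixed-size status records, each optionally followed by
 * packet data padded to an aligned boundary.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((uint32_t)(a) - 1))

struct demo_status {		/* hypothetical, simplified record */
	uint16_t opcode;	/* nonzero means "packet" */
	uint16_t pkt_len;	/* bytes of packet data that follow */
};

static void demo_status_walk(const uint8_t *data, uint32_t total,
			     uint32_t pad_align)
{
	uint32_t align = pad_align ? pad_align : 1;
	uint32_t resid = total;

	while (resid >= sizeof(struct demo_status)) {
		struct demo_status status;
		uint32_t len;

		memcpy(&status, data, sizeof(status));

		/* Status-only element: consume just the status record */
		if (!status.opcode || !status.pkt_len) {
			data += sizeof(status);
			resid -= sizeof(status);
			continue;
		}

		/* Packet data follows; account for any pad bytes too */
		len = sizeof(status) + ALIGN(status.pkt_len, align);
		if (len > resid)
			break;	/* truncated element; stop */
		printf("packet: %u data bytes (%u consumed)\n",
		       status.pkt_len, len);
		data += len;
		resid -= len;
	}
}

int main(void)
{
	uint8_t buf[64] = { 0 };
	struct demo_status s1 = { .opcode = 1, .pkt_len = 5 };
	struct demo_status s2 = { .opcode = 1, .pkt_len = 9 };

	memcpy(buf, &s1, sizeof(s1));			/* 5 bytes pad to 8 */
	memcpy(buf + sizeof(s1) + 8, &s2, sizeof(s2));	/* 9 bytes pad to 12 */

	demo_status_walk(buf, sizeof(s1) + 8 + sizeof(s2) + 12, 4);
	return 0;
}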
From patchwork Sun May 12 01:25:02 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163981
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
	ilias.apalodimas@linaro.org
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
	benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
	linux-kernel@vger.kernel.org, Alex Elder
Subject: [PATCH 12/18] soc: qcom: ipa: immediate commands
Date: Sat, 11 May 2019 20:25:02 -0500
Message-Id: <20190512012508.10608-13-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

One TX endpoint (per EE) is used for issuing immediate commands to
the IPA.  These commands request activities beyond simple data
transfers to be done by the IPA hardware.  For example, the IPA is
able to manage routing packets among endpoints, and immediate commands
are used to configure tables used for that routing.

Immediate commands are built on top of GSI transactions.  They are
different from normal transfers (in that they use a special endpoint,
and their "payload" is interpreted differently), so separate functions
are used to issue immediate command transactions.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/ipa_cmd.c | 372 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_cmd.h | 116 ++++++++++++
 2 files changed, 488 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_cmd.c
 create mode 100644 drivers/net/ipa/ipa_cmd.h

-- 
2.20.1
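The wrappers this patch adds hide the underlying GSI transaction from
callers entirely.  A minimal caller sketch follows (a hypothetical setup
function; the offset and size values are illustrative placeholders, not
real IPA v3.5.1 memory-map values):

/* Hypothetical caller sketch (not part of this patch): how IPA setup
 * code might use the immediate command wrappers defined below.
 */
#include "ipa.h"
#include "ipa_cmd.h"

static int example_ipa_mem_setup(struct ipa *ipa)
{
	int ret;

	/* Define a header region in IPA local memory and zero it */
	ret = ipa_cmd_hdr_init_local(ipa, 0x0000, 0x0200);
	if (ret)
		return ret;

	/* Zero a block of IPA shared memory with a DMA command */
	return ipa_cmd_smem_dma_zero(ipa, 0x1000, 0x0400);
}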
diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
new file mode 100644
index 000000000000..7b655d6f9b7b
--- /dev/null
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -0,0 +1,372 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+
+#include "gsi.h"
+#include "gsi_trans.h"
+#include "ipa.h"
+#include "ipa_endpoint.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+
+/**
+ * DOC: IPA Immediate Commands
+ *
+ * The AP command TX endpoint is used to issue immediate commands to the IPA.
+ * An immediate command is generally used to request the IPA do something
+ * other than data transfer to another endpoint.
+ *
+ * Immediate commands are represented by GSI transactions just like other
+ * transfer requests, represented by a single GSI TRE.  Each immediate
+ * command has a well-defined format, having a payload of a known length.
+ * This allows the transfer element's length field to be used to hold an
+ * immediate command's opcode.  The payload for a command resides in DRAM
+ * and is described by a single scatterlist entry in its transaction.
+ * Commands do not require a transaction completion callback.  To commit
+ * an immediate command transaction, either gsi_trans_commit_command() or
+ * gsi_trans_commit_command_timeout() is used.
+ */
+
+#define IPA_GSI_DMA_TASK_TIMEOUT	15	/* milliseconds */
+
+/**
+ * __ipa_cmd_timeout() - Send an immediate command with timeout
+ * @ipa: IPA structure
+ * @opcode: Immediate command opcode (must not be IPA_CMD_NONE)
+ * @payload: Pointer to command payload
+ * @size: Size of payload
+ * @timeout: Milliseconds to wait for completion (0 waits indefinitely)
+ *
+ * This common function implements ipa_cmd() and ipa_cmd_timeout().  It
+ * allocates, initializes, and commits a transaction for the immediate
+ * command.  The transaction is committed using gsi_trans_commit_command(),
+ * or if a non-zero timeout is supplied, gsi_trans_commit_command_timeout().
+ *
+ * Return: 0 if successful, or a negative error code
+ */
+static int __ipa_cmd_timeout(struct ipa *ipa, enum ipa_cmd_opcode opcode,
+			     void *payload, size_t size, u32 timeout)
+{
+	struct ipa_endpoint *endpoint = ipa->command_endpoint;
+	struct gsi_trans *trans;
+	int ret;
+
+	/* assert(opcode != IPA_CMD_NONE) */
+	trans = gsi_channel_trans_alloc(&ipa->gsi, endpoint->channel_id, 1);
+	if (!trans)
+		return -EBUSY;
+
+	sg_init_one(trans->sgl, payload, size);
+
+	if (timeout)
+		ret = gsi_trans_commit_command_timeout(trans, opcode, timeout);
+	else
+		ret = gsi_trans_commit_command(trans, opcode);
+	if (ret)
+		goto err_trans_free;
+
+	return 0;
+
+err_trans_free:
+	gsi_trans_free(trans);
+
+	return ret;
+}
+
+static int
+ipa_cmd(struct ipa *ipa, enum ipa_cmd_opcode opcode, void *payload, size_t size)
+{
+	return __ipa_cmd_timeout(ipa, opcode, payload, size, 0);
+}
+
+static int ipa_cmd_timeout(struct ipa *ipa, enum ipa_cmd_opcode opcode,
+			   void *payload, size_t size)
+{
+	return __ipa_cmd_timeout(ipa, opcode, payload, size,
+				 IPA_GSI_DMA_TASK_TIMEOUT);
+}
+
+/* Field masks for ipa_imm_cmd_hw_hdr_init_local structure fields */
+#define IPA_CMD_HDR_INIT_FLAGS_TABLE_SIZE_FMASK		GENMASK(11, 0)
+#define IPA_CMD_HDR_INIT_FLAGS_HDR_ADDR_FMASK		GENMASK(27, 12)
+
+struct ipa_imm_cmd_hw_hdr_init_local {
+	u64 hdr_table_addr;
+	u32 flags;
+	u32 pad;	/* XXX Needed?
*/ +}; + +/* Initialize header space in IPA local memory */ +int ipa_cmd_hdr_init_local(struct ipa *ipa, u32 offset, u32 size) +{ + struct ipa_imm_cmd_hw_hdr_init_local *payload; + struct device *dev = &ipa->pdev->dev; + dma_addr_t addr; + void *virt; + u32 flags; + u32 max; + int ret; + + /* Note: size *can* be zero in this case */ + if (size > field_max(IPA_CMD_HDR_INIT_FLAGS_TABLE_SIZE_FMASK)) + return -EINVAL; + + max = field_max(IPA_CMD_HDR_INIT_FLAGS_HDR_ADDR_FMASK); + if (offset > max || ipa->shared_offset > max - offset) + return -EINVAL; + offset += ipa->shared_offset; + + /* A zero-filled buffer of the right size is all that's required */ + virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL); + if (!virt) + return -ENOMEM; + + payload = kzalloc(sizeof(*payload), GFP_KERNEL); + if (!payload) { + ret = -ENOMEM; + goto out_dma_free; + } + + payload->hdr_table_addr = addr; + flags = u32_encode_bits(size, IPA_CMD_HDR_INIT_FLAGS_TABLE_SIZE_FMASK); + flags |= u32_encode_bits(offset, IPA_CMD_HDR_INIT_FLAGS_HDR_ADDR_FMASK); + payload->flags = flags; + + ret = ipa_cmd(ipa, IPA_CMD_HDR_INIT_LOCAL, payload, sizeof(*payload)); + + kfree(payload); +out_dma_free: + dma_free_coherent(dev, size, virt, addr); + + return ret; +} + +enum ipahal_pipeline_clear_option { + IPAHAL_HPS_CLEAR = 0, + IPAHAL_SRC_GRP_CLEAR = 1, + IPAHAL_FULL_PIPELINE_CLEAR = 2, +}; + +/* Field masks for ipa_imm_cmd_hw_dma_shared_mem structure fields */ +#define IPA_CMD_DMA_SHARED_FLAGS_DIRECTION_FMASK GENMASK(0, 0) +#define IPA_CMD_DMA_SHARED_FLAGS_SKIP_CLEAR_FMASK GENMASK(1, 1) +#define IPA_CMD_DMA_SHARED_FLAGS_CLEAR_OPTIONS_FMASK GENMASK(3, 2) + +struct ipa_imm_cmd_hw_dma_shared_mem { + u16 sw_rsvd; + u16 size; + u16 local_addr; + u16 flags; + u64 system_addr; +}; + +/* Use a DMA command to zero a block of memory */ +int ipa_cmd_smem_dma_zero(struct ipa *ipa, u32 offset, u32 size) +{ + struct ipa_imm_cmd_hw_dma_shared_mem *payload; + struct device *dev = &ipa->pdev->dev; + dma_addr_t addr; + void *virt; + u32 val; + int ret; + + /* size must be non-zero, and must fit in a 16 bit field */ + if (!size || size > U16_MAX) + return -EINVAL; + + /* offset must fit in a 16 bit local_addr field */ + if (offset > U16_MAX || ipa->shared_offset > U16_MAX - offset) + return -EINVAL; + offset += ipa->shared_offset; + + /* A zero-filled buffer of the right size is all that's required */ + virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL); + if (!virt) + return -ENOMEM; + + payload = kzalloc(sizeof(*payload), GFP_KERNEL); + if (!payload) { + ret = -ENOMEM; + goto out_dma_free; + } + + payload->size = size; + payload->local_addr = offset; + /* direction: 0 = write to IPA; skip clear: 0 = don't wait */ + val = u16_encode_bits(IPAHAL_HPS_CLEAR, + IPA_CMD_DMA_SHARED_FLAGS_CLEAR_OPTIONS_FMASK); + payload->flags = val; + payload->system_addr = addr; + + ret = ipa_cmd(ipa, IPA_CMD_DMA_SHARED_MEM, payload, sizeof(*payload)); + + kfree(payload); +out_dma_free: + dma_free_coherent(dev, size, virt, addr); + + return ret; +} + +/* Field masks for ipa_imm_cmd_hw_ip_fltrt_init structure fields */ +#define IPA_CMD_IP_FLTRT_FLAGS_HASH_SIZE_FMASK GENMASK_ULL(11, 0) +#define IPA_CMD_IP_FLTRT_FLAGS_HASH_ADDR_FMASK GENMASK_ULL(27, 12) +#define IPA_CMD_IP_FLTRT_FLAGS_NHASH_SIZE_FMASK GENMASK_ULL(39, 28) +#define IPA_CMD_IP_FLTRT_FLAGS_NHASH_ADDR_FMASK GENMASK_ULL(55, 40) + +struct ipa_imm_cmd_hw_ip_fltrt_init { + u64 hash_rules_addr; + u64 flags; + u64 nhash_rules_addr; +}; + +/* Configure a routing or filter table, for IPv4 or IPv6 */ +static int 
ipa_cmd_table_config(struct ipa *ipa, enum ipa_cmd_opcode opcode, + dma_addr_t addr, size_t size, u32 hash_offset, + u32 nhash_offset) +{ + struct ipa_imm_cmd_hw_ip_fltrt_init *payload; + u64 val; + u32 max; + int ret; + + if (size > field_max(IPA_CMD_IP_FLTRT_FLAGS_HASH_SIZE_FMASK)) + return -EINVAL; + if (size > field_max(IPA_CMD_IP_FLTRT_FLAGS_NHASH_SIZE_FMASK)) + return -EINVAL; + + max = field_max(IPA_CMD_IP_FLTRT_FLAGS_HASH_ADDR_FMASK); + if (hash_offset > max || ipa->shared_offset > max - hash_offset) + return -EINVAL; + hash_offset += ipa->shared_offset; + + max = field_max(IPA_CMD_IP_FLTRT_FLAGS_NHASH_ADDR_FMASK); + if (nhash_offset > max || ipa->shared_offset > max - nhash_offset) + return -EINVAL; + nhash_offset += ipa->shared_offset; + + payload = kzalloc(sizeof(*payload), GFP_KERNEL); + if (!payload) + return -ENOMEM; + + payload->hash_rules_addr = addr; + val = u64_encode_bits(size, IPA_CMD_IP_FLTRT_FLAGS_HASH_SIZE_FMASK); + val |= u64_encode_bits(hash_offset, + IPA_CMD_IP_FLTRT_FLAGS_HASH_ADDR_FMASK); + val |= u64_encode_bits(size, IPA_CMD_IP_FLTRT_FLAGS_NHASH_SIZE_FMASK); + val |= u64_encode_bits(nhash_offset, + IPA_CMD_IP_FLTRT_FLAGS_NHASH_ADDR_FMASK); + payload->flags = val; + payload->nhash_rules_addr = addr; + + ret = ipa_cmd(ipa, opcode, payload, sizeof(*payload)); + + kfree(payload); + + return ret; +} + +/* Configure IPv4 routing table */ +int ipa_cmd_route_config_ipv4(struct ipa *ipa, size_t size) +{ + enum ipa_cmd_opcode opcode = IPA_CMD_IP_V4_ROUTING_INIT; + u32 nhash_offset = IPA_SMEM_V4_RT_NHASH_OFFSET; + u32 hash_offset = IPA_SMEM_V4_RT_HASH_OFFSET; + dma_addr_t addr = ipa->route_addr; + + return ipa_cmd_table_config(ipa, opcode, addr, size, hash_offset, + nhash_offset); +} + +/* Configure IPv6 routing table */ +int ipa_cmd_route_config_ipv6(struct ipa *ipa, size_t size) +{ + enum ipa_cmd_opcode opcode = IPA_CMD_IP_V6_ROUTING_INIT; + u32 nhash_offset = IPA_SMEM_V6_RT_NHASH_OFFSET; + u32 hash_offset = IPA_SMEM_V6_RT_HASH_OFFSET; + dma_addr_t addr = ipa->route_addr; + + return ipa_cmd_table_config(ipa, opcode, addr, size, hash_offset, + nhash_offset); +} + +/* Configure IPv4 filter table */ +int ipa_cmd_filter_config_ipv4(struct ipa *ipa, size_t size) +{ + enum ipa_cmd_opcode opcode = IPA_CMD_IP_V4_FILTER_INIT; + u32 nhash_offset = IPA_SMEM_V4_FLT_NHASH_OFFSET; + u32 hash_offset = IPA_SMEM_V4_FLT_HASH_OFFSET; + dma_addr_t addr = ipa->filter_addr; + + return ipa_cmd_table_config(ipa, opcode, addr, size, hash_offset, + nhash_offset); +} + +/* Configure IPv6 filter table */ +int ipa_cmd_filter_config_ipv6(struct ipa *ipa, size_t size) +{ + enum ipa_cmd_opcode opcode = IPA_CMD_IP_V6_FILTER_INIT; + u32 nhash_offset = IPA_SMEM_V6_FLT_NHASH_OFFSET; + u32 hash_offset = IPA_SMEM_V6_FLT_HASH_OFFSET; + dma_addr_t addr = ipa->filter_addr; + + return ipa_cmd_table_config(ipa, opcode, addr, size, hash_offset, + nhash_offset); +} + +/* Field masks for ipa_imm_cmd_hw_dma_task_32b_addr structure fields */ +#define IPA_CMD_DMA32_TASK_SW_RSVD_FMASK GENMASK(10, 0) +#define IPA_CMD_DMA32_TASK_CMPLT_FMASK GENMASK(11, 11) +#define IPA_CMD_DMA32_TASK_EOF_FMASK GENMASK(12, 12) +#define IPA_CMD_DMA32_TASK_FLSH_FMASK GENMASK(13, 13) +#define IPA_CMD_DMA32_TASK_LOCK_FMASK GENMASK(14, 14) +#define IPA_CMD_DMA32_TASK_UNLOCK_FMASK GENMASK(15, 15) +#define IPA_CMD_DMA32_SIZE1_FMASK GENMASK(31, 16) +#define IPA_CMD_DMA32_PACKET_SIZE_FMASK GENMASK(15, 0) + +struct ipa_imm_cmd_hw_dma_task_32b_addr { + u32 size1_flags; + u32 addr1; + u32 packet_size; + u32 rsvd; +}; + +/* Use a 32-bit DMA command 
to zero a block of memory */
+int ipa_cmd_dma_task_32(struct ipa *ipa, size_t size, dma_addr_t addr)
+{
+	struct ipa_imm_cmd_hw_dma_task_32b_addr *payload;
+	u32 size1_flags;
+	int ret;
+
+	if (size > field_max(IPA_CMD_DMA32_SIZE1_FMASK))
+		return -EINVAL;
+	if (size > field_max(IPA_CMD_DMA32_PACKET_SIZE_FMASK))
+		return -EINVAL;
+
+	payload = kzalloc(sizeof(*payload), GFP_KERNEL);
+	if (!payload)
+		return -ENOMEM;
+
+	/* complete: 0 = don't interrupt; eof: 0 = don't assert eot */
+	size1_flags = IPA_CMD_DMA32_TASK_FLSH_FMASK;
+	/* lock: 0 = don't lock endpoint; unlock: 0 = don't unlock */
+	size1_flags |= u32_encode_bits(size, IPA_CMD_DMA32_SIZE1_FMASK);
+
+	payload->size1_flags = size1_flags;
+	payload->addr1 = addr;
+	payload->packet_size =
+		u32_encode_bits(size, IPA_CMD_DMA32_PACKET_SIZE_FMASK);
+
+	ret = ipa_cmd_timeout(ipa, IPA_CMD_DMA_TASK_32B_ADDR, payload,
+			      sizeof(*payload));
+
+	kfree(payload);
+
+	return ret;
+}
diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h
new file mode 100644
index 000000000000..dadb9d92067a
--- /dev/null
+++ b/drivers/net/ipa/ipa_cmd.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _IPA_CMD_H_
+#define _IPA_CMD_H_
+
+#include
+
+struct sk_buff;
+
+struct ipa;
+
+/**
+ * enum ipa_cmd_opcode: IPA immediate commands
+ *
+ * All immediate commands are issued using the AP command TX endpoint.
+ * The numeric values here are the opcodes for IPA v3.5.1 hardware.
+ *
+ * IPA_CMD_NONE is a special (invalid) value that's used to indicate
+ * a request is *not* an immediate command.
+ */
+enum ipa_cmd_opcode {
+	IPA_CMD_NONE			= 0,
+	IPA_CMD_IP_V4_FILTER_INIT	= 3,
+	IPA_CMD_IP_V6_FILTER_INIT	= 4,
+	IPA_CMD_IP_V4_ROUTING_INIT	= 7,
+	IPA_CMD_IP_V6_ROUTING_INIT	= 8,
+	IPA_CMD_HDR_INIT_LOCAL		= 9,
+	IPA_CMD_DMA_TASK_32B_ADDR	= 17,
+	IPA_CMD_DMA_SHARED_MEM		= 19,
+};
+
+/**
+ * ipa_cmd_hdr_init_local() - Initialize header space in IPA local memory
+ * @ipa: IPA structure
+ * @offset: Offset of memory to be initialized
+ * @size: Size of memory to be initialized
+ *
+ * Return: 0 if successful, or a negative error code
+ *
+ * Defines the location of a block of local memory to use for
+ * headers and fills it with zeroes.
+ */
+int ipa_cmd_hdr_init_local(struct ipa *ipa, u32 offset, u32 size);
+
+/**
+ * ipa_cmd_smem_dma_zero() - Use a DMA command to zero a block of memory
+ * @ipa: IPA structure
+ * @offset: Offset of memory to be zeroed
+ * @size: Size in bytes of memory to be zeroed
+ *
+ * Return: 0 if successful, or a negative error code
+ */
+int ipa_cmd_smem_dma_zero(struct ipa *ipa, u32 offset, u32 size);
+
+/**
+ * ipa_cmd_route_config_ipv4() - Configure IPv4 routing table
+ * @ipa: IPA structure
+ * @size: Size in bytes of table
+ *
+ * Return: 0 if successful, or a negative error code
+ *
+ * Defines the location and size of the IPv4 routing table and
+ * zeroes its content.
+ */
+int ipa_cmd_route_config_ipv4(struct ipa *ipa, size_t size);
+
+/**
+ * ipa_cmd_route_config_ipv6() - Configure IPv6 routing table
+ * @ipa: IPA structure
+ * @size: Size in bytes of table
+ *
+ * Return: 0 if successful, or a negative error code
+ *
+ * Defines the location and size of the IPv6 routing table and
+ * zeroes its content.
+ */
+int ipa_cmd_route_config_ipv6(struct ipa *ipa, size_t size);
+
+/**
+ * ipa_cmd_filter_config_ipv4() - Configure IPv4 filter table
+ * @ipa: IPA structure
+ * @size: Size in bytes of table
+ *
+ * Return: 0 if successful, or a negative error code
+ *
+ * Defines the location and size of the IPv4 filter table and
+ * zeroes its content.
+ */
+int ipa_cmd_filter_config_ipv4(struct ipa *ipa, size_t size);
+
+/**
+ * ipa_cmd_filter_config_ipv6() - Configure IPv6 filter table
+ * @ipa: IPA structure
+ * @size: Size in bytes of table
+ *
+ * Return: 0 if successful, or a negative error code
+ *
+ * Defines the location and size of the IPv6 filter table and
+ * zeroes its content.
+ */
+int ipa_cmd_filter_config_ipv6(struct ipa *ipa, size_t size);
+
+/**
+ * ipa_cmd_dma_task_32() - Use a 32-bit DMA command to zero a block of memory
+ * @ipa: IPA structure
+ * @size: Size of memory to be zeroed
+ * @addr: DMA address defining start of range to be zeroed
+ *
+ * Return: 0 if successful, or a negative error code
+ */
+int ipa_cmd_dma_task_32(struct ipa *ipa, size_t size, dma_addr_t addr);
+
+#endif /* _IPA_CMD_H_ */
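The range checking used throughout ipa_cmd.c follows one pattern: a
GENMASK-style constant describes a bitfield, field_max() derives the
largest value the field can hold (used for the -EINVAL checks above), and
u32_encode_bits() shifts a value into position.  The standalone sketch
below illustrates the pattern for the header-address field; the helper
reimplementations are simplified stand-ins so it builds outside the kernel.

/* Standalone illustration (not driver code) of the field-mask pattern */
#include <stdint.h>
#include <stdio.h>

#define GENMASK(h, l)	(((~0u) << (l)) & (~0u >> (31 - (h))))

static uint32_t field_max(uint32_t mask)
{
	return mask >> __builtin_ctz(mask);	/* max value the field holds */
}

static uint32_t encode_bits(uint32_t val, uint32_t mask)
{
	return (val << __builtin_ctz(mask)) & mask;
}

#define HDR_ADDR_FMASK	GENMASK(27, 12)	/* same field as in ipa_cmd.c */

int main(void)
{
	uint32_t offset = 0x1f0;

	/* Reject out-of-range offsets, as the driver's checks do */
	if (offset > field_max(HDR_ADDR_FMASK))
		return 1;

	printf("flags word: 0x%08x\n", encode_bits(offset, HDR_ADDR_FMASK));
	return 0;
}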
From patchwork Sun May 12 01:25:03 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163975
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
	ilias.apalodimas@linaro.org
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
	benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
	linux-kernel@vger.kernel.org, Alex Elder
Subject: [PATCH 13/18] soc: qcom: ipa: IPA network device and microcontroller
Date: Sat, 11 May 2019 20:25:03 -0500
Message-Id: <20190512012508.10608-14-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

This patch includes the code that implements a Linux network device,
using one TX and one RX IPA endpoint.  It is used to implement the
network device representing the modem and its connection to wireless
networks.

There are only a few things that are really modem-specific though,
and they aren't clearly called out here.  Such distinctions will be
made clearer if we wish to support a network device for anything
other than the modem.

Sort of unrelated, this patch also includes the code supporting the
microcontroller CPU present on the IPA.  The microcontroller can be
used to implement special handling of packets, but at this time we
don't support that.  Still, it is a component that needs to be
initialized, and in the event of a crash we need to do some
synchronization between the AP and the microcontroller.

Signed-off-by: Alex Elder
---
 drivers/net/ipa/ipa_netdev.c | 250 +++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_netdev.h |  24 ++++
 drivers/net/ipa/ipa_uc.c     | 208 +++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_uc.h     |  32 +++++
 4 files changed, 514 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_netdev.c
 create mode 100644 drivers/net/ipa/ipa_netdev.h
 create mode 100644 drivers/net/ipa/ipa_uc.c
 create mode 100644 drivers/net/ipa/ipa_uc.h

-- 
2.20.1
diff --git a/drivers/net/ipa/ipa_netdev.c b/drivers/net/ipa/ipa_netdev.c
new file mode 100644
index 000000000000..18e6a9de401a
--- /dev/null
+++ b/drivers/net/ipa/ipa_netdev.c
@@ -0,0 +1,250 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+
+/* Modem Transport Network Driver.
*/ + +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_data.h" +#include "ipa_endpoint.h" +#include "ipa_mem.h" +#include "ipa_netdev.h" +#include "ipa_qmi.h" + +#define IPA_NETDEV_NAME "rmnet_ipa%d" + +#define TAILROOM 0 /* for padding by mux layer */ + +#define IPA_NETDEV_TIMEOUT 10 /* seconds */ + +/** struct ipa_priv - IPA network device private data */ +struct ipa_priv { + struct ipa_endpoint *tx_endpoint; + struct ipa_endpoint *rx_endpoint; +}; + +/** ipa_netdev_open() - Opens the modem network interface */ +static int ipa_netdev_open(struct net_device *netdev) +{ + struct ipa_priv *priv = netdev_priv(netdev); + int ret; + + ret = ipa_endpoint_enable_one(priv->rx_endpoint); + if (ret) + return ret; + ret = ipa_endpoint_enable_one(priv->tx_endpoint); + if (ret) + goto err_disable_rx; + + netif_start_queue(netdev); + + return 0; + +err_disable_rx: + ipa_endpoint_disable_one(priv->rx_endpoint); + + return ret; +} + +/** ipa_netdev_stop() - Stops the modem network interface. */ +static int ipa_netdev_stop(struct net_device *netdev) +{ + struct ipa_priv *priv = netdev_priv(netdev); + + netif_stop_queue(netdev); + + ipa_endpoint_disable_one(priv->tx_endpoint); + ipa_endpoint_disable_one(priv->rx_endpoint); + + return 0; +} + +/** ipa_netdev_xmit() - Transmits an skb. + * + * @skb: skb to be transmitted + * @dev: network device + * + * Return codes: + * NETDEV_TX_OK: Success + * NETDEV_TX_BUSY: Error while transmitting the skb. Try again later + */ +static int ipa_netdev_xmit(struct sk_buff *skb, struct net_device *netdev) +{ + struct net_device_stats *stats = &netdev->stats; + struct ipa_priv *priv = netdev_priv(netdev); + struct ipa_endpoint *endpoint; + u32 skb_len = skb->len; + + if (!skb_len) + goto err_drop; + + endpoint = priv->tx_endpoint; + if (endpoint->data->config.qmap && skb->protocol != htons(ETH_P_MAP)) + goto err_drop; + + if (ipa_endpoint_skb_tx(endpoint, skb)) + return NETDEV_TX_BUSY; + + stats->tx_packets++; + stats->tx_bytes += skb_len; + + return NETDEV_TX_OK; + +err_drop: + dev_kfree_skb_any(skb); + stats->tx_dropped++; + + return NETDEV_TX_OK; +} + +void ipa_netdev_skb_rx(struct net_device *netdev, struct sk_buff *skb) +{ + struct net_device_stats *stats = &netdev->stats; + + if (skb) { + skb->dev = netdev; + skb->protocol = htons(ETH_P_MAP); + stats->rx_packets++; + stats->rx_bytes += skb->len; + + (void)netif_receive_skb(skb); + } else { + stats->rx_dropped++; + } +} + +static const struct net_device_ops ipa_netdev_ops = { + .ndo_open = ipa_netdev_open, + .ndo_stop = ipa_netdev_stop, + .ndo_start_xmit = ipa_netdev_xmit, +}; + +/** netdev_setup() - netdev setup function */ +static void netdev_setup(struct net_device *netdev) +{ + netdev->netdev_ops = &ipa_netdev_ops; + ether_setup(netdev); + /* No header ops (override value set by ether_setup()) */ + netdev->header_ops = NULL; + netdev->type = ARPHRD_RAWIP; + netdev->hard_header_len = 0; + netdev->max_mtu = IPA_MTU; + netdev->mtu = netdev->max_mtu; + netdev->addr_len = 0; + netdev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST); + /* The endpoint is configured for QMAP */ + netdev->needed_headroom = sizeof(struct rmnet_map_header); + netdev->needed_tailroom = TAILROOM; + netdev->watchdog_timeo = IPA_NETDEV_TIMEOUT * HZ; + netdev->hw_features = NETIF_F_SG; +} + +/** ipa_netdev_suspend() - suspend callback for runtime_pm + * @dev: pointer to device + * + * This callback will be invoked by the runtime_pm framework when an AP suspend + * operation is invoked, usually by pressing a suspend 
button. + * + * Returns -EAGAIN to runtime_pm framework in case there are pending packets + * in the Tx queue. This will postpone the suspend operation until all the + * pending packets will be transmitted. + * + * In case there are no packets to send, releases the WWAN0_PROD entity. + * As an outcome, the number of IPA active clients should be decremented + * until IPA clocks can be gated. + */ +void ipa_netdev_suspend(struct net_device *netdev) +{ + struct ipa_priv *priv = netdev_priv(netdev); + + netif_stop_queue(netdev); + + ipa_endpoint_suspend(priv->rx_endpoint); +} + +/** ipa_netdev_resume() - resume callback for runtime_pm + * @dev: pointer to device + * + * This callback will be invoked by the runtime_pm framework when an AP resume + * operation is invoked. + * + * Enables the network interface queue and returns success to the + * runtime_pm framework. + */ +void ipa_netdev_resume(struct net_device *netdev) +{ + struct ipa_priv *priv = netdev_priv(netdev); + + ipa_endpoint_resume(priv->rx_endpoint); + + netif_wake_queue(netdev); +} + +struct net_device *ipa_netdev_setup(struct ipa *ipa, + struct ipa_endpoint *rx_endpoint, + struct ipa_endpoint *tx_endpoint) +{ + struct net_device *netdev; + struct ipa_priv *priv; + int ret; + + /* Zero modem shared memory before we begin */ + ret = ipa_smem_zero_modem(ipa); + if (ret) + return ERR_PTR(ret); + + /* Start QMI communication with the modem */ + ret = ipa_qmi_setup(ipa); + if (ret) + return ERR_PTR(ret); + + netdev = alloc_netdev(sizeof(struct ipa_priv), IPA_NETDEV_NAME, + NET_NAME_UNKNOWN, netdev_setup); + if (!netdev) { + ret = -ENOMEM; + goto err_qmi_exit; + } + + rx_endpoint->netdev = netdev; + tx_endpoint->netdev = netdev; + + priv = netdev_priv(netdev); + priv->rx_endpoint = rx_endpoint; + priv->tx_endpoint = tx_endpoint; + + ret = register_netdev(netdev); + if (ret) + goto err_free_netdev; + + return netdev; + +err_free_netdev: + free_netdev(netdev); +err_qmi_exit: + ipa_qmi_teardown(ipa); + + return ERR_PTR(ret); +} + +void ipa_netdev_teardown(struct net_device *netdev) +{ + struct ipa_priv *priv = netdev_priv(netdev); + struct ipa *ipa = priv->tx_endpoint->ipa; + + if (!netif_queue_stopped(netdev)) + (void)ipa_netdev_stop(netdev); + + unregister_netdev(netdev); + + free_netdev(netdev); + + ipa_qmi_teardown(ipa); +} diff --git a/drivers/net/ipa/ipa_netdev.h b/drivers/net/ipa/ipa_netdev.h new file mode 100644 index 000000000000..8ab1e8ea0b4a --- /dev/null +++ b/drivers/net/ipa/ipa_netdev.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ +#ifndef _IPA_NETDEV_H_ +#define _IPA_NETDEV_H_ + +struct ipa; +struct ipa_endpoint; +struct net_device; +struct sk_buff; + +struct net_device *ipa_netdev_setup(struct ipa *ipa, + struct ipa_endpoint *rx_endpoint, + struct ipa_endpoint *tx_endpoint); +void ipa_netdev_teardown(struct net_device *netdev); + +void ipa_netdev_skb_rx(struct net_device *netdev, struct sk_buff *skb); + +void ipa_netdev_suspend(struct net_device *netdev); +void ipa_netdev_resume(struct net_device *netdev); + +#endif /* _IPA_NETDEV_H_ */ diff --git a/drivers/net/ipa/ipa_uc.c b/drivers/net/ipa/ipa_uc.c new file mode 100644 index 000000000000..57256d1c3b90 --- /dev/null +++ b/drivers/net/ipa/ipa_uc.c @@ -0,0 +1,208 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. 
+ */
+
+#include
+#include
+#include
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_uc.h"
+
+/**
+ * DOC: The IPA embedded microcontroller
+ *
+ * The IPA incorporates a microcontroller that is able to do some additional
+ * handling/offloading of network activity.  The current code makes
+ * essentially no use of the microcontroller, but it still requires some
+ * initialization.  It needs to be notified in the event the AP crashes.
+ *
+ * The microcontroller can generate two interrupts to the AP.  One interrupt
+ * is used to indicate that a response to a request from the AP is available.
+ * The other is used to notify the AP of the occurrence of an event.  In
+ * addition, the AP can interrupt the microcontroller by writing a register.
+ *
+ * A 128 byte block of structured memory within the IPA SRAM is used together
+ * with these interrupts to implement the communication interface between the
+ * AP and the IPA microcontroller.  Each side writes data to the shared area
+ * before interrupting its peer, which will read the written data in response
+ * to the interrupt.  Some information found in the shared area is currently
+ * unused.  All remaining space in the shared area is reserved, and must not
+ * be read or written by the AP.
+ */
+/* Supports hardware interface version 0x2000 */
+
+/* Offset relative to the base of the IPA shared address space of the
+ * shared region used for communication with the microcontroller.  The
+ * region is 128 bytes in size, but only the first 40 bytes are used.
+ */
+#define IPA_SMEM_UC_OFFSET	0x0000
+
+/* Delay to allow the microcontroller to save state when crashing */
+#define IPA_SEND_DELAY	100	/* microseconds */
+
+/**
+ * struct ipa_uc_shared_area - AP/microcontroller shared memory area
+ * @command: command code (AP->microcontroller)
+ * @command_param: low 32 bits of command parameter (AP->microcontroller)
+ * @command_param_hi: high 32 bits of command parameter (AP->microcontroller)
+ *
+ * @response: response code (microcontroller->AP)
+ * @response_param: response parameter (microcontroller->AP)
+ *
+ * @event: event code (microcontroller->AP)
+ * @event_param: event parameter (microcontroller->AP)
+ *
+ * @first_error_address: address of first error-source on SNOC
+ * @hw_state: state of hardware (including error type information)
+ * @warning_counter: counter of non-fatal hardware errors
+ * @interface_version: hardware-reported interface version
+ */
+struct ipa_uc_shared_area {
+	u8 command;		/* enum ipa_uc_command */
+	u8 reserved0[3];
+	__le32 command_param;
+	__le32 command_param_hi;
+	u8 response;		/* enum ipa_uc_response */
+	u8 reserved1[3];
+	__le32 response_param;
+	u8 event;		/* enum ipa_uc_event */
+	u8 reserved2[3];
+
+	__le32 event_param;
+	__le32 first_error_address;
+	u8 hw_state;
+	u8 warning_counter;
+	__le16 reserved3;
+	__le16 interface_version;
+	__le16 reserved4;
+};
+
+/** enum ipa_uc_command - commands from the AP to the microcontroller */
+enum ipa_uc_command {
+	IPA_UC_COMMAND_NO_OP		= 0,
+	IPA_UC_COMMAND_UPDATE_FLAGS	= 1,
+	IPA_UC_COMMAND_DEBUG_RUN_TEST	= 2,
+	IPA_UC_COMMAND_DEBUG_GET_INFO	= 3,
+	IPA_UC_COMMAND_ERR_FATAL	= 4,
+	IPA_UC_COMMAND_CLK_GATE		= 5,
+	IPA_UC_COMMAND_CLK_UNGATE	= 6,
+	IPA_UC_COMMAND_MEMCPY		= 7,
+	IPA_UC_COMMAND_RESET_PIPE	= 8,
+	IPA_UC_COMMAND_REG_WRITE	= 9,
+	IPA_UC_COMMAND_GSI_CH_EMPTY	= 10,
+};
+
+/** enum ipa_uc_response - microcontroller response codes */
+enum ipa_uc_response {
+	IPA_UC_RESPONSE_NO_OP		= 0,
+	IPA_UC_RESPONSE_INIT_COMPLETED	= 1,
+	IPA_UC_RESPONSE_CMD_COMPLETED	= 2,
+	IPA_UC_RESPONSE_DEBUG_GET_INFO	= 3,
+};
+
+/** enum ipa_uc_event - common CPU events reported by the microcontroller */
+enum ipa_uc_event {
+	IPA_UC_EVENT_NO_OP	= 0,
+	IPA_UC_EVENT_ERROR	= 1,
+	IPA_UC_EVENT_LOG_INFO	= 2,
+};
+
+/* Microcontroller event IPA interrupt handler */
+static void ipa_uc_event_handler(struct ipa *ipa,
+				 enum ipa_interrupt_id interrupt_id)
+{
+	struct ipa_uc_shared_area *shared;
+
+	shared = ipa->shared_virt + IPA_SMEM_UC_OFFSET;
+	dev_err(&ipa->pdev->dev, "unsupported microcontroller event %hhu\n",
+		shared->event);
+	WARN_ON(shared->event == IPA_UC_EVENT_ERROR);
+}
+
+/* Microcontroller response IPA interrupt handler */
+static void ipa_uc_response_hdlr(struct ipa *ipa,
+				 enum ipa_interrupt_id interrupt_id)
+{
+	struct ipa_uc_shared_area *shared;
+
+	/* An INIT_COMPLETED response message is sent to the AP by the
+	 * microcontroller when it is operational.  Other than this, the AP
+	 * should only receive responses from the microcontroller when it
+	 * has sent it a request message.
+	 *
+	 * We can drop the clock reference taken in ipa_uc_setup() once we
+	 * know the microcontroller has finished its initialization.
+	 */
+	shared = ipa->shared_virt + IPA_SMEM_UC_OFFSET;
+	switch (shared->response) {
+	case IPA_UC_RESPONSE_INIT_COMPLETED:
+		ipa->uc_loaded = 1;
+		ipa_clock_put(ipa->clock);
+		break;
+	default:
+		dev_warn(&ipa->pdev->dev,
+			 "unsupported microcontroller response %hhu\n",
+			 shared->response);
+		break;
+	}
+}
+
+/* ipa_uc_setup() - Set up the microcontroller */
+void ipa_uc_setup(struct ipa *ipa)
+{
+	/* The microcontroller needs the IPA clock running until it has
+	 * completed its initialization.  It signals this by sending an
+	 * INIT_COMPLETED response message to the AP.  This could occur after
+	 * we have finished doing the rest of the IPA initialization, so we
+	 * need to take an extra "proxy" reference, and hold it until we've
+	 * received that signal.  (This reference is dropped in
+	 * ipa_uc_response_hdlr(), above.)
+	 */
+	ipa_clock_get(ipa->clock);
+
+	ipa->uc_loaded = 0;
+	ipa_interrupt_add(ipa->interrupt, IPA_INTERRUPT_UC_0,
+			  ipa_uc_event_handler);
+	ipa_interrupt_add(ipa->interrupt, IPA_INTERRUPT_UC_1,
+			  ipa_uc_response_hdlr);
+}
+
+/* Inverse of ipa_uc_setup() */
+void ipa_uc_teardown(struct ipa *ipa)
+{
+	ipa_interrupt_remove(ipa->interrupt, IPA_INTERRUPT_UC_1);
+	ipa_interrupt_remove(ipa->interrupt, IPA_INTERRUPT_UC_0);
+	if (!ipa->uc_loaded)
+		ipa_clock_put(ipa->clock);
+}
+
+/* Send a command to the microcontroller */
+static void send_uc_command(struct ipa *ipa, u32 command, u32 command_param)
+{
+	struct ipa_uc_shared_area *shared;
+
+	shared = ipa->shared_virt + IPA_SMEM_UC_OFFSET;
+	shared->command = command;
+	shared->command_param = cpu_to_le32(command_param);
+	shared->command_param_hi = 0;
+	shared->response = 0;
+	shared->response_param = 0;
+
+	iowrite32(1, ipa->reg_virt + IPA_REG_IRQ_UC_OFFSET);
+}
+
+/* Tell the microcontroller the AP is shutting down */
+void ipa_uc_panic_notifier(struct ipa *ipa)
+{
+	if (!ipa->uc_loaded)
+		return;
+
+	send_uc_command(ipa, IPA_UC_COMMAND_ERR_FATAL, 0);
+
+	/* Give the microcontroller enough time to save state */
+	udelay(IPA_SEND_DELAY);
+}
diff --git a/drivers/net/ipa/ipa_uc.h b/drivers/net/ipa/ipa_uc.h
new file mode 100644
index 000000000000..c258cb6e1161
--- /dev/null
+++ b/drivers/net/ipa/ipa_uc.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */ +#ifndef _IPA_UC_H_ +#define _IPA_UC_H_ + +struct ipa; + +/** + * ipa_uc_setup() - set up the IPA microcontroller subsystem + * @ipa: IPA pointer + */ +void ipa_uc_setup(struct ipa *ipa); + +/** + * ipa_uc_teardown() - inverse of ipa_uc_setup() + * @ipa: IPA pointer + */ +void ipa_uc_teardown(struct ipa *ipa); + +/** + * ipa_uc_panic_notifier() + * @ipa: IPA pointer + * + * Notifier function called when the system crashes, to inform the + * microcontroller of the event. + */ +void ipa_uc_panic_notifier(struct ipa *ipa); + +#endif /* _IPA_UC_H_ */ From patchwork Sun May 12 01:25:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 163976 Delivered-To: patch@linaro.org Received: by 2002:a05:6e02:142:0:0:0:0 with SMTP id j2csp4128852ilr; Sat, 11 May 2019 18:25:56 -0700 (PDT) X-Google-Smtp-Source: APXvYqxKeeUa01af3zRfn62OH9GakWSjlk0VfjBkPiUaWGVMa6rdQS0KsthV5K2CvjBjWQoQh4sO X-Received: by 2002:a62:6fc6:: with SMTP id k189mr24895339pfc.154.1557624356357; Sat, 11 May 2019 18:25:56 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1557624356; cv=none; d=google.com; s=arc-20160816; b=xMyEAPByCE3ju0KKtMmzsOabCfuxkd8lKAGUuy+wYuyMTD3zR0DCzqLTqFYet5VL5w 2ZIWfPsc4iiM3Mvbgpl9l/bzoxKpJocA27YEKUQtyjJbrowk5zriodVFkb9cyNd5oUQy EYvzY+g5sjAaGvpSPwKCMDFQe23DpcaypOj5/dBoGX1Vfx4X7CLxmxZkHogYJNA0VBD2 AIkk1LCzFdOu7LiXuQfYVUMNefIhYKQn8oxDp0rQ0noZEbKxOvkBMAlW7Qe4/2FVcWV2 iOb4BpnVDYfPlPt8xV8KE8wTosFwZ9A+zJqXIuQROvltqFcG+ViyW2UWU9P9Oipxq/G7 yNJA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=8VdaA9F2Sm9V8OOjn6xa89XW+9g2gU/neA4kTVWT9dI=; b=noRg6Zvll0KarKZLOw9hx532VO0pvHfFsKwZCGxVXuq4a14lrpw3s0u18pJoIrmp3F Eu/HHK90p7krxrtm3M44J6eJcraV1k9Mgn49XIluYMMKqvhn1oPYJSe0kck3ZbvMmRTN CkNT/LN8Il2t/5jnOF8eE0vQNX2BLfTbTzJJhBEviQcxyhnbQI6vOPBJweyaOoYLNJnT YgP/JWB4VNetcpcGFB3QPgvBBN0hI3g1p99wmYvRwN51mtaHeCQxkAqSiGj1M8DORt6I I+nJ/xbm5JoP1i/vFi//MQLLdQfZLUKYo6BIUx1lz+TPnZkxAaxq9wHYj/1HWebFVqB4 UOVw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="uZfc/llU"; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id g2si11125265pgp.54.2019.05.11.18.25.56; Sat, 11 May 2019 18:25:56 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b="uZfc/llU"; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726817AbfELBZy (ORCPT + 30 others); Sat, 11 May 2019 21:25:54 -0400 Received: from mail-it1-f195.google.com ([209.85.166.195]:50306 "EHLO mail-it1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726681AbfELBZv (ORCPT ); Sat, 11 May 2019 21:25:51 -0400 Received: by mail-it1-f195.google.com with SMTP id i10so11319071ite.0 for ; Sat, 11 May 2019 18:25:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=8VdaA9F2Sm9V8OOjn6xa89XW+9g2gU/neA4kTVWT9dI=; b=uZfc/llUpYuzPkLoxJJ2G4TOkt0qKhTwk8K0aDj79V1Gn+OVexHEBKC2TzNDvPMW3V 7Bhcf5XESCFrkW9HZNh/tUIPtKBmpxDctNdB5ZJ1hZl0D7+E9M4vW4SZA4tNW+E+gIuT 23j1XQ/qr1XggQEugQj183qLStHzA5jO406D53dp7McOsaI3l4W6xhfzIt6Mq/017G4v P1J0nqw5eZ8G7VkNnPNq5E5VsfzBpUaCA+mZ+Xf1YsDqfE2lIw6fEMHudqPgVjBeXldn qW7YnfVq2kPMA2pTYyqAuNW06ERdCwycFLPD+vO3bJhp+SHfynoRucCwy8hBmSeVthaO Wi3Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=8VdaA9F2Sm9V8OOjn6xa89XW+9g2gU/neA4kTVWT9dI=; b=uTSOn1qNgPHD7NuJ5Gpd4dGntnD89QvoMP05easQt/f689aNgR8ARoHf3if1R+XAzq B7A0cfk5NzL6NqiNmKNJRTgHYLrMjuyFelmXQiDxSLhBIsf21tbZOC5kgOUKK0XtXycs ZGSvHhsja9U8BpzZ1l3L+M2yp6Fk+HLqK2mdPDeC2WtM0P0yNZ8QUk4CKq8b4mQUgK7H InRcp8KIjjdi6AC+9vxW5Hc1qGJz7sdbD1MgAXTUbHOqmiqmAr320IrM3j729Bm74AkW cacGASb36rBQjysnrQ6EZX9TeGtwq4sPsQ6zipkz5XytrVp8CQR9tNnuRI3Rxzedu1Z+ k1Zw== X-Gm-Message-State: APjAAAVMiH2RtjB0zW+WuKLUKJSeRPM7gdSoRmlz7DSTVxIvXBFBN3Tp bBHwLim85j2gPZZjtjHUsNB52g== X-Received: by 2002:a24:e0ce:: with SMTP id c197mr7774407ith.31.1557624349352; Sat, 11 May 2019 18:25:49 -0700 (PDT) Received: from shibby.hil-lafwehx.chi.wayport.net (hampton-inn.wintek.com. 
[72.12.199.50]) by smtp.gmail.com with ESMTPSA id u134sm1579013itb.32.2019.05.11.18.25.47 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Sat, 11 May 2019 18:25:48 -0700 (PDT) From: Alex Elder To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org, ilias.apalodimas@linaro.org Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org, benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com, linux-kernel@vger.kernel.org, Alex Elder Subject: [PATCH 14/18] soc: qcom: ipa: AP/modem communications Date: Sat, 11 May 2019 20:25:04 -0500 Message-Id: <20190512012508.10608-15-elder@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190512012508.10608-1-elder@linaro.org> References: <20190512012508.10608-1-elder@linaro.org> MIME-Version: 1.0 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org This patch implements two forms of out-of-band communication between the AP and modem. - QMI is a mechanism that allows clients running on the AP interact with services running on the modem (and vice-versa). The AP IPA driver uses QMI to communicate with the corresponding IPA driver resident on the modem, to agree on parameters used with the IPA hardware and to ensure both sides are ready before entering operational mode. - SMP2P is a more primitive mechanism available for the modem and AP to communicate with each other. It provides a means for either the AP or modem to interrupt the other, and furthermore, to provide 32 bits worth of information. The IPA driver uses SMP2P to tell the modem what the state of the IPA clock was in the event of a crash. This allows the modem to safely access the IPA hardware (or avoid doing so) when a crash occurs, for example, to access information within the IPA hardware. Signed-off-by: Alex Elder --- drivers/net/ipa/ipa_qmi.c | 399 +++++++++++++++++++++++ drivers/net/ipa/ipa_qmi.h | 35 ++ drivers/net/ipa/ipa_qmi_msg.c | 583 ++++++++++++++++++++++++++++++++++ drivers/net/ipa/ipa_qmi_msg.h | 238 ++++++++++++++ drivers/net/ipa/ipa_smp2p.c | 304 ++++++++++++++++++ drivers/net/ipa/ipa_smp2p.h | 47 +++ 6 files changed, 1606 insertions(+) create mode 100644 drivers/net/ipa/ipa_qmi.c create mode 100644 drivers/net/ipa/ipa_qmi.h create mode 100644 drivers/net/ipa/ipa_qmi_msg.c create mode 100644 drivers/net/ipa/ipa_qmi_msg.h create mode 100644 drivers/net/ipa/ipa_smp2p.c create mode 100644 drivers/net/ipa/ipa_smp2p.h -- 2.20.1 diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c new file mode 100644 index 000000000000..ce5455e58ca9 --- /dev/null +++ b/drivers/net/ipa/ipa_qmi.c @@ -0,0 +1,399 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ + +#include +#include +#include +#include +#include + +#include "ipa.h" +#include "ipa_endpoint.h" +#include "ipa_mem.h" +#include "ipa_qmi_msg.h" + +#define QMI_INIT_DRIVER_TIMEOUT 60000 /* A minute in milliseconds */ + +/* The AP and modem perform a "handshake" at initialization time to ensure + * each side knows the other side is ready. Two QMI handles (endpoints) are + * used for this; one provides service on the modem for AP requests, and the + * other is on the AP to service modem requests (and to supply an indication + * from the AP). + * + * The QMI service on the modem expects to receive an INIT_DRIVER request from + * the AP, which contains parameters used by the modem during initialization. 
+ * The AP sends this request using the client handle as soon as it is knows + * the modem side service is available. The modem responds to this request + * immediately. + * + * When the modem learns the AP service is available, it is able to + * communicate its status to the AP. The modem uses this to tell + * the AP when it is ready to receive an indication, sending an + * INDICATION_REGISTER request to the handle served by the AP. This + * is independent of the modem's initialization of its driver. + * + * When the modem has completed the driver initialization requested by the + * AP, it sends a DRIVER_INIT_COMPLETE request to the AP. This request + * could arrive at the AP either before or after the INDICATION_REGISTER + * request. + * + * The final step in the handshake occurs after the AP has received both + * requests from the modem. The AP completes the handshake by sending an + * INIT_COMPLETE_IND indication message to the modem. + */ + +#define IPA_HOST_SERVICE_SVC_ID 0x31 +#define IPA_HOST_SVC_VERS 1 +#define IPA_HOST_SERVICE_INS_ID 1 + +#define IPA_MODEM_SERVICE_SVC_ID 0x31 +#define IPA_MODEM_SERVICE_INS_ID 2 +#define IPA_MODEM_SVC_VERS 1 + +/* Send an INIT_COMPLETE_IND indication message to the modem */ +static int ipa_send_master_driver_init_complete_ind(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq) +{ + struct ipa_init_complete_ind ind = { }; + + ind.status.result = QMI_RESULT_SUCCESS_V01; + ind.status.error = QMI_ERR_NONE_V01; + + return qmi_send_indication(qmi, sq, IPA_QMI_INIT_COMPLETE_IND, + IPA_QMI_INIT_COMPLETE_IND_SZ, + ipa_init_complete_ind_ei, &ind); +} + +/* This function is called to determine whether to complete the handshake by + * sending an INIT_COMPLETE_IND indication message to the modem. The + * "init_driver" parameter is false when we've received an INDICATION_REGISTER + * request message from the modem, or true when we've received the response + * from the INIT_DRIVER request message we send. If this function decides the + * message should be sent, it calls ipa_send_master_driver_init_complete_ind() + * to send it. + */ +static void ipa_handshake_complete(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq, bool init_driver) +{ + struct ipa *ipa; + bool send_it; + int ret; + + if (init_driver) { + ipa = container_of(qmi, struct ipa, qmi.client_handle); + ipa->qmi.init_driver_response_received = true; + send_it = !!ipa->qmi.indication_register_received; + } else { + ipa = container_of(qmi, struct ipa, qmi.server_handle); + ipa->qmi.indication_register_received = 1; + send_it = !!ipa->qmi.init_driver_response_received; + } + if (!send_it) + return; + + ret = ipa_send_master_driver_init_complete_ind(qmi, sq); + WARN(ret, "error %d sending init complete indication\n", ret); +} + +/* Callback function to handle an INDICATION_REGISTER request message from the + * modem. This informs the AP that the modem is now ready to receive the + * INIT_COMPLETE_IND indication message. 
+ */ +static void ipa_indication_register_fn(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq, + struct qmi_txn *txn, + const void *decoded) +{ + struct ipa_indication_register_rsp rsp = { }; + int ret; + + rsp.rsp.result = QMI_RESULT_SUCCESS_V01; + rsp.rsp.error = QMI_ERR_NONE_V01; + + ret = qmi_send_response(qmi, sq, txn, IPA_QMI_INDICATION_REGISTER, + IPA_QMI_INDICATION_REGISTER_RSP_SZ, + ipa_indication_register_rsp_ei, &rsp); + if (!WARN(ret, "error %d sending response\n", ret)) + ipa_handshake_complete(qmi, sq, false); +} + +/* Callback function to handle a DRIVER_INIT_COMPLETE request message from the + * modem. This informs the AP that the modem has completed the initializion + * of its driver. + */ +static void ipa_driver_init_complete_fn(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq, + struct qmi_txn *txn, + const void *decoded) +{ + struct ipa_driver_init_complete_rsp rsp = { }; + int ret; + + rsp.rsp.result = QMI_RESULT_SUCCESS_V01; + rsp.rsp.error = QMI_ERR_NONE_V01; + + ret = qmi_send_response(qmi, sq, txn, IPA_QMI_DRIVER_INIT_COMPLETE, + IPA_QMI_DRIVER_INIT_COMPLETE_RSP_SZ, + ipa_driver_init_complete_rsp_ei, &rsp); + + WARN(ret, "error %d sending response\n", ret); +} + +/* The server handles two request message types sent by the modem. */ +static struct qmi_msg_handler ipa_server_msg_handlers[] = { + { + .type = QMI_REQUEST, + .msg_id = IPA_QMI_INDICATION_REGISTER, + .ei = ipa_indication_register_req_ei, + .decoded_size = IPA_QMI_INDICATION_REGISTER_REQ_SZ, + .fn = ipa_indication_register_fn, + }, + { + .type = QMI_REQUEST, + .msg_id = IPA_QMI_DRIVER_INIT_COMPLETE, + .ei = ipa_driver_init_complete_req_ei, + .decoded_size = IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ, + .fn = ipa_driver_init_complete_fn, + }, +}; + +/* Callback function to handle an IPA_QMI_INIT_DRIVER response message from + * the modem. This only acknowledges that the modem received the request. + * The modem will eventually report that it has completed its modem + * initialization by sending a IPA_QMI_DRIVER_INIT_COMPLETE request. + */ +static void ipa_init_driver_rsp_fn(struct qmi_handle *qmi, + struct sockaddr_qrtr *sq, + struct qmi_txn *txn, + const void *decoded) +{ + txn->result = 0; /* IPA_QMI_INIT_DRIVER request was successful */ + complete(&txn->completion); + + ipa_handshake_complete(qmi, sq, true); +} + +/* The client handles one response message type sent by the modem. */ +static struct qmi_msg_handler ipa_client_msg_handlers[] = { + { + .type = QMI_RESPONSE, + .msg_id = IPA_QMI_INIT_DRIVER, + .ei = ipa_init_modem_driver_rsp_ei, + .decoded_size = IPA_QMI_INIT_DRIVER_RSP_SZ, + .fn = ipa_init_driver_rsp_fn, + }, +}; + +/* Return a pointer to an init modem driver request structure, which contains + * configuration parameters for the modem. The modem may be started multiple + * times, but generally these parameters don't change so we can reuse the + * request structure once it's initialized. The only exception is the + * skip_uc_load field, which will be set only after the microcontroller has + * reported it has completed its initialization. 
+ */ +static const struct ipa_init_modem_driver_req * +init_modem_driver_req(struct ipa_qmi *ipa_qmi) +{ + struct ipa *ipa = container_of(ipa_qmi, struct ipa, qmi); + static struct ipa_init_modem_driver_req req; + + /* This is not the first boot if the microcontroller is loaded */ + req.skip_uc_load = ipa->uc_loaded; + req.skip_uc_load_valid = true; + + /* We only have to initialize most of it once */ + if (req.platform_type_valid) + return &req; + + req.platform_type_valid = true; + req.platform_type = IPA_QMI_PLATFORM_TYPE_MSM_ANDROID; + + req.hdr_tbl_info_valid = IPA_SMEM_MODEM_HDR_SIZE ? 1 : 0; + req.hdr_tbl_info.start = ipa_qmi->base + IPA_SMEM_MODEM_HDR_OFFSET; + req.hdr_tbl_info.end = req.hdr_tbl_info.start + + IPA_SMEM_MODEM_HDR_SIZE - 1; + + req.v4_route_tbl_info_valid = true; + req.v4_route_tbl_info.start = + ipa_qmi->base + IPA_SMEM_V4_RT_NHASH_OFFSET; + req.v4_route_tbl_info.count = IPA_SMEM_MODEM_RT_COUNT; + + req.v6_route_tbl_info_valid = true; + req.v6_route_tbl_info.start = + ipa_qmi->base + IPA_SMEM_V6_RT_NHASH_OFFSET; + req.v6_route_tbl_info.count = IPA_SMEM_MODEM_RT_COUNT; + + req.v4_filter_tbl_start_valid = true; + req.v4_filter_tbl_start = ipa_qmi->base + IPA_SMEM_V4_FLT_NHASH_OFFSET; + + req.v6_filter_tbl_start_valid = true; + req.v6_filter_tbl_start = ipa_qmi->base + IPA_SMEM_V6_FLT_NHASH_OFFSET; + + req.modem_mem_info_valid = IPA_SMEM_MODEM_SIZE ? 1 : 0; + req.modem_mem_info.start = ipa_qmi->base + IPA_SMEM_MODEM_OFFSET; + req.modem_mem_info.size = IPA_SMEM_MODEM_SIZE; + + req.ctrl_comm_dest_end_pt_valid = true; + req.ctrl_comm_dest_end_pt = IPA_ENDPOINT_AP_MODEM_RX; + + req.hdr_proc_ctx_tbl_info_valid = + IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE ? 1 : 0; + req.hdr_proc_ctx_tbl_info.start = + ipa_qmi->base + IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET; + req.hdr_proc_ctx_tbl_info.end = req.hdr_proc_ctx_tbl_info.start + + IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE - 1; + + req.v4_hash_route_tbl_info_valid = true; + req.v4_hash_route_tbl_info.start = + ipa_qmi->base + IPA_SMEM_V4_RT_HASH_OFFSET; + req.v4_hash_route_tbl_info.count = IPA_SMEM_MODEM_RT_COUNT; + + req.v6_hash_route_tbl_info_valid = true; + req.v6_hash_route_tbl_info.start = + ipa_qmi->base + IPA_SMEM_V6_RT_HASH_OFFSET; + req.v6_hash_route_tbl_info.count = IPA_SMEM_MODEM_RT_COUNT; + + req.v4_hash_filter_tbl_start_valid = true; + req.v4_hash_filter_tbl_start = + ipa_qmi->base + IPA_SMEM_V4_FLT_HASH_OFFSET; + + req.v6_hash_filter_tbl_start_valid = true; + req.v6_hash_filter_tbl_start = + ipa_qmi->base + IPA_SMEM_V6_FLT_HASH_OFFSET; + + return &req; +} + +/* The modem service we requested is now available via the client handle. + * Send an INIT_DRIVER request to the modem. 
+ */ +static int +ipa_client_new_server(struct qmi_handle *qmi, struct qmi_service *svc) +{ + const struct ipa_init_modem_driver_req *req; + struct ipa *ipa; + struct sockaddr_qrtr sq; + struct qmi_txn *txn; + int ret; + + ipa = container_of(qmi, struct ipa, qmi.client_handle); + req = init_modem_driver_req(&ipa->qmi); + + txn = kzalloc(sizeof(*txn), GFP_KERNEL); + if (!txn) + return -ENOMEM; + + ret = qmi_txn_init(qmi, txn, NULL, NULL); + if (ret) { + kfree(txn); + return ret; + } + + sq.sq_family = AF_QIPCRTR; + sq.sq_node = svc->node; + sq.sq_port = svc->port; + + ret = qmi_send_request(qmi, &sq, txn, IPA_QMI_INIT_DRIVER, + IPA_QMI_INIT_DRIVER_REQ_SZ, + ipa_init_modem_driver_req_ei, req); + if (!ret) + ret = qmi_txn_wait(txn, MAX_SCHEDULE_TIMEOUT); + if (ret) + qmi_txn_cancel(txn); + kfree(txn); + + return ret; +} + +/* The only callback we supply for the client handle is notification that the + * service on the modem has become available. + */ +static struct qmi_ops ipa_client_ops = { + .new_server = ipa_client_new_server, +}; + +static int ipa_qmi_initialize(struct ipa *ipa) +{ + struct ipa_qmi *ipa_qmi = &ipa->qmi; + int ret; + + /* The only handle operation that might be interesting for the server + * would be del_client, to find out when the modem side client has + * disappeared. But other than reporting the event, we wouldn't do + * anything about that. So we just pass a null pointer for its handle + * operations. All the real work is done by the message handlers. + */ + ret = qmi_handle_init(&ipa_qmi->server_handle, + IPA_QMI_SERVER_MAX_RCV_SZ, NULL, + ipa_server_msg_handlers); + if (ret) + return ret; + + ret = qmi_add_server(&ipa_qmi->server_handle, IPA_HOST_SERVICE_SVC_ID, + IPA_HOST_SVC_VERS, IPA_HOST_SERVICE_INS_ID); + if (ret) + goto err_release_server_handle; + + /* The client handle is only used for sending an INIT_DRIVER request + * to the modem, and receiving its response message. + */ + ret = qmi_handle_init(&ipa_qmi->client_handle, + IPA_QMI_CLIENT_MAX_RCV_SZ, &ipa_client_ops, + ipa_client_msg_handlers); + if (ret) + goto err_release_server_handle; + + ret = qmi_add_lookup(&ipa_qmi->client_handle, IPA_MODEM_SERVICE_SVC_ID, + IPA_MODEM_SVC_VERS, IPA_MODEM_SERVICE_INS_ID); + if (ret) + goto err_release_client_handle; + + /* All QMI offsets are relative to the start of IPA shared memory */ + ipa_qmi->base = ipa->shared_offset; + ipa_qmi->initialized = 1; + + return 0; + +err_release_client_handle: + /* Releasing the handle also removes registered lookups */ + qmi_handle_release(&ipa_qmi->client_handle); + memset(&ipa_qmi->client_handle, 0, sizeof(ipa_qmi->client_handle)); +err_release_server_handle: + /* Releasing the handle also removes registered services */ + qmi_handle_release(&ipa_qmi->server_handle); + memset(&ipa_qmi->server_handle, 0, sizeof(ipa_qmi->server_handle)); + + return ret; +} + +/* This is called by ipa_netdev_setup(). We can be informed via remoteproc + * that the modem has shut down, in which case this function will be called + * again to prepare for it coming back up again. 
+ */ +int ipa_qmi_setup(struct ipa *ipa) +{ + ipa->qmi.init_driver_response_received = 0; + ipa->qmi.indication_register_received = 0; + + if (!ipa->qmi.initialized) + return ipa_qmi_initialize(ipa); + + return 0; +} + +void ipa_qmi_teardown(struct ipa *ipa) +{ + if (!ipa->qmi.initialized) + return; + + qmi_handle_release(&ipa->qmi.client_handle); + memset(&ipa->qmi.client_handle, 0, sizeof(ipa->qmi.client_handle)); + + qmi_handle_release(&ipa->qmi.server_handle); + memset(&ipa->qmi.server_handle, 0, sizeof(ipa->qmi.server_handle)); + + ipa->qmi.initialized = 0; +} diff --git a/drivers/net/ipa/ipa_qmi.h b/drivers/net/ipa/ipa_qmi.h new file mode 100644 index 000000000000..cfdafa23cf8f --- /dev/null +++ b/drivers/net/ipa/ipa_qmi.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ +#ifndef _IPA_QMI_H_ +#define _IPA_QMI_H_ + +#include +#include + +struct ipa; + +/** + * struct ipa_qmi - QMI state associated with an IPA + * @initialized - whether QMI initialization has completed + * @client_handle - used to send an QMI requests to the modem + * @server_handle - used to handle QMI requests from the modem + * @indication_register_received - tracks modem request receipt + * @init_driver_response_received - tracks modem response receipt + */ +struct ipa_qmi { + u32 initialized; + u32 base; + struct qmi_handle client_handle; + struct qmi_handle server_handle; + u32 indication_register_received; + u32 init_driver_response_received; + +}; + +int ipa_qmi_setup(struct ipa *ipa); +void ipa_qmi_teardown(struct ipa *ipa); + +#endif /* !_IPA_QMI_H_ */ diff --git a/drivers/net/ipa/ipa_qmi_msg.c b/drivers/net/ipa/ipa_qmi_msg.c new file mode 100644 index 000000000000..b6b278dff6fb --- /dev/null +++ b/drivers/net/ipa/ipa_qmi_msg.c @@ -0,0 +1,583 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. 
+ */ +#include +#include + +#include "ipa_qmi_msg.h" + +/* QMI message structure definition for struct ipa_indication_register_req */ +struct qmi_elem_info ipa_indication_register_req_ei[] = { + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_req, + master_driver_init_complete_valid), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_indication_register_req, + master_driver_init_complete_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_req, + master_driver_init_complete), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_indication_register_req, + master_driver_init_complete), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_req, + data_usage_quota_reached_valid), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_indication_register_req, + data_usage_quota_reached_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_req, + data_usage_quota_reached), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_indication_register_req, + data_usage_quota_reached), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_indication_register_rsp */ +struct qmi_elem_info ipa_indication_register_rsp_ei[] = { + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_indication_register_rsp, + rsp), + .tlv_type = 0x02, + .offset = offsetof(struct ipa_indication_register_rsp, + rsp), + .ei_array = qmi_response_type_v01_ei, + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_driver_init_complete_req */ +struct qmi_elem_info ipa_driver_init_complete_req_ei[] = { + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_driver_init_complete_req, + status), + .tlv_type = 0x01, + .offset = offsetof(struct ipa_driver_init_complete_req, + status), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_driver_init_complete_rsp */ +struct qmi_elem_info ipa_driver_init_complete_rsp_ei[] = { + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_driver_init_complete_rsp, + rsp), + .tlv_type = 0x02, + .elem_size = offsetof(struct ipa_driver_init_complete_rsp, + rsp), + .ei_array = qmi_response_type_v01_ei, + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_init_complete_ind */ +struct qmi_elem_info ipa_init_complete_ind_ei[] = { + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_complete_ind, + status), + .tlv_type = 0x02, + .elem_size = offsetof(struct ipa_init_complete_ind, + status), + .ei_array = qmi_response_type_v01_ei, + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_mem_bounds */ +struct qmi_elem_info ipa_mem_bounds_ei[] = { + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_bounds, start), + .offset = offsetof(struct ipa_mem_bounds, start), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_bounds, end), + .offset = offsetof(struct ipa_mem_bounds, end), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct 
ipa_mem_array */ +struct qmi_elem_info ipa_mem_array_ei[] = { + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_array, start), + .offset = offsetof(struct ipa_mem_array, start), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_array, count), + .offset = offsetof(struct ipa_mem_array, count), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_mem_range */ +struct qmi_elem_info ipa_mem_range_ei[] = { + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_range, start), + .offset = offsetof(struct ipa_mem_range, start), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_mem_range, size), + .offset = offsetof(struct ipa_mem_range, size), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_init_modem_driver_req */ +struct qmi_elem_info ipa_init_modem_driver_req_ei[] = { + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + platform_type_valid), + .tlv_type = 0x10, + .elem_size = offsetof(struct ipa_init_modem_driver_req, + platform_type_valid), + }, + { + .data_type = QMI_SIGNED_4_BYTE_ENUM, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + platform_type), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_init_modem_driver_req, + platform_type), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_tbl_info_valid), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_tbl_info), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_tbl_info), + .ei_array = ipa_mem_bounds_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_route_tbl_info_valid), + .tlv_type = 0x12, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_route_tbl_info), + .tlv_type = 0x12, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_route_tbl_info_valid), + .tlv_type = 0x13, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_route_tbl_info), + .tlv_type = 0x13, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_filter_tbl_start_valid), + .tlv_type = 0x14, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_filter_tbl_start), + .tlv_type = 0x14, + .offset = offsetof(struct 
ipa_init_modem_driver_req, + v4_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_filter_tbl_start_valid), + .tlv_type = 0x15, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_filter_tbl_start), + .tlv_type = 0x15, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + modem_mem_info_valid), + .tlv_type = 0x16, + .offset = offsetof(struct ipa_init_modem_driver_req, + modem_mem_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + modem_mem_info), + .tlv_type = 0x16, + .offset = offsetof(struct ipa_init_modem_driver_req, + modem_mem_info), + .ei_array = ipa_mem_range_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt_valid), + .tlv_type = 0x17, + .offset = offsetof(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt), + .tlv_type = 0x17, + .offset = offsetof(struct ipa_init_modem_driver_req, + ctrl_comm_dest_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + skip_uc_load_valid), + .tlv_type = 0x18, + .offset = offsetof(struct ipa_init_modem_driver_req, + skip_uc_load_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + skip_uc_load), + .tlv_type = 0x18, + .offset = offsetof(struct ipa_init_modem_driver_req, + skip_uc_load), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info_valid), + .tlv_type = 0x19, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info), + .tlv_type = 0x19, + .offset = offsetof(struct ipa_init_modem_driver_req, + hdr_proc_ctx_tbl_info), + .ei_array = ipa_mem_bounds_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + zip_tbl_info_valid), + .tlv_type = 0x1a, + .offset = offsetof(struct ipa_init_modem_driver_req, + zip_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + zip_tbl_info), + .tlv_type = 0x1a, + .offset = offsetof(struct ipa_init_modem_driver_req, + zip_tbl_info), + .ei_array = ipa_mem_bounds_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info_valid), + .tlv_type = 0x1b, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_route_tbl_info), + .tlv_type = 0x1b, + .offset = offsetof(struct 
ipa_init_modem_driver_req, + v4_hash_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info_valid), + .tlv_type = 0x1c, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info_valid), + }, + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info), + .tlv_type = 0x1c, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_route_tbl_info), + .ei_array = ipa_mem_array_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start_valid), + .tlv_type = 0x1d, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start), + .tlv_type = 0x1d, + .offset = offsetof(struct ipa_init_modem_driver_req, + v4_hash_filter_tbl_start), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start_valid), + .tlv_type = 0x1e, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start), + .tlv_type = 0x1e, + .offset = offsetof(struct ipa_init_modem_driver_req, + v6_hash_filter_tbl_start), + }, + { + .data_type = QMI_EOTI, + }, +}; + +/* QMI message structure definition for struct ipa_init_modem_driver_rsp */ +struct qmi_elem_info ipa_init_modem_driver_rsp_ei[] = { + { + .data_type = QMI_STRUCT, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + rsp), + .tlv_type = 0x02, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + rsp), + .ei_array = qmi_response_type_v01_ei, + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt_valid), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt), + .tlv_type = 0x10, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + ctrl_comm_dest_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + default_end_pt_valid), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + default_end_pt_valid), + }, + { + .data_type = QMI_UNSIGNED_4_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + default_end_pt), + .tlv_type = 0x11, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + default_end_pt), + }, + { + .data_type = QMI_OPT_FLAG, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending_valid), + .tlv_type = 0x12, + .offset = offsetof(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending_valid), + }, + { + .data_type = QMI_UNSIGNED_1_BYTE, + .elem_len = 1, + .elem_size = + sizeof_field(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending), + .tlv_type = 0x12, + .offset = 
offsetof(struct ipa_init_modem_driver_rsp, + modem_driver_init_pending), + }, + { + .data_type = QMI_EOTI, + }, +}; diff --git a/drivers/net/ipa/ipa_qmi_msg.h b/drivers/net/ipa/ipa_qmi_msg.h new file mode 100644 index 000000000000..174e7789efa4 --- /dev/null +++ b/drivers/net/ipa/ipa_qmi_msg.h @@ -0,0 +1,238 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2018-2019 Linaro Ltd. + */ +#ifndef _IPA_QMI_MSG_H_ +#define _IPA_QMI_MSG_H_ + +#include +#include + +/* === NOTE: Only "ipa_qmi_msg.c" should include this file === */ + +/* Request/response/indication QMI message ids used for IPA. Receiving + * end issues a response for requests; indications require no response. + */ +#define IPA_QMI_INDICATION_REGISTER 0x20 /* modem -> AP request */ +#define IPA_QMI_INIT_DRIVER 0x21 /* AP -> modem request */ +#define IPA_QMI_INIT_COMPLETE_IND 0x22 /* AP -> modem indication */ +#define IPA_QMI_DRIVER_INIT_COMPLETE 0x35 /* modem -> AP request */ + +/* The maximum size required for message types. These sizes include + * the message data, along with type (1 byte) and length (2 byte) + * information for each field. The qmi_send_*() interfaces require + * the message size to be provided. + */ +#define IPA_QMI_INDICATION_REGISTER_REQ_SZ 8 /* -> server handle */ +#define IPA_QMI_INDICATION_REGISTER_RSP_SZ 7 /* <- server handle */ +#define IPA_QMI_INIT_DRIVER_REQ_SZ 134 /* client handle -> */ +#define IPA_QMI_INIT_DRIVER_RSP_SZ 25 /* client handle <- */ +#define IPA_QMI_INIT_COMPLETE_IND_SZ 7 /* server handle -> */ +#define IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ 4 /* -> server handle */ +#define IPA_QMI_DRIVER_INIT_COMPLETE_RSP_SZ 7 /* <- server handle */ + +/* Maximum size of messages we expect the AP to receive (max of above) */ +#define IPA_QMI_SERVER_MAX_RCV_SZ 8 +#define IPA_QMI_CLIENT_MAX_RCV_SZ 25 + +/* Request message for the IPA_QMI_INDICATION_REGISTER request */ +struct ipa_indication_register_req { + u8 master_driver_init_complete_valid; + u8 master_driver_init_complete; + u8 data_usage_quota_reached_valid; + u8 data_usage_quota_reached; +}; + +/* The response to a IPA_QMI_INDICATION_REGISTER request consists only of + * a standard QMI response. + */ +struct ipa_indication_register_rsp { + struct qmi_response_type_v01 rsp; +}; + +/* Request message for the IPA_QMI_DRIVER_INIT_COMPLETE request */ +struct ipa_driver_init_complete_req { + u8 status; +}; + +/* The response to a IPA_QMI_DRIVER_INIT_COMPLETE request consists only + * of a standard QMI response. + */ +struct ipa_driver_init_complete_rsp { + struct qmi_response_type_v01 rsp; +}; + +/* The message for the IPA_QMI_INIT_COMPLETE_IND indication consists + * only of a standard QMI response. + */ +struct ipa_init_complete_ind { + struct qmi_response_type_v01 status; +}; + +/* The AP tells the modem its platform type. We assume Android. */ +enum ipa_platform_type { + IPA_QMI_PLATFORM_TYPE_INVALID = 0, /* Invalid */ + IPA_QMI_PLATFORM_TYPE_TN = 1, /* Data card */ + IPA_QMI_PLATFORM_TYPE_LE = 2, /* Data router */ + IPA_QMI_PLATFORM_TYPE_MSM_ANDROID = 3, /* Android MSM */ + IPA_QMI_PLATFORM_TYPE_MSM_WINDOWS = 4, /* Windows MSM */ + IPA_QMI_PLATFORM_TYPE_MSM_QNX_V01 = 5, /* QNX MSM */ +}; + +/* This defines the start and end offset of a range of memory. Both + * fields are offsets relative to the start of IPA shared memory. + * The end value is the last addressable byte *within* the range. 
+ */ +struct ipa_mem_bounds { + u32 start; + u32 end; +}; + +/* This defines the location and size of an array. The start value + * is an offset relative to the start of IPA shared memory. The + * size of the array is implied by the number of entries (the entry + * size is assumed to be known). + */ +struct ipa_mem_array { + u32 start; + u32 count; +}; + +/* This defines the location and size of a range of memory. The + * start is an offset relative to the start of IPA shared memory. + * This differs from the ipa_mem_bounds structure in that the size + * (in bytes) of the memory region is specified rather than the + * offset of its last byte. + */ +struct ipa_mem_range { + u32 start; + u32 size; +}; + +/* The message for the IPA_QMI_INIT_DRIVER request contains information + * from the AP that affects modem initialization. + */ +struct ipa_init_modem_driver_req { + u8 platform_type_valid; + u32 platform_type; /* enum ipa_platform_type */ + + /* Modem header table information. This defines the IPA shared + * memory in which the modem may insert header table entries. + */ + u8 hdr_tbl_info_valid; + struct ipa_mem_bounds hdr_tbl_info; + + /* Routing table information. These define the location and size of + * non-hashable IPv4 and IPv6 filter tables. The start values are + * offsets relative to the start of IPA shared memory. + */ + u8 v4_route_tbl_info_valid; + struct ipa_mem_array v4_route_tbl_info; + u8 v6_route_tbl_info_valid; + struct ipa_mem_array v6_route_tbl_info; + + /* Filter table information. These define the location and size of + * non-hashable IPv4 and IPv6 filter tables. The start values are + * offsets relative to the start of IPA shared memory. + */ + u8 v4_filter_tbl_start_valid; + u32 v4_filter_tbl_start; + u8 v6_filter_tbl_start_valid; + u32 v6_filter_tbl_start; + + /* Modem memory information. This defines the location and + * size of memory available for the modem to use. + */ + u8 modem_mem_info_valid; + struct ipa_mem_range modem_mem_info; + + /* This defines the destination endpoint on the AP to which + * the modem driver can send control commands. IPA supports + * 20 endpoints, so this must be 19 or less. + */ + u8 ctrl_comm_dest_end_pt_valid; + u32 ctrl_comm_dest_end_pt; + + /* This defines whether the modem should load the microcontroller + * or not. It is unnecessary to reload it if the modem is being + * restarted. + * + * NOTE: this field is named "is_ssr_bootup" elsewhere. + */ + u8 skip_uc_load_valid; + u8 skip_uc_load; + + /* Processing context memory information. This defines the memory in + * which the modem may insert header processing context table entries. + */ + u8 hdr_proc_ctx_tbl_info_valid; + struct ipa_mem_bounds hdr_proc_ctx_tbl_info; + + /* Compression command memory information. This defines the memory + * in which the modem may insert compression/decompression commands. + */ + u8 zip_tbl_info_valid; + struct ipa_mem_bounds zip_tbl_info; + + /* Routing table information. These define the location and size + * of hashable IPv4 and IPv6 filter tables. The start values are + * offsets relative to the start of IPA shared memory. + */ + u8 v4_hash_route_tbl_info_valid; + struct ipa_mem_array v4_hash_route_tbl_info; + u8 v6_hash_route_tbl_info_valid; + struct ipa_mem_array v6_hash_route_tbl_info; + + /* Filter table information. These define the location and size + * of hashable IPv4 and IPv6 filter tables. The start values are + * offsets relative to the start of IPA shared memory. 
+ */ + u8 v4_hash_filter_tbl_start_valid; + u32 v4_hash_filter_tbl_start; + u8 v6_hash_filter_tbl_start_valid; + u32 v6_hash_filter_tbl_start; +}; + +/* The response to a IPA_QMI_INIT_DRIVER request begins with a standard + * QMI response, but contains other information as well. Currently we + * simply wait for the the INIT_DRIVER transaction to complete and + * ignore any other data that might be returned. + */ +struct ipa_init_modem_driver_rsp { + struct qmi_response_type_v01 rsp; + + /* This defines the destination endpoint on the modem to which + * the AP driver can send control commands. IPA supports + * 20 endpoints, so this must be 19 or less. + */ + u8 ctrl_comm_dest_end_pt_valid; + u32 ctrl_comm_dest_end_pt; + + /* This defines the default endpoint. The AP driver is not + * required to configure the hardware with this value. IPA + * supports 20 endpoints, so this must be 19 or less. + */ + u8 default_end_pt_valid; + u32 default_end_pt; + + /* This defines whether a second handshake is required to complete + * initialization. + */ + u8 modem_driver_init_pending_valid; + u8 modem_driver_init_pending; +}; + +/* Message structure definitions defined in "ipa_qmi_msg.c" */ +extern struct qmi_elem_info ipa_indication_register_req_ei[]; +extern struct qmi_elem_info ipa_indication_register_rsp_ei[]; +extern struct qmi_elem_info ipa_driver_init_complete_req_ei[]; +extern struct qmi_elem_info ipa_driver_init_complete_rsp_ei[]; +extern struct qmi_elem_info ipa_init_complete_ind_ei[]; +extern struct qmi_elem_info ipa_mem_bounds_ei[]; +extern struct qmi_elem_info ipa_mem_array_ei[]; +extern struct qmi_elem_info ipa_mem_range_ei[]; +extern struct qmi_elem_info ipa_init_modem_driver_req_ei[]; +extern struct qmi_elem_info ipa_init_modem_driver_rsp_ei[]; + +#endif /* !_IPA_QMI_MSG_H_ */ diff --git a/drivers/net/ipa/ipa_smp2p.c b/drivers/net/ipa/ipa_smp2p.c new file mode 100644 index 000000000000..c59f358b44b4 --- /dev/null +++ b/drivers/net/ipa/ipa_smp2p.c @@ -0,0 +1,304 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019 Linaro Ltd. + */ + +#include +#include +#include +#include +#include +#include + +#include "ipa_smp2p.h" +#include "ipa.h" +#include "ipa_uc.h" +#include "ipa_clock.h" + +/** + * DOC: IPA SMP2P communication with the modem + * + * SMP2P is a primitive communication mechanism available between the AP and + * the modem. The IPA driver uses this for two purposes: to enable the modem + * to state that the GSI hardware is ready to use; and to communicate the + * state of the IPA clock in the event of a crash. + * + * GSI needs to have early initialization completed before it can be used. + * This initialization is done either by Trust Zone or by the modem. In the + * latter case, the modem uses an SMP2P interrupt to tell the AP IPA driver + * when the GSI is ready to use. + * + * The modem is also able to inquire about the current state of the IPA + * clock by trigging another SMP2P interrupt to the AP. We communicate + * whether the clock is enabled using two SMP2P state bits--one to + * indicate the clock state (on or off), and a second to indicate the + * clock state bit is valid. The modem will poll the valid bit until it + * is set, and at that time records whether the AP has the IPA clock enabled. + * + * Finally, if the AP kernel panics, we update the SMP2P state bits even if + * we never receive an interrupt from the modem requesting this. 
+ */ + +/** + * struct ipa_smp2p - IPA SMP2P information + * @ipa: IPA pointer + * @valid_state: SMEM state indicating enabled state is valid + * @enabled_state: SMEM state to indicate clock is enabled + * @valid_bit: Valid bit in 32-bit SMEM state mask + * @enabled_bit: Enabled bit in 32-bit SMEM state mask + * @enabled_bit: Enabled bit in 32-bit SMEM state mask + * @clock_query_irq: IPA interrupt triggered by modem for clock query + * @setup_ready_irq: IPA interrupt triggered by modem to signal GSI ready + * @clock_on: Whether IPA clock is on + * @notified: Whether modem has been notified of clock state + * @disabled: Whether setup ready interrupt handling is disabled + * @mutex mutex: Motex protecting ready interrupt/shutdown interlock + * @panic_notifier: Panic notifier structure +*/ +struct ipa_smp2p { + struct ipa *ipa; + struct qcom_smem_state *valid_state; + struct qcom_smem_state *enabled_state; + u32 valid_bit; + u32 enabled_bit; + u32 clock_query_irq; + u32 setup_ready_irq; + u32 clock_on; + u32 notified; + u32 disabled; + struct mutex mutex; + struct notifier_block panic_notifier; +}; + +/** + * ipa_smp2p_notify() - use SMP2P to tell modem about IPA clock state + * @smp2p: SMP2P information + * + * This is called either when the modem has requested it (by triggering + * the modem clock query IPA interrupt) or whenever the AP is shutting down + * (via a panic notifier). It sets the two SMP2P state bits--one saying + * whether the IPA clock is running, and the other indicating the first bit + * is valid. + */ +static void ipa_smp2p_notify(struct ipa_smp2p *smp2p) +{ + u32 value; + u32 mask; + + if (smp2p->notified) + return; + + smp2p->clock_on = ipa_clock_get_additional(smp2p->ipa->clock) ? 1 : 0; + + /* Signal whether the clock is enabled */ + mask = BIT(smp2p->enabled_bit); + value = smp2p->clock_on ? 
mask : 0; + qcom_smem_state_update_bits(smp2p->enabled_state, mask, value); + + /* Now indicate that the enabled flag is valid */ + mask = BIT(smp2p->valid_bit); + value = mask; + qcom_smem_state_update_bits(smp2p->valid_state, mask, value); + + smp2p->notified = 1; +} + +/* Threaded IRQ handler for modem "ipa-clock-query" SMP2P interrupt */ +static irqreturn_t ipa_smp2p_modem_clk_query_isr(int irq, void *dev_id) +{ + struct ipa_smp2p *smp2p = dev_id; + + ipa_smp2p_notify(smp2p); + + return IRQ_HANDLED; +} + +static int ipa_smp2p_panic_notifier(struct notifier_block *nb, + unsigned long action, void *data) +{ + struct ipa_smp2p *smp2p; + + smp2p = container_of(nb, struct ipa_smp2p, panic_notifier); + + ipa_smp2p_notify(smp2p); + + if (smp2p->clock_on) + ipa_uc_panic_notifier(smp2p->ipa); + + return NOTIFY_DONE; +} + +static int ipa_smp2p_panic_notifier_register(struct ipa_smp2p *smp2p) +{ + /* IPA panic handler needs to run before modem shuts down */ + smp2p->panic_notifier.notifier_call = ipa_smp2p_panic_notifier; + smp2p->panic_notifier.priority = INT_MAX; /* Do it early */ + + return atomic_notifier_chain_register(&panic_notifier_list, + &smp2p->panic_notifier); +} + +static void ipa_smp2p_panic_notifier_unregister(struct ipa_smp2p *smp2p) +{ + atomic_notifier_chain_unregister(&panic_notifier_list, + &smp2p->panic_notifier); +} + +/* Threaded IRQ handler for modem "ipa-setup-ready" SMP2P interrupt */ +static irqreturn_t ipa_smp2p_modem_setup_ready_isr(int irq, void *dev_id) +{ + struct ipa_smp2p *smp2p = dev_id; + int ret; + + mutex_lock(&smp2p->mutex); + if (!smp2p->disabled) { + ret = ipa_setup(smp2p->ipa); + WARN(ret, "error %d from IPA setup\n", ret); + } + mutex_unlock(&smp2p->mutex); + + return IRQ_HANDLED; +} + +/* Initialize SMP2P interrupts */ +static int ipa_smp2p_irq_init(struct ipa_smp2p *smp2p, const char *name, + irq_handler_t handler) +{ + unsigned int irq; + int ret; + + ret = platform_get_irq_byname(smp2p->ipa->pdev, name); + if (ret < 0) + return ret; + if (!ret) + return -EINVAL; /* IRQ mapping failure */ + irq = ret; + + ret = request_threaded_irq(irq, NULL, handler, 0, name, smp2p); + if (ret) + return ret; + + return irq; +} + +static void ipa_smp2p_irq_exit(struct ipa_smp2p *smp2p, u32 irq) +{ + free_irq(irq, smp2p); +} + +/* Initialize the IPA SMP2P subsystem */ +struct ipa_smp2p *ipa_smp2p_init(struct ipa *ipa, bool modem_init) +{ + struct qcom_smem_state *enabled_state; + struct device *dev = &ipa->pdev->dev; + struct qcom_smem_state *valid_state; + struct ipa_smp2p *smp2p; + u32 enabled_bit; + u32 valid_bit; + int ret; + + valid_state = qcom_smem_state_get(dev, "ipa-clock-enabled-valid", + &valid_bit); + if (IS_ERR(valid_state)) + return ERR_CAST(valid_state); + if (valid_bit >= BITS_PER_LONG) + return ERR_PTR(-EINVAL); + + enabled_state = qcom_smem_state_get(dev, "ipa-clock-enabled", + &enabled_bit); + if (IS_ERR(enabled_state)) + return ERR_CAST(enabled_state); + if (enabled_bit >= BITS_PER_LONG) + return ERR_PTR(-EINVAL); + + smp2p = kzalloc(sizeof(*smp2p), GFP_KERNEL); + if (!smp2p) + return ERR_PTR(-ENOMEM); + + smp2p->ipa = ipa; + + /* These fields are needed by the clock query interrupt + * handler, so initialize them now. 
+ */ + mutex_init(&smp2p->mutex); + smp2p->valid_state = valid_state; + smp2p->valid_bit = valid_bit; + smp2p->enabled_state = enabled_state; + smp2p->enabled_bit = enabled_bit; + + ret = ipa_smp2p_irq_init(smp2p, "ipa-clock-query", + ipa_smp2p_modem_clk_query_isr); + if (ret < 0) + goto err_mutex_destroy; + smp2p->clock_query_irq = ret; + + ret = ipa_smp2p_panic_notifier_register(smp2p); + if (ret) + goto err_irq_exit; + + if (modem_init) { + /* Result will be non-zero (negative for error) */ + ret = ipa_smp2p_irq_init(smp2p, "ipa-setup-ready", + ipa_smp2p_modem_setup_ready_isr); + if (ret < 0) + goto err_notifier_unregister; + smp2p->setup_ready_irq = ret; + } + + return smp2p; + +err_notifier_unregister: + ipa_smp2p_panic_notifier_unregister(smp2p); +err_irq_exit: + ipa_smp2p_irq_exit(smp2p, smp2p->clock_query_irq); +err_mutex_destroy: + mutex_destroy(&smp2p->mutex); + kfree(smp2p); + + return ERR_PTR(ret); +} + +void ipa_smp2p_exit(struct ipa_smp2p *smp2p) +{ + if (smp2p->setup_ready_irq) + ipa_smp2p_irq_exit(smp2p, smp2p->setup_ready_irq); + ipa_smp2p_panic_notifier_unregister(smp2p); + ipa_smp2p_irq_exit(smp2p, smp2p->clock_query_irq); + mutex_destroy(&smp2p->mutex); + kfree(smp2p); +} + +void ipa_smp2p_disable(struct ipa_smp2p *smp2p) +{ + if (smp2p->setup_ready_irq) { + mutex_lock(&smp2p->mutex); + smp2p->disabled = 1; + mutex_unlock(&smp2p->mutex); + } +} + +/* Reset state tracking whether we have notified the modem */ +void ipa_smp2p_notify_reset(struct ipa_smp2p *smp2p) +{ + u32 mask; + + if (!smp2p->notified) + return; + + /* Drop the clock reference if it was taken above */ + if (smp2p->clock_on) { + ipa_clock_put(smp2p->ipa->clock); + smp2p->clock_on = 0; + } + + /* Reset the clock enabled valid flag */ + mask = BIT(smp2p->valid_bit); + qcom_smem_state_update_bits(smp2p->valid_state, mask, 0); + + /* Mark the clock disabled for good measure... */ + mask = BIT(smp2p->enabled_bit); + qcom_smem_state_update_bits(smp2p->enabled_state, mask, 0); + + smp2p->notified = 0; +} diff --git a/drivers/net/ipa/ipa_smp2p.h b/drivers/net/ipa/ipa_smp2p.h new file mode 100644 index 000000000000..9c7e4339a7b0 --- /dev/null +++ b/drivers/net/ipa/ipa_smp2p.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved. + * Copyright (C) 2019 Linaro Ltd. + */ +#ifndef _IPA_SMP2P_H_ +#define _IPA_SMP2P_H_ + +#include + +struct ipa; + +/** + * ipa_smp2p_init() - Initialize the IPA SMP2P subsystem + * @ipa: IPA pointer + * @modem_init: Whether the modem is responsible for GSI initialization + * + * @Return: Pointer to IPA SMP2P info, or a pointer-coded error + */ +struct ipa_smp2p *ipa_smp2p_init(struct ipa *ipa, bool modem_init); + +/** + * ipa_smp2p_exit() - Inverse of ipa_smp2p_init() + * @smp2p: SMP2P information pointer + */ +void ipa_smp2p_exit(struct ipa_smp2p *smp2p); + +/** + * ipa_smp2p_disable() - Prevent "ipa-setup-ready" interrupt handling + * @smp2p: SMP2P information pointer + * + * Prevent handling of the "setup ready" interrupt from the modem. + * This is used before initiating shutdown of the driver. + */ +void ipa_smp2p_disable(struct ipa_smp2p *smp2p); + +/** + * ipa_smp2p_notify_reset() - Reset modem notification state + * @smp2p: SMP2P information pointer + * + * If the modem crashes it queries the IPA clock state. In cleaning + * up after such a crash this is used to reset some state maintained + * for managing this notification. 
+ */ +void ipa_smp2p_notify_reset(struct ipa_smp2p *smp2p); + +#endif /* _IPA_SMP2P_H_ */ From patchwork Sun May 12 01:25:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Elder X-Patchwork-Id: 163980 Delivered-To: patch@linaro.org Received: by 2002:a05:6e02:142:0:0:0:0 with SMTP id j2csp4129050ilr; Sat, 11 May 2019 18:26:17 -0700 (PDT) X-Google-Smtp-Source: APXvYqy7frvPvtZ/PeemBLxL5lU55KRJCWQfAeslMaWjJg363zKWXyIMIzSfwCY4q2rGeYjDyt7W X-Received: by 2002:a65:4243:: with SMTP id d3mr12550446pgq.57.1557624377680; Sat, 11 May 2019 18:26:17 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1557624377; cv=none; d=google.com; s=arc-20160816; b=nkSFhkeZq+DmO6mTe8X2kxra97itk9cks8UK8AeIB37lLPY/eODQTuh+Mx9KwZ0oLY 0zHUX8ehMwAoSvimqH/iNrB0vHOf9o+Isnk4nwYNRL9bFG9uTStqxBoIZsWh64DXCC1M SpcB/LGKY8G1HdePNpU/2mD23YYvZUYbmIHDnVEp0Zda0BLIOblVeUUSZZZmb58PcGmM r0k1GsyYkezVho5/8+LT5fGSUpBRmzpclE1ptV5OlSFO89zI46C04N8m9I00oXFCqSSZ 0xoIYoROlSA4PnxuH2f+NrxaGDFu36g/z4ut+JuuTXXW3xHO2vBc9KZ2sQBV3Ml53jTo L8tw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=9Quaedvq3gCPPNDemFqjgPMgNa8XiOI9kX1OdXaei5M=; b=Rrpkq5Tg/WYlGvoanEfYNehKxGXQX7GV+B1MZjYVfPV74JtslEX5n3mswALEEyX4Po dCKwFCL37VRpmbAESLUL87MZlLNndcmnH68I9nV/bWXaKDREhMWtkVGMRiE5XDl8qLFq 304WeFiGLUl3xeRdUzY1lpVZr4ZTwJcN4j/3C8iV3HB11vFvVWFsHcgbbDPdovEYEC/d MQIpNDUIjMN3ree8XNKCC9d8QEzcGwOzYSX6oiPumpRvrr23FypHGChnbLVOdQnTgwrU 7Ym1SNlQXKFvTt6Kgh4ZMA/+2QInvmqYN+MvRZrYovaMP1DXupoYk5hHwFwMpm0GT35C DrJA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=GCKgYtIJ; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
    ilias.apalodimas@linaro.org, sridhar.samudrala@intel.com,
    jakub.kicinski@netronome.com, daniel@iogearbox.net, mcroce@redhat.com,
    j.neuschaefer@gmx.net
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
    benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
    linux-kernel@vger.kernel.org, Alex Elder
Subject: [PATCH 15/18] soc: qcom: ipa: support build of IPA code
Date: Sat, 11 May 2019 20:25:05 -0500
Message-Id: <20190512012508.10608-16-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

Add build and Kconfig support for the Qualcomm IPA driver.

Signed-off-by: Alex Elder
---
 drivers/net/Kconfig      |  2 ++
 drivers/net/Makefile     |  1 +
 drivers/net/ipa/Kconfig  | 16 ++++++++++++++++
 drivers/net/ipa/Makefile |  7 +++++++
 4 files changed, 26 insertions(+)
 create mode 100644 drivers/net/ipa/Kconfig
 create mode 100644 drivers/net/ipa/Makefile

diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 7a96d168efc4..0603fde43d54 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -387,6 +387,8 @@ source "drivers/net/fddi/Kconfig"
 
 source "drivers/net/hippi/Kconfig"
 
+source "drivers/net/ipa/Kconfig"
+
 config NET_SB1000
         tristate "General Instruments Surfboard 1000"
         depends on PNP
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 21cde7e78621..c01f48badba6 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -45,6 +45,7 @@ obj-$(CONFIG_ETHERNET) += ethernet/
 obj-$(CONFIG_FDDI) += fddi/
 obj-$(CONFIG_HIPPI) += hippi/
 obj-$(CONFIG_HAMRADIO) += hamradio/
+obj-$(CONFIG_IPA) += ipa/
 obj-$(CONFIG_PLIP) += plip/
 obj-$(CONFIG_PPP) += ppp/
 obj-$(CONFIG_PPP_ASYNC) += ppp/
diff --git a/drivers/net/ipa/Kconfig b/drivers/net/ipa/Kconfig
new file mode 100644
index 000000000000..b1e3f7405992
--- /dev/null
+++ b/drivers/net/ipa/Kconfig
@@ -0,0 +1,16 @@
+config IPA
+        tristate "Qualcomm IPA support"
+        depends on NET
+        select QCOM_QMI_HELPERS
+        select QCOM_MDT_LOADER
+        default n
+        help
+          Choose Y here to include support for the Qualcomm IP Accelerator
+          (IPA), a hardware block present in some Qualcomm SoCs.  The IPA
+          is a programmable protocol processor that is capable of generic
+          hardware handling of IP packets, including routing, filtering,
+          and NAT.  Currently the IPA driver supports only basic transport
+          of network traffic between the AP and modem, on the Qualcomm
+          SDM845 SoC.
+
+          If unsure, say N.
diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
new file mode 100644
index 000000000000..a43039c09a25
--- /dev/null
+++ b/drivers/net/ipa/Makefile
@@ -0,0 +1,7 @@
+obj-$(CONFIG_IPA)	+= ipa.o
+
+ipa-y	:=	ipa_main.o ipa_clock.o ipa_mem.o \
+		ipa_interrupt.o gsi.o gsi_trans.o \
+		ipa_gsi.o ipa_smp2p.o ipa_uc.o \
+		ipa_endpoint.o ipa_cmd.o ipa_netdev.o \
+		ipa_qmi.o ipa_qmi_msg.o ipa_data-sdm845.o
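Because IPA is a tristate option, it can also be enabled as a module without editing a defconfig (patch 18 in this series sets it to y for arm64). A config fragment along the following lines should be sufficient; this is an inferred example rather than part of the series:

# "depends on NET" means general networking support must be enabled
CONFIG_NET=y
# Build the Qualcomm IPA driver as a module; QCOM_QMI_HELPERS and
# QCOM_MDT_LOADER are pulled in automatically by the "select" lines
CONFIG_IPA=m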
From patchwork Sun May 12 01:25:06 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163979
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
    ilias.apalodimas@linaro.org, mchehab+samsung@kernel.org,
    gregkh@linuxfoundation.org, nicolas.ferre@microchip.com,
    paulmck@linux.ibm.com
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
    benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
    linux-kernel@vger.kernel.org, Alex Elder
Subject: [PATCH 16/18] MAINTAINERS: add entry for the Qualcomm IPA driver
Date: Sat, 11 May 2019 20:25:06 -0500
Message-Id: <20190512012508.10608-17-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

Add an entry in the MAINTAINERS file for the Qualcomm IPA driver.

Signed-off-by: Alex Elder
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 2c2fce72e694..2348a90d4dff 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12694,6 +12694,12 @@ L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
 F:	sound/soc/qcom/
 
+QCOM IPA DRIVER
+M:	Alex Elder
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	drivers/net/ipa/
+
 QEMU MACHINE EMULATOR AND VIRTUALIZER SUPPORT
 M:	Gabriel Somlo
 M:	"Michael S. Tsirkin"

From patchwork Sun May 12 01:25:07 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163977
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
    ilias.apalodimas@linaro.org, robh+dt@kernel.org, mark.rutland@arm.com,
    andy.gross@linaro.org, david.brown@linaro.org
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
    benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
    linux-kernel@vger.kernel.org, Alex Elder
Subject: [PATCH 17/18] arm64: dts: sdm845: add IPA information
Date: Sat, 11 May 2019 20:25:07 -0500
Message-Id: <20190512012508.10608-18-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

Add IPA-related nodes and definitions to "sdm845.dtsi".

Signed-off-by: Alex Elder
---
 arch/arm64/boot/dts/qcom/sdm845.dtsi | 51 ++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
index 5308f1671824..b8b2bb753710 100644
--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include <dt-bindings/interconnect/qcom,sdm845.h>
 
 / {
 	interrupt-parent = <&intc>;
@@ -342,6 +343,17 @@
 			interrupt-controller;
 			#interrupt-cells = <2>;
 		};
+
+		ipa_smp2p_out: ipa-ap-to-modem {
+			qcom,entry-name = "ipa";
+			#qcom,smem-state-cells = <1>;
+		};
+
+		ipa_smp2p_in: ipa-modem-to-ap {
+			qcom,entry-name = "ipa";
+			interrupt-controller;
+			#interrupt-cells = <2>;
+		};
 	};
 
 	smp2p-slpi {
@@ -1090,6 +1102,45 @@
 		};
 	};
 
+	ipa@1e40000 {
+		compatible = "qcom,sdm845-ipa";
+
+		modem-init;
+
+		reg = <0 0x1e40000 0 0x7000>,
+		      <0 0x1e47000 0 0x2000>,
+		      <0 0x1e04000 0 0x2c000>;
+		reg-names = "ipa-reg",
+			    "ipa-shared",
+			    "gsi";
+
+		interrupts-extended =
+			<&intc 0 311 IRQ_TYPE_EDGE_RISING>,
+			<&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
+			<&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+			<&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
+		interrupt-names = "ipa",
+				  "gsi",
+				  "ipa-clock-query",
+				  "ipa-setup-ready";
+
+		clocks = <&rpmhcc RPMH_IPA_CLK>;
+		clock-names = "core";
+
+		interconnects =
+			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
+			<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
+			<&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
+		interconnect-names = "memory",
+				     "imem",
+				     "config";
+
+		qcom,smem-states = <&ipa_smp2p_out 0>,
+				   <&ipa_smp2p_out 1>;
+		qcom,smem-state-names = "ipa-clock-enabled-valid",
+					"ipa-clock-enabled";
+	};
+
 	tcsr_mutex_regs: syscon@1f40000 {
 		compatible = "syscon";
 		reg = <0 0x01f40000 0 0x40000>;
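To show how the names in this node surface on the driver side, here is a brief sketch of the kind of lookups a probe function would perform. The function itself is hypothetical and its error handling simplified; the resource names come from the node above, and platform_get_irq_byname() and qcom_smem_state_get() are the standard kernel interfaces for these properties:

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/soc/qcom/smem_state.h>

/* Hypothetical probe-path fragment fetching two of the resources the
 * IPA node above describes
 */
static int ipa_get_dt_resources(struct platform_device *pdev)
{
        struct qcom_smem_state *enabled_state;
        unsigned int enabled_bit;
        int irq;

        /* "ipa-setup-ready" resolves to <&ipa_smp2p_in 1 ...> above */
        irq = platform_get_irq_byname(pdev, "ipa-setup-ready");
        if (irq < 0)
                return irq;

        /* "ipa-clock-enabled" resolves to <&ipa_smp2p_out 1> above;
         * enabled_bit receives the cell value for use with
         * qcom_smem_state_update_bits()
         */
        enabled_state = qcom_smem_state_get(&pdev->dev, "ipa-clock-enabled",
                                            &enabled_bit);
        if (IS_ERR(enabled_state))
                return PTR_ERR(enabled_state);

        return 0;
}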
From patchwork Sun May 12 01:25:08 2019
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 163978
From: Alex Elder
To: davem@davemloft.net, arnd@arndb.de, bjorn.andersson@linaro.org,
    ilias.apalodimas@linaro.org, catalin.marinas@arm.com,
    will.deacon@arm.com, andy.gross@linaro.org, olof@lixom.net,
    maxime.ripard@bootlin.com, horms+renesas@verge.net.au,
    jagan@amarulasolutions.com, stefan.wahren@i2se.com,
    marc.w.gonzalez@free.fr, enric.balletbo@collabora.com
Cc: syadagir@codeaurora.org, mjavid@codeaurora.org, evgreen@chromium.org,
    benchan@google.com, ejcaruso@google.com, abhishek.esse@gmail.com,
    linux-kernel@vger.kernel.org, Alex Elder
Subject: [PATCH 18/18] arm64: defconfig: enable build of IPA code
Date: Sat, 11 May 2019 20:25:08 -0500
Message-Id: <20190512012508.10608-19-elder@linaro.org>
In-Reply-To: <20190512012508.10608-1-elder@linaro.org>
References: <20190512012508.10608-1-elder@linaro.org>

Add CONFIG_IPA to the 64-bit Arm defconfig.

Signed-off-by: Alex Elder
---
 arch/arm64/configs/defconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index 2d9c39033c1a..4f4d803e563d 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -268,6 +268,7 @@ CONFIG_SMSC911X=y
 CONFIG_SNI_AVE=y
 CONFIG_SNI_NETSEC=y
 CONFIG_STMMAC_ETH=m
+CONFIG_IPA=y
 CONFIG_MDIO_BUS_MUX_MMIOREG=y
 CONFIG_AT803X_PHY=m
 CONFIG_MARVELL_PHY=m