From patchwork Wed Jan 8 13:47:09 2025
X-Patchwork-Submitter: Jie Luo
X-Patchwork-Id: 855757
From: Luo Jie
Date: Wed, 8 Jan 2025 21:47:09 +0800
Subject: [PATCH net-next v2 02/14] docs: networking: Add PPE driver
 documentation for Qualcomm IPQ9574 SoC
X-Mailing-List: linux-arm-msm@vger.kernel.org
Message-ID: <20250108-qcom_ipq_ppe-v2-2-7394dbda7199@quicinc.com>
References: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com>
In-Reply-To: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Lei Wei,
 Suruchi Agarwal, Pavithra R, Simon Horman, Jonathan Corbet, Kees Cook,
 "Gustavo A. R. Silva", Philipp Zabel
CC: Luo Jie

From: Lei Wei

Add description and high-level diagram for PPE, driver overview and
module enable/debug information.
Signed-off-by: Lei Wei
Signed-off-by: Luo Jie
---
 .../networking/device_drivers/ethernet/index.rst   |   1 +
 .../device_drivers/ethernet/qualcomm/ppe/ppe.rst   | 197 +++++++++++++++++++++
 2 files changed, 198 insertions(+)

diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst
index 6fc1961492b7..978d87edaeb5 100644
--- a/Documentation/networking/device_drivers/ethernet/index.rst
+++ b/Documentation/networking/device_drivers/ethernet/index.rst
@@ -49,6 +49,7 @@ Contents:
    neterion/s2io
    netronome/nfp
    pensando/ionic
+   qualcomm/ppe/ppe
    smsc/smc9
    stmicro/stmmac
    ti/cpsw
diff --git a/Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst b/Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst
new file mode 100644
index 000000000000..955fc31d740c
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst
@@ -0,0 +1,197 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============================================
+PPE Ethernet Driver for Qualcomm IPQ SoC Family
+===============================================
+
+Copyright (c) 2025 Qualcomm Innovation Center, Inc. All rights reserved.
+
+Author: Lei Wei
+
+
+Contents
+========
+
+- `PPE Overview`_
+- `PPE Driver Overview`_
+- `PPE Driver Supported SoCs`_
+- `Enabling the Driver`_
+- `Debugging`_
+
+
+PPE Overview
+============
+
+The IPQ (Qualcomm Internet Processor) SoC (System-on-Chip) series is Qualcomm's
+family of networking SoCs for Wi-Fi access points. The PPE (Packet Process
+Engine) is the Ethernet packet processing engine in the IPQ SoC.
+
+Below is a simplified hardware diagram of the IPQ9574 SoC, which includes the
+PPE engine and other blocks which are in the SoC but outside the PPE engine.
These blocks work together +to enable the Ethernet for the IPQ SoC:: + + +------+ +------+ +------+ +------+ +------+ +------+ start +-------+ + |netdev| |netdev| |netdev| |netdev| |netdev| |netdev|<------|PHYLINK| + +------+ +------+ +------+ +------+ +------+ +------+ stop +-+-+-+-+ + | | | ^ + +-------+ +-------------------------+--------+----------------------+ | | | + | GCC | | | EDMA | | | | | + +---+---+ | PPE +---+----+ | | | | + | clk | | | | | | + +------>| +-----------------------+------+-----+---------------+ | | | | + | | Switch Core |Port0 | |Port7(EIP FIFO)| | | | | + | | +---+--+ +------+--------+ | | | | + | | | | | | | | | + +-------+ | | +------+---------------+----+ | | | | | + |CMN PLL| | | +---+ +---+ +----+ | +--------+ | | | | | | + +---+---+ | | |BM | |QM | |SCH | | | L2/L3 | ....... | | | | | | + | | | | +---+ +---+ +----+ | +--------+ | | | | | | + | | | | +------+--------------------+ | | | | | + | | | | | | | | | | + | v | | +-----+-+-----+-+-----+-+-+---+--+-----+-+-----+ | | | | | + | +------+ | | |Port1| |Port2| |Port3| |Port4| |Port5| |Port6| | | | | | + | |NSSCC | | | +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ | | mac| | | + | +-+-+--+ | | |MAC0 | |MAC1 | |MAC2 | |MAC3 | |MAC4 | |MAC5 | | |<---+ | | + | ^ | |clk | | +-----+-+-----+-+-----+-+-----+--+-----+-+-----+ | | ops | | + | | | +---->| +----|------|-------|-------|---------|--------|-----+ | | | + | | | +---------------------------------------------------------+ | | + | | | | | | | | | | | + | | | MII clk | QSGMII USXGMII USXGMII | | + | | +------------->| | | | | | | | + | | +-------------------------+ +---------+ +---------+ | | + | |125/312.5M clk| (PCS0) | | (PCS1) | | (PCS2) | pcs ops | | + | +--------------+ UNIPHY0 | | UNIPHY1 | | UNIPHY2 |<--------+ | + +--------------->| | | | | | | + | 31.25M ref clk +-------------------------+ +---------+ +---------+ | + | | | | | | | | + | +-----------------------------------------------------+ | + |25/50M ref clk| 
+-------------------------+ +------+ +------+ | link |
+ +------------->| | QUAD PHY | | PHY4 | | PHY5 | |---------+
+ | +-------------------------+ +------+ +------+ | change
+ | |
+ | MDIO bus |
+ +-----------------------------------------------------+
+
+The CMN (Common) PLL, NSSCC (Networking Sub System Clock Controller) and GCC
+(Global Clock Controller) blocks are in the SoC and act as clock providers.
+
+The UNIPHY block is in the SoC and provides the PCS (Physical Coding Sublayer)
+and XPCS (10-Gigabit Physical Coding Sublayer) functions to support different
+interface modes between the PPE MAC and the external PHY.
+
+This documentation focuses on the PPE engine and the PPE driver.
+
+The Ethernet functionality in the PPE (Packet Process Engine) comprises three
+components: the switch core, the port wrapper and the Ethernet DMA.
+
+The switch core in the IPQ9574 PPE has a maximum of six front panel ports and
+two FIFO interfaces. One of the two FIFO interfaces is used for communication
+between the Ethernet ports and the host CPU using the Ethernet DMA. The other
+is used for communicating with the EIP engine, which is used for IPsec
+offload. On the IPQ9574, the PPE includes six GMACs/XGMACs that can be
+connected with external Ethernet PHYs. The switch core also includes the BM
+(Buffer Management), QM (Queue Management) and SCH (Scheduler) modules which
+support the packet processing.
+
+The port wrapper provides connections from the six GMACs/XGMACs to the UNIPHY
+(PCS), supporting various modes such as SGMII/QSGMII/PSGMII/USXGMII/10G-BASER.
+There are three UNIPHY (PCS) instances supported on the IPQ9574.
+
+The Ethernet DMA is used to transmit and receive packets between the Ethernet
+subsystem and the ARM host CPU.
+
+The following lists the main blocks in the PPE engine which are driven by this
+PPE driver:
+
+- BM
+  The BM is the hardware buffer manager for the PPE switch ports.
+- QM
+  The Queue Manager manages the egress hardware queues of the PPE switch ports.
+- SCH
+  The scheduler manages the hardware traffic scheduling for the PPE switch
+  ports.
+- L2
+  The L2 block performs the packet bridging in the switch core. The bridge
+  domain is represented by the VSI (Virtual Switch Instance) domain in the
+  PPE. FDB learning can be enabled based on the VSI domain, and bridge
+  forwarding occurs within the VSI domain.
+- MAC
+  The PPE in the IPQ9574 supports up to six MACs (MAC0 to MAC5), which
+  correspond to the six switch ports (port1 to port6). The MAC block is
+  connected with the external PHY through the UNIPHY PCS block. Each MAC
+  block includes a GMAC and an XGMAC, and the switch port can select the
+  GMAC or the XGMAC through a MUX according to the external PHY's capability.
+- EDMA (Ethernet DMA)
+  The Ethernet DMA is used to transmit and receive Ethernet packets between
+  the PPE ports and the ARM cores.
+
+A packet received on a PPE MAC port can be forwarded to another PPE MAC port.
+It can also be forwarded to the internal switch port0, so that the packet can
+be delivered to the ARM cores using the Ethernet DMA (EDMA) engine. The
+Ethernet DMA driver delivers the packet to the corresponding 'netdevice'
+interface.
+
+The software instantiations of the PPE MAC (netdevice), the PCS and the
+external PHYs interact with the Linux PHYLINK framework to manage the
+connectivity between the PPE ports and the connected PHYs, and the port link
+states. This is also illustrated in the diagram above.
+
+
+PPE Driver Overview
+===================
+
+The PPE driver is the Ethernet driver for the Qualcomm IPQ SoC. It is a single
+platform driver which includes the PPE part and the Ethernet DMA part. The PPE
+part initializes and drives the various blocks in the PPE switch core, such as
+the BM/QM/L2 blocks and the PPE MACs. The EDMA part drives the Ethernet DMA
+for packet transfer between the PPE ports and the ARM cores, and enables the
+netdevice driver for the PPE ports.
+
+The PPE driver files in drivers/net/ethernet/qualcomm/ppe/ are listed below:
+
+- Makefile
+- ppe.c
+- ppe.h
+- ppe_config.c
+- ppe_config.h
+- ppe_debugfs.c
+- ppe_debugfs.h
+- ppe_regs.h
+
+The ppe.c file contains the main PPE platform driver and performs the
+initialization of PPE switch core blocks such as the QM, BM and L2. The
+configuration APIs for these hardware blocks are provided in the ppe_config.c
+file.
+
+The ppe.h file defines the PPE device data structure which is used by the PPE
+driver functions.
+
+The ppe_debugfs.c file enables the PPE statistics counters, such as the PPE
+port Rx and Tx counters, the CPU code counters and the queue counters.
+
+
+PPE Driver Supported SoCs
+=========================
+
+The PPE driver supports the following IPQ SoC:
+
+- IPQ9574
+
+
+Enabling the Driver
+===================
+
+The driver is located in the menu structure at:
+
+  -> Device Drivers
+    -> Network device support (NETDEVICES [=y])
+      -> Ethernet driver support
+        -> Qualcomm devices
+          -> Qualcomm Technologies, Inc. PPE Ethernet support
+
+If this driver is built as a module, the following commands can be used to
+install and remove it:
+
+- insmod qcom-ppe.ko
+- rmmod qcom-ppe.ko
+
+The PPE driver functionally depends on the CMN PLL and NSSCC clock controller
+drivers. Please make sure the dependent modules are installed before
+installing the PPE driver module.
+
+
+Debugging
+=========
+
+The PPE hardware counters are available in debugfs and can be checked with the
+command ``cat /sys/kernel/debug/ppe/packet_counters``.
From patchwork Wed Jan 8 13:47:11 2025
X-Patchwork-Submitter: Jie Luo
X-Patchwork-Id: 855756
From: Luo Jie
Date: Wed, 8 Jan 2025 21:47:11 +0800
Subject: [PATCH net-next v2 04/14] net: ethernet: qualcomm: Initialize PPE
 buffer management for IPQ9574
X-Mailing-List: linux-arm-msm@vger.kernel.org
Message-ID: <20250108-qcom_ipq_ppe-v2-4-7394dbda7199@quicinc.com>
References: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com>
In-Reply-To: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Lei Wei,
 Suruchi Agarwal, Pavithra R, Simon Horman, Jonathan Corbet, Kees Cook,
 "Gustavo A. R. Silva", Philipp Zabel
CC: Luo Jie

The BM (Buffer Management) config controls the pause frames generated on the
PPE port. A maximum of 15 BM ports and 4 groups are supported; all BM ports
are assigned to group 0 by default. The number of hardware buffers configured
for a port influences the flow control threshold for that port.
Signed-off-by: Luo Jie
---
 drivers/net/ethernet/qualcomm/ppe/Makefile     |   2 +-
 drivers/net/ethernet/qualcomm/ppe/ppe.c        |   5 +
 drivers/net/ethernet/qualcomm/ppe/ppe_config.c | 193 +++++++++++++++++++++++++
 drivers/net/ethernet/qualcomm/ppe/ppe_config.h |  12 ++
 drivers/net/ethernet/qualcomm/ppe/ppe_regs.h   |  59 ++++++++
 5 files changed, 270 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/qualcomm/ppe/Makefile b/drivers/net/ethernet/qualcomm/ppe/Makefile
index 63d50d3b4f2e..410a7bb54cfe 100644
--- a/drivers/net/ethernet/qualcomm/ppe/Makefile
+++ b/drivers/net/ethernet/qualcomm/ppe/Makefile
@@ -4,4 +4,4 @@
 #
 
 obj-$(CONFIG_QCOM_PPE) += qcom-ppe.o
-qcom-ppe-objs := ppe.o
+qcom-ppe-objs := ppe.o ppe_config.o
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index 4d90fcb8fa43..e8aa4eabaa7f 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -15,6 +15,7 @@
 #include
 
 #include "ppe.h"
+#include "ppe_config.h"
 
 #define PPE_PORT_MAX 8
 #define PPE_CLK_RATE 353000000
@@ -194,6 +195,10 @@ static int qcom_ppe_probe(struct platform_device *pdev)
 	if (ret)
 		return dev_err_probe(dev, ret, "PPE clock config failed\n");
 
+	ret = ppe_hw_config(ppe_dev);
+	if (ret)
+		return dev_err_probe(dev, ret, "PPE HW config failed\n");
+
 	platform_set_drvdata(pdev, ppe_dev);
 
 	return 0;
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c
new file mode 100644
index 000000000000..848f65ef32ea
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c
@@ -0,0 +1,193 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* PPE HW initialization configs such as BM (buffer management),
+ * QM (queue management) and scheduler configs.
+ */
+
+#include
+#include
+#include
+#include
+
+#include "ppe.h"
+#include "ppe_config.h"
+#include "ppe_regs.h"
+
+/**
+ * struct ppe_bm_port_config - PPE BM port configuration.
+ * @port_id_start: The first BM port ID to configure.
+ * @port_id_end: The last BM port ID to configure.
+ * @pre_alloc: BM port dedicated buffer number.
+ * @in_fly_buf: Buffer number for receiving the packet after the pause
+ * frame is sent.
+ * @ceil: Ceiling at which the back pressure is generated.
+ * @weight: Weight value.
+ * @resume_offset: Resume offset from the threshold value.
+ * @resume_ceil: Ceiling at which to resume from the back pressure state.
+ * @dynamic: Whether the dynamic threshold is used or not.
+ *
+ * This is for configuring the threshold that impacts the port
+ * flow control.
+ */
+struct ppe_bm_port_config {
+	unsigned int port_id_start;
+	unsigned int port_id_end;
+	unsigned int pre_alloc;
+	unsigned int in_fly_buf;
+	unsigned int ceil;
+	unsigned int weight;
+	unsigned int resume_offset;
+	unsigned int resume_ceil;
+	bool dynamic;
+};
+
+/* Assign the shared buffer number 1550 to group 0 by default. */
+static int ipq9574_ppe_bm_group_config = 1550;
+
+/* The buffer configurations per PPE port. There are 15 BM ports and
+ * 4 BM groups supported by the PPE. BM port (0-7) is for EDMA port 0,
+ * BM port (8-13) is for PPE physical port 1-6 and BM port 14 is for
+ * the EIP port.
+ */
+static struct ppe_bm_port_config ipq9574_ppe_bm_port_config[] = {
+	{
+		/* Buffer configuration for the BM port ID 0 of EDMA. */
+		.port_id_start = 0,
+		.port_id_end = 0,
+		.pre_alloc = 0,
+		.in_fly_buf = 100,
+		.ceil = 1146,
+		.weight = 7,
+		.resume_offset = 8,
+		.resume_ceil = 0,
+		.dynamic = true,
+	},
+	{
+		/* Buffer configuration for the BM port ID 1-7 of EDMA. */
+		.port_id_start = 1,
+		.port_id_end = 7,
+		.pre_alloc = 0,
+		.in_fly_buf = 100,
+		.ceil = 250,
+		.weight = 4,
+		.resume_offset = 36,
+		.resume_ceil = 0,
+		.dynamic = true,
+	},
+	{
+		/* Buffer configuration for the BM port ID 8-13 of PPE ports.
*/
+		.port_id_start = 8,
+		.port_id_end = 13,
+		.pre_alloc = 0,
+		.in_fly_buf = 128,
+		.ceil = 250,
+		.weight = 4,
+		.resume_offset = 36,
+		.resume_ceil = 0,
+		.dynamic = true,
+	},
+	{
+		/* Buffer configuration for the BM port ID 14 of EIP. */
+		.port_id_start = 14,
+		.port_id_end = 14,
+		.pre_alloc = 0,
+		.in_fly_buf = 40,
+		.ceil = 250,
+		.weight = 4,
+		.resume_offset = 36,
+		.resume_ceil = 0,
+		.dynamic = true,
+	},
+};
+
+static int ppe_config_bm_threshold(struct ppe_device *ppe_dev, int bm_port_id,
+				   struct ppe_bm_port_config port_cfg)
+{
+	u32 bm_fc_val[2] = {};
+	u32 reg, val;
+	int ret;
+
+	/* Configure the BM flow control related threshold. */
+	PPE_BM_PORT_FC_SET_WEIGHT(bm_fc_val, port_cfg.weight);
+	PPE_BM_PORT_FC_SET_RESUME_OFFSET(bm_fc_val, port_cfg.resume_offset);
+	PPE_BM_PORT_FC_SET_RESUME_THRESHOLD(bm_fc_val, port_cfg.resume_ceil);
+	PPE_BM_PORT_FC_SET_DYNAMIC(bm_fc_val, port_cfg.dynamic);
+	PPE_BM_PORT_FC_SET_REACT_LIMIT(bm_fc_val, port_cfg.in_fly_buf);
+	PPE_BM_PORT_FC_SET_PRE_ALLOC(bm_fc_val, port_cfg.pre_alloc);
+
+	/* Configure the low/high bits of the ceiling for the BM port. */
+	val = FIELD_GET(GENMASK(2, 0), port_cfg.ceil);
+	PPE_BM_PORT_FC_SET_CEILING_LOW(bm_fc_val, val);
+	val = FIELD_GET(GENMASK(10, 3), port_cfg.ceil);
+	PPE_BM_PORT_FC_SET_CEILING_HIGH(bm_fc_val, val);
+
+	reg = PPE_BM_PORT_FC_CFG_TBL_ADDR + PPE_BM_PORT_FC_CFG_TBL_INC * bm_port_id;
+	ret = regmap_bulk_write(ppe_dev->regmap, reg,
+				bm_fc_val, ARRAY_SIZE(bm_fc_val));
+	if (ret)
+		return ret;
+
+	/* Assign the default group ID 0 to the BM port. */
+	val = FIELD_PREP(PPE_BM_PORT_GROUP_ID_SHARED_GROUP_ID, 0);
+	reg = PPE_BM_PORT_GROUP_ID_ADDR + PPE_BM_PORT_GROUP_ID_INC * bm_port_id;
+	ret = regmap_update_bits(ppe_dev->regmap, reg,
+				 PPE_BM_PORT_GROUP_ID_SHARED_GROUP_ID,
+				 val);
+	if (ret)
+		return ret;
+
+	/* Enable BM port flow control.
*/
+	val = FIELD_PREP(PPE_BM_PORT_FC_MODE_EN, true);
+	reg = PPE_BM_PORT_FC_MODE_ADDR + PPE_BM_PORT_FC_MODE_INC * bm_port_id;
+
+	return regmap_update_bits(ppe_dev->regmap, reg,
+				  PPE_BM_PORT_FC_MODE_EN,
+				  val);
+}
+
+/* Configure the buffer threshold for the port flow control function. */
+static int ppe_config_bm(struct ppe_device *ppe_dev)
+{
+	unsigned int i, bm_port_id, port_cfg_cnt;
+	struct ppe_bm_port_config *port_cfg;
+	u32 reg, val;
+	int ret;
+
+	/* Configure the allocated buffer number only for group 0.
+	 * The buffer number of group 1-3 is already cleared to 0
+	 * after the PPE reset during the probe of the PPE driver.
+	 */
+	reg = PPE_BM_SHARED_GROUP_CFG_ADDR;
+	val = FIELD_PREP(PPE_BM_SHARED_GROUP_CFG_SHARED_LIMIT,
+			 ipq9574_ppe_bm_group_config);
+	ret = regmap_update_bits(ppe_dev->regmap, reg,
+				 PPE_BM_SHARED_GROUP_CFG_SHARED_LIMIT,
+				 val);
+	if (ret)
+		goto bm_config_fail;
+
+	/* Configure the buffer thresholds for the BM ports. */
+	port_cfg = ipq9574_ppe_bm_port_config;
+	port_cfg_cnt = ARRAY_SIZE(ipq9574_ppe_bm_port_config);
+	for (i = 0; i < port_cfg_cnt; i++) {
+		for (bm_port_id = port_cfg[i].port_id_start;
+		     bm_port_id <= port_cfg[i].port_id_end; bm_port_id++) {
+			ret = ppe_config_bm_threshold(ppe_dev, bm_port_id,
+						      port_cfg[i]);
+			if (ret)
+				goto bm_config_fail;
+		}
+	}
+
+	return 0;
+
+bm_config_fail:
+	dev_err(ppe_dev->dev, "PPE BM config error %d\n", ret);
+	return ret;
+}
+
+int ppe_hw_config(struct ppe_device *ppe_dev)
+{
+	return ppe_config_bm(ppe_dev);
+}
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h
new file mode 100644
index 000000000000..7b2f6a71cd4c
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef __PPE_CONFIG_H__
+#define __PPE_CONFIG_H__
+
+#include "ppe.h"
+
+int ppe_hw_config(struct ppe_device *ppe_dev);
+#endif
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
new file mode 100644
index 000000000000..b00f77ec45fe
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* PPE hardware register and table declarations. */
+#ifndef __PPE_REGS_H__
+#define __PPE_REGS_H__
+
+#include
+
+/* There are 15 BM ports and 4 BM groups supported by the PPE.
+ * BM port (0-7) is for EDMA port 0, BM port (8-13) is for
+ * PPE physical port 1-6 and BM port 14 is for the EIP port.
+ */
+#define PPE_BM_PORT_FC_MODE_ADDR			0x600100
+#define PPE_BM_PORT_FC_MODE_ENTRIES			15
+#define PPE_BM_PORT_FC_MODE_INC				0x4
+#define PPE_BM_PORT_FC_MODE_EN				BIT(0)
+
+#define PPE_BM_PORT_GROUP_ID_ADDR			0x600180
+#define PPE_BM_PORT_GROUP_ID_ENTRIES			15
+#define PPE_BM_PORT_GROUP_ID_INC			0x4
+#define PPE_BM_PORT_GROUP_ID_SHARED_GROUP_ID		GENMASK(1, 0)
+
+#define PPE_BM_SHARED_GROUP_CFG_ADDR			0x600290
+#define PPE_BM_SHARED_GROUP_CFG_ENTRIES			4
+#define PPE_BM_SHARED_GROUP_CFG_INC			0x4
+#define PPE_BM_SHARED_GROUP_CFG_SHARED_LIMIT		GENMASK(10, 0)
+
+#define PPE_BM_PORT_FC_CFG_TBL_ADDR			0x601000
+#define PPE_BM_PORT_FC_CFG_TBL_ENTRIES			15
+#define PPE_BM_PORT_FC_CFG_TBL_INC			0x10
+#define PPE_BM_PORT_FC_W0_REACT_LIMIT			GENMASK(8, 0)
+#define PPE_BM_PORT_FC_W0_RESUME_THRESHOLD		GENMASK(17, 9)
+#define PPE_BM_PORT_FC_W0_RESUME_OFFSET			GENMASK(28, 18)
+#define PPE_BM_PORT_FC_W0_CEILING_LOW			GENMASK(31, 29)
+#define PPE_BM_PORT_FC_W1_CEILING_HIGH			GENMASK(7, 0)
+#define PPE_BM_PORT_FC_W1_WEIGHT			GENMASK(10, 8)
+#define PPE_BM_PORT_FC_W1_DYNAMIC			BIT(11)
+#define PPE_BM_PORT_FC_W1_PRE_ALLOC			GENMASK(22, 12)
+
+#define PPE_BM_PORT_FC_SET_REACT_LIMIT(tbl_cfg, value)	\
+	u32p_replace_bits((u32 *)tbl_cfg,
value, PPE_BM_PORT_FC_W0_REACT_LIMIT)
+#define PPE_BM_PORT_FC_SET_RESUME_THRESHOLD(tbl_cfg, value)	\
+	u32p_replace_bits((u32 *)tbl_cfg, value, PPE_BM_PORT_FC_W0_RESUME_THRESHOLD)
+#define PPE_BM_PORT_FC_SET_RESUME_OFFSET(tbl_cfg, value)	\
+	u32p_replace_bits((u32 *)tbl_cfg, value, PPE_BM_PORT_FC_W0_RESUME_OFFSET)
+#define PPE_BM_PORT_FC_SET_CEILING_LOW(tbl_cfg, value)	\
+	u32p_replace_bits((u32 *)tbl_cfg, value, PPE_BM_PORT_FC_W0_CEILING_LOW)
+#define PPE_BM_PORT_FC_SET_CEILING_HIGH(tbl_cfg, value)	\
+	u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_BM_PORT_FC_W1_CEILING_HIGH)
+#define PPE_BM_PORT_FC_SET_WEIGHT(tbl_cfg, value)	\
+	u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_BM_PORT_FC_W1_WEIGHT)
+#define PPE_BM_PORT_FC_SET_DYNAMIC(tbl_cfg, value)	\
+	u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_BM_PORT_FC_W1_DYNAMIC)
+#define PPE_BM_PORT_FC_SET_PRE_ALLOC(tbl_cfg, value)	\
+	u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_BM_PORT_FC_W1_PRE_ALLOC)
+#endif

From patchwork Wed Jan 8 13:47:13 2025
X-Patchwork-Submitter: Jie Luo
X-Patchwork-Id: 855755
h=From:Date:Subject:MIME-Version:Content-Type:Message-ID:References: In-Reply-To:To:CC; b=lpTJK0EGHY+7DgizjfpvtLe5hd1pHUmyX2jSWhxVywKHAKyGDCe2awSGuAyt6Mpy54ZA/Roz876pfs9fR6X8ojikPSeLqvtJ1+5zOgPg/ZMqadw9tbNHVKtBFl3lwDuE/aGrYHHAig9xLM32RMU1+YYhOT+Al7nBfxg7un+7JsY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com; spf=pass smtp.mailfrom=quicinc.com; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b=S7rtk7cS; arc=none smtp.client-ip=205.220.180.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="S7rtk7cS" Received: from pps.filterd (m0279873.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 508BlEjR020200; Wed, 8 Jan 2025 13:48:23 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=qcppdkim1; bh= 7PiRsY3XWO22qTVUz913LFP2CWDsLR3Mx4CI/Ynvagg=; b=S7rtk7cSQZC4VdhP 4+BK6b9i4cZr7HwsiZsLaag+LpjdNPX9dCUYF6mKJviytmJznXXF4yah7DToyzB9 vo+A87O9jNVVnbTnVy1VeYZtIcmZ672eLfBr65v5bqSHlPLG0tsBhACjtpHog9L7 sKhh//6Ogi1jwcjp28lLSBs1HRUI9q/XnykJYE0s5W81B0KPNaTY/FzD/ySu36B5 Slc6cJRw2y7npD+7wSFJGR7MYxe3Zig5Yd5Pbh3zQq6B6nmXa+XyEkWfJiE+4ISr WKuFq4wwoM3qCorpMPxjISXXIEMLhq8agsTf+WkqPVIcqfIZTcjCR7BD1k7PvSOB 6zavog== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 441nm18u0m-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 08 Jan 2025 13:48:23 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by 
NASANPPMTA05.qualcomm.com (8.18.1.2/8.18.1.2) with ESMTPS id 508DmMLi016489 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 8 Jan 2025 13:48:22 GMT Received: from nsssdc-sh01-lnx.ap.qualcomm.com (10.80.80.8) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.9; Wed, 8 Jan 2025 05:48:15 -0800 From: Luo Jie Date: Wed, 8 Jan 2025 21:47:13 +0800 Subject: [PATCH net-next v2 06/14] net: ethernet: qualcomm: Initialize the PPE scheduler settings Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20250108-qcom_ipq_ppe-v2-6-7394dbda7199@quicinc.com> References: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com> In-Reply-To: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lei Wei , Suruchi Agarwal , Pavithra R , "Simon Horman" , Jonathan Corbet , Kees Cook , "Gustavo A. R. 
Silva" , "Philipp Zabel" CC: , , , , , , , , , , , Luo Jie X-Mailer: b4 0.14.1 X-Developer-Signature: v=1; a=ed25519-sha256; t=1736344057; l=28141; i=quic_luoj@quicinc.com; s=20240808; h=from:subject:message-id; bh=gtLMIsPyX/TXuKMi7XwKeXVuX6ZSPszx49OAGONGH18=; b=O+rb7Bbh1sTuYJuWtnTbi3FMjZwfW/ZM0CH2QnMzFmaAfWX1xqNozwYU99M8a0G0PTXrHAS7n UNqYADt7AK0DZ3tQCafmpCxSLH5HDJy/3JPv4KwdhbmM8qtfcJGxulP X-Developer-Key: i=quic_luoj@quicinc.com; a=ed25519; pk=P81jeEL23FcOkZtXZXeDDiPwIwgAHVZFASJV12w3U6w= X-ClientProxiedBy: nasanex01a.na.qualcomm.com (10.52.223.231) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: 43OqpvSzXtKowAnu3PIBucN17ngDeHSr X-Proofpoint-GUID: 43OqpvSzXtKowAnu3PIBucN17ngDeHSr X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.60.29 definitions=2024-09-06_09,2024-09-06_01,2024-09-02_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 impostorscore=0 suspectscore=0 adultscore=0 malwarescore=0 phishscore=0 bulkscore=0 lowpriorityscore=0 clxscore=1015 mlxlogscore=999 mlxscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2411120000 definitions=main-2501080115 The PPE scheduler settings determine the priority of scheduling the packet across the different hardware queues per PPE port. 
Signed-off-by: Luo Jie --- drivers/net/ethernet/qualcomm/ppe/ppe_config.c | 789 ++++++++++++++++++++++++- drivers/net/ethernet/qualcomm/ppe/ppe_config.h | 37 ++ drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 97 +++ 3 files changed, 922 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c index 9d4e455e8b3b..2041efeb3a55 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c @@ -16,6 +16,8 @@ #include "ppe_config.h" #include "ppe_regs.h" +#define PPE_QUEUE_SCH_PRI_NUM 8 + /** * struct ppe_bm_port_config - PPE BM port configuration. * @port_id_start: The fist BM port ID to configure. @@ -63,6 +65,66 @@ struct ppe_qm_queue_config { bool dynamic; }; +/** + * struct ppe_scheduler_bm_config - PPE arbitration for buffer config. + * @valid: Arbitration entry valid or not. + * @is_egress: Arbitration entry for egress or not. + * @port: Port ID to use arbitration entry. + * @second_valid: Second port valid or not. + * @second_port: Second port to use. + * + * Configure the scheduler settings for accessing and releasing the PPE buffers. + */ +struct ppe_scheduler_bm_config { + bool valid; + bool is_egress; + unsigned int port; + bool second_valid; + unsigned int second_port; +}; + +/** + * struct ppe_scheduler_qm_config - PPE arbitration for scheduler config. + * @ensch_port_bmp: Port bit map for enqueue scheduler. + * @ensch_port: Port ID to enqueue scheduler. + * @desch_port: Port ID to dequeue scheduler. + * @desch_second_valid: Dequeue for the second port valid or not. + * @desch_second_port: Second port ID to dequeue scheduler. + * + * Configure the scheduler settings for enqueuing and dequeuing packets on + * the PPE port. 
+ */ +struct ppe_scheduler_qm_config { + unsigned int ensch_port_bmp; + unsigned int ensch_port; + unsigned int desch_port; + bool desch_second_valid; + unsigned int desch_second_port; +}; + +/** + * struct ppe_scheduler_port_config - PPE port scheduler config. + * @port: Port ID to be scheduled. + * @flow_level: Scheduler flow level or not. + * @node_id: Node ID, for level 0, queue ID is used. + * @loop_num: Loop number of scheduler config. + * @pri_max: Max priority configured. + * @flow_id: Strict priority ID. + * @drr_node_id: Node ID for scheduler. + * + * PPE port scheduler configuration which decides the priority in the + * packet scheduler for the egress port. + */ +struct ppe_scheduler_port_config { + unsigned int port; + bool flow_level; + unsigned int node_id; + unsigned int loop_num; + unsigned int pri_max; + unsigned int flow_id; + unsigned int drr_node_id; +}; + /* Assign the share buffer number 1550 to group 0 by default. */ static int ipq9574_ppe_bm_group_config = 1550; @@ -149,6 +211,599 @@ static struct ppe_qm_queue_config ipq9574_ppe_qm_queue_config[] = { }, }; +/* Scheduler configuration for the assigning and releasing buffers for the + * packet passing through PPE, which is different per SoC. 
+ */ +static struct ppe_scheduler_bm_config ipq9574_ppe_sch_bm_config[] = { + {1, 0, 0, 0, 0}, + {1, 1, 0, 0, 0}, + {1, 0, 5, 0, 0}, + {1, 1, 5, 0, 0}, + {1, 0, 6, 0, 0}, + {1, 1, 6, 0, 0}, + {1, 0, 1, 0, 0}, + {1, 1, 1, 0, 0}, + {1, 0, 0, 0, 0}, + {1, 1, 0, 0, 0}, + {1, 0, 5, 0, 0}, + {1, 1, 5, 0, 0}, + {1, 0, 6, 0, 0}, + {1, 1, 6, 0, 0}, + {1, 0, 7, 0, 0}, + {1, 1, 7, 0, 0}, + {1, 0, 0, 0, 0}, + {1, 1, 0, 0, 0}, + {1, 0, 1, 0, 0}, + {1, 1, 1, 0, 0}, + {1, 0, 5, 0, 0}, + {1, 1, 5, 0, 0}, + {1, 0, 6, 0, 0}, + {1, 1, 6, 0, 0}, + {1, 0, 2, 0, 0}, + {1, 1, 2, 0, 0}, + {1, 0, 0, 0, 0}, + {1, 1, 0, 0, 0}, + {1, 0, 5, 0, 0}, + {1, 1, 5, 0, 0}, + {1, 0, 6, 0, 0}, + {1, 1, 6, 0, 0}, + {1, 0, 1, 0, 0}, + {1, 1, 1, 0, 0}, + {1, 0, 3, 0, 0}, + {1, 1, 3, 0, 0}, + {1, 0, 0, 0, 0}, + {1, 1, 0, 0, 0}, + {1, 0, 5, 0, 0}, + {1, 1, 5, 0, 0}, + {1, 0, 6, 0, 0}, + {1, 1, 6, 0, 0}, + {1, 0, 7, 0, 0}, + {1, 1, 7, 0, 0}, + {1, 0, 0, 0, 0}, + {1, 1, 0, 0, 0}, + {1, 0, 1, 0, 0}, + {1, 1, 1, 0, 0}, + {1, 0, 5, 0, 0}, + {1, 1, 5, 0, 0}, + {1, 0, 6, 0, 0}, + {1, 1, 6, 0, 0}, + {1, 0, 4, 0, 0}, + {1, 1, 4, 0, 0}, + {1, 0, 0, 0, 0}, + {1, 1, 0, 0, 0}, + {1, 0, 5, 0, 0}, + {1, 1, 5, 0, 0}, + {1, 0, 6, 0, 0}, + {1, 1, 6, 0, 0}, + {1, 0, 1, 0, 0}, + {1, 1, 1, 0, 0}, + {1, 0, 0, 0, 0}, + {1, 1, 0, 0, 0}, + {1, 0, 5, 0, 0}, + {1, 1, 5, 0, 0}, + {1, 0, 6, 0, 0}, + {1, 1, 6, 0, 0}, + {1, 0, 2, 0, 0}, + {1, 1, 2, 0, 0}, + {1, 0, 0, 0, 0}, + {1, 1, 0, 0, 0}, + {1, 0, 7, 0, 0}, + {1, 1, 7, 0, 0}, + {1, 0, 5, 0, 0}, + {1, 1, 5, 0, 0}, + {1, 0, 6, 0, 0}, + {1, 1, 6, 0, 0}, + {1, 0, 1, 0, 0}, + {1, 1, 1, 0, 0}, + {1, 0, 0, 0, 0}, + {1, 1, 0, 0, 0}, + {1, 0, 5, 0, 0}, + {1, 1, 5, 0, 0}, + {1, 0, 6, 0, 0}, + {1, 1, 6, 0, 0}, + {1, 0, 3, 0, 0}, + {1, 1, 3, 0, 0}, + {1, 0, 1, 0, 0}, + {1, 1, 1, 0, 0}, + {1, 0, 0, 0, 0}, + {1, 1, 0, 0, 0}, + {1, 0, 5, 0, 0}, + {1, 1, 5, 0, 0}, + {1, 0, 6, 0, 0}, + {1, 1, 6, 0, 0}, + {1, 0, 4, 0, 0}, + {1, 1, 4, 0, 0}, + {1, 0, 7, 0, 0}, + {1, 1, 7, 0, 0}, +}; + +/* Scheduler 
configuration for dispatching packet on PPE queues, which + * is different per SoC. + */ +static struct ppe_scheduler_qm_config ipq9574_ppe_sch_qm_config[] = { + {0x98, 6, 0, 1, 1}, + {0x94, 5, 6, 1, 3}, + {0x86, 0, 5, 1, 4}, + {0x8C, 1, 6, 1, 0}, + {0x1C, 7, 5, 1, 1}, + {0x98, 2, 6, 1, 0}, + {0x1C, 5, 7, 1, 1}, + {0x34, 3, 6, 1, 0}, + {0x8C, 4, 5, 1, 1}, + {0x98, 2, 6, 1, 0}, + {0x8C, 5, 4, 1, 1}, + {0xA8, 0, 6, 1, 2}, + {0x98, 5, 1, 1, 0}, + {0x98, 6, 5, 1, 2}, + {0x89, 1, 6, 1, 4}, + {0xA4, 3, 0, 1, 1}, + {0x8C, 5, 6, 1, 4}, + {0xA8, 0, 2, 1, 1}, + {0x98, 6, 5, 1, 0}, + {0xC4, 4, 3, 1, 1}, + {0x94, 6, 5, 1, 0}, + {0x1C, 7, 6, 1, 1}, + {0x98, 2, 5, 1, 0}, + {0x1C, 6, 7, 1, 1}, + {0x1C, 5, 6, 1, 0}, + {0x94, 3, 5, 1, 1}, + {0x8C, 4, 6, 1, 0}, + {0x94, 1, 5, 1, 3}, + {0x94, 6, 1, 1, 0}, + {0xD0, 3, 5, 1, 2}, + {0x98, 6, 0, 1, 1}, + {0x94, 5, 6, 1, 3}, + {0x94, 1, 5, 1, 0}, + {0x98, 2, 6, 1, 1}, + {0x8C, 4, 5, 1, 0}, + {0x1C, 7, 6, 1, 1}, + {0x8C, 0, 5, 1, 4}, + {0x89, 1, 6, 1, 2}, + {0x98, 5, 0, 1, 1}, + {0x94, 6, 5, 1, 3}, + {0x92, 0, 6, 1, 2}, + {0x98, 1, 5, 1, 0}, + {0x98, 6, 2, 1, 1}, + {0xD0, 0, 5, 1, 3}, + {0x94, 6, 0, 1, 1}, + {0x8C, 5, 6, 1, 4}, + {0x8C, 1, 5, 1, 0}, + {0x1C, 6, 7, 1, 1}, + {0x1C, 5, 6, 1, 0}, + {0xB0, 2, 3, 1, 1}, + {0xC4, 4, 5, 1, 0}, + {0x8C, 6, 4, 1, 1}, + {0xA4, 3, 6, 1, 0}, + {0x1C, 5, 7, 1, 1}, + {0x4C, 0, 5, 1, 4}, + {0x8C, 6, 0, 1, 1}, + {0x34, 7, 6, 1, 3}, + {0x94, 5, 0, 1, 1}, + {0x98, 6, 5, 1, 2}, +}; + +static struct ppe_scheduler_port_config ppe_port_sch_config[] = { + { + .port = 0, + .flow_level = true, + .node_id = 0, + .loop_num = 1, + .pri_max = 1, + .flow_id = 0, + .drr_node_id = 0, + }, + { + .port = 0, + .flow_level = false, + .node_id = 0, + .loop_num = 8, + .pri_max = 8, + .flow_id = 0, + .drr_node_id = 0, + }, + { + .port = 0, + .flow_level = false, + .node_id = 8, + .loop_num = 8, + .pri_max = 8, + .flow_id = 0, + .drr_node_id = 0, + }, + { + .port = 0, + .flow_level = false, + .node_id = 16, + .loop_num = 8, + 
.pri_max = 8, + .flow_id = 0, + .drr_node_id = 0, + }, + { + .port = 0, + .flow_level = false, + .node_id = 24, + .loop_num = 8, + .pri_max = 8, + .flow_id = 0, + .drr_node_id = 0, + }, + { + .port = 0, + .flow_level = false, + .node_id = 32, + .loop_num = 8, + .pri_max = 8, + .flow_id = 0, + .drr_node_id = 0, + }, + { + .port = 0, + .flow_level = false, + .node_id = 40, + .loop_num = 8, + .pri_max = 8, + .flow_id = 0, + .drr_node_id = 0, + }, + { + .port = 0, + .flow_level = false, + .node_id = 48, + .loop_num = 8, + .pri_max = 8, + .flow_id = 0, + .drr_node_id = 0, + }, + { + .port = 0, + .flow_level = false, + .node_id = 56, + .loop_num = 8, + .pri_max = 8, + .flow_id = 0, + .drr_node_id = 0, + }, + { + .port = 0, + .flow_level = false, + .node_id = 256, + .loop_num = 8, + .pri_max = 8, + .flow_id = 0, + .drr_node_id = 0, + }, + { + .port = 0, + .flow_level = false, + .node_id = 264, + .loop_num = 8, + .pri_max = 8, + .flow_id = 0, + .drr_node_id = 0, + }, + { + .port = 1, + .flow_level = true, + .node_id = 36, + .loop_num = 2, + .pri_max = 0, + .flow_id = 1, + .drr_node_id = 8, + }, + { + .port = 1, + .flow_level = false, + .node_id = 144, + .loop_num = 16, + .pri_max = 8, + .flow_id = 36, + .drr_node_id = 48, + }, + { + .port = 1, + .flow_level = false, + .node_id = 272, + .loop_num = 4, + .pri_max = 4, + .flow_id = 36, + .drr_node_id = 48, + }, + { + .port = 2, + .flow_level = true, + .node_id = 40, + .loop_num = 2, + .pri_max = 0, + .flow_id = 2, + .drr_node_id = 12, + }, + { + .port = 2, + .flow_level = false, + .node_id = 160, + .loop_num = 16, + .pri_max = 8, + .flow_id = 40, + .drr_node_id = 64, + }, + { + .port = 2, + .flow_level = false, + .node_id = 276, + .loop_num = 4, + .pri_max = 4, + .flow_id = 40, + .drr_node_id = 64, + }, + { + .port = 3, + .flow_level = true, + .node_id = 44, + .loop_num = 2, + .pri_max = 0, + .flow_id = 3, + .drr_node_id = 16, + }, + { + .port = 3, + .flow_level = false, + .node_id = 176, + .loop_num = 16, + .pri_max = 8, + 
.flow_id = 44, + .drr_node_id = 80, + }, + { + .port = 3, + .flow_level = false, + .node_id = 280, + .loop_num = 4, + .pri_max = 4, + .flow_id = 44, + .drr_node_id = 80, + }, + { + .port = 4, + .flow_level = true, + .node_id = 48, + .loop_num = 2, + .pri_max = 0, + .flow_id = 4, + .drr_node_id = 20, + }, + { + .port = 4, + .flow_level = false, + .node_id = 192, + .loop_num = 16, + .pri_max = 8, + .flow_id = 48, + .drr_node_id = 96, + }, + { + .port = 4, + .flow_level = false, + .node_id = 284, + .loop_num = 4, + .pri_max = 4, + .flow_id = 48, + .drr_node_id = 96, + }, + { + .port = 5, + .flow_level = true, + .node_id = 52, + .loop_num = 2, + .pri_max = 0, + .flow_id = 5, + .drr_node_id = 24, + }, + { + .port = 5, + .flow_level = false, + .node_id = 208, + .loop_num = 16, + .pri_max = 8, + .flow_id = 52, + .drr_node_id = 112, + }, + { + .port = 5, + .flow_level = false, + .node_id = 288, + .loop_num = 4, + .pri_max = 4, + .flow_id = 52, + .drr_node_id = 112, + }, + { + .port = 6, + .flow_level = true, + .node_id = 56, + .loop_num = 2, + .pri_max = 0, + .flow_id = 6, + .drr_node_id = 28, + }, + { + .port = 6, + .flow_level = false, + .node_id = 224, + .loop_num = 16, + .pri_max = 8, + .flow_id = 56, + .drr_node_id = 128, + }, + { + .port = 6, + .flow_level = false, + .node_id = 292, + .loop_num = 4, + .pri_max = 4, + .flow_id = 56, + .drr_node_id = 128, + }, + { + .port = 7, + .flow_level = true, + .node_id = 60, + .loop_num = 2, + .pri_max = 0, + .flow_id = 7, + .drr_node_id = 32, + }, + { + .port = 7, + .flow_level = false, + .node_id = 240, + .loop_num = 16, + .pri_max = 8, + .flow_id = 60, + .drr_node_id = 144, + }, + { + .port = 7, + .flow_level = false, + .node_id = 296, + .loop_num = 4, + .pri_max = 4, + .flow_id = 60, + .drr_node_id = 144, + }, +}; + +/* Set the PPE queue level scheduler configuration. 
*/ +static int ppe_scheduler_l0_queue_map_set(struct ppe_device *ppe_dev, + int node_id, int port, + struct ppe_scheduler_cfg scheduler_cfg) +{ + u32 val, reg; + int ret; + + reg = PPE_L0_FLOW_MAP_TBL_ADDR + node_id * PPE_L0_FLOW_MAP_TBL_INC; + val = FIELD_PREP(PPE_L0_FLOW_MAP_TBL_FLOW_ID, scheduler_cfg.flow_id); + val |= FIELD_PREP(PPE_L0_FLOW_MAP_TBL_C_PRI, scheduler_cfg.pri); + val |= FIELD_PREP(PPE_L0_FLOW_MAP_TBL_E_PRI, scheduler_cfg.pri); + val |= FIELD_PREP(PPE_L0_FLOW_MAP_TBL_C_NODE_WT, scheduler_cfg.drr_node_wt); + val |= FIELD_PREP(PPE_L0_FLOW_MAP_TBL_E_NODE_WT, scheduler_cfg.drr_node_wt); + + ret = regmap_write(ppe_dev->regmap, reg, val); + if (ret) + return ret; + + reg = PPE_L0_C_FLOW_CFG_TBL_ADDR + + (scheduler_cfg.flow_id * PPE_QUEUE_SCH_PRI_NUM + scheduler_cfg.pri) * + PPE_L0_C_FLOW_CFG_TBL_INC; + val = FIELD_PREP(PPE_L0_C_FLOW_CFG_TBL_NODE_ID, scheduler_cfg.drr_node_id); + val |= FIELD_PREP(PPE_L0_C_FLOW_CFG_TBL_NODE_CREDIT_UNIT, scheduler_cfg.unit_is_packet); + + ret = regmap_write(ppe_dev->regmap, reg, val); + if (ret) + return ret; + + reg = PPE_L0_E_FLOW_CFG_TBL_ADDR + + (scheduler_cfg.flow_id * PPE_QUEUE_SCH_PRI_NUM + scheduler_cfg.pri) * + PPE_L0_E_FLOW_CFG_TBL_INC; + val = FIELD_PREP(PPE_L0_E_FLOW_CFG_TBL_NODE_ID, scheduler_cfg.drr_node_id); + val |= FIELD_PREP(PPE_L0_E_FLOW_CFG_TBL_NODE_CREDIT_UNIT, scheduler_cfg.unit_is_packet); + + ret = regmap_write(ppe_dev->regmap, reg, val); + if (ret) + return ret; + + reg = PPE_L0_FLOW_PORT_MAP_TBL_ADDR + node_id * PPE_L0_FLOW_PORT_MAP_TBL_INC; + val = FIELD_PREP(PPE_L0_FLOW_PORT_MAP_TBL_PORT_NUM, port); + + ret = regmap_write(ppe_dev->regmap, reg, val); + if (ret) + return ret; + + reg = PPE_L0_COMP_CFG_TBL_ADDR + node_id * PPE_L0_COMP_CFG_TBL_INC; + val = FIELD_PREP(PPE_L0_COMP_CFG_TBL_NODE_METER_LEN, scheduler_cfg.frame_mode); + + return regmap_update_bits(ppe_dev->regmap, reg, + PPE_L0_COMP_CFG_TBL_NODE_METER_LEN, + val); +} + +/* Set the PPE flow level scheduler configuration. 
*/ +static int ppe_scheduler_l1_queue_map_set(struct ppe_device *ppe_dev, + int node_id, int port, + struct ppe_scheduler_cfg scheduler_cfg) +{ + u32 val, reg; + int ret; + + val = FIELD_PREP(PPE_L1_FLOW_MAP_TBL_FLOW_ID, scheduler_cfg.flow_id); + val |= FIELD_PREP(PPE_L1_FLOW_MAP_TBL_C_PRI, scheduler_cfg.pri); + val |= FIELD_PREP(PPE_L1_FLOW_MAP_TBL_E_PRI, scheduler_cfg.pri); + val |= FIELD_PREP(PPE_L1_FLOW_MAP_TBL_C_NODE_WT, scheduler_cfg.drr_node_wt); + val |= FIELD_PREP(PPE_L1_FLOW_MAP_TBL_E_NODE_WT, scheduler_cfg.drr_node_wt); + reg = PPE_L1_FLOW_MAP_TBL_ADDR + node_id * PPE_L1_FLOW_MAP_TBL_INC; + + ret = regmap_write(ppe_dev->regmap, reg, val); + if (ret) + return ret; + + val = FIELD_PREP(PPE_L1_C_FLOW_CFG_TBL_NODE_ID, scheduler_cfg.drr_node_id); + val |= FIELD_PREP(PPE_L1_C_FLOW_CFG_TBL_NODE_CREDIT_UNIT, scheduler_cfg.unit_is_packet); + reg = PPE_L1_C_FLOW_CFG_TBL_ADDR + + (scheduler_cfg.flow_id * PPE_QUEUE_SCH_PRI_NUM + scheduler_cfg.pri) * + PPE_L1_C_FLOW_CFG_TBL_INC; + + ret = regmap_write(ppe_dev->regmap, reg, val); + if (ret) + return ret; + + val = FIELD_PREP(PPE_L1_E_FLOW_CFG_TBL_NODE_ID, scheduler_cfg.drr_node_id); + val |= FIELD_PREP(PPE_L1_E_FLOW_CFG_TBL_NODE_CREDIT_UNIT, scheduler_cfg.unit_is_packet); + reg = PPE_L1_E_FLOW_CFG_TBL_ADDR + + (scheduler_cfg.flow_id * PPE_QUEUE_SCH_PRI_NUM + scheduler_cfg.pri) * + PPE_L1_E_FLOW_CFG_TBL_INC; + + ret = regmap_write(ppe_dev->regmap, reg, val); + if (ret) + return ret; + + val = FIELD_PREP(PPE_L1_FLOW_PORT_MAP_TBL_PORT_NUM, port); + reg = PPE_L1_FLOW_PORT_MAP_TBL_ADDR + node_id * PPE_L1_FLOW_PORT_MAP_TBL_INC; + + ret = regmap_write(ppe_dev->regmap, reg, val); + if (ret) + return ret; + + reg = PPE_L1_COMP_CFG_TBL_ADDR + node_id * PPE_L1_COMP_CFG_TBL_INC; + val = FIELD_PREP(PPE_L1_COMP_CFG_TBL_NODE_METER_LEN, scheduler_cfg.frame_mode); + + return regmap_update_bits(ppe_dev->regmap, reg, PPE_L1_COMP_CFG_TBL_NODE_METER_LEN, val); +} + +/** + * ppe_queue_scheduler_set - Configure scheduler for PPE hardware 
queue + * @ppe_dev: PPE device + * @node_id: PPE queue ID or flow ID + * @flow_level: Flow level scheduler or queue level scheduler + * @port: PPE port ID set scheduler configuration + * @scheduler_cfg: PPE scheduler configuration + * + * PPE scheduler configuration supports queue level and flow level on + * the PPE egress port. + * + * Return 0 on success, negative error code on failure. + */ +int ppe_queue_scheduler_set(struct ppe_device *ppe_dev, + int node_id, bool flow_level, int port, + struct ppe_scheduler_cfg scheduler_cfg) +{ + if (flow_level) + return ppe_scheduler_l1_queue_map_set(ppe_dev, node_id, + port, scheduler_cfg); + + return ppe_scheduler_l0_queue_map_set(ppe_dev, node_id, + port, scheduler_cfg); +} + static int ppe_config_bm_threshold(struct ppe_device *ppe_dev, int bm_port_id, struct ppe_bm_port_config port_cfg) { @@ -356,6 +1011,134 @@ static int ppe_config_qm(struct ppe_device *ppe_dev) return ret; } +static int ppe_node_scheduler_config(struct ppe_device *ppe_dev, + struct ppe_scheduler_port_config config) +{ + struct ppe_scheduler_cfg sch_cfg; + int ret, i; + + for (i = 0; i < config.loop_num; i++) { + if (!config.pri_max) { + /* Round robin scheduler without priority. */ + sch_cfg.flow_id = config.flow_id; + sch_cfg.pri = 0; + sch_cfg.drr_node_id = config.drr_node_id; + } else { + sch_cfg.flow_id = config.flow_id + (i / config.pri_max); + sch_cfg.pri = i % config.pri_max; + sch_cfg.drr_node_id = config.drr_node_id + i; + } + + /* Scheduler weight, must be more than 0. */ + sch_cfg.drr_node_wt = 1; + /* Byte based to be scheduled. */ + sch_cfg.unit_is_packet = false; + /* Frame + CRC calculated. */ + sch_cfg.frame_mode = PPE_SCH_WITH_FRAME_CRC; + + ret = ppe_queue_scheduler_set(ppe_dev, config.node_id + i, + config.flow_level, + config.port, + sch_cfg); + if (ret) + return ret; + } + + return 0; +} + +/* Initialize scheduler settings for PPE buffer utilization and dispatching + * packet on PPE queue. 
+ */
+static int ppe_config_scheduler(struct ppe_device *ppe_dev)
+{
+	struct ppe_scheduler_port_config port_cfg;
+	struct ppe_scheduler_qm_config qm_cfg;
+	struct ppe_scheduler_bm_config bm_cfg;
+	int ret, i, count;
+	u32 val, reg;
+
+	count = ARRAY_SIZE(ipq9574_ppe_sch_bm_config);
+
+	/* Configure the depth of BM scheduler entries. */
+	val = FIELD_PREP(PPE_BM_SCH_CTRL_SCH_DEPTH, count);
+	val |= FIELD_PREP(PPE_BM_SCH_CTRL_SCH_OFFSET, 0);
+	val |= FIELD_PREP(PPE_BM_SCH_CTRL_SCH_EN, 1);
+
+	ret = regmap_write(ppe_dev->regmap, PPE_BM_SCH_CTRL_ADDR, val);
+	if (ret)
+		goto sch_config_fail;
+
+	/* Configure each BM scheduler entry with the valid ingress port and
+	 * egress port; the second port takes effect when the specified port
+	 * is in the inactive state.
+	 */
+	for (i = 0; i < count; i++) {
+		bm_cfg = ipq9574_ppe_sch_bm_config[i];
+
+		val = FIELD_PREP(PPE_BM_SCH_CFG_TBL_VALID, bm_cfg.valid);
+		val |= FIELD_PREP(PPE_BM_SCH_CFG_TBL_DIR, bm_cfg.is_egress);
+		val |= FIELD_PREP(PPE_BM_SCH_CFG_TBL_PORT_NUM, bm_cfg.port);
+		val |= FIELD_PREP(PPE_BM_SCH_CFG_TBL_SECOND_PORT_VALID, bm_cfg.second_valid);
+		val |= FIELD_PREP(PPE_BM_SCH_CFG_TBL_SECOND_PORT, bm_cfg.second_port);
+
+		reg = PPE_BM_SCH_CFG_TBL_ADDR + i * PPE_BM_SCH_CFG_TBL_INC;
+		ret = regmap_write(ppe_dev->regmap, reg, val);
+		if (ret)
+			goto sch_config_fail;
+	}
+
+	count = ARRAY_SIZE(ipq9574_ppe_sch_qm_config);
+
+	/* Configure the depth of QM scheduler entries. */
+	val = FIELD_PREP(PPE_PSCH_SCH_DEPTH_CFG_SCH_DEPTH, count);
+	ret = regmap_write(ppe_dev->regmap, PPE_PSCH_SCH_DEPTH_CFG_ADDR, val);
+	if (ret)
+		goto sch_config_fail;
+
+	/* Configure each QM scheduler entry with an enqueue port and a dequeue
+	 * port; the second port takes effect when the specified dequeue
+	 * port is in the inactive state.
+	 */
+	for (i = 0; i < count; i++) {
+		qm_cfg = ipq9574_ppe_sch_qm_config[i];
+
+		val = FIELD_PREP(PPE_PSCH_SCH_CFG_TBL_ENS_PORT_BITMAP,
+				 qm_cfg.ensch_port_bmp);
+		val |= FIELD_PREP(PPE_PSCH_SCH_CFG_TBL_ENS_PORT,
+				  qm_cfg.ensch_port);
+		val |= FIELD_PREP(PPE_PSCH_SCH_CFG_TBL_DES_PORT,
+				  qm_cfg.desch_port);
+		val |= FIELD_PREP(PPE_PSCH_SCH_CFG_TBL_DES_SECOND_PORT_EN,
+				  qm_cfg.desch_second_valid);
+		val |= FIELD_PREP(PPE_PSCH_SCH_CFG_TBL_DES_SECOND_PORT,
+				  qm_cfg.desch_second_port);
+		reg = PPE_PSCH_SCH_CFG_TBL_ADDR + i * PPE_PSCH_SCH_CFG_TBL_INC;
+
+		ret = regmap_write(ppe_dev->regmap, reg, val);
+		if (ret)
+			goto sch_config_fail;
+	}
+
+	count = ARRAY_SIZE(ppe_port_sch_config);
+
+	/* Configure scheduler per PPE queue or flow. */
+	for (i = 0; i < count; i++) {
+		port_cfg = ppe_port_sch_config[i];
+
+		if (port_cfg.port >= ppe_dev->num_ports)
+			break;
+
+		ret = ppe_node_scheduler_config(ppe_dev, port_cfg);
+		if (ret)
+			goto sch_config_fail;
+	}
+
+	return 0;
+
+sch_config_fail:
+	dev_err(ppe_dev->dev, "PPE scheduler arbitration config error %d\n", ret);
+	return ret;
+}
+
 int ppe_hw_config(struct ppe_device *ppe_dev)
 {
 	int ret;
@@ -364,5 +1147,9 @@ int ppe_hw_config(struct ppe_device *ppe_dev)
 	if (ret)
 		return ret;
 
-	return ppe_config_qm(ppe_dev);
+	ret = ppe_config_qm(ppe_dev);
+	if (ret)
+		return ret;
+
+	return ppe_config_scheduler(ppe_dev);
 }
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h
index 7b2f6a71cd4c..f28cfe7e1548 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h
@@ -8,5 +8,42 @@
 #include "ppe.h"
 
+/**
+ * enum ppe_scheduler_frame_mode - PPE scheduler frame mode.
+ * @PPE_SCH_WITH_IPG_PREAMBLE_FRAME_CRC: The scheduled frame includes IPG,
+ *	preamble, Ethernet packet and CRC.
+ * @PPE_SCH_WITH_FRAME_CRC: The scheduled frame includes Ethernet frame and CRC,
+ *	excluding IPG and preamble.
+ * @PPE_SCH_WITH_L3_PAYLOAD: The scheduled frame includes layer 3 packet data. + */ +enum ppe_scheduler_frame_mode { + PPE_SCH_WITH_IPG_PREAMBLE_FRAME_CRC = 0, + PPE_SCH_WITH_FRAME_CRC = 1, + PPE_SCH_WITH_L3_PAYLOAD = 2, +}; + +/** + * struct ppe_scheduler_cfg - PPE scheduler configuration. + * @flow_id: PPE flow ID. + * @pri: Scheduler priority. + * @drr_node_id: Node ID for scheduled traffic. + * @drr_node_wt: Weight for scheduled traffic. + * @unit_is_packet: Packet based or byte based unit for scheduled traffic. + * @frame_mode: Packet mode to be scheduled. + * + * PPE scheduler supports commit rate and exceed rate configurations. + */ +struct ppe_scheduler_cfg { + int flow_id; + int pri; + int drr_node_id; + int drr_node_wt; + bool unit_is_packet; + enum ppe_scheduler_frame_mode frame_mode; +}; + int ppe_hw_config(struct ppe_device *ppe_dev); +int ppe_queue_scheduler_set(struct ppe_device *ppe_dev, + int node_id, bool flow_level, int port, + struct ppe_scheduler_cfg scheduler_cfg); #endif diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h index 6eac3ab8e58b..4c832179d539 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h @@ -9,16 +9,113 @@ #include +/* PPE scheduler configurations for buffer manager block. */ +#define PPE_BM_SCH_CTRL_ADDR 0xb000 +#define PPE_BM_SCH_CTRL_INC 4 +#define PPE_BM_SCH_CTRL_SCH_DEPTH GENMASK(7, 0) +#define PPE_BM_SCH_CTRL_SCH_OFFSET GENMASK(14, 8) +#define PPE_BM_SCH_CTRL_SCH_EN BIT(31) + +#define PPE_BM_SCH_CFG_TBL_ADDR 0xc000 +#define PPE_BM_SCH_CFG_TBL_ENTRIES 128 +#define PPE_BM_SCH_CFG_TBL_INC 0x10 +#define PPE_BM_SCH_CFG_TBL_PORT_NUM GENMASK(3, 0) +#define PPE_BM_SCH_CFG_TBL_DIR BIT(4) +#define PPE_BM_SCH_CFG_TBL_VALID BIT(5) +#define PPE_BM_SCH_CFG_TBL_SECOND_PORT_VALID BIT(6) +#define PPE_BM_SCH_CFG_TBL_SECOND_PORT GENMASK(11, 8) + /* PPE queue counters enable/disable control. 
*/ #define PPE_EG_BRIDGE_CONFIG_ADDR 0x20044 #define PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN BIT(2) +/* Port scheduler global config. */ +#define PPE_PSCH_SCH_DEPTH_CFG_ADDR 0x400000 +#define PPE_PSCH_SCH_DEPTH_CFG_INC 4 +#define PPE_PSCH_SCH_DEPTH_CFG_SCH_DEPTH GENMASK(7, 0) + +/* PPE queue level scheduler configurations. */ +#define PPE_L0_FLOW_MAP_TBL_ADDR 0x402000 +#define PPE_L0_FLOW_MAP_TBL_ENTRIES 300 +#define PPE_L0_FLOW_MAP_TBL_INC 0x10 +#define PPE_L0_FLOW_MAP_TBL_FLOW_ID GENMASK(5, 0) +#define PPE_L0_FLOW_MAP_TBL_C_PRI GENMASK(8, 6) +#define PPE_L0_FLOW_MAP_TBL_E_PRI GENMASK(11, 9) +#define PPE_L0_FLOW_MAP_TBL_C_NODE_WT GENMASK(21, 12) +#define PPE_L0_FLOW_MAP_TBL_E_NODE_WT GENMASK(31, 22) + +#define PPE_L0_C_FLOW_CFG_TBL_ADDR 0x404000 +#define PPE_L0_C_FLOW_CFG_TBL_ENTRIES 512 +#define PPE_L0_C_FLOW_CFG_TBL_INC 0x10 +#define PPE_L0_C_FLOW_CFG_TBL_NODE_ID GENMASK(7, 0) +#define PPE_L0_C_FLOW_CFG_TBL_NODE_CREDIT_UNIT BIT(8) + +#define PPE_L0_E_FLOW_CFG_TBL_ADDR 0x406000 +#define PPE_L0_E_FLOW_CFG_TBL_ENTRIES 512 +#define PPE_L0_E_FLOW_CFG_TBL_INC 0x10 +#define PPE_L0_E_FLOW_CFG_TBL_NODE_ID GENMASK(7, 0) +#define PPE_L0_E_FLOW_CFG_TBL_NODE_CREDIT_UNIT BIT(8) + +#define PPE_L0_FLOW_PORT_MAP_TBL_ADDR 0x408000 +#define PPE_L0_FLOW_PORT_MAP_TBL_ENTRIES 300 +#define PPE_L0_FLOW_PORT_MAP_TBL_INC 0x10 +#define PPE_L0_FLOW_PORT_MAP_TBL_PORT_NUM GENMASK(3, 0) + +#define PPE_L0_COMP_CFG_TBL_ADDR 0x428000 +#define PPE_L0_COMP_CFG_TBL_ENTRIES 300 +#define PPE_L0_COMP_CFG_TBL_INC 0x10 +#define PPE_L0_COMP_CFG_TBL_SHAPER_METER_LEN GENMASK(1, 0) +#define PPE_L0_COMP_CFG_TBL_NODE_METER_LEN GENMASK(3, 2) + /* Table addresses for per-queue dequeue setting. */ #define PPE_DEQ_OPR_TBL_ADDR 0x430000 #define PPE_DEQ_OPR_TBL_ENTRIES 300 #define PPE_DEQ_OPR_TBL_INC 0x10 #define PPE_DEQ_OPR_TBL_DEQ_DISABLE BIT(0) +/* PPE flow level scheduler configurations. 
*/ +#define PPE_L1_FLOW_MAP_TBL_ADDR 0x440000 +#define PPE_L1_FLOW_MAP_TBL_ENTRIES 64 +#define PPE_L1_FLOW_MAP_TBL_INC 0x10 +#define PPE_L1_FLOW_MAP_TBL_FLOW_ID GENMASK(3, 0) +#define PPE_L1_FLOW_MAP_TBL_C_PRI GENMASK(6, 4) +#define PPE_L1_FLOW_MAP_TBL_E_PRI GENMASK(9, 7) +#define PPE_L1_FLOW_MAP_TBL_C_NODE_WT GENMASK(19, 10) +#define PPE_L1_FLOW_MAP_TBL_E_NODE_WT GENMASK(29, 20) + +#define PPE_L1_C_FLOW_CFG_TBL_ADDR 0x442000 +#define PPE_L1_C_FLOW_CFG_TBL_ENTRIES 64 +#define PPE_L1_C_FLOW_CFG_TBL_INC 0x10 +#define PPE_L1_C_FLOW_CFG_TBL_NODE_ID GENMASK(5, 0) +#define PPE_L1_C_FLOW_CFG_TBL_NODE_CREDIT_UNIT BIT(6) + +#define PPE_L1_E_FLOW_CFG_TBL_ADDR 0x444000 +#define PPE_L1_E_FLOW_CFG_TBL_ENTRIES 64 +#define PPE_L1_E_FLOW_CFG_TBL_INC 0x10 +#define PPE_L1_E_FLOW_CFG_TBL_NODE_ID GENMASK(5, 0) +#define PPE_L1_E_FLOW_CFG_TBL_NODE_CREDIT_UNIT BIT(6) + +#define PPE_L1_FLOW_PORT_MAP_TBL_ADDR 0x446000 +#define PPE_L1_FLOW_PORT_MAP_TBL_ENTRIES 64 +#define PPE_L1_FLOW_PORT_MAP_TBL_INC 0x10 +#define PPE_L1_FLOW_PORT_MAP_TBL_PORT_NUM GENMASK(3, 0) + +#define PPE_L1_COMP_CFG_TBL_ADDR 0x46a000 +#define PPE_L1_COMP_CFG_TBL_ENTRIES 64 +#define PPE_L1_COMP_CFG_TBL_INC 0x10 +#define PPE_L1_COMP_CFG_TBL_SHAPER_METER_LEN GENMASK(1, 0) +#define PPE_L1_COMP_CFG_TBL_NODE_METER_LEN GENMASK(3, 2) + +/* PPE port scheduler configurations for egress. */ +#define PPE_PSCH_SCH_CFG_TBL_ADDR 0x47a000 +#define PPE_PSCH_SCH_CFG_TBL_ENTRIES 128 +#define PPE_PSCH_SCH_CFG_TBL_INC 0x10 +#define PPE_PSCH_SCH_CFG_TBL_DES_PORT GENMASK(3, 0) +#define PPE_PSCH_SCH_CFG_TBL_ENS_PORT GENMASK(7, 4) +#define PPE_PSCH_SCH_CFG_TBL_ENS_PORT_BITMAP GENMASK(15, 8) +#define PPE_PSCH_SCH_CFG_TBL_DES_SECOND_PORT_EN BIT(16) +#define PPE_PSCH_SCH_CFG_TBL_DES_SECOND_PORT GENMASK(20, 17) + /* There are 15 BM ports and 4 BM groups supported by PPE. * BM port (0-7) is for EDMA port 0, BM port (8-13) is for * PPE physical port 1-6 and BM port 14 is for EIP port. 
From patchwork Wed Jan 8 13:47:15 2025
X-Patchwork-Submitter: Jie Luo
X-Patchwork-Id: 855754
From: Luo Jie
Date: Wed, 8 Jan 2025 21:47:15 +0800
Subject: [PATCH net-next v2 08/14] net: ethernet: qualcomm: Initialize PPE service code settings
Message-ID: <20250108-qcom_ipq_ppe-v2-8-7394dbda7199@quicinc.com>
References: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com>
In-Reply-To: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com>

PPE service code is a special code (0-255) defined by the PPE for its packet processing stages, according to the network functions required for the packet. For packets sent out by the ARM cores on Ethernet ports, service code 1 is used as the default service code.
This service code is used to bypass most of the PPE packet processing stages before the packet is transmitted out of the PPE port, since the software network stack has already processed the packet. Signed-off-by: Luo Jie --- drivers/net/ethernet/qualcomm/ppe/ppe_config.c | 95 +++++++++++++++- drivers/net/ethernet/qualcomm/ppe/ppe_config.h | 145 +++++++++++++++++++++++++ drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 53 +++++++++ 3 files changed, 292 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c index f379ee9d94a6..c337b4deddc8 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c @@ -8,6 +8,7 @@ */ #include +#include #include #include #include @@ -1080,6 +1081,75 @@ int ppe_port_resource_get(struct ppe_device *ppe_dev, int port, return 0; } +/** + * ppe_sc_config_set - Set PPE service code configuration + * @ppe_dev: PPE device + * @sc: Service ID, 0-255 supported by PPE + * @cfg: Service code configuration + * + * PPE service code is used by the PPE during its packet processing stages, + * to perform or bypass certain selected packet operations on the packet. + * + * Return 0 on success, negative error code on failure.
+ */ +int ppe_sc_config_set(struct ppe_device *ppe_dev, int sc, struct ppe_sc_cfg cfg) +{ + u32 val, reg, servcode_val[2] = {}; + unsigned long bitmap_value; + int ret; + + val = FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_PORT_ID_VALID, cfg.dest_port_valid); + val |= FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_PORT_ID, cfg.dest_port); + val |= FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_DIRECTION, cfg.is_src); + + bitmap_value = bitmap_read(cfg.bitmaps.egress, 0, PPE_SC_BYPASS_EGRESS_SIZE); + val |= FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_BYPASS_BITMAP, bitmap_value); + val |= FIELD_PREP(PPE_IN_L2_SERVICE_TBL_RX_CNT_EN, + test_bit(PPE_SC_BYPASS_COUNTER_RX, cfg.bitmaps.counter)); + val |= FIELD_PREP(PPE_IN_L2_SERVICE_TBL_TX_CNT_EN, + test_bit(PPE_SC_BYPASS_COUNTER_TX, cfg.bitmaps.counter)); + reg = PPE_IN_L2_SERVICE_TBL_ADDR + PPE_IN_L2_SERVICE_TBL_INC * sc; + + ret = regmap_write(ppe_dev->regmap, reg, val); + if (ret) + return ret; + + bitmap_value = bitmap_read(cfg.bitmaps.ingress, 0, PPE_SC_BYPASS_INGRESS_SIZE); + PPE_SERVICE_SET_BYPASS_BITMAP(servcode_val, bitmap_value); + PPE_SERVICE_SET_RX_CNT_EN(servcode_val, + test_bit(PPE_SC_BYPASS_COUNTER_RX_VLAN, cfg.bitmaps.counter)); + reg = PPE_SERVICE_TBL_ADDR + PPE_SERVICE_TBL_INC * sc; + + ret = regmap_bulk_write(ppe_dev->regmap, reg, + servcode_val, ARRAY_SIZE(servcode_val)); + if (ret) + return ret; + + reg = PPE_EG_SERVICE_TBL_ADDR + PPE_EG_SERVICE_TBL_INC * sc; + ret = regmap_bulk_read(ppe_dev->regmap, reg, + servcode_val, ARRAY_SIZE(servcode_val)); + if (ret) + return ret; + + PPE_EG_SERVICE_SET_NEXT_SERVCODE(servcode_val, cfg.next_service_code); + PPE_EG_SERVICE_SET_UPDATE_ACTION(servcode_val, cfg.eip_field_update_bitmap); + PPE_EG_SERVICE_SET_HW_SERVICE(servcode_val, cfg.eip_hw_service); + PPE_EG_SERVICE_SET_OFFSET_SEL(servcode_val, cfg.eip_offset_sel); + PPE_EG_SERVICE_SET_TX_CNT_EN(servcode_val, + test_bit(PPE_SC_BYPASS_COUNTER_TX_VLAN, cfg.bitmaps.counter)); + + ret = regmap_bulk_write(ppe_dev->regmap, reg, + servcode_val, 
ARRAY_SIZE(servcode_val)); + if (ret) + return ret; + + bitmap_value = bitmap_read(cfg.bitmaps.tunnel, 0, PPE_SC_BYPASS_TUNNEL_SIZE); + val = FIELD_PREP(PPE_TL_SERVICE_TBL_BYPASS_BITMAP, bitmap_value); + reg = PPE_TL_SERVICE_TBL_ADDR + PPE_TL_SERVICE_TBL_INC * sc; + + return regmap_write(ppe_dev->regmap, reg, val); +} + static int ppe_config_bm_threshold(struct ppe_device *ppe_dev, int bm_port_id, struct ppe_bm_port_config port_cfg) { @@ -1490,6 +1560,25 @@ static int ppe_queue_dest_init(struct ppe_device *ppe_dev) return 0; } +/* Initialize the service code 1 used by CPU port. */ +static int ppe_servcode_init(struct ppe_device *ppe_dev) +{ + struct ppe_sc_cfg sc_cfg = {}; + + bitmap_zero(sc_cfg.bitmaps.counter, PPE_SC_BYPASS_COUNTER_SIZE); + bitmap_zero(sc_cfg.bitmaps.tunnel, PPE_SC_BYPASS_TUNNEL_SIZE); + + bitmap_fill(sc_cfg.bitmaps.ingress, PPE_SC_BYPASS_INGRESS_SIZE); + clear_bit(PPE_SC_BYPASS_INGRESS_FAKE_MAC_HEADER, sc_cfg.bitmaps.ingress); + clear_bit(PPE_SC_BYPASS_INGRESS_SERVICE_CODE, sc_cfg.bitmaps.ingress); + clear_bit(PPE_SC_BYPASS_INGRESS_FAKE_L2_PROTO, sc_cfg.bitmaps.ingress); + + bitmap_fill(sc_cfg.bitmaps.egress, PPE_SC_BYPASS_EGRESS_SIZE); + clear_bit(PPE_SC_BYPASS_EGRESS_ACL_POST_ROUTING_CHECK, sc_cfg.bitmaps.egress); + + return ppe_sc_config_set(ppe_dev, PPE_EDMA_SC_BYPASS_ID, sc_cfg); +} + int ppe_hw_config(struct ppe_device *ppe_dev) { int ret; @@ -1506,5 +1595,9 @@ int ppe_hw_config(struct ppe_device *ppe_dev) if (ret) return ret; - return ppe_queue_dest_init(ppe_dev); + ret = ppe_queue_dest_init(ppe_dev); + if (ret) + return ret; + + return ppe_servcode_init(ppe_dev); } diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h index 6553da34effe..db5b033229d9 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h @@ -6,6 +6,8 @@ #ifndef __PPE_CONFIG_H__ #define __PPE_CONFIG_H__ +#include + #include "ppe.h" /* There are different table 
index ranges for configuring queue base ID of @@ -18,6 +20,9 @@ #define PPE_QUEUE_INTER_PRI_NUM 16 #define PPE_QUEUE_HASH_NUM 256 +/* The service code is used by EDMA port to transmit packet to PPE. */ +#define PPE_EDMA_SC_BYPASS_ID 1 + /** * enum ppe_scheduler_frame_mode - PPE scheduler frame mode. * @PPE_SCH_WITH_IPG_PREAMBLE_FRAME_CRC: The scheduled frame includes IPG, @@ -90,6 +95,144 @@ struct ppe_queue_ucast_dest { int dest_port; }; +/* Hardware bitmaps for bypassing features of the ingress packet. */ +enum ppe_sc_ingress_type { + PPE_SC_BYPASS_INGRESS_VLAN_TAG_FMT_CHECK = 0, + PPE_SC_BYPASS_INGRESS_VLAN_MEMBER_CHECK = 1, + PPE_SC_BYPASS_INGRESS_VLAN_TRANSLATE = 2, + PPE_SC_BYPASS_INGRESS_MY_MAC_CHECK = 3, + PPE_SC_BYPASS_INGRESS_DIP_LOOKUP = 4, + PPE_SC_BYPASS_INGRESS_FLOW_LOOKUP = 5, + PPE_SC_BYPASS_INGRESS_FLOW_ACTION = 6, + PPE_SC_BYPASS_INGRESS_ACL = 7, + PPE_SC_BYPASS_INGRESS_FAKE_MAC_HEADER = 8, + PPE_SC_BYPASS_INGRESS_SERVICE_CODE = 9, + PPE_SC_BYPASS_INGRESS_WRONG_PKT_FMT_L2 = 10, + PPE_SC_BYPASS_INGRESS_WRONG_PKT_FMT_L3_IPV4 = 11, + PPE_SC_BYPASS_INGRESS_WRONG_PKT_FMT_L3_IPV6 = 12, + PPE_SC_BYPASS_INGRESS_WRONG_PKT_FMT_L4 = 13, + PPE_SC_BYPASS_INGRESS_FLOW_SERVICE_CODE = 14, + PPE_SC_BYPASS_INGRESS_ACL_SERVICE_CODE = 15, + PPE_SC_BYPASS_INGRESS_FAKE_L2_PROTO = 16, + PPE_SC_BYPASS_INGRESS_PPPOE_TERMINATION = 17, + PPE_SC_BYPASS_INGRESS_DEFAULT_VLAN = 18, + PPE_SC_BYPASS_INGRESS_DEFAULT_PCP = 19, + PPE_SC_BYPASS_INGRESS_VSI_ASSIGN = 20, + /* Values 21-23 are not specified by hardware. */ + PPE_SC_BYPASS_INGRESS_VLAN_ASSIGN_FAIL = 24, + PPE_SC_BYPASS_INGRESS_SOURCE_GUARD = 25, + PPE_SC_BYPASS_INGRESS_MRU_MTU_CHECK = 26, + PPE_SC_BYPASS_INGRESS_FLOW_SRC_CHECK = 27, + PPE_SC_BYPASS_INGRESS_FLOW_QOS = 28, + /* This must be last as it determines the size of the BITMAP. */ + PPE_SC_BYPASS_INGRESS_SIZE, +}; + +/* Hardware bitmaps for bypassing features of the egress packet. 
*/ +enum ppe_sc_egress_type { + PPE_SC_BYPASS_EGRESS_VLAN_MEMBER_CHECK = 0, + PPE_SC_BYPASS_EGRESS_VLAN_TRANSLATE = 1, + PPE_SC_BYPASS_EGRESS_VLAN_TAG_FMT_CTRL = 2, + PPE_SC_BYPASS_EGRESS_FDB_LEARN = 3, + PPE_SC_BYPASS_EGRESS_FDB_REFRESH = 4, + PPE_SC_BYPASS_EGRESS_L2_SOURCE_SECURITY = 5, + PPE_SC_BYPASS_EGRESS_MANAGEMENT_FWD = 6, + PPE_SC_BYPASS_EGRESS_BRIDGING_FWD = 7, + PPE_SC_BYPASS_EGRESS_IN_STP_FLTR = 8, + PPE_SC_BYPASS_EGRESS_EG_STP_FLTR = 9, + PPE_SC_BYPASS_EGRESS_SOURCE_FLTR = 10, + PPE_SC_BYPASS_EGRESS_POLICER = 11, + PPE_SC_BYPASS_EGRESS_L2_PKT_EDIT = 12, + PPE_SC_BYPASS_EGRESS_L3_PKT_EDIT = 13, + PPE_SC_BYPASS_EGRESS_ACL_POST_ROUTING_CHECK = 14, + PPE_SC_BYPASS_EGRESS_PORT_ISOLATION = 15, + PPE_SC_BYPASS_EGRESS_PRE_ACL_QOS = 16, + PPE_SC_BYPASS_EGRESS_POST_ACL_QOS = 17, + PPE_SC_BYPASS_EGRESS_DSCP_QOS = 18, + PPE_SC_BYPASS_EGRESS_PCP_QOS = 19, + PPE_SC_BYPASS_EGRESS_PREHEADER_QOS = 20, + PPE_SC_BYPASS_EGRESS_FAKE_MAC_DROP = 21, + PPE_SC_BYPASS_EGRESS_TUNL_CONTEXT = 22, + PPE_SC_BYPASS_EGRESS_FLOW_POLICER = 23, + /* This must be last as it determines the size of the BITMAP. */ + PPE_SC_BYPASS_EGRESS_SIZE, +}; + +/* Hardware bitmaps for bypassing counter of packet. */ +enum ppe_sc_counter_type { + PPE_SC_BYPASS_COUNTER_RX_VLAN = 0, + PPE_SC_BYPASS_COUNTER_RX = 1, + PPE_SC_BYPASS_COUNTER_TX_VLAN = 2, + PPE_SC_BYPASS_COUNTER_TX = 3, + /* This must be last as it determines the size of the BITMAP. */ + PPE_SC_BYPASS_COUNTER_SIZE, +}; + +/* Hardware bitmaps for bypassing features of tunnel packet. 
*/ +enum ppe_sc_tunnel_type { + PPE_SC_BYPASS_TUNNEL_SERVICE_CODE = 0, + PPE_SC_BYPASS_TUNNEL_TUNNEL_HANDLE = 1, + PPE_SC_BYPASS_TUNNEL_L3_IF_CHECK = 2, + PPE_SC_BYPASS_TUNNEL_VLAN_CHECK = 3, + PPE_SC_BYPASS_TUNNEL_DMAC_CHECK = 4, + PPE_SC_BYPASS_TUNNEL_UDP_CSUM_0_CHECK = 5, + PPE_SC_BYPASS_TUNNEL_TBL_DE_ACCE_CHECK = 6, + PPE_SC_BYPASS_TUNNEL_PPPOE_MC_TERM_CHECK = 7, + PPE_SC_BYPASS_TUNNEL_TTL_EXCEED_CHECK = 8, + PPE_SC_BYPASS_TUNNEL_MAP_SRC_CHECK = 9, + PPE_SC_BYPASS_TUNNEL_MAP_DST_CHECK = 10, + PPE_SC_BYPASS_TUNNEL_LPM_DST_LOOKUP = 11, + PPE_SC_BYPASS_TUNNEL_LPM_LOOKUP = 12, + PPE_SC_BYPASS_TUNNEL_WRONG_PKT_FMT_L2 = 13, + PPE_SC_BYPASS_TUNNEL_WRONG_PKT_FMT_L3_IPV4 = 14, + PPE_SC_BYPASS_TUNNEL_WRONG_PKT_FMT_L3_IPV6 = 15, + PPE_SC_BYPASS_TUNNEL_WRONG_PKT_FMT_L4 = 16, + PPE_SC_BYPASS_TUNNEL_WRONG_PKT_FMT_TUNNEL = 17, + /* Values 18-19 are not specified by hardware. */ + PPE_SC_BYPASS_TUNNEL_PRE_IPO = 20, + /* This must be last as it determines the size of the BITMAP. */ + PPE_SC_BYPASS_TUNNEL_SIZE, +}; + +/** + * struct ppe_sc_bypass - PPE service bypass bitmaps + * @ingress: Bitmap of features that can be bypassed on the ingress packet. + * @egress: Bitmap of features that can be bypassed on the egress packet. + * @counter: Bitmap of features that can be bypassed on the counter type. + * @tunnel: Bitmap of features that can be bypassed on the tunnel packet. + */ +struct ppe_sc_bypass { + DECLARE_BITMAP(ingress, PPE_SC_BYPASS_INGRESS_SIZE); + DECLARE_BITMAP(egress, PPE_SC_BYPASS_EGRESS_SIZE); + DECLARE_BITMAP(counter, PPE_SC_BYPASS_COUNTER_SIZE); + DECLARE_BITMAP(tunnel, PPE_SC_BYPASS_TUNNEL_SIZE); +}; + +/** + * struct ppe_sc_cfg - PPE service code configuration. + * @dest_port_valid: Generate destination port or not. + * @dest_port: Destination port ID. + * @bitmaps: Bitmap of bypass features. + * @is_src: Destination port acts as source port, packet sent to CPU. + * @next_service_code: New service code generated.
+ * @eip_field_update_bitmap: Fields updated as actions taken for EIP. + * @eip_hw_service: Selected hardware functions for EIP. + * @eip_offset_sel: Packet offset selection, using packet's layer 4 offset + * or using packet's layer 3 offset for EIP. + * + * Service code is generated during the packet passing through PPE. + */ +struct ppe_sc_cfg { + bool dest_port_valid; + int dest_port; + struct ppe_sc_bypass bitmaps; + bool is_src; + int next_service_code; + int eip_field_update_bitmap; + int eip_hw_service; + int eip_offset_sel; +}; + int ppe_hw_config(struct ppe_device *ppe_dev); int ppe_queue_scheduler_set(struct ppe_device *ppe_dev, int node_id, bool flow_level, int port, @@ -109,4 +252,6 @@ int ppe_queue_ucast_offset_hash_set(struct ppe_device *ppe_dev, int ppe_port_resource_get(struct ppe_device *ppe_dev, int port, enum ppe_resource_type type, int *res_start, int *res_end); +int ppe_sc_config_set(struct ppe_device *ppe_dev, int sc, + struct ppe_sc_cfg cfg); #endif diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h index 0232f23dcefe..80f003afad78 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h @@ -25,10 +25,63 @@ #define PPE_BM_SCH_CFG_TBL_SECOND_PORT_VALID BIT(6) #define PPE_BM_SCH_CFG_TBL_SECOND_PORT GENMASK(11, 8) +/* PPE service code configuration for the ingress direction functions, + * including bypass configuration for relevant PPE switch core functions + * such as flow entry lookup bypass. 
+ */ +#define PPE_SERVICE_TBL_ADDR 0x15000 +#define PPE_SERVICE_TBL_ENTRIES 256 +#define PPE_SERVICE_TBL_INC 0x10 +#define PPE_SERVICE_W0_BYPASS_BITMAP GENMASK(31, 0) +#define PPE_SERVICE_W1_RX_CNT_EN BIT(0) + +#define PPE_SERVICE_SET_BYPASS_BITMAP(tbl_cfg, value) \ + u32p_replace_bits((u32 *)tbl_cfg, value, PPE_SERVICE_W0_BYPASS_BITMAP) +#define PPE_SERVICE_SET_RX_CNT_EN(tbl_cfg, value) \ + u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_SERVICE_W1_RX_CNT_EN) + /* PPE queue counters enable/disable control. */ #define PPE_EG_BRIDGE_CONFIG_ADDR 0x20044 #define PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN BIT(2) +/* PPE service code configuration on the egress direction. */ +#define PPE_EG_SERVICE_TBL_ADDR 0x43000 +#define PPE_EG_SERVICE_TBL_ENTRIES 256 +#define PPE_EG_SERVICE_TBL_INC 0x10 +#define PPE_EG_SERVICE_W0_UPDATE_ACTION GENMASK(31, 0) +#define PPE_EG_SERVICE_W1_NEXT_SERVCODE GENMASK(7, 0) +#define PPE_EG_SERVICE_W1_HW_SERVICE GENMASK(13, 8) +#define PPE_EG_SERVICE_W1_OFFSET_SEL BIT(14) +#define PPE_EG_SERVICE_W1_TX_CNT_EN BIT(15) + +#define PPE_EG_SERVICE_SET_UPDATE_ACTION(tbl_cfg, value) \ + u32p_replace_bits((u32 *)tbl_cfg, value, PPE_EG_SERVICE_W0_UPDATE_ACTION) +#define PPE_EG_SERVICE_SET_NEXT_SERVCODE(tbl_cfg, value) \ + u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_EG_SERVICE_W1_NEXT_SERVCODE) +#define PPE_EG_SERVICE_SET_HW_SERVICE(tbl_cfg, value) \ + u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_EG_SERVICE_W1_HW_SERVICE) +#define PPE_EG_SERVICE_SET_OFFSET_SEL(tbl_cfg, value) \ + u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_EG_SERVICE_W1_OFFSET_SEL) +#define PPE_EG_SERVICE_SET_TX_CNT_EN(tbl_cfg, value) \ + u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_EG_SERVICE_W1_TX_CNT_EN) + +/* PPE service code configuration for destination port and counter. 
*/ +#define PPE_IN_L2_SERVICE_TBL_ADDR 0x66000 +#define PPE_IN_L2_SERVICE_TBL_ENTRIES 256 +#define PPE_IN_L2_SERVICE_TBL_INC 0x10 +#define PPE_IN_L2_SERVICE_TBL_DST_PORT_ID_VALID BIT(0) +#define PPE_IN_L2_SERVICE_TBL_DST_PORT_ID GENMASK(4, 1) +#define PPE_IN_L2_SERVICE_TBL_DST_DIRECTION BIT(5) +#define PPE_IN_L2_SERVICE_TBL_DST_BYPASS_BITMAP GENMASK(29, 6) +#define PPE_IN_L2_SERVICE_TBL_RX_CNT_EN BIT(30) +#define PPE_IN_L2_SERVICE_TBL_TX_CNT_EN BIT(31) + +/* PPE service code configuration for the tunnel packet. */ +#define PPE_TL_SERVICE_TBL_ADDR 0x306000 +#define PPE_TL_SERVICE_TBL_ENTRIES 256 +#define PPE_TL_SERVICE_TBL_INC 4 +#define PPE_TL_SERVICE_TBL_BYPASS_BITMAP GENMASK(31, 0) + /* Port scheduler global config. */ #define PPE_PSCH_SCH_DEPTH_CFG_ADDR 0x400000 #define PPE_PSCH_SCH_DEPTH_CFG_INC 4

From patchwork Wed Jan 8 13:47:17 2025
X-Patchwork-Submitter: Jie Luo
X-Patchwork-Id: 855753
From: Luo Jie
Date: Wed, 8 Jan 2025 21:47:17 +0800
Subject: [PATCH net-next v2 10/14] net: ethernet: qualcomm: Initialize PPE RSS hash settings
Message-ID: <20250108-qcom_ipq_ppe-v2-10-7394dbda7199@quicinc.com>
References: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com>
In-Reply-To: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com>

The PPE RSS hash is generated during PPE receive, based on the packet content (a 3-tuple or 5-tuple) and the configured RSS seed. The hash is then used to select the queue used to transmit the packet to the ARM CPU. This patch initializes the RSS hash settings that are used to generate the hash for the packet during PPE packet receive.
Signed-off-by: Luo Jie --- drivers/net/ethernet/qualcomm/ppe/ppe_config.c | 194 ++++++++++++++++++++++++- drivers/net/ethernet/qualcomm/ppe/ppe_config.h | 39 +++++ drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 40 +++++ 3 files changed, 272 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c index d3633cf12f81..1f180784a330 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c @@ -1195,6 +1195,143 @@ int ppe_counter_enable_set(struct ppe_device *ppe_dev, int port, bool enable) val); } +static int ppe_rss_hash_ipv4_config(struct ppe_device *ppe_dev, int index, + struct ppe_rss_hash_cfg cfg) +{ + u32 reg, val; + + switch (index) { + case 0: + val = FIELD_PREP(PPE_RSS_HASH_MIX_IPV4_VAL, cfg.hash_sip_mix[0]); + break; + case 1: + val = FIELD_PREP(PPE_RSS_HASH_MIX_IPV4_VAL, cfg.hash_dip_mix[0]); + break; + case 2: + val = FIELD_PREP(PPE_RSS_HASH_MIX_IPV4_VAL, cfg.hash_protocol_mix); + break; + case 3: + val = FIELD_PREP(PPE_RSS_HASH_MIX_IPV4_VAL, cfg.hash_dport_mix); + break; + case 4: + val = FIELD_PREP(PPE_RSS_HASH_MIX_IPV4_VAL, cfg.hash_sport_mix); + break; + default: + return -EINVAL; + } + + reg = PPE_RSS_HASH_MIX_IPV4_ADDR + index * PPE_RSS_HASH_MIX_IPV4_INC; + + return regmap_write(ppe_dev->regmap, reg, val); +} + +static int ppe_rss_hash_ipv6_config(struct ppe_device *ppe_dev, int index, + struct ppe_rss_hash_cfg cfg) +{ + u32 reg, val; + + switch (index) { + case 0 ... 3: + val = FIELD_PREP(PPE_RSS_HASH_MIX_VAL, cfg.hash_sip_mix[index]); + break; + case 4 ... 
7: + val = FIELD_PREP(PPE_RSS_HASH_MIX_VAL, cfg.hash_dip_mix[index - 4]); + break; + case 8: + val = FIELD_PREP(PPE_RSS_HASH_MIX_VAL, cfg.hash_protocol_mix); + break; + case 9: + val = FIELD_PREP(PPE_RSS_HASH_MIX_VAL, cfg.hash_dport_mix); + break; + case 10: + val = FIELD_PREP(PPE_RSS_HASH_MIX_VAL, cfg.hash_sport_mix); + break; + default: + return -EINVAL; + } + + reg = PPE_RSS_HASH_MIX_ADDR + index * PPE_RSS_HASH_MIX_INC; + + return regmap_write(ppe_dev->regmap, reg, val); +} + +/** + * ppe_rss_hash_config_set - Configure the PPE hash settings for received packets. + * @ppe_dev: PPE device + * @mode: Configure RSS hash for the packet type IPv4 and IPv6. + * @cfg: RSS hash configuration + * + * PPE RSS hash settings are configured for the packet type IPv4 and IPv6. + * + * Return 0 on success, negative error code on failure. + */ +int ppe_rss_hash_config_set(struct ppe_device *ppe_dev, int mode, + struct ppe_rss_hash_cfg cfg) +{ + u32 val, reg; + int i, ret; + + if (mode & PPE_RSS_HASH_MODE_IPV4) { + val = FIELD_PREP(PPE_RSS_HASH_MASK_IPV4_HASH_MASK, cfg.hash_mask); + val |= FIELD_PREP(PPE_RSS_HASH_MASK_IPV4_FRAGMENT, cfg.hash_fragment_mode); + ret = regmap_write(ppe_dev->regmap, PPE_RSS_HASH_MASK_IPV4_ADDR, val); + if (ret) + return ret; + + val = FIELD_PREP(PPE_RSS_HASH_SEED_IPV4_VAL, cfg.hash_seed); + ret = regmap_write(ppe_dev->regmap, PPE_RSS_HASH_SEED_IPV4_ADDR, val); + if (ret) + return ret; + + for (i = 0; i < PPE_RSS_HASH_MIX_IPV4_ENTRIES; i++) { + ret = ppe_rss_hash_ipv4_config(ppe_dev, i, cfg); + if (ret) + return ret; + } + + for (i = 0; i < PPE_RSS_HASH_FIN_IPV4_ENTRIES; i++) { + val = FIELD_PREP(PPE_RSS_HASH_FIN_IPV4_INNER, cfg.hash_fin_inner[i]); + val |= FIELD_PREP(PPE_RSS_HASH_FIN_IPV4_OUTER, cfg.hash_fin_outer[i]); + reg = PPE_RSS_HASH_FIN_IPV4_ADDR + i * PPE_RSS_HASH_FIN_IPV4_INC; + + ret = regmap_write(ppe_dev->regmap, reg, val); + if (ret) + return ret; + } + } + + if (mode & PPE_RSS_HASH_MODE_IPV6) { + val =
FIELD_PREP(PPE_RSS_HASH_MASK_HASH_MASK, cfg.hash_mask); + val |= FIELD_PREP(PPE_RSS_HASH_MASK_FRAGMENT, cfg.hash_fragment_mode); + ret = regmap_write(ppe_dev->regmap, PPE_RSS_HASH_MASK_ADDR, val); + if (ret) + return ret; + + val = FIELD_PREP(PPE_RSS_HASH_SEED_VAL, cfg.hash_seed); + ret = regmap_write(ppe_dev->regmap, PPE_RSS_HASH_SEED_ADDR, val); + if (ret) + return ret; + + for (i = 0; i < PPE_RSS_HASH_MIX_ENTRIES; i++) { + ret = ppe_rss_hash_ipv6_config(ppe_dev, i, cfg); + if (ret) + return ret; + } + + for (i = 0; i < PPE_RSS_HASH_FIN_ENTRIES; i++) { + val = FIELD_PREP(PPE_RSS_HASH_FIN_INNER, cfg.hash_fin_inner[i]); + val |= FIELD_PREP(PPE_RSS_HASH_FIN_OUTER, cfg.hash_fin_outer[i]); + reg = PPE_RSS_HASH_FIN_ADDR + i * PPE_RSS_HASH_FIN_INC; + + ret = regmap_write(ppe_dev->regmap, reg, val); + if (ret) + return ret; + } + } + + return 0; +} + static int ppe_config_bm_threshold(struct ppe_device *ppe_dev, int bm_port_id, struct ppe_bm_port_config port_cfg) { @@ -1666,6 +1803,57 @@ static int ppe_port_config_init(struct ppe_device *ppe_dev) return ppe_counter_enable_set(ppe_dev, 0, true); } +/* Initialize the PPE RSS configuration for IPv4 and IPv6 packet receive. + * RSS settings are used to calculate the random RSS hash value generated + * during packet receive. This hash is then used to generate the queue offset + * used to determine the queue used to transmit the packet. + */ +static int ppe_rss_hash_init(struct ppe_device *ppe_dev) +{ + u16 fins[PPE_RSS_HASH_TUPLES] = { 0x205, 0x264, 0x227, 0x245, 0x201 }; + u8 ips[PPE_RSS_HASH_IP_LENGTH] = { 0x13, 0xb, 0x13, 0xb }; + struct ppe_rss_hash_cfg hash_cfg; + int i, ret; + + hash_cfg.hash_seed = get_random_u32(); + hash_cfg.hash_mask = 0xfff; + + /* Use 5 tuple as RSS hash key for the first fragment of TCP, UDP + * and UDP-Lite packets. + */ + hash_cfg.hash_fragment_mode = false; + + /* The final common seed configs used to calculate the RSS hash value, + * which is available for both IPv4 and IPv6 packets.
+ */ + for (i = 0; i < ARRAY_SIZE(fins); i++) { + hash_cfg.hash_fin_inner[i] = fins[i] & 0x1f; + hash_cfg.hash_fin_outer[i] = fins[i] >> 5; + } + + /* RSS seeds for IP protocol, L4 destination & source port and + * destination & source IP used to calculate the RSS hash value. + */ + hash_cfg.hash_protocol_mix = 0x13; + hash_cfg.hash_dport_mix = 0xb; + hash_cfg.hash_sport_mix = 0x13; + hash_cfg.hash_dip_mix[0] = 0xb; + hash_cfg.hash_sip_mix[0] = 0x13; + + /* Configure RSS seed configs for IPv4 packet. */ + ret = ppe_rss_hash_config_set(ppe_dev, PPE_RSS_HASH_MODE_IPV4, hash_cfg); + if (ret) + return ret; + + for (i = 0; i < ARRAY_SIZE(ips); i++) { + hash_cfg.hash_sip_mix[i] = ips[i]; + hash_cfg.hash_dip_mix[i] = ips[i]; + } + + /* Configure RSS seed configs for IPv6 packet. */ + return ppe_rss_hash_config_set(ppe_dev, PPE_RSS_HASH_MODE_IPV6, hash_cfg); +} + int ppe_hw_config(struct ppe_device *ppe_dev) { int ret; @@ -1690,5 +1878,9 @@ int ppe_hw_config(struct ppe_device *ppe_dev) if (ret) return ret; - return ppe_port_config_init(ppe_dev); + ret = ppe_port_config_init(ppe_dev); + if (ret) + return ret; + + return ppe_rss_hash_init(ppe_dev); } diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h index d5ffc48460df..6190e2d53aa8 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h @@ -23,6 +23,12 @@ /* The service code is used by EDMA port to transmit packet to PPE. */ #define PPE_EDMA_SC_BYPASS_ID 1 +/* The PPE RSS hash configured for IPv4 and IPv6 packet separately. */ +#define PPE_RSS_HASH_MODE_IPV4 BIT(0) +#define PPE_RSS_HASH_MODE_IPV6 BIT(1) +#define PPE_RSS_HASH_IP_LENGTH 4 +#define PPE_RSS_HASH_TUPLES 5 + /** * enum ppe_scheduler_frame_mode - PPE scheduler frame mode. 
* @PPE_SCH_WITH_IPG_PREAMBLE_FRAME_CRC: The scheduled frame includes IPG, @@ -243,6 +249,37 @@ enum ppe_action_type { PPE_ACTION_REDIRECT_TO_CPU = 3, }; +/** + * struct ppe_rss_hash_cfg - PPE RSS hash configuration. + * @hash_mask: Mask of the generated hash value. + * @hash_fragment_mode: Hash generation mode for the first fragment of + * TCP, UDP and UDP-Lite packets, to use either 3 tuple or 5 tuple for + * RSS hash key computation. + * @hash_seed: Seed to generate RSS hash. + * @hash_sip_mix: Source IP selection. + * @hash_dip_mix: Destination IP selection. + * @hash_protocol_mix: Protocol selection. + * @hash_sport_mix: Source L4 port selection. + * @hash_dport_mix: Destination L4 port selection. + * @hash_fin_inner: RSS hash value first selection. + * @hash_fin_outer: RSS hash value second selection. + * + * PPE RSS hash value is generated for the packet based on the RSS hash + * configured. + */ +struct ppe_rss_hash_cfg { + u32 hash_mask; + bool hash_fragment_mode; + u32 hash_seed; + u8 hash_sip_mix[PPE_RSS_HASH_IP_LENGTH]; + u8 hash_dip_mix[PPE_RSS_HASH_IP_LENGTH]; + u8 hash_protocol_mix; + u8 hash_sport_mix; + u8 hash_dport_mix; + u8 hash_fin_inner[PPE_RSS_HASH_TUPLES]; + u8 hash_fin_outer[PPE_RSS_HASH_TUPLES]; +}; + int ppe_hw_config(struct ppe_device *ppe_dev); int ppe_queue_scheduler_set(struct ppe_device *ppe_dev, int node_id, bool flow_level, int port, @@ -265,4 +302,6 @@ int ppe_port_resource_get(struct ppe_device *ppe_dev, int port, int ppe_sc_config_set(struct ppe_device *ppe_dev, int sc, struct ppe_sc_cfg cfg); int ppe_counter_enable_set(struct ppe_device *ppe_dev, int port, bool enable); +int ppe_rss_hash_config_set(struct ppe_device *ppe_dev, int mode, + struct ppe_rss_hash_cfg hash_cfg); #endif diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h index e4596ffe04f6..5aa46c41e066 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h @@
-16,6 +16,46 @@ #define PPE_BM_SCH_CTRL_SCH_OFFSET GENMASK(14, 8) #define PPE_BM_SCH_CTRL_SCH_EN BIT(31) +/* The RSS settings control how the RSS hash value is calculated for the + * received packet. The hash is then used to generate the queue offset that + * determines the queue used to transmit the packet to the ARM cores. + */ +#define PPE_RSS_HASH_MASK_ADDR 0xb4318 +#define PPE_RSS_HASH_MASK_HASH_MASK GENMASK(20, 0) +#define PPE_RSS_HASH_MASK_FRAGMENT BIT(28) + +#define PPE_RSS_HASH_SEED_ADDR 0xb431c +#define PPE_RSS_HASH_SEED_VAL GENMASK(31, 0) + +#define PPE_RSS_HASH_MIX_ADDR 0xb4320 +#define PPE_RSS_HASH_MIX_ENTRIES 11 +#define PPE_RSS_HASH_MIX_INC 4 +#define PPE_RSS_HASH_MIX_VAL GENMASK(4, 0) + +#define PPE_RSS_HASH_FIN_ADDR 0xb4350 +#define PPE_RSS_HASH_FIN_ENTRIES 5 +#define PPE_RSS_HASH_FIN_INC 4 +#define PPE_RSS_HASH_FIN_INNER GENMASK(4, 0) +#define PPE_RSS_HASH_FIN_OUTER GENMASK(9, 5) + +#define PPE_RSS_HASH_MASK_IPV4_ADDR 0xb4380 +#define PPE_RSS_HASH_MASK_IPV4_HASH_MASK GENMASK(20, 0) +#define PPE_RSS_HASH_MASK_IPV4_FRAGMENT BIT(28) + +#define PPE_RSS_HASH_SEED_IPV4_ADDR 0xb4384 +#define PPE_RSS_HASH_SEED_IPV4_VAL GENMASK(31, 0) + +#define PPE_RSS_HASH_MIX_IPV4_ADDR 0xb4390 +#define PPE_RSS_HASH_MIX_IPV4_ENTRIES 5 +#define PPE_RSS_HASH_MIX_IPV4_INC 4 +#define PPE_RSS_HASH_MIX_IPV4_VAL GENMASK(4, 0) + +#define PPE_RSS_HASH_FIN_IPV4_ADDR 0xb43b0 +#define PPE_RSS_HASH_FIN_IPV4_ENTRIES 5 +#define PPE_RSS_HASH_FIN_IPV4_INC 4 +#define PPE_RSS_HASH_FIN_IPV4_INNER GENMASK(4, 0) +#define PPE_RSS_HASH_FIN_IPV4_OUTER GENMASK(9, 5) + #define PPE_BM_SCH_CFG_TBL_ADDR 0xc000 #define PPE_BM_SCH_CFG_TBL_ENTRIES 128 #define PPE_BM_SCH_CFG_TBL_INC 0x10 From patchwork Wed Jan 8 13:47:19 2025
From: Luo Jie Date: Wed, 8 Jan 2025 21:47:19 +0800 Subject: [PATCH net-next v2 12/14] net: ethernet: qualcomm: Initialize PPE L2 bridge settings Message-ID: <20250108-qcom_ipq_ppe-v2-12-7394dbda7199@quicinc.com> References: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lei Wei , Suruchi Agarwal , Pavithra R , "Simon Horman" , Jonathan Corbet , Kees Cook , "Gustavo A. R. 
Silva" , "Philipp Zabel" CC: Luo Jie From: Lei Wei Configure the default L2 bridge settings for the PPE ports to enable L2 frame forwarding between the CPU port and the PPE Ethernet ports. The per-port L2 bridge settings are initialized as follows: for the PPE CPU port, the PPE bridge TX is enabled and FDB learning is disabled; for the PPE physical ports, the PPE bridge TX is disabled, FDB learning is enabled by default, and the L2 forward action is initialized to forward to the CPU port.
Signed-off-by: Lei Wei Signed-off-by: Luo Jie --- drivers/net/ethernet/qualcomm/ppe/ppe_config.c | 74 +++++++++++++++++++++++++- drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 50 +++++++++++++++++ 2 files changed, 123 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c index 39e4d19c2bd6..ab008aa1c23c 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c @@ -1895,6 +1895,74 @@ static int ppe_queues_to_ring_init(struct ppe_device *ppe_dev) return ppe_ring_queue_map_set(ppe_dev, 0, queue_bmap); } +/* Initialize PPE bridge settings to enable L2 frame receive and transmit + * between CPU port and PPE Ethernet ports. + */ +static int ppe_bridge_init(struct ppe_device *ppe_dev) +{ + u32 reg, mask, port_cfg[4], vsi_cfg[2]; + int ret, i; + + /* Configure the following settings for CPU port0: + * a.) Enable Bridge TX + * b.) Disable FDB new address learning + * c.) Disable station move address learning + */ + mask = PPE_PORT_BRIDGE_TXMAC_EN; + mask |= PPE_PORT_BRIDGE_NEW_LRN_EN; + mask |= PPE_PORT_BRIDGE_STA_MOVE_LRN_EN; + ret = regmap_update_bits(ppe_dev->regmap, + PPE_PORT_BRIDGE_CTRL_ADDR, + mask, + PPE_PORT_BRIDGE_TXMAC_EN); + if (ret) + return ret; + + for (i = 1; i < ppe_dev->num_ports; i++) { + /* Enable invalid VSI forwarding for all the physical ports + * to CPU port0, in case no VSI is assigned to the physical + * port. + */ + reg = PPE_L2_VP_PORT_TBL_ADDR + PPE_L2_VP_PORT_TBL_INC * i; + ret = regmap_bulk_read(ppe_dev->regmap, reg, + port_cfg, ARRAY_SIZE(port_cfg)); + + if (ret) + return ret; + + PPE_L2_PORT_SET_INVALID_VSI_FWD_EN(port_cfg, true); + PPE_L2_PORT_SET_DST_INFO(port_cfg, 0); + + ret = regmap_bulk_write(ppe_dev->regmap, reg, + port_cfg, ARRAY_SIZE(port_cfg)); + if (ret) + return ret; + } + + for (i = 0; i < PPE_VSI_TBL_ENTRIES; i++) { + /* Enable address learning for the bridge VSI to enable + * forwarding. 
Set VSI forward membership to include only + * CPU port0. + */ + PPE_VSI_SET_MEMBER_PORT_BITMAP(vsi_cfg, BIT(0)); + PPE_VSI_SET_UUC_BITMAP(vsi_cfg, BIT(0)); + PPE_VSI_SET_UMC_BITMAP(vsi_cfg, BIT(0)); + PPE_VSI_SET_BC_BITMAP(vsi_cfg, BIT(0)); + PPE_VSI_SET_NEW_ADDR_LRN_EN(vsi_cfg, true); + PPE_VSI_SET_NEW_ADDR_FWD_CMD(vsi_cfg, PPE_ACTION_FORWARD); + PPE_VSI_SET_STATION_MOVE_LRN_EN(vsi_cfg, true); + PPE_VSI_SET_STATION_MOVE_FWD_CMD(vsi_cfg, PPE_ACTION_FORWARD); + + reg = PPE_VSI_TBL_ADDR + PPE_VSI_TBL_INC * i; + ret = regmap_bulk_write(ppe_dev->regmap, reg, + vsi_cfg, ARRAY_SIZE(vsi_cfg)); + if (ret) + return ret; + } + + return 0; +} + int ppe_hw_config(struct ppe_device *ppe_dev) { int ret; @@ -1927,5 +1995,9 @@ int ppe_hw_config(struct ppe_device *ppe_dev) if (ret) return ret; - return ppe_queues_to_ring_init(ppe_dev); + ret = ppe_queues_to_ring_init(ppe_dev); + if (ret) + return ret; + + return ppe_bridge_init(ppe_dev); } diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h index da55b06b30b8..f23fafa35766 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h @@ -117,6 +117,14 @@ #define PPE_EG_SERVICE_SET_TX_CNT_EN(tbl_cfg, value) \ u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_EG_SERVICE_W1_TX_CNT_EN) +/* PPE port bridge configuration */ +#define PPE_PORT_BRIDGE_CTRL_ADDR 0x60300 +#define PPE_PORT_BRIDGE_CTRL_ENTRIES 8 +#define PPE_PORT_BRIDGE_CTRL_INC 4 +#define PPE_PORT_BRIDGE_NEW_LRN_EN BIT(0) +#define PPE_PORT_BRIDGE_STA_MOVE_LRN_EN BIT(3) +#define PPE_PORT_BRIDGE_TXMAC_EN BIT(16) + /* PPE port control configurations for the traffic to the multicast queues. 
*/ #define PPE_MC_MTU_CTRL_TBL_ADDR 0x60a00 #define PPE_MC_MTU_CTRL_TBL_ENTRIES 8 @@ -125,6 +133,36 @@ #define PPE_MC_MTU_CTRL_TBL_MTU_CMD GENMASK(15, 14) #define PPE_MC_MTU_CTRL_TBL_TX_CNT_EN BIT(16) +/* PPE VSI configurations */ +#define PPE_VSI_TBL_ADDR 0x63800 +#define PPE_VSI_TBL_ENTRIES 64 +#define PPE_VSI_TBL_INC 0x10 +#define PPE_VSI_W0_MEMBER_PORT_BITMAP GENMASK(7, 0) +#define PPE_VSI_W0_UUC_BITMAP GENMASK(15, 8) +#define PPE_VSI_W0_UMC_BITMAP GENMASK(23, 16) +#define PPE_VSI_W0_BC_BITMAP GENMASK(31, 24) +#define PPE_VSI_W1_NEW_ADDR_LRN_EN BIT(0) +#define PPE_VSI_W1_NEW_ADDR_FWD_CMD GENMASK(2, 1) +#define PPE_VSI_W1_STATION_MOVE_LRN_EN BIT(3) +#define PPE_VSI_W1_STATION_MOVE_FWD_CMD GENMASK(5, 4) + +#define PPE_VSI_SET_MEMBER_PORT_BITMAP(tbl_cfg, value) \ + u32p_replace_bits((u32 *)tbl_cfg, value, PPE_VSI_W0_MEMBER_PORT_BITMAP) +#define PPE_VSI_SET_UUC_BITMAP(tbl_cfg, value) \ + u32p_replace_bits((u32 *)tbl_cfg, value, PPE_VSI_W0_UUC_BITMAP) +#define PPE_VSI_SET_UMC_BITMAP(tbl_cfg, value) \ + u32p_replace_bits((u32 *)tbl_cfg, value, PPE_VSI_W0_UMC_BITMAP) +#define PPE_VSI_SET_BC_BITMAP(tbl_cfg, value) \ + u32p_replace_bits((u32 *)tbl_cfg, value, PPE_VSI_W0_BC_BITMAP) +#define PPE_VSI_SET_NEW_ADDR_LRN_EN(tbl_cfg, value) \ + u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_VSI_W1_NEW_ADDR_LRN_EN) +#define PPE_VSI_SET_NEW_ADDR_FWD_CMD(tbl_cfg, value) \ + u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_VSI_W1_NEW_ADDR_FWD_CMD) +#define PPE_VSI_SET_STATION_MOVE_LRN_EN(tbl_cfg, value) \ + u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_VSI_W1_STATION_MOVE_LRN_EN) +#define PPE_VSI_SET_STATION_MOVE_FWD_CMD(tbl_cfg, value) \ + u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_VSI_W1_STATION_MOVE_FWD_CMD) + /* PPE port control configurations for the traffic to the unicast queues. 
*/ #define PPE_MRU_MTU_CTRL_TBL_ADDR 0x65000 #define PPE_MRU_MTU_CTRL_TBL_ENTRIES 256 @@ -163,6 +201,18 @@ #define PPE_IN_L2_SERVICE_TBL_RX_CNT_EN BIT(30) #define PPE_IN_L2_SERVICE_TBL_TX_CNT_EN BIT(31) +/* L2 Port configurations */ +#define PPE_L2_VP_PORT_TBL_ADDR 0x98000 +#define PPE_L2_VP_PORT_TBL_ENTRIES 256 +#define PPE_L2_VP_PORT_TBL_INC 0x10 +#define PPE_L2_VP_PORT_W0_INVALID_VSI_FWD_EN BIT(0) +#define PPE_L2_VP_PORT_W0_DST_INFO GENMASK(9, 2) + +#define PPE_L2_PORT_SET_INVALID_VSI_FWD_EN(tbl_cfg, value) \ + u32p_replace_bits((u32 *)tbl_cfg, value, PPE_L2_VP_PORT_W0_INVALID_VSI_FWD_EN) +#define PPE_L2_PORT_SET_DST_INFO(tbl_cfg, value) \ + u32p_replace_bits((u32 *)tbl_cfg, value, PPE_L2_VP_PORT_W0_DST_INFO) + /* PPE service code configuration for the tunnel packet. */ #define PPE_TL_SERVICE_TBL_ADDR 0x306000 #define PPE_TL_SERVICE_TBL_ENTRIES 256 From patchwork Wed Jan 8 13:47:20 2025
From: Luo Jie Date: Wed, 8 Jan 2025 21:47:20 +0800 Subject: [PATCH net-next v2 13/14] net: ethernet: qualcomm: Add PPE debugfs support for PPE counters Message-ID: <20250108-qcom_ipq_ppe-v2-13-7394dbda7199@quicinc.com> References: <20250108-qcom_ipq_ppe-v2-0-7394dbda7199@quicinc.com> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lei Wei , Suruchi Agarwal , Pavithra R , "Simon Horman" , Jonathan Corbet , Kees Cook , "Gustavo A. R. 
Silva" , "Philipp Zabel" CC: Luo Jie The PPE hardware packet counters are made available through the debugfs entry "/sys/kernel/debug/ppe/packet_counters". 
Signed-off-by: Luo Jie --- drivers/net/ethernet/qualcomm/ppe/Makefile | 2 +- drivers/net/ethernet/qualcomm/ppe/ppe.c | 11 + drivers/net/ethernet/qualcomm/ppe/ppe.h | 3 + drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c | 692 ++++++++++++++++++++++++ drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h | 16 + drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 102 ++++ 6 files changed, 825 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/qualcomm/ppe/Makefile b/drivers/net/ethernet/qualcomm/ppe/Makefile index 410a7bb54cfe..9e60b2400c16 100644 --- a/drivers/net/ethernet/qualcomm/ppe/Makefile +++ b/drivers/net/ethernet/qualcomm/ppe/Makefile @@ -4,4 +4,4 @@ # obj-$(CONFIG_QCOM_PPE) += qcom-ppe.o -qcom-ppe-objs := ppe.o ppe_config.o +qcom-ppe-objs := ppe.o ppe_config.o ppe_debugfs.o diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c index e8aa4eabaa7f..70fdaf4b4375 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe.c +++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c @@ -16,6 +16,7 @@ #include "ppe.h" #include "ppe_config.h" +#include "ppe_debugfs.h" #define PPE_PORT_MAX 8 #define PPE_CLK_RATE 353000000 @@ -199,11 +200,20 @@ static int qcom_ppe_probe(struct platform_device *pdev) if (ret) return dev_err_probe(dev, ret, "PPE HW config failed\n"); + ppe_debugfs_setup(ppe_dev); platform_set_drvdata(pdev, ppe_dev); return 0; } +static void qcom_ppe_remove(struct platform_device *pdev) +{ + struct ppe_device *ppe_dev; + + ppe_dev = platform_get_drvdata(pdev); + ppe_debugfs_teardown(ppe_dev); +} + static const struct of_device_id qcom_ppe_of_match[] = { { .compatible = "qcom,ipq9574-ppe" }, {}, @@ -216,6 +226,7 @@ static struct platform_driver qcom_ppe_driver = { .of_match_table = qcom_ppe_of_match, }, .probe = qcom_ppe_probe, + .remove = qcom_ppe_remove, }; module_platform_driver(qcom_ppe_driver); diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.h b/drivers/net/ethernet/qualcomm/ppe/ppe.h index cc6767b7c2b8..e9a208b77459 100644 --- 
a/drivers/net/ethernet/qualcomm/ppe/ppe.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe.h @@ -11,6 +11,7 @@ struct device; struct regmap; +struct dentry; /** * struct ppe_device - PPE device private data. @@ -18,6 +19,7 @@ struct regmap; * @regmap: PPE register map. * @clk_rate: PPE clock rate. * @num_ports: Number of PPE ports. + * @debugfs_root: Debugfs root entry. * @num_icc_paths: Number of interconnect paths. * @icc_paths: Interconnect path array. * @@ -30,6 +32,7 @@ struct ppe_device { struct regmap *regmap; unsigned long clk_rate; unsigned int num_ports; + struct dentry *debugfs_root; unsigned int num_icc_paths; struct icc_bulk_data icc_paths[] __counted_by(num_icc_paths); }; diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c new file mode 100644 index 000000000000..6ae05aefe966 --- /dev/null +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c @@ -0,0 +1,692 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +/* PPE debugfs routines for display of PPE counters useful for debug. 
*/ + +#include +#include +#include +#include + +#include "ppe.h" +#include "ppe_config.h" +#include "ppe_debugfs.h" +#include "ppe_regs.h" + +#define PPE_PKT_CNT_TBL_SIZE 3 +#define PPE_DROP_PKT_CNT_TBL_SIZE 5 + +#define PPE_W0_PKT_CNT GENMASK(31, 0) +#define PPE_W2_DROP_PKT_CNT_LOW GENMASK(31, 8) +#define PPE_W3_DROP_PKT_CNT_HIGH GENMASK(7, 0) + +#define PPE_GET_PKT_CNT(tbl_cnt) \ + u32_get_bits(*((u32 *)(tbl_cnt)), PPE_W0_PKT_CNT) +#define PPE_GET_DROP_PKT_CNT_LOW(tbl_cnt) \ + u32_get_bits(*((u32 *)(tbl_cnt) + 0x2), PPE_W2_DROP_PKT_CNT_LOW) +#define PPE_GET_DROP_PKT_CNT_HIGH(tbl_cnt) \ + u32_get_bits(*((u32 *)(tbl_cnt) + 0x3), PPE_W3_DROP_PKT_CNT_HIGH) + +#define PRINT_COUNTER_PREFIX(desc, cnt_type) \ + seq_printf(seq, "%-16s %16s", desc, cnt_type) + +#define PRINT_CPU_CODE_COUNTER(cnt, code) \ + seq_printf(seq, "%10u(cpucode:%d)", cnt, code) + +#define PRINT_DROP_CODE_COUNTER(cnt, port, code) \ + seq_printf(seq, "%10u(port=%d),dropcode:%d", cnt, port, code) + +#define PRINT_SINGLE_COUNTER(tag, cnt, str, index) \ +do { \ + if (!((tag) % 4)) \ + seq_printf(seq, "\n%-16s %16s", "", ""); \ + seq_printf(seq, "%10u(%s=%04d)", cnt, str, index); \ +} while (0) + +#define PRINT_TWO_COUNTERS(tag, cnt0, cnt1, str, index) \ +do { \ + if (!((tag) % 4)) \ + seq_printf(seq, "\n%-16s %16s", "", ""); \ + seq_printf(seq, "%10u/%u(%s=%04d)", cnt0, cnt1, str, index); \ +} while (0) + +/** + * enum ppe_cnt_size_type - PPE counter size type + * @PPE_PKT_CNT_SIZE_1WORD: Counter size with single register + * @PPE_PKT_CNT_SIZE_3WORD: Counter size with table of 3 words + * @PPE_PKT_CNT_SIZE_5WORD: Counter size with table of 5 words + * + * PPE uses registers of different sizes to record the packet counters: + * either a single register, or a register table entry of 3 or 5 words. + * The 5-word table entry also records the drop counter. + * There are also some other counter types occupying sizes less than 32 + * bits, which are not covered by this enumeration type. 
+ */ +enum ppe_cnt_size_type { + PPE_PKT_CNT_SIZE_1WORD, + PPE_PKT_CNT_SIZE_3WORD, + PPE_PKT_CNT_SIZE_5WORD, +}; + +static int ppe_pkt_cnt_get(struct ppe_device *ppe_dev, u32 reg, + enum ppe_cnt_size_type cnt_type, + u32 *cnt, u32 *drop_cnt) +{ + u32 drop_pkt_cnt[PPE_DROP_PKT_CNT_TBL_SIZE]; + u32 pkt_cnt[PPE_PKT_CNT_TBL_SIZE]; + u32 value; + int ret; + + switch (cnt_type) { + case PPE_PKT_CNT_SIZE_1WORD: + ret = regmap_read(ppe_dev->regmap, reg, &value); + if (ret) + return ret; + + *cnt = value; + break; + case PPE_PKT_CNT_SIZE_3WORD: + ret = regmap_bulk_read(ppe_dev->regmap, reg, + pkt_cnt, ARRAY_SIZE(pkt_cnt)); + if (ret) + return ret; + + *cnt = PPE_GET_PKT_CNT(pkt_cnt); + break; + case PPE_PKT_CNT_SIZE_5WORD: + ret = regmap_bulk_read(ppe_dev->regmap, reg, + drop_pkt_cnt, ARRAY_SIZE(drop_pkt_cnt)); + if (ret) + return ret; + + *cnt = PPE_GET_PKT_CNT(drop_pkt_cnt); + + /* Drop counter with low 24 bits. */ + value = PPE_GET_DROP_PKT_CNT_LOW(drop_pkt_cnt); + *drop_cnt = FIELD_PREP(GENMASK(23, 0), value); + + /* Drop counter with high 8 bits. */ + value = PPE_GET_DROP_PKT_CNT_HIGH(drop_pkt_cnt); + *drop_cnt |= FIELD_PREP(GENMASK(31, 24), value); + break; + } + + return 0; +} + +static void ppe_tbl_pkt_cnt_clear(struct ppe_device *ppe_dev, u32 reg, + enum ppe_cnt_size_type cnt_type) +{ + u32 drop_pkt_cnt[PPE_DROP_PKT_CNT_TBL_SIZE] = {}; + u32 pkt_cnt[PPE_PKT_CNT_TBL_SIZE] = {}; + + switch (cnt_type) { + case PPE_PKT_CNT_SIZE_1WORD: + regmap_write(ppe_dev->regmap, reg, 0); + break; + case PPE_PKT_CNT_SIZE_3WORD: + regmap_bulk_write(ppe_dev->regmap, reg, + pkt_cnt, ARRAY_SIZE(pkt_cnt)); + break; + case PPE_PKT_CNT_SIZE_5WORD: + regmap_bulk_write(ppe_dev->regmap, reg, + drop_pkt_cnt, ARRAY_SIZE(drop_pkt_cnt)); + break; + } +} + +/* The number of packets dropped because no buffer was available, i.e. no + * PPE buffer was assigned to these packets. 
+ */ +static void ppe_port_rx_drop_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + int ret, i, tag = 0; + u32 reg, drop_cnt; + + PRINT_COUNTER_PREFIX("PRX_DROP_CNT", "SILENT_DROP:"); + for (i = 0; i < PPE_DROP_CNT_TBL_ENTRIES; i++) { + reg = PPE_DROP_CNT_TBL_ADDR + i * PPE_DROP_CNT_TBL_INC; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD, + &drop_cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (drop_cnt > 0) { + tag++; + PRINT_SINGLE_COUNTER(tag, drop_cnt, "port", i); + } + } + + seq_putc(seq, '\n'); +} + +/* The number of packets dropped because hardware buffers were only + * partially available for the packet. + */ +static void ppe_port_rx_bm_drop_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, pkt_cnt = 0; + int ret, i, tag = 0; + + PRINT_COUNTER_PREFIX("PRX_BM_DROP_CNT", "OVERFLOW_DROP:"); + for (i = 0; i < PPE_DROP_STAT_TBL_ENTRIES; i++) { + reg = PPE_DROP_STAT_TBL_ADDR + PPE_DROP_STAT_TBL_INC * i; + + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (pkt_cnt > 0) { + tag++; + PRINT_SINGLE_COUNTER(tag, pkt_cnt, "port", i); + } + } + + seq_putc(seq, '\n'); +} + +/* The number of currently occupied buffers that cannot be flushed. */ +static void ppe_port_rx_bm_port_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + int used_cnt, react_cnt; + int ret, i, tag = 0; + u32 reg, val; + + PRINT_COUNTER_PREFIX("PRX_BM_PORT_CNT", "USED/REACT:"); + for (i = 0; i < PPE_BM_USED_CNT_TBL_ENTRIES; i++) { + reg = PPE_BM_USED_CNT_TBL_ADDR + i * PPE_BM_USED_CNT_TBL_INC; + ret = regmap_read(ppe_dev->regmap, reg, &val); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + /* The number of PPE buffers used for caching the received + * packets before the pause frame is sent. 
+ */ + used_cnt = FIELD_GET(PPE_BM_USED_CNT_VAL, val); + + reg = PPE_BM_REACT_CNT_TBL_ADDR + i * PPE_BM_REACT_CNT_TBL_INC; + ret = regmap_read(ppe_dev->regmap, reg, &val); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + /* The number of PPE buffers used for caching the received + * packets after the pause frame is sent out. + */ + react_cnt = FIELD_GET(PPE_BM_REACT_CNT_VAL, val); + + if (used_cnt > 0 || react_cnt > 0) { + tag++; + PRINT_TWO_COUNTERS(tag, used_cnt, react_cnt, "port", i); + } + } + + seq_putc(seq, '\n'); +} + +/* The number of packets processed by the ingress parser module of PPE. */ +static void ppe_parse_pkt_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, cnt, tunnel_cnt; + int i, ret, tag = 0; + + PRINT_COUNTER_PREFIX("IPR_PKT_CNT", "TPRX/IPRX:"); + for (i = 0; i < PPE_IPR_PKT_CNT_TBL_ENTRIES; i++) { + reg = PPE_TPR_PKT_CNT_TBL_ADDR + i * PPE_TPR_PKT_CNT_TBL_INC; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD, + &tunnel_cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + reg = PPE_IPR_PKT_CNT_TBL_ADDR + i * PPE_IPR_PKT_CNT_TBL_INC; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD, + &cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (tunnel_cnt > 0 || cnt > 0) { + tag++; + PRINT_TWO_COUNTERS(tag, tunnel_cnt, cnt, "port", i); + } + } + + seq_putc(seq, '\n'); +} + +/* The number of packets received or dropped in the ingress direction. 
*/ +static void ppe_port_rx_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, pkt_cnt, drop_cnt; + int ret, i, tag = 0; + + PRINT_COUNTER_PREFIX("PORT_RX_CNT", "RX/RX_DROP:"); + for (i = 0; i < PPE_PHY_PORT_RX_CNT_TBL_ENTRIES; i++) { + reg = PPE_PHY_PORT_RX_CNT_TBL_ADDR + PPE_PHY_PORT_RX_CNT_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD, + &pkt_cnt, &drop_cnt); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (pkt_cnt > 0) { + tag++; + PRINT_TWO_COUNTERS(tag, pkt_cnt, drop_cnt, "port", i); + } + } + + seq_putc(seq, '\n'); +} + +/* The number of packets received or dropped by the port. */ +static void ppe_vp_rx_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, pkt_cnt, drop_cnt; + int ret, i, tag = 0; + + PRINT_COUNTER_PREFIX("VPORT_RX_CNT", "RX/RX_DROP:"); + for (i = 0; i < PPE_PORT_RX_CNT_TBL_ENTRIES; i++) { + reg = PPE_PORT_RX_CNT_TBL_ADDR + PPE_PORT_RX_CNT_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD, + &pkt_cnt, &drop_cnt); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (pkt_cnt > 0) { + tag++; + PRINT_TWO_COUNTERS(tag, pkt_cnt, drop_cnt, "port", i); + } + } + + seq_putc(seq, '\n'); +} + +/* The number of packets received or dropped by layer 2 processing. */ +static void ppe_pre_l2_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, pkt_cnt, drop_cnt; + int ret, i, tag = 0; + + PRINT_COUNTER_PREFIX("PRE_L2_CNT", "RX/RX_DROP:"); + for (i = 0; i < PPE_PRE_L2_CNT_TBL_ENTRIES; i++) { + reg = PPE_PRE_L2_CNT_TBL_ADDR + PPE_PRE_L2_CNT_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD, + &pkt_cnt, &drop_cnt); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (pkt_cnt > 0) { + tag++; + PRINT_TWO_COUNTERS(tag, pkt_cnt, drop_cnt, "vsi", i); + } + } + + seq_putc(seq, '\n'); +} + +/* The number of VLAN packets received by PPE. 
*/ +static void ppe_vlan_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, pkt_cnt = 0; + int ret, i, tag = 0; + + PRINT_COUNTER_PREFIX("VLAN_CNT", "RX:"); + for (i = 0; i < PPE_VLAN_CNT_TBL_ENTRIES; i++) { + reg = PPE_VLAN_CNT_TBL_ADDR + PPE_VLAN_CNT_TBL_INC * i; + + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (pkt_cnt > 0) { + tag++; + PRINT_SINGLE_COUNTER(tag, pkt_cnt, "vsi", i); + } + } + + seq_putc(seq, '\n'); +} + +/* The number of packets handed to CPU by PPE. */ +static void ppe_cpu_code_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, pkt_cnt = 0; + int ret, i; + + PRINT_COUNTER_PREFIX("CPU_CODE_CNT", "CODE:"); + for (i = 0; i < PPE_DROP_CPU_CNT_TBL_ENTRIES; i++) { + reg = PPE_DROP_CPU_CNT_TBL_ADDR + PPE_DROP_CPU_CNT_TBL_INC * i; + + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (!pkt_cnt) + continue; + + /* There are 256 CPU codes saved in the first 256 entries + * of the register table, and 128 drop codes for each PPE port + * (0-7), so the total number of entries is 256 + 8 * 128. + */ + if (i < 256) + PRINT_CPU_CODE_COUNTER(pkt_cnt, i); + else + PRINT_DROP_CODE_COUNTER(pkt_cnt, (i - 256) % 8, + (i - 256) / 8); + seq_putc(seq, '\n'); + PRINT_COUNTER_PREFIX("", ""); + } + + seq_putc(seq, '\n'); +} + +/* The number of packets forwarded by VLAN in the egress direction. 
*/ +static void ppe_eg_vsi_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, pkt_cnt = 0; + int ret, i, tag = 0; + + PRINT_COUNTER_PREFIX("EG_VSI_CNT", "TX:"); + for (i = 0; i < PPE_EG_VSI_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_EG_VSI_COUNTER_TBL_ADDR + PPE_EG_VSI_COUNTER_TBL_INC * i; + + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (pkt_cnt > 0) { + tag++; + PRINT_SINGLE_COUNTER(tag, pkt_cnt, "vsi", i); + } + } + + seq_putc(seq, '\n'); +} + +/* The number of packets trasmitted or dropped by port. */ +static void ppe_vp_tx_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, pkt_cnt = 0, drop_cnt = 0; + int ret, i, tag = 0; + + PRINT_COUNTER_PREFIX("VPORT_TX_CNT", "TX/TX_DROP:"); + for (i = 0; i < PPE_VPORT_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_VPORT_TX_COUNTER_TBL_ADDR + PPE_VPORT_TX_COUNTER_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + reg = PPE_VPORT_TX_DROP_CNT_TBL_ADDR + PPE_VPORT_TX_DROP_CNT_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &drop_cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (pkt_cnt > 0 || drop_cnt > 0) { + tag++; + PRINT_TWO_COUNTERS(tag, pkt_cnt, drop_cnt, "port", i); + } + } + + seq_putc(seq, '\n'); +} + +/* The number of packets trasmitted or dropped on the egress direction. 
*/ +static void ppe_port_tx_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, pkt_cnt = 0, drop_cnt = 0; + int ret, i, tag = 0; + + PRINT_COUNTER_PREFIX("PORT_TX_CNT", "TX/TX_DROP:"); + for (i = 0; i < PPE_PORT_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_PORT_TX_COUNTER_TBL_ADDR + PPE_PORT_TX_COUNTER_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + reg = PPE_PORT_TX_DROP_CNT_TBL_ADDR + PPE_PORT_TX_DROP_CNT_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &drop_cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (pkt_cnt > 0 || drop_cnt > 0) { + tag++; + PRINT_TWO_COUNTERS(tag, pkt_cnt, drop_cnt, "port", i); + } + } + + seq_putc(seq, '\n'); +} + +/* The number of packets transmitted or pending by the PPE queue. */ +static void ppe_queue_tx_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, val, pkt_cnt = 0, pend_cnt = 0; + int ret, i, tag = 0; + + PRINT_COUNTER_PREFIX("QUEUE_TX_CNT", "TX/PEND:"); + for (i = 0; i < PPE_QUEUE_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_QUEUE_TX_COUNTER_TBL_ADDR + PPE_QUEUE_TX_COUNTER_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + if (i < PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES) { + reg = PPE_AC_UNICAST_QUEUE_CNT_TBL_ADDR + + PPE_AC_UNICAST_QUEUE_CNT_TBL_INC * i; + ret = regmap_read(ppe_dev->regmap, reg, &val); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + pend_cnt = FIELD_GET(PPE_AC_UNICAST_QUEUE_CNT_TBL_PEND_CNT, val); + } else { + reg = PPE_AC_MULTICAST_QUEUE_CNT_TBL_ADDR + + PPE_AC_MULTICAST_QUEUE_CNT_TBL_INC * + (i - PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES); + ret = regmap_read(ppe_dev->regmap, reg, &val); + if (ret) { + seq_printf(seq, "ERROR %d\n", ret); + return; + } + + 
pend_cnt = FIELD_GET(PPE_AC_MULTICAST_QUEUE_CNT_TBL_PEND_CNT, val); + } + + if (pkt_cnt > 0 || pend_cnt > 0) { + tag++; + PRINT_TWO_COUNTERS(tag, pkt_cnt, pend_cnt, "queue", i); + } + } + + seq_putc(seq, '\n'); +} + +/* Display the various packet counters of PPE. */ +static int ppe_packet_counter_show(struct seq_file *seq, void *v) +{ + struct ppe_device *ppe_dev = seq->private; + + ppe_port_rx_drop_counter_get(ppe_dev, seq); + ppe_port_rx_bm_drop_counter_get(ppe_dev, seq); + ppe_port_rx_bm_port_counter_get(ppe_dev, seq); + ppe_parse_pkt_counter_get(ppe_dev, seq); + ppe_port_rx_counter_get(ppe_dev, seq); + ppe_vp_rx_counter_get(ppe_dev, seq); + ppe_pre_l2_counter_get(ppe_dev, seq); + ppe_vlan_counter_get(ppe_dev, seq); + ppe_cpu_code_counter_get(ppe_dev, seq); + ppe_eg_vsi_counter_get(ppe_dev, seq); + ppe_vp_tx_counter_get(ppe_dev, seq); + ppe_port_tx_counter_get(ppe_dev, seq); + ppe_queue_tx_counter_get(ppe_dev, seq); + + return 0; +} + +static int ppe_packet_counter_open(struct inode *inode, struct file *file) +{ + return single_open(file, ppe_packet_counter_show, inode->i_private); +} + +static ssize_t ppe_packet_counter_clear(struct file *file, + const char __user *buf, + size_t count, loff_t *pos) +{ + struct ppe_device *ppe_dev = file_inode(file)->i_private; + u32 reg; + int i; + + for (i = 0; i < PPE_DROP_CNT_TBL_ENTRIES; i++) { + reg = PPE_DROP_CNT_TBL_ADDR + i * PPE_DROP_CNT_TBL_INC; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD); + } + + for (i = 0; i < PPE_DROP_STAT_TBL_ENTRIES; i++) { + reg = PPE_DROP_STAT_TBL_ADDR + PPE_DROP_STAT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + for (i = 0; i < PPE_IPR_PKT_CNT_TBL_ENTRIES; i++) { + reg = PPE_IPR_PKT_CNT_TBL_ADDR + i * PPE_IPR_PKT_CNT_TBL_INC; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD); + + reg = PPE_TPR_PKT_CNT_TBL_ADDR + i * PPE_TPR_PKT_CNT_TBL_INC; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD); + } + + for (i = 0; i < 
PPE_VLAN_CNT_TBL_ENTRIES; i++) { + reg = PPE_VLAN_CNT_TBL_ADDR + PPE_VLAN_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + for (i = 0; i < PPE_PRE_L2_CNT_TBL_ENTRIES; i++) { + reg = PPE_PRE_L2_CNT_TBL_ADDR + PPE_PRE_L2_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD); + } + + for (i = 0; i < PPE_PORT_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_PORT_TX_DROP_CNT_TBL_ADDR + PPE_PORT_TX_DROP_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + + reg = PPE_PORT_TX_COUNTER_TBL_ADDR + PPE_PORT_TX_COUNTER_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + for (i = 0; i < PPE_EG_VSI_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_EG_VSI_COUNTER_TBL_ADDR + PPE_EG_VSI_COUNTER_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + for (i = 0; i < PPE_VPORT_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_VPORT_TX_COUNTER_TBL_ADDR + PPE_VPORT_TX_COUNTER_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + + reg = PPE_VPORT_TX_DROP_CNT_TBL_ADDR + PPE_VPORT_TX_DROP_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + for (i = 0; i < PPE_QUEUE_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_QUEUE_TX_COUNTER_TBL_ADDR + PPE_QUEUE_TX_COUNTER_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + ppe_tbl_pkt_cnt_clear(ppe_dev, PPE_EPE_DBG_IN_CNT_ADDR, PPE_PKT_CNT_SIZE_1WORD); + ppe_tbl_pkt_cnt_clear(ppe_dev, PPE_EPE_DBG_OUT_CNT_ADDR, PPE_PKT_CNT_SIZE_1WORD); + + for (i = 0; i < PPE_DROP_CPU_CNT_TBL_ENTRIES; i++) { + reg = PPE_DROP_CPU_CNT_TBL_ADDR + PPE_DROP_CPU_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + for (i = 0; i < PPE_PORT_RX_CNT_TBL_ENTRIES; i++) { + reg = PPE_PORT_RX_CNT_TBL_ADDR + PPE_PORT_RX_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD); + } + + for (i = 0; i < 
PPE_PHY_PORT_RX_CNT_TBL_ENTRIES; i++) { + reg = PPE_PHY_PORT_RX_CNT_TBL_ADDR + PPE_PHY_PORT_RX_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD); + } + + return count; +} + +static const struct file_operations ppe_debugfs_packet_counter_fops = { + .owner = THIS_MODULE, + .open = ppe_packet_counter_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, + .write = ppe_packet_counter_clear, +}; + +void ppe_debugfs_setup(struct ppe_device *ppe_dev) +{ + ppe_dev->debugfs_root = debugfs_create_dir("ppe", NULL); + debugfs_create_file("packet_counters", 0444, + ppe_dev->debugfs_root, + ppe_dev, + &ppe_debugfs_packet_counter_fops); +} + +void ppe_debugfs_teardown(struct ppe_device *ppe_dev) +{ + debugfs_remove_recursive(ppe_dev->debugfs_root); + ppe_dev->debugfs_root = NULL; +} diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h new file mode 100644 index 000000000000..ba0a5b3af583 --- /dev/null +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2025 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +/* PPE debugfs counters setup. */ + +#ifndef __PPE_DEBUGFS_H__ +#define __PPE_DEBUGFS_H__ + +#include "ppe.h" + +void ppe_debugfs_setup(struct ppe_device *ppe_dev); +void ppe_debugfs_teardown(struct ppe_device *ppe_dev); + +#endif diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h index f23fafa35766..a9f3a2bc4861 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h @@ -16,6 +16,39 @@ #define PPE_BM_SCH_CTRL_SCH_OFFSET GENMASK(14, 8) #define PPE_BM_SCH_CTRL_SCH_EN BIT(31) +/* PPE drop counters. */ +#define PPE_DROP_CNT_TBL_ADDR 0xb024 +#define PPE_DROP_CNT_TBL_ENTRIES 8 +#define PPE_DROP_CNT_TBL_INC 4 + +/* BM port drop counters. 
*/ +#define PPE_DROP_STAT_TBL_ADDR 0xe000 +#define PPE_DROP_STAT_TBL_ENTRIES 30 +#define PPE_DROP_STAT_TBL_INC 0x10 + +#define PPE_EPE_DBG_IN_CNT_ADDR 0x26054 +#define PPE_EPE_DBG_OUT_CNT_ADDR 0x26070 + +/* Egress VLAN counters. */ +#define PPE_EG_VSI_COUNTER_TBL_ADDR 0x41000 +#define PPE_EG_VSI_COUNTER_TBL_ENTRIES 64 +#define PPE_EG_VSI_COUNTER_TBL_INC 0x10 + +/* Port TX counters. */ +#define PPE_PORT_TX_COUNTER_TBL_ADDR 0x45000 +#define PPE_PORT_TX_COUNTER_TBL_ENTRIES 8 +#define PPE_PORT_TX_COUNTER_TBL_INC 0x10 + +/* Virtual port TX counters. */ +#define PPE_VPORT_TX_COUNTER_TBL_ADDR 0x47000 +#define PPE_VPORT_TX_COUNTER_TBL_ENTRIES 256 +#define PPE_VPORT_TX_COUNTER_TBL_INC 0x10 + +/* Queue counters. */ +#define PPE_QUEUE_TX_COUNTER_TBL_ADDR 0x4a000 +#define PPE_QUEUE_TX_COUNTER_TBL_ENTRIES 300 +#define PPE_QUEUE_TX_COUNTER_TBL_INC 0x10 + /* RSS settings are to calculate the random RSS hash value generated during * packet receive to ARM cores. This hash is then used to generate the queue * offset used to determine the queue used to transmit the packet to ARM cores. @@ -213,6 +246,51 @@ #define PPE_L2_PORT_SET_DST_INFO(tbl_cfg, value) \ u32p_replace_bits((u32 *)tbl_cfg, value, PPE_L2_VP_PORT_W0_DST_INFO) +/* Port RX and RX drop counters. */ +#define PPE_PORT_RX_CNT_TBL_ADDR 0x150000 +#define PPE_PORT_RX_CNT_TBL_ENTRIES 256 +#define PPE_PORT_RX_CNT_TBL_INC 0x20 + +/* Physical port RX and RX drop counters. */ +#define PPE_PHY_PORT_RX_CNT_TBL_ADDR 0x156000 +#define PPE_PHY_PORT_RX_CNT_TBL_ENTRIES 8 +#define PPE_PHY_PORT_RX_CNT_TBL_INC 0x20 + +/* Counters for the packet to CPU port. */ +#define PPE_DROP_CPU_CNT_TBL_ADDR 0x160000 +#define PPE_DROP_CPU_CNT_TBL_ENTRIES 1280 +#define PPE_DROP_CPU_CNT_TBL_INC 0x10 + +/* VLAN counters. */ +#define PPE_VLAN_CNT_TBL_ADDR 0x178000 +#define PPE_VLAN_CNT_TBL_ENTRIES 64 +#define PPE_VLAN_CNT_TBL_INC 0x10 + +/* PPE L2 counters. 
*/ +#define PPE_PRE_L2_CNT_TBL_ADDR 0x17c000 +#define PPE_PRE_L2_CNT_TBL_ENTRIES 64 +#define PPE_PRE_L2_CNT_TBL_INC 0x20 + +/* Port TX drop counters. */ +#define PPE_PORT_TX_DROP_CNT_TBL_ADDR 0x17d000 +#define PPE_PORT_TX_DROP_CNT_TBL_ENTRIES 8 +#define PPE_PORT_TX_DROP_CNT_TBL_INC 0x10 + +/* Virtual port TX counters. */ +#define PPE_VPORT_TX_DROP_CNT_TBL_ADDR 0x17e000 +#define PPE_VPORT_TX_DROP_CNT_TBL_ENTRIES 256 +#define PPE_VPORT_TX_DROP_CNT_TBL_INC 0x10 + +/* Counters for the tunnel packet. */ +#define PPE_TPR_PKT_CNT_TBL_ADDR 0x1d0080 +#define PPE_TPR_PKT_CNT_TBL_ENTRIES 8 +#define PPE_TPR_PKT_CNT_TBL_INC 4 + +/* Counters for the all packet received. */ +#define PPE_IPR_PKT_CNT_TBL_ADDR 0x1e0080 +#define PPE_IPR_PKT_CNT_TBL_ENTRIES 8 +#define PPE_IPR_PKT_CNT_TBL_INC 4 + /* PPE service code configuration for the tunnel packet. */ #define PPE_TL_SERVICE_TBL_ADDR 0x306000 #define PPE_TL_SERVICE_TBL_ENTRIES 256 @@ -325,6 +403,18 @@ #define PPE_BM_PORT_GROUP_ID_INC 0x4 #define PPE_BM_PORT_GROUP_ID_SHARED_GROUP_ID GENMASK(1, 0) +/* Counters for PPE buffers used for packets cached. */ +#define PPE_BM_USED_CNT_TBL_ADDR 0x6001c0 +#define PPE_BM_USED_CNT_TBL_ENTRIES 15 +#define PPE_BM_USED_CNT_TBL_INC 0x4 +#define PPE_BM_USED_CNT_VAL GENMASK(10, 0) + +/* Counters for PPE buffers used for packets received after pause frame sent. */ +#define PPE_BM_REACT_CNT_TBL_ADDR 0x600240 +#define PPE_BM_REACT_CNT_TBL_ENTRIES 15 +#define PPE_BM_REACT_CNT_TBL_INC 0x4 +#define PPE_BM_REACT_CNT_VAL GENMASK(8, 0) + #define PPE_BM_SHARED_GROUP_CFG_ADDR 0x600290 #define PPE_BM_SHARED_GROUP_CFG_ENTRIES 4 #define PPE_BM_SHARED_GROUP_CFG_INC 0x4 @@ -449,6 +539,18 @@ #define PPE_AC_GRP_SET_BUF_LIMIT(tbl_cfg, value) \ u32p_replace_bits((u32 *)(tbl_cfg) + 0x1, value, PPE_AC_GRP_W1_BUF_LIMIT) +/* Counters for packets handled by unicast queues (0-255). 
*/ +#define PPE_AC_UNICAST_QUEUE_CNT_TBL_ADDR 0x84e000 +#define PPE_AC_UNICAST_QUEUE_CNT_TBL_ENTRIES 256 +#define PPE_AC_UNICAST_QUEUE_CNT_TBL_INC 0x10 +#define PPE_AC_UNICAST_QUEUE_CNT_TBL_PEND_CNT GENMASK(12, 0) + +/* Counters for packets handled by multicast queues (256-299). */ +#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_ADDR 0x852000 +#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_ENTRIES 44 +#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_INC 0x10 +#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_PEND_CNT GENMASK(12, 0) + /* Table addresses for per-queue enqueue setting. */ #define PPE_ENQ_OPR_TBL_ADDR 0x85c000 #define PPE_ENQ_OPR_TBL_ENTRIES 300