From patchwork Fri May 2 13:19:42 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 887261
From: Tanmay Jagdale <tanmay@marvell.com>
Subject: [net-next PATCH v1 01/15] crypto: octeontx2: Share engine group info with AF driver
Date: Fri, 2 May 2025 18:49:42 +0530
Message-ID: <20250502132005.611698-2-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>
References: <20250502132005.611698-1-tanmay@marvell.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Bharat Bhushan

CPT crypto hardware has multiple engines of different types, and the
engines of a given type are attached to one of the engine groups.
Software submits encap/decap work to these engine groups. The engine
group details are known only to the CPT crypto driver, so share them
with the AF driver using a mailbox message, to enable use cases such
as inline IPsec.

Also, there is no need to try to delete engine groups if engine group
initialization fails: engine groups can never have been created before
engine group initialization.
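For context, here is a minimal sketch (not part of the patch) of how the
AF side can consume the mapping that this mailbox populates. The helper
name rvu_cpt_find_eng_grp() is hypothetical; struct rvu_cpt and its
eng_grp[] table are the ones introduced by the diff below:

/* Hypothetical helper: return the engine group number that the CPT PF
 * driver shared via MBOX_MSG_CPT_SET_ENG_GRP_NUM for @eng_type, or a
 * negative error code if no group of that type was registered.
 */
static int rvu_cpt_find_eng_grp(struct rvu *rvu, u8 eng_type)
{
	struct rvu_cpt *rvu_cpt = &rvu->rvu_cpt;
	int i;

	for (i = 0; i < OTX2_CPT_MAX_ENG_TYPES; i++) {
		if (rvu_cpt->eng_grp[i].eng_type == eng_type)
			return rvu_cpt->eng_grp[i].grp_num;
	}

	return -ENODEV;
}

Patch 4 of this series performs essentially this lookup inline when it
selects the engine group for inline-IPsec traffic.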
Signed-off-by: Bharat Bhushan Signed-off-by: Tanmay Jagdale --- .../marvell/octeontx2/otx2_cpt_common.h | 7 -- .../marvell/octeontx2/otx2_cptpf_main.c | 4 +- .../marvell/octeontx2/otx2_cptpf_mbox.c | 1 + .../marvell/octeontx2/otx2_cptpf_ucode.c | 116 ++++++++++++++++-- .../marvell/octeontx2/otx2_cptpf_ucode.h | 3 +- .../net/ethernet/marvell/octeontx2/af/mbox.h | 16 +++ .../net/ethernet/marvell/octeontx2/af/rvu.h | 10 ++ .../ethernet/marvell/octeontx2/af/rvu_cpt.c | 21 ++++ 8 files changed, 160 insertions(+), 18 deletions(-) diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h index c5b7c57574ef..df735eab8f08 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h @@ -32,13 +32,6 @@ #define BAD_OTX2_CPT_ENG_TYPE OTX2_CPT_MAX_ENG_TYPES -enum otx2_cpt_eng_type { - OTX2_CPT_AE_TYPES = 1, - OTX2_CPT_SE_TYPES = 2, - OTX2_CPT_IE_TYPES = 3, - OTX2_CPT_MAX_ENG_TYPES, -}; - /* Take mbox id from end of CPT mbox range in AF (range 0xA00 - 0xBFF) */ #define MBOX_MSG_RX_INLINE_IPSEC_LF_CFG 0xBFE #define MBOX_MSG_GET_ENG_GRP_NUM 0xBFF diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c index 12971300296d..8a7ed0152371 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c @@ -813,7 +813,7 @@ static int otx2_cptpf_probe(struct pci_dev *pdev, sysfs_grp_del: sysfs_remove_group(&dev->kobj, &cptpf_sysfs_group); cleanup_eng_grps: - otx2_cpt_cleanup_eng_grps(pdev, &cptpf->eng_grps); + otx2_cpt_cleanup_eng_grps(cptpf); unregister_intr: cptpf_disable_afpf_mbox_intr(cptpf); destroy_afpf_mbox: @@ -843,7 +843,7 @@ static void otx2_cptpf_remove(struct pci_dev *pdev) /* Delete sysfs entry created for kernel VF limits */ sysfs_remove_group(&pdev->dev.kobj, &cptpf_sysfs_group); /* Cleanup engine groups */ - otx2_cpt_cleanup_eng_grps(pdev, &cptpf->eng_grps); + otx2_cpt_cleanup_eng_grps(cptpf); /* Disable AF-PF mailbox interrupt */ cptpf_disable_afpf_mbox_intr(cptpf); /* Destroy AF-PF mbox */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c index ec1ac7e836a3..5e6f70ac35a7 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c @@ -507,6 +507,7 @@ static void process_afpf_mbox_msg(struct otx2_cptpf_dev *cptpf, case MBOX_MSG_CPT_INLINE_IPSEC_CFG: case MBOX_MSG_NIX_INLINE_IPSEC_CFG: case MBOX_MSG_CPT_LF_RESET: + case MBOX_MSG_CPT_SET_ENG_GRP_NUM: break; default: diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c index 42c5484ce66a..17081aed173f 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c @@ -1142,6 +1142,68 @@ int otx2_cpt_get_eng_grp(struct otx2_cpt_eng_grps *eng_grps, int eng_type) return eng_grp_num; } +static int otx2_cpt_get_eng_grp_type(struct otx2_cpt_eng_grps *eng_grps, + int grp_num) +{ + struct otx2_cpt_eng_grp_info *grp; + + grp = &eng_grps->grp[grp_num]; + if (!grp->is_enabled) + return 0; + + if (eng_grp_has_eng_type(grp, OTX2_CPT_SE_TYPES) && + !eng_grp_has_eng_type(grp, OTX2_CPT_IE_TYPES)) + return OTX2_CPT_SE_TYPES; + + if (eng_grp_has_eng_type(grp, OTX2_CPT_IE_TYPES)) + return OTX2_CPT_IE_TYPES; + + if (eng_grp_has_eng_type(grp, OTX2_CPT_AE_TYPES)) + return OTX2_CPT_AE_TYPES; + 
return 0; +} + +static int otx2_cpt_set_eng_grp_num(struct otx2_cptpf_dev *cptpf, + enum otx2_cpt_eng_type eng_type, bool set) +{ + struct cpt_set_egrp_num *req; + struct pci_dev *pdev = cptpf->pdev; + + if (!eng_type || eng_type >= OTX2_CPT_MAX_ENG_TYPES) + return -EINVAL; + + req = (struct cpt_set_egrp_num *) + otx2_mbox_alloc_msg_rsp(&cptpf->afpf_mbox, 0, + sizeof(*req), sizeof(struct msg_rsp)); + if (!req) { + dev_err(&pdev->dev, "RVU MBOX failed to get message.\n"); + return -EFAULT; + } + + memset(req, 0, sizeof(*req)); + req->hdr.id = MBOX_MSG_CPT_SET_ENG_GRP_NUM; + req->hdr.sig = OTX2_MBOX_REQ_SIG; + req->hdr.pcifunc = OTX2_CPT_RVU_PFFUNC(cptpf->pf_id, 0); + req->set = set; + req->eng_type = eng_type; + req->eng_grp_num = otx2_cpt_get_eng_grp(&cptpf->eng_grps, eng_type); + + return otx2_cpt_send_mbox_msg(&cptpf->afpf_mbox, pdev); +} + +static int otx2_cpt_set_eng_grp_nums(struct otx2_cptpf_dev *cptpf, bool set) +{ + enum otx2_cpt_eng_type type; + int ret; + + for (type = OTX2_CPT_AE_TYPES; type < OTX2_CPT_MAX_ENG_TYPES; type++) { + ret = otx2_cpt_set_eng_grp_num(cptpf, type, set); + if (ret) + return ret; + } + return 0; +} + int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf, struct otx2_cpt_eng_grps *eng_grps) { @@ -1222,6 +1284,10 @@ int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf, if (ret) goto delete_eng_grp; + ret = otx2_cpt_set_eng_grp_nums(cptpf, 1); + if (ret) + goto unset_eng_grp; + eng_grps->is_grps_created = true; cpt_ucode_release_fw(&fw_info); @@ -1269,6 +1335,8 @@ int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf, mutex_unlock(&eng_grps->lock); return 0; +unset_eng_grp: + otx2_cpt_set_eng_grp_nums(cptpf, 0); delete_eng_grp: delete_engine_grps(pdev, eng_grps); release_fw: @@ -1348,9 +1416,10 @@ int otx2_cpt_disable_all_cores(struct otx2_cptpf_dev *cptpf) return cptx_disable_all_cores(cptpf, total_cores, BLKADDR_CPT0); } -void otx2_cpt_cleanup_eng_grps(struct pci_dev *pdev, - struct otx2_cpt_eng_grps *eng_grps) +void otx2_cpt_cleanup_eng_grps(struct otx2_cptpf_dev *cptpf) { + struct otx2_cpt_eng_grps *eng_grps = &cptpf->eng_grps; + struct pci_dev *pdev = cptpf->pdev; struct otx2_cpt_eng_grp_info *grp; int i, j; @@ -1364,6 +1433,8 @@ void otx2_cpt_cleanup_eng_grps(struct pci_dev *pdev, grp->engs[j].bmap = NULL; } } + + otx2_cpt_set_eng_grp_nums(cptpf, 0); mutex_unlock(&eng_grps->lock); } @@ -1386,8 +1457,7 @@ int otx2_cpt_init_eng_grps(struct pci_dev *pdev, dev_err(&pdev->dev, "Number of engines %d > than max supported %d\n", eng_grps->engs_num, OTX2_CPT_MAX_ENGINES); - ret = -EINVAL; - goto cleanup_eng_grps; + return -EINVAL; } for (i = 0; i < OTX2_CPT_MAX_ENGINE_GROUPS; i++) { @@ -1401,14 +1471,20 @@ int otx2_cpt_init_eng_grps(struct pci_dev *pdev, sizeof(long), GFP_KERNEL); if (!grp->engs[j].bmap) { ret = -ENOMEM; - goto cleanup_eng_grps; + goto release_bmap; } } } return 0; -cleanup_eng_grps: - otx2_cpt_cleanup_eng_grps(pdev, eng_grps); +release_bmap: + for (i = 0; i < OTX2_CPT_MAX_ENGINE_GROUPS; i++) { + grp = &eng_grps->grp[i]; + for (j = 0; j < OTX2_CPT_MAX_ETYPES_PER_GRP; j++) { + kfree(grp->engs[j].bmap); + grp->engs[j].bmap = NULL; + } + } return ret; } @@ -1590,6 +1666,7 @@ int otx2_cpt_dl_custom_egrp_create(struct otx2_cptpf_dev *cptpf, bool has_se, has_ie, has_ae; struct fw_info_t fw_info; int ucode_idx = 0; + int egrp; if (!eng_grps->is_grps_created) { dev_err(dev, "Not allowed before creating the default groups\n"); @@ -1727,7 +1804,21 @@ int otx2_cpt_dl_custom_egrp_create(struct otx2_cptpf_dev *cptpf, } ret = 
create_engine_group(dev, eng_grps, engs, grp_idx, (void **)uc_info, 1); + if (ret) + goto release_fw; + ret = otx2_cpt_set_eng_grp_num(cptpf, engs[0].type, 1); + if (ret) { + egrp = otx2_cpt_get_eng_grp(eng_grps, engs[0].type); + ret = delete_engine_group(dev, &eng_grps->grp[egrp]); + } + if (ucode_idx > 1) { + ret = otx2_cpt_set_eng_grp_num(cptpf, engs[1].type, 1); + if (ret) { + egrp = otx2_cpt_get_eng_grp(eng_grps, engs[1].type); + ret = delete_engine_group(dev, &eng_grps->grp[egrp]); + } + } release_fw: cpt_ucode_release_fw(&fw_info); err_unlock: @@ -1745,6 +1836,7 @@ int otx2_cpt_dl_custom_egrp_delete(struct otx2_cptpf_dev *cptpf, struct device *dev = &cptpf->pdev->dev; char *tmp, *err_msg; int egrp; + int type; int ret; err_msg = "Invalid input string format(ex: egrp:0)"; @@ -1766,6 +1858,16 @@ int otx2_cpt_dl_custom_egrp_delete(struct otx2_cptpf_dev *cptpf, return -EINVAL; } mutex_lock(&eng_grps->lock); + type = otx2_cpt_get_eng_grp_type(eng_grps, egrp); + if (!type) { + mutex_unlock(&eng_grps->lock); + return -EINVAL; + } + ret = otx2_cpt_set_eng_grp_num(cptpf, type, 0); + if (ret) { + mutex_unlock(&eng_grps->lock); + return -EINVAL; + } ret = delete_engine_group(dev, &eng_grps->grp[egrp]); mutex_unlock(&eng_grps->lock); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h index 7e6a6a4ec37c..85ead693e359 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h @@ -155,8 +155,7 @@ struct otx2_cpt_eng_grps { struct otx2_cptpf_dev; int otx2_cpt_init_eng_grps(struct pci_dev *pdev, struct otx2_cpt_eng_grps *eng_grps); -void otx2_cpt_cleanup_eng_grps(struct pci_dev *pdev, - struct otx2_cpt_eng_grps *eng_grps); +void otx2_cpt_cleanup_eng_grps(struct otx2_cptpf_dev *cptpf); int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf, struct otx2_cpt_eng_grps *eng_grps); int otx2_cpt_disable_all_cores(struct otx2_cptpf_dev *cptpf); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index 005ca8a056c0..973ff5cf1a7d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -211,6 +211,8 @@ M(CPT_CTX_CACHE_SYNC, 0xA07, cpt_ctx_cache_sync, msg_req, msg_rsp) \ M(CPT_LF_RESET, 0xA08, cpt_lf_reset, cpt_lf_rst_req, msg_rsp) \ M(CPT_FLT_ENG_INFO, 0xA09, cpt_flt_eng_info, cpt_flt_eng_info_req, \ cpt_flt_eng_info_rsp) \ +M(CPT_SET_ENG_GRP_NUM, 0xA0A, cpt_set_eng_grp_num, cpt_set_egrp_num, \ + msg_rsp) \ /* SDP mbox IDs (range 0x1000 - 0x11FF) */ \ M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \ M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \ @@ -1941,6 +1943,20 @@ struct cpt_flt_eng_info_rsp { u64 rsvd; }; +enum otx2_cpt_eng_type { + OTX2_CPT_AE_TYPES = 1, + OTX2_CPT_SE_TYPES = 2, + OTX2_CPT_IE_TYPES = 3, + OTX2_CPT_MAX_ENG_TYPES, +}; + +struct cpt_set_egrp_num { + struct mbox_msghdr hdr; + bool set; + u8 eng_type; + u8 eng_grp_num; +}; + struct sdp_node_info { /* Node to which this PF belons to */ u8 node_id; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 147d7f5c1fcc..fa403da555ff 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -520,6 +520,15 @@ struct rep_evtq_ent { struct rep_event event; }; +struct rvu_cpt_eng_grp { + u8 eng_type; + u8 grp_num; 
+};
+
+struct rvu_cpt {
+	struct rvu_cpt_eng_grp eng_grp[OTX2_CPT_MAX_ENG_TYPES];
+};
+
 struct rvu {
 	void __iomem *afreg_base;
 	void __iomem *pfreg_base;
@@ -600,6 +609,7 @@ struct rvu {
 	spinlock_t mcs_intrq_lock;
 	/* CPT interrupt lock */
 	spinlock_t cpt_intr_lock;
+	struct rvu_cpt rvu_cpt;
 
 	struct mutex mbox_lock; /* Serialize mbox up and down msgs */
 
 	u16 rep_pcifunc;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index 3c5bbaf12e59..e720ae03133d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -656,6 +656,27 @@ static int cpt_inline_ipsec_cfg_outbound(struct rvu *rvu, int blkaddr, u8 cptlf,
 	return 0;
 }
 
+int rvu_mbox_handler_cpt_set_eng_grp_num(struct rvu *rvu,
+					 struct cpt_set_egrp_num *req,
+					 struct msg_rsp *rsp)
+{
+	struct rvu_cpt *rvu_cpt = &rvu->rvu_cpt;
+	u8 eng_type = req->eng_type;
+
+	if (!eng_type || eng_type >= OTX2_CPT_MAX_ENG_TYPES)
+		return -EINVAL;
+
+	if (req->set) {
+		rvu_cpt->eng_grp[eng_type].grp_num = req->eng_grp_num;
+		rvu_cpt->eng_grp[eng_type].eng_type = eng_type;
+	} else {
+		rvu_cpt->eng_grp[eng_type].grp_num = 0;
+		rvu_cpt->eng_grp[eng_type].eng_type = 0;
+	}
+
+	return 0;
+}
+
 int rvu_mbox_handler_cpt_inline_ipsec_cfg(struct rvu *rvu,
 					  struct cpt_inline_ipsec_cfg_msg *req,
 					  struct msg_rsp *rsp)

From patchwork Fri May 2 13:19:43 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 887260
From: Tanmay Jagdale <tanmay@marvell.com>
Subject: [net-next PATCH v1 02/15] octeontx2-af: Configure crypto hardware for inline ipsec
Date: Fri, 2 May 2025 18:49:43 +0530
Message-ID: <20250502132005.611698-3-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>
References: <20250502132005.611698-1-tanmay@marvell.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Bharat Bhushan

Currently the cpt_rx_inline_lf_cfg mailbox is handled by the CPT PF
driver to configure inbound inline IPsec. Ideally, inbound inline
IPsec configuration should be done by the AF driver.

This patch adds support to allocate, attach and initialize a CPT LF
from the AF. It also configures NIX to send a CPT instruction when a
packet needs inline IPsec processing, and configures the CPT LF to
handle the inline inbound instructions received from NIX.
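As background for the queue setup below, a standalone sketch that
sanity-checks the instruction-queue sizing; the macro values are copied
from the rvu_cpt.h added by this patch, and the assertions are purely
illustrative:

#include <assert.h>

/* The hardware sizes a CPT instruction queue in units of 40 64-byte
 * CPT_INST_S messages; 320 extra free entries are reserved on top. */
#define RVU_CPT_INST_SIZE	64
#define RVU_CPT_INST_QLEN	8200
#define RVU_CPT_SIZE_DIV40	(RVU_CPT_INST_QLEN / 40)
#define RVU_CPT_INST_QLEN_MSGS	((RVU_CPT_SIZE_DIV40 - 1) * 40)

static_assert(RVU_CPT_SIZE_DIV40 == 205, "8200 instructions = 205 chunks of 40");
static_assert(RVU_CPT_INST_QLEN_MSGS == 8160, "one 40-slot chunk held back");
static_assert((RVU_CPT_SIZE_DIV40 * 40 + 320) * RVU_CPT_INST_SIZE == 545280,
	      "instruction memory in bytes, incl. the 320 extra entries");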
Signed-off-by: Bharat Bhushan Signed-off-by: Tanmay Jagdale --- .../net/ethernet/marvell/octeontx2/af/mbox.h | 14 + .../net/ethernet/marvell/octeontx2/af/rvu.c | 4 +- .../net/ethernet/marvell/octeontx2/af/rvu.h | 34 ++ .../ethernet/marvell/octeontx2/af/rvu_cpt.c | 563 ++++++++++++++++++ .../ethernet/marvell/octeontx2/af/rvu_cpt.h | 67 +++ .../ethernet/marvell/octeontx2/af/rvu_nix.c | 4 +- .../ethernet/marvell/octeontx2/af/rvu_reg.h | 5 + 7 files changed, 687 insertions(+), 4 deletions(-) create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index 973ff5cf1a7d..8540a04a92f9 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -1950,6 +1950,20 @@ enum otx2_cpt_eng_type { OTX2_CPT_MAX_ENG_TYPES, }; +struct cpt_rx_inline_lf_cfg_msg { + struct mbox_msghdr hdr; + u16 sso_pf_func; + u16 param1; + u16 param2; + u16 opcode; + u32 credit; + u32 credit_th; + u16 bpid; + u32 reserved; + u8 ctx_ilen_valid : 1; + u8 ctx_ilen : 7; +}; + struct cpt_set_egrp_num { struct mbox_msghdr hdr; bool set; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index 6575c422635b..d9f000cda5e5 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -1775,8 +1775,8 @@ int rvu_mbox_handler_attach_resources(struct rvu *rvu, return err; } -static u16 rvu_get_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf, - int blkaddr, int lf) +u16 rvu_get_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf, + int blkaddr, int lf) { u16 vec; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index fa403da555ff..6923fd756b19 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -525,8 +525,38 @@ struct rvu_cpt_eng_grp { u8 grp_num; }; +struct rvu_cpt_rx_inline_lf_cfg { + u16 sso_pf_func; + u16 param1; + u16 param2; + u16 opcode; + u32 credit; + u32 credit_th; + u16 bpid; + u32 reserved; + u8 ctx_ilen_valid : 1; + u8 ctx_ilen : 7; +}; + +struct rvu_cpt_inst_queue { + u8 *vaddr; + u8 *real_vaddr; + dma_addr_t dma_addr; + dma_addr_t real_dma_addr; + u32 size; +}; + struct rvu_cpt { struct rvu_cpt_eng_grp eng_grp[OTX2_CPT_MAX_ENG_TYPES]; + + /* RX inline ipsec lock */ + struct mutex lock; + bool rx_initialized; + u16 msix_offset; + u8 inline_ipsec_egrp; + struct rvu_cpt_inst_queue cpt0_iq; + struct rvu_cpt_inst_queue cpt1_iq; + struct rvu_cpt_rx_inline_lf_cfg rx_cfg; }; struct rvu { @@ -1066,6 +1096,8 @@ void rvu_program_channels(struct rvu *rvu); /* CN10K NIX */ void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw); +void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *req, + int blkaddr); /* CN10K RVU - LMT*/ void rvu_reset_lmt_map_tbl(struct rvu *rvu, u16 pcifunc); @@ -1097,6 +1129,8 @@ int rvu_mcs_init(struct rvu *rvu); int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc); void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena); void rvu_mcs_exit(struct rvu *rvu); +u16 rvu_get_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf, + int blkaddr, int lf); /* Representor APIs */ int rvu_rep_pf_init(struct rvu *rvu); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c index 
e720ae03133d..89e0739ba414 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c @@ -11,6 +11,7 @@ #include "rvu_reg.h" #include "mbox.h" #include "rvu.h" +#include "rvu_cpt.h" /* CPT PF device id */ #define PCI_DEVID_OTX2_CPT_PF 0xA0FD @@ -968,6 +969,33 @@ int rvu_mbox_handler_cpt_ctx_cache_sync(struct rvu *rvu, struct msg_req *req, return rvu_cpt_ctx_flush(rvu, req->hdr.pcifunc); } +static int cpt_rx_ipsec_lf_reset(struct rvu *rvu, int blkaddr, int slot) +{ + struct rvu_block *block; + u16 pcifunc = 0; + int cptlf, ret; + u64 ctl, ctl2; + + block = &rvu->hw->block[blkaddr]; + + cptlf = rvu_get_lf(rvu, block, pcifunc, slot); + if (cptlf < 0) + return CPT_AF_ERR_LF_INVALID; + + ctl = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf)); + ctl2 = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf)); + + ret = rvu_lf_reset(rvu, block, cptlf); + if (ret) + dev_err(rvu->dev, "Failed to reset blkaddr %d LF%d\n", + block->addr, cptlf); + + rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), ctl); + rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf), ctl2); + + return 0; +} + int rvu_mbox_handler_cpt_lf_reset(struct rvu *rvu, struct cpt_lf_rst_req *req, struct msg_rsp *rsp) { @@ -1087,6 +1115,72 @@ static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr) #define DQPTR GENMASK_ULL(19, 0) #define NQPTR GENMASK_ULL(51, 32) +static void cpt_rx_ipsec_lf_enable_iqueue(struct rvu *rvu, int blkaddr, + int slot) +{ + u64 val; + + /* Set Execution Enable of instruction queue */ + val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG); + val |= BIT_ULL(16); + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG, val); + + /* Set iqueue's enqueuing */ + val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL); + val |= BIT_ULL(0); + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL, val); +} + +static void cpt_rx_ipsec_lf_disable_iqueue(struct rvu *rvu, int blkaddr, + int slot) +{ + int timeout = 1000000; + u64 inprog, inst_ptr; + u64 qsize, pending; + int i = 0; + + /* Disable instructions enqueuing */ + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL, 0x0); + + inprog = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG); + inprog |= BIT_ULL(16); + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG, inprog); + + qsize = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_Q_SIZE) + & 0x7FFF; + do { + inst_ptr = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, + CPT_LF_Q_INST_PTR); + pending = (FIELD_GET(XQ_XOR, inst_ptr) * qsize * 40) + + FIELD_GET(NQPTR, inst_ptr) - + FIELD_GET(DQPTR, inst_ptr); + udelay(1); + timeout--; + } while ((pending != 0) && (timeout != 0)); + + if (timeout == 0) + dev_warn(rvu->dev, "TIMEOUT: CPT poll on pending instructions\n"); + + timeout = 1000000; + /* Wait for CPT queue to become execution-quiescent */ + do { + inprog = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, + CPT_LF_INPROG); + if ((FIELD_GET(INFLIGHT, inprog) == 0) && + (FIELD_GET(GRB_CNT, inprog) == 0)) { + i++; + } else { + i = 0; + timeout--; + } + } while ((timeout != 0) && (i < 10)); + + if (timeout == 0) + dev_warn(rvu->dev, "TIMEOUT: CPT poll on inflight count\n"); + /* Wait for 2 us to flush all queue writes to memory */ + udelay(2); +} + static void cpt_lf_disable_iqueue(struct rvu *rvu, int blkaddr, int slot) { int timeout = 1000000; @@ -1310,6 +1404,474 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc) return 0; } +static irqreturn_t 
rvu_cpt_rx_ipsec_misc_intr_handler(int irq, void *ptr) +{ + struct rvu_block *block = ptr; + struct rvu *rvu = block->rvu; + int blkaddr = block->addr; + struct device *dev = rvu->dev; + int slot = 0; + u64 val; + + val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_MISC_INT); + + if (val & (1 << 6)) { + dev_err(dev, "Memory error detected while executing CPT_INST_S, LF %d.\n", + slot); + } else if (val & (1 << 5)) { + dev_err(dev, "HW error from an engine executing CPT_INST_S, LF %d.", + slot); + } else if (val & (1 << 3)) { + dev_err(dev, "SMMU fault while writing CPT_RES_S to CPT_INST_S[RES_ADDR], LF %d.\n", + slot); + } else if (val & (1 << 2)) { + dev_err(dev, "Memory error when accessing instruction memory queue CPT_LF_Q_BASE[ADDR].\n"); + } else if (val & (1 << 1)) { + dev_err(dev, "Error enqueuing an instruction received at CPT_LF_NQ.\n"); + } else { + dev_err(dev, "Unhandled interrupt in CPT LF %d\n", slot); + return IRQ_NONE; + } + + /* Acknowledge interrupts */ + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_MISC_INT, + val & CPT_LF_MISC_INT_MASK); + + return IRQ_HANDLED; +} + +static int rvu_cpt_rx_inline_setup_irq(struct rvu *rvu, int blkaddr, int slot) +{ + struct rvu_hwinfo *hw = rvu->hw; + struct rvu_block *block; + struct rvu_pfvf *pfvf; + u16 msix_offset; + int pcifunc = 0; + int ret, cptlf; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + if (!pfvf->msix.bmap) + return -ENODEV; + + block = &hw->block[blkaddr]; + cptlf = rvu_get_lf(rvu, block, pcifunc, slot); + if (cptlf < 0) + return CPT_AF_ERR_LF_INVALID; + + msix_offset = rvu_get_msix_offset(rvu, pfvf, blkaddr, cptlf); + if (msix_offset == MSIX_VECTOR_INVALID) + return -ENODEV; + + ret = rvu_cpt_do_register_interrupt(block, msix_offset, + rvu_cpt_rx_ipsec_misc_intr_handler, + "CPTLF RX IPSEC MISC"); + if (ret) + return ret; + + /* Enable All Misc interrupts */ + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, + CPT_LF_MISC_INT_ENA_W1S, CPT_LF_MISC_INT_MASK); + + rvu->rvu_cpt.msix_offset = msix_offset; + return 0; +} + +static void rvu_cpt_rx_inline_cleanup_irq(struct rvu *rvu, int blkaddr, + int slot) +{ + struct rvu_hwinfo *hw = rvu->hw; + struct rvu_block *block; + + /* Disable All Misc interrupts */ + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, + CPT_LF_MISC_INT_ENA_W1C, CPT_LF_MISC_INT_MASK); + + block = &hw->block[blkaddr]; + free_irq(pci_irq_vector(rvu->pdev, rvu->rvu_cpt.msix_offset), block); +} + +static int rvu_rx_attach_cptlf(struct rvu *rvu, int blkaddr) +{ + struct rsrc_attach attach; + + memset(&attach, 0, sizeof(struct rsrc_attach)); + attach.hdr.id = MBOX_MSG_ATTACH_RESOURCES; + attach.hdr.sig = OTX2_MBOX_REQ_SIG; + attach.hdr.ver = OTX2_MBOX_VERSION; + attach.hdr.pcifunc = 0; + attach.modify = 1; + attach.cptlfs = 1; + attach.cpt_blkaddr = blkaddr; + + return rvu_mbox_handler_attach_resources(rvu, &attach, NULL); +} + +static int rvu_rx_detach_cptlf(struct rvu *rvu) +{ + struct rsrc_detach detach; + + memset(&detach, 0, sizeof(struct rsrc_detach)); + detach.hdr.id = MBOX_MSG_ATTACH_RESOURCES; + detach.hdr.sig = OTX2_MBOX_REQ_SIG; + detach.hdr.ver = OTX2_MBOX_VERSION; + detach.hdr.pcifunc = 0; + detach.partial = 1; + detach.cptlfs = 1; + + return rvu_mbox_handler_detach_resources(rvu, &detach, NULL); +} + +/* Allocate memory for CPT outbound Instruction queue. 
+ * Instruction queue memory format is: + * ----------------------------- + * | Instruction Group memory | + * | (CPT_LF_Q_SIZE[SIZE_DIV40] | + * | x 16 Bytes) | + * | | + * ----------------------------- <-- CPT_LF_Q_BASE[ADDR] + * | Flow Control (128 Bytes) | + * | | + * ----------------------------- + * | Instruction Memory | + * | (CPT_LF_Q_SIZE[SIZE_DIV40] | + * | × 40 × 64 bytes) | + * | | + * ----------------------------- + */ +static int rvu_rx_cpt_iq_alloc(struct rvu *rvu, struct rvu_cpt_inst_queue *iq) +{ + iq->size = RVU_CPT_INST_QLEN_BYTES + RVU_CPT_Q_FC_LEN + + RVU_CPT_INST_GRP_QLEN_BYTES + OTX2_ALIGN; + + iq->real_vaddr = dma_alloc_coherent(rvu->dev, iq->size, + &iq->real_dma_addr, GFP_KERNEL); + if (!iq->real_vaddr) + return -ENOMEM; + + /* iq->vaddr/dma_addr points to Flow Control location */ + iq->vaddr = iq->real_vaddr + RVU_CPT_INST_GRP_QLEN_BYTES; + iq->dma_addr = iq->real_dma_addr + RVU_CPT_INST_GRP_QLEN_BYTES; + + /* Align pointers */ + iq->vaddr = PTR_ALIGN(iq->vaddr, OTX2_ALIGN); + iq->dma_addr = PTR_ALIGN(iq->dma_addr, OTX2_ALIGN); + return 0; +} + +static void rvu_rx_cpt_iq_free(struct rvu *rvu, int blkaddr) +{ + struct rvu_cpt_inst_queue *iq; + + if (blkaddr == BLKADDR_CPT0) + iq = &rvu->rvu_cpt.cpt0_iq; + else + iq = &rvu->rvu_cpt.cpt1_iq; + + if (!iq->real_vaddr) + dma_free_coherent(rvu->dev, iq->size, iq->real_vaddr, + iq->real_dma_addr); + + iq->real_vaddr = NULL; + iq->vaddr = NULL; +} + +static int rvu_rx_cpt_set_grp_pri_ilen(struct rvu *rvu, int blkaddr, int cptlf) +{ + u64 reg_val; + + reg_val = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf)); + /* Set High priority */ + reg_val |= 1; + /* Set engine group */ + reg_val |= ((1ULL << rvu->rvu_cpt.inline_ipsec_egrp) << 48); + /* Set ilen if valid */ + if (rvu->rvu_cpt.rx_cfg.ctx_ilen_valid) + reg_val |= rvu->rvu_cpt.rx_cfg.ctx_ilen << 17; + + rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), reg_val); + return 0; +} + +static int rvu_cpt_rx_inline_cptlf_init(struct rvu *rvu, int blkaddr, int slot) +{ + struct rvu_cpt_inst_queue *iq; + struct rvu_block *block; + int pcifunc = 0; + int cptlf; + int err; + u64 val; + + /* Attach cptlf with AF for inline inbound ipsec */ + err = rvu_rx_attach_cptlf(rvu, blkaddr); + if (err) + return err; + + block = &rvu->hw->block[blkaddr]; + cptlf = rvu_get_lf(rvu, block, pcifunc, slot); + if (cptlf < 0) { + err = CPT_AF_ERR_LF_INVALID; + goto detach_cptlf; + } + + if (blkaddr == BLKADDR_CPT0) + iq = &rvu->rvu_cpt.cpt0_iq; + else + iq = &rvu->rvu_cpt.cpt1_iq; + + /* Allocate CPT instruction queue */ + err = rvu_rx_cpt_iq_alloc(rvu, iq); + if (err) + goto detach_cptlf; + + /* reset CPT LF */ + cpt_rx_ipsec_lf_reset(rvu, blkaddr, slot); + + /* Disable IQ */ + cpt_rx_ipsec_lf_disable_iqueue(rvu, blkaddr, slot); + + /* Set IQ base address */ + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_Q_BASE, + iq->dma_addr); + /* Set IQ size */ + val = FIELD_PREP(CPT_LF_Q_SIZE_DIV40, RVU_CPT_SIZE_DIV40 + + RVU_CPT_EXTRA_SIZE_DIV40); + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_Q_SIZE, val); + + /* Enable IQ */ + cpt_rx_ipsec_lf_enable_iqueue(rvu, blkaddr, slot); + + /* Set High priority */ + rvu_rx_cpt_set_grp_pri_ilen(rvu, blkaddr, cptlf); + + return 0; +detach_cptlf: + rvu_rx_detach_cptlf(rvu); + return err; +} + +static void rvu_cpt_rx_inline_cptlf_clean(struct rvu *rvu, int blkaddr, + int slot) +{ + /* Disable IQ */ + cpt_rx_ipsec_lf_disable_iqueue(rvu, blkaddr, slot); + + /* Free Instruction Queue */ + rvu_rx_cpt_iq_free(rvu, blkaddr); + + /* Detach CPTLF */ + 
rvu_rx_detach_cptlf(rvu); +} + +static void rvu_cpt_save_rx_inline_lf_cfg(struct rvu *rvu, + struct cpt_rx_inline_lf_cfg_msg *req) +{ + rvu->rvu_cpt.rx_cfg.sso_pf_func = req->sso_pf_func; + rvu->rvu_cpt.rx_cfg.param1 = req->param1; + rvu->rvu_cpt.rx_cfg.param2 = req->param2; + rvu->rvu_cpt.rx_cfg.opcode = req->opcode; + rvu->rvu_cpt.rx_cfg.credit = req->credit; + rvu->rvu_cpt.rx_cfg.credit_th = req->credit_th; + rvu->rvu_cpt.rx_cfg.bpid = req->bpid; + rvu->rvu_cpt.rx_cfg.ctx_ilen_valid = req->ctx_ilen_valid; + rvu->rvu_cpt.rx_cfg.ctx_ilen = req->ctx_ilen; +} + +static void +rvu_show_diff_cpt_rx_inline_lf_cfg(struct rvu *rvu, + struct cpt_rx_inline_lf_cfg_msg *req) +{ + struct device *dev = rvu->dev; + + if (rvu->rvu_cpt.rx_cfg.sso_pf_func != req->sso_pf_func) + dev_info(dev, "Mismatch RX inline config sso_pf_func Req %x Prog %x\n", + req->sso_pf_func, rvu->rvu_cpt.rx_cfg.sso_pf_func); + if (rvu->rvu_cpt.rx_cfg.param1 != req->param1) + dev_info(dev, "Mismatch RX inline config param1 Req %x Prog %x\n", + req->param1, rvu->rvu_cpt.rx_cfg.param1); + if (rvu->rvu_cpt.rx_cfg.param2 != req->param2) + dev_info(dev, "Mismatch RX inline config param2 Req %x Prog %x\n", + req->param2, rvu->rvu_cpt.rx_cfg.param2); + if (rvu->rvu_cpt.rx_cfg.opcode != req->opcode) + dev_info(dev, "Mismatch RX inline config opcode Req %x Prog %x\n", + req->opcode, rvu->rvu_cpt.rx_cfg.opcode); + if (rvu->rvu_cpt.rx_cfg.credit != req->credit) + dev_info(dev, "Mismatch RX inline config credit Req %x Prog %x\n", + req->credit, rvu->rvu_cpt.rx_cfg.credit); + if (rvu->rvu_cpt.rx_cfg.credit_th != req->credit_th) + dev_info(dev, "Mismatch RX inline config credit_th Req %x Prog %x\n", + req->credit_th, rvu->rvu_cpt.rx_cfg.credit_th); + if (rvu->rvu_cpt.rx_cfg.bpid != req->bpid) + dev_info(dev, "Mismatch RX inline config bpid Req %x Prog %x\n", + req->bpid, rvu->rvu_cpt.rx_cfg.bpid); + if (rvu->rvu_cpt.rx_cfg.ctx_ilen != req->ctx_ilen) + dev_info(dev, "Mismatch RX inline config ctx_ilen Req %x Prog %x\n", + req->ctx_ilen, rvu->rvu_cpt.rx_cfg.ctx_ilen); + if (rvu->rvu_cpt.rx_cfg.ctx_ilen_valid != req->ctx_ilen_valid) + dev_info(dev, "Mismatch RX inline config ctx_ilen_valid Req %x Prog %x\n", + req->ctx_ilen_valid, + rvu->rvu_cpt.rx_cfg.ctx_ilen_valid); +} + +static void rvu_cpt_rx_inline_nix_cfg(struct rvu *rvu) +{ + struct nix_inline_ipsec_cfg nix_cfg; + + nix_cfg.enable = 1; + nix_cfg.credit_th = rvu->rvu_cpt.rx_cfg.credit_th; + nix_cfg.bpid = rvu->rvu_cpt.rx_cfg.bpid; + if (!rvu->rvu_cpt.rx_cfg.credit || rvu->rvu_cpt.rx_cfg.credit > + RVU_CPT_INST_QLEN_MSGS) + nix_cfg.cpt_credit = RVU_CPT_INST_QLEN_MSGS - 1; + else + nix_cfg.cpt_credit = rvu->rvu_cpt.rx_cfg.credit - 1; + + nix_cfg.gen_cfg.egrp = rvu->rvu_cpt.inline_ipsec_egrp; + if (rvu->rvu_cpt.rx_cfg.opcode) { + nix_cfg.gen_cfg.opcode = rvu->rvu_cpt.rx_cfg.opcode; + } else { + if (is_rvu_otx2(rvu)) + nix_cfg.gen_cfg.opcode = OTX2_CPT_INLINE_RX_OPCODE; + else + nix_cfg.gen_cfg.opcode = CN10K_CPT_INLINE_RX_OPCODE; + } + + nix_cfg.gen_cfg.param1 = rvu->rvu_cpt.rx_cfg.param1; + nix_cfg.gen_cfg.param2 = rvu->rvu_cpt.rx_cfg.param2; + nix_cfg.inst_qsel.cpt_pf_func = rvu_get_pf(0); + nix_cfg.inst_qsel.cpt_slot = 0; + + nix_inline_ipsec_cfg(rvu, &nix_cfg, BLKADDR_NIX0); + + if (is_block_implemented(rvu->hw, BLKADDR_CPT1)) + nix_inline_ipsec_cfg(rvu, &nix_cfg, BLKADDR_NIX1); +} + +static int rvu_cpt_rx_inline_ipsec_cfg(struct rvu *rvu) +{ + struct rvu_block *block; + struct cpt_inline_ipsec_cfg_msg req; + u16 pcifunc = 0; + int cptlf; + int err; + + memset(&req, 0, sizeof(struct 
cpt_inline_ipsec_cfg_msg)); + req.sso_pf_func_ovrd = 0; // Add sysfs interface to set this + req.sso_pf_func = rvu->rvu_cpt.rx_cfg.sso_pf_func; + req.enable = 1; + + block = &rvu->hw->block[BLKADDR_CPT0]; + cptlf = rvu_get_lf(rvu, block, pcifunc, 0); + if (cptlf < 0) + return CPT_AF_ERR_LF_INVALID; + + err = cpt_inline_ipsec_cfg_inbound(rvu, BLKADDR_CPT0, cptlf, &req); + if (err) + return err; + + if (!is_block_implemented(rvu->hw, BLKADDR_CPT1)) + return 0; + + block = &rvu->hw->block[BLKADDR_CPT1]; + cptlf = rvu_get_lf(rvu, block, pcifunc, 0); + if (cptlf < 0) + return CPT_AF_ERR_LF_INVALID; + + return cpt_inline_ipsec_cfg_inbound(rvu, BLKADDR_CPT1, cptlf, &req); +} + +static int rvu_cpt_rx_inline_cptlf_setup(struct rvu *rvu, int blkaddr, int slot) +{ + int err; + + err = rvu_cpt_rx_inline_cptlf_init(rvu, blkaddr, slot); + if (err) { + dev_err(rvu->dev, + "CPTLF configuration failed for RX inline ipsec\n"); + return err; + } + + err = rvu_cpt_rx_inline_setup_irq(rvu, blkaddr, slot); + if (err) { + dev_err(rvu->dev, + "CPTLF Interrupt setup failed for RX inline ipsec\n"); + rvu_cpt_rx_inline_cptlf_clean(rvu, blkaddr, slot); + return err; + } + return 0; +} + +static void rvu_rx_cptlf_cleanup(struct rvu *rvu, int blkaddr, int slot) +{ + /* IRQ cleanup */ + rvu_cpt_rx_inline_cleanup_irq(rvu, blkaddr, slot); + + /* CPTLF cleanup */ + rvu_cpt_rx_inline_cptlf_clean(rvu, blkaddr, slot); +} + +int rvu_mbox_handler_cpt_rx_inline_lf_cfg(struct rvu *rvu, + struct cpt_rx_inline_lf_cfg_msg *req, + struct msg_rsp *rsp) +{ + u8 egrp = OTX2_CPT_INVALID_CRYPTO_ENG_GRP; + int err; + int i; + + mutex_lock(&rvu->rvu_cpt.lock); + if (rvu->rvu_cpt.rx_initialized) { + dev_info(rvu->dev, "Inline RX CPT already initialized\n"); + rvu_show_diff_cpt_rx_inline_lf_cfg(rvu, req); + err = 0; + goto unlock; + } + + /* Get Inline Ipsec Engine Group */ + for (i = 0; i < OTX2_CPT_MAX_ENG_TYPES; i++) { + if (rvu->rvu_cpt.eng_grp[i].eng_type == OTX2_CPT_IE_TYPES) { + egrp = rvu->rvu_cpt.eng_grp[i].grp_num; + break; + } + } + + if (egrp == OTX2_CPT_INVALID_CRYPTO_ENG_GRP) { + dev_err(rvu->dev, + "Engine group for inline ipsec not available\n"); + err = -ENODEV; + goto unlock; + } + rvu->rvu_cpt.inline_ipsec_egrp = egrp; + + rvu_cpt_save_rx_inline_lf_cfg(rvu, req); + + err = rvu_cpt_rx_inline_cptlf_setup(rvu, BLKADDR_CPT0, 0); + if (err) + goto unlock; + + if (is_block_implemented(rvu->hw, BLKADDR_CPT1)) { + err = rvu_cpt_rx_inline_cptlf_setup(rvu, BLKADDR_CPT1, 0); + if (err) + goto cptlf_cleanup; + } + + rvu_cpt_rx_inline_nix_cfg(rvu); + + err = rvu_cpt_rx_inline_ipsec_cfg(rvu); + if (err) + goto cptlf1_cleanup; + + rvu->rvu_cpt.rx_initialized = true; + mutex_unlock(&rvu->rvu_cpt.lock); + return 0; + +cptlf1_cleanup: + rvu_rx_cptlf_cleanup(rvu, BLKADDR_CPT1, 0); +cptlf_cleanup: + rvu_rx_cptlf_cleanup(rvu, BLKADDR_CPT0, 0); +unlock: + mutex_unlock(&rvu->rvu_cpt.lock); + return err; +} + #define MAX_RXC_ICB_CNT GENMASK_ULL(40, 32) int rvu_cpt_init(struct rvu *rvu) @@ -1336,5 +1898,6 @@ int rvu_cpt_init(struct rvu *rvu) spin_lock_init(&rvu->cpt_intr_lock); + mutex_init(&rvu->rvu_cpt.lock); return 0; } diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h new file mode 100644 index 000000000000..4b57c7038d6c --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Marvell AF CPT driver + * + * Copyright (C) 2022 Marvell. 
+ */
+
+#ifndef RVU_CPT_H
+#define RVU_CPT_H
+
+#include
+
+/* CPT instruction size in bytes */
+#define RVU_CPT_INST_SIZE 64
+
+/* CPT instruction (CPT_INST_S) queue length */
+#define RVU_CPT_INST_QLEN 8200
+
+/* CPT instruction queue size passed to HW is in units of
+ * 40*CPT_INST_S messages.
+ */
+#define RVU_CPT_SIZE_DIV40 (RVU_CPT_INST_QLEN / 40)
+
+/* CPT instruction and pending queues length in CPT_INST_S messages */
+#define RVU_CPT_INST_QLEN_MSGS ((RVU_CPT_SIZE_DIV40 - 1) * 40)
+
+/* CPT needs 320 free entries */
+#define RVU_CPT_INST_QLEN_EXTRA_BYTES (320 * RVU_CPT_INST_SIZE)
+#define RVU_CPT_EXTRA_SIZE_DIV40 (320 / 40)
+
+/* CPT instruction queue length in bytes */
+#define RVU_CPT_INST_QLEN_BYTES \
+		((RVU_CPT_SIZE_DIV40 * 40 * RVU_CPT_INST_SIZE) + \
+		RVU_CPT_INST_QLEN_EXTRA_BYTES)
+
+/* CPT instruction group queue length in bytes */
+#define RVU_CPT_INST_GRP_QLEN_BYTES \
+		((RVU_CPT_SIZE_DIV40 + RVU_CPT_EXTRA_SIZE_DIV40) * 16)
+
+/* CPT FC length in bytes */
+#define RVU_CPT_Q_FC_LEN 128
+
+/* CPT LF_Q_SIZE Register */
+#define CPT_LF_Q_SIZE_DIV40 GENMASK_ULL(14, 0)
+
+/* CPT invalid engine group num */
+#define OTX2_CPT_INVALID_CRYPTO_ENG_GRP 0xFF
+
+/* Fastpath ipsec opcode with inplace processing */
+#define OTX2_CPT_INLINE_RX_OPCODE (0x26 | (1 << 6))
+#define CN10K_CPT_INLINE_RX_OPCODE (0x29 | (1 << 6))
+
+/* Calculate CPT register offset */
+#define CPT_RVU_FUNC_ADDR_S(blk, slot, offs) \
+	(((blk) << 20) | ((slot) << 12) | (offs))
+
+static inline void otx2_cpt_write64(void __iomem *reg_base, u64 blk, u64 slot,
+				    u64 offs, u64 val)
+{
+	writeq_relaxed(val, reg_base + CPT_RVU_FUNC_ADDR_S(blk, slot, offs));
+}
+
+static inline u64 otx2_cpt_read64(void __iomem *reg_base, u64 blk, u64 slot,
+				  u64 offs)
+{
+	return readq_relaxed(reg_base + CPT_RVU_FUNC_ADDR_S(blk, slot, offs));
+}
+#endif // RVU_CPT_H
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 613655fcd34f..6bd995c45dad 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -5486,8 +5486,8 @@ int rvu_mbox_handler_nix_lso_format_cfg(struct rvu *rvu,
 #define CPT_INST_CREDIT_BPID GENMASK_ULL(30, 22)
 #define CPT_INST_CREDIT_CNT GENMASK_ULL(21, 0)
 
-static void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *req,
-				 int blkaddr)
+void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *req,
+			  int blkaddr)
 {
 	u8 cpt_idx, cpt_blkaddr;
 	u64 val;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index 62cdc714ba57..a982cffdc5f5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -563,6 +563,11 @@
 #define CPT_LF_CTL 0x10
 #define CPT_LF_INPROG 0x40
+#define CPT_LF_MISC_INT 0xb0
+#define CPT_LF_MISC_INT_ENA_W1S 0xb0
+#define CPT_LF_MISC_INT_ENA_W1C 0xb0
+#define CPT_LF_MISC_INT_MASK 0x6e
+#define CPT_LF_Q_BASE 0xf0
 #define CPT_LF_Q_SIZE 0x100
 #define CPT_LF_Q_INST_PTR 0x110
 #define CPT_LF_Q_GRP_PTR 0x120

From patchwork Fri May 2 13:19:44 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 886706
From: Tanmay Jagdale <tanmay@marvell.com>
Subject: [net-next PATCH v1 03/15] octeontx2-af: Setup Large Memory Transaction for crypto
Date: Fri, 2 May 2025 18:49:44 +0530
Message-ID: <20250502132005.611698-4-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>
References: <20250502132005.611698-1-tanmay@marvell.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Bharat Bhushan

A Large Memory Transaction store (LMTST) operation is required for
enqueuing work to the CPT hardware. An LMTST operation makes one or
more 128-byte write operations to a normal, cacheable memory region.
This patch sets up an LMTST memory region for enqueuing work to the
CPT hardware.
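For illustration, a hypothetical accessor for the region this patch
sets up; rvu_cpt_lmt_line() is not part of the patch, while lmt_addr,
LMT_LINE_SIZE (128) and LMT_BURST_SIZE (32, so the region spans 4 KB)
are taken from the diff below:

/* Hypothetical helper (not in the patch): address of the n-th 128-byte
 * LMT line inside the per-AF LMTST region. lmt_addr already points at
 * the 128-byte-aligned start of the region. */
static inline void *rvu_cpt_lmt_line(struct rvu *rvu, unsigned int n)
{
	if (n >= LMT_BURST_SIZE)
		return NULL;

	return (void *)(rvu->rvu_cpt.lmt_addr + (u64)n * LMT_LINE_SIZE);
}

Note the patch keeps the original unaligned base and IOVA (lmt_base,
lmt_iova) only so that dma_free_attrs() can later be handed the exact
allocation it came from.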
Signed-off-by: Bharat Bhushan
Signed-off-by: Tanmay Jagdale
---
 .../net/ethernet/marvell/octeontx2/af/rvu.c   |  1 +
 .../net/ethernet/marvell/octeontx2/af/rvu.h   |  7 +++
 .../ethernet/marvell/octeontx2/af/rvu_cpt.c   | 51 +++++++++++++++++++
 .../ethernet/marvell/octeontx2/af/rvu_cpt.h   |  4 ++
 4 files changed, 63 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index d9f000cda5e5..ea346e59835b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -731,6 +731,7 @@ static void rvu_free_hw_resources(struct rvu *rvu)
 	rvu_npa_freemem(rvu);
 	rvu_npc_freemem(rvu);
 	rvu_nix_freemem(rvu);
+	rvu_cpt_freemem(rvu);
 
 	/* Free block LF bitmaps */
 	for (id = 0; id < BLK_COUNT; id++) {
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 6923fd756b19..6551fdb612dc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -557,6 +557,12 @@ struct rvu_cpt {
 	struct rvu_cpt_inst_queue cpt0_iq;
 	struct rvu_cpt_inst_queue cpt1_iq;
 	struct rvu_cpt_rx_inline_lf_cfg rx_cfg;
+
+	/* CPT LMTST */
+	void *lmt_base;
+	u64 lmt_addr;
+	size_t lmt_size;
+	dma_addr_t lmt_iova;
 };
 
 struct rvu {
@@ -1086,6 +1092,7 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf,
 			int slot);
 int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc);
 int rvu_cpt_init(struct rvu *rvu);
+void rvu_cpt_freemem(struct rvu *rvu);
 
 #define NDC_AF_BANK_MASK GENMASK_ULL(7, 0)
 #define NDC_AF_BANK_LINE_MASK GENMASK_ULL(31, 16)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index 89e0739ba414..8ed56ac512ef 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -1874,10 +1874,46 @@ int rvu_mbox_handler_cpt_rx_inline_lf_cfg(struct rvu *rvu,
 
 #define MAX_RXC_ICB_CNT GENMASK_ULL(40, 32)
 
+static int rvu_cpt_lmt_init(struct rvu *rvu)
+{
+	struct lmtst_tbl_setup_req req;
+	dma_addr_t iova;
+	void *base;
+	int size;
+ int err; + + if (is_rvu_otx2(rvu)) + return 0; + + memset(&req, 0, sizeof(struct lmtst_tbl_setup_req)); + + size = LMT_LINE_SIZE * LMT_BURST_SIZE + OTX2_ALIGN; + base = dma_alloc_attrs(rvu->dev, size, &iova, GFP_ATOMIC, + DMA_ATTR_FORCE_CONTIGUOUS); + if (!base) + return -ENOMEM; + + req.lmt_iova = ALIGN(iova, OTX2_ALIGN); + req.use_local_lmt_region = true; + err = rvu_mbox_handler_lmtst_tbl_setup(rvu, &req, NULL); + if (err) { + dma_free_attrs(rvu->dev, size, base, iova, + DMA_ATTR_FORCE_CONTIGUOUS); + return err; + } + + rvu->rvu_cpt.lmt_addr = (__force u64)PTR_ALIGN(base, OTX2_ALIGN); + rvu->rvu_cpt.lmt_base = base; + rvu->rvu_cpt.lmt_size = size; + rvu->rvu_cpt.lmt_iova = iova; + return 0; +} + int rvu_cpt_init(struct rvu *rvu) { struct rvu_hwinfo *hw = rvu->hw; u64 reg_val; + int ret; /* Retrieve CPT PF number */ rvu->cpt_pf_num = get_cpt_pf_num(rvu); @@ -1898,6 +1934,21 @@ int rvu_cpt_init(struct rvu *rvu) spin_lock_init(&rvu->cpt_intr_lock); + ret = rvu_cpt_lmt_init(rvu); + if (ret) + return ret; + mutex_init(&rvu->rvu_cpt.lock); return 0; } + +void rvu_cpt_freemem(struct rvu *rvu) +{ + if (is_rvu_otx2(rvu)) + return; + + if (rvu->rvu_cpt.lmt_base) + dma_free_attrs(rvu->dev, rvu->rvu_cpt.lmt_size, + rvu->rvu_cpt.lmt_base, rvu->rvu_cpt.lmt_iova, + DMA_ATTR_FORCE_CONTIGUOUS); +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h index 4b57c7038d6c..e6fa247a03ba 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h @@ -49,6 +49,10 @@ #define OTX2_CPT_INLINE_RX_OPCODE (0x26 | (1 << 6)) #define CN10K_CPT_INLINE_RX_OPCODE (0x29 | (1 << 6)) +/* CPT LMTST */ +#define LMT_LINE_SIZE 128 /* LMT line size in bytes */ +#define LMT_BURST_SIZE 32 /* 32 LMTST lines for burst */ + /* Calculate CPT register offset */ #define CPT_RVU_FUNC_ADDR_S(blk, slot, offs) \ (((blk) << 20) | ((slot) << 12) | (offs)) From patchwork Fri May 2 13:19:45 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tanmay Jagdale X-Patchwork-Id: 886705 Received: from mx0a-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0DD9C255237; Fri, 2 May 2025 13:22:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.148.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746192134; cv=none; b=a7IxAC2aJy/V0lFkmU9xRLg6HkNl5nvE5nrZ7K7JruSM/RWf/dUAcLgtH/Gat1FeRh4pHVLx1a5xMi8vXbu0+cT+oR5ncwWrN6eircwzd9MDDpkoCIzt7H8aAMqJhwbU0ksR4JdeLTbuoOLaZ3WfvTYRcudwLkh1hpj9w/EYkqI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746192134; c=relaxed/simple; bh=37B6PMh/6sEC5uQdEJHN3nbfj8eUefTL3Eaw1/BFKpU=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=dt7xrJ0WuiP4HVYJmD6EYX2t54IyPzq6aoXARugKn0/swTWRe1mEPQm5QJOvwZrLKpo3K/xkUuj6NjD8coxlyoSOLpUWkZu6IqXfNrQy94NC5sCNrLCxl9Xj6bAizUAekzRSSnNVpGCmg6c3lVt0T+ZgVWRpOuJSWt0P4qmxUC0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b=aoahraDh; arc=none smtp.client-ip=67.231.148.174 Authentication-Results: 
From patchwork Fri May 2 13:19:45 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 886705
From: Tanmay Jagdale
Subject: [net-next PATCH v1 04/15] octeontx2-af: Handle inbound inline ipsec config in AF
Date: Fri, 2 May 2025 18:49:45 +0530
Message-ID: <20250502132005.611698-5-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>

From: Bharat Bhushan

The CPT context flush can now be handled in the AF driver, since a CPT LF can be
attached to it. With that, the AF driver can completely handle the inbound inline IPsec configuration mailbox, so forward this mailbox to the AF driver.

Signed-off-by: Bharat Bhushan Signed-off-by: Tanmay Jagdale --- .../marvell/octeontx2/otx2_cpt_common.h | 1 - .../marvell/octeontx2/otx2_cptpf_mbox.c | 3 - .../net/ethernet/marvell/octeontx2/af/mbox.h | 2 + .../ethernet/marvell/octeontx2/af/rvu_cpt.c | 67 +++++++++---------- .../ethernet/marvell/octeontx2/af/rvu_reg.h | 1 + 5 files changed, 34 insertions(+), 40 deletions(-) diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h index df735eab8f08..27a2dd997f73 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h @@ -33,7 +33,6 @@ #define BAD_OTX2_CPT_ENG_TYPE OTX2_CPT_MAX_ENG_TYPES /* Take mbox id from end of CPT mbox range in AF (range 0xA00 - 0xBFF) */ -#define MBOX_MSG_RX_INLINE_IPSEC_LF_CFG 0xBFE #define MBOX_MSG_GET_ENG_GRP_NUM 0xBFF #define MBOX_MSG_GET_CAPS 0xBFD #define MBOX_MSG_GET_KVF_LIMITS 0xBFC diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c index 5e6f70ac35a7..222419bd5ac9 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c @@ -326,9 +326,6 @@ static int cptpf_handle_vf_req(struct otx2_cptpf_dev *cptpf, case MBOX_MSG_GET_KVF_LIMITS: err = handle_msg_kvf_limits(cptpf, vf, req); break; - case MBOX_MSG_RX_INLINE_IPSEC_LF_CFG: - err = handle_msg_rx_inline_ipsec_lf_cfg(cptpf, req); - break; default: err = forward_to_af(cptpf, vf, req, size); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index 8540a04a92f9..ad74a27888da 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -213,6 +213,8 @@ M(CPT_FLT_ENG_INFO, 0xA09, cpt_flt_eng_info, cpt_flt_eng_info_req, \ cpt_flt_eng_info_rsp) \ M(CPT_SET_ENG_GRP_NUM, 0xA0A, cpt_set_eng_grp_num, cpt_set_egrp_num, \ msg_rsp) \ +M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, cpt_rx_inline_lf_cfg_msg, \ + msg_rsp) \ /* SDP mbox IDs (range 0x1000 - 0x11FF) */ \ M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \ M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c index 8ed56ac512ef..2e8ac71979ae 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c @@ -12,6 +12,7 @@ #include "mbox.h" #include "rvu.h" #include "rvu_cpt.h" +#include /* CPT PF device id */ #define PCI_DEVID_OTX2_CPT_PF 0xA0FD @@ -26,6 +27,10 @@ /* Default CPT_AF_RXC_CFG1:max_rxc_icb_cnt */ #define CPT_DFLT_MAX_RXC_ICB_CNT 0xC0ULL +/* CPT LMTST */ +#define LMT_LINE_SIZE 128 /* LMT line size in bytes */ +#define LMT_BURST_SIZE 32 /* 32 LMTST lines for burst */ + #define cpt_get_eng_sts(e_min, e_max, rsp, etype) \ ({ \ u64 free_sts = 0, busy_sts = 0; \ @@ -1253,20 +1258,36 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int s return 0; } +static void cn10k_cpt_inst_flush(struct rvu *rvu, u64 *inst, u64 size) +{ + u64 val = 0, tar_addr = 0; + void __iomem *io_addr; + u64 blkaddr = BLKADDR_CPT0; + + io_addr = rvu->pfreg_base + CPT_RVU_FUNC_ADDR_S(blkaddr, 0, CPT_LF_NQX); + + /* Target
address for LMTST flush tells HW how many 128bit + * words are present. + * tar_addr[6:4] size of first LMTST - 1 in units of 128b. + */ + tar_addr |= (__force u64)io_addr | (((size / 16) - 1) & 0x7) << 4; + dma_wmb(); + memcpy((u64 *)rvu->rvu_cpt.lmt_addr, inst, size); + cn10k_lmt_flush(val, tar_addr); + dma_wmb(); +} + #define CPT_RES_LEN 16 #define CPT_SE_IE_EGRP 1ULL static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr, int nix_blkaddr) { - int cpt_pf_num = rvu->cpt_pf_num; - struct cpt_inst_lmtst_req *req; dma_addr_t res_daddr; int timeout = 3000; u8 cpt_idx; - u64 *inst; + u64 inst[8]; u16 *res; - int rc; res = kzalloc(CPT_RES_LEN, GFP_KERNEL); if (!res) @@ -1276,24 +1297,11 @@ static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr, DMA_BIDIRECTIONAL); if (dma_mapping_error(rvu->dev, res_daddr)) { dev_err(rvu->dev, "DMA mapping failed for CPT result\n"); - rc = -EFAULT; - goto res_free; + kfree(res); + return -EFAULT; } *res = 0xFFFF; - /* Send mbox message to CPT PF */ - req = (struct cpt_inst_lmtst_req *) - otx2_mbox_alloc_msg_rsp(&rvu->afpf_wq_info.mbox_up, - cpt_pf_num, sizeof(*req), - sizeof(struct msg_rsp)); - if (!req) { - rc = -ENOMEM; - goto res_daddr_unmap; - } - req->hdr.sig = OTX2_MBOX_REQ_SIG; - req->hdr.id = MBOX_MSG_CPT_INST_LMTST; - - inst = req->inst; /* Prepare CPT_INST_S */ inst[0] = 0; inst[1] = res_daddr; @@ -1314,11 +1322,8 @@ static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr, rvu_write64(rvu, nix_blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx), BIT_ULL(22) - 1); - otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, cpt_pf_num); - rc = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, cpt_pf_num); - if (rc) - dev_warn(rvu->dev, "notification to pf %d failed\n", - cpt_pf_num); + cn10k_cpt_inst_flush(rvu, inst, 64); + /* Wait for CPT instruction to be completed */ do { mdelay(1); @@ -1331,11 +1336,8 @@ static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr, if (timeout == 0) dev_warn(rvu->dev, "Poll for result hits hard loop counter\n"); -res_daddr_unmap: dma_unmap_single(rvu->dev, res_daddr, CPT_RES_LEN, DMA_BIDIRECTIONAL); -res_free: kfree(res); - return 0; } @@ -1381,23 +1383,16 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc) goto unlock; } - /* Enable BAR2 ALIAS for this pcifunc. 
*/ - reg = BIT_ULL(16) | pcifunc; - rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg); - for (i = 0; i < max_ctx_entries; i++) { cam_data = rvu_read64(rvu, blkaddr, CPT_AF_CTX_CAM_DATA(i)); if ((FIELD_GET(CTX_CAM_PF_FUNC, cam_data) == pcifunc) && FIELD_GET(CTX_CAM_CPTR, cam_data)) { reg = BIT_ULL(46) | FIELD_GET(CTX_CAM_CPTR, cam_data); - rvu_write64(rvu, blkaddr, - CPT_AF_BAR2_ALIASX(slot, CPT_LF_CTX_FLUSH), - reg); + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, + CPT_LF_CTX_FLUSH, reg); } } - rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0); - unlock: mutex_unlock(&rvu->rsrc_lock); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h index a982cffdc5f5..245e69fcbff9 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h @@ -571,6 +571,7 @@ #define CPT_LF_Q_SIZE 0x100 #define CPT_LF_Q_INST_PTR 0x110 #define CPT_LF_Q_GRP_PTR 0x120 +#define CPT_LF_NQX 0x400 #define CPT_LF_CTX_FLUSH 0x510 #define NPC_AF_BLK_RST (0x00040)
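[Editor's note] The LMTST target address built by cn10k_cpt_inst_flush() above packs the transfer size into the store address itself: bits [6:4] carry the number of 128-bit (16-byte) words in the first LMT line, minus one. A minimal standalone sketch of that encoding (the helper name and the example register address are hypothetical; the bit layout follows the comment in the patch):

#include <stdint.h>
#include <stdio.h>

/* Encode an LMTST target address: io_addr is the CPT_LF_NQX register
 * address in the real driver; size_bytes is the LMTST payload size. */
static uint64_t cpt_lmtst_tar_addr(uint64_t io_addr, uint64_t size_bytes)
{
	return io_addr | ((((size_bytes / 16) - 1) & 0x7) << 4);
}

int main(void)
{
	/* A 64-byte CPT_INST_S is four 128-bit words -> field value 3,
	 * i.e. 0x30 OR-ed into the target address. */
	printf("tar_addr = 0x%llx\n",
	       (unsigned long long)cpt_lmtst_tar_addr(0x840040000400ULL, 64));
	return 0;
}

This is why the patch calls cn10k_cpt_inst_flush(rvu, inst, 64): the whole instruction fits in one LMT line, so no mailbox round-trip to the CPT PF is needed any more.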
From patchwork Fri May 2 13:19:46 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 887259
From: Tanmay Jagdale
Subject: [net-next PATCH v1 05/15] crypto: octeontx2: Remove inbound inline ipsec config
Date: Fri, 2 May 2025 18:49:46 +0530
Message-ID: <20250502132005.611698-6-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>

From: Bharat Bhushan

The AF driver can now handle all of the inbound inline IPsec configuration, so remove this code from the CPT driver.
Signed-off-by: Bharat Bhushan Signed-off-by: Tanmay Jagdale --- drivers/crypto/marvell/octeontx2/otx2_cptpf.h | 10 - .../marvell/octeontx2/otx2_cptpf_main.c | 46 --- .../marvell/octeontx2/otx2_cptpf_mbox.c | 282 +----------------- .../net/ethernet/marvell/octeontx2/af/mbox.h | 11 - .../ethernet/marvell/octeontx2/af/rvu_cpt.c | 4 - 5 files changed, 2 insertions(+), 351 deletions(-) diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h index e5859a1e1c60..b7d1298e2b85 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h @@ -41,9 +41,6 @@ struct otx2_cptpf_dev { struct work_struct afpf_mbox_work; struct workqueue_struct *afpf_mbox_wq; - struct otx2_mbox afpf_mbox_up; - struct work_struct afpf_mbox_up_work; - /* VF <=> PF mbox */ struct otx2_mbox vfpf_mbox; struct workqueue_struct *vfpf_mbox_wq; @@ -56,10 +53,8 @@ struct otx2_cptpf_dev { u8 pf_id; /* RVU PF number */ u8 max_vfs; /* Maximum number of VFs supported by CPT */ u8 enabled_vfs; /* Number of enabled VFs */ - u8 sso_pf_func_ovrd; /* SSO PF_FUNC override bit */ u8 kvf_limits; /* Kernel crypto limits */ bool has_cpt1; - u8 rsrc_req_blkaddr; /* Devlink */ struct devlink *dl; @@ -67,12 +62,7 @@ struct otx2_cptpf_dev { irqreturn_t otx2_cptpf_afpf_mbox_intr(int irq, void *arg); void otx2_cptpf_afpf_mbox_handler(struct work_struct *work); -void otx2_cptpf_afpf_mbox_up_handler(struct work_struct *work); irqreturn_t otx2_cptpf_vfpf_mbox_intr(int irq, void *arg); void otx2_cptpf_vfpf_mbox_handler(struct work_struct *work); -int otx2_inline_cptlf_setup(struct otx2_cptpf_dev *cptpf, - struct otx2_cptlfs_info *lfs, u8 egrp, int num_lfs); -void otx2_inline_cptlf_cleanup(struct otx2_cptlfs_info *lfs); - #endif /* __OTX2_CPTPF_H */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c index 8a7ed0152371..34dbfea7f974 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c @@ -13,7 +13,6 @@ #define OTX2_CPT_DRV_NAME "rvu_cptpf" #define OTX2_CPT_DRV_STRING "Marvell RVU CPT Physical Function Driver" -#define CPT_UC_RID_CN9K_B0 1 #define CPT_UC_RID_CN10K_A 4 #define CPT_UC_RID_CN10K_B 5 @@ -477,19 +476,10 @@ static int cptpf_afpf_mbox_init(struct otx2_cptpf_dev *cptpf) if (err) goto error; - err = otx2_mbox_init(&cptpf->afpf_mbox_up, cptpf->afpf_mbox_base, - pdev, cptpf->reg_base, MBOX_DIR_PFAF_UP, 1); - if (err) - goto mbox_cleanup; - INIT_WORK(&cptpf->afpf_mbox_work, otx2_cptpf_afpf_mbox_handler); - INIT_WORK(&cptpf->afpf_mbox_up_work, otx2_cptpf_afpf_mbox_up_handler); mutex_init(&cptpf->lock); - return 0; -mbox_cleanup: - otx2_mbox_destroy(&cptpf->afpf_mbox); error: destroy_workqueue(cptpf->afpf_mbox_wq); return err; @@ -499,33 +489,6 @@ static void cptpf_afpf_mbox_destroy(struct otx2_cptpf_dev *cptpf) { destroy_workqueue(cptpf->afpf_mbox_wq); otx2_mbox_destroy(&cptpf->afpf_mbox); - otx2_mbox_destroy(&cptpf->afpf_mbox_up); -} - -static ssize_t sso_pf_func_ovrd_show(struct device *dev, - struct device_attribute *attr, char *buf) -{ - struct otx2_cptpf_dev *cptpf = dev_get_drvdata(dev); - - return sprintf(buf, "%d\n", cptpf->sso_pf_func_ovrd); -} - -static ssize_t sso_pf_func_ovrd_store(struct device *dev, - struct device_attribute *attr, - const char *buf, size_t count) -{ - struct otx2_cptpf_dev *cptpf = dev_get_drvdata(dev); - u8 sso_pf_func_ovrd; - - if (!(cptpf->pdev->revision == CPT_UC_RID_CN9K_B0)) - return 
count; - - if (kstrtou8(buf, 0, &sso_pf_func_ovrd)) - return -EINVAL; - - cptpf->sso_pf_func_ovrd = sso_pf_func_ovrd; - - return count; } static ssize_t kvf_limits_show(struct device *dev, @@ -558,11 +521,9 @@ static ssize_t kvf_limits_store(struct device *dev, } static DEVICE_ATTR_RW(kvf_limits); -static DEVICE_ATTR_RW(sso_pf_func_ovrd); static struct attribute *cptpf_attrs[] = { &dev_attr_kvf_limits.attr, - &dev_attr_sso_pf_func_ovrd.attr, NULL }; @@ -833,13 +794,6 @@ static void otx2_cptpf_remove(struct pci_dev *pdev) cptpf_sriov_disable(pdev); otx2_cpt_unregister_dl(cptpf); - /* Cleanup Inline CPT LF's if attached */ - if (cptpf->lfs.lfs_num) - otx2_inline_cptlf_cleanup(&cptpf->lfs); - - if (cptpf->cpt1_lfs.lfs_num) - otx2_inline_cptlf_cleanup(&cptpf->cpt1_lfs); - /* Delete sysfs entry created for kernel VF limits */ sysfs_remove_group(&pdev->dev.kobj, &cptpf_sysfs_group); /* Cleanup engine groups */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c index 222419bd5ac9..6b2881b534f5 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c @@ -5,20 +5,6 @@ #include "otx2_cptpf.h" #include "rvu_reg.h" -/* Fastpath ipsec opcode with inplace processing */ -#define CPT_INLINE_RX_OPCODE (0x26 | (1 << 6)) -#define CN10K_CPT_INLINE_RX_OPCODE (0x29 | (1 << 6)) - -#define cpt_inline_rx_opcode(pdev) \ -({ \ - u8 opcode; \ - if (is_dev_otx2(pdev)) \ - opcode = CPT_INLINE_RX_OPCODE; \ - else \ - opcode = CN10K_CPT_INLINE_RX_OPCODE; \ - (opcode); \ -}) - /* * CPT PF driver version, It will be incremented by 1 for every feature * addition in CPT mailbox messages. @@ -126,186 +112,6 @@ static int handle_msg_kvf_limits(struct otx2_cptpf_dev *cptpf, return 0; } -static int send_inline_ipsec_inbound_msg(struct otx2_cptpf_dev *cptpf, - int sso_pf_func, u8 slot) -{ - struct cpt_inline_ipsec_cfg_msg *req; - struct pci_dev *pdev = cptpf->pdev; - - req = (struct cpt_inline_ipsec_cfg_msg *) - otx2_mbox_alloc_msg_rsp(&cptpf->afpf_mbox, 0, - sizeof(*req), sizeof(struct msg_rsp)); - if (req == NULL) { - dev_err(&pdev->dev, "RVU MBOX failed to get message.\n"); - return -EFAULT; - } - memset(req, 0, sizeof(*req)); - req->hdr.id = MBOX_MSG_CPT_INLINE_IPSEC_CFG; - req->hdr.sig = OTX2_MBOX_REQ_SIG; - req->hdr.pcifunc = OTX2_CPT_RVU_PFFUNC(cptpf->pf_id, 0); - req->dir = CPT_INLINE_INBOUND; - req->slot = slot; - req->sso_pf_func_ovrd = cptpf->sso_pf_func_ovrd; - req->sso_pf_func = sso_pf_func; - req->enable = 1; - - return otx2_cpt_send_mbox_msg(&cptpf->afpf_mbox, pdev); -} - -static int rx_inline_ipsec_lf_cfg(struct otx2_cptpf_dev *cptpf, u8 egrp, - struct otx2_cpt_rx_inline_lf_cfg *req) -{ - struct nix_inline_ipsec_cfg *nix_req; - struct pci_dev *pdev = cptpf->pdev; - int ret; - - nix_req = (struct nix_inline_ipsec_cfg *) - otx2_mbox_alloc_msg_rsp(&cptpf->afpf_mbox, 0, - sizeof(*nix_req), - sizeof(struct msg_rsp)); - if (nix_req == NULL) { - dev_err(&pdev->dev, "RVU MBOX failed to get message.\n"); - return -EFAULT; - } - memset(nix_req, 0, sizeof(*nix_req)); - nix_req->hdr.id = MBOX_MSG_NIX_INLINE_IPSEC_CFG; - nix_req->hdr.sig = OTX2_MBOX_REQ_SIG; - nix_req->enable = 1; - nix_req->credit_th = req->credit_th; - nix_req->bpid = req->bpid; - if (!req->credit || req->credit > OTX2_CPT_INST_QLEN_MSGS) - nix_req->cpt_credit = OTX2_CPT_INST_QLEN_MSGS - 1; - else - nix_req->cpt_credit = req->credit - 1; - nix_req->gen_cfg.egrp = egrp; - if (req->opcode) - nix_req->gen_cfg.opcode = req->opcode; - 
else - nix_req->gen_cfg.opcode = cpt_inline_rx_opcode(pdev); - nix_req->gen_cfg.param1 = req->param1; - nix_req->gen_cfg.param2 = req->param2; - nix_req->inst_qsel.cpt_pf_func = OTX2_CPT_RVU_PFFUNC(cptpf->pf_id, 0); - nix_req->inst_qsel.cpt_slot = 0; - ret = otx2_cpt_send_mbox_msg(&cptpf->afpf_mbox, pdev); - if (ret) - return ret; - - if (cptpf->has_cpt1) { - ret = send_inline_ipsec_inbound_msg(cptpf, req->sso_pf_func, 1); - if (ret) - return ret; - } - - return send_inline_ipsec_inbound_msg(cptpf, req->sso_pf_func, 0); -} - -int -otx2_inline_cptlf_setup(struct otx2_cptpf_dev *cptpf, - struct otx2_cptlfs_info *lfs, u8 egrp, int num_lfs) -{ - int ret; - - ret = otx2_cptlf_init(lfs, 1 << egrp, OTX2_CPT_QUEUE_HI_PRIO, 1); - if (ret) { - dev_err(&cptpf->pdev->dev, - "LF configuration failed for RX inline ipsec.\n"); - return ret; - } - - /* Get msix offsets for attached LFs */ - ret = otx2_cpt_msix_offset_msg(lfs); - if (ret) - goto cleanup_lf; - - /* Register for CPT LF Misc interrupts */ - ret = otx2_cptlf_register_misc_interrupts(lfs); - if (ret) - goto free_irq; - - return 0; -free_irq: - otx2_cptlf_unregister_misc_interrupts(lfs); -cleanup_lf: - otx2_cptlf_shutdown(lfs); - return ret; -} - -void -otx2_inline_cptlf_cleanup(struct otx2_cptlfs_info *lfs) -{ - /* Unregister misc interrupt */ - otx2_cptlf_unregister_misc_interrupts(lfs); - - /* Cleanup LFs */ - otx2_cptlf_shutdown(lfs); -} - -static int handle_msg_rx_inline_ipsec_lf_cfg(struct otx2_cptpf_dev *cptpf, - struct mbox_msghdr *req) -{ - struct otx2_cpt_rx_inline_lf_cfg *cfg_req; - int num_lfs = 1, ret; - u8 egrp; - - cfg_req = (struct otx2_cpt_rx_inline_lf_cfg *)req; - if (cptpf->lfs.lfs_num) { - dev_err(&cptpf->pdev->dev, - "LF is already configured for RX inline ipsec.\n"); - return -EEXIST; - } - /* - * Allow LFs to execute requests destined to only grp IE_TYPES and - * set queue priority of each LF to high - */ - egrp = otx2_cpt_get_eng_grp(&cptpf->eng_grps, OTX2_CPT_IE_TYPES); - if (egrp == OTX2_CPT_INVALID_CRYPTO_ENG_GRP) { - dev_err(&cptpf->pdev->dev, - "Engine group for inline ipsec is not available\n"); - return -ENOENT; - } - - otx2_cptlf_set_dev_info(&cptpf->lfs, cptpf->pdev, cptpf->reg_base, - &cptpf->afpf_mbox, BLKADDR_CPT0); - cptpf->lfs.global_slot = 0; - cptpf->lfs.ctx_ilen_ovrd = cfg_req->ctx_ilen_valid; - cptpf->lfs.ctx_ilen = cfg_req->ctx_ilen; - - ret = otx2_inline_cptlf_setup(cptpf, &cptpf->lfs, egrp, num_lfs); - if (ret) { - dev_err(&cptpf->pdev->dev, "Inline-Ipsec CPT0 LF setup failed.\n"); - return ret; - } - - if (cptpf->has_cpt1) { - cptpf->rsrc_req_blkaddr = BLKADDR_CPT1; - otx2_cptlf_set_dev_info(&cptpf->cpt1_lfs, cptpf->pdev, - cptpf->reg_base, &cptpf->afpf_mbox, - BLKADDR_CPT1); - cptpf->cpt1_lfs.global_slot = num_lfs; - cptpf->cpt1_lfs.ctx_ilen_ovrd = cfg_req->ctx_ilen_valid; - cptpf->cpt1_lfs.ctx_ilen = cfg_req->ctx_ilen; - ret = otx2_inline_cptlf_setup(cptpf, &cptpf->cpt1_lfs, egrp, - num_lfs); - if (ret) { - dev_err(&cptpf->pdev->dev, "Inline CPT1 LF setup failed.\n"); - goto lf_cleanup; - } - cptpf->rsrc_req_blkaddr = 0; - } - - ret = rx_inline_ipsec_lf_cfg(cptpf, egrp, cfg_req); - if (ret) - goto lf1_cleanup; - - return 0; - -lf1_cleanup: - otx2_inline_cptlf_cleanup(&cptpf->cpt1_lfs); -lf_cleanup: - otx2_inline_cptlf_cleanup(&cptpf->lfs); - return ret; -} - static int cptpf_handle_vf_req(struct otx2_cptpf_dev *cptpf, struct otx2_cptvf_info *vf, struct mbox_msghdr *req, int size) @@ -419,28 +225,14 @@ void otx2_cptpf_vfpf_mbox_handler(struct work_struct *work) irqreturn_t 
otx2_cptpf_afpf_mbox_intr(int __always_unused irq, void *arg) { struct otx2_cptpf_dev *cptpf = arg; - struct otx2_mbox_dev *mdev; - struct otx2_mbox *mbox; - struct mbox_hdr *hdr; u64 intr; /* Read the interrupt bits */ intr = otx2_cpt_read64(cptpf->reg_base, BLKADDR_RVUM, 0, RVU_PF_INT); if (intr & 0x1ULL) { - mbox = &cptpf->afpf_mbox; - mdev = &mbox->dev[0]; - hdr = mdev->mbase + mbox->rx_start; - if (hdr->num_msgs) - /* Schedule work queue function to process the MBOX request */ - queue_work(cptpf->afpf_mbox_wq, &cptpf->afpf_mbox_work); - - mbox = &cptpf->afpf_mbox_up; - mdev = &mbox->dev[0]; - hdr = mdev->mbase + mbox->rx_start; - if (hdr->num_msgs) - /* Schedule work queue function to process the MBOX request */ - queue_work(cptpf->afpf_mbox_wq, &cptpf->afpf_mbox_up_work); + /* Schedule work queue function to process the MBOX request */ + queue_work(cptpf->afpf_mbox_wq, &cptpf->afpf_mbox_work); /* Clear and ack the interrupt */ otx2_cpt_write64(cptpf->reg_base, BLKADDR_RVUM, 0, RVU_PF_INT, 0x1ULL); @@ -466,8 +258,6 @@ static void process_afpf_mbox_msg(struct otx2_cptpf_dev *cptpf, msg->sig, msg->id); return; } - if (cptpf->rsrc_req_blkaddr == BLKADDR_CPT1) - lfs = &cptpf->cpt1_lfs; switch (msg->id) { case MBOX_MSG_READY: @@ -594,71 +384,3 @@ void otx2_cptpf_afpf_mbox_handler(struct work_struct *work) } otx2_mbox_reset(afpf_mbox, 0); } - -static void handle_msg_cpt_inst_lmtst(struct otx2_cptpf_dev *cptpf, - struct mbox_msghdr *msg) -{ - struct cpt_inst_lmtst_req *req = (struct cpt_inst_lmtst_req *)msg; - struct otx2_cptlfs_info *lfs = &cptpf->lfs; - struct msg_rsp *rsp; - - if (cptpf->lfs.lfs_num) - lfs->ops->send_cmd((union otx2_cpt_inst_s *)req->inst, 1, - &lfs->lf[0]); - - rsp = (struct msg_rsp *)otx2_mbox_alloc_msg(&cptpf->afpf_mbox_up, 0, - sizeof(*rsp)); - if (!rsp) - return; - - rsp->hdr.id = msg->id; - rsp->hdr.sig = OTX2_MBOX_RSP_SIG; - rsp->hdr.pcifunc = 0; - rsp->hdr.rc = 0; -} - -static void process_afpf_mbox_up_msg(struct otx2_cptpf_dev *cptpf, - struct mbox_msghdr *msg) -{ - if (msg->id >= MBOX_MSG_MAX) { - dev_err(&cptpf->pdev->dev, - "MBOX msg with unknown ID %d\n", msg->id); - return; - } - - switch (msg->id) { - case MBOX_MSG_CPT_INST_LMTST: - handle_msg_cpt_inst_lmtst(cptpf, msg); - break; - default: - otx2_reply_invalid_msg(&cptpf->afpf_mbox_up, 0, 0, msg->id); - } -} - -void otx2_cptpf_afpf_mbox_up_handler(struct work_struct *work) -{ - struct otx2_cptpf_dev *cptpf; - struct otx2_mbox_dev *mdev; - struct mbox_hdr *rsp_hdr; - struct mbox_msghdr *msg; - struct otx2_mbox *mbox; - int offset, i; - - cptpf = container_of(work, struct otx2_cptpf_dev, afpf_mbox_up_work); - mbox = &cptpf->afpf_mbox_up; - mdev = &mbox->dev[0]; - /* Sync mbox data into memory */ - smp_wmb(); - - rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start); - offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN); - - for (i = 0; i < rsp_hdr->num_msgs; i++) { - msg = (struct mbox_msghdr *)(mdev->mbase + offset); - - process_afpf_mbox_up_msg(cptpf, msg); - - offset = mbox->rx_start + msg->next_msgoff; - } - otx2_mbox_msg_send(mbox, 0); -} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index ad74a27888da..f9321084abb6 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -386,9 +386,6 @@ M(MCS_CUSTOM_TAG_CFG_GET, 0xa021, mcs_custom_tag_cfg_get, \ #define MBOX_UP_CGX_MESSAGES \ M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, msg_rsp) -#define 
MBOX_UP_CPT_MESSAGES \ -M(CPT_INST_LMTST, 0xD00, cpt_inst_lmtst, cpt_inst_lmtst_req, msg_rsp) - #define MBOX_UP_MCS_MESSAGES \ M(MCS_INTR_NOTIFY, 0xE00, mcs_intr_notify, mcs_intr_info, msg_rsp) @@ -399,7 +396,6 @@ enum { #define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id, MBOX_MESSAGES MBOX_UP_CGX_MESSAGES -MBOX_UP_CPT_MESSAGES MBOX_UP_MCS_MESSAGES MBOX_UP_REP_MESSAGES #undef M @@ -1915,13 +1911,6 @@ struct cpt_rxc_time_cfg_req { u16 active_limit; }; -/* Mailbox message request format to request for CPT_INST_S lmtst. */ -struct cpt_inst_lmtst_req { - struct mbox_msghdr hdr; - u64 inst[8]; - u64 rsvd; -}; - /* Mailbox message format to request for CPT LF reset */ struct cpt_lf_rst_req { struct mbox_msghdr hdr; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c index 2e8ac71979ae..c7e46e77eab0 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c @@ -704,10 +704,6 @@ int rvu_mbox_handler_cpt_inline_ipsec_cfg(struct rvu *rvu, return CPT_AF_ERR_LF_INVALID; switch (req->dir) { - case CPT_INLINE_INBOUND: - ret = cpt_inline_ipsec_cfg_inbound(rvu, blkaddr, cptlf, req); - break; - case CPT_INLINE_OUTBOUND: ret = cpt_inline_ipsec_cfg_outbound(rvu, blkaddr, cptlf, req); break;
From patchwork Fri May 2 13:19:47 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 886704
From: Tanmay Jagdale
Subject: [net-next PATCH v1 06/15] octeontx2-af: Add support for CPT second pass
Date: Fri, 2 May 2025 18:49:47 +0530
Message-ID: <20250502132005.611698-7-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>

From: Rakesh Kudurumalla

Implement a mailbox that adds a mechanism to allocate an RQ mask and apply it to a NIX LF, so that RQ context fields can be toggled for CPT second-pass packets.
Signed-off-by: Rakesh Kudurumalla Signed-off-by: Tanmay Jagdale --- .../net/ethernet/marvell/octeontx2/af/mbox.h | 23 ++++ .../net/ethernet/marvell/octeontx2/af/rvu.h | 7 + .../ethernet/marvell/octeontx2/af/rvu_cn10k.c | 11 ++ .../ethernet/marvell/octeontx2/af/rvu_nix.c | 120 ++++++++++++++++++ .../ethernet/marvell/octeontx2/af/rvu_reg.h | 6 + .../marvell/octeontx2/af/rvu_struct.h | 4 +- 6 files changed, 170 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index f9321084abb6..715efcc04c9e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -323,6 +323,9 @@ M(NIX_CPT_BP_DISABLE, 0x8021, nix_cpt_bp_disable, nix_bp_cfg_req, \ msg_rsp) \ M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg, \ msg_req, nix_inline_ipsec_cfg) \ +M(NIX_LF_INLINE_RQ_CFG, 0x8024, nix_lf_inline_rq_cfg, \ + nix_rq_cpt_field_mask_cfg_req, \ + msg_rsp) \ M(NIX_MCAST_GRP_CREATE, 0x802b, nix_mcast_grp_create, nix_mcast_grp_create_req, \ nix_mcast_grp_create_rsp) \ M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req, \ @@ -857,6 +860,7 @@ enum nix_af_status { NIX_AF_ERR_CQ_CTX_WRITE_ERR = -429, NIX_AF_ERR_AQ_CTX_RETRY_WRITE = -430, NIX_AF_ERR_LINK_CREDITS = -431, + NIX_AF_ERR_RQ_CPT_MASK = -432, NIX_AF_ERR_INVALID_BPID = -434, NIX_AF_ERR_INVALID_BPID_REQ = -435, NIX_AF_ERR_INVALID_MCAST_GRP = -436, @@ -1178,6 +1182,25 @@ struct nix_mark_format_cfg_rsp { u8 mark_format_idx; }; +struct nix_rq_cpt_field_mask_cfg_req { + struct mbox_msghdr hdr; +#define RQ_CTX_MASK_MAX 6 + union { + u64 rq_ctx_word_set[RQ_CTX_MASK_MAX]; + struct nix_cn10k_rq_ctx_s rq_set; + }; + union { + u64 rq_ctx_word_mask[RQ_CTX_MASK_MAX]; + struct nix_cn10k_rq_ctx_s rq_mask; + }; + struct nix_lf_rx_ipec_cfg1_req { + u32 spb_cpt_aura; + u8 rq_mask_enable; + u8 spb_cpt_sizem1; + u8 spb_cpt_enable; + } ipsec_cfg1; +}; + struct nix_rx_mode { struct mbox_msghdr hdr; #define NIX_RX_MODE_UCAST BIT(0) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 6551fdb612dc..71407f6318ec 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -350,6 +350,11 @@ struct nix_lso { u8 in_use; }; +struct nix_rq_cpt_mask { + u8 total; + u8 in_use; +}; + struct nix_txvlan { #define NIX_TX_VTAG_DEF_MAX 0x400 struct rsrc_bmap rsrc; @@ -373,6 +378,7 @@ struct nix_hw { struct nix_flowkey flowkey; struct nix_mark_format mark_format; struct nix_lso lso; + struct nix_rq_cpt_mask rq_msk; struct nix_txvlan txvlan; struct nix_ipolicer *ipolicer; struct nix_bp bp; @@ -398,6 +404,7 @@ struct hw_cap { bool per_pf_mbox_regs; /* PF mbox specified in per PF registers ? */ bool programmable_chans; /* Channels programmable ? */ bool ipolicer; + bool second_cpt_pass; bool nix_multiple_dwrr_mtu; /* Multiple DWRR_MTU to choose from */ bool npc_hash_extract; /* Hash extract enabled ? */ bool npc_exact_match_enabled; /* Exact match supported ? 
*/ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c index 7fa98aeb3663..18e2a48e2de1 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c @@ -544,6 +544,7 @@ void rvu_program_channels(struct rvu *rvu) void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw) { + struct rvu_hwinfo *hw = rvu->hw; int blkaddr = nix_hw->blkaddr; u64 cfg; @@ -558,6 +559,16 @@ void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw) cfg = rvu_read64(rvu, blkaddr, NIX_AF_CFG); cfg |= BIT_ULL(1) | BIT_ULL(2); rvu_write64(rvu, blkaddr, NIX_AF_CFG, cfg); + + cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST); + + if (!(cfg & BIT_ULL(62))) { + hw->cap.second_cpt_pass = false; + return; + } + + hw->cap.second_cpt_pass = true; + nix_hw->rq_msk.total = NIX_RQ_MSK_PROFILES; } void rvu_apr_block_cn10k_init(struct rvu *rvu) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index 6bd995c45dad..b15fd331facf 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -6612,3 +6612,123 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu, return ret; } + +static inline void +configure_rq_mask(struct rvu *rvu, int blkaddr, int nixlf, + u8 rq_mask, bool enable) +{ + u64 cfg, reg; + + cfg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf)); + reg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_CFG(nixlf)); + if (enable) { + cfg |= BIT_ULL(43); + reg = (reg & ~GENMASK_ULL(36, 35)) | ((u64)rq_mask << 35); + } else { + cfg &= ~BIT_ULL(43); + reg = (reg & ~GENMASK_ULL(36, 35)); + } + rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf), cfg); + rvu_write64(rvu, blkaddr, NIX_AF_LFX_CFG(nixlf), reg); +} + +static inline void +configure_spb_cpt(struct rvu *rvu, int blkaddr, int nixlf, + struct nix_rq_cpt_field_mask_cfg_req *req, bool enable) +{ + u64 cfg; + + cfg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf)); + if (enable) { + cfg |= BIT_ULL(37); + cfg &= ~GENMASK_ULL(42, 38); + cfg |= ((u64)req->ipsec_cfg1.spb_cpt_sizem1 << 38); + cfg &= ~GENMASK_ULL(63, 44); + cfg |= ((u64)req->ipsec_cfg1.spb_cpt_aura << 44); + } else { + cfg &= ~BIT_ULL(37); + cfg &= ~GENMASK_ULL(42, 38); + cfg &= ~GENMASK_ULL(63, 44); + } + rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf), cfg); +} + +static +int nix_inline_rq_mask_alloc(struct rvu *rvu, + struct nix_rq_cpt_field_mask_cfg_req *req, + struct nix_hw *nix_hw, int blkaddr) +{ + u8 rq_cpt_mask_select; + int idx, rq_idx; + u64 reg_mask; + u64 reg_set; + + for (idx = 0; idx < nix_hw->rq_msk.in_use; idx++) { + for (rq_idx = 0; rq_idx < RQ_CTX_MASK_MAX; rq_idx++) { + reg_mask = rvu_read64(rvu, blkaddr, + NIX_AF_RX_RQX_MASKX(idx, rq_idx)); + reg_set = rvu_read64(rvu, blkaddr, + NIX_AF_RX_RQX_SETX(idx, rq_idx)); + if (reg_mask != req->rq_ctx_word_mask[rq_idx] && + reg_set != req->rq_ctx_word_set[rq_idx]) + break; + } + if (rq_idx == RQ_CTX_MASK_MAX) + break; + } + + if (idx < nix_hw->rq_msk.in_use) { + /* Match found */ + rq_cpt_mask_select = idx; + return idx; + } + + if (nix_hw->rq_msk.in_use == nix_hw->rq_msk.total) + return NIX_AF_ERR_RQ_CPT_MASK; + + rq_cpt_mask_select = nix_hw->rq_msk.in_use++; + + for (rq_idx = 0; rq_idx < RQ_CTX_MASK_MAX; rq_idx++) { + rvu_write64(rvu, blkaddr, + NIX_AF_RX_RQX_MASKX(rq_cpt_mask_select, rq_idx), + req->rq_ctx_word_mask[rq_idx]); + 
rvu_write64(rvu, blkaddr, + NIX_AF_RX_RQX_SETX(rq_cpt_mask_select, rq_idx), + req->rq_ctx_word_set[rq_idx]); + } + + return rq_cpt_mask_select; +} + +int rvu_mbox_handler_nix_lf_inline_rq_cfg(struct rvu *rvu, + struct nix_rq_cpt_field_mask_cfg_req *req, + struct msg_rsp *rsp) +{ + struct rvu_hwinfo *hw = rvu->hw; + struct nix_hw *nix_hw; + int blkaddr, nixlf; + int rq_mask, err; + + err = nix_get_nixlf(rvu, req->hdr.pcifunc, &nixlf, &blkaddr); + if (err) + return err; + + nix_hw = get_nix_hw(rvu->hw, blkaddr); + if (!nix_hw) + return NIX_AF_ERR_INVALID_NIXBLK; + + if (!hw->cap.second_cpt_pass) + return NIX_AF_ERR_INVALID_NIXBLK; + + if (req->ipsec_cfg1.rq_mask_enable) { + rq_mask = nix_inline_rq_mask_alloc(rvu, req, nix_hw, blkaddr); + if (rq_mask < 0) + return NIX_AF_ERR_RQ_CPT_MASK; + } + + configure_rq_mask(rvu, blkaddr, nixlf, rq_mask, + req->ipsec_cfg1.rq_mask_enable); + configure_spb_cpt(rvu, blkaddr, nixlf, req, + req->ipsec_cfg1.spb_cpt_enable); + return 0; +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h index 245e69fcbff9..e5e005d5d71e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h @@ -433,6 +433,8 @@ #define NIX_AF_MDQX_IN_MD_COUNT(a) (0x14e0 | (a) << 16) #define NIX_AF_SMQX_STATUS(a) (0x730 | (a) << 16) #define NIX_AF_MDQX_OUT_MD_COUNT(a) (0xdb0 | (a) << 16) +#define NIX_AF_RX_RQX_MASKX(a, b) (0x4A40 | (a) << 16 | (b) << 3) +#define NIX_AF_RX_RQX_SETX(a, b) (0x4A80 | (a) << 16 | (b) << 3) #define NIX_PRIV_AF_INT_CFG (0x8000000) #define NIX_PRIV_LFX_CFG (0x8000010) @@ -452,6 +454,10 @@ #define NIX_AF_TL3_PARENT_MASK GENMASK_ULL(23, 16) #define NIX_AF_TL2_PARENT_MASK GENMASK_ULL(20, 16) +#define NIX_AF_LF_CFG_SHIFT 17 +#define NIX_AF_LF_SSO_PF_FUNC_SHIFT 16 +#define NIX_RQ_MSK_PROFILES 4 + /* SSO */ #define SSO_AF_CONST (0x1000) #define SSO_AF_CONST1 (0x1008) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h index 77ac94cb2ec4..bd37ed3a81ad 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h @@ -377,7 +377,9 @@ struct nix_cn10k_rq_ctx_s { u64 ipsech_ena : 1; u64 ena_wqwd : 1; u64 cq : 20; - u64 rsvd_36_24 : 13; + u64 rsvd_34_24 : 11; + u64 port_ol4_dis : 1; + u64 port_il4_dis : 1; u64 lenerr_dis : 1; u64 csum_il4_dis : 1; u64 csum_ol4_dis : 1;
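[Editor's note] The allocator nix_inline_rq_mask_alloc() above reuses an existing RQ mask profile when an identical (mask, set) pair is already programmed, and otherwise claims the next free one of the NIX_RQ_MSK_PROFILES slots. A standalone model of that lookup (a sketch, not driver code: an in-memory table stands in for the NIX_AF_RX_RQX_MASKX/SETX registers, and a profile is treated as matching only when every word matches, which is a slightly stricter reading than the patch's comparison):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RQ_CTX_MASK_MAX     6
#define NIX_RQ_MSK_PROFILES 4

struct rq_msk_profile {
	uint64_t mask[RQ_CTX_MASK_MAX];
	uint64_t set[RQ_CTX_MASK_MAX];
};

static struct rq_msk_profile tbl[NIX_RQ_MSK_PROFILES];
static int in_use;

/* Return an existing profile index on match, the newly claimed index
 * otherwise, or -1 when all profiles are taken (NIX_AF_ERR_RQ_CPT_MASK). */
static int rq_mask_alloc(const uint64_t *mask, const uint64_t *set)
{
	for (int i = 0; i < in_use; i++)
		if (!memcmp(tbl[i].mask, mask, sizeof(tbl[i].mask)) &&
		    !memcmp(tbl[i].set, set, sizeof(tbl[i].set)))
			return i;                 /* match found, reuse */

	if (in_use == NIX_RQ_MSK_PROFILES)
		return -1;                        /* table exhausted */

	memcpy(tbl[in_use].mask, mask, sizeof(tbl[0].mask));
	memcpy(tbl[in_use].set, set, sizeof(tbl[0].set));
	return in_use++;
}

int main(void)
{
	uint64_t m[RQ_CTX_MASK_MAX] = { 0x1 }, s[RQ_CTX_MASK_MAX] = { 0x1 };

	printf("first alloc  -> profile %d\n", rq_mask_alloc(m, s)); /* 0 */
	printf("second alloc -> profile %d\n", rq_mask_alloc(m, s)); /* 0 */
	return 0;
}

The selected profile number then lands in NIX_AF_LFX_CFG bits [36:35] via configure_rq_mask(), which is how a NIX LF is bound to one of the four shared profiles.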
From patchwork Fri May 2 13:19:48 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 887258
From: Tanmay Jagdale
Subject: [net-next PATCH v1 07/15] octeontx2-af: Add support for SPI to SA index translation
Date: Fri, 2 May 2025 18:49:48 +0530
Message-ID: <20250502132005.611698-8-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>
From: Kiran Kumar K

In the case of IPsec, the inbound SPI can be random. The hardware supports mapping an SPI to an arbitrary SA index. SPI-to-SA-index translation is done via a lookup in an NPC CAM entry keyed on SPI, MATCH_ID and LFID. Add mbox API changes to configure the match table.

Signed-off-by: Kiran Kumar K Signed-off-by: Nithin Dabilpuram Signed-off-by: Sunil Kovvuri Goutham Signed-off-by: Tanmay Jagdale --- .../ethernet/marvell/octeontx2/af/Makefile | 2 +- .../net/ethernet/marvell/octeontx2/af/mbox.h | 27 +++ .../net/ethernet/marvell/octeontx2/af/rvu.c | 4 + .../net/ethernet/marvell/octeontx2/af/rvu.h | 13 ++ .../ethernet/marvell/octeontx2/af/rvu_nix.c | 6 + .../marvell/octeontx2/af/rvu_nix_spi.c | 220 ++++++++++++++++++ .../ethernet/marvell/octeontx2/af/rvu_reg.h | 4 + 7 files changed, 275 insertions(+), 1 deletion(-) create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile index ccea37847df8..49318017f35f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile +++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile @@ -8,7 +8,7 @@ obj-$(CONFIG_OCTEONTX2_MBOX) += rvu_mbox.o obj-$(CONFIG_OCTEONTX2_AF) += rvu_af.o rvu_mbox-y := mbox.o rvu_trace.o -rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \ +rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o rvu_nix_spi.o \ rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \ rvu_cpt.o rvu_devlink.o rpm.o rvu_cn10k.o rvu_switch.o \ rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o \ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index 715efcc04c9e..5cebf10a15a7 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -326,6 +326,10 @@ M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg, \ msg_req, nix_inline_ipsec_cfg) \ M(NIX_LF_INLINE_RQ_CFG, 0x8024, nix_lf_inline_rq_cfg, \ nix_rq_cpt_field_mask_cfg_req, \ msg_rsp) \ +M(NIX_SPI_TO_SA_ADD, 0x8026, nix_spi_to_sa_add, nix_spi_to_sa_add_req, \ + nix_spi_to_sa_add_rsp) \ +M(NIX_SPI_TO_SA_DELETE, 0x8027, nix_spi_to_sa_delete, nix_spi_to_sa_delete_req, \ + msg_rsp) \ M(NIX_MCAST_GRP_CREATE, 0x802b, nix_mcast_grp_create, nix_mcast_grp_create_req, \ nix_mcast_grp_create_rsp) \ M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req, \ @@ -880,6 +884,29 @@ enum nix_rx_vtag0_type { NIX_AF_LFX_RX_VTAG_TYPE7, }; +/* For SPI to SA index add */ +struct nix_spi_to_sa_add_req { + struct mbox_msghdr hdr; + u32 sa_index; + u32 spi_index; + u16 match_id; + bool valid; +}; + +struct nix_spi_to_sa_add_rsp { + struct mbox_msghdr hdr; + u16 hash_index; + u8 way; + u8 is_duplicate; +}; + +/* To free SPI to SA index */ +struct nix_spi_to_sa_delete_req { + struct mbox_msghdr hdr; + u16 hash_index; + u8 way; +}; + /* For NIX LF context alloc and init */ struct nix_lf_alloc_req { struct mbox_msghdr hdr; diff --git
a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index ea346e59835b..2b7c09bb24e1 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -90,6 +90,9 @@ static void rvu_setup_hw_capabilities(struct rvu *rvu) if (is_rvu_npc_hash_extract_en(rvu)) hw->cap.npc_hash_extract = true; + + if (is_rvu_nix_spi_to_sa_en(rvu)) + hw->cap.spi_to_sas = 0x2000; } /* Poll a RVU block's register 'offset', for a 'zero' @@ -2723,6 +2726,7 @@ static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc) rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NPA); rvu_reset_lmt_map_tbl(rvu, pcifunc); rvu_detach_rsrcs(rvu, NULL, pcifunc); + /* In scenarios where PF/VF drivers detach NIXLF without freeing MCAM * entries, check and free the MCAM entries explicitly to avoid leak. * Since LF is detached use LF number as -1. diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 71407f6318ec..42fc3e762bc0 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -395,6 +395,7 @@ struct hw_cap { u16 nix_txsch_per_cgx_lmac; /* Max Q's transmitting to CGX LMAC */ u16 nix_txsch_per_lbk_lmac; /* Max Q's transmitting to LBK LMAC */ u16 nix_txsch_per_sdp_lmac; /* Max Q's transmitting to SDP LMAC */ + u16 spi_to_sas; /* Num of SPI to SA index */ bool nix_fixed_txschq_mapping; /* Schq mapping fixed or flexible */ bool nix_shaping; /* Is shaping and coloring supported */ bool nix_shaper_toggle_wait; /* Shaping toggle needs poll/wait */ @@ -800,6 +801,17 @@ static inline bool is_rvu_npc_hash_extract_en(struct rvu *rvu) return true; } +static inline bool is_rvu_nix_spi_to_sa_en(struct rvu *rvu) +{ + u64 nix_const2; + + nix_const2 = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_CONST2); + if ((nix_const2 >> 48) & 0xffff) + return true; + + return false; +} + static inline u16 rvu_nix_chan_cgx(struct rvu *rvu, u8 cgxid, u8 lmacid, u8 chan) { @@ -992,6 +1004,7 @@ int nix_get_struct_ptrs(struct rvu *rvu, u16 pcifunc, struct nix_hw **nix_hw, int *blkaddr); int rvu_nix_setup_ratelimit_aggr(struct rvu *rvu, u16 pcifunc, u16 rq_idx, u16 match_id); +int rvu_nix_free_spi_to_sa_table(struct rvu *rvu, uint16_t pcifunc); int nix_aq_context_read(struct rvu *rvu, struct nix_hw *nix_hw, struct nix_cn10k_aq_enq_req *aq_req, struct nix_cn10k_aq_enq_rsp *aq_rsp, diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index b15fd331facf..68525bfc8e6d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -1751,6 +1751,9 @@ int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct nix_lf_free_req *req, else rvu_npc_free_mcam_entries(rvu, pcifunc, nixlf); + /* Reset SPI to SA index table */ + rvu_nix_free_spi_to_sa_table(rvu, pcifunc); + /* Free any tx vtag def entries used by this NIX LF */ if (!(req->flags & NIX_LF_DONT_FREE_TX_VTAG)) nix_free_tx_vtag_entries(rvu, pcifunc); @@ -5312,6 +5315,9 @@ void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf) nix_rx_sync(rvu, blkaddr); nix_txschq_free(rvu, pcifunc); + /* Reset SPI to SA index table */ + rvu_nix_free_spi_to_sa_table(rvu, pcifunc); + clear_bit(NIXLF_INITIALIZED, &pfvf->flags); if (is_pf_cgxmapped(rvu, pf) && rvu->rep_mode) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c 
b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c new file mode 100644 index 000000000000..b8acc23a47bc --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c @@ -0,0 +1,220 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Marvell RVU Admin Function driver + * + * Copyright (C) 2022 Marvell. + * + */ + +#include "rvu.h" + +static bool nix_spi_to_sa_index_check_duplicate(struct rvu *rvu, + struct nix_spi_to_sa_add_req *req, + struct nix_spi_to_sa_add_rsp *rsp, + int blkaddr, int16_t index, u8 way, + bool *is_valid, int lfidx) +{ + u32 spi_index; + u16 match_id; + bool valid; + u8 lfid; + u64 wkey; + + wkey = rvu_read64(rvu, blkaddr, NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way)); + spi_index = (wkey & 0xFFFFFFFF); + match_id = ((wkey >> 32) & 0xFFFF); + lfid = ((wkey >> 48) & 0x7f); + valid = ((wkey >> 55) & 0x1); + + *is_valid = valid; + if (!valid) + return 0; + + if (req->spi_index == spi_index && req->match_id == match_id && + lfidx == lfid) { + rsp->hash_index = index; + rsp->way = way; + rsp->is_duplicate = true; + return 1; + } + return 0; +} + +static void nix_spi_to_sa_index_table_update(struct rvu *rvu, + struct nix_spi_to_sa_add_req *req, + struct nix_spi_to_sa_add_rsp *rsp, + int blkaddr, int16_t index, u8 way, + int lfidx) +{ + u64 wvalue; + u64 wkey; + + wkey = (req->spi_index | ((u64)req->match_id << 32) | + (((u64)lfidx) << 48) | ((u64)req->valid << 55)); + rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way), + wkey); + wvalue = (req->sa_index & 0xFFFFFFFF); + rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_VALUEX_WAYX(index, way), + wvalue); + rsp->hash_index = index; + rsp->way = way; + rsp->is_duplicate = false; +} + +int rvu_mbox_handler_nix_spi_to_sa_delete(struct rvu *rvu, + struct nix_spi_to_sa_delete_req *req, + struct msg_rsp *rsp) +{ + struct rvu_hwinfo *hw = rvu->hw; + u16 pcifunc = req->hdr.pcifunc; + int lfidx, lfid; + int blkaddr; + u64 wvalue; + u64 wkey; + int ret = 0; + + if (!hw->cap.spi_to_sas) + return NIX_AF_ERR_PARAM; + + if (!is_nixlf_attached(rvu, pcifunc)) { + ret = NIX_AF_ERR_AF_LF_INVALID; + goto exit; + } + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0); + if (lfidx < 0) { + ret = NIX_AF_ERR_AF_LF_INVALID; + goto exit; + } + + mutex_lock(&rvu->rsrc_lock); + + wkey = rvu_read64(rvu, blkaddr, + NIX_AF_SPI_TO_SA_KEYX_WAYX(req->hash_index, req->way)); + lfid = ((wkey >> 48) & 0x7f); + if (lfid != lfidx) { + ret = NIX_AF_ERR_AF_LF_INVALID; + goto unlock; + } + + wkey = 0; + rvu_write64(rvu, blkaddr, + NIX_AF_SPI_TO_SA_KEYX_WAYX(req->hash_index, req->way), wkey); + wvalue = 0; + rvu_write64(rvu, blkaddr, + NIX_AF_SPI_TO_SA_VALUEX_WAYX(req->hash_index, req->way), wvalue); +unlock: + mutex_unlock(&rvu->rsrc_lock); +exit: + return ret; +} + +int rvu_mbox_handler_nix_spi_to_sa_add(struct rvu *rvu, + struct nix_spi_to_sa_add_req *req, + struct nix_spi_to_sa_add_rsp *rsp) +{ + u16 way0_index, way1_index, way2_index, way3_index; + struct rvu_hwinfo *hw = rvu->hw; + u16 pcifunc = req->hdr.pcifunc; + bool way0, way1, way2, way3; + int ret = 0; + int blkaddr; + int lfidx; + u64 value; + u64 key; + + if (!hw->cap.spi_to_sas) + return NIX_AF_ERR_PARAM; + + if (!is_nixlf_attached(rvu, pcifunc)) { + ret = NIX_AF_ERR_AF_LF_INVALID; + goto exit; + } + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0); + if (lfidx < 0) { + ret = NIX_AF_ERR_AF_LF_INVALID; + goto exit; + } + + mutex_lock(&rvu->rsrc_lock); + + key = 
(((u64)lfidx << 48) | ((u64)req->match_id << 32) | req->spi_index); + rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_HASH_KEY, key); + value = rvu_read64(rvu, blkaddr, NIX_AF_SPI_TO_SA_HASH_VALUE); + way0_index = (value & 0x7ff); + way1_index = ((value >> 16) & 0x7ff); + way2_index = ((value >> 32) & 0x7ff); + way3_index = ((value >> 48) & 0x7ff); + + /* Check for duplicate entry */ + if (nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr, + way0_index, 0, &way0, lfidx) || + nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr, + way1_index, 1, &way1, lfidx) || + nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr, + way2_index, 2, &way2, lfidx) || + nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr, + way3_index, 3, &way3, lfidx)) { + ret = 0; + goto unlock; + } + + /* If not present, update first available way with index */ + if (!way0) + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr, + way0_index, 0, lfidx); + else if (!way1) + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr, + way1_index, 1, lfidx); + else if (!way2) + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr, + way2_index, 2, lfidx); + else if (!way3) + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr, + way3_index, 3, lfidx); +unlock: + mutex_unlock(&rvu->rsrc_lock); +exit: + return ret; +} + +int rvu_nix_free_spi_to_sa_table(struct rvu *rvu, uint16_t pcifunc) +{ + struct rvu_hwinfo *hw = rvu->hw; + int lfidx, lfid; + int index, way; + u64 value, key; + int blkaddr; + + if (!hw->cap.spi_to_sas) + return 0; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0); + if (lfidx < 0) + return NIX_AF_ERR_AF_LF_INVALID; + + mutex_lock(&rvu->rsrc_lock); + for (index = 0; index < hw->cap.spi_to_sas / 4; index++) { + for (way = 0; way < 4; way++) { + key = rvu_read64(rvu, blkaddr, + NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way)); + lfid = ((key >> 48) & 0x7f); + if (lfid == lfidx) { + key = 0; + rvu_write64(rvu, blkaddr, + NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way), + key); + value = 0; + rvu_write64(rvu, blkaddr, + NIX_AF_SPI_TO_SA_VALUEX_WAYX(index, way), + value); + } + } + } + mutex_unlock(&rvu->rsrc_lock); + + return 0; +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h index e5e005d5d71e..b64547fe4811 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h @@ -396,6 +396,10 @@ #define NIX_AF_RX_CHANX_CFG(a) (0x1A30 | (a) << 15) #define NIX_AF_CINT_TIMERX(a) (0x1A40 | (a) << 18) #define NIX_AF_LSO_FORMATX_FIELDX(a, b) (0x1B00 | (a) << 16 | (b) << 3) +#define NIX_AF_SPI_TO_SA_KEYX_WAYX(a, b) (0x1C00 | (a) << 16 | (b) << 3) +#define NIX_AF_SPI_TO_SA_VALUEX_WAYX(a, b) (0x1C40 | (a) << 16 | (b) << 3) +#define NIX_AF_SPI_TO_SA_HASH_KEY (0x1C90) +#define NIX_AF_SPI_TO_SA_HASH_VALUE (0x1CA0) #define NIX_AF_LFX_CFG(a) (0x4000 | (a) << 17) #define NIX_AF_LFX_SQS_CFG(a) (0x4020 | (a) << 17) #define NIX_AF_LFX_TX_CFG2(a) (0x4028 | (a) << 17)
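Taken together, the two messages above let a NIX LF driver install an SPI-to-SA mapping when an inbound SA is created and remove it again on teardown. A minimal caller-side sketch follows (illustrative only, not part of this series; it assumes the otx2_mbox_alloc_msg_nix_spi_to_sa_add() helper that the M() macro entry above would generate, plus the standard otx2_sync_mbox_msg()/otx2_mbox_get_rsp() flow used throughout the otx2 drivers):

/* Hedged sketch: install an SPI -> SA-index mapping from a PF/VF driver.
 * otx2_mbox_alloc_msg_nix_spi_to_sa_add() is assumed to be generated by
 * the mbox M() macro; everything else mirrors existing otx2 mbox usage.
 */
static int otx2_spi_to_sa_install(struct otx2_nic *pf, u32 spi, u32 sa_index,
				  u16 match_id, u16 *hash_index, u8 *way)
{
	struct nix_spi_to_sa_add_req *req;
	struct nix_spi_to_sa_add_rsp *rsp;
	int err;

	mutex_lock(&pf->mbox.lock);
	req = otx2_mbox_alloc_msg_nix_spi_to_sa_add(&pf->mbox);
	if (!req) {
		mutex_unlock(&pf->mbox.lock);
		return -ENOMEM;
	}

	req->spi_index = spi;
	req->sa_index = sa_index;
	req->match_id = match_id;
	req->valid = true;

	err = otx2_sync_mbox_msg(&pf->mbox);
	if (err)
		goto out;

	rsp = (struct nix_spi_to_sa_add_rsp *)
	      otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr);
	if (IS_ERR(rsp)) {
		err = PTR_ERR(rsp);
		goto out;
	}

	/* Remember the slot AF picked so NIX_SPI_TO_SA_DELETE can free it
	 * later (hash_index plus way identify the entry).
	 */
	*hash_index = rsp->hash_index;
	*way = rsp->way;
out:
	mutex_unlock(&pf->mbox.lock);
	return err;
}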
From patchwork Fri May 2 13:19:49 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 886703
From: Tanmay Jagdale
Subject: [net-next PATCH v1 08/15] octeontx2-af: Add mbox to alloc/free BPIDs
Date: Fri, 2 May 2025 18:49:49 +0530
Message-ID: <20250502132005.611698-9-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>
From: Geetha sowjanya

Add mbox handlers to allocate/free BPIDs from the free BPID pool. These can be used by a PF/VF to request up to 8 BPIDs. Also add a mbox handler to configure NIX_AF_RX_CHANX with multiple BPIDs.

Signed-off-by: Amit Singh Tomar
Signed-off-by: Geetha sowjanya
Signed-off-by: Tanmay Jagdale
---
.../ethernet/marvell/octeontx2/af/common.h | 1 + .../net/ethernet/marvell/octeontx2/af/mbox.h | 26 +++++ .../ethernet/marvell/octeontx2/af/rvu_nix.c | 100 ++++++++++++++++++ 3 files changed, 127 insertions(+) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h index 406c59100a35..a7c1223dedc6 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/common.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h @@ -191,6 +191,7 @@ enum nix_scheduler { #define NIX_INTF_TYPE_CGX 0 #define NIX_INTF_TYPE_LBK 1 #define NIX_INTF_TYPE_SDP 2 +#define NIX_INTF_TYPE_CPT 3 #define MAX_LMAC_PKIND 12 #define NIX_LINK_CGX_LMAC(a, b) (0 + 4 * (a) + (b)) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index 5cebf10a15a7..71cf507c2591 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -338,6 +338,9 @@ M(NIX_MCAST_GRP_UPDATE, 0x802d, nix_mcast_grp_update, \ nix_mcast_grp_update_req, \ nix_mcast_grp_update_rsp) \ M(NIX_LF_STATS, 0x802e, nix_lf_stats, nix_stats_req, nix_stats_rsp) \ +M(NIX_ALLOC_BPIDS, 0x8028, nix_alloc_bpids, nix_alloc_bpid_req, nix_bpids) \ +M(NIX_FREE_BPIDS, 0x8029, nix_free_bpids, nix_bpids, msg_rsp) \ +M(NIX_RX_CHAN_CFG, 0x802a, nix_rx_chan_cfg, nix_rx_chan_cfg, nix_rx_chan_cfg) \ /* MCS mbox IDs (range 0xA000 - 0xBFFF) */ \ M(MCS_ALLOC_RESOURCES, 0xa000, mcs_alloc_resources, mcs_alloc_rsrc_req, \ mcs_alloc_rsrc_rsp) \ @@ -1347,6 +1350,29 @@ struct nix_mcast_grp_update_rsp { u32 mce_start_index; }; +struct nix_alloc_bpid_req { + struct mbox_msghdr hdr; + u8 bpid_cnt; + u8 type; + u64 rsvd; +}; + +struct nix_bpids { + struct mbox_msghdr hdr; + u8 bpid_cnt; + u16 bpids[8]; + u64 rsvd; +}; + +struct nix_rx_chan_cfg { + struct mbox_msghdr hdr; + u8 type; /* Interface type (CGX/CPT/LBK) */ + u8 read; + u16 chan; /* RX channel to be configured */ + u64 val; /* NIX_AF_RX_CHAN_CFG value */ + u64 rsvd; +}; + /* Global NIX inline IPSec configuration */ struct nix_inline_ipsec_cfg { struct mbox_msghdr hdr; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index 68525bfc8e6d..d5ec6ad0f30c 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -569,6 +569,106 @@ void rvu_nix_flr_free_bpids(struct rvu *rvu, u16 pcifunc) mutex_unlock(&rvu->rsrc_lock); } +int rvu_mbox_handler_nix_rx_chan_cfg(struct rvu *rvu, + struct nix_rx_chan_cfg *req, + struct nix_rx_chan_cfg *rsp) +{ + struct rvu_pfvf *pfvf; + int blkaddr; + u16 chan; + + pfvf = rvu_get_pfvf(rvu, req->hdr.pcifunc); + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, req->hdr.pcifunc); + chan = pfvf->rx_chan_base + req->chan; + + if (req->type == NIX_INTF_TYPE_CPT) + chan = chan | BIT(11); + + if (req->read) { + rsp->val = rvu_read64(rvu, blkaddr, + NIX_AF_RX_CHANX_CFG(chan)); + rsp->chan = req->chan; + } else { + rvu_write64(rvu, blkaddr, NIX_AF_RX_CHANX_CFG(chan), req->val); + } + return 0; +} + +int rvu_mbox_handler_nix_alloc_bpids(struct rvu *rvu, + struct nix_alloc_bpid_req *req, + struct nix_bpids *rsp) +{ + u16 pcifunc = req->hdr.pcifunc; + struct nix_hw *nix_hw; + int blkaddr, cnt = 0; + struct nix_bp *bp; + int bpid, err; + + err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr); + if (err) + return err; + + bp = &nix_hw->bp; + + /* Interfaces like SSO use the same BPID across multiple + * applications. Reuse the BPID if one is already allocated + * for this type, else allocate a new one. + */ + if (req->type > NIX_INTF_TYPE_CPT) { + for (bpid = 0; bpid < bp->bpids.max; bpid++) { + if (bp->intf_map[bpid] == req->type) { + rsp->bpids[cnt] = bpid + bp->free_pool_base; + rsp->bpid_cnt++; + bp->ref_cnt[bpid]++; + cnt++; + } + } + if (rsp->bpid_cnt) + return 0; + } + + for (cnt = 0; cnt < req->bpid_cnt; cnt++) { + bpid = rvu_alloc_rsrc(&bp->bpids); + if (bpid < 0) + return 0; + rsp->bpids[cnt] = bpid + bp->free_pool_base; + bp->intf_map[bpid] = req->type; + bp->fn_map[bpid] = pcifunc; + bp->ref_cnt[bpid]++; + rsp->bpid_cnt++; + } + return 0; +} + +int rvu_mbox_handler_nix_free_bpids(struct rvu *rvu, + struct nix_bpids *req, + struct msg_rsp *rsp) +{ + u16 pcifunc = req->hdr.pcifunc; + int blkaddr, cnt, err, id; + struct nix_hw *nix_hw; + struct nix_bp *bp; + u16 bpid; + + err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr); + if (err) + return err; + + bp = &nix_hw->bp; + for (cnt = 0; cnt < req->bpid_cnt; cnt++) { + bpid = req->bpids[cnt] - bp->free_pool_base; + bp->ref_cnt[bpid]--; + if (bp->ref_cnt[bpid]) + continue; + rvu_free_rsrc(&bp->bpids, bpid); + for (id = 0; id < bp->bpids.max; id++) { + if (bp->fn_map[id] == pcifunc) + bp->fn_map[id] = 0; + } + } + return 0; +} + static u16 nix_get_channel(u16 chan, bool cpt_link) { /* CPT channel for a given link channel is always
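For callers, the allocation side reduces to a single request/response pair. A sketch of requesting BPIDs for the CPT interface is below (illustrative only, not part of the series; the otx2_mbox_alloc_msg_nix_alloc_bpids() helper is assumed to come from the M() macro entries added above):

/* Hedged sketch: ask AF for two backpressure IDs for a CPT interface.
 * The response carries bpid_cnt and the bpids[] actually granted; the
 * same nix_bpids structure is later handed back via NIX_FREE_BPIDS.
 */
static int otx2_alloc_cpt_bpids(struct otx2_nic *pf, struct nix_bpids *out)
{
	struct nix_alloc_bpid_req *req;
	struct nix_bpids *rsp;
	int err;

	mutex_lock(&pf->mbox.lock);
	req = otx2_mbox_alloc_msg_nix_alloc_bpids(&pf->mbox);
	if (!req) {
		mutex_unlock(&pf->mbox.lock);
		return -ENOMEM;
	}
	req->bpid_cnt = 2;		/* a PF/VF may request up to 8 */
	req->type = NIX_INTF_TYPE_CPT;

	err = otx2_sync_mbox_msg(&pf->mbox);
	if (!err) {
		rsp = (struct nix_bpids *)
		      otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr);
		if (IS_ERR(rsp))
			err = PTR_ERR(rsp);
		else
			*out = *rsp;	/* bpid_cnt + granted bpids[] */
	}
	mutex_unlock(&pf->mbox.lock);
	return err;
}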
From patchwork Fri May 2 13:19:50 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 887257
From: Tanmay Jagdale
Subject: [net-next PATCH v1 09/15] octeontx2-pf: ipsec: Allocate Ingress SA table
Date: Fri, 2 May 2025 18:49:50 +0530
Message-ID: <20250502132005.611698-10-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>
Every NIX LF has the facility to maintain a contiguous SA table that is used by NIX RX to find the exact SA context pointer associated with a particular flow. Allocate a 128-entry SA table where each entry is of 2048 bytes, which is enough to hold the complete inbound SA context. Add the structure definitions for the SA context (cn10k_rx_sa_s) and the SA bookkeeping information (cn10k_inb_sw_ctx_info). Also, initialize the inb_sw_ctx_list to track all the SAs and their associated NPC rules and hash table related data.

Signed-off-by: Tanmay Jagdale
---
.../marvell/octeontx2/nic/cn10k_ipsec.c | 20 ++++ .../marvell/octeontx2/nic/cn10k_ipsec.h | 93 +++++++++++++++++++ 2 files changed, 113 insertions(+) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c index fc59e50bafce..c435dcae4929 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c @@ -787,6 +787,7 @@ int cn10k_ipsec_init(struct net_device *netdev) { struct otx2_nic *pf = netdev_priv(netdev); u32 sa_size; + int err; if (!is_dev_support_ipsec_offload(pf->pdev)) return 0; @@ -797,6 +798,22 @@ int cn10k_ipsec_init(struct net_device *netdev) OTX2_ALIGN : sizeof(struct cn10k_tx_sa_s); pf->ipsec.sa_size = sa_size; + /* Set sa_tbl_entry_sz to 2048 since we are programming NIX RX + * to calculate SA index as SPI * 2048. The first 1024 bytes + * are used for SA context and the next half for bookkeeping data.
+ */ + pf->ipsec.sa_tbl_entry_sz = 2048; + err = qmem_alloc(pf->dev, &pf->ipsec.inb_sa, CN10K_IPSEC_INB_MAX_SA, + pf->ipsec.sa_tbl_entry_sz); + if (err) + return err; + + memset(pf->ipsec.inb_sa->base, 0, + pf->ipsec.sa_tbl_entry_sz * CN10K_IPSEC_INB_MAX_SA); + + /* List to track all ingress SAs */ + INIT_LIST_HEAD(&pf->ipsec.inb_sw_ctx_list); + INIT_WORK(&pf->ipsec.sa_work, cn10k_ipsec_sa_wq_handler); pf->ipsec.sa_workq = alloc_workqueue("cn10k_ipsec_sa_workq", 0, 0); if (!pf->ipsec.sa_workq) { @@ -828,6 +845,9 @@ void cn10k_ipsec_clean(struct otx2_nic *pf) } cn10k_outb_cpt_clean(pf); + + /* Free Ingress SA table */ + qmem_free(pf->dev, pf->ipsec.inb_sa); } EXPORT_SYMBOL(cn10k_ipsec_clean); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h index 9965df0faa3e..6dd6ead0b28b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h @@ -52,10 +52,14 @@ DECLARE_STATIC_KEY_FALSE(cn10k_ipsec_sa_enabled); #define CN10K_CPT_LF_NQX(a) (CPT_LFBASE | 0x400 | (a) << 3) #define CN10K_CPT_LF_CTX_FLUSH (CPT_LFBASE | 0x510) +/* Inbound SA*/ +#define CN10K_IPSEC_INB_MAX_SA 128 + /* IPSEC Instruction opcodes */ #define CN10K_IPSEC_MAJOR_OP_WRITE_SA 0x01UL #define CN10K_IPSEC_MINOR_OP_WRITE_SA 0x09UL #define CN10K_IPSEC_MAJOR_OP_OUTB_IPSEC 0x2AUL +#define CN10K_IPSEC_MAJOR_OP_INB_IPSEC 0x29UL enum cn10k_cpt_comp_e { CN10K_CPT_COMP_E_NOTDONE = 0x00, @@ -81,6 +85,19 @@ enum cn10k_cpt_hw_state_e { CN10K_CPT_HW_IN_USE }; +struct cn10k_inb_sw_ctx_info { + struct list_head list; + struct cn10k_rx_sa_s *sa_entry; + struct xfrm_state *x_state; + dma_addr_t sa_iova; + u32 npc_mcam_entry; + u32 sa_index; + u32 spi; + u16 hash_index; /* Hash index from SPI_TO_SA match */ + u8 way; /* SPI_TO_SA match table way index */ + bool delete_npc_and_match_entry; +}; + struct cn10k_ipsec { /* Outbound CPT */ u64 io_addr; @@ -92,6 +109,12 @@ struct cn10k_ipsec { u32 outb_sa_count; struct work_struct sa_work; struct workqueue_struct *sa_workq; + + /* For Inbound Inline IPSec flows */ + u32 sa_tbl_entry_sz; + struct qmem *inb_sa; + struct list_head inb_sw_ctx_list; + DECLARE_BITMAP(inb_sa_table, CN10K_IPSEC_INB_MAX_SA); }; /* CN10K IPSEC Security Association (SA) */ @@ -146,6 +169,76 @@ struct cn10k_tx_sa_s { u64 hw_ctx[6]; /* W31 - W36 */ }; +struct cn10k_rx_sa_s { + u64 inb_ar_win_sz : 3; /* W0 */ + u64 hard_life_dec : 1; + u64 soft_life_dec : 1; + u64 count_glb_octets : 1; + u64 count_glb_pkts : 1; + u64 count_mib_bytes : 1; + u64 count_mib_pkts : 1; + u64 hw_ctx_off : 7; + u64 ctx_id : 16; + u64 orig_pkt_fabs : 1; + u64 orig_pkt_free : 1; + u64 pkind : 6; + u64 rsvd_w0_40 : 1; + u64 eth_ovrwr : 1; + u64 pkt_output : 2; + u64 pkt_format : 1; + u64 defrag_opt : 2; + u64 x2p_dst : 1; + u64 ctx_push_size : 7; + u64 rsvd_w0_55 : 1; + u64 ctx_hdr_size : 2; + u64 aop_valid : 1; + u64 rsvd_w0_59 : 1; + u64 ctx_size : 4; + + u64 rsvd_w1_31_0 : 32; /* W1 */ + u64 cookie : 32; + + u64 sa_valid : 1; /* W2 Control Word */ + u64 sa_dir : 1; + u64 rsvd_w2_2_3 : 2; + u64 ipsec_mode : 1; + u64 ipsec_protocol : 1; + u64 aes_key_len : 2; + u64 enc_type : 3; + u64 life_unit : 1; + u64 auth_type : 4; + u64 encap_type : 2; + u64 et_ovrwr_ddr_en : 1; + u64 esn_en : 1; + u64 tport_l4_incr_csum : 1; + u64 iphdr_verify : 2; + u64 udp_ports_verify : 1; + u64 l2_l3_hdr_on_error : 1; + u64 rsvd_w25_31 : 7; + u64 spi : 32; + + u64 w3; /* W3 */ + + u8 cipher_key[32]; /* W4 - W7 */ + u32 rsvd_w8_0_31; /* W8 : 
IV */ + u32 iv_gcm_salt; + u64 rsvd_w9; /* W9 */ + u64 rsvd_w10; /* W10 : UDP Encap */ + u32 dest_ipaddr; /* W11 - Tunnel mode: outer src and dest ipaddr */ + u32 src_ipaddr; + u64 rsvd_w12_w30[19]; /* W12 - W30 */ + + u64 ar_base; /* W31 */ + u64 ar_valid_mask; /* W32 */ + u64 hard_sa_life; /* W33 */ + u64 soft_sa_life; /* W34 */ + u64 mib_octs; /* W35 */ + u64 mib_pkts; /* W36 */ + u64 ar_winbits; /* W37 */ + + u64 rsvd_w38_w100[63]; +}; + /* CPT instruction parameter-1 */ #define CN10K_IPSEC_INST_PARAM1_DIS_L4_CSUM 0x1 #define CN10K_IPSEC_INST_PARAM1_DIS_L3_CSUM 0x2
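Because NIX RX resolves an SA as sa_base + SPI * 2048, the driver side can locate both halves of a table entry with plain stride arithmetic. A small sketch using the structures added above (illustrative only, not part of the series):

/* Hedged sketch: resolve the CPU-side pointer and the DMA address of an
 * inbound SA entry.  The 2048-byte stride and the 128-entry bound follow
 * the qmem allocation in cn10k_ipsec_init() above.
 */
static struct cn10k_rx_sa_s *otx2_inb_sa_ptr(struct otx2_nic *pf, u32 sa_index)
{
	if (sa_index >= CN10K_IPSEC_INB_MAX_SA)
		return NULL;

	return (struct cn10k_rx_sa_s *)((u8 *)pf->ipsec.inb_sa->base +
			(u64)sa_index * pf->ipsec.sa_tbl_entry_sz);
}

static dma_addr_t otx2_inb_sa_iova(struct otx2_nic *pf, u32 sa_index)
{
	/* This is the address cn10k_inb_sw_ctx_info.sa_iova would record */
	return pf->ipsec.inb_sa->iova +
	       (u64)sa_index * pf->ipsec.sa_tbl_entry_sz;
}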
From patchwork Fri May 2 13:19:51 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 886702
From: Tanmay Jagdale
Subject: [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows
Date: Fri, 2 May 2025 18:49:51 +0530
Message-ID: <20250502132005.611698-11-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>

An incoming encrypted IPsec packet in the RVU NIX hardware needs to be classified for inline fastpath processing and then assigned an RQ and Aura pool before being sent to CPT for decryption. Create a dedicated RQ, Aura and Pool with the following setup specifically for IPsec flows:
- Set ipsech_en, ipsecd_drop_en in the RQ context to enable hardware fastpath processing for IPsec flows.
- Configure the dedicated Aura to raise an interrupt when its buffer count drops below a threshold value, so that the buffers can be replenished from the CPU.

The RQ, Aura and Pool contexts are initialized only when the esp-hw-offload feature is enabled via ethtool. Also, move some of the RQ context macro definitions to otx2_common.h so that they can be used in the IPsec driver as well.
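As a worked example of the aura sizing this enables (the numbers mirror cn10k_ipsec_ingress_aura_init() in the diff below with numptrs = 256; the interrupt semantics stated here are our reading of the code, not HRM text):

/* numptrs = 256 buffers in the dedicated pool:
 *   aura.shift     = ilog2(256) - 8 = 0
 *   aura.thresh    = 256 / 4       = 64
 *   aura.thresh_up = 1
 * i.e. the threshold interrupt fires once the consumed-buffer count
 * climbs past 64 (a quarter of the pool), leaving the refill worker
 * headroom before the aura runs dry.
 */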
Signed-off-by: Tanmay Jagdale --- .../marvell/octeontx2/nic/cn10k_ipsec.c | 201 +++++++++++++++++- .../marvell/octeontx2/nic/cn10k_ipsec.h | 2 + .../marvell/octeontx2/nic/otx2_common.c | 23 +- .../marvell/octeontx2/nic/otx2_common.h | 16 ++ .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 4 + 5 files changed, 227 insertions(+), 19 deletions(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c index c435dcae4929..b88c1b4c5839 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c @@ -346,6 +346,193 @@ static int cn10k_outb_cpt_init(struct net_device *netdev) return ret; } +static int cn10k_ipsec_ingress_aura_init(struct otx2_nic *pfvf, int aura_id, + int pool_id, int numptrs) +{ + struct npa_aq_enq_req *aq; + struct otx2_pool *pool; + int err; + + pool = &pfvf->qset.pool[pool_id]; + + /* Allocate memory for HW to update Aura count. + * Alloc one cache line, so that it fits all FC_STYPE modes. + */ + if (!pool->fc_addr) { + err = qmem_alloc(pfvf->dev, &pool->fc_addr, 1, OTX2_ALIGN); + if (err) + return err; + } + + /* Initialize this aura's context via AF */ + aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox); + if (!aq) + return -ENOMEM; + + aq->aura_id = aura_id; + /* Will be filled by AF with correct pool context address */ + aq->aura.pool_addr = pool_id; + aq->aura.pool_caching = 1; + aq->aura.shift = ilog2(numptrs) - 8; + aq->aura.count = numptrs; + aq->aura.limit = numptrs; + aq->aura.avg_level = 255; + aq->aura.ena = 1; + aq->aura.fc_ena = 1; + aq->aura.fc_addr = pool->fc_addr->iova; + aq->aura.fc_hyst_bits = 0; /* Store count on all updates */ + aq->aura.thresh_up = 1; + aq->aura.thresh = aq->aura.count / 4; + aq->aura.thresh_qint_idx = 0; + + /* Enable backpressure for RQ aura */ + if (!is_otx2_lbkvf(pfvf->pdev)) { + aq->aura.bp_ena = 0; + /* If NIX1 LF is attached then specify NIX1_RX. + * + * Below NPA_AURA_S[BP_ENA] is set according to the + * NPA_BPINTF_E enumeration given as: + * 0x0 + a*0x1 where 'a' is 0 for NIX0_RX and 1 for NIX1_RX so + * NIX0_RX is 0x0 + 0*0x1 = 0 + * NIX1_RX is 0x0 + 1*0x1 = 1 + * But in HRM it is given that + * "NPA_AURA_S[BP_ENA](w1[33:32]) - Enable aura backpressure to + * NIX-RX based on [BP] level. One bit per NIX-RX; index + * enumerated by NPA_BPINTF_E." 
+ */ + if (pfvf->nix_blkaddr == BLKADDR_NIX1) + aq->aura.bp_ena = 1; +#ifdef CONFIG_DCB + aq->aura.nix0_bpid = pfvf->bpid[pfvf->queue_to_pfc_map[aura_id]]; +#else + aq->aura.nix0_bpid = pfvf->bpid[0]; +#endif + + /* Set backpressure level for RQ's Aura */ + aq->aura.bp = RQ_BP_LVL_AURA; + } + + /* Fill AQ info */ + aq->ctype = NPA_AQ_CTYPE_AURA; + aq->op = NPA_AQ_INSTOP_INIT; + + return otx2_sync_mbox_msg(&pfvf->mbox); +} + +static int cn10k_ipsec_ingress_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura) +{ + struct otx2_qset *qset = &pfvf->qset; + struct nix_cn10k_aq_enq_req *aq; + + /* Get memory to put this msg */ + aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox); + if (!aq) + return -ENOMEM; + + aq->rq.cq = qidx; + aq->rq.ena = 1; + aq->rq.pb_caching = 1; + aq->rq.lpb_aura = lpb_aura; /* Use large packet buffer aura */ + aq->rq.lpb_sizem1 = (DMA_BUFFER_LEN(pfvf->rbsize) / 8) - 1; + aq->rq.xqe_imm_size = 0; /* Copying of packet to CQE not needed */ + aq->rq.flow_tagw = 32; /* Copy full 32bit flow_tag to CQE header */ + aq->rq.qint_idx = 0; + aq->rq.lpb_drop_ena = 1; /* Enable RED dropping for AURA */ + aq->rq.xqe_drop_ena = 1; /* Enable RED dropping for CQ/SSO */ + aq->rq.xqe_pass = RQ_PASS_LVL_CQ(pfvf->hw.rq_skid, qset->rqe_cnt); + aq->rq.xqe_drop = RQ_DROP_LVL_CQ(pfvf->hw.rq_skid, qset->rqe_cnt); + aq->rq.lpb_aura_pass = RQ_PASS_LVL_AURA; + aq->rq.lpb_aura_drop = RQ_DROP_LVL_AURA; + aq->rq.ipsech_ena = 1; /* IPsec HW fast path enable */ + aq->rq.ipsecd_drop_ena = 1; /* IPsec dynamic drop enable */ + aq->rq.xqe_drop_ena = 0; + aq->rq.ena_wqwd = 1; /* Store NIX headers in packet buffer */ + aq->rq.first_skip = 16; /* Store packet after skipping 16*8 bytes + * to accommodate NIX headers. + */ + + /* Fill AQ info */ + aq->qidx = qidx; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_INIT; + + return otx2_sync_mbox_msg(&pfvf->mbox); +} + +static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf) +{ + struct otx2_hw *hw = &pfvf->hw; + int stack_pages, pool_id; + struct otx2_pool *pool; + int err, ptr, num_ptrs; + dma_addr_t bufptr; + + num_ptrs = 256; + pool_id = pfvf->ipsec.inb_ipsec_pool; + stack_pages = (num_ptrs + hw->stack_pg_ptrs - 1) / hw->stack_pg_ptrs; + + mutex_lock(&pfvf->mbox.lock); + + /* Initialize aura context */ + err = cn10k_ipsec_ingress_aura_init(pfvf, pool_id, pool_id, num_ptrs); + if (err) + goto fail; + + /* Initialize pool */ + err = otx2_pool_init(pfvf, pool_id, stack_pages, num_ptrs, pfvf->rbsize, AURA_NIX_RQ); + if (err) + goto fail; + + /* Flush accumulated messages */ + err = otx2_sync_mbox_msg(&pfvf->mbox); + if (err) + goto pool_fail; + + /* Allocate pointers and free them to aura/pool */ + pool = &pfvf->qset.pool[pool_id]; + for (ptr = 0; ptr < num_ptrs; ptr++) { + err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr); + if (err) { + err = -ENOMEM; + goto pool_fail; + } + pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr + OTX2_HEAD_ROOM); + } + + /* Initialize RQ and map buffers from pool_id */ + err = cn10k_ipsec_ingress_rq_init(pfvf, pfvf->ipsec.inb_ipsec_rq, pool_id); + if (err) + goto pool_fail; + + mutex_unlock(&pfvf->mbox.lock); + return 0; + +pool_fail: + mutex_unlock(&pfvf->mbox.lock); + qmem_free(pfvf->dev, pool->stack); + qmem_free(pfvf->dev, pool->fc_addr); + page_pool_destroy(pool->page_pool); + devm_kfree(pfvf->dev, pool->xdp); + pool->xsk_pool = NULL; +fail: + otx2_mbox_reset(&pfvf->mbox.mbox, 0); + return err; +} + +static int cn10k_inb_cpt_init(struct net_device *netdev) +{ + struct otx2_nic *pfvf = 
netdev_priv(netdev); + int ret = 0; + + ret = cn10k_ipsec_setup_nix_rx_hw_resources(pfvf); + if (ret) { + netdev_err(netdev, "Failed to setup NIX HW resources for IPsec\n"); + return ret; + } + + return ret; +} + static int cn10k_outb_cpt_clean(struct otx2_nic *pf) { int ret; @@ -765,14 +952,22 @@ static void cn10k_ipsec_sa_wq_handler(struct work_struct *work) int cn10k_ipsec_ethtool_init(struct net_device *netdev, bool enable) { struct otx2_nic *pf = netdev_priv(netdev); + int ret = 0; /* IPsec offload supported on cn10k */ if (!is_dev_support_ipsec_offload(pf->pdev)) return -EOPNOTSUPP; - /* Initialize CPT for outbound ipsec offload */ - if (enable) - return cn10k_outb_cpt_init(netdev); + /* Initialize CPT for outbound and inbound IPsec offload */ + if (enable) { + ret = cn10k_outb_cpt_init(netdev); + if (ret) + return ret; + + ret = cn10k_inb_cpt_init(netdev); + if (ret) + return ret; + } /* Don't do CPT cleanup if SA installed */ if (pf->ipsec.outb_sa_count) { diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h index 6dd6ead0b28b..5b7b8f3db913 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h @@ -111,6 +111,8 @@ struct cn10k_ipsec { struct workqueue_struct *sa_workq; /* For Inbound Inline IPSec flows */ + u16 inb_ipsec_rq; + u16 inb_ipsec_pool; u32 sa_tbl_entry_sz; struct qmem *inb_sa; struct list_head inb_sw_ctx_list; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 84cd029a85aa..c077e5ae346f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -877,22 +877,6 @@ void otx2_sqb_flush(struct otx2_nic *pfvf) } } -/* RED and drop levels of CQ on packet reception. - * For CQ level is measure of emptiness ( 0x0 = full, 255 = empty). - */ -#define RQ_PASS_LVL_CQ(skid, qsize) ((((skid) + 16) * 256) / (qsize)) -#define RQ_DROP_LVL_CQ(skid, qsize) (((skid) * 256) / (qsize)) - -/* RED and drop levels of AURA for packet reception. - * For AURA level is measure of fullness (0x0 = empty, 255 = full). - * Eg: For RQ length 1K, for pass/drop level 204/230. - * RED accepts pkts if free pointers > 102 & <= 205. - * Drops pkts if free pointers < 102. - */ -#define RQ_BP_LVL_AURA (255 - ((85 * 256) / 100)) /* BP when 85% is full */ -#define RQ_PASS_LVL_AURA (255 - ((95 * 256) / 100)) /* RED when 95% is full */ -#define RQ_DROP_LVL_AURA (255 - ((99 * 256) / 100)) /* Drop when 99% is full */ - int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura) { struct otx2_qset *qset = &pfvf->qset; @@ -1242,6 +1226,13 @@ int otx2_config_nix(struct otx2_nic *pfvf) nixlf->rss_sz = MAX_RSS_INDIR_TBL_SIZE; nixlf->rss_grps = MAX_RSS_GROUPS; nixlf->xqe_sz = pfvf->hw.xqe_size == 128 ? NIX_XQESZ_W16 : NIX_XQESZ_W64; + /* Add an additional RQ for inline inbound IPsec flows + * and store the RQ index for setting it up later when + * IPsec offload is enabled via ethtool. + */ + nixlf->rq_cnt++; + pfvf->ipsec.inb_ipsec_rq = pfvf->hw.rx_queues; + /* We don't know absolute NPA LF idx attached. * AF will replace 'RVU_DEFAULT_PF_FUNC' with * NPA LF attached to this RVU PF/VF. 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index 7e3ddb0bee12..b5b87b5553ea 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -76,6 +76,22 @@ enum arua_mapped_qtypes { /* Send skid of 2000 packets required for CQ size of 4K CQEs. */ #define SEND_CQ_SKID 2000 +/* RED and drop levels of CQ on packet reception. + * For CQ level is measure of emptiness ( 0x0 = full, 255 = empty). + */ +#define RQ_PASS_LVL_CQ(skid, qsize) ((((skid) + 16) * 256) / (qsize)) +#define RQ_DROP_LVL_CQ(skid, qsize) (((skid) * 256) / (qsize)) + +/* RED and drop levels of AURA for packet reception. + * For AURA level is measure of fullness (0x0 = empty, 255 = full). + * Eg: For RQ length 1K, for pass/drop level 204/230. + * RED accepts pkts if free pointers > 102 & <= 205. + * Drops pkts if free pointers < 102. + */ +#define RQ_BP_LVL_AURA (255 - ((85 * 256) / 100)) /* BP when 85% is full */ +#define RQ_PASS_LVL_AURA (255 - ((95 * 256) / 100)) /* RED when 95% is full */ +#define RQ_DROP_LVL_AURA (255 - ((99 * 256) / 100)) /* Drop when 99% is full */ + #define OTX2_GET_RX_STATS(reg) \ otx2_read64(pfvf, NIX_LF_RX_STATX(reg)) #define OTX2_GET_TX_STATS(reg) \ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 0aee8e3861f3..8f1c17fa5a0b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -1538,6 +1538,10 @@ int otx2_init_hw_resources(struct otx2_nic *pf) hw->sqpool_cnt = otx2_get_total_tx_queues(pf); hw->pool_cnt = hw->rqpool_cnt + hw->sqpool_cnt; + /* Increase pool count by 1 for ingress inline IPsec */ + pf->ipsec.inb_ipsec_pool = hw->pool_cnt; + hw->pool_cnt++; + if (!otx2_rep_dev(pf->pdev)) { /* Maximum hardware supported transmit length */ pf->tx_max_pktlen = pf->netdev->max_mtu + OTX2_ETH_HLEN;
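For reference, the AURA level macros moved here evaluate to the following 8-bit codes (worked arithmetic, illustration only):

/* The AURA RED levels are 0-255 codes derived from percentages:
 *   RQ_BP_LVL_AURA   = 255 - (85 * 256) / 100 = 255 - 217 = 38
 *   RQ_PASS_LVL_AURA = 255 - (95 * 256) / 100 = 255 - 243 = 12
 *   RQ_DROP_LVL_AURA = 255 - (99 * 256) / 100 = 255 - 253 = 2
 * so, per the comments above, backpressure asserts first (85% full),
 * RED starts accepting/dropping probabilistically at 95%, and hard
 * drop begins at 99%.
 */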
From patchwork Fri May 2 13:19:52 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 887256
From: Tanmay Jagdale
Subject: [net-next PATCH v1 11/15] octeontx2-pf: ipsec: Handle NPA threshold interrupt
Date: Fri, 2 May 2025 18:49:52 +0530
Message-ID: <20250502132005.611698-12-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>
The NPA Aura pool dedicated to 1st pass inline IPsec flows raises an interrupt when the buffers of that aura_id drop below a threshold value. Add the following changes to handle this interrupt:
- Increase the number of MSIX vectors requested for the PF/VF to include the NPA vector.
- Create a workqueue (refill_npa_inline_ipsecq) to allocate and refill buffers to the pool.
- When the interrupt is raised, schedule the workqueue entry, cn10k_ipsec_npa_refill_inb_ipsecq(), where the current count of consumed buffers is determined via NPA_LF_AURA_OP_CNT and then replenished.

Signed-off-by: Tanmay Jagdale
---
.../marvell/octeontx2/nic/cn10k_ipsec.c | 102 +++++++++++++++++- .../marvell/octeontx2/nic/cn10k_ipsec.h | 1 + .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 4 + .../ethernet/marvell/octeontx2/nic/otx2_vf.c | 4 + 4 files changed, 110 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c index b88c1b4c5839..365327ab9079 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c @@ -519,10 +519,77 @@ static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf) return err; } +static void cn10k_ipsec_npa_refill_inb_ipsecq(struct work_struct *work) +{ + struct cn10k_ipsec *ipsec = container_of(work, struct cn10k_ipsec, + refill_npa_inline_ipsecq); + struct otx2_nic *pfvf = container_of(ipsec, struct otx2_nic, ipsec); + struct otx2_pool *pool = NULL; + struct otx2_qset *qset = NULL; + u64 val, *ptr, op_int = 0, count; + int err, pool_id, idx; + dma_addr_t bufptr; + + qset = &pfvf->qset; + + val = otx2_read64(pfvf, NPA_LF_QINTX_INT(0)); + if (!(val & 1)) + return; + + ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_INT); + val = otx2_atomic64_add(((u64)pfvf->ipsec.inb_ipsec_pool << 44), ptr); + + /* Error interrupt bits */ + if (val & 0xff) + op_int = (val & 0xff); + + /* Refill buffers on a Threshold interrupt */ + if (val & (1 << 16)) { + /* Get the current number of buffers consumed */ + ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_CNT); + count = otx2_atomic64_add(((u64)pfvf->ipsec.inb_ipsec_pool << 44), ptr); + count &= GENMASK_ULL(35, 0); + + /* Refill */ + pool_id = pfvf->ipsec.inb_ipsec_pool; + pool = &pfvf->qset.pool[pool_id]; + + for (idx = 0; idx < count; idx++) { + err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, idx); + if (err) { + netdev_err(pfvf->netdev, + "Insufficient memory for IPsec pool buffers\n"); + break; + } + pfvf->hw_ops->aura_freeptr(pfvf, pool_id, + bufptr + OTX2_HEAD_ROOM); + } + + op_int |= (1 << 16); + } + + /* Clear/ACK Interrupt */ + if (op_int) + otx2_write64(pfvf, NPA_LF_AURA_OP_INT, + ((u64)pfvf->ipsec.inb_ipsec_pool << 44) | op_int); +} + +static irqreturn_t cn10k_ipsec_npa_inb_ipsecq_intr_handler(int irq, void *data) +{ + struct otx2_nic *pf = data; + + schedule_work(&pf->ipsec.refill_npa_inline_ipsecq); + + return IRQ_HANDLED; +} + static int cn10k_inb_cpt_init(struct net_device *netdev) { struct otx2_nic *pfvf = netdev_priv(netdev); - int ret = 0; + int ret = 0, vec; + char *irq_name; + void *ptr; + u64 val; ret = cn10k_ipsec_setup_nix_rx_hw_resources(pfvf); if (ret) { @@ -530,6 +597,34 @@ static int cn10k_inb_cpt_init(struct net_device *netdev) return ret; } + /* Work entry for refilling the NPA queue for ingress inline IPSec */ + INIT_WORK(&pfvf->ipsec.refill_npa_inline_ipsecq, + cn10k_ipsec_npa_refill_inb_ipsecq); + + /* Register NPA interrupt */ + vec = pfvf->hw.npa_msixoff; + irq_name = 
&pfvf->hw.irq_name[vec * NAME_SIZE]; + snprintf(irq_name, NAME_SIZE, "%s-npa-qint", pfvf->netdev->name); + + ret = request_irq(pci_irq_vector(pfvf->pdev, vec), + cn10k_ipsec_npa_inb_ipsecq_intr_handler, 0, + irq_name, pfvf); + if (ret) { + dev_err(pfvf->dev, + "RVUPF%d: IRQ registration failed for NPA QINT%d\n", + rvu_get_pf(pfvf->pcifunc), 0); + return ret; + } + + /* Enable NPA threshold interrupt */ + ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_INT); + val = BIT_ULL(43) | BIT_ULL(17); + otx2_write64(pfvf, NPA_LF_AURA_OP_INT, + ((u64)pfvf->ipsec.inb_ipsec_pool << 44) | val); + + /* Enable interrupt */ + otx2_write64(pfvf, NPA_LF_QINTX_ENA_W1S(0), BIT_ULL(0)); + return ret; } @@ -1028,6 +1123,8 @@ EXPORT_SYMBOL(cn10k_ipsec_init); void cn10k_ipsec_clean(struct otx2_nic *pf) { + int vec; + if (!is_dev_support_ipsec_offload(pf->pdev)) return; @@ -1043,6 +1140,9 @@ void cn10k_ipsec_clean(struct otx2_nic *pf) /* Free Ingress SA table */ qmem_free(pf->dev, pf->ipsec.inb_sa); + + vec = pci_irq_vector(pf->pdev, pf->hw.npa_msixoff); + free_irq(vec, pf); } EXPORT_SYMBOL(cn10k_ipsec_clean); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h index 5b7b8f3db913..30d5812d52ad 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h @@ -117,6 +117,7 @@ struct cn10k_ipsec { struct qmem *inb_sa; struct list_head inb_sw_ctx_list; DECLARE_BITMAP(inb_sa_table, CN10K_IPSEC_INB_MAX_SA); + struct work_struct refill_npa_inline_ipsecq; }; /* CN10K IPSEC Security Association (SA) */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 8f1c17fa5a0b..0ffc56efcc23 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -2909,6 +2909,10 @@ int otx2_realloc_msix_vectors(struct otx2_nic *pf) num_vec = hw->nix_msixoff; num_vec += NIX_LF_CINT_VEC_START + hw->max_queues; + /* Update number of vectors to include NPA */ + if (hw->nix_msixoff < hw->npa_msixoff) + num_vec = hw->npa_msixoff + 1; + otx2_disable_mbox_intr(pf); pci_free_irq_vectors(hw->pdev); err = pci_alloc_irq_vectors(hw->pdev, num_vec, num_vec, PCI_IRQ_MSIX); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c index fb4da816d218..0b0f8a94ca41 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c @@ -521,6 +521,10 @@ static int otx2vf_realloc_msix_vectors(struct otx2_nic *vf) num_vec = hw->nix_msixoff; num_vec += NIX_LF_CINT_VEC_START + hw->max_queues; + /* Update number of vectors to include NPA */ + if (hw->nix_msixoff < hw->npa_msixoff) + num_vec = hw->npa_msixoff + 1; + otx2vf_disable_mbox_intr(vf); pci_free_irq_vectors(hw->pdev); err = pci_alloc_irq_vectors(hw->pdev, num_vec, num_vec, PCI_IRQ_MSIX);
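The refill handler above leans on the NPA per-aura atomic register idiom; factored out, it looks like this (a sketch, assuming the existing otx2_get_regaddr()/otx2_atomic64_add() helpers):

/* Hedged sketch: read a per-aura NPA OP register.  The aura index is
 * folded into bits [63:44] of the operand of the atomic add; with the
 * low increment bits zero, the operation simply returns the register
 * contents for the selected aura, the same idiom as the
 * NPA_LF_AURA_OP_CNT read in the refill handler above.
 */
static u64 otx2_npa_aura_op_read(struct otx2_nic *pfvf, int aura, u64 op_reg)
{
	u64 *ptr = otx2_get_regaddr(pfvf, op_reg);

	return otx2_atomic64_add((u64)aura << 44, ptr);
}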
From patchwork Fri May 2 13:19:53 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 886701
From: Tanmay Jagdale
Subject: [net-next PATCH v1 12/15] octeontx2-pf: ipsec: Initialize ingress IPsec
Date: Fri, 2 May 2025 18:49:53 +0530
Message-ID: <20250502132005.611698-13-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>
Initialize ingress inline IPsec offload when the ESP offload feature is enabled via ethtool. As part of initialization, the following mailboxes must be invoked to configure inline IPsec:

NIX_INLINE_IPSEC_LF_CFG - Every NIX LF has the provision to maintain a contiguous SA table. This mailbox configures the SA table base address, the size of each SA, and the maximum number of entries in the table. Currently, we support a 128-entry table with each SA of size 1024 bytes.

NIX_LF_INLINE_RQ_CFG - Post decryption, CPT sends a metapacket of 256 bytes which has enough packet headers to help NIX RX classify it. However, since the packet is not complete, we cannot perform checksum and packet length verification. Hence, configure the RQ context to disable L3/L4 checksum and length verification for packets coming from CPT.

NIX_INLINE_IPSEC_CFG - RVU hardware supports one common CPT LF for inbound ingress IPsec flows. This CPT LF is configured via this mailbox, and this is a one-time, system-wide configuration.

NIX_ALLOC_BPID - Configure backpressure between the NIX and CPT blocks by allocating a backpressure ID using this mailbox for ingress inline IPsec flows.

NIX_FREE_BPID - Free this BPID when ESP offload is disabled via ethtool.
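For orientation, the initialization order wired up by the diff below can be condensed into the following sketch. The wrapper name inb_ipsec_init_sketch() is illustrative only; the four helpers are the ones this patch adds, and error unwinding plus the NPA queue and interrupt setup are omitted:

static int inb_ipsec_init_sketch(struct otx2_nic *pfvf)
{
	int ret;

	/* NIX_INLINE_IPSEC_LF_CFG: SA table base address, per-SA size
	 * and max number of entries for this NIX LF
	 */
	ret = cn10k_inb_nix_inline_lf_cfg(pfvf);
	if (ret)
		return ret;

	/* NIX_LF_INLINE_RQ_CFG: disable L3/L4 length and checksum
	 * verification for 256-byte CPT metapackets
	 */
	ret = cn10k_inb_nix_inline_lf_rq_cfg(pfvf);
	if (ret)
		return ret;

	/* NIX_INLINE_IPSEC_CFG: one-time, system-wide CPT LF setup;
	 * -EEXIST just means another PF already performed it
	 */
	ret = cn10k_inb_nix_inline_ipsec_cfg(pfvf);
	if (ret && ret != -EEXIST)
		return ret;

	/* NIX_ALLOC_BPID: backpressure from CPT back towards NIX */
	return cn10k_ipsec_configure_cpt_bpid(pfvf);
}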
Signed-off-by: Tanmay Jagdale --- .../marvell/octeontx2/nic/cn10k_ipsec.c | 167 ++++++++++++++++++ .../marvell/octeontx2/nic/cn10k_ipsec.h | 2 + 2 files changed, 169 insertions(+) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c index 365327ab9079..c6f408007511 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c @@ -346,6 +346,97 @@ static int cn10k_outb_cpt_init(struct net_device *netdev) return ret; } +static int cn10k_inb_nix_inline_lf_cfg(struct otx2_nic *pfvf) +{ + struct nix_inline_ipsec_lf_cfg *req; + int ret = 0; + + mutex_lock(&pfvf->mbox.lock); + req = otx2_mbox_alloc_msg_nix_inline_ipsec_lf_cfg(&pfvf->mbox); + if (!req) { + ret = -ENOMEM; + goto error; + } + + req->sa_base_addr = pfvf->ipsec.inb_sa->iova; + req->ipsec_cfg0.tag_const = 0; + req->ipsec_cfg0.tt = 0; + req->ipsec_cfg0.lenm1_max = 11872; /* (Max packet size - 128 (first skip)) */ + req->ipsec_cfg0.sa_pow2_size = 0xb; /* 2048 */ + req->ipsec_cfg1.sa_idx_max = CN10K_IPSEC_INB_MAX_SA - 1; + req->ipsec_cfg1.sa_idx_w = 0x7; + req->enable = 1; + + ret = otx2_sync_mbox_msg(&pfvf->mbox); +error: + mutex_unlock(&pfvf->mbox.lock); + return ret; +} + +static int cn10k_inb_nix_inline_lf_rq_cfg(struct otx2_nic *pfvf) +{ + struct nix_rq_cpt_field_mask_cfg_req *req; + int ret = 0, i; + + mutex_lock(&pfvf->mbox.lock); + req = otx2_mbox_alloc_msg_nix_lf_inline_rq_cfg(&pfvf->mbox); + if (!req) { + ret = -ENOMEM; + goto error; + } + + for (i = 0; i < RQ_CTX_MASK_MAX; i++) + req->rq_ctx_word_mask[i] = 0xffffffffffffffff; + + req->rq_set.len_ol3_dis = 1; + req->rq_set.len_ol4_dis = 1; + req->rq_set.len_il3_dis = 1; + + req->rq_set.len_il4_dis = 1; + req->rq_set.csum_ol4_dis = 1; + req->rq_set.csum_il4_dis = 1; + + req->rq_set.lenerr_dis = 1; + req->rq_set.port_ol4_dis = 1; + req->rq_set.port_il4_dis = 1; + + req->ipsec_cfg1.rq_mask_enable = 1; + req->ipsec_cfg1.spb_cpt_enable = 0; + + ret = otx2_sync_mbox_msg(&pfvf->mbox); +error: + mutex_unlock(&pfvf->mbox.lock); + return ret; +} + +static int cn10k_inb_nix_inline_ipsec_cfg(struct otx2_nic *pfvf) +{ + struct cpt_rx_inline_lf_cfg_msg *req; + int ret = 0; + + mutex_lock(&pfvf->mbox.lock); + req = otx2_mbox_alloc_msg_cpt_rx_inline_lf_cfg(&pfvf->mbox); + if (!req) { + ret = -ENOMEM; + goto error; + } + + req->sso_pf_func = 0; + req->opcode = CN10K_IPSEC_MAJOR_OP_INB_IPSEC | (1 << 6); + req->param1 = 7; /* bit 0:ip_csum_dis 1:tcp_csum_dis 2:esp_trailer_dis */ + req->param2 = 0; + req->bpid = pfvf->ipsec.bpid; + req->credit = 8160; + req->credit_th = 100; + req->ctx_ilen_valid = 1; + req->ctx_ilen = 5; + + ret = otx2_sync_mbox_msg(&pfvf->mbox); +error: + mutex_unlock(&pfvf->mbox.lock); + return ret; +} + static int cn10k_ipsec_ingress_aura_init(struct otx2_nic *pfvf, int aura_id, int pool_id, int numptrs) { @@ -625,6 +716,28 @@ static int cn10k_inb_cpt_init(struct net_device *netdev) /* Enable interrupt */ otx2_write64(pfvf, NPA_LF_QINTX_ENA_W1S(0), BIT_ULL(0)); + /* Enable inbound inline IPSec in NIX LF */ + ret = cn10k_inb_nix_inline_lf_cfg(pfvf); + if (ret) { + netdev_err(netdev, "Error configuring NIX for Inline IPSec\n"); + goto out; + } + + /* IPsec specific RQ settings in NIX LF */ + ret = cn10k_inb_nix_inline_lf_rq_cfg(pfvf); + if (ret) { + netdev_err(netdev, "Error configuring NIX for Inline IPSec\n"); + goto out; + } + + /* One-time configuration to enable CPT LF for inline inbound IPSec */ + ret = 
cn10k_inb_nix_inline_ipsec_cfg(pfvf); + if (ret && ret != -EEXIST) + netdev_err(netdev, "CPT LF configuration error\n"); + else + ret = 0; + +out: return ret; } @@ -1044,6 +1157,53 @@ static void cn10k_ipsec_sa_wq_handler(struct work_struct *work) rtnl_unlock(); } +static int cn10k_ipsec_configure_cpt_bpid(struct otx2_nic *pfvf) +{ + struct nix_alloc_bpid_req *req; + struct nix_bpids *rsp; + int rc; + + req = otx2_mbox_alloc_msg_nix_alloc_bpids(&pfvf->mbox); + if (!req) + return -ENOMEM; + req->bpid_cnt = 1; + req->type = NIX_INTF_TYPE_CPT; + + rc = otx2_sync_mbox_msg(&pfvf->mbox); + if (rc) + return rc; + + rsp = (struct nix_bpids *)otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) + return PTR_ERR(rsp); + + /* Store the bpid for configuring it in the future */ + pfvf->ipsec.bpid = rsp->bpids[0]; + + return 0; +} + +static int cn10k_ipsec_free_cpt_bpid(struct otx2_nic *pfvf) +{ + struct nix_bpids *req; + int rc; + + req = otx2_mbox_alloc_msg_nix_free_bpids(&pfvf->mbox); + if (!req) + return -ENOMEM; + + req->bpid_cnt = 1; + req->bpids[0] = pfvf->ipsec.bpid; + + rc = otx2_sync_mbox_msg(&pfvf->mbox); + if (rc) + return rc; + + /* Clear the bpid */ + pfvf->ipsec.bpid = 0; + return 0; +} + int cn10k_ipsec_ethtool_init(struct net_device *netdev, bool enable) { struct otx2_nic *pf = netdev_priv(netdev); @@ -1062,6 +1222,10 @@ int cn10k_ipsec_ethtool_init(struct net_device *netdev, bool enable) ret = cn10k_inb_cpt_init(netdev); if (ret) return ret; + + /* Configure NIX <-> CPT backpressure */ + ret = cn10k_ipsec_configure_cpt_bpid(pf); + return ret; } /* Don't do CPT cleanup if SA installed */ @@ -1070,6 +1234,7 @@ int cn10k_ipsec_ethtool_init(struct net_device *netdev, bool enable) return -EBUSY; } + cn10k_ipsec_free_cpt_bpid(pf); return cn10k_outb_cpt_clean(pf); } @@ -1143,6 +1308,8 @@ void cn10k_ipsec_clean(struct otx2_nic *pf) vec = pci_irq_vector(pf->pdev, pf->hw.npa_msixoff); free_irq(vec, pf); + + cn10k_ipsec_free_cpt_bpid(pf); } EXPORT_SYMBOL(cn10k_ipsec_clean); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h index 30d5812d52ad..f042cbadf054 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h @@ -104,6 +104,8 @@ struct cn10k_ipsec { atomic_t cpt_state; struct cn10k_cpt_inst_queue iq; + u32 bpid; /* Backpressure ID for NIX <-> CPT */ + /* SA info */ u32 sa_size; u32 outb_sa_count;
From patchwork Fri May 2 13:19:54 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 887255
From: Tanmay Jagdale
Subject: [net-next PATCH v1 13/15] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries
Date: Fri, 2 May 2025 18:49:54 +0530
Message-ID: <20250502132005.611698-14-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>
References: <20250502132005.611698-1-tanmay@marvell.com>

NPC rule for IPsec flows
------------------------
Incoming IPsec packets are first classified for hardware fastpath processing in the NPC block. Hence, allocate an MCAM entry in NPC using the MCAM_ALLOC_ENTRY mailbox to add a rule for IPsec flow classification. Then, install an NPC rule at this entry for packet classification based on the ESP header and SPI value, with the match action set to UCAST_IPSEC. Also, these packets need to be directed to the dedicated receive queue, so provide the RQ index as part of the NPC_INSTALL_FLOW mailbox. Add a function to delete the NPC rule as well.

SPI-to-SA match table
---------------------
NIX RX maintains a common hash table for matching the SPI value in an ESP packet to the SA index associated with it. This table has 2K entries with 4 ways. When a packet is received with the action UCAST_IPSEC, NIX RX uses the SPI from the packet header to perform a lookup in the SPI-to-SA hash table. This lookup, if successful, returns an SA index that is used by NIX RX to calculate the exact SA context address and program it in the CPT_INST_S before submitting the packet to CPT for decryption. Add functions to install and delete an entry from this table via the NIX_SPI_TO_SA_ADD and NIX_SPI_TO_SA_DELETE mailbox calls respectively.

When the RQs are changed at runtime via ethtool, the RVU PF driver frees all the resources and goes through reinitialization with the new set of receive queues. As part of this flow, the UCAST_IPSEC NPC rules that were installed by the RVU PF/VF driver have to be reconfigured with the new RQ index. So, delete the NPC rules when the interface is stopped via otx2_stop(). When otx2_open() is called, re-install the NPC flow and re-initialize the SPI-to-SA table for every SA context that was previously installed.
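The SPI-to-SA table itself lives in hardware, and the driver only reaches it through the add/delete mailboxes, which hand back the {hash_index, way} coordinates that must be remembered in order to delete the entry later. Below is a minimal stand-alone C model of that contract; the hash function here is an arbitrary stand-in, since the real hash is internal to NIX RX and not described by this patch:

#include <stdint.h>

#define SPI_TBL_WAYS	4
#define SPI_TBL_SETS	(2048 / SPI_TBL_WAYS)	/* 2K entries, 4 ways */

struct spi_to_sa_entry {
	uint32_t spi;
	uint32_t sa_index;
	int valid;
};

static struct spi_to_sa_entry tbl[SPI_TBL_SETS][SPI_TBL_WAYS];

/* Stand-in hash; the real function is internal to NIX RX hardware */
static uint32_t spi_hash(uint32_t spi)
{
	return spi % SPI_TBL_SETS;
}

/* Models NIX_SPI_TO_SA_ADD: on success fills {hash_index, way} */
static int spi_to_sa_add(uint32_t spi, uint32_t sa_index,
			 uint32_t *hash_index, uint32_t *way)
{
	uint32_t set = spi_hash(spi);

	for (uint32_t w = 0; w < SPI_TBL_WAYS; w++) {
		if (!tbl[set][w].valid) {
			tbl[set][w].spi = spi;
			tbl[set][w].sa_index = sa_index;
			tbl[set][w].valid = 1;
			*hash_index = set;
			*way = w;
			return 0;
		}
	}
	return -1;	/* all 4 ways of this set are occupied */
}

/* Models NIX_SPI_TO_SA_DELETE: addressed by {hash_index, way} */
static void spi_to_sa_delete(uint32_t hash_index, uint32_t way)
{
	tbl[hash_index][way].valid = 0;
}

/* Models the per-packet RX lookup: SPI -> SA index */
static int spi_to_sa_lookup(uint32_t spi, uint32_t *sa_index)
{
	uint32_t set = spi_hash(spi);

	for (uint32_t w = 0; w < SPI_TBL_WAYS; w++) {
		if (tbl[set][w].valid && tbl[set][w].spi == spi) {
			*sa_index = tbl[set][w].sa_index;
			return 0;
		}
	}
	return -1;	/* miss: packet cannot be mapped to an SA */
}

This mirrors why the driver stores rsp->hash_index and rsp->way in its per-SA bookkeeping structure after NIX_SPI_TO_SA_ADD: the delete mailbox addresses the entry by table coordinates rather than by SPI.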
Signed-off-by: Tanmay Jagdale --- .../marvell/octeontx2/nic/cn10k_ipsec.c | 201 ++++++++++++++++++ .../marvell/octeontx2/nic/cn10k_ipsec.h | 7 + .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 9 + 3 files changed, 217 insertions(+) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c index c6f408007511..91c8f13b6e48 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c @@ -346,6 +346,194 @@ static int cn10k_outb_cpt_init(struct net_device *netdev) return ret; } +static int cn10k_inb_alloc_mcam_entry(struct otx2_nic *pfvf, + struct cn10k_inb_sw_ctx_info *inb_ctx_info) +{ + struct otx2_flow_config *flow_cfg = pfvf->flow_cfg; + struct npc_mcam_alloc_entry_req *mcam_req; + struct npc_mcam_alloc_entry_rsp *mcam_rsp; + int err = 0; + + if (!pfvf->flow_cfg || !flow_cfg->flow_ent) + return -ENODEV; + + mutex_lock(&pfvf->mbox.lock); + + /* Request an MCAM entry to install UCAST_IPSEC rule */ + mcam_req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(&pfvf->mbox); + if (!mcam_req) { + err = -ENOMEM; + goto out; + } + + mcam_req->contig = false; + mcam_req->count = 1; + mcam_req->ref_entry = flow_cfg->flow_ent[0]; + mcam_req->priority = NPC_MCAM_HIGHER_PRIO; + + if (otx2_sync_mbox_msg(&pfvf->mbox)) { + err = -ENODEV; + goto out; + } + + mcam_rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp(&pfvf->mbox.mbox, + 0, &mcam_req->hdr); + + /* Store NPC MCAM entry for bookkeeping */ + inb_ctx_info->npc_mcam_entry = mcam_rsp->entry_list[0]; + +out: + mutex_unlock(&pfvf->mbox.lock); + return err; +} + +static int cn10k_inb_install_flow(struct otx2_nic *pfvf, struct xfrm_state *x, + struct cn10k_inb_sw_ctx_info *inb_ctx_info) +{ + struct npc_install_flow_req *req; + int err; + + mutex_lock(&pfvf->mbox.lock); + + req = otx2_mbox_alloc_msg_npc_install_flow(&pfvf->mbox); + if (!req) { + err = -ENOMEM; + goto out; + } + + req->entry = inb_ctx_info->npc_mcam_entry; + req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI) | BIT(NPC_DMAC); + req->intf = NIX_INTF_RX; + req->index = pfvf->ipsec.inb_ipsec_rq; + req->match_id = 0xfeed; + req->channel = pfvf->hw.rx_chan_base; + req->op = NIX_RX_ACTIONOP_UCAST_IPSEC; + req->set_cntr = 1; + req->packet.spi = x->id.spi; + req->mask.spi = 0xffffffff; + + /* Send message to AF */ + err = otx2_sync_mbox_msg(&pfvf->mbox); +out: + mutex_unlock(&pfvf->mbox.lock); + return err; +} + +static int cn10k_inb_delete_flow(struct otx2_nic *pfvf, + struct cn10k_inb_sw_ctx_info *inb_ctx_info) +{ + struct npc_delete_flow_req *req; + int err = 0; + + mutex_lock(&pfvf->mbox.lock); + + req = otx2_mbox_alloc_msg_npc_delete_flow(&pfvf->mbox); + if (!req) { + err = -ENOMEM; + goto out; + } + + req->entry = inb_ctx_info->npc_mcam_entry; + + /* Send message to AF */ + err = otx2_sync_mbox_msg(&pfvf->mbox); +out: + mutex_unlock(&pfvf->mbox.lock); + return err; +} + +static int cn10k_inb_ena_dis_flow(struct otx2_nic *pfvf, + struct cn10k_inb_sw_ctx_info *inb_ctx_info, + bool disable) +{ + struct npc_mcam_ena_dis_entry_req *req; + int err = 0; + + mutex_lock(&pfvf->mbox.lock); + + if (disable) + req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(&pfvf->mbox); + else + req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(&pfvf->mbox); + if (!req) { + err = -ENOMEM; + goto out; + } + + req->entry = inb_ctx_info->npc_mcam_entry; + + err = otx2_sync_mbox_msg(&pfvf->mbox); +out: + mutex_unlock(&pfvf->mbox.lock); + return err; +} + +void 
cn10k_ipsec_inb_disable_flows(struct otx2_nic *pfvf) +{ + struct cn10k_inb_sw_ctx_info *inb_ctx_info; + + list_for_each_entry(inb_ctx_info, &pfvf->ipsec.inb_sw_ctx_list, list) { + if (cn10k_inb_ena_dis_flow(pfvf, inb_ctx_info, true)) { + netdev_err(pfvf->netdev, "Failed to disable UCAST_IPSEC" + " entry %d\n", inb_ctx_info->npc_mcam_entry); + continue; + } + inb_ctx_info->delete_npc_and_match_entry = false; + } +} + +static int cn10k_inb_install_spi_to_sa_match_entry(struct otx2_nic *pfvf, + struct xfrm_state *x, + struct cn10k_inb_sw_ctx_info *inb_ctx_info) +{ + struct nix_spi_to_sa_add_req *req; + struct nix_spi_to_sa_add_rsp *rsp; + int err; + + mutex_lock(&pfvf->mbox.lock); + req = otx2_mbox_alloc_msg_nix_spi_to_sa_add(&pfvf->mbox); + if (!req) { + mutex_unlock(&pfvf->mbox.lock); + return -ENOMEM; + } + + req->sa_index = inb_ctx_info->sa_index; + req->spi_index = be32_to_cpu(x->id.spi); + req->match_id = 0xfeed; + req->valid = 1; + + /* Send message to AF */ + err = otx2_sync_mbox_msg(&pfvf->mbox); + + rsp = (struct nix_spi_to_sa_add_rsp *)otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + inb_ctx_info->hash_index = rsp->hash_index; + inb_ctx_info->way = rsp->way; + + mutex_unlock(&pfvf->mbox.lock); + return err; +} + +static int cn10k_inb_delete_spi_to_sa_match_entry(struct otx2_nic *pfvf, + struct cn10k_inb_sw_ctx_info *inb_ctx_info) +{ + struct nix_spi_to_sa_delete_req *req; + int err; + + mutex_lock(&pfvf->mbox.lock); + req = otx2_mbox_alloc_msg_nix_spi_to_sa_delete(&pfvf->mbox); + if (!req) { + mutex_unlock(&pfvf->mbox.lock); + return -ENOMEM; + } + + req->hash_index = inb_ctx_info->hash_index; + req->way = inb_ctx_info->way; + + err = otx2_sync_mbox_msg(&pfvf->mbox); + mutex_unlock(&pfvf->mbox.lock); + return err; +} + static int cn10k_inb_nix_inline_lf_cfg(struct otx2_nic *pfvf) { struct nix_inline_ipsec_lf_cfg *req; @@ -677,6 +865,7 @@ static irqreturn_t cn10k_ipsec_npa_inb_ipsecq_intr_handler(int irq, void *data) static int cn10k_inb_cpt_init(struct net_device *netdev) { struct otx2_nic *pfvf = netdev_priv(netdev); + struct cn10k_inb_sw_ctx_info *inb_ctx_info; int ret = 0, vec; char *irq_name; void *ptr; @@ -737,6 +926,18 @@ static int cn10k_inb_cpt_init(struct net_device *netdev) else ret = 0; + /* If the driver has any offloaded inbound SA context(s), re-install the + * associated SPI-to-SA match and NPC rules. This is generally executed + * when the RQs are changed at runtime. 
+ */ + list_for_each_entry(inb_ctx_info, &pfvf->ipsec.inb_sw_ctx_list, list) { + cn10k_inb_ena_dis_flow(pfvf, inb_ctx_info, false); + cn10k_inb_install_flow(pfvf, inb_ctx_info->x_state, inb_ctx_info); + cn10k_inb_install_spi_to_sa_match_entry(pfvf, + inb_ctx_info->x_state, + inb_ctx_info); + } + out: return ret; } diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h index f042cbadf054..aad5ebea64ef 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h @@ -329,6 +329,7 @@ bool otx2_sqe_add_sg_ipsec(struct otx2_nic *pfvf, struct otx2_snd_queue *sq, bool cn10k_ipsec_transmit(struct otx2_nic *pf, struct netdev_queue *txq, struct otx2_snd_queue *sq, struct sk_buff *skb, int num_segs, int size); +void cn10k_ipsec_inb_disable_flows(struct otx2_nic *pf); #else static inline __maybe_unused int cn10k_ipsec_init(struct net_device *netdev) { @@ -359,5 +360,11 @@ cn10k_ipsec_transmit(struct otx2_nic *pf, struct netdev_queue *txq, { return true; } + +static inline void __maybe_unused +cn10k_ipsec_inb_disable_flows(struct otx2_nic *pf) +{ +} + #endif #endif // CN10K_IPSEC_H diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 0ffc56efcc23..7fcd382cb410 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -1714,7 +1714,12 @@ void otx2_free_hw_resources(struct otx2_nic *pf) if (!otx2_rep_dev(pf->pdev)) cn10k_free_all_ipolicers(pf); + /* Delete Inbound IPSec flows if any SA's are installed */ + if (!list_empty(&pf->ipsec.inb_sw_ctx_list)) + cn10k_ipsec_inb_disable_flows(pf); + mutex_lock(&mbox->lock); + /* Reset NIX LF */ free_req = otx2_mbox_alloc_msg_nix_lf_free(mbox); if (free_req) { @@ -2045,6 +2050,10 @@ int otx2_open(struct net_device *netdev) otx2_do_set_rx_mode(pf); + /* Re-initialize IPsec flows if any previously installed */ + if (!list_empty(&pf->ipsec.inb_sw_ctx_list)) + cn10k_ipsec_ethtool_init(netdev, true); + return 0; err_disable_rxtx:
From patchwork Fri May 2 13:19:55 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 886700
From: Tanmay Jagdale
Subject: [net-next PATCH v1 14/15] octeontx2-pf: ipsec: Process CPT metapackets
Date: Fri, 2 May 2025 18:49:55 +0530
Message-ID: <20250502132005.611698-15-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>
References: <20250502132005.611698-1-tanmay@marvell.com>
CPT hardware forwards decrypted IPsec packets to NIX via the X2P bus as metapackets, which are 256 bytes in length. Each metapacket contains the CPT_PARSE_HDR_S and the initial bytes of the decrypted packet, which are enough for NIX RX to classify the packet and submit it to the CPU. Additionally, CPT sets BIT(11) of the channel number to indicate that it's a 2nd-pass packet from CPT. Since the metapackets are not complete packets, they don't have to go through L3/L4 length and checksum verification, so these checks are disabled via the NIX_LF_INLINE_RQ_CFG mailbox during IPsec initialization.

The CPT_PARSE_HDR_S contains a WQE pointer to the complete decrypted packet. Add code in the Rx NAPI handler to parse the header and extract the WQE pointer. Later, use this WQE pointer to construct the skb and set the XFRM packet-mode flags to indicate successful decryption before submitting it to the network stack.

Signed-off-by: Tanmay Jagdale --- .../marvell/octeontx2/nic/cn10k_ipsec.c | 61 +++++++++++++++++++ .../marvell/octeontx2/nic/cn10k_ipsec.h | 47 ++++++++++++++ .../marvell/octeontx2/nic/otx2_struct.h | 16 +++++ .../marvell/octeontx2/nic/otx2_txrx.c | 25 +++++++- 4 files changed, 147 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c index 91c8f13b6e48..bebf5cdedee4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c @@ -346,6 +346,67 @@ static int cn10k_outb_cpt_init(struct net_device *netdev) return ret; } +struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf, + struct nix_rx_sg_s *sg, + struct sk_buff *skb, + int qidx) +{ + struct nix_wqe_rx_s *wqe = NULL; + u64 *seg_addr = &sg->seg_addr; + struct cpt_parse_hdr_s *cptp; + struct xfrm_offload *xo; + struct otx2_pool *pool; + struct xfrm_state *xs; + struct sec_path *sp; + u64 *va_ptr; + void *va; + int i; + + /* CPT_PARSE_HDR_S is present in the beginning of the buffer */ + va = phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain, *seg_addr)); + + /* Convert CPT_PARSE_HDR_S from BE to LE */ + va_ptr = (u64 *)va; + for (i = 0; i < (sizeof(struct cpt_parse_hdr_s) / sizeof(u64)); i++) + va_ptr[i] = be64_to_cpu(va_ptr[i]); + + cptp = (struct cpt_parse_hdr_s *)va; + + /* Convert the wqe_ptr from CPT_PARSE_HDR_S to a CPU usable pointer */ + wqe = (struct nix_wqe_rx_s *)phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain, + cptp->wqe_ptr)); + + /* Get the XFRM state pointer stored in SA context */ + va_ptr = pfvf->ipsec.inb_sa->base + + (cptp->cookie * pfvf->ipsec.sa_tbl_entry_sz) + 1024; + xs = (struct xfrm_state *)*va_ptr; + + /* Set XFRM offload status and flags for successful decryption */ + sp = secpath_set(skb); + if (!sp) { + netdev_err(pfvf->netdev, "Failed to secpath_set\n"); + wqe = NULL; + goto err_out; + } + + rcu_read_lock(); + xfrm_state_hold(xs); + rcu_read_unlock(); + + sp->xvec[sp->len++] = xs; + sp->olen++; + + xo = xfrm_offload(skb); + xo->flags = CRYPTO_DONE; + xo->status = CRYPTO_SUCCESS; + +err_out: + /* Free the metapacket memory here since it's not needed anymore */ + pool = &pfvf->qset.pool[qidx]; + otx2_free_bufs(pfvf, pool, *seg_addr - OTX2_HEAD_ROOM, pfvf->rbsize); + return wqe; +} + static int cn10k_inb_alloc_mcam_entry(struct otx2_nic *pfvf, struct cn10k_inb_sw_ctx_info *inb_ctx_info) { diff --git
a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h index aad5ebea64ef..68046e377486 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h @@ -8,6 +8,7 @@ #define CN10K_IPSEC_H #include +#include "otx2_struct.h" DECLARE_STATIC_KEY_FALSE(cn10k_ipsec_sa_enabled); @@ -302,6 +303,41 @@ struct cpt_sg_s { u64 rsvd_63_50 : 14; }; +/* CPT Parse Header Structure for Inbound packets */ +struct cpt_parse_hdr_s { + /* Word 0 */ + u64 cookie : 32; + u64 match_id : 16; + u64 err_sum : 1; + u64 reas_sts : 4; + u64 reserved_53 : 1; + u64 et_owr : 1; + u64 pkt_fmt : 1; + u64 pad_len : 3; + u64 num_frags : 3; + u64 pkt_out : 2; + + /* Word 1 */ + u64 wqe_ptr; + + /* Word 2 */ + u64 frag_age : 16; + u64 res_32_16 : 16; + u64 pf_func : 16; + u64 il3_off : 8; + u64 fi_pad : 3; + u64 fi_offset : 5; + + /* Word 3 */ + u64 hw_ccode : 8; + u64 uc_ccode : 8; + u64 res3_32_16 : 16; + u64 spi : 32; + + /* Word 4 */ + u64 misc; +}; + /* CPT LF_INPROG Register */ #define CPT_LF_INPROG_INFLIGHT GENMASK_ULL(8, 0) #define CPT_LF_INPROG_GRB_CNT GENMASK_ULL(39, 32) @@ -330,6 +366,10 @@ bool cn10k_ipsec_transmit(struct otx2_nic *pf, struct netdev_queue *txq, struct otx2_snd_queue *sq, struct sk_buff *skb, int num_segs, int size); void cn10k_ipsec_inb_disable_flows(struct otx2_nic *pf); +struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf, + struct nix_rx_sg_s *sg, + struct sk_buff *skb, + int qidx); #else static inline __maybe_unused int cn10k_ipsec_init(struct net_device *netdev) { @@ -366,5 +406,12 @@ cn10k_ipsec_inb_disable_flows(struct otx2_nic *pf) { } +static inline struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf, + struct nix_rx_sg_s *sg, + struct sk_buff *skb, + int qidx) +{ + return NULL; +} #endif #endif // CN10K_IPSEC_H diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h index 4e5899d8fa2e..506fab414b7e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h @@ -175,6 +175,22 @@ struct nix_cqe_tx_s { struct nix_send_comp_s comp; }; +/* NIX WQE header structure */ +struct nix_wqe_hdr_s { + u64 flow_tag : 32; + u64 tt : 2; + u64 reserved_34_43 : 10; + u64 node : 2; + u64 q : 14; + u64 wqe_type : 4; +}; + +struct nix_wqe_rx_s { + struct nix_wqe_hdr_s hdr; + struct nix_rx_parse_s parse; + struct nix_rx_sg_s sg; +}; + /* NIX SQE header structure */ struct nix_sqe_hdr_s { u64 total : 18; /* W0 */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c index 9593627b35a3..e9d0e27ffd0b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c @@ -205,6 +205,9 @@ static bool otx2_skb_add_frag(struct otx2_nic *pfvf, struct sk_buff *skb, } } + if (parse->chan & 0x800) + off = 0; + page = virt_to_page(va); if (likely(skb_shinfo(skb)->nr_frags < MAX_SKB_FRAGS)) { skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, @@ -333,6 +336,7 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf, struct nix_cqe_rx_s *cqe, bool *need_xdp_flush) { struct nix_rx_parse_s *parse = &cqe->parse; + struct nix_wqe_rx_s *orig_pkt_wqe = NULL; struct nix_rx_sg_s *sg = &cqe->sg; struct sk_buff *skb = NULL; void *end, *start; @@ -355,8 +359,25 @@ static void
otx2_rcv_pkt_handler(struct otx2_nic *pfvf, if (unlikely(!skb)) return; - start = (void *)sg; - end = start + ((cqe->parse.desc_sizem1 + 1) * 16); + if (parse->chan & 0x800) { + orig_pkt_wqe = cn10k_ipsec_process_cpt_metapkt(pfvf, sg, skb, cq->cq_idx); + if (!orig_pkt_wqe) { + netdev_err(pfvf->netdev, "Invalid WQE in CPT metapacket\n"); + napi_free_frags(napi); + cq->pool_ptrs++; + return; + } + /* Switch *sg to the orig_pkt_wqe's *sg which has the actual + * complete packet decrypted by CPT. + */ + sg = &orig_pkt_wqe->sg; + start = (void *)sg; + end = start + ((orig_pkt_wqe->parse.desc_sizem1 + 1) * 16); + } else { + start = (void *)sg; + end = start + ((cqe->parse.desc_sizem1 + 1) * 16); + } + while (start < end) { sg = (struct nix_rx_sg_s *)start; seg_addr = &sg->seg_addr;
From patchwork Fri May 2 13:19:56 2025
X-Patchwork-Submitter: Tanmay Jagdale
X-Patchwork-Id: 887254
From: Tanmay Jagdale
Subject: [net-next PATCH v1 15/15] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows
Date: Fri, 2 May 2025 18:49:56 +0530
Message-ID: <20250502132005.611698-16-tanmay@marvell.com>
In-Reply-To: <20250502132005.611698-1-tanmay@marvell.com>
References: <20250502132005.611698-1-tanmay@marvell.com>

Add the XFRM state hook for inbound flows and configure the following:
- Install an NPC rule to classify the 1st-pass IPsec packets and direct them to the dedicated RQ.
- Allocate a free entry from the SA table and populate it with the SA context details based on the xfrm state data.
- Create a mapping of the SPI value to the SA table index. This is used by NIX RX to calculate the exact SA context pointer address based on the SPI in the packet.
- Prepare the CPT SA context to decrypt the buffer in place and then write it to the CPT hardware via an LMT operation.
- When the XFRM state is deleted, clear this SA in the CPT hardware.

Also add XFRM policy hooks to allow successful offload of inbound PACKET_MODE flows.
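Condensed, the inbound xdo_dev_state_add() sequence implemented below looks as follows. The wrapper name inb_add_state_sketch() is illustrative; the four helpers are the ones added by this patch, while locking, the policy-before-state path, and error unwinding are omitted:

static int inb_add_state_sketch(struct otx2_nic *pf, struct xfrm_state *x,
				struct cn10k_inb_sw_ctx_info *ctx)
{
	int err;

	/* Reserve a slot in the 128-entry inbound SA table (bitmap) */
	ctx->sa_index = cn10k_inb_alloc_sa(pf, x);
	if (ctx->sa_index >= CN10K_IPSEC_INB_MAX_SA)
		return -ENOMEM;

	/* SPI -> SA-index mapping that NIX RX uses to locate the SA */
	err = cn10k_inb_install_spi_to_sa_match_entry(pf, x, ctx);
	if (err)
		return err;

	/* Fill struct cn10k_rx_sa_s from the xfrm state (key, salt, mode) */
	cn10k_xfrm_inb_prepare_sa(pf, x, ctx);

	/* Push the SA context to CPT hardware via the outbound CPT LF */
	return cn10k_inb_write_sa(pf, x, ctx);
}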
Signed-off-by: Tanmay Jagdale --- .../marvell/octeontx2/nic/cn10k_ipsec.c | 449 ++++++++++++++++-- 1 file changed, 419 insertions(+), 30 deletions(-) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c index bebf5cdedee4..6441598c7e0f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c @@ -448,7 +448,7 @@ static int cn10k_inb_alloc_mcam_entry(struct otx2_nic *pfvf, return err; } -static int cn10k_inb_install_flow(struct otx2_nic *pfvf, struct xfrm_state *x, +static int cn10k_inb_install_flow(struct otx2_nic *pfvf, struct cn10k_inb_sw_ctx_info *inb_ctx_info) { struct npc_install_flow_req *req; @@ -463,14 +463,14 @@ static int cn10k_inb_install_flow(struct otx2_nic *pfvf, struct xfrm_state *x, } req->entry = inb_ctx_info->npc_mcam_entry; - req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI) | BIT(NPC_DMAC); + req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI); req->intf = NIX_INTF_RX; req->index = pfvf->ipsec.inb_ipsec_rq; req->match_id = 0xfeed; req->channel = pfvf->hw.rx_chan_base; req->op = NIX_RX_ACTIONOP_UCAST_IPSEC; req->set_cntr = 1; - req->packet.spi = x->id.spi; + req->packet.spi = inb_ctx_info->spi; req->mask.spi = 0xffffffff; /* Send message to AF */ @@ -993,7 +993,7 @@ static int cn10k_inb_cpt_init(struct net_device *netdev) */ list_for_each_entry(inb_ctx_info, &pfvf->ipsec.inb_sw_ctx_list, list) { cn10k_inb_ena_dis_flow(pfvf, inb_ctx_info, false); - cn10k_inb_install_flow(pfvf, inb_ctx_info->x_state, inb_ctx_info); + cn10k_inb_install_flow(pfvf, inb_ctx_info); cn10k_inb_install_spi_to_sa_match_entry(pfvf, inb_ctx_info->x_state, inb_ctx_info); @@ -1035,6 +1035,19 @@ static int cn10k_outb_cpt_clean(struct otx2_nic *pf) return ret; } +static u32 cn10k_inb_alloc_sa(struct otx2_nic *pf, struct xfrm_state *x) +{ + u32 sa_index = 0; + + sa_index = find_first_zero_bit(pf->ipsec.inb_sa_table, CN10K_IPSEC_INB_MAX_SA); + if (sa_index >= CN10K_IPSEC_INB_MAX_SA) + return sa_index; + + set_bit(sa_index, pf->ipsec.inb_sa_table); + + return sa_index; +} + static void cn10k_cpt_inst_flush(struct otx2_nic *pf, struct cpt_inst_s *inst, u64 size) { @@ -1149,6 +1162,137 @@ static int cn10k_outb_write_sa(struct otx2_nic *pf, struct qmem *sa_info) return ret; } +static int cn10k_inb_write_sa(struct otx2_nic *pf, + struct xfrm_state *x, + struct cn10k_inb_sw_ctx_info *inb_ctx_info) +{ + dma_addr_t res_iova, dptr_iova, sa_iova; + struct cn10k_rx_sa_s *sa_dptr, *sa_cptr; + struct cpt_inst_s inst; + u32 sa_size, off; + struct cpt_res_s *res; + u64 reg_val; + int ret; + + res = dma_alloc_coherent(pf->dev, sizeof(struct cpt_res_s), + &res_iova, GFP_ATOMIC); + if (!res) + return -ENOMEM; + + sa_cptr = inb_ctx_info->sa_entry; + sa_iova = inb_ctx_info->sa_iova; + sa_size = sizeof(struct cn10k_rx_sa_s); + + sa_dptr = dma_alloc_coherent(pf->dev, sa_size, &dptr_iova, GFP_ATOMIC); + if (!sa_dptr) { + dma_free_coherent(pf->dev, sizeof(struct cpt_res_s), res, + res_iova); + return -ENOMEM; + } + + for (off = 0; off < (sa_size / 8); off++) + *((u64 *)sa_dptr + off) = cpu_to_be64(*((u64 *)sa_cptr + off)); + + memset(&inst, 0, sizeof(struct cpt_inst_s)); + + res->compcode = 0; + inst.res_addr = res_iova; + inst.dptr = (u64)dptr_iova; + inst.param2 = sa_size >> 3; + inst.dlen = sa_size; + inst.opcode_major = CN10K_IPSEC_MAJOR_OP_WRITE_SA; + inst.opcode_minor = CN10K_IPSEC_MINOR_OP_WRITE_SA; + inst.cptr = sa_iova; + inst.ctx_val = 1; + inst.egrp = 
CN10K_DEF_CPT_IPSEC_EGRP; + + /* Re-use Outbound CPT LF to install Ingress SAs as well because + * the driver does not own the ingress CPT LF. + */ + pf->ipsec.io_addr = (__force u64)otx2_get_regaddr(pf, CN10K_CPT_LF_NQX(0)); + cn10k_cpt_inst_flush(pf, &inst, sizeof(struct cpt_inst_s)); + dmb(sy); + + ret = cn10k_wait_for_cpt_respose(pf, res); + if (ret) + goto out; + + /* Trigger CTX flush to write dirty data back to DRAM */ + reg_val = FIELD_PREP(GENMASK_ULL(45, 0), sa_iova >> 7); + otx2_write64(pf, CN10K_CPT_LF_CTX_FLUSH, reg_val); + +out: + dma_free_coherent(pf->dev, sa_size, sa_dptr, dptr_iova); + dma_free_coherent(pf->dev, sizeof(struct cpt_res_s), res, res_iova); + return ret; +} + +static void cn10k_xfrm_inb_prepare_sa(struct otx2_nic *pf, struct xfrm_state *x, + struct cn10k_inb_sw_ctx_info *inb_ctx_info) +{ + struct cn10k_rx_sa_s *sa_entry = inb_ctx_info->sa_entry; + int key_len = (x->aead->alg_key_len + 7) / 8; + u8 *key = x->aead->alg_key; + u32 sa_size = sizeof(struct cn10k_rx_sa_s); + u64 *tmp_key; + u32 *tmp_salt; + int idx; + + memset(sa_entry, 0, sizeof(struct cn10k_rx_sa_s)); + + /* Disable ESN for now */ + sa_entry->esn_en = 0; + + /* HW context offset is word-31 */ + sa_entry->hw_ctx_off = 31; + sa_entry->pkind = NPC_RX_CPT_HDR_PKIND; + sa_entry->eth_ovrwr = 1; + sa_entry->pkt_output = 1; + sa_entry->pkt_format = 1; + sa_entry->orig_pkt_free = 0; + /* context push size is up to word 31 */ + sa_entry->ctx_push_size = 31 + 1; + /* context size, 128 Byte aligned up */ + sa_entry->ctx_size = (sa_size / OTX2_ALIGN) & 0xF; + + sa_entry->cookie = inb_ctx_info->sa_index; + + /* 1 word prepended to context header size */ + sa_entry->ctx_hdr_size = 1; + /* Mark SA entry valid */ + sa_entry->aop_valid = 1; + + sa_entry->sa_dir = 0; /* Inbound */ + sa_entry->ipsec_protocol = 1; /* ESP */ + /* Default to Transport Mode */ + if (x->props.mode == XFRM_MODE_TUNNEL) + sa_entry->ipsec_mode = 1; /* Tunnel Mode */ + + sa_entry->et_ovrwr_ddr_en = 1; + sa_entry->enc_type = 5; /* AES-GCM only */ + sa_entry->aes_key_len = 1; /* AES key length 128 */ + sa_entry->l2_l3_hdr_on_error = 1; + sa_entry->spi = cpu_to_be32(x->id.spi); + + /* Last 4 bytes are salt */ + key_len -= 4; + memcpy(sa_entry->cipher_key, key, key_len); + tmp_key = (u64 *)sa_entry->cipher_key; + + for (idx = 0; idx < key_len / 8; idx++) + tmp_key[idx] = be64_to_cpu(tmp_key[idx]); + + memcpy(&sa_entry->iv_gcm_salt, key + key_len, 4); + tmp_salt = (u32 *)&sa_entry->iv_gcm_salt; + *tmp_salt = be32_to_cpu(*tmp_salt); + + /* Write SA context data to memory before enabling */ + wmb(); + + /* Enable SA */ + sa_entry->sa_valid = 1; +} + static int cn10k_ipsec_get_hw_ctx_offset(void) { /* Offset on Hardware-context offset in word */ @@ -1256,11 +1400,6 @@ static int cn10k_ipsec_validate_state(struct xfrm_state *x, "Only IPv4/v6 xfrm states may be offloaded"); return -EINVAL; } - if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) { - NL_SET_ERR_MSG_MOD(extack, - "Cannot offload other than crypto-mode"); - return -EINVAL; - } if (x->props.mode != XFRM_MODE_TRANSPORT && x->props.mode != XFRM_MODE_TUNNEL) { NL_SET_ERR_MSG_MOD(extack, @@ -1272,11 +1411,6 @@ static int cn10k_ipsec_validate_state(struct xfrm_state *x, "Only ESP xfrm state may be offloaded"); return -EINVAL; } - if (x->encap) { - NL_SET_ERR_MSG_MOD(extack, - "Encapsulated xfrm state may not be offloaded"); - return -EINVAL; - } if (!x->aead) { NL_SET_ERR_MSG_MOD(extack, "Cannot offload xfrm states without aead"); @@ -1316,8 +1450,96 @@ static int cn10k_ipsec_validate_state(struct
xfrm_state *x, static int cn10k_ipsec_inb_add_state(struct xfrm_state *x, struct netlink_ext_ack *extack) { - NL_SET_ERR_MSG_MOD(extack, "xfrm inbound offload not supported"); - return -EOPNOTSUPP; + struct net_device *netdev = x->xso.dev; + struct cn10k_inb_sw_ctx_info *inb_ctx_info = NULL, *inb_ctx; + bool enable_rule = false; + struct otx2_nic *pf; + u64 *sa_offset_ptr; + u32 sa_index = 0; + int err = 0; + + pf = netdev_priv(netdev); + + /* If XFRM policy was added before state, then the inb_ctx_info instance + * would be allocated there. + */ + list_for_each_entry(inb_ctx, &pf->ipsec.inb_sw_ctx_list, list) { + if (inb_ctx->spi == x->id.spi) { + inb_ctx_info = inb_ctx; + enable_rule = true; + break; + } + } + + if (!inb_ctx_info) { + /* Allocate a structure to track SA related info in driver */ + inb_ctx_info = devm_kzalloc(pf->dev, sizeof(*inb_ctx_info), GFP_KERNEL); + if (!inb_ctx_info) + return -ENOMEM; + + /* Stash pointer in the xfrm offload handle */ + x->xso.offload_handle = (unsigned long)inb_ctx_info; + } + + sa_index = cn10k_inb_alloc_sa(pf, x); + if (sa_index >= CN10K_IPSEC_INB_MAX_SA) { + netdev_err(netdev, "Failed to find free entry in SA Table\n"); + err = -ENOMEM; + goto err_out; + } + + /* Fill in information for bookkeeping */ + inb_ctx_info->sa_index = sa_index; + inb_ctx_info->spi = x->id.spi; + inb_ctx_info->sa_entry = pf->ipsec.inb_sa->base + + (sa_index * pf->ipsec.sa_tbl_entry_sz); + inb_ctx_info->sa_iova = pf->ipsec.inb_sa->iova + + (sa_index * pf->ipsec.sa_tbl_entry_sz); + inb_ctx_info->x_state = x; + + /* Store XFRM state pointer in SA context at an offset of 1KB. + * It will be later used in the rcv_pkt_handler to associate + * an skb with XFRM state. + */ + sa_offset_ptr = pf->ipsec.inb_sa->base + + (sa_index * pf->ipsec.sa_tbl_entry_sz) + 1024; + *sa_offset_ptr = (u64)x; + + err = cn10k_inb_install_spi_to_sa_match_entry(pf, x, inb_ctx_info); + if (err) { + netdev_err(netdev, "Failed to install Inbound IPSec exact match entry\n"); + goto err_out; + } + + cn10k_xfrm_inb_prepare_sa(pf, x, inb_ctx_info); + + netdev_dbg(netdev, "inb_ctx_info: sa_index:%d spi:0x%x mcam_entry:%d" + " hash_index:0x%x way:0x%x\n", + inb_ctx_info->sa_index, inb_ctx_info->spi, + inb_ctx_info->npc_mcam_entry, inb_ctx_info->hash_index, + inb_ctx_info->way); + + err = cn10k_inb_write_sa(pf, x, inb_ctx_info); + if (err) + netdev_err(netdev, "Error writing inbound SA\n"); + + /* Enable NPC rule if policy was already installed */ + if (enable_rule) { + err = cn10k_inb_ena_dis_flow(pf, inb_ctx_info, false); + if (err) + netdev_err(netdev, "Failed to enable rule\n"); + } else { + /* All set, add ctx_info to the list */ + list_add_tail(&inb_ctx_info->list, &pf->ipsec.inb_sw_ctx_list); + } + + cn10k_cpt_device_set_available(pf); + return err; + +err_out: + x->xso.offload_handle = 0; + devm_kfree(pf->dev, inb_ctx_info); + return err; } static int cn10k_ipsec_outb_add_state(struct xfrm_state *x, @@ -1329,10 +1551,6 @@ static int cn10k_ipsec_outb_add_state(struct xfrm_state *x, struct otx2_nic *pf; int err; - err = cn10k_ipsec_validate_state(x, extack); - if (err) - return err; - pf = netdev_priv(netdev); err = qmem_alloc(pf->dev, &sa_info, pf->ipsec.sa_size, OTX2_ALIGN); @@ -1360,10 +1578,52 @@ static int cn10k_ipsec_outb_add_state(struct xfrm_state *x, static int cn10k_ipsec_add_state(struct xfrm_state *x, struct netlink_ext_ack *extack) { + int err; + + err = cn10k_ipsec_validate_state(x, extack); + if (err) + return err; + if (x->xso.dir == XFRM_DEV_OFFLOAD_IN) return 
cn10k_ipsec_inb_add_state(x, extack); else return cn10k_ipsec_outb_add_state(x, extack); + + return err; +} + +static void cn10k_ipsec_inb_del_state(struct otx2_nic *pf, struct xfrm_state *x) +{ + struct cn10k_inb_sw_ctx_info *inb_ctx_info; + struct cn10k_rx_sa_s *sa_entry; + struct net_device *netdev = x->xso.dev; + int err = 0; + + /* 1. Find SPI to SA entry */ + inb_ctx_info = (struct cn10k_inb_sw_ctx_info *)x->xso.offload_handle; + + if (inb_ctx_info->spi != be32_to_cpu(x->id.spi)) { + netdev_err(netdev, "SPI Mismatch (ctx) 0x%x != 0x%x (xfrm)\n", + inb_ctx_info->spi, be32_to_cpu(x->id.spi)); + return; + } + + /* 2. Delete SA in CPT HW */ + sa_entry = inb_ctx_info->sa_entry; + memset(sa_entry, 0, sizeof(struct cn10k_rx_sa_s)); + + sa_entry->ctx_push_size = 31 + 1; + sa_entry->ctx_size = (sizeof(struct cn10k_rx_sa_s) / OTX2_ALIGN) & 0xF; + sa_entry->aop_valid = 1; + + if (cn10k_cpt_device_set_inuse(pf)) { + err = cn10k_inb_write_sa(pf, x, inb_ctx_info); + if (err) + netdev_err(netdev, "Error (%d) deleting INB SA\n", err); + cn10k_cpt_device_set_available(pf); + } + + x->xso.offload_handle = 0; } static void cn10k_ipsec_del_state(struct xfrm_state *x) @@ -1374,11 +1634,11 @@ static void cn10k_ipsec_del_state(struct xfrm_state *x) struct otx2_nic *pf; int err; - if (x->xso.dir == XFRM_DEV_OFFLOAD_IN) - return; - pf = netdev_priv(netdev); + if (x->xso.dir == XFRM_DEV_OFFLOAD_IN) + return cn10k_ipsec_inb_del_state(pf, x); + sa_info = (struct qmem *)x->xso.offload_handle; sa_entry = (struct cn10k_tx_sa_s *)sa_info->base; memset(sa_entry, 0, sizeof(struct cn10k_tx_sa_s)); @@ -1397,13 +1657,112 @@ static void cn10k_ipsec_del_state(struct xfrm_state *x) /* If no more SA's then update netdev feature for potential change * in NETIF_F_HW_ESP. */ - if (!--pf->ipsec.outb_sa_count) - queue_work(pf->ipsec.sa_workq, &pf->ipsec.sa_work); + pf->ipsec.outb_sa_count--; + queue_work(pf->ipsec.sa_workq, &pf->ipsec.sa_work); +} + +static int cn10k_ipsec_policy_add(struct xfrm_policy *x, + struct netlink_ext_ack *extack) +{ + struct cn10k_inb_sw_ctx_info *inb_ctx_info = NULL, *inb_ctx; + struct net_device *netdev = x->xdo.dev; + struct otx2_nic *pf; + int ret = 0; + bool disable_rule = true; + + if (x->xdo.dir != XFRM_DEV_OFFLOAD_IN) { + netdev_err(netdev, "ERR: Can only offload Inbound policies\n"); + return -EINVAL; + } + + if (x->xdo.type != XFRM_DEV_OFFLOAD_PACKET) { + netdev_err(netdev, "ERR: Only Packet mode supported\n"); + return -EINVAL; + } + + pf = netdev_priv(netdev); + + /* If XFRM state was added before policy, then the inb_ctx_info instance + * would be allocated there.
+ */ + list_for_each_entry(inb_ctx, &pf->ipsec.inb_sw_ctx_list, list) { + if (inb_ctx->spi == x->xfrm_vec[0].id.spi) { + inb_ctx_info = inb_ctx; + disable_rule = false; + break; + } + } + + if (!inb_ctx_info) { + /* Allocate a structure to track SA related info in driver */ + inb_ctx_info = devm_kzalloc(pf->dev, sizeof(*inb_ctx_info), GFP_KERNEL); + if (!inb_ctx_info) + return -ENOMEM; + + inb_ctx_info->spi = x->xfrm_vec[0].id.spi; + } + + ret = cn10k_inb_alloc_mcam_entry(pf, inb_ctx_info); + if (ret) { + netdev_err(netdev, "Failed to allocate MCAM entry for Inbound IPSec flow\n"); + goto err_out; + } + + ret = cn10k_inb_install_flow(pf, inb_ctx_info); + if (ret) { + netdev_err(netdev, "Failed to install Inbound IPSec flow\n"); + goto err_out; + } + + /* Leave rule in a disabled state until xfrm_state add is completed */ + if (disable_rule) { + ret = cn10k_inb_ena_dis_flow(pf, inb_ctx_info, true); + if (ret) + netdev_err(netdev, "Failed to disable rule\n"); + + /* All set, add ctx_info to the list */ + list_add_tail(&inb_ctx_info->list, &pf->ipsec.inb_sw_ctx_list); + } + + /* Stash pointer in the xfrm offload handle */ + x->xdo.offload_handle = (unsigned long)inb_ctx_info; + +err_out: + return ret; +} + +static void cn10k_ipsec_policy_delete(struct xfrm_policy *x) +{ + struct cn10k_inb_sw_ctx_info *inb_ctx_info; + struct net_device *netdev = x->xdo.dev; + struct otx2_nic *pf; + + if (!x->xdo.offload_handle) + return; + + pf = netdev_priv(netdev); + inb_ctx_info = (struct cn10k_inb_sw_ctx_info *)x->xdo.offload_handle; + + /* Schedule a workqueue to free NPC rule and SPI-to-SA match table + * entry because they are freed via a mailbox call which can sleep + * and the delete policy routine from XFRM stack is called in an + * atomic context. + */ + inb_ctx_info->delete_npc_and_match_entry = true; + queue_work(pf->ipsec.sa_workq, &pf->ipsec.sa_work); +} + +static void cn10k_ipsec_policy_free(struct xfrm_policy *x) +{ + return; } static const struct xfrmdev_ops cn10k_ipsec_xfrmdev_ops = { .xdo_dev_state_add = cn10k_ipsec_add_state, .xdo_dev_state_delete = cn10k_ipsec_del_state, + .xdo_dev_policy_add = cn10k_ipsec_policy_add, + .xdo_dev_policy_delete = cn10k_ipsec_policy_delete, + .xdo_dev_policy_free = cn10k_ipsec_policy_free, }; static void cn10k_ipsec_sa_wq_handler(struct work_struct *work) @@ -1411,12 +1770,42 @@ static void cn10k_ipsec_sa_wq_handler(struct work_struct *work) struct cn10k_ipsec *ipsec = container_of(work, struct cn10k_ipsec, sa_work); struct otx2_nic *pf = container_of(ipsec, struct otx2_nic, ipsec); + struct cn10k_inb_sw_ctx_info *inb_ctx_info, *tmp; + int err; + + list_for_each_entry_safe(inb_ctx_info, tmp, &pf->ipsec.inb_sw_ctx_list, + list) { + if (!inb_ctx_info->delete_npc_and_match_entry) + continue; + + /* 3. Delete all the associated NPC rules associated */ + err = cn10k_inb_delete_flow(pf, inb_ctx_info); + if (err) { + netdev_err(pf->netdev, "Failed to free UCAST_IPSEC entry %d\n", + inb_ctx_info->npc_mcam_entry); + } + + /* 4. Remove SPI_TO_SA exact match entry */ + err = cn10k_inb_delete_spi_to_sa_match_entry(pf, inb_ctx_info); + if (err) + netdev_err(pf->netdev, "Failed to delete spi_to_sa_match_entry\n"); + + inb_ctx_info->delete_npc_and_match_entry = false; + + /* 5. Finally clear the entry from the SA Table */ + clear_bit(inb_ctx_info->sa_index, pf->ipsec.inb_sa_table); - /* Disable static branch when no more SA enabled */ - static_branch_disable(&cn10k_ipsec_sa_enabled); - rtnl_lock(); - netdev_update_features(pf->netdev); - rtnl_unlock(); + /* 6. 
Free the inb_ctx_info */ + list_del(&inb_ctx_info->list); + devm_kfree(pf->dev, inb_ctx_info); + } + + if (list_empty(&pf->ipsec.inb_sw_ctx_list) && !pf->ipsec.outb_sa_count) { + static_branch_disable(&cn10k_ipsec_sa_enabled); + rtnl_lock(); + netdev_update_features(pf->netdev); + rtnl_unlock(); + } } static int cn10k_ipsec_configure_cpt_bpid(struct otx2_nic *pfvf)