From patchwork Tue Apr 13 05:17:12 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 420182
From: Hemant Agrawal
To: dev@dpdk.org, gakhil@marvell.com, nicolas.chautru@intel.com
Cc: david.marchand@redhat.com, Hemant Agrawal, Nipun Gupta
Date: Tue, 13 Apr 2021 10:47:12 +0530
Message-Id: <20210413051715.26430-6-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210413051715.26430-1-hemant.agrawal@nxp.com>
References: <20210410170252.4587-1-hemant.agrawal@nxp.com> <20210413051715.26430-1-hemant.agrawal@nxp.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v3 5/8] baseband/la12xx: add enqueue and dequeue support
List-Id: DPDK patches and discussions

Add support for enqueueing LDPC encode/decode operations to the modem
device and dequeueing the completed operations from it.
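
The queues are lock-free producer/consumer rings shared with the modem.
Both the producer index (PI) and the consumer index (CI) carry a wrap
flag in their MSB: the ring is full when the two indices are equal but
the flags differ (the producer has wrapped once more than the consumer),
and empty when both the indices and the flags match. The following is a
minimal standalone sketch of that convention; FLAG_MASK, INDEX_MASK,
ring_full(), ring_empty() and ring_advance() are local stand-ins for the
IPC_PI_CI_*_MASK macros and the is_bd_ring_full()/is_bd_ring_empty()
helpers added by this patch, not driver API:

#include <stdint.h>
#include <stdio.h>

#define FLAG_MASK  0x80000000u	/* wrap flag in the MSB */
#define INDEX_MASK 0x7FFFFFFFu	/* ring index in the low 31 bits */

/* Full: same index, but the producer is one wrap ahead (flags differ). */
static int ring_full(uint32_t pi_raw, uint32_t ci_raw)
{
	return (pi_raw & INDEX_MASK) == (ci_raw & INDEX_MASK) &&
	       (pi_raw >> 31) != (ci_raw >> 31);
}

/* Empty: same index and same wrap flag, i.e. identical raw values. */
static int ring_empty(uint32_t pi_raw, uint32_t ci_raw)
{
	return pi_raw == ci_raw;
}

/* Advance an index, flipping the wrap flag when it wraps back to 0. */
static uint32_t ring_advance(uint32_t raw, uint32_t ring_size)
{
	uint32_t idx = (raw & INDEX_MASK) + 1;
	uint32_t flag = raw & FLAG_MASK;

	if (idx == ring_size) {
		idx = 0;
		flag ^= FLAG_MASK;
	}
	return flag | idx;
}

int main(void)
{
	uint32_t pi = 0, ci = 0, ring_size = 4;

	for (int i = 0; i < 4; i++)
		pi = ring_advance(pi, ring_size);
	/* pi is back at index 0 with a flipped flag: full, not empty */
	printf("full=%d empty=%d\n", ring_full(pi, ci), ring_empty(pi, ci));
	return 0;
}

Because the wrap flag disambiguates a full ring from an empty one, all
ring entries can be used without reserving an empty slot.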
Signed-off-by: Nipun Gupta
Signed-off-by: Hemant Agrawal
---
 drivers/baseband/la12xx/bbdev_la12xx.c     | 397 ++++++++++++++++++++-
 drivers/baseband/la12xx/bbdev_la12xx_ipc.h |  37 ++
 2 files changed, 430 insertions(+), 4 deletions(-)

-- 
2.17.1

diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c
index 0a68686205..d1040987b2 100644
--- a/drivers/baseband/la12xx/bbdev_la12xx.c
+++ b/drivers/baseband/la12xx/bbdev_la12xx.c
@@ -117,6 +117,10 @@ la12xx_queue_release(struct rte_bbdev *dev, uint16_t q_id)
 	((uint64_t) ((unsigned long) (A) \
 	- ((uint64_t)ipc_priv->hugepg_start.host_vaddr)))
 
+#define MODEM_P2V(A) \
+	((uint64_t) ((unsigned long) (A) \
+	+ (unsigned long)(ipc_priv->peb_start.host_vaddr)))
+
 static int ipc_queue_configure(uint32_t channel_id,
 		ipc_t instance, struct bbdev_la12xx_q_priv *q_priv)
 {
@@ -345,6 +349,387 @@ static const struct rte_bbdev_ops pmd_ops = {
 	.queue_release = la12xx_queue_release,
 	.start = la12xx_start
 };
+
+static int
+fill_feca_desc_enc(struct bbdev_la12xx_q_priv *q_priv,
+		struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
+		struct rte_bbdev_enc_op *bbdev_enc_op,
+		struct rte_bbdev_op_data *in_op_data)
+{
+	RTE_SET_USED(q_priv);
+	RTE_SET_USED(bbdev_ipc_op);
+	RTE_SET_USED(bbdev_enc_op);
+	RTE_SET_USED(in_op_data);
+
+	return 0;
+}
+
+static int
+fill_feca_desc_dec(struct bbdev_la12xx_q_priv *q_priv,
+		struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
+		struct rte_bbdev_dec_op *bbdev_dec_op,
+		struct rte_bbdev_op_data *out_op_data)
+{
+	RTE_SET_USED(q_priv);
+	RTE_SET_USED(bbdev_ipc_op);
+	RTE_SET_USED(bbdev_dec_op);
+	RTE_SET_USED(out_op_data);
+
+	return 0;
+}
+
+static inline int
+is_bd_ring_full(uint32_t ci, uint32_t ci_flag,
+		uint32_t pi, uint32_t pi_flag)
+{
+	if (pi == ci) {
+		if (pi_flag != ci_flag)
+			return 1; /* Ring is Full */
+	}
+	return 0;
+}
+
+static inline int
+prepare_ldpc_enc_op(struct rte_bbdev_enc_op *bbdev_enc_op,
+		struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
+		struct bbdev_la12xx_q_priv *q_priv,
+		struct rte_bbdev_op_data *in_op_data,
+		struct rte_bbdev_op_data *out_op_data)
+{
+	struct rte_bbdev_op_ldpc_enc *ldpc_enc = &bbdev_enc_op->ldpc_enc;
+	uint32_t total_out_bits;
+	int ret;
+
+	total_out_bits = (ldpc_enc->tb_params.cab *
+			ldpc_enc->tb_params.ea) + (ldpc_enc->tb_params.c -
+			ldpc_enc->tb_params.cab) * ldpc_enc->tb_params.eb;
+
+	ldpc_enc->output.length = (total_out_bits + 7) / 8;
+
+	ret = fill_feca_desc_enc(q_priv, bbdev_ipc_op,
+			bbdev_enc_op, in_op_data);
+	if (ret) {
+		BBDEV_LA12XX_PMD_ERR(
+			"fill_feca_desc_enc failed, ret: %d", ret);
+		return ret;
+	}
+
+	rte_pktmbuf_append(out_op_data->data, ldpc_enc->output.length);
+
+	return 0;
+}
+
+static inline int
+prepare_ldpc_dec_op(struct rte_bbdev_dec_op *bbdev_dec_op,
+		struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
+		struct bbdev_la12xx_q_priv *q_priv,
+		struct rte_bbdev_op_data *out_op_data)
+{
+	struct rte_bbdev_op_ldpc_dec *ldpc_dec = &bbdev_dec_op->ldpc_dec;
+	uint32_t total_out_bits;
+	uint32_t num_code_blocks = 0;
+	uint16_t sys_cols;
+	int ret;
+
+	sys_cols = (ldpc_dec->basegraph == 1) ? 22 : 10;
+	if (ldpc_dec->tb_params.c == 1) {
+		total_out_bits = ((sys_cols * ldpc_dec->z_c) -
+				ldpc_dec->n_filler);
+		/* 5G-NR protocol uses a 16 bit CRC when the output packet
+		 * size is <= 3824 bits; otherwise a 24 bit CRC is used.
+		 * Adjust the output bits accordingly.
+		 */
+		if (total_out_bits - 16 <= 3824)
+			total_out_bits -= 16;
+		else
+			total_out_bits -= 24;
+		ldpc_dec->hard_output.length = (total_out_bits / 8);
+	} else {
+		total_out_bits = (((sys_cols * ldpc_dec->z_c) -
+				ldpc_dec->n_filler - 24) *
+				ldpc_dec->tb_params.c);
+		ldpc_dec->hard_output.length = (total_out_bits / 8) - 3;
+	}
+
+	num_code_blocks = ldpc_dec->tb_params.c;
+
+	bbdev_ipc_op->num_code_blocks = rte_cpu_to_be_32(num_code_blocks);
+
+	ret = fill_feca_desc_dec(q_priv, bbdev_ipc_op,
+			bbdev_dec_op, out_op_data);
+	if (ret) {
+		BBDEV_LA12XX_PMD_ERR("fill_feca_desc_dec failed, ret: %d", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+enqueue_single_op(struct bbdev_la12xx_q_priv *q_priv, void *bbdev_op)
+{
+	struct bbdev_la12xx_private *priv = q_priv->bbdev_priv;
+	ipc_userspace_t *ipc_priv = priv->ipc_priv;
+	ipc_instance_t *ipc_instance = ipc_priv->instance;
+	struct bbdev_ipc_dequeue_op *bbdev_ipc_op;
+	struct rte_bbdev_op_ldpc_enc *ldpc_enc;
+	struct rte_bbdev_op_ldpc_dec *ldpc_dec;
+	uint32_t q_id = q_priv->q_id;
+	uint32_t ci, ci_flag, pi, pi_flag;
+	ipc_ch_t *ch = &(ipc_instance->ch_list[q_id]);
+	ipc_br_md_t *md = &(ch->md);
+	size_t virt;
+	char *huge_start_addr =
+		(char *)q_priv->bbdev_priv->ipc_priv->hugepg_start.host_vaddr;
+	struct rte_bbdev_op_data *in_op_data, *out_op_data;
+	char *data_ptr;
+	uint32_t l1_pcie_addr;
+	int ret;
+	uint32_t temp_ci;
+
+	temp_ci = q_priv->host_params->ci;
+	ci = IPC_GET_CI_INDEX(temp_ci);
+	ci_flag = IPC_GET_CI_FLAG(temp_ci);
+
+	pi = IPC_GET_PI_INDEX(q_priv->host_pi);
+	pi_flag = IPC_GET_PI_FLAG(q_priv->host_pi);
+
+	BBDEV_LA12XX_PMD_DP_DEBUG(
+		"before bd_ring_full: pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
+		pi, ci, pi_flag, ci_flag, q_priv->queue_size);
+
+	if (is_bd_ring_full(ci, ci_flag, pi, pi_flag)) {
+		BBDEV_LA12XX_PMD_DP_DEBUG(
+			"bd ring full for queue id: %d", q_id);
+		return IPC_CH_FULL;
+	}
+
+	virt = MODEM_P2V(q_priv->host_params->modem_ptr[pi]);
+	bbdev_ipc_op = (struct bbdev_ipc_dequeue_op *)virt;
+	q_priv->bbdev_op[pi] = bbdev_op;
+
+	switch (q_priv->op_type) {
+	case RTE_BBDEV_OP_LDPC_ENC:
+		ldpc_enc = &(((struct rte_bbdev_enc_op *)bbdev_op)->ldpc_enc);
+		in_op_data = &ldpc_enc->input;
+		out_op_data = &ldpc_enc->output;
+
+		ret = prepare_ldpc_enc_op(bbdev_op, bbdev_ipc_op, q_priv,
+				in_op_data, out_op_data);
+		if (ret) {
+			BBDEV_LA12XX_PMD_ERR(
+				"prepare_ldpc_enc_op failed, ret: %d", ret);
+			return ret;
+		}
+		break;
+
+	case RTE_BBDEV_OP_LDPC_DEC:
+		ldpc_dec = &(((struct rte_bbdev_dec_op *)bbdev_op)->ldpc_dec);
+		in_op_data = &ldpc_dec->input;
+
+		out_op_data = &ldpc_dec->hard_output;
+
+		ret = prepare_ldpc_dec_op(bbdev_op, bbdev_ipc_op,
+				q_priv, out_op_data);
+		if (ret) {
+			BBDEV_LA12XX_PMD_ERR(
+				"prepare_ldpc_dec_op failed, ret: %d", ret);
+			return ret;
+		}
+		break;
+
+	default:
+		BBDEV_LA12XX_PMD_ERR("unsupported bbdev_ipc op type");
+		return -1;
+	}
+
+	if (in_op_data->data) {
+		data_ptr = rte_pktmbuf_mtod(in_op_data->data, char *);
+		l1_pcie_addr = (uint32_t)GUL_USER_HUGE_PAGE_ADDR +
+			data_ptr - huge_start_addr;
+		bbdev_ipc_op->in_addr = l1_pcie_addr;
+		bbdev_ipc_op->in_len = in_op_data->length;
+	}
+
+	if (out_op_data->data) {
+		data_ptr = rte_pktmbuf_mtod(out_op_data->data, char *);
+		l1_pcie_addr = (uint32_t)GUL_USER_HUGE_PAGE_ADDR +
+			data_ptr - huge_start_addr;
+		bbdev_ipc_op->out_addr = rte_cpu_to_be_32(l1_pcie_addr);
+		bbdev_ipc_op->out_len = rte_cpu_to_be_32(out_op_data->length);
+	}
+
+	/* Move Producer Index forward */
+	pi++;
+	/* Flip the PI flag, if wrapping */
+	if (unlikely(q_priv->queue_size == pi)) {
+		pi = 0;
+		pi_flag = pi_flag ? 0 : 1;
+	}
+
+	if (pi_flag)
+		IPC_SET_PI_FLAG(pi);
+	else
+		IPC_RESET_PI_FLAG(pi);
+	/* Wait for Data Copy & pi_flag update to complete before updating pi */
+	rte_mb();
+	/* now update pi */
+	md->pi = rte_cpu_to_be_32(pi);
+	q_priv->host_pi = pi;
+
+	BBDEV_LA12XX_PMD_DP_DEBUG(
+		"exit: pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
+		pi, ci, pi_flag, ci_flag, q_priv->queue_size);
+
+	return 0;
+}
+
+/* Enqueue decode burst */
+static uint16_t
+enqueue_dec_ops(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
+{
+	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
+	int nb_enqueued, ret;
+
+	for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) {
+		ret = enqueue_single_op(q_priv, ops[nb_enqueued]);
+		if (ret)
+			break;
+	}
+
+	q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued;
+	q_data->queue_stats.enqueued_count += nb_enqueued;
+
+	return nb_enqueued;
+}
+
+/* Enqueue encode burst */
+static uint16_t
+enqueue_enc_ops(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
+{
+	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
+	int nb_enqueued, ret;
+
+	for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) {
+		ret = enqueue_single_op(q_priv, ops[nb_enqueued]);
+		if (ret)
+			break;
+	}
+
+	q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued;
+	q_data->queue_stats.enqueued_count += nb_enqueued;
+
+	return nb_enqueued;
+}
+
+static inline int
+is_bd_ring_empty(uint32_t ci, uint32_t ci_flag,
+		uint32_t pi, uint32_t pi_flag)
+{
+	if (ci == pi) {
+		if (ci_flag == pi_flag)
+			return 1; /* No more Buffer */
+	}
+	return 0;
+}
+
+/* Dequeue a single operation */
+static void *
+dequeue_single_op(struct bbdev_la12xx_q_priv *q_priv, void *dst)
+{
+	struct bbdev_la12xx_private *priv = q_priv->bbdev_priv;
+	ipc_userspace_t *ipc_priv = priv->ipc_priv;
+	uint32_t q_id = q_priv->q_id + HOST_RX_QUEUEID_OFFSET;
+	ipc_instance_t *ipc_instance = ipc_priv->instance;
+	ipc_ch_t *ch = &(ipc_instance->ch_list[q_id]);
+	uint32_t ci, ci_flag, pi, pi_flag;
+	ipc_br_md_t *md;
+	void *op;
+	uint32_t temp_pi;
+
+	md = &(ch->md);
+	ci = IPC_GET_CI_INDEX(q_priv->host_ci);
+	ci_flag = IPC_GET_CI_FLAG(q_priv->host_ci);
+
+	temp_pi = q_priv->host_params->pi;
+	pi = IPC_GET_PI_INDEX(temp_pi);
+	pi_flag = IPC_GET_PI_FLAG(temp_pi);
+
+	if (is_bd_ring_empty(ci, ci_flag, pi, pi_flag))
+		return NULL;
+
+	BBDEV_LA12XX_PMD_DP_DEBUG(
+		"pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
+		pi, ci, pi_flag, ci_flag, q_priv->queue_size);
+
+	op = q_priv->bbdev_op[ci];
+
+	rte_memcpy(dst, q_priv->msg_ch_vaddr[ci],
+		sizeof(struct bbdev_ipc_enqueue_op));
+
+	/* Move Consumer Index forward */
+	ci++;
+	/* Flip the CI flag, if wrapping */
+	if (q_priv->queue_size == ci) {
+		ci = 0;
+		ci_flag = ci_flag ? 0 : 1;
+	}
+	if (ci_flag)
+		IPC_SET_CI_FLAG(ci);
+	else
+		IPC_RESET_CI_FLAG(ci);
+	md->ci = rte_cpu_to_be_32(ci);
+	q_priv->host_ci = ci;
+
+	BBDEV_LA12XX_PMD_DP_DEBUG(
+		"exit: pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
+		pi, ci, pi_flag, ci_flag, q_priv->queue_size);
+
+	return op;
+}
+
+/* Dequeue decode burst */
+static uint16_t
+dequeue_dec_ops(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
+{
+	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
+	struct bbdev_ipc_enqueue_op bbdev_ipc_op;
+	int nb_dequeued;
+
+	for (nb_dequeued = 0; nb_dequeued < nb_ops; nb_dequeued++) {
+		ops[nb_dequeued] = dequeue_single_op(q_priv, &bbdev_ipc_op);
+		if (!ops[nb_dequeued])
+			break;
+		ops[nb_dequeued]->status = bbdev_ipc_op.status;
+	}
+	q_data->queue_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+/* Dequeue encode burst */
+static uint16_t
+dequeue_enc_ops(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
+{
+	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
+	struct bbdev_ipc_enqueue_op bbdev_ipc_op;
+	int nb_dequeued;
+
+	for (nb_dequeued = 0; nb_dequeued < nb_ops; nb_dequeued++) {
+		ops[nb_dequeued] = dequeue_single_op(q_priv, &bbdev_ipc_op);
+		if (!ops[nb_dequeued])
+			break;
+		ops[nb_dequeued]->status = bbdev_ipc_op.status;
+	}
+	q_data->queue_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
 static struct hugepage_info *
 get_hugepage_info(void)
 {
@@ -720,10 +1105,14 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev,
 	bbdev->intr_handle = NULL;
 
 	/* register rx/tx burst functions for data path */
-	bbdev->dequeue_enc_ops = NULL;
-	bbdev->dequeue_dec_ops = NULL;
-	bbdev->enqueue_enc_ops = NULL;
-	bbdev->enqueue_dec_ops = NULL;
+	bbdev->dequeue_enc_ops = dequeue_enc_ops;
+	bbdev->dequeue_dec_ops = dequeue_dec_ops;
+	bbdev->enqueue_enc_ops = enqueue_enc_ops;
+	bbdev->enqueue_dec_ops = enqueue_dec_ops;
+	bbdev->dequeue_ldpc_enc_ops = dequeue_enc_ops;
+	bbdev->dequeue_ldpc_dec_ops = dequeue_dec_ops;
+	bbdev->enqueue_ldpc_enc_ops = enqueue_enc_ops;
+	bbdev->enqueue_ldpc_dec_ops = enqueue_dec_ops;
 
 	return 0;
 }
diff --git a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
index 9d5789f726..4e181e9254 100644
--- a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
+++ b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
@@ -76,6 +76,25 @@ typedef struct {
 	_IOWR(GUL_IPC_MAGIC, 5, struct ipc_msg *)
 #define IOCTL_GUL_IPC_CHANNEL_RAISE_INTERRUPT \
 	_IOW(GUL_IPC_MAGIC, 6, int *)
 
+#define GUL_USER_HUGE_PAGE_OFFSET	(0)
+#define GUL_PCI1_ADDR_BASE	(0x00000000ULL)
+
+#define GUL_USER_HUGE_PAGE_ADDR	(GUL_PCI1_ADDR_BASE + GUL_USER_HUGE_PAGE_OFFSET)
+
+/* IPC PI/CI index & flag manipulation helpers */
+#define IPC_PI_CI_FLAG_MASK	0x80000000 /* (1 << 31) */
+#define IPC_PI_CI_INDEX_MASK	0x7FFFFFFF /* ~(1 << 31) */
+
+#define IPC_SET_PI_FLAG(x)	(x |= IPC_PI_CI_FLAG_MASK)
+#define IPC_RESET_PI_FLAG(x)	(x &= IPC_PI_CI_INDEX_MASK)
+#define IPC_GET_PI_FLAG(x)	(x >> 31)
+#define IPC_GET_PI_INDEX(x)	(x & IPC_PI_CI_INDEX_MASK)
+
+#define IPC_SET_CI_FLAG(x)	(x |= IPC_PI_CI_FLAG_MASK)
+#define IPC_RESET_CI_FLAG(x)	(x &= IPC_PI_CI_INDEX_MASK)
+#define IPC_GET_CI_FLAG(x)	(x >> 31)
+#define IPC_GET_CI_INDEX(x)	(x & IPC_PI_CI_INDEX_MASK)
+
 /** buffer ring common metadata */
 typedef struct ipc_bd_ring_md {
 	volatile uint32_t pi;	/**< Producer index and flag (MSB)
@@ -173,6 +192,24 @@ struct bbdev_ipc_enqueue_op {
 	uint32_t rsvd;
 };
 
+/** Structure specifying dequeue operation (dequeue at LA1224) */
+struct bbdev_ipc_dequeue_op {
+	/** Input buffer memory address */
+	uint32_t in_addr;
+	/** Input buffer memory length */
+	uint32_t in_len;
+	/** Output buffer memory address */
+	uint32_t out_addr;
+	/** Output buffer memory length */
+	uint32_t out_len;
+	/** Number of code blocks. Only set when HARQ is used */
+	uint32_t num_code_blocks;
+	/** Dequeue Operation flags */
+	uint32_t op_flags;
+	/** Shared metadata between L1 and L2 */
+	uint32_t shared_metadata;
+};
+
 /* This shared memory would be on the host side which have copy of some
  * of the parameters which are also part of Shared BD ring. Read access
  * of these parameters from the host side would not be over PCI.
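
As a footnote to prepare_ldpc_dec_op() above, a standalone check of the
decode output sizing; the inputs (basegraph 1, i.e. 22 systematic
columns, z_c = 384, n_filler = 0, and a 4-block transport block in the
multi-block case) are illustrative values, not taken from the patch:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Illustrative inputs: basegraph 1 => 22 systematic columns,
	 * lifting size z_c = 384, no filler bits.
	 */
	uint16_t sys_cols = 22;
	uint32_t z_c = 384, n_filler = 0;
	uint32_t total_out_bits;

	/* Single code block (tb_params.c == 1): strip the 16-bit TB CRC
	 * if the output fits in 3824 bits, else the 24-bit TB CRC.
	 */
	total_out_bits = (sys_cols * z_c) - n_filler;	/* 8448 */
	if (total_out_bits - 16 <= 3824)
		total_out_bits -= 16;
	else
		total_out_bits -= 24;			/* 8448 - 24 = 8424 */
	printf("c=1: %u bytes\n", total_out_bits / 8);	/* 1053 */

	/* Multi-block TB (tb_params.c == 4): a 24-bit CRC is stripped per
	 * block, and the driver drops 3 further bytes from the total.
	 */
	total_out_bits = ((sys_cols * z_c) - n_filler - 24) * 4; /* 33696 */
	printf("c=4: %u bytes\n", (total_out_bits / 8) - 3);	 /* 4209 */
	return 0;
}

In the single-block case the 8448 output bits exceed the 3824-bit
threshold once the 16-bit CRC is discounted, so the 24-bit CRC is
stripped, giving 8424 bits = 1053 bytes; the 4-block case strips one
24-bit CRC per block and 3 further bytes overall, giving 4209 bytes.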