From patchwork Wed Sep 28 07:38:01 2022
X-Patchwork-Submitter: guanjun
X-Patchwork-Id: 610279
From: 'Guanjun'
To: herbert@gondor.apana.org.au, elliott@hpe.com
Cc: zelin.deng@linux.alibaba.com, guanjun@linux.alibaba.com,
    xuchun.shang@linux.alibaba.com, artie.ding@linux.alibaba.com,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/9] crypto/ycc: Add irq support for ycc kernel rings
Date: Wed, 28 Sep 2022 15:38:01 +0800
Message-Id: <1664350687-47330-4-git-send-email-guanjun@linux.alibaba.com>
In-Reply-To: <1664350687-47330-1-git-send-email-guanjun@linux.alibaba.com>
References: <1664350687-47330-1-git-send-email-guanjun@linux.alibaba.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Zelin Deng

Each kernel ring has its own "command done" irq. For now, user rings
do not enable irqs.

Signed-off-by: Zelin Deng
---
 drivers/crypto/ycc/ycc_isr.c | 92 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 80 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/ycc/ycc_isr.c b/drivers/crypto/ycc/ycc_isr.c
index cd2a2d7..a86c8d7 100644
--- a/drivers/crypto/ycc/ycc_isr.c
+++ b/drivers/crypto/ycc/ycc_isr.c
@@ -12,6 +12,17 @@
 #include
 #include "ycc_isr.h"
+#include "ycc_dev.h"
+#include "ycc_ring.h"
+
+
+static irqreturn_t ycc_resp_isr(int irq, void *data)
+{
+        struct ycc_ring *ring = (struct ycc_ring *)data;
+
+        tasklet_hi_schedule(&ring->resp_handler);
+        return IRQ_HANDLED;
+}
 
 /*
  * TODO: will implement when ycc ring actually work.
@@ -38,11 +49,13 @@ static irqreturn_t ycc_g_err_isr(int irq, void *data)
         return IRQ_HANDLED;
 }
 
-/*
- * TODO: will implement when ycc ring actually work.
- */
 void ycc_resp_process(uintptr_t ring_addr)
 {
+        struct ycc_ring *ring = (struct ycc_ring *)ring_addr;
+
+        ycc_dequeue(ring);
+        if (ring->ydev->is_polling)
+                tasklet_hi_schedule(&ring->resp_handler);
 }
 
 int ycc_enable_msix(struct ycc_dev *ydev)
@@ -83,34 +96,89 @@ static void ycc_cleanup_global_err_workqueue(struct ycc_dev *ydev)
         destroy_workqueue(ydev->dev_err_q);
 }
 
-/*
- * TODO: Just request irq for global err. Irq for each ring
- * will be requested when ring actually work.
- */
 int ycc_alloc_irqs(struct ycc_dev *ydev)
 {
         struct pci_dev *rcec_pdev = ydev->assoc_dev->pdev;
         int num = ydev->is_vf ?
                 1 : YCC_RINGPAIR_NUM;
-        int ret;
+        int cpu, cpus = num_online_cpus();
+        int ret, i, j;
 
+        /* Vectors 0 - (YCC_RINGPAIR_NUM - 1) are ring irqs; the last one is the device error irq */
         sprintf(ydev->err_irq_name, "ycc_dev_%d_global_err", ydev->id);
         ret = request_irq(pci_irq_vector(rcec_pdev, num), ycc_g_err_isr, 0,
                           ydev->err_irq_name, ydev);
-        if (ret)
+        if (ret) {
                 pr_err("Failed to alloc global irq interrupt for dev: %d\n", ydev->id);
+                goto out;
+        }
+
+        if (ydev->is_polling)
+                goto out;
+
+        for (i = 0; i < num; i++) {
+                if (ydev->rings[i].type != KERN_RING)
+                        continue;
+
+                ydev->msi_name[i] = kzalloc(16, GFP_KERNEL);
+                if (!ydev->msi_name[i])
+                        goto free_irq;
+                snprintf(ydev->msi_name[i], 16, "ycc_ring_%d", i);
+                ret = request_irq(pci_irq_vector(rcec_pdev, i), ycc_resp_isr,
+                                  0, ydev->msi_name[i], &ydev->rings[i]);
+                if (ret) {
+                        kfree(ydev->msi_name[i]);
+                        goto free_irq;
+                }
+                if (!ydev->is_vf)
+                        cpu = (i % YCC_RINGPAIR_NUM) % cpus;
+                else
+                        cpu = smp_processor_id() % cpus;
+
+                ret = irq_set_affinity_hint(pci_irq_vector(rcec_pdev, i),
+                                            get_cpu_mask(cpu));
+                if (ret) {
+                        free_irq(pci_irq_vector(rcec_pdev, i), &ydev->rings[i]);
+                        kfree(ydev->msi_name[i]);
+                        goto free_irq;
+                }
+        }
+
+        return 0;
+
+free_irq:
+        for (j = 0; j < i; j++) {
+                if (ydev->rings[j].type != KERN_RING)
+                        continue;
+
+                free_irq(pci_irq_vector(rcec_pdev, j), &ydev->rings[j]);
+                kfree(ydev->msi_name[j]);
+        }
+        free_irq(pci_irq_vector(rcec_pdev, num), ydev);
+out:
         return ret;
 }
 
-/*
- * TODO: Same as the allocate action.
- */
 void ycc_free_irqs(struct ycc_dev *ydev)
 {
         struct pci_dev *rcec_pdev = ydev->assoc_dev->pdev;
         int num = ydev->is_vf ? 1 : YCC_RINGPAIR_NUM;
+        int i;
 
+        /* Free device err irq */
         free_irq(pci_irq_vector(rcec_pdev, num), ydev);
+
+        if (ydev->is_polling)
+                return;
+
+        for (i = 0; i < num; i++) {
+                if (ydev->rings[i].type != KERN_RING)
+                        continue;
+
+                irq_set_affinity_hint(pci_irq_vector(rcec_pdev, i), NULL);
+                free_irq(pci_irq_vector(rcec_pdev, i), &ydev->rings[i]);
+                kfree(ydev->msi_name[i]);
+        }
 }
 
 int ycc_init_global_err(struct ycc_dev *ydev)

From patchwork Wed Sep 28 07:38:02 2022
X-Patchwork-Submitter: guanjun
X-Patchwork-Id: 610278
From: 'Guanjun'
To: herbert@gondor.apana.org.au, elliott@hpe.com
Cc: zelin.deng@linux.alibaba.com, guanjun@linux.alibaba.com,
    xuchun.shang@linux.alibaba.com,
    artie.ding@linux.alibaba.com,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/9] crypto/ycc: Add device error handling support for ycc hw errors
Date: Wed, 28 Sep 2022 15:38:02 +0800
Message-Id: <1664350687-47330-5-git-send-email-guanjun@linux.alibaba.com>
In-Reply-To: <1664350687-47330-1-git-send-email-guanjun@linux.alibaba.com>
References: <1664350687-47330-1-git-send-email-guanjun@linux.alibaba.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Zelin Deng

Due to ycc hardware limitations, the ycc device cannot be reset from
the REE to recover from a fatal error (the reset register is valid
only in the TEE, and a PCIe FLR resets only the queue pointers, not
the ycc hardware itself). Treat all hardware errors except queue
errors as fatal.

Signed-off-by: Zelin Deng
---
 drivers/crypto/ycc/ycc_isr.c  | 93 +++++++++++++++++++++++++++++++++++++++++--
 drivers/crypto/ycc/ycc_ring.c | 90 +++++++++++++++++++++++++++++++++++++++++
 drivers/crypto/ycc/ycc_ring.h |  3 ++
 3 files changed, 183 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/ycc/ycc_isr.c b/drivers/crypto/ycc/ycc_isr.c
index a86c8d7..28f8918 100644
--- a/drivers/crypto/ycc/ycc_isr.c
+++ b/drivers/crypto/ycc/ycc_isr.c
@@ -15,6 +15,8 @@
 #include "ycc_dev.h"
 #include "ycc_ring.h"
 
+extern void ycc_clear_cmd_ring(struct ycc_ring *ring);
+extern void ycc_clear_resp_ring(struct ycc_ring *ring);
 
 static irqreturn_t ycc_resp_isr(int irq, void *data)
 {
@@ -24,11 +26,93 @@ static irqreturn_t ycc_resp_isr(int irq, void *data)
         return IRQ_HANDLED;
 }
 
-/*
- * TODO: will implement when ycc ring actually work.
- */
+static void ycc_fatal_error(struct ycc_dev *ydev)
+{
+        struct ycc_ring *ring;
+        int i;
+
+        for (i = 0; i < YCC_RINGPAIR_NUM; i++) {
+                ring = ydev->rings + i;
+
+                if (ring->type != KERN_RING)
+                        continue;
+
+                spin_lock_bh(&ring->lock);
+                ycc_clear_cmd_ring(ring);
+                spin_unlock_bh(&ring->lock);
+
+                ycc_clear_resp_ring(ring);
+        }
+}
+
 static void ycc_process_global_err(struct work_struct *work)
 {
+        struct ycc_dev *ydev = container_of(work, struct ycc_dev, work);
+        struct ycc_bar *cfg_bar = &ydev->ycc_bars[YCC_SEC_CFG_BAR];
+        struct ycc_ring *ring;
+        u32 hclk_err, xclk_err;
+        u32 xclk_ecc_uncor_err_0, xclk_ecc_uncor_err_1;
+        u32 hclk_ecc_uncor_err;
+        int i;
+
+        if (pci_wait_for_pending_transaction(ydev->pdev))
+                pr_warn("Failed to wait for pending transaction\n");
+
+        hclk_err = YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_HCLK_INT_STATUS);
+        xclk_err = YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_XCLK_INT_STATUS);
+        xclk_ecc_uncor_err_0 = YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_XCLK_MEM_ECC_UNCOR_0);
+        xclk_ecc_uncor_err_1 = YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_XCLK_MEM_ECC_UNCOR_1);
+        hclk_ecc_uncor_err = YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_HCLK_MEM_ECC_UNCOR);
+
+        if ((hclk_err & ~(YCC_HCLK_TRNG_ERR)) || xclk_err || hclk_ecc_uncor_err) {
+                pr_err("Got uncorrected error, must be reset\n");
+                /*
+                 * Fatal error, as ycc cannot be reset in REE, clear ring data.
+                 */
+                return ycc_fatal_error(ydev);
+        }
+
+        if (xclk_ecc_uncor_err_0 || xclk_ecc_uncor_err_1) {
+                pr_err("Got algorithm ECC error: %x, %x\n",
+                       xclk_ecc_uncor_err_0, xclk_ecc_uncor_err_1);
+                return ycc_fatal_error(ydev);
+        }
+
+        /* This has to be a queue error. Handle the command rings. */
+        for (i = 0; i < YCC_RINGPAIR_NUM; i++) {
+                ring = ydev->rings + i;
+
+                if (ring->type != KERN_RING)
+                        continue;
+
+                ring->status = YCC_CSR_RD(ring->csr_vaddr, REG_RING_STATUS);
+                if (ring->status) {
+                        pr_err("YCC: Dev: %d, Ring: %d got ring err: %x\n",
+                               ydev->id, ring->ring_id, ring->status);
+                        spin_lock_bh(&ring->lock);
+                        ycc_clear_cmd_ring(ring);
+                        spin_unlock_bh(&ring->lock);
+                }
+        }
+
+        /*
+         * Give HW a chance to process all pending_cmds
+         * through recovering transactions.
+         */
+        pci_set_master(ydev->pdev);
+
+        for (i = 0; i < YCC_RINGPAIR_NUM; i++) {
+                ring = ydev->rings + i;
+
+                if (ring->type != KERN_RING || !ring->status)
+                        continue;
+
+                ycc_clear_resp_ring(ring);
+        }
+
+        ycc_g_err_unmask(cfg_bar->vaddr);
+        clear_bit(YDEV_STATUS_ERR, &ydev->status);
+        set_bit(YDEV_STATUS_READY, &ydev->status);
 }
 
 static irqreturn_t ycc_g_err_isr(int irq, void *data)
@@ -45,6 +129,9 @@ static irqreturn_t ycc_g_err_isr(int irq, void *data)
 
         clear_bit(YDEV_STATUS_READY, &ydev->status);
 
+        /* Disable YCC mastering, no new transactions */
+        pci_clear_master(ydev->pdev);
+
         schedule_work(&ydev->work);
         return IRQ_HANDLED;
 }
diff --git a/drivers/crypto/ycc/ycc_ring.c b/drivers/crypto/ycc/ycc_ring.c
index d808054..f6f6e40 100644
--- a/drivers/crypto/ycc/ycc_ring.c
+++ b/drivers/crypto/ycc/ycc_ring.c
@@ -483,6 +483,24 @@ int ycc_enqueue(struct ycc_ring *ring, void *cmd)
         return ret;
 }
 
+static void ycc_cancel_cmd(struct ycc_ring *ring, struct ycc_cmd_desc *desc)
+{
+        struct ycc_flags *aflag;
+
+        dma_rmb();
+
+        aflag = (struct ycc_flags *)desc->private_ptr;
+        if (!aflag || (u64)aflag == CMD_INVALID_CONTENT_U64) {
+                pr_debug("YCC: Invalid aflag\n");
+                return;
+        }
+
+        aflag->ycc_done_callback(aflag->ptr, CMD_CANCELLED);
+
+        memset(desc, CMD_INVALID_CONTENT_U8, sizeof(*desc));
+        kfree(aflag);
+}
+
 static inline void ycc_check_cmd_state(u16 state)
 {
         switch (state) {
@@ -560,3 +578,75 @@ void ycc_dequeue(struct ycc_ring *ring)
         if (cnt)
                 YCC_CSR_WR(ring->csr_vaddr, REG_RING_RSP_RD_PTR, ring->resp_rd_ptr);
 }
+
+/*
+ * Clear unprocessed cmds in the command queue and roll back cmd_wr_ptr.
+ *
+ * Note: only invoke this after an error occurs inside YCC, while the
+ * YCC status is not ready.
+ */
+void ycc_clear_cmd_ring(struct ycc_ring *ring)
+{
+        struct ycc_cmd_desc *desc = NULL;
+
+        ring->cmd_rd_ptr = YCC_CSR_RD(ring->csr_vaddr, REG_RING_CMD_RD_PTR);
+        ring->cmd_wr_ptr = YCC_CSR_RD(ring->csr_vaddr, REG_RING_CMD_WR_PTR);
+
+        while (ring->cmd_rd_ptr != ring->cmd_wr_ptr) {
+                desc = (struct ycc_cmd_desc *)ring->cmd_base_vaddr +
+                        ring->cmd_rd_ptr;
+                ycc_cancel_cmd(ring, desc);
+
+                if (--ring->cmd_wr_ptr == 0)
+                        ring->cmd_wr_ptr = ring->max_desc;
+        }
+
+        YCC_CSR_WR(ring->csr_vaddr, REG_RING_CMD_WR_PTR, ring->cmd_wr_ptr);
+}
+
+/*
+ * Clear response queue
+ *
+ * Note: only invoke this after an error occurs inside YCC, while the
+ * YCC status is not ready.
+ */
+void ycc_clear_resp_ring(struct ycc_ring *ring)
+{
+        struct ycc_resp_desc *resp;
+        int retry;
+        u32 pending_cmd;
+
+        /*
+         * Check if the ring has been stopped. *stopped* means no new
+         * transactions will arrive, so there is no need to wait for
+         * pending_cmds to be processed in that case.
+         */
+        retry = ycc_ring_stopped(ring) ? 0 : MAX_ERROR_RETRY;
+        pending_cmd = YCC_CSR_RD(ring->csr_vaddr, REG_RING_PENDING_CMD);
+
+        ring->resp_wr_ptr = YCC_CSR_RD(ring->csr_vaddr, REG_RING_RSP_WR_PTR);
+        while (!ycc_ring_empty(ring) || (retry && pending_cmd)) {
+                if (!ycc_ring_empty(ring)) {
+                        resp = (struct ycc_resp_desc *)ring->resp_base_vaddr +
+                                ring->resp_rd_ptr;
+                        resp->state = CMD_CANCELLED;
+                        ycc_handle_resp(ring, resp);
+
+                        if (++ring->resp_rd_ptr == ring->max_desc)
+                                ring->resp_rd_ptr = 0;
+
+                        YCC_CSR_WR(ring->csr_vaddr, REG_RING_RSP_RD_PTR, ring->resp_rd_ptr);
+                } else {
+                        udelay(MAX_SLEEP_US_PER_CHECK);
+                        retry--;
+                }
+
+                pending_cmd = YCC_CSR_RD(ring->csr_vaddr, REG_RING_PENDING_CMD);
+                ring->resp_wr_ptr = YCC_CSR_RD(ring->csr_vaddr, REG_RING_RSP_WR_PTR);
+        }
+
+        if (!retry && pending_cmd)
+                ring->type = INVAL_RING;
+
+        ring->status = 0;
+}
diff --git a/drivers/crypto/ycc/ycc_ring.h b/drivers/crypto/ycc/ycc_ring.h
index eb3e6f9..c3e7cbf 100644
--- a/drivers/crypto/ycc/ycc_ring.h
+++ b/drivers/crypto/ycc/ycc_ring.h
@@ -20,6 +20,9 @@
 #define CMD_INVALID_CONTENT_U8  0x7f
 #define CMD_INVALID_CONTENT_U64 0x7f7f7f7f7f7f7f7fULL
 
+#define MAX_SLEEP_US_PER_CHECK  100   /* every 100us to check register */
+#define MAX_ERROR_RETRY         10000 /* 1s in total */
+
 enum ring_type {
         FREE_RING,
         USER_RING,

From patchwork Wed Sep 28 07:38:03 2022
X-Patchwork-Submitter: guanjun
X-Patchwork-Id: 610277
From: 'Guanjun'
To: herbert@gondor.apana.org.au, elliott@hpe.com
Cc: zelin.deng@linux.alibaba.com, guanjun@linux.alibaba.com,
    xuchun.shang@linux.alibaba.com, artie.ding@linux.alibaba.com,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/9] crypto/ycc: Add skcipher algorithm support
Date: Wed, 28 Sep 2022 15:38:03 +0800
Message-Id: <1664350687-47330-6-git-send-email-guanjun@linux.alibaba.com>
In-Reply-To: <1664350687-47330-1-git-send-email-guanjun@linux.alibaba.com>
References: <1664350687-47330-1-git-send-email-guanjun@linux.alibaba.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Guanjun

Support skcipher algorithm.
Signed-off-by: Guanjun
---
 drivers/crypto/ycc/Kconfig    |   9 +
 drivers/crypto/ycc/Makefile   |   2 +-
 drivers/crypto/ycc/ycc_algs.h | 114 ++++++
 drivers/crypto/ycc/ycc_dev.h  |   3 +
 drivers/crypto/ycc/ycc_drv.c  |  52 +++
 drivers/crypto/ycc/ycc_ring.h |  17 +-
 drivers/crypto/ycc/ycc_ske.c  | 925 ++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 1117 insertions(+), 5 deletions(-)
 create mode 100644 drivers/crypto/ycc/ycc_algs.h
 create mode 100644 drivers/crypto/ycc/ycc_ske.c

diff --git a/drivers/crypto/ycc/Kconfig b/drivers/crypto/ycc/Kconfig
index 6e88ecb..8dae75e 100644
--- a/drivers/crypto/ycc/Kconfig
+++ b/drivers/crypto/ycc/Kconfig
@@ -2,6 +2,15 @@ config CRYPTO_DEV_YCC
         tristate "Support for Alibaba YCC cryptographic accelerator"
         depends on CRYPTO && CRYPTO_HW && PCI
         default n
+        select CRYPTO_SKCIPHER
+        select CRYPTO_LIB_DES
+        select CRYPTO_SM3_GENERIC
+        select CRYPTO_AES
+        select CRYPTO_CBC
+        select CRYPTO_ECB
+        select CRYPTO_CTR
+        select CRYPTO_XTS
+        select CRYPTO_SM4
         help
           Enables the driver for the on-chip cryptographic accelerator of
           Alibaba Yitian SoCs which is based on ARMv9 architecture.
diff --git a/drivers/crypto/ycc/Makefile b/drivers/crypto/ycc/Makefile
index 61d17bd..8a23a72 100644
--- a/drivers/crypto/ycc/Makefile
+++ b/drivers/crypto/ycc/Makefile
@@ -1,3 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_CRYPTO_DEV_YCC) += ycc.o
-ycc-objs := ycc_drv.o ycc_isr.o ycc_ring.o
+ycc-objs := ycc_drv.o ycc_isr.o ycc_ring.o ycc_ske.o
diff --git a/drivers/crypto/ycc/ycc_algs.h b/drivers/crypto/ycc/ycc_algs.h
new file mode 100644
index 00000000..6c7b0dc
--- /dev/null
+++ b/drivers/crypto/ycc/ycc_algs.h
@@ -0,0 +1,114 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef __YCC_ALG_H
+#define __YCC_ALG_H
+
+#include
+
+#include "ycc_ring.h"
+#include "ycc_dev.h"
+
+enum ycc_gcm_mode {
+        YCC_AES_128_GCM = 0,
+        YCC_AES_192_GCM,
+        YCC_AES_256_GCM,
+        YCC_SM4_GCM,
+};
+
+enum ycc_ccm_mode {
+        YCC_AES_128_CCM = 0,
+        YCC_AES_192_CCM,
+        YCC_AES_256_CCM,
+        YCC_SM4_CCM,
+};
+
+enum ycc_ske_alg_mode {
+        YCC_DES_ECB = 26,
+        YCC_DES_CBC,
+        YCC_DES_CFB,
+        YCC_DES_OFB,
+        YCC_DES_CTR, /* 30 */
+
+        YCC_TDES_128_ECB = 31,
+        YCC_TDES_128_CBC,
+        YCC_TDES_128_CFB,
+        YCC_TDES_128_OFB,
+        YCC_TDES_128_CTR,
+        YCC_TDES_192_ECB,
+        YCC_TDES_192_CBC,
+        YCC_TDES_192_CFB,
+        YCC_TDES_192_OFB,
+        YCC_TDES_192_CTR, /* 40 */
+
+        YCC_AES_128_ECB = 41,
+        YCC_AES_128_CBC,
+        YCC_AES_128_CFB,
+        YCC_AES_128_OFB,
+        YCC_AES_128_CTR,
+        YCC_AES_128_XTS, /* 46 */
+
+        YCC_AES_192_ECB = 48,
+        YCC_AES_192_CBC,
+        YCC_AES_192_CFB,
+        YCC_AES_192_OFB,
+        YCC_AES_192_CTR, /* 52 */
+
+        YCC_AES_256_ECB = 55,
+        YCC_AES_256_CBC,
+        YCC_AES_256_CFB,
+        YCC_AES_256_OFB,
+        YCC_AES_256_CTR,
+        YCC_AES_256_XTS, /* 60 */
+
+        YCC_SM4_ECB = 62,
+        YCC_SM4_CBC,
+        YCC_SM4_CFB,
+        YCC_SM4_OFB,
+        YCC_SM4_CTR,
+        YCC_SM4_XTS, /* 67 */
+};
+
+enum ycc_cmd_id {
+        YCC_CMD_SKE_ENC = 0x23,
+        YCC_CMD_SKE_DEC,
+};
+
+struct ycc_crypto_ctx {
+        struct ycc_ring *ring;
+        void *soft_tfm;
+
+        u32 keysize;
+        u32 key_dma_size; /* dma memory size for key/key+iv */
+
+        u8 mode;
+        u8 *cipher_key;
+        u8 reserved[4];
+};
+
+struct ycc_crypto_req {
+        int mapped_src_nents;
+        int mapped_dst_nents;
+
+        void *key_vaddr;
+        dma_addr_t key_paddr;
+
+        struct ycc_cmd_desc desc;
+        struct skcipher_request *ske_req;
+        struct skcipher_request ske_subreq;
+
+        void *src_vaddr;
+        dma_addr_t src_paddr;
+        void *dst_vaddr;
+        dma_addr_t dst_paddr;
+
+        int in_len;
+        int out_len;
+        int aad_offset;
+        struct ycc_crypto_ctx *ctx;
+        u8 last_block[16]; /* used to store iv out when decrypt */
+};
+
+#define YCC_DEV(ctx) (&(ctx)->ring->ydev->pdev->dev)
+
+int ycc_sym_register(void);
+void ycc_sym_unregister(void);
+#endif
diff --git a/drivers/crypto/ycc/ycc_dev.h b/drivers/crypto/ycc/ycc_dev.h
index dda82b0..08f9a05 100644
--- a/drivers/crypto/ycc/ycc_dev.h
+++ b/drivers/crypto/ycc/ycc_dev.h
@@ -151,4 +151,7 @@ static inline void ycc_g_err_unmask(void *vaddr)
         YCC_CSR_WR(vaddr, REG_YCC_DEV_INT_MASK, 0);
 }
 
+int ycc_algorithm_register(void);
+void ycc_algorithm_unregister(void);
+
 #endif
diff --git a/drivers/crypto/ycc/ycc_drv.c b/drivers/crypto/ycc/ycc_drv.c
index 8d0fcfd..79c1e18 100644
--- a/drivers/crypto/ycc/ycc_drv.c
+++ b/drivers/crypto/ycc/ycc_drv.c
@@ -25,6 +25,7 @@
 #include "ycc_isr.h"
 #include "ycc_ring.h"
+#include "ycc_algs.h"
 
 static const char ycc_name[] = "ycc";
@@ -35,6 +36,8 @@
 module_param(is_polling, bool, 0644);
 module_param(user_rings, int, 0644);
 
+static atomic_t ycc_algs_refcnt;
+
 LIST_HEAD(ycc_table);
 DEFINE_MUTEX(ycc_mutex);
@@ -75,6 +78,40 @@ static int ycc_dev_debugfs_statistics_open(struct inode *inode, struct file *fil
         .owner = THIS_MODULE,
 };
 
+int ycc_algorithm_register(void)
+{
+        int ret = 0;
+
+        /* No kernel rings */
+        if (user_rings == YCC_RINGPAIR_NUM)
+                return ret;
+
+        /* Only register once */
+        if (atomic_inc_return(&ycc_algs_refcnt) > 1)
+                return ret;
+
+        ret = ycc_sym_register();
+        if (ret)
+                goto err;
+
+        return 0;
+
+err:
+        atomic_dec(&ycc_algs_refcnt);
+        return ret;
+}
+
+void ycc_algorithm_unregister(void)
+{
+        if (user_rings == YCC_RINGPAIR_NUM)
+                return;
+
+        if (atomic_dec_return(&ycc_algs_refcnt))
+                return;
+
+        ycc_sym_unregister();
+}
+
 static int ycc_device_flr(struct pci_dev *pdev, struct pci_dev *rcec_pdev)
 {
         int ret;
@@ -321,6 +358,7 @@ static void ycc_rcec_unbind(struct ycc_dev *ydev)
         ycc_resource_free(rciep);
         rciep->assoc_dev = NULL;
         rcec->assoc_dev = NULL;
+        ycc_algorithm_unregister();
 }
 
 static int ycc_dev_add(struct ycc_dev *ydev)
@@ -421,8 +459,20 @@ static int ycc_drv_probe(struct pci_dev *pdev, const struct pci_device_id *id)
         if (ret)
                 goto remove_debugfs;
 
+        if (test_bit(YDEV_STATUS_READY, &ydev->status)) {
+                ret = ycc_algorithm_register();
+                if (ret) {
+                        pr_err("Failed to register algorithm\n");
+                        clear_bit(YDEV_STATUS_READY, &ydev->status);
+                        clear_bit(YDEV_STATUS_READY, &ydev->assoc_dev->status);
+                        goto dev_del;
+                }
+        }
+
         return ret;
 
+dev_del:
+        ycc_dev_del(ydev);
 remove_debugfs:
         if (ydev->type == YCC_RCIEP) {
                 debugfs_remove_recursive(ydev->debug_dir);
@@ -478,6 +528,8 @@ static int __init ycc_drv_init(void)
 {
         int ret;
 
+        atomic_set(&ycc_algs_refcnt, 0);
+
         ret = pci_register_driver(&ycc_driver);
         if (ret)
                 goto out;
diff --git a/drivers/crypto/ycc/ycc_ring.h b/drivers/crypto/ycc/ycc_ring.h
index c3e7cbf..78ba959 100644
--- a/drivers/crypto/ycc/ycc_ring.h
+++ b/drivers/crypto/ycc/ycc_ring.h
@@ -75,11 +75,20 @@ struct ycc_resp_desc {
         u8 reserved[6];
 };
 
+struct ycc_skcipher_cmd {
+        u8 cmd_id;
+        u8 mode;
+        u64 sptr:48;
+        u64 dptr:48;
+        u32 dlen;
+        u16 key_idx;    /* key used to decrypt kek */
+        u8 reserved[2];
+        u64 keyptr:48;
+        u8 padding;
+} __packed;
+
 union ycc_real_cmd {
-        /*
-         * TODO: Real command will implement when
-         * corresponding algorithm is ready
-         */
+        struct ycc_skcipher_cmd ske_cmd;
         u8 padding[32];
 };
diff --git a/drivers/crypto/ycc/ycc_ske.c b/drivers/crypto/ycc/ycc_ske.c
new file mode 100644
index 00000000..afe1745
--- /dev/null
+++ b/drivers/crypto/ycc/ycc_ske.c
@@ -0,0 +1,925 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define pr_fmt(fmt) "YCC: Crypto: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "ycc_algs.h"
+
+static int ycc_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
+                               unsigned int key_size, int mode,
+                               unsigned int key_dma_size)
+{
+        struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+        if (ctx->cipher_key) {
+                memset(ctx->cipher_key, 0, ctx->keysize);
+        } else {
+                ctx->cipher_key = kzalloc(key_size, GFP_KERNEL);
+                if (!ctx->cipher_key)
+                        return -ENOMEM;
+        }
+        memcpy(ctx->cipher_key, key, key_size);
+        ctx->mode = mode;
+        ctx->keysize = key_size;
+        ctx->key_dma_size = key_dma_size;
+
+        if (ctx->soft_tfm && crypto_skcipher_setkey(ctx->soft_tfm, key, key_size))
+                pr_warn("Failed to setkey for soft skcipher tfm\n");
+
+        return 0;
+}
+
+#define DEFINE_YCC_SKE_AES_SETKEY(name, mode, size)                     \
+int ycc_skcipher_aes_##name##_setkey(struct crypto_skcipher *tfm,       \
+                                     const u8 *key,                     \
+                                     unsigned int key_size)             \
+{                                                                       \
+        int alg_mode;                                                   \
+        switch (key_size) {                                             \
+        case AES_KEYSIZE_128:                                           \
+                alg_mode = YCC_AES_128_##mode;                          \
+                break;                                                  \
+        case AES_KEYSIZE_192:                                           \
+                alg_mode = YCC_AES_192_##mode;                          \
+                break;                                                  \
+        case AES_KEYSIZE_256:                                           \
+                alg_mode = YCC_AES_256_##mode;                          \
+                break;                                                  \
+        default:                                                        \
+                return -EINVAL;                                         \
+        }                                                               \
+        return ycc_skcipher_setkey(tfm, key, key_size, alg_mode, size); \
+}
+
+#define DEFINE_YCC_SKE_SM4_SETKEY(name, mode, size)                     \
+int ycc_skcipher_sm4_##name##_setkey(struct crypto_skcipher *tfm,       \
+                                     const u8 *key,                     \
+                                     unsigned int key_size)             \
+{                                                                       \
+        int alg_mode = YCC_SM4_##mode;                                  \
+        if (key_size != SM4_KEY_SIZE)                                   \
+                return -EINVAL;                                         \
+        return ycc_skcipher_setkey(tfm, key, key_size, alg_mode, size); \
+}
+
+#define DEFINE_YCC_SKE_DES_SETKEY(name, mode, size)                     \
+int ycc_skcipher_des_##name##_setkey(struct crypto_skcipher *tfm,       \
+                                     const u8 *key,                     \
+                                     unsigned int key_size)             \
+{                                                                       \
+        int alg_mode = YCC_DES_##mode;                                  \
+        int ret;                                                        \
+        if (key_size != DES_KEY_SIZE)                                   \
+                return -EINVAL;                                         \
+        ret = verify_skcipher_des_key(tfm, key);                        \
+        if (ret)                                                        \
+                return ret;                                             \
+        return ycc_skcipher_setkey(tfm, key, key_size, alg_mode, size); \
+}
+
+#define DEFINE_YCC_SKE_3DES_SETKEY(name, mode, size)                    \
+int ycc_skcipher_3des_##name##_setkey(struct crypto_skcipher *tfm,      \
+                                      const u8 *key,                    \
+                                      unsigned int key_size)            \
+{                                                                       \
+        int alg_mode = YCC_TDES_192_##mode;                             \
+        int ret;                                                        \
+        if (key_size != DES3_EDE_KEY_SIZE)                              \
+                return -EINVAL;                                         \
+        ret = verify_skcipher_des3_key(tfm, key);                       \
+        if (ret)                                                        \
+                return ret;                                             \
+        return ycc_skcipher_setkey(tfm, key, key_size, alg_mode, size); \
+}
+
+/*
+ * ECB: Only has 1 key, without IV, at least 32 bytes.
+ * Others except XTS: |key|iv|, at least 48 bytes.
+ */
+DEFINE_YCC_SKE_AES_SETKEY(ecb, ECB, 32);
+DEFINE_YCC_SKE_AES_SETKEY(cbc, CBC, 48);
+DEFINE_YCC_SKE_AES_SETKEY(ctr, CTR, 48);
+DEFINE_YCC_SKE_AES_SETKEY(cfb, CFB, 48);
+DEFINE_YCC_SKE_AES_SETKEY(ofb, OFB, 48);
+
+DEFINE_YCC_SKE_SM4_SETKEY(ecb, ECB, 32);
+DEFINE_YCC_SKE_SM4_SETKEY(cbc, CBC, 48);
+DEFINE_YCC_SKE_SM4_SETKEY(ctr, CTR, 48);
+DEFINE_YCC_SKE_SM4_SETKEY(cfb, CFB, 48);
+DEFINE_YCC_SKE_SM4_SETKEY(ofb, OFB, 48);
+
+DEFINE_YCC_SKE_DES_SETKEY(ecb, ECB, 32);
+DEFINE_YCC_SKE_DES_SETKEY(cbc, CBC, 48);
+DEFINE_YCC_SKE_DES_SETKEY(ctr, CTR, 48);
+DEFINE_YCC_SKE_DES_SETKEY(cfb, CFB, 48);
+DEFINE_YCC_SKE_DES_SETKEY(ofb, OFB, 48);
+
+DEFINE_YCC_SKE_3DES_SETKEY(ecb, ECB, 32);
+DEFINE_YCC_SKE_3DES_SETKEY(cbc, CBC, 48);
+DEFINE_YCC_SKE_3DES_SETKEY(ctr, CTR, 48);
+DEFINE_YCC_SKE_3DES_SETKEY(cfb, CFB, 48);
+DEFINE_YCC_SKE_3DES_SETKEY(ofb, OFB, 48);
+
+int ycc_skcipher_aes_xts_setkey(struct crypto_skcipher *tfm,
+                                const u8 *key,
+                                unsigned int key_size)
+{
+        int alg_mode;
+        int ret;
+
+        ret = xts_verify_key(tfm, key, key_size);
+        if (ret)
+                return ret;
+
+        switch (key_size) {
+        case AES_KEYSIZE_128 * 2:
+                alg_mode = YCC_AES_128_XTS;
+                break;
+        case AES_KEYSIZE_256 * 2:
+                alg_mode = YCC_AES_256_XTS;
+                break;
+        default:
+                return -EINVAL;
+        }
+
+        /* XTS: |key1|key2|iv|, at least 32 + 32 + 16 bytes */
+        return ycc_skcipher_setkey(tfm, key, key_size, alg_mode, 80);
+}
+
+int ycc_skcipher_sm4_xts_setkey(struct
crypto_skcipher *tfm, + const u8 *key, + unsigned int key_size) +{ + int alg_mode; + int ret; + + ret = xts_verify_key(tfm, key, key_size); + if (ret) + return ret; + + if (key_size != SM4_KEY_SIZE * 2) + return -EINVAL; + + alg_mode = YCC_SM4_XTS; + return ycc_skcipher_setkey(tfm, key, key_size, alg_mode, 80); +} + +static int ycc_skcipher_fill_key(struct ycc_crypto_req *req) +{ + struct ycc_crypto_ctx *ctx = req->ctx; + struct device *dev = YCC_DEV(ctx); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req->ske_req); + u32 ivsize = crypto_skcipher_ivsize(tfm); + + if (!req->key_vaddr) { + req->key_vaddr = dma_alloc_coherent(dev, ALIGN(ctx->key_dma_size, 64), + &req->key_paddr, GFP_ATOMIC); + if (!req->key_vaddr) + return -ENOMEM; + } + + memset(req->key_vaddr, 0, ALIGN(ctx->key_dma_size, 64)); + /* XTS Mode has 2 keys & 1 iv */ + if (ctx->key_dma_size == 80) { + memcpy(req->key_vaddr + (32 - ctx->keysize / 2), + ctx->cipher_key, ctx->keysize / 2); + memcpy(req->key_vaddr + (64 - ctx->keysize / 2), + ctx->cipher_key + ctx->keysize / 2, ctx->keysize / 2); + } else { + memcpy(req->key_vaddr + (32 - ctx->keysize), ctx->cipher_key, + ctx->keysize); + } + + if (ivsize) { + if (ctx->mode == YCC_DES_ECB || + ctx->mode == YCC_TDES_128_ECB || + ctx->mode == YCC_TDES_192_ECB || + ctx->mode == YCC_AES_128_ECB || + ctx->mode == YCC_AES_192_ECB || + ctx->mode == YCC_AES_256_ECB || + ctx->mode == YCC_SM4_ECB) { + pr_err("Illegal ivsize for ECB mode, should be zero"); + goto clear_key; + } + + /* DES or 3DES */ + if (ctx->mode >= YCC_DES_ECB && ctx->mode <= YCC_TDES_192_CTR) { + if (ivsize > 8) + goto clear_key; + memcpy(req->key_vaddr + ctx->key_dma_size - 8, + req->ske_req->iv, ivsize); + } else { + memcpy(req->key_vaddr + ctx->key_dma_size - 16, + req->ske_req->iv, ivsize); + } + } + + return 0; +clear_key: + memset(req->key_vaddr, 0, ALIGN(ctx->key_dma_size, 64)); + dma_free_coherent(dev, ctx->key_dma_size, req->key_vaddr, req->key_paddr); + req->key_vaddr = NULL; + 
return -EINVAL; +} + +static int ycc_skcipher_sg_map(struct ycc_crypto_req *req) +{ + struct device *dev = YCC_DEV(req->ctx); + struct skcipher_request *ske_req = req->ske_req; + int src_nents; + + src_nents = sg_nents_for_len(ske_req->src, ske_req->cryptlen); + if (unlikely(src_nents <= 0)) { + pr_err("Failed to get src sg len\n"); + return -EINVAL; + } + + req->src_vaddr = dma_alloc_coherent(dev, ALIGN(req->in_len, 64), + &req->src_paddr, GFP_ATOMIC); + if (!req->src_vaddr) + return -ENOMEM; + + req->dst_vaddr = dma_alloc_coherent(dev, ALIGN(req->in_len, 64), + &req->dst_paddr, GFP_ATOMIC); + if (!req->dst_vaddr) { + dma_free_coherent(dev, ALIGN(req->in_len, 64), + req->src_vaddr, req->src_paddr); + return -ENOMEM; + } + + sg_copy_to_buffer(ske_req->src, src_nents, req->src_vaddr, ske_req->cryptlen); + return 0; +} + +static inline void ycc_skcipher_sg_unmap(struct ycc_crypto_req *req) +{ + struct device *dev = YCC_DEV(req->ctx); + + dma_free_coherent(dev, ALIGN(req->in_len, 64), req->src_vaddr, req->src_paddr); + dma_free_coherent(dev, ALIGN(req->in_len, 64), req->dst_vaddr, req->dst_paddr); +} + +/* + * For CBC & CTR + */ +static void ycc_skcipher_iv_out(struct ycc_crypto_req *req, void *dst) +{ + struct skcipher_request *ske_req = req->ske_req; + struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(ske_req); + u8 bs = crypto_skcipher_blocksize(stfm); + u8 mode = req->ctx->mode; + u8 cmd = req->desc.cmd.ske_cmd.cmd_id; + u32 nb = (ske_req->cryptlen + bs - 1) / bs; + + switch (mode) { + case YCC_DES_CBC: + case YCC_TDES_128_CBC: + case YCC_TDES_192_CBC: + case YCC_AES_128_CBC: + case YCC_AES_192_CBC: + case YCC_AES_256_CBC: + case YCC_SM4_CBC: + if (cmd == YCC_CMD_SKE_DEC) + memcpy(ske_req->iv, req->last_block, bs); + else + memcpy(ske_req->iv, + (u8 *)dst + ALIGN(ske_req->cryptlen, bs) - bs, + bs); + break; + case YCC_DES_CTR: + case YCC_TDES_128_CTR: + case YCC_TDES_192_CTR: + case YCC_AES_128_CTR: + case YCC_AES_192_CTR: + case YCC_AES_256_CTR: + case 
YCC_SM4_CTR: + while (nb--) + crypto_inc(ske_req->iv, bs); + break; + default: + return; + } +} + +static int ycc_skcipher_callback(void *ptr, u16 state) +{ + struct ycc_crypto_req *req = (struct ycc_crypto_req *)ptr; + struct skcipher_request *ske_req = req->ske_req; + struct ycc_crypto_ctx *ctx = req->ctx; + struct device *dev = YCC_DEV(ctx); + + sg_copy_from_buffer(ske_req->dst, + sg_nents_for_len(ske_req->dst, ske_req->cryptlen), + req->dst_vaddr, ske_req->cryptlen); + + if (state == CMD_SUCCESS) + ycc_skcipher_iv_out(req, req->dst_vaddr); + + ycc_skcipher_sg_unmap(req); + + if (req->key_vaddr) { + memset(req->key_vaddr, 0, ALIGN(ctx->key_dma_size, 64)); + dma_free_coherent(dev, ALIGN(ctx->key_dma_size, 64), + req->key_vaddr, req->key_paddr); + req->key_vaddr = NULL; + } + if (ske_req->base.complete) + ske_req->base.complete(&ske_req->base, + state == CMD_SUCCESS ? 0 : -EBADMSG); + + return 0; +} + +static inline bool ycc_skcipher_do_soft(struct ycc_dev *ydev) +{ + return !test_bit(YDEV_STATUS_READY, &ydev->status); +} + +static int ycc_skcipher_submit_desc(struct skcipher_request *ske_req, u8 cmd) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(ske_req); + struct ycc_crypto_req *req = skcipher_request_ctx(ske_req); + struct ycc_skcipher_cmd *ske_cmd = &req->desc.cmd.ske_cmd; + struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm); + struct ycc_flags *aflags; + u8 bs = crypto_skcipher_blocksize(tfm); + int ret; + + memset(req, 0, sizeof(*req)); + req->ctx = ctx; + req->ske_req = ske_req; + req->in_len = ALIGN(ske_req->cryptlen, bs); + + /* + * A request length of the form 64n + bs may cause the device to hang, + * so append one extra block here. This is a workaround for a hardware issue.
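Two bits of arithmetic from the code above, as a stand-alone sketch: crypto_inc() is a big-endian increment of the counter block, applied once per processed block when CTR mode reports the output IV, and the input length is block-aligned with one extra block appended when it would land on 64n + bs (the hardware workaround noted in the comment). The function names below are illustrative, not the driver's:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Big-endian increment of a counter block, like the kernel's crypto_inc(). */
static void ctr_inc(uint8_t *ctr, size_t len)
{
    for (size_t i = len; i-- > 0; ) {
        if (++ctr[i] != 0) /* stop once a byte does not wrap */
            break;
    }
}

/* Advance the IV by one increment per processed block (nb = ceil(len/bs)). */
static void ctr_iv_out(uint8_t *iv, size_t bs, size_t cryptlen)
{
    size_t nb = (cryptlen + bs - 1) / bs;
    while (nb--)
        ctr_inc(iv, bs);
}

/* Block-align the request length, then pad once more if it hits 64n + bs. */
static size_t ycc_padded_len(size_t cryptlen, size_t bs)
{
    size_t len = (cryptlen + bs - 1) / bs * bs; /* ALIGN(cryptlen, bs) */
    if (len % 64 == bs)  /* 64n + bs lengths can hang the device */
        len += bs;
    return len;
}
```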
+ */ + if (req->in_len % 64 == bs) + req->in_len += bs; + + ret = ycc_skcipher_fill_key(req); + if (ret) + return ret; + + ret = ycc_skcipher_sg_map(req); + if (ret) + goto free_key; + + ret = -ENOMEM; + aflags = kzalloc(sizeof(struct ycc_flags), GFP_ATOMIC); + if (!aflags) + goto sg_unmap; + + aflags->ptr = (void *)req; + aflags->ycc_done_callback = ycc_skcipher_callback; + + req->desc.private_ptr = (u64)aflags; + ske_cmd->cmd_id = cmd; + ske_cmd->mode = ctx->mode; + ske_cmd->sptr = req->src_paddr; + ske_cmd->dptr = req->dst_paddr; + ske_cmd->dlen = req->in_len; + ske_cmd->keyptr = req->key_paddr; + ske_cmd->padding = 0; + + /* LKCF will check iv output, for decryption, the iv is its last block */ + if (cmd == YCC_CMD_SKE_DEC) + memcpy(req->last_block, + req->src_vaddr + ALIGN(ske_req->cryptlen, bs) - bs, bs); + + ret = ycc_enqueue(ctx->ring, &req->desc); + if (!ret) + return -EINPROGRESS; + + pr_debug("Failed to submit desc to ring\n"); + kfree(aflags); + +sg_unmap: + ycc_skcipher_sg_unmap(req); +free_key: + memset(req->key_vaddr, 0, ALIGN(ctx->key_dma_size, 64)); + dma_free_coherent(YCC_DEV(ctx), + ALIGN(ctx->key_dma_size, 64), + req->key_vaddr, req->key_paddr); + req->key_vaddr = NULL; + return ret; +} + +static int ycc_skcipher_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_request *subreq = + &((struct ycc_crypto_req *)skcipher_request_ctx(req))->ske_subreq; + + if (ycc_skcipher_do_soft(ctx->ring->ydev)) { + skcipher_request_set_tfm(subreq, ctx->soft_tfm); + skcipher_request_set_callback(subreq, req->base.flags, + req->base.complete, req->base.data); + skcipher_request_set_crypt(subreq, req->src, req->dst, + req->cryptlen, req->iv); + return crypto_skcipher_encrypt(subreq); + } + + return ycc_skcipher_submit_desc(req, YCC_CMD_SKE_ENC); +} + +static int ycc_skcipher_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher 
*tfm = crypto_skcipher_reqtfm(req); + struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_request *subreq = + &((struct ycc_crypto_req *)skcipher_request_ctx(req))->ske_subreq; + + if (ycc_skcipher_do_soft(ctx->ring->ydev)) { + skcipher_request_set_tfm(subreq, ctx->soft_tfm); + skcipher_request_set_callback(subreq, req->base.flags, + req->base.complete, req->base.data); + skcipher_request_set_crypt(subreq, req->src, req->dst, + req->cryptlen, req->iv); + return crypto_skcipher_decrypt(subreq); + } + + return ycc_skcipher_submit_desc(req, YCC_CMD_SKE_DEC); +} + +static int ycc_skcipher_init(struct crypto_skcipher *tfm) +{ + struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm); + struct ycc_ring *ring; + + ctx->soft_tfm = crypto_alloc_skcipher(crypto_tfm_alg_name(crypto_skcipher_tfm(tfm)), 0, + CRYPTO_ALG_NEED_FALLBACK | CRYPTO_ALG_ASYNC); + if (IS_ERR(ctx->soft_tfm)) { + pr_warn("Failed to allocate soft tfm for %s, software fallback is limited\n", + crypto_tfm_alg_name(crypto_skcipher_tfm(tfm))); + ctx->soft_tfm = NULL; + crypto_skcipher_set_reqsize(tfm, sizeof(struct ycc_crypto_req)); + } else { + /* + * For the software fallback, also reserve room for the soft request's metadata.
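The encrypt and decrypt entry points share one dispatch shape: if the device is not in READY state, the request is re-targeted at the pre-allocated software tfm (preserving the operation's direction), otherwise a descriptor is submitted to the hardware ring. A minimal stand-alone sketch of that shape (stub functions and return codes, not the kernel crypto API):

```c
#include <assert.h>
#include <stdbool.h>

enum dir { ENC, DEC };

/* Illustrative stand-ins for the hardware and software paths. */
static int hw_submit(enum dir d)     { return d == ENC ? 100 : 101; }
static int soft_fallback(enum dir d) { return d == ENC ? 200 : 201; }

/* Dispatch: fall back to software whenever the device is not ready,
 * keeping the direction (a decrypt must stay a decrypt). */
static int do_cipher(bool dev_ready, enum dir d)
{
    if (!dev_ready)
        return soft_fallback(d);
    return hw_submit(d);
}
```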
+ */ + crypto_skcipher_set_reqsize(tfm, sizeof(struct ycc_crypto_req) + + crypto_skcipher_reqsize(ctx->soft_tfm)); + } + + ring = ycc_crypto_get_ring(); + if (!ring) + return -ENOMEM; + + ctx->ring = ring; + return 0; +} + +static void ycc_skcipher_exit(struct crypto_skcipher *tfm) +{ + struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm); + + if (ctx->ring) + ycc_crypto_free_ring(ctx->ring); + + kfree(ctx->cipher_key); + + if (ctx->soft_tfm) + crypto_free_skcipher((struct crypto_skcipher *)ctx->soft_tfm); +} + + +static struct skcipher_alg ycc_skciphers[] = { + { + .base = { + .cra_name = "cbc(aes)", + .cra_driver_name = "cbc-aes-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_aes_cbc_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "ecb(aes)", + .cra_driver_name = "ecb-aes-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_aes_ecb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = 0, + }, + { + .base = { + .cra_name = "ctr(aes)", + .cra_driver_name = "ctr-aes-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_aes_ctr_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt 
= ycc_skcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "cfb(aes)", + .cra_driver_name = "cfb-aes-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_aes_cfb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "ofb(aes)", + .cra_driver_name = "ofb-aes-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_aes_ofb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "xts(aes)", + .cra_driver_name = "xts-aes-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_aes_xts_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + .max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "cbc(sm4)", + .cra_driver_name = "cbc-sm4-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = SM4_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = 
ycc_skcipher_exit, + .setkey = ycc_skcipher_sm4_cbc_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = SM4_KEY_SIZE, + .max_keysize = SM4_KEY_SIZE, + .ivsize = SM4_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "ecb(sm4)", + .cra_driver_name = "ecb-sm4-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = SM4_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_sm4_ecb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = SM4_KEY_SIZE, + .max_keysize = SM4_KEY_SIZE, + .ivsize = 0, + }, + { + .base = { + .cra_name = "ctr(sm4)", + .cra_driver_name = "ctr-sm4-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = SM4_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_sm4_ctr_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = SM4_KEY_SIZE, + .max_keysize = SM4_KEY_SIZE, + .ivsize = SM4_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "cfb(sm4)", + .cra_driver_name = "cfb-sm4-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = SM4_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_sm4_cfb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = SM4_KEY_SIZE, + .max_keysize = SM4_KEY_SIZE, + .ivsize = SM4_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "ofb(sm4)", + .cra_driver_name = "ofb-sm4-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = SM4_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = 
THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_sm4_ofb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = SM4_KEY_SIZE, + .max_keysize = SM4_KEY_SIZE, + .ivsize = SM4_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "xts(sm4)", + .cra_driver_name = "xts-sm4-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = SM4_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_sm4_xts_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = SM4_KEY_SIZE * 2, + .max_keysize = SM4_KEY_SIZE * 2, + .ivsize = SM4_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "cbc(des)", + .cra_driver_name = "cbc-des-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_des_cbc_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "ecb(des)", + .cra_driver_name = "ecb-des-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_des_ecb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = 0, + }, + { + .base = { + .cra_name = "ctr(des)", + .cra_driver_name = "ctr-des-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES_BLOCK_SIZE, + 
.cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_des_ctr_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "cfb(des)", + .cra_driver_name = "cfb-des-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_des_cfb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "ofb(des)", + .cra_driver_name = "ofb-des-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_des_ofb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "cbc(des3_ede)", + .cra_driver_name = "cbc-des3_ede-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_3des_cbc_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "ecb(des3_ede)", + .cra_driver_name = 
"ecb-des3_ede-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_3des_ecb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = 0, + }, + { + .base = { + .cra_name = "ctr(des3_ede)", + .cra_driver_name = "ctr-des3_ede-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_3des_ctr_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "cfb(des3_ede)", + .cra_driver_name = "cfb-des3_ede-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_3des_cfb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + }, + { + .base = { + .cra_name = "ofb(des3_ede)", + .cra_driver_name = "ofb-des3_ede-ycc", + .cra_priority = 4001, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct ycc_crypto_ctx), + .cra_module = THIS_MODULE, + }, + .init = ycc_skcipher_init, + .exit = ycc_skcipher_exit, + .setkey = ycc_skcipher_3des_ofb_setkey, + .encrypt = ycc_skcipher_encrypt, + .decrypt = ycc_skcipher_decrypt, + 
.min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + }, +}; + +int ycc_sym_register(void) +{ + return crypto_register_skciphers(ycc_skciphers, ARRAY_SIZE(ycc_skciphers)); +} + +void ycc_sym_unregister(void) +{ + crypto_unregister_skciphers(ycc_skciphers, ARRAY_SIZE(ycc_skciphers)); +} From patchwork Wed Sep 28 07:38:06 2022 X-Patchwork-Submitter: guanjun X-Patchwork-Id: 610275 From: 'Guanjun' To: herbert@gondor.apana.org.au, elliott@hpe.com Cc: zelin.deng@linux.alibaba.com, guanjun@linux.alibaba.com, xuchun.shang@linux.alibaba.com, artie.ding@linux.alibaba.com, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2
8/9] crypto/ycc: Add sm2 algorithm support Date: Wed, 28 Sep 2022 15:38:06 +0800 Message-Id: <1664350687-47330-9-git-send-email-guanjun@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1664350687-47330-1-git-send-email-guanjun@linux.alibaba.com> References: <1664350687-47330-1-git-send-email-guanjun@linux.alibaba.com> Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Xuchun Shang Only verification through SM2 is supported at present. Signed-off-by: Xuchun Shang --- drivers/crypto/ycc/Makefile | 3 +- drivers/crypto/ycc/sm2signature_asn1.c | 38 +++++ drivers/crypto/ycc/sm2signature_asn1.h | 13 ++ drivers/crypto/ycc/ycc_algs.h | 2 + drivers/crypto/ycc/ycc_pke.c | 250 ++++++++++++++++++++++++++++++++- drivers/crypto/ycc/ycc_ring.h | 8 ++ 6 files changed, 312 insertions(+), 2 deletions(-) create mode 100644 drivers/crypto/ycc/sm2signature_asn1.c create mode 100644 drivers/crypto/ycc/sm2signature_asn1.h diff --git a/drivers/crypto/ycc/Makefile b/drivers/crypto/ycc/Makefile index 25f5930..3c86371 100644 --- a/drivers/crypto/ycc/Makefile +++ b/drivers/crypto/ycc/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_CRYPTO_DEV_YCC) += ycc.o -ycc-objs := ycc_drv.o ycc_isr.o ycc_ring.o ycc_ske.o ycc_aead.o ycc_pke.o +ycc-objs := ycc_drv.o ycc_isr.o ycc_ring.o ycc_ske.o ycc_aead.o ycc_pke.o \ + sm2signature_asn1.o diff --git a/drivers/crypto/ycc/sm2signature_asn1.c b/drivers/crypto/ycc/sm2signature_asn1.c new file mode 100644 index 00000000..1fd15c1 --- /dev/null +++ b/drivers/crypto/ycc/sm2signature_asn1.c @@ -0,0 +1,38 @@ +/* + * Automatically generated by asn1_compiler.
Do not edit + * + * ASN.1 parser for sm2signature + */ +#include +#include "sm2signature_asn1.h" + +enum sm2signature_actions { + ACT_sm2_get_signature_r = 0, + ACT_sm2_get_signature_s = 1, + NR__sm2signature_actions = 2 +}; + +static const asn1_action_t sm2signature_action_table[NR__sm2signature_actions] = { + [0] = sm2_get_signature_r, + [1] = sm2_get_signature_s, +}; + +static const unsigned char sm2signature_machine[] = { + // Sm2Signature + [0] = ASN1_OP_MATCH, + [1] = _tag(UNIV, CONS, SEQ), + [2] = ASN1_OP_MATCH_ACT, // sig_r + [3] = _tag(UNIV, PRIM, INT), + [4] = _action(ACT_sm2_get_signature_r), + [5] = ASN1_OP_MATCH_ACT, // sig_s + [6] = _tag(UNIV, PRIM, INT), + [7] = _action(ACT_sm2_get_signature_s), + [8] = ASN1_OP_END_SEQ, + [9] = ASN1_OP_COMPLETE, +}; + +const struct asn1_decoder sm2signature_decoder = { + .machine = sm2signature_machine, + .machlen = sizeof(sm2signature_machine), + .actions = sm2signature_action_table, +}; diff --git a/drivers/crypto/ycc/sm2signature_asn1.h b/drivers/crypto/ycc/sm2signature_asn1.h new file mode 100644 index 00000000..192c9e1 --- /dev/null +++ b/drivers/crypto/ycc/sm2signature_asn1.h @@ -0,0 +1,13 @@ +/* + * Automatically generated by asn1_compiler. 
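The byte-code above drives the kernel's asn1_ber_decoder() over a DER SEQUENCE { INTEGER r, INTEGER s }, firing an action as each INTEGER is matched. A hand-rolled equivalent for short-form (single-byte) lengths only, to show what the machine accepts (illustrative helpers, not the kernel decoder):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Parse one short-form DER INTEGER at p, copy its body into out.
 * Returns the body length, or -1 on malformed input. */
static int der_get_int(const uint8_t *p, size_t avail, uint8_t *out, size_t outlen)
{
    if (avail < 2 || p[0] != 0x02) /* INTEGER tag */
        return -1;
    size_t len = p[1];
    if (len == 0 || len > 0x7f || len > avail - 2 || len > outlen)
        return -1;
    memcpy(out, p + 2, len);
    return (int)len;
}

/* Parse DER: SEQUENCE { INTEGER r, INTEGER s }, the Sm2Signature grammar. */
static int parse_sm2_sig(const uint8_t *sig, size_t siglen,
                         uint8_t *r, int *rlen, uint8_t *s, int *slen)
{
    if (siglen < 2 || sig[0] != 0x30 || sig[1] != siglen - 2) /* SEQUENCE */
        return -1;
    *rlen = der_get_int(sig + 2, siglen - 2, r, 33);
    if (*rlen < 0)
        return -1;
    size_t off = 2 + 2 + (size_t)*rlen;
    *slen = der_get_int(sig + off, siglen - off, s, 33);
    return *slen < 0 ? -1 : 0;
}
```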
Do not edit + * + * ASN.1 parser for sm2signature + */ +#include + +extern const struct asn1_decoder sm2signature_decoder; + +extern int sm2_get_signature_r(void *context, size_t hdrlen, + unsigned char tag, const void *value, size_t vlen); +extern int sm2_get_signature_s(void *context, size_t hdrlen, + unsigned char tag, const void *value, size_t vlen); diff --git a/drivers/crypto/ycc/ycc_algs.h b/drivers/crypto/ycc/ycc_algs.h index 6a13230a..26323a8 100644 --- a/drivers/crypto/ycc/ycc_algs.h +++ b/drivers/crypto/ycc/ycc_algs.h @@ -77,6 +77,8 @@ enum ycc_cmd_id { YCC_CMD_CCM_ENC, YCC_CMD_CCM_DEC, /* 0x28 */ + YCC_CMD_SM2_VERIFY = 0x47, + YCC_CMD_RSA_ENC = 0x83, YCC_CMD_RSA_DEC, YCC_CMD_RSA_CRT_DEC, diff --git a/drivers/crypto/ycc/ycc_pke.c b/drivers/crypto/ycc/ycc_pke.c index 3debd80..41d27d6 100644 --- a/drivers/crypto/ycc/ycc_pke.c +++ b/drivers/crypto/ycc/ycc_pke.c @@ -8,6 +8,8 @@ #include #include #include + +#include "sm2signature_asn1.h" #include "ycc_algs.h" static int ycc_rsa_done_callback(void *ptr, u16 state) @@ -666,6 +668,222 @@ static void ycc_rsa_exit(struct crypto_akcipher *tfm) crypto_free_akcipher(ctx->soft_tfm); } +#define MPI_NBYTES(m) ((mpi_get_nbits(m) + 7) / 8) + +static int ycc_sm2_done_callback(void *ptr, u16 state) +{ + struct ycc_pke_req *sm2_req = (struct ycc_pke_req *)ptr; + struct ycc_pke_ctx *ctx = sm2_req->ctx; + struct akcipher_request *req = sm2_req->req; + struct device *dev = YCC_DEV(ctx); + + dma_free_coherent(dev, 128, sm2_req->src_vaddr, sm2_req->src_paddr); + + if (req->base.complete) + req->base.complete(&req->base, state == CMD_SUCCESS ? 
0 : -EBADMSG); + return 0; +} + +struct sm2_signature_ctx { + MPI sig_r; + MPI sig_s; +}; + +int sm2_get_signature_r(void *context, size_t hdrlen, unsigned char tag, + const void *value, size_t vlen) +{ + struct sm2_signature_ctx *sig = context; + + if (!value || !vlen) + return -EINVAL; + + sig->sig_r = mpi_read_raw_data(value, vlen); + if (!sig->sig_r) + return -ENOMEM; + + return 0; +} + +int sm2_get_signature_s(void *context, size_t hdrlen, unsigned char tag, + const void *value, size_t vlen) +{ + struct sm2_signature_ctx *sig = context; + + if (!value || !vlen) + return -EINVAL; + + sig->sig_s = mpi_read_raw_data(value, vlen); + if (!sig->sig_s) + return -ENOMEM; + + return 0; +} + +static int ycc_sm2_verify(struct akcipher_request *req) +{ + struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); + struct ycc_pke_req *sm2_req = akcipher_request_ctx(req); + struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm); + struct ycc_sm2_verify_cmd *sm2_verify_cmd; + struct ycc_dev *ydev = ctx->ring->ydev; + struct ycc_ring *ring = ctx->ring; + struct device *dev = YCC_DEV(ctx); + struct sm2_signature_ctx sig; + struct ycc_flags *aflags; + u8 buffer[80] = {0}; + int ret; + + /* Do software fallback */ + if (!test_bit(YDEV_STATUS_READY, &ydev->status) || ctx->key_len) { + akcipher_request_set_tfm(req, ctx->soft_tfm); + ret = crypto_akcipher_verify(req); + akcipher_request_set_tfm(req, tfm); + return ret; + } + + if (req->src_len > 72 || req->src_len < 70 || req->dst_len != 32) + return -EINVAL; + + sm2_req->ctx = ctx; + sm2_req->req = req; + + sg_copy_buffer(req->src, sg_nents(req->src), buffer, req->src_len, 0, 1); + sig.sig_r = NULL; + sig.sig_s = NULL; + ret = asn1_ber_decoder(&sm2signature_decoder, &sig, buffer, req->src_len); + if (ret) + return -EINVAL; + + ret = mpi_print(GCRYMPI_FMT_USG, buffer, MPI_NBYTES(sig.sig_r), + (size_t *)NULL, sig.sig_r); + if (ret) + return -EINVAL; + + ret = mpi_print(GCRYMPI_FMT_USG, buffer + MPI_NBYTES(sig.sig_r), + 
MPI_NBYTES(sig.sig_s), (size_t *)NULL, sig.sig_s); + if (ret) + return -EINVAL; + + ret = -ENOMEM; + /* Alloc dma for src, as verify has no output */ + sm2_req->src_vaddr = dma_alloc_coherent(dev, 128, &sm2_req->src_paddr, + GFP_ATOMIC); + if (!sm2_req->src_vaddr) + goto out; + + sg_copy_buffer(req->src, sg_nents(req->src), sm2_req->src_vaddr, + req->dst_len, req->src_len, 1); + memcpy(sm2_req->src_vaddr + 32, buffer, 64); + + sm2_req->dst_vaddr = NULL; + + aflags = kzalloc(sizeof(struct ycc_flags), GFP_ATOMIC); + if (!aflags) + goto free_src; + + aflags->ptr = (void *)sm2_req; + aflags->ycc_done_callback = ycc_sm2_done_callback; + + memset(&sm2_req->desc, 0, sizeof(sm2_req->desc)); + sm2_req->desc.private_ptr = (u64)(void *)aflags; + + sm2_verify_cmd = &sm2_req->desc.cmd.sm2_verify_cmd; + sm2_verify_cmd->cmd_id = YCC_CMD_SM2_VERIFY; + sm2_verify_cmd->sptr = sm2_req->src_paddr; + sm2_verify_cmd->keyptr = ctx->pub_key_paddr; + + ret = ycc_enqueue(ring, (u8 *)&sm2_req->desc); + if (!ret) + return -EINPROGRESS; + + kfree(aflags); +free_src: + dma_free_coherent(dev, 128, sm2_req->src_vaddr, sm2_req->src_paddr); +out: + return ret; +} + +static unsigned int ycc_sm2_max_size(struct crypto_akcipher *tfm) +{ + return PAGE_SIZE; +} + +static int ycc_sm2_setpubkey(struct crypto_akcipher *tfm, const void *key, + unsigned int keylen) +{ + struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm); + struct device *dev = YCC_DEV(ctx); + int ret; + + ret = crypto_akcipher_set_pub_key(ctx->soft_tfm, key, keylen); + if (ret) + return ret; + + /* Always alloc 64 bytes for pub key */ + ctx->pub_key_vaddr = dma_alloc_coherent(dev, 64, &ctx->pub_key_paddr, + GFP_KERNEL); + if (!ctx->pub_key_vaddr) + return -ENOMEM; + + /* + * Uncompressed key 65 bytes with 0x04 flag + * Compressed key 33 bytes with 0x02 or 0x03 flag + */ + switch (keylen) { + case 65: + if (*(u8 *)key != 0x04) + return -EINVAL; + memcpy(ctx->pub_key_vaddr, key + 1, 64); + break; + case 64: + memcpy(ctx->pub_key_vaddr, key, 64); 
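ycc_sm2_setpubkey() normalizes the public key into the raw 64-byte X||Y block the hardware expects: a 65-byte SEC1 uncompressed key (leading 0x04 flag) is copied with the prefix dropped, a raw 64-byte key is copied as-is, and 33-byte compressed keys are left to the software path. A stand-alone sketch of that normalization (`pubkey_to_xy` is an illustrative name):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Normalize an SM2/EC public key into raw X||Y (64 bytes).
 * Accepts SEC1 uncompressed form (0x04 || X || Y, 65 bytes) or raw 64 bytes.
 * Returns 0 on success, -1 for unsupported or invalid encodings. */
static int pubkey_to_xy(const uint8_t *key, size_t keylen, uint8_t xy[64])
{
    switch (keylen) {
    case 65:
        if (key[0] != 0x04) /* must carry the uncompressed-point flag */
            return -1;
        memcpy(xy, key + 1, 64);
        return 0;
    case 64:
        memcpy(xy, key, 64);
        return 0;
    default: /* compressed (33-byte) keys not handled here */
        return -1;
    }
}
```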
+ break; + case 33: + return 0; /* TODO: use software fallback temporarily */ + default: + return -EINVAL; + } + + return 0; +} + +static int ycc_sm2_init(struct crypto_akcipher *tfm) +{ + struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm); + struct ycc_ring *ring; + + ctx->soft_tfm = crypto_alloc_akcipher("sm2-generic", 0, 0); + if (IS_ERR(ctx->soft_tfm)) + return PTR_ERR(ctx->soft_tfm); + + /* Reserve enough space in case the soft request requires additional space */ + akcipher_set_reqsize(tfm, sizeof(struct ycc_pke_req) + + crypto_akcipher_alg(ctx->soft_tfm)->reqsize); + + ring = ycc_crypto_get_ring(); + if (!ring) { + crypto_free_akcipher(ctx->soft_tfm); + return -ENODEV; + } + + ctx->ring = ring; + return 0; +} + +static void ycc_sm2_exit(struct crypto_akcipher *tfm) +{ + struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm); + struct device *dev = YCC_DEV(ctx); + + if (ctx->ring) + ycc_crypto_free_ring(ctx->ring); + + if (ctx->pub_key_vaddr) + dma_free_coherent(dev, 64, ctx->pub_key_vaddr, ctx->pub_key_paddr); + + crypto_free_akcipher(ctx->soft_tfm); +} + static struct akcipher_alg ycc_rsa = { .base = { .cra_name = "rsa", @@ -685,12 +903,42 @@ static void ycc_rsa_exit(struct crypto_akcipher *tfm) .exit = ycc_rsa_exit, }; +static struct akcipher_alg ycc_sm2 = { + .base = { + .cra_name = "sm2", + .cra_driver_name = "sm2-ycc", + .cra_priority = 1000, + .cra_module = THIS_MODULE, + .cra_ctxsize = sizeof(struct ycc_pke_ctx), + }, + .verify = ycc_sm2_verify, + .set_pub_key = ycc_sm2_setpubkey, + .max_size = ycc_sm2_max_size, + .init = ycc_sm2_init, + .exit = ycc_sm2_exit, +}; + int ycc_pke_register(void) { - return crypto_register_akcipher(&ycc_rsa); + int ret; + + ret = crypto_register_akcipher(&ycc_rsa); + if (ret) { + pr_err("Failed to register rsa\n"); + return ret; + } + + ret = crypto_register_akcipher(&ycc_sm2); + if (ret) { + crypto_unregister_akcipher(&ycc_rsa); + pr_err("Failed to register sm2\n"); + } + + return ret; } void ycc_pke_unregister(void) { crypto_unregister_akcipher(&ycc_rsa); +
crypto_unregister_akcipher(&ycc_sm2); } diff --git a/drivers/crypto/ycc/ycc_ring.h b/drivers/crypto/ycc/ycc_ring.h index c47fc18..6dafdc7 100644 --- a/drivers/crypto/ycc/ycc_ring.h +++ b/drivers/crypto/ycc/ycc_ring.h @@ -121,11 +121,19 @@ struct ycc_rsa_dec_cmd { u64 dptr:48; } __packed; +struct ycc_sm2_verify_cmd { + u8 cmd_id; + u64 sptr:48; + u16 key_id; + u64 keyptr:48; +} __packed; + union ycc_real_cmd { struct ycc_skcipher_cmd ske_cmd; struct ycc_aead_cmd aead_cmd; struct ycc_rsa_enc_cmd rsa_enc_cmd; struct ycc_rsa_dec_cmd rsa_dec_cmd; + struct ycc_sm2_verify_cmd sm2_verify_cmd; u8 padding[32]; }; From patchwork Wed Sep 28 07:38:07 2022 X-Patchwork-Submitter: guanjun X-Patchwork-Id: 610276 From: 'Guanjun' To: herbert@gondor.apana.org.au, elliott@hpe.com Cc: zelin.deng@linux.alibaba.com, guanjun@linux.alibaba.com, xuchun.shang@linux.alibaba.com, artie.ding@linux.alibaba.com, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 9/9] MAINTAINERS: Add Yitian Cryptography Complex (YCC) driver maintainer entry Date: Wed, 28 Sep 2022 15:38:07 +0800 Message-Id: <1664350687-47330-10-git-send-email-guanjun@linux.alibaba.com> In-Reply-To: <1664350687-47330-1-git-send-email-guanjun@linux.alibaba.com> References: <1664350687-47330-1-git-send-email-guanjun@linux.alibaba.com> List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Zelin Deng I will continue to add new feature, optimize the performance, and handle the issues of Yitian Cryptography Complex (YCC) driver. Guanjun and Xuchun Shang focus on various algorithms support, add them as co-maintainers. Signed-off-by: Zelin Deng Acked-by: Guanjun Acked-by: Xuchun Shang --- MAINTAINERS | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 56ff555..ffededc 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -943,6 +943,14 @@ S: Supported F: drivers/crypto/ccp/sev* F: include/uapi/linux/psp-sev.h +ALIBABA YITIAN CRYPTOGRAPHY COMPLEX (YCC) ACCELERATOR DRIVER +M: Zelin Deng +M: Guanjun +M: Xuchun Shang +L: ali-accel@list.alibaba-inc.com +S: Supported +F: drivers/crypto/ycc/ + AMD DISPLAY CORE M: Harry Wentland M: Leo Li