From patchwork Thu Oct 13 18:40:21 2022
X-Patchwork-Submitter: Markus Stockhausen
X-Patchwork-Id: 615822
From: Markus Stockhausen
To: linux-crypto@vger.kernel.org
Cc: Markus Stockhausen
Subject: [PATCH 1/6] crypto/realtek: header definitions
Date: Thu, 13 Oct 2022 20:40:21 +0200
Message-Id: <20221013184026.63826-2-markus.stockhausen@gmx.de>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20221013184026.63826-1-markus.stockhausen@gmx.de>
References: <20221013184026.63826-1-markus.stockhausen@gmx.de>
X-Mailing-List: linux-crypto@vger.kernel.org

Add header definitions for the new Realtek crypto device.

Signed-off-by: Markus Stockhausen
---
 drivers/crypto/realtek/realtek_crypto.h | 325 ++++++++++++++++++++++++
 1 file changed, 325 insertions(+)
 create mode 100644 drivers/crypto/realtek/realtek_crypto.h

-- 
2.37.3

diff --git a/drivers/crypto/realtek/realtek_crypto.h b/drivers/crypto/realtek/realtek_crypto.h
new file mode 100644
index 000000000000..35d9de5eca7a
--- /dev/null
+++ b/drivers/crypto/realtek/realtek_crypto.h
@@ -0,0 +1,325 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Crypto acceleration support for Realtek crypto engine. Based on ideas from
+ * Rockchip & SafeXcel driver plus Realtek OpenWrt RTK.
+ *
+ * Copyright (c) 2022, Markus Stockhausen
+ */
+
+#ifndef __REALTEK_CRYPTO_H__
+#define __REALTEK_CRYPTO_H__
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * The four engine registers for instrumentation of the hardware.
+ */
+#define RTCR_REG_SRC 0x0 /* Source descriptor starting address */
+#define RTCR_REG_DST 0x4 /* Destination descriptor starting address */
+#define RTCR_REG_CMD 0x8 /* Command/Status Register */
+#define RTCR_REG_CTR 0xC /* Control Register */
+/*
+ * Engine Command/Status Register.
+ */
+#define RTCR_CMD_SDUEIP BIT(15) /* Src desc unavail error interrupt pending */
+#define RTCR_CMD_SDLEIP BIT(14) /* Src desc length error interrupt pending */
+#define RTCR_CMD_DDUEIP BIT(13) /* Dst desc unavail error interrupt pending */
+#define RTCR_CMD_DDOKIP BIT(12) /* Dst desc ok interrupt pending */
+#define RTCR_CMD_DABFIP BIT(11) /* Data address buffer interrupt pending */
+#define RTCR_CMD_POLL BIT(1) /* Descriptor polling. Set to kick engine */
+#define RTCR_CMD_SRST BIT(0) /* Software reset, write 1 to reset */
+/*
+ * Engine Control Register
+ */
+#define RTCR_CTR_SDUEIE BIT(15) /* Src desc unavail error interrupt enable */
+#define RTCR_CTR_SDLEIE BIT(14) /* Src desc length error interrupt enable */
+#define RTCR_CTR_DDUEIE BIT(13) /* Dst desc unavail error interrupt enable */
+#define RTCR_CTR_DDOKIE BIT(12) /* Dst desc ok interrupt enable */
+#define RTCR_CTR_DABFIE BIT(11) /* Data address buffer interrupt enable */
+#define RTCR_CTR_LBKM BIT(8) /* Loopback mode enable */
+#define RTCR_CTR_SAWB BIT(7) /* Source address write back = work in place */
+#define RTCR_CTR_CKE BIT(6) /* Clock enable */
+#define RTCR_CTR_DDMMSK 0x38 /* Destination DMA max burst size mask */
+#define RTCR_CTR_DDM16 0x00 /* Destination DMA max burst size 16 bytes */
+#define RTCR_CTR_DDM32 0x08 /* Destination DMA max burst size 32 bytes */
+#define RTCR_CTR_DDM64 0x10 /* Destination DMA max burst size 64 bytes */
+#define RTCR_CTR_DDM128 0x18 /* Destination DMA max burst size 128 bytes */
+#define RTCR_CTR_SDMMSK 0x07 /* Source DMA max burst size mask */
+#define RTCR_CTR_SDM16 0x00 /* Source DMA max burst size 16 bytes */
+#define RTCR_CTR_SDM32 0x01 /* Source DMA max burst size 32 bytes */
+#define RTCR_CTR_SDM64 0x02 /* Source DMA max burst size 64 bytes */
+#define RTCR_CTR_SDM128 0x03 /* Source DMA max burst size 128 bytes */
+
+/*
+ * Module settings and constants. Some of the limiter values have been chosen
+ * based on testing (e.g. ring sizes). Others are based on real hardware
+ * limits (e.g. scatter, request size, hash size).
+ */
+#define RTCR_SRC_RING_SIZE 64
+#define RTCR_DST_RING_SIZE 16
+#define RTCR_BUF_RING_SIZE 32768
+#define RTCR_MAX_REQ_SIZE 8192
+#define RTCR_MAX_SG 8
+#define RTCR_MAX_SG_AHASH (RTCR_MAX_SG - 1)
+#define RTCR_MAX_SG_SKCIPHER (RTCR_MAX_SG - 3)
+#define RTCR_HASH_VECTOR_SIZE SHA1_DIGEST_SIZE
+
+#define RTCR_ALG_AHASH 0
+#define RTCR_ALG_SKCIPHER 1
+
+#define RTCR_HASH_UPDATE BIT(0)
+#define RTCR_HASH_FINAL BIT(1)
+#define RTCR_HASH_BUF_SIZE SHA1_BLOCK_SIZE
+#define RTCR_HASH_PAD_SIZE ((SHA1_BLOCK_SIZE + 8) / sizeof(u64))
+
+#define RTCR_REQ_SG_MASK 0xff
+#define RTCR_REQ_MD5 BIT(8)
+#define RTCR_REQ_SHA1 BIT(9)
+#define RTCR_REQ_FB_ACT BIT(10)
+#define RTCR_REQ_FB_RDY BIT(11)
+
+/*
+ * Crypto ring source data descriptor. This data is fed into the engine. It
+ * takes all information about the input data and the type of cipher/hash
+ * algorithm that we want to apply. Each request consists of several source
+ * descriptors.
+ */
+struct rtcr_src_desc {
+	u32 opmode;
+	u32 len;
+	u32 dummy;
+	phys_addr_t paddr;
+};
+
+#define RTCR_SRC_DESC_SIZE (sizeof(struct rtcr_src_desc))
+/*
+ * Owner: This flag identifies the owner of the block. When we send the
+ * descriptor to the ring, set this flag to 1. Once the crypto engine has
+ * finished processing, it will be reset to 0.
+ */
+#define RTCR_SRC_OP_OWN_ASIC BIT(31)
+#define RTCR_SRC_OP_OWN_CPU 0
+/*
+ * End of ring: Setting this flag to 1 tells the crypto engine that this is
+ * the last descriptor of the whole ring (not the request). If set, the engine
+ * will not increase the processing pointer afterwards but will jump back to
+ * the first descriptor address it was initialized with.
+ */
+#define RTCR_SRC_OP_EOR BIT(30)
+#define RTCR_SRC_OP_CALC_EOR(idx) ((idx == RTCR_SRC_RING_SIZE - 1) ? \
+				   RTCR_SRC_OP_EOR : 0)
+/*
+ * First segment: If set to 1, this is the first descriptor of a request. All
+ * descriptors that follow with this flag set to 0 belong to the same request.
+ */
+#define RTCR_SRC_OP_FS BIT(29)
+/*
+ * Mode select: Set to 00b for crypto only, 01b for hash only, 10b for
+ * hash then crypto or 11b for crypto then hash.
+ */
+#define RTCR_SRC_OP_MS_CRYPTO 0
+#define RTCR_SRC_OP_MS_HASH BIT(26)
+#define RTCR_SRC_OP_MS_HASH_CRYPTO BIT(27)
+#define RTCR_SRC_OP_MS_CRYPTO_HASH GENMASK(27, 26)
+/*
+ * Key application management: Only relevant for cipher (AES/3DES/DES) mode. If
+ * using AES or DES it has to be set to 0 (000b) for decryption and 7 (111b) for
+ * encryption. For 3DES it has to be set to 2 (010b = decrypt, encrypt, decrypt)
+ * for decryption and 5 (101b = encrypt, decrypt, encrypt) for encryption.
+ */
+#define RTCR_SRC_OP_KAM_DEC 0
+#define RTCR_SRC_OP_KAM_ENC GENMASK(25, 23)
+#define RTCR_SRC_OP_KAM_3DES_DEC BIT(24)
+#define RTCR_SRC_OP_KAM_3DES_ENC (BIT(23) | BIT(25))
+/*
+ * AES/3DES/DES mode & key length: Upper two bits for AES mode. If set to values
+ * other than 0 we want to encrypt/decrypt with AES. The values are 01b for 128
+ * bit key length, 10b for 192 bit key length and 11b for 256 bit key length.
+ * If AES is disabled (upper two bits 00b) then the lowest bit determines if we
+ * want to use 3DES (1) or DES (0).
+ */
+#define RTCR_SRC_OP_CIPHER_FROM_KEY(k) ((k - 8) << 18)
+#define RTCR_SRC_OP_CIPHER_AES_128 BIT(21)
+#define RTCR_SRC_OP_CIPHER_AES_192 BIT(22)
+#define RTCR_SRC_OP_CIPHER_AES_256 GENMASK(22, 21)
+#define RTCR_SRC_OP_CIPHER_3DES BIT(20)
+#define RTCR_SRC_OP_CIPHER_DES 0
+#define RTCR_SRC_OP_CIPHER_MASK GENMASK(22, 20)
+/*
+ * Cipher block mode: Determines the block mode of a cipher request. Set to 00b
+ * for ECB, 01b for CTR and 10b for CBC.
+ */
+#define RTCR_SRC_OP_CRYPT_ECB 0
+#define RTCR_SRC_OP_CRYPT_CTR BIT(18)
+#define RTCR_SRC_OP_CRYPT_CBC BIT(19)
+/*
+ * Hash mode: Set to 1 for MD5 or 0 for SHA1 calculation.
+ */
+#define RTCR_SRC_OP_HASH_MD5 BIT(16)
+#define RTCR_SRC_OP_HASH_SHA1 0
+
+#define RTCR_SRC_OP_DUMMY_LEN 128
+
+/*
+ * Crypto ring destination data descriptor. Data inside will be fed to the
+ * engine and if we process a hash request we get the resulting hash from here.
+ * Each request consists of exactly one destination descriptor.
+ */
+struct rtcr_dst_desc {
+	u32 opmode;
+	phys_addr_t paddr;
+	u32 dummy;
+	u32 vector[RTCR_HASH_VECTOR_SIZE / sizeof(u32)];
+};
+
+#define RTCR_DST_DESC_SIZE (sizeof(struct rtcr_dst_desc))
+/*
+ * Owner: This flag identifies the owner of the block. When we send the
+ * descriptor to the ring, set this flag to 1. Once the crypto engine has
+ * finished processing, it will be reset to 0.
+ */
+#define RTCR_DST_OP_OWN_ASIC BIT(31)
+#define RTCR_DST_OP_OWN_CPU 0
+/*
+ * End of ring: Setting this flag to 1 tells the crypto engine that this is
+ * the last descriptor of the whole ring (not the request). If set, the engine
+ * will not increase the processing pointer afterwards but will jump back to
+ * the first descriptor address it was initialized with.
+ */
+#define RTCR_DST_OP_EOR BIT(30)
+#define RTCR_DST_OP_CALC_EOR(idx) ((idx == RTCR_DST_RING_SIZE - 1) ? \
+				   RTCR_DST_OP_EOR : 0)
+
+/*
+ * Writeback descriptor. This descriptor maintains additional data per request
+ * about writeback, e.g. the hash result or a cipher result that was written to
+ * the internal buffer only. Remember the post processing information here.
+ */
+struct rtcr_wbk_desc {
+	void *dst;
+	void *src;
+	int off;
+	int len;
+};
+/*
+ * To keep the size of the descriptor a power of 2 (cache line aligned) the
+ * length field can denote special writeback requests that need another type of
+ * postprocessing.
+ */
+#define RTCR_WB_LEN_DONE (0)
+#define RTCR_WB_LEN_HASH (-1)
+#define RTCR_WB_LEN_SG_DIRECT (-2)
+
+struct rtcr_crypto_dev {
+	char buf_ring[RTCR_BUF_RING_SIZE];
+	struct rtcr_src_desc src_ring[RTCR_SRC_RING_SIZE];
+	struct rtcr_dst_desc dst_ring[RTCR_DST_RING_SIZE];
+	struct rtcr_wbk_desc wbk_ring[RTCR_DST_RING_SIZE];
+
+	/* modified under ring lock */
+	int cpu_src_idx;
+	int cpu_dst_idx;
+	int cpu_buf_idx;
+
+	/* modified in (serialized) tasklet */
+	int pp_src_idx;
+	int pp_dst_idx;
+	int pp_buf_idx;
+
+	/* modified under asic lock */
+	int asic_dst_idx;
+	int asic_src_idx;
+	bool busy;
+
+	int irq;
+	spinlock_t asiclock;
+	spinlock_t ringlock;
+	struct tasklet_struct done_task;
+	wait_queue_head_t done_queue;
+
+	void __iomem *base;
+
+	struct platform_device *pdev;
+	struct device *dev;
+};
+
+struct rtcr_alg_template {
+	struct rtcr_crypto_dev *cdev;
+	int type;
+	int opmode;
+	union {
+		struct skcipher_alg skcipher;
+		struct ahash_alg ahash;
+	} alg;
+};
+
+struct rtcr_ahash_ctx {
+	struct rtcr_crypto_dev *cdev;
+	struct crypto_ahash *fback;
+	int opmode;
+};
+
+struct rtcr_ahash_req {
+	int state;
+	/* Data from here is lost if fallback switch happens */
+	u32 vector[RTCR_HASH_VECTOR_SIZE];
+	u64 totallen;
+	char buf[RTCR_HASH_BUF_SIZE];
+	int buflen;
+};
+
+union rtcr_fallback_state {
+	struct md5_state md5;
+	struct sha1_state sha1;
+};
+
+struct rtcr_skcipher_ctx {
+	struct rtcr_crypto_dev *cdev;
+	int opmode;
+	int keylen;
+	u32 key_enc[AES_KEYSIZE_256 / sizeof(u32)];
+	u32 key_dec[AES_KEYSIZE_256 / sizeof(u32)];
+};
+
+extern struct rtcr_alg_template rtcr_ahash_md5;
+extern struct rtcr_alg_template rtcr_ahash_sha1;
+extern struct rtcr_alg_template rtcr_skcipher_ecb_aes;
+extern struct rtcr_alg_template rtcr_skcipher_cbc_aes;
+extern struct rtcr_alg_template rtcr_skcipher_ctr_aes;
+
+extern void rtcr_lock_ring(struct rtcr_crypto_dev *cdev);
+extern void rtcr_unlock_ring(struct rtcr_crypto_dev *cdev);
+
+extern int rtcr_alloc_ring(struct rtcr_crypto_dev *cdev, int srclen,
+			   int *srcidx, int *dstidx, int buflen, char **buf);
+extern void rtcr_add_src_ahash_to_ring(struct rtcr_crypto_dev *cdev, int idx,
+				       int opmode, int totallen);
+extern void rtcr_add_src_pad_to_ring(struct rtcr_crypto_dev *cdev,
+				     int idx, int len);
+extern void rtcr_add_src_skcipher_to_ring(struct rtcr_crypto_dev *cdev, int idx,
+					  int opmode, int totallen,
+					  struct rtcr_skcipher_ctx *sctx);
+extern void rtcr_add_src_to_ring(struct rtcr_crypto_dev *cdev, int idx,
+				 void *vaddr, int blocklen, int totallen);
+extern void rtcr_add_wbk_to_ring(struct rtcr_crypto_dev *cdev, int idx,
+				 void *dst, int off);
+extern void rtcr_add_dst_to_ring(struct rtcr_crypto_dev *cdev, int idx,
+				 void *reqdst, int reqlen, void *wbkdst,
+				 int wbkoff);
+
+extern void rtcr_kick_engine(struct rtcr_crypto_dev *cdev);
+
+extern void rtcr_prepare_request(struct rtcr_crypto_dev *cdev);
+extern void rtcr_finish_request(struct rtcr_crypto_dev *cdev, int opmode,
+				int totallen);
+extern int rtcr_wait_for_request(struct rtcr_crypto_dev *cdev, int idx);
+
+extern inline int rtcr_inc_src_idx(int idx, int cnt);
+extern inline int rtcr_inc_dst_idx(int idx, int cnt);
+#endif

From patchwork Thu Oct 13 18:40:22 2022
X-Patchwork-Submitter: Markus Stockhausen
X-Patchwork-Id: 614892
From: Markus Stockhausen
To: linux-crypto@vger.kernel.org
Cc: Markus Stockhausen
Subject: [PATCH 2/6] crypto/realtek: core functions
Date: Thu, 13 Oct 2022 20:40:22 +0200
Message-Id: <20221013184026.63826-3-markus.stockhausen@gmx.de>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20221013184026.63826-1-markus.stockhausen@gmx.de>
References: <20221013184026.63826-1-markus.stockhausen@gmx.de>
X-Mailing-List: linux-crypto@vger.kernel.org

Add core functions for the new Realtek crypto device.

Signed-off-by: Markus Stockhausen
---
 drivers/crypto/realtek/realtek_crypto.c | 472 ++++++++++++++++++++++++
 1 file changed, 472 insertions(+)
 create mode 100644 drivers/crypto/realtek/realtek_crypto.c

-- 
2.37.3

diff --git a/drivers/crypto/realtek/realtek_crypto.c b/drivers/crypto/realtek/realtek_crypto.c
new file mode 100644
index 000000000000..f22d117fd3c6
--- /dev/null
+++ b/drivers/crypto/realtek/realtek_crypto.c
@@ -0,0 +1,472 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Crypto acceleration support for Realtek crypto engine. Based on ideas from
+ * Rockchip & SafeXcel driver plus Realtek OpenWrt RTK.
+ *
+ * Copyright (c) 2022, Markus Stockhausen
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "realtek_crypto.h"
+
+inline int rtcr_inc_src_idx(int idx, int cnt)
+{
+	return (idx + cnt) & (RTCR_SRC_RING_SIZE - 1);
+}
+
+inline int rtcr_inc_dst_idx(int idx, int cnt)
+{
+	return (idx + cnt) & (RTCR_DST_RING_SIZE - 1);
+}
+
+inline int rtcr_inc_buf_idx(int idx, int cnt)
+{
+	return (idx + cnt) & (RTCR_BUF_RING_SIZE - 1);
+}
+
+inline int rtcr_space_plus_pad(int len)
+{
+	return (len + 31) & ~31;
+}
+
+int rtcr_alloc_ring(struct rtcr_crypto_dev *cdev, int srclen, int *srcidx,
+		    int *dstidx, int buflen, char **buf)
+{
+	int srcfree, dstfree, buffree, bufidx;
+	int srcalloc = (srclen + 1) & ~1, bufalloc = 0;
+	int ret = -ENOSPC;
+
+	spin_lock(&cdev->ringlock);
+
+	bufidx = cdev->cpu_buf_idx;
+	if (buflen > 0) {
+		bufalloc = rtcr_space_plus_pad(buflen);
+		if (bufidx + bufalloc > RTCR_BUF_RING_SIZE) {
+			if (unlikely(cdev->cpu_buf_idx > bufidx)) {
+				dev_err(cdev->dev, "buffer ring full\n");
+				goto err_nospace;
+			}
+			/* end of buffer is free but too small, skip it */
+			bufidx = 0;
+		}
+	}
+
+	srcfree = rtcr_inc_src_idx(cdev->pp_src_idx - cdev->cpu_src_idx, -1);
+	dstfree = rtcr_inc_dst_idx(cdev->pp_dst_idx - cdev->cpu_dst_idx, -1);
+	buffree = rtcr_inc_buf_idx(cdev->pp_buf_idx - bufidx, -1);
+
+	if (unlikely(srcfree < srcalloc)) {
+		dev_err(cdev->dev, "source ring full\n");
+		goto err_nospace;
+	}
+	if (unlikely(dstfree < 1)) {
+		dev_err(cdev->dev, "destination ring full\n");
+		goto err_nospace;
+	}
+	if (unlikely(buffree < bufalloc)) {
+		dev_err(cdev->dev, "buffer ring full\n");
+		goto err_nospace;
+	}
+
+	*srcidx = cdev->cpu_src_idx;
+	cdev->cpu_src_idx = rtcr_inc_src_idx(cdev->cpu_src_idx, srcalloc);
+
+	*dstidx = cdev->cpu_dst_idx;
+	cdev->cpu_dst_idx = rtcr_inc_dst_idx(cdev->cpu_dst_idx, 1);
+
+	ret = 0;
+	cdev->wbk_ring[*dstidx].len = buflen;
+	if (buflen > 0) {
+		*buf = &cdev->buf_ring[bufidx];
+		cdev->wbk_ring[*dstidx].src = *buf;
+		cdev->cpu_buf_idx = rtcr_inc_buf_idx(bufidx, bufalloc);
+	}
+
+err_nospace:
+	spin_unlock(&cdev->ringlock);
+
+	return ret;
+}
+
+static inline void rtcr_ack_irq(struct rtcr_crypto_dev *cdev)
+{
+	int v = ioread32(cdev->base + RTCR_REG_CMD);
+
+	if (unlikely((v != RTCR_CMD_DDOKIP) && v))
+		dev_err(cdev->dev, "unexpected IRQ result 0x%08x\n", v);
+	v = RTCR_CMD_SDUEIP | RTCR_CMD_SDLEIP | RTCR_CMD_DDUEIP |
+	    RTCR_CMD_DDOKIP | RTCR_CMD_DABFIP;
+
+	iowrite32(v, cdev->base + RTCR_REG_CMD);
+}
+
+static void rtcr_done_task(unsigned long data)
+{
+	struct rtcr_crypto_dev *cdev = (struct rtcr_crypto_dev *)data;
+	int stop_src_idx, stop_dst_idx, idx, len;
+	struct scatterlist *sg;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cdev->asiclock, flags);
+	stop_src_idx = cdev->asic_src_idx;
+	stop_dst_idx = cdev->asic_dst_idx;
+	spin_unlock_irqrestore(&cdev->asiclock, flags);
+
+	idx = cdev->pp_dst_idx;
+
+	while (idx != stop_dst_idx) {
+		len = cdev->wbk_ring[idx].len;
+		switch (len) {
+		case RTCR_WB_LEN_SG_DIRECT:
+			/* already written to the destination by the engine */
+			break;
+		case RTCR_WB_LEN_HASH:
+			/* write back hash from destination ring */
+			memcpy(cdev->wbk_ring[idx].dst,
+			       cdev->dst_ring[idx].vector,
+			       RTCR_HASH_VECTOR_SIZE);
+			break;
+		default:
+			/* write back data from buffer */
+			sg = (struct scatterlist *)cdev->wbk_ring[idx].dst;
+			sg_pcopy_from_buffer(sg, sg_nents(sg),
+					     cdev->wbk_ring[idx].src,
+					     len, cdev->wbk_ring[idx].off);
+			len = rtcr_space_plus_pad(len);
+			cdev->pp_buf_idx = ((char *)cdev->wbk_ring[idx].src -
					    cdev->buf_ring) + len;
+		}
+
+		cdev->wbk_ring[idx].len = RTCR_WB_LEN_DONE;
+		idx = rtcr_inc_dst_idx(idx, 1);
+	}
+
+	wake_up_all(&cdev->done_queue);
+	cdev->pp_src_idx = stop_src_idx;
+	cdev->pp_dst_idx = stop_dst_idx;
+}
+
+static irqreturn_t rtcr_handle_irq(int irq, void *dev_id)
+{
+	struct rtcr_crypto_dev *cdev = dev_id;
+	u32 p;
+
+	spin_lock(&cdev->asiclock);
+
+	rtcr_ack_irq(cdev);
+	cdev->busy = false;
+
+	p = (u32)phys_to_virt((u32)ioread32(cdev->base + RTCR_REG_SRC));
+	cdev->asic_src_idx = (p - (u32)cdev->src_ring) / RTCR_SRC_DESC_SIZE;
+
+	p = (u32)phys_to_virt((u32)ioread32(cdev->base + RTCR_REG_DST));
+	cdev->asic_dst_idx = (p - (u32)cdev->dst_ring) / RTCR_DST_DESC_SIZE;
+
+	tasklet_schedule(&cdev->done_task);
+	spin_unlock(&cdev->asiclock);
+
+	return IRQ_HANDLED;
+}
+
+void rtcr_add_src_ahash_to_ring(struct rtcr_crypto_dev *cdev, int idx,
+				int opmode, int totallen)
+{
+	struct rtcr_src_desc *src = &cdev->src_ring[idx];
+
+	src->len = totallen;
+	src->opmode = opmode | RTCR_SRC_OP_FS |
+		      RTCR_SRC_OP_DUMMY_LEN | RTCR_SRC_OP_OWN_ASIC |
+		      RTCR_SRC_OP_CALC_EOR(idx);
+
+	dma_sync_single_for_device(cdev->dev, virt_to_phys(src),
+				   RTCR_SRC_DESC_SIZE, DMA_TO_DEVICE);
+}
+
+void rtcr_add_src_skcipher_to_ring(struct rtcr_crypto_dev *cdev, int idx,
+				   int opmode, int totallen,
+				   struct rtcr_skcipher_ctx *sctx)
+{
+	struct rtcr_src_desc *src = &cdev->src_ring[idx];
+
+	src->len = totallen;
+	if (opmode & RTCR_SRC_OP_KAM_ENC)
+		src->paddr = virt_to_phys(sctx->key_enc);
+	else
+		src->paddr = virt_to_phys(sctx->key_dec);
+
+	src->opmode = RTCR_SRC_OP_FS | RTCR_SRC_OP_OWN_ASIC |
+		      RTCR_SRC_OP_MS_CRYPTO | RTCR_SRC_OP_CRYPT_ECB |
+		      RTCR_SRC_OP_CALC_EOR(idx) | opmode | sctx->keylen;
+
+	dma_sync_single_for_device(cdev->dev, virt_to_phys(src),
+				   RTCR_SRC_DESC_SIZE, DMA_TO_DEVICE);
+}
+
+void rtcr_add_src_to_ring(struct rtcr_crypto_dev *cdev, int idx, void *vaddr,
+			  int blocklen, int totallen)
+{
+	struct rtcr_src_desc *src = &cdev->src_ring[idx];
+
+	src->len = totallen;
+	src->paddr = virt_to_phys(vaddr);
+	src->opmode = RTCR_SRC_OP_OWN_ASIC | RTCR_SRC_OP_CALC_EOR(idx) |
+		      blocklen;
+
+	dma_sync_single_for_device(cdev->dev, virt_to_phys(src),
+				   RTCR_SRC_DESC_SIZE, DMA_BIDIRECTIONAL);
+}
+
+inline void rtcr_add_src_pad_to_ring(struct rtcr_crypto_dev *cdev, int idx,
+				     int len)
+{
+	/* align 16 byte source descriptors with 32 byte cache lines */
+	if (!(idx & 1))
+		rtcr_add_src_to_ring(cdev, idx + 1, NULL, 0, len);
+}
+
+void rtcr_add_dst_to_ring(struct rtcr_crypto_dev *cdev, int idx, void *reqdst,
+			  int reqlen, void *wbkdst, int wbkoff)
+{
+	struct rtcr_dst_desc *dst = &cdev->dst_ring[idx];
+	struct rtcr_wbk_desc *wbk = &cdev->wbk_ring[idx];
+
+	dst->paddr = virt_to_phys(reqdst);
+	dst->opmode = RTCR_DST_OP_OWN_ASIC | RTCR_DST_OP_CALC_EOR(idx) | reqlen;
+
+	wbk->dst = wbkdst;
+	wbk->off = wbkoff;
+
+	dma_sync_single_for_device(cdev->dev, virt_to_phys(dst),
+				   RTCR_DST_DESC_SIZE, DMA_BIDIRECTIONAL);
+}
+
+inline int rtcr_wait_for_request(struct rtcr_crypto_dev *cdev, int idx)
+{
+	int *len = &cdev->wbk_ring[idx].len;
+
+	wait_event(cdev->done_queue, *len == RTCR_WB_LEN_DONE);
+	return 0;
+}
+
+void rtcr_kick_engine(struct rtcr_crypto_dev *cdev)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&cdev->asiclock, flags);
+
+	if (!cdev->busy) {
+		cdev->busy = true;
+		/* engine needs up to 5us to reset poll bit */
+		iowrite32(RTCR_CMD_POLL, cdev->base + RTCR_REG_CMD);
+	}
+
+	spin_unlock_irqrestore(&cdev->asiclock, flags);
+}
+
+static struct rtcr_alg_template *rtcr_algs[] = {
+	&rtcr_ahash_md5,
+	&rtcr_ahash_sha1,
+	&rtcr_skcipher_ecb_aes,
+	&rtcr_skcipher_cbc_aes,
+	&rtcr_skcipher_ctr_aes,
+};
+
+static void rtcr_unregister_algorithms(int end)
+{
+	int i;
+
+	for (i = 0; i < end; i++) {
+		if (rtcr_algs[i]->type == RTCR_ALG_SKCIPHER)
+			crypto_unregister_skcipher(&rtcr_algs[i]->alg.skcipher);
+		else
+			crypto_unregister_ahash(&rtcr_algs[i]->alg.ahash);
+	}
+}
+
+static int rtcr_register_algorithms(struct rtcr_crypto_dev *cdev)
+{
+	int i, ret = 0;
+
+	for (i = 0; i < ARRAY_SIZE(rtcr_algs); i++) {
+		rtcr_algs[i]->cdev = cdev;
+		if (rtcr_algs[i]->type == RTCR_ALG_SKCIPHER)
+			ret = crypto_register_skcipher(&rtcr_algs[i]->alg.skcipher);
+		else {
+			rtcr_algs[i]->alg.ahash.halg.statesize =
+				max(sizeof(struct rtcr_ahash_req),
+				    offsetof(struct rtcr_ahash_req, vector) +
+				    sizeof(union rtcr_fallback_state));
+			ret = crypto_register_ahash(&rtcr_algs[i]->alg.ahash);
+		}
+		if (ret)
+			goto err_cipher_algs;
+	}
+
+	return 0;
+
+err_cipher_algs:
+	rtcr_unregister_algorithms(i);
+
+	return ret;
+}
+
+static void rtcr_init_engine(struct rtcr_crypto_dev *cdev)
+{
+	int v;
+
+	v = ioread32(cdev->base + RTCR_REG_CMD);
+	v |= RTCR_CMD_SRST;
+	iowrite32(v, cdev->base + RTCR_REG_CMD);
+
+	usleep_range(10000, 20000);
+
+	iowrite32(RTCR_CTR_CKE | RTCR_CTR_SDM16 | RTCR_CTR_DDM16 |
+		  RTCR_CTR_SDUEIE | RTCR_CTR_SDLEIE | RTCR_CTR_DDUEIE |
+		  RTCR_CTR_DDOKIE | RTCR_CTR_DABFIE, cdev->base + RTCR_REG_CTR);
+
+	rtcr_ack_irq(cdev);
+	usleep_range(10000, 20000);
+}
+
+static void rtcr_exit_engine(struct rtcr_crypto_dev *cdev)
+{
+	iowrite32(0, cdev->base + RTCR_REG_CTR);
+}
+
+static void rtcr_init_rings(struct rtcr_crypto_dev *cdev)
+{
+	phys_addr_t src = virt_to_phys(cdev->src_ring);
+	phys_addr_t dst = virt_to_phys(cdev->dst_ring);
+
+	iowrite32(src, cdev->base + RTCR_REG_SRC);
+	iowrite32(dst, cdev->base + RTCR_REG_DST);
+
+	cdev->asic_dst_idx = cdev->asic_src_idx = 0;
+	cdev->cpu_src_idx = cdev->cpu_dst_idx = cdev->cpu_buf_idx = 0;
+	cdev->pp_src_idx = cdev->pp_dst_idx = cdev->pp_buf_idx = 0;
+}
+
+static int rtcr_crypto_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct rtcr_crypto_dev *cdev;
+	unsigned long flags = 0;
+	struct resource *res;
+	void __iomem *base;
+	int irq, ret;
+
+#ifdef CONFIG_MIPS
+	if ((cpu_dcache_line_size() != 16) && (cpu_dcache_line_size() != 32)) {
+		dev_err(dev, "cache line size not 16 or 32 bytes\n");
+		ret = -EINVAL;
+		goto err_map;
+	}
+#endif
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		dev_err(dev, "no IO address given\n");
+		ret = -ENODEV;
+		goto err_map;
+	}
+
+	base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR_OR_NULL(base)) {
+		dev_err(dev, "failed to map IO address\n");
+		ret = -EINVAL;
+		goto err_map;
+	}
+
+	cdev = devm_kzalloc(dev, sizeof(*cdev), GFP_KERNEL);
+	if (!cdev) {
+		dev_err(dev, "failed to allocate device memory\n");
+		ret = -ENOMEM;
+		goto err_mem;
+	}
+
+	irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+	if (!irq) {
+		dev_err(dev, "failed to determine device interrupt\n");
+		ret = -EINVAL;
+		goto err_of_irq;
+	}
+
+	if (devm_request_irq(dev, irq, rtcr_handle_irq, flags,
+			     "realtek-crypto", cdev)) {
+		dev_err(dev, "failed to request device interrupt\n");
+		ret = -ENXIO;
+		goto err_request_irq;
+	}
+
+	platform_set_drvdata(pdev, cdev);
+	cdev->base = base;
+	cdev->dev = dev;
+	cdev->irq = irq;
+	cdev->pdev = pdev;
+
+	dma_map_single(dev, (void *)empty_zero_page, PAGE_SIZE, DMA_TO_DEVICE);
+
+	init_waitqueue_head(&cdev->done_queue);
+	tasklet_init(&cdev->done_task, rtcr_done_task, (unsigned long)cdev);
+	spin_lock_init(&cdev->ringlock);
+	spin_lock_init(&cdev->asiclock);
+
+	/* Init engine first as it resets the ring pointers */
+	rtcr_init_engine(cdev);
+	rtcr_init_rings(cdev);
+	rtcr_register_algorithms(cdev);
+
+	dev_info(dev, "%d KB buffer, max %d requests of up to %d bytes\n",
+		 RTCR_BUF_RING_SIZE / 1024, RTCR_DST_RING_SIZE,
+		 RTCR_MAX_REQ_SIZE);
+	dev_info(dev, "ready for AES/SHA1/MD5 crypto acceleration\n");
+
+	return 0;
+
+err_request_irq:
+	irq_dispose_mapping(irq);
+err_of_irq:
+	kfree(cdev);
+err_mem:
+	iounmap(base);
+err_map:
+	return ret;
+}
+
+static int rtcr_crypto_remove(struct platform_device *pdev)
+{
+	struct rtcr_crypto_dev *cdev = platform_get_drvdata(pdev);
+
+	rtcr_exit_engine(cdev);
+	rtcr_unregister_algorithms(ARRAY_SIZE(rtcr_algs));
+	tasklet_kill(&cdev->done_task);
+	return 0;
+}
+
+static const struct of_device_id rtcr_id_table[] = {
+	{ .compatible = "realtek,realtek-crypto" },
+	{}
+};
+MODULE_DEVICE_TABLE(of, rtcr_id_table);
+
+static struct platform_driver rtcr_driver = {
+	.probe = rtcr_crypto_probe,
+	.remove = rtcr_crypto_remove,
+	.driver = {
+		.name = "realtek-crypto",
+		.of_match_table = rtcr_id_table,
+	},
+};
+
+module_platform_driver(rtcr_driver);
+
+MODULE_AUTHOR("Markus Stockhausen ");
+MODULE_DESCRIPTION("Support for Realtek's cryptographic engine");
+MODULE_LICENSE("GPL");

From patchwork Thu Oct 13 18:40:23 2022
X-Patchwork-Submitter: Markus Stockhausen
X-Patchwork-Id: 614890
From: Markus Stockhausen
To: linux-crypto@vger.kernel.org
Cc: Markus Stockhausen
Subject: [PATCH 3/6] crypto/realtek: hash algorithms
Date: Thu, 13 Oct 2022 20:40:23 +0200
Message-Id: <20221013184026.63826-4-markus.stockhausen@gmx.de>
X-Mailer: git-send-email 2.37.3
In-Reply-To:
<20221013184026.63826-1-markus.stockhausen@gmx.de>
References: <20221013184026.63826-1-markus.stockhausen@gmx.de>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

Add MD5/SHA1 hash algorithms for the new Realtek crypto device.

Signed-off-by: Markus Stockhausen
---
 drivers/crypto/realtek/realtek_crypto_ahash.c | 406 ++++++++++++++++++
 1 file changed, 406 insertions(+)
 create mode 100644 drivers/crypto/realtek/realtek_crypto_ahash.c

-- 
2.37.3

diff --git a/drivers/crypto/realtek/realtek_crypto_ahash.c b/drivers/crypto/realtek/realtek_crypto_ahash.c
new file mode 100644
index 000000000000..eba83451fac1
--- /dev/null
+++ b/drivers/crypto/realtek/realtek_crypto_ahash.c
@@ -0,0 +1,406 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Crypto acceleration support for Realtek crypto engine. Based on ideas from
+ * Rockchip & SafeXcel driver plus Realtek OpenWrt RTK.
+ *
+ * Copyright (c) 2022, Markus Stockhausen
+ */
+
+#include
+#include
+
+#include "realtek_crypto.h"
+
+static inline struct ahash_request *fallback_request_ctx(struct ahash_request *areq)
+{
+	char *p = (char *)ahash_request_ctx(areq);
+
+	return (struct ahash_request *)(p + offsetof(struct rtcr_ahash_req, vector));
+}
+
+static inline void *fallback_export_state(void *export)
+{
+	char *p = (char *)export;
+
+	return (void *)(p + offsetof(struct rtcr_ahash_req, vector));
+}
+
+static int rtcr_process_hash(struct ahash_request *areq, int opmode)
+{
+	unsigned int len, nextbuflen, datalen, padlen, reqlen;
+	struct rtcr_ahash_req *hreq = ahash_request_ctx(areq);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+	struct rtcr_ahash_ctx *hctx = crypto_ahash_ctx(tfm);
+	int sgcnt = hreq->state & RTCR_REQ_SG_MASK;
+	struct rtcr_crypto_dev *cdev = hctx->cdev;
+	struct scatterlist *sg = areq->src;
+	int idx, srcidx, dstidx, ret;
+	u64 pad[RTCR_HASH_PAD_SIZE];
+	char *ppad;
+
+	/* Quick checks if processing is really needed */
+	if (unlikely(!areq->nbytes) && !(opmode &
RTCR_HASH_FINAL)) + return 0; + + if (hreq->buflen + areq->nbytes < 64 && !(opmode & RTCR_HASH_FINAL)) { + hreq->buflen += sg_pcopy_to_buffer(areq->src, sg_nents(areq->src), + hreq->buf + hreq->buflen, + areq->nbytes, 0); + return 0; + } + + /* calculate required parts of the request */ + datalen = (opmode & RTCR_HASH_UPDATE) ? areq->nbytes : 0; + if (opmode & RTCR_HASH_FINAL) { + nextbuflen = 0; + padlen = 64 - ((hreq->buflen + datalen) & 63); + if (padlen < 9) + padlen += 64; + hreq->totallen += hreq->buflen + datalen; + + memset(pad, 0, sizeof(pad) - sizeof(u64)); + ppad = (char *)&pad[RTCR_HASH_PAD_SIZE] - padlen; + *ppad = 0x80; + pad[RTCR_HASH_PAD_SIZE - 1] = hreq->state & RTCR_REQ_MD5 ? + cpu_to_le64(hreq->totallen << 3) : + cpu_to_be64(hreq->totallen << 3); + } else { + nextbuflen = (hreq->buflen + datalen) & 63; + padlen = 0; + datalen -= nextbuflen; + hreq->totallen += hreq->buflen + datalen; + } + reqlen = hreq->buflen + datalen + padlen; + + /* Write back any uncommitted data to memory. */ + if (hreq->buflen) + dma_sync_single_for_device(cdev->dev, virt_to_phys(hreq->buf), + hreq->buflen, DMA_TO_DEVICE); + if (padlen) + dma_sync_single_for_device(cdev->dev, virt_to_phys(ppad), + padlen, DMA_TO_DEVICE); + if (datalen) + dma_map_sg(cdev->dev, sg, sgcnt, DMA_TO_DEVICE); + + /* Get free space in the ring */ + sgcnt = 1 + (hreq->buflen ? 1 : 0) + (datalen ? sgcnt : 0) + (padlen ? 1 : 0); + + ret = rtcr_alloc_ring(cdev, sgcnt, &srcidx, &dstidx, RTCR_WB_LEN_HASH, NULL); + if (ret) + return ret; + /* + * Feed input data into the rings. Start with destination ring and fill + * source ring afterwards. Ensure that the owner flag of the first source + * ring is the last that becomes visible to the engine. 
+	 */
+	rtcr_add_dst_to_ring(cdev, dstidx, NULL, 0, hreq->vector, 0);
+
+	idx = srcidx;
+	if (hreq->buflen) {
+		idx = rtcr_inc_src_idx(idx, 1);
+		rtcr_add_src_to_ring(cdev, idx, hreq->buf, hreq->buflen, reqlen);
+	}
+
+	while (datalen) {
+		len = min(sg_dma_len(sg), datalen);
+
+		idx = rtcr_inc_src_idx(idx, 1);
+		rtcr_add_src_to_ring(cdev, idx, sg_virt(sg), len, reqlen);
+
+		datalen -= len;
+		if (datalen)
+			sg = sg_next(sg);
+	}
+
+	if (padlen) {
+		idx = rtcr_inc_src_idx(idx, 1);
+		rtcr_add_src_to_ring(cdev, idx, ppad, padlen, reqlen);
+	}
+
+	rtcr_add_src_pad_to_ring(cdev, idx, reqlen);
+	rtcr_add_src_ahash_to_ring(cdev, srcidx, hctx->opmode, reqlen);
+
+	/* Off we go */
+	rtcr_kick_engine(cdev);
+	if (rtcr_wait_for_request(cdev, dstidx))
+		return -EINVAL;
+
+	hreq->state |= RTCR_REQ_FB_ACT;
+	hreq->buflen = nextbuflen;
+
+	if (nextbuflen)
+		sg_pcopy_to_buffer(sg, sg_nents(sg), hreq->buf, nextbuflen, len);
+	if (padlen)
+		memcpy(areq->result, hreq->vector, crypto_ahash_digestsize(tfm));
+
+	return 0;
+}
+
+static void rtcr_check_request(struct ahash_request *areq, int opmode)
+{
+	struct rtcr_ahash_req *hreq = ahash_request_ctx(areq);
+	struct scatterlist *sg = areq->src;
+	int reqlen, sgcnt, sgmax;
+
+	if (hreq->state & RTCR_REQ_FB_ACT)
+		return;
+
+	/* determine the request length before testing the fallback limit */
+	reqlen = areq->nbytes;
+	if (reqlen > RTCR_MAX_REQ_SIZE) {
+		hreq->state |= RTCR_REQ_FB_ACT;
+		return;
+	}
+
+	sgcnt = 0;
+	sgmax = RTCR_MAX_SG_AHASH - (hreq->buflen ?
1 : 0); + reqlen = areq->nbytes; + if (!(opmode & RTCR_HASH_FINAL)) { + reqlen -= (hreq->buflen + reqlen) & 63; + sgmax--; + } + + while (reqlen > 0) { + reqlen -= sg_dma_len(sg); + sgcnt++; + sg = sg_next(sg); + } + + if (sgcnt > sgmax) + hreq->state |= RTCR_REQ_FB_ACT; + else + hreq->state = (hreq->state & ~RTCR_REQ_SG_MASK) | sgcnt; +} + +static bool rtcr_check_fallback(struct ahash_request *areq) +{ + struct ahash_request *freq = fallback_request_ctx(areq); + struct rtcr_ahash_req *hreq = ahash_request_ctx(areq); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); + struct rtcr_ahash_ctx *hctx = crypto_ahash_ctx(tfm); + union rtcr_fallback_state state; + + if (!(hreq->state & RTCR_REQ_FB_ACT)) + return false; + + if (!(hreq->state & RTCR_REQ_FB_RDY)) { + /* Convert state to generic fallback state */ + if (hreq->state & RTCR_REQ_MD5) { + memcpy(state.md5.hash, hreq->vector, MD5_DIGEST_SIZE); + if (hreq->totallen) + cpu_to_le32_array(state.md5.hash, 4); + memcpy(state.md5.block, hreq->buf, SHA1_BLOCK_SIZE); + state.md5.byte_count = hreq->totallen + (u64)hreq->buflen; + } else { + memcpy(state.sha1.state, hreq->vector, SHA1_DIGEST_SIZE); + memcpy(state.sha1.buffer, &hreq->buf, SHA1_BLOCK_SIZE); + state.sha1.count = hreq->totallen + (u64)hreq->buflen; + } + } + + ahash_request_set_tfm(freq, hctx->fback); + ahash_request_set_crypt(freq, areq->src, areq->result, areq->nbytes); + + if (!(hreq->state & RTCR_REQ_FB_RDY)) { + crypto_ahash_import(freq, &state); + hreq->state |= RTCR_REQ_FB_RDY; + } + + return true; +} + +static int rtcr_ahash_init(struct ahash_request *areq) +{ + struct rtcr_ahash_req *hreq = ahash_request_ctx(areq); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); + int ds = crypto_ahash_digestsize(tfm); + + memset(hreq, 0, sizeof(*hreq)); + + hreq->vector[0] = SHA1_H0; + hreq->vector[1] = SHA1_H1; + hreq->vector[2] = SHA1_H2; + hreq->vector[3] = SHA1_H3; + hreq->vector[4] = SHA1_H4; + + hreq->state |= (ds == MD5_DIGEST_SIZE) ? 
RTCR_REQ_MD5 : RTCR_REQ_SHA1; + + return 0; +} + +static int rtcr_ahash_update(struct ahash_request *areq) +{ + struct ahash_request *freq = fallback_request_ctx(areq); + + rtcr_check_request(areq, RTCR_HASH_UPDATE); + if (rtcr_check_fallback(areq)) + return crypto_ahash_update(freq); + return rtcr_process_hash(areq, RTCR_HASH_UPDATE); +} + +static int rtcr_ahash_final(struct ahash_request *areq) +{ + struct ahash_request *freq = fallback_request_ctx(areq); + + if (rtcr_check_fallback(areq)) + return crypto_ahash_final(freq); + + return rtcr_process_hash(areq, RTCR_HASH_FINAL); +} + +static int rtcr_ahash_finup(struct ahash_request *areq) +{ + struct ahash_request *freq = fallback_request_ctx(areq); + + rtcr_check_request(areq, RTCR_HASH_FINAL | RTCR_HASH_UPDATE); + if (rtcr_check_fallback(areq)) + return crypto_ahash_finup(freq); + + return rtcr_process_hash(areq, RTCR_HASH_FINAL | RTCR_HASH_UPDATE); +} + +static int rtcr_ahash_digest(struct ahash_request *areq) +{ + struct ahash_request *freq = fallback_request_ctx(areq); + int ret; + + ret = rtcr_ahash_init(areq); + if (ret) + return ret; + + rtcr_check_request(areq, RTCR_HASH_FINAL | RTCR_HASH_UPDATE); + if (rtcr_check_fallback(areq)) + return crypto_ahash_digest(freq); + + return rtcr_process_hash(areq, RTCR_HASH_FINAL | RTCR_HASH_UPDATE); +} + +static int rtcr_ahash_import(struct ahash_request *areq, const void *in) +{ + const void *fexp = (const void *)fallback_export_state((void *)in); + struct ahash_request *freq = fallback_request_ctx(areq); + struct rtcr_ahash_req *hreq = ahash_request_ctx(areq); + const struct rtcr_ahash_req *hexp = in; + + hreq->state = hexp->state; + if (hreq->state & RTCR_REQ_FB_ACT) + hreq->state |= RTCR_REQ_FB_RDY; + + if (rtcr_check_fallback(areq)) + return crypto_ahash_import(freq, fexp); + + memcpy(hreq, hexp, sizeof(struct rtcr_ahash_req)); + + return 0; +} + +static int rtcr_ahash_export(struct ahash_request *areq, void *out) +{ + struct ahash_request *freq = 
fallback_request_ctx(areq); + struct rtcr_ahash_req *hreq = ahash_request_ctx(areq); + void *fexp = fallback_export_state(out); + struct rtcr_ahash_req *hexp = out; + + if (rtcr_check_fallback(areq)) { + hexp->state = hreq->state; + return crypto_ahash_export(freq, fexp); + } + + memcpy(hexp, hreq, sizeof(struct rtcr_ahash_req)); + + return 0; +} + +static int rtcr_ahash_cra_init(struct crypto_tfm *tfm) +{ + struct crypto_ahash *ahash = __crypto_ahash_cast(tfm); + struct rtcr_ahash_ctx *hctx = crypto_tfm_ctx(tfm); + struct rtcr_crypto_dev *cdev = hctx->cdev; + struct rtcr_alg_template *tmpl; + + tmpl = container_of(__crypto_ahash_alg(tfm->__crt_alg), + struct rtcr_alg_template, alg.ahash); + + hctx->cdev = tmpl->cdev; + hctx->opmode = tmpl->opmode; + hctx->fback = crypto_alloc_ahash(crypto_tfm_alg_name(tfm), 0, + CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK); + + if (IS_ERR(hctx->fback)) { + dev_err(cdev->dev, "could not allocate fallback for %s\n", + crypto_tfm_alg_name(tfm)); + return PTR_ERR(hctx->fback); + } + + crypto_ahash_set_reqsize(ahash, max(sizeof(struct rtcr_ahash_req), + offsetof(struct rtcr_ahash_req, vector) + + sizeof(struct ahash_request) + + crypto_ahash_reqsize(hctx->fback))); + + return 0; +} + +static void rtcr_ahash_cra_exit(struct crypto_tfm *tfm) +{ + struct rtcr_ahash_ctx *hctx = crypto_tfm_ctx(tfm); + + crypto_free_ahash(hctx->fback); +} + +struct rtcr_alg_template rtcr_ahash_md5 = { + .type = RTCR_ALG_AHASH, + .opmode = RTCR_SRC_OP_MS_HASH | RTCR_SRC_OP_HASH_MD5, + .alg.ahash = { + .init = rtcr_ahash_init, + .update = rtcr_ahash_update, + .final = rtcr_ahash_final, + .finup = rtcr_ahash_finup, + .export = rtcr_ahash_export, + .import = rtcr_ahash_import, + .digest = rtcr_ahash_digest, + .halg = { + .digestsize = MD5_DIGEST_SIZE, + /* statesize calculated during initialization */ + .base = { + .cra_name = "md5", + .cra_driver_name = "realtek-md5", + .cra_priority = 300, + .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, + 
.cra_blocksize = SHA1_BLOCK_SIZE,
+				.cra_ctxsize = sizeof(struct rtcr_ahash_ctx),
+				.cra_alignmask = 0,
+				.cra_init = rtcr_ahash_cra_init,
+				.cra_exit = rtcr_ahash_cra_exit,
+				.cra_module = THIS_MODULE,
+			}
+		}
+	}
+};
+
+struct rtcr_alg_template rtcr_ahash_sha1 = {
+	.type = RTCR_ALG_AHASH,
+	.opmode = RTCR_SRC_OP_MS_HASH | RTCR_SRC_OP_HASH_SHA1,
+	.alg.ahash = {
+		.init = rtcr_ahash_init,
+		.update = rtcr_ahash_update,
+		.final = rtcr_ahash_final,
+		.finup = rtcr_ahash_finup,
+		.export = rtcr_ahash_export,
+		.import = rtcr_ahash_import,
+		.digest = rtcr_ahash_digest,
+		.halg = {
+			.digestsize = SHA1_DIGEST_SIZE,
+			/* statesize calculated during initialization */
+			.base = {
+				.cra_name = "sha1",
+				.cra_driver_name = "realtek-sha1",
+				.cra_priority = 300,
+				.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
+				.cra_blocksize = SHA1_BLOCK_SIZE,
+				.cra_ctxsize = sizeof(struct rtcr_ahash_ctx),
+				.cra_alignmask = 0,
+				.cra_init = rtcr_ahash_cra_init,
+				.cra_exit = rtcr_ahash_cra_exit,
+				.cra_module = THIS_MODULE,
+			}
+		}
+	}
+};

From patchwork Thu Oct 13 18:40:24 2022
X-Patchwork-Submitter: Markus Stockhausen
X-Patchwork-Id: 615821
From: Markus Stockhausen
To: linux-crypto@vger.kernel.org
Cc: Markus Stockhausen
Subject: [PATCH 4/6] crypto/realtek: skcipher algorithms
Date: Thu, 13 Oct 2022 20:40:24 +0200
Message-Id: <20221013184026.63826-5-markus.stockhausen@gmx.de>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20221013184026.63826-1-markus.stockhausen@gmx.de>
References: <20221013184026.63826-1-markus.stockhausen@gmx.de>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

Add ecb(aes), cbc(aes) and ctr(aes) skcipher algorithms for the new Realtek crypto device.

Signed-off-by: Markus Stockhausen
---
 .../crypto/realtek/realtek_crypto_skcipher.c  | 361 ++++++++++++++++++
 1 file changed, 361 insertions(+)
 create mode 100644 drivers/crypto/realtek/realtek_crypto_skcipher.c

-- 
2.37.3

diff --git a/drivers/crypto/realtek/realtek_crypto_skcipher.c b/drivers/crypto/realtek/realtek_crypto_skcipher.c
new file mode 100644
index 000000000000..6e2cde77b4d4
--- /dev/null
+++ b/drivers/crypto/realtek/realtek_crypto_skcipher.c
@@ -0,0 +1,361 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Crypto acceleration support for Realtek crypto engine. Based on ideas from
+ * Rockchip & SafeXcel driver plus Realtek OpenWrt RTK.
+ * + * Copyright (c) 2022, Markus Stockhausen + */ + +#include +#include + +#include "realtek_crypto.h" + +static inline void rtcr_inc_iv(u8 *iv, int cnt) +{ + u32 *ctr = (u32 *)iv + 4; + u32 old, new, carry = cnt; + + /* avoid looping with crypto_inc() */ + do { + old = be32_to_cpu(*--ctr); + new = old + carry; + *ctr = cpu_to_be32(new); + carry = (new < old) && (ctr > (u32 *)iv) ? 1 : 0; + } while (carry); +} + +static inline void rtcr_cut_skcipher_len(int *reqlen, int opmode, u8 *iv) +{ + int len = min(*reqlen, RTCR_MAX_REQ_SIZE); + + if (opmode & RTCR_SRC_OP_CRYPT_CTR) { + /* limit data as engine does not wrap around cleanly */ + u32 ctr = be32_to_cpu(*((u32 *)iv + 3)); + int blocks = min(~ctr, 0x3fffu) + 1; + + len = min(blocks * AES_BLOCK_SIZE, len); + } + + *reqlen = len; +} + +static inline void rtcr_max_skcipher_len(int *reqlen, struct scatterlist **sg, + int *sgoff, int *sgcnt) +{ + int len, cnt, sgnoff, sgmax = RTCR_MAX_SG_SKCIPHER, datalen, maxlen = *reqlen; + struct scatterlist *sgn; + +redo: + datalen = cnt = 0; + sgnoff = *sgoff; + sgn = *sg; + + while (sgn && (datalen < maxlen) && (cnt < sgmax)) { + cnt++; + len = min((int)sg_dma_len(sgn) - sgnoff, maxlen - datalen); + datalen += len; + if (len + sgnoff < sg_dma_len(sgn)) { + sgnoff = sgnoff + len; + break; + } + sgn = sg_next(sgn); + sgnoff = 0; + if (unlikely((cnt == sgmax) && (datalen < AES_BLOCK_SIZE))) { + /* expand search to get at least one block */ + sgmax = AES_BLOCK_SIZE; + maxlen = min(maxlen, AES_BLOCK_SIZE); + } + } + + if (unlikely((datalen < maxlen) && (datalen & (AES_BLOCK_SIZE - 1)))) { + /* recalculate to get aligned size */ + maxlen = datalen & ~(AES_BLOCK_SIZE - 1); + goto redo; + } + + *sg = sgn; + *sgoff = sgnoff; + *sgcnt = cnt; + *reqlen = datalen; +} + +static int rtcr_process_skcipher(struct skcipher_request *sreq, int opmode) +{ + char *dataout, *iv, ivbk[AES_BLOCK_SIZE], datain[AES_BLOCK_SIZE]; + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(sreq); + struct 
rtcr_skcipher_ctx *sctx = crypto_skcipher_ctx(tfm); + int totallen = sreq->cryptlen, sgoff = 0, dgoff = 0; + int padlen, sgnoff, sgcnt, reqlen, ret, fblen; + struct rtcr_crypto_dev *cdev = sctx->cdev; + struct scatterlist *sg = sreq->src, *sgn; + int idx, srcidx, dstidx, len, datalen; + + if (!totallen) + return 0; + + if ((totallen & (AES_BLOCK_SIZE - 1)) && (!(opmode & RTCR_SRC_OP_CRYPT_CTR))) + return -EINVAL; + +redo: + sgnoff = sgoff; + sgn = sg; + datalen = totallen; + + /* limit input so that engine can process it */ + rtcr_cut_skcipher_len(&datalen, opmode, sreq->iv); + rtcr_max_skcipher_len(&datalen, &sgn, &sgnoff, &sgcnt); + + /* CTR padding */ + padlen = (AES_BLOCK_SIZE - datalen) & (AES_BLOCK_SIZE - 1); + reqlen = datalen + padlen; + + fblen = 0; + if (sgcnt > RTCR_MAX_SG_SKCIPHER) { + /* single AES block with too many SGs */ + fblen = datalen; + sg_pcopy_to_buffer(sg, sgcnt, datain, datalen, sgoff); + } + + if ((opmode & RTCR_SRC_OP_CRYPT_CBC) && + (!(opmode & RTCR_SRC_OP_KAM_ENC))) { + /* CBC decryption IV might get overwritten */ + sg_pcopy_to_buffer(sg, sgcnt, ivbk, AES_BLOCK_SIZE, + sgoff + datalen - AES_BLOCK_SIZE); + } + + /* Get free space in the ring */ + if (padlen || (datalen + dgoff > sg_dma_len(sreq->dst))) { + len = datalen; + } else { + len = RTCR_WB_LEN_SG_DIRECT; + dataout = sg_virt(sreq->dst) + dgoff; + } + + ret = rtcr_alloc_ring(cdev, 2 + (fblen ? 1 : sgcnt) + (padlen ? 
1 : 0), + &srcidx, &dstidx, len, &dataout); + if (ret) + return ret; + + /* Write back any uncommitted data to memory */ + if (dataout == sg_virt(sreq->src) + sgoff) { + dma_map_sg(cdev->dev, sg, sgcnt, DMA_BIDIRECTIONAL); + } else { + dma_sync_single_for_device(cdev->dev, virt_to_phys(dataout), + reqlen, DMA_BIDIRECTIONAL); + if (fblen) + dma_sync_single_for_device(cdev->dev, virt_to_phys(datain), + reqlen, DMA_TO_DEVICE); + else + dma_map_sg(cdev->dev, sg, sgcnt, DMA_TO_DEVICE); + } + + if (sreq->iv) + dma_sync_single_for_device(cdev->dev, virt_to_phys(sreq->iv), + AES_BLOCK_SIZE, DMA_TO_DEVICE); + /* + * Feed input data into the rings. Start with destination ring and fill + * source ring afterwards. Ensure that the owner flag of the first source + * ring is the last that becomes visible to the engine. + */ + rtcr_add_dst_to_ring(cdev, dstidx, dataout, reqlen, sreq->dst, dgoff); + + idx = rtcr_inc_src_idx(srcidx, 1); + rtcr_add_src_to_ring(cdev, idx, sreq->iv, AES_BLOCK_SIZE, reqlen); + + if (fblen) { + idx = rtcr_inc_src_idx(idx, 1); + rtcr_add_src_to_ring(cdev, idx, (void *)datain, fblen, reqlen); + } + + datalen -= fblen; + while (datalen) { + len = min((int)sg_dma_len(sg) - sgoff, datalen); + + idx = rtcr_inc_src_idx(idx, 1); + rtcr_add_src_to_ring(cdev, idx, sg_virt(sg) + sgoff, len, reqlen); + + datalen -= len; + sg = sg_next(sg); + sgoff = 0; + } + + if (padlen) { + idx = rtcr_inc_src_idx(idx, 1); + rtcr_add_src_to_ring(cdev, idx, (void *)empty_zero_page, padlen, reqlen); + } + + rtcr_add_src_pad_to_ring(cdev, idx, reqlen); + rtcr_add_src_skcipher_to_ring(cdev, srcidx, opmode, reqlen, sctx); + + /* Off we go */ + rtcr_kick_engine(cdev); + if (rtcr_wait_for_request(cdev, dstidx)) + return -EINVAL; + + /* Handle IV feedback as engine does not provide it */ + if (opmode & RTCR_SRC_OP_CRYPT_CTR) { + rtcr_inc_iv(sreq->iv, reqlen / AES_BLOCK_SIZE); + } else if (opmode & RTCR_SRC_OP_CRYPT_CBC) { + iv = opmode & RTCR_SRC_OP_KAM_ENC ? 
+ dataout + reqlen - AES_BLOCK_SIZE : ivbk; + memcpy(sreq->iv, iv, AES_BLOCK_SIZE); + } + + sg = sgn; + sgoff = sgnoff; + dgoff += reqlen; + totallen -= min(reqlen, totallen); + + if (totallen) + goto redo; + + return 0; +} + +static int rtcr_skcipher_encrypt(struct skcipher_request *sreq) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(sreq); + struct rtcr_skcipher_ctx *sctx = crypto_skcipher_ctx(tfm); + int opmode = sctx->opmode | RTCR_SRC_OP_KAM_ENC; + + return rtcr_process_skcipher(sreq, opmode); +} + +static int rtcr_skcipher_decrypt(struct skcipher_request *sreq) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(sreq); + struct rtcr_skcipher_ctx *sctx = crypto_skcipher_ctx(tfm); + int opmode = sctx->opmode; + + opmode |= sctx->opmode & RTCR_SRC_OP_CRYPT_CTR ? + RTCR_SRC_OP_KAM_ENC : RTCR_SRC_OP_KAM_DEC; + + return rtcr_process_skcipher(sreq, opmode); +} + +static int rtcr_skcipher_setkey(struct crypto_skcipher *cipher, + const u8 *key, unsigned int keylen) +{ + struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); + struct rtcr_skcipher_ctx *sctx = crypto_tfm_ctx(tfm); + struct rtcr_crypto_dev *cdev = sctx->cdev; + struct crypto_aes_ctx kctx; + int p, i; + + if (aes_expandkey(&kctx, key, keylen)) + return -EINVAL; + + sctx->keylen = keylen; + sctx->opmode = (sctx->opmode & ~RTCR_SRC_OP_CIPHER_MASK) | + RTCR_SRC_OP_CIPHER_FROM_KEY(keylen); + + memcpy(sctx->key_enc, key, keylen); + /* decryption key is derived from expanded key */ + p = ((keylen / 4) + 6) * 4; + for (i = 0; i < 8; i++) { + sctx->key_dec[i] = cpu_to_le32(kctx.key_enc[p + i]); + if (i == 3) + p -= keylen == AES_KEYSIZE_256 ? 
8 : 6; + } + + dma_sync_single_for_device(cdev->dev, virt_to_phys(sctx->key_enc), + 2 * AES_KEYSIZE_256, DMA_TO_DEVICE); + + return 0; +} + +static int rtcr_skcipher_cra_init(struct crypto_tfm *tfm) +{ + struct rtcr_skcipher_ctx *sctx = crypto_tfm_ctx(tfm); + struct rtcr_alg_template *tmpl; + + tmpl = container_of(tfm->__crt_alg, struct rtcr_alg_template, + alg.skcipher.base); + + sctx->cdev = tmpl->cdev; + sctx->opmode = tmpl->opmode; + + return 0; +} + +static void rtcr_skcipher_cra_exit(struct crypto_tfm *tfm) +{ + void *ctx = crypto_tfm_ctx(tfm); + + memzero_explicit(ctx, tfm->__crt_alg->cra_ctxsize); +} + +struct rtcr_alg_template rtcr_skcipher_ecb_aes = { + .type = RTCR_ALG_SKCIPHER, + .opmode = RTCR_SRC_OP_MS_CRYPTO | RTCR_SRC_OP_CRYPT_ECB, + .alg.skcipher = { + .setkey = rtcr_skcipher_setkey, + .encrypt = rtcr_skcipher_encrypt, + .decrypt = rtcr_skcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .base = { + .cra_name = "ecb(aes)", + .cra_driver_name = "realtek-ecb-aes", + .cra_priority = 300, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct rtcr_skcipher_ctx), + .cra_alignmask = 0, + .cra_init = rtcr_skcipher_cra_init, + .cra_exit = rtcr_skcipher_cra_exit, + .cra_module = THIS_MODULE, + }, + }, +}; + +struct rtcr_alg_template rtcr_skcipher_cbc_aes = { + .type = RTCR_ALG_SKCIPHER, + .opmode = RTCR_SRC_OP_MS_CRYPTO | RTCR_SRC_OP_CRYPT_CBC, + .alg.skcipher = { + .setkey = rtcr_skcipher_setkey, + .encrypt = rtcr_skcipher_encrypt, + .decrypt = rtcr_skcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .base = { + .cra_name = "cbc(aes)", + .cra_driver_name = "realtek-cbc-aes", + .cra_priority = 300, + .cra_flags = CRYPTO_ALG_ASYNC, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct rtcr_skcipher_ctx), + .cra_alignmask = 0, + .cra_init = rtcr_skcipher_cra_init, + .cra_exit = rtcr_skcipher_cra_exit, 
+			.cra_module = THIS_MODULE,
+		},
+	},
+};
+
+struct rtcr_alg_template rtcr_skcipher_ctr_aes = {
+	.type = RTCR_ALG_SKCIPHER,
+	.opmode = RTCR_SRC_OP_MS_CRYPTO | RTCR_SRC_OP_CRYPT_CTR,
+	.alg.skcipher = {
+		.setkey = rtcr_skcipher_setkey,
+		.encrypt = rtcr_skcipher_encrypt,
+		.decrypt = rtcr_skcipher_decrypt,
+		.min_keysize = AES_MIN_KEY_SIZE,
+		.max_keysize = AES_MAX_KEY_SIZE,
+		.ivsize = AES_BLOCK_SIZE,
+		.base = {
+			.cra_name = "ctr(aes)",
+			.cra_driver_name = "realtek-ctr-aes",
+			.cra_priority = 300,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = 1,
+			.cra_ctxsize = sizeof(struct rtcr_skcipher_ctx),
+			.cra_alignmask = 0,
+			.cra_init = rtcr_skcipher_cra_init,
+			.cra_exit = rtcr_skcipher_cra_exit,
+			.cra_module = THIS_MODULE,
+		},
+	},
+};

From patchwork Thu Oct 13 18:40:25 2022
X-Patchwork-Submitter: Markus Stockhausen
X-Patchwork-Id: 614891
From: Markus Stockhausen
To: linux-crypto@vger.kernel.org
Cc: Markus Stockhausen
Subject: [PATCH 5/6] crypto/realtek: enable module
Date: Thu, 13 Oct 2022 20:40:25 +0200
Message-Id: <20221013184026.63826-6-markus.stockhausen@gmx.de>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20221013184026.63826-1-markus.stockhausen@gmx.de>
References: <20221013184026.63826-1-markus.stockhausen@gmx.de>
MIME-Version: 1.0
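The Kconfig entry added by this patch gates the whole realtek/ directory. On an affected RTL838x/RTL930x target the driver would typically be enabled with a configuration fragment like the following (illustrative; it assumes the OF, MIPS and CPU_BIG_ENDIAN dependencies are already satisfied by the platform config):

```
CONFIG_CRYPTO_DEV_REALTEK=m
```

Building it in with `=y` works as well; `=m` produces the rtl_crypto.ko module named in the new Makefile.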
QAFmkLy8ZBdx+NH/E/HAavNuew/RZzjbZMwDCO1diGfLGsoU9M9Ig9maO7I8VDho6cpesmGe0 HlhzA3aBcdEN1zW/P/jCZCIqIEBNrocgaTEVRQoOvVtMPg1qMOA4Qcl6H/RyoLSyeAeh7rH/u eD+jo6ISl2bE40Yoe7hsXBEc2WgKdrskB5iY4B5T0x2uMPnmfHz7Y9b3+giLA8Qjmq0iiSjzN MW0cii+jMqExAb8DKspsxaPiWc5dM/PKhPMMZmuG6J1uvmBkbkGSwFskbVCA5TKhWpfP99jS8 VRIpvWncJYjUSgwNPz+YJcyjkJYdGxxUP6ceG+ImnEMNY8ZO+4TEka8VwgawFp1VdT8r8ol9t GQ2pEI0/3URaMYMCXDH8/sxIy2p3xkAYfoh7AwO7bEQ8kQk91I4m6YercyldCUDPAWE3gnwCi 9TTVW07/Qg5Z0kqwul24RfrB32JTAANE3mBs4bRX2y3JuIy15J2wL9vko8Gppc6P9eYEWzvCe WQAxF5nFSW8LFExKYTWIbmNH6pasJurVbej66exkdsWyL09+7gMfV4j6lxdJ784CfoypGTazN Az8fIFHkhoPAySUemMlAGZFJkmk5b58cDPAK0p57dts96rN/8lgEpcMZJ9oBctjdKjiSMIXnI 8Luerj0+8eAXT3WCXUh48bsEVqH9VpjrerZLCagExu/38f3+jhmHlYNfPumO8cC8vMlqBZoZe GLNTP6F+WCblHh5RqFzrWIg0VXsvfepA3Gsr5XHR81uk8 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Add new Realtek crypto device to the kernel configuration. Signed-off-by: Markus Stockhausen --- drivers/crypto/Kconfig | 13 +++++++++++++ drivers/crypto/Makefile | 1 + drivers/crypto/realtek/Makefile | 5 +++++ 3 files changed, 19 insertions(+) create mode 100644 drivers/crypto/realtek/Makefile -- 2.37.3 diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 55e75fbb658e..990a74f7ad97 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -666,6 +666,19 @@ config CRYPTO_DEV_IMGTEC_HASH hardware hash accelerator. Supporting MD5/SHA1/SHA224/SHA256 hashing algorithms. +config CRYPTO_DEV_REALTEK + tristate "Realtek's Cryptographic Engine driver" + depends on OF && MIPS && CPU_BIG_ENDIAN + select CRYPTO_MD5 + select CRYPTO_SHA1 + select CRYPTO_AES + help + This driver adds support for the Realtek crypto engine. It provides + hardware accelerated AES, SHA1 & MD5 algorithms. It is included in + SoCs of the RTL838x series, such as RTL8380, RTL8381, RTL8382, as + well as SoCs from the RTL930x series, such as RTL9301, RTL9302 and + RTL9303. 
+
 config CRYPTO_DEV_ROCKCHIP
 	tristate "Rockchip's Cryptographic Engine driver"
 	depends on OF && ARCH_ROCKCHIP

diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 116de173a66c..df4b4b7d7302 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -36,6 +36,7 @@ obj-$(CONFIG_CRYPTO_DEV_PPC4XX) += amcc/
 obj-$(CONFIG_CRYPTO_DEV_QAT) += qat/
 obj-$(CONFIG_CRYPTO_DEV_QCE) += qce/
 obj-$(CONFIG_CRYPTO_DEV_QCOM_RNG) += qcom-rng.o
+obj-$(CONFIG_CRYPTO_DEV_REALTEK) += realtek/
 obj-$(CONFIG_CRYPTO_DEV_ROCKCHIP) += rockchip/
 obj-$(CONFIG_CRYPTO_DEV_S5P) += s5p-sss.o
 obj-$(CONFIG_CRYPTO_DEV_SA2UL) += sa2ul.o

diff --git a/drivers/crypto/realtek/Makefile b/drivers/crypto/realtek/Makefile
new file mode 100644
index 000000000000..8d973bf1d520
--- /dev/null
+++ b/drivers/crypto/realtek/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_CRYPTO_DEV_REALTEK) += rtl_crypto.o
+rtl_crypto-objs := realtek_crypto.o \
+		   realtek_crypto_skcipher.o \
+		   realtek_crypto_ahash.o
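For context on the ctr(aes) template registered in patch 1/6 above: `.cra_blocksize` is 1 because CTR mode turns the AES block cipher into a stream cipher, so requests of any byte length are valid and encryption and decryption are the same XOR-with-keystream operation (which is why the hardware needs only one `RTCR_SRC_OP_CRYPT_CTR` opmode). The following is a minimal userspace sketch of that CTR construction, not the driver's code; `toy_block_encrypt` is a stand-in permutation, NOT real AES, and all names here are hypothetical:

```c
#include <stdint.h>
#include <string.h>

#define BLK 16	/* AES block size; CTR keystream is produced per block */

/* Stand-in for the AES block encryption the engine does in hardware.
 * This is NOT AES -- just a fixed byte permutation for illustration. */
static void toy_block_encrypt(const uint8_t in[BLK], uint8_t out[BLK])
{
	for (int i = 0; i < BLK; i++)
		out[i] = (uint8_t)((in[i] ^ 0xA5) + i);
}

/* Big-endian increment of the counter block, as ctr(aes) specifies. */
static void ctr_inc(uint8_t ctr[BLK])
{
	for (int i = BLK - 1; i >= 0; i--)
		if (++ctr[i])
			break;
}

/* CTR mode: XOR the data with the encrypted counter stream.  The same
 * function serves as encrypt and decrypt, and len may be any number of
 * bytes -- the reason the template sets .cra_blocksize = 1. */
static void ctr_crypt(const uint8_t iv[BLK], const uint8_t *in,
		      uint8_t *out, size_t len)
{
	uint8_t ctr[BLK], ks[BLK];

	memcpy(ctr, iv, BLK);
	for (size_t off = 0; off < len; off += BLK) {
		size_t n = len - off < BLK ? len - off : BLK;

		toy_block_encrypt(ctr, ks);
		for (size_t i = 0; i < n; i++)
			out[off + i] = in[off + i] ^ ks[i];
		ctr_inc(ctr);
	}
}
```

Calling `ctr_crypt` twice with the same IV round-trips the data, mirroring how the driver can route both `.encrypt` and `.decrypt` through the same engine operation.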
From patchwork Thu Oct 13 18:40:26 2022
X-Patchwork-Submitter: Markus Stockhausen
X-Patchwork-Id: 615823
From: Markus Stockhausen
To: linux-crypto@vger.kernel.org
Cc: Markus Stockhausen
Subject: [PATCH 6/6] crypto/realtek: add devicetree documentation
Date: Thu, 13 Oct 2022 20:40:26 +0200
Message-Id: <20221013184026.63826-7-markus.stockhausen@gmx.de>
In-Reply-To: <20221013184026.63826-1-markus.stockhausen@gmx.de>
References: <20221013184026.63826-1-markus.stockhausen@gmx.de>
X-Mailing-List: linux-crypto@vger.kernel.org

Add devicetree documentation of new Realtek crypto device.
Signed-off-by: Markus Stockhausen
---
 .../crypto/realtek,realtek-crypto.yaml        | 51 +++++++++++++++++++
 1 file changed, 51 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/crypto/realtek,realtek-crypto.yaml

-- 
2.37.3

diff --git a/Documentation/devicetree/bindings/crypto/realtek,realtek-crypto.yaml b/Documentation/devicetree/bindings/crypto/realtek,realtek-crypto.yaml
new file mode 100644
index 000000000000..443195e2d850
--- /dev/null
+++ b/Documentation/devicetree/bindings/crypto/realtek,realtek-crypto.yaml
@@ -0,0 +1,51 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/crypto/realtek,realtek-crypto.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Realtek crypto engine bindings
+
+maintainers:
+  - Markus Stockhausen
+
+description: |
+  The Realtek crypto engine provides hardware accelerated AES, SHA1 & MD5
+  algorithms. It is included in SoCs of the RTL838x series, such as RTL8380,
+  RTL8381, RTL8382, as well as SoCs from the RTL930x series, such as RTL9301,
+  RTL9302 and RTL9303.
+
+properties:
+  compatible:
+    const: realtek,realtek-crypto
+
+  reg:
+    minItems: 1
+    maxItems: 1
+
+  interrupt-parent:
+    minItems: 1
+    maxItems: 1
+
+  interrupts:
+    minItems: 1
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+  - interrupt-parent
+  - interrupts
+
+additionalProperties: false
+
+examples:
+  - |
+    crypto0: crypto@c000 {
+        compatible = "realtek,realtek-crypto";
+        reg = <0xc000 0x10>;
+        interrupt-parent = <&intc>;
+        interrupts = <22 3>;
+    };
+
+...
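As a side note on the binding's example node: with the default one address cell and one size cell, `reg = <0xc000 0x10>` is stored in the flattened devicetree as two big-endian 32-bit cells, base address then length, which the kernel ultimately turns into a resource (via of_address_to_resource()). A toy userspace decoder of that cell layout, purely for illustration and with hypothetical names:

```c
#include <stdint.h>

/* Devicetree property values are arrays of big-endian 32-bit cells.
 * With #address-cells = <1> and #size-cells = <1>, one reg entry is
 * two cells: base address followed by region length. */
struct dt_reg {
	uint32_t base;
	uint32_t size;
};

/* Read one big-endian 32-bit cell from raw property bytes. */
static uint32_t be32_cell(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Decode a single <base size> reg entry, e.g. <0xc000 0x10>. */
static struct dt_reg parse_reg(const uint8_t *prop)
{
	struct dt_reg r;

	r.base = be32_cell(prop);	/* first cell: base address */
	r.size = be32_cell(prop + 4);	/* second cell: region length */
	return r;
}
```

For the example node, the raw bytes `00 00 c0 00 00 00 00 10` decode to base 0xc000 and a 0x10-byte register window.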