From patchwork Sun Jun 4 08:02:22 2017
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 101329
From: Gilad Ben-Yossef
To: Greg Kroah-Hartman
Cc: Joe Perches, Ofir Drang, linux-kernel@vger.kernel.org,
 linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org,
 devel@driverdev.osuosl.org
Subject: [PATCH v3 01/18] staging: ccree: replace bit shift with BIT macro
Date: Sun, 4 Jun 2017 11:02:22 +0300
Message-Id: <1496563362-7954-2-git-send-email-gilad@benyossef.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1496563362-7954-1-git-send-email-gilad@benyossef.com>
References: <1496563362-7954-1-git-send-email-gilad@benyossef.com>
X-Mailing-List: linux-crypto@vger.kernel.org

CC_CTX_SIZE was being defined using a hand-rolled bit shift operation.
Replace it with the BIT() macro.

Signed-off-by: Gilad Ben-Yossef
---
 drivers/staging/ccree/cc_crypto_ctx.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.1.4

diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h
index 1fc68de..20f3f9f 100644
--- a/drivers/staging/ccree/cc_crypto_ctx.h
+++ b/drivers/staging/ccree/cc_crypto_ctx.h
@@ -27,7 +27,7 @@
 #define CC_CTX_SIZE_LOG2 7
 #endif
 #endif
-#define CC_CTX_SIZE (1 << CC_CTX_SIZE_LOG2)
+#define CC_CTX_SIZE BIT(CC_CTX_SIZE_LOG2)
 #define CC_DRV_CTX_SIZE_WORDS (CC_CTX_SIZE >> 2)
 #define CC_DRV_DES_IV_SIZE 8
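For reference, the kernel's BIT() macro expands to (1UL << (nr)), so the change
above is behaviour-preserving: CC_CTX_SIZE is still 128 bytes, only the spelling
of the constant changes. A minimal user-space sketch of the equivalence; the
helper definition below mirrors the kernel one, and CC_CTX_SIZE_LOG2 is taken
from the header touched by this patch:

#include <stdio.h>

/* Mirrors the kernel's BIT() helper: (1UL << (nr)). */
#define BIT(nr)			(1UL << (nr))

/* Value from cc_crypto_ctx.h above. */
#define CC_CTX_SIZE_LOG2	7
#define CC_CTX_SIZE		BIT(CC_CTX_SIZE_LOG2)		/* new form */
#define CC_CTX_SIZE_OLD		(1 << CC_CTX_SIZE_LOG2)		/* old form */

int main(void)
{
	/* Both expand to 128; only the intent becomes explicit. */
	printf("CC_CTX_SIZE     = %lu\n", CC_CTX_SIZE);
	printf("CC_CTX_SIZE_OLD = %d\n", CC_CTX_SIZE_OLD);
	return 0;
}

Built with any C compiler, both constants print 128.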
From patchwork Sun Jun 4 08:02:23 2017
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 101330
From: Gilad Ben-Yossef
To: Greg Kroah-Hartman
Cc: Joe Perches, Ofir Drang, linux-kernel@vger.kernel.org,
 linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org,
 devel@driverdev.osuosl.org
Subject: [PATCH v3 02/18] staging: ccree: refactor HW command FIFO access
Date: Sun, 4 Jun 2017 11:02:23 +0300
Message-Id: <1496563362-7954-3-git-send-email-gilad@benyossef.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1496563362-7954-1-git-send-email-gilad@benyossef.com>
References: <1496563362-7954-1-git-send-email-gilad@benyossef.com>
X-Mailing-List: linux-crypto@vger.kernel.org

The HW command FIFO in ccree was programmed through a set of macros that
suffer from two problems:

- Macros, unlike inline functions, provide no parameter type checking and
  risk side effects from multiple evaluation of their arguments.
- The field packing was implemented with hand-rolled bitfield operations.

Refactor the HW command queue access into a set of inline functions built
on the generic kernel bitfield infrastructure, resolving both issues and
opening the way to drop the hand-rolled bitfield macros once their
remaining users are removed by later patches in the series. (A minimal
sketch of the new packing pattern appears after the diff excerpt below.)

Signed-off-by: Gilad Ben-Yossef
---
 drivers/staging/ccree/cc_bitops.h        |   8 +-
 drivers/staging/ccree/cc_hw_queue_defs.h | 644 ++++++++++++----------
 drivers/staging/ccree/ssi_aead.c         | 901 +++++++++++++++----------------
 drivers/staging/ccree/ssi_cipher.c       | 224 ++++----
 drivers/staging/ccree/ssi_driver.h       |   4 +-
 drivers/staging/ccree/ssi_fips_ll.c      | 704 ++++++++++++------------
 drivers/staging/ccree/ssi_hash.c         | 832 ++++++++++++++--------------
 drivers/staging/ccree/ssi_ivgen.c        |  77 ++-
 drivers/staging/ccree/ssi_request_mgr.c  |  23 +-
 drivers/staging/ccree/ssi_sram_mgr.c     |   8 +-
 10 files changed, 1745 insertions(+), 1680 deletions(-)

-- 
2.1.4

diff --git a/drivers/staging/ccree/cc_bitops.h b/drivers/staging/ccree/cc_bitops.h
index ee5238a..cbdc1ab 100644
--- a/drivers/staging/ccree/cc_bitops.h
+++ b/drivers/staging/ccree/cc_bitops.h
@@ -21,8 +21,12 @@
 #ifndef _CC_BITOPS_H_
 #define _CC_BITOPS_H_
-#define BITMASK(mask_size) (((mask_size) < 32) ?
\ - ((1UL << (mask_size)) - 1) : 0xFFFFFFFFUL) +#include +#include + +#define BITMASK(mask_size) (((mask_size) < 32) ? \ + ((1UL << (mask_size)) - 1) : 0xFFFFFFFFUL) + #define BITMASK_AT(mask_size, mask_offset) (BITMASK(mask_size) << (mask_offset)) #define BITFIELD_GET(word, bit_offset, bit_size) \ diff --git a/drivers/staging/ccree/cc_hw_queue_defs.h b/drivers/staging/ccree/cc_hw_queue_defs.h index bb64fdd..c3f9d6a 100644 --- a/drivers/staging/ccree/cc_hw_queue_defs.h +++ b/drivers/staging/ccree/cc_hw_queue_defs.h @@ -23,24 +23,68 @@ #include "dx_crys_kernel.h" /****************************************************************************** - * DEFINITIONS - ******************************************************************************/ - -/* Dma AXI Secure bit */ -#define AXI_SECURE 0 -#define AXI_NOT_SECURE 1 +* DEFINITIONS +******************************************************************************/ #define HW_DESC_SIZE_WORDS 6 #define HW_QUEUE_SLOTS_MAX 15 /* Max. available slots in HW queue */ #define _HW_DESC_MONITOR_KICK 0x7FFFC00 +#define CC_REG_NAME(word, name) DX_DSCRPTR_QUEUE_WORD ## word ## _ ## name + +#define CC_REG_LOW(word, name) \ + (DX_DSCRPTR_QUEUE_WORD ## word ## _ ## name ## _BIT_SHIFT) + +#define CC_REG_HIGH(word, name) \ + (CC_REG_LOW(word, name) + \ + DX_DSCRPTR_QUEUE_WORD ## word ## _ ## name ## _BIT_SIZE - 1) + +#define CC_GENMASK(word, name) \ + GENMASK(CC_REG_HIGH(word, name), CC_REG_LOW(word, name)) + +#define WORD0_VALUE CC_GENMASK(0, VALUE) +#define WORD1_DIN_CONST_VALUE CC_GENMASK(1, DIN_CONST_VALUE) +#define WORD1_DIN_DMA_MODE CC_GENMASK(1, DIN_DMA_MODE) +#define WORD1_DIN_SIZE CC_GENMASK(1, DIN_SIZE) +#define WORD1_NOT_LAST CC_GENMASK(1, NOT_LAST) +#define WORD1_NS_BIT CC_GENMASK(1, NS_BIT) +#define WORD2_VALUE CC_GENMASK(2, VALUE) +#define WORD3_DOUT_DMA_MODE CC_GENMASK(3, DOUT_DMA_MODE) +#define WORD3_DOUT_LAST_IND CC_GENMASK(3, DOUT_LAST_IND) +#define WORD3_DOUT_SIZE CC_GENMASK(3, DOUT_SIZE) +#define WORD3_HASH_XOR_BIT CC_GENMASK(3, HASH_XOR_BIT) +#define WORD3_NS_BIT CC_GENMASK(3, NS_BIT) +#define WORD3_QUEUE_LAST_IND CC_GENMASK(3, QUEUE_LAST_IND) +#define WORD4_ACK_NEEDED CC_GENMASK(4, ACK_NEEDED) +#define WORD4_AES_SEL_N_HASH CC_GENMASK(4, AES_SEL_N_HASH) +#define WORD4_BYTES_SWAP CC_GENMASK(4, BYTES_SWAP) +#define WORD4_CIPHER_CONF0 CC_GENMASK(4, CIPHER_CONF0) +#define WORD4_CIPHER_CONF1 CC_GENMASK(4, CIPHER_CONF1) +#define WORD4_CIPHER_CONF2 CC_GENMASK(4, CIPHER_CONF2) +#define WORD4_CIPHER_DO CC_GENMASK(4, CIPHER_DO) +#define WORD4_CIPHER_MODE CC_GENMASK(4, CIPHER_MODE) +#define WORD4_CMAC_SIZE0 CC_GENMASK(4, CMAC_SIZE0) +#define WORD4_DATA_FLOW_MODE CC_GENMASK(4, DATA_FLOW_MODE) +#define WORD4_KEY_SIZE CC_GENMASK(4, KEY_SIZE) +#define WORD4_SETUP_OPERATION CC_GENMASK(4, SETUP_OPERATION) +#define WORD5_DIN_ADDR_HIGH CC_GENMASK(5, DIN_ADDR_HIGH) +#define WORD5_DOUT_ADDR_HIGH CC_GENMASK(5, DOUT_ADDR_HIGH) + /****************************************************************************** - * TYPE DEFINITIONS - ******************************************************************************/ +* TYPE DEFINITIONS +******************************************************************************/ struct cc_hw_desc { - u32 word[HW_DESC_SIZE_WORDS]; + union { + u32 word[HW_DESC_SIZE_WORDS]; + u16 hword[HW_DESC_SIZE_WORDS * 2]; + }; +}; + +enum cc_axi_sec { + AXI_SECURE = 0, + AXI_NOT_SECURE = 1 }; enum cc_desc_direction { @@ -164,369 +208,403 @@ enum cc_hw_des_key_size { /* Descriptor packing macros */ /*****************************/ -#define 
GET_HW_Q_DESC_WORD_IDX(descWordIdx) (CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD ## descWordIdx)) - -#define HW_DESC_INIT(pDesc) memset(pDesc, 0, sizeof(struct cc_hw_desc)) +/* + * Init a HW descriptor struct + * @pdesc: pointer HW descriptor struct + */ +static inline void hw_desc_init(struct cc_hw_desc *pdesc) +{ + memset(pdesc, 0, sizeof(struct cc_hw_desc)); +} -/*! - * This macro indicates the end of current HW descriptors flow and release the HW engines. +/* + * Indicates the end of current HW descriptors flow and release the HW engines. * - * \param pDesc pointer HW descriptor struct + * @pdesc: pointer HW descriptor struct */ -#define HW_DESC_SET_QUEUE_LAST_IND(pDesc) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, QUEUE_LAST_IND, (pDesc)->word[3], 1); \ - } while (0) +static inline void set_queue_last_ind(struct cc_hw_desc *pdesc) +{ + pdesc->word[3] |= FIELD_PREP(WORD3_QUEUE_LAST_IND, 1); +} -/*! - * This macro signs the end of HW descriptors flow by asking for completion ack, and release the HW engines +/* + * Signs the end of HW descriptors flow by asking for completion ack, + * and release the HW engines * - * \param pDesc pointer HW descriptor struct + * @pdesc: pointer HW descriptor struct */ -#define HW_DESC_SET_ACK_LAST(pDesc) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, QUEUE_LAST_IND, (pDesc)->word[3], 1); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, ACK_NEEDED, (pDesc)->word[4], 1); \ - } while (0) +static inline void set_ack_last(struct cc_hw_desc *pdesc) +{ + pdesc->word[3] |= FIELD_PREP(WORD3_QUEUE_LAST_IND, 1); + pdesc->word[4] |= FIELD_PREP(WORD4_ACK_NEEDED, 1); +} #define MSB64(_addr) (sizeof(_addr) == 4 ? 0 : ((_addr) >> 32) & U16_MAX) -/*! - * This macro sets the DIN field of a HW descriptors +/* + * Set the DIN field of a HW descriptors * - * \param pDesc pointer HW descriptor struct - * \param dmaMode The DMA mode: NO_DMA, SRAM, DLLI, MLLI, CONSTANT - * \param dinAdr DIN address - * \param dinSize Data size in bytes - * \param axiNs AXI secure bit - */ -#define HW_DESC_SET_DIN_TYPE(pDesc, dmaMode, dinAdr, dinSize, axiNs) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (dinAdr) & U32_MAX); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DIN_ADDR_HIGH, (pDesc)->word[5], MSB64(dinAdr)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_DMA_MODE, (pDesc)->word[1], (dmaMode)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, NS_BIT, (pDesc)->word[1], (axiNs)); \ - } while (0) + * @pdesc: pointer HW descriptor struct + * @dma_mode: dmaMode The DMA mode: NO_DMA, SRAM, DLLI, MLLI, CONSTANT + * @addr: dinAdr DIN address + * @size: Data size in bytes + * @axi_sec: AXI secure bit + */ +static inline void set_din_type(struct cc_hw_desc *pdesc, + enum cc_dma_mode dma_mode, dma_addr_t addr, + u32 size, enum cc_axi_sec axi_sec) +{ + pdesc->word[0] = (u32)addr; +#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT + pdesc->word[5] |= FIELD_PREP(WORD5_DIN_ADDR_HIGH, ((u16)(addr >> 32))); +#endif + pdesc->word[1] |= FIELD_PREP(WORD1_DIN_DMA_MODE, dma_mode) | + FIELD_PREP(WORD1_DIN_SIZE, size) | + FIELD_PREP(WORD1_NS_BIT, axi_sec); +} -/*! - * This macro sets the DIN field of a HW descriptors to NO DMA mode. Used for NOP descriptor, register patches and - * other special modes +/* + * Set the DIN field of a HW descriptors to NO DMA mode. + * Used for NOP descriptor, register patches and other special modes. 
* - * \param pDesc pointer HW descriptor struct - * \param dinAdr DIN address - * \param dinSize Data size in bytes + * @pdesc: pointer HW descriptor struct + * @addr: DIN address + * @size: Data size in bytes */ -#define HW_DESC_SET_DIN_NO_DMA(pDesc, dinAdr, dinSize) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (u32)(dinAdr)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize)); \ - } while (0) +static inline void set_din_no_dma(struct cc_hw_desc *pdesc, u32 addr, u32 size) +{ + pdesc->word[0] = addr; + pdesc->word[1] |= FIELD_PREP(WORD1_DIN_SIZE, size); +} -/*! - * This macro sets the DIN field of a HW descriptors to SRAM mode. +/* + * Set the DIN field of a HW descriptors to SRAM mode. * Note: No need to check SRAM alignment since host requests do not use SRAM and * adaptor will enforce alignment check. * - * \param pDesc pointer HW descriptor struct - * \param dinAdr DIN address - * \param dinSize Data size in bytes + * @pdesc: pointer HW descriptor struct + * @addr: DIN address + * @size Data size in bytes */ -#define HW_DESC_SET_DIN_SRAM(pDesc, dinAdr, dinSize) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (u32)(dinAdr)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_DMA_MODE, (pDesc)->word[1], DMA_SRAM); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize)); \ - } while (0) +static inline void set_din_sram(struct cc_hw_desc *pdesc, dma_addr_t addr, + u32 size) +{ + pdesc->word[0] = (u32)addr; + pdesc->word[1] |= FIELD_PREP(WORD1_DIN_SIZE, size) | + FIELD_PREP(WORD1_DIN_DMA_MODE, DMA_SRAM); +} -/*! This macro sets the DIN field of a HW descriptors to CONST mode +/* + * Set the DIN field of a HW descriptors to CONST mode * - * \param pDesc pointer HW descriptor struct - * \param val DIN const value - * \param dinSize Data size in bytes + * @pdesc: pointer HW descriptor struct + * @val: DIN const value + * @size: Data size in bytes */ -#define HW_DESC_SET_DIN_CONST(pDesc, val, dinSize) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (u32)(val)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_CONST_VALUE, (pDesc)->word[1], 1); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_DMA_MODE, (pDesc)->word[1], DMA_SRAM); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize)); \ - } while (0) +static inline void set_din_const(struct cc_hw_desc *pdesc, u32 val, u32 size) +{ + pdesc->word[0] = val; + pdesc->word[1] |= FIELD_PREP(WORD1_DIN_CONST_VALUE, 1) | + FIELD_PREP(WORD1_DIN_DMA_MODE, DMA_SRAM) | + FIELD_PREP(WORD1_DIN_SIZE, size); +} -/*! - * This macro sets the DIN not last input data indicator +/* + * Set the DIN not last input data indicator * - * \param pDesc pointer HW descriptor struct + * @pdesc: pointer HW descriptor struct */ -#define HW_DESC_SET_DIN_NOT_LAST_INDICATION(pDesc) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, NOT_LAST, (pDesc)->word[1], 1); \ - } while (0) +static inline void set_din_not_last_indication(struct cc_hw_desc *pdesc) +{ + pdesc->word[1] |= FIELD_PREP(WORD1_NOT_LAST, 1); +} -/*! 
- * This macro sets the DOUT field of a HW descriptors +/* + * Set the DOUT field of a HW descriptors * - * \param pDesc pointer HW descriptor struct - * \param dmaMode The DMA mode: NO_DMA, SRAM, DLLI, MLLI, CONSTANT - * \param doutAdr DOUT address - * \param doutSize Data size in bytes - * \param axiNs AXI secure bit + * @pdesc: pointer HW descriptor struct + * @dma_mode: The DMA mode: NO_DMA, SRAM, DLLI, MLLI, CONSTANT + * @addr: DOUT address + * @size: Data size in bytes + * @axi_sec: AXI secure bit */ -#define HW_DESC_SET_DOUT_TYPE(pDesc, dmaMode, doutAdr, doutSize, axiNs) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (doutAdr) & U32_MAX); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DOUT_ADDR_HIGH, (pDesc)->word[5], MSB64(doutAdr)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], (dmaMode)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, NS_BIT, (pDesc)->word[3], (axiNs)); \ - } while (0) +static inline void set_dout_type(struct cc_hw_desc *pdesc, + enum cc_dma_mode dma_mode, dma_addr_t addr, + u32 size, enum cc_axi_sec axi_sec) +{ + pdesc->word[2] = (u32)addr; +#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT + pdesc->word[5] |= FIELD_PREP(WORD5_DOUT_ADDR_HIGH, ((u16)(addr >> 32))); +#endif + pdesc->word[3] |= FIELD_PREP(WORD3_DOUT_DMA_MODE, dma_mode) | + FIELD_PREP(WORD3_DOUT_SIZE, size) | + FIELD_PREP(WORD3_NS_BIT, axi_sec); +} -/*! - * This macro sets the DOUT field of a HW descriptors to DLLI type +/* + * Set the DOUT field of a HW descriptors to DLLI type * The LAST INDICATION is provided by the user * - * \param pDesc pointer HW descriptor struct - * \param doutAdr DOUT address - * \param doutSize Data size in bytes - * \param lastInd The last indication bit - * \param axiNs AXI secure bit - */ -#define HW_DESC_SET_DOUT_DLLI(pDesc, doutAdr, doutSize, axiNs, lastInd) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (doutAdr) & U32_MAX); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DOUT_ADDR_HIGH, (pDesc)->word[5], MSB64(doutAdr)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], DMA_DLLI); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_LAST_IND, (pDesc)->word[3], lastInd); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, NS_BIT, (pDesc)->word[3], (axiNs)); \ - } while (0) + * @pdesc pointer HW descriptor struct + * @addr: DOUT address + * @size: Data size in bytes + * @last_ind: The last indication bit + * @axi_sec: AXI secure bit + */ +static inline void set_dout_dlli(struct cc_hw_desc *pdesc, dma_addr_t addr, + u32 size, enum cc_axi_sec axi_sec, + u32 last_ind) +{ + set_dout_type(pdesc, DMA_DLLI, addr, size, axi_sec); + pdesc->word[3] |= FIELD_PREP(WORD3_DOUT_LAST_IND, last_ind); +} -/*! 
- * This macro sets the DOUT field of a HW descriptors to DLLI type +/* + * Set the DOUT field of a HW descriptors to DLLI type * The LAST INDICATION is provided by the user * - * \param pDesc pointer HW descriptor struct - * \param doutAdr DOUT address - * \param doutSize Data size in bytes - * \param lastInd The last indication bit - * \param axiNs AXI secure bit - */ -#define HW_DESC_SET_DOUT_MLLI(pDesc, doutAdr, doutSize, axiNs, lastInd) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (doutAdr) & U32_MAX); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DOUT_ADDR_HIGH, (pDesc)->word[5], MSB64(doutAdr)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], DMA_MLLI); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_LAST_IND, (pDesc)->word[3], lastInd); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, NS_BIT, (pDesc)->word[3], (axiNs)); \ - } while (0) + * @pdesc: pointer HW descriptor struct + * @addr: DOUT address + * @size: Data size in bytes + * @last_ind: The last indication bit + * @axi_sec: AXI secure bit + */ +static inline void set_dout_mlli(struct cc_hw_desc *pdesc, dma_addr_t addr, + u32 size, enum cc_axi_sec axi_sec, + bool last_ind) +{ + set_dout_type(pdesc, DMA_MLLI, addr, size, axi_sec); + pdesc->word[3] |= FIELD_PREP(WORD3_DOUT_LAST_IND, last_ind); +} -/*! - * This macro sets the DOUT field of a HW descriptors to NO DMA mode. Used for NOP descriptor, register patches and - * other special modes +/* + * Set the DOUT field of a HW descriptors to NO DMA mode. + * Used for NOP descriptor, register patches and other special modes. * - * \param pDesc pointer HW descriptor struct - * \param doutAdr DOUT address - * \param doutSize Data size in bytes - * \param registerWriteEnable Enables a write operation to a register - */ -#define HW_DESC_SET_DOUT_NO_DMA(pDesc, doutAdr, doutSize, registerWriteEnable) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (u32)(doutAdr)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_LAST_IND, (pDesc)->word[3], (registerWriteEnable)); \ - } while (0) + * @pdesc: pointer HW descriptor struct + * @addr: DOUT address + * @size: Data size in bytes + * @write_enable: Enables a write operation to a register + */ +static inline void set_dout_no_dma(struct cc_hw_desc *pdesc, u32 addr, + u32 size, bool write_enable) +{ + pdesc->word[2] = addr; + pdesc->word[3] |= FIELD_PREP(WORD3_DOUT_SIZE, size) | + FIELD_PREP(WORD3_DOUT_LAST_IND, write_enable); +} -/*! - * This macro sets the word for the XOR operation. +/* + * Set the word for the XOR operation. * - * \param pDesc pointer HW descriptor struct - * \param xorVal xor data value + * @pdesc: pointer HW descriptor struct + * @val: xor data value */ -#define HW_DESC_SET_XOR_VAL(pDesc, xorVal) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (u32)(xorVal)); \ - } while (0) +static inline void set_xor_val(struct cc_hw_desc *pdesc, u32 val) +{ + pdesc->word[2] = val; +} -/*! 
- * This macro sets the XOR indicator bit in the descriptor +/* + * Sets the XOR indicator bit in the descriptor * - * \param pDesc pointer HW descriptor struct + * @pdesc: pointer HW descriptor struct */ -#define HW_DESC_SET_XOR_ACTIVE(pDesc) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, HASH_XOR_BIT, (pDesc)->word[3], 1); \ - } while (0) +static inline void set_xor_active(struct cc_hw_desc *pdesc) +{ + pdesc->word[3] |= FIELD_PREP(WORD3_HASH_XOR_BIT, 1); +} -/*! - * This macro selects the AES engine instead of HASH engine when setting up combined mode with AES XCBC MAC +/* + * Select the AES engine instead of HASH engine when setting up combined mode + * with AES XCBC MAC * - * \param pDesc pointer HW descriptor struct + * @pdesc: pointer HW descriptor struct */ -#define HW_DESC_SET_AES_NOT_HASH_MODE(pDesc) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, AES_SEL_N_HASH, (pDesc)->word[4], 1); \ - } while (0) +static inline void set_aes_not_hash_mode(struct cc_hw_desc *pdesc) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_AES_SEL_N_HASH, 1); +} -/*! - * This macro sets the DOUT field of a HW descriptors to SRAM mode +/* + * Set the DOUT field of a HW descriptors to SRAM mode * Note: No need to check SRAM alignment since host requests do not use SRAM and * adaptor will enforce alignment check. * - * \param pDesc pointer HW descriptor struct - * \param doutAdr DOUT address - * \param doutSize Data size in bytes + * @pdesc: pointer HW descriptor struct + * @addr: DOUT address + * @size: Data size in bytes */ -#define HW_DESC_SET_DOUT_SRAM(pDesc, doutAdr, doutSize) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (u32)(doutAdr)); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], DMA_SRAM); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize)); \ - } while (0) +static inline void set_dout_sram(struct cc_hw_desc *pdesc, u32 addr, u32 size) +{ + pdesc->word[2] = addr; + pdesc->word[3] |= FIELD_PREP(WORD3_DOUT_DMA_MODE, DMA_SRAM) | + FIELD_PREP(WORD3_DOUT_SIZE, size); +} -/*! - * This macro sets the data unit size for XEX mode in data_out_addr[15:0] +/* + * Sets the data unit size for XEX mode in data_out_addr[15:0] * - * \param pDesc pointer HW descriptor struct - * \param dataUnitSize data unit size for XEX mode + * @pdesc: pDesc pointer HW descriptor struct + * @size: data unit size for XEX mode */ -#define HW_DESC_SET_XEX_DATA_UNIT_SIZE(pDesc, dataUnitSize) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (u32)(dataUnitSize)); \ - } while (0) +static inline void set_xex_data_unit_size(struct cc_hw_desc *pdesc, u32 size) +{ + pdesc->word[2] = size; +} -/*! - * This macro sets the number of rounds for Multi2 in data_out_addr[15:0] +/* + * Set the number of rounds for Multi2 in data_out_addr[15:0] * - * \param pDesc pointer HW descriptor struct - * \param numRounds number of rounds for Multi2 + * @pdesc: pointer HW descriptor struct + * @num: number of rounds for Multi2 */ -#define HW_DESC_SET_MULTI2_NUM_ROUNDS(pDesc, numRounds) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (u32)(numRounds)); \ - } while (0) +static inline void set_multi2_num_rounds(struct cc_hw_desc *pdesc, u32 num) +{ + pdesc->word[2] = num; +} -/*! - * This macro sets the flow mode. +/* + * Set the flow mode. 
* - * \param pDesc pointer HW descriptor struct - * \param flowMode Any one of the modes defined in [CC7x-DESC] + * @pdesc: pointer HW descriptor struct + * @mode: Any one of the modes defined in [CC7x-DESC] */ +static inline void set_flow_mode(struct cc_hw_desc *pdesc, + enum cc_flow_mode mode) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_DATA_FLOW_MODE, mode); +} -#define HW_DESC_SET_FLOW_MODE(pDesc, flowMode) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, DATA_FLOW_MODE, (pDesc)->word[4], (flowMode)); \ - } while (0) +/* + * Set the cipher mode. + * + * @pdesc: pointer HW descriptor struct + * @mode: Any one of the modes defined in [CC7x-DESC] + */ +static inline void set_cipher_mode(struct cc_hw_desc *pdesc, + enum drv_cipher_mode mode) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_CIPHER_MODE, mode); +} -/*! - * This macro sets the cipher mode. +/* + * Set the cipher configuration fields. * - * \param pDesc pointer HW descriptor struct - * \param cipherMode Any one of the modes defined in [CC7x-DESC] + * @pdesc: pointer HW descriptor struct + * @mode: Any one of the modes defined in [CC7x-DESC] */ -#define HW_DESC_SET_CIPHER_MODE(pDesc, cipherMode) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_MODE, (pDesc)->word[4], (cipherMode)); \ - } while (0) +static inline void set_cipher_config0(struct cc_hw_desc *pdesc, + enum drv_crypto_direction mode) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_CIPHER_CONF0, mode); +} -/*! - * This macro sets the cipher configuration fields. +/* + * Set the cipher configuration fields. * - * \param pDesc pointer HW descriptor struct - * \param cipherConfig Any one of the modes defined in [CC7x-DESC] + * @pdesc: pointer HW descriptor struct + * @config: Any one of the modes defined in [CC7x-DESC] */ -#define HW_DESC_SET_CIPHER_CONFIG0(pDesc, cipherConfig) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_CONF0, (pDesc)->word[4], (cipherConfig)); \ - } while (0) +static inline void set_cipher_config1(struct cc_hw_desc *pdesc, + enum HashConfig1Padding config) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_CIPHER_CONF1, config); +} -/*! - * This macro sets the cipher configuration fields. +/* + * Set HW key configuration fields. * - * \param pDesc pointer HW descriptor struct - * \param cipherConfig Any one of the modes defined in [CC7x-DESC] + * @pdesc: pointer HW descriptor struct + * @hw_key: The HW key slot asdefined in enum cc_hw_crypto_key */ -#define HW_DESC_SET_CIPHER_CONFIG1(pDesc, cipherConfig) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_CONF1, (pDesc)->word[4], (cipherConfig)); \ - } while (0) +static inline void set_hw_crypto_key(struct cc_hw_desc *pdesc, + enum cc_hw_crypto_key hw_key) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_CIPHER_DO, + (hw_key & HW_KEY_MASK_CIPHER_DO)) | + FIELD_PREP(WORD4_CIPHER_CONF2, + (hw_key >> HW_KEY_SHIFT_CIPHER_CFG2)); +} -/*! - * This macro sets HW key configuration fields. +/* + * Set byte order of all setup-finalize descriptors. 
* - * \param pDesc pointer HW descriptor struct - * \param hwKey The hw key number as in enun HwCryptoKey + * @pdesc: pointer HW descriptor struct + * @config: Any one of the modes defined in [CC7x-DESC] */ -#define HW_DESC_SET_HW_CRYPTO_KEY(pDesc, hwKey) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_DO, (pDesc)->word[4], (hwKey) & HW_KEY_MASK_CIPHER_DO); \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_CONF2, (pDesc)->word[4], (hwKey >> HW_KEY_SHIFT_CIPHER_CFG2)); \ - } while (0) +static inline void set_bytes_swap(struct cc_hw_desc *pdesc, bool config) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_BYTES_SWAP, config); +} -/*! - * This macro changes the bytes order of all setup-finalize descriptosets. +/* + * Set CMAC_SIZE0 mode. * - * \param pDesc pointer HW descriptor struct - * \param swapConfig Any one of the modes defined in [CC7x-DESC] + * @pdesc: pointer HW descriptor struct */ -#define HW_DESC_SET_BYTES_SWAP(pDesc, swapConfig) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, BYTES_SWAP, (pDesc)->word[4], (swapConfig)); \ - } while (0) +static inline void set_cmac_size0_mode(struct cc_hw_desc *pdesc) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_CMAC_SIZE0, 1); +} -/*! - * This macro sets the CMAC_SIZE0 mode. +/* + * Set key size descriptor field. * - * \param pDesc pointer HW descriptor struct + * @pdesc: pointer HW descriptor struct + * @size: key size in bytes (NOT size code) */ -#define HW_DESC_SET_CMAC_SIZE0_MODE(pDesc) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CMAC_SIZE0, (pDesc)->word[4], 0x1); \ - } while (0) +static inline void set_key_size(struct cc_hw_desc *pdesc, u32 size) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_KEY_SIZE, size); +} -/*! - * This macro sets the key size for AES engine. +/* + * Set AES key size. * - * \param pDesc pointer HW descriptor struct - * \param keySize key size in bytes (NOT size code) + * @pdesc: pointer HW descriptor struct + * @size: key size in bytes (NOT size code) */ -#define HW_DESC_SET_KEY_SIZE_AES(pDesc, keySize) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, KEY_SIZE, (pDesc)->word[4], ((keySize) >> 3) - 2); \ - } while (0) +static inline void set_key_size_aes(struct cc_hw_desc *pdesc, u32 size) +{ + set_key_size(pdesc, ((size >> 3) - 2)); +} -/*! - * This macro sets the key size for DES engine. +/* + * Set DES key size. * - * \param pDesc pointer HW descriptor struct - * \param keySize key size in bytes (NOT size code) + * @pdesc: pointer HW descriptor struct + * @size: key size in bytes (NOT size code) */ -#define HW_DESC_SET_KEY_SIZE_DES(pDesc, keySize) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, KEY_SIZE, (pDesc)->word[4], ((keySize) >> 3) - 1); \ - } while (0) +static inline void set_key_size_des(struct cc_hw_desc *pdesc, u32 size) +{ + set_key_size(pdesc, ((size >> 3) - 1)); +} -/*! - * This macro sets the descriptor's setup mode +/* + * Set the descriptor setup mode * - * \param pDesc pointer HW descriptor struct - * \param setupMode Any one of the setup modes defined in [CC7x-DESC] + * @pdesc: pointer HW descriptor struct + * @mode: Any one of the setup modes defined in [CC7x-DESC] */ -#define HW_DESC_SET_SETUP_MODE(pDesc, setupMode) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, SETUP_OPERATION, (pDesc)->word[4], (setupMode)); \ - } while (0) +static inline void set_setup_mode(struct cc_hw_desc *pdesc, + enum cc_setup_op mode) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_SETUP_OPERATION, mode); +} -/*! 
- * This macro sets the descriptor's cipher do +/* + * Set the descriptor cipher DO * - * \param pDesc pointer HW descriptor struct - * \param cipherDo Any one of the cipher do defined in [CC7x-DESC] + * @pdesc: pointer HW descriptor struct + * @config: Any one of the cipher do defined in [CC7x-DESC] */ -#define HW_DESC_SET_CIPHER_DO(pDesc, cipherDo) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_DO, (pDesc)->word[4], (cipherDo) & HW_KEY_MASK_CIPHER_DO); \ - } while (0) +static inline void set_cipher_do(struct cc_hw_desc *pdesc, + enum HashCipherDoPadding config) +{ + pdesc->word[4] |= FIELD_PREP(WORD4_CIPHER_DO, + (config & HW_KEY_MASK_CIPHER_DO)); +} /*! * This macro sets the DIN field of a HW descriptors to star/stop monitor descriptor. diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c index ecf9ff2..b0db815 100644 --- a/drivers/staging/ccree/ssi_aead.c +++ b/drivers/staging/ccree/ssi_aead.c @@ -278,33 +278,36 @@ static void ssi_aead_complete(struct device *dev, void *ssi_req, void __iomem *c static int xcbc_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx) { /* Load the AES key */ - HW_DESC_INIT(&desc[0]); + hw_desc_init(&desc[0]); /* We are using for the source/user key the same buffer as for the output keys, * because after this key loading it is not needed anymore */ - HW_DESC_SET_DIN_TYPE(&desc[0], DMA_DLLI, ctx->auth_state.xcbc.xcbc_keys_dma_addr, ctx->auth_keylen, NS_BIT); - HW_DESC_SET_CIPHER_MODE(&desc[0], DRV_CIPHER_ECB); - HW_DESC_SET_CIPHER_CONFIG0(&desc[0], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[0], ctx->auth_keylen); - HW_DESC_SET_FLOW_MODE(&desc[0], S_DIN_to_AES); - HW_DESC_SET_SETUP_MODE(&desc[0], SETUP_LOAD_KEY0); - - HW_DESC_INIT(&desc[1]); - HW_DESC_SET_DIN_CONST(&desc[1], 0x01010101, CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[1], DIN_AES_DOUT); - HW_DESC_SET_DOUT_DLLI(&desc[1], ctx->auth_state.xcbc.xcbc_keys_dma_addr, AES_KEYSIZE_128, NS_BIT, 0); - - HW_DESC_INIT(&desc[2]); - HW_DESC_SET_DIN_CONST(&desc[2], 0x02020202, CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[2], DIN_AES_DOUT); - HW_DESC_SET_DOUT_DLLI(&desc[2], (ctx->auth_state.xcbc.xcbc_keys_dma_addr + set_din_type(&desc[0], DMA_DLLI, + ctx->auth_state.xcbc.xcbc_keys_dma_addr, ctx->auth_keylen, + NS_BIT); + set_cipher_mode(&desc[0], DRV_CIPHER_ECB); + set_cipher_config0(&desc[0], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_key_size_aes(&desc[0], ctx->auth_keylen); + set_flow_mode(&desc[0], S_DIN_to_AES); + set_setup_mode(&desc[0], SETUP_LOAD_KEY0); + + hw_desc_init(&desc[1]); + set_din_const(&desc[1], 0x01010101, CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[1], DIN_AES_DOUT); + set_dout_dlli(&desc[1], ctx->auth_state.xcbc.xcbc_keys_dma_addr, + AES_KEYSIZE_128, NS_BIT, 0); + + hw_desc_init(&desc[2]); + set_din_const(&desc[2], 0x02020202, CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[2], DIN_AES_DOUT); + set_dout_dlli(&desc[2], (ctx->auth_state.xcbc.xcbc_keys_dma_addr + AES_KEYSIZE_128), AES_KEYSIZE_128, NS_BIT, 0); - HW_DESC_INIT(&desc[3]); - HW_DESC_SET_DIN_CONST(&desc[3], 0x03030303, CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[3], DIN_AES_DOUT); - HW_DESC_SET_DOUT_DLLI(&desc[3], (ctx->auth_state.xcbc.xcbc_keys_dma_addr + hw_desc_init(&desc[3]); + set_din_const(&desc[3], 0x03030303, CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[3], DIN_AES_DOUT); + set_dout_dlli(&desc[3], (ctx->auth_state.xcbc.xcbc_keys_dma_addr + 2 * AES_KEYSIZE_128), AES_KEYSIZE_128, NS_BIT, 0); @@ -326,52 +329,51 @@ static 
int hmac_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx) /* calc derived HMAC key */ for (i = 0; i < 2; i++) { /* Load hash initial state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); - HW_DESC_SET_DIN_SRAM(&desc[idx], - ssi_ahash_get_larval_digest_sram_addr( + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hash_mode); + set_din_sram(&desc[idx], + ssi_ahash_get_larval_digest_sram_addr( ctx->drvdata, ctx->auth_mode), - digest_size); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + digest_size); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load the hash current length*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hash_mode); + set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Prepare ipad key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_XOR_VAL(&desc[idx], hmacPadConst[i]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + hw_desc_init(&desc[idx]); + set_xor_val(&desc[idx], hmacPadConst[i]); + set_cipher_mode(&desc[idx], hash_mode); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); idx++; /* Perform HASH update */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - ctx->auth_state.hmac.padded_authkey_dma_addr, - SHA256_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); - HW_DESC_SET_XOR_ACTIVE(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + ctx->auth_state.hmac.padded_authkey_dma_addr, + SHA256_BLOCK_SIZE, NS_BIT); + set_cipher_mode(&desc[idx], hash_mode); + set_xor_active(&desc[idx]); + set_flow_mode(&desc[idx], DIN_HASH); idx++; /* Get the digset */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - (ctx->auth_state.hmac.ipad_opad_dma_addr + - digest_ofs), - digest_size, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hash_mode); + set_dout_dlli(&desc[idx], + (ctx->auth_state.hmac.ipad_opad_dma_addr + + digest_ofs), digest_size, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); idx++; digest_ofs += digest_size; @@ -466,84 +468,74 @@ ssi_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key, unsigned int keyl SSI_UPDATE_DMA_ADDR_TO_48BIT(key_dma_addr, keylen); if (keylen > blocksize) { /* Load hash initial state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode); - HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr, digestsize); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hashmode); + 
set_din_sram(&desc[idx], larval_addr, digestsize); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load the hash current length*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hashmode); + set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - key_dma_addr, - keylen, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + key_dma_addr, keylen, NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; /* Get hashed key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - padded_authkey_dma_addr, - digestsize, - NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], - HASH_PADDING_DISABLED); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], - HASH_DIGEST_RESULT_LITTLE_ENDIAN); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hashmode); + set_dout_dlli(&desc[idx], padded_authkey_dma_addr, + digestsize, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); + set_cipher_config0(&desc[idx], + HASH_DIGEST_RESULT_LITTLE_ENDIAN); idx++; - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - digestsize)); - HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - (padded_authkey_dma_addr + digestsize), - (blocksize - digestsize), - NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0, (blocksize - digestsize)); + set_flow_mode(&desc[idx], BYPASS); + set_dout_dlli(&desc[idx], (padded_authkey_dma_addr + + digestsize), (blocksize - digestsize), + NS_BIT, 0); idx++; } else { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - key_dma_addr, - keylen, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - (padded_authkey_dma_addr), - keylen, NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, key_dma_addr, + keylen, NS_BIT); + set_flow_mode(&desc[idx], BYPASS); + set_dout_dlli(&desc[idx], padded_authkey_dma_addr, + keylen, NS_BIT, 0); idx++; if ((blocksize - keylen) != 0) { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, - (blocksize - keylen)); - HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - (padded_authkey_dma_addr + keylen), - (blocksize - keylen), - NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0, + (blocksize - keylen)); + set_flow_mode(&desc[idx], BYPASS); + set_dout_dlli(&desc[idx], + (padded_authkey_dma_addr + + keylen), + (blocksize - keylen), NS_BIT, 0); idx++; } } } else { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, - (blocksize - keylen)); - HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - 
padded_authkey_dma_addr, - blocksize, - NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0, (blocksize - keylen)); + set_flow_mode(&desc[idx], BYPASS); + set_dout_dlli(&desc[idx], padded_authkey_dma_addr, + blocksize, NS_BIT, 0); idx++; } @@ -772,24 +764,23 @@ ssi_aead_create_assoc_desc( switch (assoc_dma_type) { case SSI_DMA_BUF_DLLI: SSI_LOG_DEBUG("ASSOC buffer type DLLI\n"); - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - sg_dma_address(areq->src), - areq->assoclen, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); - if (ctx->auth_mode == DRV_HASH_XCBC_MAC && (areq_ctx->cryptlen > 0) ) - HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx]); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, sg_dma_address(areq->src), + areq->assoclen, NS_BIT); set_flow_mode(&desc[idx], + flow_mode); + if ((ctx->auth_mode == DRV_HASH_XCBC_MAC) && + (areq_ctx->cryptlen > 0)) + set_din_not_last_indication(&desc[idx]); break; case SSI_DMA_BUF_MLLI: SSI_LOG_DEBUG("ASSOC buffer type MLLI\n"); - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI, - areq_ctx->assoc.sram_addr, - areq_ctx->assoc.mlli_nents, - NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); - if (ctx->auth_mode == DRV_HASH_XCBC_MAC && (areq_ctx->cryptlen > 0) ) - HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx]); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_MLLI, areq_ctx->assoc.sram_addr, + areq_ctx->assoc.mlli_nents, NS_BIT); + set_flow_mode(&desc[idx], flow_mode); + if ((ctx->auth_mode == DRV_HASH_XCBC_MAC) && + (areq_ctx->cryptlen > 0)) + set_din_not_last_indication(&desc[idx]); break; case SSI_DMA_BUF_NULL: default: @@ -822,11 +813,11 @@ ssi_aead_process_authenc_data_desc( (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? 
areq_ctx->dstOffset : areq_ctx->srcOffset; SSI_LOG_DEBUG("AUTHENC: SRC/DST buffer type DLLI\n"); - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - (sg_dma_address(cipher)+ offset), areq_ctx->cryptlen, - NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + (sg_dma_address(cipher) + offset), + areq_ctx->cryptlen, NS_BIT); + set_flow_mode(&desc[idx], flow_mode); break; } case SSI_DMA_BUF_MLLI: @@ -849,10 +840,10 @@ ssi_aead_process_authenc_data_desc( } SSI_LOG_DEBUG("AUTHENC: SRC/DST buffer type MLLI\n"); - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI, - mlli_addr, mlli_nents, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_MLLI, mlli_addr, mlli_nents, + NS_BIT); + set_flow_mode(&desc[idx], flow_mode); break; } case SSI_DMA_BUF_NULL: @@ -880,25 +871,24 @@ ssi_aead_process_cipher_data_desc( switch (data_dma_type) { case SSI_DMA_BUF_DLLI: SSI_LOG_DEBUG("CIPHER: SRC/DST buffer type DLLI\n"); - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - (sg_dma_address(areq_ctx->srcSgl)+areq_ctx->srcOffset), - areq_ctx->cryptlen, NS_BIT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - (sg_dma_address(areq_ctx->dstSgl)+areq_ctx->dstOffset), - areq_ctx->cryptlen, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + (sg_dma_address(areq_ctx->srcSgl) + + areq_ctx->srcOffset), areq_ctx->cryptlen, NS_BIT); + set_dout_dlli(&desc[idx], + (sg_dma_address(areq_ctx->dstSgl) + + areq_ctx->dstOffset), + areq_ctx->cryptlen, NS_BIT, 0); + set_flow_mode(&desc[idx], flow_mode); break; case SSI_DMA_BUF_MLLI: SSI_LOG_DEBUG("CIPHER: SRC/DST buffer type MLLI\n"); - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI, - areq_ctx->src.sram_addr, - areq_ctx->src.mlli_nents, NS_BIT); - HW_DESC_SET_DOUT_MLLI(&desc[idx], - areq_ctx->dst.sram_addr, - areq_ctx->dst.mlli_nents, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_MLLI, areq_ctx->src.sram_addr, + areq_ctx->src.mlli_nents, NS_BIT); + set_dout_mlli(&desc[idx], areq_ctx->dst.sram_addr, + areq_ctx->dst.mlli_nents, NS_BIT, 0); + set_flow_mode(&desc[idx], flow_mode); break; case SSI_DMA_BUF_NULL: default: @@ -923,35 +913,36 @@ static inline void ssi_aead_process_digest_result_desc( /* Get final ICV result */ if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->icv_dma_addr, - ctx->authsize, NS_BIT, 1); - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + hw_desc_init(&desc[idx]); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_dout_dlli(&desc[idx], req_ctx->icv_dma_addr, ctx->authsize, + NS_BIT, 1); + set_queue_last_ind(&desc[idx]); if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); + set_aes_not_hash_mode(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); } else { - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], - HASH_DIGEST_RESULT_LITTLE_ENDIAN); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + set_cipher_config0(&desc[idx], + HASH_DIGEST_RESULT_LITTLE_ENDIAN); + set_cipher_mode(&desc[idx], 
hash_mode); } } else { /*Decrypt*/ /* Get ICV out from hardware */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr, - ctx->authsize, NS_BIT, 1); - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + hw_desc_init(&desc[idx]); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_dout_dlli(&desc[idx], req_ctx->mac_buf_dma_addr, + ctx->authsize, NS_BIT, 1); + set_queue_last_ind(&desc[idx]); + set_cipher_config0(&desc[idx], + HASH_DIGEST_RESULT_LITTLE_ENDIAN); + set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); + set_aes_not_hash_mode(&desc[idx]); } else { - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + set_cipher_mode(&desc[idx], hash_mode); } } @@ -971,35 +962,35 @@ static inline void ssi_aead_setup_cipher_desc( int direct = req_ctx->gen_ctx.op_type; /* Setup cipher state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direct); - HW_DESC_SET_FLOW_MODE(&desc[idx], ctx->flow_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - req_ctx->gen_ctx.iv_dma_addr, hw_iv_size, NS_BIT); + hw_desc_init(&desc[idx]); + set_cipher_config0(&desc[idx], direct); + set_flow_mode(&desc[idx], ctx->flow_mode); + set_din_type(&desc[idx], DMA_DLLI, req_ctx->gen_ctx.iv_dma_addr, + hw_iv_size, NS_BIT); if (ctx->cipher_mode == DRV_CIPHER_CTR) { - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); } else { - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); } - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->cipher_mode); + set_cipher_mode(&desc[idx], ctx->cipher_mode); idx++; /* Setup enc. key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direct); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_FLOW_MODE(&desc[idx], ctx->flow_mode); + hw_desc_init(&desc[idx]); + set_cipher_config0(&desc[idx], direct); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_flow_mode(&desc[idx], ctx->flow_mode); if (ctx->flow_mode == S_DIN_to_AES) { - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, - ((ctx->enc_keylen == 24) ? - CC_AES_KEY_SIZE_MAX : ctx->enc_keylen), NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); + set_din_type(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + ((ctx->enc_keylen == 24) ? 
CC_AES_KEY_SIZE_MAX : + ctx->enc_keylen), NS_BIT); + set_key_size_aes(&desc[idx], ctx->enc_keylen); } else { - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, - ctx->enc_keylen, NS_BIT); - HW_DESC_SET_KEY_SIZE_DES(&desc[idx], ctx->enc_keylen); + set_din_type(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + ctx->enc_keylen, NS_BIT); + set_key_size_des(&desc[idx], ctx->enc_keylen); } - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->cipher_mode); + set_cipher_mode(&desc[idx], ctx->cipher_mode); idx++; *seq_size = idx; @@ -1022,9 +1013,9 @@ static inline void ssi_aead_process_cipher( ssi_aead_process_cipher_data_desc(req, data_flow_mode, desc, &idx); if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) { /* We must wait for DMA to write all cipher */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; } @@ -1045,23 +1036,24 @@ static inline void ssi_aead_hmac_setup_digest_desc( unsigned int idx = *seq_size; /* Loading hash ipad xor key state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - ctx->auth_state.hmac.ipad_opad_dma_addr, - digest_size, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hash_mode); + set_din_type(&desc[idx], DMA_DLLI, + ctx->auth_state.hmac.ipad_opad_dma_addr, digest_size, + NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load init. digest len (64 bytes) */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); - HW_DESC_SET_DIN_SRAM(&desc[idx], - ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, hash_mode), - HASH_LEN_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hash_mode); + set_din_sram(&desc[idx], + ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, + hash_mode), + HASH_LEN_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; *seq_size = idx; @@ -1077,55 +1069,53 @@ static inline void ssi_aead_xcbc_setup_digest_desc( unsigned int idx = *seq_size; /* Loading MAC state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, CC_AES_BLOCK_SIZE); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0, CC_AES_BLOCK_SIZE); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); + set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_key_size_aes(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); idx++; /* Setup XCBC MAC K1 */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - ctx->auth_state.xcbc.xcbc_keys_dma_addr, - AES_KEYSIZE_128, NS_BIT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); 
- HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + ctx->auth_state.xcbc.xcbc_keys_dma_addr, + AES_KEYSIZE_128, NS_BIT); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_key_size_aes(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); idx++; /* Setup XCBC MAC K2 */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - (ctx->auth_state.xcbc.xcbc_keys_dma_addr + - AES_KEYSIZE_128), - AES_KEYSIZE_128, NS_BIT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + (ctx->auth_state.xcbc.xcbc_keys_dma_addr + + AES_KEYSIZE_128), AES_KEYSIZE_128, NS_BIT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); + set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_key_size_aes(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); idx++; /* Setup XCBC MAC K3 */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - (ctx->auth_state.xcbc.xcbc_keys_dma_addr + - 2 * AES_KEYSIZE_128), - AES_KEYSIZE_128, NS_BIT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE2); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + (ctx->auth_state.xcbc.xcbc_keys_dma_addr + + 2 * AES_KEYSIZE_128), AES_KEYSIZE_128, NS_BIT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE2); + set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_key_size_aes(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); idx++; *seq_size = idx; @@ -1159,51 +1149,52 @@ static inline void ssi_aead_process_digest_scheme_desc( CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE; unsigned int idx = *seq_size; - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); - HW_DESC_SET_DOUT_SRAM(&desc[idx], aead_handle->sram_workspace_addr, - HASH_LEN_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1); - HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hash_mode); + set_dout_sram(&desc[idx], aead_handle->sram_workspace_addr, + HASH_LEN_SIZE); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); + set_cipher_do(&desc[idx], DO_PAD); idx++; 
/* Get final ICV result */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DOUT_SRAM(&desc[idx], aead_handle->sram_workspace_addr, - digest_size); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + hw_desc_init(&desc[idx]); + set_dout_sram(&desc[idx], aead_handle->sram_workspace_addr, + digest_size); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_config0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); + set_cipher_mode(&desc[idx], hash_mode); idx++; /* Loading hash opad xor key state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - (ctx->auth_state.hmac.ipad_opad_dma_addr + digest_size), - digest_size, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hash_mode); + set_din_type(&desc[idx], DMA_DLLI, + (ctx->auth_state.hmac.ipad_opad_dma_addr + digest_size), + digest_size, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load init. digest len (64 bytes) */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); - HW_DESC_SET_DIN_SRAM(&desc[idx], - ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, hash_mode), - HASH_LEN_SIZE); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hash_mode); + set_din_sram(&desc[idx], + ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, + hash_mode), + HASH_LEN_SIZE); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Perform HASH update */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_SRAM(&desc[idx], aead_handle->sram_workspace_addr, - digest_size); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_sram(&desc[idx], aead_handle->sram_workspace_addr, + digest_size); + set_flow_mode(&desc[idx], DIN_HASH); idx++; *seq_size = idx; @@ -1226,14 +1217,14 @@ static inline void ssi_aead_load_mlli_to_sram( (unsigned int)ctx->drvdata->mlli_sram_addr, req_ctx->mlli_params.mlli_len); /* Copy MLLI table host-to-sram */ - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, - req_ctx->mlli_params.mlli_dma_addr, - req_ctx->mlli_params.mlli_len, NS_BIT); - HW_DESC_SET_DOUT_SRAM(&desc[*seq_size], - ctx->drvdata->mlli_sram_addr, - req_ctx->mlli_params.mlli_len); - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], BYPASS); + hw_desc_init(&desc[*seq_size]); + set_din_type(&desc[*seq_size], DMA_DLLI, + req_ctx->mlli_params.mlli_dma_addr, + req_ctx->mlli_params.mlli_len, NS_BIT); + set_dout_sram(&desc[*seq_size], + ctx->drvdata->mlli_sram_addr, + req_ctx->mlli_params.mlli_len); + set_flow_mode(&desc[*seq_size], BYPASS); (*seq_size)++; } } @@ -1486,55 +1477,51 @@ static inline int ssi_aead_ccm( } /* load key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, - ((ctx->enc_keylen == 24) ? 
- CC_AES_KEY_SIZE_MAX : ctx->enc_keylen), - NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CTR); + set_din_type(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + ((ctx->enc_keylen == 24) ? CC_AES_KEY_SIZE_MAX : + ctx->enc_keylen), NS_BIT); + set_key_size_aes(&desc[idx], ctx->enc_keylen); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* load ctr state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - req_ctx->gen_ctx.iv_dma_addr, - AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CTR); + set_key_size_aes(&desc[idx], ctx->enc_keylen); + set_din_type(&desc[idx], DMA_DLLI, + req_ctx->gen_ctx.iv_dma_addr, AES_BLOCK_SIZE, NS_BIT); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* load MAC key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, - ((ctx->enc_keylen == 24) ? - CC_AES_KEY_SIZE_MAX : ctx->enc_keylen), - NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CBC_MAC); + set_din_type(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + ((ctx->enc_keylen == 24) ? 
CC_AES_KEY_SIZE_MAX : + ctx->enc_keylen), NS_BIT); + set_key_size_aes(&desc[idx], ctx->enc_keylen); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); idx++; /* load MAC state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - req_ctx->mac_buf_dma_addr, - AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CBC_MAC); + set_key_size_aes(&desc[idx], ctx->enc_keylen); + set_din_type(&desc[idx], DMA_DLLI, req_ctx->mac_buf_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); idx++; @@ -1542,12 +1529,11 @@ static inline int ssi_aead_ccm( if (req->assoclen > 0) { ssi_aead_create_assoc_desc(req, DIN_HASH, desc, &idx); } else { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - sg_dma_address(&req_ctx->ccm_adata_sg), - AES_BLOCK_SIZE + req_ctx->ccm_hdr_size, - NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + sg_dma_address(&req_ctx->ccm_adata_sg), + AES_BLOCK_SIZE + req_ctx->ccm_hdr_size, NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; } @@ -1557,40 +1543,39 @@ static inline int ssi_aead_ccm( } /* Read temporal MAC */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC); - HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr, - ctx->authsize, NS_BIT, 0); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CBC_MAC); + set_dout_dlli(&desc[idx], req_ctx->mac_buf_dma_addr, ctx->authsize, + NS_BIT, 0); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_config0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_aes_not_hash_mode(&desc[idx]); idx++; /* load AES-CTR state (for last MAC calculation)*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - req_ctx->ccm_iv0_dma_addr , - AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CTR); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_din_type(&desc[idx], DMA_DLLI, req_ctx->ccm_iv0_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + set_key_size_aes(&desc[idx], ctx->enc_keylen); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; - HW_DESC_INIT(&desc[idx]); - 
HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; /* encrypt the "T" value and store MAC in mac_state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - req_ctx->mac_buf_dma_addr , ctx->authsize, NS_BIT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_result , ctx->authsize, NS_BIT, 1); - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, req_ctx->mac_buf_dma_addr, + ctx->authsize, NS_BIT); + set_dout_dlli(&desc[idx], mac_result, ctx->authsize, NS_BIT, 1); + set_queue_last_ind(&desc[idx]); + set_flow_mode(&desc[idx], DIN_AES_DOUT); idx++; *seq_size = idx; @@ -1681,43 +1666,40 @@ static inline void ssi_aead_gcm_setup_ghash_desc( unsigned int idx = *seq_size; /* load key to AES*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, - ctx->enc_keylen, NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_ECB); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_din_type(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + ctx->enc_keylen, NS_BIT); + set_key_size_aes(&desc[idx], ctx->enc_keylen); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* process one zero block to generate hkey */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - req_ctx->hkey_dma_addr, - AES_BLOCK_SIZE, - NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0x0, AES_BLOCK_SIZE); + set_dout_dlli(&desc[idx], req_ctx->hkey_dma_addr, AES_BLOCK_SIZE, + NS_BIT, 0); + set_flow_mode(&desc[idx], DIN_AES_DOUT); idx++; /* Memory Barrier */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; /* Load GHASH subkey */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - req_ctx->hkey_dma_addr, - AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, req_ctx->hkey_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + set_dout_no_dma(&desc[idx], 0, 0, 1); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_HASH_HW_GHASH); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Configure Hash Engine to work with GHASH. 
@@ -1725,27 +1707,27 @@ static inline void ssi_aead_gcm_setup_ghash_desc( * The following command is necessary in order to * select GHASH (according to HW designers) */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); - HW_DESC_SET_CIPHER_DO(&desc[idx], 1); //1=AES_SK RKEK - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_HASH_HW_GHASH); + set_cipher_do(&desc[idx], 1); //1=AES_SK RKEK + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Load GHASH initial STATE (which is 0). (for any hash there is an initial state) */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0x0, AES_BLOCK_SIZE); + set_dout_no_dma(&desc[idx], 0, 0, 1); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_HASH_HW_GHASH); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; *seq_size = idx; @@ -1762,27 +1744,27 @@ static inline void ssi_aead_gcm_setup_gctr_desc( unsigned int idx = *seq_size; /* load key to AES*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, - ctx->enc_keylen, NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_GCTR); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_din_type(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + ctx->enc_keylen, NS_BIT); + set_key_size_aes(&desc[idx], ctx->enc_keylen); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; if ((req_ctx->cryptlen != 0) && (req_ctx->plaintext_authenticate_only==false)){ /* load AES/CTR initial CTR value inc by 2*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - req_ctx->gcm_iv_inc2_dma_addr, - AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + 
set_cipher_mode(&desc[idx], DRV_CIPHER_GCTR); + set_key_size_aes(&desc[idx], ctx->enc_keylen); + set_din_type(&desc[idx], DMA_DLLI, + req_ctx->gcm_iv_inc2_dma_addr, AES_BLOCK_SIZE, + NS_BIT); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; } @@ -1807,52 +1789,49 @@ static inline void ssi_aead_process_gcm_result_desc( } /* process(ghash) gcm_block_len */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - req_ctx->gcm_block_len_dma_addr, - AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, req_ctx->gcm_block_len_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; /* Store GHASH state after GHASH(Associated Data + Cipher +LenBlock) */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr, - AES_BLOCK_SIZE, NS_BIT, 0); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_HASH_HW_GHASH); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_dlli(&desc[idx], req_ctx->mac_buf_dma_addr, AES_BLOCK_SIZE, + NS_BIT, 0); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_aes_not_hash_mode(&desc[idx]); idx++; /* load AES/CTR initial CTR value inc by 1*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - req_ctx->gcm_iv_inc1_dma_addr, - AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_GCTR); + set_key_size_aes(&desc[idx], ctx->enc_keylen); + set_din_type(&desc[idx], DMA_DLLI, req_ctx->gcm_iv_inc1_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* Memory Barrier */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; /* process GCTR on stored GHASH and store MAC in mac_state*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - req_ctx->mac_buf_dma_addr, - AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_result, ctx->authsize, NS_BIT, 1); - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_GCTR); + set_din_type(&desc[idx], DMA_DLLI, req_ctx->mac_buf_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + set_dout_dlli(&desc[idx], mac_result, ctx->authsize, NS_BIT, 1); + set_queue_last_ind(&desc[idx]); + set_flow_mode(&desc[idx], DIN_AES_DOUT); idx++; *seq_size = idx; diff --git a/drivers/staging/ccree/ssi_cipher.c 
b/drivers/staging/ccree/ssi_cipher.c index d245a2b..8f86fcd 100644 --- a/drivers/staging/ccree/ssi_cipher.c +++ b/drivers/staging/ccree/ssi_cipher.c @@ -489,96 +489,93 @@ ssi_blkcipher_create_setup_desc( case DRV_CIPHER_CTR: case DRV_CIPHER_OFB: /* Load cipher state */ - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, - iv_dma_addr, ivsize, - NS_BIT); - HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); - HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode); + hw_desc_init(&desc[*seq_size]); + set_din_type(&desc[*seq_size], DMA_DLLI, iv_dma_addr, ivsize, + NS_BIT); + set_cipher_config0(&desc[*seq_size], direction); + set_flow_mode(&desc[*seq_size], flow_mode); + set_cipher_mode(&desc[*seq_size], cipher_mode); if ((cipher_mode == DRV_CIPHER_CTR) || (cipher_mode == DRV_CIPHER_OFB) ) { - HW_DESC_SET_SETUP_MODE(&desc[*seq_size], - SETUP_LOAD_STATE1); + set_setup_mode(&desc[*seq_size], SETUP_LOAD_STATE1); } else { - HW_DESC_SET_SETUP_MODE(&desc[*seq_size], - SETUP_LOAD_STATE0); + set_setup_mode(&desc[*seq_size], SETUP_LOAD_STATE0); } (*seq_size)++; /*FALLTHROUGH*/ case DRV_CIPHER_ECB: /* Load key */ - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode); - HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); + hw_desc_init(&desc[*seq_size]); + set_cipher_mode(&desc[*seq_size], cipher_mode); + set_cipher_config0(&desc[*seq_size], direction); if (flow_mode == S_DIN_to_AES) { if (ssi_is_hw_key(tfm)) { - HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key1_slot); + set_hw_crypto_key(&desc[*seq_size], + ctx_p->hw.key1_slot); } else { - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, - key_dma_addr, - ((key_len == 24) ? AES_MAX_KEY_SIZE : key_len), - NS_BIT); + set_din_type(&desc[*seq_size], DMA_DLLI, + key_dma_addr, ((key_len == 24) ? 
+ AES_MAX_KEY_SIZE : + key_len), NS_BIT); } - HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len); + set_key_size_aes(&desc[*seq_size], key_len); } else { /*des*/ - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, - key_dma_addr, key_len, - NS_BIT); - HW_DESC_SET_KEY_SIZE_DES(&desc[*seq_size], key_len); + set_din_type(&desc[*seq_size], DMA_DLLI, key_dma_addr, + key_len, NS_BIT); + set_key_size_des(&desc[*seq_size], key_len); } - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); - HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0); + set_flow_mode(&desc[*seq_size], flow_mode); + set_setup_mode(&desc[*seq_size], SETUP_LOAD_KEY0); (*seq_size)++; break; case DRV_CIPHER_XTS: case DRV_CIPHER_ESSIV: case DRV_CIPHER_BITLOCKER: /* Load AES key */ - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode); - HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); + hw_desc_init(&desc[*seq_size]); + set_cipher_mode(&desc[*seq_size], cipher_mode); + set_cipher_config0(&desc[*seq_size], direction); if (ssi_is_hw_key(tfm)) { - HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key1_slot); + set_hw_crypto_key(&desc[*seq_size], + ctx_p->hw.key1_slot); } else { - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, - key_dma_addr, key_len/2, - NS_BIT); + set_din_type(&desc[*seq_size], DMA_DLLI, key_dma_addr, + (key_len / 2), NS_BIT); } - HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2); - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); - HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0); + set_key_size_aes(&desc[*seq_size], (key_len / 2)); + set_flow_mode(&desc[*seq_size], flow_mode); + set_setup_mode(&desc[*seq_size], SETUP_LOAD_KEY0); (*seq_size)++; /* load XEX key */ - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode); - HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); + hw_desc_init(&desc[*seq_size]); + set_cipher_mode(&desc[*seq_size], cipher_mode); + set_cipher_config0(&desc[*seq_size], direction); if (ssi_is_hw_key(tfm)) { - HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key2_slot); + set_hw_crypto_key(&desc[*seq_size], + ctx_p->hw.key2_slot); } else { - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, - (key_dma_addr+key_len/2), key_len/2, - NS_BIT); + set_din_type(&desc[*seq_size], DMA_DLLI, + (key_dma_addr + (key_len / 2)), + (key_len / 2), NS_BIT); } - HW_DESC_SET_XEX_DATA_UNIT_SIZE(&desc[*seq_size], du_size); - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], S_DIN_to_AES2); - HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2); - HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_XEX_KEY); + set_xex_data_unit_size(&desc[*seq_size], du_size); + set_flow_mode(&desc[*seq_size], S_DIN_to_AES2); + set_key_size_aes(&desc[*seq_size], (key_len / 2)); + set_setup_mode(&desc[*seq_size], SETUP_LOAD_XEX_KEY); (*seq_size)++; /* Set state */ - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE1); - HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode); - HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); - HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2); - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, - iv_dma_addr, CC_AES_BLOCK_SIZE, - NS_BIT); + hw_desc_init(&desc[*seq_size]); + set_setup_mode(&desc[*seq_size], SETUP_LOAD_STATE1); + set_cipher_mode(&desc[*seq_size], cipher_mode); + set_cipher_config0(&desc[*seq_size], direction); + set_key_size_aes(&desc[*seq_size], (key_len / 2)); + 
set_flow_mode(&desc[*seq_size], flow_mode); + set_din_type(&desc[*seq_size], DMA_DLLI, iv_dma_addr, + CC_AES_BLOCK_SIZE, NS_BIT); (*seq_size)++; break; default: @@ -599,40 +596,36 @@ static inline void ssi_blkcipher_create_multi2_setup_desc( int direction = req_ctx->gen_ctx.op_type; /* Load system key */ - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode); - HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, ctx_p->user.key_dma_addr, - CC_MULTI2_SYSTEM_KEY_SIZE, - NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode); - HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0); + hw_desc_init(&desc[*seq_size]); + set_cipher_mode(&desc[*seq_size], ctx_p->cipher_mode); + set_cipher_config0(&desc[*seq_size], direction); + set_din_type(&desc[*seq_size], DMA_DLLI, ctx_p->user.key_dma_addr, + CC_MULTI2_SYSTEM_KEY_SIZE, NS_BIT); + set_flow_mode(&desc[*seq_size], ctx_p->flow_mode); + set_setup_mode(&desc[*seq_size], SETUP_LOAD_KEY0); (*seq_size)++; /* load data key */ - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, - (ctx_p->user.key_dma_addr + - CC_MULTI2_SYSTEM_KEY_SIZE), - CC_MULTI2_DATA_KEY_SIZE, NS_BIT); - HW_DESC_SET_MULTI2_NUM_ROUNDS(&desc[*seq_size], - ctx_p->key_round_number); - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode); - HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode); - HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); - HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE0 ); + hw_desc_init(&desc[*seq_size]); + set_din_type(&desc[*seq_size], DMA_DLLI, + (ctx_p->user.key_dma_addr + CC_MULTI2_SYSTEM_KEY_SIZE), + CC_MULTI2_DATA_KEY_SIZE, NS_BIT); + set_multi2_num_rounds(&desc[*seq_size], ctx_p->key_round_number); + set_flow_mode(&desc[*seq_size], ctx_p->flow_mode); + set_cipher_mode(&desc[*seq_size], ctx_p->cipher_mode); + set_cipher_config0(&desc[*seq_size], direction); + set_setup_mode(&desc[*seq_size], SETUP_LOAD_STATE0); (*seq_size)++; /* Set state */ - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, - req_ctx->gen_ctx.iv_dma_addr, - ivsize, NS_BIT); - HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode); - HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode); - HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE1); + hw_desc_init(&desc[*seq_size]); + set_din_type(&desc[*seq_size], DMA_DLLI, req_ctx->gen_ctx.iv_dma_addr, + ivsize, NS_BIT); + set_cipher_config0(&desc[*seq_size], direction); + set_flow_mode(&desc[*seq_size], ctx_p->flow_mode); + set_cipher_mode(&desc[*seq_size], ctx_p->cipher_mode); + set_setup_mode(&desc[*seq_size], SETUP_LOAD_STATE1); (*seq_size)++; } @@ -675,18 +668,15 @@ ssi_blkcipher_create_data_desc( SSI_LOG_DEBUG(" data params addr 0x%llX length 0x%X \n", (unsigned long long)sg_dma_address(dst), nbytes); - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, - sg_dma_address(src), - nbytes, NS_BIT); - HW_DESC_SET_DOUT_DLLI(&desc[*seq_size], - sg_dma_address(dst), - nbytes, - NS_BIT, (areq == NULL)? 0:1); + hw_desc_init(&desc[*seq_size]); + set_din_type(&desc[*seq_size], DMA_DLLI, sg_dma_address(src), + nbytes, NS_BIT); + set_dout_dlli(&desc[*seq_size], sg_dma_address(dst), + nbytes, NS_BIT, (!areq ? 
0 : 1)); if (areq != NULL) { - HW_DESC_SET_QUEUE_LAST_IND(&desc[*seq_size]); + set_queue_last_ind(&desc[*seq_size]); } - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); + set_flow_mode(&desc[*seq_size], flow_mode); (*seq_size)++; } else { /* bypass */ @@ -695,30 +685,29 @@ ssi_blkcipher_create_data_desc( (unsigned long long)req_ctx->mlli_params.mlli_dma_addr, req_ctx->mlli_params.mlli_len, (unsigned int)ctx_p->drvdata->mlli_sram_addr); - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, - req_ctx->mlli_params.mlli_dma_addr, - req_ctx->mlli_params.mlli_len, - NS_BIT); - HW_DESC_SET_DOUT_SRAM(&desc[*seq_size], - ctx_p->drvdata->mlli_sram_addr, - req_ctx->mlli_params.mlli_len); - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], BYPASS); + hw_desc_init(&desc[*seq_size]); + set_din_type(&desc[*seq_size], DMA_DLLI, + req_ctx->mlli_params.mlli_dma_addr, + req_ctx->mlli_params.mlli_len, NS_BIT); + set_dout_sram(&desc[*seq_size], + ctx_p->drvdata->mlli_sram_addr, + req_ctx->mlli_params.mlli_len); + set_flow_mode(&desc[*seq_size], BYPASS); (*seq_size)++; - HW_DESC_INIT(&desc[*seq_size]); - HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_MLLI, - ctx_p->drvdata->mlli_sram_addr, - req_ctx->in_mlli_nents, NS_BIT); + hw_desc_init(&desc[*seq_size]); + set_din_type(&desc[*seq_size], DMA_MLLI, + ctx_p->drvdata->mlli_sram_addr, + req_ctx->in_mlli_nents, NS_BIT); if (req_ctx->out_nents == 0) { SSI_LOG_DEBUG(" din/dout params addr 0x%08X " "addr 0x%08X\n", (unsigned int)ctx_p->drvdata->mlli_sram_addr, (unsigned int)ctx_p->drvdata->mlli_sram_addr); - HW_DESC_SET_DOUT_MLLI(&desc[*seq_size], - ctx_p->drvdata->mlli_sram_addr, - req_ctx->in_mlli_nents, - NS_BIT,(areq == NULL)? 0:1); + set_dout_mlli(&desc[*seq_size], + ctx_p->drvdata->mlli_sram_addr, + req_ctx->in_mlli_nents, NS_BIT, + (!areq ? 0 : 1)); } else { SSI_LOG_DEBUG(" din/dout params " "addr 0x%08X addr 0x%08X\n", @@ -726,16 +715,17 @@ ssi_blkcipher_create_data_desc( (unsigned int)ctx_p->drvdata->mlli_sram_addr + (u32)LLI_ENTRY_BYTE_SIZE * req_ctx->in_nents); - HW_DESC_SET_DOUT_MLLI(&desc[*seq_size], - (ctx_p->drvdata->mlli_sram_addr + - LLI_ENTRY_BYTE_SIZE * - req_ctx->in_mlli_nents), - req_ctx->out_mlli_nents, NS_BIT,(areq == NULL)? 0:1); + set_dout_mlli(&desc[*seq_size], + (ctx_p->drvdata->mlli_sram_addr + + (LLI_ENTRY_BYTE_SIZE * + req_ctx->in_mlli_nents)), + req_ctx->out_mlli_nents, NS_BIT, + (!areq ? 
0 : 1)); } if (areq != NULL) { - HW_DESC_SET_QUEUE_LAST_IND(&desc[*seq_size]); + set_queue_last_ind(&desc[*seq_size]); } - HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); + set_flow_mode(&desc[*seq_size], flow_mode); (*seq_size)++; } } diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h index e034b09..c0bb1a1 100644 --- a/drivers/staging/ccree/ssi_driver.h +++ b/drivers/staging/ccree/ssi_driver.h @@ -41,16 +41,16 @@ #include "dx_reg_base_host.h" #include "dx_host.h" #define DX_CC_HOST_VIRT /* must be defined before including dx_cc_regs.h */ -#include "cc_hw_queue_defs.h" #include "cc_regs.h" #include "dx_reg_common.h" #include "cc_hal.h" -#include "ssi_sram_mgr.h" #define CC_SUPPORT_SHA DX_DEV_SHA_MAX #include "cc_crypto_ctx.h" #include "ssi_sysfs.h" #include "hash_defs.h" #include "ssi_fips_local.h" +#include "cc_hw_queue_defs.h" +#include "ssi_sram_mgr.h" #define DRV_MODULE_VERSION "3.0" diff --git a/drivers/staging/ccree/ssi_fips_ll.c b/drivers/staging/ccree/ssi_fips_ll.c index ef0a9a5..cdfbf04 100644 --- a/drivers/staging/ccree/ssi_fips_ll.c +++ b/drivers/staging/ccree/ssi_fips_ll.c @@ -325,74 +325,73 @@ ssi_cipher_fips_run_test(struct ssi_drvdata *drvdata, case DRV_CIPHER_CTR: case DRV_CIPHER_OFB: /* Load cipher state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - iv_dma_addr, iv_len, NS_BIT); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction); - HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode); - HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + iv_dma_addr, iv_len, NS_BIT); + set_cipher_config0(&desc[idx], direction); + set_flow_mode(&desc[idx], s_flow_mode); + set_cipher_mode(&desc[idx], cipher_mode); if ((cipher_mode == DRV_CIPHER_CTR) || (cipher_mode == DRV_CIPHER_OFB) ) { - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); } else { - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); } idx++; /*FALLTHROUGH*/ case DRV_CIPHER_ECB: /* Load key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], cipher_mode); + set_cipher_config0(&desc[idx], direction); if (is_aes) { - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - key_dma_addr, - ((key_len == 24) ? AES_MAX_KEY_SIZE : key_len), - NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len); + set_din_type(&desc[idx], DMA_DLLI, key_dma_addr, + ((key_len == 24) ? 
AES_MAX_KEY_SIZE : + key_len), NS_BIT); + set_key_size_aes(&desc[idx], key_len); } else {/*des*/ - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - key_dma_addr, key_len, - NS_BIT); - HW_DESC_SET_KEY_SIZE_DES(&desc[idx], key_len); + set_din_type(&desc[idx], DMA_DLLI, key_dma_addr, + key_len, NS_BIT); + set_key_size_des(&desc[idx], key_len); } - HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + set_flow_mode(&desc[idx], s_flow_mode); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; break; case DRV_CIPHER_XTS: /* Load AES key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - key_dma_addr, key_len/2, NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len/2); - HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], cipher_mode); + set_cipher_config0(&desc[idx], direction); + set_din_type(&desc[idx], DMA_DLLI, key_dma_addr, (key_len / 2), + NS_BIT); + set_key_size_aes(&desc[idx], (key_len / 2)); + set_flow_mode(&desc[idx], s_flow_mode); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* load XEX key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - (key_dma_addr+key_len/2), key_len/2, NS_BIT); - HW_DESC_SET_XEX_DATA_UNIT_SIZE(&desc[idx], data_size); - HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len/2); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_XEX_KEY); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], cipher_mode); + set_cipher_config0(&desc[idx], direction); + set_din_type(&desc[idx], DMA_DLLI, + (key_dma_addr + (key_len / 2)), + (key_len / 2), NS_BIT); + set_xex_data_unit_size(&desc[idx], data_size); + set_flow_mode(&desc[idx], s_flow_mode); + set_key_size_aes(&desc[idx], (key_len / 2)); + set_setup_mode(&desc[idx], SETUP_LOAD_XEX_KEY); idx++; /* Set state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len/2); - HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - iv_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT); + hw_desc_init(&desc[idx]); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); + set_cipher_mode(&desc[idx], cipher_mode); + set_cipher_config0(&desc[idx], direction); + set_key_size_aes(&desc[idx], (key_len / 2)); + set_flow_mode(&desc[idx], s_flow_mode); + set_din_type(&desc[idx], DMA_DLLI, iv_dma_addr, + CC_AES_BLOCK_SIZE, NS_BIT); idx++; break; default: @@ -401,10 +400,10 @@ ssi_cipher_fips_run_test(struct ssi_drvdata *drvdata, } /* create data descriptor */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, din_dma_addr, data_size, NS_BIT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], dout_dma_addr, data_size, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], is_aes ? DIN_AES_DOUT : DIN_DES_DOUT); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, din_dma_addr, data_size, NS_BIT); + set_dout_dlli(&desc[idx], dout_dma_addr, data_size, NS_BIT, 0); + set_flow_mode(&desc[idx], is_aes ? 
DIN_AES_DOUT : DIN_DES_DOUT); idx++; /* perform the operation - Lock HW and push sequence */ @@ -499,42 +498,42 @@ ssi_cmac_fips_run_test(struct ssi_drvdata *drvdata, int idx = 0; /* Setup CMAC Key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr, - ((key_len == 24) ? AES_MAX_KEY_SIZE : key_len), NS_BIT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, key_dma_addr, + ((key_len == 24) ? AES_MAX_KEY_SIZE : key_len), NS_BIT); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_cipher_mode(&desc[idx], DRV_CIPHER_CMAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_key_size_aes(&desc[idx], key_len); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* Load MAC state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, digest_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, digest_dma_addr, CC_AES_BLOCK_SIZE, + NS_BIT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); + set_cipher_mode(&desc[idx], DRV_CIPHER_CMAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_key_size_aes(&desc[idx], key_len); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; //ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx); - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - din_dma_addr, - din_len, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, din_dma_addr, din_len, NS_BIT); + set_flow_mode(&desc[idx], DIN_AES_DOUT); idx++; /* Get final MAC result */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DOUT_DLLI(&desc[idx], digest_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC); + hw_desc_init(&desc[idx]); + set_dout_dlli(&desc[idx], digest_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT, + 0); + set_flow_mode(&desc[idx], S_AES_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_cipher_mode(&desc[idx], DRV_CIPHER_CMAC); idx++; /* perform the operation - Lock HW and push sequence */ @@ -644,41 +643,43 @@ ssi_hash_fips_run_test(struct ssi_drvdata *drvdata, int idx = 0; /* Load initial digest */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, initial_digest_dma_addr, inter_digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hw_mode); + set_din_type(&desc[idx], DMA_DLLI, initial_digest_dma_addr, + inter_digestsize, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + 
set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load the hash current length */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hw_mode); + set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* data descriptor */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, din_dma_addr, data_in_size, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, din_dma_addr, data_in_size, NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; /* Get final MAC result */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, digest_size, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hw_mode); + set_dout_dlli(&desc[idx], mac_res_dma_addr, digest_size, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); if (unlikely((hash_mode == DRV_HASH_MD5) || (hash_mode == DRV_HASH_SHA384) || (hash_mode == DRV_HASH_SHA512))) { - HW_DESC_SET_BYTES_SWAP(&desc[idx], 1); + set_bytes_swap(&desc[idx], 1); } else { - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); + set_cipher_config0(&desc[idx], + HASH_DIGEST_RESULT_LITTLE_ENDIAN); } idx++; @@ -831,20 +832,19 @@ ssi_hmac_fips_run_test(struct ssi_drvdata *drvdata, unsigned int hmacPadConst[2] = { HMAC_OPAD_CONST, HMAC_IPAD_CONST }; // assume (key_size <= block_size) - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr, key_size, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); - HW_DESC_SET_DOUT_DLLI(&desc[idx], k0_dma_addr, key_size, NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, key_dma_addr, key_size, NS_BIT); + set_flow_mode(&desc[idx], BYPASS); + set_dout_dlli(&desc[idx], k0_dma_addr, key_size, NS_BIT, 0); idx++; // if needed, append Key with zeros to create K0 if ((block_size - key_size) != 0) { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, (block_size - key_size)); - HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - (k0_dma_addr + key_size), (block_size - key_size), - NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0, (block_size - key_size)); + set_flow_mode(&desc[idx], BYPASS); + set_dout_dlli(&desc[idx], (k0_dma_addr + key_size), + (block_size - key_size), NS_BIT, 0); idx++; } @@ -859,50 +859,48 @@ ssi_hmac_fips_run_test(struct ssi_drvdata *drvdata, /* calc derived HMAC key */ for (i = 0; i < 2; i++) { /* Load hash initial state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, initial_digest_dma_addr, inter_digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - 
HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hw_mode); + set_din_type(&desc[idx], DMA_DLLI, initial_digest_dma_addr, + inter_digestsize, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load the hash current length*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hw_mode); + set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Prepare opad/ipad key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_XOR_VAL(&desc[idx], hmacPadConst[i]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + hw_desc_init(&desc[idx]); + set_xor_val(&desc[idx], hmacPadConst[i]); + set_cipher_mode(&desc[idx], hw_mode); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); idx++; /* Perform HASH update */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - k0_dma_addr, - block_size, NS_BIT); - HW_DESC_SET_CIPHER_MODE(&desc[idx],hw_mode); - HW_DESC_SET_XOR_ACTIVE(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, k0_dma_addr, block_size, + NS_BIT); + set_cipher_mode(&desc[idx], hw_mode); + set_xor_active(&desc[idx]); + set_flow_mode(&desc[idx], DIN_HASH); idx++; if (i == 0) { /* First iteration - calc H(K0^opad) into tmp_digest_dma_addr */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - tmp_digest_dma_addr, - inter_digestsize, - NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hw_mode); + set_dout_dlli(&desc[idx], tmp_digest_dma_addr, + inter_digestsize, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); idx++; // is this needed?? or continue with current descriptors?? @@ -917,35 +915,34 @@ ssi_hmac_fips_run_test(struct ssi_drvdata *drvdata, } /* data descriptor */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - din_dma_addr, data_in_size, - NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, din_dma_addr, data_in_size, NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; /* HW last hash block padding (aka. 
"DO_PAD") */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], k0_dma_addr, HASH_LEN_SIZE, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1); - HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hw_mode); + set_dout_dlli(&desc[idx], k0_dma_addr, HASH_LEN_SIZE, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); + set_cipher_do(&desc[idx], DO_PAD); idx++; /* store the hash digest result in the context */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], k0_dma_addr, digest_size, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hw_mode); + set_dout_dlli(&desc[idx], k0_dma_addr, digest_size, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); if (unlikely((hash_mode == DRV_HASH_MD5) || (hash_mode == DRV_HASH_SHA384) || (hash_mode == DRV_HASH_SHA512))) { - HW_DESC_SET_BYTES_SWAP(&desc[idx], 1); + set_bytes_swap(&desc[idx], 1); } else { - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); + set_cipher_config0(&desc[idx], + HASH_DIGEST_RESULT_LITTLE_ENDIAN); } - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); idx++; /* at this point: @@ -954,48 +951,51 @@ ssi_hmac_fips_run_test(struct ssi_drvdata *drvdata, */ /* Loading hash opad xor key state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, tmp_digest_dma_addr, inter_digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hw_mode); + set_din_type(&desc[idx], DMA_DLLI, tmp_digest_dma_addr, + inter_digestsize, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load the hash current length */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hw_mode); + set_din_type(&desc[idx], DMA_DLLI, digest_bytes_len_dma_addr, + HASH_LEN_SIZE, NS_BIT); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Memory Barrier: wait for IPAD/OPAD axi write to complete */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; /* Perform HASH update */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, k0_dma_addr, digest_size, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, k0_dma_addr, digest_size, NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; /* Get final MAC result */ - HW_DESC_INIT(&desc[idx]); - 
HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, digest_size, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], hw_mode); + set_dout_dlli(&desc[idx], mac_res_dma_addr, digest_size, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); if (unlikely((hash_mode == DRV_HASH_MD5) || (hash_mode == DRV_HASH_SHA384) || (hash_mode == DRV_HASH_SHA512))) { - HW_DESC_SET_BYTES_SWAP(&desc[idx], 1); + set_bytes_swap(&desc[idx], 1); } else { - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); + set_cipher_config0(&desc[idx], + HASH_DIGEST_RESULT_LITTLE_ENDIAN); } idx++; @@ -1143,99 +1143,102 @@ ssi_ccm_fips_run_test(struct ssi_drvdata *drvdata, } /* load key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr, - ((key_size == NIST_AESCCM_192_BIT_KEY_SIZE) ? CC_AES_KEY_SIZE_MAX : key_size), - NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CTR); + set_din_type(&desc[idx], DMA_DLLI, key_dma_addr, + ((key_size == NIST_AESCCM_192_BIT_KEY_SIZE) ? + CC_AES_KEY_SIZE_MAX : key_size), NS_BIT); + set_key_size_aes(&desc[idx], key_size); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* load ctr state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - iv_dma_addr, AES_BLOCK_SIZE, - NS_BIT); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CTR); + set_key_size_aes(&desc[idx], key_size); + set_din_type(&desc[idx], DMA_DLLI, iv_dma_addr, AES_BLOCK_SIZE, + NS_BIT); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* load MAC key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr, - ((key_size == NIST_AESCCM_192_BIT_KEY_SIZE) ? CC_AES_KEY_SIZE_MAX : key_size), - NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CBC_MAC); + set_din_type(&desc[idx], DMA_DLLI, key_dma_addr, + ((key_size == NIST_AESCCM_192_BIT_KEY_SIZE) ? 
+ CC_AES_KEY_SIZE_MAX : key_size), NS_BIT); + set_key_size_aes(&desc[idx], key_size); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); idx++; /* load MAC state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CBC_MAC); + set_key_size_aes(&desc[idx], key_size); + set_din_type(&desc[idx], DMA_DLLI, mac_res_dma_addr, + NIST_AESCCM_TAG_SIZE, NS_BIT); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); idx++; /* prcess assoc data */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, b0_a0_adata_dma_addr, b0_a0_adata_size, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, b0_a0_adata_dma_addr, + b0_a0_adata_size, NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; /* process the cipher */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, din_dma_addr, din_size, NS_BIT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], dout_dma_addr, din_size, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], cipher_flow_mode); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, din_dma_addr, din_size, NS_BIT); + set_dout_dlli(&desc[idx], dout_dma_addr, din_size, NS_BIT, 0); + set_flow_mode(&desc[idx], cipher_flow_mode); idx++; /* Read temporal MAC */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC); - HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT, 0); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CBC_MAC); + set_dout_dlli(&desc[idx], mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, + NS_BIT, 0); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_config0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_aes_not_hash_mode(&desc[idx]); idx++; /* load AES-CTR state (for last MAC calculation)*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - ctr_cnt_0_dma_addr, - AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_CTR); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_din_type(&desc[idx], DMA_DLLI, ctr_cnt_0_dma_addr, AES_BLOCK_SIZE, + NS_BIT); + set_key_size_aes(&desc[idx], key_size); + 
set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* Memory Barrier */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; /* encrypt the "T" value and store MAC inplace */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, mac_res_dma_addr, + NIST_AESCCM_TAG_SIZE, NS_BIT); + set_dout_dlli(&desc[idx], mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, + NS_BIT, 0); + set_flow_mode(&desc[idx], DIN_AES_DOUT); idx++; /* perform the operation - Lock HW and push sequence */ @@ -1373,44 +1376,39 @@ ssi_gcm_fips_run_test(struct ssi_drvdata *drvdata, // ssi_aead_gcm_setup_ghash_desc(req, desc, seq_size); ///////////////////////////////// 1 //////////////////////////////////// - /* load key to AES*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_DIN_TYPE(&desc[idx], - DMA_DLLI, key_dma_addr, key_size, - NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + /* load key to AES */ + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_ECB); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_din_type(&desc[idx], DMA_DLLI, key_dma_addr, key_size, NS_BIT); + set_key_size_aes(&desc[idx], key_size); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* process one zero block to generate hkey */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - hkey_dma_addr, AES_BLOCK_SIZE, - NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0x0, AES_BLOCK_SIZE); + set_dout_dlli(&desc[idx], hkey_dma_addr, AES_BLOCK_SIZE, NS_BIT, 0); + set_flow_mode(&desc[idx], DIN_AES_DOUT); idx++; /* Memory Barrier */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; /* Load GHASH subkey */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - hkey_dma_addr, AES_BLOCK_SIZE, - NS_BIT); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, hkey_dma_addr, AES_BLOCK_SIZE, + NS_BIT); + set_dout_no_dma(&desc[idx], 0, 0, 1); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_HASH_HW_GHASH); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* 
Configure Hash Engine to work with GHASH. @@ -1418,27 +1416,27 @@ ssi_gcm_fips_run_test(struct ssi_drvdata *drvdata, * The following command is necessary in order to * select GHASH (according to HW designers) */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); - HW_DESC_SET_CIPHER_DO(&desc[idx], 1); //1=AES_SK RKEK - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_HASH_HW_GHASH); + set_cipher_do(&desc[idx], 1); //1=AES_SK RKEK + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Load GHASH initial STATE (which is 0). (for any hash there is an initial state) */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0x0, AES_BLOCK_SIZE); + set_dout_no_dma(&desc[idx], 0, 0, 1); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_aes_not_hash_mode(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_HASH_HW_GHASH); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; @@ -1449,11 +1447,9 @@ ssi_gcm_fips_run_test(struct ssi_drvdata *drvdata, // ssi_aead_create_assoc_desc(req, DIN_HASH, desc, seq_size); ///////////////////////////////// 2 //////////////////////////////////// - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - adata_dma_addr, adata_size, - NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, adata_dma_addr, adata_size, NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; @@ -1462,27 +1458,24 @@ ssi_gcm_fips_run_test(struct ssi_drvdata *drvdata, ///////////////////////////////// 3 //////////////////////////////////// /* load key to AES*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - key_dma_addr, key_size, - NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_GCTR); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_din_type(&desc[idx], DMA_DLLI, key_dma_addr, key_size, NS_BIT); + set_key_size_aes(&desc[idx], key_size); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* load AES/CTR initial CTR value inc by 2*/ - 
HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - iv_inc2_dma_addr, AES_BLOCK_SIZE, - NS_BIT); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_GCTR); + set_key_size_aes(&desc[idx], key_size); + set_din_type(&desc[idx], DMA_DLLI, iv_inc2_dma_addr, AES_BLOCK_SIZE, + NS_BIT); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; @@ -1492,14 +1485,10 @@ ssi_gcm_fips_run_test(struct ssi_drvdata *drvdata, // ssi_aead_process_cipher_data_desc(req, cipher_flow_mode, desc, seq_size); ///////////////////////////////// 4 //////////////////////////////////// - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - din_dma_addr, din_size, - NS_BIT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - dout_dma_addr, din_size, - NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], cipher_flow_mode); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, din_dma_addr, din_size, NS_BIT); + set_dout_dlli(&desc[idx], dout_dma_addr, din_size, NS_BIT, 0); + set_flow_mode(&desc[idx], cipher_flow_mode); idx++; @@ -1508,53 +1497,46 @@ ssi_gcm_fips_run_test(struct ssi_drvdata *drvdata, ///////////////////////////////// 5 //////////////////////////////////// /* prcess(ghash) gcm_block_len */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - block_len_dma_addr, AES_BLOCK_SIZE, - NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, block_len_dma_addr, AES_BLOCK_SIZE, + NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; /* Store GHASH state after GHASH(Associated Data + Cipher +LenBlock) */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - mac_res_dma_addr, AES_BLOCK_SIZE, - NS_BIT, 0); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_HASH_HW_GHASH); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_dlli(&desc[idx], mac_res_dma_addr, AES_BLOCK_SIZE, NS_BIT, 0); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_aes_not_hash_mode(&desc[idx]); idx++; /* load AES/CTR initial CTR value inc by 1*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - iv_inc1_dma_addr, AES_BLOCK_SIZE, - NS_BIT); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_GCTR); + set_key_size_aes(&desc[idx], key_size); + set_din_type(&desc[idx], DMA_DLLI, iv_inc1_dma_addr, AES_BLOCK_SIZE, + NS_BIT); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); + set_flow_mode(&desc[idx], S_DIN_to_AES); 
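(A note for readers following the conversion: the lower-case calls above are static inline helpers that replace the old HW_DESC_* macros, trading token-pasting macros for type-checked functions. The sketch below shows roughly what such helpers look like; the struct layout, word count and bit positions are illustrative assumptions only, not the actual definitions from cc_hw_queue_defs.h.)

#include <linux/types.h>
#include <linux/string.h>

/* Hypothetical descriptor layout -- the real one lives in cc_hw_queue_defs.h */
#define HW_DESC_SIZE_WORDS	6

struct cc_hw_desc {
	u32 word[HW_DESC_SIZE_WORDS];
};

static inline void hw_desc_init(struct cc_hw_desc *pdesc)
{
	memset(pdesc, 0, sizeof(*pdesc));
}

/* Flow mode assumed to occupy the low bits of word[4] (illustrative only) */
static inline void set_flow_mode(struct cc_hw_desc *pdesc, u32 mode)
{
	pdesc->word[4] |= mode;
}

(The practical gain is that a wrong argument, e.g. a pointer where a mode value is expected, is now caught by the compiler instead of being silently expanded by the preprocessor.)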
idx++; /* Memory Barrier */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; /* process GCTR on stored GHASH and store MAC inplace */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - mac_res_dma_addr, AES_BLOCK_SIZE, - NS_BIT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - mac_res_dma_addr, AES_BLOCK_SIZE, - NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_GCTR); + set_din_type(&desc[idx], DMA_DLLI, mac_res_dma_addr, AES_BLOCK_SIZE, + NS_BIT); + set_dout_dlli(&desc[idx], mac_res_dma_addr, AES_BLOCK_SIZE, NS_BIT, 0); + set_flow_mode(&desc[idx], DIN_AES_DOUT); idx++; /* perform the operation - Lock HW and push sequence */ diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c index da5915e..8ab6b9e 100644 --- a/drivers/staging/ccree/ssi_hash.c +++ b/drivers/staging/ccree/ssi_hash.c @@ -126,9 +126,9 @@ static inline void ssi_set_hash_endianity(u32 mode, struct cc_hw_desc *desc) if (unlikely((mode == DRV_HASH_MD5) || (mode == DRV_HASH_SHA384) || (mode == DRV_HASH_SHA512))) { - HW_DESC_SET_BYTES_SWAP(desc, 1); + set_bytes_swap(desc, 1); } else { - HW_DESC_SET_CIPHER_CONFIG0(desc, HASH_DIGEST_RESULT_LITTLE_ENDIAN); + set_cipher_config0(desc, HASH_DIGEST_RESULT_LITTLE_ENDIAN); } } @@ -253,10 +253,11 @@ static int ssi_hash_map_request(struct device *dev, /* Copy the initial digests if hash flow. The SRAM contains the * initial digests in the expected order for all SHA* */ - HW_DESC_INIT(&desc); - HW_DESC_SET_DIN_SRAM(&desc, larval_digest_addr, ctx->inter_digestsize); - HW_DESC_SET_DOUT_DLLI(&desc, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc, BYPASS); + hw_desc_init(&desc); + set_din_sram(&desc, larval_digest_addr, ctx->inter_digestsize); + set_dout_dlli(&desc, state->digest_buff_dma_addr, + ctx->inter_digestsize, NS_BIT, 0); + set_flow_mode(&desc, BYPASS); rc = send_request(ctx->drvdata, &ssi_req, &desc, 1, 0); if (unlikely(rc != 0)) { @@ -488,96 +489,108 @@ static int ssi_hash_digest(struct ahash_req_ctx *state, } /* If HMAC then load hash IPAD xor key, if HASH then load initial digest */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); if (is_hmac) { - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT); + set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, + ctx->inter_digestsize, NS_BIT); } else { - HW_DESC_SET_DIN_SRAM(&desc[idx], larval_digest_addr, ctx->inter_digestsize); + set_din_sram(&desc[idx], larval_digest_addr, + ctx->inter_digestsize); } - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load the hash current length */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); if (is_hmac) { - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT); + set_din_type(&desc[idx], DMA_DLLI, + 
state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, + NS_BIT); } else { - HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); + set_din_const(&desc[idx], 0, HASH_LEN_SIZE); if (likely(nbytes != 0)) { - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); } else { - HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD); + set_cipher_do(&desc[idx], DO_PAD); } } - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx); if (is_hmac) { /* HW last hash block padding (aka. "DO_PAD") */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, HASH_LEN_SIZE, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1); - HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_dout_dlli(&desc[idx], state->digest_buff_dma_addr, + HASH_LEN_SIZE, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); + set_cipher_do(&desc[idx], DO_PAD); idx++; /* store the hash digest result in the context */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_dout_dlli(&desc[idx], state->digest_buff_dma_addr, + digestsize, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); idx++; /* Loading hash opad xor key state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_type(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, + ctx->inter_digestsize, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load the hash current length */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_sram(&desc[idx], + ssi_ahash_get_initial_digest_len_sram_addr( +ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Memory Barrier: wait for IPAD/OPAD axi write to complete */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - 
HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; /* Perform HASH update */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, + digestsize, NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; } /* Get final MAC result */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req? 1:0); /*TODO*/ + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + /* TODO */ + set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize, + NS_BIT, (async_req ? 1 : 0)); if (async_req) { - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + set_queue_last_ind(&desc[idx]); } - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]); idx++; @@ -646,39 +659,43 @@ static int ssi_hash_update(struct ahash_req_ctx *state, } /* Restore hash digest */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, + ctx->inter_digestsize, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Restore hash current length */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_type(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, + HASH_LEN_SIZE, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx); /* store the hash digest result in context */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_dout_dlli(&desc[idx], state->digest_buff_dma_addr, + ctx->inter_digestsize, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); idx++; /* store current hash length in context */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_bytes_len_dma_addr, 
HASH_LEN_SIZE, NS_BIT, async_req? 1:0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_dout_dlli(&desc[idx], state->digest_bytes_len_dma_addr, + HASH_LEN_SIZE, NS_BIT, (async_req ? 1 : 0)); if (async_req) { - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + set_queue_last_ind(&desc[idx]); } - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); idx++; if (async_req) { @@ -737,75 +754,84 @@ static int ssi_hash_finup(struct ahash_req_ctx *state, } /* Restore hash digest */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, + ctx->inter_digestsize, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Restore hash current length */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_din_type(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, + HASH_LEN_SIZE, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx); if (is_hmac) { /* Store the hash digest result in the context */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_dout_dlli(&desc[idx], state->digest_buff_dma_addr, + digestsize, NS_BIT, 0); ssi_set_hash_endianity(ctx->hash_mode,&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); idx++; /* Loading hash OPAD xor key state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_type(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, + ctx->inter_digestsize, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load the hash current length */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], 
HASH_PADDING_ENABLED); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_sram(&desc[idx], + ssi_ahash_get_initial_digest_len_sram_addr( +ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Memory Barrier: wait for IPAD/OPAD axi write to complete */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; /* Perform HASH update on last digest */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, + digestsize, NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; } /* Get final MAC result */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req? 1:0); /*TODO*/ + hw_desc_init(&desc[idx]); + /* TODO */ + set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize, + NS_BIT, (async_req ? 1 : 0)); if (async_req) { - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + set_queue_last_ind(&desc[idx]); } - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); ssi_set_hash_endianity(ctx->hash_mode,&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + set_cipher_mode(&desc[idx], ctx->hw_mode); idx++; if (async_req) { @@ -869,84 +895,93 @@ static int ssi_hash_final(struct ahash_req_ctx *state, } /* Restore hash digest */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, + ctx->inter_digestsize, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Restore hash current length */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); + set_din_type(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, + HASH_LEN_SIZE, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx); /* "DO-PAD" must be enabled 
only when writing current length to HW */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT, 0); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + hw_desc_init(&desc[idx]); + set_cipher_do(&desc[idx], DO_PAD); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_dout_dlli(&desc[idx], state->digest_bytes_len_dma_addr, + HASH_LEN_SIZE, NS_BIT, 0); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); idx++; if (is_hmac) { /* Store the hash digest result in the context */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_dout_dlli(&desc[idx], state->digest_buff_dma_addr, + digestsize, NS_BIT, 0); ssi_set_hash_endianity(ctx->hash_mode,&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); idx++; /* Loading hash OPAD xor key state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_type(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, + ctx->inter_digestsize, NS_BIT); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load the hash current length */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_sram(&desc[idx], + ssi_ahash_get_initial_digest_len_sram_addr( +ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Memory Barrier: wait for IPAD/OPAD axi write to complete */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; /* Perform HASH update on last digest */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, + digestsize, NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; } /* Get final MAC result */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req? 
1:0); + hw_desc_init(&desc[idx]); + set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize, + NS_BIT, (async_req ? 1 : 0)); if (async_req) { - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + set_queue_last_ind(&desc[idx]); } - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); ssi_set_hash_endianity(ctx->hash_mode,&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + set_cipher_mode(&desc[idx], ctx->hw_mode); idx++; if (async_req) { @@ -1054,79 +1089,76 @@ static int ssi_hash_setkey(void *hash, if (keylen > blocksize) { /* Load hash initial state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr, - ctx->inter_digestsize); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_sram(&desc[idx], larval_addr, + ctx->inter_digestsize); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load the hash current length*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - ctx->key_params.key_dma_addr, - keylen, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + ctx->key_params.key_dma_addr, keylen, + NS_BIT); + set_flow_mode(&desc[idx], DIN_HASH); idx++; /* Get hashed key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], ctx->opad_tmp_keys_dma_addr, - digestsize, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_dout_dlli(&desc[idx], ctx->opad_tmp_keys_dma_addr, + digestsize, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); ssi_set_hash_endianity(ctx->hash_mode,&desc[idx]); idx++; - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - digestsize)); - HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - (ctx->opad_tmp_keys_dma_addr + digestsize), - (blocksize - digestsize), - NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0, (blocksize - digestsize)); + set_flow_mode(&desc[idx], BYPASS); + set_dout_dlli(&desc[idx], (ctx->opad_tmp_keys_dma_addr + + digestsize), + (blocksize - digestsize), NS_BIT, 0); idx++; } else { 
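(The calling pattern is the same everywhere in this series: initialize desc[idx], program it with the set_*() helpers, bump idx, and finally hand the sequence length to send_request(). A hedged sketch of one such step follows; the function name is invented for illustration, the helpers and constants are the ones visible in this diff, and the driver's cc_hw_queue_defs.h is assumed to be included.)

/*
 * Sketch only: builds one BYPASS descriptor that zero-pads the key
 * buffer from keylen up to blocksize, mirroring the sequence above.
 * The caller is assumed to guarantee keylen < blocksize.
 */
static unsigned int sketch_zero_pad_key(struct cc_hw_desc *desc,
					unsigned int idx,
					dma_addr_t key_dma_addr,
					unsigned int keylen,
					unsigned int blocksize)
{
	hw_desc_init(&desc[idx]);
	set_din_const(&desc[idx], 0, blocksize - keylen);
	set_flow_mode(&desc[idx], BYPASS);
	set_dout_dlli(&desc[idx], key_dma_addr + keylen,
		      blocksize - keylen, NS_BIT, 0);
	idx++;

	return idx;	/* new sequence length, passed on to send_request() */
}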
- HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - ctx->key_params.key_dma_addr, - keylen, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - (ctx->opad_tmp_keys_dma_addr), - keylen, NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + ctx->key_params.key_dma_addr, keylen, + NS_BIT); + set_flow_mode(&desc[idx], BYPASS); + set_dout_dlli(&desc[idx], ctx->opad_tmp_keys_dma_addr, + keylen, NS_BIT, 0); idx++; if ((blocksize - keylen) != 0) { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - keylen)); - HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - (ctx->opad_tmp_keys_dma_addr + keylen), - (blocksize - keylen), - NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0, + (blocksize - keylen)); + set_flow_mode(&desc[idx], BYPASS); + set_dout_dlli(&desc[idx], + (ctx->opad_tmp_keys_dma_addr + + keylen), (blocksize - keylen), + NS_BIT, 0); idx++; } } } else { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, blocksize); - HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); - HW_DESC_SET_DOUT_DLLI(&desc[idx], - (ctx->opad_tmp_keys_dma_addr), - blocksize, - NS_BIT, 0); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0, blocksize); + set_flow_mode(&desc[idx], BYPASS); + set_dout_dlli(&desc[idx], (ctx->opad_tmp_keys_dma_addr), + blocksize, NS_BIT, 0); idx++; } @@ -1139,55 +1171,49 @@ static int ssi_hash_setkey(void *hash, /* calc derived HMAC key */ for (idx = 0, i = 0; i < 2; i++) { /* Load hash initial state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr, - ctx->inter_digestsize); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_sram(&desc[idx], larval_addr, ctx->inter_digestsize); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); idx++; /* Load the hash current length*/ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Prepare ipad key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_XOR_VAL(&desc[idx], hmacPadConst[i]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + hw_desc_init(&desc[idx]); + set_xor_val(&desc[idx], hmacPadConst[i]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_flow_mode(&desc[idx], S_DIN_to_HASH); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); idx++; /* Perform HASH update */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - ctx->opad_tmp_keys_dma_addr, - blocksize, NS_BIT); - HW_DESC_SET_CIPHER_MODE(&desc[idx],ctx->hw_mode); - HW_DESC_SET_XOR_ACTIVE(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, ctx->opad_tmp_keys_dma_addr, + blocksize, NS_BIT); + set_cipher_mode(&desc[idx], ctx->hw_mode); + 
set_xor_active(&desc[idx]); + set_flow_mode(&desc[idx], DIN_HASH); idx++; /* Get the IPAD/OPAD xor key (Note, IPAD is the initial digest of the first HASH "update" state) */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); if (i > 0) /* Not first iteration */ - HW_DESC_SET_DOUT_DLLI(&desc[idx], - ctx->opad_tmp_keys_dma_addr, - ctx->inter_digestsize, - NS_BIT, 0); + set_dout_dlli(&desc[idx], ctx->opad_tmp_keys_dma_addr, + ctx->inter_digestsize, NS_BIT, 0); else /* First iteration */ - HW_DESC_SET_DOUT_DLLI(&desc[idx], - ctx->digest_buff_dma_addr, - ctx->inter_digestsize, - NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + set_dout_dlli(&desc[idx], ctx->digest_buff_dma_addr, + ctx->inter_digestsize, NS_BIT, 0); + set_flow_mode(&desc[idx], S_HASH_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); idx++; } @@ -1255,35 +1281,36 @@ static int ssi_xcbc_setkey(struct crypto_ahash *ahash, ctx->is_hmac = true; /* 1. Load the AES key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->key_params.key_dma_addr, keylen, NS_BIT); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keylen); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, ctx->key_params.key_dma_addr, + keylen, NS_BIT); + set_cipher_mode(&desc[idx], DRV_CIPHER_ECB); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + set_key_size_aes(&desc[idx], keylen); + set_flow_mode(&desc[idx], S_DIN_to_AES); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0x01010101, CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr + + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0x01010101, CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[idx], DIN_AES_DOUT); + set_dout_dlli(&desc[idx], (ctx->opad_tmp_keys_dma_addr + XCBC_MAC_K1_OFFSET), CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0); idx++; - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0x02020202, CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr + + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0x02020202, CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[idx], DIN_AES_DOUT); + set_dout_dlli(&desc[idx], (ctx->opad_tmp_keys_dma_addr + XCBC_MAC_K2_OFFSET), CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0); idx++; - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0x03030303, CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr + + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0x03030303, CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[idx], DIN_AES_DOUT); + set_dout_dlli(&desc[idx], (ctx->opad_tmp_keys_dma_addr + XCBC_MAC_K3_OFFSET), CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0); idx++; @@ -1506,12 +1533,13 @@ static int ssi_mac_update(struct ahash_request *req) ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, true, &idx); /* store the hash digest result in context */ - 
HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 1); - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_dout_dlli(&desc[idx], state->digest_buff_dma_addr, + ctx->inter_digestsize, NS_BIT, 1); + set_queue_last_ind(&desc[idx]); + set_flow_mode(&desc[idx], S_AES_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); idx++; /* Setup DX request structure */ @@ -1576,30 +1604,31 @@ static int ssi_mac_final(struct ahash_request *req) if (state->xcbc_count && (rem_cnt == 0)) { /* Load key for ECB decryption */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_DECRYPT); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - (ctx->opad_tmp_keys_dma_addr + - XCBC_MAC_K1_OFFSET), - keySize, NS_BIT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keyLen); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], DRV_CIPHER_ECB); + set_cipher_config0(&desc[idx], DRV_CRYPTO_DIRECTION_DECRYPT); + set_din_type(&desc[idx], DMA_DLLI, + (ctx->opad_tmp_keys_dma_addr + + XCBC_MAC_K1_OFFSET), keySize, NS_BIT); + set_key_size_aes(&desc[idx], keyLen); + set_flow_mode(&desc[idx], S_DIN_to_AES); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; /* Initiate decryption of block state to previous block_state-XOR-M[n] */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT,0); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, + CC_AES_BLOCK_SIZE, NS_BIT); + set_dout_dlli(&desc[idx], state->digest_buff_dma_addr, + CC_AES_BLOCK_SIZE, NS_BIT, 0); + set_flow_mode(&desc[idx], DIN_AES_DOUT); idx++; /* Memory Barrier: wait for axi write to complete */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + hw_desc_init(&desc[idx]); + set_din_no_dma(&desc[idx], 0, 0xfffff0); + set_dout_no_dma(&desc[idx], 0, 0, 1); idx++; } @@ -1610,28 +1639,30 @@ static int ssi_mac_final(struct ahash_request *req) } if (state->xcbc_count == 0) { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keyLen); - HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_key_size_aes(&desc[idx], keyLen); + set_cmac_size0_mode(&desc[idx]); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; } else if (rem_cnt > 0) { ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx); } else { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_CONST(&desc[idx], 0x00, CC_AES_BLOCK_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + hw_desc_init(&desc[idx]); + set_din_const(&desc[idx], 0x00, CC_AES_BLOCK_SIZE); + set_flow_mode(&desc[idx], DIN_AES_DOUT); idx++; } /* Get final MAC result */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DOUT_DLLI(&desc[idx], 
state->digest_result_dma_addr, digestsize, NS_BIT, 1); /*TODO*/ - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + hw_desc_init(&desc[idx]); + /* TODO */ + set_dout_dlli(&desc[idx], state->digest_result_dma_addr, + digestsize, NS_BIT, 1); + set_queue_last_ind(&desc[idx]); + set_flow_mode(&desc[idx], S_AES_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_mode(&desc[idx], ctx->hw_mode); idx++; rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); @@ -1688,23 +1719,25 @@ static int ssi_mac_finup(struct ahash_request *req) } if (req->nbytes == 0) { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len); - HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_key_size_aes(&desc[idx], key_len); + set_cmac_size0_mode(&desc[idx]); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; } else { ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx); } /* Get final MAC result */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, 1); /*TODO*/ - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + hw_desc_init(&desc[idx]); + /* TODO */ + set_dout_dlli(&desc[idx], state->digest_result_dma_addr, + digestsize, NS_BIT, 1); + set_queue_last_ind(&desc[idx]); + set_flow_mode(&desc[idx], S_AES_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_mode(&desc[idx], ctx->hw_mode); idx++; rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); @@ -1763,24 +1796,25 @@ static int ssi_mac_digest(struct ahash_request *req) } if (req->nbytes == 0) { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keyLen); - HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_cipher_mode(&desc[idx], ctx->hw_mode); + set_key_size_aes(&desc[idx], keyLen); + set_cmac_size0_mode(&desc[idx]); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; } else { ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx); } /* Get final MAC result */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT,1); - HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx],DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + hw_desc_init(&desc[idx]); + set_dout_dlli(&desc[idx], state->digest_result_dma_addr, + CC_AES_BLOCK_SIZE, NS_BIT, 1); + set_queue_last_ind(&desc[idx]); + set_flow_mode(&desc[idx], S_AES_to_DOUT); + set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_cipher_mode(&desc[idx], ctx->hw_mode); idx++; rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); @@ -2554,49 +2588,50 @@ static void ssi_hash_create_xcbc_setup(struct ahash_request *areq, struct ssi_hash_ctx *ctx = 
crypto_ahash_ctx(tfm); /* Setup XCBC MAC K1 */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr - + XCBC_MAC_K1_OFFSET), - CC_AES_128_BIT_KEY_SIZE, NS_BIT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr + + XCBC_MAC_K1_OFFSET), + CC_AES_128_BIT_KEY_SIZE, NS_BIT); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_key_size_aes(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* Setup XCBC MAC K2 */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr - + XCBC_MAC_K2_OFFSET), - CC_AES_128_BIT_KEY_SIZE, NS_BIT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr + + XCBC_MAC_K2_OFFSET), + CC_AES_128_BIT_KEY_SIZE, NS_BIT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE1); + set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_key_size_aes(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* Setup XCBC MAC K3 */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr - + XCBC_MAC_K3_OFFSET), - CC_AES_128_BIT_KEY_SIZE, NS_BIT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE2); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr + + XCBC_MAC_K3_OFFSET), + CC_AES_128_BIT_KEY_SIZE, NS_BIT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE2); + set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_key_size_aes(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* Loading MAC state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, + CC_AES_BLOCK_SIZE, NS_BIT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); + set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + 
set_key_size_aes(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; *seq_size = idx; } @@ -2611,24 +2646,26 @@ static void ssi_hash_create_cmac_setup(struct ahash_request *areq, struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); /* Setup CMAC Key */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->opad_tmp_keys_dma_addr, - ((ctx->key_params.keylen == 24) ? AES_MAX_KEY_SIZE : ctx->key_params.keylen), NS_BIT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->key_params.keylen); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, ctx->opad_tmp_keys_dma_addr, + ((ctx->key_params.keylen == 24) ? AES_MAX_KEY_SIZE : + ctx->key_params.keylen), NS_BIT); + set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); + set_cipher_mode(&desc[idx], DRV_CIPHER_CMAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_key_size_aes(&desc[idx], ctx->key_params.keylen); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; /* Load MAC state */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT); - HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); - HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC); - HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->key_params.keylen); - HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, + CC_AES_BLOCK_SIZE, NS_BIT); + set_setup_mode(&desc[idx], SETUP_LOAD_STATE0); + set_cipher_mode(&desc[idx], DRV_CIPHER_CMAC); + set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_key_size_aes(&desc[idx], ctx->key_params.keylen); + set_flow_mode(&desc[idx], S_DIN_to_AES); idx++; *seq_size = idx; } @@ -2643,11 +2680,11 @@ static void ssi_hash_create_data_desc(struct ahash_req_ctx *areq_ctx, unsigned int idx = *seq_size; if (likely(areq_ctx->data_dma_buf_type == SSI_DMA_BUF_DLLI)) { - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - sg_dma_address(areq_ctx->curr_sg), - areq_ctx->curr_sg->length, NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + sg_dma_address(areq_ctx->curr_sg), + areq_ctx->curr_sg->length, NS_BIT); + set_flow_mode(&desc[idx], flow_mode); idx++; } else { if (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) { @@ -2656,27 +2693,24 @@ static void ssi_hash_create_data_desc(struct ahash_req_ctx *areq_ctx, return; } /* bypass */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, - areq_ctx->mlli_params.mlli_dma_addr, - areq_ctx->mlli_params.mlli_len, - NS_BIT); - HW_DESC_SET_DOUT_SRAM(&desc[idx], - ctx->drvdata->mlli_sram_addr, - areq_ctx->mlli_params.mlli_len); - HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, + areq_ctx->mlli_params.mlli_dma_addr, + areq_ctx->mlli_params.mlli_len, NS_BIT); + set_dout_sram(&desc[idx], ctx->drvdata->mlli_sram_addr, + areq_ctx->mlli_params.mlli_len); + set_flow_mode(&desc[idx], BYPASS); idx++; /* process */ - HW_DESC_INIT(&desc[idx]); - HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI, - ctx->drvdata->mlli_sram_addr, - 
areq_ctx->mlli_nents, - NS_BIT); - HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_MLLI, + ctx->drvdata->mlli_sram_addr, + areq_ctx->mlli_nents, NS_BIT); + set_flow_mode(&desc[idx], flow_mode); idx++; } if (is_not_last_data) { - HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx-1]); + set_din_not_last_indication(&desc[(idx - 1)]); } /* return updated desc sequence size */ *seq_size = idx; diff --git a/drivers/staging/ccree/ssi_ivgen.c b/drivers/staging/ccree/ssi_ivgen.c index db4b831e..233666b 100644 --- a/drivers/staging/ccree/ssi_ivgen.c +++ b/drivers/staging/ccree/ssi_ivgen.c @@ -69,37 +69,37 @@ static int ssi_ivgen_generate_pool( return -EINVAL; } /* Setup key */ - HW_DESC_INIT(&iv_seq[idx]); - HW_DESC_SET_DIN_SRAM(&iv_seq[idx], ivgen_ctx->ctr_key, AES_KEYSIZE_128); - HW_DESC_SET_SETUP_MODE(&iv_seq[idx], SETUP_LOAD_KEY0); - HW_DESC_SET_CIPHER_CONFIG0(&iv_seq[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_FLOW_MODE(&iv_seq[idx], S_DIN_to_AES); - HW_DESC_SET_KEY_SIZE_AES(&iv_seq[idx], CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_CIPHER_MODE(&iv_seq[idx], DRV_CIPHER_CTR); + hw_desc_init(&iv_seq[idx]); + set_din_sram(&iv_seq[idx], ivgen_ctx->ctr_key, AES_KEYSIZE_128); + set_setup_mode(&iv_seq[idx], SETUP_LOAD_KEY0); + set_cipher_config0(&iv_seq[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_flow_mode(&iv_seq[idx], S_DIN_to_AES); + set_key_size_aes(&iv_seq[idx], CC_AES_128_BIT_KEY_SIZE); + set_cipher_mode(&iv_seq[idx], DRV_CIPHER_CTR); idx++; /* Setup cipher state */ - HW_DESC_INIT(&iv_seq[idx]); - HW_DESC_SET_DIN_SRAM(&iv_seq[idx], ivgen_ctx->ctr_iv, CC_AES_IV_SIZE); - HW_DESC_SET_CIPHER_CONFIG0(&iv_seq[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); - HW_DESC_SET_FLOW_MODE(&iv_seq[idx], S_DIN_to_AES); - HW_DESC_SET_SETUP_MODE(&iv_seq[idx], SETUP_LOAD_STATE1); - HW_DESC_SET_KEY_SIZE_AES(&iv_seq[idx], CC_AES_128_BIT_KEY_SIZE); - HW_DESC_SET_CIPHER_MODE(&iv_seq[idx], DRV_CIPHER_CTR); + hw_desc_init(&iv_seq[idx]); + set_din_sram(&iv_seq[idx], ivgen_ctx->ctr_iv, CC_AES_IV_SIZE); + set_cipher_config0(&iv_seq[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + set_flow_mode(&iv_seq[idx], S_DIN_to_AES); + set_setup_mode(&iv_seq[idx], SETUP_LOAD_STATE1); + set_key_size_aes(&iv_seq[idx], CC_AES_128_BIT_KEY_SIZE); + set_cipher_mode(&iv_seq[idx], DRV_CIPHER_CTR); idx++; /* Perform dummy encrypt to skip first block */ - HW_DESC_INIT(&iv_seq[idx]); - HW_DESC_SET_DIN_CONST(&iv_seq[idx], 0, CC_AES_IV_SIZE); - HW_DESC_SET_DOUT_SRAM(&iv_seq[idx], ivgen_ctx->pool, CC_AES_IV_SIZE); - HW_DESC_SET_FLOW_MODE(&iv_seq[idx], DIN_AES_DOUT); + hw_desc_init(&iv_seq[idx]); + set_din_const(&iv_seq[idx], 0, CC_AES_IV_SIZE); + set_dout_sram(&iv_seq[idx], ivgen_ctx->pool, CC_AES_IV_SIZE); + set_flow_mode(&iv_seq[idx], DIN_AES_DOUT); idx++; /* Generate IV pool */ - HW_DESC_INIT(&iv_seq[idx]); - HW_DESC_SET_DIN_CONST(&iv_seq[idx], 0, SSI_IVPOOL_SIZE); - HW_DESC_SET_DOUT_SRAM(&iv_seq[idx], ivgen_ctx->pool, SSI_IVPOOL_SIZE); - HW_DESC_SET_FLOW_MODE(&iv_seq[idx], DIN_AES_DOUT); + hw_desc_init(&iv_seq[idx]); + set_din_const(&iv_seq[idx], 0, SSI_IVPOOL_SIZE); + set_dout_sram(&iv_seq[idx], ivgen_ctx->pool, SSI_IVPOOL_SIZE); + set_flow_mode(&iv_seq[idx], DIN_AES_DOUT); idx++; *iv_seq_len = idx; /* Update sequence length */ @@ -133,13 +133,12 @@ int ssi_ivgen_init_sram_pool(struct ssi_drvdata *drvdata) ivgen_ctx->ctr_iv = ivgen_ctx->pool + AES_KEYSIZE_128; /* Copy initial enc. 
key and IV to SRAM at a single descriptor */ - HW_DESC_INIT(&iv_seq[iv_seq_len]); - HW_DESC_SET_DIN_TYPE(&iv_seq[iv_seq_len], DMA_DLLI, - ivgen_ctx->pool_meta_dma, SSI_IVPOOL_META_SIZE, - NS_BIT); - HW_DESC_SET_DOUT_SRAM(&iv_seq[iv_seq_len], ivgen_ctx->pool, - SSI_IVPOOL_META_SIZE); - HW_DESC_SET_FLOW_MODE(&iv_seq[iv_seq_len], BYPASS); + hw_desc_init(&iv_seq[iv_seq_len]); + set_din_type(&iv_seq[iv_seq_len], DMA_DLLI, ivgen_ctx->pool_meta_dma, + SSI_IVPOOL_META_SIZE, NS_BIT); + set_dout_sram(&iv_seq[iv_seq_len], ivgen_ctx->pool, + SSI_IVPOOL_META_SIZE); + set_flow_mode(&iv_seq[iv_seq_len], BYPASS); iv_seq_len++; /* Generate initial pool */ @@ -268,22 +267,22 @@ int ssi_ivgen_getiv( for (t = 0; t < iv_out_dma_len; t++) { /* Acquire IV from pool */ - HW_DESC_INIT(&iv_seq[idx]); - HW_DESC_SET_DIN_SRAM(&iv_seq[idx], - ivgen_ctx->pool + ivgen_ctx->next_iv_ofs, - iv_out_size); - HW_DESC_SET_DOUT_DLLI(&iv_seq[idx], iv_out_dma[t], - iv_out_size, NS_BIT, 0); - HW_DESC_SET_FLOW_MODE(&iv_seq[idx], BYPASS); + hw_desc_init(&iv_seq[idx]); + set_din_sram(&iv_seq[idx], (ivgen_ctx->pool + + ivgen_ctx->next_iv_ofs), + iv_out_size); + set_dout_dlli(&iv_seq[idx], iv_out_dma[t], iv_out_size, + NS_BIT, 0); + set_flow_mode(&iv_seq[idx], BYPASS); idx++; } /* Bypass operation is proceeded by crypto sequence, hence must * assure bypass-write-transaction by a memory barrier */ - HW_DESC_INIT(&iv_seq[idx]); - HW_DESC_SET_DIN_NO_DMA(&iv_seq[idx], 0, 0xfffff0); - HW_DESC_SET_DOUT_NO_DMA(&iv_seq[idx], 0, 0, 1); + hw_desc_init(&iv_seq[idx]); + set_din_no_dma(&iv_seq[idx], 0, 0xfffff0); + set_dout_no_dma(&iv_seq[idx], 0, 0, 1); idx++; *iv_seq_len = idx; /* update seq length */ diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c index 1bc6811..02ad065 100644 --- a/drivers/staging/ccree/ssi_request_mgr.c +++ b/drivers/staging/ccree/ssi_request_mgr.c @@ -47,7 +47,7 @@ */ #define INIT_CC_MONITOR_DESC(desc_p) \ do { \ - HW_DESC_INIT(desc_p); \ + hw_desc_init(desc_p); \ HW_DESC_SET_DIN_MONITOR_CNTR(desc_p); \ } while (0) @@ -73,9 +73,9 @@ do { \ do { \ if ((is_monitored) == true) { \ struct cc_hw_desc barrier_desc; \ - HW_DESC_INIT(&barrier_desc); \ - HW_DESC_SET_DIN_NO_DMA(&barrier_desc, 0, 0xfffff0); \ - HW_DESC_SET_DOUT_NO_DMA(&barrier_desc, 0, 0, 1); \ + hw_desc_init(&barrier_desc); \ + set_din_no_dma(&barrier_desc, 0, 0xfffff0); \ + set_dout_no_dma(&barrier_desc, 0, 0, 1); \ enqueue_seq((cc_base_addr), &barrier_desc, 1); \ enqueue_seq((cc_base_addr), (desc_p), 1); \ } \ @@ -224,13 +224,12 @@ int request_mgr_init(struct ssi_drvdata *drvdata) sizeof(u32)); /* Init. 
"dummy" completion descriptor */ - HW_DESC_INIT(&req_mgr_h->compl_desc); - HW_DESC_SET_DIN_CONST(&req_mgr_h->compl_desc, 0, sizeof(u32)); - HW_DESC_SET_DOUT_DLLI(&req_mgr_h->compl_desc, - req_mgr_h->dummy_comp_buff_dma, - sizeof(u32), NS_BIT, 1); - HW_DESC_SET_FLOW_MODE(&req_mgr_h->compl_desc, BYPASS); - HW_DESC_SET_QUEUE_LAST_IND(&req_mgr_h->compl_desc); + hw_desc_init(&req_mgr_h->compl_desc); + set_din_const(&req_mgr_h->compl_desc, 0, sizeof(u32)); + set_dout_dlli(&req_mgr_h->compl_desc, req_mgr_h->dummy_comp_buff_dma, + sizeof(u32), NS_BIT, 1); + set_flow_mode(&req_mgr_h->compl_desc, BYPASS); + set_queue_last_ind(&req_mgr_h->compl_desc); #ifdef CC_CYCLE_COUNT /* For CC-HW cycle performance trace */ @@ -519,7 +518,7 @@ int send_request_init( if (unlikely(rc != 0 )) { return rc; } - HW_DESC_SET_QUEUE_LAST_IND(&desc[len-1]); + set_queue_last_ind(&desc[(len - 1)]); enqueue_seq(cc_base, desc, len); diff --git a/drivers/staging/ccree/ssi_sram_mgr.c b/drivers/staging/ccree/ssi_sram_mgr.c index bd7078d..c8ab55e 100644 --- a/drivers/staging/ccree/ssi_sram_mgr.c +++ b/drivers/staging/ccree/ssi_sram_mgr.c @@ -127,10 +127,10 @@ void ssi_sram_mgr_const2sram_desc( unsigned int idx = *seq_len; for (i = 0; i < nelement; i++, idx++) { - HW_DESC_INIT(&seq[idx]); - HW_DESC_SET_DIN_CONST(&seq[idx], src[i], sizeof(u32)); - HW_DESC_SET_DOUT_SRAM(&seq[idx], dst + (i * sizeof(u32)), sizeof(u32)); - HW_DESC_SET_FLOW_MODE(&seq[idx], BYPASS); + hw_desc_init(&seq[idx]); + set_din_const(&seq[idx], src[i], sizeof(u32)); + set_dout_sram(&seq[idx], dst + (i * sizeof(u32)), sizeof(u32)); + set_flow_mode(&seq[idx], BYPASS); } *seq_len = idx; From patchwork Sun Jun 4 08:02:26 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 101331 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp437453qgd; Sun, 4 Jun 2017 01:03:43 -0700 (PDT) X-Received: by 10.98.76.140 with SMTP id e12mr14456948pfj.78.1496563423619; Sun, 04 Jun 2017 01:03:43 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1496563423; cv=none; d=google.com; s=arc-20160816; b=uehBwyxPDtdpJ/rf6HfhoY5GOsWucQKNUAYp3NIPwNO0PXPVF5PQiksCar4er8yFef 0lDo6QpgIRKOsp0GCJHTyMFEKbdn21gmCNJKvPg76zdj2vjXGVKdt3DggQPjq85iUC7b 6rh4L4sXIl/xd4/PiILmwxzzu6B2mLjznmiSauy/yP+DISGUj6pI77Xe+j7Rt60v6u09 uTjCiAHYHnbRo/J5e8DFq2W/1VGULWdY+7fuvSF3CB2DdjQ/CSP0N7vl4MmSi/cbQe7C fwSM8nLtyJG2TeX0X4469mMjxzhHymtwGIsC8LhjL7gg0l0nj+FRbDrV/Te78ey1rpmL xhKQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=WGwUyJG5tNSxm+h5ncGLEwI8I81aFfP+qSwEFAFj4iM=; b=fDlJxa1cD6V88mzSudHRP/7aZbbR6GLO5fCBZcIPHECHOEw4HXKz5J/s/xjKtlteRd 0PGA5dwMnQzpwV+tyEXEjiAF9642cnRyjEbOiafwozDZNFoEivSi1XYjEDc79PZEaGcm da1amwYa2l3JOKWNlYvU9F0l08EmJULCmErd4obLpcPBbrUpxf++2R64ObAZKBAg/VmP cB/hluuKiHMNFfxssX1yAP0TZvldyvnfMhIBc2VpWP14HkQK7VmA0HSG4A8RQb6JFxTp NgYCKDSoll6RlPV9ON/elYNB95ourfVKX3ar+wGEdB7AR1N+J4NQxI52NhiIctk4kTvB PtHg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id e124si26832040pgc.289.2017.06.04.01.03.43; Sun, 04 Jun 2017 01:03:43 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751246AbdFDIDe (ORCPT + 1 other); Sun, 4 Jun 2017 04:03:34 -0400 Received: from foss.arm.com ([217.140.101.70]:52702 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751230AbdFDIDU (ORCPT ); Sun, 4 Jun 2017 04:03:20 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 36A70344; Sun, 4 Jun 2017 01:03:20 -0700 (PDT) Received: from localhost.localdomain (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 388E13F589; Sun, 4 Jun 2017 01:03:18 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman Cc: Joe Perches , Ofir Drang , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Subject: [PATCH v3 05/18] staging: ccree: move M/LLI defines to header file Date: Sun, 4 Jun 2017 11:02:26 +0300 Message-Id: <1496563362-7954-6-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1496563362-7954-1-git-send-email-gilad@benyossef.com> References: <1496563362-7954-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org A bunch of macros used to define M/LLI descriptors where being defined in the C file. Move them over to private include file where other relevant definitions are stored. 
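For context, a minimal sketch of how code outside ssi_buffer_mgr.c could consult these limits once they live in cc_lli_defs.h; the helper name cc_mlli_nents_ok() is made up for illustration and is not part of this patch. Note that MAX_NUM_OF_TOTAL_MLLI_ENTRIES expands to 2 * 128 + 4 = 260 entries.

#include <linux/types.h>
#include "cc_lli_defs.h"

/* Reject scatterlists that cannot fit into a single MLLI table. */
static inline bool cc_mlli_nents_ok(unsigned int nents)
{
	return nents <= MAX_NUM_OF_TOTAL_MLLI_ENTRIES;
}
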
Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/cc_lli_defs.h | 8 ++++++++ drivers/staging/ccree/ssi_buffer_mgr.c | 7 ------- 2 files changed, 8 insertions(+), 7 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/cc_lli_defs.h b/drivers/staging/ccree/cc_lli_defs.h index 876dde0..78811aa 100644 --- a/drivers/staging/ccree/cc_lli_defs.h +++ b/drivers/staging/ccree/cc_lli_defs.h @@ -28,6 +28,14 @@ #define CC_MAX_MLLI_ENTRY_SIZE 0xFFFF +#define LLI_MAX_NUM_OF_DATA_ENTRIES 128 +#define LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES 4 +#define MLLI_TABLE_MIN_ALIGNMENT 4 /* 32 bit alignment */ +#define MAX_NUM_OF_BUFFERS_IN_MLLI 4 +#define MAX_NUM_OF_TOTAL_MLLI_ENTRIES \ + (2 * LLI_MAX_NUM_OF_DATA_ENTRIES + \ + LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES) + /* Size of entry */ #define LLI_ENTRY_WORD_SIZE 2 #define LLI_ENTRY_BYTE_SIZE (LLI_ENTRY_WORD_SIZE * sizeof(u32)) diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c index 24ba51d..63ffcd5 100644 --- a/drivers/staging/ccree/ssi_buffer_mgr.c +++ b/drivers/staging/ccree/ssi_buffer_mgr.c @@ -33,13 +33,6 @@ #include "ssi_hash.h" #include "ssi_aead.h" -#define LLI_MAX_NUM_OF_DATA_ENTRIES 128 -#define LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES 4 -#define MLLI_TABLE_MIN_ALIGNMENT 4 /*Force the MLLI table to be align to uint32 */ -#define MAX_NUM_OF_BUFFERS_IN_MLLI 4 -#define MAX_NUM_OF_TOTAL_MLLI_ENTRIES (2*LLI_MAX_NUM_OF_DATA_ENTRIES + \ - LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES ) - #ifdef CC_DEBUG #define DUMP_SGL(sg) \ while (sg) { \ From patchwork Sun Jun 4 08:02:27 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 101332 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp437454qgd; Sun, 4 Jun 2017 01:03:44 -0700 (PDT) X-Received: by 10.99.112.86 with SMTP id a22mr2029803pgn.158.1496563424030; Sun, 04 Jun 2017 01:03:44 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1496563424; cv=none; d=google.com; s=arc-20160816; b=wwAVhzij+sXvyv1nb1Gq0VVFlqdTV0w1Hyi1QP0xpyWiO5jWAL2nkpJIzzlTlLqPHx nnSgGAqm2SZfP9IFymf44T9hQwxZ/tSI9WDvVO2Sg5rofZwRjuVfryozg/XeTzFWGKw0 EFnRzgnuMrzFSAyiYFToTW88t34OOhz525FMAAfTRHLnMzjHPDsOdimHiRAgW70bxyrl DszaY0iVR1SVEyHMdpPNqLrK5PIRCcbL0uiSHL4VT4VYLNSMTl5EMDjmuBxkGChE6eHy IdvHFZrCMKJZh1W23IWUo+UDBp/vPBK9YVZ9zsenvIBeK13dmST4CxQkTMYiVfZgp6mX NK3w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=XUFHXqdwchGBfQIXnnumwLb/+ZUMEyKt06zwpYvzLXs=; b=wXmUKm/8ethdGyo7UO3+j5gjI8LXi2AzbDlsshYyjlUb4uZndAwkvrZbSutXXziSb7 TbkV/qzbgJrIus67bZzZva0LFlCgkca4Shqx86D3qEDGQQZwSOd952zDzLZq54QM6D67 OBPe0Dbl4B5jFtUxNGrbWs8zwsRb9PyiT6tLB68KL6B3uPTlkHqjq0PEFiLYKxjNUXO5 cgmHkeZ1WjEQ3N73YT0egiV54jz6Wvgj91XonWgj0cQ/0+6nBRFwmn6u3LJme0owB1Zq KFi3PDOTR2gDQECpZN2nlCmVciHhal/SYe48YN9RWuvM1dDYFKuG+J9o5arZ3dnsgVbJ H8jg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id e124si26832040pgc.289.2017.06.04.01.03.43; Sun, 04 Jun 2017 01:03:44 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751256AbdFDIDg (ORCPT + 1 other); Sun, 4 Jun 2017 04:03:36 -0400 Received: from foss.arm.com ([217.140.101.70]:52716 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751193AbdFDIDb (ORCPT ); Sun, 4 Jun 2017 04:03:31 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A6518344; Sun, 4 Jun 2017 01:03:25 -0700 (PDT) Received: from localhost.localdomain (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id AA6F13F589; Sun, 4 Jun 2017 01:03:23 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman Cc: Joe Perches , Ofir Drang , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Subject: [PATCH v3 06/18] staging: ccree: remove unused debug macros Date: Sun, 4 Jun 2017 11:02:27 +0300 Message-Id: <1496563362-7954-7-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1496563362-7954-1-git-send-email-gilad@benyossef.com> References: <1496563362-7954-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The DUMP_SGL() and DUMP_MLLI_TABLE() debug macros were defined but not used anywhere and the difference of their definitions for debug vs. none debug indicated this has not being used in a while. Remove the dead code. Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/ssi_buffer_mgr.c | 19 ------------------- 1 file changed, 19 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c index 63ffcd5..3252114 100644 --- a/drivers/staging/ccree/ssi_buffer_mgr.c +++ b/drivers/staging/ccree/ssi_buffer_mgr.c @@ -34,30 +34,11 @@ #include "ssi_aead.h" #ifdef CC_DEBUG -#define DUMP_SGL(sg) \ - while (sg) { \ - SSI_LOG_DEBUG("page=%p offset=%u length=%u (dma_len=%u) " \ - "dma_addr=%08x\n", sg_page(sg), (sg)->offset, \ - (sg)->length, sg_dma_len(sg), (sg)->dma_address); \ - (sg) = sg_next(sg); \ - } -#define DUMP_MLLI_TABLE(mlli_p, nents) \ - do { \ - SSI_LOG_DEBUG("mlli=%pK nents=%u\n", (mlli_p), (nents)); \ - while((nents)--) { \ - SSI_LOG_DEBUG("addr=0x%08X size=0x%08X\n", \ - (mlli_p)[LLI_WORD0_OFFSET], \ - (mlli_p)[LLI_WORD1_OFFSET]); \ - (mlli_p) += LLI_ENTRY_WORD_SIZE; \ - } \ - } while (0) #define GET_DMA_BUFFER_TYPE(buff_type) ( \ ((buff_type) == SSI_DMA_BUF_NULL) ? "BUF_NULL" : \ ((buff_type) == SSI_DMA_BUF_DLLI) ? "BUF_DLLI" : \ ((buff_type) == SSI_DMA_BUF_MLLI) ? 
"BUF_MLLI" : "BUF_INVALID") #else -#define DX_BUFFER_MGR_DUMP_SGL(sg) -#define DX_BUFFER_MGR_DUMP_MLLI_TABLE(mlli_p, nents) #define GET_DMA_BUFFER_TYPE(buff_type) #endif From patchwork Sun Jun 4 08:02:28 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 101345 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp438324qgd; Sun, 4 Jun 2017 01:07:05 -0700 (PDT) X-Received: by 10.99.36.129 with SMTP id k123mr15345120pgk.230.1496563625586; Sun, 04 Jun 2017 01:07:05 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1496563625; cv=none; d=google.com; s=arc-20160816; b=o2uhp+LDqACotpJTSASK9e2Sj1ob4woDNZD0+cgmXO/9A5T5JXh6C4uRukHZVAOXP1 qXVmnDfGZkxGr5q5lu+iPK59TS3blsW6KCp2hI6VHMFcwjJEIbQGNcBqigk+hdOYBDYx oNnaxLEZlVUSvkybaKaQBneBQ2SY4+WKoT/NmibbWoRdnpfnSekvLhL6JwywZNvg7eT7 rtVIB7LE4FKNHTsC59G+uKzUkoiB3z3dNLyQIHyEEAp5653A/L/PM8b4MMShrWO1AQvw gseYcWW289gyA9ONvL9TLZlTSNyZQ2QS3PYbGdXJzFHbBMukC9rT18MEEXOcHrqyyp2g zWdA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=zGBiSb7bOE2p1RQJ6eCDWsOVaF0o54rXxwAwUxE9I0Q=; b=RAptOxDWMAlh9GzGiQGYq50x9RZamnqClSFuMiZdocKuTXXrJN4jYc/MXYqXtou7nn 2iWhrUaOpYregVW1ZVEHfZjFcmuLfvfN0Fqh2cl+vBDtm211kVvtF4YxqtwudASTM032 zxEChRg+UhDjUt+YydsF09o+uWU99vCHhADk8DYDxH74/SUAt+LiTvWxWfKYNTy5M+oH FCl2ekXg0i+IlZ3LZom/tWLh1mhUCYsOXnPOqnvazyWW8XYJ04NM0jf/XC3Adwq5FDhc AABbRdS7glg7pVG7BUAaew19fwN/fS3BsUeXVWMhj95YSgC6Fc58k79dPNy/TamcXl10 zfZg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id x89si2495051pff.412.2017.06.04.01.07.05; Sun, 04 Jun 2017 01:07:05 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751115AbdFDIHE (ORCPT + 1 other); Sun, 4 Jun 2017 04:07:04 -0400 Received: from foss.arm.com ([217.140.101.70]:52730 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751244AbdFDIDc (ORCPT ); Sun, 4 Jun 2017 04:03:32 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1BA1C80D; Sun, 4 Jun 2017 01:03:31 -0700 (PDT) Received: from localhost.localdomain (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1F9483F589; Sun, 4 Jun 2017 01:03:28 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman Cc: Joe Perches , Ofir Drang , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Subject: [PATCH v3 07/18] staging: ccree: remove cycle count debug support Date: Sun, 4 Jun 2017 11:02:28 +0300 Message-Id: <1496563362-7954-8-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1496563362-7954-1-git-send-email-gilad@benyossef.com> References: <1496563362-7954-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The ccree driver had support for rough performance debugging via cycle counting which has bit rotted and can easily be replcaed with perf. Remove it from the driver. Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/cc_hw_queue_defs.h | 14 ---- drivers/staging/ccree/ssi_aead.c | 33 --------- drivers/staging/ccree/ssi_cipher.c | 20 ------ drivers/staging/ccree/ssi_config.h | 6 -- drivers/staging/ccree/ssi_driver.c | 8 --- drivers/staging/ccree/ssi_driver.h | 25 ------- drivers/staging/ccree/ssi_hash.c | 28 -------- drivers/staging/ccree/ssi_request_mgr.c | 116 ------------------------------- 8 files changed, 250 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/cc_hw_queue_defs.h b/drivers/staging/ccree/cc_hw_queue_defs.h index c3f9d6a..e750817 100644 --- a/drivers/staging/ccree/cc_hw_queue_defs.h +++ b/drivers/staging/ccree/cc_hw_queue_defs.h @@ -29,8 +29,6 @@ #define HW_DESC_SIZE_WORDS 6 #define HW_QUEUE_SLOTS_MAX 15 /* Max. available slots in HW queue */ -#define _HW_DESC_MONITOR_KICK 0x7FFFC00 - #define CC_REG_NAME(word, name) DX_DSCRPTR_QUEUE_WORD ## word ## _ ## name #define CC_REG_LOW(word, name) \ @@ -606,16 +604,4 @@ static inline void set_cipher_do(struct cc_hw_desc *pdesc, (config & HW_KEY_MASK_CIPHER_DO)); } -/*! - * This macro sets the DIN field of a HW descriptors to star/stop monitor descriptor. - * Used for performance measurements and debug purposes. 
- * - * \param pDesc pointer HW descriptor struct - */ -#define HW_DESC_SET_DIN_MONITOR_CNTR(pDesc) \ - do { \ - CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_MEASURE_CNTR, VALUE, (pDesc)->word[1], _HW_DESC_MONITOR_KICK); \ - } while (0) - - #endif /*__CC_HW_QUEUE_DEFS_H__*/ diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c index a42bb49..e8936a3 100644 --- a/drivers/staging/ccree/ssi_aead.c +++ b/drivers/staging/ccree/ssi_aead.c @@ -217,9 +217,6 @@ static void ssi_aead_complete(struct device *dev, void *ssi_req, void __iomem *c struct crypto_aead *tfm = crypto_aead_reqtfm(ssi_req); struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); int err = 0; - DECL_CYCLE_COUNT_RESOURCES; - - START_CYCLE_COUNT(); ssi_buffer_mgr_unmap_aead_request(dev, areq); @@ -254,7 +251,6 @@ static void ssi_aead_complete(struct device *dev, void *ssi_req, void __iomem *c } } - END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_4); aead_request_complete(areq, err); } @@ -521,10 +517,6 @@ ssi_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key, unsigned int keyl idx++; } -#ifdef ENABLE_CYCLE_COUNT - ssi_req.op_type = STAT_OP_TYPE_SETKEY; -#endif - rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0); if (unlikely(rc != 0)) SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); @@ -546,14 +538,12 @@ ssi_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen) struct crypto_authenc_key_param *param; struct cc_hw_desc desc[MAX_AEAD_SETKEY_SEQ]; int seq_len = 0, rc = -EINVAL; - DECL_CYCLE_COUNT_RESOURCES; SSI_LOG_DEBUG("Setting key in context @%p for %s. key=%p keylen=%u\n", ctx, crypto_tfm_alg_name(crypto_aead_tfm(tfm)), key, keylen); CHECK_AND_RETURN_UPON_FIPS_ERROR(); /* STAT_PHASE_0: Init and sanity checks */ - START_CYCLE_COUNT(); if (ctx->auth_mode != DRV_HASH_NULL) { /* authenc() alg. 
*/ if (!RTA_OK(rta, keylen)) @@ -592,9 +582,7 @@ ssi_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen) if (unlikely(rc != 0)) goto badkey; - END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0); /* STAT_PHASE_1: Copy key to ctx */ - START_CYCLE_COUNT(); /* Get key material */ memcpy(ctx->enckey, key + ctx->auth_keylen, ctx->enc_keylen); @@ -608,10 +596,8 @@ ssi_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen) goto badkey; } - END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1); /* STAT_PHASE_2: Create sequence */ - START_CYCLE_COUNT(); switch (ctx->auth_mode) { case DRV_HASH_SHA1: @@ -629,15 +615,10 @@ ssi_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen) goto badkey; } - END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_2); /* STAT_PHASE_3: Submit sequence to HW */ - START_CYCLE_COUNT(); if (seq_len > 0) { /* For CCM there is no sequence to setup the key */ -#ifdef ENABLE_CYCLE_COUNT - ssi_req.op_type = STAT_OP_TYPE_SETKEY; -#endif rc = send_request(ctx->drvdata, &ssi_req, desc, seq_len, 0); if (unlikely(rc != 0)) { SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); @@ -646,7 +627,6 @@ ssi_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen) } /* Update STAT_PHASE_3 */ - END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_3); return rc; badkey: @@ -1977,7 +1957,6 @@ static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction struct device *dev = &ctx->drvdata->plat_dev->dev; struct ssi_crypto_req ssi_req = {}; - DECL_CYCLE_COUNT_RESOURCES; SSI_LOG_DEBUG("%s context=%p req=%p iv=%p src=%p src_ofs=%d dst=%p dst_ofs=%d cryptolen=%d\n", ((direct==DRV_CRYPTO_DIRECTION_ENCRYPT)?"Encrypt":"Decrypt"), ctx, req, req->iv, @@ -1985,7 +1964,6 @@ static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction CHECK_AND_RETURN_UPON_FIPS_ERROR(); /* STAT_PHASE_0: Init and sanity checks */ - START_CYCLE_COUNT(); /* Check data length according to mode */ if (unlikely(validate_data_size(ctx, direct, req) != 0)) { @@ -1999,19 +1977,13 @@ static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction ssi_req.user_cb = (void *)ssi_aead_complete; ssi_req.user_arg = (void *)req; -#ifdef ENABLE_CYCLE_COUNT - ssi_req.op_type = (direct == DRV_CRYPTO_DIRECTION_DECRYPT) ? 
- STAT_OP_TYPE_DECODE : STAT_OP_TYPE_ENCODE; -#endif /* Setup request context */ areq_ctx->gen_ctx.op_type = direct; areq_ctx->req_authsize = ctx->authsize; areq_ctx->cipher_mode = ctx->cipher_mode; - END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_0); /* STAT_PHASE_1: Map buffers */ - START_CYCLE_COUNT(); if (ctx->cipher_mode == DRV_CIPHER_CTR) { /* Build CTR IV - Copy nonce from last 4 bytes in @@ -2095,10 +2067,8 @@ static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction ssi_req.ivgen_size = crypto_aead_ivsize(tfm); } - END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_1); /* STAT_PHASE_2: Create sequence */ - START_CYCLE_COUNT(); /* Load MLLI tables to SRAM if necessary */ ssi_aead_load_mlli_to_sram(req, desc, &seq_len); @@ -2133,10 +2103,8 @@ static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction goto exit; } - END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_2); /* STAT_PHASE_3: Lock HW and push sequence */ - START_CYCLE_COUNT(); rc = send_request(ctx->drvdata, &ssi_req, desc, seq_len, 1); @@ -2146,7 +2114,6 @@ static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction } - END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3); exit: return rc; } diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c index 56e441c..e16fd36 100644 --- a/drivers/staging/ccree/ssi_cipher.c +++ b/drivers/staging/ccree/ssi_cipher.c @@ -323,7 +323,6 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm, struct device *dev = &ctx_p->drvdata->plat_dev->dev; u32 tmp[DES_EXPKEY_WORDS]; unsigned int max_key_buf_size = get_max_keysize(tfm); - DECL_CYCLE_COUNT_RESOURCES; SSI_LOG_DEBUG("Setting key in context @%p for %s. keylen=%u\n", ctx_p, crypto_tfm_alg_name(tfm), keylen); @@ -334,7 +333,6 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm, SSI_LOG_DEBUG("ssi_blkcipher_setkey: after FIPS check"); /* STAT_PHASE_0: Init and sanity checks */ - START_CYCLE_COUNT(); #if SSI_CC_HAS_MULTI2 /*last byte of key buffer is round number and should not be a part of key size*/ @@ -379,7 +377,6 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm, } ctx_p->keylen = keylen; - END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0); SSI_LOG_DEBUG("ssi_blkcipher_setkey: ssi_is_hw_key ret 0"); return 0; @@ -407,10 +404,8 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm, } - END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0); /* STAT_PHASE_1: Copy key to ctx */ - START_CYCLE_COUNT(); dma_sync_single_for_cpu(dev, ctx_p->user.key_dma_addr, max_key_buf_size, DMA_TO_DEVICE); #if SSI_CC_HAS_MULTI2 @@ -448,7 +443,6 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm, max_key_buf_size, DMA_TO_DEVICE); ctx_p->keylen = keylen; - END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1); SSI_LOG_DEBUG("ssi_blkcipher_setkey: return safely"); return 0; @@ -736,11 +730,8 @@ static int ssi_blkcipher_complete(struct device *dev, { int completion_error = 0; u32 inflight_counter; - DECL_CYCLE_COUNT_RESOURCES; - START_CYCLE_COUNT(); ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst); - END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_4); /*Set the inflight couter value to local variable*/ @@ -771,7 +762,6 @@ static int ssi_blkcipher_process( struct cc_hw_desc desc[MAX_ABLKCIPHER_SEQ_LEN]; struct ssi_crypto_req ssi_req = {}; int rc, seq_len = 0,cts_restore_flag = 0; - DECL_CYCLE_COUNT_RESOURCES; SSI_LOG_DEBUG("%s areq=%p info=%p nbytes=%d\n", ((direction==DRV_CRYPTO_DIRECTION_ENCRYPT)?"Encrypt":"Decrypt"), @@ -779,7 +769,6 @@ static int 
ssi_blkcipher_process( CHECK_AND_RETURN_UPON_FIPS_ERROR(); /* STAT_PHASE_0: Init and sanity checks */ - START_CYCLE_COUNT(); /* TODO: check data length according to mode */ if (unlikely(validate_data_size(ctx_p, nbytes))) { @@ -811,10 +800,8 @@ static int ssi_blkcipher_process( /* Setup request context */ req_ctx->gen_ctx.op_type = direction; - END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_0); /* STAT_PHASE_1: Map buffers */ - START_CYCLE_COUNT(); rc = ssi_buffer_mgr_map_blkcipher_request(ctx_p->drvdata, req_ctx, ivsize, nbytes, info, src, dst); if (unlikely(rc != 0)) { @@ -822,10 +809,8 @@ static int ssi_blkcipher_process( goto exit_process; } - END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_1); /* STAT_PHASE_2: Create sequence */ - START_CYCLE_COUNT(); /* Setup processing */ #if SSI_CC_HAS_MULTI2 @@ -860,10 +845,8 @@ static int ssi_blkcipher_process( /* set the IV size (8/16 B long)*/ ssi_req.ivgen_size = ivsize; } - END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_2); /* STAT_PHASE_3: Lock HW and push sequence */ - START_CYCLE_COUNT(); rc = send_request(ctx_p->drvdata, &ssi_req, desc, seq_len, (areq == NULL)? 0:1); if(areq != NULL) { @@ -872,13 +855,10 @@ static int ssi_blkcipher_process( ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst); } - END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3); } else { if (rc != 0) { ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst); - END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3); } else { - END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3); rc = ssi_blkcipher_complete(dev, ctx_p, req_ctx, dst, src, ivsize, NULL, ctx_p->drvdata->cc_base); diff --git a/drivers/staging/ccree/ssi_config.h b/drivers/staging/ccree/ssi_config.h index 9feb692..b7c0576 100644 --- a/drivers/staging/ccree/ssi_config.h +++ b/drivers/staging/ccree/ssi_config.h @@ -30,15 +30,9 @@ // #define DX_DUMP_BYTES // #define CC_DEBUG #define ENABLE_CC_SYSFS /* Enable sysfs interface for debugging REE driver */ -//#define ENABLE_CC_CYCLE_COUNT //#define DX_IRQ_DELAY 100000 #define DMA_BIT_MASK_LEN 48 /* was 32 bit, but for juno's sake it was enlarged to 48 bit */ -#if defined ENABLE_CC_CYCLE_COUNT && defined ENABLE_CC_SYSFS -#define CC_CYCLE_COUNT -#endif - - #if defined (CONFIG_ARM64) // TODO currently only this mode was test on Juno (which is ARM64), need to enable coherent also. 
#define DISABLE_COHERENT_DMA_OPS #endif diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c index 52c6984..1909229 100644 --- a/drivers/staging/ccree/ssi_driver.c +++ b/drivers/staging/ccree/ssi_driver.c @@ -118,10 +118,8 @@ static irqreturn_t cc_isr(int irq, void *dev_id) void __iomem *cc_base = drvdata->cc_base; u32 irr; u32 imr; - DECL_CYCLE_COUNT_RESOURCES; /* STAT_OP_TYPE_GENERIC STAT_PHASE_0: Interrupt */ - START_CYCLE_COUNT(); /* read the interrupt status */ irr = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRR)); @@ -168,9 +166,6 @@ static irqreturn_t cc_isr(int irq, void *dev_id) /* Just warning */ } - END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_0); - START_CYCLE_COUNT_AT(drvdata->isr_exit_cycles); - return IRQ_HANDLED; } @@ -509,9 +504,6 @@ static int cc7x_remove(struct platform_device *plat_dev) cleanup_cc_resources(plat_dev); SSI_LOG(KERN_INFO, "ARM cc7x_ree device terminated\n"); -#ifdef ENABLE_CYCLE_COUNT - display_all_stat_db(); -#endif return 0; } diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h index c0bb1a1..658a7ed 100644 --- a/drivers/staging/ccree/ssi_driver.h +++ b/drivers/staging/ccree/ssi_driver.h @@ -117,11 +117,6 @@ struct ssi_crypto_req { unsigned int ivgen_dma_addr_len; /* Amount of 'ivgen_dma_addr' elements to be filled. */ unsigned int ivgen_size; /* The generated IV size required, 8/16 B allowed. */ struct completion seq_compl; /* request completion */ -#ifdef ENABLE_CYCLE_COUNT - enum stat_op op_type; - cycles_t submit_cycle; - bool is_monitored_p; -#endif }; /** @@ -153,10 +148,6 @@ struct ssi_drvdata { void *fips_handle; void *ivgen_handle; void *sram_mgr_handle; - -#ifdef ENABLE_CYCLE_COUNT - cycles_t isr_exit_cycles; /* Save for isr-to-tasklet latency */ -#endif u32 inflight_counter; }; @@ -202,22 +193,6 @@ void dump_byte_array(const char *name, const u8 *the_array, unsigned long size); } while (0); #endif -#ifdef ENABLE_CYCLE_COUNT -#define DECL_CYCLE_COUNT_RESOURCES cycles_t _last_cycles_read -#define START_CYCLE_COUNT() do { _last_cycles_read = get_cycles(); } while (0) -#define END_CYCLE_COUNT(_stat_op_type, _stat_phase) update_host_stat(_stat_op_type, _stat_phase, get_cycles() - _last_cycles_read) -#define GET_START_CYCLE_COUNT() _last_cycles_read -#define START_CYCLE_COUNT_AT(_var) do { _var = get_cycles(); } while(0) -#define END_CYCLE_COUNT_AT(_var, _stat_op_type, _stat_phase) update_host_stat(_stat_op_type, _stat_phase, get_cycles() - _var) -#else -#define DECL_CYCLE_COUNT_RESOURCES -#define START_CYCLE_COUNT() do { } while (0) -#define END_CYCLE_COUNT(_stat_op_type, _stat_phase) do { } while (0) -#define GET_START_CYCLE_COUNT() 0 -#define START_CYCLE_COUNT_AT(_var) do { } while (0) -#define END_CYCLE_COUNT_AT(_var, _stat_op_type, _stat_phase) do { } while (0) -#endif /*ENABLE_CYCLE_COUNT*/ - int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe); void fini_cc_regs(struct ssi_drvdata *drvdata); diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c index 66405e9..47bc496 100644 --- a/drivers/staging/ccree/ssi_hash.c +++ b/drivers/staging/ccree/ssi_hash.c @@ -460,9 +460,6 @@ static int ssi_hash_digest(struct ahash_req_ctx *state, /* Setup DX request structure */ ssi_req.user_cb = (void *)ssi_hash_digest_complete; ssi_req.user_arg = (void *)async_req; -#ifdef ENABLE_CYCLE_COUNT - ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ -#endif } /* If HMAC then load hash IPAD xor key, if HASH then load initial digest */ @@ -630,9 
+627,6 @@ static int ssi_hash_update(struct ahash_req_ctx *state, /* Setup DX request structure */ ssi_req.user_cb = (void *)ssi_hash_update_complete; ssi_req.user_arg = async_req; -#ifdef ENABLE_CYCLE_COUNT - ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ -#endif } /* Restore hash digest */ @@ -725,9 +719,6 @@ static int ssi_hash_finup(struct ahash_req_ctx *state, /* Setup DX request structure */ ssi_req.user_cb = (void *)ssi_hash_complete; ssi_req.user_arg = async_req; -#ifdef ENABLE_CYCLE_COUNT - ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ -#endif } /* Restore hash digest */ @@ -866,9 +857,6 @@ static int ssi_hash_final(struct ahash_req_ctx *state, /* Setup DX request structure */ ssi_req.user_cb = (void *)ssi_hash_complete; ssi_req.user_arg = async_req; -#ifdef ENABLE_CYCLE_COUNT - ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ -#endif } /* Restore hash digest */ @@ -1308,7 +1296,6 @@ static int ssi_cmac_setkey(struct crypto_ahash *ahash, const u8 *key, unsigned int keylen) { struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash); - DECL_CYCLE_COUNT_RESOURCES; SSI_LOG_DEBUG("===== setkey (%d) ====\n", keylen); CHECK_AND_RETURN_UPON_FIPS_ERROR(); @@ -1326,7 +1313,6 @@ static int ssi_cmac_setkey(struct crypto_ahash *ahash, ctx->key_params.keylen = keylen; /* STAT_PHASE_1: Copy key to ctx */ - START_CYCLE_COUNT(); dma_sync_single_for_cpu(&ctx->drvdata->plat_dev->dev, ctx->opad_tmp_keys_dma_addr, @@ -1342,7 +1328,6 @@ static int ssi_cmac_setkey(struct crypto_ahash *ahash, ctx->key_params.keylen = keylen; - END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1); return 0; } @@ -1510,9 +1495,6 @@ static int ssi_mac_update(struct ahash_request *req) /* Setup DX request structure */ ssi_req.user_cb = (void *)ssi_hash_update_complete; ssi_req.user_arg = (void *)req; -#ifdef ENABLE_CYCLE_COUNT - ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ -#endif rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); if (unlikely(rc != -EINPROGRESS)) { @@ -1563,9 +1545,6 @@ static int ssi_mac_final(struct ahash_request *req) /* Setup DX request structure */ ssi_req.user_cb = (void *)ssi_hash_complete; ssi_req.user_arg = (void *)req; -#ifdef ENABLE_CYCLE_COUNT - ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ -#endif if (state->xcbc_count && (rem_cnt == 0)) { /* Load key for ECB decryption */ @@ -1671,9 +1650,6 @@ static int ssi_mac_finup(struct ahash_request *req) /* Setup DX request structure */ ssi_req.user_cb = (void *)ssi_hash_complete; ssi_req.user_arg = (void *)req; -#ifdef ENABLE_CYCLE_COUNT - ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ -#endif if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) { key_len = CC_AES_128_BIT_KEY_SIZE; @@ -1747,10 +1723,6 @@ static int ssi_mac_digest(struct ahash_request *req) /* Setup DX request structure */ ssi_req.user_cb = (void *)ssi_hash_digest_complete; ssi_req.user_arg = (void *)req; -#ifdef ENABLE_CYCLE_COUNT - ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ -#endif - if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) { keyLen = CC_AES_128_BIT_KEY_SIZE; diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c index 2ec296c..2382f32 100644 --- a/drivers/staging/ccree/ssi_request_mgr.c +++ b/drivers/staging/ccree/ssi_request_mgr.c @@ -37,74 +37,6 @@ #define AXIM_MON_BASE_OFFSET CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_COMP) -#ifdef CC_CYCLE_COUNT - -#define MONITOR_CNTR_BIT 0 - -/** - * Monitor descriptor. 
- * Used to measure CC performance. - */ -#define INIT_CC_MONITOR_DESC(desc_p) \ -do { \ - hw_desc_init(desc_p); \ - HW_DESC_SET_DIN_MONITOR_CNTR(desc_p); \ -} while (0) - -/** - * Try adding monitor descriptor BEFORE enqueuing sequence. - */ -#define CC_CYCLE_DESC_HEAD(cc_base_addr, desc_p, lock_p, is_monitored_p) \ -do { \ - if (!test_and_set_bit(MONITOR_CNTR_BIT, (lock_p))) { \ - enqueue_seq((cc_base_addr), (desc_p), 1); \ - *(is_monitored_p) = true; \ - } else { \ - *(is_monitored_p) = false; \ - } \ -} while (0) - -/** - * If CC_CYCLE_DESC_HEAD was successfully added: - * 1. Add memory barrier descriptor to ensure last AXI transaction. - * 2. Add monitor descriptor to sequence tail AFTER enqueuing sequence. - */ -#define CC_CYCLE_DESC_TAIL(cc_base_addr, desc_p, is_monitored) \ -do { \ - if ((is_monitored) == true) { \ - struct cc_hw_desc barrier_desc; \ - hw_desc_init(&barrier_desc); \ - set_din_no_dma(&barrier_desc, 0, 0xfffff0); \ - set_dout_no_dma(&barrier_desc, 0, 0, 1); \ - enqueue_seq((cc_base_addr), &barrier_desc, 1); \ - enqueue_seq((cc_base_addr), (desc_p), 1); \ - } \ -} while (0) - -/** - * Try reading CC monitor counter value upon sequence complete. - * Can only succeed if the lock_p is taken by the owner of the given request. - */ -#define END_CC_MONITOR_COUNT(cc_base_addr, stat_op_type, stat_phase, monitor_null_cycles, lock_p, is_monitored) \ -do { \ - u32 elapsed_cycles; \ - if ((is_monitored) == true) { \ - elapsed_cycles = READ_REGISTER((cc_base_addr) + CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_MEASURE_CNTR)); \ - clear_bit(MONITOR_CNTR_BIT, (lock_p)); \ - if (elapsed_cycles > 0) \ - update_cc_stat(stat_op_type, stat_phase, (elapsed_cycles - monitor_null_cycles)); \ - } \ -} while (0) - -#else /*CC_CYCLE_COUNT*/ - -#define INIT_CC_MONITOR_DESC(desc_p) do { } while (0) -#define CC_CYCLE_DESC_HEAD(cc_base_addr, desc_p, lock_p, is_monitored_p) do { } while (0) -#define CC_CYCLE_DESC_TAIL(cc_base_addr, desc_p, is_monitored) do { } while (0) -#define END_CC_MONITOR_COUNT(cc_base_addr, stat_op_type, stat_phase, monitor_null_cycles, lock_p, is_monitored) do { } while (0) -#endif /*CC_CYCLE_COUNT*/ - - struct ssi_request_mgr_handle { /* Request manager resources */ unsigned int hw_queue_size; /* HW capability */ @@ -168,10 +100,6 @@ void request_mgr_fini(struct ssi_drvdata *drvdata) int request_mgr_init(struct ssi_drvdata *drvdata) { -#ifdef CC_CYCLE_COUNT - struct cc_hw_desc monitor_desc[2]; - struct ssi_crypto_req monitor_req = {0}; -#endif struct ssi_request_mgr_handle *req_mgr_h; int rc = 0; @@ -228,24 +156,6 @@ int request_mgr_init(struct ssi_drvdata *drvdata) set_flow_mode(&req_mgr_h->compl_desc, BYPASS); set_queue_last_ind(&req_mgr_h->compl_desc); -#ifdef CC_CYCLE_COUNT - /* For CC-HW cycle performance trace */ - INIT_CC_MONITOR_DESC(&req_mgr_h->monitor_desc); - set_bit(MONITOR_CNTR_BIT, &req_mgr_h->monitor_lock); - monitor_desc[0] = req_mgr_h->monitor_desc; - monitor_desc[1] = req_mgr_h->monitor_desc; - - rc = send_request(drvdata, &monitor_req, monitor_desc, 2, 0); - if (unlikely(rc != 0)) - goto req_mgr_init_err; - - drvdata->monitor_null_cycles = READ_REGISTER(drvdata->cc_base + - CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_MEASURE_CNTR)); - SSI_LOG_ERR("Calibration time=0x%08x\n", drvdata->monitor_null_cycles); - - clear_bit(MONITOR_CNTR_BIT, &req_mgr_h->monitor_lock); -#endif - return 0; req_mgr_init_err: @@ -367,7 +277,6 @@ int send_request( ((ssi_req->ivgen_dma_addr_len == 0) ? 0 : SSI_IVPOOL_SEQ_LEN ) + ((is_dout == 0 )? 
1 : 0)); - DECL_CYCLE_COUNT_RESOURCES; #if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP) rc = ssi_power_mgr_runtime_get(&drvdata->plat_dev->dev); @@ -446,12 +355,8 @@ int send_request( req_mgr_h->max_used_sw_slots = used_sw_slots; } - CC_CYCLE_DESC_HEAD(cc_base, &req_mgr_h->monitor_desc, - &req_mgr_h->monitor_lock, &ssi_req->is_monitored_p); - /* Enqueue request - must be locked with HW lock*/ req_mgr_h->req_queue[req_mgr_h->req_queue_head] = *ssi_req; - START_CYCLE_COUNT_AT(req_mgr_h->req_queue[req_mgr_h->req_queue_head].submit_cycle); req_mgr_h->req_queue_head = (req_mgr_h->req_queue_head + 1) & (MAX_REQUEST_QUEUE_SIZE - 1); /* TODO: Use circ_buf.h ? */ @@ -462,13 +367,9 @@ int send_request( #endif /* STAT_PHASE_4: Push sequence */ - START_CYCLE_COUNT(); enqueue_seq(cc_base, iv_seq, iv_seq_len); enqueue_seq(cc_base, desc, len); enqueue_seq(cc_base, &req_mgr_h->compl_desc, (is_dout ? 0 : 1)); - END_CYCLE_COUNT(ssi_req->op_type, STAT_PHASE_4); - - CC_CYCLE_DESC_TAIL(cc_base, &req_mgr_h->monitor_desc, ssi_req->is_monitored_p); if (unlikely(req_mgr_h->q_free_slots < total_seq_len)) { /*This means that there was a problem with the resume*/ @@ -558,7 +459,6 @@ static void proc_completions(struct ssi_drvdata *drvdata) #if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP) int rc = 0; #endif - DECL_CYCLE_COUNT_RESOURCES; while(request_mgr_handle->axi_completed) { request_mgr_handle->axi_completed--; @@ -570,9 +470,6 @@ static void proc_completions(struct ssi_drvdata *drvdata) } ssi_req = &request_mgr_handle->req_queue[request_mgr_handle->req_queue_tail]; - END_CYCLE_COUNT_AT(ssi_req->submit_cycle, ssi_req->op_type, STAT_PHASE_5); /* Seq. Comp. */ - END_CC_MONITOR_COUNT(drvdata->cc_base, ssi_req->op_type, STAT_PHASE_6, - drvdata->monitor_null_cycles, &request_mgr_handle->monitor_lock, ssi_req->is_monitored_p); #ifdef FLUSH_CACHE_ALL flush_cache_all(); @@ -591,9 +488,7 @@ static void proc_completions(struct ssi_drvdata *drvdata) #endif /* COMPLETION_DELAY */ if (likely(ssi_req->user_cb != NULL)) { - START_CYCLE_COUNT(); ssi_req->user_cb(&plat_dev->dev, ssi_req->user_arg, drvdata->cc_base); - END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_3); } request_mgr_handle->req_queue_tail = (request_mgr_handle->req_queue_tail + 1) & (MAX_REQUEST_QUEUE_SIZE - 1); SSI_LOG_DEBUG("Dequeue request tail=%u\n", request_mgr_handle->req_queue_tail); @@ -617,9 +512,7 @@ static void comp_handler(unsigned long devarg) u32 irq; - DECL_CYCLE_COUNT_RESOURCES; - START_CYCLE_COUNT(); irq = (drvdata->irq & SSI_COMP_IRQ_MASK); @@ -631,14 +524,6 @@ static void comp_handler(unsigned long devarg) request_mgr_handle->axi_completed += CC_REG_FLD_GET(CRY_KERNEL, AXIM_MON_COMP, VALUE, CC_HAL_READ_REGISTER(AXIM_MON_BASE_OFFSET)); - /* ISR-to-Tasklet latency */ - if (request_mgr_handle->axi_completed) { - /* Only if actually reflects ISR-to-completion-handling latency, i.e., - * not duplicate as a result of interrupt after AXIM_MON_ERR clear, before end of loop - */ - END_CYCLE_COUNT_AT(drvdata->isr_exit_cycles, STAT_OP_TYPE_GENERIC, STAT_PHASE_1); - } - while (request_mgr_handle->axi_completed) { do { proc_completions(drvdata); @@ -662,7 +547,6 @@ static void comp_handler(unsigned long devarg) CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), CC_HAL_READ_REGISTER( CC_REG_OFFSET(HOST_RGF, HOST_IMR)) & ~irq); - END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_2); } /* From patchwork Sun Jun 4 08:02:31 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 101344 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp438176qgd; Sun, 4 Jun 2017 01:06:27 -0700 (PDT) X-Received: by 10.98.40.198 with SMTP id o189mr14907294pfo.238.1496563587254; Sun, 04 Jun 2017 01:06:27 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1496563587; cv=none; d=google.com; s=arc-20160816; b=ot0JNgShOLFUJpCrNERymrS+qGKBlchFPI9C7+RCkcRs7dRB+BxhQJtuVpn6+qurKf w47QVB2mMVmWPyMcdOlIYuodRiDz7ck5wdctHf+OeqtjLT1zg8KE7FcWHzawXHI94pPM K0noRWOnud/h/x5vfnlZVukq/wIuvjN41pRcwem1uSmAzRIeEnWh8GjuNdGXbQvpIxm6 0GnJ6ZawCI95gKlEK2qKi4bO5P6/dzfA7wiEpuGplcaaTd6+eG9O3g1BhHFiv2ox5Kr7 P4N/zaeeYBoznc7Gig8xVxLhWb0Iuij4/R565MQ81I7MSlDr5fGaVzPUQCMxtAKIbJgn XqEg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=dHYhruYsvbJtYtlJ127SrKaz4djZ1RgxzruHY14Nvx8=; b=CiJtXfVW19X2HrhZI0wCgm0Qt6KdvFll6JkIrPUuojVSQEF8ulESUX1iJppJweY51j NiIJSIgMZhnrBCNk2OkgEdOhMRDi2lFph+Gs0mH8robFoax28fTMLt/oaf6W1VmIv23n pPB+0XKS6Al/nNcCjMyTaggANOsx6MLOMzwWr0DLkKhUTsNvpJW3fRv5O9iXPQbLvsAr myu6O7NzBSM5n24nLiBrGYn+XrCqGjXp4/LCjipMbtLuM14n/hW7iquItBlHgQFFvXJW Sx31CMOHNOy5PuKhieVS5pH6UyCmJWTfO2Nw9m2N4z7y4fEuksKH+1X3ErR1TgmB04uY SEZQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id m5si4321200pli.193.2017.06.04.01.06.27; Sun, 04 Jun 2017 01:06:27 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751284AbdFDIGZ (ORCPT + 1 other); Sun, 4 Jun 2017 04:06:25 -0400 Received: from foss.arm.com ([217.140.101.70]:52768 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751193AbdFDIDw (ORCPT ); Sun, 4 Jun 2017 04:03:52 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8A3CB344; Sun, 4 Jun 2017 01:03:47 -0700 (PDT) Received: from localhost.localdomain (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 810CA3F589; Sun, 4 Jun 2017 01:03:45 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman Cc: Joe Perches , Ofir Drang , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Subject: [PATCH v3 10/18] staging: ccree: remove unused struct Date: Sun, 4 Jun 2017 11:02:31 +0300 Message-Id: <1496563362-7954-11-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1496563362-7954-1-git-send-email-gilad@benyossef.com> References: <1496563362-7954-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org struct SepHashPrivateContext is not 
used anywhere in the code. Remove it. Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/hash_defs.h | 18 ------------------ 1 file changed, 18 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/hash_defs.h b/drivers/staging/ccree/hash_defs.h index 3f2b2d1..9e01219 100644 --- a/drivers/staging/ccree/hash_defs.h +++ b/drivers/staging/ccree/hash_defs.h @@ -57,23 +57,5 @@ enum HashCipherDoPadding { HASH_CIPHER_DO_PADDING_RESERVE32 = S32_MAX, }; -typedef struct SepHashPrivateContext { - /* The current length is placed at the end of the context buffer because the hash - * context is used for all HMAC operations as well. HMAC context includes a 64 bytes - * K0 field. The size of struct drv_ctx_hash reserved field is 88/184 bytes depend if t - * he SHA512 is supported ( in this case teh context size is 256 bytes). - * The size of struct drv_ctx_hash reseved field is 20 or 52 depend if the SHA512 is supported. - * This means that this structure size (without the reserved field can be up to 20 bytes , - * in case sha512 is not suppported it is 20 bytes (SEP_HASH_LENGTH_WORDS define to 2 ) and in the other - * case it is 28 (SEP_HASH_LENGTH_WORDS define to 4) - */ - u32 reserved[(sizeof(struct drv_ctx_hash)/sizeof(u32)) - SEP_HASH_LENGTH_WORDS - 3]; - u32 CurrentDigestedLength[SEP_HASH_LENGTH_WORDS]; - u32 KeyType; - u32 dataCompleted; - u32 hmacFinalization; - /* no space left */ -} SepHashPrivateContext_s; - #endif /*_HASH_DEFS_H__*/ From patchwork Sun Jun 4 08:02:35 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 101341 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp437958qgd; Sun, 4 Jun 2017 01:05:39 -0700 (PDT) X-Received: by 10.99.185.65 with SMTP id v1mr11828458pgo.189.1496563539125; Sun, 04 Jun 2017 01:05:39 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1496563539; cv=none; d=google.com; s=arc-20160816; b=UJkeQt4w7W3g+hL6j2/MXWICNUWm2tJvKt8zJprwkBFLG07z6ZIUltkAQxzEwogYCd qBqeFJQe265LIa64nITBsL1mpvSW2B1CUt5W1WIZyN2r0po3+ghHiHJ0WbGLS0eO7ofW We09FAY2IwGxEXTu3IM/TuRtq6He+/9v8cyHYJUv3fOy8YSCyzY3kOmgYdvxY4LQW36n FaKsycOhZLERwYMx0c/6TehmcKxh2HJcubwXzi4bGQOOhXpv0cWNZYk5oWgn+yGgYxeo QtUh+JNAigpLlVFxkPcLS+5zR0oROV1aU0/LYiBFj0VIrCUsi8b2AbNzI+iSpblOP3mH suhw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=SJ8BtvGQaXG3ECsEFnR80JCD3GzG8V4quTgKNPl3Ymw=; b=P4NnpsT//S9Cm7rQGpStYQQuXwoAqSahznpOJ7X1M7Le3IJdPOztyyQ6MTlfsJdv8E yTf2U1rtHgP+uSWtvOJ9DizeSPufRHDjRz43HB7dsTBGezYTcAxuii/rm6aHJ5cihC+G G4xQAmS47+MUOBYQeEKxAyZ9hfX8c1sG2UXHoa8ncW/+vj4a9YWMgNwgaXv2yeBxQQkR r6vklRGTFVXwrO7H26vEyWarBilT/m5WzajcbW5ivW/pTUT2HXpTRZbrV0FlbII1v2pe +3K3ZkyMdXNKFFxWZOool84Aep6pmALAlaVQaaMINtGwJ9g9MVMZZhZfHfp7Wh9jJ8zz +iHw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id u59si4227618plb.508.2017.06.04.01.05.39; Sun, 04 Jun 2017 01:05:39 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751139AbdFDIFh (ORCPT + 1 other); Sun, 4 Jun 2017 04:05:37 -0400 Received: from foss.arm.com ([217.140.101.70]:52818 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751338AbdFDIEK (ORCPT ); Sun, 4 Jun 2017 04:04:10 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7F98C344; Sun, 4 Jun 2017 01:04:09 -0700 (PDT) Received: from localhost.localdomain (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 79ADE3F589; Sun, 4 Jun 2017 01:04:07 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman Cc: Joe Perches , Ofir Drang , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Subject: [PATCH v3 14/18] staging: ccree: remove spurious blank line Date: Sun, 4 Jun 2017 11:02:35 +0300 Message-Id: <1496563362-7954-15-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1496563362-7954-1-git-send-email-gilad@benyossef.com> References: <1496563362-7954-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Remove spurious blank line from cc_regs.h Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/cc_regs.h | 1 - 1 file changed, 1 deletion(-) -- 2.1.4 diff --git a/drivers/staging/ccree/cc_regs.h b/drivers/staging/ccree/cc_regs.h index 53675e3..4a893a6 100644 --- a/drivers/staging/ccree/cc_regs.h +++ b/drivers/staging/ccree/cc_regs.h @@ -14,7 +14,6 @@ * along with this program; if not, see . */ - /*! 
* @file * @brief This file contains macro definitions for accessing ARM TrustZone From patchwork Sun Jun 4 08:02:36 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 101337 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp437645qgd; Sun, 4 Jun 2017 01:04:28 -0700 (PDT) X-Received: by 10.84.231.16 with SMTP id f16mr8437495plk.259.1496563468355; Sun, 04 Jun 2017 01:04:28 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1496563468; cv=none; d=google.com; s=arc-20160816; b=Y5jG7O4M6a4gCvV13+lf8FDzjiws+n8WQEQKOM94a7ollI4z8CU79kHfPj7WIu49Kg 1TGxwbX4na9U5Gn/oUu/gHGr3A8RIYsvJpHA9HkJjHyDLC42c3kMajhLDPby6hh7Pf23 NjPCSh8GS8tBzez6Pc3wL4bauMlrZz4/aH/s5ymE1zrLSefYOIN7fZj5Fk/THPSRLwTA OctYV2utW/VZvx1RhK02hIClFvRexetULDzsr6pFe2u/1on7lfpIL81E+IhVyuvTk0aE Akfql5bxQka+DaVd3eajJ9UR2G9C3nC0HR/Y7ts9E0V9oM5MAjhVyW2Rl3s1NTRar2hA OFkA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=45IxpXQc734z4d2+Qb4Ub9D9bFB0/jKTUxdufKLXs0U=; b=RHdGzAv80EGfqc1jG4eK2OII+hgW00zWOlqaBO57XwuAWG/eWjxtkY2oFIRUEUAefj T59XK1PoYc0w5Zn3SGBxRHDa4D2D8JwSfLuBREVn2r0u8TffA8JOdQFBrMmNBO/hmgR2 HKewDPj1ekE+HaLqB9a9pE4LYR2A9UdDlC46PLy9dUqGPBGyMRI+G3SoypI9GgVMXW1/ lHSjzMSQN+uGiMhVyDR9YM69icu69hfL5hxN1945/YU5ViRXu2fqhIJmC1oH9MB7qVhe 2eTjR6r7oQMIovYifGD5JiUssupHryzA/nRLA/2gNrt+G9uURp3Ecr1KkgFwoA7EnsF/ kEyQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id p12si27953980pgs.155.2017.06.04.01.04.28; Sun, 04 Jun 2017 01:04:28 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751366AbdFDIEX (ORCPT + 1 other); Sun, 4 Jun 2017 04:04:23 -0400 Received: from foss.arm.com ([217.140.101.70]:52832 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751322AbdFDIEU (ORCPT ); Sun, 4 Jun 2017 04:04:20 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F2986344; Sun, 4 Jun 2017 01:04:14 -0700 (PDT) Received: from localhost.localdomain (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 00F943F589; Sun, 4 Jun 2017 01:04:12 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman Cc: Joe Perches , Ofir Drang , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Subject: [PATCH v3 15/18] staging: ccree: fix wrong whitespace usage Date: Sun, 4 Jun 2017 11:02:36 +0300 Message-Id: <1496563362-7954-16-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1496563362-7954-1-git-send-email-gilad@benyossef.com> References: <1496563362-7954-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Some of the register definition files did not follow the kernel coding style for tabs vs. spaces in macro definitions. This patch fixes them.
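For readers skimming this patch: the definitions in these headers come in *_REG_OFFSET / *_BIT_SHIFT / *_BIT_SIZE triplets, one triplet per register field. A minimal sketch of how such a triplet is typically consumed follows, assuming <linux/types.h> and the register header are available; the cc_read_field() and cc_read_completion_count() helpers are purely illustrative and are not part of the driver.

#include <linux/types.h>

/* Illustrative only: extract the field described by a *_BIT_SHIFT /
 * *_BIT_SIZE pair from a raw 32-bit register value. The u64
 * intermediate keeps a 0x20 (32-bit) field size from overflowing the
 * mask computation.
 */
static inline u32 cc_read_field(u32 reg_val, unsigned int shift,
				unsigned int size)
{
	return (reg_val >> shift) & (u32)(((u64)1 << size) - 1);
}

/* e.g. pulling the completion counter out of a raw register read */
static inline u32 cc_read_completion_count(u32 reg_val)
{
	return cc_read_field(reg_val,
		DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SHIFT,
		DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SIZE);
}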
Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/dx_crys_kernel.h | 308 ++++++++++++++++----------------- drivers/staging/ccree/dx_host.h | 256 +++++++++++++-------------- 2 files changed, 282 insertions(+), 282 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/dx_crys_kernel.h b/drivers/staging/ccree/dx_crys_kernel.h index a776e24..2196030 100644 --- a/drivers/staging/ccree/dx_crys_kernel.h +++ b/drivers/staging/ccree/dx_crys_kernel.h @@ -20,161 +20,161 @@ // -------------------------------------- // BLOCK: DSCRPTR // -------------------------------------- -#define DX_DSCRPTR_COMPLETION_COUNTER_REG_OFFSET 0xE00UL -#define DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SIZE 0x6UL -#define DX_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SHIFT 0x6UL -#define DX_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SIZE 0x1UL -#define DX_DSCRPTR_SW_RESET_REG_OFFSET 0xE40UL -#define DX_DSCRPTR_SW_RESET_VALUE_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_SW_RESET_VALUE_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_SRAM_SIZE_REG_OFFSET 0xE60UL -#define DX_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SIZE 0xAUL -#define DX_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SHIFT 0xAUL -#define DX_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SIZE 0xCUL -#define DX_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SHIFT 0x16UL -#define DX_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SIZE 0x3UL -#define DX_DSCRPTR_SINGLE_ADDR_EN_REG_OFFSET 0xE64UL -#define DX_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SIZE 0x1UL -#define DX_DSCRPTR_MEASURE_CNTR_REG_OFFSET 0xE68UL -#define DX_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SIZE 0x20UL -#define DX_DSCRPTR_QUEUE_WORD0_REG_OFFSET 0xE80UL -#define DX_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SIZE 0x20UL -#define DX_DSCRPTR_QUEUE_WORD1_REG_OFFSET 0xE84UL -#define DX_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SIZE 0x2UL -#define DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SHIFT 0x2UL -#define DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SIZE 0x18UL -#define DX_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SHIFT 0x1AUL -#define DX_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SHIFT 0x1BUL -#define DX_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SHIFT 0x1CUL -#define DX_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SHIFT 0x1DUL -#define DX_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SHIFT 0x1EUL -#define DX_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SIZE 0x2UL -#define DX_DSCRPTR_QUEUE_WORD2_REG_OFFSET 0xE88UL -#define DX_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SIZE 0x20UL -#define DX_DSCRPTR_QUEUE_WORD3_REG_OFFSET 0xE8CUL -#define DX_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SIZE 0x2UL -#define DX_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SHIFT 0x2UL -#define DX_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SIZE 0x18UL -#define DX_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SHIFT 0x1AUL -#define DX_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SHIFT 0x1BUL -#define 
DX_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SHIFT 0x1DUL -#define DX_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SHIFT 0x1EUL -#define DX_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SHIFT 0x1FUL -#define DX_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD4_REG_OFFSET 0xE90UL -#define DX_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SIZE 0x6UL -#define DX_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SHIFT 0x6UL -#define DX_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SHIFT 0x7UL -#define DX_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SHIFT 0x8UL -#define DX_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SIZE 0x2UL -#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SHIFT 0xAUL -#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SIZE 0x4UL -#define DX_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SHIFT 0xEUL -#define DX_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SHIFT 0xFUL -#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SIZE 0x2UL -#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SHIFT 0x11UL -#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SIZE 0x2UL -#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SHIFT 0x13UL -#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SHIFT 0x14UL -#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SIZE 0x2UL -#define DX_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SHIFT 0x16UL -#define DX_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SIZE 0x2UL -#define DX_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SHIFT 0x18UL -#define DX_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SIZE 0x4UL -#define DX_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SHIFT 0x1CUL -#define DX_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SHIFT 0x1DUL -#define DX_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SHIFT 0x1EUL -#define DX_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SHIFT 0x1FUL -#define DX_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SIZE 0x1UL -#define DX_DSCRPTR_QUEUE_WORD5_REG_OFFSET 0xE94UL -#define DX_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SIZE 0x10UL -#define DX_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SHIFT 0x10UL -#define DX_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SIZE 0x10UL -#define DX_DSCRPTR_QUEUE_WATERMARK_REG_OFFSET 0xE98UL -#define DX_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SIZE 0xAUL -#define DX_DSCRPTR_QUEUE_CONTENT_REG_OFFSET 0xE9CUL -#define DX_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SHIFT 0x0UL -#define DX_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SIZE 0xAUL +#define DX_DSCRPTR_COMPLETION_COUNTER_REG_OFFSET 0xE00UL +#define DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SIZE 0x6UL +#define DX_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SHIFT 0x6UL +#define DX_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SIZE 0x1UL +#define DX_DSCRPTR_SW_RESET_REG_OFFSET 0xE40UL +#define DX_DSCRPTR_SW_RESET_VALUE_BIT_SHIFT 0x0UL +#define 
DX_DSCRPTR_SW_RESET_VALUE_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_SRAM_SIZE_REG_OFFSET 0xE60UL +#define DX_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SIZE 0xAUL +#define DX_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SHIFT 0xAUL +#define DX_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SIZE 0xCUL +#define DX_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SHIFT 0x16UL +#define DX_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SIZE 0x3UL +#define DX_DSCRPTR_SINGLE_ADDR_EN_REG_OFFSET 0xE64UL +#define DX_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SIZE 0x1UL +#define DX_DSCRPTR_MEASURE_CNTR_REG_OFFSET 0xE68UL +#define DX_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SIZE 0x20UL +#define DX_DSCRPTR_QUEUE_WORD0_REG_OFFSET 0xE80UL +#define DX_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SIZE 0x20UL +#define DX_DSCRPTR_QUEUE_WORD1_REG_OFFSET 0xE84UL +#define DX_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SIZE 0x2UL +#define DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SHIFT 0x2UL +#define DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SIZE 0x18UL +#define DX_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SHIFT 0x1AUL +#define DX_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SHIFT 0x1BUL +#define DX_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SHIFT 0x1CUL +#define DX_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SHIFT 0x1DUL +#define DX_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SHIFT 0x1EUL +#define DX_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SIZE 0x2UL +#define DX_DSCRPTR_QUEUE_WORD2_REG_OFFSET 0xE88UL +#define DX_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SIZE 0x20UL +#define DX_DSCRPTR_QUEUE_WORD3_REG_OFFSET 0xE8CUL +#define DX_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SIZE 0x2UL +#define DX_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SHIFT 0x2UL +#define DX_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SIZE 0x18UL +#define DX_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SHIFT 0x1AUL +#define DX_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SHIFT 0x1BUL +#define DX_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SHIFT 0x1DUL +#define DX_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SHIFT 0x1EUL +#define DX_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SHIFT 0x1FUL +#define DX_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD4_REG_OFFSET 0xE90UL +#define DX_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SIZE 0x6UL +#define DX_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SHIFT 0x6UL +#define DX_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SHIFT 0x7UL +#define DX_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SHIFT 0x8UL +#define DX_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SIZE 0x2UL +#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SHIFT 0xAUL +#define 
DX_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SIZE 0x4UL +#define DX_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SHIFT 0xEUL +#define DX_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SHIFT 0xFUL +#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SIZE 0x2UL +#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SHIFT 0x11UL +#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SIZE 0x2UL +#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SHIFT 0x13UL +#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SHIFT 0x14UL +#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SIZE 0x2UL +#define DX_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SHIFT 0x16UL +#define DX_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SIZE 0x2UL +#define DX_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SHIFT 0x18UL +#define DX_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SIZE 0x4UL +#define DX_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SHIFT 0x1CUL +#define DX_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SHIFT 0x1DUL +#define DX_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SHIFT 0x1EUL +#define DX_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SHIFT 0x1FUL +#define DX_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SIZE 0x1UL +#define DX_DSCRPTR_QUEUE_WORD5_REG_OFFSET 0xE94UL +#define DX_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SIZE 0x10UL +#define DX_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SHIFT 0x10UL +#define DX_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SIZE 0x10UL +#define DX_DSCRPTR_QUEUE_WATERMARK_REG_OFFSET 0xE98UL +#define DX_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SIZE 0xAUL +#define DX_DSCRPTR_QUEUE_CONTENT_REG_OFFSET 0xE9CUL +#define DX_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SHIFT 0x0UL +#define DX_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SIZE 0xAUL // -------------------------------------- // BLOCK: AXI_P // -------------------------------------- -#define DX_AXIM_MON_INFLIGHT_REG_OFFSET 0xB00UL -#define DX_AXIM_MON_INFLIGHT_VALUE_BIT_SHIFT 0x0UL -#define DX_AXIM_MON_INFLIGHT_VALUE_BIT_SIZE 0x8UL -#define DX_AXIM_MON_INFLIGHTLAST_REG_OFFSET 0xB40UL -#define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SHIFT 0x0UL -#define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SIZE 0x8UL -#define DX_AXIM_MON_COMP_REG_OFFSET 0xB80UL -#define DX_AXIM_MON_COMP_VALUE_BIT_SHIFT 0x0UL -#define DX_AXIM_MON_COMP_VALUE_BIT_SIZE 0x10UL -#define DX_AXIM_MON_ERR_REG_OFFSET 0xBC4UL -#define DX_AXIM_MON_ERR_BRESP_BIT_SHIFT 0x0UL -#define DX_AXIM_MON_ERR_BRESP_BIT_SIZE 0x2UL -#define DX_AXIM_MON_ERR_BID_BIT_SHIFT 0x2UL -#define DX_AXIM_MON_ERR_BID_BIT_SIZE 0x4UL -#define DX_AXIM_MON_ERR_RRESP_BIT_SHIFT 0x10UL -#define DX_AXIM_MON_ERR_RRESP_BIT_SIZE 0x2UL -#define DX_AXIM_MON_ERR_RID_BIT_SHIFT 0x12UL -#define DX_AXIM_MON_ERR_RID_BIT_SIZE 0x4UL -#define DX_AXIM_CFG_REG_OFFSET 0xBE8UL -#define DX_AXIM_CFG_BRESPMASK_BIT_SHIFT 0x4UL -#define DX_AXIM_CFG_BRESPMASK_BIT_SIZE 0x1UL -#define DX_AXIM_CFG_RRESPMASK_BIT_SHIFT 0x5UL -#define DX_AXIM_CFG_RRESPMASK_BIT_SIZE 0x1UL -#define DX_AXIM_CFG_INFLTMASK_BIT_SHIFT 0x6UL -#define DX_AXIM_CFG_INFLTMASK_BIT_SIZE 0x1UL -#define DX_AXIM_CFG_COMPMASK_BIT_SHIFT 0x7UL -#define DX_AXIM_CFG_COMPMASK_BIT_SIZE 0x1UL -#define DX_AXIM_ACE_CONST_REG_OFFSET 0xBECUL -#define DX_AXIM_ACE_CONST_ARDOMAIN_BIT_SHIFT 0x0UL -#define DX_AXIM_ACE_CONST_ARDOMAIN_BIT_SIZE 
0x2UL -#define DX_AXIM_ACE_CONST_AWDOMAIN_BIT_SHIFT 0x2UL -#define DX_AXIM_ACE_CONST_AWDOMAIN_BIT_SIZE 0x2UL -#define DX_AXIM_ACE_CONST_ARBAR_BIT_SHIFT 0x4UL -#define DX_AXIM_ACE_CONST_ARBAR_BIT_SIZE 0x2UL -#define DX_AXIM_ACE_CONST_AWBAR_BIT_SHIFT 0x6UL -#define DX_AXIM_ACE_CONST_AWBAR_BIT_SIZE 0x2UL -#define DX_AXIM_ACE_CONST_ARSNOOP_BIT_SHIFT 0x8UL -#define DX_AXIM_ACE_CONST_ARSNOOP_BIT_SIZE 0x4UL -#define DX_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SHIFT 0xCUL -#define DX_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SIZE 0x3UL -#define DX_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SHIFT 0xFUL -#define DX_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SIZE 0x3UL -#define DX_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SHIFT 0x12UL -#define DX_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SIZE 0x7UL -#define DX_AXIM_ACE_CONST_AWLEN_VAL_BIT_SHIFT 0x19UL -#define DX_AXIM_ACE_CONST_AWLEN_VAL_BIT_SIZE 0x4UL -#define DX_AXIM_CACHE_PARAMS_REG_OFFSET 0xBF0UL -#define DX_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SHIFT 0x0UL -#define DX_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SIZE 0x4UL -#define DX_AXIM_CACHE_PARAMS_AWCACHE_BIT_SHIFT 0x4UL -#define DX_AXIM_CACHE_PARAMS_AWCACHE_BIT_SIZE 0x4UL -#define DX_AXIM_CACHE_PARAMS_ARCACHE_BIT_SHIFT 0x8UL -#define DX_AXIM_CACHE_PARAMS_ARCACHE_BIT_SIZE 0x4UL +#define DX_AXIM_MON_INFLIGHT_REG_OFFSET 0xB00UL +#define DX_AXIM_MON_INFLIGHT_VALUE_BIT_SHIFT 0x0UL +#define DX_AXIM_MON_INFLIGHT_VALUE_BIT_SIZE 0x8UL +#define DX_AXIM_MON_INFLIGHTLAST_REG_OFFSET 0xB40UL +#define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SHIFT 0x0UL +#define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SIZE 0x8UL +#define DX_AXIM_MON_COMP_REG_OFFSET 0xB80UL +#define DX_AXIM_MON_COMP_VALUE_BIT_SHIFT 0x0UL +#define DX_AXIM_MON_COMP_VALUE_BIT_SIZE 0x10UL +#define DX_AXIM_MON_ERR_REG_OFFSET 0xBC4UL +#define DX_AXIM_MON_ERR_BRESP_BIT_SHIFT 0x0UL +#define DX_AXIM_MON_ERR_BRESP_BIT_SIZE 0x2UL +#define DX_AXIM_MON_ERR_BID_BIT_SHIFT 0x2UL +#define DX_AXIM_MON_ERR_BID_BIT_SIZE 0x4UL +#define DX_AXIM_MON_ERR_RRESP_BIT_SHIFT 0x10UL +#define DX_AXIM_MON_ERR_RRESP_BIT_SIZE 0x2UL +#define DX_AXIM_MON_ERR_RID_BIT_SHIFT 0x12UL +#define DX_AXIM_MON_ERR_RID_BIT_SIZE 0x4UL +#define DX_AXIM_CFG_REG_OFFSET 0xBE8UL +#define DX_AXIM_CFG_BRESPMASK_BIT_SHIFT 0x4UL +#define DX_AXIM_CFG_BRESPMASK_BIT_SIZE 0x1UL +#define DX_AXIM_CFG_RRESPMASK_BIT_SHIFT 0x5UL +#define DX_AXIM_CFG_RRESPMASK_BIT_SIZE 0x1UL +#define DX_AXIM_CFG_INFLTMASK_BIT_SHIFT 0x6UL +#define DX_AXIM_CFG_INFLTMASK_BIT_SIZE 0x1UL +#define DX_AXIM_CFG_COMPMASK_BIT_SHIFT 0x7UL +#define DX_AXIM_CFG_COMPMASK_BIT_SIZE 0x1UL +#define DX_AXIM_ACE_CONST_REG_OFFSET 0xBECUL +#define DX_AXIM_ACE_CONST_ARDOMAIN_BIT_SHIFT 0x0UL +#define DX_AXIM_ACE_CONST_ARDOMAIN_BIT_SIZE 0x2UL +#define DX_AXIM_ACE_CONST_AWDOMAIN_BIT_SHIFT 0x2UL +#define DX_AXIM_ACE_CONST_AWDOMAIN_BIT_SIZE 0x2UL +#define DX_AXIM_ACE_CONST_ARBAR_BIT_SHIFT 0x4UL +#define DX_AXIM_ACE_CONST_ARBAR_BIT_SIZE 0x2UL +#define DX_AXIM_ACE_CONST_AWBAR_BIT_SHIFT 0x6UL +#define DX_AXIM_ACE_CONST_AWBAR_BIT_SIZE 0x2UL +#define DX_AXIM_ACE_CONST_ARSNOOP_BIT_SHIFT 0x8UL +#define DX_AXIM_ACE_CONST_ARSNOOP_BIT_SIZE 0x4UL +#define DX_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SHIFT 0xCUL +#define DX_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SIZE 0x3UL +#define DX_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SHIFT 0xFUL +#define DX_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SIZE 0x3UL +#define DX_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SHIFT 0x12UL +#define DX_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SIZE 0x7UL +#define DX_AXIM_ACE_CONST_AWLEN_VAL_BIT_SHIFT 0x19UL +#define DX_AXIM_ACE_CONST_AWLEN_VAL_BIT_SIZE 0x4UL 
+#define DX_AXIM_CACHE_PARAMS_REG_OFFSET 0xBF0UL +#define DX_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SHIFT 0x0UL +#define DX_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SIZE 0x4UL +#define DX_AXIM_CACHE_PARAMS_AWCACHE_BIT_SHIFT 0x4UL +#define DX_AXIM_CACHE_PARAMS_AWCACHE_BIT_SIZE 0x4UL +#define DX_AXIM_CACHE_PARAMS_ARCACHE_BIT_SHIFT 0x8UL +#define DX_AXIM_CACHE_PARAMS_ARCACHE_BIT_SIZE 0x4UL #endif // __DX_CRYS_KERNEL_H__ diff --git a/drivers/staging/ccree/dx_host.h b/drivers/staging/ccree/dx_host.h index 3e75dc4..863c267 100644 --- a/drivers/staging/ccree/dx_host.h +++ b/drivers/staging/ccree/dx_host.h @@ -20,136 +20,136 @@ // -------------------------------------- // BLOCK: HOST_P // -------------------------------------- -#define DX_HOST_IRR_REG_OFFSET 0xA00UL -#define DX_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SHIFT 0x2UL -#define DX_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SIZE 0x1UL -#define DX_HOST_IRR_AXI_ERR_INT_BIT_SHIFT 0x8UL -#define DX_HOST_IRR_AXI_ERR_INT_BIT_SIZE 0x1UL -#define DX_HOST_IRR_GPR0_BIT_SHIFT 0xBUL -#define DX_HOST_IRR_GPR0_BIT_SIZE 0x1UL -#define DX_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SHIFT 0x13UL -#define DX_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SIZE 0x1UL -#define DX_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT 0x17UL -#define DX_HOST_IRR_AXIM_COMP_INT_BIT_SIZE 0x1UL -#define DX_HOST_IMR_REG_OFFSET 0xA04UL -#define DX_HOST_IMR_NOT_USED_MASK_BIT_SHIFT 0x1UL -#define DX_HOST_IMR_NOT_USED_MASK_BIT_SIZE 0x1UL -#define DX_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SHIFT 0x2UL -#define DX_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SIZE 0x1UL -#define DX_HOST_IMR_AXI_ERR_MASK_BIT_SHIFT 0x8UL -#define DX_HOST_IMR_AXI_ERR_MASK_BIT_SIZE 0x1UL -#define DX_HOST_IMR_GPR0_BIT_SHIFT 0xBUL -#define DX_HOST_IMR_GPR0_BIT_SIZE 0x1UL -#define DX_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SHIFT 0x13UL -#define DX_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SIZE 0x1UL -#define DX_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SHIFT 0x17UL -#define DX_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SIZE 0x1UL -#define DX_HOST_ICR_REG_OFFSET 0xA08UL -#define DX_HOST_ICR_DSCRPTR_COMPLETION_BIT_SHIFT 0x2UL -#define DX_HOST_ICR_DSCRPTR_COMPLETION_BIT_SIZE 0x1UL -#define DX_HOST_ICR_AXI_ERR_CLEAR_BIT_SHIFT 0x8UL -#define DX_HOST_ICR_AXI_ERR_CLEAR_BIT_SIZE 0x1UL -#define DX_HOST_ICR_GPR_INT_CLEAR_BIT_SHIFT 0xBUL -#define DX_HOST_ICR_GPR_INT_CLEAR_BIT_SIZE 0x1UL -#define DX_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SHIFT 0x13UL -#define DX_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SIZE 0x1UL -#define DX_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SHIFT 0x17UL -#define DX_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SIZE 0x1UL -#define DX_HOST_SIGNATURE_REG_OFFSET 0xA24UL -#define DX_HOST_SIGNATURE_VALUE_BIT_SHIFT 0x0UL -#define DX_HOST_SIGNATURE_VALUE_BIT_SIZE 0x20UL -#define DX_HOST_BOOT_REG_OFFSET 0xA28UL -#define DX_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SHIFT 0x0UL -#define DX_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SHIFT 0x1UL -#define DX_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SHIFT 0x2UL -#define DX_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SHIFT 0x3UL -#define DX_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SHIFT 0x5UL -#define DX_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SHIFT 0x6UL -#define DX_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SIZE 0x3UL -#define DX_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SHIFT 0x9UL -#define DX_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SIZE 0x1UL 
-#define DX_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SHIFT 0xAUL -#define DX_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SHIFT 0xBUL -#define DX_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SHIFT 0xCUL -#define DX_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SHIFT 0xDUL -#define DX_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SHIFT 0xEUL -#define DX_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SHIFT 0xFUL -#define DX_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SHIFT 0x10UL -#define DX_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SHIFT 0x11UL -#define DX_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SHIFT 0x12UL -#define DX_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SHIFT 0x13UL -#define DX_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SHIFT 0x14UL -#define DX_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SHIFT 0x15UL -#define DX_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SHIFT 0x16UL -#define DX_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SHIFT 0x17UL -#define DX_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SHIFT 0x18UL -#define DX_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SHIFT 0x19UL -#define DX_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SHIFT 0x1AUL -#define DX_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SHIFT 0x1BUL -#define DX_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SHIFT 0x1CUL -#define DX_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SHIFT 0x1DUL -#define DX_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SHIFT 0x1EUL -#define DX_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SIZE 0x1UL -#define DX_HOST_VERSION_REG_OFFSET 0xA40UL -#define DX_HOST_VERSION_VALUE_BIT_SHIFT 0x0UL -#define DX_HOST_VERSION_VALUE_BIT_SIZE 0x20UL -#define DX_HOST_KFDE0_VALID_REG_OFFSET 0xA60UL -#define DX_HOST_KFDE0_VALID_VALUE_BIT_SHIFT 0x0UL -#define DX_HOST_KFDE0_VALID_VALUE_BIT_SIZE 0x1UL -#define DX_HOST_KFDE1_VALID_REG_OFFSET 0xA64UL -#define DX_HOST_KFDE1_VALID_VALUE_BIT_SHIFT 0x0UL -#define DX_HOST_KFDE1_VALID_VALUE_BIT_SIZE 0x1UL -#define DX_HOST_KFDE2_VALID_REG_OFFSET 0xA68UL -#define DX_HOST_KFDE2_VALID_VALUE_BIT_SHIFT 0x0UL -#define DX_HOST_KFDE2_VALID_VALUE_BIT_SIZE 0x1UL -#define DX_HOST_KFDE3_VALID_REG_OFFSET 0xA6CUL -#define DX_HOST_KFDE3_VALID_VALUE_BIT_SHIFT 0x0UL -#define DX_HOST_KFDE3_VALID_VALUE_BIT_SIZE 0x1UL -#define DX_HOST_GPR0_REG_OFFSET 0xA70UL -#define DX_HOST_GPR0_VALUE_BIT_SHIFT 0x0UL -#define DX_HOST_GPR0_VALUE_BIT_SIZE 0x20UL -#define DX_GPR_HOST_REG_OFFSET 0xA74UL -#define DX_GPR_HOST_VALUE_BIT_SHIFT 0x0UL -#define DX_GPR_HOST_VALUE_BIT_SIZE 0x20UL -#define DX_HOST_POWER_DOWN_EN_REG_OFFSET 0xA78UL -#define DX_HOST_POWER_DOWN_EN_VALUE_BIT_SHIFT 0x0UL -#define 
DX_HOST_POWER_DOWN_EN_VALUE_BIT_SIZE 0x1UL +#define DX_HOST_IRR_REG_OFFSET 0xA00UL +#define DX_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SHIFT 0x2UL +#define DX_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SIZE 0x1UL +#define DX_HOST_IRR_AXI_ERR_INT_BIT_SHIFT 0x8UL +#define DX_HOST_IRR_AXI_ERR_INT_BIT_SIZE 0x1UL +#define DX_HOST_IRR_GPR0_BIT_SHIFT 0xBUL +#define DX_HOST_IRR_GPR0_BIT_SIZE 0x1UL +#define DX_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SHIFT 0x13UL +#define DX_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SIZE 0x1UL +#define DX_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT 0x17UL +#define DX_HOST_IRR_AXIM_COMP_INT_BIT_SIZE 0x1UL +#define DX_HOST_IMR_REG_OFFSET 0xA04UL +#define DX_HOST_IMR_NOT_USED_MASK_BIT_SHIFT 0x1UL +#define DX_HOST_IMR_NOT_USED_MASK_BIT_SIZE 0x1UL +#define DX_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SHIFT 0x2UL +#define DX_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SIZE 0x1UL +#define DX_HOST_IMR_AXI_ERR_MASK_BIT_SHIFT 0x8UL +#define DX_HOST_IMR_AXI_ERR_MASK_BIT_SIZE 0x1UL +#define DX_HOST_IMR_GPR0_BIT_SHIFT 0xBUL +#define DX_HOST_IMR_GPR0_BIT_SIZE 0x1UL +#define DX_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SHIFT 0x13UL +#define DX_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SIZE 0x1UL +#define DX_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SHIFT 0x17UL +#define DX_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SIZE 0x1UL +#define DX_HOST_ICR_REG_OFFSET 0xA08UL +#define DX_HOST_ICR_DSCRPTR_COMPLETION_BIT_SHIFT 0x2UL +#define DX_HOST_ICR_DSCRPTR_COMPLETION_BIT_SIZE 0x1UL +#define DX_HOST_ICR_AXI_ERR_CLEAR_BIT_SHIFT 0x8UL +#define DX_HOST_ICR_AXI_ERR_CLEAR_BIT_SIZE 0x1UL +#define DX_HOST_ICR_GPR_INT_CLEAR_BIT_SHIFT 0xBUL +#define DX_HOST_ICR_GPR_INT_CLEAR_BIT_SIZE 0x1UL +#define DX_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SHIFT 0x13UL +#define DX_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SIZE 0x1UL +#define DX_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SHIFT 0x17UL +#define DX_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SIZE 0x1UL +#define DX_HOST_SIGNATURE_REG_OFFSET 0xA24UL +#define DX_HOST_SIGNATURE_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_SIGNATURE_VALUE_BIT_SIZE 0x20UL +#define DX_HOST_BOOT_REG_OFFSET 0xA28UL +#define DX_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SHIFT 0x0UL +#define DX_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SHIFT 0x1UL +#define DX_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SHIFT 0x2UL +#define DX_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SHIFT 0x3UL +#define DX_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SHIFT 0x5UL +#define DX_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SHIFT 0x6UL +#define DX_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SIZE 0x3UL +#define DX_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SHIFT 0x9UL +#define DX_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SHIFT 0xAUL +#define DX_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SHIFT 0xBUL +#define DX_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SHIFT 0xCUL +#define DX_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SHIFT 0xDUL +#define DX_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SHIFT 0xEUL +#define DX_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SHIFT 0xFUL +#define DX_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SIZE 0x1UL +#define 
DX_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SHIFT 0x10UL +#define DX_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SHIFT 0x11UL +#define DX_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SHIFT 0x12UL +#define DX_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SHIFT 0x13UL +#define DX_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SHIFT 0x14UL +#define DX_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SHIFT 0x15UL +#define DX_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SHIFT 0x16UL +#define DX_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SHIFT 0x17UL +#define DX_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SHIFT 0x18UL +#define DX_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SHIFT 0x19UL +#define DX_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SHIFT 0x1AUL +#define DX_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SHIFT 0x1BUL +#define DX_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SHIFT 0x1CUL +#define DX_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SHIFT 0x1DUL +#define DX_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SHIFT 0x1EUL +#define DX_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_VERSION_REG_OFFSET 0xA40UL +#define DX_HOST_VERSION_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_VERSION_VALUE_BIT_SIZE 0x20UL +#define DX_HOST_KFDE0_VALID_REG_OFFSET 0xA60UL +#define DX_HOST_KFDE0_VALID_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_KFDE0_VALID_VALUE_BIT_SIZE 0x1UL +#define DX_HOST_KFDE1_VALID_REG_OFFSET 0xA64UL +#define DX_HOST_KFDE1_VALID_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_KFDE1_VALID_VALUE_BIT_SIZE 0x1UL +#define DX_HOST_KFDE2_VALID_REG_OFFSET 0xA68UL +#define DX_HOST_KFDE2_VALID_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_KFDE2_VALID_VALUE_BIT_SIZE 0x1UL +#define DX_HOST_KFDE3_VALID_REG_OFFSET 0xA6CUL +#define DX_HOST_KFDE3_VALID_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_KFDE3_VALID_VALUE_BIT_SIZE 0x1UL +#define DX_HOST_GPR0_REG_OFFSET 0xA70UL +#define DX_HOST_GPR0_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_GPR0_VALUE_BIT_SIZE 0x20UL +#define DX_GPR_HOST_REG_OFFSET 0xA74UL +#define DX_GPR_HOST_VALUE_BIT_SHIFT 0x0UL +#define DX_GPR_HOST_VALUE_BIT_SIZE 0x20UL +#define DX_HOST_POWER_DOWN_EN_REG_OFFSET 0xA78UL +#define DX_HOST_POWER_DOWN_EN_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_POWER_DOWN_EN_VALUE_BIT_SIZE 0x1UL // -------------------------------------- // BLOCK: HOST_SRAM // -------------------------------------- -#define DX_SRAM_DATA_REG_OFFSET 0xF00UL -#define DX_SRAM_DATA_VALUE_BIT_SHIFT 0x0UL -#define DX_SRAM_DATA_VALUE_BIT_SIZE 0x20UL -#define DX_SRAM_ADDR_REG_OFFSET 0xF04UL -#define DX_SRAM_ADDR_VALUE_BIT_SHIFT 0x0UL -#define DX_SRAM_ADDR_VALUE_BIT_SIZE 0xFUL -#define DX_SRAM_DATA_READY_REG_OFFSET 0xF08UL -#define DX_SRAM_DATA_READY_VALUE_BIT_SHIFT 0x0UL -#define DX_SRAM_DATA_READY_VALUE_BIT_SIZE 0x1UL +#define DX_SRAM_DATA_REG_OFFSET 0xF00UL +#define DX_SRAM_DATA_VALUE_BIT_SHIFT 0x0UL +#define DX_SRAM_DATA_VALUE_BIT_SIZE 0x20UL +#define 
DX_SRAM_ADDR_REG_OFFSET 0xF04UL +#define DX_SRAM_ADDR_VALUE_BIT_SHIFT 0x0UL +#define DX_SRAM_ADDR_VALUE_BIT_SIZE 0xFUL +#define DX_SRAM_DATA_READY_REG_OFFSET 0xF08UL +#define DX_SRAM_DATA_READY_VALUE_BIT_SHIFT 0x0UL +#define DX_SRAM_DATA_READY_VALUE_BIT_SIZE 0x1UL #endif //__DX_HOST_H__ From patchwork Sun Jun 4 08:02:37 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 101340 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp437813qgd; Sun, 4 Jun 2017 01:05:06 -0700 (PDT) X-Received: by 10.84.131.229 with SMTP id d92mr8625418pld.16.1496563506222; Sun, 04 Jun 2017 01:05:06 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1496563506; cv=none; d=google.com; s=arc-20160816; b=US/0Uy7dUd7Sx9VeqwhBpSW0LzOV0cx0f1E0NPhnujLfFPwYdEScRyCbc+M2iMzcol Nldq6O2PUpHef5v15y19zt73Jt5myoL1u6IQXOmacRhgClr9Rx0QFVIRFmkm4+nQeYXu lbH2BYaPxKWr3wNsBGwA6BUrB3YD4eepSUR1lYhsPsDnhw/yPJKtOPuI43tI7Q4TpUkT WMs7GtOu4nFBYdFtSgmXWgh/+UEWE2K6+spr58JgTBvennvP2Y24uXkAU3NIzfXmTLum KpQ4MfXmHU4SpKEnfPyZh5dY0NDlpjqnXZzlTklY0WFoC6BtpEKp9I6gYc9EKjWiOlrA KkAA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=om7ZRxxcmbGm1//SRYvrMBekAvnADq0m2Qchqb/WBuQ=; b=J29zuuqqz2XucqT0LQVndo6U4GGEC+Q4tfziCfs5S5eouQOcyJ7ArspnMj1Fn2T0JR Acz8m0d1AAP88JBJQqnm9qU3V6CAfljsQNV0pW64cuEefSERXU0vLax5iI22FiYyW1K2 6rPpxkZLmRFQKlXpcU1JIv9eae4a8NEVxWmdXW15OfVXEKGMQsEIdNk01Y5o2YEo+hqX boKGz8el8A3s/wkna/qzqDLEGIZmLBCAyLrOQX1w8VEMpKT8dQS1ypVLpjCAcCMEGY2z /LLamhII/q4J+nGkHs2uYWJBH12r3+B6eI1rhvqLQaSMazqg+UkXatlYH9KyViFfK+7a +ZYA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id p61si4117021plb.5.2017.06.04.01.05.06; Sun, 04 Jun 2017 01:05:06 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751064AbdFDIFD (ORCPT + 1 other); Sun, 4 Jun 2017 04:05:03 -0400 Received: from foss.arm.com ([217.140.101.70]:52842 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751139AbdFDIEa (ORCPT ); Sun, 4 Jun 2017 04:04:30 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7634880D; Sun, 4 Jun 2017 01:04:20 -0700 (PDT) Received: from localhost.localdomain (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7A3B93F589; Sun, 4 Jun 2017 01:04:18 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman Cc: Joe Perches , Ofir Drang , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Subject: [PATCH v3 16/18] staging: ccree: remove last remnants of shash algo Date: Sun, 4 Jun 2017 11:02:37 +0300 Message-Id: <1496563362-7954-17-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1496563362-7954-1-git-send-email-gilad@benyossef.com> References: <1496563362-7954-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The hash code had some leftovers from a misguided attempt to support the shash API with the HW. Remove the code handling this.
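For context, after this patch the driver registers its hashes only through the asynchronous ahash interface. The sketch below shows how a generic caller drives such a transform through the standard kernel crypto API of that era; it uses only stock API calls (crypto_alloc_ahash() and friends), is not code from this patch, and the demo_* names are made up for illustration.

#include <crypto/hash.h>
#include <linux/completion.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

struct demo_wait {
	struct completion done;
	int err;
};

/* Completion callback used when the transform runs asynchronously. */
static void demo_ahash_done(struct crypto_async_request *req, int err)
{
	struct demo_wait *w = req->data;

	if (err == -EINPROGRESS)
		return;		/* request was only backlogged, keep waiting */
	w->err = err;
	complete(&w->done);
}

/* One-shot SHA-256 digest of a linear buffer through the ahash API. */
static int demo_ahash_sha256(const u8 *buf, unsigned int len, u8 *out)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	struct demo_wait w;
	int rc;

	tfm = crypto_alloc_ahash("sha256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		crypto_free_ahash(tfm);
		return -ENOMEM;
	}

	init_completion(&w.done);
	sg_init_one(&sg, buf, len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   demo_ahash_done, &w);
	ahash_request_set_crypt(req, &sg, out, len);

	rc = crypto_ahash_digest(req);
	if (rc == -EINPROGRESS || rc == -EBUSY) {
		wait_for_completion(&w.done);
		rc = w.err;
	}

	ahash_request_free(req);
	crypto_free_ahash(tfm);
	return rc;
}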
Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/ssi_hash.c | 448 +++++++++++---------------------------- 1 file changed, 127 insertions(+), 321 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c index 47bc496..ed1c672 100644 --- a/drivers/staging/ccree/ssi_hash.c +++ b/drivers/staging/ccree/ssi_hash.c @@ -76,15 +76,11 @@ static void ssi_hash_create_cmac_setup(struct ahash_request *areq, struct ssi_hash_alg { struct list_head entry; - bool synchronize; int hash_mode; int hw_mode; int inter_digestsize; struct ssi_drvdata *drvdata; - union { - struct ahash_alg ahash_alg; - struct shash_alg shash_alg; - }; + struct ahash_alg ahash_alg; }; @@ -112,8 +108,6 @@ struct ssi_hash_ctx { bool is_hmac; }; -static const struct crypto_type crypto_shash_type; - static void ssi_hash_create_data_desc( struct ahash_req_ctx *areq_ctx, struct ssi_hash_ctx *ctx, @@ -1015,15 +1009,9 @@ static int ssi_hash_setkey(void *hash, SSI_LOG_DEBUG("ssi_hash_setkey: start keylen: %d", keylen); CHECK_AND_RETURN_UPON_FIPS_ERROR(); - if (synchronize) { - ctx = crypto_shash_ctx(((struct crypto_shash *)hash)); - blocksize = crypto_tfm_alg_blocksize(&((struct crypto_shash *)hash)->base); - digestsize = crypto_shash_digestsize(((struct crypto_shash *)hash)); - } else { - ctx = crypto_ahash_ctx(((struct crypto_ahash *)hash)); - blocksize = crypto_tfm_alg_blocksize(&((struct crypto_ahash *)hash)->base); - digestsize = crypto_ahash_digestsize(((struct crypto_ahash *)hash)); - } + ctx = crypto_ahash_ctx(((struct crypto_ahash *)hash)); + blocksize = crypto_tfm_alg_blocksize(&((struct crypto_ahash *)hash)->base); + digestsize = crypto_ahash_digestsize(((struct crypto_ahash *)hash)); larval_addr = ssi_ahash_get_larval_digest_sram_addr( ctx->drvdata, ctx->hash_mode); @@ -1184,13 +1172,8 @@ static int ssi_hash_setkey(void *hash, rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0); out: - if (rc != 0) { - if (synchronize) { - crypto_shash_set_flags((struct crypto_shash *)hash, CRYPTO_TFM_RES_BAD_KEY_LEN); - } else { - crypto_ahash_set_flags((struct crypto_ahash *)hash, CRYPTO_TFM_RES_BAD_KEY_LEN); - } - } + if (rc) + crypto_ahash_set_flags((struct crypto_ahash *)hash, CRYPTO_TFM_RES_BAD_KEY_LEN); if (ctx->key_params.key_dma_addr) { dma_unmap_single(&ctx->drvdata->plat_dev->dev, @@ -1395,23 +1378,6 @@ static int ssi_hash_alloc_ctx(struct ssi_hash_ctx *ctx) return -ENOMEM; } -static int ssi_shash_cra_init(struct crypto_tfm *tfm) -{ - struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm); - struct shash_alg * shash_alg = - container_of(tfm->__crt_alg, struct shash_alg, base); - struct ssi_hash_alg *ssi_alg = - container_of(shash_alg, struct ssi_hash_alg, shash_alg); - - CHECK_AND_RETURN_UPON_FIPS_ERROR(); - ctx->hash_mode = ssi_alg->hash_mode; - ctx->hw_mode = ssi_alg->hw_mode; - ctx->inter_digestsize = ssi_alg->inter_digestsize; - ctx->drvdata = ssi_alg->drvdata; - - return ssi_hash_alloc_ctx(ctx); -} - static int ssi_ahash_cra_init(struct crypto_tfm *tfm) { struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm); @@ -1764,100 +1730,6 @@ static int ssi_mac_digest(struct ahash_request *req) return rc; } -//shash wrap functions -#ifdef SYNC_ALGS -static int ssi_shash_digest(struct shash_desc *desc, - const u8 *data, unsigned int len, u8 *out) -{ - struct ahash_req_ctx *state = shash_desc_ctx(desc); - struct crypto_shash *tfm = desc->tfm; - struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); - u32 digestsize = crypto_shash_digestsize(tfm); - struct scatterlist src; - - if (len == 0) { - return 
ssi_hash_digest(state, ctx, digestsize, NULL, 0, out, NULL); - } - - /* sg_init_one may crash when len is 0 (depends on kernel configuration) */ - sg_init_one(&src, (const void *)data, len); - - return ssi_hash_digest(state, ctx, digestsize, &src, len, out, NULL); -} - -static int ssi_shash_update(struct shash_desc *desc, - const u8 *data, unsigned int len) -{ - struct ahash_req_ctx *state = shash_desc_ctx(desc); - struct crypto_shash *tfm = desc->tfm; - struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); - u32 blocksize = crypto_tfm_alg_blocksize(&tfm->base); - struct scatterlist src; - - sg_init_one(&src, (const void *)data, len); - - return ssi_hash_update(state, ctx, blocksize, &src, len, NULL); -} - -static int ssi_shash_finup(struct shash_desc *desc, - const u8 *data, unsigned int len, u8 *out) -{ - struct ahash_req_ctx *state = shash_desc_ctx(desc); - struct crypto_shash *tfm = desc->tfm; - struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); - u32 digestsize = crypto_shash_digestsize(tfm); - struct scatterlist src; - - sg_init_one(&src, (const void *)data, len); - - return ssi_hash_finup(state, ctx, digestsize, &src, len, out, NULL); -} - -static int ssi_shash_final(struct shash_desc *desc, u8 *out) -{ - struct ahash_req_ctx *state = shash_desc_ctx(desc); - struct crypto_shash *tfm = desc->tfm; - struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); - u32 digestsize = crypto_shash_digestsize(tfm); - - return ssi_hash_final(state, ctx, digestsize, NULL, 0, out, NULL); -} - -static int ssi_shash_init(struct shash_desc *desc) -{ - struct ahash_req_ctx *state = shash_desc_ctx(desc); - struct crypto_shash *tfm = desc->tfm; - struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); - - return ssi_hash_init(state, ctx); -} - -#ifdef EXPORT_FIXED -static int ssi_shash_export(struct shash_desc *desc, void *out) -{ - struct crypto_shash *tfm = desc->tfm; - struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); - - return ssi_hash_export(ctx, out); -} - -static int ssi_shash_import(struct shash_desc *desc, const void *in) -{ - struct crypto_shash *tfm = desc->tfm; - struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); - - return ssi_hash_import(ctx, in); -} -#endif - -static int ssi_shash_setkey(struct crypto_shash *tfm, - const u8 *key, unsigned int keylen) -{ - return ssi_hash_setkey((void *) tfm, key, keylen, true); -} - -#endif /* SYNC_ALGS */ - //ahash wrap functions static int ssi_ahash_digest(struct ahash_request *req) { @@ -1941,10 +1813,7 @@ struct ssi_hash_template { char hmac_driver_name[CRYPTO_MAX_ALG_NAME]; unsigned int blocksize; bool synchronize; - union { - struct ahash_alg template_ahash; - struct shash_alg template_shash; - }; + struct ahash_alg template_ahash; int hash_mode; int hw_mode; int inter_digestsize; @@ -1961,22 +1830,20 @@ static struct ssi_hash_template driver_hash[] = { .hmac_driver_name = "hmac-sha1-dx", .blocksize = SHA1_BLOCK_SIZE, .synchronize = false, - { - .template_ahash = { - .init = ssi_ahash_init, - .update = ssi_ahash_update, - .final = ssi_ahash_final, - .finup = ssi_ahash_finup, - .digest = ssi_ahash_digest, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, #ifdef EXPORT_FIXED - .export = ssi_ahash_export, - .import = ssi_ahash_import, + .export = ssi_ahash_export, + .import = ssi_ahash_import, #endif - .setkey = ssi_ahash_setkey, - .halg = { - .digestsize = SHA1_DIGEST_SIZE, - .statesize = sizeof(struct sha1_state), - }, + .setkey = ssi_ahash_setkey, + .halg = 
{ + .digestsize = SHA1_DIGEST_SIZE, + .statesize = sizeof(struct sha1_state), }, }, .hash_mode = DRV_HASH_SHA1, @@ -1989,23 +1856,20 @@ static struct ssi_hash_template driver_hash[] = { .hmac_name = "hmac(sha256)", .hmac_driver_name = "hmac-sha256-dx", .blocksize = SHA256_BLOCK_SIZE, - .synchronize = false, - { - .template_ahash = { - .init = ssi_ahash_init, - .update = ssi_ahash_update, - .final = ssi_ahash_final, - .finup = ssi_ahash_finup, - .digest = ssi_ahash_digest, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, #ifdef EXPORT_FIXED - .export = ssi_ahash_export, - .import = ssi_ahash_import, + .export = ssi_ahash_export, + .import = ssi_ahash_import, #endif - .setkey = ssi_ahash_setkey, - .halg = { - .digestsize = SHA256_DIGEST_SIZE, - .statesize = sizeof(struct sha256_state), - }, + .setkey = ssi_ahash_setkey, + .halg = { + .digestsize = SHA256_DIGEST_SIZE, + .statesize = sizeof(struct sha256_state), }, }, .hash_mode = DRV_HASH_SHA256, @@ -2018,23 +1882,20 @@ static struct ssi_hash_template driver_hash[] = { .hmac_name = "hmac(sha224)", .hmac_driver_name = "hmac-sha224-dx", .blocksize = SHA224_BLOCK_SIZE, - .synchronize = false, - { - .template_ahash = { - .init = ssi_ahash_init, - .update = ssi_ahash_update, - .final = ssi_ahash_final, - .finup = ssi_ahash_finup, - .digest = ssi_ahash_digest, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, #ifdef EXPORT_FIXED - .export = ssi_ahash_export, - .import = ssi_ahash_import, + .export = ssi_ahash_export, + .import = ssi_ahash_import, #endif - .setkey = ssi_ahash_setkey, - .halg = { - .digestsize = SHA224_DIGEST_SIZE, - .statesize = sizeof(struct sha256_state), - }, + .setkey = ssi_ahash_setkey, + .halg = { + .digestsize = SHA224_DIGEST_SIZE, + .statesize = sizeof(struct sha256_state), }, }, .hash_mode = DRV_HASH_SHA224, @@ -2048,23 +1909,20 @@ static struct ssi_hash_template driver_hash[] = { .hmac_name = "hmac(sha384)", .hmac_driver_name = "hmac-sha384-dx", .blocksize = SHA384_BLOCK_SIZE, - .synchronize = false, - { - .template_ahash = { - .init = ssi_ahash_init, - .update = ssi_ahash_update, - .final = ssi_ahash_final, - .finup = ssi_ahash_finup, - .digest = ssi_ahash_digest, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, #ifdef EXPORT_FIXED - .export = ssi_ahash_export, - .import = ssi_ahash_import, + .export = ssi_ahash_export, + .import = ssi_ahash_import, #endif - .setkey = ssi_ahash_setkey, - .halg = { - .digestsize = SHA384_DIGEST_SIZE, - .statesize = sizeof(struct sha512_state), - }, + .setkey = ssi_ahash_setkey, + .halg = { + .digestsize = SHA384_DIGEST_SIZE, + .statesize = sizeof(struct sha512_state), }, }, .hash_mode = DRV_HASH_SHA384, @@ -2077,23 +1935,20 @@ static struct ssi_hash_template driver_hash[] = { .hmac_name = "hmac(sha512)", .hmac_driver_name = "hmac-sha512-dx", .blocksize = SHA512_BLOCK_SIZE, - .synchronize = false, - { - .template_ahash = { - .init = ssi_ahash_init, - .update = ssi_ahash_update, - .final = ssi_ahash_final, - .finup = ssi_ahash_finup, - .digest = ssi_ahash_digest, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, #ifdef EXPORT_FIXED - .export 
= ssi_ahash_export, - .import = ssi_ahash_import, + .export = ssi_ahash_export, + .import = ssi_ahash_import, #endif - .setkey = ssi_ahash_setkey, - .halg = { - .digestsize = SHA512_DIGEST_SIZE, - .statesize = sizeof(struct sha512_state), - }, + .setkey = ssi_ahash_setkey, + .halg = { + .digestsize = SHA512_DIGEST_SIZE, + .statesize = sizeof(struct sha512_state), }, }, .hash_mode = DRV_HASH_SHA512, @@ -2107,23 +1962,20 @@ static struct ssi_hash_template driver_hash[] = { .hmac_name = "hmac(md5)", .hmac_driver_name = "hmac-md5-dx", .blocksize = MD5_HMAC_BLOCK_SIZE, - .synchronize = false, - { - .template_ahash = { - .init = ssi_ahash_init, - .update = ssi_ahash_update, - .final = ssi_ahash_final, - .finup = ssi_ahash_finup, - .digest = ssi_ahash_digest, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, #ifdef EXPORT_FIXED - .export = ssi_ahash_export, - .import = ssi_ahash_import, + .export = ssi_ahash_export, + .import = ssi_ahash_import, #endif - .setkey = ssi_ahash_setkey, - .halg = { - .digestsize = MD5_DIGEST_SIZE, - .statesize = sizeof(struct md5_state), - }, + .setkey = ssi_ahash_setkey, + .halg = { + .digestsize = MD5_DIGEST_SIZE, + .statesize = sizeof(struct md5_state), }, }, .hash_mode = DRV_HASH_MD5, @@ -2134,23 +1986,20 @@ static struct ssi_hash_template driver_hash[] = { .name = "xcbc(aes)", .driver_name = "xcbc-aes-dx", .blocksize = AES_BLOCK_SIZE, - .synchronize = false, - { - .template_ahash = { - .init = ssi_ahash_init, - .update = ssi_mac_update, - .final = ssi_mac_final, - .finup = ssi_mac_finup, - .digest = ssi_mac_digest, - .setkey = ssi_xcbc_setkey, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_mac_update, + .final = ssi_mac_final, + .finup = ssi_mac_finup, + .digest = ssi_mac_digest, + .setkey = ssi_xcbc_setkey, #ifdef EXPORT_FIXED - .export = ssi_ahash_export, - .import = ssi_ahash_import, + .export = ssi_ahash_export, + .import = ssi_ahash_import, #endif - .halg = { - .digestsize = AES_BLOCK_SIZE, - .statesize = sizeof(struct aeshash_state), - }, + .halg = { + .digestsize = AES_BLOCK_SIZE, + .statesize = sizeof(struct aeshash_state), }, }, .hash_mode = DRV_HASH_NULL, @@ -2162,23 +2011,20 @@ static struct ssi_hash_template driver_hash[] = { .name = "cmac(aes)", .driver_name = "cmac-aes-dx", .blocksize = AES_BLOCK_SIZE, - .synchronize = false, - { - .template_ahash = { - .init = ssi_ahash_init, - .update = ssi_mac_update, - .final = ssi_mac_final, - .finup = ssi_mac_finup, - .digest = ssi_mac_digest, - .setkey = ssi_cmac_setkey, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_mac_update, + .final = ssi_mac_final, + .finup = ssi_mac_finup, + .digest = ssi_mac_digest, + .setkey = ssi_cmac_setkey, #ifdef EXPORT_FIXED - .export = ssi_ahash_export, - .import = ssi_ahash_import, + .export = ssi_ahash_export, + .import = ssi_ahash_import, #endif - .halg = { - .digestsize = AES_BLOCK_SIZE, - .statesize = sizeof(struct aeshash_state), - }, + .halg = { + .digestsize = AES_BLOCK_SIZE, + .statesize = sizeof(struct aeshash_state), }, }, .hash_mode = DRV_HASH_NULL, @@ -2194,6 +2040,7 @@ ssi_hash_create_alg(struct ssi_hash_template *template, bool keyed) { struct ssi_hash_alg *t_crypto_alg; struct crypto_alg *alg; + struct ahash_alg *halg; t_crypto_alg = kzalloc(sizeof(struct ssi_hash_alg), GFP_KERNEL); if (!t_crypto_alg) { @@ -2201,20 +2048,9 @@ ssi_hash_create_alg(struct ssi_hash_template *template, bool keyed) return ERR_PTR(-ENOMEM); 
} - t_crypto_alg->synchronize = template->synchronize; - if (template->synchronize) { - struct shash_alg *halg; - t_crypto_alg->shash_alg = template->template_shash; - halg = &t_crypto_alg->shash_alg; - alg = &halg->base; - if (!keyed) halg->setkey = NULL; - } else { - struct ahash_alg *halg; - t_crypto_alg->ahash_alg = template->template_ahash; - halg = &t_crypto_alg->ahash_alg; - alg = &halg->halg.base; - if (!keyed) halg->setkey = NULL; - } + t_crypto_alg->ahash_alg = template->template_ahash; + halg = &t_crypto_alg->ahash_alg; + alg = &halg->halg.base; if (keyed) { snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", @@ -2222,6 +2058,7 @@ ssi_hash_create_alg(struct ssi_hash_template *template, bool keyed) snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", template->hmac_driver_name); } else { + halg->setkey = NULL; snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", template->name); snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", @@ -2234,17 +2071,10 @@ ssi_hash_create_alg(struct ssi_hash_template *template, bool keyed) alg->cra_alignmask = 0; alg->cra_exit = ssi_hash_cra_exit; - if (template->synchronize) { - alg->cra_init = ssi_shash_cra_init; - alg->cra_flags = CRYPTO_ALG_TYPE_SHASH | - CRYPTO_ALG_KERN_DRIVER_ONLY; - alg->cra_type = &crypto_shash_type; - } else { - alg->cra_init = ssi_ahash_cra_init; - alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_TYPE_AHASH | + alg->cra_init = ssi_ahash_cra_init; + alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_TYPE_AHASH | CRYPTO_ALG_KERN_DRIVER_ONLY; - alg->cra_type = &crypto_ahash_type; - } + alg->cra_type = &crypto_ahash_type; t_crypto_alg->hash_mode = template->hash_mode; t_crypto_alg->hw_mode = template->hw_mode; @@ -2429,24 +2259,15 @@ int ssi_hash_alloc(struct ssi_drvdata *drvdata) } t_alg->drvdata = drvdata; - if (t_alg->synchronize) { - rc = crypto_register_shash(&t_alg->shash_alg); - if (unlikely(rc != 0)) { - SSI_LOG_ERR("%s alg registration failed\n", - t_alg->shash_alg.base.cra_driver_name); - kfree(t_alg); - goto fail; - } else - list_add_tail(&t_alg->entry, &hash_handle->hash_list); + rc = crypto_register_ahash(&t_alg->ahash_alg); + if (unlikely(rc)) { + SSI_LOG_ERR("%s alg registration failed\n", + driver_hash[alg].driver_name); + kfree(t_alg); + goto fail; } else { - rc = crypto_register_ahash(&t_alg->ahash_alg); - if (unlikely(rc != 0)) { - SSI_LOG_ERR("%s alg registration failed\n", - t_alg->ahash_alg.halg.base.cra_driver_name); - kfree(t_alg); - goto fail; - } else - list_add_tail(&t_alg->entry, &hash_handle->hash_list); + list_add_tail(&t_alg->entry, + &hash_handle->hash_list); } } @@ -2460,25 +2281,14 @@ int ssi_hash_alloc(struct ssi_drvdata *drvdata) } t_alg->drvdata = drvdata; - if (t_alg->synchronize) { - rc = crypto_register_shash(&t_alg->shash_alg); - if (unlikely(rc != 0)) { - SSI_LOG_ERR("%s alg registration failed\n", - t_alg->shash_alg.base.cra_driver_name); - kfree(t_alg); - goto fail; - } else - list_add_tail(&t_alg->entry, &hash_handle->hash_list); - + rc = crypto_register_ahash(&t_alg->ahash_alg); + if (unlikely(rc)) { + SSI_LOG_ERR("%s alg registration failed\n", + driver_hash[alg].driver_name); + kfree(t_alg); + goto fail; } else { - rc = crypto_register_ahash(&t_alg->ahash_alg); - if (unlikely(rc != 0)) { - SSI_LOG_ERR("%s alg registration failed\n", - t_alg->ahash_alg.halg.base.cra_driver_name); - kfree(t_alg); - goto fail; - } else - list_add_tail(&t_alg->entry, &hash_handle->hash_list); + list_add_tail(&t_alg->entry, &hash_handle->hash_list); } } @@ -2501,11 +2311,7 @@ int 
ssi_hash_free(struct ssi_drvdata *drvdata) if (hash_handle != NULL) { list_for_each_entry_safe(t_hash_alg, hash_n, &hash_handle->hash_list, entry) { - if (t_hash_alg->synchronize) { - crypto_unregister_shash(&t_hash_alg->shash_alg); - } else { - crypto_unregister_ahash(&t_hash_alg->ahash_alg); - } + crypto_unregister_ahash(&t_hash_alg->ahash_alg); list_del(&t_hash_alg->entry); kfree(t_hash_alg); } From patchwork Sun Jun 4 08:02:38 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 101338 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp437679qgd; Sun, 4 Jun 2017 01:04:34 -0700 (PDT) X-Received: by 10.84.196.129 with SMTP id l1mr4282870pld.109.1496563474111; Sun, 04 Jun 2017 01:04:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1496563474; cv=none; d=google.com; s=arc-20160816; b=qq9OkCtypTOWFl6cxyN6vBm/2ghMrHPz2UrbnKA+SCU1g1w3Yqc0qz9AF5/DhDeEI2 +qNoHPNaDya+UIwU+rxUcria6qgD3y5pgUknbDgoNpMbmDVkJyNuiZtyQMB6RCBouwnK KNXMSZo5YG6Cm6mEB+CDCe6w4LegTVU+tqm2vTQ6X4E3jfTRPueZusuAZUuJ1+GIDP3i tSgTlhRavYfka22cRdbrbVJ+GHKDIqpJtfm0l2tqeHnDZ1oPwbijpM71blV5igwch8hu s0MtXlMHQ+szDcQogQEjIbRdhAcaFnkAU3fWu3fZSZWz6/txmb1QNEjzxo1Ia5brRy4z RYLw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=ySXoWELAsaGwX81jgUBXaohBt7IqNgef+hrVRhDQlpc=; b=NTnqVZSssUvmzJF0rdkENF85H+NiXnUVTy7n9bVIFMdxk3ZOWY1tDRxMLWKoKlRVWM CI3B56NOvmU8WdSFBDqkeQWD/WT9e7xvJKXlw4sZFkE88Iy2oiLmSJTMdxwUPEzT2vZs I1fbHHpTWCfJ+HUa+RByqDxw4djYbkYwbqnp80popuNFUEmfW3LkGmDJ2uVfN9NlUHd3 t822g/N83XBXU59q9GnhVAqgPiGYExdLTWO3YLI7dalWIvVTowdu22JJ2zp+dB8S7O0J nX2jcEf+IEt2pBI6gjLqelsSNt+kmiPklvYiBn1x+1Hw+qWTLg3NNYdt6zG4F/lxzBzi /xHQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id h193si27373189pgc.305.2017.06.04.01.04.33; Sun, 04 Jun 2017 01:04:34 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751157AbdFDIEb (ORCPT + 1 other); Sun, 4 Jun 2017 04:04:31 -0400 Received: from foss.arm.com ([217.140.101.70]:52856 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751322AbdFDIE0 (ORCPT ); Sun, 4 Jun 2017 04:04:26 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 01938344; Sun, 4 Jun 2017 01:04:26 -0700 (PDT) Received: from localhost.localdomain (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 058F33F589; Sun, 4 Jun 2017 01:04:23 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman Cc: Joe Perches , Ofir Drang , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Subject: [PATCH v3 17/18] staging: ccree: remove last remnants of sblkcipher Date: Sun, 4 Jun 2017 11:02:38 +0300 Message-Id: <1496563362-7954-18-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1496563362-7954-1-git-send-email-gilad@benyossef.com> References: <1496563362-7954-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The cipher code had some leftovers of an attempt to support the synchronous cipher API with the HW. Remove the code handling this.
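After this patch every cipher template is registered through the async path only, as the ssi_ablkcipher_create_alg() hunk further down shows. A minimal sketch of the resulting per-template setup follows; the helper name cc_setup_ablkcipher_alg() is invented for illustration, the field assignments are taken from the diff, and the driver's own headers (ssi_driver.h, linux/crypto.h) are assumed:

static void cc_setup_ablkcipher_alg(struct crypto_alg *alg,
				    struct ssi_alg_template *tmpl)
{
	/* Single, unconditional async path: no template->synchronous check
	 * and no sblkcipher template union member any more.
	 */
	alg->cra_init = ssi_ablkcipher_init;
	alg->cra_exit = ssi_blkcipher_exit;
	alg->cra_type = &crypto_ablkcipher_type;
	alg->cra_ablkcipher = tmpl->template_ablkcipher;
	alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY |
			 tmpl->type;
}

Keeping this single code path is what allows the synchronous flag and the sblkcipher template member to be dropped from struct ssi_alg_template.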
Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/ssi_cipher.c | 102 +++---------------------------------- drivers/staging/ccree/ssi_driver.h | 1 - 2 files changed, 6 insertions(+), 97 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c index e16fd36..2dfc6a3 100644 --- a/drivers/staging/ccree/ssi_cipher.c +++ b/drivers/staging/ccree/ssi_cipher.c @@ -36,7 +36,6 @@ #define MAX_ABLKCIPHER_SEQ_LEN 6 #define template_ablkcipher template_u.ablkcipher -#define template_sblkcipher template_u.blkcipher #define SSI_MIN_AES_XTS_SIZE 0x10 #define SSI_MAX_AES_XTS_SIZE 0x2000 @@ -886,69 +885,6 @@ static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req, void __io ivsize, areq, cc_base); } - - -static int ssi_sblkcipher_init(struct crypto_tfm *tfm) -{ - struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); - - /* Allocate sync ctx buffer */ - ctx_p->sync_ctx = kmalloc(sizeof(struct blkcipher_req_ctx), GFP_KERNEL|GFP_DMA); - if (!ctx_p->sync_ctx) { - SSI_LOG_ERR("Allocating sync ctx buffer in context failed\n"); - return -ENOMEM; - } - SSI_LOG_DEBUG("Allocated sync ctx buffer in context ctx_p->sync_ctx=@%p\n", - ctx_p->sync_ctx); - - return ssi_blkcipher_init(tfm); -} - - -static void ssi_sblkcipher_exit(struct crypto_tfm *tfm) -{ - struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); - - kfree(ctx_p->sync_ctx); - SSI_LOG_DEBUG("Free sync ctx buffer in context ctx_p->sync_ctx=@%p\n", ctx_p->sync_ctx); - - ssi_blkcipher_exit(tfm); -} - -#ifdef SYNC_ALGS -static int ssi_sblkcipher_encrypt(struct blkcipher_desc *desc, - struct scatterlist *dst, struct scatterlist *src, - unsigned int nbytes) -{ - struct crypto_blkcipher *blk_tfm = desc->tfm; - struct crypto_tfm *tfm = crypto_blkcipher_tfm(blk_tfm); - struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); - struct blkcipher_req_ctx *req_ctx = ctx_p->sync_ctx; - unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm); - - req_ctx->backup_info = desc->info; - req_ctx->is_giv = false; - - return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_ENCRYPT); -} - -static int ssi_sblkcipher_decrypt(struct blkcipher_desc *desc, - struct scatterlist *dst, struct scatterlist *src, - unsigned int nbytes) -{ - struct crypto_blkcipher *blk_tfm = desc->tfm; - struct crypto_tfm *tfm = crypto_blkcipher_tfm(blk_tfm); - struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); - struct blkcipher_req_ctx *req_ctx = ctx_p->sync_ctx; - unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm); - - req_ctx->backup_info = desc->info; - req_ctx->is_giv = false; - - return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_DECRYPT); -} -#endif - /* Async wrap functions */ static int ssi_ablkcipher_init(struct crypto_tfm *tfm) @@ -1014,7 +950,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_XTS, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, { .name = "xts(aes)", @@ -1031,7 +966,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_XTS, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, { .name = "xts(aes)", @@ -1048,7 +982,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_XTS, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, #endif /*SSI_CC_HAS_AES_XTS*/ #if SSI_CC_HAS_AES_ESSIV @@ -1067,7 +1000,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ESSIV, 
.flow_mode = S_DIN_to_AES, - .synchronous = false, }, { .name = "essiv(aes)", @@ -1084,7 +1016,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ESSIV, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, { .name = "essiv(aes)", @@ -1101,7 +1032,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ESSIV, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, #endif /*SSI_CC_HAS_AES_ESSIV*/ #if SSI_CC_HAS_AES_BITLOCKER @@ -1120,7 +1050,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_BITLOCKER, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, { .name = "bitlocker(aes)", @@ -1137,7 +1066,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_BITLOCKER, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, { .name = "bitlocker(aes)", @@ -1154,7 +1082,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_BITLOCKER, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, #endif /*SSI_CC_HAS_AES_BITLOCKER*/ { @@ -1172,7 +1099,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ECB, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, { .name = "cbc(aes)", @@ -1186,10 +1112,9 @@ static struct ssi_alg_template blkcipher_algs[] = { .min_keysize = AES_MIN_KEY_SIZE, .max_keysize = AES_MAX_KEY_SIZE, .ivsize = AES_BLOCK_SIZE, - }, + }, .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, { .name = "ofb(aes)", @@ -1206,7 +1131,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_OFB, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, #if SSI_CC_HAS_AES_CTS { @@ -1224,7 +1148,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_CBC_CTS, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, #endif { @@ -1242,7 +1165,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_CTR, .flow_mode = S_DIN_to_AES, - .synchronous = false, }, { .name = "cbc(des3_ede)", @@ -1259,7 +1181,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_DES, - .synchronous = false, }, { .name = "ecb(des3_ede)", @@ -1276,7 +1197,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ECB, .flow_mode = S_DIN_to_DES, - .synchronous = false, }, { .name = "cbc(des)", @@ -1293,7 +1213,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_DES, - .synchronous = false, }, { .name = "ecb(des)", @@ -1310,7 +1229,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ECB, .flow_mode = S_DIN_to_DES, - .synchronous = false, }, #if SSI_CC_HAS_MULTI2 { @@ -1328,7 +1246,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_MULTI2_CBC, .flow_mode = S_DIN_to_MULTI2, - .synchronous = false, }, { .name = "ofb(multi2)", @@ -1345,7 +1262,6 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_MULTI2_OFB, .flow_mode = S_DIN_to_MULTI2, - .synchronous = false, }, #endif /*SSI_CC_HAS_MULTI2*/ }; @@ -1373,18 +1289,12 @@ struct ssi_crypto_alg *ssi_ablkcipher_create_alg(struct ssi_alg_template *templa alg->cra_alignmask = 0; alg->cra_ctxsize = sizeof(struct ssi_ablkcipher_ctx); - alg->cra_init = template->synchronous? ssi_sblkcipher_init:ssi_ablkcipher_init; - alg->cra_exit = template->synchronous? 
ssi_sblkcipher_exit:ssi_blkcipher_exit; - alg->cra_type = template->synchronous? &crypto_blkcipher_type:&crypto_ablkcipher_type; - if(template->synchronous) { - alg->cra_blkcipher = template->template_sblkcipher; - alg->cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | - template->type; - } else { - alg->cra_ablkcipher = template->template_ablkcipher; - alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY | + alg->cra_init = ssi_ablkcipher_init; + alg->cra_exit = ssi_blkcipher_exit; + alg->cra_type = &crypto_ablkcipher_type; + alg->cra_ablkcipher = template->template_ablkcipher; + alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY | template->type; - } t_alg->cipher_mode = template->cipher_mode; t_alg->flow_mode = template->flow_mode; diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h index 4c6eef9..34bd7ef 100644 --- a/drivers/staging/ccree/ssi_driver.h +++ b/drivers/staging/ccree/ssi_driver.h @@ -176,7 +176,6 @@ struct ssi_alg_template { int cipher_mode; int flow_mode; /* Note: currently, refers to the cipher mode only. */ int auth_mode; - bool synchronous; struct ssi_drvdata *drvdata; }; From patchwork Sun Jun 4 08:02:39 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 101339 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp437783qgd; Sun, 4 Jun 2017 01:04:59 -0700 (PDT) X-Received: by 10.84.176.3 with SMTP id u3mr8743717plb.119.1496563499447; Sun, 04 Jun 2017 01:04:59 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1496563499; cv=none; d=google.com; s=arc-20160816; b=eimFCp5lz+B8ExH9kBklx7TFEdz5iyZe1WughdcB/1JPHt8JLfN9e9a2UeHXOCUtci 0MOI6ZCN1UIuPYYmHkRIryaO9jHGXKcoGmP7eiAdoB7wTIT/4vLyra8JbqKsITwsOocP UyTUHdnJUa9ZR+tIr4v2keiUqmHMcFQ8GfUQIrEJJidTDgoHn5nRDCqvgvqezdc7d96n wKVYZ13JcV8+4rodmNYaNvjucUhBwaGo2tccby3/sQebMO4OtngNohNT60AKgVI51XSU tDIT4TYQUNq8hMilcUFxm+aELEXtoVI1G1JAMCk7AxlyybLotHhbBEOik9tgeAPk2kCR J8ng== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=py3SnkzXv65b0OdRKhCdXFPb9tyJPbK3pc4shQ1poOM=; b=Zap+y1IZVRewoj3MgsJ0ylNxBTeUQUSNhbN/bmgpb2VjfS4jMqaID/V1Q/vzFKn0s9 3tgnXbw/xGVhFibrOOJspZSysAlfTlz4f7JzZTxXnByVec4bNFY/kby060aO/0LmA255 Sb1kbQzw75rh+ScPIvV5jv4rCbxlZWpBz+ijMHEfuDH2+1ka2r/6epjt4nDK97tCLbM1 jIV0FW3mczJrfcNzZKTu3Ei6ToBeiT8wufn9WmamvLLC7NGeLogqRoWQjQO0UIOl5yQL V03dwantHki2LGyYd4QEzbU/LqIe/qVi0+KC1b14HFvyU6IszxpyZHQziPk0/M1f1n9j iZgA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id p61si4117021plb.5.2017.06.04.01.04.59; Sun, 04 Jun 2017 01:04:59 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751204AbdFDIEq (ORCPT + 1 other); Sun, 4 Jun 2017 04:04:46 -0400 Received: from foss.arm.com ([217.140.101.70]:52866 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751389AbdFDIEc (ORCPT ); Sun, 4 Jun 2017 04:04:32 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9FA3A344; Sun, 4 Jun 2017 01:04:31 -0700 (PDT) Received: from localhost.localdomain (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A21423F589; Sun, 4 Jun 2017 01:04:29 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman Cc: Joe Perches , Ofir Drang , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Subject: [PATCH v3 18/18] staging: ccree: remove descriptor context definitions Date: Sun, 4 Jun 2017 11:02:39 +0300 Message-Id: <1496563362-7954-19-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1496563362-7954-1-git-send-email-gilad@benyossef.com> References: <1496563362-7954-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Remove definitions of descriptor context which are not used in the driver. Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/cc_crypto_ctx.h | 86 ----------------------------------- 1 file changed, 86 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h index 20f3f9f..591f6fd 100644 --- a/drivers/staging/ccree/cc_crypto_ctx.h +++ b/drivers/staging/ccree/cc_crypto_ctx.h @@ -196,91 +196,5 @@ enum drv_crypto_padding_type { DRV_PADDING_RESERVE32B = S32_MAX }; -/*******************************************************************/ -/***************** DESCRIPTOR BASED CONTEXTS ***********************/ -/*******************************************************************/ - - /* Generic context ("super-class") */ -struct drv_ctx_generic { - enum drv_crypto_alg alg; -} __attribute__((__may_alias__)); - -struct drv_ctx_hash { - enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_HASH */ - enum drv_hash_mode mode; - u8 digest[CC_DIGEST_SIZE_MAX]; - /* reserve to end of allocated context size */ - u8 reserved[CC_CTX_SIZE - 2 * sizeof(u32) - - CC_DIGEST_SIZE_MAX]; -}; - -/* NOTE! 
drv_ctx_hmac should have the same structure as drv_ctx_hash except - * k0, k0_size fields - */ -struct drv_ctx_hmac { - enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_HMAC */ - enum drv_hash_mode mode; - u8 digest[CC_DIGEST_SIZE_MAX]; - u32 k0[CC_HMAC_BLOCK_SIZE_MAX / sizeof(u32)]; - u32 k0_size; - /* reserve to end of allocated context size */ - u8 reserved[CC_CTX_SIZE - 3 * sizeof(u32) - - CC_DIGEST_SIZE_MAX - CC_HMAC_BLOCK_SIZE_MAX]; -}; - -struct drv_ctx_cipher { - enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_AES */ - enum drv_cipher_mode mode; - enum drv_crypto_direction direction; - enum drv_crypto_key_type crypto_key_type; - enum drv_crypto_padding_type padding_type; - u32 key_size; /* numeric value in bytes */ - u32 data_unit_size; /* required for XTS */ - /* block_state is the AES engine block state. - * It is used by the host to pass IV or counter at initialization. - * It is used by SeP for intermediate block chaining state and for - * returning MAC algorithms results. - */ - u8 block_state[CC_AES_BLOCK_SIZE]; - u8 key[CC_AES_KEY_SIZE_MAX]; - u8 xex_key[CC_AES_KEY_SIZE_MAX]; - /* reserve to end of allocated context size */ - u32 reserved[CC_DRV_CTX_SIZE_WORDS - 7 - - CC_AES_BLOCK_SIZE / sizeof(u32) - 2 * - (CC_AES_KEY_SIZE_MAX / sizeof(u32))]; -}; - -/* authentication and encryption with associated data class */ -struct drv_ctx_aead { - enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_AES */ - enum drv_cipher_mode mode; - enum drv_crypto_direction direction; - u32 key_size; /* numeric value in bytes */ - u32 nonce_size; /* nonce size (octets) */ - u32 header_size; /* finit additional data size (octets) */ - u32 text_size; /* finit text data size (octets) */ - u32 tag_size; /* mac size, element of {4, 6, 8, 10, 12, 14, 16} */ - /* block_state1/2 is the AES engine block state */ - u8 block_state[CC_AES_BLOCK_SIZE]; - u8 mac_state[CC_AES_BLOCK_SIZE]; /* MAC result */ - u8 nonce[CC_AES_BLOCK_SIZE]; /* nonce buffer */ - u8 key[CC_AES_KEY_SIZE_MAX]; - /* reserve to end of allocated context size */ - u32 reserved[CC_DRV_CTX_SIZE_WORDS - 8 - - 3 * (CC_AES_BLOCK_SIZE / sizeof(u32)) - - CC_AES_KEY_SIZE_MAX / sizeof(u32)]; -}; - -/*******************************************************************/ -/***************** MESSAGE BASED CONTEXTS **************************/ -/*******************************************************************/ - -/* Get the address of a @member within a given @ctx address - * @ctx: The context address - * @type: Type of context structure - * @member: Associated context field - */ -#define GET_CTX_FIELD_ADDR(ctx, type, member) ((ctx) + offsetof(type, member)) - #endif /* _CC_CRYPTO_CTX_H_ */
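As a side note, the GET_CTX_FIELD_ADDR() macro deleted above is simply an offsetof() based address helper. A small stand-alone illustration of the same pattern is sketched below; the toy structure and buffer size are invented for the example and do not reflect the driver's real context layout:

#include <stddef.h>
#include <stdio.h>

/* Toy stand-in for a descriptor-based context; not the driver's layout. */
struct toy_ctx {
	unsigned int alg;
	unsigned char digest[64];
};

/* Same idea as the removed macro: base address plus field offset. */
#define GET_CTX_FIELD_ADDR(ctx, type, member) ((ctx) + offsetof(type, member))

int main(void)
{
	unsigned char ctx_buf[128];
	unsigned char *digest = GET_CTX_FIELD_ADDR(ctx_buf, struct toy_ctx, digest);

	/* Prints the offset of the digest member, typically 4 on common ABIs. */
	printf("digest field lives at offset %td\n", digest - ctx_buf);
	return 0;
}

Running the snippet prints the member offset, which is exactly the address arithmetic the removed macro performed on the descriptor contexts before they became unused.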