From patchwork Sun Apr 23 09:26:09 2017
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 97958
From: Gilad Ben-Yossef
To: Herbert Xu, "David S. Miller", Rob Herring, Mark Rutland,
 Greg Kroah-Hartman, devel@driverdev.osuosl.org
Cc: linux-crypto@vger.kernel.org, devicetree@vger.kernel.org,
 linux-kernel@vger.kernel.org, gilad.benyossef@arm.com,
 Binoy Jayan, Ofir Drang, Stuart Yoder, Stephan Muller
Subject: [PATCH v3 01/15] staging: ccree: introduce CryptoCell HW driver
Date: Sun, 23 Apr 2017 12:26:09 +0300
Message-Id: <1492939583-25688-2-git-send-email-gilad@benyossef.com>
In-Reply-To: <1492939583-25688-1-git-send-email-gilad@benyossef.com>
References: <1492939583-25688-1-git-send-email-gilad@benyossef.com>

Introduce basic low level Arm TrustZone CryptoCell HW support.
This first patch doesn't actually register any Crypto API
transformations; these will follow in the next patch.

This first revision supports the CC 712 REE component.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/Kconfig                    |   2 +
 drivers/staging/Makefile                   |   2 +-
 drivers/staging/ccree/Kconfig              |  19 +
 drivers/staging/ccree/Makefile             |   2 +
 drivers/staging/ccree/cc_bitops.h          |  62 +++
 drivers/staging/ccree/cc_crypto_ctx.h      | 235 ++++++++++
 drivers/staging/ccree/cc_hal.h             |  30 ++
 drivers/staging/ccree/cc_hw_queue_defs.h   | 603 +++++++++++++++++++++++++
 drivers/staging/ccree/cc_lli_defs.h        |  57 +++
 drivers/staging/ccree/cc_pal_log.h         | 188 ++++++++
 drivers/staging/ccree/cc_pal_log_plat.h    |  33 ++
 drivers/staging/ccree/cc_pal_types.h       |  97 ++++
 drivers/staging/ccree/cc_pal_types_plat.h  |  29 ++
 drivers/staging/ccree/cc_regs.h            | 106 +++++
 drivers/staging/ccree/dx_crys_kernel.h     | 180 ++++++++
 drivers/staging/ccree/dx_env.h             | 224 ++++++++++
 drivers/staging/ccree/dx_host.h            | 155 +++++++
 drivers/staging/ccree/dx_reg_base_host.h   |  34 ++
 drivers/staging/ccree/dx_reg_common.h      |  26 ++
 drivers/staging/ccree/hw_queue_defs_plat.h |  43 ++
 drivers/staging/ccree/ssi_buffer_mgr.c     | 537 +++++++++++++++++++++++
 drivers/staging/ccree/ssi_buffer_mgr.h     |  79 ++++
 drivers/staging/ccree/ssi_config.h         |  61 +++
 drivers/staging/ccree/ssi_driver.c         | 499 +++++++++++++++++++++
 drivers/staging/ccree/ssi_driver.h         | 183 ++++++++
 drivers/staging/ccree/ssi_pm.c             | 144 ++++++
 drivers/staging/ccree/ssi_pm.h             |  46 ++
 drivers/staging/ccree/ssi_pm_ext.c         |  60 +++
 drivers/staging/ccree/ssi_pm_ext.h         |  33 ++
 drivers/staging/ccree/ssi_request_mgr.c    | 680 +++++++++++++++++++++++++++++
 drivers/staging/ccree/ssi_request_mgr.h    |  60 +++
 drivers/staging/ccree/ssi_sram_mgr.c       | 138 ++++++
 drivers/staging/ccree/ssi_sram_mgr.h       |  80 ++++
 drivers/staging/ccree/ssi_sysfs.c          | 440 +++++++++++++++++++
 drivers/staging/ccree/ssi_sysfs.h          |  54 +++
 35 files changed, 5220 insertions(+), 1 deletion(-)
 create mode 100644 drivers/staging/ccree/Kconfig
 create mode 100644 drivers/staging/ccree/Makefile
 create mode 100644 drivers/staging/ccree/cc_bitops.h
 create mode 100644 drivers/staging/ccree/cc_crypto_ctx.h
 create mode 100644 drivers/staging/ccree/cc_hal.h
 create mode 100644 drivers/staging/ccree/cc_hw_queue_defs.h
 create mode 100644 drivers/staging/ccree/cc_lli_defs.h
 create mode 100644 drivers/staging/ccree/cc_pal_log.h
 create mode 100644 drivers/staging/ccree/cc_pal_log_plat.h
 create mode 100644 drivers/staging/ccree/cc_pal_types.h
 create mode 100644 drivers/staging/ccree/cc_pal_types_plat.h
 create mode 100644 drivers/staging/ccree/cc_regs.h
 create mode 100644 drivers/staging/ccree/dx_crys_kernel.h
 create mode 100644 drivers/staging/ccree/dx_env.h
 create mode 100644 drivers/staging/ccree/dx_host.h
 create mode 100644 drivers/staging/ccree/dx_reg_base_host.h
 create mode 100644 drivers/staging/ccree/dx_reg_common.h
 create mode 100644 drivers/staging/ccree/hw_queue_defs_plat.h
 create mode 100644 drivers/staging/ccree/ssi_buffer_mgr.c
 create mode 100644 drivers/staging/ccree/ssi_buffer_mgr.h
 create mode 100644 drivers/staging/ccree/ssi_config.h
 create mode 100644 drivers/staging/ccree/ssi_driver.c
 create mode 100644 drivers/staging/ccree/ssi_driver.h
 create mode 100644 drivers/staging/ccree/ssi_pm.c
 create mode 100644 drivers/staging/ccree/ssi_pm.h
 create mode 100644 drivers/staging/ccree/ssi_pm_ext.c
 create mode 100644 drivers/staging/ccree/ssi_pm_ext.h
 create mode 100644 drivers/staging/ccree/ssi_request_mgr.c
 create mode 100644 drivers/staging/ccree/ssi_request_mgr.h
 create mode 100644 drivers/staging/ccree/ssi_sram_mgr.c
 create mode 100644 drivers/staging/ccree/ssi_sram_mgr.h
 create mode 100644 drivers/staging/ccree/ssi_sysfs.c
 create mode 100644 drivers/staging/ccree/ssi_sysfs.h

--
2.1.4

diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index 4c360f8..79587f5 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -104,4 +104,6 @@ source "drivers/staging/vc04_services/Kconfig"
 
 source "drivers/staging/bcm2835-audio/Kconfig"
 
+source "drivers/staging/ccree/Kconfig"
+
 endif # STAGING
diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index 29cec5a..a3dcb3e 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -41,4 +41,4 @@ obj-$(CONFIG_KS7010)		+= ks7010/
 obj-$(CONFIG_GREYBUS)		+= greybus/
 obj-$(CONFIG_BCM2835_VCHIQ)	+= vc04_services/
 obj-$(CONFIG_SND_BCM2835)	+= bcm2835-audio/
-
+obj-$(CONFIG_CRYPTO_DEV_CCREE)	+= ccree/
diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig
new file mode 100644
index 0000000..0f723d7
--- /dev/null
+++ b/drivers/staging/ccree/Kconfig
@@ -0,0 +1,19 @@
+config CRYPTO_DEV_CCREE
+	tristate "Support for ARM TrustZone CryptoCell C7XX family of Crypto accelerators"
+	depends on CRYPTO_HW && OF && HAS_DMA
+	default n
+	help
+	  Say 'Y' to enable a driver for the Arm TrustZone CryptoCell
+	  C7xx. Currently only the CryptoCell 712 REE is supported.
+	  Choose this if you wish to use hardware acceleration of
+	  cryptographic operations on the system REE.
+	  If unsure, say N.
+
+config CCREE_DISABLE_COHERENT_DMA_OPS
+	bool "Disable Coherent DMA operations for the CCREE driver"
+	depends on CRYPTO_DEV_CCREE
+	default n
+	help
+	  Say 'Y' to disable the use of coherent DMA operations by the
+	  CCREE driver for debugging purposes.
+	  If unsure, say N.
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
new file mode 100644
index 0000000..972af69
--- /dev/null
+++ b/drivers/staging/ccree/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
+ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
diff --git a/drivers/staging/ccree/cc_bitops.h b/drivers/staging/ccree/cc_bitops.h
new file mode 100644
index 0000000..3a39565
--- /dev/null
+++ b/drivers/staging/ccree/cc_bitops.h
@@ -0,0 +1,62 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+/*!
+ * \file cc_bitops.h
+ * Bit fields operations macros.
+ */
+#ifndef _CC_BITOPS_H_
+#define _CC_BITOPS_H_
+
+#define BITMASK(mask_size) (((mask_size) < 32) ? \
+	((1UL << (mask_size)) - 1) : 0xFFFFFFFFUL)
+#define BITMASK_AT(mask_size, mask_offset) (BITMASK(mask_size) << (mask_offset))
+
+#define BITFIELD_GET(word, bit_offset, bit_size) \
+	(((word) >> (bit_offset)) & BITMASK(bit_size))
+#define BITFIELD_SET(word, bit_offset, bit_size, new_val) do { \
+	word = ((word) & ~BITMASK_AT(bit_size, bit_offset)) | \
+		(((new_val) & BITMASK(bit_size)) << (bit_offset)); \
+} while (0)
+
+/* Is val aligned to "align" ("align" must be power of 2) */
+#ifndef IS_ALIGNED
+#define IS_ALIGNED(val, align) \
+	(((uintptr_t)(val) & ((align) - 1)) == 0)
+#endif
+
+#define SWAP_ENDIAN(word) \
+	(((word) >> 24) | (((word) & 0x00FF0000) >> 8) | \
+	(((word) & 0x0000FF00) << 8) | (((word) & 0x000000FF) << 24))
+
+#ifdef __BIG_ENDIAN
+#define SWAP_TO_LE(word) SWAP_ENDIAN(word)
+#define SWAP_TO_BE(word) word
+#else
+#define SWAP_TO_LE(word) word
+#define SWAP_TO_BE(word) SWAP_ENDIAN(word)
+#endif
+
+/* Is val a multiple of "mult" ("mult" must be power of 2) */
+#define IS_MULT(val, mult) \
+	(((val) & ((mult) - 1)) == 0)
+
+#define IS_NULL_ADDR(adr) \
+	(!(adr))
+
+#endif /*_CC_BITOPS_H_*/
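
To make the bit-field macros above concrete, this is how a caller would pack
and unpack a 4-bit field at bit offset 10 of a shadow register word.
Illustration only, not part of the patch; reg_val and field are made-up names:

/* Illustration only -- not part of the patch. */
uint32_t reg_val = 0;
uint32_t field;

BITFIELD_SET(reg_val, 10, 4, 0x5);	/* reg_val is now 0x00001400 */
field = BITFIELD_GET(reg_val, 10, 4);	/* field == 0x5 */
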
diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h
new file mode 100644
index 0000000..3547cb4
--- /dev/null
+++ b/drivers/staging/ccree/cc_crypto_ctx.h
@@ -0,0 +1,235 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _CC_CRYPTO_CTX_H_
+#define _CC_CRYPTO_CTX_H_
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#define INT32_MAX 0x7FFFFFFFL
+#else
+#include <stdint.h>
+#endif
+
+#ifndef max
+#define max(a, b) ((a) > (b) ? (a) : (b))
+#define min(a, b) ((a) < (b) ? (a) : (b))
+#endif
+
+/* context size */
+#ifndef CC_CTX_SIZE_LOG2
+#if (CC_SUPPORT_SHA > 256)
+#define CC_CTX_SIZE_LOG2 8
+#else
+#define CC_CTX_SIZE_LOG2 7
+#endif
+#endif
+#define CC_CTX_SIZE (1 << CC_CTX_SIZE_LOG2)
+#define CC_DRV_CTX_SIZE_WORDS (CC_CTX_SIZE >> 2)
+
+#define CC_DRV_DES_IV_SIZE 8
+#define CC_DRV_DES_BLOCK_SIZE 8
+
+#define CC_DRV_DES_ONE_KEY_SIZE 8
+#define CC_DRV_DES_DOUBLE_KEY_SIZE 16
+#define CC_DRV_DES_TRIPLE_KEY_SIZE 24
+#define CC_DRV_DES_KEY_SIZE_MAX CC_DRV_DES_TRIPLE_KEY_SIZE
+
+#define CC_AES_IV_SIZE 16
+#define CC_AES_IV_SIZE_WORDS (CC_AES_IV_SIZE >> 2)
+
+#define CC_AES_BLOCK_SIZE 16
+#define CC_AES_BLOCK_SIZE_WORDS 4
+
+#define CC_AES_128_BIT_KEY_SIZE 16
+#define CC_AES_128_BIT_KEY_SIZE_WORDS (CC_AES_128_BIT_KEY_SIZE >> 2)
+#define CC_AES_192_BIT_KEY_SIZE 24
+#define CC_AES_192_BIT_KEY_SIZE_WORDS (CC_AES_192_BIT_KEY_SIZE >> 2)
+#define CC_AES_256_BIT_KEY_SIZE 32
+#define CC_AES_256_BIT_KEY_SIZE_WORDS (CC_AES_256_BIT_KEY_SIZE >> 2)
+#define CC_AES_KEY_SIZE_MAX CC_AES_256_BIT_KEY_SIZE
+#define CC_AES_KEY_SIZE_WORDS_MAX (CC_AES_KEY_SIZE_MAX >> 2)
+
+#define CC_MD5_DIGEST_SIZE 16
+#define CC_SHA1_DIGEST_SIZE 20
+#define CC_SHA224_DIGEST_SIZE 28
+#define CC_SHA256_DIGEST_SIZE 32
+#define CC_SHA256_DIGEST_SIZE_IN_WORDS 8
+#define CC_SHA384_DIGEST_SIZE 48
+#define CC_SHA512_DIGEST_SIZE 64
+
+#define CC_SHA1_BLOCK_SIZE 64
+#define CC_SHA1_BLOCK_SIZE_IN_WORDS 16
+#define CC_MD5_BLOCK_SIZE 64
+#define CC_MD5_BLOCK_SIZE_IN_WORDS 16
+#define CC_SHA224_BLOCK_SIZE 64
+#define CC_SHA256_BLOCK_SIZE 64
+#define CC_SHA256_BLOCK_SIZE_IN_WORDS 16
+#define CC_SHA1_224_256_BLOCK_SIZE 64
+#define CC_SHA384_BLOCK_SIZE 128
+#define CC_SHA512_BLOCK_SIZE 128
+
+#if (CC_SUPPORT_SHA > 256)
+#define CC_DIGEST_SIZE_MAX CC_SHA512_DIGEST_SIZE
+#define CC_HASH_BLOCK_SIZE_MAX CC_SHA512_BLOCK_SIZE /*1024b*/
+#else /* Only up to SHA256 */
+#define CC_DIGEST_SIZE_MAX CC_SHA256_DIGEST_SIZE
+#define CC_HASH_BLOCK_SIZE_MAX CC_SHA256_BLOCK_SIZE /*512b*/
+#endif
+
+#define CC_HMAC_BLOCK_SIZE_MAX CC_HASH_BLOCK_SIZE_MAX
+
+#define CC_MULTI2_SYSTEM_KEY_SIZE 32
+#define CC_MULTI2_DATA_KEY_SIZE 8
+#define CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE (CC_MULTI2_SYSTEM_KEY_SIZE + CC_MULTI2_DATA_KEY_SIZE)
+#define CC_MULTI2_BLOCK_SIZE 8
+#define CC_MULTI2_IV_SIZE 8
+#define CC_MULTI2_MIN_NUM_ROUNDS 8
+#define CC_MULTI2_MAX_NUM_ROUNDS 128
+
+#define CC_DRV_ALG_MAX_BLOCK_SIZE CC_HASH_BLOCK_SIZE_MAX
+
+enum drv_engine_type {
+	DRV_ENGINE_NULL = 0,
+	DRV_ENGINE_AES = 1,
+	DRV_ENGINE_DES = 2,
+	DRV_ENGINE_HASH = 3,
+	DRV_ENGINE_RC4 = 4,
+	DRV_ENGINE_DOUT = 5,
+	DRV_ENGINE_RESERVE32B = INT32_MAX,
+};
+
+enum drv_crypto_alg {
+	DRV_CRYPTO_ALG_NULL = -1,
+	DRV_CRYPTO_ALG_AES = 0,
+	DRV_CRYPTO_ALG_DES = 1,
+	DRV_CRYPTO_ALG_HASH = 2,
+	DRV_CRYPTO_ALG_C2 = 3,
+	DRV_CRYPTO_ALG_HMAC = 4,
+	DRV_CRYPTO_ALG_AEAD = 5,
+	DRV_CRYPTO_ALG_BYPASS = 6,
+	DRV_CRYPTO_ALG_NUM = 7,
+	DRV_CRYPTO_ALG_RESERVE32B = INT32_MAX
+};
+
+enum drv_crypto_direction {
+	DRV_CRYPTO_DIRECTION_NULL = -1,
+	DRV_CRYPTO_DIRECTION_ENCRYPT = 0,
+	DRV_CRYPTO_DIRECTION_DECRYPT = 1,
+	DRV_CRYPTO_DIRECTION_DECRYPT_ENCRYPT = 3,
+	DRV_CRYPTO_DIRECTION_RESERVE32B = INT32_MAX
+};
+
+enum drv_cipher_mode {
+	DRV_CIPHER_NULL_MODE = -1,
+	DRV_CIPHER_ECB = 0,
+	DRV_CIPHER_CBC = 1,
+	DRV_CIPHER_CTR = 2,
+	DRV_CIPHER_CBC_MAC = 3,
+	DRV_CIPHER_XTS = 4,
+	DRV_CIPHER_XCBC_MAC = 5,
+	DRV_CIPHER_OFB = 6,
+	DRV_CIPHER_CMAC = 7,
+	DRV_CIPHER_CCM = 8,
+	DRV_CIPHER_CBC_CTS = 11,
+	DRV_CIPHER_GCTR = 12,
+	DRV_CIPHER_ESSIV = 13,
+	DRV_CIPHER_BITLOCKER = 14,
+	DRV_CIPHER_RESERVE32B = INT32_MAX
+};
+
+enum drv_hash_mode {
+	DRV_HASH_NULL = -1,
+	DRV_HASH_SHA1 = 0,
+	DRV_HASH_SHA256 = 1,
+	DRV_HASH_SHA224 = 2,
+	DRV_HASH_SHA512 = 3,
+	DRV_HASH_SHA384 = 4,
+	DRV_HASH_MD5 = 5,
+	DRV_HASH_CBC_MAC = 6,
+	DRV_HASH_XCBC_MAC = 7,
+	DRV_HASH_CMAC = 8,
+	DRV_HASH_MODE_NUM = 9,
+	DRV_HASH_RESERVE32B = INT32_MAX
+};
+
+enum drv_hash_hw_mode {
+	DRV_HASH_HW_MD5 = 0,
+	DRV_HASH_HW_SHA1 = 1,
+	DRV_HASH_HW_SHA256 = 2,
+	DRV_HASH_HW_SHA224 = 10,
+	DRV_HASH_HW_SHA512 = 4,
+	DRV_HASH_HW_SHA384 = 12,
+	DRV_HASH_HW_GHASH = 6,
+	DRV_HASH_HW_RESERVE32B = INT32_MAX
+};
+
+enum drv_multi2_mode {
+	DRV_MULTI2_NULL = -1,
+	DRV_MULTI2_ECB = 0,
+	DRV_MULTI2_CBC = 1,
+	DRV_MULTI2_OFB = 2,
+	DRV_MULTI2_RESERVE32B = INT32_MAX
+};
+
+/* drv_crypto_key_type[1:0] is mapped to cipher_do[1:0] */
+/* drv_crypto_key_type[2] is mapped to cipher_config2 */
+enum drv_crypto_key_type {
+	DRV_NULL_KEY = -1,
+	DRV_USER_KEY = 0,		/* 0x000 */
+	DRV_ROOT_KEY = 1,		/* 0x001 */
+	DRV_PROVISIONING_KEY = 2,	/* 0x010 */
+	DRV_SESSION_KEY = 3,		/* 0x011 */
+	DRV_APPLET_KEY = 4,		/* NA */
+	DRV_PLATFORM_KEY = 5,		/* 0x101 */
+	DRV_CUSTOMER_KEY = 6,		/* 0x110 */
+	DRV_END_OF_KEYS = INT32_MAX,
+};
+
+enum drv_crypto_padding_type {
+	DRV_PADDING_NONE = 0,
+	DRV_PADDING_PKCS7 = 1,
+	DRV_PADDING_RESERVE32B = INT32_MAX
+};
+
+/*******************************************************************/
+/***************** DESCRIPTOR BASED CONTEXTS ***********************/
+/*******************************************************************/
+
+/* Generic context ("super-class") */
+struct drv_ctx_generic {
+	enum drv_crypto_alg alg;
+} __attribute__((__may_alias__));
+
+/*******************************************************************/
+/***************** MESSAGE BASED CONTEXTS **************************/
+/*******************************************************************/
+
+/* Get the address of a @member within a given @ctx address
+ * @ctx: The context address
+ * @type: Type of context structure
+ * @member: Associated context field
+ */
+#define GET_CTX_FIELD_ADDR(ctx, type, member) (ctx + offsetof(type, member))
+
+#endif /* _CC_CRYPTO_CTX_H_ */
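
As an illustration of the context-layout helper above (example only, not part
of the patch; ctx_addr is a hypothetical base address of a context blob, and
offsetof comes from stddef.h/linux/stddef.h):

/* Illustration only -- not part of the patch. */
uint32_t ctx_addr = 0x1000;	/* hypothetical context base address */
uint32_t alg_addr =
	GET_CTX_FIELD_ADDR(ctx_addr, struct drv_ctx_generic, alg);
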
diff --git a/drivers/staging/ccree/cc_hal.h b/drivers/staging/ccree/cc_hal.h
new file mode 100644
index 0000000..75a0ce3
--- /dev/null
+++ b/drivers/staging/ccree/cc_hal.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+/* pseudo cc_hal.h for cc7x_perf_test_driver (to be able to include code from CC drivers) */
+
+#ifndef __CC_HAL_H__
+#define __CC_HAL_H__
+
+#include <linux/io.h>
+
+#define READ_REGISTER(_addr) ioread32((_addr))
+#define WRITE_REGISTER(_addr, _data) iowrite32((_data), (_addr))
+
+#define CC_HAL_WRITE_REGISTER(offset, val) WRITE_REGISTER(cc_base + offset, val)
+#define CC_HAL_READ_REGISTER(offset) READ_REGISTER(cc_base + offset)
+
+#endif
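
For illustration, a read-modify-write of a CryptoCell register through these
accessors would look like the sketch below. It assumes, as the macros
themselves do, that cc_base is the ioremap()ed base of the register file;
reg_offset and the bit being set are hypothetical:

/* Sketch only -- not part of the patch. */
uint32_t val;

val = CC_HAL_READ_REGISTER(reg_offset);	/* ioread32(cc_base + reg_offset) */
val |= 0x1;				/* set a hypothetical control bit */
CC_HAL_WRITE_REGISTER(reg_offset, val);	/* iowrite32() it back */
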
diff --git a/drivers/staging/ccree/cc_hw_queue_defs.h b/drivers/staging/ccree/cc_hw_queue_defs.h
new file mode 100644
index 0000000..fbaf1b6
--- /dev/null
+++ b/drivers/staging/ccree/cc_hw_queue_defs.h
@@ -0,0 +1,603 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __CC_HW_QUEUE_DEFS_H__
+#define __CC_HW_QUEUE_DEFS_H__
+
+#include "cc_pal_log.h"
+#include "cc_regs.h"
+#include "dx_crys_kernel.h"
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#define UINT32_MAX 0xFFFFFFFFL
+#define INT32_MAX 0x7FFFFFFFL
+#define UINT16_MAX 0xFFFFL
+#else
+#include <stdint.h>
+#endif
+
+/******************************************************************************
+*				DEFINITIONS
+******************************************************************************/
+
+/* Dma AXI Secure bit */
+#define AXI_SECURE	0
+#define AXI_NOT_SECURE	1
+
+#define HW_DESC_SIZE_WORDS	6
+#define HW_QUEUE_SLOTS_MAX	15 /* Max. available slots in HW queue */
+
+#define _HW_DESC_MONITOR_KICK 0x7FFFC00
+
+/******************************************************************************
+*				TYPE DEFINITIONS
+******************************************************************************/
+
+typedef struct HwDesc {
+	uint32_t word[HW_DESC_SIZE_WORDS];
+} HwDesc_s;
+
+typedef enum DescDirection {
+	DESC_DIRECTION_ILLEGAL = -1,
+	DESC_DIRECTION_ENCRYPT_ENCRYPT = 0,
+	DESC_DIRECTION_DECRYPT_DECRYPT = 1,
+	DESC_DIRECTION_DECRYPT_ENCRYPT = 3,
+	DESC_DIRECTION_END = INT32_MAX,
+} DescDirection_t;
+
+typedef enum DmaMode {
+	DMA_MODE_NULL = -1,
+	NO_DMA = 0,
+	DMA_SRAM = 1,
+	DMA_DLLI = 2,
+	DMA_MLLI = 3,
+	DmaMode_OPTIONTS,
+	DmaMode_END = INT32_MAX,
+} DmaMode_t;
+
+typedef enum FlowMode {
+	FLOW_MODE_NULL = -1,
+	/* data flows */
+	BYPASS = 0,
+	DIN_AES_DOUT = 1,
+	AES_to_HASH = 2,
+	AES_and_HASH = 3,
+	DIN_DES_DOUT = 4,
+	DES_to_HASH = 5,
+	DES_and_HASH = 6,
+	DIN_HASH = 7,
+	DIN_HASH_and_BYPASS = 8,
+	AESMAC_and_BYPASS = 9,
+	AES_to_HASH_and_DOUT = 10,
+	DIN_RC4_DOUT = 11,
+	DES_to_HASH_and_DOUT = 12,
+	AES_to_AES_to_HASH_and_DOUT = 13,
+	AES_to_AES_to_HASH = 14,
+	AES_to_HASH_and_AES = 15,
+	DIN_MULTI2_DOUT = 16,
+	DIN_AES_AESMAC = 17,
+	HASH_to_DOUT = 18,
+	/* setup flows */
+	S_DIN_to_AES = 32,
+	S_DIN_to_AES2 = 33,
+	S_DIN_to_DES = 34,
+	S_DIN_to_RC4 = 35,
+	S_DIN_to_MULTI2 = 36,
+	S_DIN_to_HASH = 37,
+	S_AES_to_DOUT = 38,
+	S_AES2_to_DOUT = 39,
+	S_RC4_to_DOUT = 41,
+	S_DES_to_DOUT = 42,
+	S_HASH_to_DOUT = 43,
+	SET_FLOW_ID = 44,
+	FlowMode_OPTIONTS,
+	FlowMode_END = INT32_MAX,
+} FlowMode_t;
+
+typedef enum TunnelOp {
+	TUNNEL_OP_INVALID = -1,
+	TUNNEL_OFF = 0,
+	TUNNEL_ON = 1,
+	TunnelOp_OPTIONS,
+	TunnelOp_END = INT32_MAX,
+} TunnelOp_t;
+
+typedef enum SetupOp {
+	SETUP_LOAD_NOP = 0,
+	SETUP_LOAD_STATE0 = 1,
+	SETUP_LOAD_STATE1 = 2,
+	SETUP_LOAD_STATE2 = 3,
+	SETUP_LOAD_KEY0 = 4,
+	SETUP_LOAD_XEX_KEY = 5,
+	SETUP_WRITE_STATE0 = 8,
+	SETUP_WRITE_STATE1 = 9,
+	SETUP_WRITE_STATE2 = 10,
+	SETUP_WRITE_STATE3 = 11,
+	setupOp_OPTIONTS,
+	setupOp_END = INT32_MAX,
+} SetupOp_t;
+
+enum AesMacSelector {
+	AES_SK = 1,
+	AES_CMAC_INIT = 2,
+	AES_CMAC_SIZE0 = 3,
+	AesMacEnd = INT32_MAX,
+};
+
+#define HW_KEY_MASK_CIPHER_DO	0x3
+#define HW_KEY_SHIFT_CIPHER_CFG2	2
+
+/* HwCryptoKey[1:0] is mapped to cipher_do[1:0] */
+/* HwCryptoKey[2:3] is mapped to cipher_config2[1:0] */
+typedef enum HwCryptoKey {
+	USER_KEY = 0,		/* 0x0000 */
+	ROOT_KEY = 1,		/* 0x0001 */
+	PROVISIONING_KEY = 2,	/* 0x0010 */ /* ==KCP */
+	SESSION_KEY = 3,	/* 0x0011 */
+	RESERVED_KEY = 4,	/* NA */
+	PLATFORM_KEY = 5,	/* 0x0101 */
+	CUSTOMER_KEY = 6,	/* 0x0110 */
+	KFDE0_KEY = 7,		/* 0x0111 */
+	KFDE1_KEY = 9,		/* 0x1001 */
+	KFDE2_KEY = 10,		/* 0x1010 */
+	KFDE3_KEY = 11,		/* 0x1011 */
+	END_OF_KEYS = INT32_MAX,
+} HwCryptoKey_t;
+
+typedef enum HwAesKeySize {
+	AES_128_KEY = 0,
+	AES_192_KEY = 1,
+	AES_256_KEY = 2,
+	END_OF_AES_KEYS = INT32_MAX,
+} HwAesKeySize_t;
+
+typedef enum HwDesKeySize {
+	DES_ONE_KEY = 0,
+	DES_TWO_KEYS = 1,
+	DES_THREE_KEYS = 2,
+	END_OF_DES_KEYS = INT32_MAX,
+} HwDesKeySize_t;
+
+/*****************************/
+/* Descriptor packing macros */
+/*****************************/
+
+#define GET_HW_Q_DESC_WORD_IDX(descWordIdx) (CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD ## descWordIdx))
+
+#define HW_DESC_INIT(pDesc) do { \
+	(pDesc)->word[0] = 0; \
+	(pDesc)->word[1] = 0; \
+	(pDesc)->word[2] = 0; \
+	(pDesc)->word[3] = 0; \
+	(pDesc)->word[4] = 0; \
+	(pDesc)->word[5] = 0; \
+} while (0)
+
+/* HW descriptor debug functions */
+int createDetailedDump(HwDesc_s *pDesc);
+void descriptor_log(HwDesc_s *desc);
+
+#if defined(HW_DESCRIPTOR_LOG) || defined(HW_DESC_DUMP_HOST_BUF)
+#define LOG_HW_DESC(pDesc) descriptor_log(pDesc)
+#else
+#define LOG_HW_DESC(pDesc)
+#endif
+
+#if (CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_TRACE) || defined(OEMFW_LOG)
+
+#ifdef UART_PRINTF
+#define CREATE_DETAILED_DUMP(pDesc) createDetailedDump(pDesc)
+#else
+#define CREATE_DETAILED_DUMP(pDesc)
+#endif
+
+#define HW_DESC_DUMP(pDesc) do { \
+	CC_PAL_LOG_TRACE("\n---------------------------------------------------\n"); \
+	CREATE_DETAILED_DUMP(pDesc); \
+	CC_PAL_LOG_TRACE("0x%08X, ", (unsigned int)(pDesc)->word[0]); \
+	CC_PAL_LOG_TRACE("0x%08X, ", (unsigned int)(pDesc)->word[1]); \
+	CC_PAL_LOG_TRACE("0x%08X, ", (unsigned int)(pDesc)->word[2]); \
+	CC_PAL_LOG_TRACE("0x%08X, ", (unsigned int)(pDesc)->word[3]); \
+	CC_PAL_LOG_TRACE("0x%08X, ", (unsigned int)(pDesc)->word[4]); \
+	CC_PAL_LOG_TRACE("0x%08X\n", (unsigned int)(pDesc)->word[5]); \
+	CC_PAL_LOG_TRACE("---------------------------------------------------\n\n"); \
+} while (0)
+
+#else
+#define HW_DESC_DUMP(pDesc) do {} while (0)
+#endif
+
+/*!
+ * This macro indicates the end of the current HW descriptors flow and releases the HW engines.
+ *
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_QUEUE_LAST_IND(pDesc) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, QUEUE_LAST_IND, (pDesc)->word[3], 1); \
+	} while (0)
+
+/*!
+ * This macro marks the end of a HW descriptors flow by requesting a completion ack, and releases the HW engines
+ *
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_ACK_LAST(pDesc) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, QUEUE_LAST_IND, (pDesc)->word[3], 1); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, ACK_NEEDED, (pDesc)->word[4], 1); \
+	} while (0)
+
+#define MSB64(_addr) (sizeof(_addr) == 4 ? 0 : ((_addr) >> 32)&UINT16_MAX)
+
+/*!
+ * This macro sets the DIN field of a HW descriptor
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param dmaMode The DMA mode: NO_DMA, SRAM, DLLI, MLLI, CONSTANT
+ * \param dinAdr DIN address
+ * \param dinSize Data size in bytes
+ * \param axiNs AXI secure bit
+ */
+#define HW_DESC_SET_DIN_TYPE(pDesc, dmaMode, dinAdr, dinSize, axiNs) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (dinAdr)&UINT32_MAX); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DIN_ADDR_HIGH, (pDesc)->word[5], MSB64(dinAdr)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_DMA_MODE, (pDesc)->word[1], (dmaMode)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, NS_BIT, (pDesc)->word[1], (axiNs)); \
+	} while (0)
+
+/*!
+ * This macro sets the DIN field of a HW descriptor to NO DMA mode. Used for NOP descriptors, register patches and
+ * other special modes
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param dinAdr DIN address
+ * \param dinSize Data size in bytes
+ */
+#define HW_DESC_SET_DIN_NO_DMA(pDesc, dinAdr, dinSize) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (uint32_t)(dinAdr)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize)); \
+	} while (0)
+
+/*!
+ * This macro sets the DIN field of a HW descriptor to SRAM mode.
+ * Note: No need to check SRAM alignment since host requests do not use SRAM and
+ * the adaptor will enforce the alignment check.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param dinAdr DIN address
+ * \param dinSize Data size in bytes
+ */
+#define HW_DESC_SET_DIN_SRAM(pDesc, dinAdr, dinSize) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (uint32_t)(dinAdr)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_DMA_MODE, (pDesc)->word[1], DMA_SRAM); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize)); \
+	} while (0)
+
+/*! This macro sets the DIN field of a HW descriptor to CONST mode
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param val DIN const value
+ * \param dinSize Data size in bytes
+ */
+#define HW_DESC_SET_DIN_CONST(pDesc, val, dinSize) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0, VALUE, (pDesc)->word[0], (uint32_t)(val)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_CONST_VALUE, (pDesc)->word[1], 1); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_DMA_MODE, (pDesc)->word[1], DMA_SRAM); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, DIN_SIZE, (pDesc)->word[1], (dinSize)); \
+	} while (0)
+
+/*!
+ * This macro sets the DIN not last input data indicator
+ *
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_DIN_NOT_LAST_INDICATION(pDesc) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD1, NOT_LAST, (pDesc)->word[1], 1); \
+	} while (0)
+
+/*!
+ * This macro sets the DOUT field of a HW descriptor
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param dmaMode The DMA mode: NO_DMA, SRAM, DLLI, MLLI, CONSTANT
+ * \param doutAdr DOUT address
+ * \param doutSize Data size in bytes
+ * \param axiNs AXI secure bit
+ */
+#define HW_DESC_SET_DOUT_TYPE(pDesc, dmaMode, doutAdr, doutSize, axiNs) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (doutAdr)&UINT32_MAX); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DOUT_ADDR_HIGH, (pDesc)->word[5], MSB64(doutAdr)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], (dmaMode)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, NS_BIT, (pDesc)->word[3], (axiNs)); \
+	} while (0)
+
+/*!
+ * This macro sets the DOUT field of a HW descriptor to DLLI type
+ * The LAST INDICATION is provided by the user
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param doutAdr DOUT address
+ * \param doutSize Data size in bytes
+ * \param lastInd The last indication bit
+ * \param axiNs AXI secure bit
+ */
+#define HW_DESC_SET_DOUT_DLLI(pDesc, doutAdr, doutSize, axiNs, lastInd) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (doutAdr)&UINT32_MAX); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DOUT_ADDR_HIGH, (pDesc)->word[5], MSB64(doutAdr)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], DMA_DLLI); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_LAST_IND, (pDesc)->word[3], lastInd); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, NS_BIT, (pDesc)->word[3], (axiNs)); \
+	} while (0)
+
+/*!
+ * This macro sets the DOUT field of a HW descriptor to MLLI type
+ * The LAST INDICATION is provided by the user
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param doutAdr DOUT address
+ * \param doutSize Data size in bytes
+ * \param lastInd The last indication bit
+ * \param axiNs AXI secure bit
+ */
+#define HW_DESC_SET_DOUT_MLLI(pDesc, doutAdr, doutSize, axiNs, lastInd) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (doutAdr)&UINT32_MAX); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD5, DOUT_ADDR_HIGH, (pDesc)->word[5], MSB64(doutAdr)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], DMA_MLLI); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_LAST_IND, (pDesc)->word[3], lastInd); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, NS_BIT, (pDesc)->word[3], (axiNs)); \
+	} while (0)
+
+/*!
+ * This macro sets the DOUT field of a HW descriptor to NO DMA mode. Used for NOP descriptors, register patches and
+ * other special modes
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param doutAdr DOUT address
+ * \param doutSize Data size in bytes
+ * \param registerWriteEnable Enables a write operation to a register
+ */
+#define HW_DESC_SET_DOUT_NO_DMA(pDesc, doutAdr, doutSize, registerWriteEnable) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (uint32_t)(doutAdr)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_LAST_IND, (pDesc)->word[3], (registerWriteEnable)); \
+	} while (0)
+
+/*!
+ * This macro sets the word for the XOR operation.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param xorVal xor data value
+ */
+#define HW_DESC_SET_XOR_VAL(pDesc, xorVal) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (uint32_t)(xorVal)); \
+	} while (0)
+
+/*!
+ * This macro sets the XOR indicator bit in the descriptor
+ *
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_XOR_ACTIVE(pDesc) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, HASH_XOR_BIT, (pDesc)->word[3], 1); \
+	} while (0)
+
+/*!
+ * This macro selects the AES engine instead of the HASH engine when setting up combined mode with AES XCBC MAC
+ *
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_AES_NOT_HASH_MODE(pDesc) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, AES_SEL_N_HASH, (pDesc)->word[4], 1); \
+	} while (0)
+
+/*!
+ * This macro sets the DOUT field of a HW descriptor to SRAM mode
+ * Note: No need to check SRAM alignment since host requests do not use SRAM and
+ * the adaptor will enforce the alignment check.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param doutAdr DOUT address
+ * \param doutSize Data size in bytes
+ */
+#define HW_DESC_SET_DOUT_SRAM(pDesc, doutAdr, doutSize) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (uint32_t)(doutAdr)); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_DMA_MODE, (pDesc)->word[3], DMA_SRAM); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD3, DOUT_SIZE, (pDesc)->word[3], (doutSize)); \
+	} while (0)
+
+/*!
+ * This macro sets the data unit size for XEX mode in data_out_addr[15:0]
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param dataUnitSize data unit size for XEX mode
+ */
+#define HW_DESC_SET_XEX_DATA_UNIT_SIZE(pDesc, dataUnitSize) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (uint32_t)(dataUnitSize)); \
+	} while (0)
+
+/*!
+ * This macro sets the number of rounds for Multi2 in data_out_addr[15:0]
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param numRounds number of rounds for Multi2
+ */
+#define HW_DESC_SET_MULTI2_NUM_ROUNDS(pDesc, numRounds) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD2, VALUE, (pDesc)->word[2], (uint32_t)(numRounds)); \
+	} while (0)
+
+/*!
+ * This macro sets the flow mode.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param flowMode Any one of the modes defined in [CC7x-DESC]
+ */
+#define HW_DESC_SET_FLOW_MODE(pDesc, flowMode) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, DATA_FLOW_MODE, (pDesc)->word[4], (flowMode)); \
+	} while (0)
+
+/*!
+ * This macro sets the cipher mode.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param cipherMode Any one of the modes defined in [CC7x-DESC]
+ */
+#define HW_DESC_SET_CIPHER_MODE(pDesc, cipherMode) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_MODE, (pDesc)->word[4], (cipherMode)); \
+	} while (0)
+
+/*!
+ * This macro sets the cipher configuration fields.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param cipherConfig Any one of the modes defined in [CC7x-DESC]
+ */
+#define HW_DESC_SET_CIPHER_CONFIG0(pDesc, cipherConfig) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_CONF0, (pDesc)->word[4], (cipherConfig)); \
+	} while (0)
+
+/*!
+ * This macro sets the cipher configuration fields.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param cipherConfig Any one of the modes defined in [CC7x-DESC]
+ */
+#define HW_DESC_SET_CIPHER_CONFIG1(pDesc, cipherConfig) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_CONF1, (pDesc)->word[4], (cipherConfig)); \
+	} while (0)
+
+/*!
+ * This macro sets HW key configuration fields.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param hwKey The hw key number as in enum HwCryptoKey
+ */
+#define HW_DESC_SET_HW_CRYPTO_KEY(pDesc, hwKey) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_DO, (pDesc)->word[4], (hwKey)&HW_KEY_MASK_CIPHER_DO); \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_CONF2, (pDesc)->word[4], (hwKey>>HW_KEY_SHIFT_CIPHER_CFG2)); \
+	} while (0)
+
+/*!
+ * This macro changes the byte order of all setup-finalize descriptor sets.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param swapConfig Any one of the modes defined in [CC7x-DESC]
+ */
+#define HW_DESC_SET_BYTES_SWAP(pDesc, swapConfig) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, BYTES_SWAP, (pDesc)->word[4], (swapConfig)); \
+	} while (0)
+
+/*!
+ * This macro sets the CMAC_SIZE0 mode.
+ *
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_CMAC_SIZE0_MODE(pDesc) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CMAC_SIZE0, (pDesc)->word[4], 0x1); \
+	} while (0)
+
+/*!
+ * This macro sets the key size for the AES engine.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param keySize key size in bytes (NOT size code)
+ */
+#define HW_DESC_SET_KEY_SIZE_AES(pDesc, keySize) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, KEY_SIZE, (pDesc)->word[4], ((keySize) >> 3) - 2); \
+	} while (0)
+
+/*!
+ * This macro sets the key size for the DES engine.
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param keySize key size in bytes (NOT size code)
+ */
+#define HW_DESC_SET_KEY_SIZE_DES(pDesc, keySize) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, KEY_SIZE, (pDesc)->word[4], ((keySize) >> 3) - 1); \
+	} while (0)
+
+/*!
+ * This macro sets the descriptor's setup mode
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param setupMode Any one of the setup modes defined in [CC7x-DESC]
+ */
+#define HW_DESC_SET_SETUP_MODE(pDesc, setupMode) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, SETUP_OPERATION, (pDesc)->word[4], (setupMode)); \
+	} while (0)
+
+/*!
+ * This macro sets the descriptor's cipher do
+ *
+ * \param pDesc pointer HW descriptor struct
+ * \param cipherDo Any one of the cipher do values defined in [CC7x-DESC]
+ */
+#define HW_DESC_SET_CIPHER_DO(pDesc, cipherDo) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_QUEUE_WORD4, CIPHER_DO, (pDesc)->word[4], (cipherDo)&HW_KEY_MASK_CIPHER_DO); \
+	} while (0)
+
+/*!
+ * This macro sets the DIN field of a HW descriptor to a start/stop monitor descriptor.
+ * Used for performance measurements and debug purposes.
+ *
+ * \param pDesc pointer HW descriptor struct
+ */
+#define HW_DESC_SET_DIN_MONITOR_CNTR(pDesc) \
+	do { \
+		CC_REG_FLD_SET(CRY_KERNEL, DSCRPTR_MEASURE_CNTR, VALUE, (pDesc)->word[1], _HW_DESC_MONITOR_KICK); \
+	} while (0)
+
+#endif /*__CC_HW_QUEUE_DEFS_H__*/
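
To make the packing macros concrete, this is how a caller would build a single
data-flow descriptor that pushes a buffer through the AES engine, DLLI in and
DLLI out. Sketch only, not part of the patch: din_addr, dout_addr and nbytes
are hypothetical, and a real operation is preceded by setup descriptors
(e.g. SETUP_LOAD_KEY0 via HW_DESC_SET_SETUP_MODE) that load key and IV:

/* Sketch only -- not part of the patch. */
HwDesc_s desc;
dma_addr_t din_addr, dout_addr;		/* hypothetical DMA addresses */
uint32_t nbytes;			/* hypothetical transfer size */

HW_DESC_INIT(&desc);
HW_DESC_SET_DIN_TYPE(&desc, DMA_DLLI, din_addr, nbytes, AXI_NOT_SECURE);
HW_DESC_SET_DOUT_DLLI(&desc, dout_addr, nbytes, AXI_NOT_SECURE, 1);
HW_DESC_SET_FLOW_MODE(&desc, DIN_AES_DOUT);
HW_DESC_SET_QUEUE_LAST_IND(&desc);	/* last descriptor of this flow */
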
diff --git a/drivers/staging/ccree/cc_lli_defs.h b/drivers/staging/ccree/cc_lli_defs.h
new file mode 100644
index 0000000..697f1ed
--- /dev/null
+++ b/drivers/staging/ccree/cc_lli_defs.h
@@ -0,0 +1,57 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _CC_LLI_DEFS_H_
+#define _CC_LLI_DEFS_H_
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#else
+#include <stdint.h>
+#endif
+#include "cc_bitops.h"
+
+/* Max DLLI size */
+#define DLLI_SIZE_BIT_SIZE 0x18 // DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SIZE
+
+#define CC_MAX_MLLI_ENTRY_SIZE 0x10000
+
+#define MSB64(_addr) (sizeof(_addr) == 4 ? 0 : ((_addr) >> 32)&UINT16_MAX)
+
+#define LLI_SET_ADDR(lli_p, addr) \
+	BITFIELD_SET(((uint32_t *)(lli_p))[LLI_WORD0_OFFSET], LLI_LADDR_BIT_OFFSET, LLI_LADDR_BIT_SIZE, (addr & UINT32_MAX)); \
+	BITFIELD_SET(((uint32_t *)(lli_p))[LLI_WORD1_OFFSET], LLI_HADDR_BIT_OFFSET, LLI_HADDR_BIT_SIZE, MSB64(addr));
+
+#define LLI_SET_SIZE(lli_p, size) \
+	BITFIELD_SET(((uint32_t *)(lli_p))[LLI_WORD1_OFFSET], LLI_SIZE_BIT_OFFSET, LLI_SIZE_BIT_SIZE, size)
+
+/* Size of entry */
+#define LLI_ENTRY_WORD_SIZE 2
+#define LLI_ENTRY_BYTE_SIZE (LLI_ENTRY_WORD_SIZE * sizeof(uint32_t))
+
+/* Word0[31:0] = ADDR[31:0] */
+#define LLI_WORD0_OFFSET 0
+#define LLI_LADDR_BIT_OFFSET 0
+#define LLI_LADDR_BIT_SIZE 32
+/* Word1[31:16] = ADDR[47:32]; Word1[15:0] = SIZE */
+#define LLI_WORD1_OFFSET 1
+#define LLI_SIZE_BIT_OFFSET 0
+#define LLI_SIZE_BIT_SIZE 16
+#define LLI_HADDR_BIT_OFFSET 16
+#define LLI_HADDR_BIT_SIZE 16
+
+#endif /*_CC_LLI_DEFS_H_*/
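
For illustration, populating one 2-word MLLI table entry with these macros,
packing a 48-bit DMA address and a 16-bit byte count. Sketch only, not part
of the patch; dma_addr and len are hypothetical:

/* Sketch only -- not part of the patch. */
uint32_t lli_entry[LLI_ENTRY_WORD_SIZE] = { 0 };

LLI_SET_ADDR(lli_entry, dma_addr);	/* word0 = ADDR[31:0], word1[31:16] = ADDR[47:32] */
LLI_SET_SIZE(lli_entry, len);		/* word1[15:0] = byte count */
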
diff --git a/drivers/staging/ccree/cc_pal_log.h b/drivers/staging/ccree/cc_pal_log.h
new file mode 100644
index 0000000..e5f5a87
--- /dev/null
+++ b/drivers/staging/ccree/cc_pal_log.h
@@ -0,0 +1,188 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _CC_PAL_LOG_H_
+#define _CC_PAL_LOG_H_
+
+#include "cc_pal_types.h"
+#include "cc_pal_log_plat.h"
+
+/*!
+@file
+@brief This file contains the PAL layer log definitions; by default the log is disabled.
+@defgroup cc_pal_log CryptoCell PAL logging APIs and definitions
+@{
+@ingroup cc_pal
+*/
+
+/* PAL log levels (to be used in CC_PAL_logLevel) */
+/*! PAL log level - disabled. */
+#define CC_PAL_LOG_LEVEL_NULL (-1) /*!< \internal Disable logging */
+/*! PAL log level - error. */
+#define CC_PAL_LOG_LEVEL_ERR 0
+/*! PAL log level - warning. */
+#define CC_PAL_LOG_LEVEL_WARN 1
+/*! PAL log level - info. */
+#define CC_PAL_LOG_LEVEL_INFO 2
+/*! PAL log level - debug. */
+#define CC_PAL_LOG_LEVEL_DEBUG 3
+/*! PAL log level - trace. */
+#define CC_PAL_LOG_LEVEL_TRACE 4
+/*! PAL log level - data. */
+#define CC_PAL_LOG_LEVEL_DATA 5
+
+#ifndef CC_PAL_LOG_CUR_COMPONENT
+/* Setting default component mask in case caller did not define */
+/* (a mask that is always on for every log mask value but full masking) */
+/*! Default log debugged component.*/
+#define CC_PAL_LOG_CUR_COMPONENT 0xFFFFFFFF
+#endif
+#ifndef CC_PAL_LOG_CUR_COMPONENT_NAME
+/*! Default log debugged component.*/
+#define CC_PAL_LOG_CUR_COMPONENT_NAME "CC"
+#endif
+
+/* Select compile time log level (default if not explicitly specified by caller) */
+#ifndef CC_PAL_MAX_LOG_LEVEL /* Can be overridden by external definition of this constant */
+#ifdef DEBUG
+/*! Default debug log level (when debug is set to on).*/
+#define CC_PAL_MAX_LOG_LEVEL CC_PAL_LOG_LEVEL_ERR /*CC_PAL_LOG_LEVEL_DEBUG*/
+#else /* Disable logging */
+/*! Default debug log level (when debug is set to on).*/
+#define CC_PAL_MAX_LOG_LEVEL CC_PAL_LOG_LEVEL_NULL
+#endif
+#endif /*CC_PAL_MAX_LOG_LEVEL*/
+/*! Evaluate CC_PAL_MAX_LOG_LEVEL in case it was provided by the caller */
+#define __CC_PAL_LOG_LEVEL_EVAL(level) level
+/*! Maximal log level definition.*/
+#define _CC_PAL_MAX_LOG_LEVEL __CC_PAL_LOG_LEVEL_EVAL(CC_PAL_MAX_LOG_LEVEL)
+
+#ifdef ARM_DSM
+/*! Log init function. */
+#define CC_PalLogInit() do {} while (0)
+/*! Log set level function - sets the level of logging in case of debug. */
+#define CC_PalLogLevelSet(setLevel) do {} while (0)
+/*! Log set mask function - sets the component masking in case of debug. */
+#define CC_PalLogMaskSet(setMask) do {} while (0)
+#else
+#if _CC_PAL_MAX_LOG_LEVEL > CC_PAL_LOG_LEVEL_NULL
+/*! Log init function. */
+void CC_PalLogInit(void);
+/*! Log set level function - sets the level of logging in case of debug. */
+void CC_PalLogLevelSet(int setLevel);
+/*! Log set mask function - sets the component masking in case of debug. */
+void CC_PalLogMaskSet(uint32_t setMask);
+/*! Global variable for log level */
+extern int CC_PAL_logLevel;
+/*! Global variable for log mask */
+extern uint32_t CC_PAL_logMask;
+#else /* No log */
+/*! Log init function. */
+static inline void CC_PalLogInit(void) {}
+/*! Log set level function - sets the level of logging in case of debug. */
+static inline void CC_PalLogLevelSet(int setLevel) {CC_UNUSED_PARAM(setLevel);}
+/*! Log set mask function - sets the component masking in case of debug. */
+static inline void CC_PalLogMaskSet(uint32_t setMask) {CC_UNUSED_PARAM(setMask);}
+#endif
+#endif
+
+/*! Filter logging based on logMask and dispatch to platform specific logging mechanism. */
+#define _CC_PAL_LOG(level, format, ...) \
+	if (CC_PAL_logMask & CC_PAL_LOG_CUR_COMPONENT) \
+		__CC_PAL_LOG_PLAT(CC_PAL_LOG_LEVEL_ ## level, "%s:%s: " format, CC_PAL_LOG_CUR_COMPONENT_NAME, __func__, ##__VA_ARGS__)
+
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_ERR)
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_ERR(format, ...) \
+	_CC_PAL_LOG(ERR, format, ##__VA_ARGS__)
+#else
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_ERR(...) do {} while (0)
+#endif
+
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_WARN)
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_WARN(format, ...) \
+	if (CC_PAL_logLevel >= CC_PAL_LOG_LEVEL_WARN) \
+		_CC_PAL_LOG(WARN, format, ##__VA_ARGS__)
+#else
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_WARN(...) do {} while (0)
+#endif
+
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_INFO)
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_INFO(format, ...) \
+	if (CC_PAL_logLevel >= CC_PAL_LOG_LEVEL_INFO) \
+		_CC_PAL_LOG(INFO, format, ##__VA_ARGS__)
+#else
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_INFO(...) do {} while (0)
+#endif
+
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_DEBUG)
+/*! Log messages according to log level.*/
+#define CC_PAL_LOG_DEBUG(format, ...) \
+	if (CC_PAL_logLevel >= CC_PAL_LOG_LEVEL_DEBUG) \
+		_CC_PAL_LOG(DEBUG, format, ##__VA_ARGS__)
+
+/*! Log message buffer.*/
+#define CC_PAL_LOG_DUMP_BUF(msg, buf, size) \
+	do { \
+		int i; \
+		uint8_t *pData = (uint8_t *)buf; \
+		\
+		PRINTF("%s (%d):\n", msg, size); \
+		for (i = 0; i < size; i++) { \
+			PRINTF("0x%02X ", pData[i]); \
+			if ((i & 0xF) == 0xF) { \
+				PRINTF("\n"); \
+			} \
+		} \
+		PRINTF("\n"); \
+	} while (0)
+#else
+/*! Log debug messages.*/
+#define CC_PAL_LOG_DEBUG(...) do {} while (0)
+/*! Log debug buffer.*/
+#define CC_PAL_LOG_DUMP_BUF(msg, buf, size) do {} while (0)
+#endif
+
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_TRACE)
+/*! Log debug trace.*/
+#define CC_PAL_LOG_TRACE(format, ...) \
+	if (CC_PAL_logLevel >= CC_PAL_LOG_LEVEL_TRACE) \
+		_CC_PAL_LOG(TRACE, format, ##__VA_ARGS__)
+#else
+/*! Log debug trace.*/
+#define CC_PAL_LOG_TRACE(...) do {} while (0)
+#endif
+
+#if (_CC_PAL_MAX_LOG_LEVEL >= CC_PAL_LOG_LEVEL_TRACE)
+/*! Log debug data.*/
+#define CC_PAL_LOG_DATA(format, ...) \
+	if (CC_PAL_logLevel >= CC_PAL_LOG_LEVEL_TRACE) \
+		_CC_PAL_LOG(DATA, format, ##__VA_ARGS__)
+#else
+/*! Log debug data.*/
+#define CC_PAL_LOG_DATA(...) do {} while (0)
+#endif
+/**
+@}
+ */
+
+#endif /*_CC_PAL_LOG_H_*/
diff --git a/drivers/staging/ccree/cc_pal_log_plat.h b/drivers/staging/ccree/cc_pal_log_plat.h
new file mode 100644
index 0000000..a05a200
--- /dev/null
+++ b/drivers/staging/ccree/cc_pal_log_plat.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+/* Dummy pal_log_plat for test driver in kernel */
+
+#ifndef _SSI_PAL_LOG_PLAT_H_
+#define _SSI_PAL_LOG_PLAT_H_
+
+#if defined(DEBUG)
+
+#define __CC_PAL_LOG_PLAT(level, format, ...) printk(level "cc7x_test::" format, ##__VA_ARGS__)
+
+#else /* Disable all prints */
+
+#define __CC_PAL_LOG_PLAT(...) do {} while (0)
+
+#endif
+
+#endif /*_SSI_PAL_LOG_PLAT_H_*/
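
Usage is then simply the following (illustration only, not part of the patch;
rc and nbytes are made-up variables):

/* Illustration only -- not part of the patch. */
CC_PAL_LOG_ERR("request failed: err=%d\n", rc);	/* compiled in whenever logging is enabled */
CC_PAL_LOG_DEBUG("mapped %u bytes\n", nbytes);	/* compiled out unless DEBUG raises the level */
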
diff --git a/drivers/staging/ccree/cc_pal_types.h b/drivers/staging/ccree/cc_pal_types.h
new file mode 100644
index 0000000..9b59bbb
--- /dev/null
+++ b/drivers/staging/ccree/cc_pal_types.h
@@ -0,0 +1,97 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef CC_PAL_TYPES_H
+#define CC_PAL_TYPES_H
+
+/*!
+@file
+@brief This file contains platform-dependent definitions and types.
+@defgroup cc_pal_types CryptoCell PAL platform-dependent types
+@{
+@ingroup cc_pal
+*/
+
+#include "cc_pal_types_plat.h"
+
+/*! Boolean definition.*/
+typedef enum {
+	/*! Boolean false definition.*/
+	CC_FALSE = 0,
+	/*! Boolean true definition.*/
+	CC_TRUE = 1
+} CCBool;
+
+/*! Success definition. */
+#define CC_SUCCESS 0UL
+/*! Failure definition. */
+#define CC_FAIL 1UL
+
+/*! Definition of 1KB in bytes. */
+#define CC_1K_SIZE_IN_BYTES 1024
+/*! Definition of number of bits in a byte. */
+#define CC_BITS_IN_BYTE 8
+/*! Definition of number of bits in a 32bits word. */
+#define CC_BITS_IN_32BIT_WORD 32
+/*! Definition of number of bytes in a 32bits word. */
+#define CC_32BIT_WORD_SIZE (sizeof(uint32_t))
+
+/*! Success (OK) definition. */
+#define CC_OK 0
+
+/*! Macro that handles unused parameters in the code (to avoid compilation warnings). */
+#define CC_UNUSED_PARAM(prm) ((void)prm)
+
+/*! Maximal uint32 value.*/
+#define CC_MAX_UINT32_VAL (0xFFFFFFFF)
+
+/* Minimum and Maximum macros */
+#ifdef min
+/*! Definition for minimum. */
+#define CC_MIN(a, b) min(a, b)
+#else
+/*! Definition for minimum. */
+#define CC_MIN(a, b) (((a) < (b)) ? (a) : (b))
+#endif
+
+#ifdef max
+/*! Definition for maximum. */
+#define CC_MAX(a, b) max(a, b)
+#else
+/*! Definition for maximum. */
+#define CC_MAX(a, b) (((a) > (b)) ? (a) : (b))
+#endif
+
+/*! Macro that calculates number of full bytes from bits (i.e. 7 bits are 1 byte). */
+#define CALC_FULL_BYTES(numBits) ((numBits)/CC_BITS_IN_BYTE + (((numBits) & (CC_BITS_IN_BYTE-1)) > 0))
+/*! Macro that calculates number of full 32bits words from bits (i.e. 31 bits are 1 word). */
+#define CALC_FULL_32BIT_WORDS(numBits) ((numBits)/CC_BITS_IN_32BIT_WORD + (((numBits) & (CC_BITS_IN_32BIT_WORD-1)) > 0))
+/*! Macro that calculates number of full 32bits words from bytes (i.e. 3 bytes are 1 word). */
+#define CALC_32BIT_WORDS_FROM_BYTES(sizeBytes) ((sizeBytes)/CC_32BIT_WORD_SIZE + (((sizeBytes) & (CC_32BIT_WORD_SIZE-1)) > 0))
+/*! Macro that rounds up bits to 32bits words. */
+#define ROUNDUP_BITS_TO_32BIT_WORD(numBits) (CALC_FULL_32BIT_WORDS(numBits) * CC_BITS_IN_32BIT_WORD)
+/*! Macro that rounds up bits to bytes. */
+#define ROUNDUP_BITS_TO_BYTES(numBits) (CALC_FULL_BYTES(numBits) * CC_BITS_IN_BYTE)
+/*! Macro that rounds up bytes to 32bits words. */
+#define ROUNDUP_BYTES_TO_32BIT_WORD(sizeBytes) (CALC_32BIT_WORDS_FROM_BYTES(sizeBytes) * CC_32BIT_WORD_SIZE)
+
+/**
+@}
+ */
+#endif
diff --git a/drivers/staging/ccree/cc_pal_types_plat.h b/drivers/staging/ccree/cc_pal_types_plat.h
new file mode 100644
index 0000000..6e42112
--- /dev/null
+++ b/drivers/staging/ccree/cc_pal_types_plat.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef SSI_PAL_TYPES_PLAT_H
+#define SSI_PAL_TYPES_PLAT_H
+/* Linux kernel types */
+
+#include <linux/types.h>
+
+#ifndef NULL /* Missing in Linux kernel */
+#define NULL (0x0L)
+#endif
+
+#endif /*SSI_PAL_TYPES_PLAT_H*/
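
A quick worked example of the rounding helpers above, applied to a 13-bit
quantity (illustration only, not part of the patch):

/* Illustration only -- not part of the patch. */
CALC_FULL_BYTES(13)		/* == 2 bytes */
CALC_FULL_32BIT_WORDS(13)	/* == 1 word  */
ROUNDUP_BITS_TO_BYTES(13)	/* == 16 bits */
ROUNDUP_BITS_TO_32BIT_WORD(13)	/* == 32 bits */
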
diff --git a/drivers/staging/ccree/cc_regs.h b/drivers/staging/ccree/cc_regs.h
new file mode 100644
index 0000000..963f814
--- /dev/null
+++ b/drivers/staging/ccree/cc_regs.h
@@ -0,0 +1,106 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+/*!
+ * @file
+ * @brief This file contains macro definitions for accessing ARM TrustZone CryptoCell register space.
+ */
+
+#ifndef _CC_REGS_H_
+#define _CC_REGS_H_
+
+#include "cc_bitops.h"
+
+/* Register Offset macro */
+#define CC_REG_OFFSET(unit_name, reg_name) \
+	(DX_BASE_ ## unit_name + DX_ ## reg_name ## _REG_OFFSET)
+
+#define CC_REG_BIT_SHIFT(reg_name, field_name) \
+	(DX_ ## reg_name ## _ ## field_name ## _BIT_SHIFT)
+
+/* Register Offset macros (from registers base address in host) */
+#include "dx_reg_base_host.h"
+
+/* Read-Modify-Write a field of a register */
+#define MODIFY_REGISTER_FLD(unitName, regName, fldName, fldVal) \
+do { \
+	uint32_t regVal; \
+	regVal = READ_REGISTER(CC_REG_ADDR(unitName, regName)); \
+	CC_REG_FLD_SET(unitName, regName, fldName, regVal, fldVal); \
+	WRITE_REGISTER(CC_REG_ADDR(unitName, regName), regVal); \
+} while (0)
+
+/* Registers address macros for ENV registers (development FPGA only) */
+#ifdef DX_BASE_ENV_REGS
+
+/* This offset should be added to mapping address of DX_BASE_ENV_REGS */
+#define CC_ENV_REG_OFFSET(reg_name) (DX_ENV_ ## reg_name ## _REG_OFFSET)
+
+#endif /*DX_BASE_ENV_REGS*/
+
+/*! Bit fields get */
+#define CC_REG_FLD_GET(unit_name, reg_name, fld_name, reg_val) \
+	(DX_ ## reg_name ## _ ## fld_name ## _BIT_SIZE == 0x20 ? \
+	reg_val /*!< \internal Optimization for 32b fields */ : \
+	BITFIELD_GET(reg_val, DX_ ## reg_name ## _ ## fld_name ## _BIT_SHIFT, \
+		     DX_ ## reg_name ## _ ## fld_name ## _BIT_SIZE))
+
+/*! Bit fields access */
+#define CC_REG_FLD_GET2(unit_name, reg_name, fld_name, reg_val) \
+	(CC_ ## reg_name ## _ ## fld_name ## _BIT_SIZE == 0x20 ? \
+	reg_val /*!< \internal Optimization for 32b fields */ : \
+	BITFIELD_GET(reg_val, CC_ ## reg_name ## _ ## fld_name ## _BIT_SHIFT, \
+		     CC_ ## reg_name ## _ ## fld_name ## _BIT_SIZE))
+
+/* yael TBD !!! -
+ * all HW includes should start with CC_ and not DX_ !! */
+
+/*! Bit fields set */
+#define CC_REG_FLD_SET( \
+	unit_name, reg_name, fld_name, reg_shadow_var, new_fld_val) \
+do { \
+	if (DX_ ## reg_name ## _ ## fld_name ## _BIT_SIZE == 0x20) \
+		reg_shadow_var = new_fld_val; /*!< \internal Optimization for 32b fields */\
+	else \
+		BITFIELD_SET(reg_shadow_var, \
+			DX_ ## reg_name ## _ ## fld_name ## _BIT_SHIFT, \
+			DX_ ## reg_name ## _ ## fld_name ## _BIT_SIZE, \
+			new_fld_val); \
+} while (0)
+
+/*! Bit fields set */
+#define CC_REG_FLD_SET2( \
+	unit_name, reg_name, fld_name, reg_shadow_var, new_fld_val) \
+do { \
+	if (CC_ ## reg_name ## _ ## fld_name ## _BIT_SIZE == 0x20) \
+		reg_shadow_var = new_fld_val; /*!< \internal Optimization for 32b fields */\
+	else \
+		BITFIELD_SET(reg_shadow_var, \
+			CC_ ## reg_name ## _ ## fld_name ## _BIT_SHIFT, \
+			CC_ ## reg_name ## _ ## fld_name ## _BIT_SIZE, \
+			new_fld_val); \
+} while (0)
+
+/* Usage example:
+ * uint32_t reg_shadow = READ_REGISTER(CC_REG_ADDR(CRY_KERNEL,AES_CONTROL));
+ * CC_REG_FLD_SET(CRY_KERNEL,AES_CONTROL,NK_KEY0,reg_shadow, 3);
+ * CC_REG_FLD_SET(CRY_KERNEL,AES_CONTROL,NK_KEY1,reg_shadow, 1);
+ * WRITE_REGISTER(CC_REG_ADDR(CRY_KERNEL,AES_CONTROL), reg_shadow);
+ */
+
+#endif /*_CC_REGS_H_*/
diff --git a/drivers/staging/ccree/dx_crys_kernel.h b/drivers/staging/ccree/dx_crys_kernel.h
new file mode 100644
index 0000000..703469c
--- /dev/null
+++ b/drivers/staging/ccree/dx_crys_kernel.h
@@ -0,0 +1,180 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __DX_CRYS_KERNEL_H__
+#define __DX_CRYS_KERNEL_H__
+
+// --------------------------------------
+// BLOCK: DSCRPTR
+// --------------------------------------
+#define DX_DSCRPTR_COMPLETION_COUNTER_REG_OFFSET	0xE00UL
+#define DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SIZE	0x6UL
+#define DX_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SHIFT	0x6UL
+#define DX_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_SW_RESET_REG_OFFSET	0xE40UL
+#define DX_DSCRPTR_SW_RESET_VALUE_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_SW_RESET_VALUE_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_REG_OFFSET	0xE60UL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SIZE	0xAUL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SHIFT	0xAUL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SIZE	0xCUL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SHIFT	0x16UL
+#define DX_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SIZE	0x3UL
+#define DX_DSCRPTR_SINGLE_ADDR_EN_REG_OFFSET	0xE64UL
+#define DX_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_MEASURE_CNTR_REG_OFFSET	0xE68UL
+#define DX_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SIZE	0x20UL
+#define DX_DSCRPTR_QUEUE_WORD0_REG_OFFSET	0xE80UL
+#define DX_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SIZE	0x20UL
+#define DX_DSCRPTR_QUEUE_WORD1_REG_OFFSET	0xE84UL
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SIZE	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SHIFT	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SIZE	0x18UL
+#define DX_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SHIFT	0x1AUL
+#define DX_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SHIFT	0x1BUL
+#define DX_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SHIFT	0x1CUL
+#define DX_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SHIFT	0x1DUL
+#define DX_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SHIFT	0x1EUL
+#define DX_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SIZE	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD2_REG_OFFSET	0xE88UL
+#define DX_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SIZE	0x20UL
+#define DX_DSCRPTR_QUEUE_WORD3_REG_OFFSET	0xE8CUL
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SIZE	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SHIFT	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SIZE	0x18UL
+#define DX_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SHIFT	0x1AUL
+#define DX_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SHIFT	0x1BUL
+#define DX_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SHIFT	0x1DUL
+#define DX_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SHIFT	0x1EUL
+#define DX_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SHIFT	0x1FUL
+#define DX_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_REG_OFFSET	0xE90UL
+#define DX_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SIZE	0x6UL
+#define DX_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SHIFT	0x6UL
+#define DX_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SHIFT	0x7UL
+#define DX_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SHIFT	0x8UL
+#define DX_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SIZE	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SHIFT	0xAUL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SIZE	0x4UL
+#define DX_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SHIFT	0xEUL
+#define DX_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SHIFT	0xFUL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SIZE	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SHIFT	0x11UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SIZE	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SHIFT	0x13UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SHIFT	0x14UL
+#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SIZE	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SHIFT	0x16UL
+#define DX_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SIZE	0x2UL
+#define DX_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SHIFT	0x18UL
+#define DX_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SIZE	0x4UL
+#define DX_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SHIFT	0x1CUL
+#define DX_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SHIFT	0x1DUL
+#define DX_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SHIFT	0x1EUL
+#define DX_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SHIFT	0x1FUL
+#define DX_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SIZE	0x1UL
+#define DX_DSCRPTR_QUEUE_WORD5_REG_OFFSET	0xE94UL
+#define DX_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SIZE	0x10UL
+#define DX_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SHIFT	0x10UL
+#define DX_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SIZE	0x10UL
+#define DX_DSCRPTR_QUEUE_WATERMARK_REG_OFFSET	0xE98UL
+#define DX_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SIZE	0xAUL
+#define DX_DSCRPTR_QUEUE_CONTENT_REG_OFFSET	0xE9CUL
+#define DX_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SHIFT	0x0UL
+#define DX_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SIZE	0xAUL
+// --------------------------------------
+// BLOCK: AXI_P
+// --------------------------------------
+#define DX_AXIM_MON_INFLIGHT_REG_OFFSET	0xB00UL
+#define DX_AXIM_MON_INFLIGHT_VALUE_BIT_SHIFT	0x0UL
+#define DX_AXIM_MON_INFLIGHT_VALUE_BIT_SIZE	0x8UL
+#define DX_AXIM_MON_INFLIGHTLAST_REG_OFFSET	0xB40UL
+#define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SHIFT	0x0UL
+#define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SIZE	0x8UL
+#define DX_AXIM_MON_COMP_REG_OFFSET	0xB80UL
+#define DX_AXIM_MON_COMP_VALUE_BIT_SHIFT	0x0UL
+#define DX_AXIM_MON_COMP_VALUE_BIT_SIZE	0x10UL
+#define DX_AXIM_MON_ERR_REG_OFFSET	0xBC4UL
+#define DX_AXIM_MON_ERR_BRESP_BIT_SHIFT	0x0UL
+#define DX_AXIM_MON_ERR_BRESP_BIT_SIZE	0x2UL
+#define DX_AXIM_MON_ERR_BID_BIT_SHIFT	0x2UL
+#define DX_AXIM_MON_ERR_BID_BIT_SIZE	0x4UL
+#define DX_AXIM_MON_ERR_RRESP_BIT_SHIFT	0x10UL
+#define DX_AXIM_MON_ERR_RRESP_BIT_SIZE	0x2UL
+#define DX_AXIM_MON_ERR_RID_BIT_SHIFT	0x12UL
+#define DX_AXIM_MON_ERR_RID_BIT_SIZE	0x4UL
+#define DX_AXIM_CFG_REG_OFFSET	0xBE8UL
+#define DX_AXIM_CFG_BRESPMASK_BIT_SHIFT	0x4UL
+#define DX_AXIM_CFG_BRESPMASK_BIT_SIZE	0x1UL
+#define DX_AXIM_CFG_RRESPMASK_BIT_SHIFT	0x5UL
+#define DX_AXIM_CFG_RRESPMASK_BIT_SIZE	0x1UL
+#define DX_AXIM_CFG_INFLTMASK_BIT_SHIFT	0x6UL
+#define DX_AXIM_CFG_INFLTMASK_BIT_SIZE	0x1UL
+#define DX_AXIM_CFG_COMPMASK_BIT_SHIFT	0x7UL
+#define DX_AXIM_CFG_COMPMASK_BIT_SIZE	0x1UL
+#define DX_AXIM_ACE_CONST_REG_OFFSET	0xBECUL
+#define DX_AXIM_ACE_CONST_ARDOMAIN_BIT_SHIFT	0x0UL
+#define DX_AXIM_ACE_CONST_ARDOMAIN_BIT_SIZE	0x2UL
+#define DX_AXIM_ACE_CONST_AWDOMAIN_BIT_SHIFT	0x2UL
+#define DX_AXIM_ACE_CONST_AWDOMAIN_BIT_SIZE	0x2UL
+#define DX_AXIM_ACE_CONST_ARBAR_BIT_SHIFT	0x4UL
+#define DX_AXIM_ACE_CONST_ARBAR_BIT_SIZE	0x2UL
+#define DX_AXIM_ACE_CONST_AWBAR_BIT_SHIFT	0x6UL
+#define DX_AXIM_ACE_CONST_AWBAR_BIT_SIZE	0x2UL
+#define DX_AXIM_ACE_CONST_ARSNOOP_BIT_SHIFT	0x8UL
+#define DX_AXIM_ACE_CONST_ARSNOOP_BIT_SIZE	0x4UL
+#define DX_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SHIFT	0xCUL
+#define DX_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SIZE	0x3UL
+#define DX_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SHIFT	0xFUL
+#define DX_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SIZE	0x3UL
+#define DX_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SHIFT	0x12UL
+#define DX_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SIZE	0x7UL
+#define DX_AXIM_ACE_CONST_AWLEN_VAL_BIT_SHIFT	0x19UL
+#define DX_AXIM_ACE_CONST_AWLEN_VAL_BIT_SIZE	0x4UL
+#define DX_AXIM_CACHE_PARAMS_REG_OFFSET	0xBF0UL
+#define DX_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SHIFT	0x0UL
+#define DX_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SIZE	0x4UL
+#define DX_AXIM_CACHE_PARAMS_AWCACHE_BIT_SHIFT	0x4UL
+#define DX_AXIM_CACHE_PARAMS_AWCACHE_BIT_SIZE	0x4UL
+#define DX_AXIM_CACHE_PARAMS_ARCACHE_BIT_SHIFT	0x8UL
+#define DX_AXIM_CACHE_PARAMS_ARCACHE_BIT_SIZE	0x4UL
+#endif // __DX_CRYS_KERNEL_H__
diff --git a/drivers/staging/ccree/dx_env.h
b/drivers/staging/ccree/dx_env.h new file mode 100644 index 0000000..0804060 --- /dev/null +++ b/drivers/staging/ccree/dx_env.h @@ -0,0 +1,224 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +#ifndef __DX_ENV_H__ +#define __DX_ENV_H__ + +// -------------------------------------- +// BLOCK: FPGA_ENV_REGS +// -------------------------------------- +#define DX_ENV_PKA_DEBUG_MODE_REG_OFFSET 0x024UL +#define DX_ENV_PKA_DEBUG_MODE_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_PKA_DEBUG_MODE_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_SCAN_MODE_REG_OFFSET 0x030UL +#define DX_ENV_SCAN_MODE_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_SCAN_MODE_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_CC_ALLOW_SCAN_REG_OFFSET 0x034UL +#define DX_ENV_CC_ALLOW_SCAN_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_CC_ALLOW_SCAN_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_CC_HOST_INT_REG_OFFSET 0x0A0UL +#define DX_ENV_CC_HOST_INT_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_CC_HOST_INT_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_CC_PUB_HOST_INT_REG_OFFSET 0x0A4UL +#define DX_ENV_CC_PUB_HOST_INT_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_CC_PUB_HOST_INT_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_CC_RST_N_REG_OFFSET 0x0A8UL +#define DX_ENV_CC_RST_N_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_CC_RST_N_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_RST_OVERRIDE_REG_OFFSET 0x0ACUL +#define DX_ENV_RST_OVERRIDE_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_RST_OVERRIDE_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_CC_POR_N_ADDR_REG_OFFSET 0x0E0UL +#define DX_ENV_CC_POR_N_ADDR_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_CC_POR_N_ADDR_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_CC_COLD_RST_REG_OFFSET 0x0FCUL +#define DX_ENV_CC_COLD_RST_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_CC_COLD_RST_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_DUMMY_ADDR_REG_OFFSET 0x108UL +#define DX_ENV_DUMMY_ADDR_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_DUMMY_ADDR_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_COUNTER_CLR_REG_OFFSET 0x118UL +#define DX_ENV_COUNTER_CLR_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_COUNTER_CLR_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_COUNTER_RD_REG_OFFSET 0x11CUL +#define DX_ENV_COUNTER_RD_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_COUNTER_RD_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_RNG_DEBUG_ENABLE_REG_OFFSET 0x430UL +#define DX_ENV_RNG_DEBUG_ENABLE_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_RNG_DEBUG_ENABLE_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_CC_LCS_REG_OFFSET 0x43CUL +#define DX_ENV_CC_LCS_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_CC_LCS_VALUE_BIT_SIZE 0x8UL +#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_REG_OFFSET 0x440UL +#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_CM_BIT_SHIFT 0x0UL +#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_CM_BIT_SIZE 0x1UL +#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_DM_BIT_SHIFT 0x1UL +#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_DM_BIT_SIZE 0x1UL +#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_SECURE_BIT_SHIFT 0x2UL +#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_SECURE_BIT_SIZE 0x1UL +#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_RMA_BIT_SHIFT 0x3UL +#define DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_RMA_BIT_SIZE 0x1UL 
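+/* Usage example (illustrative only): the lifecycle-state bits above can
+ * be extracted with the BITFIELD_GET() helper from cc_bitops.h. Note
+ * that the ENV registers exist only on the development FPGA; env_base
+ * here is a placeholder for whatever virtual address the platform maps
+ * over DX_BASE_ENV_REGS, and READ_REGISTER for the matching accessor:
+ *
+ *   uint32_t val = READ_REGISTER(env_base +
+ *           CC_ENV_REG_OFFSET(CC_IS_CM_DM_SECURE_RMA));
+ *   uint32_t is_rma = BITFIELD_GET(val,
+ *           DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_RMA_BIT_SHIFT,
+ *           DX_ENV_CC_IS_CM_DM_SECURE_RMA_IS_RMA_BIT_SIZE);
+ */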
+#define DX_ENV_DCU_EN_REG_OFFSET 0x444UL +#define DX_ENV_DCU_EN_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_DCU_EN_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_CC_LCS_IS_VALID_REG_OFFSET 0x448UL +#define DX_ENV_CC_LCS_IS_VALID_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_CC_LCS_IS_VALID_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_POWER_DOWN_REG_OFFSET 0x478UL +#define DX_ENV_POWER_DOWN_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_POWER_DOWN_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_DCU_H_EN_REG_OFFSET 0x484UL +#define DX_ENV_DCU_H_EN_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_DCU_H_EN_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_VERSION_REG_OFFSET 0x488UL +#define DX_ENV_VERSION_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_VERSION_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_ROSC_WRITE_REG_OFFSET 0x48CUL +#define DX_ENV_ROSC_WRITE_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_ROSC_WRITE_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_ROSC_ADDR_REG_OFFSET 0x490UL +#define DX_ENV_ROSC_ADDR_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_ROSC_ADDR_VALUE_BIT_SIZE 0x8UL +#define DX_ENV_RESET_SESSION_KEY_REG_OFFSET 0x494UL +#define DX_ENV_RESET_SESSION_KEY_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_RESET_SESSION_KEY_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_SESSION_KEY_0_REG_OFFSET 0x4A0UL +#define DX_ENV_SESSION_KEY_0_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_SESSION_KEY_0_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_SESSION_KEY_1_REG_OFFSET 0x4A4UL +#define DX_ENV_SESSION_KEY_1_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_SESSION_KEY_1_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_SESSION_KEY_2_REG_OFFSET 0x4A8UL +#define DX_ENV_SESSION_KEY_2_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_SESSION_KEY_2_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_SESSION_KEY_3_REG_OFFSET 0x4ACUL +#define DX_ENV_SESSION_KEY_3_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_SESSION_KEY_3_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_SESSION_KEY_VALID_REG_OFFSET 0x4B0UL +#define DX_ENV_SESSION_KEY_VALID_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_SESSION_KEY_VALID_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_SPIDEN_REG_OFFSET 0x4D0UL +#define DX_ENV_SPIDEN_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_SPIDEN_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_AXIM_USER_PARAMS_REG_OFFSET 0x600UL +#define DX_ENV_AXIM_USER_PARAMS_ARUSER_BIT_SHIFT 0x0UL +#define DX_ENV_AXIM_USER_PARAMS_ARUSER_BIT_SIZE 0x5UL +#define DX_ENV_AXIM_USER_PARAMS_AWUSER_BIT_SHIFT 0x5UL +#define DX_ENV_AXIM_USER_PARAMS_AWUSER_BIT_SIZE 0x5UL +#define DX_ENV_SECURITY_MODE_OVERRIDE_REG_OFFSET 0x604UL +#define DX_ENV_SECURITY_MODE_OVERRIDE_AWPROT_NS_BIT_BIT_SHIFT 0x0UL +#define DX_ENV_SECURITY_MODE_OVERRIDE_AWPROT_NS_BIT_BIT_SIZE 0x1UL +#define DX_ENV_SECURITY_MODE_OVERRIDE_AWPROT_NS_OVERRIDE_BIT_SHIFT 0x1UL +#define DX_ENV_SECURITY_MODE_OVERRIDE_AWPROT_NS_OVERRIDE_BIT_SIZE 0x1UL +#define DX_ENV_SECURITY_MODE_OVERRIDE_ARPROT_NS_BIT_BIT_SHIFT 0x2UL +#define DX_ENV_SECURITY_MODE_OVERRIDE_ARPROT_NS_BIT_BIT_SIZE 0x1UL +#define DX_ENV_SECURITY_MODE_OVERRIDE_ARPROT_NS_OVERRIDE_BIT_SHIFT 0x3UL +#define DX_ENV_SECURITY_MODE_OVERRIDE_ARPROT_NS_OVERRIDE_BIT_SIZE 0x1UL +#define DX_ENV_AO_CC_KPLT_0_REG_OFFSET 0x620UL +#define DX_ENV_AO_CC_KPLT_0_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_AO_CC_KPLT_0_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_AO_CC_KPLT_1_REG_OFFSET 0x624UL +#define DX_ENV_AO_CC_KPLT_1_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_AO_CC_KPLT_1_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_AO_CC_KPLT_2_REG_OFFSET 0x628UL +#define DX_ENV_AO_CC_KPLT_2_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_AO_CC_KPLT_2_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_AO_CC_KPLT_3_REG_OFFSET 0x62CUL +#define DX_ENV_AO_CC_KPLT_3_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_AO_CC_KPLT_3_VALUE_BIT_SIZE 
0x20UL +#define DX_ENV_AO_CC_KCST_0_REG_OFFSET 0x630UL +#define DX_ENV_AO_CC_KCST_0_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_AO_CC_KCST_0_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_AO_CC_KCST_1_REG_OFFSET 0x634UL +#define DX_ENV_AO_CC_KCST_1_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_AO_CC_KCST_1_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_AO_CC_KCST_2_REG_OFFSET 0x638UL +#define DX_ENV_AO_CC_KCST_2_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_AO_CC_KCST_2_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_AO_CC_KCST_3_REG_OFFSET 0x63CUL +#define DX_ENV_AO_CC_KCST_3_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_AO_CC_KCST_3_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_APB_FIPS_ADDR_REG_OFFSET 0x650UL +#define DX_ENV_APB_FIPS_ADDR_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APB_FIPS_ADDR_VALUE_BIT_SIZE 0xCUL +#define DX_ENV_APB_FIPS_VAL_REG_OFFSET 0x654UL +#define DX_ENV_APB_FIPS_VAL_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APB_FIPS_VAL_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_APB_FIPS_MASK_REG_OFFSET 0x658UL +#define DX_ENV_APB_FIPS_MASK_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APB_FIPS_MASK_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_APB_FIPS_CNT_REG_OFFSET 0x65CUL +#define DX_ENV_APB_FIPS_CNT_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APB_FIPS_CNT_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_APB_FIPS_NEW_ADDR_REG_OFFSET 0x660UL +#define DX_ENV_APB_FIPS_NEW_ADDR_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APB_FIPS_NEW_ADDR_VALUE_BIT_SIZE 0xCUL +#define DX_ENV_APB_FIPS_NEW_VAL_REG_OFFSET 0x664UL +#define DX_ENV_APB_FIPS_NEW_VAL_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APB_FIPS_NEW_VAL_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_APBP_FIPS_ADDR_REG_OFFSET 0x670UL +#define DX_ENV_APBP_FIPS_ADDR_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APBP_FIPS_ADDR_VALUE_BIT_SIZE 0xCUL +#define DX_ENV_APBP_FIPS_VAL_REG_OFFSET 0x674UL +#define DX_ENV_APBP_FIPS_VAL_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APBP_FIPS_VAL_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_APBP_FIPS_MASK_REG_OFFSET 0x678UL +#define DX_ENV_APBP_FIPS_MASK_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APBP_FIPS_MASK_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_APBP_FIPS_CNT_REG_OFFSET 0x67CUL +#define DX_ENV_APBP_FIPS_CNT_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APBP_FIPS_CNT_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_APBP_FIPS_NEW_ADDR_REG_OFFSET 0x680UL +#define DX_ENV_APBP_FIPS_NEW_ADDR_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APBP_FIPS_NEW_ADDR_VALUE_BIT_SIZE 0xCUL +#define DX_ENV_APBP_FIPS_NEW_VAL_REG_OFFSET 0x684UL +#define DX_ENV_APBP_FIPS_NEW_VAL_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_APBP_FIPS_NEW_VAL_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_CC_POWERDOWN_EN_REG_OFFSET 0x690UL +#define DX_ENV_CC_POWERDOWN_EN_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_CC_POWERDOWN_EN_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_CC_POWERDOWN_RST_EN_REG_OFFSET 0x694UL +#define DX_ENV_CC_POWERDOWN_RST_EN_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_CC_POWERDOWN_RST_EN_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_POWERDOWN_RST_CNTR_REG_OFFSET 0x698UL +#define DX_ENV_POWERDOWN_RST_CNTR_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_POWERDOWN_RST_CNTR_VALUE_BIT_SIZE 0x20UL +#define DX_ENV_POWERDOWN_EN_DEBUG_REG_OFFSET 0x69CUL +#define DX_ENV_POWERDOWN_EN_DEBUG_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_POWERDOWN_EN_DEBUG_VALUE_BIT_SIZE 0x1UL +// -------------------------------------- +// BLOCK: ENV_CC_MEMORIES +// -------------------------------------- +#define DX_ENV_FUSE_READY_REG_OFFSET 0x000UL +#define DX_ENV_FUSE_READY_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_FUSE_READY_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_PERF_RAM_MASTER_REG_OFFSET 0x0ECUL +#define DX_ENV_PERF_RAM_MASTER_VALUE_BIT_SHIFT 0x0UL +#define 
DX_ENV_PERF_RAM_MASTER_VALUE_BIT_SIZE 0x1UL +#define DX_ENV_PERF_RAM_ADDR_HIGH4_REG_OFFSET 0x0F0UL +#define DX_ENV_PERF_RAM_ADDR_HIGH4_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_PERF_RAM_ADDR_HIGH4_VALUE_BIT_SIZE 0x2UL +#define DX_ENV_FUSES_RAM_REG_OFFSET 0x3ECUL +#define DX_ENV_FUSES_RAM_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_FUSES_RAM_VALUE_BIT_SIZE 0x20UL +// -------------------------------------- +// BLOCK: ENV_PERF_RAM_BASE +// -------------------------------------- +#define DX_ENV_PERF_RAM_BASE_REG_OFFSET 0x000UL +#define DX_ENV_PERF_RAM_BASE_VALUE_BIT_SHIFT 0x0UL +#define DX_ENV_PERF_RAM_BASE_VALUE_BIT_SIZE 0x20UL + +#endif /*__DX_ENV_H__*/ diff --git a/drivers/staging/ccree/dx_host.h b/drivers/staging/ccree/dx_host.h new file mode 100644 index 0000000..4e42e74 --- /dev/null +++ b/drivers/staging/ccree/dx_host.h @@ -0,0 +1,155 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +#ifndef __DX_HOST_H__ +#define __DX_HOST_H__ + +// -------------------------------------- +// BLOCK: HOST_P +// -------------------------------------- +#define DX_HOST_IRR_REG_OFFSET 0xA00UL +#define DX_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SHIFT 0x2UL +#define DX_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SIZE 0x1UL +#define DX_HOST_IRR_AXI_ERR_INT_BIT_SHIFT 0x8UL +#define DX_HOST_IRR_AXI_ERR_INT_BIT_SIZE 0x1UL +#define DX_HOST_IRR_GPR0_BIT_SHIFT 0xBUL +#define DX_HOST_IRR_GPR0_BIT_SIZE 0x1UL +#define DX_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SHIFT 0x13UL +#define DX_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SIZE 0x1UL +#define DX_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT 0x17UL +#define DX_HOST_IRR_AXIM_COMP_INT_BIT_SIZE 0x1UL +#define DX_HOST_IMR_REG_OFFSET 0xA04UL +#define DX_HOST_IMR_NOT_USED_MASK_BIT_SHIFT 0x1UL +#define DX_HOST_IMR_NOT_USED_MASK_BIT_SIZE 0x1UL +#define DX_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SHIFT 0x2UL +#define DX_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SIZE 0x1UL +#define DX_HOST_IMR_AXI_ERR_MASK_BIT_SHIFT 0x8UL +#define DX_HOST_IMR_AXI_ERR_MASK_BIT_SIZE 0x1UL +#define DX_HOST_IMR_GPR0_BIT_SHIFT 0xBUL +#define DX_HOST_IMR_GPR0_BIT_SIZE 0x1UL +#define DX_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SHIFT 0x13UL +#define DX_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SIZE 0x1UL +#define DX_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SHIFT 0x17UL +#define DX_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SIZE 0x1UL +#define DX_HOST_ICR_REG_OFFSET 0xA08UL +#define DX_HOST_ICR_DSCRPTR_COMPLETION_BIT_SHIFT 0x2UL +#define DX_HOST_ICR_DSCRPTR_COMPLETION_BIT_SIZE 0x1UL +#define DX_HOST_ICR_AXI_ERR_CLEAR_BIT_SHIFT 0x8UL +#define DX_HOST_ICR_AXI_ERR_CLEAR_BIT_SIZE 0x1UL +#define DX_HOST_ICR_GPR_INT_CLEAR_BIT_SHIFT 0xBUL +#define DX_HOST_ICR_GPR_INT_CLEAR_BIT_SIZE 0x1UL +#define DX_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SHIFT 0x13UL +#define DX_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SIZE 0x1UL +#define DX_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SHIFT 0x17UL +#define DX_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SIZE 0x1UL +#define DX_HOST_SIGNATURE_REG_OFFSET 0xA24UL +#define 
DX_HOST_SIGNATURE_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_SIGNATURE_VALUE_BIT_SIZE 0x20UL +#define DX_HOST_BOOT_REG_OFFSET 0xA28UL +#define DX_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SHIFT 0x0UL +#define DX_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SHIFT 0x1UL +#define DX_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SHIFT 0x2UL +#define DX_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SHIFT 0x3UL +#define DX_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SHIFT 0x5UL +#define DX_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SHIFT 0x6UL +#define DX_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SIZE 0x3UL +#define DX_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SHIFT 0x9UL +#define DX_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SHIFT 0xAUL +#define DX_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SHIFT 0xBUL +#define DX_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SHIFT 0xCUL +#define DX_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SHIFT 0xDUL +#define DX_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SHIFT 0xEUL +#define DX_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SHIFT 0xFUL +#define DX_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SHIFT 0x10UL +#define DX_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SHIFT 0x11UL +#define DX_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SHIFT 0x12UL +#define DX_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SHIFT 0x13UL +#define DX_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SHIFT 0x14UL +#define DX_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SHIFT 0x15UL +#define DX_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SHIFT 0x16UL +#define DX_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SHIFT 0x17UL +#define DX_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SHIFT 0x18UL +#define DX_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SHIFT 0x19UL +#define DX_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SHIFT 0x1AUL +#define DX_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SHIFT 0x1BUL +#define DX_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SHIFT 0x1CUL +#define DX_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SHIFT 0x1DUL +#define DX_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SHIFT 0x1EUL +#define DX_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SIZE 0x1UL +#define DX_HOST_VERSION_REG_OFFSET 0xA40UL +#define DX_HOST_VERSION_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_VERSION_VALUE_BIT_SIZE 0x20UL +#define DX_HOST_KFDE0_VALID_REG_OFFSET 0xA60UL +#define 
DX_HOST_KFDE0_VALID_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_KFDE0_VALID_VALUE_BIT_SIZE 0x1UL +#define DX_HOST_KFDE1_VALID_REG_OFFSET 0xA64UL +#define DX_HOST_KFDE1_VALID_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_KFDE1_VALID_VALUE_BIT_SIZE 0x1UL +#define DX_HOST_KFDE2_VALID_REG_OFFSET 0xA68UL +#define DX_HOST_KFDE2_VALID_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_KFDE2_VALID_VALUE_BIT_SIZE 0x1UL +#define DX_HOST_KFDE3_VALID_REG_OFFSET 0xA6CUL +#define DX_HOST_KFDE3_VALID_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_KFDE3_VALID_VALUE_BIT_SIZE 0x1UL +#define DX_HOST_GPR0_REG_OFFSET 0xA70UL +#define DX_HOST_GPR0_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_GPR0_VALUE_BIT_SIZE 0x20UL +#define DX_GPR_HOST_REG_OFFSET 0xA74UL +#define DX_GPR_HOST_VALUE_BIT_SHIFT 0x0UL +#define DX_GPR_HOST_VALUE_BIT_SIZE 0x20UL +#define DX_HOST_POWER_DOWN_EN_REG_OFFSET 0xA78UL +#define DX_HOST_POWER_DOWN_EN_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_POWER_DOWN_EN_VALUE_BIT_SIZE 0x1UL +// -------------------------------------- +// BLOCK: HOST_SRAM +// -------------------------------------- +#define DX_SRAM_DATA_REG_OFFSET 0xF00UL +#define DX_SRAM_DATA_VALUE_BIT_SHIFT 0x0UL +#define DX_SRAM_DATA_VALUE_BIT_SIZE 0x20UL +#define DX_SRAM_ADDR_REG_OFFSET 0xF04UL +#define DX_SRAM_ADDR_VALUE_BIT_SHIFT 0x0UL +#define DX_SRAM_ADDR_VALUE_BIT_SIZE 0xFUL +#define DX_SRAM_DATA_READY_REG_OFFSET 0xF08UL +#define DX_SRAM_DATA_READY_VALUE_BIT_SHIFT 0x0UL +#define DX_SRAM_DATA_READY_VALUE_BIT_SIZE 0x1UL + +#endif //__DX_HOST_H__ diff --git a/drivers/staging/ccree/dx_reg_base_host.h b/drivers/staging/ccree/dx_reg_base_host.h new file mode 100644 index 0000000..58dafe0 --- /dev/null +++ b/drivers/staging/ccree/dx_reg_base_host.h @@ -0,0 +1,34 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +#ifndef __DX_REG_BASE_HOST_H__ +#define __DX_REG_BASE_HOST_H__ + +/* Identify platform: Xilinx Zynq7000 ZC706 */ +#define DX_PLAT_ZYNQ7000 1 +#define DX_PLAT_ZYNQ7000_ZC706 1 + +#define DX_BASE_CC 0x80000000 + +#define DX_BASE_ENV_REGS 0x40008000 +#define DX_BASE_ENV_CC_MEMORIES 0x40008000 +#define DX_BASE_ENV_PERF_RAM 0x40009000 + +#define DX_BASE_HOST_RGF 0x0UL +#define DX_BASE_CRY_KERNEL 0x0UL +#define DX_BASE_ROM 0x40000000 + +#endif /*__DX_REG_BASE_HOST_H__*/ diff --git a/drivers/staging/ccree/dx_reg_common.h b/drivers/staging/ccree/dx_reg_common.h new file mode 100644 index 0000000..4ffed38 --- /dev/null +++ b/drivers/staging/ccree/dx_reg_common.h @@ -0,0 +1,26 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +#ifndef __DX_REG_COMMON_H__ +#define __DX_REG_COMMON_H__ + +#define DX_DEV_SIGNATURE 0xDCC71200UL + +#define CC_HW_VERSION 0xef840015UL + +#define DX_DEV_SHA_MAX 512 + +#endif /*__DX_REG_COMMON_H__*/ diff --git a/drivers/staging/ccree/hw_queue_defs_plat.h b/drivers/staging/ccree/hw_queue_defs_plat.h new file mode 100644 index 0000000..aee02cc --- /dev/null +++ b/drivers/staging/ccree/hw_queue_defs_plat.h @@ -0,0 +1,43 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +#ifndef __HW_QUEUE_DEFS_PLAT_H__ +#define __HW_QUEUE_DEFS_PLAT_H__ + + +/*****************************/ +/* Descriptor packing macros */ +/*****************************/ + +#define HW_QUEUE_FREE_SLOTS_GET() (CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_CONTENT)) & HW_QUEUE_SLOTS_MAX) + +#define HW_QUEUE_POLL_QUEUE_UNTIL_FREE_SLOTS(seqLen) \ + do { \ + } while (HW_QUEUE_FREE_SLOTS_GET() < (seqLen)) + +#define HW_DESC_PUSH_TO_QUEUE(pDesc) do { \ + LOG_HW_DESC(pDesc); \ + HW_DESC_DUMP(pDesc); \ + CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(0), (pDesc)->word[0]); \ + CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(1), (pDesc)->word[1]); \ + CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(2), (pDesc)->word[2]); \ + CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(3), (pDesc)->word[3]); \ + CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(4), (pDesc)->word[4]); \ + wmb(); \ + CC_HAL_WRITE_REGISTER(GET_HW_Q_DESC_WORD_IDX(5), (pDesc)->word[5]); \ +} while (0) + +#endif /*__HW_QUEUE_DEFS_PLAT_H__*/ diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c new file mode 100644 index 0000000..3a74980 --- /dev/null +++ b/drivers/staging/ccree/ssi_buffer_mgr.c @@ -0,0 +1,537 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . 
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "ssi_buffer_mgr.h"
+#include "cc_lli_defs.h"
+
+#define LLI_MAX_NUM_OF_DATA_ENTRIES 128
+#define LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES 4
+#define MLLI_TABLE_MIN_ALIGNMENT 4 /* Force the MLLI table to be aligned to a uint32_t */
+#define MAX_NUM_OF_BUFFERS_IN_MLLI 4
+#define MAX_NUM_OF_TOTAL_MLLI_ENTRIES (2 * LLI_MAX_NUM_OF_DATA_ENTRIES + \
+				       LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES)
+
+#ifdef CC_DEBUG
+#define DUMP_SGL(sg) \
+	while (sg) { \
+		SSI_LOG_DEBUG("page=%lu offset=%u length=%u (dma_len=%u) " \
+			      "dma_addr=%08x\n", (sg)->page_link, (sg)->offset, \
+			      (sg)->length, sg_dma_len(sg), (sg)->dma_address); \
+		(sg) = sg_next(sg); \
+	}
+#define DUMP_MLLI_TABLE(mlli_p, nents) \
+	do { \
+		SSI_LOG_DEBUG("mlli=%pK nents=%u\n", (mlli_p), (nents)); \
+		while ((nents)--) { \
+			SSI_LOG_DEBUG("addr=0x%08X size=0x%08X\n", \
+				      (mlli_p)[LLI_WORD0_OFFSET], \
+				      (mlli_p)[LLI_WORD1_OFFSET]); \
+			(mlli_p) += LLI_ENTRY_WORD_SIZE; \
+		} \
+	} while (0)
+#define GET_DMA_BUFFER_TYPE(buff_type) ( \
+	((buff_type) == SSI_DMA_BUF_NULL) ? "BUF_NULL" : \
+	((buff_type) == SSI_DMA_BUF_DLLI) ? "BUF_DLLI" : \
+	((buff_type) == SSI_DMA_BUF_MLLI) ? "BUF_MLLI" : "BUF_INVALID")
+#else
+#define DUMP_SGL(sg)
+#define DUMP_MLLI_TABLE(mlli_p, nents)
+#define GET_DMA_BUFFER_TYPE(buff_type)
+#endif
+
+
+enum dma_buffer_type {
+	DMA_NULL_TYPE = -1,
+	DMA_SGL_TYPE = 1,
+	DMA_BUFF_TYPE = 2,
+};
+
+struct buff_mgr_handle {
+	struct dma_pool *mlli_buffs_pool;
+};
+
+union buffer_array_entry {
+	struct scatterlist *sgl;
+	dma_addr_t buffer_dma;
+};
+
+struct buffer_array {
+	unsigned int num_of_buffers;
+	union buffer_array_entry entry[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	unsigned int offset[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	int nents[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	int total_data_len[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	enum dma_buffer_type type[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	bool is_last[MAX_NUM_OF_BUFFERS_IN_MLLI];
+	uint32_t *mlli_nents[MAX_NUM_OF_BUFFERS_IN_MLLI];
+};
+
+#ifdef CC_DMA_48BIT_SIM
+dma_addr_t ssi_buff_mgr_update_dma_addr(dma_addr_t orig_addr, uint32_t data_len)
+{
+	dma_addr_t tmp_dma_addr;
+#ifdef CC_DMA_48BIT_SIM_FULL
+	/* With this code all addresses will be switched to 48 bits. */
+	/* The if condition protects from double expansion */
+	if ((((orig_addr >> 16) & 0xFFFF) != 0xFFFF) &&
+	    (data_len <= CC_MAX_MLLI_ENTRY_SIZE)) {
+#else
+	if ((!(((orig_addr >> 16) & 0xFF) % 2)) &&
+	    (data_len <= CC_MAX_MLLI_ENTRY_SIZE)) {
+#endif
+		tmp_dma_addr = ((orig_addr << 16) | 0xFFFF0000 |
+				(orig_addr & UINT16_MAX));
+		SSI_LOG_DEBUG("MAP DMA: orig address=0x%llX "
+			      "dma_address=0x%llX\n",
+			      orig_addr, tmp_dma_addr);
+		return tmp_dma_addr;
+	}
+	return orig_addr;
+}
+
+dma_addr_t ssi_buff_mgr_restore_dma_addr(dma_addr_t orig_addr)
+{
+	dma_addr_t tmp_dma_addr;
+#ifdef CC_DMA_48BIT_SIM_FULL
+	/* With this code all addresses will be restored from 48 bits. */
+	/* The if condition protects from double restoring */
+	if ((orig_addr >> 32) & 0xFFFF) {
+#else
+	if (((orig_addr >> 32) & 0xFFFF) &&
+	    !(((orig_addr >> 32) & 0xFF) % 2)) {
+#endif
+		/* return the high 16 bits */
+		tmp_dma_addr = ((orig_addr >> 16));
+		/* clean the 0xFFFF in the lower bits (set in the address expansion) */
+		tmp_dma_addr &= 0xFFFF0000;
+		/* Set the original 16 bits */
+		tmp_dma_addr |= (orig_addr & UINT16_MAX);
+		SSI_LOG_DEBUG("Release DMA: orig address=0x%llX "
+			      "dma_address=0x%llX\n",
+			      orig_addr, tmp_dma_addr);
+		return tmp_dma_addr;
+	}
+	return orig_addr;
+}
+#endif
+/**
+ * ssi_buffer_mgr_get_sgl_nents() - Get scatterlist number of entries.
+ *
+ * @sg_list: SG list
+ * @nbytes: [IN] Total SGL data bytes.
+ * @lbytes: [OUT] Returns the number of bytes in the last entry.
+ * @is_chained: [OUT] Set if the SGL contains chained entries.
+ */
+static unsigned int ssi_buffer_mgr_get_sgl_nents(
+	struct scatterlist *sg_list, unsigned int nbytes, uint32_t *lbytes, bool *is_chained)
+{
+	unsigned int nents = 0;
+	while (nbytes != 0) {
+		if (sg_is_chain(sg_list)) {
+			SSI_LOG_ERR("Unexpected chained entry "
+				    "in sg (entry=0x%X)\n", nents);
+			BUG();
+		}
+		if (sg_list->length != 0) {
+			nents++;
+			/* get the number of bytes in the last entry */
+			*lbytes = nbytes;
+			nbytes -= (sg_list->length > nbytes) ? nbytes : sg_list->length;
+			sg_list = sg_next(sg_list);
+		} else {
+			sg_list = (struct scatterlist *)sg_page(sg_list);
+			if (is_chained != NULL) {
+				*is_chained = true;
+			}
+		}
+	}
+	SSI_LOG_DEBUG("nents %d last bytes %d\n", nents, *lbytes);
+	return nents;
+}
+
+/**
+ * ssi_buffer_mgr_zero_sgl() - Zero scatterlist data.
+ *
+ * @sgl: SG list to zero
+ * @data_len: number of bytes to zero, starting from the list head
+ */
+void ssi_buffer_mgr_zero_sgl(struct scatterlist *sgl, uint32_t data_len)
+{
+	struct scatterlist *current_sg = sgl;
+	int sg_index = 0;
+
+	while (sg_index <= data_len) {
+		if (current_sg == NULL) {
+			/* reached the end of the sgl --> just return back */
+			return;
+		}
+		memset(sg_virt(current_sg), 0, current_sg->length);
+		sg_index += current_sg->length;
+		current_sg = sg_next(current_sg);
+	}
+}
+
+/**
+ * ssi_buffer_mgr_copy_scatterlist_portion() - Copy scatterlist data,
+ * from to_skip to end, to dest and vice versa
+ *
+ * @dest:
+ * @sg:
+ * @to_skip:
+ * @end:
+ * @direct:
+ */
+void ssi_buffer_mgr_copy_scatterlist_portion(
+	u8 *dest, struct scatterlist *sg,
+	uint32_t to_skip, uint32_t end,
+	enum ssi_sg_cpy_direct direct)
+{
+	uint32_t nents, lbytes;
+
+	nents = ssi_buffer_mgr_get_sgl_nents(sg, end, &lbytes, NULL);
+	sg_copy_buffer(sg, nents, (void *)dest, (end - to_skip), 0, (direct == SSI_SG_TO_BUF));
+}
+
+static inline int ssi_buffer_mgr_render_buff_to_mlli(
+	dma_addr_t buff_dma, uint32_t buff_size, uint32_t *curr_nents,
+	uint32_t **mlli_entry_pp)
+{
+	uint32_t *mlli_entry_p = *mlli_entry_pp;
+	uint32_t new_nents;
+
+	/* Verify there is no memory overflow */
+	new_nents = (*curr_nents + buff_size / CC_MAX_MLLI_ENTRY_SIZE + 1);
+	if (new_nents > MAX_NUM_OF_TOTAL_MLLI_ENTRIES) {
+		return -ENOMEM;
+	}
+
+	/* handle a buffer longer than 64 kbytes */
+	while (buff_size > CC_MAX_MLLI_ENTRY_SIZE) {
+		SSI_UPDATE_DMA_ADDR_TO_48BIT(buff_dma, CC_MAX_MLLI_ENTRY_SIZE);
+		LLI_SET_ADDR(mlli_entry_p, buff_dma);
+		LLI_SET_SIZE(mlli_entry_p, CC_MAX_MLLI_ENTRY_SIZE);
+		SSI_LOG_DEBUG("entry[%d]: single_buff=0x%08X size=%08X\n", *curr_nents,
+			      mlli_entry_p[LLI_WORD0_OFFSET],
+			      mlli_entry_p[LLI_WORD1_OFFSET]);
+		SSI_RESTORE_DMA_ADDR_TO_48BIT(buff_dma);
+		buff_dma += CC_MAX_MLLI_ENTRY_SIZE;
+		buff_size -= CC_MAX_MLLI_ENTRY_SIZE;
+		mlli_entry_p = mlli_entry_p + 2;
+		(*curr_nents)++;
+	}
+	/* Last entry */
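+	/*
+	 * At this point the loop above has consumed every full
+	 * CC_MAX_MLLI_ENTRY_SIZE chunk, so the remainder fits in the
+	 * single entry rendered below; the new_nents bound computed on
+	 * entry (buff_size / CC_MAX_MLLI_ENTRY_SIZE + 1) reserved a slot
+	 * for exactly this tail.
+	 */
+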
SSI_UPDATE_DMA_ADDR_TO_48BIT(buff_dma, buff_size); + LLI_SET_ADDR(mlli_entry_p,buff_dma); + LLI_SET_SIZE(mlli_entry_p, buff_size); + SSI_LOG_DEBUG("entry[%d]: single_buff=0x%08X size=%08X\n",*curr_nents, + mlli_entry_p[LLI_WORD0_OFFSET], + mlli_entry_p[LLI_WORD1_OFFSET]); + mlli_entry_p = mlli_entry_p + 2; + *mlli_entry_pp = mlli_entry_p; + (*curr_nents)++; + return 0; +} + + +static inline int ssi_buffer_mgr_render_scatterlist_to_mlli( + struct scatterlist *sgl, uint32_t sgl_data_len, uint32_t sglOffset, uint32_t *curr_nents, + uint32_t **mlli_entry_pp) +{ + struct scatterlist *curr_sgl = sgl; + uint32_t *mlli_entry_p = *mlli_entry_pp; + int32_t rc = 0; + + for ( ; (curr_sgl != NULL) && (sgl_data_len != 0); + curr_sgl = sg_next(curr_sgl)) { + uint32_t entry_data_len = + (sgl_data_len > sg_dma_len(curr_sgl) - sglOffset) ? + sg_dma_len(curr_sgl) - sglOffset : sgl_data_len ; + sgl_data_len -= entry_data_len; + rc = ssi_buffer_mgr_render_buff_to_mlli( + sg_dma_address(curr_sgl) + sglOffset, entry_data_len, curr_nents, + &mlli_entry_p); + if(rc != 0) { + return rc; + } + sglOffset=0; + } + *mlli_entry_pp = mlli_entry_p; + return 0; +} + +static int ssi_buffer_mgr_generate_mlli ( + struct device *dev, + struct buffer_array *sg_data, + struct mlli_params *mlli_params) __maybe_unused; + +static int ssi_buffer_mgr_generate_mlli( + struct device *dev, + struct buffer_array *sg_data, + struct mlli_params *mlli_params) +{ + uint32_t *mlli_p; + uint32_t total_nents = 0,prev_total_nents = 0; + int rc = 0, i; + + SSI_LOG_DEBUG("NUM of SG's = %d\n", sg_data->num_of_buffers); + + /* Allocate memory from the pointed pool */ + mlli_params->mlli_virt_addr = dma_pool_alloc( + mlli_params->curr_pool, GFP_KERNEL, + &(mlli_params->mlli_dma_addr)); + if (unlikely(mlli_params->mlli_virt_addr == NULL)) { + SSI_LOG_ERR("dma_pool_alloc() failed\n"); + rc =-ENOMEM; + goto build_mlli_exit; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(mlli_params->mlli_dma_addr, + (MAX_NUM_OF_TOTAL_MLLI_ENTRIES* + LLI_ENTRY_BYTE_SIZE)); + /* Point to start of MLLI */ + mlli_p = (uint32_t *)mlli_params->mlli_virt_addr; + /* go over all SG's and link it to one MLLI table */ + for (i = 0; i < sg_data->num_of_buffers; i++) { + if (sg_data->type[i] == DMA_SGL_TYPE) + rc = ssi_buffer_mgr_render_scatterlist_to_mlli( + sg_data->entry[i].sgl, + sg_data->total_data_len[i], sg_data->offset[i], &total_nents, + &mlli_p); + else /*DMA_BUFF_TYPE*/ + rc = ssi_buffer_mgr_render_buff_to_mlli( + sg_data->entry[i].buffer_dma, + sg_data->total_data_len[i], &total_nents, + &mlli_p); + if(rc != 0) { + return rc; + } + + /* set last bit in the current table */ + if (sg_data->mlli_nents[i] != NULL) { + /*Calculate the current MLLI table length for the + length field in the descriptor*/ + *(sg_data->mlli_nents[i]) += + (total_nents - prev_total_nents); + prev_total_nents = total_nents; + } + } + + /* Set MLLI size for the bypass operation */ + mlli_params->mlli_len = (total_nents * LLI_ENTRY_BYTE_SIZE); + + SSI_LOG_DEBUG("MLLI params: " + "virt_addr=%pK dma_addr=0x%llX mlli_len=0x%X\n", + mlli_params->mlli_virt_addr, + (unsigned long long)mlli_params->mlli_dma_addr, + mlli_params->mlli_len); + +build_mlli_exit: + return rc; +} + +static inline void ssi_buffer_mgr_add_buffer_entry( + struct buffer_array *sgl_data, + dma_addr_t buffer_dma, unsigned int buffer_len, + bool is_last_entry, uint32_t *mlli_nents) +{ + unsigned int index = sgl_data->num_of_buffers; + + SSI_LOG_DEBUG("index=%u single_buff=0x%llX " + "buffer_len=0x%08X is_last=%d\n", + index, (unsigned long 
long)buffer_dma, buffer_len, is_last_entry); + sgl_data->nents[index] = 1; + sgl_data->entry[index].buffer_dma = buffer_dma; + sgl_data->offset[index] = 0; + sgl_data->total_data_len[index] = buffer_len; + sgl_data->type[index] = DMA_BUFF_TYPE; + sgl_data->is_last[index] = is_last_entry; + sgl_data->mlli_nents[index] = mlli_nents; + if (sgl_data->mlli_nents[index] != NULL) + *sgl_data->mlli_nents[index] = 0; + sgl_data->num_of_buffers++; +} + +static inline void ssi_buffer_mgr_add_scatterlist_entry( + struct buffer_array *sgl_data, + unsigned int nents, + struct scatterlist *sgl, + unsigned int data_len, + unsigned int data_offset, + bool is_last_table, + uint32_t *mlli_nents) +{ + unsigned int index = sgl_data->num_of_buffers; + + SSI_LOG_DEBUG("index=%u nents=%u sgl=%pK data_len=0x%08X is_last=%d\n", + index, nents, sgl, data_len, is_last_table); + sgl_data->nents[index] = nents; + sgl_data->entry[index].sgl = sgl; + sgl_data->offset[index] = data_offset; + sgl_data->total_data_len[index] = data_len; + sgl_data->type[index] = DMA_SGL_TYPE; + sgl_data->is_last[index] = is_last_table; + sgl_data->mlli_nents[index] = mlli_nents; + if (sgl_data->mlli_nents[index] != NULL) + *sgl_data->mlli_nents[index] = 0; + sgl_data->num_of_buffers++; +} + +static int +ssi_buffer_mgr_dma_map_sg(struct device *dev, struct scatterlist *sg, uint32_t nents, + enum dma_data_direction direction) +{ + uint32_t i , j; + struct scatterlist *l_sg = sg; + for (i = 0; i < nents; i++) { + if (l_sg == NULL) { + break; + } + if (unlikely(dma_map_sg(dev, l_sg, 1, direction) != 1)){ + SSI_LOG_ERR("dma_map_page() sg buffer failed\n"); + goto err; + } + l_sg = sg_next(l_sg); + } + return nents; + +err: + /* Restore mapped parts */ + for (j = 0; j < i; j++) { + if (sg == NULL) { + break; + } + dma_unmap_sg(dev,sg,1,direction); + sg = sg_next(sg); + } + return 0; +} + +static int ssi_buffer_mgr_map_scatterlist (struct device *dev, + struct scatterlist *sg, unsigned int nbytes, int direction, + uint32_t *nents, uint32_t max_sg_nents, uint32_t *lbytes, + uint32_t *mapped_nents) __maybe_unused; + +static int ssi_buffer_mgr_map_scatterlist( + struct device *dev, struct scatterlist *sg, + unsigned int nbytes, int direction, + uint32_t *nents, uint32_t max_sg_nents, + uint32_t *lbytes, uint32_t *mapped_nents) +{ + bool is_chained = false; + + if (sg_is_last(sg)) { + /* One entry only case -set to DLLI */ + if (unlikely(dma_map_sg(dev, sg, 1, direction) != 1)) { + SSI_LOG_ERR("dma_map_sg() single buffer failed\n"); + return -ENOMEM; + } + SSI_LOG_DEBUG("Mapped sg: dma_address=0x%llX " + "page_link=0x%08lX addr=%pK offset=%u " + "length=%u\n", + (unsigned long long)sg_dma_address(sg), + sg->page_link, + sg_virt(sg), + sg->offset, sg->length); + *lbytes = nbytes; + *nents = 1; + *mapped_nents = 1; + SSI_UPDATE_DMA_ADDR_TO_48BIT(sg_dma_address(sg), sg_dma_len(sg)); + } else { /*sg_is_last*/ + *nents = ssi_buffer_mgr_get_sgl_nents(sg, nbytes, lbytes, + &is_chained); + if (*nents > max_sg_nents) { + *nents = 0; + SSI_LOG_ERR("Too many fragments. 
current %d max %d\n", + *nents, max_sg_nents); + return -ENOMEM; + } + if (!is_chained) { + /* In case of mmu the number of mapped nents might + be changed from the original sgl nents */ + *mapped_nents = dma_map_sg(dev, sg, *nents, direction); + if (unlikely(*mapped_nents == 0)){ + *nents = 0; + SSI_LOG_ERR("dma_map_sg() sg buffer failed\n"); + return -ENOMEM; + } + } else { + /*In this case the driver maps entry by entry so it + must have the same nents before and after map */ + *mapped_nents = ssi_buffer_mgr_dma_map_sg(dev, + sg, + *nents, + direction); + if (unlikely(*mapped_nents != *nents)){ + *nents = *mapped_nents; + SSI_LOG_ERR("dma_map_sg() sg buffer failed\n"); + return -ENOMEM; + } + } + } + + return 0; +} + +int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata) +{ + struct buff_mgr_handle *buff_mgr_handle; + struct device *dev = &drvdata->plat_dev->dev; + + buff_mgr_handle = (struct buff_mgr_handle *) + kmalloc(sizeof(struct buff_mgr_handle), GFP_KERNEL); + if (buff_mgr_handle == NULL) + return -ENOMEM; + + drvdata->buff_mgr_handle = buff_mgr_handle; + + buff_mgr_handle->mlli_buffs_pool = dma_pool_create( + "dx_single_mlli_tables", dev, + MAX_NUM_OF_TOTAL_MLLI_ENTRIES * + LLI_ENTRY_BYTE_SIZE, + MLLI_TABLE_MIN_ALIGNMENT, 0); + + if (unlikely(buff_mgr_handle->mlli_buffs_pool == NULL)) + goto error; + + return 0; + +error: + ssi_buffer_mgr_fini(drvdata); + return -ENOMEM; +} + +int ssi_buffer_mgr_fini(struct ssi_drvdata *drvdata) +{ + struct buff_mgr_handle *buff_mgr_handle = drvdata->buff_mgr_handle; + + if (buff_mgr_handle != NULL) { + if (buff_mgr_handle->mlli_buffs_pool != NULL) + dma_pool_destroy(buff_mgr_handle->mlli_buffs_pool); + kfree(drvdata->buff_mgr_handle); + drvdata->buff_mgr_handle = NULL; + + } + return 0; +} + diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h new file mode 100644 index 0000000..f21f439 --- /dev/null +++ b/drivers/staging/ccree/ssi_buffer_mgr.h @@ -0,0 +1,79 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . 
+ */
+
+/* \file ssi_buffer_mgr.h
+   Buffer Manager
+ */
+
+#ifndef __SSI_BUFFER_MGR_H__
+#define __SSI_BUFFER_MGR_H__
+
+#include
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+
+
+enum ssi_req_dma_buf_type {
+	SSI_DMA_BUF_NULL = 0,
+	SSI_DMA_BUF_DLLI,
+	SSI_DMA_BUF_MLLI
+};
+
+enum ssi_sg_cpy_direct {
+	SSI_SG_TO_BUF = 0,
+	SSI_SG_FROM_BUF = 1
+};
+
+struct ssi_mlli {
+	ssi_sram_addr_t sram_addr;
+	unsigned int nents; /* sg nents */
+	unsigned int mlli_nents; /* mlli nents might be different than the above */
+};
+
+struct mlli_params {
+	struct dma_pool *curr_pool;
+	uint8_t *mlli_virt_addr;
+	dma_addr_t mlli_dma_addr;
+	uint32_t mlli_len;
+};
+
+int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata);
+
+int ssi_buffer_mgr_fini(struct ssi_drvdata *drvdata);
+
+void ssi_buffer_mgr_copy_scatterlist_portion(u8 *dest, struct scatterlist *sg, uint32_t to_skip, uint32_t end, enum ssi_sg_cpy_direct direct);
+
+void ssi_buffer_mgr_zero_sgl(struct scatterlist *sgl, uint32_t data_len);
+
+
+#ifdef CC_DMA_48BIT_SIM
+dma_addr_t ssi_buff_mgr_update_dma_addr(dma_addr_t orig_addr, uint32_t data_len);
+dma_addr_t ssi_buff_mgr_restore_dma_addr(dma_addr_t orig_addr);
+
+#define SSI_UPDATE_DMA_ADDR_TO_48BIT(addr, size) addr = \
+	ssi_buff_mgr_update_dma_addr(addr, size)
+#define SSI_RESTORE_DMA_ADDR_TO_48BIT(addr) addr = \
+	ssi_buff_mgr_restore_dma_addr(addr)
+#else
+
+#define SSI_UPDATE_DMA_ADDR_TO_48BIT(addr, size) addr = addr
+#define SSI_RESTORE_DMA_ADDR_TO_48BIT(addr) addr = addr
+
+#endif
+
+#endif /*__SSI_BUFFER_MGR_H__*/
+
diff --git a/drivers/staging/ccree/ssi_config.h b/drivers/staging/ccree/ssi_config.h
new file mode 100644
index 0000000..d96a543
--- /dev/null
+++ b/drivers/staging/ccree/ssi_config.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+/* \file ssi_config.h
+   Definitions for ARM CryptoCell Linux Crypto Driver
+ */
+
+#ifndef __SSI_CONFIG_H__
+#define __SSI_CONFIG_H__
+
+#include
+
+#define DISABLE_COHERENT_DMA_OPS
+//#define FLUSH_CACHE_ALL
+//#define COMPLETION_DELAY
+//#define DX_DUMP_DESCS
+// #define DX_DUMP_BYTES
+// #define CC_DEBUG
+#define ENABLE_CC_SYSFS /* Enable sysfs interface for debugging REE driver */
+//#define ENABLE_CC_CYCLE_COUNT
+//#define DX_IRQ_DELAY 100000
+#define DMA_BIT_MASK_LEN 48 /* was 32 bits, but enlarged to 48 bits for the Juno platform */
+
+#if defined ENABLE_CC_CYCLE_COUNT && defined ENABLE_CC_SYSFS
+#define CC_CYCLE_COUNT
+#endif
+
+
+#if defined(CONFIG_ARM64) // TODO: currently only this mode was tested on Juno (which is ARM64); coherent ops still need to be enabled.
+#define DISABLE_COHERENT_DMA_OPS
+#endif
+
+/* Define the CryptoCell DMA cache coherency signals configuration */
+#if defined(DISABLE_COHERENT_DMA_OPS)
+	/* Software Controlled Cache Coherency (SCCC) */
+	#define SSI_CACHE_PARAMS (0x000)
+	/* CC attached to a non-ACP port such as HPP/ACE/AMBA4.
+	 * The customer is responsible for enabling/disabling this feature
+	 * according to the platform type.
*/ + #define DX_HAS_ACP 0 +#else + #define SSI_CACHE_PARAMS (0xEEE) + /* CC attached to ACP */ + #define DX_HAS_ACP 1 +#endif + +#endif /*__DX_CONFIG_H__*/ + diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c new file mode 100644 index 0000000..4fee9df --- /dev/null +++ b/drivers/staging/ccree/ssi_driver.c @@ -0,0 +1,499 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +#include +#include + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* cache.h required for L1_CACHE_ALIGN() and cache_line_size() */ +#include +#include +#include +#include +#include +#include +#include + +#include "ssi_config.h" +#include "ssi_driver.h" +#include "ssi_request_mgr.h" +#include "ssi_buffer_mgr.h" +#include "ssi_sysfs.h" +#include "ssi_sram_mgr.h" +#include "ssi_pm.h" + + +#ifdef DX_DUMP_BYTES +void dump_byte_array(const char *name, const uint8_t *the_array, unsigned long size) +{ + int i , line_offset = 0, ret = 0; + const uint8_t *cur_byte; + char line_buf[80]; + + if (the_array == NULL) { + SSI_LOG_ERR("cannot dump_byte_array - NULL pointer\n"); + return; + } + + ret = snprintf(line_buf, sizeof(line_buf), "%s[%lu]: ", + name, size); + if (ret < 0) { + SSI_LOG_ERR("snprintf returned %d . aborting buffer array dump\n",ret); + return; + } + line_offset = ret; + for (i = 0 , cur_byte = the_array; + (i < size) && (line_offset < sizeof(line_buf)); i++, cur_byte++) { + ret = snprintf(line_buf + line_offset, + sizeof(line_buf) - line_offset, + "0x%02X ", *cur_byte); + if (ret < 0) { + SSI_LOG_ERR("snprintf returned %d . 
aborting buffer array dump\n",ret); + return; + } + line_offset += ret; + if (line_offset > 75) { /* Cut before line end */ + SSI_LOG_DEBUG("%s\n", line_buf); + line_offset = 0; + } + } + + if (line_offset > 0) /* Dump remaining line */ + SSI_LOG_DEBUG("%s\n", line_buf); +} +#endif + +static irqreturn_t cc_isr(int irq, void *dev_id) +{ + struct ssi_drvdata *drvdata = (struct ssi_drvdata *)dev_id; + void __iomem *cc_base = drvdata->cc_base; + uint32_t irr; + uint32_t imr; + DECL_CYCLE_COUNT_RESOURCES; + + /* STAT_OP_TYPE_GENERIC STAT_PHASE_0: Interrupt */ + START_CYCLE_COUNT(); + + /* read the interrupt status */ + irr = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRR)); + SSI_LOG_DEBUG("Got IRR=0x%08X\n", irr); + if (unlikely(irr == 0)) { /* Probably shared interrupt line */ + SSI_LOG_ERR("Got interrupt with empty IRR\n"); + return IRQ_NONE; + } + imr = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR)); + + /* clear interrupt - must be before processing events */ + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), irr); + + drvdata->irq = irr; + /* Completion interrupt - most probable */ + if (likely((irr & SSI_COMP_IRQ_MASK) != 0)) { + /* Mask AXI completion interrupt - will be unmasked in Deferred service handler */ + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), imr | SSI_COMP_IRQ_MASK); + irr &= ~SSI_COMP_IRQ_MASK; + complete_request(drvdata); + } + + /* AXI error interrupt */ + if (unlikely((irr & SSI_AXI_ERR_IRQ_MASK) != 0)) { + uint32_t axi_err; + + /* Read the AXI error ID */ + axi_err = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_ERR)); + SSI_LOG_DEBUG("AXI completion error: axim_mon_err=0x%08X\n", axi_err); + + irr &= ~SSI_AXI_ERR_IRQ_MASK; + } + + if (unlikely(irr != 0)) { + SSI_LOG_DEBUG("IRR includes unknown cause bits (0x%08X)\n", irr); + /* Just warning */ + } + + END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_0); + START_CYCLE_COUNT_AT(drvdata->isr_exit_cycles); + + return IRQ_HANDLED; +} + +int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe) +{ + unsigned int val; + void __iomem *cc_base = drvdata->cc_base; + + /* Unmask all AXI interrupt sources AXI_CFG1 register */ + val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CFG)); + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CFG), val & ~SSI_AXI_IRQ_MASK); + SSI_LOG_DEBUG("AXIM_CFG=0x%08X\n", CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CFG))); + + /* Clear all pending interrupts */ + val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRR)); + SSI_LOG_DEBUG("IRR=0x%08X\n", val); + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), val); + + /* Unmask relevant interrupt cause */ + val = (~(SSI_COMP_IRQ_MASK | SSI_AXI_ERR_IRQ_MASK | SSI_GPR0_IRQ_MASK)); + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), val); + +#ifdef DX_HOST_IRQ_TIMER_INIT_VAL_REG_OFFSET +#ifdef DX_IRQ_DELAY + /* Set CC IRQ delay */ + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRQ_TIMER_INIT_VAL), + DX_IRQ_DELAY); +#endif + if (CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRQ_TIMER_INIT_VAL)) > 0) { + SSI_LOG_DEBUG("irq_delay=%d CC cycles\n", + CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRQ_TIMER_INIT_VAL))); + } +#endif + + val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CACHE_PARAMS)); + if (is_probe == true) { + SSI_LOG_INFO("Cache params previous: 0x%08X\n", val); + } + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CACHE_PARAMS), SSI_CACHE_PARAMS); + val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CACHE_PARAMS)); + 
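+	/*
+	 * The read-back above lets the probe-time log below compare the
+	 * value the HW actually latched against the expected
+	 * SSI_CACHE_PARAMS configuration.
+	 */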
if (is_probe == true) { + SSI_LOG_INFO("Cache params current: 0x%08X (expected: 0x%08X)\n", val, SSI_CACHE_PARAMS); + } + + return 0; +} + +static int init_cc_resources(struct platform_device *plat_dev) +{ + struct resource *req_mem_cc_regs = NULL; + void __iomem *cc_base = NULL; + bool irq_registered = false; + struct ssi_drvdata *new_drvdata = kzalloc(sizeof(struct ssi_drvdata), GFP_KERNEL); + uint32_t signature_val; + int rc = 0; + + if (unlikely(new_drvdata == NULL)) { + SSI_LOG_ERR("Failed to allocate drvdata"); + rc = -ENOMEM; + goto init_cc_res_err; + } + + new_drvdata->inflight_counter = 0; + + dev_set_drvdata(&plat_dev->dev, new_drvdata); + /* Get device resources */ + /* First CC registers space */ + new_drvdata->res_mem = platform_get_resource(plat_dev, IORESOURCE_MEM, 0); + if (unlikely(new_drvdata->res_mem == NULL)) { + SSI_LOG_ERR("Failed getting IO memory resource\n"); + rc = -ENODEV; + goto init_cc_res_err; + } + SSI_LOG_DEBUG("Got MEM resource (%s): start=0x%llX end=0x%llX\n", + new_drvdata->res_mem->name, + (unsigned long long)new_drvdata->res_mem->start, + (unsigned long long)new_drvdata->res_mem->end); + /* Map registers space */ + req_mem_cc_regs = request_mem_region(new_drvdata->res_mem->start, resource_size(new_drvdata->res_mem), "arm_cc7x_regs"); + if (unlikely(req_mem_cc_regs == NULL)) { + SSI_LOG_ERR("Couldn't allocate registers memory region at " + "0x%08X\n", (unsigned int)new_drvdata->res_mem->start); + rc = -EBUSY; + goto init_cc_res_err; + } + cc_base = ioremap(new_drvdata->res_mem->start, resource_size(new_drvdata->res_mem)); + if (unlikely(cc_base == NULL)) { + SSI_LOG_ERR("ioremap[CC](0x%08X,0x%08X) failed\n", + (unsigned int)new_drvdata->res_mem->start, (unsigned int)resource_size(new_drvdata->res_mem)); + rc = -ENOMEM; + goto init_cc_res_err; + } + SSI_LOG_DEBUG("CC registers mapped from %pa to 0x%p\n", &new_drvdata->res_mem->start, cc_base); + new_drvdata->cc_base = cc_base; + + + /* Then IRQ */ + new_drvdata->res_irq = platform_get_resource(plat_dev, IORESOURCE_IRQ, 0); + if (unlikely(new_drvdata->res_irq == NULL)) { + SSI_LOG_ERR("Failed getting IRQ resource\n"); + rc = -ENODEV; + goto init_cc_res_err; + } + rc = request_irq(new_drvdata->res_irq->start, cc_isr, + IRQF_SHARED, "arm_cc7x", new_drvdata); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("Could not register to interrupt %llu\n", + (unsigned long long)new_drvdata->res_irq->start); + goto init_cc_res_err; + } + init_completion(&new_drvdata->icache_setup_completion); + + irq_registered = true; + SSI_LOG_DEBUG("Registered to IRQ (%s) %llu\n", + new_drvdata->res_irq->name, + (unsigned long long)new_drvdata->res_irq->start); + + new_drvdata->plat_dev = plat_dev; + + if(new_drvdata->plat_dev->dev.dma_mask == NULL) + { + new_drvdata->plat_dev->dev.dma_mask = & new_drvdata->plat_dev->dev.coherent_dma_mask; + } + if (!new_drvdata->plat_dev->dev.coherent_dma_mask) + { + new_drvdata->plat_dev->dev.coherent_dma_mask = DMA_BIT_MASK(DMA_BIT_MASK_LEN); + } + + /* Verify correct mapping */ + signature_val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_SIGNATURE)); + if (signature_val != DX_DEV_SIGNATURE) { + SSI_LOG_ERR("Invalid CC signature: SIGNATURE=0x%08X != expected=0x%08X\n", + signature_val, (uint32_t)DX_DEV_SIGNATURE); + rc = -EINVAL; + goto init_cc_res_err; + } + SSI_LOG_DEBUG("CC SIGNATURE=0x%08X\n", signature_val); + + /* Display HW versions */ + SSI_LOG(KERN_INFO, "ARM CryptoCell %s Driver: HW version 0x%08X, Driver version %s\n", SSI_DEV_NAME_STR, + CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, 
HOST_VERSION)), DRV_MODULE_VERSION); + + rc = init_cc_regs(new_drvdata, true); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("init_cc_regs failed\n"); + goto init_cc_res_err; + } + +#ifdef ENABLE_CC_SYSFS + rc = ssi_sysfs_init(&(plat_dev->dev.kobj), new_drvdata); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("init_stat_db failed\n"); + goto init_cc_res_err; + } +#endif + + rc = ssi_sram_mgr_init(new_drvdata); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("ssi_sram_mgr_init failed\n"); + goto init_cc_res_err; + } + + new_drvdata->mlli_sram_addr = + ssi_sram_mgr_alloc(new_drvdata, MAX_MLLI_BUFF_SIZE); + if (unlikely(new_drvdata->mlli_sram_addr == NULL_SRAM_ADDR)) { + SSI_LOG_ERR("Failed to alloc MLLI Sram buffer\n"); + rc = -ENOMEM; + goto init_cc_res_err; + } + + rc = request_mgr_init(new_drvdata); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("request_mgr_init failed\n"); + goto init_cc_res_err; + } + + rc = ssi_buffer_mgr_init(new_drvdata); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("buffer_mgr_init failed\n"); + goto init_cc_res_err; + } + + rc = ssi_power_mgr_init(new_drvdata); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("ssi_power_mgr_init failed\n"); + goto init_cc_res_err; + } + + return 0; + +init_cc_res_err: + SSI_LOG_ERR("Freeing CC HW resources!\n"); + + if (new_drvdata != NULL) { + ssi_power_mgr_fini(new_drvdata); + ssi_buffer_mgr_fini(new_drvdata); + request_mgr_fini(new_drvdata); + ssi_sram_mgr_fini(new_drvdata); +#ifdef ENABLE_CC_SYSFS + ssi_sysfs_fini(); +#endif + + if (req_mem_cc_regs != NULL) { + if (irq_registered) { + free_irq(new_drvdata->res_irq->start, new_drvdata); + new_drvdata->res_irq = NULL; + iounmap(cc_base); + new_drvdata->cc_base = NULL; + } + release_mem_region(new_drvdata->res_mem->start, + resource_size(new_drvdata->res_mem)); + new_drvdata->res_mem = NULL; + } + kfree(new_drvdata); + dev_set_drvdata(&plat_dev->dev, NULL); + } + + return rc; +} + +void fini_cc_regs(struct ssi_drvdata *drvdata) +{ + /* Mask all interrupts */ + WRITE_REGISTER(drvdata->cc_base + + CC_REG_OFFSET(HOST_RGF, HOST_IMR), 0xFFFFFFFF); + +} + +static void cleanup_cc_resources(struct platform_device *plat_dev) +{ + struct ssi_drvdata *drvdata = + (struct ssi_drvdata *)dev_get_drvdata(&plat_dev->dev); + + ssi_power_mgr_fini(drvdata); + ssi_buffer_mgr_fini(drvdata); + request_mgr_fini(drvdata); + ssi_sram_mgr_fini(drvdata); +#ifdef ENABLE_CC_SYSFS + ssi_sysfs_fini(); +#endif + + /* Mask all interrupts */ + WRITE_REGISTER(drvdata->cc_base + CC_REG_OFFSET(HOST_RGF, HOST_IMR), + 0xFFFFFFFF); + free_irq(drvdata->res_irq->start, drvdata); + drvdata->res_irq = NULL; + + fini_cc_regs(drvdata); + + if (drvdata->cc_base != NULL) { + iounmap(drvdata->cc_base); + release_mem_region(drvdata->res_mem->start, + resource_size(drvdata->res_mem)); + drvdata->cc_base = NULL; + drvdata->res_mem = NULL; + } + + kfree(drvdata); + dev_set_drvdata(&plat_dev->dev, NULL); +} + +static int cc7x_probe(struct platform_device *plat_dev) +{ + int rc; +#if defined(CONFIG_ARM) && defined(CC_DEBUG) + uint32_t ctr, cacheline_size; + + asm volatile("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr)); + cacheline_size = 4 << ((ctr >> 16) & 0xf); + SSI_LOG_DEBUG("CP15(L1_CACHE_BYTES) = %u , Kconfig(L1_CACHE_BYTES) = %u\n", + cacheline_size, L1_CACHE_BYTES); + + asm volatile("mrc p15, 0, %0, c0, c0, 0" : "=r" (ctr)); + SSI_LOG_DEBUG("Main ID register (MIDR): Implementer 0x%02X, Arch 0x%01X," + " Part 0x%03X, Rev r%dp%d\n", + (ctr>>24), (ctr>>16)&0xF, (ctr>>4)&0xFFF, (ctr>>20)&0xF, ctr&0xF); +#endif + + /* Map registers space */ + rc = 
init_cc_resources(plat_dev); + if (rc != 0) + return rc; + + SSI_LOG(KERN_INFO, "ARM cc7x_ree device initialized\n"); + + return 0; +} + +static int cc7x_remove(struct platform_device *plat_dev) +{ + SSI_LOG_DEBUG("Releasing cc7x resources...\n"); + + cleanup_cc_resources(plat_dev); + + SSI_LOG(KERN_INFO, "ARM cc7x_ree device terminated\n"); +#ifdef ENABLE_CYCLE_COUNT + display_all_stat_db(); +#endif + + return 0; +} +#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP) +static struct dev_pm_ops arm_cc7x_driver_pm = { + SET_RUNTIME_PM_OPS(ssi_power_mgr_runtime_suspend, ssi_power_mgr_runtime_resume, NULL) +}; +#endif + +#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP) +#define DX_DRIVER_RUNTIME_PM (&arm_cc7x_driver_pm) +#else +#define DX_DRIVER_RUNTIME_PM NULL +#endif + + +#ifdef CONFIG_OF +static const struct of_device_id arm_cc7x_dev_of_match[] = { + {.compatible = "arm,cryptocell-712-ree"}, + {} +}; +MODULE_DEVICE_TABLE(of, arm_cc7x_dev_of_match); +#endif + +static struct platform_driver cc7x_driver = { + .driver = { + .name = "cc7xree", + .owner = THIS_MODULE, +#ifdef CONFIG_OF + .of_match_table = arm_cc7x_dev_of_match, +#endif + .pm = DX_DRIVER_RUNTIME_PM, + }, + .probe = cc7x_probe, + .remove = cc7x_remove, +}; +module_platform_driver(cc7x_driver); + +/* Module description */ +MODULE_DESCRIPTION("ARM TrustZone CryptoCell REE Driver"); +MODULE_VERSION(DRV_MODULE_VERSION); +MODULE_AUTHOR("ARM"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h new file mode 100644 index 0000000..eb30643 --- /dev/null +++ b/drivers/staging/ccree/ssi_driver.h @@ -0,0 +1,183 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . 
+ */
+
+/* \file ssi_driver.h
+ * ARM CryptoCell Linux Crypto Driver
+ */
+
+#ifndef __SSI_DRIVER_H__
+#define __SSI_DRIVER_H__
+
+#include "ssi_config.h"
+#ifdef COMP_IN_WQ
+#include <linux/workqueue.h>
+#else
+#include <linux/interrupt.h>
+#endif
+#include
+#include
+#include
+#include
+#include
+#include
+
+#ifndef INT32_MAX /* Missing in Linux kernel */
+#define INT32_MAX 0x7FFFFFFFL
+#endif
+
+/* Registers definitions from shared/hw/ree_include */
+#include "dx_reg_base_host.h"
+#include "dx_host.h"
+#define DX_CC_HOST_VIRT /* must be defined before including dx_cc_regs.h */
+#include "cc_hw_queue_defs.h"
+#include "cc_regs.h"
+#include "dx_reg_common.h"
+#include "cc_hal.h"
+#include "ssi_sram_mgr.h"
+#define CC_SUPPORT_SHA DX_DEV_SHA_MAX
+#include "cc_crypto_ctx.h"
+#include "ssi_sysfs.h"
+
+#define DRV_MODULE_VERSION "3.0"
+
+#define SSI_DEV_NAME_STR "cc715ree"
+#define SSI_CC_HAS_AES_CCM 1
+#define SSI_CC_HAS_AES_GCM 1
+#define SSI_CC_HAS_AES_XTS 1
+#define SSI_CC_HAS_AES_ESSIV 1
+#define SSI_CC_HAS_AES_BITLOCKER 1
+#define SSI_CC_HAS_AES_CTS 1
+#define SSI_CC_HAS_MULTI2 0
+#define SSI_CC_HAS_CMAC 1
+
+#define SSI_AXI_IRQ_MASK ((1 << DX_AXIM_CFG_BRESPMASK_BIT_SHIFT) | \
+			  (1 << DX_AXIM_CFG_RRESPMASK_BIT_SHIFT) | \
+			  (1 << DX_AXIM_CFG_INFLTMASK_BIT_SHIFT) | \
+			  (1 << DX_AXIM_CFG_COMPMASK_BIT_SHIFT))
+
+#define SSI_AXI_ERR_IRQ_MASK (1 << DX_HOST_IRR_AXI_ERR_INT_BIT_SHIFT)
+
+#define SSI_COMP_IRQ_MASK (1 << DX_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT)
+
+/* TEE FIPS status interrupt */
+#define SSI_GPR0_IRQ_MASK (1 << DX_HOST_IRR_GPR0_BIT_SHIFT)
+
+#define SSI_CRA_PRIO 3000
+
+#define MIN_HW_QUEUE_SIZE 50 /* Minimum size required for proper function */
+
+#define MAX_REQUEST_QUEUE_SIZE 4096
+#define MAX_MLLI_BUFF_SIZE 2080
+#define MAX_ICV_NENTS_SUPPORTED 2
+
+/* Definitions for HW descriptors DIN/DOUT fields */
+#define NS_BIT 1
+#define AXI_ID 0
+/* AXI_ID is not actually the AXI ID of the transaction but the value of
+ * the AXI_ID field in the HW descriptor. The DMA engine adds 8 to that
+ * value. */
+
+/* Logging macros */
+#define SSI_LOG(level, format, ...) \
+	printk(level "cc715ree::%s: " format, __func__, ##__VA_ARGS__)
+#define SSI_LOG_ERR(format, ...) SSI_LOG(KERN_ERR, format, ##__VA_ARGS__)
+#define SSI_LOG_WARNING(format, ...) SSI_LOG(KERN_WARNING, format, ##__VA_ARGS__)
+#define SSI_LOG_NOTICE(format, ...) SSI_LOG(KERN_NOTICE, format, ##__VA_ARGS__)
+#define SSI_LOG_INFO(format, ...) SSI_LOG(KERN_INFO, format, ##__VA_ARGS__)
+#ifdef CC_DEBUG
+#define SSI_LOG_DEBUG(format, ...) SSI_LOG(KERN_DEBUG, format, ##__VA_ARGS__)
+#else /* Debug log messages are removed at compile time for non-DEBUG config. */
+#define SSI_LOG_DEBUG(format, ...) do {} while (0)
+#endif
+
+#define MIN(a, b) (((a) < (b)) ? (a) : (b))
+#define MAX(a, b) (((a) > (b)) ? (a) : (b))
+
+struct ssi_crypto_req {
+	void (*user_cb)(struct device *dev, void *req, void __iomem *cc_base);
+	void *user_arg;
+	struct completion seq_compl; /* request completion */
+#ifdef ENABLE_CYCLE_COUNT
+	enum stat_op op_type;
+	cycles_t submit_cycle;
+	bool is_monitored_p;
+#endif
+};
+
+/**
+ * struct ssi_drvdata - driver private data context
+ * @cc_base:	virt address of the CC registers
+ * @irq:	device IRQ number
+ * @irq_mask:	Interrupt mask shadow (1 for masked interrupts)
+ * @fw_ver:	SeP loaded firmware version
+ */
+struct ssi_drvdata {
+	struct resource *res_mem;
+	struct resource *res_irq;
+	void __iomem *cc_base;
+#ifdef DX_BASE_ENV_REGS
+	void __iomem *env_base; /* ARM CryptoCell development FPGAs only */
+#endif
+	unsigned int irq;
+	uint32_t irq_mask;
+	uint32_t fw_ver;
+	/* Calibration time of start/stop monitor descriptors */
+	uint32_t monitor_null_cycles;
+	struct platform_device *plat_dev;
+	ssi_sram_addr_t mlli_sram_addr;
+	struct completion icache_setup_completion;
+	void *buff_mgr_handle;
+	void *request_mgr_handle;
+	void *sram_mgr_handle;
+
+#ifdef ENABLE_CYCLE_COUNT
+	cycles_t isr_exit_cycles; /* Save for isr-to-tasklet latency */
+#endif
+	uint32_t inflight_counter;
+
+};
+
+struct async_gen_req_ctx {
+	dma_addr_t iv_dma_addr;
+	enum drv_crypto_direction op_type;
+};
+
+#ifdef DX_DUMP_BYTES
+void dump_byte_array(const char *name, const uint8_t *the_array, unsigned long size);
+#else
+#define dump_byte_array(name, array, size) do { \
+} while (0)
+#endif
+
+#ifdef ENABLE_CYCLE_COUNT
+#define DECL_CYCLE_COUNT_RESOURCES cycles_t _last_cycles_read
+#define START_CYCLE_COUNT() do { _last_cycles_read = get_cycles(); } while (0)
+#define END_CYCLE_COUNT(_stat_op_type, _stat_phase) update_host_stat(_stat_op_type, _stat_phase, get_cycles() - _last_cycles_read)
+#define GET_START_CYCLE_COUNT() _last_cycles_read
+#define START_CYCLE_COUNT_AT(_var) do { _var = get_cycles(); } while (0)
+#define END_CYCLE_COUNT_AT(_var, _stat_op_type, _stat_phase) update_host_stat(_stat_op_type, _stat_phase, get_cycles() - _var)
+#else
+#define DECL_CYCLE_COUNT_RESOURCES
+#define START_CYCLE_COUNT() do { } while (0)
+#define END_CYCLE_COUNT(_stat_op_type, _stat_phase) do { } while (0)
+#define GET_START_CYCLE_COUNT() 0
+#define START_CYCLE_COUNT_AT(_var) do { } while (0)
+#define END_CYCLE_COUNT_AT(_var, _stat_op_type, _stat_phase) do { } while (0)
+#endif /*ENABLE_CYCLE_COUNT*/
+
+int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe);
+void fini_cc_regs(struct ssi_drvdata *drvdata);
+
+#endif /*__SSI_DRIVER_H__*/
+
diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c
new file mode 100644
index 0000000..1f34e68
--- /dev/null
+++ b/drivers/staging/ccree/ssi_pm.c
@@ -0,0 +1,144 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+
+#include "ssi_config.h"
+#include
+#include
+#include
+#include
+#include
+#include "ssi_driver.h"
+#include "ssi_buffer_mgr.h"
+#include "ssi_request_mgr.h"
+#include "ssi_sram_mgr.h"
+#include "ssi_sysfs.h"
+#include "ssi_pm.h"
+#include "ssi_pm_ext.h"
+
+
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+
+#define POWER_DOWN_ENABLE 0x01
+#define POWER_DOWN_DISABLE 0x00
+
+
+int ssi_power_mgr_runtime_suspend(struct device *dev)
+{
+	struct ssi_drvdata *drvdata =
+		(struct ssi_drvdata *)dev_get_drvdata(dev);
+	int rc;
+
+	SSI_LOG_DEBUG("ssi_power_mgr_runtime_suspend: set HOST_POWER_DOWN_EN\n");
+	WRITE_REGISTER(drvdata->cc_base + CC_REG_OFFSET(HOST_RGF, HOST_POWER_DOWN_EN), POWER_DOWN_ENABLE);
+	rc = ssi_request_mgr_runtime_suspend_queue(drvdata);
+	if (rc != 0) {
+		SSI_LOG_ERR("ssi_request_mgr_runtime_suspend_queue (%x)\n", rc);
+		return rc;
+	}
+	fini_cc_regs(drvdata);
+
+	/* Specific HW suspend code */
+	ssi_pm_ext_hw_suspend(dev);
+	return 0;
+}
+
+int ssi_power_mgr_runtime_resume(struct device *dev)
+{
+	int rc;
+	struct ssi_drvdata *drvdata =
+		(struct ssi_drvdata *)dev_get_drvdata(dev);
+
+	SSI_LOG_DEBUG("ssi_power_mgr_runtime_resume: unset HOST_POWER_DOWN_EN\n");
+	WRITE_REGISTER(drvdata->cc_base + CC_REG_OFFSET(HOST_RGF, HOST_POWER_DOWN_EN), POWER_DOWN_DISABLE);
+	/* Specific HW resume code */
+	ssi_pm_ext_hw_resume(dev);
+
+	rc = init_cc_regs(drvdata, false);
+	if (rc != 0) {
+		SSI_LOG_ERR("init_cc_regs (%x)\n", rc);
+		return rc;
+	}
+
+	rc = ssi_request_mgr_runtime_resume_queue(drvdata);
+	if (rc != 0) {
+		SSI_LOG_ERR("ssi_request_mgr_runtime_resume_queue (%x)\n", rc);
+		return rc;
+	}
+
+	return 0;
+}
+
+int ssi_power_mgr_runtime_get(struct device *dev)
+{
+	int rc = 0;
+
+	if (ssi_request_mgr_is_queue_runtime_suspend(
+			(struct ssi_drvdata *)dev_get_drvdata(dev))) {
+		rc = pm_runtime_get_sync(dev);
+	} else {
+		pm_runtime_get_noresume(dev);
+	}
+	return rc;
+}
+
+int ssi_power_mgr_runtime_put_suspend(struct device *dev)
+{
+	int rc = 0;
+
+	if (!ssi_request_mgr_is_queue_runtime_suspend(
+			(struct ssi_drvdata *)dev_get_drvdata(dev))) {
+		pm_runtime_mark_last_busy(dev);
+		rc = pm_runtime_put_autosuspend(dev);
+	} else {
+		/* Something went wrong */
+		BUG();
+	}
+	return rc;
+
+}
+
+#endif
+
+
+
+int ssi_power_mgr_init(struct ssi_drvdata *drvdata)
+{
+	int rc = 0;
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+	struct platform_device *plat_dev = drvdata->plat_dev;
+	/* must be before the enabling to avoid redundant suspends */
+	pm_runtime_set_autosuspend_delay(&plat_dev->dev, SSI_SUSPEND_TIMEOUT);
+	pm_runtime_use_autosuspend(&plat_dev->dev);
+	/* activate the PM module */
+	rc = pm_runtime_set_active(&plat_dev->dev);
+	if (rc != 0)
+		return rc;
+	/* enable the PM module */
+	pm_runtime_enable(&plat_dev->dev);
+#endif
+	return rc;
+}
+
+void ssi_power_mgr_fini(struct ssi_drvdata *drvdata)
+{
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+	struct platform_device *plat_dev = drvdata->plat_dev;
+
+	pm_runtime_disable(&plat_dev->dev);
+#endif
+}
diff --git a/drivers/staging/ccree/ssi_pm.h b/drivers/staging/ccree/ssi_pm.h
new file mode 100644
index 0000000..516fc3f
--- /dev/null
+++ b/drivers/staging/ccree/ssi_pm.h
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+/* \file ssi_pm.h
+ */
+
+#ifndef __SSI_POWER_MGR_H__
+#define __SSI_POWER_MGR_H__
+
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+
+
+#define SSI_SUSPEND_TIMEOUT 3000
+
+
+int ssi_power_mgr_init(struct ssi_drvdata *drvdata);
+
+void ssi_power_mgr_fini(struct ssi_drvdata *drvdata);
+
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+int ssi_power_mgr_runtime_suspend(struct device *dev);
+
+int ssi_power_mgr_runtime_resume(struct device *dev);
+
+int ssi_power_mgr_runtime_get(struct device *dev);
+
+int ssi_power_mgr_runtime_put_suspend(struct device *dev);
+#endif
+
+#endif /*__SSI_POWER_MGR_H__*/
+
diff --git a/drivers/staging/ccree/ssi_pm_ext.c b/drivers/staging/ccree/ssi_pm_ext.c
new file mode 100644
index 0000000..f86bbab
--- /dev/null
+++ b/drivers/staging/ccree/ssi_pm_ext.c
@@ -0,0 +1,60 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+
+#include "ssi_config.h"
+#include
+#include
+#include
+#include
+#include
+#include "ssi_driver.h"
+#include "ssi_sram_mgr.h"
+#include "ssi_pm_ext.h"
+
+/*
+ * This function should suspend the HW (if possible); it should be
+ * implemented by the driver user.
+ * The reference code clears the internal SRAM to imitate loss of state.
+ */
+void ssi_pm_ext_hw_suspend(struct device *dev)
+{
+	struct ssi_drvdata *drvdata =
+		(struct ssi_drvdata *)dev_get_drvdata(dev);
+	unsigned int val;
+	void __iomem *cc_base = drvdata->cc_base;
+	unsigned int sram_addr = 0;
+
+	CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, SRAM_ADDR), sram_addr);
+
+	for (; sram_addr < SSI_CC_SRAM_SIZE; sram_addr += 4) {
+		CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, SRAM_DATA), 0x0);
+
+		do {
+			val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, SRAM_DATA_READY));
+		} while (!(val & 0x1));
+	}
+}
+
+/*
+ * This function should resume the HW (if possible); it should be
+ * implemented by the driver user.
+ */
+void ssi_pm_ext_hw_resume(struct device *dev)
+{
+	return;
+}
+
diff --git a/drivers/staging/ccree/ssi_pm_ext.h b/drivers/staging/ccree/ssi_pm_ext.h
new file mode 100644
index 0000000..b4e2795
--- /dev/null
+++ b/drivers/staging/ccree/ssi_pm_ext.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +/* \file ssi_pm_ext.h + */ + +#ifndef __PM_EXT_H__ +#define __PM_EXT_H__ + + +#include "ssi_config.h" +#include "ssi_driver.h" + +void ssi_pm_ext_hw_suspend(struct device *dev); + +void ssi_pm_ext_hw_resume(struct device *dev); + + +#endif /*__POWER_MGR_H__*/ + diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c new file mode 100644 index 0000000..62ef6e7 --- /dev/null +++ b/drivers/staging/ccree/ssi_request_mgr.c @@ -0,0 +1,680 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +#include "ssi_config.h" +#include +#include +#include +#include +#include +#ifdef FLUSH_CACHE_ALL +#include +#endif +#include +#include "ssi_driver.h" +#include "ssi_buffer_mgr.h" +#include "ssi_request_mgr.h" +#include "ssi_sysfs.h" +#include "ssi_pm.h" + +#define SSI_MAX_POLL_ITER 10 + +#define AXIM_MON_BASE_OFFSET CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_COMP) + +#ifdef CC_CYCLE_COUNT + +#define MONITOR_CNTR_BIT 0 + +/** + * Monitor descriptor. + * Used to measure CC performance. + */ +#define INIT_CC_MONITOR_DESC(desc_p) \ +do { \ + HW_DESC_INIT(desc_p); \ + HW_DESC_SET_DIN_MONITOR_CNTR(desc_p); \ +} while (0) + +/** + * Try adding monitor descriptor BEFORE enqueuing sequence. + */ +#define CC_CYCLE_DESC_HEAD(cc_base_addr, desc_p, lock_p, is_monitored_p) \ +do { \ + if (!test_and_set_bit(MONITOR_CNTR_BIT, (lock_p))) { \ + enqueue_seq((cc_base_addr), (desc_p), 1); \ + *(is_monitored_p) = true; \ + } else { \ + *(is_monitored_p) = false; \ + } \ +} while (0) + +/** + * If CC_CYCLE_DESC_HEAD was successfully added: + * 1. Add memory barrier descriptor to ensure last AXI transaction. + * 2. Add monitor descriptor to sequence tail AFTER enqueuing sequence. + */ +#define CC_CYCLE_DESC_TAIL(cc_base_addr, desc_p, is_monitored) \ +do { \ + if ((is_monitored) == true) { \ + HwDesc_s barrier_desc; \ + HW_DESC_INIT(&barrier_desc); \ + HW_DESC_SET_DIN_NO_DMA(&barrier_desc, 0, 0xfffff0); \ + HW_DESC_SET_DOUT_NO_DMA(&barrier_desc, 0, 0, 1); \ + enqueue_seq((cc_base_addr), &barrier_desc, 1); \ + enqueue_seq((cc_base_addr), (desc_p), 1); \ + } \ +} while (0) + +/** + * Try reading CC monitor counter value upon sequence complete. + * Can only succeed if the lock_p is taken by the owner of the given request. 
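+ *
+ * Illustrative life cycle of the three monitor macros (a sketch only;
+ * the real callers are send_request() and proc_completions() below,
+ * "mgr" and "op" are placeholder names):
+ *
+ *	bool is_monitored = false;
+ *	CC_CYCLE_DESC_HEAD(cc_base, &mgr->monitor_desc,
+ *			   &mgr->monitor_lock, &is_monitored);
+ *	enqueue_seq(cc_base, desc, len);
+ *	CC_CYCLE_DESC_TAIL(cc_base, &mgr->monitor_desc, is_monitored);
+ *	(wait for the sequence to complete)
+ *	END_CC_MONITOR_COUNT(cc_base, op, STAT_PHASE_6,
+ *			     drvdata->monitor_null_cycles,
+ *			     &mgr->monitor_lock, is_monitored);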
+ */ +#define END_CC_MONITOR_COUNT(cc_base_addr, stat_op_type, stat_phase, monitor_null_cycles, lock_p, is_monitored) \ +do { \ + uint32_t elapsed_cycles; \ + if ((is_monitored) == true) { \ + elapsed_cycles = READ_REGISTER((cc_base_addr) + CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_MEASURE_CNTR)); \ + clear_bit(MONITOR_CNTR_BIT, (lock_p)); \ + if (elapsed_cycles > 0) \ + update_cc_stat(stat_op_type, stat_phase, (elapsed_cycles - monitor_null_cycles)); \ + } \ +} while (0) + +#else /*CC_CYCLE_COUNT*/ + +#define INIT_CC_MONITOR_DESC(desc_p) do { } while (0) +#define CC_CYCLE_DESC_HEAD(cc_base_addr, desc_p, lock_p, is_monitored_p) do { } while (0) +#define CC_CYCLE_DESC_TAIL(cc_base_addr, desc_p, is_monitored) do { } while (0) +#define END_CC_MONITOR_COUNT(cc_base_addr, stat_op_type, stat_phase, monitor_null_cycles, lock_p, is_monitored) do { } while (0) +#endif /*CC_CYCLE_COUNT*/ + + +struct ssi_request_mgr_handle { + /* Request manager resources */ + unsigned int hw_queue_size; /* HW capability */ + unsigned int min_free_hw_slots; + unsigned int max_used_sw_slots; + struct ssi_crypto_req req_queue[MAX_REQUEST_QUEUE_SIZE]; + uint32_t req_queue_head; + uint32_t req_queue_tail; + uint32_t axi_completed; + uint32_t q_free_slots; + spinlock_t hw_lock; + HwDesc_s compl_desc; + uint8_t *dummy_comp_buff; + dma_addr_t dummy_comp_buff_dma; + HwDesc_s monitor_desc; + volatile unsigned long monitor_lock; +#ifdef COMP_IN_WQ + struct workqueue_struct *workq; + struct delayed_work compwork; +#else + struct tasklet_struct comptask; +#endif +#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP) + bool is_runtime_suspended; +#endif +}; + +static void comp_handler(unsigned long devarg); +#ifdef COMP_IN_WQ +static void comp_work_handler(struct work_struct *work); +#endif + +void request_mgr_fini(struct ssi_drvdata *drvdata) +{ + struct ssi_request_mgr_handle *req_mgr_h = drvdata->request_mgr_handle; + + if (req_mgr_h == NULL) + return; /* Not allocated */ + + if (req_mgr_h->dummy_comp_buff_dma != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(req_mgr_h->dummy_comp_buff_dma); + dma_free_coherent(&drvdata->plat_dev->dev, + sizeof(uint32_t), req_mgr_h->dummy_comp_buff, + req_mgr_h->dummy_comp_buff_dma); + } + + SSI_LOG_DEBUG("max_used_hw_slots=%d\n", (req_mgr_h->hw_queue_size - + req_mgr_h->min_free_hw_slots) ); + SSI_LOG_DEBUG("max_used_sw_slots=%d\n", req_mgr_h->max_used_sw_slots); + +#ifdef COMP_IN_WQ + flush_workqueue(req_mgr_h->workq); + destroy_workqueue(req_mgr_h->workq); +#else + /* Kill tasklet */ + tasklet_kill(&req_mgr_h->comptask); +#endif + memset(req_mgr_h, 0, sizeof(struct ssi_request_mgr_handle)); + kfree(req_mgr_h); + drvdata->request_mgr_handle = NULL; +} + +int request_mgr_init(struct ssi_drvdata *drvdata) +{ +#ifdef CC_CYCLE_COUNT + HwDesc_s monitor_desc[2]; + struct ssi_crypto_req monitor_req = {0}; +#endif + struct ssi_request_mgr_handle *req_mgr_h; + int rc = 0; + + req_mgr_h = kzalloc(sizeof(struct ssi_request_mgr_handle),GFP_KERNEL); + if (req_mgr_h == NULL) { + rc = -ENOMEM; + goto req_mgr_init_err; + } + + drvdata->request_mgr_handle = req_mgr_h; + + spin_lock_init(&req_mgr_h->hw_lock); +#ifdef COMP_IN_WQ + SSI_LOG_DEBUG("Initializing completion workqueue\n"); + req_mgr_h->workq = create_singlethread_workqueue("arm_cc7x_wq"); + if (unlikely(req_mgr_h->workq == NULL)) { + SSI_LOG_ERR("Failed creating work queue\n"); + rc = -ENOMEM; + goto req_mgr_init_err; + } + INIT_DELAYED_WORK(&req_mgr_h->compwork, comp_work_handler); +#else + SSI_LOG_DEBUG("Initializing completion tasklet\n"); + 
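+	/* comp_handler() will run in softirq context whenever cc_isr() schedules this tasklet */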
tasklet_init(&req_mgr_h->comptask, comp_handler, (unsigned long)drvdata); +#endif + req_mgr_h->hw_queue_size = READ_REGISTER(drvdata->cc_base + + CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_SRAM_SIZE)); + SSI_LOG_DEBUG("hw_queue_size=0x%08X\n", req_mgr_h->hw_queue_size); + if (req_mgr_h->hw_queue_size < MIN_HW_QUEUE_SIZE) { + SSI_LOG_ERR("Invalid HW queue size = %u (Min. required is %u)\n", + req_mgr_h->hw_queue_size, MIN_HW_QUEUE_SIZE); + rc = -ENOMEM; + goto req_mgr_init_err; + } + req_mgr_h->min_free_hw_slots = req_mgr_h->hw_queue_size; + req_mgr_h->max_used_sw_slots = 0; + + + /* Allocate DMA word for "dummy" completion descriptor use */ + req_mgr_h->dummy_comp_buff = dma_alloc_coherent(&drvdata->plat_dev->dev, + sizeof(uint32_t), &req_mgr_h->dummy_comp_buff_dma, GFP_KERNEL); + if (!req_mgr_h->dummy_comp_buff) { + SSI_LOG_ERR("Not enough memory to allocate DMA (%zu) dropped " + "buffer\n", sizeof(uint32_t)); + rc = -ENOMEM; + goto req_mgr_init_err; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(req_mgr_h->dummy_comp_buff_dma, + sizeof(uint32_t)); + + /* Init. "dummy" completion descriptor */ + HW_DESC_INIT(&req_mgr_h->compl_desc); + HW_DESC_SET_DIN_CONST(&req_mgr_h->compl_desc, 0, sizeof(uint32_t)); + HW_DESC_SET_DOUT_DLLI(&req_mgr_h->compl_desc, + req_mgr_h->dummy_comp_buff_dma, + sizeof(uint32_t), NS_BIT, 1); + HW_DESC_SET_FLOW_MODE(&req_mgr_h->compl_desc, BYPASS); + HW_DESC_SET_QUEUE_LAST_IND(&req_mgr_h->compl_desc); + +#ifdef CC_CYCLE_COUNT + /* For CC-HW cycle performance trace */ + INIT_CC_MONITOR_DESC(&req_mgr_h->monitor_desc); + set_bit(MONITOR_CNTR_BIT, &req_mgr_h->monitor_lock); + monitor_desc[0] = req_mgr_h->monitor_desc; + monitor_desc[1] = req_mgr_h->monitor_desc; + + rc = send_request(drvdata, &monitor_req, monitor_desc, 2, 0); + if (unlikely(rc != 0)) + goto req_mgr_init_err; + + drvdata->monitor_null_cycles = READ_REGISTER(drvdata->cc_base + + CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_MEASURE_CNTR)); + SSI_LOG_ERR("Calibration time=0x%08x\n", drvdata->monitor_null_cycles); + + clear_bit(MONITOR_CNTR_BIT, &req_mgr_h->monitor_lock); +#endif + + return 0; + +req_mgr_init_err: + request_mgr_fini(drvdata); + return rc; +} + +static inline void enqueue_seq( + void __iomem *cc_base, + HwDesc_s seq[], unsigned int seq_len) +{ + int i; + + for (i = 0; i < seq_len; i++) { + writel_relaxed(seq[i].word[0], (volatile void __iomem *)(cc_base+CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0))); + writel_relaxed(seq[i].word[1], (volatile void __iomem *)(cc_base+CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0))); + writel_relaxed(seq[i].word[2], (volatile void __iomem *)(cc_base+CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0))); + writel_relaxed(seq[i].word[3], (volatile void __iomem *)(cc_base+CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0))); + writel_relaxed(seq[i].word[4], (volatile void __iomem *)(cc_base+CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0))); + wmb(); + writel_relaxed(seq[i].word[5], (volatile void __iomem *)(cc_base+CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_WORD0))); +#ifdef DX_DUMP_DESCS + SSI_LOG_DEBUG("desc[%02d]: 0x%08X 0x%08X 0x%08X 0x%08X 0x%08X 0x%08X\n", i, + seq[i].word[0], seq[i].word[1], seq[i].word[2], seq[i].word[3], seq[i].word[4], seq[i].word[5]); +#endif + } +} + +/*! + * Completion will take place if and only if user requested completion + * by setting "is_dout = 0" in send_request(). 
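+ *
+ * A minimal synchronous caller would look roughly as follows (an
+ * illustrative sketch only; "my_desc"/"my_len" are placeholders for a
+ * prepared HW descriptor sequence, not names from this patch):
+ *
+ *	struct ssi_crypto_req req = {0};
+ *	int rc = send_request(drvdata, &req, my_desc, my_len, false);
+ *
+ * With is_dout=false, send_request() installs this function as
+ * req.user_cb and blocks on req.seq_compl until the completion
+ * tasklet invokes it.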
+ *
+ * \param dev
+ * \param dx_compl_h The completion event to signal
+ */
+static void request_mgr_complete(struct device *dev, void *dx_compl_h, void __iomem *cc_base)
+{
+	struct completion *this_compl = dx_compl_h;
+	complete(this_compl);
+}
+
+
+static inline int request_mgr_queues_status_check(
+	struct ssi_request_mgr_handle *req_mgr_h,
+	void __iomem *cc_base,
+	unsigned int total_seq_len)
+{
+	unsigned long poll_queue;
+
+	/* SW queue is checked only once as it will not
+	 * be changed during the poll because the spinlock_bh
+	 * is held by the thread */
+	if (unlikely(((req_mgr_h->req_queue_head + 1) &
+		      (MAX_REQUEST_QUEUE_SIZE - 1)) ==
+		     req_mgr_h->req_queue_tail)) {
+		SSI_LOG_ERR("SW FIFO is full. req_queue_head=%d sw_fifo_len=%d\n",
+			    req_mgr_h->req_queue_head, MAX_REQUEST_QUEUE_SIZE);
+		return -EBUSY;
+	}
+
+	if (likely(req_mgr_h->q_free_slots >= total_seq_len)) {
+		return 0;
+	}
+	/* Wait for space in HW queue. Poll constant num of iterations. */
+	for (poll_queue = 0; poll_queue < SSI_MAX_POLL_ITER; poll_queue++) {
+		req_mgr_h->q_free_slots =
+			CC_HAL_READ_REGISTER(
+				CC_REG_OFFSET(CRY_KERNEL,
+					      DSCRPTR_QUEUE_CONTENT));
+		if (unlikely(req_mgr_h->q_free_slots <
+			     req_mgr_h->min_free_hw_slots)) {
+			req_mgr_h->min_free_hw_slots = req_mgr_h->q_free_slots;
+		}
+
+		if (likely(req_mgr_h->q_free_slots >= total_seq_len)) {
+			/* If there is enough room, return */
+			return 0;
+		}
+
+		SSI_LOG_DEBUG("HW FIFO is full. q_free_slots=%d total_seq_len=%d\n",
+			      req_mgr_h->q_free_slots, total_seq_len);
+	}
+	/* No room in the HW queue; try again later */
+	SSI_LOG_DEBUG("HW FIFO full, timeout. req_queue_head=%d "
+		      "sw_fifo_len=%d q_free_slots=%d total_seq_len=%d\n",
+		      req_mgr_h->req_queue_head,
+		      MAX_REQUEST_QUEUE_SIZE,
+		      req_mgr_h->q_free_slots,
+		      total_seq_len);
+	return -EAGAIN;
+}
+
+/*!
+ * Enqueue caller request to crypto hardware.
+ *
+ * \param drvdata
+ * \param ssi_req The request to enqueue
+ * \param desc The crypto sequence
+ * \param len The crypto sequence length
+ * \param is_dout If "true": completion is handled by the caller
+ *	  If "false": this function adds a dummy descriptor completion
+ *	  and waits upon completion signal.
+ *
+ * \return int Returns -EINPROGRESS if "is_dout=true"; "0" if "is_dout=false"
+ */
+int send_request(
+	struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
+	HwDesc_s *desc, unsigned int len, bool is_dout)
+{
+	void __iomem *cc_base = drvdata->cc_base;
+	struct ssi_request_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
+	unsigned int used_sw_slots;
+	unsigned int total_seq_len = len; /*initial sequence length*/
+	int rc;
+	unsigned int max_required_seq_len = total_seq_len + ((is_dout == 0) ? 1 : 0);
+	DECL_CYCLE_COUNT_RESOURCES;
+
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+	rc = ssi_power_mgr_runtime_get(&drvdata->plat_dev->dev);
+	if (rc != 0) {
+		/* Note: the HW lock is not held yet at this point, so there is
+		 * nothing to unlock on this error path. */
+		SSI_LOG_ERR("ssi_power_mgr_runtime_get returned %x\n", rc);
+		return rc;
+	}
+#endif
+
+	do {
+		spin_lock_bh(&req_mgr_h->hw_lock);
+
+		/* Check if there is enough room in the SW/HW queues: in case
+		 * of IV generation add the max size, and in case of no dout
+		 * add 1 for the internal completion descriptor */
+		rc = request_mgr_queues_status_check(req_mgr_h,
+						     cc_base,
+						     max_required_seq_len);
+		if (likely(rc == 0))
+			/* There is enough room in the queue */
+			break;
+		/* Something went wrong - release the spinlock */
+		spin_unlock_bh(&req_mgr_h->hw_lock);
+
+		if (rc != -EAGAIN) {
+			/* Any error other than HW queue full
+			 * (SW queue is full) */
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+			ssi_power_mgr_runtime_put_suspend(&drvdata->plat_dev->dev);
+#endif
+			return rc;
+		}
+
+		/* HW queue is full - short sleep */
+		msleep(1);
+	} while (1);
+
+	/* An additional completion descriptor is needed in case the caller
+	 * did not enable any DLLI/MLLI DOUT bit in the given sequence */
+	if (!is_dout) {
+		init_completion(&ssi_req->seq_compl);
+		ssi_req->user_cb = request_mgr_complete;
+		ssi_req->user_arg = &(ssi_req->seq_compl);
+		total_seq_len++;
+	}
+
+	used_sw_slots = ((req_mgr_h->req_queue_head - req_mgr_h->req_queue_tail) & (MAX_REQUEST_QUEUE_SIZE - 1));
+	if (unlikely(used_sw_slots > req_mgr_h->max_used_sw_slots)) {
+		req_mgr_h->max_used_sw_slots = used_sw_slots;
+	}
+
+	CC_CYCLE_DESC_HEAD(cc_base, &req_mgr_h->monitor_desc,
+			   &req_mgr_h->monitor_lock, &ssi_req->is_monitored_p);
+
+	/* Enqueue request - must be locked with HW lock */
+	req_mgr_h->req_queue[req_mgr_h->req_queue_head] = *ssi_req;
+	START_CYCLE_COUNT_AT(req_mgr_h->req_queue[req_mgr_h->req_queue_head].submit_cycle);
+	req_mgr_h->req_queue_head = (req_mgr_h->req_queue_head + 1) & (MAX_REQUEST_QUEUE_SIZE - 1);
+	/* TODO: Use circ_buf.h ? */
+
+	SSI_LOG_DEBUG("Enqueue request head=%u\n", req_mgr_h->req_queue_head);
+
+#ifdef FLUSH_CACHE_ALL
+	flush_cache_all();
+#endif
+
+	/* STAT_PHASE_4: Push sequence */
+	START_CYCLE_COUNT();
+	enqueue_seq(cc_base, desc, len);
+	enqueue_seq(cc_base, &req_mgr_h->compl_desc, (is_dout ? 0 : 1));
+	END_CYCLE_COUNT(ssi_req->op_type, STAT_PHASE_4);
+
+	CC_CYCLE_DESC_TAIL(cc_base, &req_mgr_h->monitor_desc, ssi_req->is_monitored_p);
+
+	if (unlikely(req_mgr_h->q_free_slots < total_seq_len)) {
+		/* This means that there was a problem with the resume */
+		BUG();
+	}
+	/* Update the free slots in HW queue */
+	req_mgr_h->q_free_slots -= total_seq_len;
+
+	spin_unlock_bh(&req_mgr_h->hw_lock);
+
+	if (!is_dout) {
+		/* Wait upon sequence completion.
+		 * Return "0" - operation done successfully. */
+		return wait_for_completion_interruptible(&ssi_req->seq_compl);
+	} else {
+		/* Operation still in process */
+		return -EINPROGRESS;
+	}
+}
+
+
+/*!
+ * Enqueue caller request to crypto hardware during init process.
+ * Assumes this function is not called in the middle of a flow,
+ * since we set the QUEUE_LAST_IND flag in the last descriptor.
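+ *
+ * Illustrative use at init time (a sketch; "vals" and "sram_addr" are
+ * placeholder names, not part of this patch):
+ *
+ *	HwDesc_s seq[2];
+ *	unsigned int seq_len = 0;
+ *
+ *	ssi_sram_mgr_const2sram_desc(vals, sram_addr, 2, seq, &seq_len);
+ *	rc = send_request_init(drvdata, seq, seq_len);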
+ * + * \param drvdata + * \param desc The crypto sequence + * \param len The crypto sequence length + * + * \return int Returns "0" upon success + */ +int send_request_init( + struct ssi_drvdata *drvdata, HwDesc_s *desc, unsigned int len) +{ + void __iomem *cc_base = drvdata->cc_base; + struct ssi_request_mgr_handle *req_mgr_h = drvdata->request_mgr_handle; + unsigned int total_seq_len = len; /*initial sequence length*/ + int rc = 0; + + /* Wait for space in HW and SW FIFO. Poll for as much as FIFO_TIMEOUT. */ + rc = request_mgr_queues_status_check(req_mgr_h, cc_base, total_seq_len); + if (unlikely(rc != 0 )) { + return rc; + } + HW_DESC_SET_QUEUE_LAST_IND(&desc[len-1]); + + enqueue_seq(cc_base, desc, len); + + /* Update the free slots in HW queue */ + req_mgr_h->q_free_slots = CC_HAL_READ_REGISTER( + CC_REG_OFFSET(CRY_KERNEL, + DSCRPTR_QUEUE_CONTENT)); + + return 0; +} + + +void complete_request(struct ssi_drvdata *drvdata) +{ + struct ssi_request_mgr_handle *request_mgr_handle = + drvdata->request_mgr_handle; +#ifdef COMP_IN_WQ + queue_delayed_work(request_mgr_handle->workq, &request_mgr_handle->compwork, 0); +#else + tasklet_schedule(&request_mgr_handle->comptask); +#endif +} + +#ifdef COMP_IN_WQ +static void comp_work_handler(struct work_struct *work) +{ + struct ssi_drvdata *drvdata = + container_of(work, struct ssi_drvdata, compwork.work); + + comp_handler((unsigned long)drvdata); +} +#endif + +static void proc_completions(struct ssi_drvdata *drvdata) +{ + struct ssi_crypto_req *ssi_req; + struct platform_device *plat_dev = drvdata->plat_dev; + struct ssi_request_mgr_handle * request_mgr_handle = + drvdata->request_mgr_handle; +#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP) + int rc = 0; +#endif + DECL_CYCLE_COUNT_RESOURCES; + + while(request_mgr_handle->axi_completed) { + request_mgr_handle->axi_completed--; + + /* Dequeue request */ + if (unlikely(request_mgr_handle->req_queue_head == request_mgr_handle->req_queue_tail)) { + SSI_LOG_ERR("Request queue is empty req_queue_head==req_queue_tail==%u\n", request_mgr_handle->req_queue_head); + BUG(); + } + + ssi_req = &request_mgr_handle->req_queue[request_mgr_handle->req_queue_tail]; + END_CYCLE_COUNT_AT(ssi_req->submit_cycle, ssi_req->op_type, STAT_PHASE_5); /* Seq. Comp. */ + END_CC_MONITOR_COUNT(drvdata->cc_base, ssi_req->op_type, STAT_PHASE_6, + drvdata->monitor_null_cycles, &request_mgr_handle->monitor_lock, ssi_req->is_monitored_p); + +#ifdef FLUSH_CACHE_ALL + flush_cache_all(); +#endif + +#ifdef COMPLETION_DELAY + /* Delay */ + { + uint32_t axi_err; + int i; + SSI_LOG_INFO("Delay\n"); + for (i=0;i<1000000;i++) { + axi_err = READ_REGISTER(drvdata->cc_base + CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_ERR)); + } + } +#endif /* COMPLETION_DELAY */ + + if (likely(ssi_req->user_cb != NULL)) { + START_CYCLE_COUNT(); + ssi_req->user_cb(&plat_dev->dev, ssi_req->user_arg, drvdata->cc_base); + END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_3); + } + request_mgr_handle->req_queue_tail = (request_mgr_handle->req_queue_tail + 1) & (MAX_REQUEST_QUEUE_SIZE - 1); + SSI_LOG_DEBUG("Dequeue request tail=%u\n", request_mgr_handle->req_queue_tail); + SSI_LOG_DEBUG("Request completed. 
axi_completed=%d\n", request_mgr_handle->axi_completed); +#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP) + rc = ssi_power_mgr_runtime_put_suspend(&plat_dev->dev); + if (rc != 0) { + SSI_LOG_ERR("Failed to set runtime suspension %d\n",rc); + } +#endif + } +} + +/* Deferred service handler, run as interrupt-fired tasklet */ +static void comp_handler(unsigned long devarg) +{ + struct ssi_drvdata *drvdata = (struct ssi_drvdata *)devarg; + void __iomem *cc_base = drvdata->cc_base; + struct ssi_request_mgr_handle * request_mgr_handle = + drvdata->request_mgr_handle; + + uint32_t irq; + + DECL_CYCLE_COUNT_RESOURCES; + + START_CYCLE_COUNT(); + + irq = (drvdata->irq & SSI_COMP_IRQ_MASK); + + if (irq & SSI_COMP_IRQ_MASK) { + /* To avoid the interrupt from firing as we unmask it, we clear it now */ + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), SSI_COMP_IRQ_MASK); + + /* Avoid race with above clear: Test completion counter once more */ + request_mgr_handle->axi_completed += CC_REG_FLD_GET(CRY_KERNEL, AXIM_MON_COMP, VALUE, + CC_HAL_READ_REGISTER(AXIM_MON_BASE_OFFSET)); + + /* ISR-to-Tasklet latency */ + if (request_mgr_handle->axi_completed) { + /* Only if actually reflects ISR-to-completion-handling latency, i.e., + not duplicate as a result of interrupt after AXIM_MON_ERR clear, before end of loop */ + END_CYCLE_COUNT_AT(drvdata->isr_exit_cycles, STAT_OP_TYPE_GENERIC, STAT_PHASE_1); + } + + while (request_mgr_handle->axi_completed) { + do { + proc_completions(drvdata); + /* At this point (after proc_completions()), request_mgr_handle->axi_completed is always 0. + The following assignment was changed to = (previously was +=) to conform KW restrictions. */ + request_mgr_handle->axi_completed = CC_REG_FLD_GET(CRY_KERNEL, AXIM_MON_COMP, VALUE, + CC_HAL_READ_REGISTER(AXIM_MON_BASE_OFFSET)); + } while (request_mgr_handle->axi_completed > 0); + + /* To avoid the interrupt from firing as we unmask it, we clear it now */ + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), SSI_COMP_IRQ_MASK); + + /* Avoid race with above clear: Test completion counter once more */ + request_mgr_handle->axi_completed += CC_REG_FLD_GET(CRY_KERNEL, AXIM_MON_COMP, VALUE, + CC_HAL_READ_REGISTER(AXIM_MON_BASE_OFFSET)); + }; + + } + /* after verifing that there is nothing to do, Unmask AXI completion interrupt */ + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), + CC_HAL_READ_REGISTER( + CC_REG_OFFSET(HOST_RGF, HOST_IMR)) & ~irq); + END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_2); +} + +/* +resume the queue configuration - no need to take the lock as this happens inside +the spin lock protection +*/ +#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP) +int ssi_request_mgr_runtime_resume_queue(struct ssi_drvdata *drvdata) +{ + struct ssi_request_mgr_handle * request_mgr_handle = drvdata->request_mgr_handle; + + spin_lock_bh(&request_mgr_handle->hw_lock); + request_mgr_handle->is_runtime_suspended = false; + spin_unlock_bh(&request_mgr_handle->hw_lock); + + return 0 ; +} + +/* +suspend the queue configuration. Since it is used for the runtime suspend +only verify that the queue can be suspended. 
+ */
+int ssi_request_mgr_runtime_suspend_queue(struct ssi_drvdata *drvdata)
+{
+	struct ssi_request_mgr_handle *request_mgr_handle =
+		drvdata->request_mgr_handle;
+
+	/* lock the send_request */
+	spin_lock_bh(&request_mgr_handle->hw_lock);
+	if (request_mgr_handle->req_queue_head !=
+	    request_mgr_handle->req_queue_tail) {
+		spin_unlock_bh(&request_mgr_handle->hw_lock);
+		return -EBUSY;
+	}
+	request_mgr_handle->is_runtime_suspended = true;
+	spin_unlock_bh(&request_mgr_handle->hw_lock);
+
+	return 0;
+}
+
+bool ssi_request_mgr_is_queue_runtime_suspend(struct ssi_drvdata *drvdata)
+{
+	struct ssi_request_mgr_handle *request_mgr_handle =
+		drvdata->request_mgr_handle;
+
+	return request_mgr_handle->is_runtime_suspended;
+}
+
+#endif
+
diff --git a/drivers/staging/ccree/ssi_request_mgr.h b/drivers/staging/ccree/ssi_request_mgr.h
new file mode 100644
index 0000000..c09339b
--- /dev/null
+++ b/drivers/staging/ccree/ssi_request_mgr.h
@@ -0,0 +1,60 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+/* \file request_mgr.h
+ * Request Manager
+ */
+
+#ifndef __REQUEST_MGR_H__
+#define __REQUEST_MGR_H__
+
+#include "cc_hw_queue_defs.h"
+
+int request_mgr_init(struct ssi_drvdata *drvdata);
+
+/*!
+ * Enqueue caller request to crypto hardware.
+ *
+ * \param drvdata
+ * \param ssi_req The request to enqueue
+ * \param desc The crypto sequence
+ * \param len The crypto sequence length
+ * \param is_dout If "true": completion is handled by the caller
+ *	  If "false": this function adds a dummy descriptor completion
+ *	  and waits upon completion signal.
+ *
+ * \return int Returns -EINPROGRESS if "is_dout=true"; "0" if "is_dout=false"
+ */
+int send_request(
+	struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
+	HwDesc_s *desc, unsigned int len, bool is_dout);
+
+int send_request_init(
+	struct ssi_drvdata *drvdata, HwDesc_s *desc, unsigned int len);
+
+void complete_request(struct ssi_drvdata *drvdata);
+
+void request_mgr_fini(struct ssi_drvdata *drvdata);
+
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+int ssi_request_mgr_runtime_resume_queue(struct ssi_drvdata *drvdata);
+
+int ssi_request_mgr_runtime_suspend_queue(struct ssi_drvdata *drvdata);
+
+bool ssi_request_mgr_is_queue_runtime_suspend(struct ssi_drvdata *drvdata);
+#endif
+
+#endif /*__REQUEST_MGR_H__*/
diff --git a/drivers/staging/ccree/ssi_sram_mgr.c b/drivers/staging/ccree/ssi_sram_mgr.c
new file mode 100644
index 0000000..50066e1
--- /dev/null
+++ b/drivers/staging/ccree/ssi_sram_mgr.c
@@ -0,0 +1,138 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "ssi_driver.h"
+#include "ssi_sram_mgr.h"
+
+
+/**
+ * struct ssi_sram_mgr_ctx - Internal RAM context manager
+ * @sram_free_offset: the offset to the non-allocated area
+ */
+struct ssi_sram_mgr_ctx {
+	ssi_sram_addr_t sram_free_offset;
+};
+
+
+/**
+ * ssi_sram_mgr_fini() - Cleanup SRAM pool.
+ *
+ * @drvdata: Associated device driver context
+ */
+void ssi_sram_mgr_fini(struct ssi_drvdata *drvdata)
+{
+	struct ssi_sram_mgr_ctx *smgr_ctx = drvdata->sram_mgr_handle;
+
+	/* Free "this" context */
+	if (smgr_ctx != NULL) {
+		memset(smgr_ctx, 0, sizeof(struct ssi_sram_mgr_ctx));
+		kfree(smgr_ctx);
+	}
+}
+
+/**
+ * ssi_sram_mgr_init() - Initializes SRAM pool.
+ * The pool starts right at the beginning of SRAM.
+ * Returns zero for success, negative value otherwise.
+ *
+ * @drvdata: Associated device driver context
+ */
+int ssi_sram_mgr_init(struct ssi_drvdata *drvdata)
+{
+	struct ssi_sram_mgr_ctx *smgr_ctx;
+	int rc;
+
+	/* Allocate "this" context */
+	drvdata->sram_mgr_handle = kzalloc(
+		sizeof(struct ssi_sram_mgr_ctx), GFP_KERNEL);
+	if (!drvdata->sram_mgr_handle) {
+		SSI_LOG_ERR("Not enough memory to allocate SRAM_MGR ctx (%zu)\n",
+			    sizeof(struct ssi_sram_mgr_ctx));
+		rc = -ENOMEM;
+		goto out;
+	}
+	smgr_ctx = drvdata->sram_mgr_handle;
+
+	/* Pool starts at start of SRAM */
+	smgr_ctx->sram_free_offset = 0;
+
+	return 0;
+
+out:
+	ssi_sram_mgr_fini(drvdata);
+	return rc;
+}
+
+/*!
+ * Allocate a buffer from the SRAM pool.
+ * Note: the caller is responsible for freeing the LAST allocated buffer
+ * only. This function does not take care of any fragmentation that may
+ * occur due to the order of alloc/free calls.
+ *
+ * \param drvdata
+ * \param size The requested bytes to allocate
+ */
+ssi_sram_addr_t ssi_sram_mgr_alloc(struct ssi_drvdata *drvdata, uint32_t size)
+{
+	struct ssi_sram_mgr_ctx *smgr_ctx = drvdata->sram_mgr_handle;
+	ssi_sram_addr_t p;
+
+	if (unlikely((size & 0x3) != 0)) {
+		SSI_LOG_ERR("Requested buffer size (%u) is not multiple of 4",
+			    size);
+		return NULL_SRAM_ADDR;
+	}
+	if (unlikely(size > (SSI_CC_SRAM_SIZE - smgr_ctx->sram_free_offset))) {
+		SSI_LOG_ERR("Not enough space to allocate %u B (at offset %llu)\n",
+			    size, smgr_ctx->sram_free_offset);
+		return NULL_SRAM_ADDR;
+	}
+
+	p = smgr_ctx->sram_free_offset;
+	smgr_ctx->sram_free_offset += size;
+	SSI_LOG_DEBUG("Allocated %u B @ %u\n", size, (unsigned int)p);
+	return p;
+}
+
+/**
+ * ssi_sram_mgr_const2sram_desc() - Create const descriptors sequence to
+ *	set values in given array into SRAM.
+ * Note: each const value can't exceed word size.
+ *
+ * @src: A pointer to array of words to set as consts.
+ * @dst: The target SRAM buffer to set into
+ * @nelement: The number of words in "src" array
+ * @seq: A pointer to the given IN/OUT descriptor sequence
+ * @seq_len: A pointer to the given IN/OUT sequence length
+ */
+void ssi_sram_mgr_const2sram_desc(
+	const uint32_t *src, ssi_sram_addr_t dst,
+	unsigned int nelement,
+	HwDesc_s *seq, unsigned int *seq_len)
+{
+	uint32_t i;
+	unsigned int idx = *seq_len;
+
+	for (i = 0; i < nelement; i++, idx++) {
+		HW_DESC_INIT(&seq[idx]);
+		HW_DESC_SET_DIN_CONST(&seq[idx], src[i], sizeof(uint32_t));
+		HW_DESC_SET_DOUT_SRAM(&seq[idx], dst + (i * sizeof(uint32_t)), sizeof(uint32_t));
+		HW_DESC_SET_FLOW_MODE(&seq[idx], BYPASS);
+	}
+
+	*seq_len = idx;
+}
+
diff --git a/drivers/staging/ccree/ssi_sram_mgr.h b/drivers/staging/ccree/ssi_sram_mgr.h
new file mode 100644
index 0000000..d71fbaf
--- /dev/null
+++ b/drivers/staging/ccree/ssi_sram_mgr.h
@@ -0,0 +1,80 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __SSI_SRAM_MGR_H__
+#define __SSI_SRAM_MGR_H__
+
+
+#ifndef SSI_CC_SRAM_SIZE
+#define SSI_CC_SRAM_SIZE 4096
+#endif
+
+struct ssi_drvdata;
+
+/**
+ * Address (offset) within CC internal SRAM
+ */
+
+typedef uint64_t ssi_sram_addr_t;
+
+#define NULL_SRAM_ADDR ((ssi_sram_addr_t)-1)
+
+/*!
+ * Initializes the SRAM pool.
+ * The pool starts right at the beginning of SRAM (offset 0).
+ *
+ * \param drvdata
+ *
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_sram_mgr_init(struct ssi_drvdata *drvdata);
+
+/*!
+ * Uninits SRAM pool.
+ *
+ * \param drvdata
+ */
+void ssi_sram_mgr_fini(struct ssi_drvdata *drvdata);
+
+/*!
+ * Allocate a buffer from the SRAM pool.
+ * Note: the caller is responsible for freeing the LAST allocated buffer
+ * only. This function does not take care of any fragmentation that may
+ * occur due to the order of alloc/free calls.
+ *
+ * \param drvdata
+ * \param size The requested bytes to allocate
+ */
+ssi_sram_addr_t ssi_sram_mgr_alloc(struct ssi_drvdata *drvdata, uint32_t size);
+
+/**
+ * ssi_sram_mgr_const2sram_desc() - Create const descriptors sequence to
+ *	set values in given array into SRAM.
+ * Note: each const value can't exceed word size.
+ *
+ * @src: A pointer to array of words to set as consts.
+ * @dst: The target SRAM buffer to set into
+ * @nelement: The number of words in "src" array
+ * @seq: A pointer to the given IN/OUT descriptor sequence
+ * @seq_len: A pointer to the given IN/OUT sequence length
+ */
+void ssi_sram_mgr_const2sram_desc(
+	const uint32_t *src, ssi_sram_addr_t dst,
+	unsigned int nelement,
+	HwDesc_s *seq, unsigned int *seq_len);
+
+#endif /*__SSI_SRAM_MGR_H__*/
diff --git a/drivers/staging/ccree/ssi_sysfs.c b/drivers/staging/ccree/ssi_sysfs.c
new file mode 100644
index 0000000..6db7573
--- /dev/null
+++ b/drivers/staging/ccree/ssi_sysfs.c
@@ -0,0 +1,440 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +#include +#include "ssi_config.h" +#include "ssi_driver.h" +#include "cc_crypto_ctx.h" +#include "ssi_sysfs.h" + +#ifdef ENABLE_CC_SYSFS + +static struct ssi_drvdata *sys_get_drvdata(void); + +#ifdef CC_CYCLE_COUNT + +#include + +struct stat_item { + unsigned int min; + unsigned int max; + cycles_t sum; + unsigned int count; +}; + +struct stat_name { + const char *op_type_name; + const char *stat_phase_name[MAX_STAT_PHASES]; +}; + +static struct stat_name stat_name_db[MAX_STAT_OP_TYPES] = +{ + { + /* STAT_OP_TYPE_NULL */ + .op_type_name = "NULL", + .stat_phase_name = {NULL}, + }, + { + .op_type_name = "Encode", + .stat_phase_name[STAT_PHASE_0] = "Init and sanity checks", + .stat_phase_name[STAT_PHASE_1] = "Map buffers", + .stat_phase_name[STAT_PHASE_2] = "Create sequence", + .stat_phase_name[STAT_PHASE_3] = "Send Request", + .stat_phase_name[STAT_PHASE_4] = "HW-Q push", + .stat_phase_name[STAT_PHASE_5] = "Sequence completion", + .stat_phase_name[STAT_PHASE_6] = "HW cycles", + }, + { .op_type_name = "Decode", + .stat_phase_name[STAT_PHASE_0] = "Init and sanity checks", + .stat_phase_name[STAT_PHASE_1] = "Map buffers", + .stat_phase_name[STAT_PHASE_2] = "Create sequence", + .stat_phase_name[STAT_PHASE_3] = "Send Request", + .stat_phase_name[STAT_PHASE_4] = "HW-Q push", + .stat_phase_name[STAT_PHASE_5] = "Sequence completion", + .stat_phase_name[STAT_PHASE_6] = "HW cycles", + }, + { .op_type_name = "Setkey", + .stat_phase_name[STAT_PHASE_0] = "Init and sanity checks", + .stat_phase_name[STAT_PHASE_1] = "Copy key to ctx", + .stat_phase_name[STAT_PHASE_2] = "Create sequence", + .stat_phase_name[STAT_PHASE_3] = "Send Request", + .stat_phase_name[STAT_PHASE_4] = "HW-Q push", + .stat_phase_name[STAT_PHASE_5] = "Sequence completion", + .stat_phase_name[STAT_PHASE_6] = "HW cycles", + }, + { + .op_type_name = "Generic", + .stat_phase_name[STAT_PHASE_0] = "Interrupt", + .stat_phase_name[STAT_PHASE_1] = "ISR-to-Tasklet", + .stat_phase_name[STAT_PHASE_2] = "Tasklet start-to-end", + .stat_phase_name[STAT_PHASE_3] = "Tasklet:user_cb()", + .stat_phase_name[STAT_PHASE_4] = "Tasklet:dx_X_complete() - w/o X_complete()", + .stat_phase_name[STAT_PHASE_5] = "", + .stat_phase_name[STAT_PHASE_6] = "HW cycles", + } +}; + +/* + * Structure used to create a directory + * and its attributes in sysfs. 
+ */ +struct sys_dir { + struct kobject *sys_dir_kobj; + struct attribute_group sys_dir_attr_group; + struct attribute **sys_dir_attr_list; + uint32_t num_of_attrs; + struct ssi_drvdata *drvdata; /* Associated driver context */ +}; + +/* top level directory structures */ +struct sys_dir sys_top_dir; + +static DEFINE_SPINLOCK(stat_lock); + +/* List of DBs */ +static struct stat_item stat_host_db[MAX_STAT_OP_TYPES][MAX_STAT_PHASES]; +static struct stat_item stat_cc_db[MAX_STAT_OP_TYPES][MAX_STAT_PHASES]; + + +static void init_db(struct stat_item item[MAX_STAT_OP_TYPES][MAX_STAT_PHASES]) +{ + unsigned int i, j; + + /* Clear db */ + for (i=0; icount++; + item->sum += result; + if (result < item->min) + item->min = result; + if (result > item->max ) + item->max = result; +} + +static void display_db(struct stat_item item[MAX_STAT_OP_TYPES][MAX_STAT_PHASES]) +{ + unsigned int i, j; + uint64_t avg; + + for (i=STAT_OP_TYPE_ENCODE; i 0) { + avg = (uint64_t)item[i][j].sum; + do_div(avg, item[i][j].count); + SSI_LOG_ERR("%s, %s: min=%d avg=%d max=%d sum=%lld count=%d\n", + stat_name_db[i].op_type_name, stat_name_db[i].stat_phase_name[j], + item[i][j].min, (int)avg, item[i][j].max, (long long)item[i][j].sum, item[i][j].count); + } + } + } +} + + +/************************************** + * Attributes show functions section * + **************************************/ + +static ssize_t ssi_sys_stats_host_db_clear(struct kobject *kobj, + struct kobj_attribute *attr, const char *buf, size_t count) +{ + init_db(stat_host_db); + return count; +} + +static ssize_t ssi_sys_stats_cc_db_clear(struct kobject *kobj, + struct kobj_attribute *attr, const char *buf, size_t count) +{ + init_db(stat_cc_db); + return count; +} + +static ssize_t ssi_sys_stat_host_db_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) +{ + int i, j ; + char line[512]; + uint32_t min_cyc, max_cyc; + uint64_t avg; + ssize_t buf_len, tmp_len=0; + + buf_len = scnprintf(buf,PAGE_SIZE, + "phase\t\t\t\t\t\t\tmin[cy]\tavg[cy]\tmax[cy]\t#samples\n"); + if ( buf_len <0 )/* scnprintf shouldn't return negative value according to its implementation*/ + return buf_len; + for (i=STAT_OP_TYPE_ENCODE; i 0) { + avg = (uint64_t)stat_host_db[i][j].sum; + do_div(avg, stat_host_db[i][j].count); + min_cyc = stat_host_db[i][j].min; + max_cyc = stat_host_db[i][j].max; + } else { + avg = min_cyc = max_cyc = 0; + } + tmp_len = scnprintf(line,512, + "%s::%s\t\t\t\t\t%6u\t%6u\t%6u\t%7u\n", + stat_name_db[i].op_type_name, + stat_name_db[i].stat_phase_name[j], + min_cyc, (unsigned int)avg, max_cyc, + stat_host_db[i][j].count); + if ( tmp_len <0 )/* scnprintf shouldn't return negative value according to its implementation*/ + return buf_len; + if ( buf_len + tmp_len >= PAGE_SIZE) + return buf_len; + buf_len += tmp_len; + strncat(buf, line,512); + } + } + return buf_len; +} + +static ssize_t ssi_sys_stat_cc_db_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) +{ + int i; + char line[256]; + uint32_t min_cyc, max_cyc; + uint64_t avg; + ssize_t buf_len,tmp_len=0; + + buf_len = scnprintf(buf,PAGE_SIZE, + "phase\tmin[cy]\tavg[cy]\tmax[cy]\t#samples\n"); + if ( buf_len <0 )/* scnprintf shouldn't return negative value according to its implementation*/ + return buf_len; + for (i=STAT_OP_TYPE_ENCODE; i 0) { + avg = (uint64_t)stat_cc_db[i][STAT_PHASE_6].sum; + do_div(avg, stat_cc_db[i][STAT_PHASE_6].count); + min_cyc = stat_cc_db[i][STAT_PHASE_6].min; + max_cyc = stat_cc_db[i][STAT_PHASE_6].max; + } else { + avg = min_cyc = max_cyc = 0; + } + 
tmp_len = scnprintf(line,256, + "%s\t%6u\t%6u\t%6u\t%7u\n", + stat_name_db[i].op_type_name, + min_cyc, + (unsigned int)avg, + max_cyc, + stat_cc_db[i][STAT_PHASE_6].count); + + if ( tmp_len < 0 )/* scnprintf shouldn't return negative value according to its implementation*/ + return buf_len; + + if ( buf_len + tmp_len >= PAGE_SIZE) + return buf_len; + buf_len += tmp_len; + strncat(buf, line,256); + } + return buf_len; +} + +void update_host_stat(unsigned int op_type, unsigned int phase, cycles_t result) +{ + unsigned long flags; + + spin_lock_irqsave(&stat_lock, flags); + update_db(&(stat_host_db[op_type][phase]), (unsigned int)result); + spin_unlock_irqrestore(&stat_lock, flags); +} + +void update_cc_stat( + unsigned int op_type, + unsigned int phase, + unsigned int elapsed_cycles) +{ + update_db(&(stat_cc_db[op_type][phase]), elapsed_cycles); +} + +void display_all_stat_db(void) +{ + SSI_LOG_ERR("\n======= CYCLE COUNT STATS =======\n"); + display_db(stat_host_db); + SSI_LOG_ERR("\n======= CC HW CYCLE COUNT STATS =======\n"); + display_db(stat_cc_db); +} +#endif /*CC_CYCLE_COUNT*/ + + + +static ssize_t ssi_sys_regdump_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) +{ + struct ssi_drvdata *drvdata = sys_get_drvdata(); + uint32_t register_value; + void __iomem* cc_base = drvdata->cc_base; + int offset = 0; + + register_value = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_SIGNATURE)); + offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s \t(0x%lX)\t 0x%08X \n", "HOST_SIGNATURE ", DX_HOST_SIGNATURE_REG_OFFSET, register_value); + register_value = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IRR)); + offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s \t(0x%lX)\t 0x%08X \n", "HOST_IRR ", DX_HOST_IRR_REG_OFFSET, register_value); + register_value = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_POWER_DOWN_EN)); + offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s \t(0x%lX)\t 0x%08X \n", "HOST_POWER_DOWN_EN ", DX_HOST_POWER_DOWN_EN_REG_OFFSET, register_value); + register_value = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_ERR)); + offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s \t(0x%lX)\t 0x%08X \n", "AXIM_MON_ERR ", DX_AXIM_MON_ERR_REG_OFFSET, register_value); + register_value = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, DSCRPTR_QUEUE_CONTENT)); + offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s \t(0x%lX)\t 0x%08X \n", "DSCRPTR_QUEUE_CONTENT", DX_DSCRPTR_QUEUE_CONTENT_REG_OFFSET, register_value); + return offset; +} + +static ssize_t ssi_sys_help_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) +{ + char* help_str[]={ + "cat reg_dump ", "Print several of CC register values", + #if defined CC_CYCLE_COUNT + "cat stats_host ", "Print host statistics", + "echo > stats_host", "Clear host statistics database", + "cat stats_cc ", "Print CC statistics", + "echo > stats_cc ", "Clear CC statistics database", + #endif + }; + int i=0, offset = 0; + + offset += scnprintf(buf + offset, PAGE_SIZE - offset, "Usage:\n"); + for ( i = 0; i < (sizeof(help_str)/sizeof(help_str[0])); i+=2) { + offset += scnprintf(buf + offset, PAGE_SIZE - offset, "%s\t\t%s\n", help_str[i], help_str[i+1]); + } + return offset; +} + +/******************************************************** + * SYSFS objects * + ********************************************************/ +/* + * Structure used to create a directory + * and its attributes in sysfs. 
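+ * Each directory is backed by its own kobject and its attributes
+ * are registered on that kobject as a single attribute group.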
+ */ +struct sys_dir { + struct kobject *sys_dir_kobj; + struct attribute_group sys_dir_attr_group; + struct attribute **sys_dir_attr_list; + uint32_t num_of_attrs; + struct ssi_drvdata *drvdata; /* Associated driver context */ +}; + +/* top level directory structures */ +static struct sys_dir sys_top_dir; + +/* TOP LEVEL ATTRIBUTES */ +static struct kobj_attribute ssi_sys_top_level_attrs[] = { + __ATTR(dump_regs, 0444, ssi_sys_regdump_show, NULL), + __ATTR(help, 0444, ssi_sys_help_show, NULL), +#if defined CC_CYCLE_COUNT + __ATTR(stats_host, 0664, ssi_sys_stat_host_db_show, ssi_sys_stats_host_db_clear), + __ATTR(stats_cc, 0664, ssi_sys_stat_cc_db_show, ssi_sys_stats_cc_db_clear), +#endif + +}; + +static struct ssi_drvdata *sys_get_drvdata(void) +{ + /* TODO: supporting multiple SeP devices would require avoiding + * global "top_dir" and finding associated "top_dir" by traversing + * up the tree to the kobject which matches one of the top_dir's */ + return sys_top_dir.drvdata; +} + +static int sys_init_dir(struct sys_dir *sys_dir, struct ssi_drvdata *drvdata, + struct kobject *parent_dir_kobj, const char *dir_name, + struct kobj_attribute *attrs, uint32_t num_of_attrs) +{ + int i; + + memset(sys_dir, 0, sizeof(struct sys_dir)); + + sys_dir->drvdata = drvdata; + + /* initialize directory kobject */ + sys_dir->sys_dir_kobj = + kobject_create_and_add(dir_name, parent_dir_kobj); + + if (!(sys_dir->sys_dir_kobj)) + return -ENOMEM; + /* allocate memory for directory's attributes list */ + sys_dir->sys_dir_attr_list = + kzalloc(sizeof(struct attribute *) * (num_of_attrs + 1), + GFP_KERNEL); + + if (!(sys_dir->sys_dir_attr_list)) { + kobject_put(sys_dir->sys_dir_kobj); + return -ENOMEM; + } + + sys_dir->num_of_attrs = num_of_attrs; + + /* initialize attributes list */ + for (i = 0; i < num_of_attrs; ++i) + sys_dir->sys_dir_attr_list[i] = &(attrs[i].attr); + + /* last list entry should be NULL */ + sys_dir->sys_dir_attr_list[num_of_attrs] = NULL; + + sys_dir->sys_dir_attr_group.attrs = sys_dir->sys_dir_attr_list; + + return sysfs_create_group(sys_dir->sys_dir_kobj, + &(sys_dir->sys_dir_attr_group)); +} + +static void sys_free_dir(struct sys_dir *sys_dir) +{ + if (!sys_dir) + return; + + kfree(sys_dir->sys_dir_attr_list); + + if (sys_dir->sys_dir_kobj != NULL) + kobject_put(sys_dir->sys_dir_kobj); +} + +int ssi_sysfs_init(struct kobject *sys_dev_obj, struct ssi_drvdata *drvdata) +{ + int retval; + +#if defined CC_CYCLE_COUNT + /* Init. statistics */ + init_db(stat_host_db); + init_db(stat_cc_db); +#endif + + SSI_LOG_ERR("setup sysfs under %s\n", sys_dev_obj->name); + + /* Initialize top directory */ + retval = sys_init_dir(&sys_top_dir, drvdata, sys_dev_obj, + "cc_info", ssi_sys_top_level_attrs, + sizeof(ssi_sys_top_level_attrs) / + sizeof(struct kobj_attribute)); + return retval; +} + +void ssi_sysfs_fini(void) +{ + sys_free_dir(&sys_top_dir); +} + +#endif /*ENABLE_CC_SYSFS*/ + diff --git a/drivers/staging/ccree/ssi_sysfs.h b/drivers/staging/ccree/ssi_sysfs.h new file mode 100644 index 0000000..baeac1d --- /dev/null +++ b/drivers/staging/ccree/ssi_sysfs.h @@ -0,0 +1,54 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +/* \file ssi_sysfs.h + ARM CryptoCell sysfs APIs + */ + +#ifndef __SSI_SYSFS_H__ +#define __SSI_SYSFS_H__ + +#include + +/* forward declaration */ +struct ssi_drvdata; + +enum stat_phase { + STAT_PHASE_0 = 0, + STAT_PHASE_1, + STAT_PHASE_2, + STAT_PHASE_3, + STAT_PHASE_4, + STAT_PHASE_5, + STAT_PHASE_6, + MAX_STAT_PHASES, +}; +enum stat_op { + STAT_OP_TYPE_NULL = 0, + STAT_OP_TYPE_ENCODE, + STAT_OP_TYPE_DECODE, + STAT_OP_TYPE_SETKEY, + STAT_OP_TYPE_GENERIC, + MAX_STAT_OP_TYPES, +}; + +int ssi_sysfs_init(struct kobject *sys_dev_obj, struct ssi_drvdata *drvdata); +void ssi_sysfs_fini(void); +void update_host_stat(unsigned int op_type, unsigned int phase, cycles_t result); +void update_cc_stat(unsigned int op_type, unsigned int phase, unsigned int elapsed_cycles); +void display_all_stat_db(void); + +#endif /*__SSI_SYSFS_H__*/ From patchwork Sun Apr 23 09:26:10 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 97959 Delivered-To: patch@linaro.org Received: by 10.140.109.52 with SMTP id k49csp1004909qgf; Sun, 23 Apr 2017 02:27:28 -0700 (PDT) X-Received: by 10.99.181.92 with SMTP id u28mr19193298pgo.102.1492939648457; Sun, 23 Apr 2017 02:27:28 -0700 (PDT) Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id h90si15364267pfa.43.2017.04.23.02.27.28; Sun, 23 Apr 2017 02:27:28 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1044406AbdDWJ1L (ORCPT + 1 other); Sun, 23 Apr 2017 05:27:11 -0400 Received: from foss.arm.com ([217.140.101.70]:46880 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1044378AbdDWJ0r (ORCPT ); Sun, 23 Apr 2017 05:26:47 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 866BF344; Sun, 23 Apr 2017 02:26:46 -0700 (PDT) Received: from gby.kfn.arm.com (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9613D3F220; Sun, 23 Apr 2017 02:26:41 -0700 (PDT) From: Gilad Ben-Yossef To: Herbert Xu , "David S. 
Miller" , Rob Herring , Mark Rutland , Greg Kroah-Hartman , devel@driverdev.osuosl.org Cc: linux-crypto@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, gilad.benyossef@arm.com, Binoy Jayan , Ofir Drang , Stuart Yoder , Stephan Muller Subject: [PATCH v3 02/15] staging: ccree: add ahash support Date: Sun, 23 Apr 2017 12:26:10 +0300 Message-Id: <1492939583-25688-3-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1492939583-25688-1-git-send-email-gilad@benyossef.com> References: <1492939583-25688-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Add CryptoCell async. hash and HMAC support. Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/Kconfig | 6 + drivers/staging/ccree/Makefile | 2 +- drivers/staging/ccree/cc_crypto_ctx.h | 22 + drivers/staging/ccree/hash_defs.h | 78 + drivers/staging/ccree/ssi_buffer_mgr.c | 311 +++- drivers/staging/ccree/ssi_buffer_mgr.h | 6 + drivers/staging/ccree/ssi_driver.c | 11 +- drivers/staging/ccree/ssi_driver.h | 4 +- drivers/staging/ccree/ssi_hash.c | 2732 ++++++++++++++++++++++++++++++++ drivers/staging/ccree/ssi_hash.h | 101 ++ drivers/staging/ccree/ssi_pm.c | 4 + 11 files changed, 3263 insertions(+), 14 deletions(-) create mode 100644 drivers/staging/ccree/hash_defs.h create mode 100644 drivers/staging/ccree/ssi_hash.c create mode 100644 drivers/staging/ccree/ssi_hash.h -- 2.1.4 diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig index 0f723d7..a528a99 100644 --- a/drivers/staging/ccree/Kconfig +++ b/drivers/staging/ccree/Kconfig @@ -2,6 +2,12 @@ config CRYPTO_DEV_CCREE tristate "Support for ARM TrustZone CryptoCell C7XX family of Crypto accelerators" depends on CRYPTO_HW && OF && HAS_DMA default n + select CRYPTO_HASH + select CRYPTO_SHA1 + select CRYPTO_MD5 + select CRYPTO_SHA256 + select CRYPTO_SHA512 + select CRYPTO_HMAC help Say 'Y' to enable a driver for the Arm TrustZone CryptoCell C7xx. Currently only the CryptoCell 712 REE is supported. diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile index 972af69..f94e225 100644 --- a/drivers/staging/ccree/Makefile +++ b/drivers/staging/ccree/Makefile @@ -1,2 +1,2 @@ obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o -ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o +ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_hash.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h index 3547cb4..a4aa066 100644 --- a/drivers/staging/ccree/cc_crypto_ctx.h +++ b/drivers/staging/ccree/cc_crypto_ctx.h @@ -220,6 +220,28 @@ struct drv_ctx_generic { } __attribute__((__may_alias__)); +struct drv_ctx_hash { + enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_HASH */ + enum drv_hash_mode mode; + uint8_t digest[CC_DIGEST_SIZE_MAX]; + /* reserve to end of allocated context size */ + uint8_t reserved[CC_CTX_SIZE - 2 * sizeof(uint32_t) - + CC_DIGEST_SIZE_MAX]; +}; + +/* !!!! 
drv_ctx_hmac should have the same structure as drv_ctx_hash except
+	k0, k0_size fields */
+struct drv_ctx_hmac {
+	enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_HMAC */
+	enum drv_hash_mode mode;
+	uint8_t digest[CC_DIGEST_SIZE_MAX];
+	uint32_t k0[CC_HMAC_BLOCK_SIZE_MAX/sizeof(uint32_t)];
+	uint32_t k0_size;
+	/* reserve to end of allocated context size */
+	uint8_t reserved[CC_CTX_SIZE - 3 * sizeof(uint32_t) -
+			CC_DIGEST_SIZE_MAX - CC_HMAC_BLOCK_SIZE_MAX];
+};
+
 /*******************************************************************/
 /***************** MESSAGE BASED CONTEXTS **************************/
 /*******************************************************************/
diff --git a/drivers/staging/ccree/hash_defs.h b/drivers/staging/ccree/hash_defs.h
new file mode 100644
index 0000000..5ab0861
--- /dev/null
+++ b/drivers/staging/ccree/hash_defs.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright (C) 2012-2017 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _HASH_DEFS_H__
+#define _HASH_DEFS_H__
+
+#include "cc_crypto_ctx.h"
+
+/* this file provides definitions required for hash engine drivers */
+#ifndef CC_CONFIG_HASH_SHA_512_SUPPORTED
+#define SEP_HASH_LENGTH_WORDS 2
+#else
+#define SEP_HASH_LENGTH_WORDS 4
+#endif
+
+#ifdef BIG__ENDIAN
+#define OPAD_CURRENT_LENGTH 0x40000000, 0x00000000 , 0x00000000, 0x00000000
+#define HASH_LARVAL_MD5 0x76543210, 0xFEDCBA98, 0x89ABCDEF, 0x01234567
+#define HASH_LARVAL_SHA1 0xF0E1D2C3, 0x76543210, 0xFEDCBA98, 0x89ABCDEF, 0x01234567
+#define HASH_LARVAL_SHA224 0XA44FFABE, 0XA78FF964, 0X11155868, 0X310BC0FF, 0X39590EF7, 0X17DD7030, 0X07D57C36, 0XD89E05C1
+#define HASH_LARVAL_SHA256 0X19CDE05B, 0XABD9831F, 0X8C68059B, 0X7F520E51, 0X3AF54FA5, 0X72F36E3C, 0X85AE67BB, 0X67E6096A
+#define HASH_LARVAL_SHA384 0X1D48B547, 0XA44FFABE, 0X0D2E0CDB, 0XA78FF964, 0X874AB48E, 0X11155868, 0X67263367, 0X310BC0FF, 0XD8EC2F15, 0X39590EF7, 0X5A015991, 0X17DD7030, 0X2A299A62, 0X07D57C36, 0X5D9DBBCB, 0XD89E05C1
+#define HASH_LARVAL_SHA512 0X19CDE05B, 0X79217E13, 0XABD9831F, 0X6BBD41FB, 0X8C68059B, 0X1F6C3E2B, 0X7F520E51, 0XD182E6AD, 0X3AF54FA5, 0XF1361D5F, 0X72F36E3C, 0X2BF894FE, 0X85AE67BB, 0X3BA7CA84, 0X67E6096A, 0X08C9BCF3
+#else
+#define OPAD_CURRENT_LENGTH 0x00000040, 0x00000000, 0x00000000, 0x00000000
+#define HASH_LARVAL_MD5 0x10325476, 0x98BADCFE, 0xEFCDAB89, 0x67452301
+#define HASH_LARVAL_SHA1 0xC3D2E1F0, 0x10325476, 0x98BADCFE, 0xEFCDAB89, 0x67452301
+#define HASH_LARVAL_SHA224 0xbefa4fa4, 0x64f98fa7, 0x68581511, 0xffc00b31, 0xf70e5939, 0x3070dd17, 0x367cd507, 0xc1059ed8
+#define HASH_LARVAL_SHA256 0x5be0cd19, 0x1f83d9ab, 0x9b05688c, 0x510e527f, 0xa54ff53a, 0x3c6ef372, 0xbb67ae85, 0x6a09e667
+#define HASH_LARVAL_SHA384 0X47B5481D, 0XBEFA4FA4, 0XDB0C2E0D, 0X64F98FA7, 0X8EB44A87, 0X68581511, 0X67332667, 0XFFC00B31, 0X152FECD8, 0XF70E5939, 0X9159015A, 0X3070DD17, 0X629A292A, 0X367CD507, 0XCBBB9D5D, 0XC1059ED8
+#define HASH_LARVAL_SHA512 0x5be0cd19, 0x137e2179, 0x1f83d9ab, 0xfb41bd6b, 0x9b05688c, 0x2b3e6c1f, 0x510e527f, 0xade682d1, 
0xa54ff53a, 0x5f1d36f1, 0x3c6ef372, 0xfe94f82b, 0xbb67ae85, 0x84caa73b, 0x6a09e667, 0xf3bcc908
+#endif
+
+enum HashConfig1Padding {
+	HASH_PADDING_DISABLED = 0,
+	HASH_PADDING_ENABLED = 1,
+	HASH_DIGEST_RESULT_LITTLE_ENDIAN = 2,
+	HASH_CONFIG1_PADDING_RESERVE32 = INT32_MAX,
+};
+
+enum HashCipherDoPadding {
+	DO_NOT_PAD = 0,
+	DO_PAD = 1,
+	HASH_CIPHER_DO_PADDING_RESERVE32 = INT32_MAX,
+};
+
+typedef struct SepHashPrivateContext {
+	/* The current length is placed at the end of the context buffer because the
+	 * hash context is used for all HMAC operations as well. The HMAC context
+	 * includes a 64 byte K0 field. The size of the struct drv_ctx_hash reserved
+	 * field is 88 or 184 bytes, depending on whether SHA512 is supported (in
+	 * which case the context size is 256 bytes). This means that this structure,
+	 * without the reserved field, is 20 bytes when SHA512 is not supported
+	 * (SEP_HASH_LENGTH_WORDS defined to 2) and 28 bytes when it is
+	 * (SEP_HASH_LENGTH_WORDS defined to 4). */
+	uint32_t reserved[(sizeof(struct drv_ctx_hash)/sizeof(uint32_t)) - SEP_HASH_LENGTH_WORDS - 3];
+	uint32_t CurrentDigestedLength[SEP_HASH_LENGTH_WORDS];
+	uint32_t KeyType;
+	uint32_t dataCompleted;
+	uint32_t hmacFinalization;
+	/* no space left */
+} SepHashPrivateContext_s;
+
+#endif /*_HASH_DEFS_H__*/
+
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 3a74980..aceb01c 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -27,6 +28,7 @@
 
 #include "ssi_buffer_mgr.h"
 #include "cc_lli_defs.h"
+#include "ssi_hash.h"
 
 #define LLI_MAX_NUM_OF_DATA_ENTRIES 128
 #define LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES 4
@@ -281,11 +283,6 @@ static inline int ssi_buffer_mgr_render_scatterlist_to_mlli(
 	return 0;
 }
 
-static int ssi_buffer_mgr_generate_mlli (
-	struct device *dev,
-	struct buffer_array *sg_data,
-	struct mlli_params *mlli_params) __maybe_unused;
-
 static int ssi_buffer_mgr_generate_mlli(
 	struct device *dev,
 	struct buffer_array *sg_data,
@@ -427,11 +424,6 @@ ssi_buffer_mgr_dma_map_sg(struct device *dev, struct scatterlist *sg, uint32_t n
 	return 0;
 }
 
-static int ssi_buffer_mgr_map_scatterlist (struct device *dev,
-	struct scatterlist *sg, unsigned int nbytes, int direction,
-	uint32_t *nents, uint32_t max_sg_nents, uint32_t *lbytes,
-	uint32_t *mapped_nents) __maybe_unused;
-
 static int ssi_buffer_mgr_map_scatterlist(
 	struct device *dev, struct scatterlist *sg, unsigned int nbytes, int direction,
@@ -493,6 +485,305 @@ static int ssi_buffer_mgr_map_scatterlist(
 	return 0;
 }
 
+static inline int ssi_ahash_handle_curr_buf(struct device *dev,
+					struct ahash_req_ctx *areq_ctx,
+					uint8_t* curr_buff,
+					uint32_t curr_buff_cnt,
+					struct buffer_array *sg_data)
+{
+	SSI_LOG_DEBUG(" handle curr buff %x set to DLLI \n", curr_buff_cnt);
+	/* create sg for the current buffer */
+	sg_init_one(areq_ctx->buff_sg,curr_buff, curr_buff_cnt);
+	if (unlikely(dma_map_sg(dev, areq_ctx->buff_sg, 1,
+				DMA_TO_DEVICE) != 1)) {
+		SSI_LOG_ERR("dma_map_sg() "
+			    "src buffer failed\n");
+		return -ENOMEM;
+	}
+	SSI_LOG_DEBUG("Mapped curr_buff: dma_address=0x%llX "
+		      "page_link=0x%08lX addr=%pK "
+		      "offset=%u length=%u\n",
+		      (unsigned long long)sg_dma_address(areq_ctx->buff_sg),
+		      areq_ctx->buff_sg->page_link,
+		      sg_virt(areq_ctx->buff_sg),
+		      areq_ctx->buff_sg->offset,
+		      
areq_ctx->buff_sg->length); + areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI; + areq_ctx->curr_sg = areq_ctx->buff_sg; + areq_ctx->in_nents = 0; + /* prepare for case of MLLI */ + ssi_buffer_mgr_add_scatterlist_entry(sg_data, 1, areq_ctx->buff_sg, + curr_buff_cnt, 0, false, NULL); + return 0; +} + +int ssi_buffer_mgr_map_hash_request_final( + struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update) +{ + struct ahash_req_ctx *areq_ctx = (struct ahash_req_ctx *)ctx; + struct device *dev = &drvdata->plat_dev->dev; + uint8_t* curr_buff = areq_ctx->buff_index ? areq_ctx->buff1 : + areq_ctx->buff0; + uint32_t *curr_buff_cnt = areq_ctx->buff_index ? &areq_ctx->buff1_cnt : + &areq_ctx->buff0_cnt; + struct mlli_params *mlli_params = &areq_ctx->mlli_params; + struct buffer_array sg_data; + struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle; + uint32_t dummy = 0; + uint32_t mapped_nents = 0; + + SSI_LOG_DEBUG(" final params : curr_buff=%pK " + "curr_buff_cnt=0x%X nbytes = 0x%X " + "src=%pK curr_index=%u\n", + curr_buff, *curr_buff_cnt, nbytes, + src, areq_ctx->buff_index); + /* Init the type of the dma buffer */ + areq_ctx->data_dma_buf_type = SSI_DMA_BUF_NULL; + mlli_params->curr_pool = NULL; + sg_data.num_of_buffers = 0; + areq_ctx->in_nents = 0; + + if (unlikely(nbytes == 0 && *curr_buff_cnt == 0)) { + /* nothing to do */ + return 0; + } + + /*TODO: copy data in case that buffer is enough for operation */ + /* map the previous buffer */ + if (*curr_buff_cnt != 0 ) { + if (ssi_ahash_handle_curr_buf(dev, areq_ctx, curr_buff, + *curr_buff_cnt, &sg_data) != 0) { + return -ENOMEM; + } + } + + if (src && (nbytes > 0) && do_update) { + if ( unlikely( ssi_buffer_mgr_map_scatterlist( dev,src, + nbytes, + DMA_TO_DEVICE, + &areq_ctx->in_nents, + LLI_MAX_NUM_OF_DATA_ENTRIES, + &dummy, &mapped_nents))){ + goto unmap_curr_buff; + } + if ( src && (mapped_nents == 1) + && (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) ) { + memcpy(areq_ctx->buff_sg,src, + sizeof(struct scatterlist)); + areq_ctx->buff_sg->length = nbytes; + areq_ctx->curr_sg = areq_ctx->buff_sg; + areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI; + } else { + areq_ctx->data_dma_buf_type = SSI_DMA_BUF_MLLI; + } + + } + + /*build mlli */ + if (unlikely(areq_ctx->data_dma_buf_type == SSI_DMA_BUF_MLLI)) { + mlli_params->curr_pool = buff_mgr->mlli_buffs_pool; + /* add the src data to the sg_data */ + ssi_buffer_mgr_add_scatterlist_entry(&sg_data, + areq_ctx->in_nents, + src, + nbytes, 0, + true, &areq_ctx->mlli_nents); + if (unlikely(ssi_buffer_mgr_generate_mlli(dev, &sg_data, + mlli_params) != 0)) { + goto fail_unmap_din; + } + } + /* change the buffer index for the unmap function */ + areq_ctx->buff_index = (areq_ctx->buff_index^1); + SSI_LOG_DEBUG("areq_ctx->data_dma_buf_type = %s\n", + GET_DMA_BUFFER_TYPE(areq_ctx->data_dma_buf_type)); + return 0; + +fail_unmap_din: + dma_unmap_sg(dev, src, areq_ctx->in_nents, DMA_TO_DEVICE); + +unmap_curr_buff: + if (*curr_buff_cnt != 0 ) { + dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE); + } + return -ENOMEM; +} + +int ssi_buffer_mgr_map_hash_request_update( + struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, unsigned int block_size) +{ + struct ahash_req_ctx *areq_ctx = (struct ahash_req_ctx *)ctx; + struct device *dev = &drvdata->plat_dev->dev; + uint8_t* curr_buff = areq_ctx->buff_index ? areq_ctx->buff1 : + areq_ctx->buff0; + uint32_t *curr_buff_cnt = areq_ctx->buff_index ? 
&areq_ctx->buff1_cnt : + &areq_ctx->buff0_cnt; + uint8_t* next_buff = areq_ctx->buff_index ? areq_ctx->buff0 : + areq_ctx->buff1; + uint32_t *next_buff_cnt = areq_ctx->buff_index ? &areq_ctx->buff0_cnt : + &areq_ctx->buff1_cnt; + struct mlli_params *mlli_params = &areq_ctx->mlli_params; + unsigned int update_data_len; + uint32_t total_in_len = nbytes + *curr_buff_cnt; + struct buffer_array sg_data; + struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle; + unsigned int swap_index = 0; + uint32_t dummy = 0; + uint32_t mapped_nents = 0; + + SSI_LOG_DEBUG(" update params : curr_buff=%pK " + "curr_buff_cnt=0x%X nbytes=0x%X " + "src=%pK curr_index=%u \n", + curr_buff, *curr_buff_cnt, nbytes, + src, areq_ctx->buff_index); + /* Init the type of the dma buffer */ + areq_ctx->data_dma_buf_type = SSI_DMA_BUF_NULL; + mlli_params->curr_pool = NULL; + areq_ctx->curr_sg = NULL; + sg_data.num_of_buffers = 0; + areq_ctx->in_nents = 0; + + if (unlikely(total_in_len < block_size)) { + SSI_LOG_DEBUG(" less than one block: curr_buff=%pK " + "*curr_buff_cnt=0x%X copy_to=%pK\n", + curr_buff, *curr_buff_cnt, + &curr_buff[*curr_buff_cnt]); + areq_ctx->in_nents = + ssi_buffer_mgr_get_sgl_nents(src, + nbytes, + &dummy, NULL); + sg_copy_to_buffer(src, areq_ctx->in_nents, + &curr_buff[*curr_buff_cnt], nbytes); + *curr_buff_cnt += nbytes; + return 1; + } + + /* Calculate the residue size*/ + *next_buff_cnt = total_in_len & (block_size - 1); + /* update data len */ + update_data_len = total_in_len - *next_buff_cnt; + + SSI_LOG_DEBUG(" temp length : *next_buff_cnt=0x%X " + "update_data_len=0x%X\n", + *next_buff_cnt, update_data_len); + + /* Copy the new residue to next buffer */ + if (*next_buff_cnt != 0) { + SSI_LOG_DEBUG(" handle residue: next buff %pK skip data %u" + " residue %u \n", next_buff, + (update_data_len - *curr_buff_cnt), + *next_buff_cnt); + ssi_buffer_mgr_copy_scatterlist_portion(next_buff, src, + (update_data_len -*curr_buff_cnt), + nbytes,SSI_SG_TO_BUF); + /* change the buffer index for next operation */ + swap_index = 1; + } + + if (*curr_buff_cnt != 0) { + if (ssi_ahash_handle_curr_buf(dev, areq_ctx, curr_buff, + *curr_buff_cnt, &sg_data) != 0) { + return -ENOMEM; + } + /* change the buffer index for next operation */ + swap_index = 1; + } + + if ( update_data_len > *curr_buff_cnt ) { + if ( unlikely( ssi_buffer_mgr_map_scatterlist( dev,src, + (update_data_len -*curr_buff_cnt), + DMA_TO_DEVICE, + &areq_ctx->in_nents, + LLI_MAX_NUM_OF_DATA_ENTRIES, + &dummy, &mapped_nents))){ + goto unmap_curr_buff; + } + if ( (mapped_nents == 1) + && (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) ) { + /* only one entry in the SG and no previous data */ + memcpy(areq_ctx->buff_sg,src, + sizeof(struct scatterlist)); + areq_ctx->buff_sg->length = update_data_len; + areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI; + areq_ctx->curr_sg = areq_ctx->buff_sg; + } else { + areq_ctx->data_dma_buf_type = SSI_DMA_BUF_MLLI; + } + } + + if (unlikely(areq_ctx->data_dma_buf_type == SSI_DMA_BUF_MLLI)) { + mlli_params->curr_pool = buff_mgr->mlli_buffs_pool; + /* add the src data to the sg_data */ + ssi_buffer_mgr_add_scatterlist_entry(&sg_data, + areq_ctx->in_nents, + src, + (update_data_len - *curr_buff_cnt), 0, + true, &areq_ctx->mlli_nents); + if (unlikely(ssi_buffer_mgr_generate_mlli(dev, &sg_data, + mlli_params) != 0)) { + goto fail_unmap_din; + } + + } + areq_ctx->buff_index = (areq_ctx->buff_index^swap_index); + + return 0; + +fail_unmap_din: + dma_unmap_sg(dev, src, areq_ctx->in_nents, DMA_TO_DEVICE); + 
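+/* on error, also release the mapping of the carried-over data buffer, if any */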
+unmap_curr_buff: + if (*curr_buff_cnt != 0 ) { + dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE); + } + return -ENOMEM; +} + +void ssi_buffer_mgr_unmap_hash_request( + struct device *dev, void *ctx, struct scatterlist *src, bool do_revert) +{ + struct ahash_req_ctx *areq_ctx = (struct ahash_req_ctx *)ctx; + uint32_t *prev_len = areq_ctx->buff_index ? &areq_ctx->buff0_cnt : + &areq_ctx->buff1_cnt; + + /*In case a pool was set, a table was + allocated and should be released */ + if (areq_ctx->mlli_params.curr_pool != NULL) { + SSI_LOG_DEBUG("free MLLI buffer: dma=0x%llX virt=%pK\n", + (unsigned long long)areq_ctx->mlli_params.mlli_dma_addr, + areq_ctx->mlli_params.mlli_virt_addr); + SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->mlli_params.mlli_dma_addr); + dma_pool_free(areq_ctx->mlli_params.curr_pool, + areq_ctx->mlli_params.mlli_virt_addr, + areq_ctx->mlli_params.mlli_dma_addr); + } + + if ((src) && likely(areq_ctx->in_nents != 0)) { + SSI_LOG_DEBUG("Unmapped sg src: virt=%pK dma=0x%llX len=0x%X\n", + sg_virt(src), + (unsigned long long)sg_dma_address(src), + sg_dma_len(src)); + SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(src)); + dma_unmap_sg(dev, src, + areq_ctx->in_nents, DMA_TO_DEVICE); + } + + if (*prev_len != 0) { + SSI_LOG_DEBUG("Unmapped buffer: areq_ctx->buff_sg=%pK" + "dma=0x%llX len 0x%X\n", + sg_virt(areq_ctx->buff_sg), + (unsigned long long)sg_dma_address(areq_ctx->buff_sg), + sg_dma_len(areq_ctx->buff_sg)); + dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE); + if (!do_revert) { + /* clean the previous data length for update operation */ + *prev_len = 0; + } else { + areq_ctx->buff_index ^= 1; + } + } +} + int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata) { struct buff_mgr_handle *buff_mgr_handle; diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h index f21f439..cadb853 100644 --- a/drivers/staging/ccree/ssi_buffer_mgr.h +++ b/drivers/staging/ccree/ssi_buffer_mgr.h @@ -55,6 +55,12 @@ int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata); int ssi_buffer_mgr_fini(struct ssi_drvdata *drvdata); +int ssi_buffer_mgr_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update); + +int ssi_buffer_mgr_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, unsigned int block_size); + +void ssi_buffer_mgr_unmap_hash_request(struct device *dev, void *ctx, struct scatterlist *src, bool do_revert); + void ssi_buffer_mgr_copy_scatterlist_portion(u8 *dest, struct scatterlist *sg, uint32_t to_skip, uint32_t end, enum ssi_sg_cpy_direct direct); void ssi_buffer_mgr_zero_sgl(struct scatterlist *sgl, uint32_t data_len); diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c index 4fee9df..8042fa2 100644 --- a/drivers/staging/ccree/ssi_driver.c +++ b/drivers/staging/ccree/ssi_driver.c @@ -61,6 +61,7 @@ #include "ssi_request_mgr.h" #include "ssi_buffer_mgr.h" #include "ssi_sysfs.h" +#include "ssi_hash.h" #include "ssi_sram_mgr.h" #include "ssi_pm.h" @@ -218,8 +219,6 @@ static int init_cc_resources(struct platform_device *plat_dev) goto init_cc_res_err; } - new_drvdata->inflight_counter = 0; - dev_set_drvdata(&plat_dev->dev, new_drvdata); /* Get device resources */ /* First CC registers space */ @@ -344,12 +343,19 @@ static int init_cc_resources(struct platform_device *plat_dev) goto init_cc_res_err; } + rc = ssi_hash_alloc(new_drvdata); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("ssi_hash_alloc 
failed\n"); + goto init_cc_res_err; + } + return 0; init_cc_res_err: SSI_LOG_ERR("Freeing CC HW resources!\n"); if (new_drvdata != NULL) { + ssi_hash_free(new_drvdata); ssi_power_mgr_fini(new_drvdata); ssi_buffer_mgr_fini(new_drvdata); request_mgr_fini(new_drvdata); @@ -389,6 +395,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev) struct ssi_drvdata *drvdata = (struct ssi_drvdata *)dev_get_drvdata(&plat_dev->dev); + ssi_hash_free(drvdata); ssi_power_mgr_fini(drvdata); ssi_buffer_mgr_fini(drvdata); request_mgr_fini(drvdata); diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h index eb30643..e080088 100644 --- a/drivers/staging/ccree/ssi_driver.h +++ b/drivers/staging/ccree/ssi_driver.h @@ -32,6 +32,7 @@ #include #include #include +#include #include #ifndef INT32_MAX /* Missing in Linux kernel */ @@ -50,6 +51,7 @@ #define CC_SUPPORT_SHA DX_DEV_SHA_MAX #include "cc_crypto_ctx.h" #include "ssi_sysfs.h" +#include "hash_defs.h" #define DRV_MODULE_VERSION "3.0" @@ -138,13 +140,13 @@ struct ssi_drvdata { ssi_sram_addr_t mlli_sram_addr; struct completion icache_setup_completion; void *buff_mgr_handle; + void *hash_handle; void *request_mgr_handle; void *sram_mgr_handle; #ifdef ENABLE_CYCLE_COUNT cycles_t isr_exit_cycles; /* Save for isr-to-tasklet latency */ #endif - uint32_t inflight_counter; }; diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c new file mode 100644 index 0000000..d0e89d2 --- /dev/null +++ b/drivers/staging/ccree/ssi_hash.c @@ -0,0 +1,2732 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ssi_config.h" +#include "ssi_driver.h" +#include "ssi_request_mgr.h" +#include "ssi_buffer_mgr.h" +#include "ssi_sysfs.h" +#include "ssi_hash.h" +#include "ssi_sram_mgr.h" + +#define SSI_MAX_AHASH_SEQ_LEN 12 +#define SSI_MAX_HASH_OPAD_TMP_KEYS_SIZE MAX(SSI_MAX_HASH_BLCK_SIZE, 3 * AES_BLOCK_SIZE) + +struct ssi_hash_handle { + ssi_sram_addr_t digest_len_sram_addr; /* const value in SRAM*/ + ssi_sram_addr_t larval_digest_sram_addr; /* const value in SRAM */ + struct list_head hash_list; + struct completion init_comp; +}; + +static const uint32_t digest_len_init[] = { + 0x00000040, 0x00000000, 0x00000000, 0x00000000 }; +static const uint32_t md5_init[] = { + SHA1_H3, SHA1_H2, SHA1_H1, SHA1_H0 }; +static const uint32_t sha1_init[] = { + SHA1_H4, SHA1_H3, SHA1_H2, SHA1_H1, SHA1_H0 }; +static const uint32_t sha224_init[] = { + SHA224_H7, SHA224_H6, SHA224_H5, SHA224_H4, + SHA224_H3, SHA224_H2, SHA224_H1, SHA224_H0 }; +static const uint32_t sha256_init[] = { + SHA256_H7, SHA256_H6, SHA256_H5, SHA256_H4, + SHA256_H3, SHA256_H2, SHA256_H1, SHA256_H0 }; +#if (DX_DEV_SHA_MAX > 256) +static const uint32_t digest_len_sha512_init[] = { + 0x00000080, 0x00000000, 0x00000000, 0x00000000 }; +static const uint64_t sha384_init[] = { + SHA384_H7, SHA384_H6, SHA384_H5, SHA384_H4, + SHA384_H3, SHA384_H2, SHA384_H1, SHA384_H0 }; +static const uint64_t sha512_init[] = { + SHA512_H7, SHA512_H6, SHA512_H5, SHA512_H4, + SHA512_H3, SHA512_H2, SHA512_H1, SHA512_H0 }; +#endif + +static void ssi_hash_create_xcbc_setup( + struct ahash_request *areq, + HwDesc_s desc[], + unsigned int *seq_size); + +static void ssi_hash_create_cmac_setup(struct ahash_request *areq, + HwDesc_s desc[], + unsigned int *seq_size); + +struct ssi_hash_alg { + struct list_head entry; + bool synchronize; + int hash_mode; + int hw_mode; + int inter_digestsize; + struct ssi_drvdata *drvdata; + union { + struct ahash_alg ahash_alg; + struct shash_alg shash_alg; + }; +}; + + +struct hash_key_req_ctx { + uint32_t keylen; + dma_addr_t key_dma_addr; +}; + +/* hash per-session context */ +struct ssi_hash_ctx { + struct ssi_drvdata *drvdata; + /* holds the origin digest; the digest after "setkey" if HMAC,* + the initial digest if HASH. 
*/ + uint8_t digest_buff[SSI_MAX_HASH_DIGEST_SIZE] ____cacheline_aligned; + uint8_t opad_tmp_keys_buff[SSI_MAX_HASH_OPAD_TMP_KEYS_SIZE] ____cacheline_aligned; + dma_addr_t opad_tmp_keys_dma_addr ____cacheline_aligned; + dma_addr_t digest_buff_dma_addr; + /* use for hmac with key large then mode block size */ + struct hash_key_req_ctx key_params; + int hash_mode; + int hw_mode; + int inter_digestsize; + struct completion setkey_comp; + bool is_hmac; +}; + +static const struct crypto_type crypto_shash_type; + +static void ssi_hash_create_data_desc( + struct ahash_req_ctx *areq_ctx, + struct ssi_hash_ctx *ctx, + unsigned int flow_mode,HwDesc_s desc[], + bool is_not_last_data, + unsigned int *seq_size); + +static inline void ssi_set_hash_endianity(uint32_t mode, HwDesc_s *desc) +{ + if (unlikely((mode == DRV_HASH_MD5) || + (mode == DRV_HASH_SHA384) || + (mode == DRV_HASH_SHA512))) { + HW_DESC_SET_BYTES_SWAP(desc, 1); + } else { + HW_DESC_SET_CIPHER_CONFIG0(desc, HASH_DIGEST_RESULT_LITTLE_ENDIAN); + } +} + +static int ssi_hash_map_result(struct device *dev, + struct ahash_req_ctx *state, + unsigned int digestsize) +{ + state->digest_result_dma_addr = + dma_map_single(dev, (void *)state->digest_result_buff, + digestsize, + DMA_BIDIRECTIONAL); + if (unlikely(dma_mapping_error(dev, state->digest_result_dma_addr))) { + SSI_LOG_ERR("Mapping digest result buffer %u B for DMA failed\n", + digestsize); + return -ENOMEM; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_result_dma_addr, + digestsize); + SSI_LOG_DEBUG("Mapped digest result buffer %u B " + "at va=%pK to dma=0x%llX\n", + digestsize, state->digest_result_buff, + (unsigned long long)state->digest_result_dma_addr); + + return 0; +} + +static int ssi_hash_map_request(struct device *dev, + struct ahash_req_ctx *state, + struct ssi_hash_ctx *ctx) +{ + bool is_hmac = ctx->is_hmac; + ssi_sram_addr_t larval_digest_addr = ssi_ahash_get_larval_digest_sram_addr( + ctx->drvdata, ctx->hash_mode); + struct ssi_crypto_req ssi_req = {}; + HwDesc_s desc; + int rc = -ENOMEM; + + state->buff0 = kzalloc(SSI_MAX_HASH_BLCK_SIZE ,GFP_KERNEL|GFP_DMA); + if (!state->buff0) { + SSI_LOG_ERR("Allocating buff0 in context failed\n"); + goto fail0; + } + state->buff1 = kzalloc(SSI_MAX_HASH_BLCK_SIZE ,GFP_KERNEL|GFP_DMA); + if (!state->buff1) { + SSI_LOG_ERR("Allocating buff1 in context failed\n"); + goto fail_buff0; + } + state->digest_result_buff = kzalloc(SSI_MAX_HASH_DIGEST_SIZE ,GFP_KERNEL|GFP_DMA); + if (!state->digest_result_buff) { + SSI_LOG_ERR("Allocating digest_result_buff in context failed\n"); + goto fail_buff1; + } + state->digest_buff = kzalloc(ctx->inter_digestsize, GFP_KERNEL|GFP_DMA); + if (!state->digest_buff) { + SSI_LOG_ERR("Allocating digest-buffer in context failed\n"); + goto fail_digest_result_buff; + } + + SSI_LOG_DEBUG("Allocated digest-buffer in context ctx->digest_buff=@%p\n", state->digest_buff); + if (ctx->hw_mode != DRV_CIPHER_XCBC_MAC) { + state->digest_bytes_len = kzalloc(HASH_LEN_SIZE, GFP_KERNEL|GFP_DMA); + if (!state->digest_bytes_len) { + SSI_LOG_ERR("Allocating digest-bytes-len in context failed\n"); + goto fail1; + } + SSI_LOG_DEBUG("Allocated digest-bytes-len in context state->>digest_bytes_len=@%p\n", state->digest_bytes_len); + } else { + state->digest_bytes_len = NULL; + } + + state->opad_digest_buff = kzalloc(ctx->inter_digestsize, GFP_KERNEL|GFP_DMA); + if (!state->opad_digest_buff) { + SSI_LOG_ERR("Allocating opad-digest-buffer in context failed\n"); + goto fail2; + } + SSI_LOG_DEBUG("Allocated opad-digest-buffer in context 
state->digest_bytes_len=@%p\n", state->opad_digest_buff); + + state->digest_buff_dma_addr = dma_map_single(dev, (void *)state->digest_buff, ctx->inter_digestsize, DMA_BIDIRECTIONAL); + if (dma_mapping_error(dev, state->digest_buff_dma_addr)) { + SSI_LOG_ERR("Mapping digest len %d B at va=%pK for DMA failed\n", + ctx->inter_digestsize, state->digest_buff); + goto fail3; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr, + ctx->inter_digestsize); + SSI_LOG_DEBUG("Mapped digest %d B at va=%pK to dma=0x%llX\n", + ctx->inter_digestsize, state->digest_buff, + (unsigned long long)state->digest_buff_dma_addr); + + if (is_hmac) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr); + dma_sync_single_for_cpu(dev, ctx->digest_buff_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL); + SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr, + ctx->inter_digestsize); + if ((ctx->hw_mode == DRV_CIPHER_XCBC_MAC) || (ctx->hw_mode == DRV_CIPHER_CMAC)) { + memset(state->digest_buff, 0, ctx->inter_digestsize); + } else { /*sha*/ + memcpy(state->digest_buff, ctx->digest_buff, ctx->inter_digestsize); +#if (DX_DEV_SHA_MAX > 256) + if (unlikely((ctx->hash_mode == DRV_HASH_SHA512) || (ctx->hash_mode == DRV_HASH_SHA384))) { + memcpy(state->digest_bytes_len, digest_len_sha512_init, HASH_LEN_SIZE); + } else { + memcpy(state->digest_bytes_len, digest_len_init, HASH_LEN_SIZE); + } +#else + memcpy(state->digest_bytes_len, digest_len_init, HASH_LEN_SIZE); +#endif + } + SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr); + dma_sync_single_for_device(dev, state->digest_buff_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL); + SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr, + ctx->inter_digestsize); + + if (ctx->hash_mode != DRV_HASH_NULL) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr); + dma_sync_single_for_cpu(dev, ctx->opad_tmp_keys_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL); + memcpy(state->opad_digest_buff, ctx->opad_tmp_keys_buff, ctx->inter_digestsize); + SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr, + ctx->inter_digestsize); + } + } else { /*hash*/ + /* Copy the initial digests if hash flow. 
The SRAM contains the + initial digests in the expected order for all SHA* */ + HW_DESC_INIT(&desc); + HW_DESC_SET_DIN_SRAM(&desc, larval_digest_addr, ctx->inter_digestsize); + HW_DESC_SET_DOUT_DLLI(&desc, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 0); + HW_DESC_SET_FLOW_MODE(&desc, BYPASS); + + rc = send_request(ctx->drvdata, &ssi_req, &desc, 1, 0); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + goto fail4; + } + } + + if (ctx->hw_mode != DRV_CIPHER_XCBC_MAC) { + state->digest_bytes_len_dma_addr = dma_map_single(dev, (void *)state->digest_bytes_len, HASH_LEN_SIZE, DMA_BIDIRECTIONAL); + if (dma_mapping_error(dev, state->digest_bytes_len_dma_addr)) { + SSI_LOG_ERR("Mapping digest len %u B at va=%pK for DMA failed\n", + HASH_LEN_SIZE, state->digest_bytes_len); + goto fail4; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_bytes_len_dma_addr, + HASH_LEN_SIZE); + SSI_LOG_DEBUG("Mapped digest len %u B at va=%pK to dma=0x%llX\n", + HASH_LEN_SIZE, state->digest_bytes_len, + (unsigned long long)state->digest_bytes_len_dma_addr); + } else { + state->digest_bytes_len_dma_addr = 0; + } + + if (is_hmac && ctx->hash_mode != DRV_HASH_NULL) { + state->opad_digest_dma_addr = dma_map_single(dev, (void *)state->opad_digest_buff, ctx->inter_digestsize, DMA_BIDIRECTIONAL); + if (dma_mapping_error(dev, state->opad_digest_dma_addr)) { + SSI_LOG_ERR("Mapping opad digest %d B at va=%pK for DMA failed\n", + ctx->inter_digestsize, state->opad_digest_buff); + goto fail5; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(state->opad_digest_dma_addr, + ctx->inter_digestsize); + SSI_LOG_DEBUG("Mapped opad digest %d B at va=%pK to dma=0x%llX\n", + ctx->inter_digestsize, state->opad_digest_buff, + (unsigned long long)state->opad_digest_dma_addr); + } else { + state->opad_digest_dma_addr = 0; + } + state->buff0_cnt = 0; + state->buff1_cnt = 0; + state->buff_index = 0; + state->mlli_params.curr_pool = NULL; + + return 0; + +fail5: + if (state->digest_bytes_len_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_bytes_len_dma_addr); + dma_unmap_single(dev, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, DMA_BIDIRECTIONAL); + state->digest_bytes_len_dma_addr = 0; + } +fail4: + if (state->digest_buff_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr); + dma_unmap_single(dev, state->digest_buff_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL); + state->digest_buff_dma_addr = 0; + } +fail3: + if (state->opad_digest_buff != NULL) + kfree(state->opad_digest_buff); +fail2: + if (state->digest_bytes_len != NULL) + kfree(state->digest_bytes_len); +fail1: + if (state->digest_buff != NULL) + kfree(state->digest_buff); +fail_digest_result_buff: + if (state->digest_result_buff != NULL) { + kfree(state->digest_result_buff); + state->digest_result_buff = NULL; + } +fail_buff1: + if (state->buff1 != NULL) { + kfree(state->buff1); + state->buff1 = NULL; + } +fail_buff0: + if (state->buff0 != NULL) { + kfree(state->buff0); + state->buff0 = NULL; + } +fail0: + return rc; +} + +static void ssi_hash_unmap_request(struct device *dev, + struct ahash_req_ctx *state, + struct ssi_hash_ctx *ctx) +{ + if (state->digest_buff_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr); + dma_unmap_single(dev, state->digest_buff_dma_addr, + ctx->inter_digestsize, DMA_BIDIRECTIONAL); + SSI_LOG_DEBUG("Unmapped digest-buffer: digest_buff_dma_addr=0x%llX\n", + (unsigned long long)state->digest_buff_dma_addr); + state->digest_buff_dma_addr = 0; + } + if 
(state->digest_bytes_len_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_bytes_len_dma_addr); + dma_unmap_single(dev, state->digest_bytes_len_dma_addr, + HASH_LEN_SIZE, DMA_BIDIRECTIONAL); + SSI_LOG_DEBUG("Unmapped digest-bytes-len buffer: digest_bytes_len_dma_addr=0x%llX\n", + (unsigned long long)state->digest_bytes_len_dma_addr); + state->digest_bytes_len_dma_addr = 0; + } + if (state->opad_digest_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(state->opad_digest_dma_addr); + dma_unmap_single(dev, state->opad_digest_dma_addr, + ctx->inter_digestsize, DMA_BIDIRECTIONAL); + SSI_LOG_DEBUG("Unmapped opad-digest: opad_digest_dma_addr=0x%llX\n", + (unsigned long long)state->opad_digest_dma_addr); + state->opad_digest_dma_addr = 0; + } + + if (state->opad_digest_buff != NULL) + kfree(state->opad_digest_buff); + if (state->digest_bytes_len != NULL) + kfree(state->digest_bytes_len); + if (state->digest_buff != NULL) + kfree(state->digest_buff); + if (state->digest_result_buff != NULL) + kfree(state->digest_result_buff); + if (state->buff1 != NULL) + kfree(state->buff1); + if (state->buff0 != NULL) + kfree(state->buff0); +} + +static void ssi_hash_unmap_result(struct device *dev, + struct ahash_req_ctx *state, + unsigned int digestsize, u8 *result) +{ + if (state->digest_result_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_result_dma_addr); + dma_unmap_single(dev, + state->digest_result_dma_addr, + digestsize, + DMA_BIDIRECTIONAL); + SSI_LOG_DEBUG("unmpa digest result buffer " + "va (%pK) pa (%llx) len %u\n", + state->digest_result_buff, + (unsigned long long)state->digest_result_dma_addr, + digestsize); + memcpy(result, + state->digest_result_buff, + digestsize); + } + state->digest_result_dma_addr = 0; +} + +static void ssi_hash_update_complete(struct device *dev, void *ssi_req, void __iomem *cc_base) +{ + struct ahash_request *req = (struct ahash_request *)ssi_req; + struct ahash_req_ctx *state = ahash_request_ctx(req); + + SSI_LOG_DEBUG("req=%pK\n", req); + + ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, false); + req->base.complete(&req->base, 0); +} + +static void ssi_hash_digest_complete(struct device *dev, void *ssi_req, void __iomem *cc_base) +{ + struct ahash_request *req = (struct ahash_request *)ssi_req; + struct ahash_req_ctx *state = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + uint32_t digestsize = crypto_ahash_digestsize(tfm); + + SSI_LOG_DEBUG("req=%pK\n", req); + + ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, false); + ssi_hash_unmap_result(dev, state, digestsize, req->result); + ssi_hash_unmap_request(dev, state, ctx); + req->base.complete(&req->base, 0); +} + +static void ssi_hash_complete(struct device *dev, void *ssi_req, void __iomem *cc_base) +{ + struct ahash_request *req = (struct ahash_request *)ssi_req; + struct ahash_req_ctx *state = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + uint32_t digestsize = crypto_ahash_digestsize(tfm); + + SSI_LOG_DEBUG("req=%pK\n", req); + + ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, false); + ssi_hash_unmap_result(dev, state, digestsize, req->result); + ssi_hash_unmap_request(dev, state, ctx); + req->base.complete(&req->base, 0); +} + +static int ssi_hash_digest(struct ahash_req_ctx *state, + struct ssi_hash_ctx *ctx, + unsigned int digestsize, + struct scatterlist *src, + unsigned int nbytes, u8 
*result, + void *async_req) +{ + struct device *dev = &ctx->drvdata->plat_dev->dev; + bool is_hmac = ctx->is_hmac; + struct ssi_crypto_req ssi_req = {}; + HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN]; + ssi_sram_addr_t larval_digest_addr = ssi_ahash_get_larval_digest_sram_addr( + ctx->drvdata, ctx->hash_mode); + int idx = 0; + int rc = 0; + + + SSI_LOG_DEBUG("===== %s-digest (%d) ====\n", is_hmac?"hmac":"hash", nbytes); + + if (unlikely(ssi_hash_map_request(dev, state, ctx) != 0)) { + SSI_LOG_ERR("map_ahash_source() failed\n"); + return -ENOMEM; + } + + if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) { + SSI_LOG_ERR("map_ahash_digest() failed\n"); + return -ENOMEM; + } + + if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src, nbytes, 1) != 0)) { + SSI_LOG_ERR("map_ahash_request_final() failed\n"); + return -ENOMEM; + } + + if (async_req) { + /* Setup DX request structure */ + ssi_req.user_cb = (void *)ssi_hash_digest_complete; + ssi_req.user_arg = (void *)async_req; +#ifdef ENABLE_CYCLE_COUNT + ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ +#endif + } + + /* If HMAC then load hash IPAD xor key, if HASH then load initial digest */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + if (is_hmac) { + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT); + } else { + HW_DESC_SET_DIN_SRAM(&desc[idx], larval_digest_addr, ctx->inter_digestsize); + } + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Load the hash current length */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + + if (is_hmac) { + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT); + } else { + HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); + if (likely(nbytes != 0)) { + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + } else { + HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD); + } + } + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx); + + if (is_hmac) { + /* HW last hash block padding (aka. 
"DO_PAD") */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, HASH_LEN_SIZE, NS_BIT, 0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1); + HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD); + idx++; + + /* store the hash digest result in the context */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + idx++; + + /* Loading hash opad xor key state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Load the hash current length */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + /* Memory Barrier: wait for IPAD/OPAD axi write to complete */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); + HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + idx++; + + /* Perform HASH update */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + idx++; + } + + /* Get final MAC result */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req? 
1:0); /*TODO*/ + if (async_req) { + HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + } + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]); + idx++; + + if (async_req) { + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); + if (unlikely(rc != -EINPROGRESS)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, src, true); + ssi_hash_unmap_result(dev, state, digestsize, result); + ssi_hash_unmap_request(dev, state, ctx); + } + } else { + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0); + if (rc != 0) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, src, true); + } else { + ssi_buffer_mgr_unmap_hash_request(dev, state, src, false); + } + ssi_hash_unmap_result(dev, state, digestsize, result); + ssi_hash_unmap_request(dev, state, ctx); + } + return rc; +} + +static int ssi_hash_update(struct ahash_req_ctx *state, + struct ssi_hash_ctx *ctx, + unsigned int block_size, + struct scatterlist *src, + unsigned int nbytes, + void *async_req) +{ + struct device *dev = &ctx->drvdata->plat_dev->dev; + struct ssi_crypto_req ssi_req = {}; + HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN]; + uint32_t idx = 0; + int rc; + + SSI_LOG_DEBUG("===== %s-update (%d) ====\n", ctx->is_hmac ? + "hmac":"hash", nbytes); + + if (nbytes == 0) { + /* no real updates required */ + return 0; + } + + if (unlikely(rc = ssi_buffer_mgr_map_hash_request_update(ctx->drvdata, state, src, nbytes, block_size))) { + if (rc == 1) { + SSI_LOG_DEBUG(" data size not require HW update %x\n", + nbytes); + /* No hardware updates are required */ + return 0; + } + SSI_LOG_ERR("map_ahash_request_update() failed\n"); + return -ENOMEM; + } + + if (async_req) { + /* Setup DX request structure */ + ssi_req.user_cb = (void *)ssi_hash_update_complete; + ssi_req.user_arg = async_req; +#ifdef ENABLE_CYCLE_COUNT + ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ +#endif + } + + /* Restore hash digest */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + /* Restore hash current length */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx); + + /* store the hash digest result in context */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + idx++; + + /* store current hash length in context */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT, async_req? 
1:0); + if (async_req) { + HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + } + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1); + idx++; + + if (async_req) { + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); + if (unlikely(rc != -EINPROGRESS)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, src, true); + } + } else { + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0); + if (rc != 0) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, src, true); + } else { + ssi_buffer_mgr_unmap_hash_request(dev, state, src, false); + } + } + return rc; +} + +static int ssi_hash_finup(struct ahash_req_ctx *state, + struct ssi_hash_ctx *ctx, + unsigned int digestsize, + struct scatterlist *src, + unsigned int nbytes, + u8 *result, + void *async_req) +{ + struct device *dev = &ctx->drvdata->plat_dev->dev; + bool is_hmac = ctx->is_hmac; + struct ssi_crypto_req ssi_req = {}; + HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN]; + int idx = 0; + int rc; + + SSI_LOG_DEBUG("===== %s-finup (%d) ====\n", is_hmac?"hmac":"hash", nbytes); + + if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src , nbytes, 1) != 0)) { + SSI_LOG_ERR("map_ahash_request_final() failed\n"); + return -ENOMEM; + } + if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) { + SSI_LOG_ERR("map_ahash_digest() failed\n"); + return -ENOMEM; + } + + if (async_req) { + /* Setup DX request structure */ + ssi_req.user_cb = (void *)ssi_hash_complete; + ssi_req.user_arg = async_req; +#ifdef ENABLE_CYCLE_COUNT + ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ +#endif + } + + /* Restore hash digest */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Restore hash current length */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx); + + if (is_hmac) { + /* Store the hash digest result in the context */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0); + ssi_set_hash_endianity(ctx->hash_mode,&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + idx++; + + /* Loading hash OPAD xor key state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Load the hash current length */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), 
HASH_LEN_SIZE); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + /* Memory Barrier: wait for IPAD/OPAD axi write to complete */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); + HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + idx++; + + /* Perform HASH update on last digest */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + idx++; + } + + /* Get final MAC result */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req? 1:0); /*TODO*/ + if (async_req) { + HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + } + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + ssi_set_hash_endianity(ctx->hash_mode,&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + idx++; + + if (async_req) { + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); + if (unlikely(rc != -EINPROGRESS)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, src, true); + ssi_hash_unmap_result(dev, state, digestsize, result); + } + } else { + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0); + if (rc != 0) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, src, true); + ssi_hash_unmap_result(dev, state, digestsize, result); + } else { + ssi_buffer_mgr_unmap_hash_request(dev, state, src, false); + ssi_hash_unmap_result(dev, state, digestsize, result); + ssi_hash_unmap_request(dev, state, ctx); + } + } + return rc; +} + +static int ssi_hash_final(struct ahash_req_ctx *state, + struct ssi_hash_ctx *ctx, + unsigned int digestsize, + struct scatterlist *src, + unsigned int nbytes, + u8 *result, + void *async_req) +{ + struct device *dev = &ctx->drvdata->plat_dev->dev; + bool is_hmac = ctx->is_hmac; + struct ssi_crypto_req ssi_req = {}; + HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN]; + int idx = 0; + int rc; + + SSI_LOG_DEBUG("===== %s-final (%d) ====\n", is_hmac?"hmac":"hash", nbytes); + + if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src, nbytes, 0) != 0)) { + SSI_LOG_ERR("map_ahash_request_final() failed\n"); + return -ENOMEM; + } + + if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) { + SSI_LOG_ERR("map_ahash_digest() failed\n"); + return -ENOMEM; + } + + if (async_req) { + /* Setup DX request structure */ + ssi_req.user_cb = (void *)ssi_hash_complete; + ssi_req.user_arg = async_req; +#ifdef ENABLE_CYCLE_COUNT + ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ +#endif + } + + /* Restore hash digest */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Restore hash current length */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT); + 
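The is_hmac path of ssi_hash_finup above is the classic two-pass HMAC finish folded into one descriptor chain: read back the inner digest, reload the cached opad state together with a length counter that already accounts for the absorbed key-pad block, hash the inner digest, and read out the MAC. The host-side sketch below models the same ordering; hash_engine_ops and every callback in it are illustrative assumptions, not driver API.

#include <stdint.h>
#include <stddef.h>

/* Stand-ins for the descriptor primitives used above: load_state() models
 * the SETUP_LOAD_STATE0/SETUP_LOAD_KEY0 pair, update() the DIN_HASH flow,
 * read_digest() the S_HASH_to_DOUT readback. */
struct hash_engine_ops {
	void (*load_state)(void *eng, const uint8_t *digest,
			   const uint8_t *len_ctr);
	void (*update)(void *eng, const uint8_t *data, size_t len);
	void (*read_digest)(void *eng, uint8_t *out, size_t digest_size);
};

static void hmac_finish(const struct hash_engine_ops *ops, void *eng,
			const uint8_t *ipad_state, const uint8_t *msg_len,
			const uint8_t *opad_state, const uint8_t *one_block_len,
			const uint8_t *tail, size_t tail_len,
			uint8_t *mac, size_t digest_size)
{
	uint8_t inner[64];	/* assumed upper bound: SHA-512 digest */

	ops->load_state(eng, ipad_state, msg_len);	/* resume inner hash */
	ops->update(eng, tail, tail_len);		/* last message bytes */
	ops->read_digest(eng, inner, digest_size);	/* H(K^ipad || m) */

	ops->load_state(eng, opad_state, one_block_len);/* restart from opad */
	ops->update(eng, inner, digest_size);		/* absorb inner digest */
	ops->read_digest(eng, mac, digest_size);	/* H(K^opad || inner) */
}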
HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx); + + /* "DO-PAD" must be enabled only when writing current length to HW */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT, 0); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + idx++; + + if (is_hmac) { + /* Store the hash digest result in the context */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0); + ssi_set_hash_endianity(ctx->hash_mode,&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + idx++; + + /* Loading hash OPAD xor key state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Load the hash current length */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + /* Memory Barrier: wait for IPAD/OPAD axi write to complete */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); + HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + idx++; + + /* Perform HASH update on last digest */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + idx++; + } + + /* Get final MAC result */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req? 
1:0); + if (async_req) { + HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + } + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + ssi_set_hash_endianity(ctx->hash_mode,&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + idx++; + + if (async_req) { + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); + if (unlikely(rc != -EINPROGRESS)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, src, true); + ssi_hash_unmap_result(dev, state, digestsize, result); + } + } else { + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0); + if (rc != 0) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, src, true); + ssi_hash_unmap_result(dev, state, digestsize, result); + } else { + ssi_buffer_mgr_unmap_hash_request(dev, state, src, false); + ssi_hash_unmap_result(dev, state, digestsize, result); + ssi_hash_unmap_request(dev, state, ctx); + } + } + return rc; +} + +static int ssi_hash_init(struct ahash_req_ctx *state, struct ssi_hash_ctx *ctx) +{ + struct device *dev = &ctx->drvdata->plat_dev->dev; + state->xcbc_count = 0; + + ssi_hash_map_request(dev, state, ctx); + + return 0; +} + +#ifdef EXPORT_FIXED +static int ssi_hash_export(struct ssi_hash_ctx *ctx, void *out) +{ + memcpy(out, ctx, sizeof(struct ssi_hash_ctx)); + return 0; +} + +static int ssi_hash_import(struct ssi_hash_ctx *ctx, const void *in) +{ + memcpy(ctx, in, sizeof(struct ssi_hash_ctx)); + return 0; +} +#endif + +static int ssi_hash_setkey(void *hash, + const u8 *key, + unsigned int keylen, + bool synchronize) +{ + unsigned int hmacPadConst[2] = { HMAC_IPAD_CONST, HMAC_OPAD_CONST }; + struct ssi_crypto_req ssi_req = {}; + struct ssi_hash_ctx *ctx = NULL; + int blocksize = 0; + int digestsize = 0; + int i, idx = 0, rc = 0; + HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN]; + ssi_sram_addr_t larval_addr; + + SSI_LOG_DEBUG("ssi_hash_setkey: start keylen: %d", keylen); + + if (synchronize) { + ctx = crypto_shash_ctx(((struct crypto_shash *)hash)); + blocksize = crypto_tfm_alg_blocksize(&((struct crypto_shash *)hash)->base); + digestsize = crypto_shash_digestsize(((struct crypto_shash *)hash)); + } else { + ctx = crypto_ahash_ctx(((struct crypto_ahash *)hash)); + blocksize = crypto_tfm_alg_blocksize(&((struct crypto_ahash *)hash)->base); + digestsize = crypto_ahash_digestsize(((struct crypto_ahash *)hash)); + } + + larval_addr = ssi_ahash_get_larval_digest_sram_addr( + ctx->drvdata, ctx->hash_mode); + + /* The keylen value distinguishes HASH in case keylen is ZERO bytes, + any NON-ZERO value utilizes HMAC flow */ + ctx->key_params.keylen = keylen; + ctx->key_params.key_dma_addr = 0; + ctx->is_hmac = true; + + if (keylen != 0) { + ctx->key_params.key_dma_addr = dma_map_single( + &ctx->drvdata->plat_dev->dev, + (void *)key, + keylen, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(&ctx->drvdata->plat_dev->dev, + ctx->key_params.key_dma_addr))) { + SSI_LOG_ERR("Mapping key va=0x%p len=%u for" + " DMA failed\n", key, keylen); + return -ENOMEM; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr, keylen); + SSI_LOG_DEBUG("mapping key-buffer: key_dma_addr=0x%llX " + "keylen=%u\n", + (unsigned long long)ctx->key_params.key_dma_addr, + ctx->key_params.keylen); + + if (keylen > blocksize) { + /* Load hash initial state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + 
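This keylen > blocksize branch offloads the RFC 2104 key rule: a key longer than the hash block is first hashed down to digestsize, and the BYPASS descriptor further below zero-fills the remaining blocksize - digestsize bytes. The same rule in plain C, as a minimal sketch in which hash_fn stands in for the engine:

#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* hash_fn digests len bytes of in into out; an assumed stand-in for the
 * CryptoCell hash flow, not a real driver callback. */
typedef void (*hash_fn)(const uint8_t *in, size_t len, uint8_t *out);

/* RFC 2104: over-long keys are replaced by their digest, then every key
 * is zero-padded to exactly one hash block before the pad XORs. */
static void hmac_normalize_key(hash_fn h, size_t block_size,
			       const uint8_t *key, size_t keylen,
			       uint8_t *block)
{
	memset(block, 0, block_size);
	if (keylen > block_size)
		h(key, keylen, block);		/* leading bytes = H(key) */
	else
		memcpy(block, key, keylen);	/* short keys: pad only */
}

With the key normalized to one block, the derived-key loop below needs only two XORs and one hash block per pad.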
HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr, + ctx->inter_digestsize); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Load the hash current length*/ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + ctx->key_params.key_dma_addr, + keylen, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + idx++; + + /* Get hashed key */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], ctx->opad_tmp_keys_dma_addr, + digestsize, NS_BIT, 0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + ssi_set_hash_endianity(ctx->hash_mode,&desc[idx]); + idx++; + + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - digestsize)); + HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + (ctx->opad_tmp_keys_dma_addr + digestsize), + (blocksize - digestsize), + NS_BIT, 0); + idx++; + } else { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + ctx->key_params.key_dma_addr, + keylen, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + (ctx->opad_tmp_keys_dma_addr), + keylen, NS_BIT, 0); + idx++; + + if ((blocksize - keylen) != 0) { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - keylen)); + HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + (ctx->opad_tmp_keys_dma_addr + keylen), + (blocksize - keylen), + NS_BIT, 0); + idx++; + } + } + } else { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0, blocksize); + HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + (ctx->opad_tmp_keys_dma_addr), + blocksize, + NS_BIT, 0); + idx++; + } + + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + goto out; + } + + /* calc derived HMAC key */ + for (idx = 0, i = 0; i < 2; i++) { + /* Load hash initial state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr, + ctx->inter_digestsize); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Load the hash current length*/ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + /* Prepare ipad key */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_XOR_VAL(&desc[idx], hmacPadConst[i]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + idx++; + + /* Perform HASH update */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + ctx->opad_tmp_keys_dma_addr, + blocksize, NS_BIT); + 
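Inside the derived-key loop here the engine XORs the stored key block against a repeated pad byte (HW_DESC_SET_XOR_VAL with hmacPadConst[i], activated by HW_DESC_SET_XOR_ACTIVE) and hashes one block of the result, caching the intermediate state as the ipad digest on the first pass and the opad digest on the second. The equivalent byte-level transform, sketched:

#include <stdint.h>
#include <stddef.h>

#define HMAC_IPAD_BYTE 0x36	/* one lane of HMAC_IPAD_CONST 0x36363636 */
#define HMAC_OPAD_BYTE 0x5C	/* one lane of HMAC_OPAD_CONST 0x5C5C5C5C */

/* Produce K XOR ipad and K XOR opad from the normalized key block;
 * hashing one block of each yields the two states the driver caches. */
static void hmac_pads(const uint8_t *key_block, size_t block_size,
		      uint8_t *ipad, uint8_t *opad)
{
	size_t i;

	for (i = 0; i < block_size; i++) {
		ipad[i] = key_block[i] ^ HMAC_IPAD_BYTE;
		opad[i] = key_block[i] ^ HMAC_OPAD_BYTE;
	}
}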
HW_DESC_SET_CIPHER_MODE(&desc[idx],ctx->hw_mode); + HW_DESC_SET_XOR_ACTIVE(&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + idx++; + + /* Get the IPAD/OPAD xor key (Note, IPAD is the initial digest of the first HASH "update" state) */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + if (i > 0) /* Not first iteration */ + HW_DESC_SET_DOUT_DLLI(&desc[idx], + ctx->opad_tmp_keys_dma_addr, + ctx->inter_digestsize, + NS_BIT, 0); + else /* First iteration */ + HW_DESC_SET_DOUT_DLLI(&desc[idx], + ctx->digest_buff_dma_addr, + ctx->inter_digestsize, + NS_BIT, 0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + idx++; + } + + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0); + +out: + if (rc != 0) { + if (synchronize) { + crypto_shash_set_flags((struct crypto_shash *)hash, CRYPTO_TFM_RES_BAD_KEY_LEN); + } else { + crypto_ahash_set_flags((struct crypto_ahash *)hash, CRYPTO_TFM_RES_BAD_KEY_LEN); + } + } + + if (ctx->key_params.key_dma_addr) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr); + dma_unmap_single(&ctx->drvdata->plat_dev->dev, + ctx->key_params.key_dma_addr, + ctx->key_params.keylen, DMA_TO_DEVICE); + SSI_LOG_DEBUG("Unmapped key-buffer: key_dma_addr=0x%llX keylen=%u\n", + (unsigned long long)ctx->key_params.key_dma_addr, + ctx->key_params.keylen); + } + return rc; +} + + +static int ssi_xcbc_setkey(struct crypto_ahash *ahash, + const u8 *key, unsigned int keylen) +{ + struct ssi_crypto_req ssi_req = {}; + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash); + int idx = 0, rc = 0; + HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN]; + + SSI_LOG_DEBUG("===== setkey (%d) ====\n", keylen); + + switch (keylen) { + case AES_KEYSIZE_128: + case AES_KEYSIZE_192: + case AES_KEYSIZE_256: + break; + default: + return -EINVAL; + } + + ctx->key_params.keylen = keylen; + + ctx->key_params.key_dma_addr = dma_map_single( + &ctx->drvdata->plat_dev->dev, + (void *)key, + keylen, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(&ctx->drvdata->plat_dev->dev, + ctx->key_params.key_dma_addr))) { + SSI_LOG_ERR("Mapping key va=0x%p len=%u for" + " DMA failed\n", key, keylen); + return -ENOMEM; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr, keylen); + SSI_LOG_DEBUG("mapping key-buffer: key_dma_addr=0x%llX " + "keylen=%u\n", + (unsigned long long)ctx->key_params.key_dma_addr, + ctx->key_params.keylen); + + ctx->is_hmac = true; + /* 1. 
Load the AES key */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->key_params.key_dma_addr, keylen, NS_BIT); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keylen); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0x01010101, CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr + + XCBC_MAC_K1_OFFSET), + CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0); + idx++; + + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0x02020202, CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr + + XCBC_MAC_K2_OFFSET), + CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0); + idx++; + + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0x03030303, CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr + + XCBC_MAC_K3_OFFSET), + CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0); + idx++; + + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0); + + if (rc != 0) + crypto_ahash_set_flags(ahash, CRYPTO_TFM_RES_BAD_KEY_LEN); + + SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr); + dma_unmap_single(&ctx->drvdata->plat_dev->dev, + ctx->key_params.key_dma_addr, + ctx->key_params.keylen, DMA_TO_DEVICE); + SSI_LOG_DEBUG("Unmapped key-buffer: key_dma_addr=0x%llX keylen=%u\n", + (unsigned long long)ctx->key_params.key_dma_addr, + ctx->key_params.keylen); + + return rc; +} +#if SSI_CC_HAS_CMAC +static int ssi_cmac_setkey(struct crypto_ahash *ahash, + const u8 *key, unsigned int keylen) +{ + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash); + DECL_CYCLE_COUNT_RESOURCES; + SSI_LOG_DEBUG("===== setkey (%d) ====\n", keylen); + + ctx->is_hmac = true; + + switch (keylen) { + case AES_KEYSIZE_128: + case AES_KEYSIZE_192: + case AES_KEYSIZE_256: + break; + default: + return -EINVAL; + } + + ctx->key_params.keylen = keylen; + + /* STAT_PHASE_1: Copy key to ctx */ + START_CYCLE_COUNT(); + + SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr); + dma_sync_single_for_cpu(&ctx->drvdata->plat_dev->dev, + ctx->opad_tmp_keys_dma_addr, + keylen, DMA_TO_DEVICE); + + memcpy(ctx->opad_tmp_keys_buff, key, keylen); + if (keylen == 24) + memset(ctx->opad_tmp_keys_buff + 24, 0, CC_AES_KEY_SIZE_MAX - 24); + + dma_sync_single_for_device(&ctx->drvdata->plat_dev->dev, + ctx->opad_tmp_keys_dma_addr, + keylen, DMA_TO_DEVICE); + SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr, keylen); + + ctx->key_params.keylen = keylen; + + END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1); + + return 0; +} +#endif + +static void ssi_hash_free_ctx(struct ssi_hash_ctx *ctx) +{ + struct device *dev = &ctx->drvdata->plat_dev->dev; + + if (ctx->digest_buff_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr); + dma_unmap_single(dev, ctx->digest_buff_dma_addr, + sizeof(ctx->digest_buff), DMA_BIDIRECTIONAL); + SSI_LOG_DEBUG("Unmapped digest-buffer: " + "digest_buff_dma_addr=0x%llX\n", + (unsigned long long)ctx->digest_buff_dma_addr); + ctx->digest_buff_dma_addr = 0; + } + if (ctx->opad_tmp_keys_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr); + dma_unmap_single(dev, 
ctx->opad_tmp_keys_dma_addr, + sizeof(ctx->opad_tmp_keys_buff), + DMA_BIDIRECTIONAL); + SSI_LOG_DEBUG("Unmapped opad-digest: " + "opad_tmp_keys_dma_addr=0x%llX\n", + (unsigned long long)ctx->opad_tmp_keys_dma_addr); + ctx->opad_tmp_keys_dma_addr = 0; + } + + ctx->key_params.keylen = 0; + +} + + +static int ssi_hash_alloc_ctx(struct ssi_hash_ctx *ctx) +{ + struct device *dev = &ctx->drvdata->plat_dev->dev; + + ctx->key_params.keylen = 0; + + ctx->digest_buff_dma_addr = dma_map_single(dev, (void *)ctx->digest_buff, sizeof(ctx->digest_buff), DMA_BIDIRECTIONAL); + if (dma_mapping_error(dev, ctx->digest_buff_dma_addr)) { + SSI_LOG_ERR("Mapping digest len %zu B at va=%pK for DMA failed\n", + sizeof(ctx->digest_buff), ctx->digest_buff); + goto fail; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr, + sizeof(ctx->digest_buff)); + SSI_LOG_DEBUG("Mapped digest %zu B at va=%pK to dma=0x%llX\n", + sizeof(ctx->digest_buff), ctx->digest_buff, + (unsigned long long)ctx->digest_buff_dma_addr); + + ctx->opad_tmp_keys_dma_addr = dma_map_single(dev, (void *)ctx->opad_tmp_keys_buff, sizeof(ctx->opad_tmp_keys_buff), DMA_BIDIRECTIONAL); + if (dma_mapping_error(dev, ctx->opad_tmp_keys_dma_addr)) { + SSI_LOG_ERR("Mapping opad digest %zu B at va=%pK for DMA failed\n", + sizeof(ctx->opad_tmp_keys_buff), + ctx->opad_tmp_keys_buff); + goto fail; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr, + sizeof(ctx->opad_tmp_keys_buff)); + SSI_LOG_DEBUG("Mapped opad_tmp_keys %zu B at va=%pK to dma=0x%llX\n", + sizeof(ctx->opad_tmp_keys_buff), ctx->opad_tmp_keys_buff, + (unsigned long long)ctx->opad_tmp_keys_dma_addr); + + ctx->is_hmac = false; + return 0; + +fail: + ssi_hash_free_ctx(ctx); + return -ENOMEM; +} + +static int ssi_shash_cra_init(struct crypto_tfm *tfm) +{ + struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm); + struct shash_alg * shash_alg = + container_of(tfm->__crt_alg, struct shash_alg, base); + struct ssi_hash_alg *ssi_alg = + container_of(shash_alg, struct ssi_hash_alg, shash_alg); + + ctx->hash_mode = ssi_alg->hash_mode; + ctx->hw_mode = ssi_alg->hw_mode; + ctx->inter_digestsize = ssi_alg->inter_digestsize; + ctx->drvdata = ssi_alg->drvdata; + + return ssi_hash_alloc_ctx(ctx); +} + +static int ssi_ahash_cra_init(struct crypto_tfm *tfm) +{ + struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm); + struct hash_alg_common * hash_alg_common = + container_of(tfm->__crt_alg, struct hash_alg_common, base); + struct ahash_alg *ahash_alg = + container_of(hash_alg_common, struct ahash_alg, halg); + struct ssi_hash_alg *ssi_alg = + container_of(ahash_alg, struct ssi_hash_alg, ahash_alg); + + + crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), + sizeof(struct ahash_req_ctx)); + + ctx->hash_mode = ssi_alg->hash_mode; + ctx->hw_mode = ssi_alg->hw_mode; + ctx->inter_digestsize = ssi_alg->inter_digestsize; + ctx->drvdata = ssi_alg->drvdata; + + return ssi_hash_alloc_ctx(ctx); +} + +static void ssi_hash_cra_exit(struct crypto_tfm *tfm) +{ + struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm); + + SSI_LOG_DEBUG("ssi_hash_cra_exit"); + ssi_hash_free_ctx(ctx); +} + +static int ssi_mac_update(struct ahash_request *req) +{ + struct ahash_req_ctx *state = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + struct device *dev = &ctx->drvdata->plat_dev->dev; + unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base); + struct ssi_crypto_req ssi_req = {}; + HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN]; + int rc; + uint32_t idx = 0; + + if 
(req->nbytes == 0) { + /* no real updates required */ + return 0; + } + + state->xcbc_count++; + + if (unlikely(rc = ssi_buffer_mgr_map_hash_request_update(ctx->drvdata, state, req->src, req->nbytes, block_size))) { + if (rc == 1) { + SSI_LOG_DEBUG(" data size not require HW update %x\n", + req->nbytes); + /* No hardware updates are required */ + return 0; + } + SSI_LOG_ERR("map_ahash_request_update() failed\n"); + return -ENOMEM; + } + + if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) { + ssi_hash_create_xcbc_setup(req, desc, &idx); + } else { + ssi_hash_create_cmac_setup(req, desc, &idx); + } + + ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, true, &idx); + + /* store the hash digest result in context */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 1); + HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + idx++; + + /* Setup DX request structure */ + ssi_req.user_cb = (void *)ssi_hash_update_complete; + ssi_req.user_arg = (void *)req; +#ifdef ENABLE_CYCLE_COUNT + ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ +#endif + + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); + if (unlikely(rc != -EINPROGRESS)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true); + } + return rc; +} + +static int ssi_mac_final(struct ahash_request *req) +{ + struct ahash_req_ctx *state = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + struct device *dev = &ctx->drvdata->plat_dev->dev; + struct ssi_crypto_req ssi_req = {}; + HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN]; + int idx = 0; + int rc = 0; + uint32_t keySize, keyLen; + uint32_t digestsize = crypto_ahash_digestsize(tfm); + + uint32_t rem_cnt = state->buff_index ? state->buff1_cnt : + state->buff0_cnt; + + + if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) { + keySize = CC_AES_128_BIT_KEY_SIZE; + keyLen = CC_AES_128_BIT_KEY_SIZE; + } else { + keySize = (ctx->key_params.keylen == 24) ? 
AES_MAX_KEY_SIZE : ctx->key_params.keylen; + keyLen = ctx->key_params.keylen; + } + + SSI_LOG_DEBUG("===== final xcbc reminder (%d) ====\n", rem_cnt); + + if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, req->src, req->nbytes, 0) != 0)) { + SSI_LOG_ERR("map_ahash_request_final() failed\n"); + return -ENOMEM; + } + + if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) { + SSI_LOG_ERR("map_ahash_digest() failed\n"); + return -ENOMEM; + } + + /* Setup DX request structure */ + ssi_req.user_cb = (void *)ssi_hash_complete; + ssi_req.user_arg = (void *)req; +#ifdef ENABLE_CYCLE_COUNT + ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ +#endif + + if (state->xcbc_count && (rem_cnt == 0)) { + /* Load key for ECB decryption */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_DECRYPT); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + (ctx->opad_tmp_keys_dma_addr + + XCBC_MAC_K1_OFFSET), + keySize, NS_BIT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keyLen); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + + /* Initiate decryption of block state to previous block_state-XOR-M[n] */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT,0); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + idx++; + + /* Memory Barrier: wait for axi write to complete */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); + HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + idx++; + } + + if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) { + ssi_hash_create_xcbc_setup(req, desc, &idx); + } else { + ssi_hash_create_cmac_setup(req, desc, &idx); + } + + if (state->xcbc_count == 0) { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keyLen); + HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + } else if (rem_cnt > 0) { + ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx); + } else { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0x00, CC_AES_BLOCK_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + idx++; + } + + /* Get final MAC result */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, 1); /*TODO*/ + HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + idx++; + + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); + if (unlikely(rc != -EINPROGRESS)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true); + ssi_hash_unmap_result(dev, state, digestsize, req->result); + } + return rc; +} + +static int ssi_mac_finup(struct ahash_request *req) +{ + struct ahash_req_ctx *state = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + struct device *dev = &ctx->drvdata->plat_dev->dev; + struct ssi_crypto_req ssi_req = {}; + HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN]; + int idx = 0; + int rc = 0; + uint32_t key_len = 0; + 
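ssi_mac_final above, like ssi_cmac_setkey earlier, special-cases 24-byte AES keys: the key slot is transferred at the full 32-byte AES-256 size while HW_DESC_SET_KEY_SIZE_AES still advertises the true 192-bit length, which is why setkey zero-pads the slot once up front. A minimal sketch of that slot convention; AES_KEY_SLOT_MAX is an illustrative stand-in for CC_AES_KEY_SIZE_MAX:

#include <string.h>
#include <stdint.h>

#define AES_KEY_SLOT_MAX 32	/* stand-in for CC_AES_KEY_SIZE_MAX */

/* Keep the key zero-padded to the slot size; the real key length is
 * programmed separately, so the engine still runs the AES-192 schedule. */
static void load_key_slot(uint8_t slot[AES_KEY_SLOT_MAX],
			  const uint8_t *key, unsigned int keylen)
{
	memcpy(slot, key, keylen);
	if (keylen < AES_KEY_SLOT_MAX)
		memset(slot + keylen, 0, AES_KEY_SLOT_MAX - keylen);
}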
uint32_t digestsize = crypto_ahash_digestsize(tfm); + + SSI_LOG_DEBUG("===== finup xcbc(%d) ====\n", req->nbytes); + + if (state->xcbc_count > 0 && req->nbytes == 0) { + SSI_LOG_DEBUG("No data to update. Call to fdx_mac_final \n"); + return ssi_mac_final(req); + } + + if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, req->src, req->nbytes, 1) != 0)) { + SSI_LOG_ERR("map_ahash_request_final() failed\n"); + return -ENOMEM; + } + if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) { + SSI_LOG_ERR("map_ahash_digest() failed\n"); + return -ENOMEM; + } + + /* Setup DX request structure */ + ssi_req.user_cb = (void *)ssi_hash_complete; + ssi_req.user_arg = (void *)req; +#ifdef ENABLE_CYCLE_COUNT + ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ +#endif + + if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) { + key_len = CC_AES_128_BIT_KEY_SIZE; + ssi_hash_create_xcbc_setup(req, desc, &idx); + } else { + key_len = ctx->key_params.keylen; + ssi_hash_create_cmac_setup(req, desc, &idx); + } + + if (req->nbytes == 0) { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len); + HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + } else { + ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx); + } + + /* Get final MAC result */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, 1); /*TODO*/ + HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + idx++; + + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); + if (unlikely(rc != -EINPROGRESS)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true); + ssi_hash_unmap_result(dev, state, digestsize, req->result); + } + return rc; +} + +static int ssi_mac_digest(struct ahash_request *req) +{ + struct ahash_req_ctx *state = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + struct device *dev = &ctx->drvdata->plat_dev->dev; + uint32_t digestsize = crypto_ahash_digestsize(tfm); + struct ssi_crypto_req ssi_req = {}; + HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN]; + uint32_t keyLen; + int idx = 0; + int rc; + + SSI_LOG_DEBUG("===== -digest mac (%d) ====\n", req->nbytes); + + if (unlikely(ssi_hash_map_request(dev, state, ctx) != 0)) { + SSI_LOG_ERR("map_ahash_source() failed\n"); + return -ENOMEM; + } + if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) { + SSI_LOG_ERR("map_ahash_digest() failed\n"); + return -ENOMEM; + } + + if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, req->src, req->nbytes, 1) != 0)) { + SSI_LOG_ERR("map_ahash_request_final() failed\n"); + return -ENOMEM; + } + + /* Setup DX request structure */ + ssi_req.user_cb = (void *)ssi_hash_digest_complete; + ssi_req.user_arg = (void *)req; +#ifdef ENABLE_CYCLE_COUNT + ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */ +#endif + + + if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) { + keyLen = CC_AES_128_BIT_KEY_SIZE; + ssi_hash_create_xcbc_setup(req, desc, &idx); + } else { + keyLen = ctx->key_params.keylen; + ssi_hash_create_cmac_setup(req, desc, &idx); + } + + if (req->nbytes == 0) { + HW_DESC_INIT(&desc[idx]); + 
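In DRV_CIPHER_CMAC mode the driver loads only the raw AES key and the running block state; the SP 800-38B subkeys are presumably derived inside the engine. For reference, each subkey step is a single GF(2^128) doubling of L = AES_K(0^128), with K1 = dbl(L) and K2 = dbl(K1):

#include <stdint.h>

#define AES_BLOCK 16

/* Left-shift a 16-byte block by one bit and, if the MSB was set, reduce
 * by x^128 + x^7 + x^2 + x + 1: the dbl() primitive of SP 800-38B. */
static void gf128_double(uint8_t out[AES_BLOCK], const uint8_t in[AES_BLOCK])
{
	uint8_t carry = in[0] >> 7;
	int i;

	for (i = 0; i < AES_BLOCK - 1; i++)
		out[i] = (uint8_t)((in[i] << 1) | (in[i + 1] >> 7));
	out[AES_BLOCK - 1] = (uint8_t)(in[AES_BLOCK - 1] << 1);
	if (carry)
		out[AES_BLOCK - 1] ^= 0x87;
}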
HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keyLen); + HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + } else { + ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx); + } + + /* Get final MAC result */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT,1); + HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx],DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode); + idx++; + + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1); + if (unlikely(rc != -EINPROGRESS)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true); + ssi_hash_unmap_result(dev, state, digestsize, req->result); + ssi_hash_unmap_request(dev, state, ctx); + } + return rc; +} + +//shash wrap functions +#ifdef SYNC_ALGS +static int ssi_shash_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, u8 *out) +{ + struct ahash_req_ctx *state = shash_desc_ctx(desc); + struct crypto_shash *tfm = desc->tfm; + struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); + uint32_t digestsize = crypto_shash_digestsize(tfm); + struct scatterlist src; + + if (len == 0) { + return ssi_hash_digest(state, ctx, digestsize, NULL, 0, out, NULL); + } + + /* sg_init_one may crash when len is 0 (depends on kernel configuration) */ + sg_init_one(&src, (const void *)data, len); + + return ssi_hash_digest(state, ctx, digestsize, &src, len, out, NULL); +} + +static int ssi_shash_update(struct shash_desc *desc, + const u8 *data, unsigned int len) +{ + struct ahash_req_ctx *state = shash_desc_ctx(desc); + struct crypto_shash *tfm = desc->tfm; + struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); + uint32_t blocksize = crypto_tfm_alg_blocksize(&tfm->base); + struct scatterlist src; + + sg_init_one(&src, (const void *)data, len); + + return ssi_hash_update(state, ctx, blocksize, &src, len, NULL); +} + +static int ssi_shash_finup(struct shash_desc *desc, + const u8 *data, unsigned int len, u8 *out) +{ + struct ahash_req_ctx *state = shash_desc_ctx(desc); + struct crypto_shash *tfm = desc->tfm; + struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); + uint32_t digestsize = crypto_shash_digestsize(tfm); + struct scatterlist src; + + sg_init_one(&src, (const void *)data, len); + + return ssi_hash_finup(state, ctx, digestsize, &src, len, out, NULL); +} + +static int ssi_shash_final(struct shash_desc *desc, u8 *out) +{ + struct ahash_req_ctx *state = shash_desc_ctx(desc); + struct crypto_shash *tfm = desc->tfm; + struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); + uint32_t digestsize = crypto_shash_digestsize(tfm); + + return ssi_hash_final(state, ctx, digestsize, NULL, 0, out, NULL); +} + +static int ssi_shash_init(struct shash_desc *desc) +{ + struct ahash_req_ctx *state = shash_desc_ctx(desc); + struct crypto_shash *tfm = desc->tfm; + struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); + + return ssi_hash_init(state, ctx); +} + +#ifdef EXPORT_FIXED +static int ssi_shash_export(struct shash_desc *desc, void *out) +{ + struct crypto_shash *tfm = desc->tfm; + struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); + + return ssi_hash_export(ctx, out); +} + +static int ssi_shash_import(struct shash_desc *desc, const void *in) +{ + struct crypto_shash 
*tfm = desc->tfm; + struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm); + + return ssi_hash_import(ctx, in); +} +#endif + +static int ssi_shash_setkey(struct crypto_shash *tfm, + const u8 *key, unsigned int keylen) +{ + return ssi_hash_setkey((void *) tfm, key, keylen, true); +} + +#endif /* SYNC_ALGS */ + +//ahash wrap functions +static int ssi_ahash_digest(struct ahash_request *req) +{ + struct ahash_req_ctx *state = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + uint32_t digestsize = crypto_ahash_digestsize(tfm); + + return ssi_hash_digest(state, ctx, digestsize, req->src, req->nbytes, req->result, (void *)req); +} + +static int ssi_ahash_update(struct ahash_request *req) +{ + struct ahash_req_ctx *state = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base); + + return ssi_hash_update(state, ctx, block_size, req->src, req->nbytes, (void *)req); +} + +static int ssi_ahash_finup(struct ahash_request *req) +{ + struct ahash_req_ctx *state = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + uint32_t digestsize = crypto_ahash_digestsize(tfm); + + return ssi_hash_finup(state, ctx, digestsize, req->src, req->nbytes, req->result, (void *)req); +} + +static int ssi_ahash_final(struct ahash_request *req) +{ + struct ahash_req_ctx *state = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + uint32_t digestsize = crypto_ahash_digestsize(tfm); + + return ssi_hash_final(state, ctx, digestsize, req->src, req->nbytes, req->result, (void *)req); +} + +static int ssi_ahash_init(struct ahash_request *req) +{ + struct ahash_req_ctx *state = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + + SSI_LOG_DEBUG("===== init (%d) ====\n", req->nbytes); + + return ssi_hash_init(state, ctx); +} + +#ifdef EXPORT_FIXED +static int ssi_ahash_export(struct ahash_request *req, void *out) +{ + struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash); + + return ssi_hash_export(ctx, out); +} + +static int ssi_ahash_import(struct ahash_request *req, const void *in) +{ + struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash); + + return ssi_hash_import(ctx, in); +} +#endif + +static int ssi_ahash_setkey(struct crypto_ahash *ahash, + const u8 *key, unsigned int keylen) +{ + return ssi_hash_setkey((void *) ahash, key, keylen, false); +} + +struct ssi_hash_template { + char name[CRYPTO_MAX_ALG_NAME]; + char driver_name[CRYPTO_MAX_ALG_NAME]; + char hmac_name[CRYPTO_MAX_ALG_NAME]; + char hmac_driver_name[CRYPTO_MAX_ALG_NAME]; + unsigned int blocksize; + bool synchronize; + union { + struct ahash_alg template_ahash; + struct shash_alg template_shash; + }; + int hash_mode; + int hw_mode; + int inter_digestsize; + struct ssi_drvdata *drvdata; +}; + +/* hash descriptors */ +static struct ssi_hash_template driver_hash[] = { + //Asynchronize hash template + { + .name = "sha1", + .driver_name = "sha1-dx", + .hmac_name = "hmac(sha1)", + .hmac_driver_name = "hmac-sha1-dx", + .blocksize = SHA1_BLOCK_SIZE, + .synchronize = false, + .template_ahash = { + .init = 
ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, +#ifdef EXPORT_FIXED + .export = ssi_ahash_export, + .import = ssi_ahash_import, +#endif + .setkey = ssi_ahash_setkey, + .halg = { + .digestsize = SHA1_DIGEST_SIZE, + .statesize = sizeof(struct sha1_state), + }, + }, + .hash_mode = DRV_HASH_SHA1, + .hw_mode = DRV_HASH_HW_SHA1, + .inter_digestsize = SHA1_DIGEST_SIZE, + }, + { + .name = "sha256", + .driver_name = "sha256-dx", + .hmac_name = "hmac(sha256)", + .hmac_driver_name = "hmac-sha256-dx", + .blocksize = SHA256_BLOCK_SIZE, + .synchronize = false, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, +#ifdef EXPORT_FIXED + .export = ssi_ahash_export, + .import = ssi_ahash_import, +#endif + .setkey = ssi_ahash_setkey, + .halg = { + .digestsize = SHA256_DIGEST_SIZE, + .statesize = sizeof(struct sha256_state), + }, + }, + .hash_mode = DRV_HASH_SHA256, + .hw_mode = DRV_HASH_HW_SHA256, + .inter_digestsize = SHA256_DIGEST_SIZE, + }, + { + .name = "sha224", + .driver_name = "sha224-dx", + .hmac_name = "hmac(sha224)", + .hmac_driver_name = "hmac-sha224-dx", + .blocksize = SHA224_BLOCK_SIZE, + .synchronize = false, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, +#ifdef EXPORT_FIXED + .export = ssi_ahash_export, + .import = ssi_ahash_import, +#endif + .setkey = ssi_ahash_setkey, + .halg = { + .digestsize = SHA224_DIGEST_SIZE, + .statesize = sizeof(struct sha256_state), + }, + }, + .hash_mode = DRV_HASH_SHA224, + .hw_mode = DRV_HASH_HW_SHA256, + .inter_digestsize = SHA256_DIGEST_SIZE, + }, +#if (DX_DEV_SHA_MAX > 256) + { + .name = "sha384", + .driver_name = "sha384-dx", + .hmac_name = "hmac(sha384)", + .hmac_driver_name = "hmac-sha384-dx", + .blocksize = SHA384_BLOCK_SIZE, + .synchronize = false, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, +#ifdef EXPORT_FIXED + .export = ssi_ahash_export, + .import = ssi_ahash_import, +#endif + .setkey = ssi_ahash_setkey, + .halg = { + .digestsize = SHA384_DIGEST_SIZE, + .statesize = sizeof(struct sha512_state), + }, + }, + .hash_mode = DRV_HASH_SHA384, + .hw_mode = DRV_HASH_HW_SHA512, + .inter_digestsize = SHA512_DIGEST_SIZE, + }, + { + .name = "sha512", + .driver_name = "sha512-dx", + .hmac_name = "hmac(sha512)", + .hmac_driver_name = "hmac-sha512-dx", + .blocksize = SHA512_BLOCK_SIZE, + .synchronize = false, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, +#ifdef EXPORT_FIXED + .export = ssi_ahash_export, + .import = ssi_ahash_import, +#endif + .setkey = ssi_ahash_setkey, + .halg = { + .digestsize = SHA512_DIGEST_SIZE, + .statesize = sizeof(struct sha512_state), + }, + }, + .hash_mode = DRV_HASH_SHA512, + .hw_mode = DRV_HASH_HW_SHA512, + .inter_digestsize = SHA512_DIGEST_SIZE, + }, +#endif + { + .name = "md5", + .driver_name = "md5-dx", + .hmac_name = "hmac(md5)", + .hmac_driver_name = "hmac-md5-dx", + .blocksize = MD5_HMAC_BLOCK_SIZE, + .synchronize = false, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_ahash_update, + .final = ssi_ahash_final, + .finup = ssi_ahash_finup, + .digest = ssi_ahash_digest, 
+#ifdef EXPORT_FIXED + .export = ssi_ahash_export, + .import = ssi_ahash_import, +#endif + .setkey = ssi_ahash_setkey, + .halg = { + .digestsize = MD5_DIGEST_SIZE, + .statesize = sizeof(struct md5_state), + }, + }, + .hash_mode = DRV_HASH_MD5, + .hw_mode = DRV_HASH_HW_MD5, + .inter_digestsize = MD5_DIGEST_SIZE, + }, + { + .name = "xcbc(aes)", + .driver_name = "xcbc-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .synchronize = false, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_mac_update, + .final = ssi_mac_final, + .finup = ssi_mac_finup, + .digest = ssi_mac_digest, + .setkey = ssi_xcbc_setkey, +#ifdef EXPORT_FIXED + .export = ssi_ahash_export, + .import = ssi_ahash_import, +#endif + .halg = { + .digestsize = AES_BLOCK_SIZE, + .statesize = sizeof(struct aeshash_state), + }, + }, + .hash_mode = DRV_HASH_NULL, + .hw_mode = DRV_CIPHER_XCBC_MAC, + .inter_digestsize = AES_BLOCK_SIZE, + }, +#if SSI_CC_HAS_CMAC + { + .name = "cmac(aes)", + .driver_name = "cmac-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .synchronize = false, + .template_ahash = { + .init = ssi_ahash_init, + .update = ssi_mac_update, + .final = ssi_mac_final, + .finup = ssi_mac_finup, + .digest = ssi_mac_digest, + .setkey = ssi_cmac_setkey, +#ifdef EXPORT_FIXED + .export = ssi_ahash_export, + .import = ssi_ahash_import, +#endif + .halg = { + .digestsize = AES_BLOCK_SIZE, + .statesize = sizeof(struct aeshash_state), + }, + }, + .hash_mode = DRV_HASH_NULL, + .hw_mode = DRV_CIPHER_CMAC, + .inter_digestsize = AES_BLOCK_SIZE, + }, +#endif + +}; + +static struct ssi_hash_alg * +ssi_hash_create_alg(struct ssi_hash_template *template, bool keyed) +{ + struct ssi_hash_alg *t_crypto_alg; + struct crypto_alg *alg; + + t_crypto_alg = kzalloc(sizeof(struct ssi_hash_alg), GFP_KERNEL); + if (!t_crypto_alg) { + SSI_LOG_ERR("failed to allocate t_alg\n"); + return ERR_PTR(-ENOMEM); + } + + t_crypto_alg->synchronize = template->synchronize; + if (template->synchronize) { + struct shash_alg *halg; + t_crypto_alg->shash_alg = template->template_shash; + halg = &t_crypto_alg->shash_alg; + alg = &halg->base; + if (!keyed) halg->setkey = NULL; + } else { + struct ahash_alg *halg; + t_crypto_alg->ahash_alg = template->template_ahash; + halg = &t_crypto_alg->ahash_alg; + alg = &halg->halg.base; + if (!keyed) halg->setkey = NULL; + } + + if (keyed) { + snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", + template->hmac_name); + snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", + template->hmac_driver_name); + } else { + snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", + template->name); + snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", + template->driver_name); + } + alg->cra_module = THIS_MODULE; + alg->cra_ctxsize = sizeof(struct ssi_hash_ctx); + alg->cra_priority = SSI_CRA_PRIO; + alg->cra_blocksize = template->blocksize; + alg->cra_alignmask = 0; + alg->cra_exit = ssi_hash_cra_exit; + + if (template->synchronize) { + alg->cra_init = ssi_shash_cra_init; + alg->cra_flags = CRYPTO_ALG_TYPE_SHASH | + CRYPTO_ALG_KERN_DRIVER_ONLY; + alg->cra_type = &crypto_shash_type; + } else { + alg->cra_init = ssi_ahash_cra_init; + alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_KERN_DRIVER_ONLY; + alg->cra_type = &crypto_ahash_type; + } + + t_crypto_alg->hash_mode = template->hash_mode; + t_crypto_alg->hw_mode = template->hw_mode; + t_crypto_alg->inter_digestsize = template->inter_digestsize; + + return t_crypto_alg; +} + +int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata) +{ + struct ssi_hash_handle 
*hash_handle = drvdata->hash_handle; + ssi_sram_addr_t sram_buff_ofs = hash_handle->digest_len_sram_addr; + unsigned int larval_seq_len = 0; + HwDesc_s larval_seq[CC_DIGEST_SIZE_MAX/sizeof(uint32_t)]; + int rc = 0; +#if (DX_DEV_SHA_MAX > 256) + int i; +#endif + + /* Copy-to-sram digest-len */ + ssi_sram_mgr_const2sram_desc(digest_len_init, sram_buff_ofs, + ARRAY_SIZE(digest_len_init), larval_seq, &larval_seq_len); + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (unlikely(rc != 0)) + goto init_digest_const_err; + + sram_buff_ofs += sizeof(digest_len_init); + larval_seq_len = 0; + +#if (DX_DEV_SHA_MAX > 256) + /* Copy-to-sram digest-len for sha384/512 */ + ssi_sram_mgr_const2sram_desc(digest_len_sha512_init, sram_buff_ofs, + ARRAY_SIZE(digest_len_sha512_init), larval_seq, &larval_seq_len); + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (unlikely(rc != 0)) + goto init_digest_const_err; + + sram_buff_ofs += sizeof(digest_len_sha512_init); + larval_seq_len = 0; +#endif + + /* The initial digests offset */ + hash_handle->larval_digest_sram_addr = sram_buff_ofs; + + /* Copy-to-sram initial SHA* digests */ + ssi_sram_mgr_const2sram_desc(md5_init, sram_buff_ofs, + ARRAY_SIZE(md5_init), larval_seq, &larval_seq_len); + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (unlikely(rc != 0)) + goto init_digest_const_err; + sram_buff_ofs += sizeof(md5_init); + larval_seq_len = 0; + + ssi_sram_mgr_const2sram_desc(sha1_init, sram_buff_ofs, + ARRAY_SIZE(sha1_init), larval_seq, &larval_seq_len); + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (unlikely(rc != 0)) + goto init_digest_const_err; + sram_buff_ofs += sizeof(sha1_init); + larval_seq_len = 0; + + ssi_sram_mgr_const2sram_desc(sha224_init, sram_buff_ofs, + ARRAY_SIZE(sha224_init), larval_seq, &larval_seq_len); + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (unlikely(rc != 0)) + goto init_digest_const_err; + sram_buff_ofs += sizeof(sha224_init); + larval_seq_len = 0; + + ssi_sram_mgr_const2sram_desc(sha256_init, sram_buff_ofs, + ARRAY_SIZE(sha256_init), larval_seq, &larval_seq_len); + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (unlikely(rc != 0)) + goto init_digest_const_err; + sram_buff_ofs += sizeof(sha256_init); + larval_seq_len = 0; + +#if (DX_DEV_SHA_MAX > 256) + /* We are forced to swap each double-word larval before copying to sram */ + for (i = 0; i < ARRAY_SIZE(sha384_init); i++) { + const uint32_t const0 = ((uint32_t *)((uint64_t *)&sha384_init[i]))[1]; + const uint32_t const1 = ((uint32_t *)((uint64_t *)&sha384_init[i]))[0]; + + ssi_sram_mgr_const2sram_desc(&const0, sram_buff_ofs, 1, + larval_seq, &larval_seq_len); + sram_buff_ofs += sizeof(uint32_t); + ssi_sram_mgr_const2sram_desc(&const1, sram_buff_ofs, 1, + larval_seq, &larval_seq_len); + sram_buff_ofs += sizeof(uint32_t); + } + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("send_request() failed (rc = %d)\n", rc); + goto init_digest_const_err; + } + larval_seq_len = 0; + + for (i = 0; i < ARRAY_SIZE(sha512_init); i++) { + const uint32_t const0 = ((uint32_t *)((uint64_t *)&sha512_init[i]))[1]; + const uint32_t const1 = ((uint32_t *)((uint64_t *)&sha512_init[i]))[0]; + + ssi_sram_mgr_const2sram_desc(&const0, sram_buff_ofs, 1, + larval_seq, &larval_seq_len); + sram_buff_ofs += sizeof(uint32_t); + ssi_sram_mgr_const2sram_desc(&const1, sram_buff_ofs, 1, + larval_seq, &larval_seq_len); + sram_buff_ofs += sizeof(uint32_t); 
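The const0/const1 punning in the sha384/sha512 copy loops assumes a little-endian host: index [1] of the uint32_t view is the high half of each 64-bit larval word, so SRAM receives the high half first. The same split stated endian-neutrally, for illustration only:

#include <stdint.h>

/* Split one 64-bit larval word into two 32-bit SRAM words, high half
 * first, matching the const0/const1 copy order above. */
static void split_larval_word(uint64_t w, uint32_t out[2])
{
	out[0] = (uint32_t)(w >> 32);
	out[1] = (uint32_t)w;
}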
+ } + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("send_request() failed (rc = %d)\n", rc); + goto init_digest_const_err; + } +#endif + +init_digest_const_err: + return rc; +} + +int ssi_hash_alloc(struct ssi_drvdata *drvdata) +{ + struct ssi_hash_handle *hash_handle; + ssi_sram_addr_t sram_buff; + uint32_t sram_size_to_alloc; + int rc = 0; + int alg; + + hash_handle = kzalloc(sizeof(struct ssi_hash_handle), GFP_KERNEL); + if (hash_handle == NULL) { + SSI_LOG_ERR("kzalloc failed to allocate %zu B\n", + sizeof(struct ssi_hash_handle)); + rc = -ENOMEM; + goto fail; + } + + drvdata->hash_handle = hash_handle; + + sram_size_to_alloc = sizeof(digest_len_init) + +#if (DX_DEV_SHA_MAX > 256) + sizeof(digest_len_sha512_init) + + sizeof(sha384_init) + + sizeof(sha512_init) + +#endif + sizeof(md5_init) + + sizeof(sha1_init) + + sizeof(sha224_init) + + sizeof(sha256_init); + + sram_buff = ssi_sram_mgr_alloc(drvdata, sram_size_to_alloc); + if (sram_buff == NULL_SRAM_ADDR) { + SSI_LOG_ERR("SRAM pool exhausted\n"); + rc = -ENOMEM; + goto fail; + } + + /* The initial digest-len offset */ + hash_handle->digest_len_sram_addr = sram_buff; + + /*must be set before the alg registration as it is being used there*/ + rc = ssi_hash_init_sram_digest_consts(drvdata); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("Init digest CONST failed (rc=%d)\n", rc); + goto fail; + } + + INIT_LIST_HEAD(&hash_handle->hash_list); + + /* ahash registration */ + for (alg = 0; alg < ARRAY_SIZE(driver_hash); alg++) { + struct ssi_hash_alg *t_alg; + + /* register hmac version */ + + if ((((struct ssi_hash_template)driver_hash[alg]).hw_mode != DRV_CIPHER_XCBC_MAC) && + (((struct ssi_hash_template)driver_hash[alg]).hw_mode != DRV_CIPHER_CMAC)) { + t_alg = ssi_hash_create_alg(&driver_hash[alg], true); + if (IS_ERR(t_alg)) { + rc = PTR_ERR(t_alg); + SSI_LOG_ERR("%s alg allocation failed\n", + driver_hash[alg].driver_name); + goto fail; + } + t_alg->drvdata = drvdata; + + if (t_alg->synchronize) { + rc = crypto_register_shash(&t_alg->shash_alg); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("%s alg registration failed\n", + t_alg->shash_alg.base.cra_driver_name); + kfree(t_alg); + goto fail; + } else + list_add_tail(&t_alg->entry, &hash_handle->hash_list); + } else { + rc = crypto_register_ahash(&t_alg->ahash_alg); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("%s alg registration failed\n", + t_alg->ahash_alg.halg.base.cra_driver_name); + kfree(t_alg); + goto fail; + } else + list_add_tail(&t_alg->entry, &hash_handle->hash_list); + } + } + + /* register hash version */ + t_alg = ssi_hash_create_alg(&driver_hash[alg], false); + if (IS_ERR(t_alg)) { + rc = PTR_ERR(t_alg); + SSI_LOG_ERR("%s alg allocation failed\n", + driver_hash[alg].driver_name); + goto fail; + } + t_alg->drvdata = drvdata; + + if (t_alg->synchronize) { + rc = crypto_register_shash(&t_alg->shash_alg); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("%s alg registration failed\n", + t_alg->shash_alg.base.cra_driver_name); + kfree(t_alg); + goto fail; + } else + list_add_tail(&t_alg->entry, &hash_handle->hash_list); + + } else { + rc = crypto_register_ahash(&t_alg->ahash_alg); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("%s alg registration failed\n", + t_alg->ahash_alg.halg.base.cra_driver_name); + kfree(t_alg); + goto fail; + } else + list_add_tail(&t_alg->entry, &hash_handle->hash_list); + } + } + + return 0; + +fail: + + if (drvdata->hash_handle != NULL) { + kfree(drvdata->hash_handle); + drvdata->hash_handle = NULL; + } + return rc; +} + 
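The registration loop in ssi_hash_alloc above fans each driver_hash[] template out into up to two algorithms: a keyed HMAC twin, skipped for the inherently keyed XCBC/CMAC templates, and then the template's plain variant. A compact model of that fan-out; all names here are hypothetical:

#include <stdio.h>

/* One entry per template; is_aes_mac marks XCBC/CMAC entries. */
struct tmpl {
	const char *name;
	const char *hmac_name;
	int is_aes_mac;
};

static void register_templates(const struct tmpl *t, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (!t[i].is_aes_mac)
			printf("register keyed:   %s\n", t[i].hmac_name);
		printf("register unkeyed: %s\n", t[i].name);
	}
}

int main(void)
{
	static const struct tmpl algs[] = {
		{ "sha1", "hmac(sha1)", 0 },
		{ "cmac(aes)", NULL, 1 },
	};

	register_templates(algs, 2);
	return 0;
}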
+int ssi_hash_free(struct ssi_drvdata *drvdata) +{ + struct ssi_hash_alg *t_hash_alg, *hash_n; + struct ssi_hash_handle *hash_handle = drvdata->hash_handle; + + if (hash_handle != NULL) { + + list_for_each_entry_safe(t_hash_alg, hash_n, &hash_handle->hash_list, entry) { + if (t_hash_alg->synchronize) { + crypto_unregister_shash(&t_hash_alg->shash_alg); + } else { + crypto_unregister_ahash(&t_hash_alg->ahash_alg); + } + list_del(&t_hash_alg->entry); + kfree(t_hash_alg); + } + + kfree(hash_handle); + drvdata->hash_handle = NULL; + } + return 0; +} + +static void ssi_hash_create_xcbc_setup(struct ahash_request *areq, + HwDesc_s desc[], + unsigned int *seq_size) { + unsigned int idx = *seq_size; + struct ahash_req_ctx *state = ahash_request_ctx(areq); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + + /* Setup XCBC MAC K1 */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr + + XCBC_MAC_K1_OFFSET), + CC_AES_128_BIT_KEY_SIZE, NS_BIT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + + /* Setup XCBC MAC K2 */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr + + XCBC_MAC_K2_OFFSET), + CC_AES_128_BIT_KEY_SIZE, NS_BIT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + + /* Setup XCBC MAC K3 */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr + + XCBC_MAC_K3_OFFSET), + CC_AES_128_BIT_KEY_SIZE, NS_BIT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE2); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + + /* Loading MAC state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + *seq_size = idx; +} + +static void ssi_hash_create_cmac_setup(struct ahash_request *areq, + HwDesc_s desc[], + unsigned int *seq_size) +{ + unsigned int idx = *seq_size; + struct ahash_req_ctx *state = ahash_request_ctx(areq); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); + struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm); + + /* Setup CMAC Key */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->opad_tmp_keys_dma_addr, + ((ctx->key_params.keylen == 24) ? 
AES_MAX_KEY_SIZE : ctx->key_params.keylen), NS_BIT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->key_params.keylen); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + + /* Load MAC state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->key_params.keylen); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + *seq_size = idx; +} + +static void ssi_hash_create_data_desc(struct ahash_req_ctx *areq_ctx, + struct ssi_hash_ctx *ctx, + unsigned int flow_mode, + HwDesc_s desc[], + bool is_not_last_data, + unsigned int *seq_size) +{ + unsigned int idx = *seq_size; + + if (likely(areq_ctx->data_dma_buf_type == SSI_DMA_BUF_DLLI)) { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + sg_dma_address(areq_ctx->curr_sg), + areq_ctx->curr_sg->length, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + idx++; + } else { + if (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) { + SSI_LOG_DEBUG(" NULL mode\n"); + /* nothing to build */ + return; + } + /* bypass */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + areq_ctx->mlli_params.mlli_dma_addr, + areq_ctx->mlli_params.mlli_len, + NS_BIT); + HW_DESC_SET_DOUT_SRAM(&desc[idx], + ctx->drvdata->mlli_sram_addr, + areq_ctx->mlli_params.mlli_len); + HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); + idx++; + /* process */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI, + ctx->drvdata->mlli_sram_addr, + areq_ctx->mlli_nents, + NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + idx++; + } + if (is_not_last_data) { + HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx-1]); + } + /* return updated desc sequence size */ + *seq_size = idx; +} + +/*! + * Gets the address of the initial digest in SRAM + * according to the given hash mode + * + * \param drvdata + * \param mode The Hash mode. 
Supported modes: MD5/SHA1/SHA224/SHA256 + * + * \return ssi_sram_addr_t The address of the initial digest in SRAM + */ +ssi_sram_addr_t ssi_ahash_get_larval_digest_sram_addr(void *drvdata, uint32_t mode) +{ + struct ssi_drvdata *_drvdata = (struct ssi_drvdata *)drvdata; + struct ssi_hash_handle *hash_handle = _drvdata->hash_handle; + + switch (mode) { + case DRV_HASH_NULL: + break; /* Ignored */ + case DRV_HASH_MD5: + return (hash_handle->larval_digest_sram_addr); + case DRV_HASH_SHA1: + return (hash_handle->larval_digest_sram_addr + + sizeof(md5_init)); + case DRV_HASH_SHA224: + return (hash_handle->larval_digest_sram_addr + + sizeof(md5_init) + + sizeof(sha1_init)); + case DRV_HASH_SHA256: + return (hash_handle->larval_digest_sram_addr + + sizeof(md5_init) + + sizeof(sha1_init) + + sizeof(sha224_init)); +#if (DX_DEV_SHA_MAX > 256) + case DRV_HASH_SHA384: + return (hash_handle->larval_digest_sram_addr + + sizeof(md5_init) + + sizeof(sha1_init) + + sizeof(sha224_init) + + sizeof(sha256_init)); + case DRV_HASH_SHA512: + return (hash_handle->larval_digest_sram_addr + + sizeof(md5_init) + + sizeof(sha1_init) + + sizeof(sha224_init) + + sizeof(sha256_init) + + sizeof(sha384_init)); +#endif + default: + SSI_LOG_ERR("Invalid hash mode (%d)\n", mode); + } + + /* Invalid mode: return a valid (if wrong) address rather than crash the kernel */ + return hash_handle->larval_digest_sram_addr; +} + +ssi_sram_addr_t +ssi_ahash_get_initial_digest_len_sram_addr(void *drvdata, uint32_t mode) +{ + struct ssi_drvdata *_drvdata = (struct ssi_drvdata *)drvdata; + struct ssi_hash_handle *hash_handle = _drvdata->hash_handle; + ssi_sram_addr_t digest_len_addr = hash_handle->digest_len_sram_addr; + + switch (mode) { + case DRV_HASH_SHA1: + case DRV_HASH_SHA224: + case DRV_HASH_SHA256: + case DRV_HASH_MD5: + return digest_len_addr; +#if (DX_DEV_SHA_MAX > 256) + case DRV_HASH_SHA384: + case DRV_HASH_SHA512: + return digest_len_addr + sizeof(digest_len_init); +#endif + default: + return digest_len_addr; /* invalid mode; return a safe default to avoid a kernel crash */ + } +} + diff --git a/drivers/staging/ccree/ssi_hash.h b/drivers/staging/ccree/ssi_hash.h new file mode 100644 index 0000000..a2b076d3 --- /dev/null +++ b/drivers/staging/ccree/ssi_hash.h @@ -0,0 +1,101 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */ + +/* \file ssi_hash.h + ARM CryptoCell Hash Crypto API + */ + +#ifndef __SSI_HASH_H__ +#define __SSI_HASH_H__ + +#include "ssi_buffer_mgr.h" + +#define HMAC_IPAD_CONST 0x36363636 +#define HMAC_OPAD_CONST 0x5C5C5C5C +#if (DX_DEV_SHA_MAX > 256) +#define HASH_LEN_SIZE 16 +#define SSI_MAX_HASH_DIGEST_SIZE SHA512_DIGEST_SIZE +#define SSI_MAX_HASH_BLCK_SIZE SHA512_BLOCK_SIZE +#else +#define HASH_LEN_SIZE 8 +#define SSI_MAX_HASH_DIGEST_SIZE SHA256_DIGEST_SIZE +#define SSI_MAX_HASH_BLCK_SIZE SHA256_BLOCK_SIZE +#endif + +#define XCBC_MAC_K1_OFFSET 0 +#define XCBC_MAC_K2_OFFSET 16 +#define XCBC_MAC_K3_OFFSET 32 + +/* This struct was taken from drivers/crypto/nx/nx-aes-xcbc.c; it is used for the xcbc/cmac statesize */ +struct aeshash_state { + u8 state[AES_BLOCK_SIZE]; + unsigned int count; + u8 buffer[AES_BLOCK_SIZE]; +}; + +/* ahash state */ +struct ahash_req_ctx { + uint8_t *buff0; + uint8_t *buff1; + uint8_t *digest_result_buff; + struct async_gen_req_ctx gen_ctx; + enum ssi_req_dma_buf_type data_dma_buf_type; + uint8_t *digest_buff; + uint8_t *opad_digest_buff; + uint8_t *digest_bytes_len; + dma_addr_t opad_digest_dma_addr; + dma_addr_t digest_buff_dma_addr; + dma_addr_t digest_bytes_len_dma_addr; + dma_addr_t digest_result_dma_addr; + uint32_t buff0_cnt; + uint32_t buff1_cnt; + uint32_t buff_index; + uint32_t xcbc_count; /* count xcbc update operations */ + struct scatterlist buff_sg[2]; + struct scatterlist *curr_sg; + uint32_t in_nents; + uint32_t mlli_nents; + struct mlli_params mlli_params; +}; + +int ssi_hash_alloc(struct ssi_drvdata *drvdata); +int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata); +int ssi_hash_free(struct ssi_drvdata *drvdata); + +/*! + * Gets the initial digest length + * + * \param drvdata + * \param mode The Hash mode. Supported modes: MD5/SHA1/SHA224/SHA256/SHA384/SHA512 + * + * \return ssi_sram_addr_t The address of the initial digest length in SRAM + */ +ssi_sram_addr_t +ssi_ahash_get_initial_digest_len_sram_addr(void *drvdata, uint32_t mode); + +/*! + * Gets the address of the initial digest in SRAM + * according to the given hash mode + * + * \param drvdata + * \param mode The Hash mode. Supported modes: MD5/SHA1/SHA224/SHA256/SHA384/SHA512 + * + * \return ssi_sram_addr_t The address of the initial digest in SRAM + */ +ssi_sram_addr_t ssi_ahash_get_larval_digest_sram_addr(void *drvdata, uint32_t mode); + +#endif /*__SSI_HASH_H__*/ + diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c index 1f34e68..ec6d655 100644 --- a/drivers/staging/ccree/ssi_pm.c +++ b/drivers/staging/ccree/ssi_pm.c @@ -26,6 +26,7 @@ #include "ssi_request_mgr.h" #include "ssi_sram_mgr.h" #include "ssi_sysfs.h" +#include "ssi_hash.h" #include "ssi_pm.h" #include "ssi_pm_ext.h" @@ -79,6 +80,9 @@ int ssi_power_mgr_runtime_resume(struct device *dev) return rc; } + /* must be after the queue is resumed, as it uses the HW queue */ + ssi_hash_init_sram_digest_consts(drvdata); + return 0; } From patchwork Sun Apr 23 09:26:11 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 97960 Delivered-To: patch@linaro.org Received: by 10.140.109.52 with SMTP id k49csp1004913qgf; Sun, 23 Apr 2017 02:27:30 -0700 (PDT) X-Received: by 10.84.238.198 with SMTP id l6mr25674105pln.95.1492939650089; Sun, 23 Apr 2017 02:27:30 -0700 (PDT) Return-Path: Received: from vger.kernel.org (vger.kernel.org.
[209.132.180.67]) by mx.google.com with ESMTP id h90si15364267pfa.43.2017.04.23.02.27.29; Sun, 23 Apr 2017 02:27:30 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1044409AbdDWJ1R (ORCPT + 1 other); Sun, 23 Apr 2017 05:27:17 -0400 Received: from foss.arm.com ([217.140.101.70]:46904 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1044379AbdDWJ0v (ORCPT ); Sun, 23 Apr 2017 05:26:51 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EA22B15A1; Sun, 23 Apr 2017 02:26:50 -0700 (PDT) Received: from gby.kfn.arm.com (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E7C6A3F220; Sun, 23 Apr 2017 02:26:46 -0700 (PDT) From: Gilad Ben-Yossef To: Herbert Xu , "David S. Miller" , Rob Herring , Mark Rutland , Greg Kroah-Hartman , devel@driverdev.osuosl.org Cc: linux-crypto@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, gilad.benyossef@arm.com, Binoy Jayan , Ofir Drang , Stuart Yoder , Stephan Muller Subject: [PATCH v3 03/15] staging: ccree: add skcipher support Date: Sun, 23 Apr 2017 12:26:11 +0300 Message-Id: <1492939583-25688-4-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1492939583-25688-1-git-send-email-gilad@benyossef.com> References: <1492939583-25688-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Add CryptoCell skcipher support Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/Kconfig | 8 + drivers/staging/ccree/Makefile | 2 +- drivers/staging/ccree/cc_crypto_ctx.h | 21 + drivers/staging/ccree/ssi_buffer_mgr.c | 147 ++++ drivers/staging/ccree/ssi_buffer_mgr.h | 16 + drivers/staging/ccree/ssi_cipher.c | 1440 ++++++++++++++++++++++++++++++++ drivers/staging/ccree/ssi_cipher.h | 88 ++ drivers/staging/ccree/ssi_driver.c | 14 + drivers/staging/ccree/ssi_driver.h | 30 + 9 files changed, 1765 insertions(+), 1 deletion(-) create mode 100644 drivers/staging/ccree/ssi_cipher.c create mode 100644 drivers/staging/ccree/ssi_cipher.h -- 2.1.4 diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig index a528a99..3fff040 100644 --- a/drivers/staging/ccree/Kconfig +++ b/drivers/staging/ccree/Kconfig @@ -3,11 +3,19 @@ config CRYPTO_DEV_CCREE depends on CRYPTO_HW && OF && HAS_DMA default n select CRYPTO_HASH + select CRYPTO_BLKCIPHER + select CRYPTO_DES + select CRYPTO_AUTHENC select CRYPTO_SHA1 select CRYPTO_MD5 select CRYPTO_SHA256 select CRYPTO_SHA512 select CRYPTO_HMAC + select CRYPTO_AES + select CRYPTO_CBC + select CRYPTO_ECB + select CRYPTO_CTR + select CRYPTO_XTS help Say 'Y' to enable a driver for the Arm TrustZone CryptoCell C7xx. Currently only the CryptoCell 712 REE is supported. 
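[For orientation only — this note and the sketch below are not part of the patch.] Once the algorithms registered by this patch are live, an in-kernel user reaches them through the generic crypto API rather than by calling the driver directly. A minimal sketch, assuming the skcipher API of a kernel of the same era; the demo_* names are hypothetical, the final completion status is discarded for brevity, and whether "cbc(aes)" actually resolves to this driver's cbc-aes-dx depends on its cra_priority relative to other registered implementations:

#include <crypto/skcipher.h>
#include <linux/completion.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Completion callback: ignore the -EINPROGRESS backlog notification
 * and signal the waiter only on final completion. */
static void demo_cipher_done(struct crypto_async_request *req, int err)
{
	if (err == -EINPROGRESS)
		return;
	complete(req->data);
}

/* Encrypt one AES block in place with an all-zero key and IV. */
static int demo_cbc_aes_one_block(void)
{
	DECLARE_COMPLETION_ONSTACK(done);
	u8 key[16] = {}, iv[16] = {};
	struct crypto_skcipher *tfm;
	struct skcipher_request *req = NULL;
	struct scatterlist sg;
	u8 *buf = NULL;
	int rc;

	tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	rc = crypto_skcipher_setkey(tfm, key, sizeof(key));
	if (rc)
		goto out;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	buf = kzalloc(16, GFP_KERNEL); /* DMA-able buffer; never stack memory */
	if (!req || !buf) {
		rc = -ENOMEM;
		goto out;
	}

	sg_init_one(&sg, buf, 16);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				      demo_cipher_done, &done);
	skcipher_request_set_crypt(req, &sg, &sg, 16, iv);

	rc = crypto_skcipher_encrypt(req);
	if (rc == -EINPROGRESS || rc == -EBUSY) {
		wait_for_completion(&done); /* final status arrives in the callback */
		rc = 0;
	}
out:
	kfree(buf);
	skcipher_request_free(req);
	crypto_free_skcipher(tfm);
	return rc;
}

The callback-plus-completion wait follows the pattern the kernel's crypto self-tests of that era used for asynchronous requests; a production caller would propagate the error passed to the callback instead of discarding it.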
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile index f94e225..21a80d5 100644 --- a/drivers/staging/ccree/Makefile +++ b/drivers/staging/ccree/Makefile @@ -1,2 +1,2 @@ obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o -ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_hash.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o +ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h index a4aa066..a7f7d95 100644 --- a/drivers/staging/ccree/cc_crypto_ctx.h +++ b/drivers/staging/ccree/cc_crypto_ctx.h @@ -242,6 +242,27 @@ struct drv_ctx_hmac { CC_DIGEST_SIZE_MAX - CC_HMAC_BLOCK_SIZE_MAX]; }; +struct drv_ctx_cipher { + enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_AES */ + enum drv_cipher_mode mode; + enum drv_crypto_direction direction; + enum drv_crypto_key_type crypto_key_type; + enum drv_crypto_padding_type padding_type; + uint32_t key_size; /* numeric value in bytes */ + uint32_t data_unit_size; /* required for XTS */ + /* block_state is the AES engine block state. + * It is used by the host to pass IV or counter at initialization. + * It is used by SeP for intermediate block chaining state and for + * returning MAC algorithms results. */ + uint8_t block_state[CC_AES_BLOCK_SIZE]; + uint8_t key[CC_AES_KEY_SIZE_MAX]; + uint8_t xex_key[CC_AES_KEY_SIZE_MAX]; + /* reserve to end of allocated context size */ + uint32_t reserved[CC_DRV_CTX_SIZE_WORDS - 7 - + CC_AES_BLOCK_SIZE/sizeof(uint32_t) - 2 * + (CC_AES_KEY_SIZE_MAX/sizeof(uint32_t))]; +}; + /*******************************************************************/ /***************** MESSAGE BASED CONTEXTS **************************/ /*******************************************************************/ diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c index aceb01c..d0d5352 100644 --- a/drivers/staging/ccree/ssi_buffer_mgr.c +++ b/drivers/staging/ccree/ssi_buffer_mgr.c @@ -28,6 +28,7 @@ #include "ssi_buffer_mgr.h" #include "cc_lli_defs.h" +#include "ssi_cipher.h" #include "ssi_hash.h" #define LLI_MAX_NUM_OF_DATA_ENTRIES 128 @@ -517,6 +518,152 @@ static inline int ssi_ahash_handle_curr_buf(struct device *dev, return 0; } +void ssi_buffer_mgr_unmap_blkcipher_request( + struct device *dev, + void *ctx, + unsigned int ivsize, + struct scatterlist *src, + struct scatterlist *dst) +{ + struct blkcipher_req_ctx *req_ctx = (struct blkcipher_req_ctx *)ctx; + + if (likely(req_ctx->gen_ctx.iv_dma_addr != 0)) { + SSI_LOG_DEBUG("Unmapped iv: iv_dma_addr=0x%llX iv_size=%u\n", + (unsigned long long)req_ctx->gen_ctx.iv_dma_addr, + ivsize); + SSI_RESTORE_DMA_ADDR_TO_48BIT(req_ctx->gen_ctx.iv_dma_addr); + dma_unmap_single(dev, req_ctx->gen_ctx.iv_dma_addr, + ivsize, + DMA_TO_DEVICE); + } + /* Release pool */ + if (req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(req_ctx->mlli_params.mlli_dma_addr); + dma_pool_free(req_ctx->mlli_params.curr_pool, + req_ctx->mlli_params.mlli_virt_addr, + req_ctx->mlli_params.mlli_dma_addr); + } + + SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(src)); + dma_unmap_sg(dev, src, req_ctx->in_nents, + DMA_BIDIRECTIONAL); + SSI_LOG_DEBUG("Unmapped req->src=%pK\n", + sg_virt(src)); + + if (src != dst) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(dst)); + dma_unmap_sg(dev, dst, req_ctx->out_nents, + DMA_BIDIRECTIONAL); + SSI_LOG_DEBUG("Unmapped req->dst=%pK\n", + sg_virt(dst)); 
+ } +} + +int ssi_buffer_mgr_map_blkcipher_request( + struct ssi_drvdata *drvdata, + void *ctx, + unsigned int ivsize, + unsigned int nbytes, + void *info, + struct scatterlist *src, + struct scatterlist *dst) +{ + struct blkcipher_req_ctx *req_ctx = (struct blkcipher_req_ctx *)ctx; + struct mlli_params *mlli_params = &req_ctx->mlli_params; + struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle; + struct device *dev = &drvdata->plat_dev->dev; + struct buffer_array sg_data; + uint32_t dummy = 0; + int rc = 0; + uint32_t mapped_nents = 0; + + req_ctx->dma_buf_type = SSI_DMA_BUF_DLLI; + mlli_params->curr_pool = NULL; + sg_data.num_of_buffers = 0; + + /* Map IV buffer */ + if (likely(ivsize != 0) ) { + dump_byte_array("iv", (uint8_t *)info, ivsize); + req_ctx->gen_ctx.iv_dma_addr = + dma_map_single(dev, (void *)info, + ivsize, + DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(dev, + req_ctx->gen_ctx.iv_dma_addr))) { + SSI_LOG_ERR("Mapping iv %u B at va=%pK " + "for DMA failed\n", ivsize, info); + return -ENOMEM; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(req_ctx->gen_ctx.iv_dma_addr, + ivsize); + SSI_LOG_DEBUG("Mapped iv %u B at va=%pK to dma=0x%llX\n", + ivsize, info, + (unsigned long long)req_ctx->gen_ctx.iv_dma_addr); + } else + req_ctx->gen_ctx.iv_dma_addr = 0; + + /* Map the src SGL */ + rc = ssi_buffer_mgr_map_scatterlist(dev, src, + nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents, + LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents); + if (unlikely(rc != 0)) { + rc = -ENOMEM; + goto ablkcipher_exit; + } + if (mapped_nents > 1) + req_ctx->dma_buf_type = SSI_DMA_BUF_MLLI; + + if (unlikely(src == dst)) { + /* Handle inplace operation */ + if (unlikely(req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI)) { + req_ctx->out_nents = 0; + ssi_buffer_mgr_add_scatterlist_entry(&sg_data, + req_ctx->in_nents, src, + nbytes, 0, true, &req_ctx->in_mlli_nents); + } + } else { + /* Map the dst sg */ + if (unlikely(ssi_buffer_mgr_map_scatterlist( + dev,dst, nbytes, + DMA_BIDIRECTIONAL, &req_ctx->out_nents, + LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, + &mapped_nents))){ + rc = -ENOMEM; + goto ablkcipher_exit; + } + if (mapped_nents > 1) + req_ctx->dma_buf_type = SSI_DMA_BUF_MLLI; + + if (unlikely((req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI))) { + ssi_buffer_mgr_add_scatterlist_entry(&sg_data, + req_ctx->in_nents, src, + nbytes, 0, true, + &req_ctx->in_mlli_nents); + ssi_buffer_mgr_add_scatterlist_entry(&sg_data, + req_ctx->out_nents, dst, + nbytes, 0, true, + &req_ctx->out_mlli_nents); + } + } + + if (unlikely(req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI)) { + mlli_params->curr_pool = buff_mgr->mlli_buffs_pool; + rc = ssi_buffer_mgr_generate_mlli(dev, &sg_data, mlli_params); + if (unlikely(rc!= 0)) + goto ablkcipher_exit; + + } + + SSI_LOG_DEBUG("areq_ctx->dma_buf_type = %s\n", + GET_DMA_BUFFER_TYPE(req_ctx->dma_buf_type)); + + return 0; + +ablkcipher_exit: + ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst); + return rc; +} + int ssi_buffer_mgr_map_hash_request_final( struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update) { diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h index cadb853..41412b2 100644 --- a/drivers/staging/ccree/ssi_buffer_mgr.h +++ b/drivers/staging/ccree/ssi_buffer_mgr.h @@ -55,6 +55,22 @@ int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata); int ssi_buffer_mgr_fini(struct ssi_drvdata *drvdata); +int ssi_buffer_mgr_map_blkcipher_request( + struct ssi_drvdata *drvdata, + void *ctx, + unsigned 
int ivsize, + unsigned int nbytes, + void *info, + struct scatterlist *src, + struct scatterlist *dst); + +void ssi_buffer_mgr_unmap_blkcipher_request( + struct device *dev, + void *ctx, + unsigned int ivsize, + struct scatterlist *src, + struct scatterlist *dst); + + int ssi_buffer_mgr_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update); int ssi_buffer_mgr_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, unsigned int block_size); diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c new file mode 100644 index 0000000..d22a1b3 --- /dev/null +++ b/drivers/staging/ccree/ssi_cipher.c @@ -0,0 +1,1440 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see <http://www.gnu.org/licenses/>. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ssi_config.h" +#include "ssi_driver.h" +#include "cc_lli_defs.h" +#include "ssi_buffer_mgr.h" +#include "ssi_cipher.h" +#include "ssi_request_mgr.h" +#include "ssi_sysfs.h" + +#define MAX_ABLKCIPHER_SEQ_LEN 6 + +#define template_ablkcipher template_u.ablkcipher +#define template_sblkcipher template_u.blkcipher + +#define SSI_MIN_AES_XTS_SIZE 0x10 +#define SSI_MAX_AES_XTS_SIZE 0x2000 +struct ssi_blkcipher_handle { + struct list_head blkcipher_alg_list; +}; + +struct cc_user_key_info { + uint8_t *key; + dma_addr_t key_dma_addr; +}; +struct cc_hw_key_info { + enum HwCryptoKey key1_slot; + enum HwCryptoKey key2_slot; +}; + +struct ssi_ablkcipher_ctx { + struct ssi_drvdata *drvdata; + int keylen; + int key_round_number; + int cipher_mode; + int flow_mode; + unsigned int flags; + struct blkcipher_req_ctx *sync_ctx; + struct cc_user_key_info user; + struct cc_hw_key_info hw; + struct crypto_shash *shash_tfm; +}; + +static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req, void __iomem *cc_base); + + +static int validate_keys_sizes(struct ssi_ablkcipher_ctx *ctx_p, uint32_t size) { + switch (ctx_p->flow_mode) { + case S_DIN_to_AES: + switch (size) { + case CC_AES_128_BIT_KEY_SIZE: + case CC_AES_192_BIT_KEY_SIZE: + if (likely((ctx_p->cipher_mode != DRV_CIPHER_XTS) && + (ctx_p->cipher_mode != DRV_CIPHER_ESSIV) && + (ctx_p->cipher_mode != DRV_CIPHER_BITLOCKER))) + return 0; + break; + case CC_AES_256_BIT_KEY_SIZE: + return 0; + case (CC_AES_192_BIT_KEY_SIZE*2): + case (CC_AES_256_BIT_KEY_SIZE*2): + if (likely((ctx_p->cipher_mode == DRV_CIPHER_XTS) || + (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) || + (ctx_p->cipher_mode == DRV_CIPHER_BITLOCKER))) + return 0; + break; + default: + break; + } + break; /* unsupported AES key size; must not fall through to the DES checks */ + case S_DIN_to_DES: + if (likely(size == DES3_EDE_KEY_SIZE || + size == DES_KEY_SIZE)) + return 0; + break; +#if SSI_CC_HAS_MULTI2 + case S_DIN_to_MULTI2: + if (likely(size == CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE)) + return 0; + break; +#endif + default: + break; + } + return -EINVAL; +} + + +static int validate_data_size(struct
ssi_ablkcipher_ctx *ctx_p, unsigned int size) { + switch (ctx_p->flow_mode) { + case S_DIN_to_AES: + switch (ctx_p->cipher_mode) { + case DRV_CIPHER_XTS: + if ((size >= SSI_MIN_AES_XTS_SIZE) && + (size <= SSI_MAX_AES_XTS_SIZE) && + IS_ALIGNED(size, AES_BLOCK_SIZE)) + return 0; + break; + case DRV_CIPHER_CBC_CTS: + if (likely(size >= AES_BLOCK_SIZE)) + return 0; + break; + case DRV_CIPHER_OFB: + case DRV_CIPHER_CTR: + return 0; + case DRV_CIPHER_ECB: + case DRV_CIPHER_CBC: + case DRV_CIPHER_ESSIV: + case DRV_CIPHER_BITLOCKER: + if (likely(IS_ALIGNED(size, AES_BLOCK_SIZE))) + return 0; + break; + default: + break; + } + break; + case S_DIN_to_DES: + if (likely(IS_ALIGNED(size, DES_BLOCK_SIZE))) + return 0; + break; +#if SSI_CC_HAS_MULTI2 + case S_DIN_to_MULTI2: + switch (ctx_p->cipher_mode) { + case DRV_MULTI2_CBC: + if (likely(IS_ALIGNED(size, CC_MULTI2_BLOCK_SIZE))) + return 0; + break; + case DRV_MULTI2_OFB: + return 0; + default: + break; + } + break; +#endif /*SSI_CC_HAS_MULTI2*/ + default: + break; + } + return -EINVAL; +} + +static unsigned int get_max_keysize(struct crypto_tfm *tfm) +{ + struct ssi_crypto_alg *ssi_alg = container_of(tfm->__crt_alg, struct ssi_crypto_alg, crypto_alg); + + if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_ABLKCIPHER) { + return ssi_alg->crypto_alg.cra_ablkcipher.max_keysize; + } + + if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_BLKCIPHER) { + return ssi_alg->crypto_alg.cra_blkcipher.max_keysize; + } + + return 0; +} + +static int ssi_blkcipher_init(struct crypto_tfm *tfm) +{ + struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + struct crypto_alg *alg = tfm->__crt_alg; + struct ssi_crypto_alg *ssi_alg = + container_of(alg, struct ssi_crypto_alg, crypto_alg); + struct device *dev; + int rc = 0; + unsigned int max_key_buf_size = get_max_keysize(tfm); + + SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx_p, + crypto_tfm_alg_name(tfm)); + + ctx_p->cipher_mode = ssi_alg->cipher_mode; + ctx_p->flow_mode = ssi_alg->flow_mode; + ctx_p->drvdata = ssi_alg->drvdata; + dev = &ctx_p->drvdata->plat_dev->dev; + + /* Allocate key buffer, cache line aligned */ + ctx_p->user.key = kmalloc(max_key_buf_size, GFP_KERNEL|GFP_DMA); + if (!ctx_p->user.key) { + SSI_LOG_ERR("Allocating key buffer in context failed\n"); + return -ENOMEM; /* must not continue with a NULL key buffer */ + } + SSI_LOG_DEBUG("Allocated key buffer in context.
key=@%p\n", + ctx_p->user.key); + + /* Map key buffer */ + ctx_p->user.key_dma_addr = dma_map_single(dev, (void *)ctx_p->user.key, + max_key_buf_size, DMA_TO_DEVICE); + if (dma_mapping_error(dev, ctx_p->user.key_dma_addr)) { + SSI_LOG_ERR("Mapping Key %u B at va=%pK for DMA failed\n", + max_key_buf_size, ctx_p->user.key); + return -ENOMEM; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr, max_key_buf_size); + SSI_LOG_DEBUG("Mapped key %u B at va=%pK to dma=0x%llX\n", + max_key_buf_size, ctx_p->user.key, + (unsigned long long)ctx_p->user.key_dma_addr); + + if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) { + /* Alloc hash tfm for essiv */ + ctx_p->shash_tfm = crypto_alloc_shash("sha256-generic", 0, 0); + if (IS_ERR(ctx_p->shash_tfm)) { + SSI_LOG_ERR("Error allocating hash tfm for ESSIV.\n"); + return PTR_ERR(ctx_p->shash_tfm); + } + } + + return rc; +} + +static void ssi_blkcipher_exit(struct crypto_tfm *tfm) +{ + struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + struct device *dev = &ctx_p->drvdata->plat_dev->dev; + unsigned int max_key_buf_size = get_max_keysize(tfm); + + SSI_LOG_DEBUG("Clearing context @%p for %s\n", + crypto_tfm_ctx(tfm), crypto_tfm_alg_name(tfm)); + + if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) { + /* Free hash tfm for essiv */ + crypto_free_shash(ctx_p->shash_tfm); + ctx_p->shash_tfm = NULL; + } + + /* Unmap key buffer */ + SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr); + dma_unmap_single(dev, ctx_p->user.key_dma_addr, max_key_buf_size, + DMA_TO_DEVICE); + SSI_LOG_DEBUG("Unmapped key buffer key_dma_addr=0x%llX\n", + (unsigned long long)ctx_p->user.key_dma_addr); + + /* Free key buffer in context */ + kfree(ctx_p->user.key); + SSI_LOG_DEBUG("Free key buffer in context. key=@%p\n", ctx_p->user.key); +} + + +typedef struct tdes_keys{ + u8 key1[DES_KEY_SIZE]; + u8 key2[DES_KEY_SIZE]; + u8 key3[DES_KEY_SIZE]; +}tdes_keys_t; + +static const u8 zero_buff[] = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, + 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}; + +static enum HwCryptoKey hw_key_to_cc_hw_key(int slot_num) +{ + switch (slot_num) { + case 0: + return KFDE0_KEY; + case 1: + return KFDE1_KEY; + case 2: + return KFDE2_KEY; + case 3: + return KFDE3_KEY; + } + return END_OF_KEYS; +} + +static int ssi_blkcipher_setkey(struct crypto_tfm *tfm, + const u8 *key, + unsigned int keylen) +{ + struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + struct device *dev = &ctx_p->drvdata->plat_dev->dev; + u32 tmp[DES_EXPKEY_WORDS]; + unsigned int max_key_buf_size = get_max_keysize(tfm); + DECL_CYCLE_COUNT_RESOURCES; + + SSI_LOG_DEBUG("Setting key in context @%p for %s. 
keylen=%u\n", + ctx_p, crypto_tfm_alg_name(tfm), keylen); + dump_byte_array("key", (uint8_t *)key, keylen); + + /* STAT_PHASE_0: Init and sanity checks */ + START_CYCLE_COUNT(); + +#if SSI_CC_HAS_MULTI2 + /*last byte of key buffer is round number and should not be a part of key size*/ + if (ctx_p->flow_mode == S_DIN_to_MULTI2) { + keylen -=1; + } +#endif /*SSI_CC_HAS_MULTI2*/ + + if (unlikely(validate_keys_sizes(ctx_p,keylen) != 0)) { + SSI_LOG_ERR("Unsupported key size %d.\n", keylen); + crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + + if (ssi_is_hw_key(tfm)) { + /* setting HW key slots */ + struct arm_hw_key_info *hki = (struct arm_hw_key_info*)key; + + if (unlikely(ctx_p->flow_mode != S_DIN_to_AES)) { + SSI_LOG_ERR("HW key not supported for non-AES flows\n"); + return -EINVAL; + } + + ctx_p->hw.key1_slot = hw_key_to_cc_hw_key(hki->hw_key1); + if (unlikely(ctx_p->hw.key1_slot == END_OF_KEYS)) { + SSI_LOG_ERR("Unsupported hw key1 number (%d)\n", hki->hw_key1); + return -EINVAL; + } + + if ((ctx_p->cipher_mode == DRV_CIPHER_XTS) || + (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) || + (ctx_p->cipher_mode == DRV_CIPHER_BITLOCKER)) { + if (unlikely(hki->hw_key1 == hki->hw_key2)) { + SSI_LOG_ERR("Illegal hw key numbers (%d,%d)\n", hki->hw_key1, hki->hw_key2); + return -EINVAL; + } + ctx_p->hw.key2_slot = hw_key_to_cc_hw_key(hki->hw_key2); + if (unlikely(ctx_p->hw.key2_slot == END_OF_KEYS)) { + SSI_LOG_ERR("Unsupported hw key2 number (%d)\n", hki->hw_key2); + return -EINVAL; + } + } + + ctx_p->keylen = keylen; + END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0); + SSI_LOG_DEBUG("ssi_blkcipher_setkey: ssi_is_hw_key ret 0"); + + return 0; + } + + // verify weak keys + if (ctx_p->flow_mode == S_DIN_to_DES) { + if (unlikely(!des_ekey(tmp, key)) && + (crypto_tfm_get_flags(tfm) & CRYPTO_TFM_REQ_WEAK_KEY)) { + tfm->crt_flags |= CRYPTO_TFM_RES_WEAK_KEY; + SSI_LOG_DEBUG("ssi_blkcipher_setkey: weak DES key"); + return -EINVAL; + } + } + + END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0); + + /* STAT_PHASE_1: Copy key to ctx */ + START_CYCLE_COUNT(); + SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr); + dma_sync_single_for_cpu(dev, ctx_p->user.key_dma_addr, + max_key_buf_size, DMA_TO_DEVICE); +#if SSI_CC_HAS_MULTI2 + if (ctx_p->flow_mode == S_DIN_to_MULTI2) { + memcpy(ctx_p->user.key, key, CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE); + ctx_p->key_round_number = key[CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE]; + if (ctx_p->key_round_number < CC_MULTI2_MIN_NUM_ROUNDS || + ctx_p->key_round_number > CC_MULTI2_MAX_NUM_ROUNDS) { + crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); + SSI_LOG_DEBUG("ssi_blkcipher_setkey: SSI_CC_HAS_MULTI2 einval"); + return -EINVAL; + } + } else +#endif /*SSI_CC_HAS_MULTI2*/ + { + memcpy(ctx_p->user.key, key, keylen); + if (keylen == 24) + memset(ctx_p->user.key + 24, 0, CC_AES_KEY_SIZE_MAX - 24); + + if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) { + /* sha256 for key2 - use sw implementation */ + int key_len = keylen >> 1; + int err; + SHASH_DESC_ON_STACK(desc, ctx_p->shash_tfm); + desc->tfm = ctx_p->shash_tfm; + + err = crypto_shash_digest(desc, ctx_p->user.key, key_len, ctx_p->user.key + key_len); + if (err) { + SSI_LOG_ERR("Failed to hash ESSIV key.\n"); + return err; + } + } + } + dma_sync_single_for_device(dev, ctx_p->user.key_dma_addr, + max_key_buf_size, DMA_TO_DEVICE); + SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr ,max_key_buf_size); + ctx_p->keylen = keylen; + + END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1); + + 
SSI_LOG_DEBUG("ssi_blkcipher_setkey: return safely"); + return 0; +} + +static inline void +ssi_blkcipher_create_setup_desc( + struct crypto_tfm *tfm, + struct blkcipher_req_ctx *req_ctx, + unsigned int ivsize, + unsigned int nbytes, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + int cipher_mode = ctx_p->cipher_mode; + int flow_mode = ctx_p->flow_mode; + int direction = req_ctx->gen_ctx.op_type; + dma_addr_t key_dma_addr = ctx_p->user.key_dma_addr; + unsigned int key_len = ctx_p->keylen; + dma_addr_t iv_dma_addr = req_ctx->gen_ctx.iv_dma_addr; + unsigned int du_size = nbytes; + + struct ssi_crypto_alg *ssi_alg = container_of(tfm->__crt_alg, struct ssi_crypto_alg, crypto_alg); + + if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_BULK_MASK) == CRYPTO_ALG_BULK_DU_512) + du_size = 512; + if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_BULK_MASK) == CRYPTO_ALG_BULK_DU_4096) + du_size = 4096; + + switch (cipher_mode) { + case DRV_CIPHER_CBC: + case DRV_CIPHER_CBC_CTS: + case DRV_CIPHER_CTR: + case DRV_CIPHER_OFB: + /* Load cipher state */ + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, + iv_dma_addr, ivsize, + NS_BIT); + HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); + HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode); + if ((cipher_mode == DRV_CIPHER_CTR) || + (cipher_mode == DRV_CIPHER_OFB) ) { + HW_DESC_SET_SETUP_MODE(&desc[*seq_size], + SETUP_LOAD_STATE1); + } else { + HW_DESC_SET_SETUP_MODE(&desc[*seq_size], + SETUP_LOAD_STATE0); + } + (*seq_size)++; + /*FALLTHROUGH*/ + case DRV_CIPHER_ECB: + /* Load key */ + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode); + HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); + if (flow_mode == S_DIN_to_AES) { + + if (ssi_is_hw_key(tfm)) { + HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key1_slot); + } else { + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, + key_dma_addr, + ((key_len == 24) ? 
AES_MAX_KEY_SIZE : key_len), + NS_BIT); + } + HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len); + } else { + /*des*/ + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, + key_dma_addr, key_len, + NS_BIT); + HW_DESC_SET_KEY_SIZE_DES(&desc[*seq_size], key_len); + } + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); + HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0); + (*seq_size)++; + break; + case DRV_CIPHER_XTS: + case DRV_CIPHER_ESSIV: + case DRV_CIPHER_BITLOCKER: + /* Load AES key */ + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode); + HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); + if (ssi_is_hw_key(tfm)) { + HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key1_slot); + } else { + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, + key_dma_addr, key_len/2, + NS_BIT); + } + HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2); + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); + HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0); + (*seq_size)++; + + /* load XEX key */ + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode); + HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); + if (ssi_is_hw_key(tfm)) { + HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key2_slot); + } else { + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, + (key_dma_addr+key_len/2), key_len/2, + NS_BIT); + } + HW_DESC_SET_XEX_DATA_UNIT_SIZE(&desc[*seq_size], du_size); + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], S_DIN_to_AES2); + HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2); + HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_XEX_KEY); + (*seq_size)++; + + /* Set state */ + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE1); + HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode); + HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); + HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2); + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, + iv_dma_addr, CC_AES_BLOCK_SIZE, + NS_BIT); + (*seq_size)++; + break; + default: + SSI_LOG_ERR("Unsupported cipher mode (%d)\n", cipher_mode); + BUG(); + } +} + +#if SSI_CC_HAS_MULTI2 +static inline void ssi_blkcipher_create_multi2_setup_desc( + struct crypto_tfm *tfm, + struct blkcipher_req_ctx *req_ctx, + unsigned int ivsize, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + + int direction = req_ctx->gen_ctx.op_type; + /* Load system key */ + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode); + HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, ctx_p->user.key_dma_addr, + CC_MULTI2_SYSTEM_KEY_SIZE, + NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode); + HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0); + (*seq_size)++; + + /* load data key */ + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, + (ctx_p->user.key_dma_addr + + CC_MULTI2_SYSTEM_KEY_SIZE), + CC_MULTI2_DATA_KEY_SIZE, NS_BIT); + HW_DESC_SET_MULTI2_NUM_ROUNDS(&desc[*seq_size], + ctx_p->key_round_number); + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode); + HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode); + HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); + HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE0 ); + (*seq_size)++; 
+ + + /* Set state */ + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, + req_ctx->gen_ctx.iv_dma_addr, + ivsize, NS_BIT); + HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction); + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode); + HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode); + HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE1); + (*seq_size)++; + +} +#endif /*SSI_CC_HAS_MULTI2*/ + +static inline void +ssi_blkcipher_create_data_desc( + struct crypto_tfm *tfm, + struct blkcipher_req_ctx *req_ctx, + struct scatterlist *dst, struct scatterlist *src, + unsigned int nbytes, + void *areq, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + unsigned int flow_mode = ctx_p->flow_mode; + + switch (ctx_p->flow_mode) { + case S_DIN_to_AES: + flow_mode = DIN_AES_DOUT; + break; + case S_DIN_to_DES: + flow_mode = DIN_DES_DOUT; + break; +#if SSI_CC_HAS_MULTI2 + case S_DIN_to_MULTI2: + flow_mode = DIN_MULTI2_DOUT; + break; +#endif /*SSI_CC_HAS_MULTI2*/ + default: + SSI_LOG_ERR("invalid flow mode, flow_mode = %d \n", flow_mode); + return; + } + /* Process */ + if (likely(req_ctx->dma_buf_type == SSI_DMA_BUF_DLLI)){ + SSI_LOG_DEBUG(" data params addr 0x%llX length 0x%X \n", + (unsigned long long)sg_dma_address(src), + nbytes); + SSI_LOG_DEBUG(" data params addr 0x%llX length 0x%X \n", + (unsigned long long)sg_dma_address(dst), + nbytes); + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, + sg_dma_address(src), + nbytes, NS_BIT); + HW_DESC_SET_DOUT_DLLI(&desc[*seq_size], + sg_dma_address(dst), + nbytes, + NS_BIT, (areq == NULL)? 0:1); + if (areq != NULL) { + HW_DESC_SET_QUEUE_LAST_IND(&desc[*seq_size]); + } + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); + (*seq_size)++; + } else { + /* bypass */ + SSI_LOG_DEBUG(" bypass params addr 0x%llX " + "length 0x%X addr 0x%08X\n", + (unsigned long long)req_ctx->mlli_params.mlli_dma_addr, + req_ctx->mlli_params.mlli_len, + (unsigned int)ctx_p->drvdata->mlli_sram_addr); + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, + req_ctx->mlli_params.mlli_dma_addr, + req_ctx->mlli_params.mlli_len, + NS_BIT); + HW_DESC_SET_DOUT_SRAM(&desc[*seq_size], + ctx_p->drvdata->mlli_sram_addr, + req_ctx->mlli_params.mlli_len); + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], BYPASS); + (*seq_size)++; + + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_MLLI, + ctx_p->drvdata->mlli_sram_addr, + req_ctx->in_mlli_nents, NS_BIT); + if (req_ctx->out_nents == 0) { + SSI_LOG_DEBUG(" din/dout params addr 0x%08X " + "addr 0x%08X\n", + (unsigned int)ctx_p->drvdata->mlli_sram_addr, + (unsigned int)ctx_p->drvdata->mlli_sram_addr); + HW_DESC_SET_DOUT_MLLI(&desc[*seq_size], + ctx_p->drvdata->mlli_sram_addr, + req_ctx->in_mlli_nents, + NS_BIT,(areq == NULL)? 0:1); + } else { + SSI_LOG_DEBUG(" din/dout params " + "addr 0x%08X addr 0x%08X\n", + (unsigned int)ctx_p->drvdata->mlli_sram_addr, + (unsigned int)ctx_p->drvdata->mlli_sram_addr + + (uint32_t)LLI_ENTRY_BYTE_SIZE * + req_ctx->in_nents); + HW_DESC_SET_DOUT_MLLI(&desc[*seq_size], + (ctx_p->drvdata->mlli_sram_addr + + LLI_ENTRY_BYTE_SIZE * + req_ctx->in_mlli_nents), + req_ctx->out_mlli_nents, NS_BIT,(areq == NULL)? 
0:1); + } + if (areq != NULL) { + HW_DESC_SET_QUEUE_LAST_IND(&desc[*seq_size]); + } + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode); + (*seq_size)++; + } +} + +static int ssi_blkcipher_complete(struct device *dev, + struct ssi_ablkcipher_ctx *ctx_p, + struct blkcipher_req_ctx *req_ctx, + struct scatterlist *dst, struct scatterlist *src, + void *info, /* req info */ + unsigned int ivsize, + void *areq, + void __iomem *cc_base) +{ + int completion_error = 0; + uint32_t inflight_counter; + DECL_CYCLE_COUNT_RESOURCES; + + START_CYCLE_COUNT(); + ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst); + info = req_ctx->backup_info; + END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_4); + + /* Snapshot the inflight counter value into a local variable */ + inflight_counter = ctx_p->drvdata->inflight_counter; + /* Decrease the inflight counter */ + if (ctx_p->flow_mode == BYPASS && ctx_p->drvdata->inflight_counter > 0) + ctx_p->drvdata->inflight_counter--; + + if (areq) { + ablkcipher_request_complete(areq, completion_error); + return 0; + } + return completion_error; +} + +static int ssi_blkcipher_process( + struct crypto_tfm *tfm, + struct blkcipher_req_ctx *req_ctx, + struct scatterlist *dst, struct scatterlist *src, + unsigned int nbytes, + void *info, /* req info */ + unsigned int ivsize, + void *areq, + enum drv_crypto_direction direction) +{ + struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + struct device *dev = &ctx_p->drvdata->plat_dev->dev; + HwDesc_s desc[MAX_ABLKCIPHER_SEQ_LEN]; + struct ssi_crypto_req ssi_req = {}; + int rc, seq_len = 0, cts_restore_flag = 0; + DECL_CYCLE_COUNT_RESOURCES; + + SSI_LOG_DEBUG("%s areq=%p info=%p nbytes=%d\n", + ((direction == DRV_CRYPTO_DIRECTION_ENCRYPT) ? "Encrypt" : "Decrypt"), + areq, info, nbytes); + + /* STAT_PHASE_0: Init and sanity checks */ + START_CYCLE_COUNT(); + + /* TODO: check data length according to mode */ + if (unlikely(validate_data_size(ctx_p, nbytes))) { + SSI_LOG_ERR("Unsupported data size %d.\n", nbytes); + crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_BLOCK_LEN); + return -EINVAL; + } + if (nbytes == 0) { + /* No data to process is valid */ + return 0; + } + /* For CTS, when the data size is aligned to the AES block size, use plain CBC mode */ + if (((nbytes % AES_BLOCK_SIZE) == 0) && (ctx_p->cipher_mode == DRV_CIPHER_CBC_CTS)) { + ctx_p->cipher_mode = DRV_CIPHER_CBC; + cts_restore_flag = 1; + } + + /* Setup DX request structure */ + ssi_req.user_cb = (void *)ssi_ablkcipher_complete; + ssi_req.user_arg = (void *)areq; + +#ifdef ENABLE_CYCLE_COUNT + ssi_req.op_type = (direction == DRV_CRYPTO_DIRECTION_DECRYPT) ?
+ STAT_OP_TYPE_DECODE : STAT_OP_TYPE_ENCODE; + +#endif + + /* Setup request context */ + req_ctx->gen_ctx.op_type = direction; + + END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_0); + + /* STAT_PHASE_1: Map buffers */ + START_CYCLE_COUNT(); + + rc = ssi_buffer_mgr_map_blkcipher_request(ctx_p->drvdata, req_ctx, ivsize, nbytes, info, src, dst); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("map_request() failed\n"); + goto exit_process; + } + + END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_1); + + /* STAT_PHASE_2: Create sequence */ + START_CYCLE_COUNT(); + + /* Setup processing */ +#if SSI_CC_HAS_MULTI2 + if (ctx_p->flow_mode == S_DIN_to_MULTI2) { + ssi_blkcipher_create_multi2_setup_desc(tfm, + req_ctx, + ivsize, + desc, + &seq_len); + } else +#endif /*SSI_CC_HAS_MULTI2*/ + { + ssi_blkcipher_create_setup_desc(tfm, + req_ctx, + ivsize, + nbytes, + desc, + &seq_len); + } + /* Data processing */ + ssi_blkcipher_create_data_desc(tfm, + req_ctx, + dst, src, + nbytes, + areq, + desc, &seq_len); + + END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_2); + + /* STAT_PHASE_3: Lock HW and push sequence */ + START_CYCLE_COUNT(); + + rc = send_request(ctx_p->drvdata, &ssi_req, desc, seq_len, (areq == NULL)? 0:1); + if(areq != NULL) { + if (unlikely(rc != -EINPROGRESS)) { + /* Failed to send the request or request completed synchronously */ + ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst); + } + + END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3); + } else { + if (rc != 0) { + ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst); + END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3); + } else { + END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3); + rc = ssi_blkcipher_complete(dev, ctx_p, req_ctx, dst, src, info, ivsize, NULL, ctx_p->drvdata->cc_base); + } + } + +exit_process: + if (cts_restore_flag != 0) + ctx_p->cipher_mode = DRV_CIPHER_CBC_CTS; + + return rc; +} + +static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req, void __iomem *cc_base) +{ + struct ablkcipher_request *areq = (struct ablkcipher_request *)ssi_req; + struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(areq); + struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(areq); + struct ssi_ablkcipher_ctx *ctx_p = crypto_ablkcipher_ctx(tfm); + unsigned int ivsize = crypto_ablkcipher_ivsize(tfm); + + ssi_blkcipher_complete(dev, ctx_p, req_ctx, areq->dst, areq->src, areq->info, ivsize, areq, cc_base); +} + + + +static int ssi_sblkcipher_init(struct crypto_tfm *tfm) +{ + struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + + /* Allocate sync ctx buffer */ + ctx_p->sync_ctx = kmalloc(sizeof(struct blkcipher_req_ctx), GFP_KERNEL|GFP_DMA); + if (!ctx_p->sync_ctx) { + SSI_LOG_ERR("Allocating sync ctx buffer in context failed\n"); + return -ENOMEM; + } + SSI_LOG_DEBUG("Allocated sync ctx buffer in context ctx_p->sync_ctx=@%p\n", + ctx_p->sync_ctx); + + return ssi_blkcipher_init(tfm); +} + + +static void ssi_sblkcipher_exit(struct crypto_tfm *tfm) +{ + struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + + kfree(ctx_p->sync_ctx); + SSI_LOG_DEBUG("Free sync ctx buffer in context ctx_p->sync_ctx=@%p\n", ctx_p->sync_ctx); + + ssi_blkcipher_exit(tfm); +} + +#ifdef SYNC_ALGS +static int ssi_sblkcipher_encrypt(struct blkcipher_desc *desc, + struct scatterlist *dst, struct scatterlist *src, + unsigned int nbytes) +{ + struct crypto_blkcipher *blk_tfm = desc->tfm; + struct crypto_tfm *tfm = crypto_blkcipher_tfm(blk_tfm); + struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + struct blkcipher_req_ctx 
*req_ctx = ctx_p->sync_ctx; + unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm); + + req_ctx->backup_info = desc->info; + + return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_ENCRYPT); +} + +static int ssi_sblkcipher_decrypt(struct blkcipher_desc *desc, + struct scatterlist *dst, struct scatterlist *src, + unsigned int nbytes) +{ + struct crypto_blkcipher *blk_tfm = desc->tfm; + struct crypto_tfm *tfm = crypto_blkcipher_tfm(blk_tfm); + struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm); + struct blkcipher_req_ctx *req_ctx = ctx_p->sync_ctx; + unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm); + + req_ctx->backup_info = desc->info; + + return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_DECRYPT); +} +#endif + +/* Async wrap functions */ + +static int ssi_ablkcipher_init(struct crypto_tfm *tfm) +{ + struct ablkcipher_tfm *ablktfm = &tfm->crt_ablkcipher; + + ablktfm->reqsize = sizeof(struct blkcipher_req_ctx); + + return ssi_blkcipher_init(tfm); +} + + +static int ssi_ablkcipher_setkey(struct crypto_ablkcipher *tfm, + const u8 *key, + unsigned int keylen) +{ + return ssi_blkcipher_setkey(crypto_ablkcipher_tfm(tfm), key, keylen); +} + +static int ssi_ablkcipher_encrypt(struct ablkcipher_request *req) +{ + struct crypto_ablkcipher *ablk_tfm = crypto_ablkcipher_reqtfm(req); + struct crypto_tfm *tfm = crypto_ablkcipher_tfm(ablk_tfm); + struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req); + unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm); + + req_ctx->backup_info = req->info; + + return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_ENCRYPT); +} + +static int ssi_ablkcipher_decrypt(struct ablkcipher_request *req) +{ + struct crypto_ablkcipher *ablk_tfm = crypto_ablkcipher_reqtfm(req); + struct crypto_tfm *tfm = crypto_ablkcipher_tfm(ablk_tfm); + struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req); + unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm); + + req_ctx->backup_info = req->info; + return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_DECRYPT); +} + + +/* DX Block cipher alg */ +static struct ssi_alg_template blkcipher_algs[] = { +/* Async template */ +#if SSI_CC_HAS_AES_XTS + { + .name = "xts(aes)", + .driver_name = "xts-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + .max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + .geniv = "eseqiv", + }, + .cipher_mode = DRV_CIPHER_XTS, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, + { + .name = "xts(aes)", + .driver_name = "xts-aes-du512-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + .max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_XTS, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, + { + .name = "xts(aes)", + .driver_name = "xts-aes-du4096-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER | 
CRYPTO_ALG_BULK_DU_4096, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + .max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_XTS, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, +#endif /*SSI_CC_HAS_AES_XTS*/ +#if SSI_CC_HAS_AES_ESSIV + { + .name = "essiv(aes)", + .driver_name = "essiv-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + .max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_ESSIV, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, + { + .name = "essiv(aes)", + .driver_name = "essiv-aes-du512-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + .max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_ESSIV, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, + { + .name = "essiv(aes)", + .driver_name = "essiv-aes-du4096-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_4096, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + .max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_ESSIV, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, +#endif /*SSI_CC_HAS_AES_ESSIV*/ +#if SSI_CC_HAS_AES_BITLOCKER + { + .name = "bitlocker(aes)", + .driver_name = "bitlocker-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + .max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_BITLOCKER, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, + { + .name = "bitlocker(aes)", + .driver_name = "bitlocker-aes-du512-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + .max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_BITLOCKER, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, + { + .name = "bitlocker(aes)", + .driver_name = "bitlocker-aes-du4096-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_4096, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + .max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_BITLOCKER, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, +#endif /*SSI_CC_HAS_AES_BITLOCKER*/ + { + .name = "ecb(aes)", + .driver_name = 
"ecb-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = 0, + }, + .cipher_mode = DRV_CIPHER_ECB, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, + { + .name = "cbc(aes)", + .driver_name = "cbc-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_CBC, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, + { + .name = "ofb(aes)", + .driver_name = "ofb-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_OFB, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, +#if SSI_CC_HAS_AES_CTS + { + .name = "cts1(cbc(aes))", + .driver_name = "cts1-cbc-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_CBC_CTS, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, +#endif + { + .name = "ctr(aes)", + .driver_name = "ctr-aes-dx", + .blocksize = 1, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_CTR, + .flow_mode = S_DIN_to_AES, + .synchronous = false, + }, + { + .name = "cbc(des3_ede)", + .driver_name = "cbc-3des-dx", + .blocksize = DES3_EDE_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = DES3_EDE_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_CBC, + .flow_mode = S_DIN_to_DES, + .synchronous = false, + }, + { + .name = "ecb(des3_ede)", + .driver_name = "ecb-3des-dx", + .blocksize = DES3_EDE_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, + .ivsize = 0, + }, + .cipher_mode = DRV_CIPHER_ECB, + .flow_mode = S_DIN_to_DES, + .synchronous = false, + }, + { + .name = "cbc(des)", + .driver_name = "cbc-des-dx", + .blocksize = DES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = DES_BLOCK_SIZE, + }, + 
.cipher_mode = DRV_CIPHER_CBC, + .flow_mode = S_DIN_to_DES, + .synchronous = false, + }, + { + .name = "ecb(des)", + .driver_name = "ecb-des-dx", + .blocksize = DES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = DES_KEY_SIZE, + .max_keysize = DES_KEY_SIZE, + .ivsize = 0, + }, + .cipher_mode = DRV_CIPHER_ECB, + .flow_mode = S_DIN_to_DES, + .synchronous = false, + }, +#if SSI_CC_HAS_MULTI2 + { + .name = "cbc(multi2)", + .driver_name = "cbc-multi2-dx", + .blocksize = CC_MULTI2_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_decrypt, + .min_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1, + .max_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1, + .ivsize = CC_MULTI2_IV_SIZE, + }, + .cipher_mode = DRV_MULTI2_CBC, + .flow_mode = S_DIN_to_MULTI2, + .synchronous = false, + }, + { + .name = "ofb(multi2)", + .driver_name = "ofb-multi2-dx", + .blocksize = 1, + .type = CRYPTO_ALG_TYPE_ABLKCIPHER, + .template_ablkcipher = { + .setkey = ssi_ablkcipher_setkey, + .encrypt = ssi_ablkcipher_encrypt, + .decrypt = ssi_ablkcipher_encrypt, + .min_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1, + .max_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1, + .ivsize = CC_MULTI2_IV_SIZE, + }, + .cipher_mode = DRV_MULTI2_OFB, + .flow_mode = S_DIN_to_MULTI2, + .synchronous = false, + }, +#endif /*SSI_CC_HAS_MULTI2*/ +}; + +static +struct ssi_crypto_alg *ssi_ablkcipher_create_alg(struct ssi_alg_template *template) +{ + struct ssi_crypto_alg *t_alg; + struct crypto_alg *alg; + + t_alg = kzalloc(sizeof(struct ssi_crypto_alg), GFP_KERNEL); + if (!t_alg) { + SSI_LOG_ERR("failed to allocate t_alg\n"); + return ERR_PTR(-ENOMEM); + } + + alg = &t_alg->crypto_alg; + + snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", template->name); + snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", + template->driver_name); + alg->cra_module = THIS_MODULE; + alg->cra_priority = SSI_CRA_PRIO; + alg->cra_blocksize = template->blocksize; + alg->cra_alignmask = 0; + alg->cra_ctxsize = sizeof(struct ssi_ablkcipher_ctx); + + alg->cra_init = template->synchronous? ssi_sblkcipher_init:ssi_ablkcipher_init; + alg->cra_exit = template->synchronous? ssi_sblkcipher_exit:ssi_blkcipher_exit; + alg->cra_type = template->synchronous? 
&crypto_blkcipher_type:&crypto_ablkcipher_type; + if(template->synchronous) { + alg->cra_blkcipher = template->template_sblkcipher; + alg->cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | + template->type; + } else { + alg->cra_ablkcipher = template->template_ablkcipher; + alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY | + template->type; + } + + t_alg->cipher_mode = template->cipher_mode; + t_alg->flow_mode = template->flow_mode; + + return t_alg; +} + +int ssi_ablkcipher_free(struct ssi_drvdata *drvdata) +{ + struct ssi_crypto_alg *t_alg, *n; + struct ssi_blkcipher_handle *blkcipher_handle = + drvdata->blkcipher_handle; + struct device *dev; + dev = &drvdata->plat_dev->dev; + + if (blkcipher_handle != NULL) { + /* Remove registered algs */ + list_for_each_entry_safe(t_alg, n, + &blkcipher_handle->blkcipher_alg_list, + entry) { + crypto_unregister_alg(&t_alg->crypto_alg); + list_del(&t_alg->entry); + kfree(t_alg); + } + kfree(blkcipher_handle); + drvdata->blkcipher_handle = NULL; + } + return 0; +} + + + +int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata) +{ + struct ssi_blkcipher_handle *ablkcipher_handle; + struct ssi_crypto_alg *t_alg; + int rc = -ENOMEM; + int alg; + + ablkcipher_handle = kmalloc(sizeof(struct ssi_blkcipher_handle), + GFP_KERNEL); + if (ablkcipher_handle == NULL) + return -ENOMEM; + + drvdata->blkcipher_handle = ablkcipher_handle; + + INIT_LIST_HEAD(&ablkcipher_handle->blkcipher_alg_list); + + /* Linux crypto */ + SSI_LOG_DEBUG("Number of algorithms = %zu\n", ARRAY_SIZE(blkcipher_algs)); + for (alg = 0; alg < ARRAY_SIZE(blkcipher_algs); alg++) { + SSI_LOG_DEBUG("creating %s\n", blkcipher_algs[alg].driver_name); + t_alg = ssi_ablkcipher_create_alg(&blkcipher_algs[alg]); + if (IS_ERR(t_alg)) { + rc = PTR_ERR(t_alg); + SSI_LOG_ERR("%s alg allocation failed\n", + blkcipher_algs[alg].driver_name); + goto fail0; + } + t_alg->drvdata = drvdata; + + SSI_LOG_DEBUG("registering %s\n", blkcipher_algs[alg].driver_name); + rc = crypto_register_alg(&t_alg->crypto_alg); + SSI_LOG_DEBUG("%s alg registration rc = %x\n", + t_alg->crypto_alg.cra_driver_name, rc); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("%s alg registration failed\n", + t_alg->crypto_alg.cra_driver_name); + kfree(t_alg); + goto fail0; + } else { + list_add_tail(&t_alg->entry, + &ablkcipher_handle->blkcipher_alg_list); + SSI_LOG_DEBUG("Registered %s\n", + t_alg->crypto_alg.cra_driver_name); + } + } + return 0; + +fail0: + ssi_ablkcipher_free(drvdata); + return rc; +} diff --git a/drivers/staging/ccree/ssi_cipher.h b/drivers/staging/ccree/ssi_cipher.h new file mode 100644 index 0000000..9ceb0b6 --- /dev/null +++ b/drivers/staging/ccree/ssi_cipher.h @@ -0,0 +1,88 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . 
+ */ + +/* \file ssi_cipher.h + ARM CryptoCell Cipher Crypto API + */ + +#ifndef __SSI_CIPHER_H__ +#define __SSI_CIPHER_H__ + +#include +#include +#include "ssi_driver.h" +#include "ssi_buffer_mgr.h" + + +/* Crypto cipher flags */ +#define CC_CRYPTO_CIPHER_KEY_KFDE0 (1 << 0) +#define CC_CRYPTO_CIPHER_KEY_KFDE1 (1 << 1) +#define CC_CRYPTO_CIPHER_KEY_KFDE2 (1 << 2) +#define CC_CRYPTO_CIPHER_KEY_KFDE3 (1 << 3) +#define CC_CRYPTO_CIPHER_DU_SIZE_512B (1 << 4) + +#define CC_CRYPTO_CIPHER_KEY_KFDE_MASK (CC_CRYPTO_CIPHER_KEY_KFDE0 | CC_CRYPTO_CIPHER_KEY_KFDE1 | CC_CRYPTO_CIPHER_KEY_KFDE2 | CC_CRYPTO_CIPHER_KEY_KFDE3) + + +struct blkcipher_req_ctx { + struct async_gen_req_ctx gen_ctx; + enum ssi_req_dma_buf_type dma_buf_type; + uint32_t in_nents; + uint32_t in_mlli_nents; + uint32_t out_nents; + uint32_t out_mlli_nents; + uint8_t *backup_info; /*store iv for generated IV flow*/ + struct mlli_params mlli_params; +}; + + + +int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata); + +int ssi_ablkcipher_free(struct ssi_drvdata *drvdata); + +#ifndef CRYPTO_ALG_BULK_MASK + +#define CRYPTO_ALG_BULK_DU_512 0x00002000 +#define CRYPTO_ALG_BULK_DU_4096 0x00004000 +#define CRYPTO_ALG_BULK_MASK (CRYPTO_ALG_BULK_DU_512 |\ + CRYPTO_ALG_BULK_DU_4096) +#endif /* CRYPTO_ALG_BULK_MASK */ + + +#ifdef CRYPTO_TFM_REQ_HW_KEY + +static inline bool ssi_is_hw_key(struct crypto_tfm *tfm) +{ + return (crypto_tfm_get_flags(tfm) & CRYPTO_TFM_REQ_HW_KEY); +} + +#else + +struct arm_hw_key_info { + int hw_key1; + int hw_key2; +}; + +static inline bool ssi_is_hw_key(struct crypto_tfm *tfm) +{ + return 0; +} + +#endif /* CRYPTO_TFM_REQ_HW_KEY */ + + +#endif /*__SSI_CIPHER_H__*/ diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c index 8042fa2..7f7807d 100644 --- a/drivers/staging/ccree/ssi_driver.c +++ b/drivers/staging/ccree/ssi_driver.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include @@ -61,6 +62,7 @@ #include "ssi_request_mgr.h" #include "ssi_buffer_mgr.h" #include "ssi_sysfs.h" +#include "ssi_cipher.h" #include "ssi_hash.h" #include "ssi_sram_mgr.h" #include "ssi_pm.h" @@ -219,6 +221,9 @@ static int init_cc_resources(struct platform_device *plat_dev) goto init_cc_res_err; } + /*Initialize inflight counter used in dx_ablkcipher_secure_complete used for count of BYSPASS blocks operations*/ + new_drvdata->inflight_counter = 0; + dev_set_drvdata(&plat_dev->dev, new_drvdata); /* Get device resources */ /* First CC registers space */ @@ -343,6 +348,13 @@ static int init_cc_resources(struct platform_device *plat_dev) goto init_cc_res_err; } + /* Allocate crypto algs */ + rc = ssi_ablkcipher_alloc(new_drvdata); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("ssi_ablkcipher_alloc failed\n"); + goto init_cc_res_err; + } + rc = ssi_hash_alloc(new_drvdata); if (unlikely(rc != 0)) { SSI_LOG_ERR("ssi_hash_alloc failed\n"); @@ -356,6 +368,7 @@ static int init_cc_resources(struct platform_device *plat_dev) if (new_drvdata != NULL) { ssi_hash_free(new_drvdata); + ssi_ablkcipher_free(new_drvdata); ssi_power_mgr_fini(new_drvdata); ssi_buffer_mgr_fini(new_drvdata); request_mgr_fini(new_drvdata); @@ -396,6 +409,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev) (struct ssi_drvdata *)dev_get_drvdata(&plat_dev->dev); ssi_hash_free(drvdata); + ssi_ablkcipher_free(drvdata); ssi_power_mgr_fini(drvdata); ssi_buffer_mgr_fini(drvdata); request_mgr_fini(drvdata); diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h index e080088..49931be 100644 --- 
a/drivers/staging/ccree/ssi_driver.h +++ b/drivers/staging/ccree/ssi_driver.h @@ -29,6 +29,7 @@ #endif #include #include +#include #include #include #include @@ -141,15 +142,44 @@ struct ssi_drvdata { struct completion icache_setup_completion; void *buff_mgr_handle; void *hash_handle; + void *blkcipher_handle; void *request_mgr_handle; void *sram_mgr_handle; #ifdef ENABLE_CYCLE_COUNT cycles_t isr_exit_cycles; /* Save for isr-to-tasklet latency */ #endif + uint32_t inflight_counter; }; +struct ssi_crypto_alg { + struct list_head entry; + int cipher_mode; + int flow_mode; /* Note: currently, refers to the cipher mode only. */ + int auth_mode; + struct ssi_drvdata *drvdata; + struct crypto_alg crypto_alg; +}; + +struct ssi_alg_template { + char name[CRYPTO_MAX_ALG_NAME]; + char driver_name[CRYPTO_MAX_ALG_NAME]; + unsigned int blocksize; + u32 type; + union { + struct ablkcipher_alg ablkcipher; + struct blkcipher_alg blkcipher; + struct cipher_alg cipher; + struct compress_alg compress; + } template_u; + int cipher_mode; + int flow_mode; /* Note: currently, refers to the cipher mode only. */ + int auth_mode; + bool synchronous; + struct ssi_drvdata *drvdata; +}; + struct async_gen_req_ctx { dma_addr_t iv_dma_addr; enum drv_crypto_direction op_type; From patchwork Sun Apr 23 09:26:13 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 97962 Delivered-To: patch@linaro.org Received: by 10.140.109.52 with SMTP id k49csp1005140qgf; Sun, 23 Apr 2017 02:28:35 -0700 (PDT) X-Received: by 10.98.223.5 with SMTP id u5mr19341454pfg.147.1492939714981; Sun, 23 Apr 2017 02:28:34 -0700 (PDT) Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id n63si5069616pga.62.2017.04.23.02.28.34; Sun, 23 Apr 2017 02:28:34 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1044478AbdDWJ2Y (ORCPT + 1 other); Sun, 23 Apr 2017 05:28:24 -0400 Received: from foss.arm.com ([217.140.101.70]:46928 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1044389AbdDWJ1A (ORCPT ); Sun, 23 Apr 2017 05:27:00 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 56B1F80D; Sun, 23 Apr 2017 02:26:59 -0700 (PDT) Received: from gby.kfn.arm.com (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 487C93F220; Sun, 23 Apr 2017 02:26:55 -0700 (PDT) From: Gilad Ben-Yossef To: Herbert Xu , "David S. 
Miller" , Rob Herring , Mark Rutland , Greg Kroah-Hartman , devel@driverdev.osuosl.org Cc: linux-crypto@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, gilad.benyossef@arm.com, Binoy Jayan , Ofir Drang , Stuart Yoder , Stephan Muller Subject: [PATCH v3 05/15] staging: ccree: add AEAD support Date: Sun, 23 Apr 2017 12:26:13 +0300 Message-Id: <1492939583-25688-6-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1492939583-25688-1-git-send-email-gilad@benyossef.com> References: <1492939583-25688-1-git-send-email-gilad@benyossef.com> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Add CryptoCell AEAD support Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/Kconfig | 1 + drivers/staging/ccree/Makefile | 2 +- drivers/staging/ccree/cc_crypto_ctx.h | 21 + drivers/staging/ccree/ssi_aead.c | 2826 ++++++++++++++++++++++++++++++++ drivers/staging/ccree/ssi_aead.h | 120 ++ drivers/staging/ccree/ssi_buffer_mgr.c | 899 ++++++++++ drivers/staging/ccree/ssi_buffer_mgr.h | 4 + drivers/staging/ccree/ssi_driver.c | 11 + drivers/staging/ccree/ssi_driver.h | 4 + 9 files changed, 3887 insertions(+), 1 deletion(-) create mode 100644 drivers/staging/ccree/ssi_aead.c create mode 100644 drivers/staging/ccree/ssi_aead.h -- 2.1.4 diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig index 3fff040..2d11223 100644 --- a/drivers/staging/ccree/Kconfig +++ b/drivers/staging/ccree/Kconfig @@ -5,6 +5,7 @@ config CRYPTO_DEV_CCREE select CRYPTO_HASH select CRYPTO_BLKCIPHER select CRYPTO_DES + select CRYPTO_AEAD select CRYPTO_AUTHENC select CRYPTO_SHA1 select CRYPTO_MD5 diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile index 89afe9a..b9285c0 100644 --- a/drivers/staging/ccree/Makefile +++ b/drivers/staging/ccree/Makefile @@ -1,2 +1,2 @@ obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o -ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o +ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_aead.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h index a7f7d95..9e10b26 100644 --- a/drivers/staging/ccree/cc_crypto_ctx.h +++ b/drivers/staging/ccree/cc_crypto_ctx.h @@ -263,6 +263,27 @@ struct drv_ctx_cipher { (CC_AES_KEY_SIZE_MAX/sizeof(uint32_t))]; }; +/* authentication and encryption with associated data class */ +struct drv_ctx_aead { + enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_AES */ + enum drv_cipher_mode mode; + enum drv_crypto_direction direction; + uint32_t key_size; /* numeric value in bytes */ + uint32_t nonce_size; /* nonce size (octets) */ + uint32_t header_size; /* finit additional data size (octets) */ + uint32_t text_size; /* finit text data size (octets) */ + uint32_t tag_size; /* mac size, element of {4, 6, 8, 10, 12, 14, 16} */ + /* block_state1/2 is the AES engine block state */ + uint8_t block_state[CC_AES_BLOCK_SIZE]; + uint8_t mac_state[CC_AES_BLOCK_SIZE]; /* MAC result */ + uint8_t nonce[CC_AES_BLOCK_SIZE]; /* nonce buffer */ + uint8_t key[CC_AES_KEY_SIZE_MAX]; + /* reserve to end of allocated context size */ + uint32_t reserved[CC_DRV_CTX_SIZE_WORDS - 8 - + 3 * (CC_AES_BLOCK_SIZE/sizeof(uint32_t)) - + CC_AES_KEY_SIZE_MAX/sizeof(uint32_t)]; +}; + 
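The reserved[] expression at the end of struct drv_ctx_aead above is easy to misread: it pads the context out to the fixed CC_DRV_CTX_SIZE_WORDS size after accounting for the eight leading word-sized fields (alg through tag_size), the three AES-block-sized state buffers (block_state, mac_state, nonce) and the key buffer. A minimal compile-time sanity check for that arithmetic could look like the sketch below; it is an illustrative sketch only, not part of the submitted patch, and it assumes the drv_* enums are represented as 32-bit values, as the other context structs in cc_crypto_ctx.h already rely on.

#include <linux/bug.h>	/* BUILD_BUG_ON() */

static inline void drv_ctx_aead_layout_check(void)
{
	/*
	 * Sketch only: the eight word-sized scalar fields, the three
	 * AES-block state buffers, the key and reserved[] must exactly
	 * fill the fixed-size context; if this fires, the reserved[]
	 * expression and the fields it mirrors have drifted apart.
	 */
	BUILD_BUG_ON(sizeof(struct drv_ctx_aead) !=
		     CC_DRV_CTX_SIZE_WORDS * sizeof(uint32_t));
}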
/*******************************************************************/ /***************** MESSAGE BASED CONTEXTS **************************/ /*******************************************************************/ diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c new file mode 100644 index 0000000..33d72d2 --- /dev/null +++ b/drivers/staging/ccree/ssi_aead.c @@ -0,0 +1,2826 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "ssi_config.h" +#include "ssi_driver.h" +#include "ssi_buffer_mgr.h" +#include "ssi_aead.h" +#include "ssi_request_mgr.h" +#include "ssi_hash.h" +#include "ssi_sysfs.h" +#include "ssi_sram_mgr.h" + +#define template_aead template_u.aead + +#define MAX_AEAD_SETKEY_SEQ 12 +#define MAX_AEAD_PROCESS_SEQ 23 + +#define MAX_HMAC_DIGEST_SIZE (SHA256_DIGEST_SIZE) +#define MAX_HMAC_BLOCK_SIZE (SHA256_BLOCK_SIZE) + +#define AES_CCM_RFC4309_NONCE_SIZE 3 +#define MAX_NONCE_SIZE CTR_RFC3686_NONCE_SIZE + + +/* Value of each ICV_CMP byte (of 8) in case of success */ +#define ICV_VERIF_OK 0x01 + +struct ssi_aead_handle { + ssi_sram_addr_t sram_workspace_addr; + struct list_head aead_list; +}; + +struct ssi_aead_ctx { + struct ssi_drvdata *drvdata; + uint8_t ctr_nonce[MAX_NONCE_SIZE]; /* used for ctr3686 iv and aes ccm */ + uint8_t *enckey; + dma_addr_t enckey_dma_addr; + union { + struct { + uint8_t *padded_authkey; + uint8_t *ipad_opad; /* IPAD, OPAD*/ + dma_addr_t padded_authkey_dma_addr; + dma_addr_t ipad_opad_dma_addr; + } hmac; + struct { + uint8_t *xcbc_keys; /* K1,K2,K3 */ + dma_addr_t xcbc_keys_dma_addr; + } xcbc; + } auth_state; + unsigned int enc_keylen; + unsigned int auth_keylen; + unsigned int authsize; /* Actual (reduced?) 
size of the MAC/ICV */ + enum drv_cipher_mode cipher_mode; + enum FlowMode flow_mode; + enum drv_hash_mode auth_mode; +}; + +static inline bool valid_assoclen(struct aead_request *req) +{ + return ((req->assoclen == 16) || (req->assoclen == 20)); +} + +static void ssi_aead_exit(struct crypto_aead *tfm) +{ + struct device *dev = NULL; + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + + SSI_LOG_DEBUG("Clearing context @%p for %s\n", + crypto_aead_ctx(tfm), crypto_tfm_alg_name(&(tfm->base))); + + dev = &ctx->drvdata->plat_dev->dev; + /* Unmap enckey buffer */ + if (ctx->enckey != NULL) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->enckey_dma_addr); + dma_free_coherent(dev, AES_MAX_KEY_SIZE, ctx->enckey, ctx->enckey_dma_addr); + SSI_LOG_DEBUG("Freed enckey DMA buffer enckey_dma_addr=0x%llX\n", + (unsigned long long)ctx->enckey_dma_addr); + ctx->enckey_dma_addr = 0; + ctx->enckey = NULL; + } + + if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { /* XCBC authentication */ + if (ctx->auth_state.xcbc.xcbc_keys != NULL) { + SSI_RESTORE_DMA_ADDR_TO_48BIT( + ctx->auth_state.xcbc.xcbc_keys_dma_addr); + dma_free_coherent(dev, CC_AES_128_BIT_KEY_SIZE * 3, + ctx->auth_state.xcbc.xcbc_keys, + ctx->auth_state.xcbc.xcbc_keys_dma_addr); + } + SSI_LOG_DEBUG("Freed xcbc_keys DMA buffer xcbc_keys_dma_addr=0x%llX\n", + (unsigned long long)ctx->auth_state.xcbc.xcbc_keys_dma_addr); + ctx->auth_state.xcbc.xcbc_keys_dma_addr = 0; + ctx->auth_state.xcbc.xcbc_keys = NULL; + } else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC auth. */ + if (ctx->auth_state.hmac.ipad_opad != NULL) { + SSI_RESTORE_DMA_ADDR_TO_48BIT( + ctx->auth_state.hmac.ipad_opad_dma_addr); + dma_free_coherent(dev, 2 * MAX_HMAC_DIGEST_SIZE, + ctx->auth_state.hmac.ipad_opad, + ctx->auth_state.hmac.ipad_opad_dma_addr); + SSI_LOG_DEBUG("Freed ipad_opad DMA buffer ipad_opad_dma_addr=0x%llX\n", + (unsigned long long)ctx->auth_state.hmac.ipad_opad_dma_addr); + ctx->auth_state.hmac.ipad_opad_dma_addr = 0; + ctx->auth_state.hmac.ipad_opad = NULL; + } + if (ctx->auth_state.hmac.padded_authkey != NULL) { + SSI_RESTORE_DMA_ADDR_TO_48BIT( + ctx->auth_state.hmac.padded_authkey_dma_addr); + dma_free_coherent(dev, MAX_HMAC_BLOCK_SIZE, + ctx->auth_state.hmac.padded_authkey, + ctx->auth_state.hmac.padded_authkey_dma_addr); + SSI_LOG_DEBUG("Freed padded_authkey DMA buffer padded_authkey_dma_addr=0x%llX\n", + (unsigned long long)ctx->auth_state.hmac.padded_authkey_dma_addr); + ctx->auth_state.hmac.padded_authkey_dma_addr = 0; + ctx->auth_state.hmac.padded_authkey = NULL; + } + } +} + +static int ssi_aead_init(struct crypto_aead *tfm) +{ + struct device *dev; + struct aead_alg *alg = crypto_aead_alg(tfm); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct ssi_crypto_alg *ssi_alg = + container_of(alg, struct ssi_crypto_alg, aead_alg); + SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx, crypto_tfm_alg_name(&(tfm->base))); + + /* Initialize modes in instance */ + ctx->cipher_mode = ssi_alg->cipher_mode; + ctx->flow_mode = ssi_alg->flow_mode; + ctx->auth_mode = ssi_alg->auth_mode; + ctx->drvdata = ssi_alg->drvdata; + dev = &ctx->drvdata->plat_dev->dev; + crypto_aead_set_reqsize(tfm, sizeof(struct aead_req_ctx)); + + /* Allocate key buffer, cache line aligned */ + ctx->enckey = dma_alloc_coherent(dev, AES_MAX_KEY_SIZE, + &ctx->enckey_dma_addr, GFP_KERNEL); + if (ctx->enckey == NULL) { + SSI_LOG_ERR("Failed allocating key buffer\n"); + goto init_failed; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->enckey_dma_addr, AES_MAX_KEY_SIZE); + SSI_LOG_DEBUG("Allocated enckey buffer in context ctx->enckey=@%p\n", ctx->enckey); + + /* Set default authlen value */ + + if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { /* XCBC authentication */ + /* Allocate dma-coherent buffer for XCBC's K1+K2+K3 */ + /* (and temporary for user key - up to 256b) */ + ctx->auth_state.xcbc.xcbc_keys = dma_alloc_coherent(dev, + CC_AES_128_BIT_KEY_SIZE * 3, + &ctx->auth_state.xcbc.xcbc_keys_dma_addr, GFP_KERNEL); + if (ctx->auth_state.xcbc.xcbc_keys == NULL) { + SSI_LOG_ERR("Failed allocating buffer for XCBC keys\n"); + goto init_failed; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT( + ctx->auth_state.xcbc.xcbc_keys_dma_addr, + CC_AES_128_BIT_KEY_SIZE * 3); + } else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC authentication */ + /* Allocate dma-coherent buffer for IPAD + OPAD */ + ctx->auth_state.hmac.ipad_opad = dma_alloc_coherent(dev, + 2 * MAX_HMAC_DIGEST_SIZE, + &ctx->auth_state.hmac.ipad_opad_dma_addr, GFP_KERNEL); + if (ctx->auth_state.hmac.ipad_opad == NULL) { + SSI_LOG_ERR("Failed allocating IPAD/OPAD buffer\n"); + goto init_failed; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT( + ctx->auth_state.hmac.ipad_opad_dma_addr, + 2 * MAX_HMAC_DIGEST_SIZE); + SSI_LOG_DEBUG("Allocated authkey buffer in context ctx->authkey=@%p\n", + ctx->auth_state.hmac.ipad_opad); + + ctx->auth_state.hmac.padded_authkey = dma_alloc_coherent(dev, + MAX_HMAC_BLOCK_SIZE, + &ctx->auth_state.hmac.padded_authkey_dma_addr, GFP_KERNEL); + if (ctx->auth_state.hmac.padded_authkey == NULL) { + SSI_LOG_ERR("Failed to allocate padded_authkey\n"); + goto init_failed; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT( + ctx->auth_state.hmac.padded_authkey_dma_addr, + MAX_HMAC_BLOCK_SIZE); + } else { + ctx->auth_state.hmac.ipad_opad = NULL; + ctx->auth_state.hmac.padded_authkey = NULL; + } + + return 0; + +init_failed: + ssi_aead_exit(tfm); + return -ENOMEM; +} + + +static void ssi_aead_complete(struct device *dev, void *ssi_req, void __iomem *cc_base) +{ + struct aead_request *areq = (struct aead_request *)ssi_req; + struct aead_req_ctx *areq_ctx = aead_request_ctx(areq); + struct crypto_aead *tfm = crypto_aead_reqtfm(areq); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + int err = 0; + DECL_CYCLE_COUNT_RESOURCES; + + START_CYCLE_COUNT(); + + ssi_buffer_mgr_unmap_aead_request(dev, areq); + + /* Restore ordinary iv pointer */ + areq->iv = areq_ctx->backup_iv; + + if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) { + if (memcmp(areq_ctx->mac_buf, areq_ctx->icv_virt_addr, + ctx->authsize) != 0) { + SSI_LOG_DEBUG("Payload authentication failure, " + "(auth-size=%d, cipher=%d).\n", + ctx->authsize, ctx->cipher_mode); + /* In case of payload authentication failure, we must not + reveal the decrypted message --> zero its memory. */ + ssi_buffer_mgr_zero_sgl(areq->dst, areq_ctx->cryptlen); + err = -EBADMSG; + } + } else { /*ENCRYPT*/ + if (unlikely(areq_ctx->is_icv_fragmented == true)) + ssi_buffer_mgr_copy_scatterlist_portion( + areq_ctx->mac_buf, areq_ctx->dstSgl, areq->cryptlen+areq_ctx->dstOffset, + areq->cryptlen+areq_ctx->dstOffset + ctx->authsize, SSI_SG_FROM_BUF); + + /* If an IV was generated, copy it back to the user provided buffer.
*/ + if (areq_ctx->backup_giv != NULL) { + if (ctx->cipher_mode == DRV_CIPHER_CTR) { + memcpy(areq_ctx->backup_giv, areq_ctx->ctr_iv + CTR_RFC3686_NONCE_SIZE, CTR_RFC3686_IV_SIZE); + } else if (ctx->cipher_mode == DRV_CIPHER_CCM) { + memcpy(areq_ctx->backup_giv, areq_ctx->ctr_iv + CCM_BLOCK_IV_OFFSET, CCM_BLOCK_IV_SIZE); + } + } + } + + END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_4); + aead_request_complete(areq, err); +} + +static int xcbc_setkey(HwDesc_s *desc, struct ssi_aead_ctx *ctx) +{ + /* Load the AES key */ + HW_DESC_INIT(&desc[0]); + /* We are using for the source/user key the same buffer as for the output keys, + because after this key loading it is not needed anymore */ + HW_DESC_SET_DIN_TYPE(&desc[0], DMA_DLLI, ctx->auth_state.xcbc.xcbc_keys_dma_addr, ctx->auth_keylen, NS_BIT); + HW_DESC_SET_CIPHER_MODE(&desc[0], DRV_CIPHER_ECB); + HW_DESC_SET_CIPHER_CONFIG0(&desc[0], DRV_CRYPTO_DIRECTION_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[0], ctx->auth_keylen); + HW_DESC_SET_FLOW_MODE(&desc[0], S_DIN_to_AES); + HW_DESC_SET_SETUP_MODE(&desc[0], SETUP_LOAD_KEY0); + + HW_DESC_INIT(&desc[1]); + HW_DESC_SET_DIN_CONST(&desc[1], 0x01010101, CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[1], DIN_AES_DOUT); + HW_DESC_SET_DOUT_DLLI(&desc[1], ctx->auth_state.xcbc.xcbc_keys_dma_addr, AES_KEYSIZE_128, NS_BIT, 0); + + HW_DESC_INIT(&desc[2]); + HW_DESC_SET_DIN_CONST(&desc[2], 0x02020202, CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[2], DIN_AES_DOUT); + HW_DESC_SET_DOUT_DLLI(&desc[2], (ctx->auth_state.xcbc.xcbc_keys_dma_addr + + AES_KEYSIZE_128), + AES_KEYSIZE_128, NS_BIT, 0); + + HW_DESC_INIT(&desc[3]); + HW_DESC_SET_DIN_CONST(&desc[3], 0x03030303, CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[3], DIN_AES_DOUT); + HW_DESC_SET_DOUT_DLLI(&desc[3], (ctx->auth_state.xcbc.xcbc_keys_dma_addr + + 2 * AES_KEYSIZE_128), + AES_KEYSIZE_128, NS_BIT, 0); + + return 4; +} + +static int hmac_setkey(HwDesc_s *desc, struct ssi_aead_ctx *ctx) +{ + unsigned int hmacPadConst[2] = { HMAC_IPAD_CONST, HMAC_OPAD_CONST }; + unsigned int digest_ofs = 0; + unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ? + DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256; + unsigned int digest_size = (ctx->auth_mode == DRV_HASH_SHA1) ? 
+ CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE; + + int idx = 0; + int i; + + /* calc derived HMAC key */ + for (i = 0; i < 2; i++) { + /* Load hash initial state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + HW_DESC_SET_DIN_SRAM(&desc[idx], + ssi_ahash_get_larval_digest_sram_addr( + ctx->drvdata, ctx->auth_mode), + digest_size); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Load the hash current length */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + /* Prepare ipad key */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_XOR_VAL(&desc[idx], hmacPadConst[i]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + idx++; + + /* Perform HASH update */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + ctx->auth_state.hmac.padded_authkey_dma_addr, + SHA256_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + HW_DESC_SET_XOR_ACTIVE(&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + idx++; + + /* Get the digest */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + (ctx->auth_state.hmac.ipad_opad_dma_addr + + digest_ofs), + digest_size, NS_BIT, 0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + idx++; + + digest_ofs += digest_size; + } + + return idx; +} + +static int validate_keys_sizes(struct ssi_aead_ctx *ctx) +{ + SSI_LOG_DEBUG("enc_keylen=%u authkeylen=%u\n", + ctx->enc_keylen, ctx->auth_keylen); + + switch (ctx->auth_mode) { + case DRV_HASH_SHA1: + case DRV_HASH_SHA256: + break; + case DRV_HASH_XCBC_MAC: + if ((ctx->auth_keylen != AES_KEYSIZE_128) && + (ctx->auth_keylen != AES_KEYSIZE_192) && + (ctx->auth_keylen != AES_KEYSIZE_256)) + return -ENOTSUPP; + break; + case DRV_HASH_NULL: /* Not authenc (e.g., CCM) - no auth_key */ + if (ctx->auth_keylen > 0) + return -EINVAL; + break; + default: + SSI_LOG_ERR("Invalid auth_mode=%d\n", ctx->auth_mode); + return -EINVAL; + } + /* Check cipher key size */ + if (unlikely(ctx->flow_mode == S_DIN_to_DES)) { + if (ctx->enc_keylen != DES3_EDE_KEY_SIZE) { + SSI_LOG_ERR("Invalid cipher(3DES) key size: %u\n", + ctx->enc_keylen); + return -EINVAL; + } + } else { /* Default assumed to be AES ciphers */ + if ((ctx->enc_keylen != AES_KEYSIZE_128) && + (ctx->enc_keylen != AES_KEYSIZE_192) && + (ctx->enc_keylen != AES_KEYSIZE_256)) { + SSI_LOG_ERR("Invalid cipher(AES) key size: %u\n", + ctx->enc_keylen); + return -EINVAL; + } + } + + return 0; /* All key size checks passed */ +} +/* This function prepares the user key so it can be passed to the hmac + processing (copied to an internal buffer, or hashed in case the key is + longer than the block size) */ +static int +ssi_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key, unsigned int keylen) +{ + dma_addr_t key_dma_addr = 0; + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct device *dev = &ctx->drvdata->plat_dev->dev; + uint32_t larval_addr = ssi_ahash_get_larval_digest_sram_addr( + ctx->drvdata, ctx->auth_mode); + struct ssi_crypto_req ssi_req = {}; + unsigned
int blocksize; + unsigned int digestsize; + unsigned int hashmode; + unsigned int idx = 0; + int rc = 0; + HwDesc_s desc[MAX_AEAD_SETKEY_SEQ]; + dma_addr_t padded_authkey_dma_addr = + ctx->auth_state.hmac.padded_authkey_dma_addr; + + switch (ctx->auth_mode) { /* auth_key required and >0 */ + case DRV_HASH_SHA1: + blocksize = SHA1_BLOCK_SIZE; + digestsize = SHA1_DIGEST_SIZE; + hashmode = DRV_HASH_HW_SHA1; + break; + case DRV_HASH_SHA256: + default: + blocksize = SHA256_BLOCK_SIZE; + digestsize = SHA256_DIGEST_SIZE; + hashmode = DRV_HASH_HW_SHA256; + } + + if (likely(keylen != 0)) { + key_dma_addr = dma_map_single(dev, (void *)key, keylen, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(dev, key_dma_addr))) { + SSI_LOG_ERR("Mapping key va=0x%p len=%u for" + " DMA failed\n", key, keylen); + return -ENOMEM; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(key_dma_addr, keylen); + if (keylen > blocksize) { + /* Load hash initial state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode); + HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr, digestsize); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Load the hash current length*/ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode); + HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + key_dma_addr, + keylen, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + idx++; + + /* Get hashed key */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + padded_authkey_dma_addr, + digestsize, + NS_BIT, 0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], + HASH_PADDING_DISABLED); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], + HASH_DIGEST_RESULT_LITTLE_ENDIAN); + idx++; + + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - digestsize)); + HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + (padded_authkey_dma_addr + digestsize), + (blocksize - digestsize), + NS_BIT, 0); + idx++; + } else { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + key_dma_addr, + keylen, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + (padded_authkey_dma_addr), + keylen, NS_BIT, 0); + idx++; + + if ((blocksize - keylen) != 0) { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0, + (blocksize - keylen)); + HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + (padded_authkey_dma_addr + keylen), + (blocksize - keylen), + NS_BIT, 0); + idx++; + } + } + } else { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0, + (blocksize - keylen)); + HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + padded_authkey_dma_addr, + blocksize, + NS_BIT, 0); + idx++; + } + +#ifdef ENABLE_CYCLE_COUNT + ssi_req.op_type = STAT_OP_TYPE_SETKEY; +#endif + + rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0); + if (unlikely(rc != 0)) + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + + if (likely(key_dma_addr != 0)) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(key_dma_addr); + 
dma_unmap_single(dev, key_dma_addr, keylen, DMA_TO_DEVICE); + } + + return rc; +} + + +static int +ssi_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen) +{ + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct rtattr *rta = (struct rtattr *)key; + struct ssi_crypto_req ssi_req = {}; + struct crypto_authenc_key_param *param; + HwDesc_s desc[MAX_AEAD_SETKEY_SEQ]; + int seq_len = 0, rc = -EINVAL; + DECL_CYCLE_COUNT_RESOURCES; + + SSI_LOG_DEBUG("Setting key in context @%p for %s. key=%p keylen=%u\n", + ctx, crypto_tfm_alg_name(crypto_aead_tfm(tfm)), key, keylen); + + /* STAT_PHASE_0: Init and sanity checks */ + START_CYCLE_COUNT(); + + if (ctx->auth_mode != DRV_HASH_NULL) { /* authenc() alg. */ + if (!RTA_OK(rta, keylen)) + goto badkey; + if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM) + goto badkey; + if (RTA_PAYLOAD(rta) < sizeof(*param)) + goto badkey; + param = RTA_DATA(rta); + ctx->enc_keylen = be32_to_cpu(param->enckeylen); + key += RTA_ALIGN(rta->rta_len); + keylen -= RTA_ALIGN(rta->rta_len); + if (keylen < ctx->enc_keylen) + goto badkey; + ctx->auth_keylen = keylen - ctx->enc_keylen; + + if (ctx->cipher_mode == DRV_CIPHER_CTR) { + /* the nonce is stored in bytes at end of key */ + if (ctx->enc_keylen < + (AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE)) + goto badkey; + /* Copy nonce from last 4 bytes in CTR key to + * first 4 bytes in CTR IV */ + memcpy(ctx->ctr_nonce, key + ctx->auth_keylen + ctx->enc_keylen - + CTR_RFC3686_NONCE_SIZE, CTR_RFC3686_NONCE_SIZE); + /* Set CTR key size */ + ctx->enc_keylen -= CTR_RFC3686_NONCE_SIZE; + } + } else { /* non-authenc - has just one key */ + ctx->enc_keylen = keylen; + ctx->auth_keylen = 0; + } + + rc = validate_keys_sizes(ctx); + if (unlikely(rc != 0)) + goto badkey; + + END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0); + /* STAT_PHASE_1: Copy key to ctx */ + START_CYCLE_COUNT(); + + /* Get key material */ + memcpy(ctx->enckey, key + ctx->auth_keylen, ctx->enc_keylen); + if (ctx->enc_keylen == 24) + memset(ctx->enckey + 24, 0, CC_AES_KEY_SIZE_MAX - 24); + if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { + memcpy(ctx->auth_state.xcbc.xcbc_keys, key, ctx->auth_keylen); + } else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC */ + rc = ssi_get_plain_hmac_key(tfm, key, ctx->auth_keylen); + if (rc != 0) + goto badkey; + } + + END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1); + + /* STAT_PHASE_2: Create sequence */ + START_CYCLE_COUNT(); + + switch (ctx->auth_mode) { + case DRV_HASH_SHA1: + case DRV_HASH_SHA256: + seq_len = hmac_setkey(desc, ctx); + break; + case DRV_HASH_XCBC_MAC: + seq_len = xcbc_setkey(desc, ctx); + break; + case DRV_HASH_NULL: /* non-authenc modes, e.g., CCM */ + break; /* No auth. 
key setup */ + default: + SSI_LOG_ERR("Unsupported authenc (%d)\n", ctx->auth_mode); + rc = -ENOTSUPP; + goto badkey; + } + + END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_2); + + /* STAT_PHASE_3: Submit sequence to HW */ + START_CYCLE_COUNT(); + + if (seq_len > 0) { /* For CCM there is no sequence to setup the key */ +#ifdef ENABLE_CYCLE_COUNT + ssi_req.op_type = STAT_OP_TYPE_SETKEY; +#endif + rc = send_request(ctx->drvdata, &ssi_req, desc, seq_len, 0); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + goto setkey_error; + } + } + + /* Update STAT_PHASE_3 */ + END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_3); + return rc; + +badkey: + crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); + +setkey_error: + return rc; +} + +#if SSI_CC_HAS_AES_CCM +static int ssi_rfc4309_ccm_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen) +{ + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + int rc = 0; + + if (keylen < 3) + return -EINVAL; + + keylen -= 3; + memcpy(ctx->ctr_nonce, key + keylen, 3); + + rc = ssi_aead_setkey(tfm, key, keylen); + + return rc; +} +#endif /*SSI_CC_HAS_AES_CCM*/ + +static int ssi_aead_setauthsize( + struct crypto_aead *authenc, + unsigned int authsize) +{ + struct ssi_aead_ctx *ctx = crypto_aead_ctx(authenc); + + /* Unsupported auth. sizes */ + if ((authsize == 0) || + (authsize >crypto_aead_maxauthsize(authenc))) { + return -ENOTSUPP; + } + + ctx->authsize = authsize; + SSI_LOG_DEBUG("authlen=%d\n", ctx->authsize); + + return 0; +} + +#if SSI_CC_HAS_AES_CCM +static int ssi_rfc4309_ccm_setauthsize(struct crypto_aead *authenc, + unsigned int authsize) +{ + switch (authsize) { + case 8: + case 12: + case 16: + break; + default: + return -EINVAL; + } + + return ssi_aead_setauthsize(authenc, authsize); +} + +static int ssi_ccm_setauthsize(struct crypto_aead *authenc, + unsigned int authsize) +{ + switch (authsize) { + case 4: + case 6: + case 8: + case 10: + case 12: + case 14: + case 16: + break; + default: + return -EINVAL; + } + + return ssi_aead_setauthsize(authenc, authsize); +} +#endif /*SSI_CC_HAS_AES_CCM*/ + +static inline void +ssi_aead_create_assoc_desc( + struct aead_request *areq, + unsigned int flow_mode, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(areq); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct aead_req_ctx *areq_ctx = aead_request_ctx(areq); + enum ssi_req_dma_buf_type assoc_dma_type = areq_ctx->assoc_buff_type; + unsigned int idx = *seq_size; + + switch (assoc_dma_type) { + case SSI_DMA_BUF_DLLI: + SSI_LOG_DEBUG("ASSOC buffer type DLLI\n"); + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + sg_dma_address(areq->src), + areq->assoclen, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + if (ctx->auth_mode == DRV_HASH_XCBC_MAC && (areq_ctx->cryptlen > 0) ) + HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx]); + break; + case SSI_DMA_BUF_MLLI: + SSI_LOG_DEBUG("ASSOC buffer type MLLI\n"); + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI, + areq_ctx->assoc.sram_addr, + areq_ctx->assoc.mlli_nents, + NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + if (ctx->auth_mode == DRV_HASH_XCBC_MAC && (areq_ctx->cryptlen > 0) ) + HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx]); + break; + case SSI_DMA_BUF_NULL: + default: + SSI_LOG_ERR("Invalid ASSOC buffer type\n"); + } + + *seq_size = (++idx); +} + +static inline void +ssi_aead_process_authenc_data_desc( + struct aead_request *areq, + unsigned 
int flow_mode, + HwDesc_s desc[], + unsigned int *seq_size, + int direct) +{ + struct aead_req_ctx *areq_ctx = aead_request_ctx(areq); + enum ssi_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type; + unsigned int idx = *seq_size; + + switch (data_dma_type) { + case SSI_DMA_BUF_DLLI: + { + struct scatterlist *cipher = + (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? + areq_ctx->dstSgl : areq_ctx->srcSgl; + + unsigned int offset = + (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? + areq_ctx->dstOffset : areq_ctx->srcOffset; + SSI_LOG_DEBUG("AUTHENC: SRC/DST buffer type DLLI\n"); + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + (sg_dma_address(cipher)+ offset), areq_ctx->cryptlen, + NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + break; + } + case SSI_DMA_BUF_MLLI: + { + /* DOUBLE-PASS flow (as default) + * assoc. + iv + data -compact in one table + * if assoclen is ZERO only IV perform */ + ssi_sram_addr_t mlli_addr = areq_ctx->assoc.sram_addr; + uint32_t mlli_nents = areq_ctx->assoc.mlli_nents; + + if (likely(areq_ctx->is_single_pass == true)) { + if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT){ + mlli_addr = areq_ctx->dst.sram_addr; + mlli_nents = areq_ctx->dst.mlli_nents; + } else { + mlli_addr = areq_ctx->src.sram_addr; + mlli_nents = areq_ctx->src.mlli_nents; + } + } + + SSI_LOG_DEBUG("AUTHENC: SRC/DST buffer type MLLI\n"); + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI, + mlli_addr, mlli_nents, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + break; + } + case SSI_DMA_BUF_NULL: + default: + SSI_LOG_ERR("AUTHENC: Invalid SRC/DST buffer type\n"); + } + + *seq_size = (++idx); +} + +static inline void +ssi_aead_process_cipher_data_desc( + struct aead_request *areq, + unsigned int flow_mode, + HwDesc_s desc[], + unsigned int *seq_size) +{ + unsigned int idx = *seq_size; + struct aead_req_ctx *areq_ctx = aead_request_ctx(areq); + enum ssi_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type; + + if (areq_ctx->cryptlen == 0) + return; /*null processing*/ + + switch (data_dma_type) { + case SSI_DMA_BUF_DLLI: + SSI_LOG_DEBUG("CIPHER: SRC/DST buffer type DLLI\n"); + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + (sg_dma_address(areq_ctx->srcSgl)+areq_ctx->srcOffset), + areq_ctx->cryptlen, NS_BIT); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + (sg_dma_address(areq_ctx->dstSgl)+areq_ctx->dstOffset), + areq_ctx->cryptlen, NS_BIT, 0); + HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + break; + case SSI_DMA_BUF_MLLI: + SSI_LOG_DEBUG("CIPHER: SRC/DST buffer type MLLI\n"); + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI, + areq_ctx->src.sram_addr, + areq_ctx->src.mlli_nents, NS_BIT); + HW_DESC_SET_DOUT_MLLI(&desc[idx], + areq_ctx->dst.sram_addr, + areq_ctx->dst.mlli_nents, NS_BIT, 0); + HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode); + break; + case SSI_DMA_BUF_NULL: + default: + SSI_LOG_ERR("CIPHER: Invalid SRC/DST buffer type\n"); + } + + *seq_size = (++idx); +} + +static inline void ssi_aead_process_digest_result_desc( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + unsigned int idx = *seq_size; + unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ? 
+ DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256; + int direct = req_ctx->gen_ctx.op_type; + + /* Get final ICV result */ + if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->icv_dma_addr, + ctx->authsize, NS_BIT, 1); + HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); + } else { + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], + HASH_DIGEST_RESULT_LITTLE_ENDIAN); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + } + } else { /*Decrypt*/ + /* Get ICV out from hardware */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr, + ctx->authsize, NS_BIT, 1); + HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED); + if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + } else { + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + } + } + + *seq_size = (++idx); +} + +static inline void ssi_aead_setup_cipher_desc( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + unsigned int hw_iv_size = req_ctx->hw_iv_size; + unsigned int idx = *seq_size; + int direct = req_ctx->gen_ctx.op_type; + + /* Setup cipher state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direct); + HW_DESC_SET_FLOW_MODE(&desc[idx], ctx->flow_mode); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + req_ctx->gen_ctx.iv_dma_addr, hw_iv_size, NS_BIT); + if (ctx->cipher_mode == DRV_CIPHER_CTR) { + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + } else { + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + } + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->cipher_mode); + idx++; + + /* Setup enc. key */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direct); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + HW_DESC_SET_FLOW_MODE(&desc[idx], ctx->flow_mode); + if (ctx->flow_mode == S_DIN_to_AES) { + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + ((ctx->enc_keylen == 24) ? 
+ CC_AES_KEY_SIZE_MAX : ctx->enc_keylen), NS_BIT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); + } else { + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + ctx->enc_keylen, NS_BIT); + HW_DESC_SET_KEY_SIZE_DES(&desc[idx], ctx->enc_keylen); + } + HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->cipher_mode); + idx++; + + *seq_size = idx; +} + +static inline void ssi_aead_process_cipher( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size, + unsigned int data_flow_mode) +{ + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + int direct = req_ctx->gen_ctx.op_type; + unsigned int idx = *seq_size; + + if (req_ctx->cryptlen == 0) + return; /*null processing*/ + + ssi_aead_setup_cipher_desc(req, desc, &idx); + ssi_aead_process_cipher_data_desc(req, data_flow_mode, desc, &idx); + if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) { + /* We must wait for DMA to write all cipher */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); + HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + idx++; + } + + *seq_size = idx; +} + +static inline void ssi_aead_hmac_setup_digest_desc( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ? + DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256; + unsigned int digest_size = (ctx->auth_mode == DRV_HASH_SHA1) ? + CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE; + unsigned int idx = *seq_size; + + /* Loading hash ipad xor key state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + ctx->auth_state.hmac.ipad_opad_dma_addr, + digest_size, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Load init. 
digest len (64 bytes) */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + HW_DESC_SET_DIN_SRAM(&desc[idx], + ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, hash_mode), + HASH_LEN_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + *seq_size = idx; +} + +static inline void ssi_aead_xcbc_setup_digest_desc( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + unsigned int idx = *seq_size; + + /* Loading MAC state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0, CC_AES_BLOCK_SIZE); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + idx++; + + /* Setup XCBC MAC K1 */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + ctx->auth_state.xcbc.xcbc_keys_dma_addr, + AES_KEYSIZE_128, NS_BIT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + idx++; + + /* Setup XCBC MAC K2 */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + (ctx->auth_state.xcbc.xcbc_keys_dma_addr + + AES_KEYSIZE_128), + AES_KEYSIZE_128, NS_BIT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + idx++; + + /* Setup XCBC MAC K3 */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + (ctx->auth_state.xcbc.xcbc_keys_dma_addr + + 2 * AES_KEYSIZE_128), + AES_KEYSIZE_128, NS_BIT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE2); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + idx++; + + *seq_size = idx; +} + +static inline void ssi_aead_process_digest_header_desc( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + unsigned int idx = *seq_size; + /* Hash associated data */ + if (req->assoclen > 0) + ssi_aead_create_assoc_desc(req, DIN_HASH, desc, &idx); + + /* Hash IV */ + *seq_size = idx; +} + +static inline void ssi_aead_process_digest_scheme_desc( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct ssi_aead_handle *aead_handle = ctx->drvdata->aead_handle; + unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ? 
+ DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256; + unsigned int digest_size = (ctx->auth_mode == DRV_HASH_SHA1) ? + CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE; + unsigned int idx = *seq_size; + + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + HW_DESC_SET_DOUT_SRAM(&desc[idx], aead_handle->sram_workspace_addr, + HASH_LEN_SIZE); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1); + HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD); + idx++; + + /* Get final ICV result */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DOUT_SRAM(&desc[idx], aead_handle->sram_workspace_addr, + digest_size); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + idx++; + + /* Loading hash opad xor key state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + (ctx->auth_state.hmac.ipad_opad_dma_addr + digest_size), + digest_size, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + /* Load init. digest len (64 bytes) */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode); + HW_DESC_SET_DIN_SRAM(&desc[idx], + ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, hash_mode), + HASH_LEN_SIZE); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + /* Perform HASH update */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_SRAM(&desc[idx], aead_handle->sram_workspace_addr, + digest_size); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + idx++; + + *seq_size = idx; +} + +static inline void ssi_aead_load_mlli_to_sram( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + + if (unlikely( + (req_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI) || + (req_ctx->data_buff_type == SSI_DMA_BUF_MLLI) || + (req_ctx->is_single_pass == false))) { + SSI_LOG_DEBUG("Copy-to-sram: mlli_dma=%08x, mlli_size=%u\n", + (unsigned int)ctx->drvdata->mlli_sram_addr, + req_ctx->mlli_params.mlli_len); + /* Copy MLLI table host-to-sram */ + HW_DESC_INIT(&desc[*seq_size]); + HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, + req_ctx->mlli_params.mlli_dma_addr, + req_ctx->mlli_params.mlli_len, NS_BIT); + HW_DESC_SET_DOUT_SRAM(&desc[*seq_size], + ctx->drvdata->mlli_sram_addr, + req_ctx->mlli_params.mlli_len); + HW_DESC_SET_FLOW_MODE(&desc[*seq_size], BYPASS); + (*seq_size)++; + } +} + +static inline enum FlowMode ssi_aead_get_data_flow_mode( + enum drv_crypto_direction direct, + enum FlowMode setup_flow_mode, + bool is_single_pass) +{ + enum FlowMode data_flow_mode; + + if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) { + if (setup_flow_mode == S_DIN_to_AES) + data_flow_mode = likely(is_single_pass) ? + AES_to_HASH_and_DOUT : DIN_AES_DOUT; + else + data_flow_mode = likely(is_single_pass) ? + DES_to_HASH_and_DOUT : DIN_DES_DOUT; + } else { /* Decrypt */ + if (setup_flow_mode == S_DIN_to_AES) + data_flow_mode = likely(is_single_pass) ? + AES_and_HASH : DIN_AES_DOUT; + else + data_flow_mode = likely(is_single_pass) ? 
+ DES_and_HASH : DIN_DES_DOUT; + } + + return data_flow_mode; +} + +static inline void ssi_aead_hmac_authenc( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + int direct = req_ctx->gen_ctx.op_type; + unsigned int data_flow_mode = ssi_aead_get_data_flow_mode( + direct, ctx->flow_mode, req_ctx->is_single_pass); + + if (req_ctx->is_single_pass == true) { + /** + * Single-pass flow + */ + ssi_aead_hmac_setup_digest_desc(req, desc, seq_size); + ssi_aead_setup_cipher_desc(req, desc, seq_size); + ssi_aead_process_digest_header_desc(req, desc, seq_size); + ssi_aead_process_cipher_data_desc(req, data_flow_mode, desc, seq_size); + ssi_aead_process_digest_scheme_desc(req, desc, seq_size); + ssi_aead_process_digest_result_desc(req, desc, seq_size); + return; + } + + /** + * Double-pass flow + * Fallback for unsupported single-pass modes, + * i.e. using assoc. data of non-word-multiple */ + if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) { + /* encrypt first.. */ + ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode); + /* authenc after..*/ + ssi_aead_hmac_setup_digest_desc(req, desc, seq_size); + ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct); + ssi_aead_process_digest_scheme_desc(req, desc, seq_size); + ssi_aead_process_digest_result_desc(req, desc, seq_size); + + } else { /*DECRYPT*/ + /* authenc first..*/ + ssi_aead_hmac_setup_digest_desc(req, desc, seq_size); + ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct); + ssi_aead_process_digest_scheme_desc(req, desc, seq_size); + /* decrypt after.. */ + ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode); + /* read the digest result with setting the completion bit + must be after the cipher operation */ + ssi_aead_process_digest_result_desc(req, desc, seq_size); + } +} + +static inline void +ssi_aead_xcbc_authenc( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + int direct = req_ctx->gen_ctx.op_type; + unsigned int data_flow_mode = ssi_aead_get_data_flow_mode( + direct, ctx->flow_mode, req_ctx->is_single_pass); + + if (req_ctx->is_single_pass == true) { + /** + * Single-pass flow + */ + ssi_aead_xcbc_setup_digest_desc(req, desc, seq_size); + ssi_aead_setup_cipher_desc(req, desc, seq_size); + ssi_aead_process_digest_header_desc(req, desc, seq_size); + ssi_aead_process_cipher_data_desc(req, data_flow_mode, desc, seq_size); + ssi_aead_process_digest_result_desc(req, desc, seq_size); + return; + } + + /** + * Double-pass flow + * Fallback for unsupported single-pass modes, + * i.e. using assoc. data of non-word-multiple */ + if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) { + /* encrypt first.. */ + ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode); + /* authenc after.. */ + ssi_aead_xcbc_setup_digest_desc(req, desc, seq_size); + ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct); + ssi_aead_process_digest_result_desc(req, desc, seq_size); + } else { /*DECRYPT*/ + /* authenc first.. 
*/ + ssi_aead_xcbc_setup_digest_desc(req, desc, seq_size); + ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct); + /* decrypt after..*/ + ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode); + /* read the digest result with setting the completion bit + must be after the cipher operation */ + ssi_aead_process_digest_result_desc(req, desc, seq_size); + } +} + +static int validate_data_size(struct ssi_aead_ctx *ctx, + enum drv_crypto_direction direct, struct aead_request *req) +{ + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + unsigned int assoclen = req->assoclen; + unsigned int cipherlen = (direct == DRV_CRYPTO_DIRECTION_DECRYPT) ? + (req->cryptlen - ctx->authsize) : req->cryptlen; + + if (unlikely((direct == DRV_CRYPTO_DIRECTION_DECRYPT) && + (req->cryptlen < ctx->authsize))) + goto data_size_err; + + areq_ctx->is_single_pass = true; /*defaulted to fast flow*/ + + switch (ctx->flow_mode) { + case S_DIN_to_AES: + if (unlikely((ctx->cipher_mode == DRV_CIPHER_CBC) && + !IS_ALIGNED(cipherlen, AES_BLOCK_SIZE))) + goto data_size_err; + if (ctx->cipher_mode == DRV_CIPHER_CCM) + break; + if (ctx->cipher_mode == DRV_CIPHER_GCTR) + { + if (areq_ctx->plaintext_authenticate_only == true) + areq_ctx->is_single_pass = false; + break; + } + + if (!IS_ALIGNED(assoclen, sizeof(uint32_t))) + areq_ctx->is_single_pass = false; + + if ((ctx->cipher_mode == DRV_CIPHER_CTR) && + !IS_ALIGNED(cipherlen, sizeof(uint32_t))) + areq_ctx->is_single_pass = false; + + break; + case S_DIN_to_DES: + if (unlikely(!IS_ALIGNED(cipherlen, DES_BLOCK_SIZE))) + goto data_size_err; + if (unlikely(!IS_ALIGNED(assoclen, DES_BLOCK_SIZE))) + areq_ctx->is_single_pass = false; + break; + default: + SSI_LOG_ERR("Unexpected flow mode (%d)\n", ctx->flow_mode); + goto data_size_err; + } + + return 0; + +data_size_err: + return -EINVAL; +} + +#if SSI_CC_HAS_AES_CCM +static unsigned int format_ccm_a0(uint8_t *pA0Buff, uint32_t headerSize) +{ + unsigned int len = 0; + if ( headerSize == 0 ) { + return 0; + } + if ( headerSize < ((1UL << 16) - (1UL << 8) )) { + len = 2; + + pA0Buff[0] = (headerSize >> 8) & 0xFF; + pA0Buff[1] = headerSize & 0xFF; + } else { + len = 6; + + pA0Buff[0] = 0xFF; + pA0Buff[1] = 0xFE; + pA0Buff[2] = (headerSize >> 24) & 0xFF; + pA0Buff[3] = (headerSize >> 16) & 0xFF; + pA0Buff[4] = (headerSize >> 8) & 0xFF; + pA0Buff[5] = headerSize & 0xFF; + } + + return len; +} + +static int set_msg_len(u8 *block, unsigned int msglen, unsigned int csize) +{ + __be32 data; + + memset(block, 0, csize); + block += csize; + + if (csize >= 4) + csize = 4; + else if (msglen > (1 << (8 * csize))) + return -EOVERFLOW; + + data = cpu_to_be32(msglen); + memcpy(block - csize, (u8 *)&data + 4 - csize, csize); + + return 0; +} + +static inline int ssi_aead_ccm( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + unsigned int idx = *seq_size; + unsigned int cipher_flow_mode; + dma_addr_t mac_result; + + + if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) { + cipher_flow_mode = AES_to_HASH_and_DOUT; + mac_result = req_ctx->mac_buf_dma_addr; + } else { /* Encrypt */ + cipher_flow_mode = AES_and_HASH; + mac_result = req_ctx->icv_dma_addr; + } + + /* load key */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + 
((ctx->enc_keylen == 24) ? + CC_AES_KEY_SIZE_MAX : ctx->enc_keylen), + NS_BIT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + + /* load ctr state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + req_ctx->gen_ctx.iv_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + + /* load MAC key */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + ((ctx->enc_keylen == 24) ? + CC_AES_KEY_SIZE_MAX : ctx->enc_keylen), + NS_BIT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + idx++; + + /* load MAC state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + req_ctx->mac_buf_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + idx++; + + + /* process assoc data */ + if (req->assoclen > 0) { + ssi_aead_create_assoc_desc(req, DIN_HASH, desc, &idx); + } else { + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + sg_dma_address(&req_ctx->ccm_adata_sg), + AES_BLOCK_SIZE + req_ctx->ccm_hdr_size, + NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + idx++; + } + + /* process the cipher */ + if (req_ctx->cryptlen != 0) { + ssi_aead_process_cipher_data_desc(req, cipher_flow_mode, desc, &idx); + } + + /* Read temporal MAC */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC); + HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr, + ctx->authsize, NS_BIT, 0); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + idx++; + + /* load AES-CTR state (for last MAC calculation)*/ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + req_ctx->ccm_iv0_dma_addr , + AES_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); + HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + idx++; + + /* encrypt the "T" value and store MAC in mac_state */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + 
req_ctx->mac_buf_dma_addr, ctx->authsize, NS_BIT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_result, ctx->authsize, NS_BIT, 1);
+	HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+	HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+	idx++;
+
+	*seq_size = idx;
+	return 0;
+}
+
+static int config_ccm_adata(struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	/* Note: the code assumes that req->iv[0] already contains the value
+	 * of L' of RFC 3610 */
+	unsigned int lp = req->iv[0];
+	unsigned int l = lp + 1;  /* This is L of RFC 3610. */
+	unsigned int m = ctx->authsize;  /* This is M of RFC 3610. */
+	uint8_t *b0 = req_ctx->ccm_config + CCM_B0_OFFSET;
+	uint8_t *a0 = req_ctx->ccm_config + CCM_A0_OFFSET;
+	uint8_t *ctr_count_0 = req_ctx->ccm_config + CCM_CTR_COUNT_0_OFFSET;
+	unsigned int cryptlen = (req_ctx->gen_ctx.op_type ==
+				 DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+				req->cryptlen :
+				(req->cryptlen - ctx->authsize);
+	int rc;
+
+	memset(req_ctx->mac_buf, 0, AES_BLOCK_SIZE);
+	memset(req_ctx->ccm_config, 0, AES_BLOCK_SIZE * 3);
+
+	/* taken from crypto/ccm.c */
+	/* 2 <= L <= 8, so 1 <= L' <= 7. */
+	if (2 > l || l > 8) {
+		SSI_LOG_ERR("illegal iv value %X\n", req->iv[0]);
+		return -EINVAL;
+	}
+	memcpy(b0, req->iv, AES_BLOCK_SIZE);
+
+	/* format control info per RFC 3610 and
+	 * NIST Special Publication 800-38C
+	 */
+	*b0 |= (8 * ((m - 2) / 2));
+	if (req->assoclen > 0)
+		*b0 |= 64;  /* Enable bit 6 if Adata exists. */
+
+	rc = set_msg_len(b0 + 16 - l, cryptlen, l);  /* Write l(m). */
+	if (rc != 0)
+		return rc;
+	/* END of "taken from crypto/ccm.c" */
+
+	/* l(a) - size of associated data. */
+	req_ctx->ccm_hdr_size = format_ccm_a0(a0, req->assoclen);
+
+	memset(req->iv + 15 - req->iv[0], 0, req->iv[0] + 1);
+	req->iv[15] = 1;
+
+	memcpy(ctr_count_0, req->iv, AES_BLOCK_SIZE);
+	ctr_count_0[15] = 0;
+
+	return 0;
+}
+
+static void ssi_rfc4309_ccm_process(struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+
+	/* L' */
+	memset(areq_ctx->ctr_iv, 0, AES_BLOCK_SIZE);
+	areq_ctx->ctr_iv[0] = 3;  /* For RFC 4309, always use 4 bytes for the
+				   * message length (at most 2^32 - 1 bytes). */
+
+	/* In RFC 4309 there is an 11-byte nonce+IV part that we build here.
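For concreteness, a worked example of the B0 flags byte built in config_ccm_adata() above (a standalone illustration, not driver code): with Adata present, a 4-byte length field (L = 4, so L' = 3 carried in iv[0]) and an 8-byte tag (M = 8), the flags byte comes out as 0x40 | 0x18 | 0x03 = 0x5b.

#include <stdio.h>

int main(void)
{
	unsigned int m = 8;		/* tag length (authsize), i.e. M */
	unsigned int l_prime = 3;	/* L - 1, as carried in iv[0] */
	unsigned int flags = l_prime;

	flags |= 8 * ((m - 2) / 2);	/* encode M' = (M - 2) / 2 in bits 3..5 */
	flags |= 64;			/* bit 6: Adata present */
	printf("B0 flags = 0x%02x\n", flags);	/* prints 0x5b */
	return 0;
}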
*/ + memcpy(areq_ctx->ctr_iv + CCM_BLOCK_NONCE_OFFSET, ctx->ctr_nonce, CCM_BLOCK_NONCE_SIZE); + memcpy(areq_ctx->ctr_iv + CCM_BLOCK_IV_OFFSET, req->iv, CCM_BLOCK_IV_SIZE); + req->iv = areq_ctx->ctr_iv; + req->assoclen -= CCM_BLOCK_IV_SIZE; +} +#endif /*SSI_CC_HAS_AES_CCM*/ + +#if SSI_CC_HAS_AES_GCM + +static inline void ssi_aead_gcm_setup_ghash_desc( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + unsigned int idx = *seq_size; + + /* load key to AES*/ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + ctx->enc_keylen, NS_BIT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + + /* process one zero block to generate hkey */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE); + HW_DESC_SET_DOUT_DLLI(&desc[idx], + req_ctx->hkey_dma_addr, + AES_BLOCK_SIZE, + NS_BIT, 0); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + idx++; + + /* Memory Barrier */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); + HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + idx++; + + /* Load GHASH subkey */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + req_ctx->hkey_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + /* Configure Hash Engine to work with GHASH. + Since it was not possible to extend HASH submodes to add GHASH, + The following command is necessary in order to select GHASH (according to HW designers)*/ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); + HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); + HW_DESC_SET_CIPHER_DO(&desc[idx], 1); //1=AES_SK RKEK + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + idx++; + + /* Load GHASH initial STATE (which is 0). 
(for any hash there is an initial state) */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE); + HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); + HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0); + idx++; + + *seq_size = idx; +} + +static inline void ssi_aead_gcm_setup_gctr_desc( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + unsigned int idx = *seq_size; + + /* load key to AES*/ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr, + ctx->enc_keylen, NS_BIT); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + + if ((req_ctx->cryptlen != 0) && (req_ctx->plaintext_authenticate_only==false)){ + /* load AES/CTR initial CTR value inc by 2*/ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + req_ctx->gcm_iv_inc2_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + idx++; + } + + *seq_size = idx; +} + +static inline void ssi_aead_process_gcm_result_desc( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + dma_addr_t mac_result; + unsigned int idx = *seq_size; + + if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) { + mac_result = req_ctx->mac_buf_dma_addr; + } else { /* Encrypt */ + mac_result = req_ctx->icv_dma_addr; + } + + /* process(ghash) gcm_block_len */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + req_ctx->gcm_block_len_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH); + idx++; + + /* Store GHASH state after GHASH(Associated Data + Cipher +LenBlock) */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH); + HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); + HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr, + AES_BLOCK_SIZE, NS_BIT, 0); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT); + HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]); + + idx++; + + /* load AES/CTR initial CTR value inc by 1*/ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); + HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + req_ctx->gcm_iv_inc1_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT); + HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1); + HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES); + 
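	/* Why "inc by 2": per NIST SP 800-38D, with a 96-bit IV the pre-counter
	 * block is J0 = IV || 0^31 || 1. gcm_iv_inc1 (counter value 1) is J0
	 * itself and is loaded later to encrypt the GHASH output into the tag;
	 * gcm_iv_inc2 (counter value 2) is inc32(J0), the first counter block
	 * applied to the plaintext/ciphertext, which is why the bulk GCTR
	 * state loaded here starts at 2. */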
idx++; + + /* Memory Barrier */ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0); + HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1); + idx++; + + /* process GCTR on stored GHASH and store MAC in mac_state*/ + HW_DESC_INIT(&desc[idx]); + HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR); + HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, + req_ctx->mac_buf_dma_addr, + AES_BLOCK_SIZE, NS_BIT); + HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_result, ctx->authsize, NS_BIT, 1); + HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]); + HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT); + idx++; + + *seq_size = idx; +} + +static inline int ssi_aead_gcm( + struct aead_request *req, + HwDesc_s desc[], + unsigned int *seq_size) +{ + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + unsigned int idx = *seq_size; + unsigned int cipher_flow_mode; + + if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) { + cipher_flow_mode = AES_and_HASH; + } else { /* Encrypt */ + cipher_flow_mode = AES_to_HASH_and_DOUT; + } + + + //in RFC4543 no data to encrypt. just copy data from src to dest. + if (req_ctx->plaintext_authenticate_only==true){ + ssi_aead_process_cipher_data_desc(req, BYPASS, desc, seq_size); + ssi_aead_gcm_setup_ghash_desc(req, desc, seq_size); + /* process(ghash) assoc data */ + ssi_aead_create_assoc_desc(req, DIN_HASH, desc, seq_size); + ssi_aead_gcm_setup_gctr_desc(req, desc, seq_size); + ssi_aead_process_gcm_result_desc(req, desc, seq_size); + idx = *seq_size; + return 0; + } + + // for gcm and rfc4106. + ssi_aead_gcm_setup_ghash_desc(req, desc, seq_size); + /* process(ghash) assoc data */ + if (req->assoclen > 0) + ssi_aead_create_assoc_desc(req, DIN_HASH, desc, seq_size); + ssi_aead_gcm_setup_gctr_desc(req, desc, seq_size); + /* process(gctr+ghash) */ + if (req_ctx->cryptlen != 0) + ssi_aead_process_cipher_data_desc(req, cipher_flow_mode, desc, seq_size); + ssi_aead_process_gcm_result_desc(req, desc, seq_size); + + idx = *seq_size; + return 0; +} + +#ifdef CC_DEBUG +static inline void ssi_aead_dump_gcm( + const char* title, + struct aead_request *req) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct aead_req_ctx *req_ctx = aead_request_ctx(req); + + if (ctx->cipher_mode != DRV_CIPHER_GCTR) + return; + + if (title != NULL) { + SSI_LOG_DEBUG("----------------------------------------------------------------------------------"); + SSI_LOG_DEBUG("%s\n", title); + } + + SSI_LOG_DEBUG("cipher_mode %d, authsize %d, enc_keylen %d, assoclen %d, cryptlen %d \n", \ + ctx->cipher_mode, ctx->authsize, ctx->enc_keylen, req->assoclen, req_ctx->cryptlen ); + + if ( ctx->enckey != NULL ) { + dump_byte_array("mac key",ctx->enckey, 16); + } + + dump_byte_array("req->iv",req->iv, AES_BLOCK_SIZE); + + dump_byte_array("gcm_iv_inc1",req_ctx->gcm_iv_inc1, AES_BLOCK_SIZE); + + dump_byte_array("gcm_iv_inc2",req_ctx->gcm_iv_inc2, AES_BLOCK_SIZE); + + dump_byte_array("hkey",req_ctx->hkey, AES_BLOCK_SIZE); + + dump_byte_array("mac_buf",req_ctx->mac_buf, AES_BLOCK_SIZE); + + dump_byte_array("gcm_len_block",req_ctx->gcm_len_block.lenA, AES_BLOCK_SIZE); + + if (req->src!=NULL && req->cryptlen) { + dump_byte_array("req->src",sg_virt(req->src), req->cryptlen+req->assoclen); + } + + if (req->dst!=NULL) { + dump_byte_array("req->dst",sg_virt(req->dst), req->cryptlen+ctx->authsize+req->assoclen); + } +} +#endif + +static int config_gcm_context(struct aead_request *req) { + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_aead_ctx *ctx = 
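config_gcm_context() below packs the bit lengths of the AAD and of the ciphertext into the 16-byte big-endian length block that is fed to GHASH last. A self-contained sketch of the same packing (illustration only; the helper name is hypothetical):

#include <stdio.h>
#include <stdint.h>

/* Build the final GHASH block: 64-bit big-endian bit lengths len(A) || len(C) */
static void gcm_len_block(uint8_t out[16], uint64_t assoclen, uint64_t cryptlen)
{
	uint64_t bits;
	int i;

	bits = assoclen * 8;
	for (i = 0; i < 8; i++)
		out[i] = bits >> (56 - 8 * i);
	bits = cryptlen * 8;
	for (i = 0; i < 8; i++)
		out[8 + i] = bits >> (56 - 8 * i);
}

int main(void)
{
	uint8_t blk[16];
	int i;

	gcm_len_block(blk, 20, 40);	/* 20 B AAD, 40 B data -> 160 and 320 bits */
	for (i = 0; i < 16; i++)
		printf("%02x", blk[i]);
	printf("\n");	/* 00000000000000a0 0000000000000140 */
	return 0;
}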
crypto_aead_ctx(tfm);
+	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+
+	unsigned int cryptlen = (req_ctx->gen_ctx.op_type ==
+				 DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+				req->cryptlen :
+				(req->cryptlen - ctx->authsize);
+	__be32 counter = cpu_to_be32(2);
+
+	SSI_LOG_DEBUG("config_gcm_context() cryptlen = %d, req->assoclen = %d ctx->authsize = %d\n",
+		      cryptlen, req->assoclen, ctx->authsize);
+
+	memset(req_ctx->hkey, 0, AES_BLOCK_SIZE);
+	memset(req_ctx->mac_buf, 0, AES_BLOCK_SIZE);
+
+	memcpy(req->iv + 12, &counter, 4);
+	memcpy(req_ctx->gcm_iv_inc2, req->iv, 16);
+
+	counter = cpu_to_be32(1);
+	memcpy(req->iv + 12, &counter, 4);
+	memcpy(req_ctx->gcm_iv_inc1, req->iv, 16);
+
+	if (!req_ctx->plaintext_authenticate_only) {
+		__be64 temp64;
+
+		temp64 = cpu_to_be64(req->assoclen * 8);
+		memcpy(&req_ctx->gcm_len_block.lenA, &temp64, sizeof(temp64));
+		temp64 = cpu_to_be64(cryptlen * 8);
+		memcpy(&req_ctx->gcm_len_block.lenC, &temp64, 8);
+	} else {
+		/* rfc4543: all data (AAD, IV, plaintext) is treated as
+		 * additional authenticated data, i.e. nothing is encrypted. */
+		__be64 temp64;
+
+		temp64 = cpu_to_be64((req->assoclen + GCM_BLOCK_RFC4_IV_SIZE +
+				      cryptlen) * 8);
+		memcpy(&req_ctx->gcm_len_block.lenA, &temp64, sizeof(temp64));
+		temp64 = 0;
+		memcpy(&req_ctx->gcm_len_block.lenC, &temp64, 8);
+	}
+
+	return 0;
+}
+
+static void ssi_rfc4_gcm_process(struct aead_request *req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+
+	memcpy(areq_ctx->ctr_iv + GCM_BLOCK_RFC4_NONCE_OFFSET,
+	       ctx->ctr_nonce, GCM_BLOCK_RFC4_NONCE_SIZE);
+	memcpy(areq_ctx->ctr_iv + GCM_BLOCK_RFC4_IV_OFFSET,
+	       req->iv, GCM_BLOCK_RFC4_IV_SIZE);
+	req->iv = areq_ctx->ctr_iv;
+	req->assoclen -= GCM_BLOCK_RFC4_IV_SIZE;
+}
+
+#endif /*SSI_CC_HAS_AES_GCM*/
+
+static int ssi_aead_process(struct aead_request *req,
+			    enum drv_crypto_direction direct)
+{
+	int rc = 0;
+	int seq_len = 0;
+	HwDesc_s desc[MAX_AEAD_PROCESS_SEQ];
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct device *dev = &ctx->drvdata->plat_dev->dev;
+	struct ssi_crypto_req ssi_req = {};
+
+	DECL_CYCLE_COUNT_RESOURCES;
+
+	SSI_LOG_DEBUG("%s context=%p req=%p iv=%p src=%p src_ofs=%d dst=%p dst_ofs=%d cryptlen=%d\n",
+		      ((direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+		       "Encrypt" : "Decrypt"),
+		      ctx, req, req->iv, sg_virt(req->src), req->src->offset,
+		      sg_virt(req->dst), req->dst->offset, req->cryptlen);
+
+	/* STAT_PHASE_0: Init and sanity checks */
+	START_CYCLE_COUNT();
+
+	/* Check data length according to mode */
+	if (unlikely(validate_data_size(ctx, direct, req) != 0)) {
+		SSI_LOG_ERR("Unsupported crypt/assoc len %d/%d.\n",
+			    req->cryptlen, req->assoclen);
+		crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_BLOCK_LEN);
+		return -EINVAL;
+	}
+
+	/* Setup DX request structure */
+	ssi_req.user_cb = (void *)ssi_aead_complete;
+	ssi_req.user_arg = (void *)req;
+
+#ifdef ENABLE_CYCLE_COUNT
+	ssi_req.op_type = (direct == DRV_CRYPTO_DIRECTION_DECRYPT) ?
+ STAT_OP_TYPE_DECODE : STAT_OP_TYPE_ENCODE; +#endif + /* Setup request context */ + areq_ctx->gen_ctx.op_type = direct; + areq_ctx->req_authsize = ctx->authsize; + areq_ctx->cipher_mode = ctx->cipher_mode; + + END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_0); + + /* STAT_PHASE_1: Map buffers */ + START_CYCLE_COUNT(); + + if (ctx->cipher_mode == DRV_CIPHER_CTR) { + /* Build CTR IV - Copy nonce from last 4 bytes in + * CTR key to first 4 bytes in CTR IV */ + memcpy(areq_ctx->ctr_iv, ctx->ctr_nonce, CTR_RFC3686_NONCE_SIZE); + if (areq_ctx->backup_giv == NULL) /*User none-generated IV*/ + memcpy(areq_ctx->ctr_iv + CTR_RFC3686_NONCE_SIZE, + req->iv, CTR_RFC3686_IV_SIZE); + /* Initialize counter portion of counter block */ + *(__be32 *)(areq_ctx->ctr_iv + CTR_RFC3686_NONCE_SIZE + + CTR_RFC3686_IV_SIZE) = cpu_to_be32(1); + + /* Replace with counter iv */ + req->iv = areq_ctx->ctr_iv; + areq_ctx->hw_iv_size = CTR_RFC3686_BLOCK_SIZE; + } else if ((ctx->cipher_mode == DRV_CIPHER_CCM) || + (ctx->cipher_mode == DRV_CIPHER_GCTR) ) { + areq_ctx->hw_iv_size = AES_BLOCK_SIZE; + if (areq_ctx->ctr_iv != req->iv) { + memcpy(areq_ctx->ctr_iv, req->iv, crypto_aead_ivsize(tfm)); + req->iv = areq_ctx->ctr_iv; + } + } else { + areq_ctx->hw_iv_size = crypto_aead_ivsize(tfm); + } + +#if SSI_CC_HAS_AES_CCM + if (ctx->cipher_mode == DRV_CIPHER_CCM) { + rc = config_ccm_adata(req); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("config_ccm_adata() returned with a failure %d!", rc); + goto exit; + } + } else { + areq_ctx->ccm_hdr_size = ccm_header_size_null; + } +#else + areq_ctx->ccm_hdr_size = ccm_header_size_null; +#endif /*SSI_CC_HAS_AES_CCM*/ + +#if SSI_CC_HAS_AES_GCM + if (ctx->cipher_mode == DRV_CIPHER_GCTR) { + rc = config_gcm_context(req); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("config_gcm_context() returned with a failure %d!", rc); + goto exit; + } + } +#endif /*SSI_CC_HAS_AES_GCM*/ + + rc = ssi_buffer_mgr_map_aead_request(ctx->drvdata, req); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("map_request() failed\n"); + goto exit; + } + + /* do we need to generate IV? */ + if (areq_ctx->backup_giv != NULL) { + + /* set the DMA mapped IV address*/ + if (ctx->cipher_mode == DRV_CIPHER_CTR) { + ssi_req.ivgen_dma_addr[0] = areq_ctx->gen_ctx.iv_dma_addr + CTR_RFC3686_NONCE_SIZE; + ssi_req.ivgen_dma_addr_len = 1; + } else if (ctx->cipher_mode == DRV_CIPHER_CCM) { + /* In ccm, the IV needs to exist both inside B0 and inside the counter. + It is also copied to iv_dma_addr for other reasons (like returning + it to the user). + So, using 3 (identical) IV outputs. 
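The CTR branch above splices the RFC 3686 nonce (the last 4 key bytes), the 8-byte per-request IV and an initial big-endian block counter of 1 into one 16-byte counter block. A self-contained illustration of that layout (sample nonce/IV values are made up):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	const uint8_t nonce[4] = { 0xde, 0xad, 0xbe, 0xef };	/* last 4 key bytes */
	const uint8_t iv[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };	/* per-request IV */
	uint8_t ctr_blk[16];
	int i;

	memcpy(ctr_blk, nonce, 4);	/* bytes 0..3: nonce */
	memcpy(ctr_blk + 4, iv, 8);	/* bytes 4..11: IV */
	ctr_blk[12] = 0;		/* bytes 12..15: big-endian counter = 1 */
	ctr_blk[13] = 0;
	ctr_blk[14] = 0;
	ctr_blk[15] = 1;

	for (i = 0; i < 16; i++)
		printf("%02x", ctr_blk[i]);
	printf("\n");
	return 0;
}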
*/ + ssi_req.ivgen_dma_addr[0] = areq_ctx->gen_ctx.iv_dma_addr + CCM_BLOCK_IV_OFFSET; + ssi_req.ivgen_dma_addr[1] = sg_dma_address(&areq_ctx->ccm_adata_sg) + CCM_B0_OFFSET + CCM_BLOCK_IV_OFFSET; + ssi_req.ivgen_dma_addr[2] = sg_dma_address(&areq_ctx->ccm_adata_sg) + CCM_CTR_COUNT_0_OFFSET + CCM_BLOCK_IV_OFFSET; + ssi_req.ivgen_dma_addr_len = 3; + } else { + ssi_req.ivgen_dma_addr[0] = areq_ctx->gen_ctx.iv_dma_addr; + ssi_req.ivgen_dma_addr_len = 1; + } + + /* set the IV size (8/16 B long)*/ + ssi_req.ivgen_size = crypto_aead_ivsize(tfm); + } + + END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_1); + + /* STAT_PHASE_2: Create sequence */ + START_CYCLE_COUNT(); + + /* Load MLLI tables to SRAM if necessary */ + ssi_aead_load_mlli_to_sram(req, desc, &seq_len); + + /*TODO: move seq len by reference */ + switch (ctx->auth_mode) { + case DRV_HASH_SHA1: + case DRV_HASH_SHA256: + ssi_aead_hmac_authenc(req, desc, &seq_len); + break; + case DRV_HASH_XCBC_MAC: + ssi_aead_xcbc_authenc(req, desc, &seq_len); + break; +#if ( SSI_CC_HAS_AES_CCM || SSI_CC_HAS_AES_GCM ) + case DRV_HASH_NULL: +#if SSI_CC_HAS_AES_CCM + if (ctx->cipher_mode == DRV_CIPHER_CCM) { + ssi_aead_ccm(req, desc, &seq_len); + } +#endif /*SSI_CC_HAS_AES_CCM*/ +#if SSI_CC_HAS_AES_GCM + if (ctx->cipher_mode == DRV_CIPHER_GCTR) { + ssi_aead_gcm(req, desc, &seq_len); + } +#endif /*SSI_CC_HAS_AES_GCM*/ + break; +#endif + default: + SSI_LOG_ERR("Unsupported authenc (%d)\n", ctx->auth_mode); + ssi_buffer_mgr_unmap_aead_request(dev, req); + rc = -ENOTSUPP; + goto exit; + } + + END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_2); + + /* STAT_PHASE_3: Lock HW and push sequence */ + START_CYCLE_COUNT(); + + rc = send_request(ctx->drvdata, &ssi_req, desc, seq_len, 1); + + if (unlikely(rc != -EINPROGRESS)) { + SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc); + ssi_buffer_mgr_unmap_aead_request(dev, req); + } + + + END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3); +exit: + return rc; +} + +static int ssi_aead_encrypt(struct aead_request *req) +{ + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc; + + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; + areq_ctx->backup_giv = NULL; + areq_ctx->is_gcm4543 = false; + + areq_ctx->plaintext_authenticate_only = false; + + rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT); + if (rc != -EINPROGRESS) + req->iv = areq_ctx->backup_iv; + + return rc; +} + +#if SSI_CC_HAS_AES_CCM +static int ssi_rfc4309_ccm_encrypt(struct aead_request *req) +{ + /* Very similar to ssi_aead_encrypt() above. 
*/ + + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc = -EINVAL; + + if (!valid_assoclen(req)) { + SSI_LOG_ERR("invalid Assoclen:%u\n", req->assoclen ); + goto out; + } + + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; + areq_ctx->backup_giv = NULL; + areq_ctx->is_gcm4543 = true; + + ssi_rfc4309_ccm_process(req); + + rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT); + if (rc != -EINPROGRESS) + req->iv = areq_ctx->backup_iv; +out: + return rc; +} +#endif /* SSI_CC_HAS_AES_CCM */ + +static int ssi_aead_decrypt(struct aead_request *req) +{ + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc; + + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; + areq_ctx->backup_giv = NULL; + areq_ctx->is_gcm4543 = false; + + areq_ctx->plaintext_authenticate_only = false; + + rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT); + if (rc != -EINPROGRESS) + req->iv = areq_ctx->backup_iv; + + return rc; + +} + +#if SSI_CC_HAS_AES_CCM +static int ssi_rfc4309_ccm_decrypt(struct aead_request *req) +{ + /* Very similar to ssi_aead_decrypt() above. */ + + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc = -EINVAL; + + if (!valid_assoclen(req)) { + SSI_LOG_ERR("invalid Assoclen:%u\n", req->assoclen); + goto out; + } + + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; + areq_ctx->backup_giv = NULL; + + areq_ctx->is_gcm4543 = true; + ssi_rfc4309_ccm_process(req); + + rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT); + if (rc != -EINPROGRESS) + req->iv = areq_ctx->backup_iv; + +out: + return rc; +} +#endif /* SSI_CC_HAS_AES_CCM */ + +#if SSI_CC_HAS_AES_GCM + +static int ssi_rfc4106_gcm_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen) +{ + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + int rc = 0; + + SSI_LOG_DEBUG("ssi_rfc4106_gcm_setkey() keylen %d, key %p \n", keylen, key ); + + if (keylen < 4) + return -EINVAL; + + keylen -= 4; + memcpy(ctx->ctr_nonce, key + keylen, 4); + + rc = ssi_aead_setkey(tfm, key, keylen); + + return rc; +} + +static int ssi_rfc4543_gcm_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen) +{ + struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm); + int rc = 0; + + SSI_LOG_DEBUG("ssi_rfc4543_gcm_setkey() keylen %d, key %p \n", keylen, key ); + + if (keylen < 4) + return -EINVAL; + + keylen -= 4; + memcpy(ctx->ctr_nonce, key + keylen, 4); + + rc = ssi_aead_setkey(tfm, key, keylen); + + return rc; +} + +static int ssi_gcm_setauthsize(struct crypto_aead *authenc, + unsigned int authsize) +{ + switch (authsize) { + case 4: + case 8: + case 12: + case 13: + case 14: + case 15: + case 16: + break; + default: + return -EINVAL; + } + + return ssi_aead_setauthsize(authenc, authsize); +} + +static int ssi_rfc4106_gcm_setauthsize(struct crypto_aead *authenc, + unsigned int authsize) +{ + SSI_LOG_DEBUG("ssi_rfc4106_gcm_setauthsize() authsize %d \n", authsize ); + + switch (authsize) { + case 8: + case 12: + case 16: + break; + default: + return -EINVAL; + } + + return ssi_aead_setauthsize(authenc, authsize); +} + +static int ssi_rfc4543_gcm_setauthsize(struct crypto_aead *authenc, + unsigned int authsize) +{ + SSI_LOG_DEBUG("ssi_rfc4543_gcm_setauthsize() authsize %d \n", authsize ); + + if (authsize != 16) + return -EINVAL; + + return ssi_aead_setauthsize(authenc, authsize); +} + +static int ssi_rfc4106_gcm_encrypt(struct aead_request *req) +{ + /* Very similar to ssi_aead_encrypt() above. 
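For reference, a sketch of how one of these transforms might be exercised from other kernel code, assuming a 20-byte key buffer (a 16-byte AES key plus the 4-byte salt that ssi_rfc4106_gcm_setkey() strips). The function name is hypothetical and completion handling is deliberately omitted; a real caller must wait for the async completion before freeing the request on -EINPROGRESS.

#include <crypto/aead.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

static int rfc4106_demo(const u8 key[20], u8 iv[8],
			struct scatterlist *src, struct scatterlist *dst,
			unsigned int assoclen, unsigned int cryptlen)
{
	struct crypto_aead *tfm;
	struct aead_request *req;
	int rc;

	tfm = crypto_alloc_aead("rfc4106(gcm(aes))", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	rc = crypto_aead_setkey(tfm, key, 20);		/* AES key + 4-byte salt */
	if (!rc)
		rc = crypto_aead_setauthsize(tfm, 16);	/* full 16-byte tag */
	if (rc)
		goto out_tfm;

	req = aead_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		rc = -ENOMEM;
		goto out_tfm;
	}
	aead_request_set_callback(req, 0, NULL, NULL);
	aead_request_set_ad(req, assoclen);
	aead_request_set_crypt(req, src, dst, cryptlen, iv);
	rc = crypto_aead_encrypt(req);	/* -EINPROGRESS when handled async */

	aead_request_free(req);		/* illustration only: wait first! */
out_tfm:
	crypto_free_aead(tfm);
	return rc;
}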
*/ + + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc = -EINVAL; + + if (!valid_assoclen(req)) { + SSI_LOG_ERR("invalid Assoclen:%u\n", req->assoclen); + goto out; + } + + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; + areq_ctx->backup_giv = NULL; + + areq_ctx->plaintext_authenticate_only = false; + + ssi_rfc4_gcm_process(req); + areq_ctx->is_gcm4543 = true; + + rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT); + if (rc != -EINPROGRESS) + req->iv = areq_ctx->backup_iv; +out: + return rc; +} + +static int ssi_rfc4543_gcm_encrypt(struct aead_request *req) +{ + /* Very similar to ssi_aead_encrypt() above. */ + + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc; + + //plaintext is not encryped with rfc4543 + areq_ctx->plaintext_authenticate_only = true; + + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; + areq_ctx->backup_giv = NULL; + + ssi_rfc4_gcm_process(req); + areq_ctx->is_gcm4543 = true; + + rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT); + if (rc != -EINPROGRESS) + req->iv = areq_ctx->backup_iv; + + return rc; +} + +static int ssi_rfc4106_gcm_decrypt(struct aead_request *req) +{ + /* Very similar to ssi_aead_decrypt() above. */ + + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc = -EINVAL; + + if (!valid_assoclen(req)) { + SSI_LOG_ERR("invalid Assoclen:%u\n", req->assoclen); + goto out; + } + + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; + areq_ctx->backup_giv = NULL; + + areq_ctx->plaintext_authenticate_only = false; + + ssi_rfc4_gcm_process(req); + areq_ctx->is_gcm4543 = true; + + rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT); + if (rc != -EINPROGRESS) + req->iv = areq_ctx->backup_iv; +out: + return rc; +} + +static int ssi_rfc4543_gcm_decrypt(struct aead_request *req) +{ + /* Very similar to ssi_aead_decrypt() above. 
*/ + + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc; + + //plaintext is not decryped with rfc4543 + areq_ctx->plaintext_authenticate_only = true; + + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; + areq_ctx->backup_giv = NULL; + + ssi_rfc4_gcm_process(req); + areq_ctx->is_gcm4543 = true; + + rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT); + if (rc != -EINPROGRESS) + req->iv = areq_ctx->backup_iv; + + return rc; +} +#endif /* SSI_CC_HAS_AES_GCM */ + +/* DX Block aead alg */ +static struct ssi_alg_template aead_algs[] = { + { + .name = "authenc(hmac(sha1),cbc(aes))", + .driver_name = "authenc-hmac-sha1-cbc-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_aead_setkey, + .setauthsize = ssi_aead_setauthsize, + .encrypt = ssi_aead_encrypt, + .decrypt = ssi_aead_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = AES_BLOCK_SIZE, + .maxauthsize = SHA1_DIGEST_SIZE, + }, + .cipher_mode = DRV_CIPHER_CBC, + .flow_mode = S_DIN_to_AES, + .auth_mode = DRV_HASH_SHA1, + }, + { + .name = "authenc(hmac(sha1),cbc(des3_ede))", + .driver_name = "authenc-hmac-sha1-cbc-des3-dx", + .blocksize = DES3_EDE_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_aead_setkey, + .setauthsize = ssi_aead_setauthsize, + .encrypt = ssi_aead_encrypt, + .decrypt = ssi_aead_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = DES3_EDE_BLOCK_SIZE, + .maxauthsize = SHA1_DIGEST_SIZE, + }, + .cipher_mode = DRV_CIPHER_CBC, + .flow_mode = S_DIN_to_DES, + .auth_mode = DRV_HASH_SHA1, + }, + { + .name = "authenc(hmac(sha256),cbc(aes))", + .driver_name = "authenc-hmac-sha256-cbc-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_aead_setkey, + .setauthsize = ssi_aead_setauthsize, + .encrypt = ssi_aead_encrypt, + .decrypt = ssi_aead_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = AES_BLOCK_SIZE, + .maxauthsize = SHA256_DIGEST_SIZE, + }, + .cipher_mode = DRV_CIPHER_CBC, + .flow_mode = S_DIN_to_AES, + .auth_mode = DRV_HASH_SHA256, + }, + { + .name = "authenc(hmac(sha256),cbc(des3_ede))", + .driver_name = "authenc-hmac-sha256-cbc-des3-dx", + .blocksize = DES3_EDE_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_aead_setkey, + .setauthsize = ssi_aead_setauthsize, + .encrypt = ssi_aead_encrypt, + .decrypt = ssi_aead_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = DES3_EDE_BLOCK_SIZE, + .maxauthsize = SHA256_DIGEST_SIZE, + }, + .cipher_mode = DRV_CIPHER_CBC, + .flow_mode = S_DIN_to_DES, + .auth_mode = DRV_HASH_SHA256, + }, + { + .name = "authenc(xcbc(aes),cbc(aes))", + .driver_name = "authenc-xcbc-aes-cbc-aes-dx", + .blocksize = AES_BLOCK_SIZE, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_aead_setkey, + .setauthsize = ssi_aead_setauthsize, + .encrypt = ssi_aead_encrypt, + .decrypt = ssi_aead_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = AES_BLOCK_SIZE, + .maxauthsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_CBC, + .flow_mode = S_DIN_to_AES, + .auth_mode = DRV_HASH_XCBC_MAC, + }, + { + .name = "authenc(hmac(sha1),rfc3686(ctr(aes)))", + .driver_name = "authenc-hmac-sha1-rfc3686-ctr-aes-dx", + .blocksize = 1, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_aead_setkey, + .setauthsize = ssi_aead_setauthsize, + .encrypt = ssi_aead_encrypt, + .decrypt = ssi_aead_decrypt, 
+ .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = CTR_RFC3686_IV_SIZE, + .maxauthsize = SHA1_DIGEST_SIZE, + }, + .cipher_mode = DRV_CIPHER_CTR, + .flow_mode = S_DIN_to_AES, + .auth_mode = DRV_HASH_SHA1, + }, + { + .name = "authenc(hmac(sha256),rfc3686(ctr(aes)))", + .driver_name = "authenc-hmac-sha256-rfc3686-ctr-aes-dx", + .blocksize = 1, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_aead_setkey, + .setauthsize = ssi_aead_setauthsize, + .encrypt = ssi_aead_encrypt, + .decrypt = ssi_aead_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = CTR_RFC3686_IV_SIZE, + .maxauthsize = SHA256_DIGEST_SIZE, + }, + .cipher_mode = DRV_CIPHER_CTR, + .flow_mode = S_DIN_to_AES, + .auth_mode = DRV_HASH_SHA256, + }, + { + .name = "authenc(xcbc(aes),rfc3686(ctr(aes)))", + .driver_name = "authenc-xcbc-aes-rfc3686-ctr-aes-dx", + .blocksize = 1, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_aead_setkey, + .setauthsize = ssi_aead_setauthsize, + .encrypt = ssi_aead_encrypt, + .decrypt = ssi_aead_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = CTR_RFC3686_IV_SIZE, + .maxauthsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_CTR, + .flow_mode = S_DIN_to_AES, + .auth_mode = DRV_HASH_XCBC_MAC, + }, +#if SSI_CC_HAS_AES_CCM + { + .name = "ccm(aes)", + .driver_name = "ccm-aes-dx", + .blocksize = 1, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_aead_setkey, + .setauthsize = ssi_ccm_setauthsize, + .encrypt = ssi_aead_encrypt, + .decrypt = ssi_aead_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = AES_BLOCK_SIZE, + .maxauthsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_CCM, + .flow_mode = S_DIN_to_AES, + .auth_mode = DRV_HASH_NULL, + }, + { + .name = "rfc4309(ccm(aes))", + .driver_name = "rfc4309-ccm-aes-dx", + .blocksize = 1, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_rfc4309_ccm_setkey, + .setauthsize = ssi_rfc4309_ccm_setauthsize, + .encrypt = ssi_rfc4309_ccm_encrypt, + .decrypt = ssi_rfc4309_ccm_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = CCM_BLOCK_IV_SIZE, + .maxauthsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_CCM, + .flow_mode = S_DIN_to_AES, + .auth_mode = DRV_HASH_NULL, + }, +#endif /*SSI_CC_HAS_AES_CCM*/ +#if SSI_CC_HAS_AES_GCM + { + .name = "gcm(aes)", + .driver_name = "gcm-aes-dx", + .blocksize = 1, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_aead_setkey, + .setauthsize = ssi_gcm_setauthsize, + .encrypt = ssi_aead_encrypt, + .decrypt = ssi_aead_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = 12, + .maxauthsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_GCTR, + .flow_mode = S_DIN_to_AES, + .auth_mode = DRV_HASH_NULL, + }, + { + .name = "rfc4106(gcm(aes))", + .driver_name = "rfc4106-gcm-aes-dx", + .blocksize = 1, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_rfc4106_gcm_setkey, + .setauthsize = ssi_rfc4106_gcm_setauthsize, + .encrypt = ssi_rfc4106_gcm_encrypt, + .decrypt = ssi_rfc4106_gcm_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = GCM_BLOCK_RFC4_IV_SIZE, + .maxauthsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_GCTR, + .flow_mode = S_DIN_to_AES, + .auth_mode = DRV_HASH_NULL, + }, + { + .name = "rfc4543(gcm(aes))", + .driver_name = "rfc4543-gcm-aes-dx", + .blocksize = 1, + .type = CRYPTO_ALG_TYPE_AEAD, + .template_aead = { + .setkey = ssi_rfc4543_gcm_setkey, + .setauthsize = 
ssi_rfc4543_gcm_setauthsize, + .encrypt = ssi_rfc4543_gcm_encrypt, + .decrypt = ssi_rfc4543_gcm_decrypt, + .init = ssi_aead_init, + .exit = ssi_aead_exit, + .ivsize = GCM_BLOCK_RFC4_IV_SIZE, + .maxauthsize = AES_BLOCK_SIZE, + }, + .cipher_mode = DRV_CIPHER_GCTR, + .flow_mode = S_DIN_to_AES, + .auth_mode = DRV_HASH_NULL, + }, +#endif /*SSI_CC_HAS_AES_GCM*/ +}; + +static struct ssi_crypto_alg *ssi_aead_create_alg(struct ssi_alg_template *template) +{ + struct ssi_crypto_alg *t_alg; + struct aead_alg *alg; + + t_alg = kzalloc(sizeof(struct ssi_crypto_alg), GFP_KERNEL); + if (!t_alg) { + SSI_LOG_ERR("failed to allocate t_alg\n"); + return ERR_PTR(-ENOMEM); + } + alg = &template->template_aead; + + snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", template->name); + snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", + template->driver_name); + alg->base.cra_module = THIS_MODULE; + alg->base.cra_priority = SSI_CRA_PRIO; + + alg->base.cra_ctxsize = sizeof(struct ssi_aead_ctx); + alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY | + template->type; + alg->init = ssi_aead_init; + alg->exit = ssi_aead_exit; + + t_alg->aead_alg = *alg; + + t_alg->cipher_mode = template->cipher_mode; + t_alg->flow_mode = template->flow_mode; + t_alg->auth_mode = template->auth_mode; + + return t_alg; +} + +int ssi_aead_free(struct ssi_drvdata *drvdata) +{ + struct ssi_crypto_alg *t_alg, *n; + struct ssi_aead_handle *aead_handle = + (struct ssi_aead_handle *)drvdata->aead_handle; + + if (aead_handle != NULL) { + /* Remove registered algs */ + list_for_each_entry_safe(t_alg, n, &aead_handle->aead_list, entry) { + crypto_unregister_aead(&t_alg->aead_alg); + list_del(&t_alg->entry); + kfree(t_alg); + } + kfree(aead_handle); + drvdata->aead_handle = NULL; + } + + return 0; +} + +int ssi_aead_alloc(struct ssi_drvdata *drvdata) +{ + struct ssi_aead_handle *aead_handle; + struct ssi_crypto_alg *t_alg; + int rc = -ENOMEM; + int alg; + + aead_handle = kmalloc(sizeof(struct ssi_aead_handle), GFP_KERNEL); + if (aead_handle == NULL) { + rc = -ENOMEM; + goto fail0; + } + + drvdata->aead_handle = aead_handle; + + aead_handle->sram_workspace_addr = ssi_sram_mgr_alloc( + drvdata, MAX_HMAC_DIGEST_SIZE); + if (aead_handle->sram_workspace_addr == NULL_SRAM_ADDR) { + SSI_LOG_ERR("SRAM pool exhausted\n"); + rc = -ENOMEM; + goto fail1; + } + + INIT_LIST_HEAD(&aead_handle->aead_list); + + /* Linux crypto */ + for (alg = 0; alg < ARRAY_SIZE(aead_algs); alg++) { + t_alg = ssi_aead_create_alg(&aead_algs[alg]); + if (IS_ERR(t_alg)) { + rc = PTR_ERR(t_alg); + SSI_LOG_ERR("%s alg allocation failed\n", + aead_algs[alg].driver_name); + goto fail1; + } + t_alg->drvdata = drvdata; + rc = crypto_register_aead(&t_alg->aead_alg); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("%s alg registration failed\n", + t_alg->aead_alg.base.cra_driver_name); + goto fail2; + } else { + list_add_tail(&t_alg->entry, &aead_handle->aead_list); + SSI_LOG_DEBUG("Registered %s\n", t_alg->aead_alg.base.cra_driver_name); + } + } + + return 0; + +fail2: + kfree(t_alg); +fail1: + ssi_aead_free(drvdata); +fail0: + return rc; +} + + + diff --git a/drivers/staging/ccree/ssi_aead.h b/drivers/staging/ccree/ssi_aead.h new file mode 100644 index 0000000..fe88c9e --- /dev/null +++ b/drivers/staging/ccree/ssi_aead.h @@ -0,0 +1,120 @@ +/* + * Copyright (C) 2012-2017 ARM Limited or its affiliates. 
+ * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see . + */ + +/* \file ssi_aead.h + ARM CryptoCell AEAD Crypto API + */ + +#ifndef __SSI_AEAD_H__ +#define __SSI_AEAD_H__ + +#include +#include +#include + + +/* mac_cmp - HW writes 8 B but all bytes hold the same value */ +#define ICV_CMP_SIZE 8 +#define CCM_CONFIG_BUF_SIZE (AES_BLOCK_SIZE*3) +#define MAX_MAC_SIZE MAX(SHA256_DIGEST_SIZE, AES_BLOCK_SIZE) + + +/* defines for AES GCM configuration buffer */ +#define GCM_BLOCK_LEN_SIZE 8 + +#define GCM_BLOCK_RFC4_IV_OFFSET 4 +#define GCM_BLOCK_RFC4_IV_SIZE 8 /* IV size for rfc's */ +#define GCM_BLOCK_RFC4_NONCE_OFFSET 0 +#define GCM_BLOCK_RFC4_NONCE_SIZE 4 + + + +/* Offsets into AES CCM configuration buffer */ +#define CCM_B0_OFFSET 0 +#define CCM_A0_OFFSET 16 +#define CCM_CTR_COUNT_0_OFFSET 32 +/* CCM B0 and CTR_COUNT constants. */ +#define CCM_BLOCK_NONCE_OFFSET 1 /* Nonce offset inside B0 and CTR_COUNT */ +#define CCM_BLOCK_NONCE_SIZE 3 /* Nonce size inside B0 and CTR_COUNT */ +#define CCM_BLOCK_IV_OFFSET 4 /* IV offset inside B0 and CTR_COUNT */ +#define CCM_BLOCK_IV_SIZE 8 /* IV size inside B0 and CTR_COUNT */ + +enum aead_ccm_header_size { + ccm_header_size_null = -1, + ccm_header_size_zero = 0, + ccm_header_size_2 = 2, + ccm_header_size_6 = 6, + ccm_header_size_max = INT32_MAX +}; + +struct aead_req_ctx { + /* Allocate cache line although only 4 bytes are needed to + * assure next field falls @ cache line + * Used for both: digest HW compare and CCM/GCM MAC value */ + uint8_t mac_buf[MAX_MAC_SIZE] ____cacheline_aligned; + uint8_t ctr_iv[AES_BLOCK_SIZE] ____cacheline_aligned; + + //used in gcm + uint8_t gcm_iv_inc1[AES_BLOCK_SIZE] ____cacheline_aligned; + uint8_t gcm_iv_inc2[AES_BLOCK_SIZE] ____cacheline_aligned; + uint8_t hkey[AES_BLOCK_SIZE] ____cacheline_aligned; + struct { + uint8_t lenA[GCM_BLOCK_LEN_SIZE] ____cacheline_aligned; + uint8_t lenC[GCM_BLOCK_LEN_SIZE] ; + } gcm_len_block; + + uint8_t ccm_config[CCM_CONFIG_BUF_SIZE] ____cacheline_aligned; + unsigned int hw_iv_size ____cacheline_aligned; /*HW actual size input*/ + uint8_t backup_mac[MAX_MAC_SIZE]; /*used to prevent cache coherence problem*/ + uint8_t *backup_iv; /*store iv for generated IV flow*/ + uint8_t *backup_giv; /*store iv for rfc3686(ctr) flow*/ + dma_addr_t mac_buf_dma_addr; /* internal ICV DMA buffer */ + dma_addr_t ccm_iv0_dma_addr; /* buffer for internal ccm configurations */ + dma_addr_t icv_dma_addr; /* Phys. address of ICV */ + + //used in gcm + dma_addr_t gcm_iv_inc1_dma_addr; /* buffer for internal gcm configurations */ + dma_addr_t gcm_iv_inc2_dma_addr; /* buffer for internal gcm configurations */ + dma_addr_t hkey_dma_addr; /* Phys. address of hkey */ + dma_addr_t gcm_block_len_dma_addr; /* Phys. address of gcm block len */ + bool is_gcm4543; + + uint8_t *icv_virt_addr; /* Virt. 
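/*
 * Resulting layout of the 48-byte ccm_config scratch buffer, as implied by
 * the offsets defined earlier in this header (rfc4309 case, one 16-byte AES
 * block per entry; a descriptive note, not normative):
 *
 *   bytes  0..15  B0           flags | nonce(3) @1 | IV(8) @4 | l(m) @12..15
 *   bytes 16..31  A0           2- or 6-byte encoded assoclen header, rest 0
 *   bytes 32..47  CTR_COUNT_0  same nonce/IV with the counter byte zeroed
 */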
address of ICV */ + struct async_gen_req_ctx gen_ctx; + struct ssi_mlli assoc; + struct ssi_mlli src; + struct ssi_mlli dst; + struct scatterlist* srcSgl; + struct scatterlist* dstSgl; + unsigned int srcOffset; + unsigned int dstOffset; + enum ssi_req_dma_buf_type assoc_buff_type; + enum ssi_req_dma_buf_type data_buff_type; + struct mlli_params mlli_params; + unsigned int cryptlen; + struct scatterlist ccm_adata_sg; + enum aead_ccm_header_size ccm_hdr_size; + unsigned int req_authsize; + enum drv_cipher_mode cipher_mode; + bool is_icv_fragmented; + bool is_single_pass; + bool plaintext_authenticate_only; //for gcm_rfc4543 +}; + +int ssi_aead_alloc(struct ssi_drvdata *drvdata); +int ssi_aead_free(struct ssi_drvdata *drvdata); + +#endif /*__SSI_AEAD_H__*/ diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c index 6ff5d6b..0140199 100644 --- a/drivers/staging/ccree/ssi_buffer_mgr.c +++ b/drivers/staging/ccree/ssi_buffer_mgr.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -30,6 +31,7 @@ #include "cc_lli_defs.h" #include "ssi_cipher.h" #include "ssi_hash.h" +#include "ssi_aead.h" #define LLI_MAX_NUM_OF_DATA_ENTRIES 128 #define LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES 4 @@ -486,6 +488,42 @@ static int ssi_buffer_mgr_map_scatterlist( return 0; } +static inline int +ssi_aead_handle_config_buf(struct device *dev, + struct aead_req_ctx *areq_ctx, + uint8_t* config_data, + struct buffer_array *sg_data, + unsigned int assoclen) +{ + SSI_LOG_DEBUG(" handle additional data config set to DLLI \n"); + /* create sg for the current buffer */ + sg_init_one(&areq_ctx->ccm_adata_sg, config_data, AES_BLOCK_SIZE + areq_ctx->ccm_hdr_size); + if (unlikely(dma_map_sg(dev, &areq_ctx->ccm_adata_sg, 1, + DMA_TO_DEVICE) != 1)) { + SSI_LOG_ERR("dma_map_sg() " + "config buffer failed\n"); + return -ENOMEM; + } + SSI_LOG_DEBUG("Mapped curr_buff: dma_address=0x%llX " + "page_link=0x%08lX addr=%pK " + "offset=%u length=%u\n", + (unsigned long long)sg_dma_address(&areq_ctx->ccm_adata_sg), + areq_ctx->ccm_adata_sg.page_link, + sg_virt(&areq_ctx->ccm_adata_sg), + areq_ctx->ccm_adata_sg.offset, + areq_ctx->ccm_adata_sg.length); + /* prepare for case of MLLI */ + if (assoclen > 0) { + ssi_buffer_mgr_add_scatterlist_entry(sg_data, 1, + &areq_ctx->ccm_adata_sg, + (AES_BLOCK_SIZE + + areq_ctx->ccm_hdr_size), 0, + false, NULL); + } + return 0; +} + + static inline int ssi_ahash_handle_curr_buf(struct device *dev, struct ahash_req_ctx *areq_ctx, uint8_t* curr_buff, @@ -666,6 +704,867 @@ int ssi_buffer_mgr_map_blkcipher_request( return rc; } +void ssi_buffer_mgr_unmap_aead_request( + struct device *dev, struct aead_request *req) +{ + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + unsigned int hw_iv_size = areq_ctx->hw_iv_size; + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + uint32_t dummy; + bool chained; + uint32_t size_to_unmap = 0; + + if (areq_ctx->mac_buf_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->mac_buf_dma_addr); + dma_unmap_single(dev, areq_ctx->mac_buf_dma_addr, + MAX_MAC_SIZE, DMA_BIDIRECTIONAL); + } + +#if SSI_CC_HAS_AES_GCM + if (areq_ctx->cipher_mode == DRV_CIPHER_GCTR) { + if (areq_ctx->hkey_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->hkey_dma_addr); + dma_unmap_single(dev, areq_ctx->hkey_dma_addr, + AES_BLOCK_SIZE, DMA_BIDIRECTIONAL); + } + + if (areq_ctx->gcm_block_len_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_block_len_dma_addr); + dma_unmap_single(dev, 
areq_ctx->gcm_block_len_dma_addr, + AES_BLOCK_SIZE, DMA_TO_DEVICE); + } + + if (areq_ctx->gcm_iv_inc1_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc1_dma_addr); + dma_unmap_single(dev, areq_ctx->gcm_iv_inc1_dma_addr, + AES_BLOCK_SIZE, DMA_TO_DEVICE); + } + + if (areq_ctx->gcm_iv_inc2_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc2_dma_addr); + dma_unmap_single(dev, areq_ctx->gcm_iv_inc2_dma_addr, + AES_BLOCK_SIZE, DMA_TO_DEVICE); + } + } +#endif + + if (areq_ctx->ccm_hdr_size != ccm_header_size_null) { + if (areq_ctx->ccm_iv0_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->ccm_iv0_dma_addr); + dma_unmap_single(dev, areq_ctx->ccm_iv0_dma_addr, + AES_BLOCK_SIZE, DMA_TO_DEVICE); + } + + if (&areq_ctx->ccm_adata_sg != NULL) + dma_unmap_sg(dev, &areq_ctx->ccm_adata_sg, + 1, DMA_TO_DEVICE); + } + if (areq_ctx->gen_ctx.iv_dma_addr != 0) { + SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gen_ctx.iv_dma_addr); + dma_unmap_single(dev, areq_ctx->gen_ctx.iv_dma_addr, + hw_iv_size, DMA_BIDIRECTIONAL); + } + + /*In case a pool was set, a table was + allocated and should be released */ + if (areq_ctx->mlli_params.curr_pool != NULL) { + SSI_LOG_DEBUG("free MLLI buffer: dma=0x%08llX virt=%pK\n", + (unsigned long long)areq_ctx->mlli_params.mlli_dma_addr, + areq_ctx->mlli_params.mlli_virt_addr); + SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->mlli_params.mlli_dma_addr); + dma_pool_free(areq_ctx->mlli_params.curr_pool, + areq_ctx->mlli_params.mlli_virt_addr, + areq_ctx->mlli_params.mlli_dma_addr); + } + + SSI_LOG_DEBUG("Unmapping src sgl: req->src=%pK areq_ctx->src.nents=%u areq_ctx->assoc.nents=%u assoclen:%u cryptlen=%u\n", sg_virt(req->src),areq_ctx->src.nents,areq_ctx->assoc.nents,req->assoclen,req->cryptlen); + SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(req->src)); + size_to_unmap = req->assoclen+req->cryptlen; + if(areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT){ + size_to_unmap += areq_ctx->req_authsize; + } + if (areq_ctx->is_gcm4543) + size_to_unmap += crypto_aead_ivsize(tfm); + + dma_unmap_sg(dev, req->src, ssi_buffer_mgr_get_sgl_nents(req->src,size_to_unmap,&dummy,&chained) , DMA_BIDIRECTIONAL); + if (unlikely(req->src != req->dst)) { + SSI_LOG_DEBUG("Unmapping dst sgl: req->dst=%pK\n", + sg_virt(req->dst)); + SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(req->dst)); + dma_unmap_sg(dev, req->dst, ssi_buffer_mgr_get_sgl_nents(req->dst,size_to_unmap,&dummy,&chained), + DMA_BIDIRECTIONAL); + } +#if DX_HAS_ACP + if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) && + likely(req->src == req->dst)) + { + uint32_t size_to_skip = req->assoclen; + if (areq_ctx->is_gcm4543) { + size_to_skip += crypto_aead_ivsize(tfm); + } + /* copy mac to a temporary location to deal with possible + data memory overriding that caused by cache coherence problem. */ + ssi_buffer_mgr_copy_scatterlist_portion( + areq_ctx->backup_mac, req->src, + size_to_skip+ req->cryptlen - areq_ctx->req_authsize, + size_to_skip+ req->cryptlen, SSI_SG_FROM_BUF); + } +#endif +} + +static inline int ssi_buffer_mgr_get_aead_icv_nents( + struct scatterlist *sgl, + unsigned int sgl_nents, + unsigned int authsize, + uint32_t last_entry_data_size, + bool *is_icv_fragmented) +{ + unsigned int icv_max_size = 0; + unsigned int icv_required_size = authsize > last_entry_data_size ? 
(authsize - last_entry_data_size) : authsize; + unsigned int nents; + unsigned int i; + + if (sgl_nents < MAX_ICV_NENTS_SUPPORTED) { + *is_icv_fragmented = false; + return 0; + } + + for( i = 0 ; i < (sgl_nents - MAX_ICV_NENTS_SUPPORTED) ; i++) { + if (sgl == NULL) { + break; + } + sgl = sg_next(sgl); + } + + if (sgl != NULL) { + icv_max_size = sgl->length; + } + + if (last_entry_data_size > authsize) { + nents = 0; /* ICV attached to data in last entry (not fragmented!) */ + *is_icv_fragmented = false; + } else if (last_entry_data_size == authsize) { + nents = 1; /* ICV placed in whole last entry (not fragmented!) */ + *is_icv_fragmented = false; + } else if (icv_max_size > icv_required_size) { + nents = 1; + *is_icv_fragmented = true; + } else if (icv_max_size == icv_required_size) { + nents = 2; + *is_icv_fragmented = true; + } else { + SSI_LOG_ERR("Unsupported num. of ICV fragments (> %d)\n", + MAX_ICV_NENTS_SUPPORTED); + nents = -1; /*unsupported*/ + } + SSI_LOG_DEBUG("is_frag=%s icv_nents=%u\n", + (*is_icv_fragmented ? "true" : "false"), nents); + + return nents; +} + +static inline int ssi_buffer_mgr_aead_chain_iv( + struct ssi_drvdata *drvdata, + struct aead_request *req, + struct buffer_array *sg_data, + bool is_last, bool do_chain) +{ + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + unsigned int hw_iv_size = areq_ctx->hw_iv_size; + struct device *dev = &drvdata->plat_dev->dev; + int rc = 0; + + if (unlikely(req->iv == NULL)) { + areq_ctx->gen_ctx.iv_dma_addr = 0; + goto chain_iv_exit; + } + + areq_ctx->gen_ctx.iv_dma_addr = dma_map_single(dev, req->iv, + hw_iv_size, DMA_BIDIRECTIONAL); + if (unlikely(dma_mapping_error(dev, areq_ctx->gen_ctx.iv_dma_addr))) { + SSI_LOG_ERR("Mapping iv %u B at va=%pK for DMA failed\n", + hw_iv_size, req->iv); + rc = -ENOMEM; + goto chain_iv_exit; + } + SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gen_ctx.iv_dma_addr, hw_iv_size); + + SSI_LOG_DEBUG("Mapped iv %u B at va=%pK to dma=0x%llX\n", + hw_iv_size, req->iv, + (unsigned long long)areq_ctx->gen_ctx.iv_dma_addr); + if (do_chain == true && areq_ctx->plaintext_authenticate_only == true){ // TODO: what about CTR?? 
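/*
 * Rough decision table for ssi_buffer_mgr_get_aead_icv_nents() above, given
 * the byte count of the last mapped SG entry versus the tag size (authsize):
 *
 *   last > authsize  -> nents = 0, ICV contiguous with data in the last entry
 *   last == authsize -> nents = 1, ICV exactly fills the last entry (contig.)
 *   last < authsize  -> ICV fragmented across the trailing entries; nents is
 *                       1 or 2 depending on how much of the tag the preceding
 *                       entry can hold, otherwise the layout is rejected
 */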
ask Ron + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + unsigned int iv_size_to_authenc = crypto_aead_ivsize(tfm); + unsigned int iv_ofs = GCM_BLOCK_RFC4_IV_OFFSET; + /* Chain to given list */ + ssi_buffer_mgr_add_buffer_entry( + sg_data, areq_ctx->gen_ctx.iv_dma_addr + iv_ofs, + iv_size_to_authenc, is_last, + &areq_ctx->assoc.mlli_nents); + areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI; + } + +chain_iv_exit: + return rc; +} + +static inline int ssi_buffer_mgr_aead_chain_assoc( + struct ssi_drvdata *drvdata, + struct aead_request *req, + struct buffer_array *sg_data, + bool is_last, bool do_chain) +{ + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc = 0; + uint32_t mapped_nents = 0; + struct scatterlist *current_sg = req->src; + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + unsigned int sg_index = 0; + uint32_t size_of_assoc = req->assoclen; + + if (areq_ctx->is_gcm4543) { + size_of_assoc += crypto_aead_ivsize(tfm); + } + + if (sg_data == NULL) { + rc = -EINVAL; + goto chain_assoc_exit; + } + + if (unlikely(req->assoclen == 0)) { + areq_ctx->assoc_buff_type = SSI_DMA_BUF_NULL; + areq_ctx->assoc.nents = 0; + areq_ctx->assoc.mlli_nents = 0; + SSI_LOG_DEBUG("Chain assoc of length 0: buff_type=%s nents=%u\n", + GET_DMA_BUFFER_TYPE(areq_ctx->assoc_buff_type), + areq_ctx->assoc.nents); + goto chain_assoc_exit; + } + + //iterate over the sgl to see how many entries are for associated data + //it is assumed that if we reach here , the sgl is already mapped + sg_index = current_sg->length; + if (sg_index > size_of_assoc) { //the first entry in the scatter list contains all the associated data + mapped_nents++; + } + else{ + while (sg_index <= size_of_assoc) { + current_sg = sg_next(current_sg); + //if have reached the end of the sgl, then this is unexpected + if (current_sg == NULL) { + SSI_LOG_ERR("reached end of sg list. unexpected \n"); + BUG(); + } + sg_index += current_sg->length; + mapped_nents++; + } + } + if (unlikely(mapped_nents > LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES)) { + SSI_LOG_ERR("Too many fragments. current %d max %d\n", + mapped_nents, LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES); + return -ENOMEM; + } + areq_ctx->assoc.nents = mapped_nents; + + /* in CCM case we have additional entry for + * ccm header configurations */ + if (areq_ctx->ccm_hdr_size != ccm_header_size_null) { + if (unlikely((mapped_nents + 1) > + LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES)) { + + SSI_LOG_ERR("CCM case.Too many fragments. 
" + "Current %d max %d\n", + (areq_ctx->assoc.nents + 1), + LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES); + rc = -ENOMEM; + goto chain_assoc_exit; + } + } + + if (likely(mapped_nents == 1) && + (areq_ctx->ccm_hdr_size == ccm_header_size_null)) + areq_ctx->assoc_buff_type = SSI_DMA_BUF_DLLI; + else + areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI; + + if (unlikely((do_chain == true) || + (areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI))) { + + SSI_LOG_DEBUG("Chain assoc: buff_type=%s nents=%u\n", + GET_DMA_BUFFER_TYPE(areq_ctx->assoc_buff_type), + areq_ctx->assoc.nents); + ssi_buffer_mgr_add_scatterlist_entry( + sg_data, areq_ctx->assoc.nents, + req->src, req->assoclen, 0, is_last, + &areq_ctx->assoc.mlli_nents); + areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI; + } + +chain_assoc_exit: + return rc; +} + +static inline void ssi_buffer_mgr_prepare_aead_data_dlli( + struct aead_request *req, + uint32_t *src_last_bytes, uint32_t *dst_last_bytes) +{ + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type; + unsigned int authsize = areq_ctx->req_authsize; + + areq_ctx->is_icv_fragmented = false; + if (likely(req->src == req->dst)) { + /*INPLACE*/ + areq_ctx->icv_dma_addr = sg_dma_address( + areq_ctx->srcSgl)+ + (*src_last_bytes - authsize); + areq_ctx->icv_virt_addr = sg_virt( + areq_ctx->srcSgl) + + (*src_last_bytes - authsize); + } else if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) { + /*NON-INPLACE and DECRYPT*/ + areq_ctx->icv_dma_addr = sg_dma_address( + areq_ctx->srcSgl) + + (*src_last_bytes - authsize); + areq_ctx->icv_virt_addr = sg_virt( + areq_ctx->srcSgl) + + (*src_last_bytes - authsize); + } else { + /*NON-INPLACE and ENCRYPT*/ + areq_ctx->icv_dma_addr = sg_dma_address( + areq_ctx->dstSgl) + + (*dst_last_bytes - authsize); + areq_ctx->icv_virt_addr = sg_virt( + areq_ctx->dstSgl)+ + (*dst_last_bytes - authsize); + } +} + +static inline int ssi_buffer_mgr_prepare_aead_data_mlli( + struct ssi_drvdata *drvdata, + struct aead_request *req, + struct buffer_array *sg_data, + uint32_t *src_last_bytes, uint32_t *dst_last_bytes, + bool is_last_table) +{ + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type; + unsigned int authsize = areq_ctx->req_authsize; + int rc = 0, icv_nents; + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + + if (likely(req->src == req->dst)) { + /*INPLACE*/ + ssi_buffer_mgr_add_scatterlist_entry(sg_data, + areq_ctx->src.nents, areq_ctx->srcSgl, + areq_ctx->cryptlen,areq_ctx->srcOffset, is_last_table, + &areq_ctx->src.mlli_nents); + + icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->srcSgl, + areq_ctx->src.nents, authsize, *src_last_bytes, + &areq_ctx->is_icv_fragmented); + if (unlikely(icv_nents < 0)) { + rc = -ENOTSUPP; + goto prepare_data_mlli_exit; + } + + if (unlikely(areq_ctx->is_icv_fragmented == true)) { + /* Backup happens only when ICV is fragmented, ICV + verification is made by CPU compare in order to simplify + MAC verification upon request completion */ + if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) { +#if !DX_HAS_ACP + /* In ACP platform we already copying ICV + for any INPLACE-DECRYPT operation, hence + we must neglect this code. 
+static inline int ssi_buffer_mgr_prepare_aead_data_mlli(
+    struct ssi_drvdata *drvdata,
+    struct aead_request *req,
+    struct buffer_array *sg_data,
+    uint32_t *src_last_bytes, uint32_t *dst_last_bytes,
+    bool is_last_table)
+{
+    struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+    enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
+    unsigned int authsize = areq_ctx->req_authsize;
+    int rc = 0, icv_nents;
+    struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+
+    if (likely(req->src == req->dst)) {
+        /* INPLACE */
+        ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+            areq_ctx->src.nents, areq_ctx->srcSgl,
+            areq_ctx->cryptlen, areq_ctx->srcOffset, is_last_table,
+            &areq_ctx->src.mlli_nents);
+
+        icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->srcSgl,
+            areq_ctx->src.nents, authsize, *src_last_bytes,
+            &areq_ctx->is_icv_fragmented);
+        if (unlikely(icv_nents < 0)) {
+            rc = -ENOTSUPP;
+            goto prepare_data_mlli_exit;
+        }
+
+        if (unlikely(areq_ctx->is_icv_fragmented == true)) {
+            /* Backup happens only when the ICV is fragmented; ICV
+             * verification is then done by a CPU compare in order to
+             * simplify MAC verification upon request completion. */
+            if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) {
+#if !DX_HAS_ACP
+                /* On ACP platforms we already copy the ICV for any
+                 * INPLACE-DECRYPT operation, hence this code must be
+                 * skipped there. */
+                uint32_t size_to_skip = req->assoclen;
+                if (areq_ctx->is_gcm4543) {
+                    size_to_skip += crypto_aead_ivsize(tfm);
+                }
+                ssi_buffer_mgr_copy_scatterlist_portion(
+                    areq_ctx->backup_mac, req->src,
+                    size_to_skip + req->cryptlen - areq_ctx->req_authsize,
+                    size_to_skip + req->cryptlen, SSI_SG_TO_BUF);
+#endif
+                areq_ctx->icv_virt_addr = areq_ctx->backup_mac;
+            } else {
+                areq_ctx->icv_virt_addr = areq_ctx->mac_buf;
+                areq_ctx->icv_dma_addr = areq_ctx->mac_buf_dma_addr;
+            }
+        } else { /* Contiguous ICV */
+            /* Should handle the case where the sg is not contiguous. */
+            areq_ctx->icv_dma_addr = sg_dma_address(
+                &areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
+                (*src_last_bytes - authsize);
+            areq_ctx->icv_virt_addr = sg_virt(
+                &areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
+                (*src_last_bytes - authsize);
+        }
+
+    } else if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) {
+        /* NON-INPLACE and DECRYPT */
+        ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+            areq_ctx->src.nents, areq_ctx->srcSgl,
+            areq_ctx->cryptlen, areq_ctx->srcOffset, is_last_table,
+            &areq_ctx->src.mlli_nents);
+        ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+            areq_ctx->dst.nents, areq_ctx->dstSgl,
+            areq_ctx->cryptlen, areq_ctx->dstOffset, is_last_table,
+            &areq_ctx->dst.mlli_nents);
+
+        icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->srcSgl,
+            areq_ctx->src.nents, authsize, *src_last_bytes,
+            &areq_ctx->is_icv_fragmented);
+        if (unlikely(icv_nents < 0)) {
+            rc = -ENOTSUPP;
+            goto prepare_data_mlli_exit;
+        }
+
+        if (unlikely(areq_ctx->is_icv_fragmented == true)) {
+            /* Backup happens only when the ICV is fragmented; ICV
+             * verification is then done by a CPU compare in order to
+             * simplify MAC verification upon request completion. */
+            uint32_t size_to_skip = req->assoclen;
+            if (areq_ctx->is_gcm4543) {
+                size_to_skip += crypto_aead_ivsize(tfm);
+            }
+            ssi_buffer_mgr_copy_scatterlist_portion(
+                areq_ctx->backup_mac, req->src,
+                size_to_skip + req->cryptlen - areq_ctx->req_authsize,
+                size_to_skip + req->cryptlen, SSI_SG_TO_BUF);
+            areq_ctx->icv_virt_addr = areq_ctx->backup_mac;
+        } else { /* Contiguous ICV */
+            /* Should handle the case where the sg is not contiguous. */
+            areq_ctx->icv_dma_addr = sg_dma_address(
+                &areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
+                (*src_last_bytes - authsize);
+            areq_ctx->icv_virt_addr = sg_virt(
+                &areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
+                (*src_last_bytes - authsize);
+        }
+
+    } else {
+        /* NON-INPLACE and ENCRYPT */
+        ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+            areq_ctx->dst.nents, areq_ctx->dstSgl,
+            areq_ctx->cryptlen, areq_ctx->dstOffset, is_last_table,
+            &areq_ctx->dst.mlli_nents);
+        ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+            areq_ctx->src.nents, areq_ctx->srcSgl,
+            areq_ctx->cryptlen, areq_ctx->srcOffset, is_last_table,
+            &areq_ctx->src.mlli_nents);
+
+        icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->dstSgl,
+            areq_ctx->dst.nents, authsize, *dst_last_bytes,
+            &areq_ctx->is_icv_fragmented);
+        if (unlikely(icv_nents < 0)) {
+            rc = -ENOTSUPP;
+            goto prepare_data_mlli_exit;
+        }
+        if (likely(areq_ctx->is_icv_fragmented == false)) {
+            /* Contiguous ICV */
+            areq_ctx->icv_dma_addr = sg_dma_address(
+                &areq_ctx->dstSgl[areq_ctx->dst.nents - 1]) +
+                (*dst_last_bytes - authsize);
+            areq_ctx->icv_virt_addr = sg_virt(
+                &areq_ctx->dstSgl[areq_ctx->dst.nents - 1]) +
+                (*dst_last_bytes - authsize);
+        } else {
+            areq_ctx->icv_dma_addr = areq_ctx->mac_buf_dma_addr;
+            areq_ctx->icv_virt_addr = areq_ctx->mac_buf;
+        }
+    }
+
+prepare_data_mlli_exit:
+    return rc;
+}
+
+static inline int ssi_buffer_mgr_aead_chain_data(
+    struct ssi_drvdata *drvdata,
+    struct aead_request *req,
+    struct buffer_array *sg_data,
+    bool is_last_table, bool do_chain)
+{
+    struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+    struct device *dev = &drvdata->plat_dev->dev;
+    enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
+    unsigned int authsize = areq_ctx->req_authsize;
+    int src_last_bytes = 0, dst_last_bytes = 0;
+    int rc = 0;
+    uint32_t src_mapped_nents = 0, dst_mapped_nents = 0;
+    uint32_t offset = 0;
+    unsigned int size_for_map = req->assoclen + req->cryptlen; /* non-inplace mode */
+    struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+    uint32_t sg_index = 0;
+    bool chained = false;
+    bool is_gcm4543 = areq_ctx->is_gcm4543;
+    uint32_t size_to_skip = req->assoclen;
+
+    if (is_gcm4543) {
+        size_to_skip += crypto_aead_ivsize(tfm);
+    }
+    offset = size_to_skip;
+
+    if (sg_data == NULL) {
+        rc = -EINVAL;
+        goto chain_data_exit;
+    }
+    areq_ctx->srcSgl = req->src;
+    areq_ctx->dstSgl = req->dst;
+
+    if (is_gcm4543) {
+        size_for_map += crypto_aead_ivsize(tfm);
+    }
+
+    size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? authsize : 0;
+    src_mapped_nents = ssi_buffer_mgr_get_sgl_nents(req->src, size_for_map,
+        &src_last_bytes, &chained);
+    sg_index = areq_ctx->srcSgl->length;
+    /* Check where the data starts. */
+    while (sg_index <= size_to_skip) {
+        offset -= areq_ctx->srcSgl->length;
+        areq_ctx->srcSgl = sg_next(areq_ctx->srcSgl);
+        /* Reaching the end of the sgl here is unexpected. */
+        if (areq_ctx->srcSgl == NULL) {
+            SSI_LOG_ERR("reached end of sg list. unexpected\n");
+            BUG();
+        }
+        sg_index += areq_ctx->srcSgl->length;
+        src_mapped_nents--;
+    }
+    if (unlikely(src_mapped_nents > LLI_MAX_NUM_OF_DATA_ENTRIES)) {
+        SSI_LOG_ERR("Too many fragments. current %d max %d\n",
+            src_mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES);
+        return -ENOMEM;
+    }
+
+    areq_ctx->src.nents = src_mapped_nents;
+
+    areq_ctx->srcOffset = offset;
+
+    if (req->src != req->dst) {
+        size_for_map = req->assoclen + req->cryptlen;
+        size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? authsize : 0;
+        if (is_gcm4543) {
+            size_for_map += crypto_aead_ivsize(tfm);
+        }
+
+        rc = ssi_buffer_mgr_map_scatterlist(dev, req->dst, size_for_map,
+            DMA_BIDIRECTIONAL, &(areq_ctx->dst.nents),
+            LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
+            &dst_mapped_nents);
+        if (unlikely(rc != 0)) {
+            rc = -ENOMEM;
+            goto chain_data_exit;
+        }
+    }
+
+    dst_mapped_nents = ssi_buffer_mgr_get_sgl_nents(req->dst, size_for_map,
+        &dst_last_bytes, &chained);
+    sg_index = areq_ctx->dstSgl->length;
+    offset = size_to_skip;
+
+    /* Check where the data starts. */
+    while (sg_index <= size_to_skip) {
+        offset -= areq_ctx->dstSgl->length;
+        areq_ctx->dstSgl = sg_next(areq_ctx->dstSgl);
+        /* Reaching the end of the sgl here is unexpected. */
+        if (areq_ctx->dstSgl == NULL) {
+            SSI_LOG_ERR("reached end of sg list. unexpected\n");
+            BUG();
+        }
+        sg_index += areq_ctx->dstSgl->length;
+        dst_mapped_nents--;
+    }
+    if (unlikely(dst_mapped_nents > LLI_MAX_NUM_OF_DATA_ENTRIES)) {
+        SSI_LOG_ERR("Too many fragments. current %d max %d\n",
+            dst_mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES);
+        return -ENOMEM;
+    }
+    areq_ctx->dst.nents = dst_mapped_nents;
+    areq_ctx->dstOffset = offset;
+    if ((src_mapped_nents > 1) ||
+        (dst_mapped_nents > 1) ||
+        (do_chain == true)) {
+        areq_ctx->data_buff_type = SSI_DMA_BUF_MLLI;
+        rc = ssi_buffer_mgr_prepare_aead_data_mlli(drvdata, req, sg_data,
+            &src_last_bytes, &dst_last_bytes, is_last_table);
+    } else {
+        areq_ctx->data_buff_type = SSI_DMA_BUF_DLLI;
+        ssi_buffer_mgr_prepare_aead_data_dlli(
+            req, &src_last_bytes, &dst_last_bytes);
+    }
+
+chain_data_exit:
+    return rc;
+}
+
+static void ssi_buffer_mgr_update_aead_mlli_nents(struct ssi_drvdata *drvdata,
+    struct aead_request *req)
+{
+    struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+    uint32_t curr_mlli_size = 0;
+
+    if (areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI) {
+        areq_ctx->assoc.sram_addr = drvdata->mlli_sram_addr;
+        curr_mlli_size = areq_ctx->assoc.mlli_nents *
+            LLI_ENTRY_BYTE_SIZE;
+    }
+
+    if (areq_ctx->data_buff_type == SSI_DMA_BUF_MLLI) {
+        /* In the inplace case dst nents equals src nents. */
+        if (req->src == req->dst) {
+            areq_ctx->dst.mlli_nents = areq_ctx->src.mlli_nents;
+            areq_ctx->src.sram_addr = drvdata->mlli_sram_addr +
+                curr_mlli_size;
+            areq_ctx->dst.sram_addr = areq_ctx->src.sram_addr;
+            if (areq_ctx->is_single_pass == false)
+                areq_ctx->assoc.mlli_nents +=
+                    areq_ctx->src.mlli_nents;
+        } else {
+            if (areq_ctx->gen_ctx.op_type ==
+                DRV_CRYPTO_DIRECTION_DECRYPT) {
+                areq_ctx->src.sram_addr =
+                    drvdata->mlli_sram_addr +
+                    curr_mlli_size;
+                areq_ctx->dst.sram_addr =
+                    areq_ctx->src.sram_addr +
+                    areq_ctx->src.mlli_nents *
+                    LLI_ENTRY_BYTE_SIZE;
+                if (areq_ctx->is_single_pass == false)
+                    areq_ctx->assoc.mlli_nents +=
+                        areq_ctx->src.mlli_nents;
+            } else {
+                areq_ctx->dst.sram_addr =
+                    drvdata->mlli_sram_addr +
+                    curr_mlli_size;
+                areq_ctx->src.sram_addr =
+                    areq_ctx->dst.sram_addr +
+                    areq_ctx->dst.mlli_nents *
+                    LLI_ENTRY_BYTE_SIZE;
+                if (areq_ctx->is_single_pass == false)
+                    areq_ctx->assoc.mlli_nents +=
+                        areq_ctx->dst.mlli_nents;
+            }
+        }
+    }
+}
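A worked example of the SRAM layout computed above, with assumed values (LLI_ENTRY_BYTE_SIZE of 8 and the hypothetical numbers shown are for illustration only):

/*
 * Assume mlli_sram_addr == 0x1000, three assoc MLLI entries and five
 * src entries in a non-inplace decrypt. The layout then is:
 *
 *   assoc.sram_addr = 0x1000
 *   src.sram_addr   = 0x1000 + 3 * 8 = 0x1018
 *   dst.sram_addr   = 0x1018 + 5 * 8 = 0x1040
 *
 * In the double-pass flow, assoc.mlli_nents additionally absorbs the
 * src (or dst) entry count so the HW fetches one contiguous table.
 */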
+int ssi_buffer_mgr_map_aead_request(
+    struct ssi_drvdata *drvdata, struct aead_request *req)
+{
+    struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+    struct mlli_params *mlli_params = &areq_ctx->mlli_params;
+    struct device *dev = &drvdata->plat_dev->dev;
+    struct buffer_array sg_data;
+    unsigned int authsize = areq_ctx->req_authsize;
+    struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle;
+    int rc = 0;
+    struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+    bool is_gcm4543 = areq_ctx->is_gcm4543;
+
+    uint32_t mapped_nents = 0;
+    uint32_t dummy = 0; /* Used for the assoc data fragments. */
+    uint32_t size_to_map = 0;
+
+    mlli_params->curr_pool = NULL;
+    sg_data.num_of_buffers = 0;
+
+#if DX_HAS_ACP
+    if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) &&
+        likely(req->src == req->dst)) {
+        uint32_t size_to_skip = req->assoclen;
+        if (is_gcm4543) {
+            size_to_skip += crypto_aead_ivsize(tfm);
+        }
+        /* Copy the MAC to a temporary location to deal with a
+         * possible data overwrite caused by a cache coherency
+         * problem. */
+        ssi_buffer_mgr_copy_scatterlist_portion(
+            areq_ctx->backup_mac, req->src,
+            size_to_skip + req->cryptlen - areq_ctx->req_authsize,
+            size_to_skip + req->cryptlen, SSI_SG_TO_BUF);
+    }
+#endif
+
+    /* Calculate the size for the cipher; remove the ICV in decrypt. */
+    areq_ctx->cryptlen = (areq_ctx->gen_ctx.op_type ==
+                DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+                req->cryptlen :
+                (req->cryptlen - authsize);
+
+    areq_ctx->mac_buf_dma_addr = dma_map_single(dev,
+        areq_ctx->mac_buf, MAX_MAC_SIZE, DMA_BIDIRECTIONAL);
+    if (unlikely(dma_mapping_error(dev, areq_ctx->mac_buf_dma_addr))) {
+        SSI_LOG_ERR("Mapping mac_buf %u B at va=%pK for DMA failed\n",
+            MAX_MAC_SIZE, areq_ctx->mac_buf);
+        rc = -ENOMEM;
+        goto aead_map_failure;
+    }
+    SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->mac_buf_dma_addr, MAX_MAC_SIZE);
+
+    if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
+        areq_ctx->ccm_iv0_dma_addr = dma_map_single(dev,
+            (areq_ctx->ccm_config + CCM_CTR_COUNT_0_OFFSET),
+            AES_BLOCK_SIZE, DMA_TO_DEVICE);
+
+        if (unlikely(dma_mapping_error(dev, areq_ctx->ccm_iv0_dma_addr))) {
+            SSI_LOG_ERR("Mapping ccm_iv0 %u B at va=%pK "
+                "for DMA failed\n", AES_BLOCK_SIZE,
+                (areq_ctx->ccm_config + CCM_CTR_COUNT_0_OFFSET));
+            areq_ctx->ccm_iv0_dma_addr = 0;
+            rc = -ENOMEM;
+            goto aead_map_failure;
+        }
+        SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->ccm_iv0_dma_addr,
+            AES_BLOCK_SIZE);
+        if (ssi_aead_handle_config_buf(dev, areq_ctx,
+                areq_ctx->ccm_config, &sg_data, req->assoclen) != 0) {
+            rc = -ENOMEM;
+            goto aead_map_failure;
+        }
+    }
+
+#if SSI_CC_HAS_AES_GCM
+    if (areq_ctx->cipher_mode == DRV_CIPHER_GCTR) {
+        areq_ctx->hkey_dma_addr = dma_map_single(dev,
+            areq_ctx->hkey, AES_BLOCK_SIZE, DMA_BIDIRECTIONAL);
+        if (unlikely(dma_mapping_error(dev, areq_ctx->hkey_dma_addr))) {
+            SSI_LOG_ERR("Mapping hkey %u B at va=%pK for DMA failed\n",
+                AES_BLOCK_SIZE, areq_ctx->hkey);
+            rc = -ENOMEM;
+            goto aead_map_failure;
+        }
+        SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->hkey_dma_addr, AES_BLOCK_SIZE);
+
+        areq_ctx->gcm_block_len_dma_addr = dma_map_single(dev,
+            &areq_ctx->gcm_len_block, AES_BLOCK_SIZE, DMA_TO_DEVICE);
+        if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_block_len_dma_addr))) {
+            SSI_LOG_ERR("Mapping gcm_len_block %u B at va=%pK for DMA failed\n",
+                AES_BLOCK_SIZE, &areq_ctx->gcm_len_block);
+            rc = -ENOMEM;
+            goto aead_map_failure;
+        }
+        SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_block_len_dma_addr,
+            AES_BLOCK_SIZE);
+
+        areq_ctx->gcm_iv_inc1_dma_addr = dma_map_single(dev,
+            areq_ctx->gcm_iv_inc1,
+            AES_BLOCK_SIZE, DMA_TO_DEVICE);
+
+        if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_iv_inc1_dma_addr))) {
+            SSI_LOG_ERR("Mapping gcm_iv_inc1 %u B at va=%pK "
+                "for DMA failed\n", AES_BLOCK_SIZE,
+                (areq_ctx->gcm_iv_inc1));
+            areq_ctx->gcm_iv_inc1_dma_addr = 0;
+            rc = -ENOMEM;
+            goto aead_map_failure;
+        }
+        SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc1_dma_addr,
+            AES_BLOCK_SIZE);
+
+        areq_ctx->gcm_iv_inc2_dma_addr = dma_map_single(dev,
+            areq_ctx->gcm_iv_inc2,
+            AES_BLOCK_SIZE, DMA_TO_DEVICE);
+
+        if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_iv_inc2_dma_addr))) {
+            SSI_LOG_ERR("Mapping gcm_iv_inc2 %u B at va=%pK "
+                "for DMA failed\n", AES_BLOCK_SIZE,
+                (areq_ctx->gcm_iv_inc2));
+            areq_ctx->gcm_iv_inc2_dma_addr = 0;
+            rc = -ENOMEM;
+            goto aead_map_failure;
+        }
+        SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc2_dma_addr,
+            AES_BLOCK_SIZE);
+    }
+#endif /* SSI_CC_HAS_AES_GCM */
+
+    size_to_map = req->cryptlen + req->assoclen;
+    if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+        size_to_map += authsize;
+    }
+    if (is_gcm4543)
+        size_to_map += crypto_aead_ivsize(tfm);
+    rc = ssi_buffer_mgr_map_scatterlist(dev, req->src,
+        size_to_map, DMA_BIDIRECTIONAL, &(areq_ctx->src.nents),
+        LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES + LLI_MAX_NUM_OF_DATA_ENTRIES,
+        &dummy, &mapped_nents);
+    if (unlikely(rc != 0)) {
+        rc = -ENOMEM;
+        goto aead_map_failure;
+    }
+
+    if (likely(areq_ctx->is_single_pass == true)) {
+        /*
+         * Create MLLI table for:
+         * (1) Assoc. data
+         * (2) Src/Dst SGLs
+         * Note: IV is a contig. buffer (not an SGL)
+         */
+        rc = ssi_buffer_mgr_aead_chain_assoc(drvdata, req, &sg_data, true, false);
+        if (unlikely(rc != 0))
+            goto aead_map_failure;
+        rc = ssi_buffer_mgr_aead_chain_iv(drvdata, req, &sg_data, true, false);
+        if (unlikely(rc != 0))
+            goto aead_map_failure;
+        rc = ssi_buffer_mgr_aead_chain_data(drvdata, req, &sg_data, true, false);
+        if (unlikely(rc != 0))
+            goto aead_map_failure;
+    } else { /* DOUBLE-PASS flow */
+        /*
+         * Prepare MLLI table(s) in this order:
+         *
+         * If ENCRYPT/DECRYPT (inplace):
+         * (1) MLLI table for assoc
+         * (2) IV entry (chained right after end of assoc)
+         * (3) MLLI for src/dst (inplace operation)
+         *
+         * If ENCRYPT (non-inplace):
+         * (1) MLLI table for assoc
+         * (2) IV entry (chained right after end of assoc)
+         * (3) MLLI for dst
+         * (4) MLLI for src
+         *
+         * If DECRYPT (non-inplace):
+         * (1) MLLI table for assoc
+         * (2) IV entry (chained right after end of assoc)
+         * (3) MLLI for src
+         * (4) MLLI for dst
+         */
+        rc = ssi_buffer_mgr_aead_chain_assoc(drvdata, req, &sg_data, false, true);
+        if (unlikely(rc != 0))
+            goto aead_map_failure;
+        rc = ssi_buffer_mgr_aead_chain_iv(drvdata, req, &sg_data, false, true);
+        if (unlikely(rc != 0))
+            goto aead_map_failure;
+        rc = ssi_buffer_mgr_aead_chain_data(drvdata, req, &sg_data, true, true);
+        if (unlikely(rc != 0))
+            goto aead_map_failure;
+    }
+
+    /* MLLI support - start building the MLLI according to the above
+     * results. */
+    if (unlikely(
+        (areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI) ||
+        (areq_ctx->data_buff_type == SSI_DMA_BUF_MLLI))) {
+        mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
+        rc = ssi_buffer_mgr_generate_mlli(dev, &sg_data, mlli_params);
+        if (unlikely(rc != 0)) {
+            goto aead_map_failure;
+        }
+
+        ssi_buffer_mgr_update_aead_mlli_nents(drvdata, req);
+        SSI_LOG_DEBUG("assoc params mn %d\n", areq_ctx->assoc.mlli_nents);
+        SSI_LOG_DEBUG("src params mn %d\n", areq_ctx->src.mlli_nents);
+        SSI_LOG_DEBUG("dst params mn %d\n", areq_ctx->dst.mlli_nents);
+    }
+    return 0;
+
+aead_map_failure:
+    ssi_buffer_mgr_unmap_aead_request(dev, req);
+    return rc;
+}
+
 int ssi_buffer_mgr_map_hash_request_final(
     struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src,
     unsigned int nbytes, bool do_update)
 {
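A hedged sketch of how a caller might drive the mapping API declared below: map the request, hand the prepared layout to the HW queue, and unmap on failure. The cc_send_to_hw_queue() name is a hypothetical stand-in for the request manager entry point (whose real signature differs); the actual AEAD flow lives in ssi_aead.c, added in a later patch of this series.

/* Illustrative caller flow only; assumes "ssi_driver.h" and
 * "ssi_buffer_mgr.h" are included. */
static int example_aead_process(struct ssi_drvdata *drvdata,
                                struct aead_request *req)
{
    struct device *dev = &drvdata->plat_dev->dev;
    int rc;

    rc = ssi_buffer_mgr_map_aead_request(drvdata, req);
    if (unlikely(rc != 0))
        return rc;

    rc = cc_send_to_hw_queue(drvdata, req);    /* hypothetical */
    if (unlikely(rc != 0))
        ssi_buffer_mgr_unmap_aead_request(dev, req);

    return rc;
}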
+#include "ssi_aead.h" #include "ssi_hash.h" #include "ssi_ivgen.h" #include "ssi_sram_mgr.h" @@ -362,18 +364,26 @@ static int init_cc_resources(struct platform_device *plat_dev) goto init_cc_res_err; } + /* hash must be allocated before aead since hash exports APIs */ rc = ssi_hash_alloc(new_drvdata); if (unlikely(rc != 0)) { SSI_LOG_ERR("ssi_hash_alloc failed\n"); goto init_cc_res_err; } + rc = ssi_aead_alloc(new_drvdata); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("ssi_aead_alloc failed\n"); + goto init_cc_res_err; + } + return 0; init_cc_res_err: SSI_LOG_ERR("Freeing CC HW resources!\n"); if (new_drvdata != NULL) { + ssi_aead_free(new_drvdata); ssi_hash_free(new_drvdata); ssi_ablkcipher_free(new_drvdata); ssi_ivgen_fini(new_drvdata); @@ -416,6 +426,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev) struct ssi_drvdata *drvdata = (struct ssi_drvdata *)dev_get_drvdata(&plat_dev->dev); + ssi_aead_free(drvdata); ssi_hash_free(drvdata); ssi_ablkcipher_free(drvdata); ssi_ivgen_fini(drvdata); diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h index a5a2427..06e685e 100644 --- a/drivers/staging/ccree/ssi_driver.h +++ b/drivers/staging/ccree/ssi_driver.h @@ -32,6 +32,7 @@ #include #include #include +#include #include #include #include @@ -148,6 +149,7 @@ struct ssi_drvdata { struct completion icache_setup_completion; void *buff_mgr_handle; void *hash_handle; + void *aead_handle; void *blkcipher_handle; void *request_mgr_handle; void *ivgen_handle; @@ -167,6 +169,7 @@ struct ssi_crypto_alg { int auth_mode; struct ssi_drvdata *drvdata; struct crypto_alg crypto_alg; + struct aead_alg aead_alg; }; struct ssi_alg_template { @@ -176,6 +179,7 @@ struct ssi_alg_template { u32 type; union { struct ablkcipher_alg ablkcipher; + struct aead_alg aead; struct blkcipher_alg blkcipher; struct cipher_alg cipher; struct compress_alg compress; From patchwork Sun Apr 23 09:26:15 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 97964 Delivered-To: patch@linaro.org Received: by 10.140.109.52 with SMTP id k49csp1005267qgf; Sun, 23 Apr 2017 02:29:08 -0700 (PDT) X-Received: by 10.99.181.92 with SMTP id u28mr19197073pgo.102.1492939748492; Sun, 23 Apr 2017 02:29:08 -0700 (PDT) Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
 [209.132.180.67]) by mx.google.com with ESMTP id u8si15374312plh.22.2017.04.23.02.29.08; Sun, 23 Apr 2017 02:29:08 -0700 (PDT)
Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67;
Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1044522AbdDWJ3D (ORCPT + 1 other); Sun, 23 Apr 2017 05:29:03 -0400
Received: from foss.arm.com ([217.140.101.70]:46976 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1044405AbdDWJ1I (ORCPT ); Sun, 23 Apr 2017 05:27:08 -0400
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B316F168F; Sun, 23 Apr 2017 02:27:07 -0700 (PDT)
Received: from gby.kfn.arm.com (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 32BC63F220; Sun, 23 Apr 2017 02:27:03 -0700 (PDT)
From: Gilad Ben-Yossef
To: Herbert Xu , "David S. Miller" , Rob Herring , Mark Rutland , Greg Kroah-Hartman , devel@driverdev.osuosl.org
Cc: linux-crypto@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, gilad.benyossef@arm.com, Binoy Jayan , Ofir Drang , Stuart Yoder , Stephan Muller
Subject: [PATCH v3 07/15] staging: ccree: add TODO list
Date: Sun, 23 Apr 2017 12:26:15 +0300
Message-Id: <1492939583-25688-8-git-send-email-gilad@benyossef.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1492939583-25688-1-git-send-email-gilad@benyossef.com>
References: <1492939583-25688-1-git-send-email-gilad@benyossef.com>
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

Add a TODO list for moving the ccree crypto driver out of the staging tree.

Signed-off-by: Gilad Ben-Yossef
---
 drivers/staging/ccree/TODO | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
 create mode 100644 drivers/staging/ccree/TODO

--
2.1.4

diff --git a/drivers/staging/ccree/TODO b/drivers/staging/ccree/TODO
new file mode 100644
index 0000000..c9f5754
--- /dev/null
+++ b/drivers/staging/ccree/TODO
@@ -0,0 +1,30 @@
+
+
+*************************************************************************
+*                                                                       *
+* Arm TrustZone CryptoCell REE Linux driver upstreaming TODO items      *
+*                                                                       *
+*************************************************************************
+
+ccree specific items
+a.k.a. what needs fixing for this driver to move out of staging
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+1. Move to using Crypto Engine to handle backlog queueing.
+2. Remove synchronous algorithm support leftovers.
+3. Separate platform-specific code for FIPS and power management into separate platform modules.
+4. Drop legacy kernel support code.
+5. Move most (all?) #ifdef CONFIG into inline functions.
+6. Remove all unused definitions.
+7. Re-factor to accommodate newer/older HW revisions besides the 712.
+8. Handle the many checkpatch errors.
+9. Implement ahash import/export correctly.
+10. Go through a proper review of DT bindings and sysfs ABI.
+11. Sort out FIPS mode: bake tests into testmgr, sort out behaviour on error,
+    figure out if a 3DES weak key check is needed
+
+Kernel infrastructure items
+a.k.a. things we either need to fix in the kernel or where we need to understand what we're doing wrong
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. The ahash import/export context has a PAGE_SIZE/8 size limit. We need more.
+2. Crypto Engine seems to be built for HW with a hardware queue depth of 1; ours is 600+.
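Item 1 of the ccree list above maps onto the kernel's crypto engine framework. A sketch of what that conversion could look like, assuming the crypto engine API of later kernels (crypto_transfer_aead_request_to_engine() postdates this posting) and keeping the engine pointer as an explicit parameter rather than the driver's real drvdata field:

#include <linux/err.h>
#include <crypto/engine.h>

/* Illustrative only: allocate and start one engine thread, running
 * at realtime priority, which serializes and backlogs requests. */
static struct crypto_engine *cc_engine_setup(struct device *dev)
{
    struct crypto_engine *engine;
    int rc;

    engine = crypto_engine_alloc_init(dev, true);
    if (!engine)
        return ERR_PTR(-ENOMEM);

    rc = crypto_engine_start(engine);
    if (rc) {
        crypto_engine_exit(engine);
        return ERR_PTR(rc);
    }
    return engine;
}

static int cc_queue_aead(struct crypto_engine *engine,
                         struct aead_request *req)
{
    /* The engine handles queueing and CRYPTO_TFM_REQ_MAY_BACKLOG. */
    return crypto_transfer_aead_request_to_engine(engine, req);
}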
From patchwork Sun Apr 23 09:26:16 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 97965
Delivered-To: patch@linaro.org
Received: by 10.140.109.52 with SMTP id k49csp1005286qgf; Sun, 23 Apr 2017 02:29:12 -0700 (PDT)
X-Received: by 10.98.135.71 with SMTP id i68mr19349680pfe.258.1492939752441; Sun, 23 Apr 2017 02:29:12 -0700 (PDT)
Return-Path:
Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id u8si15374312plh.22.2017.04.23.02.29.12; Sun, 23 Apr 2017 02:29:12 -0700 (PDT)
Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67;
Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1044527AbdDWJ3G (ORCPT + 1 other); Sun, 23 Apr 2017 05:29:06 -0400
Received: from foss.arm.com ([217.140.101.70]:46988 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1044408AbdDWJ1M (ORCPT ); Sun, 23 Apr 2017 05:27:12 -0400
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A53D0344; Sun, 23 Apr 2017 02:27:11 -0700 (PDT)
Received: from gby.kfn.arm.com (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 205FB3F220; Sun, 23 Apr 2017 02:27:07 -0700 (PDT)
From: Gilad Ben-Yossef
To: Herbert Xu , "David S. Miller" , Rob Herring , Mark Rutland , Greg Kroah-Hartman , devel@driverdev.osuosl.org
Cc: linux-crypto@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, gilad.benyossef@arm.com, Binoy Jayan , Ofir Drang , Stuart Yoder , Stephan Muller
Subject: [PATCH v3 08/15] staging: ccree: add DT bindings for Arm CryptoCell
Date: Sun, 23 Apr 2017 12:26:16 +0300
Message-Id: <1492939583-25688-9-git-send-email-gilad@benyossef.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1492939583-25688-1-git-send-email-gilad@benyossef.com>
References: <1492939583-25688-1-git-send-email-gilad@benyossef.com>
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

This adds DT bindings for the Arm TrustZone CryptoCell cryptographic accelerator IP.

Signed-off-by: Gilad Ben-Yossef
---
 .../devicetree/bindings/crypto/arm-cryptocell.txt | 27 ++++++++++++++++++++++
 1 file changed, 27 insertions(+)
 create mode 100644 drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt

--
2.1.4

diff --git a/drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt b/drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt
new file mode 100644
index 0000000..2ea6517
--- /dev/null
+++ b/drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt
@@ -0,0 +1,27 @@
+Arm TrustZone CryptoCell cryptographic accelerators
+
+Required properties:
+- compatible: must be "arm,cryptocell-712-ree".
+- reg: shall contain the base register location and length.
+  Typically the length is 0x10000.
+- interrupts: shall contain the interrupt for the device.
+
+Optional properties:
+- interrupt-parent: can designate the interrupt controller the
+  device interrupt is connected to, if needed.
+- clocks: may contain the clock handling the device, if needed.
+- power-domains: may contain a reference to the PM domain, if applicable.
+
+
+Examples:
+
+Zynq FPGA device
+----------------
+
+    arm_cc7x: arm_cc7x@80000000 {
+        compatible = "arm,cryptocell-712-ree";
+        interrupt-parent = <&intc>;
+        interrupts = < 0 30 4 >;
+        reg = < 0x80000000 0x10000 >;
+    };
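For reference, a Linux driver typically matches a binding like this through an OF match table in its platform driver. A minimal sketch follows; the probe/remove stubs are hypothetical placeholders for the init_cc_resources()/cleanup_cc_resources() paths added earlier in the series:

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/* Match the "arm,cryptocell-712-ree" compatible string from the
 * binding above. */
static const struct of_device_id arm_cc7x_dev_ids[] = {
    { .compatible = "arm,cryptocell-712-ree" },
    { }
};
MODULE_DEVICE_TABLE(of, arm_cc7x_dev_ids);

static int cc7x_probe(struct platform_device *pdev)
{
    /* Hypothetical: resource mapping / init_cc_resources() go here. */
    return 0;
}

static int cc7x_remove(struct platform_device *pdev)
{
    /* Hypothetical: cleanup_cc_resources() goes here. */
    return 0;
}

static struct platform_driver cc7x_driver = {
    .driver = {
        .name = "cc7xree",
        .of_match_table = arm_cc7x_dev_ids,
    },
    .probe = cc7x_probe,
    .remove = cc7x_remove,
};
module_platform_driver(cc7x_driver);

MODULE_LICENSE("GPL");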