From patchwork Thu Jun 22 07:07:48 2017
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 106151
From: Gilad Ben-Yossef
To: Greg Kroah-Hartman, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org
Cc: Ofir Drang
Subject: [PATCH 2/7] staging: ccree: register setkey for none hash macs
Date: Thu, 22 Jun 2017 10:07:48 +0300
Message-Id: <1498115276-1601-3-git-send-email-gilad@benyossef.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1498115276-1601-1-git-send-email-gilad@benyossef.com>
References: <1498115276-1601-1-git-send-email-gilad@benyossef.com>
Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Fix a bug where the transformation init code did not register a setkey method for none hash based MACs. Fixes commit 50cfbbb7e627 ("staging: ccree: add ahash support"). Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/ssi_hash.c | 83 ++++++++++++++++++++-------------------- 1 file changed, 42 insertions(+), 41 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c index c39e3be..f52e1af 100644 --- a/drivers/staging/ccree/ssi_hash.c +++ b/drivers/staging/ccree/ssi_hash.c @@ -1874,8 +1874,8 @@ static int ssi_ahash_setkey(struct crypto_ahash *ahash, struct ssi_hash_template { char name[CRYPTO_MAX_ALG_NAME]; char driver_name[CRYPTO_MAX_ALG_NAME]; - char hmac_name[CRYPTO_MAX_ALG_NAME]; - char hmac_driver_name[CRYPTO_MAX_ALG_NAME]; + char mac_name[CRYPTO_MAX_ALG_NAME]; + char mac_driver_name[CRYPTO_MAX_ALG_NAME]; unsigned int blocksize; bool synchronize; struct ahash_alg template_ahash; @@ -1894,8 +1894,8 @@ static struct ssi_hash_template driver_hash[] = { { .name = "sha1", .driver_name = "sha1-dx", - .hmac_name = "hmac(sha1)", - .hmac_driver_name = "hmac-sha1-dx", + .mac_name = "hmac(sha1)", + .mac_driver_name = "hmac-sha1-dx", .blocksize = SHA1_BLOCK_SIZE, .synchronize = false, .template_ahash = { @@ -1919,8 +1919,8 @@ static struct ssi_hash_template driver_hash[] = { { .name = "sha256", .driver_name = "sha256-dx", - .hmac_name = "hmac(sha256)", - .hmac_driver_name = "hmac-sha256-dx", + .mac_name = "hmac(sha256)", + .mac_driver_name = "hmac-sha256-dx", .blocksize = SHA256_BLOCK_SIZE, .template_ahash = { .init = ssi_ahash_init, @@ -1943,8 +1943,8 @@ static struct ssi_hash_template driver_hash[] = { { .name = "sha224", .driver_name = "sha224-dx", - .hmac_name = "hmac(sha224)", - .hmac_driver_name = "hmac-sha224-dx", + .mac_name = "hmac(sha224)", + .mac_driver_name = "hmac-sha224-dx", .blocksize = SHA224_BLOCK_SIZE, .template_ahash = { .init = ssi_ahash_init, @@ -1968,8 +1968,8 @@ static struct ssi_hash_template driver_hash[] = { { .name = "sha384", .driver_name = "sha384-dx", - .hmac_name = "hmac(sha384)", - .hmac_driver_name = "hmac-sha384-dx", + .mac_name = "hmac(sha384)", + .mac_driver_name = "hmac-sha384-dx", .blocksize = SHA384_BLOCK_SIZE, .template_ahash = { .init = ssi_ahash_init, @@ -1992,8 +1992,8 @@ static struct ssi_hash_template driver_hash[] = { { .name = "sha512", .driver_name = "sha512-dx", - .hmac_name = "hmac(sha512)", - .hmac_driver_name = "hmac-sha512-dx", + .mac_name = "hmac(sha512)", + .mac_driver_name = "hmac-sha512-dx", .blocksize = SHA512_BLOCK_SIZE, .template_ahash = { .init = ssi_ahash_init, @@ -2017,8 +2017,8 @@ static struct ssi_hash_template driver_hash[] = { { .name = "md5", .driver_name = "md5-dx", - .hmac_name = "hmac(md5)", - .hmac_driver_name = "hmac-md5-dx", + .mac_name = "hmac(md5)", + .mac_driver_name = "hmac-md5-dx", .blocksize = MD5_HMAC_BLOCK_SIZE, .template_ahash = { .init = ssi_ahash_init, @@ -2039,8 +2039,8 @@ static struct ssi_hash_template driver_hash[] = { .inter_digestsize = MD5_DIGEST_SIZE, }, { - .name = "xcbc(aes)", - .driver_name = "xcbc-aes-dx", + .mac_name = "xcbc(aes)", + .mac_driver_name = "xcbc-aes-dx", .blocksize = AES_BLOCK_SIZE, .template_ahash = { .init = ssi_ahash_init, @@ -2062,8 +2062,8 @@ static struct ssi_hash_template driver_hash[] = { }, #if SSI_CC_HAS_CMAC { - .name = "cmac(aes)", - .driver_name = "cmac-aes-dx", + .mac_name = "cmac(aes)", + .mac_driver_name = "cmac-aes-dx", .blocksize = AES_BLOCK_SIZE, 
.template_ahash = { .init = ssi_ahash_init, @@ -2106,9 +2106,9 @@ ssi_hash_create_alg(struct ssi_hash_template *template, bool keyed) if (keyed) { snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", - template->hmac_name); + template->mac_name); snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", - template->hmac_driver_name); + template->mac_driver_name); } else { halg->setkey = NULL; snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", @@ -2297,32 +2297,33 @@ int ssi_hash_alloc(struct ssi_drvdata *drvdata) /* ahash registration */ for (alg = 0; alg < ARRAY_SIZE(driver_hash); alg++) { struct ssi_hash_alg *t_alg; + int hw_mode = driver_hash[alg].hw_mode; /* register hmac version */ + t_alg = ssi_hash_create_alg(&driver_hash[alg], true); + if (IS_ERR(t_alg)) { + rc = PTR_ERR(t_alg); + SSI_LOG_ERR("%s alg allocation failed\n", + driver_hash[alg].driver_name); + goto fail; + } + t_alg->drvdata = drvdata; - if ((((struct ssi_hash_template *)&driver_hash[alg])->hw_mode != DRV_CIPHER_XCBC_MAC) && - (((struct ssi_hash_template *)&driver_hash[alg])->hw_mode != DRV_CIPHER_CMAC)) { - t_alg = ssi_hash_create_alg(&driver_hash[alg], true); - if (IS_ERR(t_alg)) { - rc = PTR_ERR(t_alg); - SSI_LOG_ERR("%s alg allocation failed\n", - driver_hash[alg].driver_name); - goto fail; - } - t_alg->drvdata = drvdata; - - rc = crypto_register_ahash(&t_alg->ahash_alg); - if (unlikely(rc)) { - SSI_LOG_ERR("%s alg registration failed\n", - driver_hash[alg].driver_name); - kfree(t_alg); - goto fail; - } else { - list_add_tail(&t_alg->entry, - &hash_handle->hash_list); - } + rc = crypto_register_ahash(&t_alg->ahash_alg); + if (unlikely(rc)) { + SSI_LOG_ERR("%s alg registration failed\n", + driver_hash[alg].driver_name); + kfree(t_alg); + goto fail; + } else { + list_add_tail(&t_alg->entry, + &hash_handle->hash_list); } + if ((hw_mode == DRV_CIPHER_XCBC_MAC) || + (hw_mode == DRV_CIPHER_CMAC)) + continue; + /* register hash version */ t_alg = ssi_hash_create_alg(&driver_hash[alg], false); if (IS_ERR(t_alg)) {

From patchwork Thu Jun 22 07:07:49 2017
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 106156
From: Gilad Ben-Yossef
To: Greg Kroah-Hartman, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org
Cc: Ofir Drang
Subject: [PATCH 3/7] staging: ccree: add support for older HW revisions
Date: Thu, 22 Jun 2017 10:07:49 +0300
Message-Id: <1498115276-1601-4-git-send-email-gilad@benyossef.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1498115276-1601-1-git-send-email-gilad@benyossef.com>
References: <1498115276-1601-1-git-send-email-gilad@benyossef.com>

Add support for the older CryptoCell 710 and 630P hardware revisions. Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/Kconfig | 7 +- drivers/staging/ccree/cc_crypto_ctx.h | 16 --- drivers/staging/ccree/cc_hw_queue_defs.h | 2 +- drivers/staging/ccree/cc_regs.h | 7 +- drivers/staging/ccree/dx_crys_kernel.h | 1 + drivers/staging/ccree/dx_host.h | 3 + drivers/staging/ccree/dx_reg_common.h | 2 - drivers/staging/ccree/ssi_aead.c | 36 +++-- drivers/staging/ccree/ssi_cipher.c | 27 +++- drivers/staging/ccree/ssi_config.h | 2 +- drivers/staging/ccree/ssi_driver.c | 115 ++++++++++----- drivers/staging/ccree/ssi_driver.h | 25 +++- drivers/staging/ccree/ssi_fips_ll.c | 59 ++++---- drivers/staging/ccree/ssi_hash.c | 234 +++++++++++++++++-------------- drivers/staging/ccree/ssi_hash.h | 10 +- drivers/staging/ccree/ssi_request_mgr.c | 19 ++- drivers/staging/ccree/ssi_sram_mgr.c | 15 +- 17 files changed, 349 insertions(+), 231 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig index 4be87f5..f1e75e8 100644 --- a/drivers/staging/ccree/Kconfig +++ b/drivers/staging/ccree/Kconfig @@ -19,9 +19,10 @@ config CRYPTO_DEV_CCREE select CRYPTO_XTS help Say 'Y' to enable a driver for the Arm TrustZone CryptoCell - C7xx. Currently only the CryptoCell 712 REE is supported. - Choose this if you wish to use hardware acceleration of - cryptographic operations on the system REE. + C7xx.
Currently the REE interface of the CryptoCell 712, + 710 and 630p are supported. Choose this if you wish to use + hardware acceleration of cryptographic operations on the + system REE. If unsure say Y. config CCREE_FIPS_SUPPORT diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h index 591f6fd..1542aa7 100644 --- a/drivers/staging/ccree/cc_crypto_ctx.h +++ b/drivers/staging/ccree/cc_crypto_ctx.h @@ -19,17 +19,6 @@ #include -/* context size */ -#ifndef CC_CTX_SIZE_LOG2 -#if (CC_SUPPORT_SHA > 256) -#define CC_CTX_SIZE_LOG2 8 -#else -#define CC_CTX_SIZE_LOG2 7 -#endif -#endif -#define CC_CTX_SIZE BIT(CC_CTX_SIZE_LOG2) -#define CC_DRV_CTX_SIZE_WORDS (CC_CTX_SIZE >> 2) - #define CC_DRV_DES_IV_SIZE 8 #define CC_DRV_DES_BLOCK_SIZE 8 @@ -72,13 +61,8 @@ #define CC_SHA384_BLOCK_SIZE 128 #define CC_SHA512_BLOCK_SIZE 128 -#if (CC_SUPPORT_SHA > 256) #define CC_DIGEST_SIZE_MAX CC_SHA512_DIGEST_SIZE #define CC_HASH_BLOCK_SIZE_MAX CC_SHA512_BLOCK_SIZE /*1024b*/ -#else /* Only up to SHA256 */ -#define CC_DIGEST_SIZE_MAX CC_SHA256_DIGEST_SIZE -#define CC_HASH_BLOCK_SIZE_MAX CC_SHA256_BLOCK_SIZE /*512b*/ -#endif #define CC_HMAC_BLOCK_SIZE_MAX CC_HASH_BLOCK_SIZE_MAX diff --git a/drivers/staging/ccree/cc_hw_queue_defs.h b/drivers/staging/ccree/cc_hw_queue_defs.h index aaa56c8..c730c3c 100644 --- a/drivers/staging/ccree/cc_hw_queue_defs.h +++ b/drivers/staging/ccree/cc_hw_queue_defs.h @@ -220,7 +220,7 @@ static inline void hw_desc_init(struct cc_hw_desc *pdesc) * * @pdesc: pointer HW descriptor struct */ -static inline void set_queue_last_ind(struct cc_hw_desc *pdesc) +static inline void set_queue_last_ind_bit(struct cc_hw_desc *pdesc) { pdesc->word[3] |= FIELD_PREP(WORD3_QUEUE_LAST_IND, 1); } diff --git a/drivers/staging/ccree/cc_regs.h b/drivers/staging/ccree/cc_regs.h index 4a893a6..62ace81 100644 --- a/drivers/staging/ccree/cc_regs.h +++ b/drivers/staging/ccree/cc_regs.h @@ -25,12 +25,9 @@ #include -#define AXIM_MON_BASE_OFFSET CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_COMP) -#define AXIM_MON_COMP_VALUE GENMASK(DX_AXIM_MON_COMP_VALUE_BIT_SIZE + \ - DX_AXIM_MON_COMP_VALUE_BIT_SHIFT, \ - DX_AXIM_MON_COMP_VALUE_BIT_SHIFT) +#define AXIM_MON_BASE_712_OFFSET CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_COMP) +#define AXIM_MON_BASE_630_OFFSET CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_COMP8) -#define AXIM_MON_BASE_OFFSET CC_REG_OFFSET(CRY_KERNEL, AXIM_MON_COMP) #define AXIM_MON_COMP_VALUE GENMASK(DX_AXIM_MON_COMP_VALUE_BIT_SIZE + \ DX_AXIM_MON_COMP_VALUE_BIT_SHIFT, \ DX_AXIM_MON_COMP_VALUE_BIT_SHIFT) diff --git a/drivers/staging/ccree/dx_crys_kernel.h b/drivers/staging/ccree/dx_crys_kernel.h index 2196030..0d1d01e 100644 --- a/drivers/staging/ccree/dx_crys_kernel.h +++ b/drivers/staging/ccree/dx_crys_kernel.h @@ -131,6 +131,7 @@ #define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SHIFT 0x0UL #define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SIZE 0x8UL #define DX_AXIM_MON_COMP_REG_OFFSET 0xB80UL +#define DX_AXIM_MON_COMP8_REG_OFFSET 0xBA0UL #define DX_AXIM_MON_COMP_VALUE_BIT_SHIFT 0x0UL #define DX_AXIM_MON_COMP_VALUE_BIT_SIZE 0x10UL #define DX_AXIM_MON_ERR_REG_OFFSET 0xBC4UL diff --git a/drivers/staging/ccree/dx_host.h b/drivers/staging/ccree/dx_host.h index 863c267..b4bdb42 100644 --- a/drivers/staging/ccree/dx_host.h +++ b/drivers/staging/ccree/dx_host.h @@ -31,6 +31,9 @@ #define DX_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SIZE 0x1UL #define DX_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT 0x17UL #define DX_HOST_IRR_AXIM_COMP_INT_BIT_SIZE 0x1UL +#define DX_HOST_SEP_SRAM_THRESHOLD_REG_OFFSET 0xA10UL +#define 
DX_HOST_SEP_SRAM_THRESHOLD_VALUE_BIT_SHIFT 0x0UL +#define DX_HOST_SEP_SRAM_THRESHOLD_VALUE_BIT_SIZE 0xCUL #define DX_HOST_IMR_REG_OFFSET 0xA04UL #define DX_HOST_IMR_NOT_USED_MASK_BIT_SHIFT 0x1UL #define DX_HOST_IMR_NOT_USED_MASK_BIT_SIZE 0x1UL diff --git a/drivers/staging/ccree/dx_reg_common.h b/drivers/staging/ccree/dx_reg_common.h index d5132ff..f7cec05 100644 --- a/drivers/staging/ccree/dx_reg_common.h +++ b/drivers/staging/ccree/dx_reg_common.h @@ -21,6 +21,4 @@ #define CC_HW_VERSION 0xef840015UL -#define DX_DEV_SHA_MAX 512 - #endif /*__DX_REG_COMMON_H__*/ diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c index e8936a3..a1ac345 100644 --- a/drivers/staging/ccree/ssi_aead.c +++ b/drivers/staging/ccree/ssi_aead.c @@ -321,7 +321,7 @@ static int hmac_setkey(struct cc_hw_desc *desc, struct ssi_aead_ctx *ctx) /* Load the hash current length*/ hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hash_mode); - set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_din_const(&desc[idx], 0, ctx->drvdata->hash_len_sz); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; @@ -456,7 +456,8 @@ ssi_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key, unsigned int keyl /* Load the hash current length*/ hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hashmode); - set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_din_const(&desc[idx], 0, + ctx->drvdata->hash_len_sz); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -879,7 +880,7 @@ static inline void ssi_aead_process_digest_result_desc( set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); set_dout_dlli(&desc[idx], req_ctx->icv_dma_addr, ctx->authsize, NS_BIT, 1); - set_queue_last_ind(&desc[idx]); + set_queue_last_ind(ctx->drvdata, &desc[idx]); if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { set_aes_not_hash_mode(&desc[idx]); set_cipher_mode(&desc[idx], DRV_CIPHER_XCBC_MAC); @@ -895,7 +896,7 @@ static inline void ssi_aead_process_digest_result_desc( set_flow_mode(&desc[idx], S_HASH_to_DOUT); set_dout_dlli(&desc[idx], req_ctx->mac_buf_dma_addr, ctx->authsize, NS_BIT, 1); - set_queue_last_ind(&desc[idx]); + set_queue_last_ind(ctx->drvdata, &desc[idx]); set_cipher_config0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN); set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); @@ -1012,7 +1013,7 @@ static inline void ssi_aead_hmac_setup_digest_desc( set_din_sram(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, hash_mode), - HASH_LEN_SIZE); + ctx->drvdata->hash_len_sz); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; @@ -1113,7 +1114,7 @@ static inline void ssi_aead_process_digest_scheme_desc( hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hash_mode); set_dout_sram(&desc[idx], aead_handle->sram_workspace_addr, - HASH_LEN_SIZE); + ctx->drvdata->hash_len_sz); set_flow_mode(&desc[idx], S_HASH_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); set_cipher_do(&desc[idx], DO_PAD); @@ -1145,7 +1146,7 @@ static inline void ssi_aead_process_digest_scheme_desc( set_din_sram(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, hash_mode), - HASH_LEN_SIZE); + ctx->drvdata->hash_len_sz); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -1535,7 +1536,7 @@ static inline int ssi_aead_ccm( set_din_type(&desc[idx], DMA_DLLI, 
req_ctx->mac_buf_dma_addr, ctx->authsize, NS_BIT); set_dout_dlli(&desc[idx], mac_result, ctx->authsize, NS_BIT, 1); - set_queue_last_ind(&desc[idx]); + set_queue_last_ind(ctx->drvdata, &desc[idx]); set_flow_mode(&desc[idx], DIN_AES_DOUT); idx++; @@ -1791,7 +1792,7 @@ static inline void ssi_aead_process_gcm_result_desc( set_din_type(&desc[idx], DMA_DLLI, req_ctx->mac_buf_dma_addr, AES_BLOCK_SIZE, NS_BIT); set_dout_dlli(&desc[idx], mac_result, ctx->authsize, NS_BIT, 1); - set_queue_last_ind(&desc[idx]); + set_queue_last_ind(ctx->drvdata, &desc[idx]); set_flow_mode(&desc[idx], DIN_AES_DOUT); idx++; @@ -2424,6 +2425,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_SHA1, + .min_hw_rev = CC_HW_REV_630, }, { .name = "authenc(hmac(sha1),cbc(des3_ede))", @@ -2443,6 +2445,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_DES, .auth_mode = DRV_HASH_SHA1, + .min_hw_rev = CC_HW_REV_630, }, { .name = "authenc(hmac(sha256),cbc(aes))", @@ -2462,6 +2465,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_SHA256, + .min_hw_rev = CC_HW_REV_630, }, { .name = "authenc(hmac(sha256),cbc(des3_ede))", @@ -2481,6 +2485,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_DES, .auth_mode = DRV_HASH_SHA256, + .min_hw_rev = CC_HW_REV_630, }, { .name = "authenc(xcbc(aes),cbc(aes))", @@ -2500,6 +2505,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_XCBC_MAC, + .min_hw_rev = CC_HW_REV_630, }, { .name = "authenc(hmac(sha1),rfc3686(ctr(aes)))", @@ -2519,6 +2525,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_CTR, .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_SHA1, + .min_hw_rev = CC_HW_REV_630, }, { .name = "authenc(hmac(sha256),rfc3686(ctr(aes)))", @@ -2538,6 +2545,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_CTR, .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_SHA256, + .min_hw_rev = CC_HW_REV_630, }, { .name = "authenc(xcbc(aes),rfc3686(ctr(aes)))", @@ -2557,6 +2565,8 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_CTR, .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_XCBC_MAC, + .min_hw_rev = CC_HW_REV_630, + }, #if SSI_CC_HAS_AES_CCM { @@ -2577,6 +2587,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_CCM, .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_NULL, + .min_hw_rev = CC_HW_REV_630, }, { .name = "rfc4309(ccm(aes))", @@ -2596,6 +2607,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_CCM, .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_NULL, + .min_hw_rev = CC_HW_REV_630, }, #endif /*SSI_CC_HAS_AES_CCM*/ #if SSI_CC_HAS_AES_GCM @@ -2617,6 +2629,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_GCTR, .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_NULL, + .min_hw_rev = CC_HW_REV_630, }, { .name = "rfc4106(gcm(aes))", @@ -2636,6 +2649,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_GCTR, .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_NULL, + .min_hw_rev = CC_HW_REV_630, }, { .name = "rfc4543(gcm(aes))", @@ -2655,6 +2669,7 @@ static struct ssi_alg_template aead_algs[] = { .cipher_mode = DRV_CIPHER_GCTR, .flow_mode = S_DIN_to_AES, .auth_mode = DRV_HASH_NULL, + 
.min_hw_rev = CC_HW_REV_630, }, #endif /*SSI_CC_HAS_AES_GCM*/ }; @@ -2739,6 +2754,9 @@ int ssi_aead_alloc(struct ssi_drvdata *drvdata) /* Linux crypto */ for (alg = 0; alg < ARRAY_SIZE(aead_algs); alg++) { + if (aead_algs[alg].min_hw_rev > drvdata->hw_rev) + continue; + t_alg = ssi_aead_create_alg(&aead_algs[alg]); if (IS_ERR(t_alg)) { rc = PTR_ERR(t_alg); diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c index 2dfc6a3..e5bb976 100644 --- a/drivers/staging/ccree/ssi_cipher.c +++ b/drivers/staging/ccree/ssi_cipher.c @@ -663,7 +663,7 @@ ssi_blkcipher_create_data_desc( set_dout_dlli(&desc[*seq_size], sg_dma_address(dst), nbytes, NS_BIT, (!areq ? 0 : 1)); if (areq != NULL) { - set_queue_last_ind(&desc[*seq_size]); + set_queue_last_ind(ctx_p->drvdata, &desc[*seq_size]); } set_flow_mode(&desc[*seq_size], flow_mode); (*seq_size)++; @@ -712,7 +712,7 @@ ssi_blkcipher_create_data_desc( (!areq ? 0 : 1)); } if (areq != NULL) { - set_queue_last_ind(&desc[*seq_size]); + set_queue_last_ind(ctx_p->drvdata, &desc[*seq_size]); } set_flow_mode(&desc[*seq_size], flow_mode); (*seq_size)++; @@ -950,6 +950,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_XTS, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_630, }, { .name = "xts(aes)", @@ -966,6 +967,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_XTS, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_712, }, { .name = "xts(aes)", @@ -982,6 +984,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_XTS, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_712, }, #endif /*SSI_CC_HAS_AES_XTS*/ #if SSI_CC_HAS_AES_ESSIV @@ -1000,6 +1003,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ESSIV, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_712, }, { .name = "essiv(aes)", @@ -1016,6 +1020,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ESSIV, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_712, }, { .name = "essiv(aes)", @@ -1032,6 +1037,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ESSIV, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_712, }, #endif /*SSI_CC_HAS_AES_ESSIV*/ #if SSI_CC_HAS_AES_BITLOCKER @@ -1050,6 +1056,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_BITLOCKER, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_712, }, { .name = "bitlocker(aes)", @@ -1066,6 +1073,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_BITLOCKER, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_712, }, { .name = "bitlocker(aes)", @@ -1082,6 +1090,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_BITLOCKER, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_712, }, #endif /*SSI_CC_HAS_AES_BITLOCKER*/ { @@ -1099,6 +1108,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ECB, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_630, }, { .name = "cbc(aes)", @@ -1115,6 +1125,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_630, }, { .name = "ofb(aes)", @@ -1131,6 +1142,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_OFB, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_630, }, #if SSI_CC_HAS_AES_CTS { @@ -1148,6 
+1160,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_CBC_CTS, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_630, }, #endif { @@ -1165,6 +1178,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_CTR, .flow_mode = S_DIN_to_AES, + .min_hw_rev = CC_HW_REV_630, }, { .name = "cbc(des3_ede)", @@ -1181,6 +1195,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_DES, + .min_hw_rev = CC_HW_REV_630, }, { .name = "ecb(des3_ede)", @@ -1197,6 +1212,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ECB, .flow_mode = S_DIN_to_DES, + .min_hw_rev = CC_HW_REV_630, }, { .name = "cbc(des)", @@ -1213,6 +1229,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_CBC, .flow_mode = S_DIN_to_DES, + .min_hw_rev = CC_HW_REV_630, }, { .name = "ecb(des)", @@ -1229,6 +1246,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_CIPHER_ECB, .flow_mode = S_DIN_to_DES, + .min_hw_rev = CC_HW_REV_630, }, #if SSI_CC_HAS_MULTI2 { @@ -1246,6 +1264,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_MULTI2_CBC, .flow_mode = S_DIN_to_MULTI2, + .min_hw_rev = CC_HW_REV_630, }, { .name = "ofb(multi2)", @@ -1262,6 +1281,7 @@ static struct ssi_alg_template blkcipher_algs[] = { }, .cipher_mode = DRV_MULTI2_OFB, .flow_mode = S_DIN_to_MULTI2, + .min_hw_rev = CC_HW_REV_630, }, #endif /*SSI_CC_HAS_MULTI2*/ }; @@ -1346,6 +1366,9 @@ int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata) /* Linux crypto */ SSI_LOG_DEBUG("Number of algorithms = %zu\n", ARRAY_SIZE(blkcipher_algs)); for (alg = 0; alg < ARRAY_SIZE(blkcipher_algs); alg++) { + if (blkcipher_algs[alg].min_hw_rev > drvdata->hw_rev) + continue; + SSI_LOG_DEBUG("creating %s\n", blkcipher_algs[alg].driver_name); t_alg = ssi_ablkcipher_create_alg(&blkcipher_algs[alg]); if (IS_ERR(t_alg)) { diff --git a/drivers/staging/ccree/ssi_config.h b/drivers/staging/ccree/ssi_config.h index b7c0576..2484a06 100644 --- a/drivers/staging/ccree/ssi_config.h +++ b/drivers/staging/ccree/ssi_config.h @@ -23,7 +23,7 @@ #include -#define DISABLE_COHERENT_DMA_OPS +//#define DISABLE_COHERENT_DMA_OPS //#define FLUSH_CACHE_ALL //#define COMPLETION_DELAY //#define DX_DUMP_DESCS diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c index 1909229..5a62b4f 100644 --- a/drivers/staging/ccree/ssi_driver.c +++ b/drivers/staging/ccree/ssi_driver.c @@ -71,6 +71,33 @@ #include "ssi_pm.h" #include "ssi_fips_local.h" +struct cc_hw_data { + char *name; + enum cc_hw_rev rev; + u32 sig; +}; + +/* Hardware revisions defs. 
*/ + +static const struct cc_hw_data cc712_hw = { + .name = "712", .rev = CC_HW_REV_712, .sig = 0xDCC71200U +}; + +static const struct cc_hw_data cc710_hw = { + .name = "710", .rev = CC_HW_REV_710, .sig = 0xDCC63200U +}; + +static const struct cc_hw_data cc630p_hw = { + .name = "630P", .rev = CC_HW_REV_630, .sig = 0xDCC63000U +}; + +static const struct of_device_id arm_ccree_dev_of_match[] = { + { .compatible = "arm,cryptocell-712-ree", .data = &cc712_hw }, + { .compatible = "arm,cryptocell-710-ree", .data = &cc710_hw }, + { .compatible = "arm,cryptocell-630p-ree", .data = &cc630p_hw }, + {} +}; +MODULE_DEVICE_TABLE(of, arm_ccree_dev_of_match); #ifdef DX_DUMP_BYTES void dump_byte_array(const char *name, const u8 *the_array, unsigned long size) @@ -185,8 +212,12 @@ int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe) CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_ICR), val); /* Unmask relevant interrupt cause */ - val = (~(SSI_COMP_IRQ_MASK | SSI_AXI_ERR_IRQ_MASK | SSI_GPR0_IRQ_MASK)); - CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), val); + val = (SSI_COMP_IRQ_MASK | SSI_AXI_ERR_IRQ_MASK); + + if (drvdata->hw_rev >= CC_HW_REV_712) + val |= SSI_GPR0_IRQ_MASK; + + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), ~val); #ifdef DX_HOST_IRQ_TIMER_INIT_VAL_REG_OFFSET #ifdef DX_IRQ_DELAY @@ -215,11 +246,15 @@ int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe) static int init_cc_resources(struct platform_device *plat_dev) { - struct resource *req_mem_cc_regs = NULL; + struct resource *cc_regs_res = NULL; void __iomem *cc_base = NULL; bool irq_registered = false; struct ssi_drvdata *new_drvdata = kzalloc(sizeof(struct ssi_drvdata), GFP_KERNEL); u32 signature_val; + struct device *dev = &plat_dev->dev; + struct device_node *np = dev->of_node; + const struct cc_hw_data *hw_rev; + const struct of_device_id *dev_id; int rc = 0; if (unlikely(new_drvdata == NULL)) { @@ -228,6 +263,21 @@ static int init_cc_resources(struct platform_device *plat_dev) goto init_cc_res_err; } + dev_id = of_match_node(arm_ccree_dev_of_match, np); + if (!dev_id) + return -ENODEV; + hw_rev = (struct cc_hw_data *)dev_id->data; + new_drvdata->hw_rev_name = hw_rev->name; + new_drvdata->hw_rev = hw_rev->rev; + + if (hw_rev->rev >= CC_HW_REV_712) { + new_drvdata->hash_len_sz = HASH_LEN_SIZE_712; + new_drvdata->axim_mon_offset = AXIM_MON_BASE_712_OFFSET; + } else { + new_drvdata->hash_len_sz = HASH_LEN_SIZE_630; + new_drvdata->axim_mon_offset = AXIM_MON_BASE_630_OFFSET; + } + /*Initialize inflight counter used in dx_ablkcipher_secure_complete used for count of BYSPASS blocks operations*/ new_drvdata->inflight_counter = 0; @@ -245,8 +295,10 @@ static int init_cc_resources(struct platform_device *plat_dev) (unsigned long long)new_drvdata->res_mem->start, (unsigned long long)new_drvdata->res_mem->end); /* Map registers space */ - req_mem_cc_regs = request_mem_region(new_drvdata->res_mem->start, resource_size(new_drvdata->res_mem), "arm_cc7x_regs"); - if (unlikely(req_mem_cc_regs == NULL)) { + cc_regs_res = request_mem_region(new_drvdata->res_mem->start, + resource_size(new_drvdata->res_mem), + "arm_ccree_regs"); + if (unlikely(!cc_regs_res)) { SSI_LOG_ERR("Couldn't allocate registers memory region at " "0x%08X\n", (unsigned int)new_drvdata->res_mem->start); rc = -EBUSY; @@ -271,7 +323,7 @@ static int init_cc_resources(struct platform_device *plat_dev) goto init_cc_res_err; } rc = request_irq(new_drvdata->res_irq->start, cc_isr, - IRQF_SHARED, "arm_cc7x", new_drvdata); + IRQF_SHARED, "arm_ccree", 
new_drvdata); if (unlikely(rc != 0)) { SSI_LOG_ERR("Could not register to interrupt %llu\n", (unsigned long long)new_drvdata->res_irq->start); @@ -297,17 +349,19 @@ static int init_cc_resources(struct platform_device *plat_dev) /* Verify correct mapping */ signature_val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_SIGNATURE)); - if (signature_val != DX_DEV_SIGNATURE) { - SSI_LOG_ERR("Invalid CC signature: SIGNATURE=0x%08X != expected=0x%08X\n", - signature_val, (u32)DX_DEV_SIGNATURE); + if (signature_val != hw_rev->sig) { + SSI_LOG_ERR("Signature mismatch: expected 0x%08X got 0x%08X\n", + signature_val, hw_rev->sig); rc = -EINVAL; goto init_cc_res_err; } SSI_LOG_DEBUG("CC SIGNATURE=0x%08X\n", signature_val); /* Display HW versions */ - SSI_LOG(KERN_INFO, "ARM CryptoCell %s Driver: HW version 0x%08X, Driver version %s\n", SSI_DEV_NAME_STR, - CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_VERSION)), DRV_MODULE_VERSION); + SSI_LOG(KERN_INFO, "ARM CryptoCell %s (HW ver 0x%08X, SW version %s)\n", + hw_rev->name, + CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_VERSION)), + DRV_MODULE_VERSION); rc = init_cc_regs(new_drvdata, true); if (unlikely(rc != 0)) { @@ -406,7 +460,7 @@ static int init_cc_resources(struct platform_device *plat_dev) ssi_sysfs_fini(); #endif - if (req_mem_cc_regs != NULL) { + if (cc_regs_res) { if (irq_registered) { free_irq(new_drvdata->res_irq->start, new_drvdata); new_drvdata->res_irq = NULL; @@ -470,7 +524,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev) dev_set_drvdata(&plat_dev->dev, NULL); } -static int cc7x_probe(struct platform_device *plat_dev) +static int ccree_probe(struct platform_device *plat_dev) { int rc; #if defined(CONFIG_ARM) && defined(CC_DEBUG) @@ -492,54 +546,43 @@ static int cc7x_probe(struct platform_device *plat_dev) if (rc != 0) return rc; - SSI_LOG(KERN_INFO, "ARM cc7x_ree device initialized\n"); + SSI_LOG(KERN_INFO, "ARM CryptoCell REE device initialized\n"); return 0; } -static int cc7x_remove(struct platform_device *plat_dev) +static int ccree_remove(struct platform_device *plat_dev) { - SSI_LOG_DEBUG("Releasing cc7x resources...\n"); + SSI_LOG_DEBUG("Releasing resources...\n"); cleanup_cc_resources(plat_dev); - SSI_LOG(KERN_INFO, "ARM cc7x_ree device terminated\n"); + SSI_LOG(KERN_INFO, "ARM CryptoCell REE device unloaded\n"); return 0; } #if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP) -static struct dev_pm_ops arm_cc7x_driver_pm = { +static const struct dev_pm_ops arm_ccree_driver_pm = { SET_RUNTIME_PM_OPS(ssi_power_mgr_runtime_suspend, ssi_power_mgr_runtime_resume, NULL) }; #endif #if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP) -#define DX_DRIVER_RUNTIME_PM (&arm_cc7x_driver_pm) +#define DX_DRIVER_RUNTIME_PM (&arm_ccree_driver_pm) #else #define DX_DRIVER_RUNTIME_PM NULL #endif - -#ifdef CONFIG_OF -static const struct of_device_id arm_cc7x_dev_of_match[] = { - {.compatible = "arm,cryptocell-712-ree"}, - {} -}; -MODULE_DEVICE_TABLE(of, arm_cc7x_dev_of_match); -#endif - -static struct platform_driver cc7x_driver = { +static struct platform_driver ccree_driver = { .driver = { - .name = "cc7xree", -#ifdef CONFIG_OF - .of_match_table = arm_cc7x_dev_of_match, -#endif + .name = "ccree", + .of_match_table = arm_ccree_dev_of_match, .pm = DX_DRIVER_RUNTIME_PM, }, - .probe = cc7x_probe, - .remove = cc7x_remove, + .probe = ccree_probe, + .remove = ccree_remove, }; -module_platform_driver(cc7x_driver); +module_platform_driver(ccree_driver); /* Module description */ MODULE_DESCRIPTION("ARM 
TrustZone CryptoCell REE Driver"); diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h index 34bd7ef..52ac43c 100644 --- a/drivers/staging/ccree/ssi_driver.h +++ b/drivers/staging/ccree/ssi_driver.h @@ -43,7 +43,6 @@ #include "cc_regs.h" #include "dx_reg_common.h" #include "cc_hal.h" -#define CC_SUPPORT_SHA DX_DEV_SHA_MAX #include "cc_crypto_ctx.h" #include "ssi_sysfs.h" #include "hash_defs.h" @@ -51,9 +50,14 @@ #include "cc_hw_queue_defs.h" #include "ssi_sram_mgr.h" -#define DRV_MODULE_VERSION "3.0" +#define DRV_MODULE_VERSION "4.0" + +enum cc_hw_rev { + CC_HW_REV_630 = 630, + CC_HW_REV_710 = 710, + CC_HW_REV_712 = 712 +}; -#define SSI_DEV_NAME_STR "cc715ree" #define SSI_CC_HAS_AES_CCM 1 #define SSI_CC_HAS_AES_GCM 1 #define SSI_CC_HAS_AES_XTS 1 @@ -90,7 +94,7 @@ /* Logging macros */ #define SSI_LOG(level, format, ...) \ - printk(level "cc715ree::%s: " format , __func__, ##__VA_ARGS__) + printk(level "ccree::%s: " format, __func__, ##__VA_ARGS__) #define SSI_LOG_ERR(format, ...) SSI_LOG(KERN_ERR, format, ##__VA_ARGS__) #define SSI_LOG_WARNING(format, ...) SSI_LOG(KERN_WARNING, format, ##__VA_ARGS__) #define SSI_LOG_NOTICE(format, ...) SSI_LOG(KERN_NOTICE, format, ##__VA_ARGS__) @@ -148,7 +152,10 @@ struct ssi_drvdata { void *ivgen_handle; void *sram_mgr_handle; u32 inflight_counter; - + char *hw_rev_name; + enum cc_hw_rev hw_rev; + u32 hash_len_sz; + u32 axim_mon_offset; }; struct ssi_crypto_alg { @@ -176,6 +183,7 @@ struct ssi_alg_template { int cipher_mode; int flow_mode; /* Note: currently, refers to the cipher mode only. */ int auth_mode; + u32 min_hw_rev; struct ssi_drvdata *drvdata; }; @@ -194,5 +202,12 @@ void dump_byte_array(const char *name, const u8 *the_array, unsigned long size); int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe); void fini_cc_regs(struct ssi_drvdata *drvdata); +static inline void set_queue_last_ind(struct ssi_drvdata *drvdata, + struct cc_hw_desc *pdesc) +{ + if (drvdata->hw_rev >= CC_HW_REV_712) + set_queue_last_ind_bit(pdesc); +} + #endif /*__SSI_DRIVER_H__*/ diff --git a/drivers/staging/ccree/ssi_fips_ll.c b/drivers/staging/ccree/ssi_fips_ll.c index cdfbf04..f64177c 100644 --- a/drivers/staging/ccree/ssi_fips_ll.c +++ b/drivers/staging/ccree/ssi_fips_ll.c @@ -35,13 +35,11 @@ static const u32 sha1_init[] = { static const u32 sha256_init[] = { SHA256_H7, SHA256_H6, SHA256_H5, SHA256_H4, SHA256_H3, SHA256_H2, SHA256_H1, SHA256_H0 }; -#if (CC_SUPPORT_SHA > 256) static const u32 digest_len_sha512_init[] = { 0x00000080, 0x00000000, 0x00000000, 0x00000000 }; static const u64 sha512_init[] = { SHA512_H7, SHA512_H6, SHA512_H5, SHA512_H4, SHA512_H3, SHA512_H2, SHA512_H1, SHA512_H0 }; -#endif #define NIST_CIPHER_AES_MAX_VECTOR_SIZE 32 @@ -102,7 +100,7 @@ struct fips_hmac_ctx { u8 initial_digest[CC_DIGEST_SIZE_MAX]; u8 key[CC_HMAC_BLOCK_SIZE_MAX]; u8 k0[CC_HMAC_BLOCK_SIZE_MAX]; - u8 digest_bytes_len[HASH_LEN_SIZE]; + u8 digest_bytes_len[HASH_MAX_LEN_SIZE]; u8 tmp_digest[CC_DIGEST_SIZE_MAX]; u8 din[NIST_HMAC_MSG_SIZE]; u8 mac_res[CC_DIGEST_SIZE_MAX]; @@ -213,10 +211,8 @@ static const FipsCipherData FipsCipherDataTable[] = { { 1, RFC3962_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, RFC3962_AES_CBC_CTS_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CBC_CTS, RFC3962_AES_128_CBC_CTS_CIPHER, RFC3962_AES_PLAIN_DATA, RFC3962_AES_VECTOR_SIZE }, { 1, NIST_AES_256_XTS_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_256_XTS_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_XTS, NIST_AES_256_XTS_PLAIN, NIST_AES_256_XTS_CIPHER, NIST_AES_256_XTS_VECTOR_SIZE }, { 
1, NIST_AES_256_XTS_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_256_XTS_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_XTS, NIST_AES_256_XTS_CIPHER, NIST_AES_256_XTS_PLAIN, NIST_AES_256_XTS_VECTOR_SIZE }, -#if (CC_SUPPORT_SHA > 256) { 1, NIST_AES_512_XTS_KEY, 2*CC_AES_256_BIT_KEY_SIZE, NIST_AES_512_XTS_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_XTS, NIST_AES_512_XTS_PLAIN, NIST_AES_512_XTS_CIPHER, NIST_AES_512_XTS_VECTOR_SIZE }, { 1, NIST_AES_512_XTS_KEY, 2*CC_AES_256_BIT_KEY_SIZE, NIST_AES_512_XTS_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_XTS, NIST_AES_512_XTS_CIPHER, NIST_AES_512_XTS_PLAIN, NIST_AES_512_XTS_VECTOR_SIZE }, -#endif /* DES */ { 0, NIST_TDES_ECB3_KEY, CC_DRV_DES_TRIPLE_KEY_SIZE, NIST_TDES_ECB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_ECB, NIST_TDES_ECB3_PLAIN_DATA, NIST_TDES_ECB3_CIPHER, NIST_TDES_VECTOR_SIZE }, { 0, NIST_TDES_ECB3_KEY, CC_DRV_DES_TRIPLE_KEY_SIZE, NIST_TDES_ECB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_ECB, NIST_TDES_ECB3_CIPHER, NIST_TDES_ECB3_PLAIN_DATA, NIST_TDES_VECTOR_SIZE }, @@ -235,18 +231,16 @@ static const FipsCmacData FipsCmacDataTable[] = { static const FipsHashData FipsHashDataTable[] = { { DRV_HASH_SHA1, NIST_SHA_1_MSG, NIST_SHA_MSG_SIZE, NIST_SHA_1_MD }, { DRV_HASH_SHA256, NIST_SHA_256_MSG, NIST_SHA_MSG_SIZE, NIST_SHA_256_MD }, -#if (CC_SUPPORT_SHA > 256) -// { DRV_HASH_SHA512, NIST_SHA_512_MSG, NIST_SHA_MSG_SIZE, NIST_SHA_512_MD }, -#endif + { DRV_HASH_SHA512, NIST_SHA_512_MSG, NIST_SHA_MSG_SIZE, + NIST_SHA_512_MD }, }; #define FIPS_HASH_NUM_OF_TESTS (sizeof(FipsHashDataTable) / sizeof(FipsHashData)) static const FipsHmacData FipsHmacDataTable[] = { { DRV_HASH_SHA1, NIST_HMAC_SHA1_KEY, NIST_HMAC_SHA1_KEY_SIZE, NIST_HMAC_SHA1_MSG, NIST_HMAC_MSG_SIZE, NIST_HMAC_SHA1_MD }, { DRV_HASH_SHA256, NIST_HMAC_SHA256_KEY, NIST_HMAC_SHA256_KEY_SIZE, NIST_HMAC_SHA256_MSG, NIST_HMAC_MSG_SIZE, NIST_HMAC_SHA256_MD }, -#if (CC_SUPPORT_SHA > 256) -// { DRV_HASH_SHA512, NIST_HMAC_SHA512_KEY, NIST_HMAC_SHA512_KEY_SIZE, NIST_HMAC_SHA512_MSG, NIST_HMAC_MSG_SIZE, NIST_HMAC_SHA512_MD }, -#endif + { DRV_HASH_SHA512, NIST_HMAC_SHA512_KEY, NIST_HMAC_SHA512_KEY_SIZE, + NIST_HMAC_SHA512_MSG, NIST_HMAC_MSG_SIZE, NIST_HMAC_SHA512_MD }, }; #define FIPS_HMAC_NUM_OF_TESTS (sizeof(FipsHmacDataTable) / sizeof(FipsHmacData)) @@ -434,6 +428,11 @@ ssi_cipher_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffe int rc = 0; size_t iv_size = cipherData->isAes ? 
NIST_AES_IV_SIZE : NIST_TDES_IV_SIZE ; + /* AES 512 was introduced in 712 */ + if ((cipherDara->keySize > CC_AES_256_BIT_KEY_SIZE) && + (drvdata->hw_rev < CC_HW_REV_712)) + continue; + memset(cpu_addr_buffer, 0, sizeof(struct fips_cipher_ctx)); /* copy into the allocated buffer */ @@ -612,10 +611,8 @@ FIPS_HashToFipsError(enum drv_hash_mode hash_mode) return CC_REE_FIPS_ERROR_SHA1_PUT; case DRV_HASH_SHA256: return CC_REE_FIPS_ERROR_SHA256_PUT; -#if (CC_SUPPORT_SHA > 256) case DRV_HASH_SHA512: return CC_REE_FIPS_ERROR_SHA512_PUT; -#endif default: return CC_REE_FIPS_ERROR_GENERAL; } @@ -654,7 +651,7 @@ ssi_hash_fips_run_test(struct ssi_drvdata *drvdata, /* Load the hash current length */ hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hw_mode); - set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_din_const(&desc[idx], 0, drvdata->hash_len_sz); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -726,14 +723,15 @@ ssi_hash_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, inter_digestsize = CC_SHA256_DIGEST_SIZE; memcpy(virt_ctx->initial_digest, (void*)sha256_init, CC_SHA256_DIGEST_SIZE); break; -#if (CC_SUPPORT_SHA > 256) case DRV_HASH_SHA512: + /* SHA 512 was introduced in CC 712 */ + if (drvdata->hw_rev < CC_HW_REV_712) + continue; hw_mode = DRV_HASH_HW_SHA512; digest_size = CC_SHA512_DIGEST_SIZE; inter_digestsize = CC_SHA512_DIGEST_SIZE; memcpy(virt_ctx->initial_digest, (void*)sha512_init, CC_SHA512_DIGEST_SIZE); break; -#endif default: error = FIPS_HashToFipsError(hash_data->hash_mode); break; @@ -788,10 +786,8 @@ FIPS_HmacToFipsError(enum drv_hash_mode hash_mode) return CC_REE_FIPS_ERROR_HMAC_SHA1_PUT; case DRV_HASH_SHA256: return CC_REE_FIPS_ERROR_HMAC_SHA256_PUT; -#if (CC_SUPPORT_SHA > 256) case DRV_HASH_SHA512: return CC_REE_FIPS_ERROR_HMAC_SHA512_PUT; -#endif default: return CC_REE_FIPS_ERROR_GENERAL; } @@ -871,7 +867,7 @@ ssi_hmac_fips_run_test(struct ssi_drvdata *drvdata, /* Load the hash current length*/ hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hw_mode); - set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_din_const(&desc[idx], 0, drvdata->hash_len_sz); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; @@ -923,7 +919,7 @@ ssi_hmac_fips_run_test(struct ssi_drvdata *drvdata, /* HW last hash block padding (aka. 
"DO_PAD") */ hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hw_mode); - set_dout_dlli(&desc[idx], k0_dma_addr, HASH_LEN_SIZE, NS_BIT, 0); + set_dout_dlli(&desc[idx], k0_dma_addr, drvdata->hash_len_sz, NS_BIT, 0); set_flow_mode(&desc[idx], S_HASH_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); set_cipher_do(&desc[idx], DO_PAD); @@ -963,7 +959,7 @@ ssi_hmac_fips_run_test(struct ssi_drvdata *drvdata, hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], hw_mode); set_din_type(&desc[idx], DMA_DLLI, digest_bytes_len_dma_addr, - HASH_LEN_SIZE, NS_BIT); + drvdata->hash_len_sz, NS_BIT); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -1039,27 +1035,32 @@ ssi_hmac_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, digest_size = CC_SHA1_DIGEST_SIZE; block_size = CC_SHA1_BLOCK_SIZE; inter_digestsize = CC_SHA1_DIGEST_SIZE; - memcpy(virt_ctx->initial_digest, (void*)sha1_init, CC_SHA1_DIGEST_SIZE); - memcpy(virt_ctx->digest_bytes_len, digest_len_init, HASH_LEN_SIZE); + memcpy(virt_ctx->initial_digest, (void *)sha1_init, + CC_SHA1_DIGEST_SIZE); + memcpy(virt_ctx->digest_bytes_len, digest_len_init, + drvdata->hash_len_sz); break; case DRV_HASH_SHA256: hw_mode = DRV_HASH_HW_SHA256; digest_size = CC_SHA256_DIGEST_SIZE; block_size = CC_SHA256_BLOCK_SIZE; inter_digestsize = CC_SHA256_DIGEST_SIZE; - memcpy(virt_ctx->initial_digest, (void*)sha256_init, CC_SHA256_DIGEST_SIZE); - memcpy(virt_ctx->digest_bytes_len, digest_len_init, HASH_LEN_SIZE); + memcpy(virt_ctx->initial_digest, (void *)sha256_init, + CC_SHA256_DIGEST_SIZE); + memcpy(virt_ctx->digest_bytes_len, digest_len_init, + drvdata->hash_len_sz); break; -#if (CC_SUPPORT_SHA > 256) case DRV_HASH_SHA512: hw_mode = DRV_HASH_HW_SHA512; digest_size = CC_SHA512_DIGEST_SIZE; block_size = CC_SHA512_BLOCK_SIZE; inter_digestsize = CC_SHA512_DIGEST_SIZE; - memcpy(virt_ctx->initial_digest, (void*)sha512_init, CC_SHA512_DIGEST_SIZE); - memcpy(virt_ctx->digest_bytes_len, digest_len_sha512_init, HASH_LEN_SIZE); + memcpy(virt_ctx->initial_digest, (void *)sha512_init, + CC_SHA512_DIGEST_SIZE); + memcpy(virt_ctx->digest_bytes_len, + digest_len_sha512_init, + drvdata->hash_len_sz); break; -#endif default: error = FIPS_HmacToFipsError(hmac_data->hash_mode); break; diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c index f52e1af..623486d 100644 --- a/drivers/staging/ccree/ssi_hash.c +++ b/drivers/staging/ccree/ssi_hash.c @@ -54,7 +54,6 @@ static const u32 sha224_init[] = { static const u32 sha256_init[] = { SHA256_H7, SHA256_H6, SHA256_H5, SHA256_H4, SHA256_H3, SHA256_H2, SHA256_H1, SHA256_H0 }; -#if (DX_DEV_SHA_MAX > 256) static const u32 digest_len_sha512_init[] = { 0x00000080, 0x00000000, 0x00000000, 0x00000000 }; static const u64 sha384_init[] = { @@ -63,7 +62,6 @@ static const u64 sha384_init[] = { static const u64 sha512_init[] = { SHA512_H7, SHA512_H6, SHA512_H5, SHA512_H4, SHA512_H3, SHA512_H2, SHA512_H1, SHA512_H0 }; -#endif static void ssi_hash_create_xcbc_setup( struct ahash_request *areq, @@ -181,7 +179,8 @@ static int ssi_hash_map_request(struct device *dev, SSI_LOG_DEBUG("Allocated digest-buffer in context ctx->digest_buff=@%p\n", state->digest_buff); if (ctx->hw_mode != DRV_CIPHER_XCBC_MAC) { - state->digest_bytes_len = kzalloc(HASH_LEN_SIZE, GFP_KERNEL|GFP_DMA); + state->digest_bytes_len = kzalloc(HASH_MAX_LEN_SIZE, + GFP_KERNEL | GFP_DMA); if (!state->digest_bytes_len) { SSI_LOG_ERR("Allocating 
digest-bytes-len in context failed\n"); goto fail1; @@ -214,15 +213,15 @@ static int ssi_hash_map_request(struct device *dev, memset(state->digest_buff, 0, ctx->inter_digestsize); } else { /*sha*/ memcpy(state->digest_buff, ctx->digest_buff, ctx->inter_digestsize); -#if (DX_DEV_SHA_MAX > 256) if (unlikely((ctx->hash_mode == DRV_HASH_SHA512) || (ctx->hash_mode == DRV_HASH_SHA384))) { - memcpy(state->digest_bytes_len, digest_len_sha512_init, HASH_LEN_SIZE); + memcpy(state->digest_bytes_len, + digest_len_sha512_init, + ctx->drvdata->hash_len_sz); } else { - memcpy(state->digest_bytes_len, digest_len_init, HASH_LEN_SIZE); + memcpy(state->digest_bytes_len, + digest_len_init, + ctx->drvdata->hash_len_sz); } -#else - memcpy(state->digest_bytes_len, digest_len_init, HASH_LEN_SIZE); -#endif } dma_sync_single_for_device(dev, state->digest_buff_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL); @@ -248,21 +247,26 @@ static int ssi_hash_map_request(struct device *dev, } if (ctx->hw_mode != DRV_CIPHER_XCBC_MAC) { - state->digest_bytes_len_dma_addr = dma_map_single(dev, (void *)state->digest_bytes_len, HASH_LEN_SIZE, DMA_BIDIRECTIONAL); + state->digest_bytes_len_dma_addr = + dma_map_single(dev, (void *)state->digest_bytes_len, + HASH_MAX_LEN_SIZE, DMA_BIDIRECTIONAL); if (dma_mapping_error(dev, state->digest_bytes_len_dma_addr)) { SSI_LOG_ERR("Mapping digest len %u B at va=%pK for DMA failed\n", - HASH_LEN_SIZE, state->digest_bytes_len); + HASH_MAX_LEN_SIZE, state->digest_bytes_len); goto fail4; } SSI_LOG_DEBUG("Mapped digest len %u B at va=%pK to dma=0x%llX\n", - HASH_LEN_SIZE, state->digest_bytes_len, + HASH_MAX_LEN_SIZE, state->digest_bytes_len, (unsigned long long)state->digest_bytes_len_dma_addr); } else { state->digest_bytes_len_dma_addr = 0; } if (is_hmac && ctx->hash_mode != DRV_HASH_NULL) { - state->opad_digest_dma_addr = dma_map_single(dev, (void *)state->opad_digest_buff, ctx->inter_digestsize, DMA_BIDIRECTIONAL); + state->opad_digest_dma_addr = + dma_map_single(dev, (void *)state->opad_digest_buff, + ctx->inter_digestsize, + DMA_BIDIRECTIONAL); if (dma_mapping_error(dev, state->opad_digest_dma_addr)) { SSI_LOG_ERR("Mapping opad digest %d B at va=%pK for DMA failed\n", ctx->inter_digestsize, state->opad_digest_buff); @@ -283,7 +287,8 @@ static int ssi_hash_map_request(struct device *dev, fail5: if (state->digest_bytes_len_dma_addr != 0) { - dma_unmap_single(dev, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, DMA_BIDIRECTIONAL); + dma_unmap_single(dev, state->digest_bytes_len_dma_addr, + HASH_MAX_LEN_SIZE, DMA_BIDIRECTIONAL); state->digest_bytes_len_dma_addr = 0; } fail4: @@ -329,7 +334,7 @@ static void ssi_hash_unmap_request(struct device *dev, } if (state->digest_bytes_len_dma_addr != 0) { dma_unmap_single(dev, state->digest_bytes_len_dma_addr, - HASH_LEN_SIZE, DMA_BIDIRECTIONAL); + HASH_MAX_LEN_SIZE, DMA_BIDIRECTIONAL); SSI_LOG_DEBUG("Unmapped digest-bytes-len buffer: digest_bytes_len_dma_addr=0x%llX\n", (unsigned long long)state->digest_bytes_len_dma_addr); state->digest_bytes_len_dma_addr = 0; @@ -476,10 +481,11 @@ static int ssi_hash_digest(struct ahash_req_ctx *state, if (is_hmac) { set_din_type(&desc[idx], DMA_DLLI, - state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, + state->digest_bytes_len_dma_addr, + ctx->drvdata->hash_len_sz, NS_BIT); } else { - set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_din_const(&desc[idx], 0, ctx->drvdata->hash_len_sz); if (likely(nbytes != 0)) { set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); } else { @@ -497,7 +503,7 @@ static int 
ssi_hash_digest(struct ahash_req_ctx *state, hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], ctx->hw_mode); set_dout_dlli(&desc[idx], state->digest_buff_dma_addr, - HASH_LEN_SIZE, NS_BIT, 0); + ctx->drvdata->hash_len_sz, NS_BIT, 0); set_flow_mode(&desc[idx], S_HASH_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); set_cipher_do(&desc[idx], DO_PAD); @@ -527,7 +533,7 @@ static int ssi_hash_digest(struct ahash_req_ctx *state, set_cipher_mode(&desc[idx], ctx->hw_mode); set_din_sram(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr( -ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); +ctx->drvdata, ctx->hash_mode), ctx->drvdata->hash_len_sz); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -554,7 +560,7 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, (async_req ? 1 : 0)); if (async_req) { - set_queue_last_ind(&desc[idx]); + set_queue_last_ind(ctx->drvdata, &desc[idx]); } set_flow_mode(&desc[idx], S_HASH_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); @@ -635,7 +641,7 @@ static int ssi_hash_update(struct ahash_req_ctx *state, hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], ctx->hw_mode); set_din_type(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, - HASH_LEN_SIZE, NS_BIT); + ctx->drvdata->hash_len_sz, NS_BIT); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; @@ -655,9 +661,9 @@ static int ssi_hash_update(struct ahash_req_ctx *state, hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], ctx->hw_mode); set_dout_dlli(&desc[idx], state->digest_bytes_len_dma_addr, - HASH_LEN_SIZE, NS_BIT, (async_req ? 1 : 0)); + ctx->drvdata->hash_len_sz, NS_BIT, (async_req ? 1 : 0)); if (async_req) { - set_queue_last_ind(&desc[idx]); + set_queue_last_ind(ctx->drvdata, &desc[idx]); } set_flow_mode(&desc[idx], S_HASH_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); @@ -729,7 +735,7 @@ static int ssi_hash_finup(struct ahash_req_ctx *state, set_cipher_mode(&desc[idx], ctx->hw_mode); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_din_type(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, - HASH_LEN_SIZE, NS_BIT); + ctx->drvdata->hash_len_sz, NS_BIT); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; @@ -761,7 +767,7 @@ static int ssi_hash_finup(struct ahash_req_ctx *state, set_cipher_mode(&desc[idx], ctx->hw_mode); set_din_sram(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr( -ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); +ctx->drvdata, ctx->hash_mode), ctx->drvdata->hash_len_sz); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -787,7 +793,7 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, (async_req ? 
1 : 0)); if (async_req) { - set_queue_last_ind(&desc[idx]); + set_queue_last_ind(ctx->drvdata, &desc[idx]); } set_flow_mode(&desc[idx], S_HASH_to_DOUT); set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); @@ -867,7 +873,7 @@ static int ssi_hash_final(struct ahash_req_ctx *state, set_cipher_mode(&desc[idx], ctx->hw_mode); set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); set_din_type(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, - HASH_LEN_SIZE, NS_BIT); + ctx->drvdata->hash_len_sz, NS_BIT); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; @@ -879,7 +885,7 @@ static int ssi_hash_final(struct ahash_req_ctx *state, set_cipher_do(&desc[idx], DO_PAD); set_cipher_mode(&desc[idx], ctx->hw_mode); set_dout_dlli(&desc[idx], state->digest_bytes_len_dma_addr, - HASH_LEN_SIZE, NS_BIT, 0); + ctx->drvdata->hash_len_sz, NS_BIT, 0); set_setup_mode(&desc[idx], SETUP_WRITE_STATE1); set_flow_mode(&desc[idx], S_HASH_to_DOUT); idx++; @@ -909,7 +915,7 @@ static int ssi_hash_final(struct ahash_req_ctx *state, set_cipher_mode(&desc[idx], ctx->hw_mode); set_din_sram(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr( -ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); +ctx->drvdata, ctx->hash_mode), ctx->drvdata->hash_len_sz); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -934,7 +940,7 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE); set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, (async_req ? 1 : 0)); if (async_req) { - set_queue_last_ind(&desc[idx]); + set_queue_last_ind(ctx->drvdata, &desc[idx]); } set_flow_mode(&desc[idx], S_HASH_to_DOUT); set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED); @@ -1036,7 +1042,7 @@ static int ssi_hash_setkey(void *hash, /* Load the hash current length*/ hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], ctx->hw_mode); - set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_din_const(&desc[idx], 0, ctx->drvdata->hash_len_sz); set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); @@ -1117,7 +1123,7 @@ static int ssi_hash_setkey(void *hash, /* Load the hash current length*/ hw_desc_init(&desc[idx]); set_cipher_mode(&desc[idx], ctx->hw_mode); - set_din_const(&desc[idx], 0, HASH_LEN_SIZE); + set_din_const(&desc[idx], 0, ctx->drvdata->hash_len_sz); set_flow_mode(&desc[idx], S_DIN_to_HASH); set_setup_mode(&desc[idx], SETUP_LOAD_KEY0); idx++; @@ -1437,7 +1443,7 @@ static int ssi_mac_update(struct ahash_request *req) set_cipher_mode(&desc[idx], ctx->hw_mode); set_dout_dlli(&desc[idx], state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 1); - set_queue_last_ind(&desc[idx]); + set_queue_last_ind(ctx->drvdata, &desc[idx]); set_flow_mode(&desc[idx], S_AES_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); idx++; @@ -1553,7 +1559,7 @@ static int ssi_mac_final(struct ahash_request *req) /* TODO */ set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, 1); - set_queue_last_ind(&desc[idx]); + set_queue_last_ind(ctx->drvdata, &desc[idx]); set_flow_mode(&desc[idx], S_AES_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); set_cipher_mode(&desc[idx], ctx->hw_mode); @@ -1625,7 +1631,7 @@ static int ssi_mac_finup(struct ahash_request *req) /* TODO */ set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, 1); - set_queue_last_ind(&desc[idx]); + 
set_queue_last_ind(ctx->drvdata, &desc[idx]); set_flow_mode(&desc[idx], S_AES_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); set_cipher_mode(&desc[idx], ctx->hw_mode); @@ -1697,7 +1703,7 @@ static int ssi_mac_digest(struct ahash_request *req) hw_desc_init(&desc[idx]); set_dout_dlli(&desc[idx], state->digest_result_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT, 1); - set_queue_last_ind(&desc[idx]); + set_queue_last_ind(ctx->drvdata, &desc[idx]); set_flow_mode(&desc[idx], S_AES_to_DOUT); set_setup_mode(&desc[idx], SETUP_WRITE_STATE0); set_cipher_config0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT); @@ -1789,10 +1795,12 @@ static int ssi_ahash_export(struct ahash_request *req, void *out) if (state->digest_bytes_len_dma_addr) { dma_sync_single_for_cpu(dev, state->digest_bytes_len_dma_addr, - HASH_LEN_SIZE, DMA_BIDIRECTIONAL); - memcpy(out, state->digest_bytes_len, HASH_LEN_SIZE); + ctx->drvdata->hash_len_sz, + DMA_BIDIRECTIONAL); + memcpy(out, state->digest_bytes_len, + ctx->drvdata->hash_len_sz); } - out += HASH_LEN_SIZE; + out += ctx->drvdata->hash_len_sz; memcpy(out, &curr_buff_cnt, sizeof(u32)); out += sizeof(u32); @@ -1835,10 +1843,11 @@ static int ssi_ahash_import(struct ahash_request *req, const void *in) if (state->digest_bytes_len_dma_addr) { dma_sync_single_for_cpu(dev, state->digest_bytes_len_dma_addr, - HASH_LEN_SIZE, DMA_BIDIRECTIONAL); - memcpy(state->digest_bytes_len, in, HASH_LEN_SIZE); + ctx->drvdata->hash_len_sz, + DMA_BIDIRECTIONAL); + memcpy(state->digest_bytes_len, in, ctx->drvdata->hash_len_sz); } - in += HASH_LEN_SIZE; + in += ctx->drvdata->hash_len_sz; dma_sync_single_for_device(dev, state->digest_buff_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL); @@ -1846,7 +1855,8 @@ static int ssi_ahash_import(struct ahash_request *req, const void *in) if (state->digest_bytes_len_dma_addr) dma_sync_single_for_device(dev, state->digest_bytes_len_dma_addr, - HASH_LEN_SIZE, DMA_BIDIRECTIONAL); + ctx->drvdata->hash_len_sz, + DMA_BIDIRECTIONAL); state->buff_index = 0; @@ -1883,10 +1893,11 @@ struct ssi_hash_template { int hw_mode; int inter_digestsize; struct ssi_drvdata *drvdata; + u32 min_hw_rev; }; #define CC_STATE_SIZE(_x) \ - ((_x) + HASH_LEN_SIZE + SSI_MAX_HASH_BLCK_SIZE + (2 * sizeof(u32))) + ((_x) + HASH_MAX_LEN_SIZE + SSI_MAX_HASH_BLCK_SIZE + (2 * sizeof(u32))) /* hash descriptors */ static struct ssi_hash_template driver_hash[] = { @@ -1915,6 +1926,7 @@ static struct ssi_hash_template driver_hash[] = { .hash_mode = DRV_HASH_SHA1, .hw_mode = DRV_HASH_HW_SHA1, .inter_digestsize = SHA1_DIGEST_SIZE, + .min_hw_rev = CC_HW_REV_630, }, { .name = "sha256", @@ -1939,6 +1951,7 @@ static struct ssi_hash_template driver_hash[] = { .hash_mode = DRV_HASH_SHA256, .hw_mode = DRV_HASH_HW_SHA256, .inter_digestsize = SHA256_DIGEST_SIZE, + .min_hw_rev = CC_HW_REV_630, }, { .name = "sha224", @@ -1963,8 +1976,8 @@ static struct ssi_hash_template driver_hash[] = { .hash_mode = DRV_HASH_SHA224, .hw_mode = DRV_HASH_HW_SHA256, .inter_digestsize = SHA256_DIGEST_SIZE, + .min_hw_rev = CC_HW_REV_630, }, -#if (DX_DEV_SHA_MAX > 256) { .name = "sha384", .driver_name = "sha384-dx", @@ -1988,6 +2001,7 @@ static struct ssi_hash_template driver_hash[] = { .hash_mode = DRV_HASH_SHA384, .hw_mode = DRV_HASH_HW_SHA512, .inter_digestsize = SHA512_DIGEST_SIZE, + .min_hw_rev = CC_HW_REV_712, }, { .name = "sha512", @@ -2012,8 +2026,8 @@ static struct ssi_hash_template driver_hash[] = { .hash_mode = DRV_HASH_SHA512, .hw_mode = DRV_HASH_HW_SHA512, .inter_digestsize = SHA512_DIGEST_SIZE, + .min_hw_rev = CC_HW_REV_712, 
}, -#endif { .name = "md5", .driver_name = "md5-dx", @@ -2037,6 +2051,7 @@ static struct ssi_hash_template driver_hash[] = { .hash_mode = DRV_HASH_MD5, .hw_mode = DRV_HASH_HW_MD5, .inter_digestsize = MD5_DIGEST_SIZE, + .min_hw_rev = CC_HW_REV_630, }, { .mac_name = "xcbc(aes)", @@ -2059,6 +2074,7 @@ static struct ssi_hash_template driver_hash[] = { .hash_mode = DRV_HASH_NULL, .hw_mode = DRV_CIPHER_XCBC_MAC, .inter_digestsize = AES_BLOCK_SIZE, + .min_hw_rev = CC_HW_REV_630, }, #if SSI_CC_HAS_CMAC { @@ -2082,6 +2098,7 @@ static struct ssi_hash_template driver_hash[] = { .hash_mode = DRV_HASH_NULL, .hw_mode = DRV_CIPHER_CMAC, .inter_digestsize = AES_BLOCK_SIZE, + .min_hw_rev = CC_HW_REV_630, }, #endif @@ -2142,9 +2159,8 @@ int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata) unsigned int larval_seq_len = 0; struct cc_hw_desc larval_seq[CC_DIGEST_SIZE_MAX/sizeof(u32)]; int rc = 0; -#if (DX_DEV_SHA_MAX > 256) + bool large_sha_supported = (drvdata->hw_rev >= CC_HW_REV_712); int i; -#endif /* Copy-to-sram digest-len */ ssi_sram_mgr_const2sram_desc(digest_len_init, sram_buff_ofs, @@ -2156,17 +2172,19 @@ int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata) sram_buff_ofs += sizeof(digest_len_init); larval_seq_len = 0; -#if (DX_DEV_SHA_MAX > 256) - /* Copy-to-sram digest-len for sha384/512 */ - ssi_sram_mgr_const2sram_desc(digest_len_sha512_init, sram_buff_ofs, - ARRAY_SIZE(digest_len_sha512_init), larval_seq, &larval_seq_len); - rc = send_request_init(drvdata, larval_seq, larval_seq_len); - if (unlikely(rc != 0)) - goto init_digest_const_err; + if (large_sha_supported) { + /* Copy-to-sram digest-len for sha384/512 */ + ssi_sram_mgr_const2sram_desc(digest_len_sha512_init, + sram_buff_ofs, + ARRAY_SIZE(digest_len_sha512_init), + larval_seq, &larval_seq_len); + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (unlikely(rc != 0)) + goto init_digest_const_err; - sram_buff_ofs += sizeof(digest_len_sha512_init); - larval_seq_len = 0; -#endif + sram_buff_ofs += sizeof(digest_len_sha512_init); + larval_seq_len = 0; + } /* The initial digests offset */ hash_handle->larval_digest_sram_addr = sram_buff_ofs; @@ -2204,43 +2222,53 @@ int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata) sram_buff_ofs += sizeof(sha256_init); larval_seq_len = 0; -#if (DX_DEV_SHA_MAX > 256) - /* We are forced to swap each double-word larval before copying to sram */ - for (i = 0; i < ARRAY_SIZE(sha384_init); i++) { - const u32 const0 = ((u32 *)((u64 *)&sha384_init[i]))[1]; - const u32 const1 = ((u32 *)((u64 *)&sha384_init[i]))[0]; - - ssi_sram_mgr_const2sram_desc(&const0, sram_buff_ofs, 1, - larval_seq, &larval_seq_len); - sram_buff_ofs += sizeof(u32); - ssi_sram_mgr_const2sram_desc(&const1, sram_buff_ofs, 1, - larval_seq, &larval_seq_len); - sram_buff_ofs += sizeof(u32); - } - rc = send_request_init(drvdata, larval_seq, larval_seq_len); - if (unlikely(rc != 0)) { - SSI_LOG_ERR("send_request() failed (rc = %d)\n", rc); - goto init_digest_const_err; - } - larval_seq_len = 0; - - for (i = 0; i < ARRAY_SIZE(sha512_init); i++) { - const u32 const0 = ((u32 *)((u64 *)&sha512_init[i]))[1]; - const u32 const1 = ((u32 *)((u64 *)&sha512_init[i]))[0]; - - ssi_sram_mgr_const2sram_desc(&const0, sram_buff_ofs, 1, - larval_seq, &larval_seq_len); - sram_buff_ofs += sizeof(u32); - ssi_sram_mgr_const2sram_desc(&const1, sram_buff_ofs, 1, - larval_seq, &larval_seq_len); - sram_buff_ofs += sizeof(u32); - } - rc = send_request_init(drvdata, larval_seq, larval_seq_len); - if (unlikely(rc != 0)) { - 
SSI_LOG_ERR("send_request() failed (rc = %d)\n", rc); - goto init_digest_const_err; + if (large_sha_supported) { + /* We are forced to swap each double-word larval before + * copying to sram + */ + for (i = 0; i < ARRAY_SIZE(sha384_init); i++) { + const u32 const0 = + ((u32 *)((u64 *)&sha384_init[i]))[1]; + const u32 const1 = + ((u32 *)((u64 *)&sha384_init[i]))[0]; + + ssi_sram_mgr_const2sram_desc(&const0, sram_buff_ofs, 1, + larval_seq, + &larval_seq_len); + sram_buff_ofs += sizeof(u32); + ssi_sram_mgr_const2sram_desc(&const1, sram_buff_ofs, 1, + larval_seq, + &larval_seq_len); + sram_buff_ofs += sizeof(u32); + } + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("send_request() failed (rc = %d)\n", rc); + goto init_digest_const_err; + } + larval_seq_len = 0; + + for (i = 0; i < ARRAY_SIZE(sha512_init); i++) { + const u32 const0 = + ((u32 *)((u64 *)&sha512_init[i]))[1]; + const u32 const1 = + ((u32 *)((u64 *)&sha512_init[i]))[0]; + + ssi_sram_mgr_const2sram_desc(&const0, sram_buff_ofs, 1, + larval_seq, + &larval_seq_len); + sram_buff_ofs += sizeof(u32); + ssi_sram_mgr_const2sram_desc(&const1, sram_buff_ofs, 1, + larval_seq, + &larval_seq_len); + sram_buff_ofs += sizeof(u32); + } + rc = send_request_init(drvdata, larval_seq, larval_seq_len); + if (unlikely(rc != 0)) { + SSI_LOG_ERR("send_request() failed (rc = %d)\n", rc); + goto init_digest_const_err; + } } -#endif init_digest_const_err: return rc; @@ -2265,16 +2293,16 @@ int ssi_hash_alloc(struct ssi_drvdata *drvdata) drvdata->hash_handle = hash_handle; sram_size_to_alloc = sizeof(digest_len_init) + -#if (DX_DEV_SHA_MAX > 256) - sizeof(digest_len_sha512_init) + - sizeof(sha384_init) + - sizeof(sha512_init) + -#endif sizeof(md5_init) + sizeof(sha1_init) + sizeof(sha224_init) + sizeof(sha256_init); + if (drvdata->hw_rev >= CC_HW_REV_712) + sram_size_to_alloc += sizeof(digest_len_sha512_init) + + sizeof(sha384_init) + + sizeof(sha512_init); + sram_buff = ssi_sram_mgr_alloc(drvdata, sram_size_to_alloc); if (sram_buff == NULL_SRAM_ADDR) { SSI_LOG_ERR("SRAM pool exhausted\n"); @@ -2299,6 +2327,10 @@ int ssi_hash_alloc(struct ssi_drvdata *drvdata) struct ssi_hash_alg *t_alg; int hw_mode = driver_hash[alg].hw_mode; + /* We either support both HASH and MAC or none */ + if (driver_hash[alg].min_hw_rev > drvdata->hw_rev) + continue; + /* register hmac version */ t_alg = ssi_hash_create_alg(&driver_hash[alg], true); if (IS_ERR(t_alg)) { @@ -2543,7 +2575,6 @@ ssi_sram_addr_t ssi_ahash_get_larval_digest_sram_addr(void *drvdata, u32 mode) sizeof(md5_init) + sizeof(sha1_init) + sizeof(sha224_init)); -#if (DX_DEV_SHA_MAX > 256) case DRV_HASH_SHA384: return (hash_handle->larval_digest_sram_addr + sizeof(md5_init) + @@ -2557,7 +2588,6 @@ ssi_sram_addr_t ssi_ahash_get_larval_digest_sram_addr(void *drvdata, u32 mode) sizeof(sha224_init) + sizeof(sha256_init) + sizeof(sha384_init)); -#endif default: SSI_LOG_ERR("Invalid hash mode (%d)\n", mode); } @@ -2579,11 +2609,9 @@ ssi_ahash_get_initial_digest_len_sram_addr(void *drvdata, u32 mode) case DRV_HASH_SHA256: case DRV_HASH_MD5: return digest_len_addr; -#if (DX_DEV_SHA_MAX > 256) case DRV_HASH_SHA384: case DRV_HASH_SHA512: return digest_len_addr + sizeof(digest_len_init); -#endif default: return digest_len_addr; /*to avoid kernel crash*/ } diff --git a/drivers/staging/ccree/ssi_hash.h b/drivers/staging/ccree/ssi_hash.h index 0bb99cb..1c12ab9 100644 --- a/drivers/staging/ccree/ssi_hash.h +++ b/drivers/staging/ccree/ssi_hash.h @@ -25,15 +25,11 @@ #define 
HMAC_IPAD_CONST 0x36363636 #define HMAC_OPAD_CONST 0x5C5C5C5C -#if (DX_DEV_SHA_MAX > 256) -#define HASH_LEN_SIZE 16 +#define HASH_LEN_SIZE_712 16 +#define HASH_LEN_SIZE_630 8 +#define HASH_MAX_LEN_SIZE HASH_LEN_SIZE_712 #define SSI_MAX_HASH_DIGEST_SIZE SHA512_DIGEST_SIZE #define SSI_MAX_HASH_BLCK_SIZE SHA512_BLOCK_SIZE -#else -#define HASH_LEN_SIZE 8 -#define SSI_MAX_HASH_DIGEST_SIZE SHA256_DIGEST_SIZE -#define SSI_MAX_HASH_BLCK_SIZE SHA256_BLOCK_SIZE -#endif #define XCBC_MAC_K1_OFFSET 0 #define XCBC_MAC_K2_OFFSET 16 diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c index 7c2d88a..cffc8de 100644 --- a/drivers/staging/ccree/ssi_request_mgr.c +++ b/drivers/staging/ccree/ssi_request_mgr.c @@ -152,7 +152,7 @@ int request_mgr_init(struct ssi_drvdata *drvdata) set_dout_dlli(&req_mgr_h->compl_desc, req_mgr_h->dummy_comp_buff_dma, sizeof(u32), NS_BIT, 1); set_flow_mode(&req_mgr_h->compl_desc, BYPASS); - set_queue_last_ind(&req_mgr_h->compl_desc); + set_queue_last_ind(drvdata, &req_mgr_h->compl_desc); return 0; @@ -414,7 +414,7 @@ int send_request_init( if (unlikely(rc != 0 )) { return rc; } - set_queue_last_ind(&desc[(len - 1)]); + set_queue_last_ind(drvdata, &desc[(len - 1)]); enqueue_seq(cc_base, desc, len); @@ -500,13 +500,15 @@ static void proc_completions(struct ssi_drvdata *drvdata) } } -static inline u32 cc_axi_comp_count(void __iomem *cc_base) +static inline u32 cc_axi_comp_count(struct ssi_drvdata *drvdata) { /* The CC_HAL_READ_REGISTER macro implictly requires and uses * a base MMIO register address variable named cc_base. */ + void __iomem *cc_base = drvdata->cc_base; + return FIELD_GET(AXIM_MON_COMP_VALUE, - CC_HAL_READ_REGISTER(AXIM_MON_BASE_OFFSET)); + CC_HAL_READ_REGISTER(drvdata->axim_mon_offset)); } /* Deferred service handler, run as interrupt-fired tasklet */ @@ -516,11 +518,8 @@ static void comp_handler(unsigned long devarg) void __iomem *cc_base = drvdata->cc_base; struct ssi_request_mgr_handle * request_mgr_handle = drvdata->request_mgr_handle; - u32 irq; - - irq = (drvdata->irq & SSI_COMP_IRQ_MASK); if (irq & SSI_COMP_IRQ_MASK) { @@ -529,7 +528,7 @@ static void comp_handler(unsigned long devarg) /* Avoid race with above clear: Test completion counter once more */ request_mgr_handle->axi_completed += - cc_axi_comp_count(cc_base); + cc_axi_comp_count(drvdata); while (request_mgr_handle->axi_completed) { do { @@ -538,7 +537,7 @@ static void comp_handler(unsigned long devarg) * request_mgr_handle->axi_completed is 0. 
*/ request_mgr_handle->axi_completed = - cc_axi_comp_count(cc_base); + cc_axi_comp_count(drvdata); } while (request_mgr_handle->axi_completed > 0); /* To avoid the interrupt from firing as we unmask it, we clear it now */ @@ -546,7 +545,7 @@ static void comp_handler(unsigned long devarg) /* Avoid race with above clear: Test completion counter once more */ request_mgr_handle->axi_completed += - cc_axi_comp_count(cc_base); + cc_axi_comp_count(drvdata); } } diff --git a/drivers/staging/ccree/ssi_sram_mgr.c b/drivers/staging/ccree/ssi_sram_mgr.c index c8ab55e..589638c 100644 --- a/drivers/staging/ccree/ssi_sram_mgr.c +++ b/drivers/staging/ccree/ssi_sram_mgr.c @@ -53,6 +53,8 @@ void ssi_sram_mgr_fini(struct ssi_drvdata *drvdata) int ssi_sram_mgr_init(struct ssi_drvdata *drvdata) { struct ssi_sram_mgr_ctx *smgr_ctx; + void *cc_base = drvdata->cc_base; + dma_addr_t start = 0; int rc; /* Allocate "this" context */ @@ -66,9 +68,18 @@ int ssi_sram_mgr_init(struct ssi_drvdata *drvdata) } smgr_ctx = drvdata->sram_mgr_handle; - /* Pool starts at start of SRAM */ - smgr_ctx->sram_free_offset = 0; + if (drvdata->hw_rev < CC_HW_REV_712) { + /* Pool starts after ROM bytes */ + start = (dma_addr_t)CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, +HOST_SEP_SRAM_THRESHOLD)); + if ((start & 0x3) != 0) { + SSI_LOG_ERR("Invalid SRAM offset 0x%x\n", start); + rc = -ENODEV; + goto out; + } + } + smgr_ctx->sram_free_offset = start; return 0; out: From patchwork Thu Jun 22 07:07:50 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 106155 Delivered-To: patch@linaro.org Received: by 10.140.91.2 with SMTP id y2csp2316942qgd; Thu, 22 Jun 2017 00:09:48 -0700 (PDT) X-Received: by 10.84.225.5 with SMTP id t5mr1317401plj.108.1498115388290; Thu, 22 Jun 2017 00:09:48 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1498115388; cv=none; d=google.com; s=arc-20160816; b=mR9R/UHzopklZOheQUKOyyPxdT+SYi6K017Qmc+O17HGCXezi/iY/ypHLvvStK68+4 DsPjeuJPAlF3WBAqDLaqhcbj3wR5LCefCkI6hhIqG/enZhCtGnsH7K5FOsfm+zuwzvKS 7dsH8dQkKPKd7cN7Ky048QQ6h3D3De6hl8WO2VuFiHTwTFKFUuCvBrmF6HyF2OJHwRYc hE802p1SPfy+oDNldnKzo/kWapTJnZ7lGml3HHi20dnM58z1IxW80J4Hojgd3estpiTh x7WPDZZwo/RJOJHbOfUPpNUicsOvpmefLmrDaA7R4APuYRe70mGwoIHthJMZyz5I8Znj H3lA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=gn+0Eo/Ayjdqgos3TKfdraEQM44Ag+3YzFMgViV3COg=; b=JjpnkyFBRogKhKWkcPLhASDX8j0xT0av+dWDIJ5m62S21k5JnqrpnZKSfaVxuGzlCV 3POHvFNytlvwdNlt/t7KGoovxmKRhvREAY2pn7H4IaRG8S6xj0DaSFMhipQhnqTExas0 Z894N7stoQ1uRHXP8hXRmrPpb7KqU3tzA9P+ZyH4rB5raQyK5NBDnheTpCPc3DjQsuCw PQ1M9US9yneTSnsG/iclaGhbYnEa/86PUbOkbR8Eg9K2NBZbkN9BvN0+sVfoFnvFabC8 oRdYbOmzaXEzbuSJrOj4WjJfTJlCZWbP+bTmZZuj9h95aOZ6yeHCysbcnDoJVeyCJ7Hq yIsQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id t5si543286pfe.291.2017.06.22.00.09.47; Thu, 22 Jun 2017 00:09:48 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752755AbdFVHI2 (ORCPT + 25 others); Thu, 22 Jun 2017 03:08:28 -0400 Received: from foss.arm.com ([217.140.101.70]:33378 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751143AbdFVHIZ (ORCPT ); Thu, 22 Jun 2017 03:08:25 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AC53F1610; Thu, 22 Jun 2017 00:08:24 -0700 (PDT) Received: from gby.kfn.arm.com (unknown [10.45.48.178]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2A2ED3F587; Thu, 22 Jun 2017 00:08:22 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Cc: Ofir Drang Subject: [PATCH 4/7] staging: ccree: remove unused function Date: Thu, 22 Jun 2017 10:07:50 +0300 Message-Id: <1498115276-1601-5-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1498115276-1601-1-git-send-email-gilad@benyossef.com> References: <1498115276-1601-1-git-send-email-gilad@benyossef.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The function set_ack_last was not used anywhere. Remove it. 
Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/cc_hw_queue_defs.h | 12 ------------ 1 file changed, 12 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/cc_hw_queue_defs.h b/drivers/staging/ccree/cc_hw_queue_defs.h index c730c3c..ac0eb97 100644 --- a/drivers/staging/ccree/cc_hw_queue_defs.h +++ b/drivers/staging/ccree/cc_hw_queue_defs.h @@ -226,18 +226,6 @@ static inline void set_queue_last_ind_bit(struct cc_hw_desc *pdesc) } /* - * Signs the end of HW descriptors flow by asking for completion ack, - * and release the HW engines - * - * @pdesc: pointer HW descriptor struct - */ -static inline void set_ack_last(struct cc_hw_desc *pdesc) -{ - pdesc->word[3] |= FIELD_PREP(WORD3_QUEUE_LAST_IND, 1); - pdesc->word[4] |= FIELD_PREP(WORD4_ACK_NEEDED, 1); -} - -/* * Set the DIN field of a HW descriptors * * @pdesc: pointer HW descriptor struct From patchwork Thu Jun 22 07:07:51 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 106153 Delivered-To: patch@linaro.org Received: by 10.140.91.2 with SMTP id y2csp2316559qgd; Thu, 22 Jun 2017 00:08:48 -0700 (PDT) X-Received: by 10.99.94.70 with SMTP id s67mr1162071pgb.82.1498115328236; Thu, 22 Jun 2017 00:08:48 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1498115328; cv=none; d=google.com; s=arc-20160816; b=OpAh5r37TLCsqzeJp5cfqwPaQQjOhsXvhubvvY1yyWnB0iq+ri9iKokkAIjohrdXf/ AcLohz+JjU33EnuOlhPhH2LuEl5/tL2gGIMDfJrlR7tSvY1I1+uau7WCrlEtSgPcRa51 m6oT1T7Q91B/cPjYI/7wJW+POO4D887/OuXWnkOf/hNR+Mbo8BSrCiyym1mg660161GB J9QCIcOtIkPpXOvdIL9A5d2QDtpii9vIZOaybTQNSFTCx8Dx2Q3wBVd66WpVk1oqZZyw 6uYCoHzy2vu0YwvluTb9zb1HYNRZi9m7SpiPTfMK3Z0ftCnTjiy4GSY0Xfhr2Ba5SmxY sZJw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=yeosCpTYe8j92FlD9uCHlvr99E2LfU7Q73PMojsfrJE=; b=dbSfDhjCTS7cqbZZNNpcf40IeiC24dGcye3TpRVsAaU6FEoWu0Vf+Ldssc0tEBvgFg 33bMV7XYesgdDN9mINi9J8WXyuUmhSXOUKeqZelLRX31qvxKMDnjGKojfQTd5LNXMi+Q bOcVxh3605/IF/dk4RCQNjedSNHXHguZi+je4tJqOK8BbUIIg5mwbjSji1Q0cAV90yI+ ixjbbk5DbwduHPWHOq1ta6HlU45fg9h1P5IcRkb1MFaPT5CakJhqZ92IjNK6F2A6+1dw FLqao/PSSmSwXvl+jitSL+la+N43w+3pzhwBepfmgsk5imnfc/NbyTg+rm0zObqGxBQZ j5sA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id 82si533178pgg.519.2017.06.22.00.08.47; Thu, 22 Jun 2017 00:08:48 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752837AbdFVHIl (ORCPT + 25 others); Thu, 22 Jun 2017 03:08:41 -0400 Received: from foss.arm.com ([217.140.101.70]:33382 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751124AbdFVHIj (ORCPT ); Thu, 22 Jun 2017 03:08:39 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 04B5C1596; Thu, 22 Jun 2017 00:08:29 -0700 (PDT) Received: from gby.kfn.arm.com (unknown [10.45.48.178]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 734953F587; Thu, 22 Jun 2017 00:08:27 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Cc: Ofir Drang Subject: [PATCH 5/7] staging: ccree: add clock management support Date: Thu, 22 Jun 2017 10:07:51 +0300 Message-Id: <1498115276-1601-6-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1498115276-1601-1-git-send-email-gilad@benyossef.com> References: <1498115276-1601-1-git-send-email-gilad@benyossef.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Some SoCs which implement CryptoCell have a dedicated clock tied to it, while others do not. Implement clock support, when such a clock exists, based on device tree data, and tie power management to it.
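For reference, the clock handling follows the usual optional-clock pattern: of_clk_get() may hand back an ERR_PTR() on platforms whose device tree node carries no clocks property, and that result is simply remembered rather than treated as a fatal error. A minimal sketch of the pattern, not part of the patch and using placeholder my_* names:

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/of.h>

struct my_drvdata {
	struct clk *clk;
};

/* Probe time: a missing clock is legal, just remember whatever we got. */
static void my_get_clk(struct my_drvdata *p, struct device_node *np)
{
	p->clk = of_clk_get(np, 0); /* ERR_PTR() when the DT has no clock */
}

static int my_clk_on(struct my_drvdata *p)
{
	if (IS_ERR(p->clk))
		return 0; /* no clock on this platform, nothing to enable */

	return clk_prepare_enable(p->clk);
}

static void my_clk_off(struct my_drvdata *p)
{
	if (IS_ERR(p->clk))
		return;

	clk_disable_unprepare(p->clk);
}

Runtime PM suspend/resume can then call my_clk_off()/my_clk_on() unconditionally, since both degrade to no-ops on clock-less platforms.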
Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/Makefile | 2 +- drivers/staging/ccree/ssi_driver.c | 43 +++++++++++++++++++++++---- drivers/staging/ccree/ssi_driver.h | 4 +++ drivers/staging/ccree/ssi_pm.c | 13 +++++---- drivers/staging/ccree/ssi_pm_ext.c | 60 -------------------------------------- drivers/staging/ccree/ssi_pm_ext.h | 33 --------------------- 6 files changed, 50 insertions(+), 105 deletions(-) delete mode 100644 drivers/staging/ccree/ssi_pm_ext.c delete mode 100644 drivers/staging/ccree/ssi_pm_ext.h -- 2.1.4 diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile index 44f3e3e..318c2b3 100644 --- a/drivers/staging/ccree/Makefile +++ b/drivers/staging/ccree/Makefile @@ -1,3 +1,3 @@ obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o -ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_aead.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o +ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_aead.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ccree-$(CCREE_FIPS_SUPPORT) += ssi_fips.o ssi_fips_ll.o ssi_fips_ext.o ssi_fips_local.o diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c index 5a62b4f..f1275a6 100644 --- a/drivers/staging/ccree/ssi_driver.c +++ b/drivers/staging/ccree/ssi_driver.c @@ -57,6 +57,7 @@ #include #include #include +#include #include "ssi_config.h" #include "ssi_driver.h" @@ -269,6 +270,7 @@ static int init_cc_resources(struct platform_device *plat_dev) hw_rev = (struct cc_hw_data *)dev_id->data; new_drvdata->hw_rev_name = hw_rev->name; new_drvdata->hw_rev = hw_rev->rev; + new_drvdata->clk = of_clk_get(np, 0); if (hw_rev->rev >= CC_HW_REV_712) { new_drvdata->hash_len_sz = HASH_LEN_SIZE_712; @@ -338,6 +340,10 @@ static int init_cc_resources(struct platform_device *plat_dev) new_drvdata->plat_dev = plat_dev; + rc = cc_clk_on(new_drvdata); + if (rc) + goto init_cc_res_err; + if(new_drvdata->plat_dev->dev.dma_mask == NULL) { new_drvdata->plat_dev->dev.dma_mask = & new_drvdata->plat_dev->dev.coherent_dma_mask; @@ -504,14 +510,11 @@ static void cleanup_cc_resources(struct platform_device *plat_dev) ssi_sysfs_fini(); #endif - /* Mask all interrupts */ - WRITE_REGISTER(drvdata->cc_base + CC_REG_OFFSET(HOST_RGF, HOST_IMR), - 0xFFFFFFFF); + fini_cc_regs(drvdata); + cc_clk_off(drvdata); free_irq(drvdata->res_irq->start, drvdata); drvdata->res_irq = NULL; - fini_cc_regs(drvdata); - if (drvdata->cc_base != NULL) { iounmap(drvdata->cc_base); release_mem_region(drvdata->res_mem->start, @@ -524,6 +527,36 @@ static void cleanup_cc_resources(struct platform_device *plat_dev) dev_set_drvdata(&plat_dev->dev, NULL); } +int cc_clk_on(struct ssi_drvdata *drvdata) +{ + int rc = 0; + struct clk *clk = drvdata->clk; + + if (IS_ERR(clk)) + /* No all devices have a clock associated with CCREE */ + goto out; + + rc = clk_prepare_enable(clk); + if (rc) { + SSI_LOG_ERR("error enabling clock\n"); + clk_disable_unprepare(clk); + } + +out: + return rc; +} + +void cc_clk_off(struct ssi_drvdata *drvdata) +{ + struct clk *clk = drvdata->clk; + + if (IS_ERR(clk)) + /* No all devices have a clock associated with CCREE */ + return; + + clk_disable_unprepare(clk); +} + static int ccree_probe(struct platform_device *plat_dev) { int rc; diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h index 52ac43c..a59df59 100644 --- a/drivers/staging/ccree/ssi_driver.h +++ b/drivers/staging/ccree/ssi_driver.h @@ -36,6 +36,7 @@ #include 
#include #include +#include /* Registers definitions from shared/hw/ree_include */ #include "dx_reg_base_host.h" @@ -156,6 +157,7 @@ struct ssi_drvdata { enum cc_hw_rev hw_rev; u32 hash_len_sz; u32 axim_mon_offset; + struct clk *clk; }; struct ssi_crypto_alg { @@ -201,6 +203,8 @@ void dump_byte_array(const char *name, const u8 *the_array, unsigned long size); int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe); void fini_cc_regs(struct ssi_drvdata *drvdata); +int cc_clk_on(struct ssi_drvdata *drvdata); +void cc_clk_off(struct ssi_drvdata *drvdata); static inline void set_queue_last_ind(struct ssi_drvdata *drvdata, struct cc_hw_desc *pdesc) diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c index 5bfbdd0..67ae1dc 100644 --- a/drivers/staging/ccree/ssi_pm.c +++ b/drivers/staging/ccree/ssi_pm.c @@ -29,7 +29,6 @@ #include "ssi_ivgen.h" #include "ssi_hash.h" #include "ssi_pm.h" -#include "ssi_pm_ext.h" #if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP) @@ -52,9 +51,7 @@ int ssi_power_mgr_runtime_suspend(struct device *dev) return rc; } fini_cc_regs(drvdata); - - /* Specific HW suspend code */ - ssi_pm_ext_hw_suspend(dev); + cc_clk_off(drvdata); return 0; } @@ -66,8 +63,12 @@ int ssi_power_mgr_runtime_resume(struct device *dev) SSI_LOG_DEBUG("ssi_power_mgr_runtime_resume , unset HOST_POWER_DOWN_EN\n"); WRITE_REGISTER(drvdata->cc_base + CC_REG_OFFSET(HOST_RGF, HOST_POWER_DOWN_EN), POWER_DOWN_DISABLE); - /* Specific HW resume code */ - ssi_pm_ext_hw_resume(dev); + + rc = cc_clk_on(drvdata); + if (rc) { + SSI_LOG_ERR("failed getting clock back on. We're toast.\n"); + return rc; + } rc = init_cc_regs(drvdata, false); if (rc !=0) { diff --git a/drivers/staging/ccree/ssi_pm_ext.c b/drivers/staging/ccree/ssi_pm_ext.c deleted file mode 100644 index 453151c..0000000 --- a/drivers/staging/ccree/ssi_pm_ext.c +++ /dev/null @@ -1,60 +0,0 @@ -/* - * Copyright (C) 2012-2017 ARM Limited or its affiliates. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, see . - */ - - -#include "ssi_config.h" -#include -#include -#include -#include -#include -#include "ssi_driver.h" -#include "ssi_sram_mgr.h" -#include "ssi_pm_ext.h" - -/* - * This function should suspend the HW (if possiable), It should be implemented by - * the driver user. - * The reference code clears the internal SRAM to imitate lose of state. - */ -void ssi_pm_ext_hw_suspend(struct device *dev) -{ - struct ssi_drvdata *drvdata = - (struct ssi_drvdata *)dev_get_drvdata(dev); - unsigned int val; - void __iomem *cc_base = drvdata->cc_base; - unsigned int sram_addr = 0; - - CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, SRAM_ADDR), sram_addr); - - for (;sram_addr < SSI_CC_SRAM_SIZE ; sram_addr+=4) { - CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, SRAM_DATA), 0x0); - - do { - val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, SRAM_DATA_READY)); - } while (!(val &0x1)); - } -} - -/* - * This function should resume the HW (if possiable).It should be implemented by - * the driver user. 
- */ -void ssi_pm_ext_hw_resume(struct device *dev) -{ - return; -} - diff --git a/drivers/staging/ccree/ssi_pm_ext.h b/drivers/staging/ccree/ssi_pm_ext.h deleted file mode 100644 index dbe658b..0000000 --- a/drivers/staging/ccree/ssi_pm_ext.h +++ /dev/null @@ -1,33 +0,0 @@ -/* - * Copyright (C) 2012-2017 ARM Limited or its affiliates. - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, see . - */ - -/* \file ssi_pm_ext.h - */ - -#ifndef __PM_EXT_H__ -#define __PM_EXT_H__ - - -#include "ssi_config.h" -#include "ssi_driver.h" - -void ssi_pm_ext_hw_suspend(struct device *dev); - -void ssi_pm_ext_hw_resume(struct device *dev); - - -#endif /*__POWER_MGR_H__*/ - From patchwork Thu Jun 22 07:07:52 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 106152 Delivered-To: patch@linaro.org Received: by 10.140.91.2 with SMTP id y2csp2316557qgd; Thu, 22 Jun 2017 00:08:47 -0700 (PDT) X-Received: by 10.84.128.1 with SMTP id 1mr1297630pla.244.1498115327839; Thu, 22 Jun 2017 00:08:47 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1498115327; cv=none; d=google.com; s=arc-20160816; b=xxo3dj+zgTOjbGesEhkTrci9LvG4L39GthQZsNM/mvvDQ9fx28Y1JJyJWUZ4S0MMVI yGmI1jQZhyMy6UI4VSICh2GW23yKp8yT7JsWX8mhrAZ6l4TPE6bhLlTsKjAMzJf0Ghdi xLKMNVuv0TWPhTpAwkDiMC12etAvcAplxWSQAGhC2gYjgDLJB41HBvMneZLwTY4Xgzb1 jAWYAxE/2973vO84/UjaZVdVTkS+8qc2Y6Kw0GjNoNGpDH5FcNTPoZCrxLjv+6t56GyL I5aeIT10cYFLhtSZx4OPrcoV/tMyErw9QNpxYDdTWs2cy/6H+77S/dNmzGtq4aAAOaD3 a+fQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=5EDBfOuvrTxAAbiC71UlMuJKIiPu7xk7wEU6Erxw/s0=; b=MtXI3/JfyBzTd7aM0f80gUkXP5ocvL0ZHsmAb6Q8KfQdtxUIpO/OVcHcpGUy7Vr+aR 3r+C20s2kpMLCJ2UUZ7dQ7xzzum+UXKmEQ3VDhXQIuJEWn7xPMf02aCMqNQVdBdtc4q9 /dqQE40pXGDDQVbMIPSdGqk6QCXLHh8cA5ezPETBsTudS4ahCEVrpOKzVQbDvE65ct+p z1ke+ZlSupm5Fp9VQwAyozB2vR993si6IKV9aJ6kk3Vpt9ojoJY+JRn2cEJvYNwUjwdb tJEpetWfo5KKqy1gSd9KYcDTx1YlHEi9ZcqrynDEDv0cR4TKzV9sFHbP3Ku3gd1icOGj 8Rkw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id 82si533178pgg.519.2017.06.22.00.08.47; Thu, 22 Jun 2017 00:08:47 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752819AbdFVHIf (ORCPT + 25 others); Thu, 22 Jun 2017 03:08:35 -0400 Received: from foss.arm.com ([217.140.101.70]:33390 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751124AbdFVHId (ORCPT ); Thu, 22 Jun 2017 03:08:33 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4DC0515AD; Thu, 22 Jun 2017 00:08:33 -0700 (PDT) Received: from gby.kfn.arm.com (unknown [10.45.48.178]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id BB66E3F587; Thu, 22 Jun 2017 00:08:31 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Cc: Ofir Drang Subject: [PATCH 6/7] staging: ccree: add DT bus coherency detection Date: Thu, 22 Jun 2017 10:07:52 +0300 Message-Id: <1498115276-1601-7-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1498115276-1601-1-git-send-email-gilad@benyossef.com> References: <1498115276-1601-1-git-send-email-gilad@benyossef.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The ccree driver has build-time configurable support for working on top of coherent (e.g. ACP) vs. non-coherent bus connections. Turn it into a run-time configurable option based on device tree data.
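The run-time decision hinges on of_dma_is_coherent(), which reports whether the device node (or one of its parent buses) carries the dma-coherent property. A minimal sketch of the idea, not part of the patch; the my_* names are placeholders and 0xEEE mirrors the CC_COHERENT_CACHE_PARAMS value introduced below:

#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/types.h>

#define MY_COHERENT_CACHE_PARAMS 0xEEE /* AXI cache attributes for an ACP-attached CC */

struct my_drvdata {
	bool coherent;
};

/* Probe time: cache the DT answer so the data path never re-parses the tree. */
static void my_probe_coherency(struct my_drvdata *p, struct device_node *np)
{
	p->coherent = of_dma_is_coherent(np);
}

static u32 my_cache_params(const struct my_drvdata *p)
{
	return p->coherent ? MY_COHERENT_CACHE_PARAMS : 0x0;
}

A coherent integration would then just add dma-coherent; to the CryptoCell node in its DTS, while non-coherent integrations keep the software cache maintenance path (the ICV copy handling in ssi_buffer_mgr.c).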
Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/ssi_buffer_mgr.c | 37 ++++++++++++++++++---------------- drivers/staging/ccree/ssi_config.h | 20 ------------------ drivers/staging/ccree/ssi_driver.c | 14 +++++++++---- drivers/staging/ccree/ssi_driver.h | 3 +++ 4 files changed, 33 insertions(+), 41 deletions(-) -- 2.1.4 diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c index 88ebda8..75810dc 100644 --- a/drivers/staging/ccree/ssi_buffer_mgr.c +++ b/drivers/staging/ccree/ssi_buffer_mgr.c @@ -627,6 +627,9 @@ void ssi_buffer_mgr_unmap_aead_request( struct aead_req_ctx *areq_ctx = aead_request_ctx(req); unsigned int hw_iv_size = areq_ctx->hw_iv_size; struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct ssi_drvdata *drvdata = + (struct ssi_drvdata *)dev_get_drvdata(dev); + u32 dummy; bool chained; u32 size_to_unmap = 0; @@ -700,8 +703,8 @@ void ssi_buffer_mgr_unmap_aead_request( dma_unmap_sg(dev, req->dst, ssi_buffer_mgr_get_sgl_nents(req->dst,size_to_unmap,&dummy,&chained), DMA_BIDIRECTIONAL); } -#if DX_HAS_ACP - if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) && + if (drvdata->coherent && + (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) && likely(req->src == req->dst)) { u32 size_to_skip = req->assoclen; @@ -716,7 +719,6 @@ void ssi_buffer_mgr_unmap_aead_request( size_to_skip+ req->cryptlen - areq_ctx->req_authsize, size_to_skip+ req->cryptlen, SSI_SG_FROM_BUF); } -#endif } static inline int ssi_buffer_mgr_get_aead_icv_nents( @@ -981,20 +983,22 @@ static inline int ssi_buffer_mgr_prepare_aead_data_mlli( * MAC verification upon request completion */ if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) { -#if !DX_HAS_ACP - /* In ACP platform we already copying ICV - * for any INPLACE-DECRYPT operation, hence + if (!drvdata->coherent) { + /* In coherent platforms (e.g. ACP) + * already copying ICV for any + * INPLACE-DECRYPT operation, hence * we must neglect this code. 
*/ - u32 size_to_skip = req->assoclen; - if (areq_ctx->is_gcm4543) { - size_to_skip += crypto_aead_ivsize(tfm); + u32 skip = req->assoclen; + + if (areq_ctx->is_gcm4543) + skip += crypto_aead_ivsize(tfm); + + ssi_buffer_mgr_copy_scatterlist_portion( +areq_ctx->backup_mac, req->src, +(skip + req->cryptlen - areq_ctx->req_authsize), +skip + req->cryptlen, SSI_SG_TO_BUF); } - ssi_buffer_mgr_copy_scatterlist_portion( - areq_ctx->backup_mac, req->src, - size_to_skip+ req->cryptlen - areq_ctx->req_authsize, - size_to_skip+ req->cryptlen, SSI_SG_TO_BUF); -#endif areq_ctx->icv_virt_addr = areq_ctx->backup_mac; } else { areq_ctx->icv_virt_addr = areq_ctx->mac_buf; @@ -1281,8 +1285,8 @@ int ssi_buffer_mgr_map_aead_request( mlli_params->curr_pool = NULL; sg_data.num_of_buffers = 0; -#if DX_HAS_ACP - if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) && + if (drvdata->coherent && + (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) && likely(req->src == req->dst)) { u32 size_to_skip = req->assoclen; @@ -1297,7 +1301,6 @@ int ssi_buffer_mgr_map_aead_request( size_to_skip+ req->cryptlen - areq_ctx->req_authsize, size_to_skip+ req->cryptlen, SSI_SG_TO_BUF); } -#endif /* cacluate the size for cipher remove ICV in decrypt*/ areq_ctx->cryptlen = (areq_ctx->gen_ctx.op_type == diff --git a/drivers/staging/ccree/ssi_config.h b/drivers/staging/ccree/ssi_config.h index 2484a06..ff7597c 100644 --- a/drivers/staging/ccree/ssi_config.h +++ b/drivers/staging/ccree/ssi_config.h @@ -23,7 +23,6 @@ #include -//#define DISABLE_COHERENT_DMA_OPS //#define FLUSH_CACHE_ALL //#define COMPLETION_DELAY //#define DX_DUMP_DESCS @@ -33,24 +32,5 @@ //#define DX_IRQ_DELAY 100000 #define DMA_BIT_MASK_LEN 48 /* was 32 bit, but for juno's sake it was enlarged to 48 bit */ -#if defined (CONFIG_ARM64) // TODO currently only this mode was test on Juno (which is ARM64), need to enable coherent also. -#define DISABLE_COHERENT_DMA_OPS -#endif - -/* Define the CryptoCell DMA cache coherency signals configuration */ -#if defined (DISABLE_COHERENT_DMA_OPS) - /* Software Controlled Cache Coherency (SCCC) */ - #define SSI_CACHE_PARAMS (0x000) - /* CC attached to NONE-ACP such as HPP/ACE/AMBA4. - * The customer is responsible to enable/disable this feature - * according to his platform type. - */ - #define DX_HAS_ACP 0 -#else - #define SSI_CACHE_PARAMS (0xEEE) - /* CC attached to ACP */ - #define DX_HAS_ACP 1 -#endif - #endif /*__DX_CONFIG_H__*/ diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c index f1275a6..b940943 100644 --- a/drivers/staging/ccree/ssi_driver.c +++ b/drivers/staging/ccree/ssi_driver.c @@ -58,6 +58,7 @@ #include #include #include +#include #include "ssi_config.h" #include "ssi_driver.h" @@ -199,7 +200,7 @@ static irqreturn_t cc_isr(int irq, void *dev_id) int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe) { - unsigned int val; + unsigned int val, cache_params; void __iomem *cc_base = drvdata->cc_base; /* Unmask all AXI interrupt sources AXI_CFG1 register */ @@ -232,14 +233,18 @@ int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe) } #endif + cache_params = (drvdata->coherent ? 
CC_COHERENT_CACHE_PARAMS : 0x0); + val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CACHE_PARAMS)); if (is_probe == true) { SSI_LOG_INFO("Cache params previous: 0x%08X\n", val); } - CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CACHE_PARAMS), SSI_CACHE_PARAMS); + CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CACHE_PARAMS), + cache_params); val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(CRY_KERNEL, AXIM_CACHE_PARAMS)); if (is_probe == true) { - SSI_LOG_INFO("Cache params current: 0x%08X (expected: 0x%08X)\n", val, SSI_CACHE_PARAMS); + SSI_LOG_INFO("Cache params current: 0x%08X (expect: 0x%08X)\n", + val, cache_params); } return 0; @@ -271,6 +276,7 @@ static int init_cc_resources(struct platform_device *plat_dev) new_drvdata->hw_rev_name = hw_rev->name; new_drvdata->hw_rev = hw_rev->rev; new_drvdata->clk = of_clk_get(np, 0); + new_drvdata->coherent = of_dma_is_coherent(np); if (hw_rev->rev >= CC_HW_REV_712) { new_drvdata->hash_len_sz = HASH_LEN_SIZE_712; @@ -533,7 +539,7 @@ int cc_clk_on(struct ssi_drvdata *drvdata) struct clk *clk = drvdata->clk; if (IS_ERR(clk)) - /* No all devices have a clock associated with CCREE */ + /* Not all devices have a clock associated with CCREE */ goto out; rc = clk_prepare_enable(clk); diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h index a59df59..05f3c27 100644 --- a/drivers/staging/ccree/ssi_driver.h +++ b/drivers/staging/ccree/ssi_driver.h @@ -59,6 +59,8 @@ enum cc_hw_rev { CC_HW_REV_712 = 712 }; +#define CC_COHERENT_CACHE_PARAMS 0xEEE + #define SSI_CC_HAS_AES_CCM 1 #define SSI_CC_HAS_AES_GCM 1 #define SSI_CC_HAS_AES_XTS 1 @@ -158,6 +160,7 @@ struct ssi_drvdata { u32 hash_len_sz; u32 axim_mon_offset; struct clk *clk; + bool coherent; }; struct ssi_crypto_alg { From patchwork Thu Jun 22 07:07:53 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gilad Ben-Yossef X-Patchwork-Id: 106154 Delivered-To: patch@linaro.org Received: by 10.140.91.2 with SMTP id y2csp2316768qgd; Thu, 22 Jun 2017 00:09:19 -0700 (PDT) X-Received: by 10.98.1.72 with SMTP id 69mr1166235pfb.124.1498115359827; Thu, 22 Jun 2017 00:09:19 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1498115359; cv=none; d=google.com; s=arc-20160816; b=avm5tg4nnqaVsVIQBfSGpp8SFzV9KAn74NJ3bkQjwbw/uebIJBEEnyGzR22c+F7A79 hcwvii4Ru+Y7ICFAzOudKis7Rd5wepBObYCs70o+6EmvMpzCpbWtCTfY17iWPDgUcTOb 1Nuj0hXmc+agSCJOTdEFTGMkkAO3QgXAOIuYIqMCpsHbtHwkEg28/WqoOW3qkiR3vmk6 xhY1fYGPKrvx1tNjyLeHartSIhd6353ce4oL4tDR3cNRLXayPBRVdu2DOOZH0a2snT9X Qmkf+l7QPLQMe2YyJUmws7TPyYgVZQvGBDu8bfhHTs86oDIwK2m5csGGFdvRFtzjWv9O cp2Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:arc-authentication-results; bh=PbPmyjhJr7299xtQRFvB8DfEY/XV7BxUnve2Z3+PyHU=; b=Oh2wfaqlF1a3ESsjb3TbAp+DM2Br2qlVLHpacru4oLEYFBRioNOOdI/HIQHhSVU6wu xS8aSk3uVtRXpoqBsJjC9NO8UIxfKpexK55KLlEVbkcYinu6GNW7n3XNc86fBcZyj2Is MFFUtfhIOSqGGa2fRBrD48nQITz+BXchmNiXW/nzBRjXySslJ/A6lENJV/lohjkuFWCj Z85zCGeSx/6NUVQvGoJJbUgsE5ghZC98JJ2qWyVr2oeks9wyK90yPkIZlMEYRO/+wU8b 3sbti3LfLovjlFPxGgtgjx6VkGAI/5FJFU01gcy7NoEny+RTOY8sKnAEdEFcAT1eZM/c qmSw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id l20si544151pgu.585.2017.06.22.00.09.19; Thu, 22 Jun 2017 00:09:19 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752855AbdFVHI4 (ORCPT + 25 others); Thu, 22 Jun 2017 03:08:56 -0400 Received: from foss.arm.com ([217.140.101.70]:33404 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751124AbdFVHIx (ORCPT ); Thu, 22 Jun 2017 03:08:53 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 15B9215AD; Thu, 22 Jun 2017 00:08:38 -0700 (PDT) Received: from gby.kfn.arm.com (unknown [10.45.48.178]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 16DC73F587; Thu, 22 Jun 2017 00:08:35 -0700 (PDT) From: Gilad Ben-Yossef To: Greg Kroah-Hartman , linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org Cc: Ofir Drang Subject: [PATCH 7/7] staging: ccree: use signal safe completion wait Date: Thu, 22 Jun 2017 10:07:53 +0300 Message-Id: <1498115276-1601-8-git-send-email-gilad@benyossef.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1498115276-1601-1-git-send-email-gilad@benyossef.com> References: <1498115276-1601-1-git-send-email-gilad@benyossef.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org We were waiting for a completion notification of a HW DMA operation using an interruptible wait, which can result in data corruption if a signal interrupted us while the DMA had not yet completed. Fix this by moving to an uninterruptible wait. Fixes: abefd6741d ("staging: ccree: introduce CryptoCell HW driver"). Signed-off-by: Gilad Ben-Yossef --- drivers/staging/ccree/ssi_request_mgr.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) -- 2.1.4 diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c index cffc8de..87f5ab6 100644 --- a/drivers/staging/ccree/ssi_request_mgr.c +++ b/drivers/staging/ccree/ssi_request_mgr.c @@ -382,7 +382,8 @@ int send_request( /* Wait upon sequence completion. * Return "0" -Operation done successfully. */ - return wait_for_completion_interruptible(&ssi_req->seq_compl); + wait_for_completion(&ssi_req->seq_compl); + return 0; } else { /* Operation still in process */ return -EINPROGRESS;
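For reference, a minimal sketch of the difference between the two wait flavors, not part of the patch and using placeholder my_* names: wait_for_completion_interruptible() can return -ERESTARTSYS as soon as a signal is pending, so a caller that then tears down or reuses the DMA buffers races with the still-running transfer, while wait_for_completion() only returns once complete() has been called.

#include <linux/completion.h>

struct my_req {
	struct completion seq_compl; /* completed from the ISR/tasklet path */
};

/* Racy: may return -ERESTARTSYS while the HW still owns the buffers. */
static int my_wait_racy(struct my_req *req)
{
	return wait_for_completion_interruptible(&req->seq_compl);
}

/* Safe: only returns after complete(&req->seq_compl) has run. */
static int my_wait_safe(struct my_req *req)
{
	wait_for_completion(&req->seq_compl);
	return 0;
}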