From patchwork Tue May 13 06:03:48 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889704
Date: Tue, 13 May 2025 14:03:48 +0800
Message-Id: <1a9548c383795a025076de7f6ee4fc465c287369.1747116129.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 01/11] crypto: aspeed/hash - Remove purely software hmac implementation
To: Linux Crypto Mailing List
Cc: Neal Liu , linux-aspeed@lists.ozlabs.org

The hmac implementation in aspeed simply duplicates what the generic ahash hmac template already does, namely constructing ipad and opad by hand and then adding them to the hash before
feeding it to the engine. Remove them and just use the generic ahash hmac template. Signed-off-by: Herbert Xu --- drivers/crypto/aspeed/aspeed-hace-hash.c | 335 ----------------------- drivers/crypto/aspeed/aspeed-hace.h | 10 - 2 files changed, 345 deletions(-) diff --git a/drivers/crypto/aspeed/aspeed-hace-hash.c b/drivers/crypto/aspeed/aspeed-hace-hash.c index 0b6e49c06eff..0bcec2d40ed6 100644 --- a/drivers/crypto/aspeed/aspeed-hace-hash.c +++ b/drivers/crypto/aspeed/aspeed-hace-hash.c @@ -5,7 +5,6 @@ #include "aspeed-hace.h" #include -#include #include #include #include @@ -338,70 +337,6 @@ static int aspeed_hace_ahash_trigger(struct aspeed_hace_dev *hace_dev, return -EINPROGRESS; } -/* - * HMAC resume aims to do the second pass produces - * the final HMAC code derived from the inner hash - * result and the outer key. - */ -static int aspeed_ahash_hmac_resume(struct aspeed_hace_dev *hace_dev) -{ - struct aspeed_engine_hash *hash_engine = &hace_dev->hash_engine; - struct ahash_request *req = hash_engine->req; - struct aspeed_sham_reqctx *rctx = ahash_request_ctx(req); - struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - struct aspeed_sham_ctx *tctx = crypto_ahash_ctx(tfm); - struct aspeed_sha_hmac_ctx *bctx = tctx->base; - int rc = 0; - - AHASH_DBG(hace_dev, "\n"); - - dma_unmap_single(hace_dev->dev, rctx->digest_dma_addr, - SHA512_DIGEST_SIZE, DMA_BIDIRECTIONAL); - - dma_unmap_single(hace_dev->dev, rctx->buffer_dma_addr, - rctx->block_size * 2, DMA_TO_DEVICE); - - /* o key pad + hash sum 1 */ - memcpy(rctx->buffer, bctx->opad, rctx->block_size); - memcpy(rctx->buffer + rctx->block_size, rctx->digest, rctx->digsize); - - rctx->bufcnt = rctx->block_size + rctx->digsize; - rctx->digcnt[0] = rctx->block_size + rctx->digsize; - - aspeed_ahash_fill_padding(hace_dev, rctx); - memcpy(rctx->digest, rctx->sha_iv, rctx->ivsize); - - rctx->digest_dma_addr = dma_map_single(hace_dev->dev, rctx->digest, - SHA512_DIGEST_SIZE, - DMA_BIDIRECTIONAL); - if (dma_mapping_error(hace_dev->dev, rctx->digest_dma_addr)) { - dev_warn(hace_dev->dev, "dma_map() rctx digest error\n"); - rc = -ENOMEM; - goto end; - } - - rctx->buffer_dma_addr = dma_map_single(hace_dev->dev, rctx->buffer, - rctx->block_size * 2, - DMA_TO_DEVICE); - if (dma_mapping_error(hace_dev->dev, rctx->buffer_dma_addr)) { - dev_warn(hace_dev->dev, "dma_map() rctx buffer error\n"); - rc = -ENOMEM; - goto free_rctx_digest; - } - - hash_engine->src_dma = rctx->buffer_dma_addr; - hash_engine->src_length = rctx->bufcnt; - hash_engine->digest_dma = rctx->digest_dma_addr; - - return aspeed_hace_ahash_trigger(hace_dev, aspeed_ahash_transfer); - -free_rctx_digest: - dma_unmap_single(hace_dev->dev, rctx->digest_dma_addr, - SHA512_DIGEST_SIZE, DMA_BIDIRECTIONAL); -end: - return rc; -} - static int aspeed_ahash_req_final(struct aspeed_hace_dev *hace_dev) { struct aspeed_engine_hash *hash_engine = &hace_dev->hash_engine; @@ -437,10 +372,6 @@ static int aspeed_ahash_req_final(struct aspeed_hace_dev *hace_dev) hash_engine->src_length = rctx->bufcnt; hash_engine->digest_dma = rctx->digest_dma_addr; - if (rctx->flags & SHA_FLAGS_HMAC) - return aspeed_hace_ahash_trigger(hace_dev, - aspeed_ahash_hmac_resume); - return aspeed_hace_ahash_trigger(hace_dev, aspeed_ahash_transfer); free_rctx_digest: @@ -609,16 +540,6 @@ static int aspeed_sham_update(struct ahash_request *req) return aspeed_hace_hash_handle_queue(hace_dev, req); } -static int aspeed_sham_shash_digest(struct crypto_shash *tfm, u32 flags, - const u8 *data, unsigned int len, u8 *out) -{ - 
SHASH_DESC_ON_STACK(shash, tfm); - - shash->tfm = tfm; - - return crypto_shash_digest(shash, data, len, out); -} - static int aspeed_sham_final(struct ahash_request *req) { struct aspeed_sham_reqctx *rctx = ahash_request_ctx(req); @@ -664,7 +585,6 @@ static int aspeed_sham_init(struct ahash_request *req) struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct aspeed_sham_ctx *tctx = crypto_ahash_ctx(tfm); struct aspeed_hace_dev *hace_dev = tctx->hace_dev; - struct aspeed_sha_hmac_ctx *bctx = tctx->base; AHASH_DBG(hace_dev, "%s: digest size:%d\n", crypto_tfm_alg_name(&tfm->base), @@ -732,14 +652,6 @@ static int aspeed_sham_init(struct ahash_request *req) rctx->digcnt[0] = 0; rctx->digcnt[1] = 0; - /* HMAC init */ - if (tctx->flags & SHA_FLAGS_HMAC) { - rctx->digcnt[0] = rctx->block_size; - rctx->bufcnt = rctx->block_size; - memcpy(rctx->buffer, bctx->ipad, rctx->block_size); - rctx->flags |= SHA_FLAGS_HMAC; - } - return 0; } @@ -748,43 +660,6 @@ static int aspeed_sham_digest(struct ahash_request *req) return aspeed_sham_init(req) ? : aspeed_sham_finup(req); } -static int aspeed_sham_setkey(struct crypto_ahash *tfm, const u8 *key, - unsigned int keylen) -{ - struct aspeed_sham_ctx *tctx = crypto_ahash_ctx(tfm); - struct aspeed_hace_dev *hace_dev = tctx->hace_dev; - struct aspeed_sha_hmac_ctx *bctx = tctx->base; - int ds = crypto_shash_digestsize(bctx->shash); - int bs = crypto_shash_blocksize(bctx->shash); - int err = 0; - int i; - - AHASH_DBG(hace_dev, "%s: keylen:%d\n", crypto_tfm_alg_name(&tfm->base), - keylen); - - if (keylen > bs) { - err = aspeed_sham_shash_digest(bctx->shash, - crypto_shash_get_flags(bctx->shash), - key, keylen, bctx->ipad); - if (err) - return err; - keylen = ds; - - } else { - memcpy(bctx->ipad, key, keylen); - } - - memset(bctx->ipad + keylen, 0, bs - keylen); - memcpy(bctx->opad, bctx->ipad, bs); - - for (i = 0; i < bs; i++) { - bctx->ipad[i] ^= HMAC_IPAD_VALUE; - bctx->opad[i] ^= HMAC_OPAD_VALUE; - } - - return err; -} - static int aspeed_sham_cra_init(struct crypto_tfm *tfm) { struct ahash_alg *alg = __crypto_ahash_alg(tfm->__crt_alg); @@ -793,43 +668,13 @@ static int aspeed_sham_cra_init(struct crypto_tfm *tfm) ast_alg = container_of(alg, struct aspeed_hace_alg, alg.ahash.base); tctx->hace_dev = ast_alg->hace_dev; - tctx->flags = 0; crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), sizeof(struct aspeed_sham_reqctx)); - if (ast_alg->alg_base) { - /* hmac related */ - struct aspeed_sha_hmac_ctx *bctx = tctx->base; - - tctx->flags |= SHA_FLAGS_HMAC; - bctx->shash = crypto_alloc_shash(ast_alg->alg_base, 0, - CRYPTO_ALG_NEED_FALLBACK); - if (IS_ERR(bctx->shash)) { - dev_warn(ast_alg->hace_dev->dev, - "base driver '%s' could not be loaded.\n", - ast_alg->alg_base); - return PTR_ERR(bctx->shash); - } - } - return 0; } -static void aspeed_sham_cra_exit(struct crypto_tfm *tfm) -{ - struct aspeed_sham_ctx *tctx = crypto_tfm_ctx(tfm); - struct aspeed_hace_dev *hace_dev = tctx->hace_dev; - - AHASH_DBG(hace_dev, "%s\n", crypto_tfm_alg_name(tfm)); - - if (tctx->flags & SHA_FLAGS_HMAC) { - struct aspeed_sha_hmac_ctx *bctx = tctx->base; - - crypto_free_shash(bctx->shash); - } -} - static int aspeed_sham_export(struct ahash_request *req, void *out) { struct aspeed_sham_reqctx *rctx = ahash_request_ctx(req); @@ -873,7 +718,6 @@ static struct aspeed_hace_alg aspeed_ahash_algs[] = { .cra_alignmask = 0, .cra_module = THIS_MODULE, .cra_init = aspeed_sham_cra_init, - .cra_exit = aspeed_sham_cra_exit, } } }, @@ -905,7 +749,6 @@ static struct aspeed_hace_alg aspeed_ahash_algs[] = 
{ .cra_alignmask = 0, .cra_module = THIS_MODULE, .cra_init = aspeed_sham_cra_init, - .cra_exit = aspeed_sham_cra_exit, } } }, @@ -937,112 +780,6 @@ static struct aspeed_hace_alg aspeed_ahash_algs[] = { .cra_alignmask = 0, .cra_module = THIS_MODULE, .cra_init = aspeed_sham_cra_init, - .cra_exit = aspeed_sham_cra_exit, - } - } - }, - .alg.ahash.op = { - .do_one_request = aspeed_ahash_do_one, - }, - }, - { - .alg_base = "sha1", - .alg.ahash.base = { - .init = aspeed_sham_init, - .update = aspeed_sham_update, - .final = aspeed_sham_final, - .finup = aspeed_sham_finup, - .digest = aspeed_sham_digest, - .setkey = aspeed_sham_setkey, - .export = aspeed_sham_export, - .import = aspeed_sham_import, - .halg = { - .digestsize = SHA1_DIGEST_SIZE, - .statesize = sizeof(struct aspeed_sham_reqctx), - .base = { - .cra_name = "hmac(sha1)", - .cra_driver_name = "aspeed-hmac-sha1", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_AHASH | - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = SHA1_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct aspeed_sham_ctx) + - sizeof(struct aspeed_sha_hmac_ctx), - .cra_alignmask = 0, - .cra_module = THIS_MODULE, - .cra_init = aspeed_sham_cra_init, - .cra_exit = aspeed_sham_cra_exit, - } - } - }, - .alg.ahash.op = { - .do_one_request = aspeed_ahash_do_one, - }, - }, - { - .alg_base = "sha224", - .alg.ahash.base = { - .init = aspeed_sham_init, - .update = aspeed_sham_update, - .final = aspeed_sham_final, - .finup = aspeed_sham_finup, - .digest = aspeed_sham_digest, - .setkey = aspeed_sham_setkey, - .export = aspeed_sham_export, - .import = aspeed_sham_import, - .halg = { - .digestsize = SHA224_DIGEST_SIZE, - .statesize = sizeof(struct aspeed_sham_reqctx), - .base = { - .cra_name = "hmac(sha224)", - .cra_driver_name = "aspeed-hmac-sha224", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_AHASH | - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = SHA224_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct aspeed_sham_ctx) + - sizeof(struct aspeed_sha_hmac_ctx), - .cra_alignmask = 0, - .cra_module = THIS_MODULE, - .cra_init = aspeed_sham_cra_init, - .cra_exit = aspeed_sham_cra_exit, - } - } - }, - .alg.ahash.op = { - .do_one_request = aspeed_ahash_do_one, - }, - }, - { - .alg_base = "sha256", - .alg.ahash.base = { - .init = aspeed_sham_init, - .update = aspeed_sham_update, - .final = aspeed_sham_final, - .finup = aspeed_sham_finup, - .digest = aspeed_sham_digest, - .setkey = aspeed_sham_setkey, - .export = aspeed_sham_export, - .import = aspeed_sham_import, - .halg = { - .digestsize = SHA256_DIGEST_SIZE, - .statesize = sizeof(struct aspeed_sham_reqctx), - .base = { - .cra_name = "hmac(sha256)", - .cra_driver_name = "aspeed-hmac-sha256", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_AHASH | - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = SHA256_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct aspeed_sham_ctx) + - sizeof(struct aspeed_sha_hmac_ctx), - .cra_alignmask = 0, - .cra_module = THIS_MODULE, - .cra_init = aspeed_sham_cra_init, - .cra_exit = aspeed_sham_cra_exit, } } }, @@ -1077,7 +814,6 @@ static struct aspeed_hace_alg aspeed_ahash_algs_g6[] = { .cra_alignmask = 0, .cra_module = THIS_MODULE, .cra_init = aspeed_sham_cra_init, - .cra_exit = aspeed_sham_cra_exit, } } }, @@ -1109,77 +845,6 @@ static struct aspeed_hace_alg aspeed_ahash_algs_g6[] = { .cra_alignmask = 0, .cra_module = THIS_MODULE, .cra_init = aspeed_sham_cra_init, - .cra_exit = aspeed_sham_cra_exit, - } - } - }, - .alg.ahash.op = { - .do_one_request = 
aspeed_ahash_do_one, - }, - }, - { - .alg_base = "sha384", - .alg.ahash.base = { - .init = aspeed_sham_init, - .update = aspeed_sham_update, - .final = aspeed_sham_final, - .finup = aspeed_sham_finup, - .digest = aspeed_sham_digest, - .setkey = aspeed_sham_setkey, - .export = aspeed_sham_export, - .import = aspeed_sham_import, - .halg = { - .digestsize = SHA384_DIGEST_SIZE, - .statesize = sizeof(struct aspeed_sham_reqctx), - .base = { - .cra_name = "hmac(sha384)", - .cra_driver_name = "aspeed-hmac-sha384", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_AHASH | - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = SHA384_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct aspeed_sham_ctx) + - sizeof(struct aspeed_sha_hmac_ctx), - .cra_alignmask = 0, - .cra_module = THIS_MODULE, - .cra_init = aspeed_sham_cra_init, - .cra_exit = aspeed_sham_cra_exit, - } - } - }, - .alg.ahash.op = { - .do_one_request = aspeed_ahash_do_one, - }, - }, - { - .alg_base = "sha512", - .alg.ahash.base = { - .init = aspeed_sham_init, - .update = aspeed_sham_update, - .final = aspeed_sham_final, - .finup = aspeed_sham_finup, - .digest = aspeed_sham_digest, - .setkey = aspeed_sham_setkey, - .export = aspeed_sham_export, - .import = aspeed_sham_import, - .halg = { - .digestsize = SHA512_DIGEST_SIZE, - .statesize = sizeof(struct aspeed_sham_reqctx), - .base = { - .cra_name = "hmac(sha512)", - .cra_driver_name = "aspeed-hmac-sha512", - .cra_priority = 300, - .cra_flags = CRYPTO_ALG_TYPE_AHASH | - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_KERN_DRIVER_ONLY, - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct aspeed_sham_ctx) + - sizeof(struct aspeed_sha_hmac_ctx), - .cra_alignmask = 0, - .cra_module = THIS_MODULE, - .cra_init = aspeed_sham_cra_init, - .cra_exit = aspeed_sham_cra_exit, } } }, diff --git a/drivers/crypto/aspeed/aspeed-hace.h b/drivers/crypto/aspeed/aspeed-hace.h index 68f70e01fccb..7ff1798bc198 100644 --- a/drivers/crypto/aspeed/aspeed-hace.h +++ b/drivers/crypto/aspeed/aspeed-hace.h @@ -119,7 +119,6 @@ #define SHA_FLAGS_SHA512 BIT(4) #define SHA_FLAGS_SHA512_224 BIT(5) #define SHA_FLAGS_SHA512_256 BIT(6) -#define SHA_FLAGS_HMAC BIT(8) #define SHA_FLAGS_FINUP BIT(9) #define SHA_FLAGS_MASK (0xff) @@ -161,17 +160,8 @@ struct aspeed_engine_hash { aspeed_hace_fn_t dma_prepare; }; -struct aspeed_sha_hmac_ctx { - struct crypto_shash *shash; - u8 ipad[SHA512_BLOCK_SIZE]; - u8 opad[SHA512_BLOCK_SIZE]; -}; - struct aspeed_sham_ctx { struct aspeed_hace_dev *hace_dev; - unsigned long flags; /* hmac flag */ - - struct aspeed_sha_hmac_ctx base[]; }; struct aspeed_sham_reqctx {
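With the driver-private hmac code gone, "hmac(sha1)", "hmac(sha256)" and friends are expected to be served by the generic ahash hmac template wrapped around the driver's plain hashes, so callers keep using the same algorithm names. As a hedged illustration only (the function below and its buffer handling are invented for this sketch and are not part of the patch), a kernel user would request the keyed hash exactly as before:

#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Sketch: one-shot HMAC-SHA256 over a linear buffer via the ahash API. */
static int example_hmac_sha256(const u8 *key, unsigned int keylen,
			       const void *data, unsigned int len,
			       u8 *digest)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	/* Resolved by the crypto API; may land on hardware or software. */
	tfm = crypto_alloc_ahash("hmac(sha256)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_ahash_setkey(tfm, key, keylen);
	if (err)
		goto out_free_tfm;

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, data, len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				   CRYPTO_TFM_REQ_MAY_SLEEP,
				   crypto_req_done, &wait);
	ahash_request_set_crypt(req, &sg, digest, len);

	err = crypto_wait_req(crypto_ahash_digest(req), &wait);

	ahash_request_free(req);
out_free_tfm:
	crypto_free_ahash(tfm);
	return err;
}

Because the template resolution happens inside the crypto API, existing users of the hmac(...) names should not need any changes.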
From patchwork Tue May 13 06:03:52 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889705
Date: Tue, 13 May 2025 14:03:52 +0800
Message-Id: <03bd7038c76f33a0cdb76a13ca8b377c0b6c5a3a.1747116129.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 03/11] crypto: aspeed/hash - Use init_tfm instead of cra_init
To: Linux Crypto Mailing List
Cc: Neal Liu , linux-aspeed@lists.ozlabs.org

Use the init_tfm interface instead of cra_init. Also get rid of the dynamic reqsize.

Signed-off-by: Herbert Xu
---
 drivers/crypto/aspeed/aspeed-hace-hash.c | 24 +++++++++++++----------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/drivers/crypto/aspeed/aspeed-hace-hash.c b/drivers/crypto/aspeed/aspeed-hace-hash.c index 0bcec2d40ed6..4a479a204331 100644 --- a/drivers/crypto/aspeed/aspeed-hace-hash.c +++ b/drivers/crypto/aspeed/aspeed-hace-hash.c @@ -660,18 +660,15 @@ static int aspeed_sham_digest(struct ahash_request *req) return aspeed_sham_init(req) ?
: aspeed_sham_finup(req); } -static int aspeed_sham_cra_init(struct crypto_tfm *tfm) +static int aspeed_sham_cra_init(struct crypto_ahash *tfm) { - struct ahash_alg *alg = __crypto_ahash_alg(tfm->__crt_alg); - struct aspeed_sham_ctx *tctx = crypto_tfm_ctx(tfm); + struct ahash_alg *alg = crypto_ahash_alg(tfm); + struct aspeed_sham_ctx *tctx = crypto_ahash_ctx(tfm); struct aspeed_hace_alg *ast_alg; ast_alg = container_of(alg, struct aspeed_hace_alg, alg.ahash.base); tctx->hace_dev = ast_alg->hace_dev; - crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), - sizeof(struct aspeed_sham_reqctx)); - return 0; } @@ -703,6 +700,7 @@ static struct aspeed_hace_alg aspeed_ahash_algs[] = { .digest = aspeed_sham_digest, .export = aspeed_sham_export, .import = aspeed_sham_import, + .init_tfm = aspeed_sham_cra_init, .halg = { .digestsize = SHA1_DIGEST_SIZE, .statesize = sizeof(struct aspeed_sham_reqctx), @@ -715,9 +713,9 @@ static struct aspeed_hace_alg aspeed_ahash_algs[] = { CRYPTO_ALG_KERN_DRIVER_ONLY, .cra_blocksize = SHA1_BLOCK_SIZE, .cra_ctxsize = sizeof(struct aspeed_sham_ctx), + .cra_reqsize = sizeof(struct aspeed_sham_reqctx), .cra_alignmask = 0, .cra_module = THIS_MODULE, - .cra_init = aspeed_sham_cra_init, } } }, @@ -734,6 +732,7 @@ static struct aspeed_hace_alg aspeed_ahash_algs[] = { .digest = aspeed_sham_digest, .export = aspeed_sham_export, .import = aspeed_sham_import, + .init_tfm = aspeed_sham_cra_init, .halg = { .digestsize = SHA256_DIGEST_SIZE, .statesize = sizeof(struct aspeed_sham_reqctx), @@ -746,9 +745,9 @@ static struct aspeed_hace_alg aspeed_ahash_algs[] = { CRYPTO_ALG_KERN_DRIVER_ONLY, .cra_blocksize = SHA256_BLOCK_SIZE, .cra_ctxsize = sizeof(struct aspeed_sham_ctx), + .cra_reqsize = sizeof(struct aspeed_sham_reqctx), .cra_alignmask = 0, .cra_module = THIS_MODULE, - .cra_init = aspeed_sham_cra_init, } } }, @@ -765,6 +764,7 @@ static struct aspeed_hace_alg aspeed_ahash_algs[] = { .digest = aspeed_sham_digest, .export = aspeed_sham_export, .import = aspeed_sham_import, + .init_tfm = aspeed_sham_cra_init, .halg = { .digestsize = SHA224_DIGEST_SIZE, .statesize = sizeof(struct aspeed_sham_reqctx), @@ -777,9 +777,9 @@ static struct aspeed_hace_alg aspeed_ahash_algs[] = { CRYPTO_ALG_KERN_DRIVER_ONLY, .cra_blocksize = SHA224_BLOCK_SIZE, .cra_ctxsize = sizeof(struct aspeed_sham_ctx), + .cra_reqsize = sizeof(struct aspeed_sham_reqctx), .cra_alignmask = 0, .cra_module = THIS_MODULE, - .cra_init = aspeed_sham_cra_init, } } }, @@ -799,6 +799,7 @@ static struct aspeed_hace_alg aspeed_ahash_algs_g6[] = { .digest = aspeed_sham_digest, .export = aspeed_sham_export, .import = aspeed_sham_import, + .init_tfm = aspeed_sham_cra_init, .halg = { .digestsize = SHA384_DIGEST_SIZE, .statesize = sizeof(struct aspeed_sham_reqctx), @@ -811,9 +812,9 @@ static struct aspeed_hace_alg aspeed_ahash_algs_g6[] = { CRYPTO_ALG_KERN_DRIVER_ONLY, .cra_blocksize = SHA384_BLOCK_SIZE, .cra_ctxsize = sizeof(struct aspeed_sham_ctx), + .cra_reqsize = sizeof(struct aspeed_sham_reqctx), .cra_alignmask = 0, .cra_module = THIS_MODULE, - .cra_init = aspeed_sham_cra_init, } } }, @@ -830,6 +831,7 @@ static struct aspeed_hace_alg aspeed_ahash_algs_g6[] = { .digest = aspeed_sham_digest, .export = aspeed_sham_export, .import = aspeed_sham_import, + .init_tfm = aspeed_sham_cra_init, .halg = { .digestsize = SHA512_DIGEST_SIZE, .statesize = sizeof(struct aspeed_sham_reqctx), @@ -842,9 +844,9 @@ static struct aspeed_hace_alg aspeed_ahash_algs_g6[] = { CRYPTO_ALG_KERN_DRIVER_ONLY, .cra_blocksize = SHA512_BLOCK_SIZE, .cra_ctxsize = 
sizeof(struct aspeed_sham_ctx), + .cra_reqsize = sizeof(struct aspeed_sham_reqctx), .cra_alignmask = 0, .cra_module = THIS_MODULE, - .cra_init = aspeed_sham_cra_init, } } },
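The conversion above swaps the untyped cra_init hook for the ahash-specific init_tfm callback and declares the request context size statically through cra_reqsize. As a hedged before/after sketch of that interface only (the example_* names and contexts are invented, not the driver's code):

#include <crypto/internal/hash.h>

struct example_tfm_ctx {
	void *hw;			/* illustrative per-transform state */
};

struct example_req_ctx {
	unsigned int count;		/* illustrative per-request state */
};

/* Old style: untyped crypto_tfm, casts, request size set at runtime. */
static int example_cra_init(struct crypto_tfm *tfm)
{
	struct example_tfm_ctx *ctx = crypto_tfm_ctx(tfm);

	ctx->hw = NULL;
	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
				 sizeof(struct example_req_ctx));
	return 0;
}

/* New style: typed crypto_ahash; the size comes from .cra_reqsize. */
static int example_init_tfm(struct crypto_ahash *tfm)
{
	struct example_tfm_ctx *ctx = crypto_ahash_ctx(tfm);

	ctx->hw = NULL;
	return 0;
}

The typed hook removes the casts, and a constant .cra_reqsize lets the core size requests without a per-transform call.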
From patchwork Tue May 13 06:03:57 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889703
Date: Tue, 13 May 2025 14:03:57 +0800
Message-Id: <889c7e7d5a3e16da52ac39ecafaab2a4a2cf22dc.1747116129.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 05/11] crypto: aspeed/hash - Move sham_final call into sham_update
To: Linux Crypto Mailing List
Cc: Neal Liu , linux-aspeed@lists.ozlabs.org

The only time sham_final needs to be called from sham_finup is when the finup request fits into the partial block. Move this special handling into sham_update.

The comment about releasing resources is nonsense. The Crypto API does not mandate the use of final, so the user could always go away after an update and never come back. Therefore the driver must not hold any resources after an update call.

Signed-off-by: Herbert Xu
---
 drivers/crypto/aspeed/aspeed-hace-hash.c | 44 +++++++++--------------- 1 file changed, 17 insertions(+), 27 deletions(-) diff --git a/drivers/crypto/aspeed/aspeed-hace-hash.c b/drivers/crypto/aspeed/aspeed-hace-hash.c index 9f776ec8f5ec..40363159489e 100644 --- a/drivers/crypto/aspeed/aspeed-hace-hash.c +++ b/drivers/crypto/aspeed/aspeed-hace-hash.c @@ -508,6 +508,20 @@ static int aspeed_ahash_do_one(struct crypto_engine *engine, void *areq) return aspeed_ahash_do_request(engine, areq); } +static int aspeed_sham_final(struct ahash_request *req) +{ + struct aspeed_sham_reqctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct aspeed_sham_ctx *tctx = crypto_ahash_ctx(tfm); + struct aspeed_hace_dev *hace_dev = tctx->hace_dev; + + AHASH_DBG(hace_dev, "req->nbytes:%d, rctx->total:%d\n", + req->nbytes, rctx->total); + rctx->op = SHA_OP_FINAL; + + return aspeed_hace_hash_handle_queue(hace_dev, req); +} + static int aspeed_sham_update(struct ahash_request *req) { struct aspeed_sham_reqctx *rctx = ahash_request_ctx(req); @@ -533,49 +547,25 @@ static int aspeed_sham_update(struct ahash_request *req) rctx->total, 0); rctx->bufcnt += rctx->total; - return 0; + return rctx->flags & SHA_FLAGS_FINUP ? + aspeed_sham_final(req) : 0; } return aspeed_hace_hash_handle_queue(hace_dev, req); } -static int aspeed_sham_final(struct ahash_request *req) -{ - struct aspeed_sham_reqctx *rctx = ahash_request_ctx(req); - struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - struct aspeed_sham_ctx *tctx = crypto_ahash_ctx(tfm); - struct aspeed_hace_dev *hace_dev = tctx->hace_dev; - - AHASH_DBG(hace_dev, "req->nbytes:%d, rctx->total:%d\n", - req->nbytes, rctx->total); - rctx->op = SHA_OP_FINAL; - - return aspeed_hace_hash_handle_queue(hace_dev, req); -} - static int aspeed_sham_finup(struct ahash_request *req) { struct aspeed_sham_reqctx *rctx = ahash_request_ctx(req); struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct aspeed_sham_ctx *tctx = crypto_ahash_ctx(tfm); struct aspeed_hace_dev *hace_dev = tctx->hace_dev; - int rc1, rc2; AHASH_DBG(hace_dev, "req->nbytes: %d\n", req->nbytes); rctx->flags |= SHA_FLAGS_FINUP; - rc1 = aspeed_sham_update(req); - if (rc1 == -EINPROGRESS || rc1 == -EBUSY) - return rc1; - - /* - * final() has to be always called to cleanup resources - * even if update() failed, except EINPROGRESS - */ - rc2 = aspeed_sham_final(req); - - return rc1 ? : rc2; + return aspeed_sham_update(req); } static int aspeed_sham_init(struct ahash_request *req)
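For context, the API contract the patch leans on can be illustrated with a small, invented caller (this sketch assumes the request already has its tfm and completion callback wired up, for example with crypto_req_done and a crypto_wait): update() may be the last call the driver ever sees, while finup() rolls the final update and finalization into one request.

#include <crypto/hash.h>
#include <linux/scatterlist.h>

static int example_incremental_sha256(struct ahash_request *req,
				      struct scatterlist *part1,
				      unsigned int len1,
				      struct scatterlist *part2,
				      unsigned int len2,
				      u8 *digest,
				      struct crypto_wait *wait)
{
	int err;

	err = crypto_wait_req(crypto_ahash_init(req), wait);
	if (err)
		return err;

	/* First chunk: update() only; no final() is guaranteed to follow. */
	ahash_request_set_crypt(req, part1, NULL, len1);
	err = crypto_wait_req(crypto_ahash_update(req), wait);
	if (err)
		return err;

	/* Last chunk: finup() combines the final update with finalization. */
	ahash_request_set_crypt(req, part2, digest, len2);
	return crypto_wait_req(crypto_ahash_finup(req), wait);
}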
From patchwork Tue May 13 06:04:01 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889702
Date: Tue, 13 May 2025 14:04:01 +0800
Message-Id: <18fe5ff62fad08206de4c02f67746120aeb41fc0.1747116129.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 07/11] crypto: aspeed/hash - Remove sha_iv
To: Linux Crypto Mailing List
Cc: Neal Liu , linux-aspeed@lists.ozlabs.org

Remove the unused sha_iv field from the request context.
Signed-off-by: Herbert Xu
---
 drivers/crypto/aspeed/aspeed-hace-hash.c | 5 ----- drivers/crypto/aspeed/aspeed-hace.h | 1 - 2 files changed, 6 deletions(-) diff --git a/drivers/crypto/aspeed/aspeed-hace-hash.c b/drivers/crypto/aspeed/aspeed-hace-hash.c index ceea2e2f5658..d31d7e1c9af2 100644 --- a/drivers/crypto/aspeed/aspeed-hace-hash.c +++ b/drivers/crypto/aspeed/aspeed-hace-hash.c @@ -593,7 +593,6 @@ static int aspeed_sham_init(struct ahash_request *req) rctx->flags |= SHA_FLAGS_SHA1; rctx->digsize = SHA1_DIGEST_SIZE; rctx->block_size = SHA1_BLOCK_SIZE; - rctx->sha_iv = sha1_iv; rctx->ivsize = 32; memcpy(rctx->digest, sha1_iv, rctx->ivsize); break; @@ -602,7 +601,6 @@ rctx->flags |= SHA_FLAGS_SHA224; rctx->digsize = SHA224_DIGEST_SIZE; rctx->block_size = SHA224_BLOCK_SIZE; - rctx->sha_iv = sha224_iv; rctx->ivsize = 32; memcpy(rctx->digest, sha224_iv, rctx->ivsize); break; @@ -611,7 +609,6 @@ rctx->flags |= SHA_FLAGS_SHA256; rctx->digsize = SHA256_DIGEST_SIZE; rctx->block_size = SHA256_BLOCK_SIZE; - rctx->sha_iv = sha256_iv; rctx->ivsize = 32; memcpy(rctx->digest, sha256_iv, rctx->ivsize); break; @@ -621,7 +618,6 @@ rctx->flags |= SHA_FLAGS_SHA384; rctx->digsize = SHA384_DIGEST_SIZE; rctx->block_size = SHA384_BLOCK_SIZE; - rctx->sha_iv = (const __be32 *)sha384_iv; rctx->ivsize = 64; memcpy(rctx->digest, sha384_iv, rctx->ivsize); break; @@ -631,7 +627,6 @@ rctx->flags |= SHA_FLAGS_SHA512; rctx->digsize = SHA512_DIGEST_SIZE; rctx->block_size = SHA512_BLOCK_SIZE; - rctx->sha_iv = (const __be32 *)sha512_iv; rctx->ivsize = 64; memcpy(rctx->digest, sha512_iv, rctx->ivsize); break; diff --git a/drivers/crypto/aspeed/aspeed-hace.h b/drivers/crypto/aspeed/aspeed-hace.h index a34677f10966..ad39954251dd 100644 --- a/drivers/crypto/aspeed/aspeed-hace.h +++ b/drivers/crypto/aspeed/aspeed-hace.h @@ -184,7 +184,6 @@ struct aspeed_sham_reqctx { size_t digsize; size_t block_size; size_t ivsize; - const __be32 *sha_iv; /* remain data buffer */ dma_addr_t buffer_dma_addr;
From patchwork Tue May 13 06:04:06 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889701
Date: Tue, 13 May 2025 14:04:06 +0800
Message-Id: <67a2783ef1a4a5c37ba868af511fe0f0c6ef8476.1747116129.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 09/11] crypto: aspeed/hash - Add fallback
To: Linux Crypto Mailing List
Cc: Neal Liu , linux-aspeed@lists.ozlabs.org

If a hash request fails due to a DMA mapping error, or if it is too large to fit in the driver buffer, use a fallback to do the hash rather than failing.
Signed-off-by: Herbert Xu
---
 drivers/crypto/aspeed/aspeed-hace-hash.c | 28 +++++++++++++++++++++++- 1 file changed, 27 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/aspeed/aspeed-hace-hash.c b/drivers/crypto/aspeed/aspeed-hace-hash.c index d7c7f867d6e1..3bfb7db96c40 100644 --- a/drivers/crypto/aspeed/aspeed-hace-hash.c +++ b/drivers/crypto/aspeed/aspeed-hace-hash.c @@ -420,6 +420,32 @@ static int aspeed_hace_hash_handle_queue(struct aspeed_hace_dev *hace_dev, hace_dev->crypt_engine_hash, req); } +static noinline int aspeed_ahash_fallback(struct ahash_request *req) +{ + struct aspeed_sham_reqctx *rctx = ahash_request_ctx(req); + HASH_FBREQ_ON_STACK(fbreq, req); + u8 *state = rctx->buffer; + struct scatterlist sg[2]; + struct scatterlist *ssg; + int ret; + + ssg = scatterwalk_ffwd(sg, req->src, rctx->offset); + ahash_request_set_crypt(fbreq, ssg, req->result, + rctx->total - rctx->offset); + + ret = aspeed_sham_export(req, state) ?: + crypto_ahash_import_core(fbreq, state); + + if (rctx->flags & SHA_FLAGS_FINUP) + ret = ret ?: crypto_ahash_finup(fbreq); + else + ret = ret ?: crypto_ahash_update(fbreq); + crypto_ahash_export_core(fbreq, state) ?: + aspeed_sham_import(req, state); + HASH_REQUEST_ZERO(fbreq); + return ret; +} + static int aspeed_ahash_do_request(struct crypto_engine *engine, void *areq) { struct ahash_request *req = ahash_request_cast(areq); @@ -434,7 +460,7 @@ static int aspeed_ahash_do_request(struct crypto_engine *engine, void *areq) ret = aspeed_ahash_req_update(hace_dev); if (ret != -EINPROGRESS) - return ret; + return aspeed_ahash_fallback(req); return 0; }
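The patch builds its fallback request on the stack with HASH_FBREQ_ON_STACK and carries the intermediate state across with export/import. As a hedged sketch of the general idea only (invented example_* names; this is the classic allocate-and-forward fallback pattern many drivers use, not the exact code added above):

#include <crypto/hash.h>
#include <linux/err.h>

/* Set-up: ask the API for another implementation of the same algorithm. */
static struct crypto_ahash *example_alloc_fallback(const char *alg_name)
{
	return crypto_alloc_ahash(alg_name, 0, CRYPTO_ALG_NEED_FALLBACK);
}

/* Data path: redirect a request that the hardware cannot handle. */
static int example_do_fallback_digest(struct ahash_request *req,
				      struct crypto_ahash *fallback)
{
	struct ahash_request *subreq;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	subreq = ahash_request_alloc(fallback, GFP_KERNEL);
	if (!subreq)
		return -ENOMEM;

	ahash_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP |
				   CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);
	ahash_request_set_crypt(subreq, req->src, req->result, req->nbytes);

	/* Synchronous here for simplicity; a real driver may complete async. */
	err = crypto_wait_req(crypto_ahash_digest(subreq), &wait);

	ahash_request_free(subreq);
	return err;
}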
From patchwork Tue May 13 06:04:11 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889700
Date: Tue, 13 May 2025 14:04:11 +0800
Message-Id: <11bdba6fa8b5a76d1fb20ca1a7f004b68716431b.1747116129.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [PATCH 11/11] crypto: aspeed/hash - Fix potential overflow in dma_prepare_sg
To: Linux Crypto Mailing List
Cc: Neal Liu , linux-aspeed@lists.ozlabs.org

The mapped SG lists are written to hash_engine->ahash_src_addr, which has size ASPEED_HASH_SRC_DMA_BUF_LEN. Since scatterlists are not bounded in size, make sure that size is not exceeded. If the mapped SG list is larger than the buffer, simply iterate over it, as is done in the dma_prepare case.

Signed-off-by: Herbert Xu
---
 drivers/crypto/aspeed/aspeed-hace-hash.c | 84 ++++++++++++------------ 1 file changed, 43 insertions(+), 41 deletions(-) diff --git a/drivers/crypto/aspeed/aspeed-hace-hash.c b/drivers/crypto/aspeed/aspeed-hace-hash.c index fc2154947ec8..e54b7dd03be3 100644 --- a/drivers/crypto/aspeed/aspeed-hace-hash.c +++ b/drivers/crypto/aspeed/aspeed-hace-hash.c @@ -146,6 +146,15 @@ static int aspeed_ahash_fill_padding(struct aspeed_hace_dev *hace_dev, return padlen + bitslen; } +static void aspeed_ahash_update_counter(struct aspeed_sham_reqctx *rctx, + unsigned int len) +{ + rctx->offset += len; + rctx->digcnt[0] += len; + if (rctx->digcnt[0] < len) + rctx->digcnt[1]++; +} + /* * Prepare DMA buffer before hardware engine * processing. @@ -175,12 +184,7 @@ static int aspeed_ahash_dma_prepare(struct aspeed_hace_dev *hace_dev) length -= remain; scatterwalk_map_and_copy(hash_engine->ahash_src_addr, rctx->src_sg, rctx->offset, length, 0); - rctx->offset += length; - - rctx->digcnt[0] += length; - if (rctx->digcnt[0] < length) - rctx->digcnt[1]++; - + aspeed_ahash_update_counter(rctx, length); if (final) length += aspeed_ahash_fill_padding( hace_dev, rctx, hash_engine->ahash_src_addr + length); @@ -210,13 +214,16 @@ static int aspeed_ahash_dma_prepare_sg(struct aspeed_hace_dev *hace_dev) struct ahash_request *req = hash_engine->req; struct aspeed_sham_reqctx *rctx = ahash_request_ctx(req); bool final = rctx->flags & SHA_FLAGS_FINUP; + int remain, sg_len, i, max_sg_nents; + unsigned int length, offset, total; struct aspeed_sg_list *src_list; struct scatterlist *s; - int length, remain, sg_len, i; int rc = 0; - remain = final ? 0 : rctx->total % rctx->block_size; - length = rctx->total - remain; + offset = rctx->offset; + length = rctx->total - offset; + remain = final ?
0 : length - round_down(length, rctx->block_size); + length -= remain; AHASH_DBG(hace_dev, "%s:0x%x, %s:0x%x, %s:0x%x\n", "rctx total", rctx->total, @@ -230,6 +237,8 @@ static int aspeed_ahash_dma_prepare_sg(struct aspeed_hace_dev *hace_dev) goto end; } + max_sg_nents = ASPEED_HASH_SRC_DMA_BUF_LEN / sizeof(*src_list) - final; + sg_len = min(sg_len, max_sg_nents); src_list = (struct aspeed_sg_list *)hash_engine->ahash_src_addr; rctx->digest_dma_addr = dma_map_single(hace_dev->dev, rctx->digest, SHA512_DIGEST_SIZE, @@ -240,10 +249,20 @@ static int aspeed_ahash_dma_prepare_sg(struct aspeed_hace_dev *hace_dev) goto free_src_sg; } + total = 0; for_each_sg(rctx->src_sg, s, sg_len, i) { u32 phy_addr = sg_dma_address(s); u32 len = sg_dma_len(s); + if (len <= offset) { + offset -= len; + continue; + } + + len -= offset; + phy_addr += offset; + offset = 0; + if (length > len) length -= len; else { @@ -252,24 +271,22 @@ static int aspeed_ahash_dma_prepare_sg(struct aspeed_hace_dev *hace_dev) length = 0; } + total += len; src_list[i].phy_addr = cpu_to_le32(phy_addr); src_list[i].len = cpu_to_le32(len); } if (length != 0) { - rc = -EINVAL; - goto free_rctx_digest; + total = round_down(total, rctx->block_size); + final = false; } - rctx->digcnt[0] += rctx->total - remain; - if (rctx->digcnt[0] < rctx->total - remain) - rctx->digcnt[1]++; - + aspeed_ahash_update_counter(rctx, total); if (final) { int len = aspeed_ahash_fill_padding(hace_dev, rctx, rctx->buffer); - rctx->total += len; + total += len; rctx->buffer_dma_addr = dma_map_single(hace_dev->dev, rctx->buffer, sizeof(rctx->buffer), @@ -286,8 +303,7 @@ static int aspeed_ahash_dma_prepare_sg(struct aspeed_hace_dev *hace_dev) } src_list[i - 1].len |= cpu_to_le32(HASH_SG_LAST_LIST); - rctx->offset = rctx->total - remain; - hash_engine->src_length = rctx->total - remain; + hash_engine->src_length = total; hash_engine->src_dma = hash_engine->ahash_src_dma_addr; hash_engine->digest_dma = rctx->digest_dma_addr; @@ -311,6 +327,13 @@ static int aspeed_ahash_complete(struct aspeed_hace_dev *hace_dev) AHASH_DBG(hace_dev, "\n"); + dma_unmap_single(hace_dev->dev, rctx->digest_dma_addr, + SHA512_DIGEST_SIZE, DMA_BIDIRECTIONAL); + + if (rctx->total - rctx->offset >= rctx->block_size || + (rctx->total != rctx->offset && rctx->flags & SHA_FLAGS_FINUP)) + return aspeed_ahash_req_update(hace_dev); + hash_engine->flags &= ~CRYPTO_FLAGS_BUSY; if (rctx->flags & SHA_FLAGS_FINUP) @@ -366,36 +389,15 @@ static int aspeed_ahash_update_resume_sg(struct aspeed_hace_dev *hace_dev) dma_unmap_sg(hace_dev->dev, rctx->src_sg, rctx->src_nents, DMA_TO_DEVICE); - if (rctx->flags & SHA_FLAGS_FINUP) + if (rctx->flags & SHA_FLAGS_FINUP && rctx->total == rctx->offset) dma_unmap_single(hace_dev->dev, rctx->buffer_dma_addr, sizeof(rctx->buffer), DMA_TO_DEVICE); - dma_unmap_single(hace_dev->dev, rctx->digest_dma_addr, - SHA512_DIGEST_SIZE, DMA_BIDIRECTIONAL); - rctx->cmd &= ~HASH_CMD_HASH_SRC_SG_CTRL; return aspeed_ahash_complete(hace_dev); } -static int aspeed_ahash_update_resume(struct aspeed_hace_dev *hace_dev) -{ - struct aspeed_engine_hash *hash_engine = &hace_dev->hash_engine; - struct ahash_request *req = hash_engine->req; - struct aspeed_sham_reqctx *rctx = ahash_request_ctx(req); - - AHASH_DBG(hace_dev, "\n"); - - dma_unmap_single(hace_dev->dev, rctx->digest_dma_addr, - SHA512_DIGEST_SIZE, DMA_BIDIRECTIONAL); - - if (rctx->total - rctx->offset >= rctx->block_size || - (rctx->total != rctx->offset && rctx->flags & SHA_FLAGS_FINUP)) - return aspeed_ahash_req_update(hace_dev); - - 
return aspeed_ahash_complete(hace_dev); -} - static int aspeed_ahash_req_update(struct aspeed_hace_dev *hace_dev) { struct aspeed_engine_hash *hash_engine = &hace_dev->hash_engine; @@ -411,7 +413,7 @@ static int aspeed_ahash_req_update(struct aspeed_hace_dev *hace_dev) resume = aspeed_ahash_update_resume_sg; } else { - resume = aspeed_ahash_update_resume; + resume = aspeed_ahash_complete; } ret = hash_engine->dma_prepare(hace_dev);