From patchwork Wed May 14 09:22:29 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889971
Date: Wed, 14 May 2025 17:22:29 +0800
From: Herbert Xu
Subject: [v3 PATCH 01/11] crypto: hash - Move core export and import into internal/hash.h
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

The core export and import functions are targeted at implementors, so move them into internal/hash.h.
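For illustration, a minimal sketch of the implementor-side pattern these internal declarations serve: a wrapper algorithm forwarding the core (partial-block-free) export and import to its child shash. The "foo" names and context layout are hypothetical; the hmac changes later in this series follow the same shape.

#include <crypto/internal/hash.h>

struct foo_ctx {
	struct crypto_shash *child;	/* underlying shash, illustrative */
};

static int foo_export_core(struct shash_desc *pdesc, void *out)
{
	struct shash_desc *desc = shash_desc_ctx(pdesc);

	/* Export the child's state without its partial block buffer. */
	return crypto_shash_export_core(desc, out);
}

static int foo_import_core(struct shash_desc *pdesc, const void *in)
{
	const struct foo_ctx *tctx = crypto_shash_ctx(pdesc->tfm);
	struct shash_desc *desc = shash_desc_ctx(pdesc);

	desc->tfm = tctx->child;
	return crypto_shash_import_core(desc, in);
}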
Signed-off-by: Herbert Xu --- include/crypto/hash.h | 48 ---------------------------------- include/crypto/internal/hash.h | 48 ++++++++++++++++++++++++++++++++++ 2 files changed, 48 insertions(+), 48 deletions(-) diff --git a/include/crypto/hash.h b/include/crypto/hash.h index 1760662ad70a..9fc9daaaaab4 100644 --- a/include/crypto/hash.h +++ b/include/crypto/hash.h @@ -506,18 +506,6 @@ int crypto_ahash_digest(struct ahash_request *req); */ int crypto_ahash_export(struct ahash_request *req, void *out); -/** - * crypto_ahash_export_core() - extract core state for message digest - * @req: reference to the ahash_request handle whose state is exported - * @out: output buffer of sufficient size that can hold the hash state - * - * Export the hash state without the partial block buffer. - * - * Context: Softirq or process context. - * Return: 0 if the export creation was successful; < 0 if an error occurred - */ -int crypto_ahash_export_core(struct ahash_request *req, void *out); - /** * crypto_ahash_import() - import message digest state * @req: reference to ahash_request handle the state is imported into @@ -531,18 +519,6 @@ int crypto_ahash_export_core(struct ahash_request *req, void *out); */ int crypto_ahash_import(struct ahash_request *req, const void *in); -/** - * crypto_ahash_import_core() - import core state - * @req: reference to ahash_request handle the state is imported into - * @in: buffer holding the state - * - * Import the hash state without the partial block buffer. - * - * Context: Softirq or process context. - * Return: 0 if the import was successful; < 0 if an error occurred - */ -int crypto_ahash_import_core(struct ahash_request *req, const void *in); - /** * crypto_ahash_init() - (re)initialize message digest handle * @req: ahash_request handle that already is initialized with all necessary @@ -933,18 +909,6 @@ int crypto_hash_digest(struct crypto_ahash *tfm, const u8 *data, */ int crypto_shash_export(struct shash_desc *desc, void *out); -/** - * crypto_shash_export_core() - extract core state for message digest - * @desc: reference to the operational state handle whose state is exported - * @out: output buffer of sufficient size that can hold the hash state - * - * Export the hash state without the partial block buffer. - * - * Context: Softirq or process context. - * Return: 0 if the export creation was successful; < 0 if an error occurred - */ -int crypto_shash_export_core(struct shash_desc *desc, void *out); - /** * crypto_shash_import() - import operational state * @desc: reference to the operational state handle the state imported into @@ -959,18 +923,6 @@ int crypto_shash_export_core(struct shash_desc *desc, void *out); */ int crypto_shash_import(struct shash_desc *desc, const void *in); -/** - * crypto_shash_import_core() - import core state - * @desc: reference to the operational state handle the state imported into - * @in: buffer holding the state - * - * Import the hash state without the partial block buffer. - * - * Context: Softirq or process context. 
- * Return: 0 if the import was successful; < 0 if an error occurred - */ -int crypto_shash_import_core(struct shash_desc *desc, const void *in); - /** * crypto_shash_init() - (re)initialize message digest * @desc: operational state handle that is already filled diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h index f2bbdb74e11a..ef5ea75ac5c8 100644 --- a/include/crypto/internal/hash.h +++ b/include/crypto/internal/hash.h @@ -305,5 +305,53 @@ static inline unsigned int crypto_shash_coresize(struct crypto_shash *tfm) #define HASH_REQUEST_ZERO(name) \ memzero_explicit(__##name##_req, sizeof(__##name##_req)) +/** + * crypto_ahash_export_core() - extract core state for message digest + * @req: reference to the ahash_request handle whose state is exported + * @out: output buffer of sufficient size that can hold the hash state + * + * Export the hash state without the partial block buffer. + * + * Context: Softirq or process context. + * Return: 0 if the export creation was successful; < 0 if an error occurred + */ +int crypto_ahash_export_core(struct ahash_request *req, void *out); + +/** + * crypto_ahash_import_core() - import core state + * @req: reference to ahash_request handle the state is imported into + * @in: buffer holding the state + * + * Import the hash state without the partial block buffer. + * + * Context: Softirq or process context. + * Return: 0 if the import was successful; < 0 if an error occurred + */ +int crypto_ahash_import_core(struct ahash_request *req, const void *in); + +/** + * crypto_shash_export_core() - extract core state for message digest + * @desc: reference to the operational state handle whose state is exported + * @out: output buffer of sufficient size that can hold the hash state + * + * Export the hash state without the partial block buffer. + * + * Context: Softirq or process context. + * Return: 0 if the export creation was successful; < 0 if an error occurred + */ +int crypto_shash_export_core(struct shash_desc *desc, void *out); + +/** + * crypto_shash_import_core() - import core state + * @desc: reference to the operational state handle the state imported into + * @in: buffer holding the state + * + * Import the hash state without the partial block buffer. + * + * Context: Softirq or process context. 
+ * Return: 0 if the import was successful; < 0 if an error occurred + */ +int crypto_shash_import_core(struct shash_desc *desc, const void *in); + #endif /* _CRYPTO_INTERNAL_HASH_H */
From patchwork Wed May 14 09:22:34 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889970
Date: Wed, 14 May 2025 17:22:34 +0800
From: Herbert Xu
Subject: [v3 PATCH 03/11] crypto: ahash - Handle partial blocks in API
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Provide an option to handle the partial blocks in the ahash API.
Almost every hash algorithm has a block size and are only able to hash partial blocks on finalisation. As a first step disable virtual address support for algorithms with state sizes larger than HASH_MAX_STATESIZE. This is OK as virtual addresses are currently only used on synchronous fallbacks. This means ahash_do_req_chain only needs to handle synchronous fallbacks, removing the complexities of saving the request state. Also move the saved request state into the ahash_request object as nesting is no longer possible. Add a scatterlist to ahash_request to store the partial block. Signed-off-by: Herbert Xu --- crypto/ahash.c | 541 ++++++++++++++++++++---------------------- include/crypto/hash.h | 12 +- 2 files changed, 265 insertions(+), 288 deletions(-) diff --git a/crypto/ahash.c b/crypto/ahash.c index 7d96c76731ef..cf8bbe7e54c0 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c @@ -12,11 +12,13 @@ * Copyright (c) 2008 Loc Ho */ +#include #include #include #include #include #include +#include #include #include #include @@ -40,24 +42,47 @@ struct crypto_hash_walk { struct scatterlist *sg; }; -struct ahash_save_req_state { - struct ahash_request *req0; - crypto_completion_t compl; - void *data; - struct scatterlist sg; - const u8 *src; - u8 *page; - unsigned int offset; - unsigned int nbytes; - bool update; -}; - -static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt); -static void ahash_restore_req(struct ahash_request *req); -static void ahash_def_finup_done1(void *data, int err); -static int ahash_def_finup_finish1(struct ahash_request *req, int err); static int ahash_def_finup(struct ahash_request *req); +static inline bool crypto_ahash_block_only(struct crypto_ahash *tfm) +{ + return crypto_ahash_alg(tfm)->halg.base.cra_flags & + CRYPTO_AHASH_ALG_BLOCK_ONLY; +} + +static inline bool crypto_ahash_final_nonzero(struct crypto_ahash *tfm) +{ + return crypto_ahash_alg(tfm)->halg.base.cra_flags & + CRYPTO_AHASH_ALG_FINAL_NONZERO; +} + +static inline bool crypto_ahash_need_fallback(struct crypto_ahash *tfm) +{ + return crypto_ahash_alg(tfm)->halg.base.cra_flags & + CRYPTO_ALG_NEED_FALLBACK; +} + +static inline void ahash_op_done(void *data, int err, + int (*finish)(struct ahash_request *, int)) +{ + struct ahash_request *areq = data; + crypto_completion_t compl; + + compl = areq->saved_complete; + data = areq->saved_data; + if (err == -EINPROGRESS) + goto out; + + areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; + + err = finish(areq, err); + if (err == -EINPROGRESS || err == -EBUSY) + return; + +out: + compl(data, err); +} + static int hash_walk_next(struct crypto_hash_walk *walk) { unsigned int offset = walk->offset; @@ -298,7 +323,7 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key, int err; err = alg->setkey(tfm, key, keylen); - if (!err && ahash_is_async(tfm)) + if (!err && crypto_ahash_need_fallback(tfm)) err = crypto_ahash_setkey(crypto_ahash_fb(tfm), key, keylen); if (unlikely(err)) { @@ -311,159 +336,47 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key, } EXPORT_SYMBOL_GPL(crypto_ahash_setkey); -static int ahash_reqchain_virt(struct ahash_save_req_state *state, - int err, u32 mask) -{ - struct ahash_request *req = state->req0; - struct crypto_ahash *tfm; - - tfm = crypto_ahash_reqtfm(req); - - for (;;) { - unsigned len = state->nbytes; - - if (!state->offset) - break; - - if (state->offset == len || err) { - u8 *result = req->result; - - ahash_request_set_virt(req, state->src, result, len); - state->offset = 0; - break; - } - - 
len -= state->offset; - - len = min(PAGE_SIZE, len); - memcpy(state->page, state->src + state->offset, len); - state->offset += len; - req->nbytes = len; - - err = crypto_ahash_alg(tfm)->update(req); - if (err == -EINPROGRESS) { - if (state->offset < state->nbytes) - err = -EBUSY; - break; - } - - if (err == -EBUSY) - break; - } - - return err; -} - -static int ahash_reqchain_finish(struct ahash_request *req0, - struct ahash_save_req_state *state, - int err, u32 mask) -{ - u8 *page; - - err = ahash_reqchain_virt(state, err, mask); - if (err == -EINPROGRESS || err == -EBUSY) - goto out; - - page = state->page; - if (page) { - memset(page, 0, PAGE_SIZE); - free_page((unsigned long)page); - } - ahash_restore_req(req0); - -out: - return err; -} - -static void ahash_reqchain_done(void *data, int err) -{ - struct ahash_save_req_state *state = data; - crypto_completion_t compl = state->compl; - - data = state->data; - - if (err == -EINPROGRESS) { - if (state->offset < state->nbytes) - return; - goto notify; - } - - err = ahash_reqchain_finish(state->req0, state, err, - CRYPTO_TFM_REQ_MAY_BACKLOG); - if (err == -EBUSY) - return; - -notify: - compl(data, err); -} - static int ahash_do_req_chain(struct ahash_request *req, - int (*op)(struct ahash_request *req)) + int (*const *op)(struct ahash_request *req)) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - bool update = op == crypto_ahash_alg(tfm)->update; - struct ahash_save_req_state *state; - struct ahash_save_req_state state0; - u8 *page = NULL; int err; - if (crypto_ahash_req_virt(tfm) || - !update || !ahash_request_isvirt(req)) - return op(req); + if (crypto_ahash_req_virt(tfm) || !ahash_request_isvirt(req)) + return (*op)(req); - if (update && ahash_request_isvirt(req)) { - page = (void *)__get_free_page(GFP_ATOMIC); - err = -ENOMEM; - if (!page) - goto out; - } + if (crypto_ahash_statesize(tfm) > HASH_MAX_STATESIZE) + return -ENOSYS; - state = &state0; - if (ahash_is_async(tfm)) { - err = ahash_save_req(req, ahash_reqchain_done); - if (err) - goto out_free_page; + { + u8 state[HASH_MAX_STATESIZE]; - state = req->base.data; - } + if (op == &crypto_ahash_alg(tfm)->digest) { + ahash_request_set_tfm(req, crypto_ahash_fb(tfm)); + err = crypto_ahash_digest(req); + goto out_no_state; + } - state->update = update; - state->page = page; - state->offset = 0; - state->nbytes = 0; + err = crypto_ahash_export(req, state); + ahash_request_set_tfm(req, crypto_ahash_fb(tfm)); + err = err ?: crypto_ahash_import(req, state); - if (page) - sg_init_one(&state->sg, page, PAGE_SIZE); + if (op == &crypto_ahash_alg(tfm)->finup) { + err = err ?: crypto_ahash_finup(req); + goto out_no_state; + } - if (update && ahash_request_isvirt(req) && req->nbytes) { - unsigned len = req->nbytes; - u8 *result = req->result; + err = err ?: + crypto_ahash_update(req) ?: + crypto_ahash_export(req, state); - state->src = req->svirt; - state->nbytes = len; + ahash_request_set_tfm(req, tfm); + return err ?: crypto_ahash_import(req, state); - len = min(PAGE_SIZE, len); - - memcpy(page, req->svirt, len); - state->offset = len; - - ahash_request_set_crypt(req, &state->sg, result, len); - } - - err = op(req); - if (err == -EINPROGRESS || err == -EBUSY) { - if (state->offset < state->nbytes) - err = -EBUSY; +out_no_state: + ahash_request_set_tfm(req, tfm); return err; } - - return ahash_reqchain_finish(req, state, err, ~0); - -out_free_page: - free_page((unsigned long)page); - -out: - return err; } int crypto_ahash_init(struct ahash_request *req) @@ -476,144 +389,191 @@ int 
crypto_ahash_init(struct ahash_request *req) return -ENOKEY; if (ahash_req_on_stack(req) && ahash_is_async(tfm)) return -EAGAIN; - return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->init); + if (crypto_ahash_block_only(tfm)) { + u8 *buf = ahash_request_ctx(req); + + buf += crypto_ahash_reqsize(tfm) - 1; + *buf = 0; + } + return crypto_ahash_alg(tfm)->init(req); } EXPORT_SYMBOL_GPL(crypto_ahash_init); -static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt) +static void ahash_save_req(struct ahash_request *req, crypto_completion_t cplt) { - struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - struct ahash_save_req_state *state; - - if (!ahash_is_async(tfm)) - return 0; - - state = kmalloc(sizeof(*state), GFP_ATOMIC); - if (!state) - return -ENOMEM; - - state->compl = req->base.complete; - state->data = req->base.data; + req->saved_complete = req->base.complete; + req->saved_data = req->base.data; req->base.complete = cplt; - req->base.data = state; - state->req0 = req; - - return 0; + req->base.data = req; } static void ahash_restore_req(struct ahash_request *req) { - struct ahash_save_req_state *state; - struct crypto_ahash *tfm; + req->base.complete = req->saved_complete; + req->base.data = req->saved_data; +} - tfm = crypto_ahash_reqtfm(req); - if (!ahash_is_async(tfm)) - return; +static int ahash_update_finish(struct ahash_request *req, int err) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + bool nonzero = crypto_ahash_final_nonzero(tfm); + int bs = crypto_ahash_blocksize(tfm); + u8 *blenp = ahash_request_ctx(req); + int blen; + u8 *buf; - state = req->base.data; + blenp += crypto_ahash_reqsize(tfm) - 1; + blen = *blenp; + buf = blenp - bs; - req->base.complete = state->compl; - req->base.data = state->data; - kfree(state); + if (blen) { + req->src = req->sg_head + 1; + if (sg_is_chain(req->src)) + req->src = sg_chain_ptr(req->src); + } + + req->nbytes += nonzero - blen; + + blen = err < 0 ? 
0 : err + nonzero; + if (ahash_request_isvirt(req)) + memcpy(buf, req->svirt + req->nbytes - blen, blen); + else + memcpy_from_sglist(buf, req->src, req->nbytes - blen, blen); + *blenp = blen; + + ahash_restore_req(req); + + return err; +} + +static void ahash_update_done(void *data, int err) +{ + ahash_op_done(data, err, ahash_update_finish); } int crypto_ahash_update(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + bool nonzero = crypto_ahash_final_nonzero(tfm); + int bs = crypto_ahash_blocksize(tfm); + u8 *blenp = ahash_request_ctx(req); + int blen, err; + u8 *buf; if (likely(tfm->using_shash)) return shash_ahash_update(req, ahash_request_ctx(req)); if (ahash_req_on_stack(req) && ahash_is_async(tfm)) return -EAGAIN; - return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->update); + if (!crypto_ahash_block_only(tfm)) + return ahash_do_req_chain(req, &crypto_ahash_alg(tfm)->update); + + blenp += crypto_ahash_reqsize(tfm) - 1; + blen = *blenp; + buf = blenp - bs; + + if (blen + req->nbytes < bs + nonzero) { + if (ahash_request_isvirt(req)) + memcpy(buf + blen, req->svirt, req->nbytes); + else + memcpy_from_sglist(buf + blen, req->src, 0, + req->nbytes); + + *blenp += req->nbytes; + return 0; + } + + if (blen) { + memset(req->sg_head, 0, sizeof(req->sg_head[0])); + sg_set_buf(req->sg_head, buf, blen); + if (req->src != req->sg_head + 1) + sg_chain(req->sg_head, 2, req->src); + req->src = req->sg_head; + req->nbytes += blen; + } + req->nbytes -= nonzero; + + ahash_save_req(req, ahash_update_done); + + err = ahash_do_req_chain(req, &crypto_ahash_alg(tfm)->update); + if (err == -EINPROGRESS || err == -EBUSY) + return err; + + return ahash_update_finish(req, err); } EXPORT_SYMBOL_GPL(crypto_ahash_update); -int crypto_ahash_final(struct ahash_request *req) +static int ahash_finup_finish(struct ahash_request *req, int err) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + u8 *blenp = ahash_request_ctx(req); + int blen; - if (likely(tfm->using_shash)) - return crypto_shash_final(ahash_request_ctx(req), req->result); - if (ahash_req_on_stack(req) && ahash_is_async(tfm)) - return -EAGAIN; - return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->final); + blenp += crypto_ahash_reqsize(tfm) - 1; + blen = *blenp; + + if (blen) { + if (sg_is_last(req->src)) + req->src = NULL; + else { + req->src = req->sg_head + 1; + if (sg_is_chain(req->src)) + req->src = sg_chain_ptr(req->src); + } + req->nbytes -= blen; + } + + ahash_restore_req(req); + + return err; +} + +static void ahash_finup_done(void *data, int err) +{ + ahash_op_done(data, err, ahash_finup_finish); } -EXPORT_SYMBOL_GPL(crypto_ahash_final); int crypto_ahash_finup(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + int bs = crypto_ahash_blocksize(tfm); + u8 *blenp = ahash_request_ctx(req); + int blen, err; + u8 *buf; if (likely(tfm->using_shash)) return shash_ahash_finup(req, ahash_request_ctx(req)); if (ahash_req_on_stack(req) && ahash_is_async(tfm)) return -EAGAIN; - if (!crypto_ahash_alg(tfm)->finup || - (!crypto_ahash_req_virt(tfm) && ahash_request_isvirt(req))) + if (!crypto_ahash_alg(tfm)->finup) return ahash_def_finup(req); - return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->finup); + if (!crypto_ahash_block_only(tfm)) + return ahash_do_req_chain(req, &crypto_ahash_alg(tfm)->finup); + + blenp += crypto_ahash_reqsize(tfm) - 1; + blen = *blenp; + buf = blenp - bs; + + if (blen) { + memset(req->sg_head, 0, sizeof(req->sg_head[0])); + sg_set_buf(req->sg_head, buf, 
blen); + if (!req->src) + sg_mark_end(req->sg_head); + else if (req->src != req->sg_head + 1) + sg_chain(req->sg_head, 2, req->src); + req->src = req->sg_head; + req->nbytes += blen; + } + + ahash_save_req(req, ahash_finup_done); + + err = ahash_do_req_chain(req, &crypto_ahash_alg(tfm)->finup); + if (err == -EINPROGRESS || err == -EBUSY) + return err; + + return ahash_finup_finish(req, err); } EXPORT_SYMBOL_GPL(crypto_ahash_finup); -static int ahash_def_digest_finish(struct ahash_request *req, int err) -{ - struct crypto_ahash *tfm; - - if (err) - goto out; - - tfm = crypto_ahash_reqtfm(req); - if (ahash_is_async(tfm)) - req->base.complete = ahash_def_finup_done1; - - err = crypto_ahash_update(req); - if (err == -EINPROGRESS || err == -EBUSY) - return err; - - return ahash_def_finup_finish1(req, err); - -out: - ahash_restore_req(req); - return err; -} - -static void ahash_def_digest_done(void *data, int err) -{ - struct ahash_save_req_state *state0 = data; - struct ahash_save_req_state state; - struct ahash_request *areq; - - state = *state0; - areq = state.req0; - if (err == -EINPROGRESS) - goto out; - - areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; - - err = ahash_def_digest_finish(areq, err); - if (err == -EINPROGRESS || err == -EBUSY) - return; - -out: - state.compl(state.data, err); -} - -static int ahash_def_digest(struct ahash_request *req) -{ - int err; - - err = ahash_save_req(req, ahash_def_digest_done); - if (err) - return err; - - err = crypto_ahash_init(req); - if (err == -EINPROGRESS || err == -EBUSY) - return err; - - return ahash_def_digest_finish(req, err); -} - int crypto_ahash_digest(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); @@ -622,18 +582,15 @@ int crypto_ahash_digest(struct ahash_request *req) return shash_ahash_digest(req, prepare_shash_desc(req, tfm)); if (ahash_req_on_stack(req) && ahash_is_async(tfm)) return -EAGAIN; - if (!crypto_ahash_req_virt(tfm) && ahash_request_isvirt(req)) - return ahash_def_digest(req); if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) return -ENOKEY; - return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->digest); + return ahash_do_req_chain(req, &crypto_ahash_alg(tfm)->digest); } EXPORT_SYMBOL_GPL(crypto_ahash_digest); static void ahash_def_finup_done2(void *data, int err) { - struct ahash_save_req_state *state = data; - struct ahash_request *areq = state->req0; + struct ahash_request *areq = data; if (err == -EINPROGRESS) return; @@ -644,14 +601,10 @@ static void ahash_def_finup_done2(void *data, int err) static int ahash_def_finup_finish1(struct ahash_request *req, int err) { - struct crypto_ahash *tfm; - if (err) goto out; - tfm = crypto_ahash_reqtfm(req); - if (ahash_is_async(tfm)) - req->base.complete = ahash_def_finup_done2; + req->base.complete = ahash_def_finup_done2; err = crypto_ahash_final(req); if (err == -EINPROGRESS || err == -EBUSY) @@ -664,32 +617,14 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err) static void ahash_def_finup_done1(void *data, int err) { - struct ahash_save_req_state *state0 = data; - struct ahash_save_req_state state; - struct ahash_request *areq; - - state = *state0; - areq = state.req0; - if (err == -EINPROGRESS) - goto out; - - areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; - - err = ahash_def_finup_finish1(areq, err); - if (err == -EINPROGRESS || err == -EBUSY) - return; - -out: - state.compl(state.data, err); + ahash_op_done(data, err, ahash_def_finup_finish1); } static int ahash_def_finup(struct ahash_request *req) { int err; - 
err = ahash_save_req(req, ahash_def_finup_done1); - if (err) - return err; + ahash_save_req(req, ahash_def_finup_done1); err = crypto_ahash_update(req); if (err == -EINPROGRESS || err == -EBUSY) @@ -714,6 +649,14 @@ int crypto_ahash_export(struct ahash_request *req, void *out) if (likely(tfm->using_shash)) return crypto_shash_export(ahash_request_ctx(req), out); + if (crypto_ahash_block_only(tfm)) { + unsigned int plen = crypto_ahash_blocksize(tfm) + 1; + unsigned int reqsize = crypto_ahash_reqsize(tfm); + unsigned int ss = crypto_ahash_statesize(tfm); + u8 *buf = ahash_request_ctx(req); + + memcpy(out + ss - plen, buf + reqsize - plen, plen); + } return crypto_ahash_alg(tfm)->export(req, out); } EXPORT_SYMBOL_GPL(crypto_ahash_export); @@ -739,6 +682,12 @@ int crypto_ahash_import(struct ahash_request *req, const void *in) return crypto_shash_import(prepare_shash_desc(req, tfm), in); if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) return -ENOKEY; + if (crypto_ahash_block_only(tfm)) { + unsigned int reqsize = crypto_ahash_reqsize(tfm); + u8 *buf = ahash_request_ctx(req); + + buf[reqsize - 1] = 0; + } return crypto_ahash_alg(tfm)->import(req, in); } EXPORT_SYMBOL_GPL(crypto_ahash_import); @@ -753,7 +702,7 @@ static void crypto_ahash_exit_tfm(struct crypto_tfm *tfm) else if (tfm->__crt_alg->cra_exit) tfm->__crt_alg->cra_exit(tfm); - if (ahash_is_async(hash)) + if (crypto_ahash_need_fallback(hash)) crypto_free_ahash(crypto_ahash_fb(hash)); } @@ -770,9 +719,12 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm) if (tfm->__crt_alg->cra_type == &crypto_shash_type) return crypto_init_ahash_using_shash(tfm); - if (ahash_is_async(hash)) { + if (crypto_ahash_need_fallback(hash)) { fb = crypto_alloc_ahash(crypto_ahash_alg_name(hash), - 0, CRYPTO_ALG_ASYNC); + CRYPTO_ALG_REQ_VIRT, + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_REQ_VIRT | + CRYPTO_AHASH_ALG_NO_EXPORT_CORE); if (IS_ERR(fb)) return PTR_ERR(fb); @@ -797,6 +749,10 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm) MAX_SYNC_HASH_REQSIZE) goto out_exit_tfm; + BUILD_BUG_ON(HASH_MAX_DESCSIZE > MAX_SYNC_HASH_REQSIZE); + if (crypto_ahash_reqsize(hash) < HASH_MAX_DESCSIZE) + crypto_ahash_set_reqsize(hash, HASH_MAX_DESCSIZE); + return 0; out_exit_tfm: @@ -941,7 +897,7 @@ struct crypto_ahash *crypto_clone_ahash(struct crypto_ahash *hash) return nhash; } - if (ahash_is_async(hash)) { + if (crypto_ahash_need_fallback(hash)) { fb = crypto_clone_ahash(crypto_ahash_fb(hash)); err = PTR_ERR(fb); if (IS_ERR(fb)) @@ -1003,10 +959,23 @@ static int ahash_prepare_alg(struct ahash_alg *alg) base->cra_type = &crypto_ahash_type; base->cra_flags |= CRYPTO_ALG_TYPE_AHASH; + if ((base->cra_flags ^ CRYPTO_ALG_REQ_VIRT) & + (CRYPTO_ALG_ASYNC | CRYPTO_ALG_REQ_VIRT)) + base->cra_flags |= CRYPTO_ALG_NEED_FALLBACK; + if (!alg->setkey) alg->setkey = ahash_nosetkey; - if (!alg->export_core || !alg->import_core) { + if (base->cra_flags & CRYPTO_AHASH_ALG_BLOCK_ONLY) { + BUILD_BUG_ON(MAX_ALGAPI_BLOCKSIZE >= 256); + if (!alg->finup) + return -EINVAL; + + base->cra_reqsize += base->cra_blocksize + 1; + alg->halg.statesize += base->cra_blocksize + 1; + alg->export_core = alg->export; + alg->import_core = alg->import; + } else if (!alg->export_core || !alg->import_core) { alg->export_core = ahash_default_export_core; alg->import_core = ahash_default_import_core; base->cra_flags |= CRYPTO_AHASH_ALG_NO_EXPORT_CORE; diff --git a/include/crypto/hash.h b/include/crypto/hash.h index bf177cf9be10..05ee817a3180 100644 --- a/include/crypto/hash.h +++ b/include/crypto/hash.h @@ 
-8,8 +8,8 @@ #ifndef _CRYPTO_HASH_H #define _CRYPTO_HASH_H -#include #include +#include #include #include @@ -65,6 +65,10 @@ struct ahash_request { }; u8 *result; + struct scatterlist sg_head[2]; + crypto_completion_t saved_complete; + void *saved_data; + void *__ctx[] CRYPTO_MINALIGN_ATTR; }; @@ -488,7 +492,11 @@ int crypto_ahash_finup(struct ahash_request *req); * -EBUSY if queue is full and request should be resubmitted later; * other < 0 if an error occurred */ -int crypto_ahash_final(struct ahash_request *req); +static inline int crypto_ahash_final(struct ahash_request *req) +{ + req->nbytes = 0; + return crypto_ahash_finup(req); +} /** * crypto_ahash_digest() - calculate message digest for a buffer
From patchwork Wed May 14 09:22:38 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889969
Date: Wed, 14 May 2025 17:22:38 +0800
Message-Id: <1b9ec629aac1a9eb78d31133ae0ec5771a077717.1747214319.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [v3 PATCH 05/11] crypto: hmac - Add export_core and import_core
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Add export_core and import_core so that hmac can be used as a fallback by block-only drivers.
Signed-off-by: Herbert Xu --- crypto/hmac.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/crypto/hmac.c b/crypto/hmac.c index 4517e04bfbaa..e4749a1f93dd 100644 --- a/crypto/hmac.c +++ b/crypto/hmac.c @@ -90,6 +90,22 @@ static int hmac_import(struct shash_desc *pdesc, const void *in) return crypto_shash_import(desc, in); } +static int hmac_export_core(struct shash_desc *pdesc, void *out) +{ + struct shash_desc *desc = shash_desc_ctx(pdesc); + + return crypto_shash_export_core(desc, out); +} + +static int hmac_import_core(struct shash_desc *pdesc, const void *in) +{ + const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm); + struct shash_desc *desc = shash_desc_ctx(pdesc); + + desc->tfm = tctx->hash; + return crypto_shash_import_core(desc, in); +} + static int hmac_init(struct shash_desc *pdesc) { const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm); @@ -177,6 +193,7 @@ static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb) return -ENOMEM; spawn = shash_instance_ctx(inst); + mask |= CRYPTO_AHASH_ALG_NO_EXPORT_CORE; err = crypto_grab_shash(spawn, shash_crypto_instance(inst), crypto_attr_alg_name(tb[1]), 0, mask); if (err) @@ -211,6 +228,8 @@ static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb) inst->alg.finup = hmac_finup; inst->alg.export = hmac_export; inst->alg.import = hmac_import; + inst->alg.export_core = hmac_export_core; + inst->alg.import_core = hmac_import_core; inst->alg.setkey = hmac_setkey; inst->alg.init_tfm = hmac_init_tfm; inst->alg.clone_tfm = hmac_clone_tfm;
From patchwork Wed May 14 09:22:43 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889968
Date: Wed, 14 May 2025 17:22:43 +0800
From: Herbert Xu
Subject: [v3 PATCH 07/11] crypto: algapi - Add driver template support to crypto_inst_setname
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Add support to crypto_inst_setname for having a driver template name that differs from the algorithm template name.
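For illustration, a hedged sketch of how a template creation routine could use the extended helper; the "foo" and "foo-arch" template names are made up for this example.

#include <crypto/algapi.h>
#include <crypto/internal/hash.h>

/* Sketch only: name an instance of a hypothetical "foo" template whose
 * driver implementation is registered under the "foo-arch" prefix. */
static int foo_set_names(struct shash_instance *inst, struct crypto_alg *alg)
{
	/* cra_name becomes "foo(<alg cra_name>)" and cra_driver_name becomes
	 * "foo-arch(<alg cra_driver_name>)".  Existing two-argument callers
	 * are unchanged: the macro reuses the algorithm template name as the
	 * driver template name. */
	return crypto_inst_setname(shash_crypto_instance(inst), "foo",
				   "foo-arch", alg);
}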
Signed-off-by: Herbert Xu --- crypto/algapi.c | 8 ++++---- include/crypto/algapi.h | 12 ++++++++++-- 2 files changed, 14 insertions(+), 6 deletions(-) diff --git a/crypto/algapi.c b/crypto/algapi.c index 25b5519e3b71..e604d0d8b7b4 100644 --- a/crypto/algapi.c +++ b/crypto/algapi.c @@ -923,20 +923,20 @@ const char *crypto_attr_alg_name(struct rtattr *rta) } EXPORT_SYMBOL_GPL(crypto_attr_alg_name); -int crypto_inst_setname(struct crypto_instance *inst, const char *name, - struct crypto_alg *alg) +int __crypto_inst_setname(struct crypto_instance *inst, const char *name, + const char *driver, struct crypto_alg *alg) { if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME, "%s(%s)", name, alg->cra_name) >= CRYPTO_MAX_ALG_NAME) return -ENAMETOOLONG; if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s(%s)", - name, alg->cra_driver_name) >= CRYPTO_MAX_ALG_NAME) + driver, alg->cra_driver_name) >= CRYPTO_MAX_ALG_NAME) return -ENAMETOOLONG; return 0; } -EXPORT_SYMBOL_GPL(crypto_inst_setname); +EXPORT_SYMBOL_GPL(__crypto_inst_setname); void crypto_init_queue(struct crypto_queue *queue, unsigned int max_qlen) { diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h index 423e57eca351..188eface0a11 100644 --- a/include/crypto/algapi.h +++ b/include/crypto/algapi.h @@ -146,8 +146,16 @@ void *crypto_spawn_tfm2(struct crypto_spawn *spawn); struct crypto_attr_type *crypto_get_attr_type(struct rtattr **tb); int crypto_check_attr_type(struct rtattr **tb, u32 type, u32 *mask_ret); const char *crypto_attr_alg_name(struct rtattr *rta); -int crypto_inst_setname(struct crypto_instance *inst, const char *name, - struct crypto_alg *alg); +int __crypto_inst_setname(struct crypto_instance *inst, const char *name, + const char *driver, struct crypto_alg *alg); + +#define crypto_inst_setname(inst, name, ...) 
\ + CONCATENATE(crypto_inst_setname_, COUNT_ARGS(__VA_ARGS__))( \ + inst, name, ##__VA_ARGS__) +#define crypto_inst_setname_1(inst, name, alg) \ + __crypto_inst_setname(inst, name, name, alg) +#define crypto_inst_setname_2(inst, name, driver, alg) \ + __crypto_inst_setname(inst, name, driver, alg) void crypto_init_queue(struct crypto_queue *queue, unsigned int max_qlen); int crypto_enqueue_request(struct crypto_queue *queue,
From patchwork Wed May 14 09:22:47 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889967
Date: Wed, 14 May 2025 17:22:47 +0800
Message-Id: <60dab74b9f4ddf2d8759b3becbbdf8bcce2b3381.1747214319.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [v3 PATCH 09/11] crypto: hmac - Add ahash support
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Add ahash support to hmac so that drivers that can't do hmac in hardware do not have to implement duplicate copies of hmac.
Signed-off-by: Herbert Xu --- crypto/hmac.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/crypto/hmac.c b/crypto/hmac.c index d52d37df5a13..a67bc3543c35 100644 --- a/crypto/hmac.c +++ b/crypto/hmac.c @@ -314,6 +314,21 @@ static int hmac_import_ahash(struct ahash_request *preq, const void *in) return crypto_ahash_import(req, in); } +static int hmac_export_core_ahash(struct ahash_request *preq, void *out) +{ + return crypto_ahash_export_core(ahash_request_ctx(preq), out); +} + +static int hmac_import_core_ahash(struct ahash_request *preq, const void *in) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(preq); + struct ahash_hmac_ctx *tctx = crypto_ahash_ctx(tfm); + struct ahash_request *req = ahash_request_ctx(preq); + + ahash_request_set_tfm(req, tctx->hash); + return crypto_ahash_import_core(req, in); +} + static int hmac_init_ahash(struct ahash_request *preq) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(preq); @@ -443,6 +458,7 @@ static int __hmac_create_ahash(struct crypto_template *tmpl, return -ENOMEM; spawn = ahash_instance_ctx(inst); + mask |= CRYPTO_AHASH_ALG_NO_EXPORT_CORE; err = crypto_grab_ahash(spawn, ahash_crypto_instance(inst), crypto_attr_alg_name(tb[1]), 0, mask); if (err) @@ -483,6 +499,8 @@ static int __hmac_create_ahash(struct crypto_template *tmpl, inst->alg.digest = hmac_digest_ahash; inst->alg.export = hmac_export_ahash; inst->alg.import = hmac_import_ahash; + inst->alg.export_core = hmac_export_core_ahash; + inst->alg.import_core = hmac_import_core_ahash; inst->alg.setkey = hmac_setkey_ahash; inst->alg.init_tfm = hmac_init_ahash_tfm; inst->alg.clone_tfm = hmac_clone_ahash_tfm;
From patchwork Wed May 14 09:22:52 2025
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 889966
Date: Wed, 14 May 2025 17:22:52 +0800
Message-Id: <9788e636e64bebd7ebfa0a567deec516623b862b.1747214319.git.herbert@gondor.apana.org.au>
From: Herbert Xu
Subject: [v3 PATCH 11/11] crypto: testmgr - Add hash export format testing
To: Linux Crypto Mailing List
X-Mailing-List: linux-crypto@vger.kernel.org

Ensure that the hash state can be exported to and imported from the generic algorithm.
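For illustration, a simplified sketch of the compatibility check this patch adds; error reporting, the state poisoning and the vec->state handling are omitted, and the function and buffer names here are illustrative rather than the actual testmgr code.

/* Sketch only: export the state of the implementation under test and make
 * sure the generic fallback can import it and finish the hash.  The real
 * check_ahash_export() additionally re-imports vec->state into the request
 * under test and compares the fallback's digest against the test vector. */
static int export_format_check(struct ahash_request *req, u8 *state, u8 *digest)
{
	HASH_FBREQ_ON_STACK(fbreq, req);	/* request on the generic fallback */
	int err;

	err = crypto_ahash_export(req, state);
	if (err)
		return err;

	err = crypto_ahash_import(fbreq, state);
	if (err)
		return err;

	ahash_request_set_crypt(fbreq, NULL, digest, 0);
	return crypto_ahash_final(fbreq);
}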
Signed-off-by: Herbert Xu --- crypto/testmgr.c | 95 ++++++++++++++++++++++++++++++---- crypto/testmgr.h | 2 + include/crypto/internal/hash.h | 6 +++ 3 files changed, 94 insertions(+), 9 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 557ec5b1656a..5b14ab8796f4 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -17,10 +17,19 @@ */ #include -#include +#include +#include +#include +#include +#include +#include +#include +#include +#include #include #include #include +#include #include #include #include @@ -28,14 +37,6 @@ #include #include #include -#include -#include -#include -#include -#include -#include -#include -#include #include "internal.h" @@ -1464,6 +1465,49 @@ static int check_nonfinal_ahash_op(const char *op, int err, return 0; } +static int check_ahash_export(struct ahash_request *req, + const struct hash_testvec *vec, + const char *vec_name, + const struct testvec_config *cfg, + const char *driver, u8 *hashstate) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + const unsigned int digestsize = crypto_ahash_digestsize(tfm); + HASH_FBREQ_ON_STACK(fbreq, req); + int err; + + if (!vec->state) + return 0; + + err = crypto_ahash_export(req, hashstate); + if (err) { + pr_err("alg: ahash: %s mixed export() failed with err %d on test vector %s, cfg=\"%s\"\n", + driver, err, vec_name, cfg->name); + return err; + } + err = crypto_ahash_import(req, vec->state); + if (err) { + pr_err("alg: ahash: %s mixed import() failed with err %d on test vector %s, cfg=\"%s\"\n", + driver, err, vec_name, cfg->name); + return err; + } + err = crypto_ahash_import(fbreq, hashstate); + if (err) { + pr_err("alg: ahash: %s fallback import() failed with err %d on test vector %s, cfg=\"%s\"\n", + crypto_ahash_driver_name(crypto_ahash_reqtfm(fbreq)), err, vec_name, cfg->name); + return err; + } + ahash_request_set_crypt(fbreq, NULL, hashstate, 0); + testmgr_poison(hashstate, digestsize + TESTMGR_POISON_LEN); + err = crypto_ahash_final(fbreq); + if (err) { + pr_err("alg: ahash: %s fallback final() failed with err %d on test vector %s, cfg=\"%s\"\n", + crypto_ahash_driver_name(crypto_ahash_reqtfm(fbreq)), err, vec_name, cfg->name); + return err; + } + return check_hash_result("ahash export", hashstate, digestsize, vec, vec_name, driver, cfg); +} + /* Test one hash test vector in one configuration, using the ahash API */ static int test_ahash_vec_cfg(const struct hash_testvec *vec, const char *vec_name, @@ -1609,6 +1653,10 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec, driver, vec_name, cfg); if (err) return err; + err = check_ahash_export(req, vec, vec_name, cfg, + driver, hashstate); + if (err) + return err; err = do_ahash_op(crypto_ahash_final, req, &wait, cfg->nosimd); if (err) { pr_err("alg: ahash: %s final() failed with err %d on test vector %s, cfg=\"%s\"\n", @@ -1732,6 +1780,17 @@ static void generate_random_hash_testvec(struct rnd_state *rng, vec->digest_error = crypto_hash_digest( crypto_ahash_reqtfm(req), vec->plaintext, vec->psize, (u8 *)vec->digest); + + if (vec->digest_error || !vec->state) + goto done; + + ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL); + ahash_request_set_virt(req, vec->plaintext, (u8 *)vec->digest, + vec->psize); + crypto_ahash_init(req); + crypto_ahash_update(req); + crypto_ahash_export(req, (u8 *)vec->state); + done: snprintf(name, max_namelen, "\"random: psize=%u ksize=%u\"", vec->psize, vec->ksize); @@ -1750,6 +1809,7 @@ static int test_hash_vs_generic_impl(const char *generic_driver, { struct crypto_ahash 
*tfm = crypto_ahash_reqtfm(req); const unsigned int digestsize = crypto_ahash_digestsize(tfm); + const unsigned int statesize = crypto_ahash_statesize(tfm); const unsigned int blocksize = crypto_ahash_blocksize(tfm); const unsigned int maxdatasize = (2 * PAGE_SIZE) - TESTMGR_POISON_LEN; const char *algname = crypto_hash_alg_common(tfm)->base.cra_name; @@ -1822,6 +1882,22 @@ static int test_hash_vs_generic_impl(const char *generic_driver, goto out; } + if (crypto_hash_no_export_core(tfm) || + crypto_hash_no_export_core(generic_tfm)) + ; + else if (statesize != crypto_ahash_statesize(generic_tfm)) { + pr_err("alg: hash: statesize for %s (%u) doesn't match generic impl (%u)\n", + driver, statesize, + crypto_ahash_statesize(generic_tfm)); + err = -EINVAL; + goto out; + } else { + vec.state = kmalloc(statesize, GFP_KERNEL); + err = -ENOMEM; + if (!vec.state) + goto out; + } + /* * Now generate test vectors using the generic implementation, and test * the other implementation against them. @@ -1854,6 +1930,7 @@ static int test_hash_vs_generic_impl(const char *generic_driver, kfree(vec.key); kfree(vec.plaintext); kfree(vec.digest); + kfree(vec.state); ahash_request_free(generic_req); crypto_free_ahash(generic_tfm); return err; diff --git a/crypto/testmgr.h b/crypto/testmgr.h index 32d099ac9e73..5cf455a708b8 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -29,6 +29,7 @@ * hash_testvec: structure to describe a hash (message digest) test * @key: Pointer to key (NULL if none) * @plaintext: Pointer to source data + * @state: Pointer to expected state * @digest: Pointer to expected digest * @psize: Length of source data in bytes * @ksize: Length of @key in bytes (0 if no key) @@ -39,6 +40,7 @@ struct hash_testvec { const char *key; const char *plaintext; + const char *state; const char *digest; unsigned int psize; unsigned short ksize; diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h index 0f85c543f80b..f052afa6e7b0 100644 --- a/include/crypto/internal/hash.h +++ b/include/crypto/internal/hash.h @@ -91,6 +91,12 @@ static inline bool crypto_hash_alg_needs_key(struct hash_alg_common *alg) !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY); } +static inline bool crypto_hash_no_export_core(struct crypto_ahash *tfm) +{ + return crypto_hash_alg_common(tfm)->base.cra_flags & + CRYPTO_AHASH_ALG_NO_EXPORT_CORE; +} + int crypto_grab_ahash(struct crypto_ahash_spawn *spawn, struct crypto_instance *inst, const char *name, u32 type, u32 mask);