From patchwork Fri Jan 3 01:02:44 2020
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 198310
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S. Miller", linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH v2 01/10] crypto: caam - refactor skcipher/aead/gcm/chachapoly {en,de}crypt functions
Date: Fri, 3 Jan 2020 03:02:44 +0200
Message-Id: <1578013373-1956-2-git-send-email-iuliana.prodan@nxp.com>
In-Reply-To: <1578013373-1956-1-git-send-email-iuliana.prodan@nxp.com>
References: <1578013373-1956-1-git-send-email-iuliana.prodan@nxp.com>

Create a common crypt function for each of the skcipher/aead/gcm/chachapoly
algorithms and call it from the encrypt/decrypt entry points with a boolean
argument: true for encrypt, false for decrypt.
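In sketch form, the pattern is the following stand-alone illustration
(demo_request and the printf body are placeholders, not the driver's
request types or descriptor handling; the wrappers mirror how e.g.
gcm_encrypt()/gcm_decrypt() collapse around gcm_crypt()):

/*
 * Stand-alone sketch of the refactoring pattern: the duplicated
 * encrypt/decrypt bodies collapse into one helper taking a bool,
 * and the entry points become one-line wrappers.
 */
#include <stdbool.h>
#include <stdio.h>

struct demo_request {
	const char *name;
};

/* Everything the encrypt and decrypt paths used to duplicate. */
static int demo_crypt(struct demo_request *req, bool encrypt)
{
	printf("%s: %scrypt\n", req->name, encrypt ? "en" : "de");
	return 0;
}

static int demo_encrypt(struct demo_request *req)
{
	return demo_crypt(req, true);
}

static int demo_decrypt(struct demo_request *req)
{
	return demo_crypt(req, false);
}

int main(void)
{
	struct demo_request req = { .name = "gcm(aes)" };

	demo_encrypt(&req);
	demo_decrypt(&req);
	return 0;
}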
Signed-off-by: Iuliana Prodan Reviewed-by: Horia Geanta --- drivers/crypto/caam/caamalg.c | 268 +++++++++--------------------------------- 1 file changed, 53 insertions(+), 215 deletions(-) diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c index 2912006..6e021692 100644 --- a/drivers/crypto/caam/caamalg.c +++ b/drivers/crypto/caam/caamalg.c @@ -960,8 +960,8 @@ static void skcipher_unmap(struct device *dev, struct skcipher_edesc *edesc, edesc->sec4_sg_dma, edesc->sec4_sg_bytes); } -static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err, - void *context) +static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err, + void *context) { struct aead_request *req = context; struct aead_edesc *edesc; @@ -981,69 +981,8 @@ static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err, aead_request_complete(req, ecode); } -static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err, - void *context) -{ - struct aead_request *req = context; - struct aead_edesc *edesc; - int ecode = 0; - - dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - - edesc = container_of(desc, struct aead_edesc, hw_desc[0]); - - if (err) - ecode = caam_jr_strstatus(jrdev, err); - - aead_unmap(jrdev, edesc, req); - - kfree(edesc); - - aead_request_complete(req, ecode); -} - -static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err, - void *context) -{ - struct skcipher_request *req = context; - struct skcipher_edesc *edesc; - struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); - int ivsize = crypto_skcipher_ivsize(skcipher); - int ecode = 0; - - dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - - edesc = container_of(desc, struct skcipher_edesc, hw_desc[0]); - - if (err) - ecode = caam_jr_strstatus(jrdev, err); - - skcipher_unmap(jrdev, edesc, req); - - /* - * The crypto API expects us to set the IV (req->iv) to the last - * ciphertext block (CBC mode) or last counter (CTR mode). - * This is used e.g. by the CTS mode. - */ - if (ivsize && !ecode) { - memcpy(req->iv, (u8 *)edesc->sec4_sg + edesc->sec4_sg_bytes, - ivsize); - print_hex_dump_debug("dstiv @"__stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, req->iv, - edesc->src_nents > 1 ? 100 : ivsize, 1); - } - - caam_dump_sg("dst @" __stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, req->dst, - edesc->dst_nents > 1 ? 
100 : req->cryptlen, 1); - - kfree(edesc); - - skcipher_request_complete(req, ecode); -} - -static void skcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err, - void *context) +static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err, + void *context) { struct skcipher_request *req = context; struct skcipher_edesc *edesc; @@ -1455,41 +1394,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, return edesc; } -static int gcm_encrypt(struct aead_request *req) -{ - struct aead_edesc *edesc; - struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct caam_ctx *ctx = crypto_aead_ctx(aead); - struct device *jrdev = ctx->jrdev; - bool all_contig; - u32 *desc; - int ret = 0; - - /* allocate extended descriptor */ - edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig, true); - if (IS_ERR(edesc)) - return PTR_ERR(edesc); - - /* Create and submit job descriptor */ - init_gcm_job(req, edesc, all_contig, true); - - print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, - desc_bytes(edesc->hw_desc), 1); - - desc = edesc->hw_desc; - ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { - aead_unmap(jrdev, edesc, req); - kfree(edesc); - } - - return ret; -} - -static int chachapoly_encrypt(struct aead_request *req) +static inline int chachapoly_crypt(struct aead_request *req, bool encrypt) { struct aead_edesc *edesc; struct crypto_aead *aead = crypto_aead_reqtfm(req); @@ -1500,18 +1405,18 @@ static int chachapoly_encrypt(struct aead_request *req) int ret; edesc = aead_edesc_alloc(req, CHACHAPOLY_DESC_JOB_IO_LEN, &all_contig, - true); + encrypt); if (IS_ERR(edesc)) return PTR_ERR(edesc); desc = edesc->hw_desc; - init_chachapoly_job(req, edesc, all_contig, true); + init_chachapoly_job(req, edesc, all_contig, encrypt); print_hex_dump_debug("chachapoly jobdesc@" __stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req); + ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req); if (!ret) { ret = -EINPROGRESS; } else { @@ -1522,45 +1427,17 @@ static int chachapoly_encrypt(struct aead_request *req) return ret; } -static int chachapoly_decrypt(struct aead_request *req) +static int chachapoly_encrypt(struct aead_request *req) { - struct aead_edesc *edesc; - struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct caam_ctx *ctx = crypto_aead_ctx(aead); - struct device *jrdev = ctx->jrdev; - bool all_contig; - u32 *desc; - int ret; - - edesc = aead_edesc_alloc(req, CHACHAPOLY_DESC_JOB_IO_LEN, &all_contig, - false); - if (IS_ERR(edesc)) - return PTR_ERR(edesc); - - desc = edesc->hw_desc; - - init_chachapoly_job(req, edesc, all_contig, false); - print_hex_dump_debug("chachapoly jobdesc@" __stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), - 1); - - ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { - aead_unmap(jrdev, edesc, req); - kfree(edesc); - } - - return ret; + return chachapoly_crypt(req, true); } -static int ipsec_gcm_encrypt(struct aead_request *req) +static int chachapoly_decrypt(struct aead_request *req) { - return crypto_ipsec_check_assoclen(req->assoclen) ? 
: gcm_encrypt(req); + return chachapoly_crypt(req, false); } -static int aead_encrypt(struct aead_request *req) +static inline int aead_crypt(struct aead_request *req, bool encrypt) { struct aead_edesc *edesc; struct crypto_aead *aead = crypto_aead_reqtfm(req); @@ -1572,19 +1449,19 @@ static int aead_encrypt(struct aead_request *req) /* allocate extended descriptor */ edesc = aead_edesc_alloc(req, AUTHENC_DESC_JOB_IO_LEN, - &all_contig, true); + &all_contig, encrypt); if (IS_ERR(edesc)) return PTR_ERR(edesc); /* Create and submit job descriptor */ - init_authenc_job(req, edesc, all_contig, true); + init_authenc_job(req, edesc, all_contig, encrypt); print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, desc_bytes(edesc->hw_desc), 1); desc = edesc->hw_desc; - ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req); + ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req); if (!ret) { ret = -EINPROGRESS; } else { @@ -1595,7 +1472,17 @@ static int aead_encrypt(struct aead_request *req) return ret; } -static int gcm_decrypt(struct aead_request *req) +static int aead_encrypt(struct aead_request *req) +{ + return aead_crypt(req, true); +} + +static int aead_decrypt(struct aead_request *req) +{ + return aead_crypt(req, false); +} + +static inline int gcm_crypt(struct aead_request *req, bool encrypt) { struct aead_edesc *edesc; struct crypto_aead *aead = crypto_aead_reqtfm(req); @@ -1606,19 +1493,20 @@ static int gcm_decrypt(struct aead_request *req) int ret = 0; /* allocate extended descriptor */ - edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig, false); + edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig, + encrypt); if (IS_ERR(edesc)) return PTR_ERR(edesc); - /* Create and submit job descriptor*/ - init_gcm_job(req, edesc, all_contig, false); + /* Create and submit job descriptor */ + init_gcm_job(req, edesc, all_contig, encrypt); print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, desc_bytes(edesc->hw_desc), 1); desc = edesc->hw_desc; - ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req); + ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req); if (!ret) { ret = -EINPROGRESS; } else { @@ -1629,48 +1517,24 @@ static int gcm_decrypt(struct aead_request *req) return ret; } -static int ipsec_gcm_decrypt(struct aead_request *req) +static int gcm_encrypt(struct aead_request *req) { - return crypto_ipsec_check_assoclen(req->assoclen) ? 
: gcm_decrypt(req); + return gcm_crypt(req, true); } -static int aead_decrypt(struct aead_request *req) +static int gcm_decrypt(struct aead_request *req) { - struct aead_edesc *edesc; - struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct caam_ctx *ctx = crypto_aead_ctx(aead); - struct device *jrdev = ctx->jrdev; - bool all_contig; - u32 *desc; - int ret = 0; - - caam_dump_sg("dec src@" __stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, req->src, - req->assoclen + req->cryptlen, 1); - - /* allocate extended descriptor */ - edesc = aead_edesc_alloc(req, AUTHENC_DESC_JOB_IO_LEN, - &all_contig, false); - if (IS_ERR(edesc)) - return PTR_ERR(edesc); - - /* Create and submit job descriptor*/ - init_authenc_job(req, edesc, all_contig, false); - - print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, - desc_bytes(edesc->hw_desc), 1); + return gcm_crypt(req, false); +} - desc = edesc->hw_desc; - ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { - aead_unmap(jrdev, edesc, req); - kfree(edesc); - } +static int ipsec_gcm_encrypt(struct aead_request *req) +{ + return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_encrypt(req); +} - return ret; +static int ipsec_gcm_decrypt(struct aead_request *req) +{ + return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_decrypt(req); } /* @@ -1834,7 +1698,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, return edesc; } -static int skcipher_encrypt(struct skcipher_request *req) +static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt) { struct skcipher_edesc *edesc; struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); @@ -1852,14 +1716,14 @@ static int skcipher_encrypt(struct skcipher_request *req) return PTR_ERR(edesc); /* Create and submit job descriptor*/ - init_skcipher_job(req, edesc, true); + init_skcipher_job(req, edesc, encrypt); print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, desc_bytes(edesc->hw_desc), 1); desc = edesc->hw_desc; - ret = caam_jr_enqueue(jrdev, desc, skcipher_encrypt_done, req); + ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, req); if (!ret) { ret = -EINPROGRESS; @@ -1871,40 +1735,14 @@ static int skcipher_encrypt(struct skcipher_request *req) return ret; } -static int skcipher_decrypt(struct skcipher_request *req) +static int skcipher_encrypt(struct skcipher_request *req) { - struct skcipher_edesc *edesc; - struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); - struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher); - struct device *jrdev = ctx->jrdev; - u32 *desc; - int ret = 0; - - if (!req->cryptlen) - return 0; - - /* allocate extended descriptor */ - edesc = skcipher_edesc_alloc(req, DESC_JOB_IO_LEN * CAAM_CMD_SZ); - if (IS_ERR(edesc)) - return PTR_ERR(edesc); - - /* Create and submit job descriptor*/ - init_skcipher_job(req, edesc, false); - desc = edesc->hw_desc; - - print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, - desc_bytes(edesc->hw_desc), 1); - - ret = caam_jr_enqueue(jrdev, desc, skcipher_decrypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { - skcipher_unmap(jrdev, edesc, req); - kfree(edesc); - } + return skcipher_crypt(req, true); +} - return ret; +static int skcipher_decrypt(struct skcipher_request *req) +{ + return skcipher_crypt(req, false); } static struct 
caam_skcipher_alg driver_algs[] = {

From patchwork Fri Jan 3 01:02:46 2020
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 198308
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S. Miller", linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH v2 03/10] crypto: caam - refactor ahash_edesc_alloc
Date: Fri, 3 Jan 2020 03:02:46 +0200
Message-Id: <1578013373-1956-4-git-send-email-iuliana.prodan@nxp.com>
In-Reply-To: <1578013373-1956-1-git-send-email-iuliana.prodan@nxp.com>
References: <1578013373-1956-1-git-send-email-iuliana.prodan@nxp.com>

Change the parameters of the ahash_edesc_alloc function:
- drop the gfp flags argument, since the flags can be computed inside
  ahash_edesc_alloc, the only place they are needed;
- pass the ahash_request instead of caam_hash_ctx, so that the gfp flags
  can be derived from the request.

Signed-off-by: Iuliana Prodan
Reviewed-by: Horia Geanta
---
 drivers/crypto/caam/caamhash.c | 62 +++++++++++++++---------------------------
 1 file changed, 22 insertions(+), 40 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 84be6db..844e391 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -666,11 +666,14 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
 * Allocate an enhanced descriptor, which contains the hardware descriptor
 * and space for hardware scatter table containing sg_num entries.
*/ -static struct ahash_edesc *ahash_edesc_alloc(struct caam_hash_ctx *ctx, +static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req, int sg_num, u32 *sh_desc, - dma_addr_t sh_desc_dma, - gfp_t flags) + dma_addr_t sh_desc_dma) { + struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); + struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); + gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? + GFP_KERNEL : GFP_ATOMIC; struct ahash_edesc *edesc; unsigned int sg_size = sg_num * sizeof(struct sec4_sg_entry); @@ -729,8 +732,6 @@ static int ahash_update_ctx(struct ahash_request *req) struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct caam_hash_state *state = ahash_request_ctx(req); struct device *jrdev = ctx->jrdev; - gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? - GFP_KERNEL : GFP_ATOMIC; u8 *buf = state->buf; int *buflen = &state->buflen; int *next_buflen = &state->next_buflen; @@ -784,8 +785,8 @@ static int ahash_update_ctx(struct ahash_request *req) * allocate space for base edesc and hw desc commands, * link tables */ - edesc = ahash_edesc_alloc(ctx, pad_nents, ctx->sh_desc_update, - ctx->sh_desc_update_dma, flags); + edesc = ahash_edesc_alloc(req, pad_nents, ctx->sh_desc_update, + ctx->sh_desc_update_dma); if (!edesc) { dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE); return -ENOMEM; @@ -859,8 +860,6 @@ static int ahash_final_ctx(struct ahash_request *req) struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct caam_hash_state *state = ahash_request_ctx(req); struct device *jrdev = ctx->jrdev; - gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? - GFP_KERNEL : GFP_ATOMIC; int buflen = state->buflen; u32 *desc; int sec4_sg_bytes; @@ -872,8 +871,8 @@ static int ahash_final_ctx(struct ahash_request *req) sizeof(struct sec4_sg_entry); /* allocate space for base edesc and hw desc commands, link tables */ - edesc = ahash_edesc_alloc(ctx, 4, ctx->sh_desc_fin, - ctx->sh_desc_fin_dma, flags); + edesc = ahash_edesc_alloc(req, 4, ctx->sh_desc_fin, + ctx->sh_desc_fin_dma); if (!edesc) return -ENOMEM; @@ -925,8 +924,6 @@ static int ahash_finup_ctx(struct ahash_request *req) struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct caam_hash_state *state = ahash_request_ctx(req); struct device *jrdev = ctx->jrdev; - gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? - GFP_KERNEL : GFP_ATOMIC; int buflen = state->buflen; u32 *desc; int sec4_sg_src_index; @@ -955,9 +952,8 @@ static int ahash_finup_ctx(struct ahash_request *req) sec4_sg_src_index = 1 + (buflen ? 1 : 0); /* allocate space for base edesc and hw desc commands, link tables */ - edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index + mapped_nents, - ctx->sh_desc_fin, ctx->sh_desc_fin_dma, - flags); + edesc = ahash_edesc_alloc(req, sec4_sg_src_index + mapped_nents, + ctx->sh_desc_fin, ctx->sh_desc_fin_dma); if (!edesc) { dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE); return -ENOMEM; @@ -1005,8 +1001,6 @@ static int ahash_digest(struct ahash_request *req) struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct caam_hash_state *state = ahash_request_ctx(req); struct device *jrdev = ctx->jrdev; - gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? 
- GFP_KERNEL : GFP_ATOMIC; u32 *desc; int digestsize = crypto_ahash_digestsize(ahash); int src_nents, mapped_nents; @@ -1033,9 +1027,8 @@ static int ahash_digest(struct ahash_request *req) } /* allocate space for base edesc and hw desc commands, link tables */ - edesc = ahash_edesc_alloc(ctx, mapped_nents > 1 ? mapped_nents : 0, - ctx->sh_desc_digest, ctx->sh_desc_digest_dma, - flags); + edesc = ahash_edesc_alloc(req, mapped_nents > 1 ? mapped_nents : 0, + ctx->sh_desc_digest, ctx->sh_desc_digest_dma); if (!edesc) { dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE); return -ENOMEM; @@ -1082,8 +1075,6 @@ static int ahash_final_no_ctx(struct ahash_request *req) struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct caam_hash_state *state = ahash_request_ctx(req); struct device *jrdev = ctx->jrdev; - gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? - GFP_KERNEL : GFP_ATOMIC; u8 *buf = state->buf; int buflen = state->buflen; u32 *desc; @@ -1092,8 +1083,8 @@ static int ahash_final_no_ctx(struct ahash_request *req) int ret; /* allocate space for base edesc and hw desc commands, link tables */ - edesc = ahash_edesc_alloc(ctx, 0, ctx->sh_desc_digest, - ctx->sh_desc_digest_dma, flags); + edesc = ahash_edesc_alloc(req, 0, ctx->sh_desc_digest, + ctx->sh_desc_digest_dma); if (!edesc) return -ENOMEM; @@ -1141,8 +1132,6 @@ static int ahash_update_no_ctx(struct ahash_request *req) struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct caam_hash_state *state = ahash_request_ctx(req); struct device *jrdev = ctx->jrdev; - gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? - GFP_KERNEL : GFP_ATOMIC; u8 *buf = state->buf; int *buflen = &state->buflen; int *next_buflen = &state->next_buflen; @@ -1195,10 +1184,9 @@ static int ahash_update_no_ctx(struct ahash_request *req) * allocate space for base edesc and hw desc commands, * link tables */ - edesc = ahash_edesc_alloc(ctx, pad_nents, + edesc = ahash_edesc_alloc(req, pad_nents, ctx->sh_desc_update_first, - ctx->sh_desc_update_first_dma, - flags); + ctx->sh_desc_update_first_dma); if (!edesc) { dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE); return -ENOMEM; @@ -1266,8 +1254,6 @@ static int ahash_finup_no_ctx(struct ahash_request *req) struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct caam_hash_state *state = ahash_request_ctx(req); struct device *jrdev = ctx->jrdev; - gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? - GFP_KERNEL : GFP_ATOMIC; int buflen = state->buflen; u32 *desc; int sec4_sg_bytes, sec4_sg_src_index, src_nents, mapped_nents; @@ -1297,9 +1283,8 @@ static int ahash_finup_no_ctx(struct ahash_request *req) sizeof(struct sec4_sg_entry); /* allocate space for base edesc and hw desc commands, link tables */ - edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index + mapped_nents, - ctx->sh_desc_digest, ctx->sh_desc_digest_dma, - flags); + edesc = ahash_edesc_alloc(req, sec4_sg_src_index + mapped_nents, + ctx->sh_desc_digest, ctx->sh_desc_digest_dma); if (!edesc) { dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE); return -ENOMEM; @@ -1352,8 +1337,6 @@ static int ahash_update_first(struct ahash_request *req) struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); struct caam_hash_state *state = ahash_request_ctx(req); struct device *jrdev = ctx->jrdev; - gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? 
- GFP_KERNEL : GFP_ATOMIC; u8 *buf = state->buf; int *buflen = &state->buflen; int *next_buflen = &state->next_buflen; @@ -1401,11 +1384,10 @@ * allocate space for base edesc and hw desc commands, * link tables */ - edesc = ahash_edesc_alloc(ctx, mapped_nents > 1 ? + edesc = ahash_edesc_alloc(req, mapped_nents > 1 ? mapped_nents : 0, ctx->sh_desc_update_first, - ctx->sh_desc_update_first_dma, - flags); + ctx->sh_desc_update_first_dma); if (!edesc) { dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE); return -ENOMEM;

From patchwork Fri Jan 3 01:02:48 2020
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 198309
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S. Miller", linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH v2 05/10] crypto: caam - change return code in caam_jr_enqueue function
Date: Fri, 3 Jan 2020 03:02:48 +0200
Message-Id: <1578013373-1956-6-git-send-email-iuliana.prodan@nxp.com>
In-Reply-To: <1578013373-1956-1-git-send-email-iuliana.prodan@nxp.com>
References: <1578013373-1956-1-git-send-email-iuliana.prodan@nxp.com>

Based on commit 6b80ea389a0b ("crypto: change transient busy return code
to -ENOSPC"), change the return code of the caam_jr_enqueue function to
-EINPROGRESS in case of success, -ENOSPC in case the CAAM is busy (i.e.
has no space left in the job ring queue), and -EIO if it cannot map the
caller's descriptor. Also update the resource-freeing paths for each
algorithm type accordingly. This is done in preparation for adding
backlogging support in CAAM.
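Caller-side, the new convention can be sketched stand-alone like this
(demo_enqueue stands in for caam_jr_enqueue; only the errno values are
the real ones):

/*
 * Success is now reported as -EINPROGRESS by the enqueue function
 * itself, so callers stop translating 0 into -EINPROGRESS and treat
 * anything else as "clean up and bail out".
 */
#include <errno.h>
#include <stdio.h>

static int demo_enqueue(int ring_full)
{
	if (ring_full)
		return -ENOSPC;		/* transient busy: job ring is full */
	return -EINPROGRESS;		/* accepted; completion comes via callback */
}

static int demo_submit(int ring_full)
{
	int ret = demo_enqueue(ring_full);

	if (ret != -EINPROGRESS) {
		/* the driver unmaps DMA and frees the descriptor here */
		printf("enqueue failed: %d\n", ret);
	}
	return ret;
}

int main(void)
{
	demo_submit(0);		/* -EINPROGRESS */
	demo_submit(1);		/* -ENOSPC, resources freed */
	return 0;
}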
Signed-off-by: Iuliana Prodan Reviewed-by: Horia Geanta --- drivers/crypto/caam/caamalg.c | 16 ++++------------ drivers/crypto/caam/caamhash.c | 34 +++++++++++----------------------- drivers/crypto/caam/caampkc.c | 16 ++++++++-------- drivers/crypto/caam/caamrng.c | 4 ++-- drivers/crypto/caam/jr.c | 8 ++++---- drivers/crypto/caam/key_gen.c | 2 +- 6 files changed, 30 insertions(+), 50 deletions(-) diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c index 6e021692..21b6172 100644 --- a/drivers/crypto/caam/caamalg.c +++ b/drivers/crypto/caam/caamalg.c @@ -1417,9 +1417,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt) 1); ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { + if (ret != -EINPROGRESS) { aead_unmap(jrdev, edesc, req); kfree(edesc); } @@ -1462,9 +1460,7 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt) desc = edesc->hw_desc; ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { + if (ret != -EINPROGRESS) { aead_unmap(jrdev, edesc, req); kfree(edesc); } @@ -1507,9 +1503,7 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt) desc = edesc->hw_desc; ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { + if (ret != -EINPROGRESS) { aead_unmap(jrdev, edesc, req); kfree(edesc); } @@ -1725,9 +1719,7 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt) desc = edesc->hw_desc; ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { + if (ret != -EINPROGRESS) { skcipher_unmap(jrdev, edesc, req); kfree(edesc); } diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c index 844e391..b019d7e 100644 --- a/drivers/crypto/caam/caamhash.c +++ b/drivers/crypto/caam/caamhash.c @@ -395,7 +395,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key, init_completion(&result.completion); ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result); - if (!ret) { + if (ret == -EINPROGRESS) { /* in progress */ wait_for_completion(&result.completion); ret = result.err; @@ -833,10 +833,8 @@ static int ahash_update_ctx(struct ahash_request *req) desc_bytes(desc), 1); ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req); - if (ret) + if (ret != -EINPROGRESS) goto unmap_ctx; - - ret = -EINPROGRESS; } else if (*next_buflen) { scatterwalk_map_and_copy(buf + *buflen, req->src, 0, req->nbytes, 0); @@ -908,10 +906,9 @@ static int ahash_final_ctx(struct ahash_request *req) 1); ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req); - if (ret) - goto unmap_ctx; + if (ret == -EINPROGRESS) + return ret; - return -EINPROGRESS; unmap_ctx: ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL); kfree(edesc); @@ -985,10 +982,9 @@ static int ahash_finup_ctx(struct ahash_request *req) 1); ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req); - if (ret) - goto unmap_ctx; + if (ret == -EINPROGRESS) + return ret; - return -EINPROGRESS; unmap_ctx: ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL); kfree(edesc); @@ -1058,9 +1054,7 @@ static int ahash_digest(struct ahash_request *req) 1); ret = caam_jr_enqueue(jrdev, desc, ahash_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { + if (ret != -EINPROGRESS) { ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); kfree(edesc); } @@ -1110,9 +1104,7 @@ static int 
ahash_final_no_ctx(struct ahash_request *req) 1); ret = caam_jr_enqueue(jrdev, desc, ahash_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { + if (ret != -EINPROGRESS) { ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); kfree(edesc); } @@ -1223,10 +1215,9 @@ static int ahash_update_no_ctx(struct ahash_request *req) desc_bytes(desc), 1); ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req); - if (ret) + if (ret != -EINPROGRESS) goto unmap_ctx; - ret = -EINPROGRESS; state->update = ahash_update_ctx; state->finup = ahash_finup_ctx; state->final = ahash_final_ctx; @@ -1315,9 +1306,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req) 1); ret = caam_jr_enqueue(jrdev, desc, ahash_done, req); - if (!ret) { - ret = -EINPROGRESS; - } else { + if (ret != -EINPROGRESS) { ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); kfree(edesc); } @@ -1411,10 +1400,9 @@ static int ahash_update_first(struct ahash_request *req) desc_bytes(desc), 1); ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req); - if (ret) + if (ret != -EINPROGRESS) goto unmap_ctx; - ret = -EINPROGRESS; state->update = ahash_update_ctx; state->finup = ahash_finup_ctx; state->final = ahash_final_ctx; diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c index ebf1677..7f7ea32 100644 --- a/drivers/crypto/caam/caampkc.c +++ b/drivers/crypto/caam/caampkc.c @@ -634,8 +634,8 @@ static int caam_rsa_enc(struct akcipher_request *req) init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub); ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, req); - if (!ret) - return -EINPROGRESS; + if (ret == -EINPROGRESS) + return ret; rsa_pub_unmap(jrdev, edesc, req); @@ -667,8 +667,8 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req) init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1); ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req); - if (!ret) - return -EINPROGRESS; + if (ret == -EINPROGRESS) + return ret; rsa_priv_f1_unmap(jrdev, edesc, req); @@ -700,8 +700,8 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req) init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2); ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req); - if (!ret) - return -EINPROGRESS; + if (ret == -EINPROGRESS) + return ret; rsa_priv_f2_unmap(jrdev, edesc, req); @@ -733,8 +733,8 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req) init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3); ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req); - if (!ret) - return -EINPROGRESS; + if (ret == -EINPROGRESS) + return ret; rsa_priv_f3_unmap(jrdev, edesc, req); diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c index e8baaca..34cbb4a 100644 --- a/drivers/crypto/caam/caamrng.c +++ b/drivers/crypto/caam/caamrng.c @@ -133,7 +133,7 @@ static inline int submit_job(struct caam_rng_ctx *ctx, int to_current) dev_dbg(jrdev, "submitting job %d\n", !(to_current ^ ctx->current_buf)); init_completion(&bd->filled); err = caam_jr_enqueue(jrdev, desc, rng_done, ctx); - if (err) + if (err != -EINPROGRESS) complete(&bd->filled); /* don't wait on failed job*/ else atomic_inc(&bd->empty); /* note if pending */ @@ -153,7 +153,7 @@ static int caam_read(struct hwrng *rng, void *data, size_t max, bool wait) if (atomic_read(&bd->empty) == BUF_EMPTY) { err = submit_job(ctx, 1); /* if can't submit job, can't even wait */ - if (err) + if (err != -EINPROGRESS) return 0; } /* no immediate data, so exit if not waiting */ diff 
--git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c index fc97cde..df2a050 100644 --- a/drivers/crypto/caam/jr.c +++ b/drivers/crypto/caam/jr.c @@ -324,8 +324,8 @@ void caam_jr_free(struct device *rdev) EXPORT_SYMBOL(caam_jr_free); /** - * caam_jr_enqueue() - Enqueue a job descriptor head. Returns 0 if OK, - * -EBUSY if the queue is full, -EIO if it cannot map the caller's + * caam_jr_enqueue() - Enqueue a job descriptor head. Returns -EINPROGRESS + * if OK, -ENOSPC if the queue is full, -EIO if it cannot map the caller's * descriptor. * @dev: device of the job ring to be used. This device should have * been assigned prior by caam_jr_register(). @@ -377,7 +377,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc, CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) { spin_unlock_bh(&jrp->inplock); dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE); - return -EBUSY; + return -ENOSPC; } head_entry = &jrp->entinfo[head]; @@ -414,7 +414,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc, spin_unlock_bh(&jrp->inplock); - return 0; + return -EINPROGRESS; } EXPORT_SYMBOL(caam_jr_enqueue); diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c index 5a851dd..b0e8a49 100644 --- a/drivers/crypto/caam/key_gen.c +++ b/drivers/crypto/caam/key_gen.c @@ -108,7 +108,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out, init_completion(&result.completion); ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result); - if (!ret) { + if (ret == -EINPROGRESS) { /* in progress */ wait_for_completion(&result.completion); ret = result.err;

From patchwork Fri Jan 3 01:02:50 2020
X-Patchwork-Submitter: Iuliana Prodan
X-Patchwork-Id: 198312
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S. Miller", linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Miller" , linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx , Iuliana Prodan Subject: [PATCH v2 07/10] crypto: caam - support crypto_engine framework for SKCIPHER algorithms Date: Fri, 3 Jan 2020 03:02:50 +0200 Message-Id: <1578013373-1956-8-git-send-email-iuliana.prodan@nxp.com> X-Mailer: git-send-email 2.1.0 In-Reply-To: <1578013373-1956-1-git-send-email-iuliana.prodan@nxp.com> References: <1578013373-1956-1-git-send-email-iuliana.prodan@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Integrate crypto_engine into CAAM, to make use of the engine queue. Add support for SKCIPHER algorithms. This is intended to be used for CAAM backlogging support. The requests, with backlog flag (e.g. from dm-crypt) will be listed into crypto-engine queue and processed by CAAM when free. This changes the return codes for caam_jr_enqueue: -EINPROGRESS if OK, -EBUSY if request is backlogged, -ENOSPC if the queue is full, -EIO if it cannot map the caller's descriptor. Signed-off-by: Iuliana Prodan Signed-off-by: Franck LENORMAND --- drivers/crypto/caam/Kconfig | 1 + drivers/crypto/caam/caamalg.c | 86 +++++++++++++++++++++++++++++++++++++------ drivers/crypto/caam/intern.h | 2 + drivers/crypto/caam/jr.c | 29 +++++++++++++++ 4 files changed, 106 insertions(+), 12 deletions(-) diff --git a/drivers/crypto/caam/Kconfig b/drivers/crypto/caam/Kconfig index fac5b2e..64f8226 100644 --- a/drivers/crypto/caam/Kconfig +++ b/drivers/crypto/caam/Kconfig @@ -33,6 +33,7 @@ config CRYPTO_DEV_FSL_CAAM_DEBUG menuconfig CRYPTO_DEV_FSL_CAAM_JR tristate "Freescale CAAM Job Ring driver backend" + select CRYPTO_ENGINE default y help Enables the driver module for Job Rings which are part of diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c index 34662b4..8911c04 100644 --- a/drivers/crypto/caam/caamalg.c +++ b/drivers/crypto/caam/caamalg.c @@ -56,6 +56,7 @@ #include "sg_sw_sec4.h" #include "key_gen.h" #include "caamalg_desc.h" +#include /* * crypto alg @@ -101,6 +102,7 @@ struct caam_skcipher_alg { * per-session context */ struct caam_ctx { + struct crypto_engine_ctx enginectx; u32 sh_desc_enc[DESC_MAX_USED_LEN]; u32 sh_desc_dec[DESC_MAX_USED_LEN]; u8 key[CAAM_MAX_KEY_SIZE]; @@ -114,6 +116,12 @@ struct caam_ctx { unsigned int authsize; }; +struct caam_skcipher_req_ctx { + struct skcipher_edesc *edesc; + void (*skcipher_op_done)(struct device *jrdev, u32 *desc, u32 err, + void *context); +}; + static int aead_null_set_sh_desc(struct crypto_aead *aead) { struct caam_ctx *ctx = crypto_aead_ctx(aead); @@ -1014,13 +1022,15 @@ static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err, struct caam_skcipher_request_entry *jrentry = context; struct skcipher_request *req = jrentry->base; struct skcipher_edesc *edesc; + struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req); struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); + struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev); int ivsize = crypto_skcipher_ivsize(skcipher); int ecode = 0; dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - edesc = container_of(desc, struct skcipher_edesc, hw_desc[0]); + edesc = rctx->edesc; if (err) ecode = caam_jr_strstatus(jrdev, err); @@ -1046,7 +1056,14 @@ static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err, kfree(edesc); - skcipher_request_complete(req, ecode); + /* + * If no backlog flag, the completion of the request is done 
+ * by CAAM, not crypto engine. + */ + if (!jrentry->bklog) + skcipher_request_complete(req, ecode); + else + crypto_finalize_skcipher_request(jrp->engine, req, ecode); } /* @@ -1569,6 +1586,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, { struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher); + struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req); struct device *jrdev = ctx->jrdev; gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC; @@ -1669,6 +1687,9 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, desc_bytes); edesc->jrentry.base = req; + rctx->edesc = edesc; + rctx->skcipher_op_done = skcipher_crypt_done; + /* Make sure IV is located in a DMAable area */ if (ivsize) { iv = (u8 *)edesc->sec4_sg + sec4_sg_bytes; @@ -1723,12 +1744,38 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, return edesc; } +static int skcipher_do_one_req(struct crypto_engine *engine, void *areq) +{ + struct skcipher_request *req = skcipher_request_cast(areq); + struct caam_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); + struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req); + struct caam_skcipher_request_entry *jrentry; + u32 *desc = rctx->edesc->hw_desc; + int ret; + + jrentry = &rctx->edesc->jrentry; + jrentry->bklog = true; + + ret = caam_jr_enqueue(ctx->jrdev, desc, rctx->skcipher_op_done, + jrentry); + + if (ret != -EINPROGRESS) { + skcipher_unmap(ctx->jrdev, rctx->edesc, req); + kfree(rctx->edesc); + } else { + ret = 0; + } + + return ret; +} + static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt) { struct skcipher_edesc *edesc; struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher); struct device *jrdev = ctx->jrdev; + struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev); u32 *desc; int ret = 0; @@ -1742,15 +1789,18 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt) /* Create and submit job descriptor*/ init_skcipher_job(req, edesc, encrypt); + desc = edesc->hw_desc; print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ", - DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc, - desc_bytes(edesc->hw_desc), 1); + DUMP_PREFIX_ADDRESS, 16, 4, desc, + desc_bytes(desc), 1); - desc = edesc->hw_desc; - - ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, - &edesc->jrentry); + if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) + return crypto_transfer_skcipher_request_to_engine(jrpriv->engine, + req); + else + ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, + &edesc->jrentry); if (ret != -EINPROGRESS) { skcipher_unmap(jrdev, edesc, req); kfree(edesc); @@ -3287,7 +3337,9 @@ static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam, dma_addr = dma_map_single_attrs(ctx->jrdev, ctx->sh_desc_enc, offsetof(struct caam_ctx, - sh_desc_enc_dma), + sh_desc_enc_dma) - + offsetof(struct caam_ctx, + sh_desc_enc), ctx->dir, DMA_ATTR_SKIP_CPU_SYNC); if (dma_mapping_error(ctx->jrdev, dma_addr)) { dev_err(ctx->jrdev, "unable to map key, shared descriptors\n"); @@ -3297,8 +3349,12 @@ static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam, ctx->sh_desc_enc_dma = dma_addr; ctx->sh_desc_dec_dma = dma_addr + offsetof(struct caam_ctx, - sh_desc_dec); - ctx->key_dma = dma_addr + offsetof(struct caam_ctx, key); + sh_desc_dec) - + 
offsetof(struct caam_ctx, + sh_desc_enc); + ctx->key_dma = dma_addr + offsetof(struct caam_ctx, key) - + offsetof(struct caam_ctx, + sh_desc_enc); /* copy descriptor header template value */ ctx->cdata.algtype = OP_TYPE_CLASS1_ALG | caam->class1_alg_type; @@ -3312,6 +3368,11 @@ static int caam_cra_init(struct crypto_skcipher *tfm) struct skcipher_alg *alg = crypto_skcipher_alg(tfm); struct caam_skcipher_alg *caam_alg = container_of(alg, typeof(*caam_alg), skcipher); + struct caam_ctx *ctx = crypto_skcipher_ctx(tfm); + + crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx)); + + ctx->enginectx.op.do_one_request = skcipher_do_one_req; return caam_init_common(crypto_skcipher_ctx(tfm), &caam_alg->caam, false); @@ -3330,7 +3391,8 @@ static int caam_aead_init(struct crypto_aead *tfm) static void caam_exit_common(struct caam_ctx *ctx) { dma_unmap_single_attrs(ctx->jrdev, ctx->sh_desc_enc_dma, - offsetof(struct caam_ctx, sh_desc_enc_dma), + offsetof(struct caam_ctx, sh_desc_enc_dma) - + offsetof(struct caam_ctx, sh_desc_enc), ctx->dir, DMA_ATTR_SKIP_CPU_SYNC); caam_jr_free(ctx->jrdev); } diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h index 8ca884b..2f2330e 100644 --- a/drivers/crypto/caam/intern.h +++ b/drivers/crypto/caam/intern.h @@ -12,6 +12,7 @@ #include "ctrl.h" #include "regs.h" +#include /* Currently comes from Kconfig param as a ^2 (driver-required) */ #define JOBR_DEPTH (1 << CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE) @@ -61,6 +62,7 @@ struct caam_drv_private_jr { int out_ring_read_index; /* Output index "tail" */ int tail; /* entinfo (s/w ring) tail index */ void *outring; /* Base of output ring, DMA-safe */ + struct crypto_engine *engine; }; /* diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c index df2a050..962e66d 100644 --- a/drivers/crypto/caam/jr.c +++ b/drivers/crypto/caam/jr.c @@ -62,6 +62,15 @@ static void unregister_algs(void) mutex_unlock(&algs_lock); } +static void caam_jr_crypto_engine_exit(void *data) +{ + struct device *jrdev = data; + struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev); + + /* Free the resources of crypto-engine */ + crypto_engine_exit(jrpriv->engine); +} + static int caam_reset_hw_jr(struct device *dev) { struct caam_drv_private_jr *jrp = dev_get_drvdata(dev); @@ -538,6 +547,26 @@ static int caam_jr_probe(struct platform_device *pdev) return error; } + /* Initialize crypto engine */ + jrpriv->engine = crypto_engine_alloc_init(jrdev, false); + if (!jrpriv->engine) { + dev_err(jrdev, "Could not init crypto-engine\n"); + return -ENOMEM; + } + + /* Start crypto engine */ + error = crypto_engine_start(jrpriv->engine); + if (error) { + dev_err(jrdev, "Could not start crypto-engine\n"); + crypto_engine_exit(jrpriv->engine); + return error; + } + + error = devm_add_action_or_reset(jrdev, caam_jr_crypto_engine_exit, + jrdev); + if (error) + return error; + /* Identify the interrupt */ jrpriv->irq = irq_of_parse_and_map(nprop, 0); if (!jrpriv->irq) { From patchwork Fri Jan 3 01:02:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Iuliana Prodan X-Patchwork-Id: 198311 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from 
From: Iuliana Prodan
To: Herbert Xu, Horia Geanta, Aymen Sghaier
Cc: "David S. Miller", linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH v2 10/10] crypto: caam - add crypto_engine support for HASH algorithms
Date: Fri, 3 Jan 2020 03:02:53 +0200
Message-Id: <1578013373-1956-11-git-send-email-iuliana.prodan@nxp.com>
In-Reply-To: <1578013373-1956-1-git-send-email-iuliana.prodan@nxp.com>
References: <1578013373-1956-1-git-send-email-iuliana.prodan@nxp.com>

Add crypto_engine support for HASH algorithms, to make use of the
engine queue. Requests with the backlog flag set will be listed in the
crypto-engine queue and processed by CAAM when it is free.
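The dispatch logic shared by the SKCIPHER and HASH patches can be
sketched stand-alone as below (the flag, request type and queue here
are simplified placeholders, not the kernel crypto_engine API):

/*
 * Requests that may backlog are routed through the engine's software
 * queue and marked, so the completion path knows whether to finalize
 * via the engine or complete the request directly.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define DEMO_MAY_BACKLOG 0x1

struct demo_req {
	unsigned int flags;
	bool bklog;	/* mirrors the bklog marker set in do_one_request */
};

static int demo_hw_enqueue(struct demo_req *req)
{
	(void)req;
	return -EINPROGRESS;	/* pretend the job ring accepted the job */
}

/* Engine path: mark the request, then submit when the engine is free. */
static int demo_transfer_to_engine(struct demo_req *req)
{
	req->bklog = true;
	return demo_hw_enqueue(req);
}

static int demo_dispatch(struct demo_req *req)
{
	if (req->flags & DEMO_MAY_BACKLOG)
		return demo_transfer_to_engine(req);
	return demo_hw_enqueue(req);
}

/* Completion: backlogged jobs are finalized through the engine. */
static void demo_done(struct demo_req *req, int err)
{
	printf("%s, err=%d\n",
	       req->bklog ? "finalize via crypto engine" : "complete directly",
	       err);
}

int main(void)
{
	struct demo_req direct = { 0, false };
	struct demo_req backlogged = { DEMO_MAY_BACKLOG, false };

	if (demo_dispatch(&direct) == -EINPROGRESS)
		demo_done(&direct, 0);
	if (demo_dispatch(&backlogged) == -EINPROGRESS)
		demo_done(&backlogged, 0);
	return 0;
}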
Signed-off-by: Iuliana Prodan --- drivers/crypto/caam/caamhash.c | 175 ++++++++++++++++++++++++++++++----------- 1 file changed, 127 insertions(+), 48 deletions(-) diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c index f179d39..93af298 100644 --- a/drivers/crypto/caam/caamhash.c +++ b/drivers/crypto/caam/caamhash.c @@ -65,6 +65,7 @@ #include "sg_sw_sec4.h" #include "key_gen.h" #include "caamhash_desc.h" +#include #define CAAM_CRA_PRIORITY 3000 @@ -86,6 +87,7 @@ static struct list_head hash_list; /* ahash per-session context */ struct caam_hash_ctx { + struct crypto_engine_ctx enginectx; u32 sh_desc_update[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned; u32 sh_desc_update_first[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned; u32 sh_desc_fin[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned; @@ -111,9 +113,12 @@ struct caam_hash_state { int buflen; int next_buflen; u8 caam_ctx[MAX_CTX_LEN] ____cacheline_aligned; - int (*update)(struct ahash_request *req); + int (*update)(struct ahash_request *req) ____cacheline_aligned; int (*final)(struct ahash_request *req); int (*finup)(struct ahash_request *req); + struct ahash_edesc *edesc; + void (*ahash_op_done)(struct device *jrdev, u32 *desc, u32 err, + void *context); }; struct caam_export_state { @@ -123,6 +128,9 @@ struct caam_export_state { int (*update)(struct ahash_request *req); int (*final)(struct ahash_request *req); int (*finup)(struct ahash_request *req); + struct ahash_edesc *edesc; + void (*ahash_op_done)(struct device *jrdev, u32 *desc, u32 err, + void *context); }; static inline bool is_cmac_aes(u32 algtype) @@ -588,6 +596,7 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err, { struct caam_ahash_request_entry *jrentry = context; struct ahash_request *req = jrentry->base; + struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev); struct ahash_edesc *edesc; struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); int digestsize = crypto_ahash_digestsize(ahash); @@ -597,7 +606,8 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err, dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - edesc = container_of(desc, struct ahash_edesc, hw_desc[0]); + edesc = state->edesc; + if (err) ecode = caam_jr_strstatus(jrdev, err); @@ -609,7 +619,14 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err, DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx, ctx->ctx_len, 1); - req->base.complete(&req->base, ecode); + /* + * If no backlog flag, the completion of the request is done + * by CAAM, not crypto engine. 
+ */ + if (!jrentry->bklog) + req->base.complete(&req->base, ecode); + else + crypto_finalize_hash_request(jrp->engine, req, ecode); } static void ahash_done(struct device *jrdev, u32 *desc, u32 err, @@ -629,6 +646,7 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err, { struct caam_ahash_request_entry *jrentry = context; struct ahash_request *req = jrentry->base; + struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev); struct ahash_edesc *edesc; struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); @@ -638,7 +656,7 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err, dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); - edesc = container_of(desc, struct ahash_edesc, hw_desc[0]); + edesc = state->edesc; if (err) ecode = caam_jr_strstatus(jrdev, err); @@ -662,7 +680,15 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err, DUMP_PREFIX_ADDRESS, 16, 4, req->result, digestsize, 1); - req->base.complete(&req->base, ecode); + /* + * If no backlog flag, the completion of the request is done + * by CAAM, not crypto engine. + */ + if (!jrentry->bklog) + req->base.complete(&req->base, ecode); + else + crypto_finalize_hash_request(jrp->engine, req, ecode); + } static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err, @@ -687,6 +713,7 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req, { struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); + struct caam_hash_state *state = ahash_request_ctx(req); gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC; struct ahash_edesc *edesc; @@ -699,6 +726,7 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req, } edesc->jrentry.base = req; + state->edesc = edesc; init_job_desc_shared(edesc->hw_desc, sh_desc_dma, desc_len(sh_desc), HDR_SHARE_DEFER | HDR_REVERSE); @@ -742,6 +770,56 @@ static int ahash_edesc_add_src(struct caam_hash_ctx *ctx, return 0; } +static int ahash_do_one_req(struct crypto_engine *engine, void *areq) +{ + struct ahash_request *req = ahash_request_cast(areq); + struct caam_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req)); + struct caam_hash_state *state = ahash_request_ctx(req); + struct caam_ahash_request_entry *jrentry; + struct device *jrdev = ctx->jrdev; + u32 *desc = state->edesc->hw_desc; + int ret; + + jrentry = &state->edesc->jrentry; + jrentry->bklog = true; + + ret = caam_jr_enqueue(jrdev, desc, state->ahash_op_done, + jrentry); + + if (ret != -EINPROGRESS) { + ahash_unmap(jrdev, state->edesc, req, 0); + kfree(state->edesc); + } else { + ret = 0; + } + + return ret; +} + +static int ahash_enqueue_req(struct device *jrdev, u32 *desc, + void (*cbk)(struct device *jrdev, u32 *desc, + u32 err, void *context), + struct ahash_request *req, + struct ahash_edesc *edesc, + int dst_len, enum dma_data_direction dir) +{ + struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev); + int ret; + + if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) + return crypto_transfer_hash_request_to_engine(jrpriv->engine, + req); + else + ret = caam_jr_enqueue(jrdev, desc, cbk, &edesc->jrentry); + + if (ret != -EINPROGRESS) { + ahash_unmap_ctx(jrdev, edesc, req, dst_len, dir); + kfree(edesc); + } + + return ret; +} + /* submit update job descriptor */ static int ahash_update_ctx(struct ahash_request *req) { @@ -849,10 +927,9 @@ static int ahash_update_ctx(struct 
ahash_request *req) DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, - &edesc->jrentry); - if (ret != -EINPROGRESS) - goto unmap_ctx; + state->ahash_op_done = ahash_done_bi; + ret = ahash_enqueue_req(jrdev, desc, ahash_done_bi, req, edesc, + ctx->ctx_len, DMA_BIDIRECTIONAL); } else if (*next_buflen) { scatterwalk_map_and_copy(buf + *buflen, req->src, 0, req->nbytes, 0); @@ -923,10 +1000,10 @@ static int ahash_final_ctx(struct ahash_request *req) DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, &edesc->jrentry); - if (ret == -EINPROGRESS) - return ret; + state->ahash_op_done = ahash_done_ctx_src; + return ahash_enqueue_req(jrdev, desc, ahash_done_ctx_src, req, edesc, + digestsize, DMA_BIDIRECTIONAL); unmap_ctx: ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL); kfree(edesc); @@ -999,10 +1076,10 @@ static int ahash_finup_ctx(struct ahash_request *req) DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, &edesc->jrentry); - if (ret == -EINPROGRESS) - return ret; + state->ahash_op_done = ahash_done_ctx_src; + return ahash_enqueue_req(jrdev, desc, ahash_done_ctx_src, req, edesc, + digestsize, DMA_BIDIRECTIONAL); unmap_ctx: ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL); kfree(edesc); @@ -1071,13 +1148,10 @@ static int ahash_digest(struct ahash_request *req) DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry); - if (ret != -EINPROGRESS) { - ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); - kfree(edesc); - } + state->ahash_op_done = ahash_done; - return ret; + return ahash_enqueue_req(jrdev, desc, ahash_done, req, edesc, + digestsize, DMA_FROM_DEVICE); } /* submit ahash final if it the first job descriptor */ @@ -1121,18 +1195,14 @@ static int ahash_final_no_ctx(struct ahash_request *req) DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry); - if (ret != -EINPROGRESS) { - ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); - kfree(edesc); - } + state->ahash_op_done = ahash_done; - return ret; + return ahash_enqueue_req(jrdev, desc, ahash_done, req, edesc, + digestsize, DMA_FROM_DEVICE); unmap: ahash_unmap(jrdev, edesc, req, digestsize); kfree(edesc); return -ENOMEM; - } /* submit ahash update if it the first job descriptor after update */ @@ -1232,10 +1302,9 @@ static int ahash_update_no_ctx(struct ahash_request *req) DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, - &edesc->jrentry); - if (ret != -EINPROGRESS) - goto unmap_ctx; + state->ahash_op_done = ahash_done_ctx_dst; + ret = ahash_enqueue_req(jrdev, desc, ahash_done_ctx_dst, req, + edesc, ctx->ctx_len, DMA_TO_DEVICE); state->update = ahash_update_ctx; state->finup = ahash_finup_ctx; @@ -1324,13 +1393,10 @@ static int ahash_finup_no_ctx(struct ahash_request *req) DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry); - if (ret != -EINPROGRESS) { - ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); - kfree(edesc); - } + state->ahash_op_done = ahash_done; - return ret; + return ahash_enqueue_req(jrdev, desc, ahash_done, req, edesc, + digestsize, DMA_FROM_DEVICE); unmap: ahash_unmap(jrdev, edesc, req, digestsize); 
kfree(edesc); @@ -1418,11 +1484,9 @@ static int ahash_update_first(struct ahash_request *req) DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1); - ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, - &edesc->jrentry); - if (ret != -EINPROGRESS) - goto unmap_ctx; - + state->ahash_op_done = ahash_done_ctx_dst; + ret = ahash_enqueue_req(jrdev, desc, ahash_done_ctx_dst, req, + edesc, ctx->ctx_len, DMA_TO_DEVICE); state->update = ahash_update_ctx; state->finup = ahash_finup_ctx; state->final = ahash_final_ctx; @@ -1502,6 +1566,8 @@ static int ahash_export(struct ahash_request *req, void *out) export->update = state->update; export->final = state->final; export->finup = state->finup; + export->edesc = state->edesc; + export->ahash_op_done = state->ahash_op_done; return 0; } @@ -1518,6 +1584,8 @@ static int ahash_import(struct ahash_request *req, const void *in) state->update = export->update; state->final = export->final; state->finup = export->finup; + state->edesc = export->edesc; + state->ahash_op_done = export->ahash_op_done; return 0; } @@ -1777,7 +1845,9 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm) } dma_addr = dma_map_single_attrs(ctx->jrdev, ctx->sh_desc_update, - offsetof(struct caam_hash_ctx, key), + offsetof(struct caam_hash_ctx, key) - + offsetof(struct caam_hash_ctx, + sh_desc_update), ctx->dir, DMA_ATTR_SKIP_CPU_SYNC); if (dma_mapping_error(ctx->jrdev, dma_addr)) { dev_err(ctx->jrdev, "unable to map shared descriptors\n"); @@ -1795,11 +1865,19 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm) ctx->sh_desc_update_dma = dma_addr; ctx->sh_desc_update_first_dma = dma_addr + offsetof(struct caam_hash_ctx, - sh_desc_update_first); + sh_desc_update_first) - + offsetof(struct caam_hash_ctx, + sh_desc_update); ctx->sh_desc_fin_dma = dma_addr + offsetof(struct caam_hash_ctx, - sh_desc_fin); + sh_desc_fin) - + offsetof(struct caam_hash_ctx, + sh_desc_update); ctx->sh_desc_digest_dma = dma_addr + offsetof(struct caam_hash_ctx, - sh_desc_digest); + sh_desc_digest) - + offsetof(struct caam_hash_ctx, + sh_desc_update); + + ctx->enginectx.op.do_one_request = ahash_do_one_req; crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), sizeof(struct caam_hash_state)); @@ -1816,7 +1894,8 @@ static void caam_hash_cra_exit(struct crypto_tfm *tfm) struct caam_hash_ctx *ctx = crypto_tfm_ctx(tfm); dma_unmap_single_attrs(ctx->jrdev, ctx->sh_desc_update_dma, - offsetof(struct caam_hash_ctx, key), + offsetof(struct caam_hash_ctx, key) - + offsetof(struct caam_hash_ctx, sh_desc_update), ctx->dir, DMA_ATTR_SKIP_CPU_SYNC); if (ctx->key_dir != DMA_NONE) dma_unmap_single_attrs(ctx->jrdev, ctx->adata.key_dma,