From patchwork Sun May 8 18:59:29 2022
X-Patchwork-Submitter: Corentin Labbe <clabbe@baylibre.com>
X-Patchwork-Id: 571083
From: Corentin Labbe <clabbe@baylibre.com>
To: heiko@sntech.de, ardb@kernel.org, herbert@gondor.apana.org.au,
	krzysztof.kozlowski+dt@linaro.org, robh+dt@kernel.org
Cc: devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-rockchip@lists.infradead.org, Corentin Labbe <clabbe@baylibre.com>
Subject: [PATCH v7 05/33] crypto: rockchip: do not store mode globally
Date: Sun, 8 May 2022 18:59:29 +0000
Message-Id: <20220508185957.3629088-6-clabbe@baylibre.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220508185957.3629088-1-clabbe@baylibre.com>
References: <20220508185957.3629088-1-clabbe@baylibre.com>

Storing the mode globally does not work if two requests are handled at
the same time. Store the mode in the request context instead.

Fixes: ce0183cb6464b ("crypto: rockchip - switch to skcipher API")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
---
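Note for context (not part of the commit message): the tfm context
returned by crypto_skcipher_ctx() is a single allocation shared by
every request issued on that tfm, while skcipher_request_ctx() points
into each individual struct skcipher_request. Moving the mode there
means two in-flight requests can no longer overwrite each other's
direction/mode bits. A minimal sketch of the pattern follows; the
example_* names are hypothetical, not code from this driver:

	/* Per-tfm context: shared by every request on the tfm, so only
	 * long-lived state such as the key length may live here.
	 */
	struct example_ctx {
		unsigned int keylen;
	};

	/* Per-request context: carved out of the tail of each
	 * skcipher_request, so concurrent requests cannot race on it.
	 */
	struct example_rctx {
		u32 mode;
	};

	static int example_ecb_decrypt(struct skcipher_request *req)
	{
		struct example_rctx *rctx = skcipher_request_ctx(req);

		/* Each request records its own direction. */
		rctx->mode = RK_CRYPTO_DEC;
		return 0;
	}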
 drivers/crypto/rockchip/rk3288_crypto.h       |  5 +-
 .../crypto/rockchip/rk3288_crypto_skcipher.c  | 58 ++++++++++++-------
 2 files changed, 41 insertions(+), 22 deletions(-)

diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h
index 656d6795d400..c919d9a43a08 100644
--- a/drivers/crypto/rockchip/rk3288_crypto.h
+++ b/drivers/crypto/rockchip/rk3288_crypto.h
@@ -245,10 +245,13 @@ struct rk_ahash_rctx {
 struct rk_cipher_ctx {
 	struct rk_crypto_info		*dev;
 	unsigned int			keylen;
-	u32				mode;
 	u8 iv[AES_BLOCK_SIZE];
 };
 
+struct rk_cipher_rctx {
+	u32 mode;
+};
+
 enum alg_type {
 	ALG_TYPE_HASH,
 	ALG_TYPE_CIPHER,
diff --git a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
index 8c44a19eab75..bbd0bf52bf07 100644
--- a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
+++ b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
@@ -76,9 +76,10 @@ static int rk_aes_ecb_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = RK_CRYPTO_AES_ECB_MODE;
+	rctx->mode = RK_CRYPTO_AES_ECB_MODE;
 	return rk_handle_req(dev, req);
 }
 
@@ -86,9 +87,10 @@ static int rk_aes_ecb_decrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = RK_CRYPTO_AES_ECB_MODE | RK_CRYPTO_DEC;
+	rctx->mode = RK_CRYPTO_AES_ECB_MODE | RK_CRYPTO_DEC;
 	return rk_handle_req(dev, req);
 }
 
@@ -96,9 +98,10 @@ static int rk_aes_cbc_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = RK_CRYPTO_AES_CBC_MODE;
+	rctx->mode = RK_CRYPTO_AES_CBC_MODE;
 	return rk_handle_req(dev, req);
 }
 
@@ -106,9 +109,10 @@ static int rk_aes_cbc_decrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = RK_CRYPTO_AES_CBC_MODE | RK_CRYPTO_DEC;
+	rctx->mode = RK_CRYPTO_AES_CBC_MODE | RK_CRYPTO_DEC;
 	return rk_handle_req(dev, req);
 }
 
@@ -116,9 +120,10 @@ static int rk_des_ecb_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = 0;
+	rctx->mode = 0;
 	return rk_handle_req(dev, req);
 }
 
@@ -126,9 +131,10 @@ static int rk_des_ecb_decrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = RK_CRYPTO_DEC;
+	rctx->mode = RK_CRYPTO_DEC;
 	return rk_handle_req(dev, req);
 }
 
@@ -136,9 +142,10 @@ static int rk_des_cbc_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC;
+	rctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC;
 	return rk_handle_req(dev, req);
 }
 
@@ -146,9 +153,10 @@ static int rk_des_cbc_decrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC | RK_CRYPTO_DEC;
+	rctx->mode = RK_CRYPTO_TDES_CHAINMODE_CBC | RK_CRYPTO_DEC;
 	return rk_handle_req(dev, req);
 }
 
@@ -156,9 +164,10 @@ static int rk_des3_ede_ecb_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = RK_CRYPTO_TDES_SELECT;
+	rctx->mode = RK_CRYPTO_TDES_SELECT;
 	return rk_handle_req(dev, req);
 }
 
@@ -166,9 +175,10 @@ static int rk_des3_ede_ecb_decrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_DEC;
+	rctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_DEC;
 	return rk_handle_req(dev, req);
 }
 
@@ -176,9 +186,10 @@ static int rk_des3_ede_cbc_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC;
+	rctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC;
 	return rk_handle_req(dev, req);
 }
 
@@ -186,9 +197,10 @@ static int rk_des3_ede_cbc_decrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_crypto_info *dev = ctx->dev;
 
-	ctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC |
+	rctx->mode = RK_CRYPTO_TDES_SELECT | RK_CRYPTO_TDES_CHAINMODE_CBC |
 		    RK_CRYPTO_DEC;
 	return rk_handle_req(dev, req);
 }
 
@@ -199,6 +211,7 @@ static void rk_ablk_hw_init(struct rk_crypto_info *dev)
 		skcipher_request_cast(dev->async_req);
 	struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req);
 	struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(cipher);
 	u32 ivsize, block, conf_reg = 0;
 
@@ -206,22 +219,22 @@ static void rk_ablk_hw_init(struct rk_crypto_info *dev)
 	ivsize = crypto_skcipher_ivsize(cipher);
 
 	if (block == DES_BLOCK_SIZE) {
-		ctx->mode |= RK_CRYPTO_TDES_FIFO_MODE |
+		rctx->mode |= RK_CRYPTO_TDES_FIFO_MODE |
 			     RK_CRYPTO_TDES_BYTESWAP_KEY |
 			     RK_CRYPTO_TDES_BYTESWAP_IV;
-		CRYPTO_WRITE(dev, RK_CRYPTO_TDES_CTRL, ctx->mode);
+		CRYPTO_WRITE(dev, RK_CRYPTO_TDES_CTRL, rctx->mode);
 		memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, req->iv, ivsize);
 		conf_reg = RK_CRYPTO_DESSEL;
 	} else {
-		ctx->mode |= RK_CRYPTO_AES_FIFO_MODE |
+		rctx->mode |= RK_CRYPTO_AES_FIFO_MODE |
 			     RK_CRYPTO_AES_KEY_CHANGE |
 			     RK_CRYPTO_AES_BYTESWAP_KEY |
 			     RK_CRYPTO_AES_BYTESWAP_IV;
 		if (ctx->keylen == AES_KEYSIZE_192)
-			ctx->mode |= RK_CRYPTO_AES_192BIT_key;
+			rctx->mode |= RK_CRYPTO_AES_192BIT_key;
 		else if (ctx->keylen == AES_KEYSIZE_256)
-			ctx->mode |= RK_CRYPTO_AES_256BIT_key;
-		CRYPTO_WRITE(dev, RK_CRYPTO_AES_CTRL, ctx->mode);
+			rctx->mode |= RK_CRYPTO_AES_256BIT_key;
+		CRYPTO_WRITE(dev, RK_CRYPTO_AES_CTRL, rctx->mode);
 		memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, req->iv, ivsize);
 	}
 	conf_reg |= RK_CRYPTO_BYTESWAP_BTFIFO |
@@ -246,6 +259,7 @@ static int rk_set_data_start(struct rk_crypto_info *dev)
 	struct skcipher_request *req =
 		skcipher_request_cast(dev->async_req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 	u32 ivsize = crypto_skcipher_ivsize(tfm);
 	u8 *src_last_blk = page_address(sg_page(dev->sg_src)) +
@@ -254,7 +268,7 @@
 	/* Store the iv that need to be updated in chain mode.
 	 * And update the IV buffer to contain the next IV for decryption mode.
 	 */
-	if (ctx->mode & RK_CRYPTO_DEC) {
+	if (rctx->mode & RK_CRYPTO_DEC) {
 		memcpy(ctx->iv, src_last_blk, ivsize);
 		sg_pcopy_to_buffer(dev->first, dev->src_nents, req->iv,
 				   ivsize, dev->total - ivsize);
@@ -294,11 +308,12 @@ static void rk_iv_copyback(struct rk_crypto_info *dev)
 	struct skcipher_request *req =
 		skcipher_request_cast(dev->async_req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 	u32 ivsize = crypto_skcipher_ivsize(tfm);
 
 	/* Update the IV buffer to contain the next IV for encryption mode. */
-	if (!(ctx->mode & RK_CRYPTO_DEC)) {
+	if (!(rctx->mode & RK_CRYPTO_DEC)) {
 		if (dev->aligned) {
 			memcpy(req->iv, sg_virt(dev->sg_dst) +
 			       dev->sg_dst->length - ivsize, ivsize);
@@ -314,11 +329,12 @@ static void rk_update_iv(struct rk_crypto_info *dev)
 	struct skcipher_request *req =
 		skcipher_request_cast(dev->async_req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct rk_cipher_rctx *rctx = skcipher_request_ctx(req);
 	struct rk_cipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 	u32 ivsize = crypto_skcipher_ivsize(tfm);
 	u8 *new_iv = NULL;
 
-	if (ctx->mode & RK_CRYPTO_DEC) {
+	if (rctx->mode & RK_CRYPTO_DEC) {
 		new_iv = ctx->iv;
 	} else {
 		new_iv = page_address(sg_page(dev->sg_dst)) +
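Aside, not part of the patch: skcipher_request_ctx() only returns
usable memory when the driver has reserved space for the request
context, which the skcipher API does via crypto_skcipher_set_reqsize()
(declared in <crypto/internal/skcipher.h>), typically from the tfm init
hook. The hunks quoted above do not show where this series does that,
so the sketch below is a hypothetical illustration of the usual
pattern, not code taken from this patch:

	static int example_init_tfm(struct crypto_skcipher *tfm)
	{
		/* Reserve room so skcipher_request_ctx() can hold a
		 * struct rk_cipher_rctx for every request on this tfm.
		 */
		crypto_skcipher_set_reqsize(tfm, sizeof(struct rk_cipher_rctx));
		return 0;
	}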