From patchwork Sat Jun 27 08:36:12 2020
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 211308
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
    linux-amlogic@lists.infradead.org, Ard Biesheuvel, Corentin Labbe,
    Herbert Xu, "David S. Miller", Maxime Ripard, Chen-Yu Tsai,
    Tom Lendacky, Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
    Shawn Guo, Sascha Hauer, Pengutronix Kernel Team, Fabio Estevam,
    NXP Linux Team, Jamie Iles, Eric Biggers, Tero Kristo, Matthias Brugger
Subject: [PATCH v2 02/13] crypto: amlogic-gxl - permit async skcipher as fallback
Date: Sat, 27 Jun 2020 10:36:12 +0200
Message-Id: <20200627083623.2428333-3-ardb@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>
References: <20200627083623.2428333-1-ardb@kernel.org>

Even though the amlogic-gxl driver implements asynchronous versions of
ecb(aes) and cbc(aes), the fallbacks it allocates are required to be
synchronous.
SIMD based software implementations are usually asynchronous as well,
even though they rarely complete asynchronously (this typically only
happens when the request was made from softirq context while SIMD was
already in use in the task context that it interrupted). As a result,
those implementations are disregarded, and either the generic C version
or another table based version implemented in assembler is selected
instead. Falling back to synchronous AES is not only a performance issue,
but potentially a security issue as well, given that table based AES is
not time invariant. Fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.

Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/amlogic/amlogic-gxl-cipher.c | 27 ++++++++++----------
 drivers/crypto/amlogic/amlogic-gxl.h        |  3 ++-
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/crypto/amlogic/amlogic-gxl-cipher.c b/drivers/crypto/amlogic/amlogic-gxl-cipher.c
index 9819dd50fbad..5880b94dcb32 100644
--- a/drivers/crypto/amlogic/amlogic-gxl-cipher.c
+++ b/drivers/crypto/amlogic/amlogic-gxl-cipher.c
@@ -64,22 +64,20 @@ static int meson_cipher_do_fallback(struct skcipher_request *areq)
 #ifdef CONFIG_CRYPTO_DEV_AMLOGIC_GXL_DEBUG
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
 	struct meson_alg_template *algt;
-#endif
-	SYNC_SKCIPHER_REQUEST_ON_STACK(req, op->fallback_tfm);
-#ifdef CONFIG_CRYPTO_DEV_AMLOGIC_GXL_DEBUG
 	algt = container_of(alg, struct meson_alg_template, alg.skcipher);
 	algt->stat_fb++;
 #endif
-	skcipher_request_set_sync_tfm(req, op->fallback_tfm);
-	skcipher_request_set_callback(req, areq->base.flags, NULL, NULL);
-	skcipher_request_set_crypt(req, areq->src, areq->dst,
+	skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm);
+	skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags,
+				      areq->base.complete, areq->base.data);
+	skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst,
 				   areq->cryptlen, areq->iv);
+
 	if (rctx->op_dir == MESON_DECRYPT)
-		err = crypto_skcipher_decrypt(req);
+		err = crypto_skcipher_decrypt(&rctx->fallback_req);
 	else
-		err = crypto_skcipher_encrypt(req);
-	skcipher_request_zero(req);
+		err = crypto_skcipher_encrypt(&rctx->fallback_req);
 	return err;
 }

@@ -321,15 +319,16 @@ int meson_cipher_init(struct crypto_tfm *tfm)
 	algt = container_of(alg, struct meson_alg_template, alg.skcipher);
 	op->mc = algt->mc;

-	sktfm->reqsize = sizeof(struct meson_cipher_req_ctx);
-
-	op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
+	op->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(op->fallback_tfm)) {
 		dev_err(op->mc->dev, "ERROR: Cannot allocate fallback for %s %ld\n",
 			name, PTR_ERR(op->fallback_tfm));
 		return PTR_ERR(op->fallback_tfm);
 	}

+	sktfm->reqsize = sizeof(struct meson_cipher_req_ctx) +
+			 crypto_skcipher_reqsize(op->fallback_tfm);
+
 	op->enginectx.op.do_one_request = meson_handle_cipher_request;
 	op->enginectx.op.prepare_request = NULL;
 	op->enginectx.op.unprepare_request = NULL;
@@ -345,7 +344,7 @@ void meson_cipher_exit(struct crypto_tfm *tfm)
 		memzero_explicit(op->key, op->keylen);
 		kfree(op->key);
 	}
-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 }

 int meson_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
@@ -377,5 +376,5 @@ int meson_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	if (!op->key)
 		return -ENOMEM;

-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }

diff --git a/drivers/crypto/amlogic/amlogic-gxl.h b/drivers/crypto/amlogic/amlogic-gxl.h
index b7f2de91ab76..dc0f142324a3 100644
--- a/drivers/crypto/amlogic/amlogic-gxl.h
+++ b/drivers/crypto/amlogic/amlogic-gxl.h
@@ -109,6 +109,7 @@ struct meson_dev {
 struct meson_cipher_req_ctx {
 	u32 op_dir;
 	int flow;
+	struct skcipher_request fallback_req;	// keep at the end
 };

 /*
@@ -126,7 +127,7 @@ struct meson_cipher_tfm_ctx {
 	u32 keylen;
 	u32 keymode;
 	struct meson_dev *mc;
-	struct crypto_sync_skcipher *fallback_tfm;
+	struct crypto_skcipher *fallback_tfm;
 };

 /*
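All the patches in this series apply the same transformation, so a
condensed, driver-agnostic sketch of the resulting pattern may be useful.
The my_* identifiers below are hypothetical and not taken from any of the
drivers; the crypto API calls are the ones used in the diffs themselves:

#include <crypto/skcipher.h>
#include <crypto/internal/skcipher.h>

struct my_tfm_ctx {
	struct crypto_skcipher *fallback_tfm;	/* ordinary, possibly async */
};

struct my_req_ctx {
	struct skcipher_request fallback_req;	/* keep at the end */
};

static int my_init_tfm(struct crypto_skcipher *tfm)
{
	struct my_tfm_ctx *op = crypto_skcipher_ctx(tfm);
	const char *name = crypto_tfm_alg_name(&tfm->base);

	op->fallback_tfm = crypto_alloc_skcipher(name, 0,
						 CRYPTO_ALG_NEED_FALLBACK);
	if (IS_ERR(op->fallback_tfm))
		return PTR_ERR(op->fallback_tfm);

	/* reserve room for the fallback's own request context, too */
	crypto_skcipher_set_reqsize(tfm, sizeof(struct my_req_ctx) +
					 crypto_skcipher_reqsize(op->fallback_tfm));
	return 0;
}

static int my_do_fallback(struct skcipher_request *areq)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
	struct my_tfm_ctx *op = crypto_skcipher_ctx(tfm);
	struct my_req_ctx *rctx = skcipher_request_ctx(areq);

	skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm);
	/* forward the caller's completion routine, so an asynchronous
	 * fallback completes directly into the original requester */
	skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags,
				      areq->base.complete, areq->base.data);
	skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst,
				   areq->cryptlen, areq->iv);
	return crypto_skcipher_encrypt(&rctx->fallback_req);
}

Because the fallback request is embedded in the outer request rather than
placed on the stack, it survives until the fallback's completion callback
runs, which is what makes an asynchronous fallback safe here.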
From patchwork Sat Jun 27 08:36:14 2020
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 211307
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
    linux-amlogic@lists.infradead.org, Ard Biesheuvel, Corentin Labbe,
    Herbert Xu, "David S. Miller", Maxime Ripard, Chen-Yu Tsai,
    Tom Lendacky, Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
    Shawn Guo, Sascha Hauer, Pengutronix Kernel Team, Fabio Estevam,
    NXP Linux Team, Jamie Iles, Eric Biggers, Tero Kristo, Matthias Brugger
Subject: [PATCH v2 04/13] crypto: sun4i - permit asynchronous skcipher as fallback
Date: Sat, 27 Jun 2020 10:36:14 +0200
Message-Id: <20200627083623.2428333-5-ardb@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>
References: <20200627083623.2428333-1-ardb@kernel.org>

Even though the sun4i driver implements asynchronous versions of ecb(aes)
and cbc(aes), the fallbacks it allocates are required to be synchronous.
SIMD based software implementations are usually asynchronous as well,
even though they rarely complete asynchronously (this typically only
happens when the request was made from softirq context while SIMD was
already in use in the task context that it interrupted). As a result,
those implementations are disregarded, and either the generic C version
or another table based version implemented in assembler is selected
instead. Falling back to synchronous AES is not only a performance issue,
but potentially a security issue as well, given that table based AES is
not time invariant. Fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.

Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c | 46 ++++++++++----------
 drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h        |  3 +-
 2 files changed, 25 insertions(+), 24 deletions(-)

diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c
index 7f22d305178e..b72de8939497 100644
--- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c
+++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c
@@ -122,19 +122,17 @@ static int noinline_for_stack sun4i_ss_cipher_poll_fallback(struct skcipher_requ
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
 	struct sun4i_tfm_ctx *op = crypto_skcipher_ctx(tfm);
 	struct sun4i_cipher_req_ctx *ctx = skcipher_request_ctx(areq);
-	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, op->fallback_tfm);
 	int err;

-	skcipher_request_set_sync_tfm(subreq, op->fallback_tfm);
-	skcipher_request_set_callback(subreq, areq->base.flags, NULL,
-				      NULL);
-	skcipher_request_set_crypt(subreq, areq->src, areq->dst,
+	skcipher_request_set_tfm(&ctx->fallback_req, op->fallback_tfm);
+	skcipher_request_set_callback(&ctx->fallback_req, areq->base.flags,
+				      areq->base.complete, areq->base.data);
+	skcipher_request_set_crypt(&ctx->fallback_req, areq->src, areq->dst,
 				   areq->cryptlen, areq->iv);
 	if (ctx->mode & SS_DECRYPTION)
-		err = crypto_skcipher_decrypt(subreq);
+		err = crypto_skcipher_decrypt(&ctx->fallback_req);
 	else
-		err = crypto_skcipher_encrypt(subreq);
-	skcipher_request_zero(subreq);
+		err = crypto_skcipher_encrypt(&ctx->fallback_req);

 	return err;
 }

@@ -494,23 +492,25 @@ int sun4i_ss_cipher_init(struct crypto_tfm *tfm)
 			    alg.crypto.base);
 	op->ss = algt->ss;

-	crypto_skcipher_set_reqsize(__crypto_skcipher_cast(tfm),
-				    sizeof(struct sun4i_cipher_req_ctx));
-
-	op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
+	op->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(op->fallback_tfm)) {
 		dev_err(op->ss->dev, "ERROR: Cannot allocate fallback for %s %ld\n",
 			name, PTR_ERR(op->fallback_tfm));
 		return PTR_ERR(op->fallback_tfm);
 	}

+	crypto_skcipher_set_reqsize(__crypto_skcipher_cast(tfm),
+				    sizeof(struct sun4i_cipher_req_ctx) +
+				    crypto_skcipher_reqsize(op->fallback_tfm));
+
 	err = pm_runtime_get_sync(op->ss->dev);
 	if (err < 0)
 		goto error_pm;

 	return 0;
 error_pm:
-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 	return err;
 }

@@ -518,7 +518,7 @@ void sun4i_ss_cipher_exit(struct crypto_tfm *tfm)
 {
 	struct sun4i_tfm_ctx *op = crypto_tfm_ctx(tfm);

-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 	pm_runtime_put(op->ss->dev);
 }

@@ -546,10 +546,10 @@ int sun4i_ss_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	op->keylen = keylen;
 	memcpy(op->key, key, keylen);

-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);

-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }

 /* check and set the DES key, prepare the mode to be used */
@@ -566,10 +566,10 @@ int sun4i_ss_des_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	op->keylen = keylen;
 	memcpy(op->key, key, keylen);

-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);

-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }

 /* check and set the 3DES key, prepare the mode to be used */
@@ -586,9 +586,9 @@ int sun4i_ss_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	op->keylen = keylen;
 	memcpy(op->key, key, keylen);

-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);

-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }

diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h
index 2b4c6333eb67..163962f9e284 100644
--- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h
+++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h
@@ -170,11 +170,12 @@ struct sun4i_tfm_ctx {
 	u32 keylen;
 	u32 keymode;
 	struct sun4i_ss_ctx *ss;
-	struct crypto_sync_skcipher *fallback_tfm;
+	struct crypto_skcipher *fallback_tfm;
 };

 struct sun4i_cipher_req_ctx {
 	u32 mode;
+	struct skcipher_request fallback_req;	// keep at the end
 };

 struct sun4i_req_ctx {
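The reqsize arithmetic above is what makes the embedded request work: the
skcipher core allocates reqsize bytes of per-request context behind every
skcipher_request, and the fallback's own context must in turn sit directly
behind the embedded fallback_req. A sketch of the resulting layout (an
illustration, not code from the patch; the struct and function names are
the ones used in the diff):

/*
 * Memory behind each outer skcipher_request for this driver:
 *
 *   +-------------------------------------------+
 *   | struct sun4i_cipher_req_ctx               | <- skcipher_request_ctx()
 *   |   u32 mode;                               |
 *   |   struct skcipher_request fallback_req;   | <- must be the last member
 *   +-------------------------------------------+
 *   | crypto_skcipher_reqsize(fallback_tfm)     | <- fallback's own req ctx
 *   +-------------------------------------------+
 */
crypto_skcipher_set_reqsize(__crypto_skcipher_cast(tfm),
			    sizeof(struct sun4i_cipher_req_ctx) +
			    crypto_skcipher_reqsize(op->fallback_tfm));

This is why each of these patches tags the new member with "keep at the
end": anything placed after fallback_req would overlap the fallback's own
request context.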
From patchwork Sat Jun 27 08:36:16 2020
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 211306
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
    linux-amlogic@lists.infradead.org, Ard Biesheuvel, Corentin Labbe,
    Herbert Xu, "David S. Miller", Maxime Ripard, Chen-Yu Tsai,
    Tom Lendacky, Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
    Shawn Guo, Sascha Hauer, Pengutronix Kernel Team, Fabio Estevam,
    NXP Linux Team, Jamie Iles, Eric Biggers, Tero Kristo, Matthias Brugger
Subject: [PATCH v2 06/13] crypto: sun8i-ss - permit asynchronous skcipher as fallback
Date: Sat, 27 Jun 2020 10:36:16 +0200
Message-Id: <20200627083623.2428333-7-ardb@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>
References: <20200627083623.2428333-1-ardb@kernel.org>

Even though the sun8i-ss driver implements asynchronous versions of
ecb(aes) and cbc(aes), the fallbacks it allocates are required to be
synchronous. SIMD based software implementations are usually asynchronous
as well, even though they rarely complete asynchronously (this typically
only happens when the request was made from softirq context while SIMD
was already in use in the task context that it interrupted). As a result,
those implementations are disregarded, and either the generic C version
or another table based version implemented in assembler is selected
instead.
Falling back to synchronous AES is not only a performance issue, but
potentially a security issue as well, given that table based AES is not
time invariant. Fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.

Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c | 39 ++++++++++----------
 drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h        |  3 +-
 2 files changed, 22 insertions(+), 20 deletions(-)

diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
index c89cb2ee2496..7a131675a41c 100644
--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
@@ -73,7 +73,6 @@ static int sun8i_ss_cipher_fallback(struct skcipher_request *areq)
 	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
 	int err;

-	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, op->fallback_tfm);
 #ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
 	struct sun8i_ss_alg_template *algt;
@@ -81,15 +80,15 @@ static int sun8i_ss_cipher_fallback(struct skcipher_request *areq)
 	algt = container_of(alg, struct sun8i_ss_alg_template, alg.skcipher);
 	algt->stat_fb++;
 #endif
-	skcipher_request_set_sync_tfm(subreq, op->fallback_tfm);
-	skcipher_request_set_callback(subreq, areq->base.flags, NULL, NULL);
-	skcipher_request_set_crypt(subreq, areq->src, areq->dst,
+	skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm);
+	skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags,
+				      areq->base.complete, areq->base.data);
+	skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst,
 				   areq->cryptlen, areq->iv);
 	if (rctx->op_dir & SS_DECRYPTION)
-		err = crypto_skcipher_decrypt(subreq);
+		err = crypto_skcipher_decrypt(&rctx->fallback_req);
 	else
-		err = crypto_skcipher_encrypt(subreq);
-	skcipher_request_zero(subreq);
+		err = crypto_skcipher_encrypt(&rctx->fallback_req);
 	return err;
 }

@@ -334,18 +333,20 @@ int sun8i_ss_cipher_init(struct crypto_tfm *tfm)
 	algt = container_of(alg, struct sun8i_ss_alg_template, alg.skcipher);
 	op->ss = algt->ss;

-	sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx);
-
-	op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
+	op->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(op->fallback_tfm)) {
 		dev_err(op->ss->dev, "ERROR: Cannot allocate fallback for %s %ld\n",
 			name, PTR_ERR(op->fallback_tfm));
 		return PTR_ERR(op->fallback_tfm);
 	}

+	sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx) +
+			 crypto_skcipher_reqsize(op->fallback_tfm);
+
 	dev_info(op->ss->dev, "Fallback for %s is %s\n",
 		 crypto_tfm_alg_driver_name(&sktfm->base),
-		 crypto_tfm_alg_driver_name(crypto_skcipher_tfm(&op->fallback_tfm->base)));
+		 crypto_tfm_alg_driver_name(crypto_skcipher_tfm(op->fallback_tfm)));

 	op->enginectx.op.do_one_request = sun8i_ss_handle_cipher_request;
 	op->enginectx.op.prepare_request = NULL;
@@ -359,7 +360,7 @@ int sun8i_ss_cipher_init(struct crypto_tfm *tfm)
 	return 0;
 error_pm:
-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 	return err;
 }

@@ -371,7 +372,7 @@ void sun8i_ss_cipher_exit(struct crypto_tfm *tfm)
 		memzero_explicit(op->key, op->keylen);
 		kfree(op->key);
 	}
-	crypto_free_sync_skcipher(op->fallback_tfm);
+	crypto_free_skcipher(op->fallback_tfm);
 	pm_runtime_put_sync(op->ss->dev);
 }
@@ -401,10 +402,10 @@ int sun8i_ss_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	if (!op->key)
 		return -ENOMEM;

-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);

-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }

 int sun8i_ss_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
@@ -427,8 +428,8 @@ int sun8i_ss_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	if (!op->key)
 		return -ENOMEM;

-	crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);

-	return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+	return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }

diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
index 29c44f279112..42658b134228 100644
--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
+++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
@@ -159,6 +159,7 @@ struct sun8i_cipher_req_ctx {
 	unsigned int ivlen;
 	unsigned int keylen;
 	void *biv;
+	struct skcipher_request fallback_req;	// keep at the end
 };

 /*
@@ -174,7 +175,7 @@ struct sun8i_cipher_tfm_ctx {
 	u32 *key;
 	u32 keylen;
 	struct sun8i_ss_dev *ss;
-	struct crypto_sync_skcipher *fallback_tfm;
+	struct crypto_skcipher *fallback_tfm;
 };

 /*
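Forwarding areq->base.complete and areq->base.data is what lets the
fallback signal the original requester directly. A minimal, hypothetical
caller (my_encrypt_and_wait is not from the patch; DECLARE_CRYPTO_WAIT,
crypto_req_done and crypto_wait_req are the standard kernel helpers)
shows the contract from the other side:

#include <linux/crypto.h>
#include <crypto/skcipher.h>

static int my_encrypt_and_wait(struct skcipher_request *req)
{
	DECLARE_CRYPTO_WAIT(wait);

	/* this callback/data pair is what the driver now forwards to its
	 * fallback; if the fallback completes asynchronously, it invokes
	 * crypto_req_done() and crypto_wait_req() below wakes up */
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
					   CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);

	return crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
}

With the old SYNC_SKCIPHER_REQUEST_ON_STACK arrangement the callback was
simply dropped (NULL, NULL), which is only valid when the inner cipher is
guaranteed never to return -EINPROGRESS.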
From patchwork Sat Jun 27 08:36:18 2020
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 211305
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
    linux-amlogic@lists.infradead.org, Ard Biesheuvel, Corentin Labbe,
    Herbert Xu, "David S. Miller", Maxime Ripard, Chen-Yu Tsai,
    Tom Lendacky, Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
    Shawn Guo, Sascha Hauer, Pengutronix Kernel Team, Fabio Estevam,
    NXP Linux Team, Jamie Iles, Eric Biggers, Tero Kristo, Matthias Brugger
Subject: [PATCH v2 08/13] crypto: chelsio - permit asynchronous skcipher as fallback
Date: Sat, 27 Jun 2020 10:36:18 +0200
Message-Id: <20200627083623.2428333-9-ardb@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>
References: <20200627083623.2428333-1-ardb@kernel.org>

Even though the chelsio driver implements asynchronous versions of
cbc(aes) and xts(aes), the fallbacks it allocates are required to be
synchronous. SIMD based software implementations are usually asynchronous
as well, even though they rarely complete asynchronously (this typically
only happens when the request was made from softirq context while SIMD
was already in use in the task context that it interrupted). As a result,
those implementations are disregarded, and either the generic C version
or another table based version implemented in assembler is selected
instead. Falling back to synchronous AES is not only a performance issue,
but potentially a security issue as well, given that table based AES is
not time invariant. Fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.
Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/chelsio/chcr_algo.c   | 57 ++++++++------------
 drivers/crypto/chelsio/chcr_crypto.h |  3 +-
 2 files changed, 25 insertions(+), 35 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 4c2553672b6f..a6625b90fb1a 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -690,26 +690,22 @@ static int chcr_sg_ent_in_wr(struct scatterlist *src,
 	return min(srclen, dstlen);
 }

-static int chcr_cipher_fallback(struct crypto_sync_skcipher *cipher,
-				u32 flags,
-				struct scatterlist *src,
-				struct scatterlist *dst,
-				unsigned int nbytes,
+static int chcr_cipher_fallback(struct crypto_skcipher *cipher,
+				struct skcipher_request *req,
 				u8 *iv,
 				unsigned short op_type)
 {
+	struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req);
 	int err;

-	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, cipher);
-
-	skcipher_request_set_sync_tfm(subreq, cipher);
-	skcipher_request_set_callback(subreq, flags, NULL, NULL);
-	skcipher_request_set_crypt(subreq, src, dst,
-				   nbytes, iv);
+	skcipher_request_set_tfm(&reqctx->fallback_req, cipher);
+	skcipher_request_set_callback(&reqctx->fallback_req, req->base.flags,
+				      req->base.complete, req->base.data);
+	skcipher_request_set_crypt(&reqctx->fallback_req, req->src, req->dst,
+				   req->cryptlen, iv);

-	err = op_type ? crypto_skcipher_decrypt(subreq) :
-		crypto_skcipher_encrypt(subreq);
-	skcipher_request_zero(subreq);
+	err = op_type ? crypto_skcipher_decrypt(&reqctx->fallback_req) :
+			crypto_skcipher_encrypt(&reqctx->fallback_req);

 	return err;

@@ -924,11 +920,11 @@ static int chcr_cipher_fallback_setkey(struct crypto_skcipher *cipher,
 {
 	struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(cipher));

-	crypto_sync_skcipher_clear_flags(ablkctx->sw_cipher,
+	crypto_skcipher_clear_flags(ablkctx->sw_cipher,
 				    CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(ablkctx->sw_cipher,
+	crypto_skcipher_set_flags(ablkctx->sw_cipher,
 				  cipher->base.crt_flags & CRYPTO_TFM_REQ_MASK);
-	return crypto_sync_skcipher_setkey(ablkctx->sw_cipher, key, keylen);
+	return crypto_skcipher_setkey(ablkctx->sw_cipher, key, keylen);
 }

 static int chcr_aes_cbc_setkey(struct crypto_skcipher *cipher,
@@ -1206,13 +1202,8 @@ static int chcr_handle_cipher_resp(struct skcipher_request *req,
 			   req);
 		memcpy(req->iv, reqctx->init_iv, IV);
 		atomic_inc(&adap->chcr_stats.fallback);
-		err = chcr_cipher_fallback(ablkctx->sw_cipher,
-					   req->base.flags,
-					   req->src,
-					   req->dst,
-					   req->cryptlen,
-					   req->iv,
-					   reqctx->op);
+		err = chcr_cipher_fallback(ablkctx->sw_cipher, req, req->iv,
+					   reqctx->op);
 		goto complete;
 	}

@@ -1341,11 +1332,7 @@ static int process_cipher(struct skcipher_request *req,
 		chcr_cipher_dma_unmap(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev, req);
 fallback:
 	atomic_inc(&adap->chcr_stats.fallback);
-	err = chcr_cipher_fallback(ablkctx->sw_cipher,
-				   req->base.flags,
-				   req->src,
-				   req->dst,
-				   req->cryptlen,
+	err = chcr_cipher_fallback(ablkctx->sw_cipher, req,
 				   subtype == CRYPTO_ALG_SUB_TYPE_CTR_RFC3686 ?
 				   reqctx->iv : req->iv,

@@ -1486,14 +1473,15 @@ static int chcr_init_tfm(struct crypto_skcipher *tfm)
 	struct chcr_context *ctx = crypto_skcipher_ctx(tfm);
 	struct ablk_ctx *ablkctx = ABLK_CTX(ctx);

-	ablkctx->sw_cipher = crypto_alloc_sync_skcipher(alg->base.cra_name, 0,
+	ablkctx->sw_cipher = crypto_alloc_skcipher(alg->base.cra_name, 0,
 				CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(ablkctx->sw_cipher)) {
 		pr_err("failed to allocate fallback for %s\n", alg->base.cra_name);
 		return PTR_ERR(ablkctx->sw_cipher);
 	}
 	init_completion(&ctx->cbc_aes_aio_done);
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx));
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx) +
+					 crypto_skcipher_reqsize(ablkctx->sw_cipher));

 	return chcr_device_init(ctx);
 }
@@ -1507,13 +1495,14 @@ static int chcr_rfc3686_init(struct crypto_skcipher *tfm)
 	/*RFC3686 initialises IV counter value to 1, rfc3686(ctr(aes))
 	 * cannot be used as fallback in chcr_handle_cipher_response
 	 */
-	ablkctx->sw_cipher = crypto_alloc_sync_skcipher("ctr(aes)", 0,
+	ablkctx->sw_cipher = crypto_alloc_skcipher("ctr(aes)", 0,
 				CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(ablkctx->sw_cipher)) {
 		pr_err("failed to allocate fallback for %s\n", alg->base.cra_name);
 		return PTR_ERR(ablkctx->sw_cipher);
 	}
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx));
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx) +
+					 crypto_skcipher_reqsize(ablkctx->sw_cipher));
 	return chcr_device_init(ctx);
 }

@@ -1523,7 +1512,7 @@ static void chcr_exit_tfm(struct crypto_skcipher *tfm)
 	struct chcr_context *ctx = crypto_skcipher_ctx(tfm);
 	struct ablk_ctx *ablkctx = ABLK_CTX(ctx);

-	crypto_free_sync_skcipher(ablkctx->sw_cipher);
+	crypto_free_skcipher(ablkctx->sw_cipher);
 }

 static int get_alg_config(struct algo_param *params,
diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h
index b3fdbdc25acb..55a6631cdbee 100644
--- a/drivers/crypto/chelsio/chcr_crypto.h
+++ b/drivers/crypto/chelsio/chcr_crypto.h
@@ -171,7 +171,7 @@ static inline struct chcr_context *h_ctx(struct crypto_ahash *tfm)
 }

 struct ablk_ctx {
-	struct crypto_sync_skcipher *sw_cipher;
+	struct crypto_skcipher *sw_cipher;
 	__be32 key_ctx_hdr;
 	unsigned int enckey_len;
 	unsigned char ciph_mode;
@@ -305,6 +305,7 @@ struct chcr_skcipher_req_ctx {
 	u8 init_iv[CHCR_MAX_CRYPTO_IV_LEN];
 	u16 txqidx;
 	u16 rxqidx;
+	struct skcipher_request fallback_req;	// keep at the end
 };

 struct chcr_alg_template {
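For contrast, the pattern that each of these patches removes looked like
the sketch below (condensed from the lines the chelsio hunk above
deletes). The subrequest lives on the stack, so it cannot outlive the
function, which is why the fallback had to be a crypto_sync_skcipher and
why the completion callback had to be NULL:

static int old_style_fallback(struct crypto_sync_skcipher *cipher,
			      struct skcipher_request *req, u8 *iv,
			      unsigned short op_type)
{
	/* on-stack request: only safe because the cipher is synchronous */
	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, cipher);
	int err;

	skcipher_request_set_sync_tfm(subreq, cipher);
	skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL);
	skcipher_request_set_crypt(subreq, req->src, req->dst,
				   req->cryptlen, iv);
	err = op_type ? crypto_skcipher_decrypt(subreq) :
			crypto_skcipher_encrypt(subreq);
	skcipher_request_zero(subreq);	/* wipe the on-stack subrequest */
	return err;
}

Moving the subrequest into the heap-allocated request context removes
both constraints at once, which is also why skcipher_request_zero()
disappears from the converted drivers.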
From patchwork Sat Jun 27 08:36:20 2020
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 211304
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
    linux-amlogic@lists.infradead.org, Ard Biesheuvel, Corentin Labbe,
    Herbert Xu, "David S. Miller", Maxime Ripard, Chen-Yu Tsai,
    Tom Lendacky, Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
    Shawn Guo, Sascha Hauer, Pengutronix Kernel Team, Fabio Estevam,
    NXP Linux Team, Jamie Iles, Eric Biggers, Tero Kristo, Matthias Brugger
Subject: [PATCH v2 10/13] crypto: picoxcell - permit asynchronous skcipher as fallback
Date: Sat, 27 Jun 2020 10:36:20 +0200
Message-Id: <20200627083623.2428333-11-ardb@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>
References: <20200627083623.2428333-1-ardb@kernel.org>

Even though the picoxcell driver implements asynchronous versions of
ecb(aes) and cbc(aes), the fallbacks it allocates are required to be
synchronous. SIMD based software implementations are usually asynchronous
as well, even though they rarely complete asynchronously (this typically
only happens when the request was made from softirq context while SIMD
was already in use in the task context that it interrupted). As a result,
those implementations are disregarded, and either the generic C version
or another table based version implemented in assembler is selected
instead. Falling back to synchronous AES is not only a performance issue,
but potentially a security issue as well, given that table based AES is
not time invariant. Fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.
Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/picoxcell_crypto.c | 34 +++++++++++---------
 1 file changed, 19 insertions(+), 15 deletions(-)

diff --git a/drivers/crypto/picoxcell_crypto.c b/drivers/crypto/picoxcell_crypto.c
index 7384e91c8b32..eea75c7cbdf2 100644
--- a/drivers/crypto/picoxcell_crypto.c
+++ b/drivers/crypto/picoxcell_crypto.c
@@ -86,6 +86,7 @@ struct spacc_req {
 	dma_addr_t src_addr, dst_addr;
 	struct spacc_ddt *src_ddt, *dst_ddt;
 	void (*complete)(struct spacc_req *req);
+	struct skcipher_request fallback_req;	// keep at the end
 };

 struct spacc_aead {
@@ -158,7 +159,7 @@ struct spacc_ablk_ctx {
 	 * The fallback cipher. If the operation can't be done in hardware,
 	 * fallback to a software version.
 	 */
-	struct crypto_sync_skcipher *sw_cipher;
+	struct crypto_skcipher *sw_cipher;
 };

 /* AEAD cipher context. */
@@ -792,13 +793,13 @@ static int spacc_aes_setkey(struct crypto_skcipher *cipher, const u8 *key,
 		 * Set the fallback transform to use the same request flags as
 		 * the hardware transform.
 		 */
-		crypto_sync_skcipher_clear_flags(ctx->sw_cipher,
+		crypto_skcipher_clear_flags(ctx->sw_cipher,
 					    CRYPTO_TFM_REQ_MASK);
-		crypto_sync_skcipher_set_flags(ctx->sw_cipher,
+		crypto_skcipher_set_flags(ctx->sw_cipher,
 					  cipher->base.crt_flags &
 					  CRYPTO_TFM_REQ_MASK);

-		err = crypto_sync_skcipher_setkey(ctx->sw_cipher, key, len);
+		err = crypto_skcipher_setkey(ctx->sw_cipher, key, len);
 		if (err)
 			goto sw_setkey_failed;
 	}
@@ -900,7 +901,7 @@ static int spacc_ablk_do_fallback(struct skcipher_request *req,
 	struct crypto_tfm *old_tfm =
 	    crypto_skcipher_tfm(crypto_skcipher_reqtfm(req));
 	struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(old_tfm);
-	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->sw_cipher);
+	struct spacc_req *dev_req = skcipher_request_ctx(req);
 	int err;

 	/*
@@ -908,13 +909,13 @@ static int spacc_ablk_do_fallback(struct skcipher_request *req,
 	 * the ciphering has completed, put the old transform back into the
 	 * request.
 	 */
-	skcipher_request_set_sync_tfm(subreq, ctx->sw_cipher);
-	skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL);
-	skcipher_request_set_crypt(subreq, req->src, req->dst,
+	skcipher_request_set_tfm(&dev_req->fallback_req, ctx->sw_cipher);
+	skcipher_request_set_callback(&dev_req->fallback_req, req->base.flags,
+				      req->base.complete, req->base.data);
+	skcipher_request_set_crypt(&dev_req->fallback_req, req->src, req->dst,
 				   req->cryptlen, req->iv);
-	err = is_encrypt ? crypto_skcipher_encrypt(subreq) :
-			   crypto_skcipher_decrypt(subreq);
-	skcipher_request_zero(subreq);
+	err = is_encrypt ? crypto_skcipher_encrypt(&dev_req->fallback_req) :
+			   crypto_skcipher_decrypt(&dev_req->fallback_req);

 	return err;
 }

@@ -1007,19 +1008,22 @@ static int spacc_ablk_init_tfm(struct crypto_skcipher *tfm)
 	ctx->generic.flags = spacc_alg->type;
 	ctx->generic.engine = engine;
 	if (alg->base.cra_flags & CRYPTO_ALG_NEED_FALLBACK) {
-		ctx->sw_cipher = crypto_alloc_sync_skcipher(
+		ctx->sw_cipher = crypto_alloc_skcipher(
 			alg->base.cra_name, 0, CRYPTO_ALG_NEED_FALLBACK);
 		if (IS_ERR(ctx->sw_cipher)) {
 			dev_warn(engine->dev, "failed to allocate fallback for %s\n",
 				 alg->base.cra_name);
 			return PTR_ERR(ctx->sw_cipher);
 		}
+		crypto_skcipher_set_reqsize(tfm, sizeof(struct spacc_req) +
+						 crypto_skcipher_reqsize(ctx->sw_cipher));
+	} else {
+		crypto_skcipher_set_reqsize(tfm, sizeof(struct spacc_req));
 	}
+
 	ctx->generic.key_offs = spacc_alg->key_offs;
 	ctx->generic.iv_offs = spacc_alg->iv_offs;

-	crypto_skcipher_set_reqsize(tfm, sizeof(struct spacc_req));
-
 	return 0;
 }

@@ -1027,7 +1031,7 @@ static void spacc_ablk_exit_tfm(struct crypto_skcipher *tfm)
 {
 	struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(tfm);

-	crypto_free_sync_skcipher(ctx->sw_cipher);
+	crypto_free_skcipher(ctx->sw_cipher);
 }

 static int spacc_ablk_encrypt(struct skcipher_request *req)
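One consequence of delegating with the caller's own completion routine is
that the fallback's return value can simply be passed through. A hedged
sketch (my_encrypt is hypothetical; spacc_req and fallback_req are the
names used in the patch above):

static int my_encrypt(struct skcipher_request *req)
{
	struct spacc_req *dev_req = skcipher_request_ctx(req);

	/* may return 0 (completed synchronously), -EINPROGRESS (queued)
	 * or -EBUSY (backlogged); all are valid skcipher return values,
	 * and the original caller can handle each of them because its own
	 * callback was installed on the fallback request */
	return crypto_skcipher_encrypt(&dev_req->fallback_req);
}

The old code could not do this: with a NULL callback, -EINPROGRESS would
have left the caller with no way of ever learning that the request
completed.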
From patchwork Sat Jun 27 08:36:22 2020
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 211303
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-omap@vger.kernel.org,
    linux-amlogic@lists.infradead.org, Ard Biesheuvel, Corentin Labbe,
    Herbert Xu, "David S. Miller", Maxime Ripard, Chen-Yu Tsai,
    Tom Lendacky, Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
    Shawn Guo, Sascha Hauer, Pengutronix Kernel Team, Fabio Estevam,
    NXP Linux Team, Jamie Iles, Eric Biggers, Tero Kristo, Matthias Brugger
Subject: [PATCH v2 12/13] crypto: sahara - permit asynchronous skcipher as fallback
Date: Sat, 27 Jun 2020 10:36:22 +0200
Message-Id: <20200627083623.2428333-13-ardb@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200627083623.2428333-1-ardb@kernel.org>
References: <20200627083623.2428333-1-ardb@kernel.org>

Even though the sahara driver implements asynchronous versions of
ecb(aes) and cbc(aes), the fallbacks it allocates are required to be
synchronous. SIMD based software implementations are usually asynchronous
as well, even though they rarely complete asynchronously (this typically
only happens when the request was made from softirq context while SIMD
was already in use in the task context that it interrupted). As a result,
those implementations are disregarded, and either the generic C version
or another table based version implemented in assembler is selected
instead. Falling back to synchronous AES is not only a performance issue,
but potentially a security issue as well, given that table based AES is
not time invariant. Fix this by allocating an ordinary skcipher as the
fallback, and invoking it with the completion routine that was given to
the outer request.

Signed-off-by: Ard Biesheuvel
---
 drivers/crypto/sahara.c | 96 +++++++++-----------
 1 file changed, 45 insertions(+), 51 deletions(-)

diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
index 466e30bd529c..0c8cb23ae708 100644
--- a/drivers/crypto/sahara.c
+++ b/drivers/crypto/sahara.c
@@ -146,11 +146,12 @@ struct sahara_ctx {
 	/* AES-specific context */
 	int keylen;
 	u8 key[AES_KEYSIZE_128];
-	struct crypto_sync_skcipher *fallback;
+	struct crypto_skcipher *fallback;
 };

 struct sahara_aes_reqctx {
 	unsigned long mode;
+	struct skcipher_request fallback_req;	// keep at the end
 };

 /*
@@ -617,10 +618,10 @@ static int sahara_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	/*
 	 * The requested key size is not supported by HW, do a fallback.
 	 */
-	crypto_sync_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK);
-	crypto_sync_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags &
+	crypto_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags &
 				  CRYPTO_TFM_REQ_MASK);
-	return crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
+	return crypto_skcipher_setkey(ctx->fallback, key, keylen);
 }

 static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode)
@@ -651,21 +652,19 @@ static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode)

 static int sahara_aes_ecb_encrypt(struct skcipher_request *req)
 {
+	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(
 		crypto_skcipher_reqtfm(req));
-	int err;

 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
-
-		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
-		skcipher_request_set_callback(subreq, req->base.flags,
-					      NULL, NULL);
-		skcipher_request_set_crypt(subreq, req->src, req->dst,
-					   req->cryptlen, req->iv);
-		err = crypto_skcipher_encrypt(subreq);
-		skcipher_request_zero(subreq);
-		return err;
+		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+		skcipher_request_set_callback(&rctx->fallback_req,
+					      req->base.flags,
+					      req->base.complete,
+					      req->base.data);
+		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+					   req->dst, req->cryptlen, req->iv);
+		return crypto_skcipher_encrypt(&rctx->fallback_req);
 	}

 	return sahara_aes_crypt(req, FLAGS_ENCRYPT);
@@ -673,21 +672,19 @@ static int sahara_aes_ecb_encrypt(struct skcipher_request *req)

 static int sahara_aes_ecb_decrypt(struct skcipher_request *req)
 {
+	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(
 		crypto_skcipher_reqtfm(req));
-	int err;

 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
-
-		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
-		skcipher_request_set_callback(subreq, req->base.flags,
-					      NULL, NULL);
-		skcipher_request_set_crypt(subreq, req->src, req->dst,
-					   req->cryptlen, req->iv);
-		err = crypto_skcipher_decrypt(subreq);
-		skcipher_request_zero(subreq);
-		return err;
+		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+		skcipher_request_set_callback(&rctx->fallback_req,
+					      req->base.flags,
+					      req->base.complete,
+					      req->base.data);
+		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+					   req->dst, req->cryptlen, req->iv);
+		return crypto_skcipher_decrypt(&rctx->fallback_req);
 	}

 	return sahara_aes_crypt(req, 0);
@@ -695,21 +692,19 @@ static int sahara_aes_ecb_decrypt(struct skcipher_request *req)

 static int sahara_aes_cbc_encrypt(struct skcipher_request *req)
 {
+	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(
 		crypto_skcipher_reqtfm(req));
-	int err;

 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
-
-		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
-		skcipher_request_set_callback(subreq, req->base.flags,
-					      NULL, NULL);
-		skcipher_request_set_crypt(subreq, req->src, req->dst,
-					   req->cryptlen, req->iv);
-		err = crypto_skcipher_encrypt(subreq);
-		skcipher_request_zero(subreq);
-		return err;
+		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+		skcipher_request_set_callback(&rctx->fallback_req,
+					      req->base.flags,
+					      req->base.complete,
+					      req->base.data);
+		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+					   req->dst, req->cryptlen, req->iv);
+		return crypto_skcipher_encrypt(&rctx->fallback_req);
 	}

 	return sahara_aes_crypt(req, FLAGS_ENCRYPT | FLAGS_CBC);
@@ -717,21 +712,19 @@ static int sahara_aes_cbc_encrypt(struct skcipher_request *req)

 static int sahara_aes_cbc_decrypt(struct skcipher_request *req)
 {
+	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(
 		crypto_skcipher_reqtfm(req));
-	int err;

 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
-
-		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
-		skcipher_request_set_callback(subreq, req->base.flags,
-					      NULL, NULL);
-		skcipher_request_set_crypt(subreq, req->src, req->dst,
-					   req->cryptlen, req->iv);
-		err = crypto_skcipher_decrypt(subreq);
-		skcipher_request_zero(subreq);
-		return err;
+		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+		skcipher_request_set_callback(&rctx->fallback_req,
+					      req->base.flags,
+					      req->base.complete,
+					      req->base.data);
+		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+					   req->dst, req->cryptlen, req->iv);
+		return crypto_skcipher_decrypt(&rctx->fallback_req);
 	}

 	return sahara_aes_crypt(req, FLAGS_CBC);
@@ -742,14 +735,15 @@ static int sahara_aes_init_tfm(struct crypto_skcipher *tfm)
 	const char *name = crypto_tfm_alg_name(&tfm->base);
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(tfm);

-	ctx->fallback = crypto_alloc_sync_skcipher(name, 0,
+	ctx->fallback = crypto_alloc_skcipher(name, 0,
 					      CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(ctx->fallback)) {
 		pr_err("Error allocating fallback algo %s\n", name);
 		return PTR_ERR(ctx->fallback);
 	}

-	crypto_skcipher_set_reqsize(tfm, sizeof(struct sahara_aes_reqctx));
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct sahara_aes_reqctx) +
+					 crypto_skcipher_reqsize(ctx->fallback));

 	return 0;
 }

@@ -758,7 +752,7 @@ static void sahara_aes_exit_tfm(struct crypto_skcipher *tfm)
 {
 	struct sahara_ctx *ctx = crypto_skcipher_ctx(tfm);

-	crypto_free_sync_skcipher(ctx->fallback);
+	crypto_free_skcipher(ctx->fallback);
 }

 static u32 sahara_sha_init_hdr(struct sahara_dev *dev,