From patchwork Mon Mar 1 16:13:36 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 389830
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Corentin Labbe , Herbert Xu
Subject: [PATCH 5.4 272/340] crypto: sun4i-ss - IV register does not work on A10 and A13
Date: Mon, 1 Mar 2021 17:13:36 +0100
Message-Id: <20210301161101.674019433@linuxfoundation.org>
X-Mailer: git-send-email 2.30.1
In-Reply-To: <20210301161048.294656001@linuxfoundation.org>
References: <20210301161048.294656001@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Corentin Labbe

commit b756f1c8fc9d84e3f546d7ffe056c5352f4aab05 upstream.

The Allwinner A10 and A13 SoCs have a version of the SS that produces an
invalid IV in the IVx registers. Instead of adding a variant for those,
let's convert the SS driver to produce the IV directly from the data.
Fixes: 6298e948215f2 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
Cc:
Signed-off-by: Corentin Labbe
Signed-off-by: Herbert Xu
Signed-off-by: Greg Kroah-Hartman
---
 drivers/crypto/sunxi-ss/sun4i-ss-cipher.c |   34 ++++++++++++++++++++++++------
 1 file changed, 28 insertions(+), 6 deletions(-)

--- a/drivers/crypto/sunxi-ss/sun4i-ss-cipher.c
+++ b/drivers/crypto/sunxi-ss/sun4i-ss-cipher.c
@@ -20,6 +20,7 @@ static int noinline_for_stack sun4i_ss_o
 	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
 	struct sun4i_cipher_req_ctx *ctx = skcipher_request_ctx(areq);
 	u32 mode = ctx->mode;
+	void *backup_iv = NULL;
 	/* when activating SS, the default FIFO space is SS_RX_DEFAULT(32) */
 	u32 rx_cnt = SS_RX_DEFAULT;
 	u32 tx_cnt = 0;
@@ -44,6 +45,13 @@ static int noinline_for_stack sun4i_ss_o
 		return -EINVAL;
 	}
 
+	if (areq->iv && ivsize > 0 && mode & SS_DECRYPTION) {
+		backup_iv = kzalloc(ivsize, GFP_KERNEL);
+		if (!backup_iv)
+			return -ENOMEM;
+		scatterwalk_map_and_copy(backup_iv, areq->src, areq->cryptlen - ivsize, ivsize, 0);
+	}
+
 	spin_lock_irqsave(&ss->slock, flags);
 
 	for (i = 0; i < op->keylen; i += 4)
@@ -117,9 +125,12 @@ static int noinline_for_stack sun4i_ss_o
 	} while (oleft);
 
 	if (areq->iv) {
-		for (i = 0; i < 4 && i < ivsize / 4; i++) {
-			v = readl(ss->base + SS_IV0 + i * 4);
-			*(u32 *)(areq->iv + i * 4) = v;
+		if (mode & SS_DECRYPTION) {
+			memcpy(areq->iv, backup_iv, ivsize);
+			kfree_sensitive(backup_iv);
+		} else {
+			scatterwalk_map_and_copy(areq->iv, areq->dst, areq->cryptlen - ivsize,
+						 ivsize, 0);
 		}
 	}
 
@@ -176,6 +187,7 @@ static int sun4i_ss_cipher_poll(struct s
 	unsigned int ileft = areq->cryptlen;
 	unsigned int oleft = areq->cryptlen;
 	unsigned int todo;
+	void *backup_iv = NULL;
 	struct sg_mapping_iter mi, mo;
 	unsigned long pi = 0, po = 0; /* progress for in and out */
 	bool miter_err;
@@ -219,6 +231,13 @@ static int sun4i_ss_cipher_poll(struct s
 	if (need_fallback)
 		return sun4i_ss_cipher_poll_fallback(areq);
 
+	if (areq->iv && ivsize > 0 && mode & SS_DECRYPTION) {
+		backup_iv = kzalloc(ivsize, GFP_KERNEL);
+		if (!backup_iv)
+			return -ENOMEM;
+		scatterwalk_map_and_copy(backup_iv, areq->src, areq->cryptlen - ivsize, ivsize, 0);
+	}
+
 	spin_lock_irqsave(&ss->slock, flags);
 
 	for (i = 0; i < op->keylen; i += 4)
@@ -347,9 +366,12 @@ static int sun4i_ss_cipher_poll(struct s
 		sg_miter_stop(&mo);
 	}
 	if (areq->iv) {
-		for (i = 0; i < 4 && i < ivsize / 4; i++) {
-			v = readl(ss->base + SS_IV0 + i * 4);
-			*(u32 *)(areq->iv + i * 4) = v;
+		if (mode & SS_DECRYPTION) {
+			memcpy(areq->iv, backup_iv, ivsize);
+			kfree_sensitive(backup_iv);
+		} else {
+			scatterwalk_map_and_copy(areq->iv, areq->dst, areq->cryptlen - ivsize,
+						 ivsize, 0);
 		}
 	}
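
[Editor's note: the convention the patch switches to is the standard CBC
chaining-IV rule: the IV for the next request is the last ciphertext block
of the current request, taken from the data rather than from the broken
IVx hardware registers. Below is a minimal sketch of that rule using plain
contiguous buffers instead of the driver's scatterlists; the helper names
are illustrative only and are not part of the driver.]

#include <string.h>

#define AES_BLOCK_SIZE 16

/* Decryption: the source buffer holds the ciphertext, so the next
 * chaining IV must be saved from its tail *before* an in-place
 * operation overwrites it (hence backup_iv in the patch). */
static void save_next_iv_decrypt(void *iv, const void *src, size_t len)
{
	memcpy(iv, (const char *)src + len - AES_BLOCK_SIZE, AES_BLOCK_SIZE);
}

/* Encryption: the destination buffer holds the ciphertext, so the next
 * chaining IV can simply be copied from its tail once the operation is
 * done. */
static void save_next_iv_encrypt(void *iv, const void *dst, size_t len)
{
	memcpy(iv, (const char *)dst + len - AES_BLOCK_SIZE, AES_BLOCK_SIZE);
}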