From patchwork Thu Jun 27 10:26:27 2019
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 167908
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v3 12/32] crypto: arm64/aes-ccm - switch to AES library
Date: Thu, 27 Jun 2019 12:26:27 +0200
Message-Id: <20190627102647.2992-13-ard.biesheuvel@linaro.org>
In-Reply-To: <20190627102647.2992-1-ard.biesheuvel@linaro.org>
References: <20190627102647.2992-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

The CCM code calls directly into the scalar table-based AES cipher for
arm64 from the fallback path, and since this implementation is known
to be non-time-invariant, doing so from a time-invariant SIMD cipher
is a bit nasty.

So let's switch to the AES library - this makes the code more robust,
and drops the dependency on the generic AES cipher, allowing us to
omit it entirely in the future.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/Kconfig           |  2 +-
 arch/arm64/crypto/aes-ce-ccm-glue.c | 18 ++++++------------
 2 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 1762055e7093..c6032bfb44fb 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -80,8 +80,8 @@ config CRYPTO_AES_ARM64_CE_CCM
 	depends on ARM64 && KERNEL_MODE_NEON
 	select CRYPTO_ALGAPI
 	select CRYPTO_AES_ARM64_CE
-	select CRYPTO_AES_ARM64
 	select CRYPTO_AEAD
+	select CRYPTO_LIB_AES
 
 config CRYPTO_AES_ARM64_CE_BLK
 	tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions"
diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
index cb89c80800b5..b9b7cf4b5a8f 100644
--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
+++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
@@ -46,8 +46,6 @@ asmlinkage void ce_aes_ccm_decrypt(u8 out[], u8 const in[], u32 cbytes,
 asmlinkage void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u32 const rk[],
 				 u32 rounds);
 
-asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
-
 static int ccm_setkey(struct crypto_aead *tfm, const u8 *in_key,
 		      unsigned int key_len)
 {
@@ -127,8 +125,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
 	}
 
 	while (abytes >= AES_BLOCK_SIZE) {
-		__aes_arm64_encrypt(key->key_enc, mac, mac,
-				    num_rounds(key));
+		aes_encrypt(key, mac, mac);
 
 		crypto_xor(mac, in, AES_BLOCK_SIZE);
 		in += AES_BLOCK_SIZE;
@@ -136,8 +133,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
 	}
 
 	if (abytes > 0) {
-		__aes_arm64_encrypt(key->key_enc, mac, mac,
-				    num_rounds(key));
+		aes_encrypt(key, mac, mac);
 		crypto_xor(mac, in, abytes);
 		*macp = abytes;
 	}
@@ -209,10 +205,8 @@ static int ccm_crypt_fallback(struct skcipher_walk *walk, u8 mac[], u8 iv0[],
 			bsize = nbytes;
 
 		crypto_inc(walk->iv, AES_BLOCK_SIZE);
-		__aes_arm64_encrypt(ctx->key_enc, buf, walk->iv,
-				    num_rounds(ctx));
-		__aes_arm64_encrypt(ctx->key_enc, mac, mac,
-				    num_rounds(ctx));
+		aes_encrypt(ctx, buf, walk->iv);
+		aes_encrypt(ctx, mac, mac);
 		if (enc)
 			crypto_xor(mac, src, bsize);
 		crypto_xor_cpy(dst, src, buf, bsize);
@@ -227,8 +221,8 @@ static int ccm_crypt_fallback(struct skcipher_walk *walk, u8 mac[], u8 iv0[],
 	}
 
 	if (!err) {
-		__aes_arm64_encrypt(ctx->key_enc, buf, iv0, num_rounds(ctx));
-		__aes_arm64_encrypt(ctx->key_enc, mac, mac, num_rounds(ctx));
+		aes_encrypt(ctx, buf, iv0);
+		aes_encrypt(ctx, mac, mac);
 		crypto_xor(mac, buf, AES_BLOCK_SIZE);
 	}
 	return err;

-- 
2.20.1
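
For context, aes_encrypt() above is the interface provided by the AES
library (CRYPTO_LIB_AES, declared in <crypto/aes.h>) that this series
introduces. Below is a minimal sketch of how that interface is used,
mirroring one CBC-MAC step from ccm_update_mac() above; the helper name
and error handling are illustrative only, not code from this patch:

#include <crypto/aes.h>
#include <linux/string.h>

static int example_cbcmac_step(const u8 *key, unsigned int key_len,
			       u8 mac[AES_BLOCK_SIZE],
			       const u8 in[AES_BLOCK_SIZE])
{
	struct crypto_aes_ctx ctx;
	int i, err;

	/* Expand the round keys; rejects invalid key lengths. */
	err = aes_expandkey(&ctx, key, key_len);
	if (err)
		return err;

	/*
	 * mac = E(K, mac) ^ in, as in the while loop of
	 * ccm_update_mac(). aes_encrypt() takes the whole expanded-key
	 * context and derives the round count itself, which is why the
	 * explicit num_rounds() argument that __aes_arm64_encrypt()
	 * needed disappears throughout this patch.
	 */
	aes_encrypt(&ctx, mac, mac);
	for (i = 0; i < AES_BLOCK_SIZE; i++)
		mac[i] ^= in[i];

	/* Don't leave round keys on the stack. */
	memzero_explicit(&ctx, sizeof(ctx));
	return 0;
}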