From patchwork Tue Jun 20 09:28:54 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 105935 Delivered-To: patch@linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org.
[209.132.180.67]) by mx.google.com with ESMTP id z77si10042513pfj.338.2017.06.20.02.35.32; Tue, 20 Jun 2017 02:35:33 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.b=TnCZy5Ju; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751098AbdFTJao (ORCPT + 1 other); Tue, 20 Jun 2017 05:30:44 -0400 Received: from mail-wm0-f50.google.com ([74.125.82.50]:33783 "EHLO mail-wm0-f50.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751898AbdFTJ3F (ORCPT ); Tue, 20 Jun 2017 05:29:05 -0400 Received: by mail-wm0-f50.google.com with SMTP id u185so5557317wmd.0 for ; Tue, 20 Jun 2017 02:29:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=qcYAcOmmUwiRk2c61XRLo0Bt89kgmcmXjKN5FsBPGOI=; b=TnCZy5JuXEWm2NK/lUQJtZtXo/MskX/S6VA8So6zOVVTPeckjEK7/y53Z7EgvCSNm2 H4EwpUMNA8w2z3G9MacuCWQFTvT3pxl246ngJvl8bHQfJGI+D4TuBU1EO4Bee9r5zVhT gRqgVQOdliu1DZqQzqxOp2zQ3ecCMjGtwP8xM= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=qcYAcOmmUwiRk2c61XRLo0Bt89kgmcmXjKN5FsBPGOI=; b=pkZ3pS9nzcIUZj/3qOE6qyQXZKZL2vMC/9wkszDNFWjDqPEADf64AjB7PBJFS7UVpV NoiPH2PP0YAtw7ykU2qa0baofkrkJqwkda33QRRreFXcetEob3bJBsqRESNRzAwSX8/S I/d3ZFE1thuiWKS+rj1eKDWhtJH0UGtVyh/wFo1eOGYWLlChPFlAQyRsOoq6a9JYAbc6 cOADH2/xgUpQE/+rv1PzJLMJgyygqSumQPSDifxosB2TJueWC3lk6X8+QCgHVgss4g1g dO8SxQvCTOu3cnCb5Sz5uZeBRs+VSp0f29su137bsjzXiFEiWz6yJDoinI9AdDPqmsC4 QPVg== X-Gm-Message-State: AKS2vOyD3L2mGt3v2gD5YPClpeJDGzYkMRE6JM1W+GEjnQ6c4vXo41U7 D3a1YTTs97t21TopcyfFlw== X-Received: by 10.80.191.76 with SMTP id g12mr20694856edk.12.1497950944208; Tue, 20 Jun 2017 02:29:04 -0700 (PDT) Received: from localhost.localdomain (101-126-045-062.dynamic.caiway.nl. [62.45.126.101]) by smtp.gmail.com with ESMTPSA id a52sm6033452eda.44.2017.06.20.02.29.03 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 20 Jun 2017 02:29:03 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, nico@linaro.org, ebiggers3@gmail.com, Ard Biesheuvel Subject: [PATCH v3 1/7] drivers/crypto/Kconfig: drop bogus CRYPTO_AES dependencies Date: Tue, 20 Jun 2017 11:28:54 +0200 Message-Id: <1497950940-24243-2-git-send-email-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1497950940-24243-1-git-send-email-ard.biesheuvel@linaro.org> References: <1497950940-24243-1-git-send-email-ard.biesheuvel@linaro.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In preparation of fine tuning the dependency relations between the accelerated AES drivers and the core support code, let's remove the dependency declarations that are false. None of these modules have link time dependencies on the generic AES code, nor do they declare any AES algos with CRYPTO_ALG_NEED_FALLBACK, so they can function perfectly fine without crypto/aes_generic.o loaded. 
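For illustration only (this sketch is not from any of the drivers touched below, and the names are made up): the one pattern that would genuinely require the generic cipher is a driver that registers its algorithms with CRYPTO_ALG_NEED_FALLBACK and allocates a software "aes" transform to punt to. A minimal sketch of that pattern, which is exactly what these drivers do not contain, would be:

	#include <linux/crypto.h>
	#include <linux/err.h>

	struct hypothetical_ctx {
		struct crypto_cipher *fallback;	/* software "aes" instance */
	};

	static int hypothetical_init(struct crypto_tfm *tfm)
	{
		struct hypothetical_ctx *ctx = crypto_tfm_ctx(tfm);

		/* ask for an implementation that does not itself need a fallback */
		ctx->fallback = crypto_alloc_cipher("aes", 0,
						    CRYPTO_ALG_NEED_FALLBACK);
		return PTR_ERR_OR_ZERO(ctx->fallback);
	}

Since none of the Kconfig entries changed below correspond to drivers doing anything like this, selecting CRYPTO_AES buys them nothing.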
Signed-off-by: Ard Biesheuvel --- drivers/crypto/Kconfig | 5 ----- 1 file changed, 5 deletions(-) -- 2.7.4 diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 0528a62a39a6..7a737c1c669e 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -419,7 +419,6 @@ config CRYPTO_DEV_S5P tristate "Support for Samsung S5PV210/Exynos crypto accelerator" depends on ARCH_S5PV210 || ARCH_EXYNOS || COMPILE_TEST depends on HAS_IOMEM && HAS_DMA - select CRYPTO_AES select CRYPTO_BLKCIPHER help This option allows you to have support for S5P crypto acceleration. @@ -473,7 +472,6 @@ config CRYPTO_DEV_ATMEL_AES tristate "Support for Atmel AES hw accelerator" depends on HAS_DMA depends on ARCH_AT91 || COMPILE_TEST - select CRYPTO_AES select CRYPTO_AEAD select CRYPTO_BLKCIPHER help @@ -591,7 +589,6 @@ config CRYPTO_DEV_SUN4I_SS depends on ARCH_SUNXI && !64BIT select CRYPTO_MD5 select CRYPTO_SHA1 - select CRYPTO_AES select CRYPTO_DES select CRYPTO_BLKCIPHER help @@ -606,7 +603,6 @@ config CRYPTO_DEV_SUN4I_SS config CRYPTO_DEV_ROCKCHIP tristate "Rockchip's Cryptographic Engine driver" depends on OF && ARCH_ROCKCHIP - select CRYPTO_AES select CRYPTO_DES select CRYPTO_MD5 select CRYPTO_SHA1 @@ -622,7 +618,6 @@ config CRYPTO_DEV_MEDIATEK tristate "MediaTek's EIP97 Cryptographic Engine driver" depends on HAS_DMA depends on (ARM && ARCH_MEDIATEK) || COMPILE_TEST - select CRYPTO_AES select CRYPTO_AEAD select CRYPTO_BLKCIPHER select CRYPTO_CTR From patchwork Tue Jun 20 09:28:55 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 105937 Delivered-To: patch@linaro.org Received: by 10.140.91.2 with SMTP id y2csp1274239qgd; Tue, 20 Jun 2017 02:36:10 -0700 (PDT) X-Received: by 10.84.174.131 with SMTP id r3mr33662760plb.90.1497951370690; Tue, 20 Jun 2017 02:36:10 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1497951370; cv=none; d=google.com; s=arc-20160816; b=q1www4jZe3lAJnm5HUnhhtd7Ubo8HMyfv/We3EGbvOm+1v4wg/bIpWWUeCV4ibk3Qn jvNL/r8IQc1VcvotOFayX2NlVcoTThoRuGeKDpB1bwHD6IWvEbOs7zixsataC6XeRISX W2CYErBYYqvnZPrPGxNXHpEQh8QpLNQ9cb3KhZTb4zBrIEkp/G8dMhcoR5akSuiJY5GJ upVkUaZRoKpRN/RrayWN5vqkCUnZlySvGiTPEeiJUSIl/CoWNRxOl0oxr6hgfFphZm/U r31c0UdtxUE3ZFOMAMAXhZnDrEcJkNhoQ9QZWUJatgS0S2x4erKEuflepq6+ypXePew0 XTXw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:arc-authentication-results; bh=FZvZbnYNkFEUdJNT3fIoIKdq6LGlBmoQK+DMpvZEzjs=; b=t1wY5wZmto4iK9sUgV4M0728CudZvXIR5UbB7DOGAIsq/v80jUT5n/hqVWRcBdl3+c 59Shs8VyN4KXhsN9ABry9ZHbLPqdBFq1L1QIYHXAPrcP/awYtNx1e1U64z6Mgt0zMjSI x45jD37fJlk8cP6vtKSRAZc2zM2kmlQ95QYN6jXWuKP5vW/YczGIzuP9VCtOGpEs1uIg K/A6fGP+NInOMhUoIttV/W2FmDQ1O4YMSOulEUXqir5i9i0u9L6qTaGIKYSdZr6Q7yU/ NwSiOImCGpsT6dRuCns1mfh9FPCtrgt8pXE+gwWnUBVDH5bDEDKYdvf6j1brGKX6W7OT 1VOQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.b=FWeVlRzM; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id p1si10461961pge.322.2017.06.20.02.36.10; Tue, 20 Jun 2017 02:36:10 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.b=FWeVlRzM; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752017AbdFTJan (ORCPT + 1 other); Tue, 20 Jun 2017 05:30:43 -0400 Received: from mail-wm0-f49.google.com ([74.125.82.49]:36178 "EHLO mail-wm0-f49.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752014AbdFTJ3H (ORCPT ); Tue, 20 Jun 2017 05:29:07 -0400 Received: by mail-wm0-f49.google.com with SMTP id m125so15864245wmm.1 for ; Tue, 20 Jun 2017 02:29:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=FZvZbnYNkFEUdJNT3fIoIKdq6LGlBmoQK+DMpvZEzjs=; b=FWeVlRzMA5lhR2E8RGebvUk731PdQUAq2E1VkALrCd4w7Uq7MJeFlOc7Lm+sU5+TY8 7uFqf1VsdmRTp6k+HZAq37VNA7seqiJSapNeGISEYsf8cwf0I13kUWABViok8x1AGQBj h3MXivliZUtWIsIeO1DOMo/Iz/melZCC1LWzo= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=FZvZbnYNkFEUdJNT3fIoIKdq6LGlBmoQK+DMpvZEzjs=; b=AtZo9AZOymu6iV+5Ogc/W02qRrbrGSRNLY337K6Qgsrz0rtdcK3TmqMc9DfO+wSs/4 AcWe8q2YumahQAJos6wuTLqyGObWRQltszZCI6vPpjN0dxg/jR2mfKq48vTr9BxkjfrF FvHYM4YCTwlsdj0egU7ZeHTG0ayEUHCbohmX4DAiQEUC7ro4svr2o6ZKGH9vqV8vHezL u0GOOF0Yk5+qsqM5SMf2LxaFxAW4sxE5JxLZpxIebUzNES7GNVMiBjzOEZBHgSfssIUd XFIwGXE/OC1brm3h8CeFe5XlQVV3BHs+6hOvwy3b1IuIz25n2w+HJKpyNTiqsfKi9pUE xxVQ== X-Gm-Message-State: AKS2vOwfcYqzrh2D5BHY64cbvitEzcAQ8m1n5Xjq7cLsvrFMAbZXW7Z1 HbJdsy/eMbFDsp5J3A4NIA== X-Received: by 10.80.145.25 with SMTP id e25mr20621778eda.8.1497950945626; Tue, 20 Jun 2017 02:29:05 -0700 (PDT) Received: from localhost.localdomain (101-126-045-062.dynamic.caiway.nl. [62.45.126.101]) by smtp.gmail.com with ESMTPSA id a52sm6033452eda.44.2017.06.20.02.29.04 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 20 Jun 2017 02:29:04 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, nico@linaro.org, ebiggers3@gmail.com, Ard Biesheuvel Subject: [PATCH v3 2/7] crypto: aes - refactor shared routines into separate core module Date: Tue, 20 Jun 2017 11:28:55 +0200 Message-Id: <1497950940-24243-3-git-send-email-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1497950940-24243-1-git-send-email-ard.biesheuvel@linaro.org> References: <1497950940-24243-1-git-send-email-ard.biesheuvel@linaro.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In preparation of further refactoring and cleanup of the AES code, move the implementations of crypto_aes_expand_key() and crypto_aes_set_key() into a separate module called aes_core, along with the forward Sbox and some GF(2^8) routines that these routines rely on. 
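Taken together with the crypto_aes_[en|de]crypt() helpers introduced in the next paragraph, the consolidated API can be exercised roughly as follows (an illustrative caller, not part of the patch; the prototypes match the include/crypto/aes.h hunk further down):

	#include <crypto/aes.h>
	#include <linux/string.h>

	static int example_one_block(const u8 *key, unsigned int key_len,
				     const u8 in[AES_BLOCK_SIZE],
				     u8 out[AES_BLOCK_SIZE])
	{
		struct crypto_aes_ctx ctx;
		int err;

		/* key_len must be 16, 24 or 32 bytes */
		err = crypto_aes_expand_key(&ctx, key, key_len);
		if (err)
			return err;

		/* fixed-time scalar encryption of a single 16-byte block */
		crypto_aes_encrypt(&ctx, out, in);

		/* the expanded round keys are key material, wipe them */
		memzero_explicit(&ctx, sizeof(ctx));
		return 0;
	}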
Also, introduce crypto_aes_[en|de]crypt() based on the fixed time code, which will be used in future patches by time invariant SIMD drivers that may need to fallback to scalar code in exceptional circumstances. These fallbacks offer a different tradeoff between time invariance and speed, but are generally more appropriate due to the smaller size and cache footprint. Signed-off-by: Ard Biesheuvel --- arch/arm/crypto/Kconfig | 2 +- arch/arm64/crypto/Kconfig | 2 +- crypto/Kconfig | 5 + crypto/Makefile | 1 + crypto/aes_core.c | 333 ++++++++++++++++++++ crypto/aes_generic.c | 178 ----------- crypto/aes_ti.c | 315 ++---------------- drivers/crypto/Kconfig | 8 +- include/crypto/aes.h | 6 + 9 files changed, 376 insertions(+), 474 deletions(-) -- 2.7.4 diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig index b9adedcc5b2e..fd77aebcb7a9 100644 --- a/arch/arm/crypto/Kconfig +++ b/arch/arm/crypto/Kconfig @@ -73,7 +73,7 @@ config CRYPTO_AES_ARM_BS depends on KERNEL_MODE_NEON select CRYPTO_BLKCIPHER select CRYPTO_SIMD - select CRYPTO_AES + select CRYPTO_AES_CORE help Use a faster and more secure NEON based implementation of AES in CBC, CTR and XTS modes diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index d92293747d63..db55e069c17b 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -68,7 +68,7 @@ config CRYPTO_AES_ARM64_NEON_BLK tristate "AES in ECB/CBC/CTR/XTS modes using NEON instructions" depends on ARM64 && KERNEL_MODE_NEON select CRYPTO_BLKCIPHER - select CRYPTO_AES + select CRYPTO_AES_CORE select CRYPTO_SIMD config CRYPTO_CHACHA20_NEON diff --git a/crypto/Kconfig b/crypto/Kconfig index caa770e535a2..b4edea2aed22 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -894,9 +894,13 @@ config CRYPTO_GHASH_CLMUL_NI_INTEL comment "Ciphers" +config CRYPTO_AES_CORE + tristate + config CRYPTO_AES tristate "AES cipher algorithms" select CRYPTO_ALGAPI + select CRYPTO_AES_CORE help AES cipher algorithms (FIPS-197). AES uses the Rijndael algorithm. @@ -917,6 +921,7 @@ config CRYPTO_AES config CRYPTO_AES_TI tristate "Fixed time AES cipher" select CRYPTO_ALGAPI + select CRYPTO_AES_CORE help This is a generic implementation of AES that attempts to eliminate data dependent latencies as much as possible without affecting diff --git a/crypto/Makefile b/crypto/Makefile index d41f0331b085..0979ca461ddb 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -96,6 +96,7 @@ obj-$(CONFIG_CRYPTO_TWOFISH) += twofish_generic.o obj-$(CONFIG_CRYPTO_TWOFISH_COMMON) += twofish_common.o obj-$(CONFIG_CRYPTO_SERPENT) += serpent_generic.o CFLAGS_serpent_generic.o := $(call cc-option,-fsched-pressure) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149 +obj-$(CONFIG_CRYPTO_AES_CORE) += aes_core.o obj-$(CONFIG_CRYPTO_AES) += aes_generic.o obj-$(CONFIG_CRYPTO_AES_TI) += aes_ti.o obj-$(CONFIG_CRYPTO_CAMELLIA) += camellia_generic.o diff --git a/crypto/aes_core.c b/crypto/aes_core.c new file mode 100644 index 000000000000..d3c8b5eaaf42 --- /dev/null +++ b/crypto/aes_core.c @@ -0,0 +1,333 @@ +/* + * Shared AES primitives for accelerated and generic implementations + * + * Copyright (C) 2017 Linaro Ltd + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include +#include +#include +#include + +static const u8 __cacheline_aligned aes_sbox[] = { + 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, + 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, + 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, + 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0, + 0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, + 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15, + 0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, + 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75, + 0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, + 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84, + 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, + 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf, + 0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85, + 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8, + 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, + 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2, + 0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, + 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73, + 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, + 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb, + 0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, + 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79, + 0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, + 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08, + 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6, + 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a, + 0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, + 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e, + 0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94, + 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf, + 0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, + 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16, +}; + +static const u8 __cacheline_aligned aes_inv_sbox[] = { + 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, + 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb, + 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87, + 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb, + 0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d, + 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e, + 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2, + 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25, + 0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16, + 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92, + 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda, + 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84, + 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, + 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06, + 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, + 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b, + 0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea, + 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73, + 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, + 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e, + 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, + 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b, + 0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, + 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4, + 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, + 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f, + 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, + 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef, + 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, + 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61, + 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, + 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d, +}; + +static u32 mul_by_x(u32 w) +{ + u32 x = w & 0x7f7f7f7f; + u32 y = w & 0x80808080; + + /* multiply by polynomial 'x' (0b10) in GF(2^8) */ + return (x << 1) ^ (y >> 7) * 0x1b; +} + 
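/*
 * Annotation, not part of the patch: mul_by_x() doubles all four bytes of a
 * 32-bit word in parallel in GF(2^8) modulo the AES polynomial
 * x^8 + x^4 + x^3 + x + 1 (0x11b). Bytes below 0x80 are just shifted left;
 * bytes with the top bit set are additionally reduced by XORing 0x1b, and the
 * branch-free (y >> 7) * 0x1b term applies that reduction per byte. Worked
 * example for w = 0x00800357:
 *
 *   x = w & 0x7f7f7f7f = 0x00000357,  x << 1          = 0x000006ae
 *   y = w & 0x80808080 = 0x00800000,  (y >> 7) * 0x1b = 0x001b0000
 *   result = 0x001b06ae, i.e. per byte: 0x57 -> 0xae, 0x03 -> 0x06, 0x80 -> 0x1b
 */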
+static u32 mul_by_x2(u32 w) +{ + u32 x = w & 0x3f3f3f3f; + u32 y = w & 0x80808080; + u32 z = w & 0x40404040; + + /* multiply by polynomial 'x^2' (0b100) in GF(2^8) */ + return (x << 2) ^ (y >> 7) * 0x36 ^ (z >> 6) * 0x1b; +} + +static u32 mix_columns(u32 x) +{ + /* + * Perform the following matrix multiplication in GF(2^8) + * + * | 0x2 0x3 0x1 0x1 | | x[0] | + * | 0x1 0x2 0x3 0x1 | | x[1] | + * | 0x1 0x1 0x2 0x3 | x | x[2] | + * | 0x3 0x1 0x1 0x2 | | x[3] | + */ + u32 y = mul_by_x(x) ^ ror32(x, 16); + + return y ^ ror32(x ^ y, 8); +} + +static u32 inv_mix_columns(u32 x) +{ + /* + * Perform the following matrix multiplication in GF(2^8) + * + * | 0xe 0xb 0xd 0x9 | | x[0] | + * | 0x9 0xe 0xb 0xd | | x[1] | + * | 0xd 0x9 0xe 0xb | x | x[2] | + * | 0xb 0xd 0x9 0xe | | x[3] | + * + * which can conveniently be reduced to + * + * | 0x2 0x3 0x1 0x1 | | 0x5 0x0 0x4 0x0 | | x[0] | + * | 0x1 0x2 0x3 0x1 | | 0x0 0x5 0x0 0x4 | | x[1] | + * | 0x1 0x1 0x2 0x3 | x | 0x4 0x0 0x5 0x0 | x | x[2] | + * | 0x3 0x1 0x1 0x2 | | 0x0 0x4 0x0 0x5 | | x[3] | + */ + u32 y = mul_by_x2(x); + + return mix_columns(x ^ y ^ ror32(y, 16)); +} + +static __always_inline u32 subshift(u32 in[], int pos) +{ + return (aes_sbox[in[pos] & 0xff]) ^ + (aes_sbox[(in[(pos + 1) % 4] >> 8) & 0xff] << 8) ^ + (aes_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ + (aes_sbox[(in[(pos + 3) % 4] >> 24) & 0xff] << 24); +} + +static __always_inline u32 inv_subshift(u32 in[], int pos) +{ + return (aes_inv_sbox[in[pos] & 0xff]) ^ + (aes_inv_sbox[(in[(pos + 3) % 4] >> 8) & 0xff] << 8) ^ + (aes_inv_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ + (aes_inv_sbox[(in[(pos + 1) % 4] >> 24) & 0xff] << 24); +} + +static u32 subw(u32 in) +{ + return (aes_sbox[in & 0xff]) ^ + (aes_sbox[(in >> 8) & 0xff] << 8) ^ + (aes_sbox[(in >> 16) & 0xff] << 16) ^ + (aes_sbox[(in >> 24) & 0xff] << 24); +} + +int crypto_aes_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key, + unsigned int key_len) +{ + u32 kwords = key_len / sizeof(u32); + u32 rc, i, j; + + if (key_len != AES_KEYSIZE_128 && + key_len != AES_KEYSIZE_192 && + key_len != AES_KEYSIZE_256) + return -EINVAL; + + ctx->key_length = key_len; + + for (i = 0; i < kwords; i++) + ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32)); + + for (i = 0, rc = 1; i < 10; i++, rc = mul_by_x(rc)) { + u32 *rki = ctx->key_enc + (i * kwords); + u32 *rko = rki + kwords; + + rko[0] = ror32(subw(rki[kwords - 1]), 8) ^ rc ^ rki[0]; + rko[1] = rko[0] ^ rki[1]; + rko[2] = rko[1] ^ rki[2]; + rko[3] = rko[2] ^ rki[3]; + + if (key_len == AES_KEYSIZE_192) { + if (i >= 7) + break; + rko[4] = rko[3] ^ rki[4]; + rko[5] = rko[4] ^ rki[5]; + } else if (key_len == AES_KEYSIZE_256) { + if (i >= 6) + break; + rko[4] = subw(rko[3]) ^ rki[4]; + rko[5] = rko[4] ^ rki[5]; + rko[6] = rko[5] ^ rki[6]; + rko[7] = rko[6] ^ rki[7]; + } + } + + /* + * Generate the decryption keys for the Equivalent Inverse Cipher. + * This involves reversing the order of the round keys, and applying + * the Inverse Mix Columns transformation to all but the first and + * the last one. 
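 * (Annotation, not part of the patch: the index arithmetic below works for
 * all three key sizes because the expanded schedule holds 4 * (rounds + 1)
 * 32-bit words with rounds = key_len / 4 + 6, so the last round key starts
 * at word 4 * rounds = key_len + 24: word 40 for AES-128, 48 for AES-192,
 * 56 for AES-256. Hence key_dec[0..3] copy key_enc[key_len + 24 .. key_len + 27],
 * and the loop walks back from j = key_len + 20 in steps of four, applying
 * inv_mix_columns() to every round key except the outermost two.)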
+ */ + ctx->key_dec[0] = ctx->key_enc[key_len + 24]; + ctx->key_dec[1] = ctx->key_enc[key_len + 25]; + ctx->key_dec[2] = ctx->key_enc[key_len + 26]; + ctx->key_dec[3] = ctx->key_enc[key_len + 27]; + + for (i = 4, j = key_len + 20; j > 0; i += 4, j -= 4) { + ctx->key_dec[i] = inv_mix_columns(ctx->key_enc[j]); + ctx->key_dec[i + 1] = inv_mix_columns(ctx->key_enc[j + 1]); + ctx->key_dec[i + 2] = inv_mix_columns(ctx->key_enc[j + 2]); + ctx->key_dec[i + 3] = inv_mix_columns(ctx->key_enc[j + 3]); + } + + ctx->key_dec[i] = ctx->key_enc[0]; + ctx->key_dec[i + 1] = ctx->key_enc[1]; + ctx->key_dec[i + 2] = ctx->key_enc[2]; + ctx->key_dec[i + 3] = ctx->key_enc[3]; + + return 0; +} +EXPORT_SYMBOL_GPL(crypto_aes_expand_key); + +/** + * crypto_aes_set_key - Set the AES key. + * @tfm: The %crypto_tfm that is used in the context. + * @in_key: The input key. + * @key_len: The size of the key. + * + * Returns 0 on success, on failure the %CRYPTO_TFM_RES_BAD_KEY_LEN flag in tfm + * is set. The function uses crypto_aes_expand_key() to expand the key. + * &crypto_aes_ctx _must_ be the private data embedded in @tfm which is + * retrieved with crypto_tfm_ctx(). + */ +int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key, + unsigned int key_len) +{ + struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); + + if (crypto_aes_expand_key(ctx, in_key, key_len)) { + tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; + return -EINVAL; + } + return 0; +} +EXPORT_SYMBOL_GPL(crypto_aes_set_key); + +void crypto_aes_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in) +{ + const u32 *rkp = ctx->key_enc + 4; + int rounds = 6 + ctx->key_length / 4; + u32 st0[4], st1[4]; + int round; + + st0[0] = ctx->key_enc[0] ^ get_unaligned_le32(in); + st0[1] = ctx->key_enc[1] ^ get_unaligned_le32(in + 4); + st0[2] = ctx->key_enc[2] ^ get_unaligned_le32(in + 8); + st0[3] = ctx->key_enc[3] ^ get_unaligned_le32(in + 12); + + for (round = 0;; round += 2, rkp += 8) { + st1[0] = mix_columns(subshift(st0, 0)) ^ rkp[0]; + st1[1] = mix_columns(subshift(st0, 1)) ^ rkp[1]; + st1[2] = mix_columns(subshift(st0, 2)) ^ rkp[2]; + st1[3] = mix_columns(subshift(st0, 3)) ^ rkp[3]; + + if (round == rounds - 2) + break; + + st0[0] = mix_columns(subshift(st1, 0)) ^ rkp[4]; + st0[1] = mix_columns(subshift(st1, 1)) ^ rkp[5]; + st0[2] = mix_columns(subshift(st1, 2)) ^ rkp[6]; + st0[3] = mix_columns(subshift(st1, 3)) ^ rkp[7]; + } + + put_unaligned_le32(subshift(st1, 0) ^ rkp[4], out); + put_unaligned_le32(subshift(st1, 1) ^ rkp[5], out + 4); + put_unaligned_le32(subshift(st1, 2) ^ rkp[6], out + 8); + put_unaligned_le32(subshift(st1, 3) ^ rkp[7], out + 12); +} +EXPORT_SYMBOL_GPL(crypto_aes_encrypt); + +void crypto_aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in) +{ + const u32 *rkp = ctx->key_dec + 4; + int rounds = 6 + ctx->key_length / 4; + u32 st0[4], st1[4]; + int round; + + st0[0] = ctx->key_dec[0] ^ get_unaligned_le32(in); + st0[1] = ctx->key_dec[1] ^ get_unaligned_le32(in + 4); + st0[2] = ctx->key_dec[2] ^ get_unaligned_le32(in + 8); + st0[3] = ctx->key_dec[3] ^ get_unaligned_le32(in + 12); + + for (round = 0;; round += 2, rkp += 8) { + st1[0] = inv_mix_columns(inv_subshift(st0, 0)) ^ rkp[0]; + st1[1] = inv_mix_columns(inv_subshift(st0, 1)) ^ rkp[1]; + st1[2] = inv_mix_columns(inv_subshift(st0, 2)) ^ rkp[2]; + st1[3] = inv_mix_columns(inv_subshift(st0, 3)) ^ rkp[3]; + + if (round == rounds - 2) + break; + + st0[0] = inv_mix_columns(inv_subshift(st1, 0)) ^ rkp[4]; + st0[1] = inv_mix_columns(inv_subshift(st1, 1)) ^ rkp[5]; + 
st0[2] = inv_mix_columns(inv_subshift(st1, 2)) ^ rkp[6]; + st0[3] = inv_mix_columns(inv_subshift(st1, 3)) ^ rkp[7]; + } + + put_unaligned_le32(inv_subshift(st1, 0) ^ rkp[4], out); + put_unaligned_le32(inv_subshift(st1, 1) ^ rkp[5], out + 4); + put_unaligned_le32(inv_subshift(st1, 2) ^ rkp[6], out + 8); + put_unaligned_le32(inv_subshift(st1, 3) ^ rkp[7], out + 12); +} +EXPORT_SYMBOL_GPL(crypto_aes_decrypt); + +extern volatile const u8 crypto_aes_sbox[256] __alias(aes_sbox); +EXPORT_SYMBOL_GPL(crypto_aes_sbox); + +extern volatile const u8 crypto_aes_inv_sbox[256] __alias(aes_inv_sbox); +EXPORT_SYMBOL_GPL(crypto_aes_inv_sbox); + +MODULE_DESCRIPTION("Shared AES core routines"); +MODULE_AUTHOR("Ard Biesheuvel "); +MODULE_LICENSE("GPL v2"); diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c index ca554d57d01e..c0a7cf9ab574 100644 --- a/crypto/aes_generic.c +++ b/crypto/aes_generic.c @@ -61,8 +61,6 @@ static inline u8 byte(const u32 x, const unsigned n) return x >> (n << 3); } -static const u32 rco_tab[10] = { 1, 2, 4, 8, 16, 32, 64, 128, 27, 54 }; - __visible const u32 crypto_ft_tab[4][256] = { { 0xa56363c6, 0x847c7cf8, 0x997777ee, 0x8d7b7bf6, @@ -1124,182 +1122,6 @@ EXPORT_SYMBOL_GPL(crypto_fl_tab); EXPORT_SYMBOL_GPL(crypto_it_tab); EXPORT_SYMBOL_GPL(crypto_il_tab); -/* initialise the key schedule from the user supplied key */ - -#define star_x(x) (((x) & 0x7f7f7f7f) << 1) ^ ((((x) & 0x80808080) >> 7) * 0x1b) - -#define imix_col(y, x) do { \ - u = star_x(x); \ - v = star_x(u); \ - w = star_x(v); \ - t = w ^ (x); \ - (y) = u ^ v ^ w; \ - (y) ^= ror32(u ^ t, 8) ^ \ - ror32(v ^ t, 16) ^ \ - ror32(t, 24); \ -} while (0) - -#define ls_box(x) \ - crypto_fl_tab[0][byte(x, 0)] ^ \ - crypto_fl_tab[1][byte(x, 1)] ^ \ - crypto_fl_tab[2][byte(x, 2)] ^ \ - crypto_fl_tab[3][byte(x, 3)] - -#define loop4(i) do { \ - t = ror32(t, 8); \ - t = ls_box(t) ^ rco_tab[i]; \ - t ^= ctx->key_enc[4 * i]; \ - ctx->key_enc[4 * i + 4] = t; \ - t ^= ctx->key_enc[4 * i + 1]; \ - ctx->key_enc[4 * i + 5] = t; \ - t ^= ctx->key_enc[4 * i + 2]; \ - ctx->key_enc[4 * i + 6] = t; \ - t ^= ctx->key_enc[4 * i + 3]; \ - ctx->key_enc[4 * i + 7] = t; \ -} while (0) - -#define loop6(i) do { \ - t = ror32(t, 8); \ - t = ls_box(t) ^ rco_tab[i]; \ - t ^= ctx->key_enc[6 * i]; \ - ctx->key_enc[6 * i + 6] = t; \ - t ^= ctx->key_enc[6 * i + 1]; \ - ctx->key_enc[6 * i + 7] = t; \ - t ^= ctx->key_enc[6 * i + 2]; \ - ctx->key_enc[6 * i + 8] = t; \ - t ^= ctx->key_enc[6 * i + 3]; \ - ctx->key_enc[6 * i + 9] = t; \ - t ^= ctx->key_enc[6 * i + 4]; \ - ctx->key_enc[6 * i + 10] = t; \ - t ^= ctx->key_enc[6 * i + 5]; \ - ctx->key_enc[6 * i + 11] = t; \ -} while (0) - -#define loop8tophalf(i) do { \ - t = ror32(t, 8); \ - t = ls_box(t) ^ rco_tab[i]; \ - t ^= ctx->key_enc[8 * i]; \ - ctx->key_enc[8 * i + 8] = t; \ - t ^= ctx->key_enc[8 * i + 1]; \ - ctx->key_enc[8 * i + 9] = t; \ - t ^= ctx->key_enc[8 * i + 2]; \ - ctx->key_enc[8 * i + 10] = t; \ - t ^= ctx->key_enc[8 * i + 3]; \ - ctx->key_enc[8 * i + 11] = t; \ -} while (0) - -#define loop8(i) do { \ - loop8tophalf(i); \ - t = ctx->key_enc[8 * i + 4] ^ ls_box(t); \ - ctx->key_enc[8 * i + 12] = t; \ - t ^= ctx->key_enc[8 * i + 5]; \ - ctx->key_enc[8 * i + 13] = t; \ - t ^= ctx->key_enc[8 * i + 6]; \ - ctx->key_enc[8 * i + 14] = t; \ - t ^= ctx->key_enc[8 * i + 7]; \ - ctx->key_enc[8 * i + 15] = t; \ -} while (0) - -/** - * crypto_aes_expand_key - Expands the AES key as described in FIPS-197 - * @ctx: The location where the computed key will be stored. - * @in_key: The supplied key. 
- * @key_len: The length of the supplied key. - * - * Returns 0 on success. The function fails only if an invalid key size (or - * pointer) is supplied. - * The expanded key size is 240 bytes (max of 14 rounds with a unique 16 bytes - * key schedule plus a 16 bytes key which is used before the first round). - * The decryption key is prepared for the "Equivalent Inverse Cipher" as - * described in FIPS-197. The first slot (16 bytes) of each key (enc or dec) is - * for the initial combination, the second slot for the first round and so on. - */ -int crypto_aes_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key, - unsigned int key_len) -{ - u32 i, t, u, v, w, j; - - if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 && - key_len != AES_KEYSIZE_256) - return -EINVAL; - - ctx->key_length = key_len; - - ctx->key_enc[0] = get_unaligned_le32(in_key); - ctx->key_enc[1] = get_unaligned_le32(in_key + 4); - ctx->key_enc[2] = get_unaligned_le32(in_key + 8); - ctx->key_enc[3] = get_unaligned_le32(in_key + 12); - - ctx->key_dec[key_len + 24] = ctx->key_enc[0]; - ctx->key_dec[key_len + 25] = ctx->key_enc[1]; - ctx->key_dec[key_len + 26] = ctx->key_enc[2]; - ctx->key_dec[key_len + 27] = ctx->key_enc[3]; - - switch (key_len) { - case AES_KEYSIZE_128: - t = ctx->key_enc[3]; - for (i = 0; i < 10; ++i) - loop4(i); - break; - - case AES_KEYSIZE_192: - ctx->key_enc[4] = get_unaligned_le32(in_key + 16); - t = ctx->key_enc[5] = get_unaligned_le32(in_key + 20); - for (i = 0; i < 8; ++i) - loop6(i); - break; - - case AES_KEYSIZE_256: - ctx->key_enc[4] = get_unaligned_le32(in_key + 16); - ctx->key_enc[5] = get_unaligned_le32(in_key + 20); - ctx->key_enc[6] = get_unaligned_le32(in_key + 24); - t = ctx->key_enc[7] = get_unaligned_le32(in_key + 28); - for (i = 0; i < 6; ++i) - loop8(i); - loop8tophalf(i); - break; - } - - ctx->key_dec[0] = ctx->key_enc[key_len + 24]; - ctx->key_dec[1] = ctx->key_enc[key_len + 25]; - ctx->key_dec[2] = ctx->key_enc[key_len + 26]; - ctx->key_dec[3] = ctx->key_enc[key_len + 27]; - - for (i = 4; i < key_len + 24; ++i) { - j = key_len + 24 - (i & ~3) + (i & 3); - imix_col(ctx->key_dec[j], ctx->key_enc[i]); - } - return 0; -} -EXPORT_SYMBOL_GPL(crypto_aes_expand_key); - -/** - * crypto_aes_set_key - Set the AES key. - * @tfm: The %crypto_tfm that is used in the context. - * @in_key: The input key. - * @key_len: The size of the key. - * - * Returns 0 on success, on failure the %CRYPTO_TFM_RES_BAD_KEY_LEN flag in tfm - * is set. The function uses crypto_aes_expand_key() to expand the key. - * &crypto_aes_ctx _must_ be the private data embedded in @tfm which is - * retrieved with crypto_tfm_ctx(). - */ -int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key, - unsigned int key_len) -{ - struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); - u32 *flags = &tfm->crt_flags; - int ret; - - ret = crypto_aes_expand_key(ctx, in_key, key_len); - if (!ret) - return 0; - - *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; - return -EINVAL; -} -EXPORT_SYMBOL_GPL(crypto_aes_set_key); - /* encrypt a block of text */ #define f_rn(bo, bi, n, k) do { \ diff --git a/crypto/aes_ti.c b/crypto/aes_ti.c index 03023b2290e8..81bfc4b8ff56 100644 --- a/crypto/aes_ti.c +++ b/crypto/aes_ti.c @@ -13,225 +13,8 @@ #include #include -/* - * Emit the sbox as volatile const to prevent the compiler from doing - * constant folding on sbox references involving fixed indexes. 
- */ -static volatile const u8 __cacheline_aligned __aesti_sbox[] = { - 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, - 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, - 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, - 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0, - 0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, - 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15, - 0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, - 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75, - 0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, - 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84, - 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, - 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf, - 0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85, - 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8, - 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, - 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2, - 0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, - 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73, - 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, - 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb, - 0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, - 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79, - 0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, - 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08, - 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6, - 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a, - 0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, - 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e, - 0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94, - 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf, - 0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, - 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16, -}; - -static volatile const u8 __cacheline_aligned __aesti_inv_sbox[] = { - 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, - 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb, - 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87, - 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb, - 0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d, - 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e, - 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2, - 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25, - 0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16, - 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92, - 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda, - 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84, - 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, - 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06, - 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, - 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b, - 0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea, - 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73, - 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, - 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e, - 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, - 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b, - 0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, - 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4, - 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, - 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f, - 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, - 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef, - 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, - 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61, - 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, - 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d, -}; - -static u32 mul_by_x(u32 w) -{ - u32 x = w & 0x7f7f7f7f; - u32 y = w & 0x80808080; - - /* multiply by polynomial 'x' (0b10) in GF(2^8) */ - return (x << 1) ^ (y >> 7) * 0x1b; -} - -static u32 
mul_by_x2(u32 w) -{ - u32 x = w & 0x3f3f3f3f; - u32 y = w & 0x80808080; - u32 z = w & 0x40404040; - - /* multiply by polynomial 'x^2' (0b100) in GF(2^8) */ - return (x << 2) ^ (y >> 7) * 0x36 ^ (z >> 6) * 0x1b; -} - -static u32 mix_columns(u32 x) -{ - /* - * Perform the following matrix multiplication in GF(2^8) - * - * | 0x2 0x3 0x1 0x1 | | x[0] | - * | 0x1 0x2 0x3 0x1 | | x[1] | - * | 0x1 0x1 0x2 0x3 | x | x[2] | - * | 0x3 0x1 0x1 0x2 | | x[3] | - */ - u32 y = mul_by_x(x) ^ ror32(x, 16); - - return y ^ ror32(x ^ y, 8); -} - -static u32 inv_mix_columns(u32 x) -{ - /* - * Perform the following matrix multiplication in GF(2^8) - * - * | 0xe 0xb 0xd 0x9 | | x[0] | - * | 0x9 0xe 0xb 0xd | | x[1] | - * | 0xd 0x9 0xe 0xb | x | x[2] | - * | 0xb 0xd 0x9 0xe | | x[3] | - * - * which can conveniently be reduced to - * - * | 0x2 0x3 0x1 0x1 | | 0x5 0x0 0x4 0x0 | | x[0] | - * | 0x1 0x2 0x3 0x1 | | 0x0 0x5 0x0 0x4 | | x[1] | - * | 0x1 0x1 0x2 0x3 | x | 0x4 0x0 0x5 0x0 | x | x[2] | - * | 0x3 0x1 0x1 0x2 | | 0x0 0x4 0x0 0x5 | | x[3] | - */ - u32 y = mul_by_x2(x); - - return mix_columns(x ^ y ^ ror32(y, 16)); -} - -static __always_inline u32 subshift(u32 in[], int pos) -{ - return (__aesti_sbox[in[pos] & 0xff]) ^ - (__aesti_sbox[(in[(pos + 1) % 4] >> 8) & 0xff] << 8) ^ - (__aesti_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ - (__aesti_sbox[(in[(pos + 3) % 4] >> 24) & 0xff] << 24); -} - -static __always_inline u32 inv_subshift(u32 in[], int pos) -{ - return (__aesti_inv_sbox[in[pos] & 0xff]) ^ - (__aesti_inv_sbox[(in[(pos + 3) % 4] >> 8) & 0xff] << 8) ^ - (__aesti_inv_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ - (__aesti_inv_sbox[(in[(pos + 1) % 4] >> 24) & 0xff] << 24); -} - -static u32 subw(u32 in) -{ - return (__aesti_sbox[in & 0xff]) ^ - (__aesti_sbox[(in >> 8) & 0xff] << 8) ^ - (__aesti_sbox[(in >> 16) & 0xff] << 16) ^ - (__aesti_sbox[(in >> 24) & 0xff] << 24); -} - -static int aesti_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key, - unsigned int key_len) -{ - u32 kwords = key_len / sizeof(u32); - u32 rc, i, j; - - if (key_len != AES_KEYSIZE_128 && - key_len != AES_KEYSIZE_192 && - key_len != AES_KEYSIZE_256) - return -EINVAL; - - ctx->key_length = key_len; - - for (i = 0; i < kwords; i++) - ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32)); - - for (i = 0, rc = 1; i < 10; i++, rc = mul_by_x(rc)) { - u32 *rki = ctx->key_enc + (i * kwords); - u32 *rko = rki + kwords; - - rko[0] = ror32(subw(rki[kwords - 1]), 8) ^ rc ^ rki[0]; - rko[1] = rko[0] ^ rki[1]; - rko[2] = rko[1] ^ rki[2]; - rko[3] = rko[2] ^ rki[3]; - - if (key_len == 24) { - if (i >= 7) - break; - rko[4] = rko[3] ^ rki[4]; - rko[5] = rko[4] ^ rki[5]; - } else if (key_len == 32) { - if (i >= 6) - break; - rko[4] = subw(rko[3]) ^ rki[4]; - rko[5] = rko[4] ^ rki[5]; - rko[6] = rko[5] ^ rki[6]; - rko[7] = rko[6] ^ rki[7]; - } - } - - /* - * Generate the decryption keys for the Equivalent Inverse Cipher. - * This involves reversing the order of the round keys, and applying - * the Inverse Mix Columns transformation to all but the first and - * the last one. 
- */ - ctx->key_dec[0] = ctx->key_enc[key_len + 24]; - ctx->key_dec[1] = ctx->key_enc[key_len + 25]; - ctx->key_dec[2] = ctx->key_enc[key_len + 26]; - ctx->key_dec[3] = ctx->key_enc[key_len + 27]; - - for (i = 4, j = key_len + 20; j > 0; i += 4, j -= 4) { - ctx->key_dec[i] = inv_mix_columns(ctx->key_enc[j]); - ctx->key_dec[i + 1] = inv_mix_columns(ctx->key_enc[j + 1]); - ctx->key_dec[i + 2] = inv_mix_columns(ctx->key_enc[j + 2]); - ctx->key_dec[i + 3] = inv_mix_columns(ctx->key_enc[j + 3]); - } - - ctx->key_dec[i] = ctx->key_enc[0]; - ctx->key_dec[i + 1] = ctx->key_enc[1]; - ctx->key_dec[i + 2] = ctx->key_enc[2]; - ctx->key_dec[i + 3] = ctx->key_enc[3]; - - return 0; -} +extern volatile const u8 crypto_aes_sbox[]; +extern volatile const u8 crypto_aes_inv_sbox[]; static int aesti_set_key(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len) @@ -239,7 +22,7 @@ static int aesti_set_key(struct crypto_tfm *tfm, const u8 *in_key, struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); int err; - err = aesti_expand_key(ctx, in_key, key_len); + err = crypto_aes_expand_key(ctx, in_key, key_len); if (err) return err; @@ -250,15 +33,15 @@ static int aesti_set_key(struct crypto_tfm *tfm, const u8 *in_key, * the key is used, which will pull the entire Sbox into the D-cache * before any data dependent Sbox lookups are performed. */ - ctx->key_enc[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[128]; - ctx->key_enc[1] ^= __aesti_sbox[32] ^ __aesti_sbox[160]; - ctx->key_enc[2] ^= __aesti_sbox[64] ^ __aesti_sbox[192]; - ctx->key_enc[3] ^= __aesti_sbox[96] ^ __aesti_sbox[224]; + ctx->key_enc[0] ^= crypto_aes_sbox[ 0] ^ crypto_aes_sbox[128]; + ctx->key_enc[1] ^= crypto_aes_sbox[32] ^ crypto_aes_sbox[160]; + ctx->key_enc[2] ^= crypto_aes_sbox[64] ^ crypto_aes_sbox[192]; + ctx->key_enc[3] ^= crypto_aes_sbox[96] ^ crypto_aes_sbox[224]; - ctx->key_dec[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[128]; - ctx->key_dec[1] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[160]; - ctx->key_dec[2] ^= __aesti_inv_sbox[64] ^ __aesti_inv_sbox[192]; - ctx->key_dec[3] ^= __aesti_inv_sbox[96] ^ __aesti_inv_sbox[224]; + ctx->key_dec[0] ^= crypto_aes_inv_sbox[ 0] ^ crypto_aes_inv_sbox[128]; + ctx->key_dec[1] ^= crypto_aes_inv_sbox[32] ^ crypto_aes_inv_sbox[160]; + ctx->key_dec[2] ^= crypto_aes_inv_sbox[64] ^ crypto_aes_inv_sbox[192]; + ctx->key_dec[3] ^= crypto_aes_inv_sbox[96] ^ crypto_aes_inv_sbox[224]; return 0; } @@ -266,79 +49,31 @@ static int aesti_set_key(struct crypto_tfm *tfm, const u8 *in_key, static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); - const u32 *rkp = ctx->key_enc + 4; - int rounds = 6 + ctx->key_length / 4; - u32 st0[4], st1[4]; - int round; - - st0[0] = ctx->key_enc[0] ^ get_unaligned_le32(in); - st0[1] = ctx->key_enc[1] ^ get_unaligned_le32(in + 4); - st0[2] = ctx->key_enc[2] ^ get_unaligned_le32(in + 8); - st0[3] = ctx->key_enc[3] ^ get_unaligned_le32(in + 12); - - st0[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[128]; - st0[1] ^= __aesti_sbox[32] ^ __aesti_sbox[160]; - st0[2] ^= __aesti_sbox[64] ^ __aesti_sbox[192]; - st0[3] ^= __aesti_sbox[96] ^ __aesti_sbox[224]; - - for (round = 0;; round += 2, rkp += 8) { - st1[0] = mix_columns(subshift(st0, 0)) ^ rkp[0]; - st1[1] = mix_columns(subshift(st0, 1)) ^ rkp[1]; - st1[2] = mix_columns(subshift(st0, 2)) ^ rkp[2]; - st1[3] = mix_columns(subshift(st0, 3)) ^ rkp[3]; + u8 src[AES_BLOCK_SIZE]; - if (round == rounds - 2) - break; + memcpy(src, in, AES_BLOCK_SIZE); - st0[0] = mix_columns(subshift(st1, 
0)) ^ rkp[4]; - st0[1] = mix_columns(subshift(st1, 1)) ^ rkp[5]; - st0[2] = mix_columns(subshift(st1, 2)) ^ rkp[6]; - st0[3] = mix_columns(subshift(st1, 3)) ^ rkp[7]; - } + src[ 0] ^= crypto_aes_sbox[ 0] ^ crypto_aes_sbox[128]; + src[ 4] ^= crypto_aes_sbox[32] ^ crypto_aes_sbox[160]; + src[ 8] ^= crypto_aes_sbox[64] ^ crypto_aes_sbox[192]; + src[12] ^= crypto_aes_sbox[96] ^ crypto_aes_sbox[224]; - put_unaligned_le32(subshift(st1, 0) ^ rkp[4], out); - put_unaligned_le32(subshift(st1, 1) ^ rkp[5], out + 4); - put_unaligned_le32(subshift(st1, 2) ^ rkp[6], out + 8); - put_unaligned_le32(subshift(st1, 3) ^ rkp[7], out + 12); + crypto_aes_encrypt(ctx, out, src); } static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); - const u32 *rkp = ctx->key_dec + 4; - int rounds = 6 + ctx->key_length / 4; - u32 st0[4], st1[4]; - int round; - - st0[0] = ctx->key_dec[0] ^ get_unaligned_le32(in); - st0[1] = ctx->key_dec[1] ^ get_unaligned_le32(in + 4); - st0[2] = ctx->key_dec[2] ^ get_unaligned_le32(in + 8); - st0[3] = ctx->key_dec[3] ^ get_unaligned_le32(in + 12); - - st0[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[128]; - st0[1] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[160]; - st0[2] ^= __aesti_inv_sbox[64] ^ __aesti_inv_sbox[192]; - st0[3] ^= __aesti_inv_sbox[96] ^ __aesti_inv_sbox[224]; - - for (round = 0;; round += 2, rkp += 8) { - st1[0] = inv_mix_columns(inv_subshift(st0, 0)) ^ rkp[0]; - st1[1] = inv_mix_columns(inv_subshift(st0, 1)) ^ rkp[1]; - st1[2] = inv_mix_columns(inv_subshift(st0, 2)) ^ rkp[2]; - st1[3] = inv_mix_columns(inv_subshift(st0, 3)) ^ rkp[3]; + u8 src[AES_BLOCK_SIZE]; - if (round == rounds - 2) - break; + memcpy(src, in, AES_BLOCK_SIZE); - st0[0] = inv_mix_columns(inv_subshift(st1, 0)) ^ rkp[4]; - st0[1] = inv_mix_columns(inv_subshift(st1, 1)) ^ rkp[5]; - st0[2] = inv_mix_columns(inv_subshift(st1, 2)) ^ rkp[6]; - st0[3] = inv_mix_columns(inv_subshift(st1, 3)) ^ rkp[7]; - } + src[ 0] ^= crypto_aes_inv_sbox[ 0] ^ crypto_aes_inv_sbox[128]; + src[ 4] ^= crypto_aes_inv_sbox[32] ^ crypto_aes_inv_sbox[160]; + src[ 8] ^= crypto_aes_inv_sbox[64] ^ crypto_aes_inv_sbox[192]; + src[12] ^= crypto_aes_inv_sbox[96] ^ crypto_aes_inv_sbox[224]; - put_unaligned_le32(inv_subshift(st1, 0) ^ rkp[4], out); - put_unaligned_le32(inv_subshift(st1, 1) ^ rkp[5], out + 4); - put_unaligned_le32(inv_subshift(st1, 2) ^ rkp[6], out + 8); - put_unaligned_le32(inv_subshift(st1, 3) ^ rkp[7], out + 12); + crypto_aes_decrypt(ctx, out, src); } static struct crypto_alg aes_alg = { diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 7a737c1c669e..704712d226e4 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -26,7 +26,7 @@ config CRYPTO_DEV_PADLOCK_AES tristate "PadLock driver for AES algorithm" depends on CRYPTO_DEV_PADLOCK select CRYPTO_BLKCIPHER - select CRYPTO_AES + select CRYPTO_AES_CORE help Use VIA PadLock for AES algorithm. 
@@ -189,7 +189,7 @@ config CRYPTO_CRC32_S390 config CRYPTO_DEV_MV_CESA tristate "Marvell's Cryptographic Engine" depends on PLAT_ORION - select CRYPTO_AES + select CRYPTO_AES_CORE select CRYPTO_BLKCIPHER select CRYPTO_HASH select SRAM @@ -203,7 +203,7 @@ config CRYPTO_DEV_MV_CESA config CRYPTO_DEV_MARVELL_CESA tristate "New Marvell's Cryptographic Engine driver" depends on PLAT_ORION || ARCH_MVEBU - select CRYPTO_AES + select CRYPTO_AES_CORE select CRYPTO_DES select CRYPTO_BLKCIPHER select CRYPTO_HASH @@ -655,7 +655,7 @@ config CRYPTO_DEV_SAFEXCEL tristate "Inside Secure's SafeXcel cryptographic engine driver" depends on HAS_DMA && OF depends on (ARM64 && ARCH_MVEBU) || (COMPILE_TEST && 64BIT) - select CRYPTO_AES + select CRYPTO_AES_CORE select CRYPTO_BLKCIPHER select CRYPTO_HASH select CRYPTO_HMAC diff --git a/include/crypto/aes.h b/include/crypto/aes.h index 7524ba3b6f3c..6374f91f5a0a 100644 --- a/include/crypto/aes.h +++ b/include/crypto/aes.h @@ -36,4 +36,10 @@ int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len); int crypto_aes_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int key_len); + +void crypto_aes_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, + const u8 *in); +void crypto_aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, + const u8 *in); + #endif From patchwork Tue Jun 20 09:28:56 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 105936 Delivered-To: patch@linaro.org Received: by 10.140.91.2 with SMTP id y2csp1274244qgd; Tue, 20 Jun 2017 02:36:11 -0700 (PDT) X-Received: by 10.99.36.129 with SMTP id k123mr30492131pgk.230.1497951371013; Tue, 20 Jun 2017 02:36:11 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1497951371; cv=none; d=google.com; s=arc-20160816; b=0ROAi+ehF6eG73J394uazffAjngEeJIpgH1C+n1FSx/C5D+XiSxfq1Slo7bElu996u 74I4xDGlr8uQeNh70lAG9++rwGCX6EAYGgbvEM+8f1j3OyT3UT1doMxdi23E3gRR9pvy YbajUXqyQi+wBgaVXgqlbqC1sWLAk066PAEG6V8qNclSOKSFt6wEIFaOqGfQb95n1sz7 x4XeyhsNhTuBqlzb0ZdtJ7/XVXiUOEeUQydDhFkzhiWbrM7URrzVhyjsgYsfI54vVY9h C6WvpIO1MIwB+3COxPeFdA3/GuPOQA0WijfPLTgF/5RjutadmKmIzauCU0qbhD9s0iOY LgPA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:arc-authentication-results; bh=XlqOt1u/aDAvFOBRk9AbXKDyhDm97PZd8P8XndtMgqU=; b=zWrQ5+s13lD0XoZjpR9zN3Z4csi4wQhuZdcKDTM0wIOQS0f8hb8pKyiOaBpdb2kz7a CWLtbU43OfpZimeJObkM4tBX4TgDuSF/TJQWolSJ9f/JH+CT2+UeHf2W17ZytqYZzyC7 5aNYzYXx+iYENxaCpDqC0XtvGvx0UBPKcFBmTT0+csGkreud0fDWhyleUb6oa4LqiMUo BzJyt0HAiXrjJgI4uYv0QjcC4F31mH5iUW6lYvYgmlYKt10EQ1OejCZjNcNGiXBiNtX0 +bwiS0djUfyexuTWTNMgrSDm/oeSUEojxtvVf3fA4nlT6T6zJtLP5drKeNmG0qZzkqgs skxA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.b=J7HPbXIQ; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id p1si10461961pge.322.2017.06.20.02.36.10; Tue, 20 Jun 2017 02:36:11 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.b=J7HPbXIQ; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1750979AbdFTJan (ORCPT + 1 other); Tue, 20 Jun 2017 05:30:43 -0400 Received: from mail-wm0-f53.google.com ([74.125.82.53]:38744 "EHLO mail-wm0-f53.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752017AbdFTJ3I (ORCPT ); Tue, 20 Jun 2017 05:29:08 -0400 Received: by mail-wm0-f53.google.com with SMTP id u195so14357473wmd.1 for ; Tue, 20 Jun 2017 02:29:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=XlqOt1u/aDAvFOBRk9AbXKDyhDm97PZd8P8XndtMgqU=; b=J7HPbXIQiwmteZ6I1KKw1VkfG5AGkrLC+sw81NFPLFnIOO0KUzKLvNh+HtRfA6GrOn pLSpz9ZNefgF5NdiL0+hhdddjk78ARMRCCE05Ige5wJHR8qB1ztpafJY6V7PQdduol7w wQPUE+o0x/4IKMn2YbuQn5S/WKgRvxyx3XszE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=XlqOt1u/aDAvFOBRk9AbXKDyhDm97PZd8P8XndtMgqU=; b=MfbIkkkNBLD3FIJplG+JRdwUYkf67s3oIF4UZP+iSIJFxX2rXbZ8g4+Fs4HMEc31v2 Q/o8/9JWFY/qV90gRKx1WdOah9x7QpM4R0B6H85yUk68IhXUSPEcYNrXcN2J5N38nMB8 XEfl9xkKJ+uB/6XStIqIcqBayMVRMYfbgDsw3PJB/kf2hiwZXO2vslZFkiG4pVwJQzoh XmuT1EwLsn+rAVzHgbJve73tK/lFhUZVd8BhSk7ZMNHDjsiQo8TwXO5ZNFolj80e6NUZ V0Mp+rwnHoGvE41/j/c9O3cfulQGOezju47cC/eeaAh6ac2YuMpgLuLZtxdmRj1f/q4C wosw== X-Gm-Message-State: AKS2vOxLFebLRNmDIdK8O92Cm9EjsxuoGOR+VzfJq+b0nfSmecTOjrCV xJcpav/EfXUZS5XhAz4Ruw== X-Received: by 10.80.167.228 with SMTP id i91mr20237129edc.145.1497950946830; Tue, 20 Jun 2017 02:29:06 -0700 (PDT) Received: from localhost.localdomain (101-126-045-062.dynamic.caiway.nl. [62.45.126.101]) by smtp.gmail.com with ESMTPSA id a52sm6033452eda.44.2017.06.20.02.29.05 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 20 Jun 2017 02:29:05 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, nico@linaro.org, ebiggers3@gmail.com, Ard Biesheuvel Subject: [PATCH v3 3/7] crypto: x86/aes-ni - switch to generic fallback Date: Tue, 20 Jun 2017 11:28:56 +0200 Message-Id: <1497950940-24243-4-git-send-email-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1497950940-24243-1-git-send-email-ard.biesheuvel@linaro.org> References: <1497950940-24243-1-git-send-email-ard.biesheuvel@linaro.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The time invariant AES-NI implementation is SIMD based, and so it needs a fallback in case the code is called from a context where SIMD is not allowed. On x86, this is really only when executing in the context of an interrupt taken while in kernel mode, since SIMD is allowed in all other cases. 
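For context — the hunk below only shows the two changed calls — the resulting glue routine reads roughly as follows (reconstructed sketch; aes_ctx(), aesni_enc() and the kernel FPU helpers are pre-existing code in aesni-intel_glue.c):

	static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
	{
		struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm));

		if (!irq_fpu_usable()) {
			/* e.g. a hard IRQ taken in kernel mode: SIMD is off
			   limits, so use the fixed-time scalar core instead */
			crypto_aes_encrypt(ctx, dst, src);
		} else {
			kernel_fpu_begin();
			aesni_enc(ctx, dst, src);
			kernel_fpu_end();
		}
	}

The decryption path is identical with crypto_aes_decrypt() and aesni_dec().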
There is very little code in the kernel that actually performs AES in interrupt context, and the code that does (mac80211) only does so when running on 802.11 devices that have no support for AES in hardware, and those are rare these days. So switch to the new AES core code as a fallback. It is much smaller, as well as more resistant to cache timing attacks, and removing the dependency allows us to disable the time variant drivers altogether if desired. Signed-off-by: Ard Biesheuvel --- arch/x86/crypto/aesni-intel_glue.c | 4 ++-- crypto/Kconfig | 3 +-- 2 files changed, 3 insertions(+), 4 deletions(-) -- 2.7.4 diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 4a55cdcdc008..1734e6185800 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -334,7 +334,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm)); if (!irq_fpu_usable()) - crypto_aes_encrypt_x86(ctx, dst, src); + crypto_aes_encrypt(ctx, dst, src); else { kernel_fpu_begin(); aesni_enc(ctx, dst, src); @@ -347,7 +347,7 @@ static void aes_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm)); if (!irq_fpu_usable()) - crypto_aes_decrypt_x86(ctx, dst, src); + crypto_aes_decrypt(ctx, dst, src); else { kernel_fpu_begin(); aesni_dec(ctx, dst, src); diff --git a/crypto/Kconfig b/crypto/Kconfig index b4edea2aed22..1e6e021fda10 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -984,8 +984,7 @@ config CRYPTO_AES_NI_INTEL tristate "AES cipher algorithms (AES-NI)" depends on X86 select CRYPTO_AEAD - select CRYPTO_AES_X86_64 if 64BIT - select CRYPTO_AES_586 if !64BIT + select CRYPTO_AES_CORE select CRYPTO_ALGAPI select CRYPTO_BLKCIPHER select CRYPTO_GLUE_HELPER_X86 if 64BIT From patchwork Tue Jun 20 09:28:57 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 105939 Delivered-To: patch@linaro.org Received: by 10.140.91.2 with SMTP id y2csp1274250qgd; Tue, 20 Jun 2017 02:36:11 -0700 (PDT) X-Received: by 10.99.168.67 with SMTP id i3mr30420352pgp.23.1497951371682; Tue, 20 Jun 2017 02:36:11 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1497951371; cv=none; d=google.com; s=arc-20160816; b=bvLrwJKgq6Mtz0jSUXA2EWxXLKuTaz44vUQGvU2c2pDCznBBRNjZKNT3WJIu6LroAu MnLgWzK+7AYXD4O/u9OqcRK4pIWiH81jC+Hj904Sy40aB0BoavsnO+iY+Ul7XW2B4zPf jYgFTYKjXbP7GxZkk+Hhr6Z/7ovdF4bE5YjiyRAdR2c3ohiXbOV/9Sbh+4z7sGx3Sg0Y OgVd/04MKVheh3W3h1LwHlpLs9gQyeclf4SMZOT0k4oQMr0y+MjBGyky8gS3I9UygRKy /ttQKTZk9GuQo5VyNJb9G1DRLkw5srZd+0dsfjUKIDx83aWyoOP4xTcaOvXECKct7lcj rROw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:arc-authentication-results; bh=D8YYxJ9+LM6VNm+1McT5aWm6usfXGvzR1Er/9JpJ1h8=; b=nxQgB9lnqwCXTRpjKsutpARssgEhUg2llHi0auZFnOVLYFqcp8LCx/BFYlAG+XaNrO hkCQOrEWXxJUdVbBj4uJzulUI4nCijIR0Py16uyXABTN2nhF9PDvtZMkGpbcej3V3gij mDB9/7KZ8n+ZBqsObPhfoJyyiOI2vuBYahkVbsqO5Adx48Oq8jsmEt1in3f3kEiWsE/e IluMihVW1cV8krqVG/lwImUS0A+MOCxCaGWZoihOuY7o5YcyACDCpdcqCyh1Wh3DFcSD 60r4L/pwX4TSntNeLA5//3GDFTrGYes2iNjJdBOSXvRs4Fz4owxnMiL9BjQo51WndRRg YT6A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.b=gAGUt0dF; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 
209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id p1si10461961pge.322.2017.06.20.02.36.11; Tue, 20 Jun 2017 02:36:11 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.b=gAGUt0dF; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1751136AbdFTJam (ORCPT + 1 other); Tue, 20 Jun 2017 05:30:42 -0400 Received: from mail-wm0-f45.google.com ([74.125.82.45]:36943 "EHLO mail-wm0-f45.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1750979AbdFTJ3J (ORCPT ); Tue, 20 Jun 2017 05:29:09 -0400 Received: by mail-wm0-f45.google.com with SMTP id d73so14416731wma.0 for ; Tue, 20 Jun 2017 02:29:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=D8YYxJ9+LM6VNm+1McT5aWm6usfXGvzR1Er/9JpJ1h8=; b=gAGUt0dF33UWWdSAnjLQxYYhGuMFfpMi0JUyCtZq1Pxvr5qnhBBj3riOmg5+T8Avhq Mjp4dtY4D4+bPb2osZdKDQ5XigcBqYxKvagPo60mNypNmcDFF6PMONTCnwZn0xegDwkO FoQAwvwtCGoyKtzSinkTJFYxlqR9vdcDNJxKc= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=D8YYxJ9+LM6VNm+1McT5aWm6usfXGvzR1Er/9JpJ1h8=; b=sk3lHFDBwJap6vgGJEe0/+McJ3nowvA/jfD5zJUwp94atUTra9GYg3ucuG3gv9k60S xAbXBNFRg3qz1N0OvsfxSzLnIL/Bcb4ITqqvpMDZ7lL/3hJfQxVBNG60EHjj6vyJ7euL UcILjcprYtKoST6cS8MRzw204RYpGAVnxfOvMyU8xPOUICraalufR+n4IL1XFHwVj9Hr qMdsityk0x/Ri/4Di1KflaTI+Wa1sKmNMJln4FdoY2xKGvsarMYc+XtD5nScwPVaRdXB SI44lo7IUTK0z9/l7ROXIz8MJmUUMyx3P9tFlK85kecgv75CxzPDj6zwsW9iSpAtecDT Mg2w== X-Gm-Message-State: AKS2vOyHiBhdzPNrNYei7Kt6ZsaH+yqko1QmDjtjP6kvCdNfGGSibzTn H3nCmTEwfp1obOJIjvvukA== X-Received: by 10.80.186.130 with SMTP id x2mr20464087ede.46.1497950947989; Tue, 20 Jun 2017 02:29:07 -0700 (PDT) Received: from localhost.localdomain (101-126-045-062.dynamic.caiway.nl. [62.45.126.101]) by smtp.gmail.com with ESMTPSA id a52sm6033452eda.44.2017.06.20.02.29.06 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Tue, 20 Jun 2017 02:29:07 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, nico@linaro.org, ebiggers3@gmail.com, Ard Biesheuvel Subject: [PATCH v3 4/7] crypto: arm64/aes-neon - reuse Sboxes from AES core module Date: Tue, 20 Jun 2017 11:28:57 +0200 Message-Id: <1497950940-24243-5-git-send-email-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1497950940-24243-1-git-send-email-ard.biesheuvel@linaro.org> References: <1497950940-24243-1-git-send-email-ard.biesheuvel@linaro.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The newly introduced AES core module exposes its Sboxes for the benefit of the fixed time AES driver. 
Since the arm64 NEON based implementation already depends on the same core module for its key expansion routines, let's use its Sboxes as well, and remove the local copy. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/aes-neon.S | 74 +------------------- 1 file changed, 3 insertions(+), 71 deletions(-) -- 2.7.4 diff --git a/arch/arm64/crypto/aes-neon.S b/arch/arm64/crypto/aes-neon.S index f1e3aa2732f9..2acb5f81dcdb 100644 --- a/arch/arm64/crypto/aes-neon.S +++ b/arch/arm64/crypto/aes-neon.S @@ -32,7 +32,7 @@ /* preload the entire Sbox */ .macro prepare, sbox, shiftrows, temp - adr \temp, \sbox + adr_l \temp, \sbox movi v12.16b, #0x1b ldr q13, \shiftrows ldr q14, .Lror32by8 @@ -44,7 +44,7 @@ /* do preload for encryption */ .macro enc_prepare, ignore0, ignore1, temp - prepare .LForward_Sbox, .LForward_ShiftRows, \temp + prepare crypto_aes_sbox, .LForward_ShiftRows, \temp .endm .macro enc_switch_key, ignore0, ignore1, temp @@ -53,7 +53,7 @@ /* do preload for decryption */ .macro dec_prepare, ignore0, ignore1, temp - prepare .LReverse_Sbox, .LReverse_ShiftRows, \temp + prepare crypto_aes_inv_sbox, .LReverse_ShiftRows, \temp .endm /* apply SubBytes transformation using the the preloaded Sbox */ @@ -274,74 +274,6 @@ .text .align 6 -.LForward_Sbox: - .byte 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5 - .byte 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76 - .byte 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0 - .byte 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0 - .byte 0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc - .byte 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15 - .byte 0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a - .byte 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75 - .byte 0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0 - .byte 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84 - .byte 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b - .byte 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf - .byte 0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85 - .byte 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8 - .byte 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5 - .byte 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2 - .byte 0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17 - .byte 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73 - .byte 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88 - .byte 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb - .byte 0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c - .byte 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79 - .byte 0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9 - .byte 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08 - .byte 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6 - .byte 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a - .byte 0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e - .byte 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e - .byte 0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94 - .byte 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf - .byte 0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68 - .byte 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16 - -.LReverse_Sbox: - .byte 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38 - .byte 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb - .byte 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87 - .byte 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb - .byte 0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d - .byte 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e - .byte 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2 - .byte 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25 - .byte 0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16 - .byte 0xd4, 
0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92 - .byte 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda - .byte 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84 - .byte 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a - .byte 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06 - .byte 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02 - .byte 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b - .byte 0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea - .byte 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73 - .byte 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85 - .byte 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e - .byte 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89 - .byte 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b - .byte 0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20 - .byte 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4 - .byte 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31 - .byte 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f - .byte 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d - .byte 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef - .byte 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0 - .byte 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61 - .byte 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26 - .byte 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d - .LForward_ShiftRows: .octa 0x0b06010c07020d08030e09040f0a0500
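The effect of the hunks above is that the NEON code now loads the S-boxes exported by the AES core module (crypto_aes_sbox and crypto_aes_inv_sbox, per the prepare macros) instead of carrying private copies, and adr_l is used presumably because the shared symbols are no longer guaranteed to sit within the short reach of a plain adr. A rough C-side analogy follows; it is not taken from the patch, the wrapper function is invented, and the array declarations are assumed to match the core module's header.

/*
 * Illustrative only: a consumer referencing the shared S-boxes exported
 * by the AES core module instead of keeping its own copy.
 */
#include <linux/types.h>

extern const u8 crypto_aes_sbox[256];		/* forward S-box */
extern const u8 crypto_aes_inv_sbox[256];	/* inverse S-box */

static u8 example_sub_byte(u8 in, bool inverse)
{
	return inverse ? crypto_aes_inv_sbox[in] : crypto_aes_sbox[in];
}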
From patchwork Tue Jun 20 09:28:58 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 105938
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, nico@linaro.org, ebiggers3@gmail.com, Ard Biesheuvel
Subject: [PATCH v3 5/7] crypto: aes - repurpose CRYPTO_AES and introduce CRYPTO_AES_GENERIC
Date: Tue, 20 Jun 2017 11:28:58 +0200
Message-Id: <1497950940-24243-6-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497950940-24243-1-git-send-email-ard.biesheuvel@linaro.org>

Repurpose the Kconfig symbol CRYPTO_AES to signify that a 'select' or 'depends on' relationship on it can be satisfied by any driver that exposes a generic "aes" cipher. The existing generic AES code is now controlled by a new Kconfig symbol CRYPTO_AES_GENERIC, and only dependencies on CRYPTO_AES that truly depend on its exported lookup tables are updated accordingly.
Signed-off-by: Ard Biesheuvel --- arch/arm/crypto/Kconfig | 2 +- arch/arm64/crypto/Kconfig | 2 +- crypto/Kconfig | 8 ++++++-- crypto/Makefile | 2 +- net/sunrpc/Kconfig | 3 ++- 5 files changed, 11 insertions(+), 6 deletions(-) -- 2.7.4 diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig index fd77aebcb7a9..3a6994ada2d1 100644 --- a/arch/arm/crypto/Kconfig +++ b/arch/arm/crypto/Kconfig @@ -64,7 +64,7 @@ config CRYPTO_SHA512_ARM config CRYPTO_AES_ARM tristate "Scalar AES cipher for ARM" select CRYPTO_ALGAPI - select CRYPTO_AES + select CRYPTO_AES_GENERIC help Use optimized AES assembler routines for ARM platforms. diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index db55e069c17b..7ffe88267943 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -43,7 +43,7 @@ config CRYPTO_CRC32_ARM64_CE config CRYPTO_AES_ARM64 tristate "AES core cipher using scalar instructions" - select CRYPTO_AES + select CRYPTO_AES_GENERIC config CRYPTO_AES_ARM64_CE tristate "AES core cipher using ARMv8 Crypto Extensions" diff --git a/crypto/Kconfig b/crypto/Kconfig index 1e6e021fda10..9ae3dade4b2b 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -898,6 +898,10 @@ config CRYPTO_AES_CORE tristate config CRYPTO_AES + tristate + select CRYPTO_AES_GENERIC + +config CRYPTO_AES_GENERIC tristate "AES cipher algorithms" select CRYPTO_ALGAPI select CRYPTO_AES_CORE @@ -940,7 +944,7 @@ config CRYPTO_AES_586 tristate "AES cipher algorithms (i586)" depends on (X86 || UML_X86) && !64BIT select CRYPTO_ALGAPI - select CRYPTO_AES + select CRYPTO_AES_GENERIC help AES cipher algorithms (FIPS-197). AES uses the Rijndael algorithm. @@ -962,7 +966,7 @@ config CRYPTO_AES_X86_64 tristate "AES cipher algorithms (x86_64)" depends on (X86 || UML_X86) && 64BIT select CRYPTO_ALGAPI - select CRYPTO_AES + select CRYPTO_AES_GENERIC help AES cipher algorithms (FIPS-197). AES uses the Rijndael algorithm. 
diff --git a/crypto/Makefile b/crypto/Makefile index 0979ca461ddb..73395307bcea 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -97,7 +97,7 @@ obj-$(CONFIG_CRYPTO_TWOFISH_COMMON) += twofish_common.o obj-$(CONFIG_CRYPTO_SERPENT) += serpent_generic.o CFLAGS_serpent_generic.o := $(call cc-option,-fsched-pressure) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149 obj-$(CONFIG_CRYPTO_AES_CORE) += aes_core.o -obj-$(CONFIG_CRYPTO_AES) += aes_generic.o +obj-$(CONFIG_CRYPTO_AES_GENERIC) += aes_generic.o obj-$(CONFIG_CRYPTO_AES_TI) += aes_ti.o obj-$(CONFIG_CRYPTO_CAMELLIA) += camellia_generic.o obj-$(CONFIG_CRYPTO_CAST_COMMON) += cast_common.o diff --git a/net/sunrpc/Kconfig b/net/sunrpc/Kconfig index ac09ca803296..58aa2ada40b3 100644 --- a/net/sunrpc/Kconfig +++ b/net/sunrpc/Kconfig @@ -19,7 +19,8 @@ config RPCSEC_GSS_KRB5 tristate "Secure RPC: Kerberos V mechanism" depends on SUNRPC && CRYPTO depends on CRYPTO_MD5 && CRYPTO_DES && CRYPTO_CBC && CRYPTO_CTS - depends on CRYPTO_ECB && CRYPTO_HMAC && CRYPTO_SHA1 && CRYPTO_AES + depends on CRYPTO_ECB && CRYPTO_HMAC && CRYPTO_SHA1 + select CRYPTO_AES depends on CRYPTO_ARC4 default y select SUNRPC_GSS
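The split introduced here is between code that merely needs some provider of the "aes" cipher (such as RPCSEC_GSS_KRB5 above, which now selects CRYPTO_AES) and drivers that actually link against the lookup tables exported by aes_generic, which must now select CRYPTO_AES_GENERIC. A rough sketch of the latter kind of dependency follows; it is not taken from the patch, the consumer function is invented, and the table dimensions are assumed to match aes_generic's exports.

/*
 * Illustrative only: a link-time dependency on aes_generic's exported
 * round lookup tables. This is the kind of use that still requires
 * 'select CRYPTO_AES_GENERIC' rather than the looser CRYPTO_AES.
 */
#include <linux/types.h>

extern const u32 crypto_ft_tab[4][256];	/* forward round table */
extern const u32 crypto_it_tab[4][256];	/* inverse round table */

static u32 example_forward_round_lookup(u8 byte, unsigned int column)
{
	return crypto_ft_tab[column & 3][byte];
}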
From patchwork Tue Jun 20 09:28:59 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 105940
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, nico@linaro.org, ebiggers3@gmail.com, Ard Biesheuvel
Subject: [PATCH v3 6/7] crypto: aes - add meaningful help text to the various AES drivers
Date: Tue, 20 Jun 2017 11:28:59 +0200
Message-Id: <1497950940-24243-7-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497950940-24243-1-git-send-email-ard.biesheuvel@linaro.org>

Remove the duplicated boilerplate help text and add a bit of explanation about the nature of the various AES implementations that exist for various architectures. In particular, highlight the time variant nature of some implementations, and the fact that they can be omitted if required.
Signed-off-by: Ard Biesheuvel --- arch/arm/crypto/Kconfig | 16 ++- arch/arm64/crypto/Kconfig | 30 +++- crypto/Kconfig | 144 +++++++------------- 3 files changed, 92 insertions(+), 98 deletions(-) -- 2.7.4 diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig index 3a6994ada2d1..d8f3336bfc88 100644 --- a/arch/arm/crypto/Kconfig +++ b/arch/arm/crypto/Kconfig @@ -62,11 +62,23 @@ config CRYPTO_SHA512_ARM using optimized ARM assembler and NEON, when available. config CRYPTO_AES_ARM - tristate "Scalar AES cipher for ARM" + tristate "Table based AES cipher for 32-bit ARM" select CRYPTO_ALGAPI select CRYPTO_AES_GENERIC help - Use optimized AES assembler routines for ARM platforms. + Table based implementation in 32-bit ARM assembler of the FIPS-197 + Advanced Encryption Standard (AES) symmetric cipher algorithm. This + driver reuses the tables exposed by the generic AES driver. + + For CPUs that lack the special ARMv8-CE instructions, this is the + fastest implementation available of the core cipher, but it may be + susceptible to known-plaintext attacks on the key due to the + correlation between the processing time and the input of the first + round. Therefore, it is recommended to also enable the time invariant + NEON based driver below (CRYPTO_AES_ARM_BS), which will supersede + this driver on NEON capable CPUs when using AES in CBC, CTR and XTS + modes. If time invariance is a requirement, this driver should not + be enabled. config CRYPTO_AES_ARM_BS tristate "Bit sliced AES using NEON instructions" diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index 7ffe88267943..4fb3e519b43f 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -42,13 +42,37 @@ config CRYPTO_CRC32_ARM64_CE select CRYPTO_HASH config CRYPTO_AES_ARM64 - tristate "AES core cipher using scalar instructions" + tristate "Table based AES cipher for 64-bit ARM" select CRYPTO_AES_GENERIC + help + Table based implementation in 64-bit ARM assembler of the FIPS-197 + Advanced Encryption Standard (AES) symmetric cipher algorithm. This + driver reuses the tables exposed by the generic AES driver. + + For CPUs that lack the special ARMv8-CE instructions, this is the + fastest implementation available of the core cipher, but it may be + susceptible to known-plaintext attacks on the key due to the + correlation between the processing time and the input of the first + round. Therefore, it is recommended to also enable the time invariant + drivers below (CRYPTO_AES_ARM64_NEON_BLK and CRYPTO_AES_ARM64_BS), + which will supersede this driver when using AES in the specific modes + that they implement. If time invariance is a requirement, this driver + should not be enabled. config CRYPTO_AES_ARM64_CE - tristate "AES core cipher using ARMv8 Crypto Extensions" - depends on ARM64 && KERNEL_MODE_NEON + tristate "AES cipher using ARMv8 Crypto Extensions" + depends on KERNEL_MODE_NEON select CRYPTO_ALGAPI + help + Implementation in assembler of the FIPS-197 Advanced Encryption + Standard (AES) symmetric cipher algorithm, using instructions from + ARM's optional ARMv8 Crypto Extensions. This implementation is time + invariant, and is by far the preferred option for CPUs that support + this extension. + + If in doubt, enable as a module: it will be loaded automatically on + CPUs that support it, and supersede other implementations of the AES + cipher. 
config CRYPTO_AES_ARM64_CE_CCM tristate "AES in CCM mode using ARMv8 Crypto Extensions" diff --git a/crypto/Kconfig b/crypto/Kconfig index 9ae3dade4b2b..87d9e03dcb74 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -902,37 +902,31 @@ config CRYPTO_AES select CRYPTO_AES_GENERIC config CRYPTO_AES_GENERIC - tristate "AES cipher algorithms" + tristate "Generic table based AES cipher" select CRYPTO_ALGAPI select CRYPTO_AES_CORE help - AES cipher algorithms (FIPS-197). AES uses the Rijndael - algorithm. - - Rijndael appears to be consistently a very good performer in - both hardware and software across a wide range of computing - environments regardless of its use in feedback or non-feedback - modes. Its key setup time is excellent, and its key agility is - good. Rijndael's very low memory requirements make it very well - suited for restricted-space environments, in which it also - demonstrates excellent performance. Rijndael's operations are - among the easiest to defend against power and timing attacks. - - The AES specifies three key sizes: 128, 192 and 256 bits - - See for more information. + Generic table based implementation of the FIPS-197 Advanced + Encryption Standard (AES) symmetric cipher algorithm. This is + the fastest implementation in C, but may be susceptible to known + plaintext attacks on the key due to the correlation between the + processing time and the input of the first round. If time + invariance is a requirement, this driver should not be enabled, + and the fixed time variant below (CRYPTO_AES_TI) should be selected + instead. config CRYPTO_AES_TI - tristate "Fixed time AES cipher" + tristate "Generic fixed time AES cipher" select CRYPTO_ALGAPI select CRYPTO_AES_CORE help - This is a generic implementation of AES that attempts to eliminate - data dependent latencies as much as possible without affecting - performance too much. It is intended for use by the generic CCM - and GCM drivers, and other CTR or CMAC/XCBC based modes that rely - solely on encryption (although decryption is supported as well, but - with a more dramatic performance hit) + Alternative generic implementation of the FIPS-197 Advanced + Encryption Standard (AES) symmetric cipher algorithm, offering a + different tradeoff between security, performance and memory and + D-cache footprint. Most notably, decryption is substantially slower + than encryption when using this driver, which makes it more suitable + for AES based stream ciphers and MAC algorithms (which rely on + encryption only) than for block ciphers such as CBC or XTS. Instead of using 16 lookup tables of 1 KB each, (8 for encryption and 8 for decryption), this implementation only uses just two S-boxes of @@ -941,51 +935,37 @@ config CRYPTO_AES_TI block. config CRYPTO_AES_586 - tristate "AES cipher algorithms (i586)" + tristate "Table based AES cipher for 32-bit x86" depends on (X86 || UML_X86) && !64BIT select CRYPTO_ALGAPI select CRYPTO_AES_GENERIC help - AES cipher algorithms (FIPS-197). AES uses the Rijndael - algorithm. - - Rijndael appears to be consistently a very good performer in - both hardware and software across a wide range of computing - environments regardless of its use in feedback or non-feedback - modes. Its key setup time is excellent, and its key agility is - good. Rijndael's very low memory requirements make it very well - suited for restricted-space environments, in which it also - demonstrates excellent performance. Rijndael's operations are - among the easiest to defend against power and timing attacks. 
- - The AES specifies three key sizes: 128, 192 and 256 bits - - See for more information. + Table based implementation in 32-bit x86 assembler of the FIPS-197 + Advanced Encryption Standard (AES) symmetric cipher algorithm. For + older 32-bit x86 CPUs that lack the special AES-NI instructions, it + is the fastest implementation available, but it may be susceptible to + known-plaintext attacks on the key due to the correlation between the + processing time and the input of the first round. It reuses the + tables exposed by the generic AES driver. If time invariance is a + requirement, this driver should not be enabled. config CRYPTO_AES_X86_64 - tristate "AES cipher algorithms (x86_64)" + tristate "Table based AES cipher for 64-bit x86" depends on (X86 || UML_X86) && 64BIT select CRYPTO_ALGAPI select CRYPTO_AES_GENERIC help - AES cipher algorithms (FIPS-197). AES uses the Rijndael - algorithm. - - Rijndael appears to be consistently a very good performer in - both hardware and software across a wide range of computing - environments regardless of its use in feedback or non-feedback - modes. Its key setup time is excellent, and its key agility is - good. Rijndael's very low memory requirements make it very well - suited for restricted-space environments, in which it also - demonstrates excellent performance. Rijndael's operations are - among the easiest to defend against power and timing attacks. - - The AES specifies three key sizes: 128, 192 and 256 bits - - See for more information. + Table based implementation in 64-bit x86 assembler of the FIPS-197 + Advanced Encryption Standard (AES) symmetric cipher algorithm. For + older 64-bit x86 CPUs that lack the special AES-NI instructions, it + is the fastest implementation available, but it may be susceptible to + known-plaintext attacks on the key due to the correlation between the + processing time and the input of the first round. It reuses the + tables exposed by the generic AES driver. If time invariance is a + requirement, this driver should not be enabled. config CRYPTO_AES_NI_INTEL - tristate "AES cipher algorithms (AES-NI)" + tristate "AES cipher for x86 using AES-NI instructions" depends on X86 select CRYPTO_AEAD select CRYPTO_AES_CORE @@ -994,52 +974,29 @@ config CRYPTO_AES_NI_INTEL select CRYPTO_GLUE_HELPER_X86 if 64BIT select CRYPTO_SIMD help - Use Intel AES-NI instructions for AES algorithm. - - AES cipher algorithms (FIPS-197). AES uses the Rijndael - algorithm. - - Rijndael appears to be consistently a very good performer in - both hardware and software across a wide range of computing - environments regardless of its use in feedback or non-feedback - modes. Its key setup time is excellent, and its key agility is - good. Rijndael's very low memory requirements make it very well - suited for restricted-space environments, in which it also - demonstrates excellent performance. Rijndael's operations are - among the easiest to defend against power and timing attacks. - - The AES specifies three key sizes: 128, 192 and 256 bits - - See for more information. + Implementation in x86 assembler of the FIPS-197 Advanced Encryption + Standard (AES) symmetric cipher algorithm, using instructions from + Intel's optional AES-NI ISA extension. This implementation is time + invariant, and is by far the preferred option for CPUs that support + this extension. In addition to AES cipher algorithm support, the acceleration for some popular block cipher mode is supported too, including ECB, CBC, LRW, PCBC, XTS. 
The 64 bit version has additional acceleration for CTR. + If in doubt, enable as a module: it will be loaded automatically on + CPUs that support it, and supersede other implementations of the AES + cipher. + config CRYPTO_AES_SPARC64 - tristate "AES cipher algorithms (SPARC64)" + tristate "AES cipher for SPARC64 using crypto opcodes" depends on SPARC64 select CRYPTO_CRYPTD select CRYPTO_ALGAPI help - Use SPARC64 crypto opcodes for AES algorithm. - - AES cipher algorithms (FIPS-197). AES uses the Rijndael - algorithm. - - Rijndael appears to be consistently a very good performer in - both hardware and software across a wide range of computing - environments regardless of its use in feedback or non-feedback - modes. Its key setup time is excellent, and its key agility is - good. Rijndael's very low memory requirements make it very well - suited for restricted-space environments, in which it also - demonstrates excellent performance. Rijndael's operations are - among the easiest to defend against power and timing attacks. - - The AES specifies three key sizes: 128, 192 and 256 bits - - See for more information. + Implementation of the FIPS-197 Advanced Encryption Standard (AES) + symmetric cipher algorithm, using SPARC64 crypto opcodes. In addition to AES cipher algorithm support, the acceleration for some popular block cipher mode is supported too, including @@ -1049,8 +1006,9 @@ config CRYPTO_AES_PPC_SPE tristate "AES cipher algorithms (PPC SPE)" depends on PPC && SPE help - AES cipher algorithms (FIPS-197). Additionally the acceleration - for popular block cipher modes ECB, CBC, CTR and XTS is supported. + Implementation of the FIPS-197 Advanced Encryption Standard (AES) + symmetric cipher algorithm. Additionally, the acceleration for + popular block cipher modes ECB, CBC, CTR and XTS is supported. This module should only be used for low power (router) devices without hardware AES acceleration (e.g. caam crypto). 
It reduces the size of the AES tables from 16KB to 8KB + 256 bytes and mitigates
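Several of the new help texts end with the advice to enable the accelerated drivers as modules and let them supersede the slower implementations automatically. One way to confirm which driver actually ended up backing the "aes" cipher on a running system is to inspect /proc/crypto, or, from code, query the driver name of an allocated transform as in the sketch below (the helper is invented for illustration).

/*
 * Illustrative only: report which registered implementation currently
 * provides the "aes" cipher (for example the generic, fixed-time or an
 * architecture-specific driver), as decided by algorithm priority.
 */
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/printk.h>

static void example_report_aes_driver(void)
{
	struct crypto_cipher *tfm = crypto_alloc_cipher("aes", 0, 0);

	if (IS_ERR(tfm))
		return;

	pr_info("aes is provided by %s\n",
		crypto_tfm_alg_driver_name(crypto_cipher_tfm(tfm)));

	crypto_free_cipher(tfm);
}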
From patchwork Tue Jun 20 09:29:00 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 105941
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, nico@linaro.org, ebiggers3@gmail.com, Ard Biesheuvel
Subject: [PATCH v3 7/7] crypto: aes - allow generic AES to be replaced by fixed time AES
Date: Tue, 20 Jun 2017 11:29:00 +0200
Message-Id: <1497950940-24243-8-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497950940-24243-1-git-send-email-ard.biesheuvel@linaro.org>

On systems where a small memory footprint is important, the generic AES code with its 16 KB of lookup tables and fully unrolled encrypt and decrypt routines may be an unnecessary burden, especially given that modern SoCs often have dedicated instructions for AES. And even if they don't, a time invariant implementation may be preferred over a fast one that may be susceptible to cache timing attacks.
So allow the declared dependency of other subsystems on AES to be fulfilled by either the generic table based AES or by the much smaller generic time invariant implementation. Signed-off-by: Ard Biesheuvel --- crypto/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) -- 2.7.4 diff --git a/crypto/Kconfig b/crypto/Kconfig index 87d9e03dcb74..dd0bc0d84789 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -899,7 +899,8 @@ config CRYPTO_AES_CORE config CRYPTO_AES tristate - select CRYPTO_AES_GENERIC + select CRYPTO_AES_GENERIC if (CRYPTO_AES=y && CRYPTO_AES_TI != y) || \ + (CRYPTO_AES=m && !CRYPTO_AES_TI) config CRYPTO_AES_GENERIC tristate "Generic table based AES cipher"
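The conditional select above is the heart of the change: CRYPTO_AES still guarantees that an "aes" cipher is available, but the table-based generic driver is pulled in only when the fixed-time driver cannot stand in for it at the required linkage (built-in versus module). The snippet below merely restates that Kconfig expression in C using the kernel's IS_BUILTIN/IS_MODULE/IS_ENABLED helpers; the function itself is invented and exists nowhere in the tree.

/*
 * Illustrative restatement of:
 *   select CRYPTO_AES_GENERIC if (CRYPTO_AES=y && CRYPTO_AES_TI != y) ||
 *                                (CRYPTO_AES=m && !CRYPTO_AES_TI)
 * The generic table-based driver is only needed when the fixed-time
 * driver is not built in (for CRYPTO_AES=y) or not enabled at all
 * (for CRYPTO_AES=m).
 */
#include <linux/kconfig.h>
#include <linux/types.h>

static inline bool example_need_aes_generic(void)
{
	if (IS_BUILTIN(CONFIG_CRYPTO_AES))
		return !IS_BUILTIN(CONFIG_CRYPTO_AES_TI);
	if (IS_MODULE(CONFIG_CRYPTO_AES))
		return !IS_ENABLED(CONFIG_CRYPTO_AES_TI);
	return false;
}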