From patchwork Sat Jun 10 16:22:47 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 103556
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 01/12] arm64: neon: replace generic definition of may_use_simd()
Date: Sat, 10 Jun 2017 16:22:47 +0000
Message-Id: <1497111778-4210-2-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>

In preparation of modifying the logic that decides whether kernel mode
NEON is allowable, which is required for SVE support, introduce an
implementation of may_use_simd() that reflects the current reality,
i.e., that SIMD is allowed in any context.
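To illustrate how this helper is meant to be consumed by the rest of the
series, here is a minimal sketch of the guard pattern a kernel-mode NEON
user would adopt; the do_blocks_neon()/do_blocks_scalar() workers are
hypothetical stand-ins, not functions introduced by this patch:

#include <linux/types.h>
#include <asm/neon.h>
#include <asm/simd.h>

/* hypothetical NEON/scalar implementation pair */
static void do_blocks_neon(u8 *out, const u8 *in, int blocks);
static void do_blocks_scalar(u8 *out, const u8 *in, int blocks);

static void do_blocks(u8 *out, const u8 *in, int blocks)
{
	if (may_use_simd()) {
		/* the SIMD register file may be claimed in this context */
		kernel_neon_begin();
		do_blocks_neon(out, in, blocks);
		kernel_neon_end();
	} else {
		/* e.g. a context where kernel-mode NEON is not allowed */
		do_blocks_scalar(out, in, blocks);
	}
}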
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/Kbuild | 1 -
 arch/arm64/include/asm/simd.h | 24 ++++++++++++++++++++
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index a7a97a608033..3c469b557ee8 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64/include/asm/Kbuild
@@ -31,7 +31,6 @@ generic-y += sembuf.h
 generic-y += serial.h
 generic-y += set_memory.h
 generic-y += shmbuf.h
-generic-y += simd.h
 generic-y += sizes.h
 generic-y += socket.h
 generic-y += sockios.h
diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
new file mode 100644
index 000000000000..f8aa7b3a0140
--- /dev/null
+++ b/arch/arm64/include/asm/simd.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2017 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ */
+
+#ifndef __ASM_SIMD_H
+#define __ASM_SIMD_H
+
+#include
+#include
+
+/*
+ * may_use_simd - whether it is allowable at this time to issue SIMD
+ * instructions or access the SIMD register file
+ */
+static __must_check inline bool may_use_simd(void)
+{
+	return true;
+}
+
+#endif

From patchwork Sat Jun 10 16:22:48 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 103557
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 02/12] crypto: arm64/ghash-ce - add non-SIMD scalar fallback
Date: Sat, 10 Jun 2017 16:22:48 +0000
Message-Id: <1497111778-4210-3-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>

The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar C code that can be invoked in that case.
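The scalar path added below leans on the kernel's generic GF(2^128)
helpers instead of open-coding the field arithmetic: each 16-byte block
is XORed into the running digest and the result is multiplied by the
hash key with gf128mul_lle(), which is why the Kconfig change selects
CRYPTO_GF128MUL. A simplified, self-contained sketch of one block of
that fallback (assuming the be128 copy of the key that this patch adds
to struct ghash_key):

#include <crypto/algapi.h>	/* crypto_xor() */
#include <crypto/gf128mul.h>	/* be128, gf128mul_lle() */

#define GHASH_BLOCK_SIZE	16

/* dst = (dst ^ block) * H in GF(2^128), done entirely without NEON */
static void ghash_block_scalar(be128 *dst, const be128 *h,
			       const u8 block[GHASH_BLOCK_SIZE])
{
	crypto_xor((u8 *)dst, block, GHASH_BLOCK_SIZE);
	gf128mul_lle(dst, h);
}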
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/Kconfig         |  3 +-
 arch/arm64/crypto/ghash-ce-glue.c | 49 ++++++++++++++++----
 2 files changed, 43 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index d92293747d63..7d75a363e317 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -28,8 +28,9 @@ config CRYPTO_SHA2_ARM64_CE

 config CRYPTO_GHASH_ARM64_CE
 	tristate "GHASH (for GCM chaining mode) using ARMv8 Crypto Extensions"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON
 	select CRYPTO_HASH
+	select CRYPTO_GF128MUL

 config CRYPTO_CRCT10DIF_ARM64_CE
 	tristate "CRCT10DIF digest algorithm using PMULL instructions"
diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index 833ec1e3f3e9..3e1a778b181a 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -1,7 +1,7 @@
 /*
  * Accelerated GHASH implementation with ARMv8 PMULL instructions.
  *
- * Copyright (C) 2014 Linaro Ltd.
+ * Copyright (C) 2014 - 2017 Linaro Ltd.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License version 2 as published
@@ -9,7 +9,9 @@
  */

 #include
+#include
 #include
+#include
 #include
 #include
 #include
@@ -25,6 +27,7 @@ MODULE_LICENSE("GPL v2");
 struct ghash_key {
 	u64 a;
 	u64 b;
+	be128 k;
 };

 struct ghash_desc_ctx {
@@ -44,6 +47,36 @@ static int ghash_init(struct shash_desc *desc)
 	return 0;
 }

+static void ghash_do_update(int blocks, u64 dg[], const char *src,
+			    struct ghash_key *key, const char *head)
+{
+	if (may_use_simd()) {
+		kernel_neon_begin();
+		pmull_ghash_update(blocks, dg, src, key, head);
+		kernel_neon_end();
+	} else {
+		be128 dst = { cpu_to_be64(dg[1]), cpu_to_be64(dg[0]) };
+
+		do {
+			const u8 *in = src;
+
+			if (head) {
+				in = head;
+				blocks++;
+				head = NULL;
+			} else {
+				src += GHASH_BLOCK_SIZE;
+			}
+
+			crypto_xor((u8 *)&dst, in, GHASH_BLOCK_SIZE);
+			gf128mul_lle(&dst, &key->k);
+		} while (--blocks);
+
+		dg[0] = be64_to_cpu(dst.b);
+		dg[1] = be64_to_cpu(dst.a);
+	}
+}
+
 static int ghash_update(struct shash_desc *desc, const u8 *src,
 			unsigned int len)
 {
@@ -67,10 +100,9 @@ static int ghash_update(struct shash_desc *desc, const u8 *src,
 		blocks = len / GHASH_BLOCK_SIZE;
 		len %= GHASH_BLOCK_SIZE;

-		kernel_neon_begin_partial(8);
-		pmull_ghash_update(blocks, ctx->digest, src, key,
-				   partial ? ctx->buf : NULL);
-		kernel_neon_end();
+		ghash_do_update(blocks, ctx->digest, src, key,
+				partial ? ctx->buf : NULL);
+
 		src += blocks * GHASH_BLOCK_SIZE;
 		partial = 0;
 	}
@@ -89,9 +121,7 @@ static int ghash_final(struct shash_desc *desc, u8 *dst)

 		memset(ctx->buf + partial, 0, GHASH_BLOCK_SIZE - partial);

-		kernel_neon_begin_partial(8);
-		pmull_ghash_update(1, ctx->digest, ctx->buf, key, NULL);
-		kernel_neon_end();
+		ghash_do_update(1, ctx->digest, ctx->buf, key, NULL);
 	}
 	put_unaligned_be64(ctx->digest[1], dst);
 	put_unaligned_be64(ctx->digest[0], dst + 8);
@@ -111,6 +141,9 @@ static int ghash_setkey(struct crypto_shash *tfm,
 		return -EINVAL;
 	}

+	/* needed for the fallback */
+	memcpy(&key->k, inkey, GHASH_BLOCK_SIZE);
+
 	/* perform multiplication by 'x' in GF(2^128) */
 	b = get_unaligned_be64(inkey);
 	a = get_unaligned_be64(inkey + 8);

From patchwork Sat Jun 10 16:22:49 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 103558
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 03/12] crypto: arm64/crct10dif - add non-SIMD generic fallback
Date: Sat, 10 Jun 2017 16:22:49 +0000
Message-Id: <1497111778-4210-4-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>

The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar C code that can be invoked in that case.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/crct10dif-ce-glue.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c
index 60cb590c2590..96f0cae4a022 100644
--- a/arch/arm64/crypto/crct10dif-ce-glue.c
+++ b/arch/arm64/crypto/crct10dif-ce-glue.c
@@ -1,7 +1,7 @@
 /*
  * Accelerated CRC-T10DIF using arm64 NEON and Crypto Extensions instructions
  *
- * Copyright (C) 2016 Linaro Ltd
+ * Copyright (C) 2016 - 2017 Linaro Ltd
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -18,6 +18,7 @@
 #include
 #include
+#include

 #define CRC_T10DIF_PMULL_CHUNK_SIZE	16U
@@ -48,9 +49,13 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data,
 	}

 	if (length > 0) {
-		kernel_neon_begin_partial(14);
-		*crc = crc_t10dif_pmull(*crc, data, length);
-		kernel_neon_end();
+		if (may_use_simd()) {
+			kernel_neon_begin();
+			*crc = crc_t10dif_pmull(*crc, data, length);
+			kernel_neon_end();
+		} else {
+			*crc = crc_t10dif_generic(*crc, data, length);
+		}
 	}

 	return 0;

From patchwork Sat Jun 10 16:22:50 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 103559
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 04/12] crypto: arm64/crc32 - add non-SIMD scalar fallback
Date: Sat, 10 Jun 2017 16:22:50 +0000
Message-Id: <1497111778-4210-5-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>

The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar C code that can be invoked in that case.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/crc32-ce-glue.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/crypto/crc32-ce-glue.c b/arch/arm64/crypto/crc32-ce-glue.c
index eccb1ae90064..624f4137918c 100644
--- a/arch/arm64/crypto/crc32-ce-glue.c
+++ b/arch/arm64/crypto/crc32-ce-glue.c
@@ -1,7 +1,7 @@
 /*
  * Accelerated CRC32(C) using arm64 NEON and Crypto Extensions instructions
  *
- * Copyright (C) 2016 Linaro Ltd
+ * Copyright (C) 2016 - 2017 Linaro Ltd
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -19,6 +19,7 @@
 #include
 #include
+#include
 #include

 #define PMULL_MIN_LEN		64L	/* minimum size of buffer
@@ -105,10 +106,10 @@ static int crc32_pmull_update(struct shash_desc *desc, const u8 *data,
 		length -= l;
 	}

-	if (length >= PMULL_MIN_LEN) {
+	if (length >= PMULL_MIN_LEN && may_use_simd()) {
 		l = round_down(length, SCALE_F);

-		kernel_neon_begin_partial(10);
+		kernel_neon_begin();
 		*crc = crc32_pmull_le(data, l, *crc);
 		kernel_neon_end();
@@ -137,10 +138,10 @@ static int crc32c_pmull_update(struct shash_desc *desc, const u8 *data,
 		length -= l;
 	}

-	if (length >= PMULL_MIN_LEN) {
+	if (length >= PMULL_MIN_LEN && may_use_simd()) {
 		l = round_down(length, SCALE_F);

-		kernel_neon_begin_partial(10);
+		kernel_neon_begin();
 		*crc = crc32c_pmull_le(data, l, *crc);
 		kernel_neon_end();

From patchwork Sat Jun 10 16:22:51 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 103560
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 05/12] crypto: arm64/sha1-ce - add non-SIMD generic fallback
Date: Sat, 10 Jun 2017 16:22:51 +0000
Message-Id: <1497111778-4210-6-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>

The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar C code that can be invoked in that case.
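The fallback in this case does not need a private scalar block function:
the generic SHA-1 driver already exports crypto_sha1_update() and
crypto_sha1_finup(), so the Crypto Extensions glue can hand off the whole
request when SIMD is unavailable; that is also why the Kconfig hunk adds
"select CRYPTO_SHA1". A condensed sketch of the guard (sha1_ce_do_update()
is a hypothetical stand-in for the NEON path):

#include <crypto/internal/hash.h>
#include <crypto/sha.h>
#include <asm/simd.h>

static int sha1_ce_do_update(struct shash_desc *desc, const u8 *data,
			     unsigned int len);

static int sha1_update_with_fallback(struct shash_desc *desc, const u8 *data,
				     unsigned int len)
{
	if (!may_use_simd())
		/* let the generic scalar SHA-1 code handle the request */
		return crypto_sha1_update(desc, data, len);

	return sha1_ce_do_update(desc, data, len);
}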
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/Kconfig        |  3 ++-
 arch/arm64/crypto/sha1-ce-glue.c | 18 ++++++++++++++----
 2 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 7d75a363e317..5d5953545dad 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -18,8 +18,9 @@ config CRYPTO_SHA512_ARM64

 config CRYPTO_SHA1_ARM64_CE
 	tristate "SHA-1 digest algorithm (ARMv8 Crypto Extensions)"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON
 	select CRYPTO_HASH
+	select CRYPTO_SHA1

 config CRYPTO_SHA2_ARM64_CE
 	tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)"
diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c
index aefda9868627..058cbe299dd6 100644
--- a/arch/arm64/crypto/sha1-ce-glue.c
+++ b/arch/arm64/crypto/sha1-ce-glue.c
@@ -1,7 +1,7 @@
 /*
  * sha1-ce-glue.c - SHA-1 secure hash using ARMv8 Crypto Extensions
  *
- * Copyright (C) 2014 Linaro Ltd
+ * Copyright (C) 2014 - 2017 Linaro Ltd
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -9,6 +9,7 @@
  */

 #include
+#include
 #include
 #include
 #include
@@ -37,8 +38,11 @@ static int sha1_ce_update(struct shash_desc *desc, const u8 *data,
 {
 	struct sha1_ce_state *sctx = shash_desc_ctx(desc);

+	if (!may_use_simd())
+		return crypto_sha1_update(desc, data, len);
+
 	sctx->finalize = 0;
-	kernel_neon_begin_partial(16);
+	kernel_neon_begin();
 	sha1_base_do_update(desc, data, len,
 			    (sha1_block_fn *)sha1_ce_transform);
 	kernel_neon_end();
@@ -57,13 +61,16 @@ static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
 	ASM_EXPORT(sha1_ce_offsetof_finalize,
 		   offsetof(struct sha1_ce_state, finalize));

+	if (!may_use_simd())
+		return crypto_sha1_finup(desc, data, len, out);
+
 	/*
 	 * Allow the asm code to perform the finalization if there is no
 	 * partial data and the input is a round multiple of the block size.
 	 */
 	sctx->finalize = finalize;

-	kernel_neon_begin_partial(16);
+	kernel_neon_begin();
 	sha1_base_do_update(desc, data, len,
 			    (sha1_block_fn *)sha1_ce_transform);
 	if (!finalize)
@@ -76,8 +83,11 @@ static int sha1_ce_final(struct shash_desc *desc, u8 *out)
 {
 	struct sha1_ce_state *sctx = shash_desc_ctx(desc);

+	if (!may_use_simd())
+		return crypto_sha1_finup(desc, NULL, 0, out);
+
 	sctx->finalize = 0;
-	kernel_neon_begin_partial(16);
+	kernel_neon_begin();
 	sha1_base_do_finalize(desc, (sha1_block_fn *)sha1_ce_transform);
 	kernel_neon_end();
 	return sha1_base_finish(desc, out);

From patchwork Sat Jun 10 16:22:52 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 103561
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 06/12] crypto: arm64/sha2-ce - add non-SIMD scalar fallback
Date: Sat, 10 Jun 2017 16:22:52 +0000
Message-Id: <1497111778-4210-7-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>

The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar code that can be invoked in that case.
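SHA-256 takes a slightly different route from SHA-1: rather than calling
into the generic driver, it reuses the scalar block function
sha256_block_data_order() (newly exported from sha256-glue.c below) and
drives it through the sha256_base helpers. A minimal sketch of that shape,
treating the block function as an external symbol:

#include <linux/linkage.h>
#include <crypto/internal/hash.h>
#include <crypto/sha256_base.h>

/* scalar block function provided by arch/arm64/crypto/sha256-glue.c */
asmlinkage void sha256_block_data_order(u32 *digest, u8 const *src, int blocks);

static int sha256_scalar_finup(struct shash_desc *desc, const u8 *data,
			       unsigned int len, u8 *out)
{
	if (len)
		sha256_base_do_update(desc, data, len,
				(sha256_block_fn *)sha256_block_data_order);
	sha256_base_do_finalize(desc,
				(sha256_block_fn *)sha256_block_data_order);
	return sha256_base_finish(desc, out);
}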
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/Kconfig        |  3 +-
 arch/arm64/crypto/sha2-ce-glue.c | 30 +++++++++++++++++---
 arch/arm64/crypto/sha256-glue.c  |  1 +
 3 files changed, 29 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 5d5953545dad..8cd145f9c1ff 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -24,8 +24,9 @@ config CRYPTO_SHA1_ARM64_CE

 config CRYPTO_SHA2_ARM64_CE
 	tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON
 	select CRYPTO_HASH
+	select CRYPTO_SHA256_ARM64

 config CRYPTO_GHASH_ARM64_CE
 	tristate "GHASH (for GCM chaining mode) using ARMv8 Crypto Extensions"
diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
index 7cd587564a41..eb71543568b6 100644
--- a/arch/arm64/crypto/sha2-ce-glue.c
+++ b/arch/arm64/crypto/sha2-ce-glue.c
@@ -1,7 +1,7 @@
 /*
  * sha2-ce-glue.c - SHA-224/SHA-256 using ARMv8 Crypto Extensions
  *
- * Copyright (C) 2014 Linaro Ltd
+ * Copyright (C) 2014 - 2017 Linaro Ltd
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -9,6 +9,7 @@
  */

 #include
+#include
 #include
 #include
 #include
@@ -32,13 +33,19 @@ struct sha256_ce_state {
 asmlinkage void sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
 				  int blocks);

+asmlinkage void sha256_block_data_order(u32 *digest, u8 const *src, int blocks);
+
 static int sha256_ce_update(struct shash_desc *desc, const u8 *data,
 			    unsigned int len)
 {
 	struct sha256_ce_state *sctx = shash_desc_ctx(desc);

+	if (!may_use_simd())
+		return sha256_base_do_update(desc, data, len,
+				(sha256_block_fn *)sha256_block_data_order);
+
 	sctx->finalize = 0;
-	kernel_neon_begin_partial(28);
+	kernel_neon_begin();
 	sha256_base_do_update(desc, data, len,
 			      (sha256_block_fn *)sha2_ce_transform);
 	kernel_neon_end();
@@ -57,13 +64,22 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
 	ASM_EXPORT(sha256_ce_offsetof_finalize,
 		   offsetof(struct sha256_ce_state, finalize));

+	if (!may_use_simd()) {
+		if (len)
+			sha256_base_do_update(desc, data, len,
+				(sha256_block_fn *)sha256_block_data_order);
+		sha256_base_do_finalize(desc,
+				(sha256_block_fn *)sha256_block_data_order);
+		return sha256_base_finish(desc, out);
+	}
+
 	/*
 	 * Allow the asm code to perform the finalization if there is no
 	 * partial data and the input is a round multiple of the block size.
 	 */
 	sctx->finalize = finalize;

-	kernel_neon_begin_partial(28);
+	kernel_neon_begin();
 	sha256_base_do_update(desc, data, len,
 			      (sha256_block_fn *)sha2_ce_transform);
 	if (!finalize)
@@ -77,8 +93,14 @@ static int sha256_ce_final(struct shash_desc *desc, u8 *out)
 {
 	struct sha256_ce_state *sctx = shash_desc_ctx(desc);

+	if (!may_use_simd()) {
+		sha256_base_do_finalize(desc,
+				(sha256_block_fn *)sha256_block_data_order);
+		return sha256_base_finish(desc, out);
+	}
+
 	sctx->finalize = 0;
-	kernel_neon_begin_partial(28);
+	kernel_neon_begin();
 	sha256_base_do_finalize(desc, (sha256_block_fn *)sha2_ce_transform);
 	kernel_neon_end();
 	return sha256_base_finish(desc, out);
diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index a2226f841960..b064d925fe2a 100644
--- a/arch/arm64/crypto/sha256-glue.c
+++ b/arch/arm64/crypto/sha256-glue.c
@@ -29,6 +29,7 @@ MODULE_ALIAS_CRYPTO("sha256");

 asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
 					unsigned int num_blks);
+EXPORT_SYMBOL(sha256_block_data_order);

 asmlinkage void sha256_block_neon(u32 *digest, const void *data,
 				  unsigned int num_blks);

From patchwork Sat Jun 10 16:22:53 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 103562
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 07/12] crypto: arm64/aes-ce-cipher - match round key endianness with generic code
Date: Sat, 10 Jun 2017 16:22:53 +0000
Message-Id: <1497111778-4210-8-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>

In order to be able to reuse the generic AES code as a fallback for
situations where the NEON may not be used, update the key handling
to match the byte order of the generic code: it stores round keys
as sequences of 32-bit quantities rather than streams of bytes, and
so our code needs to be updated to reflect that.
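For reference, the layout being converged on is the one used by the
generic crypto_aes_ctx: each round key is a sequence of u32 words filled
from the raw key bytes as little-endian 32-bit quantities, rather than a
byte stream that is memcpy'd in and loaded with ld1 {v.16b}. Loading the
schedule with ld1 {v.4s} then yields the same register contents on both
little- and big-endian kernels. A small sketch of the C side of that
conversion (mirroring the expandkey change below; the helper name is
illustrative only):

#include <crypto/aes.h>
#include <asm/unaligned.h>

/* fill the first round-key slots from the user key, word by word */
static void aes_copy_key_words(struct crypto_aes_ctx *ctx, const u8 *in_key,
			       unsigned int key_len)
{
	int kwords = key_len / sizeof(u32);
	int i;

	for (i = 0; i < kwords; i++)
		ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32));
}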
Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/aes-ce-ccm-core.S | 30 ++++++++--------- arch/arm64/crypto/aes-ce-cipher.c | 35 +++++++++----------- arch/arm64/crypto/aes-ce.S | 12 +++---- 3 files changed, 37 insertions(+), 40 deletions(-) -- 2.7.4 diff --git a/arch/arm64/crypto/aes-ce-ccm-core.S b/arch/arm64/crypto/aes-ce-ccm-core.S index 3363560c79b7..e3a375c4cb83 100644 --- a/arch/arm64/crypto/aes-ce-ccm-core.S +++ b/arch/arm64/crypto/aes-ce-ccm-core.S @@ -1,7 +1,7 @@ /* * aesce-ccm-core.S - AES-CCM transform for ARMv8 with Crypto Extensions * - * Copyright (C) 2013 - 2014 Linaro Ltd + * Copyright (C) 2013 - 2017 Linaro Ltd * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -32,7 +32,7 @@ ENTRY(ce_aes_ccm_auth_data) beq 8f /* out of input? */ cbnz w8, 0b eor v0.16b, v0.16b, v1.16b -1: ld1 {v3.16b}, [x4] /* load first round key */ +1: ld1 {v3.4s}, [x4] /* load first round key */ prfm pldl1strm, [x1] cmp w5, #12 /* which key size? */ add x6, x4, #16 @@ -42,17 +42,17 @@ ENTRY(ce_aes_ccm_auth_data) mov v5.16b, v3.16b b 4f 2: mov v4.16b, v3.16b - ld1 {v5.16b}, [x6], #16 /* load 2nd round key */ + ld1 {v5.4s}, [x6], #16 /* load 2nd round key */ 3: aese v0.16b, v4.16b aesmc v0.16b, v0.16b -4: ld1 {v3.16b}, [x6], #16 /* load next round key */ +4: ld1 {v3.4s}, [x6], #16 /* load next round key */ aese v0.16b, v5.16b aesmc v0.16b, v0.16b -5: ld1 {v4.16b}, [x6], #16 /* load next round key */ +5: ld1 {v4.4s}, [x6], #16 /* load next round key */ subs w7, w7, #3 aese v0.16b, v3.16b aesmc v0.16b, v0.16b - ld1 {v5.16b}, [x6], #16 /* load next round key */ + ld1 {v5.4s}, [x6], #16 /* load next round key */ bpl 3b aese v0.16b, v4.16b subs w2, w2, #16 /* last data? */ @@ -90,7 +90,7 @@ ENDPROC(ce_aes_ccm_auth_data) * u32 rounds); */ ENTRY(ce_aes_ccm_final) - ld1 {v3.16b}, [x2], #16 /* load first round key */ + ld1 {v3.4s}, [x2], #16 /* load first round key */ ld1 {v0.16b}, [x0] /* load mac */ cmp w3, #12 /* which key size? */ sub w3, w3, #2 /* modified # of rounds */ @@ -100,17 +100,17 @@ ENTRY(ce_aes_ccm_final) mov v5.16b, v3.16b b 2f 0: mov v4.16b, v3.16b -1: ld1 {v5.16b}, [x2], #16 /* load next round key */ +1: ld1 {v5.4s}, [x2], #16 /* load next round key */ aese v0.16b, v4.16b aesmc v0.16b, v0.16b aese v1.16b, v4.16b aesmc v1.16b, v1.16b -2: ld1 {v3.16b}, [x2], #16 /* load next round key */ +2: ld1 {v3.4s}, [x2], #16 /* load next round key */ aese v0.16b, v5.16b aesmc v0.16b, v0.16b aese v1.16b, v5.16b aesmc v1.16b, v1.16b -3: ld1 {v4.16b}, [x2], #16 /* load next round key */ +3: ld1 {v4.4s}, [x2], #16 /* load next round key */ subs w3, w3, #3 aese v0.16b, v3.16b aesmc v0.16b, v0.16b @@ -137,31 +137,31 @@ CPU_LE( rev x8, x8 ) /* keep swabbed ctr in reg */ cmp w4, #12 /* which key size? 
*/ sub w7, w4, #2 /* get modified # of rounds */ ins v1.d[1], x9 /* no carry in lower ctr */ - ld1 {v3.16b}, [x3] /* load first round key */ + ld1 {v3.4s}, [x3] /* load first round key */ add x10, x3, #16 bmi 1f bne 4f mov v5.16b, v3.16b b 3f 1: mov v4.16b, v3.16b - ld1 {v5.16b}, [x10], #16 /* load 2nd round key */ + ld1 {v5.4s}, [x10], #16 /* load 2nd round key */ 2: /* inner loop: 3 rounds, 2x interleaved */ aese v0.16b, v4.16b aesmc v0.16b, v0.16b aese v1.16b, v4.16b aesmc v1.16b, v1.16b -3: ld1 {v3.16b}, [x10], #16 /* load next round key */ +3: ld1 {v3.4s}, [x10], #16 /* load next round key */ aese v0.16b, v5.16b aesmc v0.16b, v0.16b aese v1.16b, v5.16b aesmc v1.16b, v1.16b -4: ld1 {v4.16b}, [x10], #16 /* load next round key */ +4: ld1 {v4.4s}, [x10], #16 /* load next round key */ subs w7, w7, #3 aese v0.16b, v3.16b aesmc v0.16b, v0.16b aese v1.16b, v3.16b aesmc v1.16b, v1.16b - ld1 {v5.16b}, [x10], #16 /* load next round key */ + ld1 {v5.4s}, [x10], #16 /* load next round key */ bpl 2b aese v0.16b, v4.16b aese v1.16b, v4.16b diff --git a/arch/arm64/crypto/aes-ce-cipher.c b/arch/arm64/crypto/aes-ce-cipher.c index 50d9fe11d0c8..a0a0e5e3a8b5 100644 --- a/arch/arm64/crypto/aes-ce-cipher.c +++ b/arch/arm64/crypto/aes-ce-cipher.c @@ -1,7 +1,7 @@ /* * aes-ce-cipher.c - core AES cipher using ARMv8 Crypto Extensions * - * Copyright (C) 2013 - 2014 Linaro Ltd + * Copyright (C) 2013 - 2017 Linaro Ltd * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -9,6 +9,7 @@ */ #include +#include #include #include #include @@ -47,24 +48,24 @@ static void aes_cipher_encrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[]) kernel_neon_begin_partial(4); __asm__(" ld1 {v0.16b}, %[in] ;" - " ld1 {v1.16b}, [%[key]], #16 ;" + " ld1 {v1.4s}, [%[key]], #16 ;" " cmp %w[rounds], #10 ;" " bmi 0f ;" " bne 3f ;" " mov v3.16b, v1.16b ;" " b 2f ;" "0: mov v2.16b, v1.16b ;" - " ld1 {v3.16b}, [%[key]], #16 ;" + " ld1 {v3.4s}, [%[key]], #16 ;" "1: aese v0.16b, v2.16b ;" " aesmc v0.16b, v0.16b ;" - "2: ld1 {v1.16b}, [%[key]], #16 ;" + "2: ld1 {v1.4s}, [%[key]], #16 ;" " aese v0.16b, v3.16b ;" " aesmc v0.16b, v0.16b ;" - "3: ld1 {v2.16b}, [%[key]], #16 ;" + "3: ld1 {v2.4s}, [%[key]], #16 ;" " subs %w[rounds], %w[rounds], #3 ;" " aese v0.16b, v1.16b ;" " aesmc v0.16b, v0.16b ;" - " ld1 {v3.16b}, [%[key]], #16 ;" + " ld1 {v3.4s}, [%[key]], #16 ;" " bpl 1b ;" " aese v0.16b, v2.16b ;" " eor v0.16b, v0.16b, v3.16b ;" @@ -92,24 +93,24 @@ static void aes_cipher_decrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[]) kernel_neon_begin_partial(4); __asm__(" ld1 {v0.16b}, %[in] ;" - " ld1 {v1.16b}, [%[key]], #16 ;" + " ld1 {v1.4s}, [%[key]], #16 ;" " cmp %w[rounds], #10 ;" " bmi 0f ;" " bne 3f ;" " mov v3.16b, v1.16b ;" " b 2f ;" "0: mov v2.16b, v1.16b ;" - " ld1 {v3.16b}, [%[key]], #16 ;" + " ld1 {v3.4s}, [%[key]], #16 ;" "1: aesd v0.16b, v2.16b ;" " aesimc v0.16b, v0.16b ;" - "2: ld1 {v1.16b}, [%[key]], #16 ;" + "2: ld1 {v1.4s}, [%[key]], #16 ;" " aesd v0.16b, v3.16b ;" " aesimc v0.16b, v0.16b ;" - "3: ld1 {v2.16b}, [%[key]], #16 ;" + "3: ld1 {v2.4s}, [%[key]], #16 ;" " subs %w[rounds], %w[rounds], #3 ;" " aesd v0.16b, v1.16b ;" " aesimc v0.16b, v0.16b ;" - " ld1 {v3.16b}, [%[key]], #16 ;" + " ld1 {v3.4s}, [%[key]], #16 ;" " bpl 1b ;" " aesd v0.16b, v2.16b ;" " eor v0.16b, v0.16b, v3.16b ;" @@ -165,20 +166,16 @@ int ce_aes_expandkey(struct crypto_aes_ctx *ctx, const u8 *in_key, key_len != AES_KEYSIZE_256) return -EINVAL; - 
memcpy(ctx->key_enc, in_key, key_len); ctx->key_length = key_len; + for (i = 0; i < kwords; i++) + ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32)); kernel_neon_begin_partial(2); for (i = 0; i < sizeof(rcon); i++) { u32 *rki = ctx->key_enc + (i * kwords); u32 *rko = rki + kwords; -#ifndef CONFIG_CPU_BIG_ENDIAN rko[0] = ror32(aes_sub(rki[kwords - 1]), 8) ^ rcon[i] ^ rki[0]; -#else - rko[0] = rol32(aes_sub(rki[kwords - 1]), 8) ^ (rcon[i] << 24) ^ - rki[0]; -#endif rko[1] = rko[0] ^ rki[1]; rko[2] = rko[1] ^ rki[2]; rko[3] = rko[2] ^ rki[3]; @@ -210,9 +207,9 @@ int ce_aes_expandkey(struct crypto_aes_ctx *ctx, const u8 *in_key, key_dec[0] = key_enc[j]; for (i = 1, j--; j > 0; i++, j--) - __asm__("ld1 {v0.16b}, %[in] ;" + __asm__("ld1 {v0.4s}, %[in] ;" "aesimc v1.16b, v0.16b ;" - "st1 {v1.16b}, %[out] ;" + "st1 {v1.4s}, %[out] ;" : [out] "=Q"(key_dec[i]) : [in] "Q"(key_enc[j]) diff --git a/arch/arm64/crypto/aes-ce.S b/arch/arm64/crypto/aes-ce.S index b46093d567e5..50330f5c3adc 100644 --- a/arch/arm64/crypto/aes-ce.S +++ b/arch/arm64/crypto/aes-ce.S @@ -2,7 +2,7 @@ * linux/arch/arm64/crypto/aes-ce.S - AES cipher for ARMv8 with * Crypto Extensions * - * Copyright (C) 2013 Linaro Ltd + * Copyright (C) 2013 - 2017 Linaro Ltd * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -22,11 +22,11 @@ cmp \rounds, #12 blo 2222f /* 128 bits */ beq 1111f /* 192 bits */ - ld1 {v17.16b-v18.16b}, [\rk], #32 -1111: ld1 {v19.16b-v20.16b}, [\rk], #32 -2222: ld1 {v21.16b-v24.16b}, [\rk], #64 - ld1 {v25.16b-v28.16b}, [\rk], #64 - ld1 {v29.16b-v31.16b}, [\rk] + ld1 {v17.4s-v18.4s}, [\rk], #32 +1111: ld1 {v19.4s-v20.4s}, [\rk], #32 +2222: ld1 {v21.4s-v24.4s}, [\rk], #64 + ld1 {v25.4s-v28.4s}, [\rk], #64 + ld1 {v29.4s-v31.4s}, [\rk] .endm /* prepare for encryption with key in rk[] */ From patchwork Sat Jun 10 16:22:54 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 103563 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp270871qgd; Sat, 10 Jun 2017 09:23:26 -0700 (PDT) X-Received: by 10.84.238.139 with SMTP id v11mr47732906plk.182.1497111806335; Sat, 10 Jun 2017 09:23:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1497111806; cv=none; d=google.com; s=arc-20160816; b=z4Fl6Z8Rn2iYOflKEXmMTySJnUEKXFQ4lx1vgtk9bbHcXeQeCRRyLOUQ8b3+6qEYkt eSB0Tv5cYMUTXnz6KTh2b5/p8dBc7OsHIsn0Xxe3Vobu+CIOndlKdgpFpmcbEPEeIJbT Onc3TQA+yNq81U83/UwuFaTOJnbPpgMXucacPdM0utN7d+OVXhLoYskgTKSAwfbWfSCb A+y9UUIL7ACofLzpp5hYw99ZeoKA4I7CSbpCGR+ddUPV77qrOdVfoDF2wctiDeZD45cG J9LedYWZTkOUB2WxSv26x/D2t1K+DeoQV2YJGusiwuFdEbAVMVFMvp9H2akl2YRbZZVR Zu2w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:arc-authentication-results; bh=Hp2doILegMbs6y4QwSEc7j8vbpgicAqKMc+INz1yHGs=; b=oU7X7RvszWg0m9yglJzehY7Pq5OrH2aYlSNTin7dT7nAZjK5E5eOPC34FR0wlFW3Nb oP7SGKjt/WrH7eXsDExzkeW4fYb2PGKxEIiJdgln1Knen4KnEa5tQPhFdxwx/XJexznd yxqt8F+tXiTqJtBO8lXamBxzY0m82iLnp2OaeKKMlv1aqzGAhejcpYX+i2Sy+NnfXi8s AbsKBKMYIfYq+f3+6FrqTYTbgStuIzqrqfBQpaSPj70OizXeUx+3vg2H+CKt9jzJLvn+ wz5sIzKjKK5dDkFihS3/B4kT821RX9/LFaoOi7FaDvfXgZTkiLnAlP78mRQJvRMyvNYR gF/Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org 
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 08/12] crypto: arm64/aes-ce-cipher: add non-SIMD generic fallback
Date: Sat, 10 Jun 2017 16:22:54 +0000
Message-Id: <1497111778-4210-9-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>

The arm64 kernel will shortly disallow nested kernel mode NEON, so add
a fallback to scalar code that can be invoked in that case.
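[Editor's note: the hunks below apply a plain dispatch at the top of each cipher
routine: consult may_use_simd(), hand the block to the scalar AES core when the
NEON is off limits, and only bracket the Crypto Extensions code with
kernel_neon_begin()/kernel_neon_end() on the SIMD path. A minimal sketch of that
shape follows; the wrapper name and include list are illustrative, and the CE
assembly itself is elided -- the real hunks are in the diff below.]

    #include <linux/linkage.h>
    #include <asm/neon.h>
    #include <asm/simd.h>
    #include <crypto/aes.h>

    /* scalar AES core provided by the generic arm64 driver (CRYPTO_AES_ARM64) */
    asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in,
                                        int rounds);

    static void aes_encrypt_dispatch(struct crypto_aes_ctx *ctx, u8 dst[],
                                     u8 const src[])
    {
            int rounds = 6 + ctx->key_length / 4;

            if (!may_use_simd()) {
                    /* NEON not usable here (e.g. nested use): scalar fallback */
                    __aes_arm64_encrypt(ctx->key_enc, dst, src, rounds);
                    return;
            }

            kernel_neon_begin();
            /* ... ARMv8 Crypto Extensions implementation goes here ... */
            kernel_neon_end();
    }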
Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/Kconfig | 3 ++- arch/arm64/crypto/aes-ce-cipher.c | 20 +++++++++++++++++--- 2 files changed, 19 insertions(+), 4 deletions(-) -- 2.7.4 diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index 8cd145f9c1ff..772801f263d9 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -50,8 +50,9 @@ config CRYPTO_AES_ARM64 config CRYPTO_AES_ARM64_CE tristate "AES core cipher using ARMv8 Crypto Extensions" - depends on ARM64 && KERNEL_MODE_NEON + depends on KERNEL_MODE_NEON select CRYPTO_ALGAPI + select CRYPTO_AES_ARM64 config CRYPTO_AES_ARM64_CE_CCM tristate "AES in CCM mode using ARMv8 Crypto Extensions" diff --git a/arch/arm64/crypto/aes-ce-cipher.c b/arch/arm64/crypto/aes-ce-cipher.c index a0a0e5e3a8b5..6a75cd75ed11 100644 --- a/arch/arm64/crypto/aes-ce-cipher.c +++ b/arch/arm64/crypto/aes-ce-cipher.c @@ -9,6 +9,7 @@ */ #include +#include #include #include #include @@ -21,6 +22,9 @@ MODULE_DESCRIPTION("Synchronous AES cipher using ARMv8 Crypto Extensions"); MODULE_AUTHOR("Ard Biesheuvel "); MODULE_LICENSE("GPL v2"); +asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds); +asmlinkage void __aes_arm64_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds); + struct aes_block { u8 b[AES_BLOCK_SIZE]; }; @@ -45,7 +49,12 @@ static void aes_cipher_encrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[]) void *dummy0; int dummy1; - kernel_neon_begin_partial(4); + if (!may_use_simd()) { + __aes_arm64_encrypt(ctx->key_enc, dst, src, num_rounds(ctx)); + return; + } + + kernel_neon_begin(); __asm__(" ld1 {v0.16b}, %[in] ;" " ld1 {v1.4s}, [%[key]], #16 ;" @@ -90,7 +99,12 @@ static void aes_cipher_decrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[]) void *dummy0; int dummy1; - kernel_neon_begin_partial(4); + if (!may_use_simd()) { + __aes_arm64_decrypt(ctx->key_dec, dst, src, num_rounds(ctx)); + return; + } + + kernel_neon_begin(); __asm__(" ld1 {v0.16b}, %[in] ;" " ld1 {v1.4s}, [%[key]], #16 ;" @@ -170,7 +184,7 @@ int ce_aes_expandkey(struct crypto_aes_ctx *ctx, const u8 *in_key, for (i = 0; i < kwords; i++) ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32)); - kernel_neon_begin_partial(2); + kernel_neon_begin(); for (i = 0; i < sizeof(rcon); i++) { u32 *rki = ctx->key_enc + (i * kwords); u32 *rko = rki + kwords; From patchwork Sat Jun 10 16:22:55 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 103564 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp270880qgd; Sat, 10 Jun 2017 09:23:29 -0700 (PDT) X-Received: by 10.98.65.215 with SMTP id g84mr25282412pfd.204.1497111809117; Sat, 10 Jun 2017 09:23:29 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1497111809; cv=none; d=google.com; s=arc-20160816; b=uxKGJSMKgppU8zwXXvUYrWjKQ8fPh94BwMydQPOP+6HwJ5qQSAXZdnfBOYrsGA2TaU uueTYNM0z0d/nWxLJ4mtmMX4pg/NAmaVkwO68CPeHYVVdHG2PzWcFCBSNzjM1jQC2wXm 576+NUgA3Ql7odLVeK/RelboWxNP3rIrF2vvyAqeAPdSVLLojtLWaSRNeLfGkC8XTYnm km78Yig3E78fY4ohFGOI0LsHCHU548XTcrzbUTq8D5B+ksCGLRX+VJ3lr6DhrsRFMnyH 2ezRkdgqY/1N+Zpzy04s7ica1r0Djh/L+q1sY+MfhR4fHzYvQw6CqV0Qw7Q3uPbPGb3C WRCQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:arc-authentication-results; bh=x9D0mhsggP8lgsMUwDjfg4V2c7P96I9xtuFWw6oKj6g=; 
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 09/12] crypto: arm64/aes-ce-ccm: add non-SIMD generic fallback
Date: Sat, 10 Jun 2017 16:22:55 +0000
Message-Id:
<1497111778-4210-10-git-send-email-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org> References: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The arm64 kernel will shortly disallow nested kernel mode NEON. So honour this in the ARMv8 Crypto Extensions implementation of CCM-AES, and fall back to a dynamically instantiated ccm(aes) implementation if necessary (which will in all likelihood be produced by the generic CCM, CTR and AES drivers). Due to the fact that this may break the boottime algo tests, this driver can now only be built as a module. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/Kconfig | 3 +- arch/arm64/crypto/aes-ce-ccm-glue.c | 152 +++++++++++++++----- 2 files changed, 116 insertions(+), 39 deletions(-) -- 2.7.4 diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index 772801f263d9..c3b74db72cc8 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -56,10 +56,11 @@ config CRYPTO_AES_ARM64_CE config CRYPTO_AES_ARM64_CE_CCM tristate "AES in CCM mode using ARMv8 Crypto Extensions" - depends on ARM64 && KERNEL_MODE_NEON + depends on KERNEL_MODE_NEON && m select CRYPTO_ALGAPI select CRYPTO_AES_ARM64_CE select CRYPTO_AEAD + select CRYPTO_CCM config CRYPTO_AES_ARM64_CE_BLK tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions" diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c index 6a7dbc7c83a6..c5ae50141988 100644 --- a/arch/arm64/crypto/aes-ce-ccm-glue.c +++ b/arch/arm64/crypto/aes-ce-ccm-glue.c @@ -1,7 +1,7 @@ /* * aes-ccm-glue.c - AES-CCM transform for ARMv8 with Crypto Extensions * - * Copyright (C) 2013 - 2014 Linaro Ltd + * Copyright (C) 2013 - 2017 Linaro Ltd * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -9,6 +9,7 @@ */ #include +#include #include #include #include @@ -18,6 +19,11 @@ #include "aes-ce-setkey.h" +struct crypto_aes_ccm_ctx { + struct crypto_aes_ctx key; + struct crypto_aead *fallback; +}; + static int num_rounds(struct crypto_aes_ctx *ctx) { /* @@ -47,22 +53,33 @@ asmlinkage void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u32 const rk[], static int ccm_setkey(struct crypto_aead *tfm, const u8 *in_key, unsigned int key_len) { - struct crypto_aes_ctx *ctx = crypto_aead_ctx(tfm); + struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(tfm); int ret; - ret = ce_aes_expandkey(ctx, in_key, key_len); - if (!ret) - return 0; + ret = ce_aes_expandkey(&ctx->key, in_key, key_len); + if (ret) { + tfm->base.crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; + return ret; + } + + ret = crypto_aead_setkey(ctx->fallback, in_key, key_len); + if (ret) { + if (ctx->fallback->base.crt_flags & CRYPTO_TFM_RES_BAD_KEY_LEN) + tfm->base.crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; + return ret; + } - tfm->base.crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; - return -EINVAL; + return 0; } static int ccm_setauthsize(struct crypto_aead *tfm, unsigned int authsize) { + struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(tfm); + if ((authsize & 1) || authsize < 4) return -EINVAL; - return 0; + + return crypto_aead_setauthsize(ctx->fallback, authsize); } static int ccm_init_mac(struct aead_request *req, u8 maciv[], u32 msglen) @@ -106,7 +123,7 @@ static int ccm_init_mac(struct aead_request *req, u8 maciv[], u32 msglen) 
static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[]) { struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead); + struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(aead); struct __packed { __be16 l; __be32 h; u16 len; } ltag; struct scatter_walk walk; u32 len = req->assoclen; @@ -122,8 +139,8 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[]) ltag.len = 6; } - ce_aes_ccm_auth_data(mac, (u8 *)<ag, ltag.len, &macp, ctx->key_enc, - num_rounds(ctx)); + ce_aes_ccm_auth_data(mac, (u8 *)<ag, ltag.len, &macp, + ctx->key.key_enc, num_rounds(&ctx->key)); scatterwalk_start(&walk, req->src); do { @@ -135,8 +152,8 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[]) n = scatterwalk_clamp(&walk, len); } p = scatterwalk_map(&walk); - ce_aes_ccm_auth_data(mac, p, n, &macp, ctx->key_enc, - num_rounds(ctx)); + ce_aes_ccm_auth_data(mac, p, n, &macp, ctx->key.key_enc, + num_rounds(&ctx->key)); len -= n; scatterwalk_unmap(p); @@ -148,18 +165,34 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[]) static int ccm_encrypt(struct aead_request *req) { struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead); + struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(aead); struct skcipher_walk walk; u8 __aligned(8) mac[AES_BLOCK_SIZE]; u8 buf[AES_BLOCK_SIZE]; u32 len = req->cryptlen; int err; + if (!may_use_simd()) { + struct aead_request *fallback_req; + + fallback_req = aead_request_alloc(ctx->fallback, GFP_ATOMIC); + if (!fallback_req) + return -ENOMEM; + + aead_request_set_ad(fallback_req, req->assoclen); + aead_request_set_crypt(fallback_req, req->src, req->dst, + req->cryptlen, req->iv); + + err = crypto_aead_encrypt(fallback_req); + aead_request_free(fallback_req); + return err; + } + err = ccm_init_mac(req, mac, len); if (err) return err; - kernel_neon_begin_partial(6); + kernel_neon_begin(); if (req->assoclen) ccm_calculate_auth_mac(req, mac); @@ -176,13 +209,14 @@ static int ccm_encrypt(struct aead_request *req) tail = 0; ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr, - walk.nbytes - tail, ctx->key_enc, - num_rounds(ctx), mac, walk.iv); + walk.nbytes - tail, ctx->key.key_enc, + num_rounds(&ctx->key), mac, walk.iv); err = skcipher_walk_done(&walk, tail); } if (!err) - ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx)); + ce_aes_ccm_final(mac, buf, ctx->key.key_enc, + num_rounds(&ctx->key)); kernel_neon_end(); @@ -199,7 +233,7 @@ static int ccm_encrypt(struct aead_request *req) static int ccm_decrypt(struct aead_request *req) { struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead); + struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(aead); unsigned int authsize = crypto_aead_authsize(aead); struct skcipher_walk walk; u8 __aligned(8) mac[AES_BLOCK_SIZE]; @@ -207,11 +241,27 @@ static int ccm_decrypt(struct aead_request *req) u32 len = req->cryptlen - authsize; int err; + if (!may_use_simd()) { + struct aead_request *fallback_req; + + fallback_req = aead_request_alloc(ctx->fallback, GFP_ATOMIC); + if (!fallback_req) + return -ENOMEM; + + aead_request_set_ad(fallback_req, req->assoclen); + aead_request_set_crypt(fallback_req, req->src, req->dst, + req->cryptlen, req->iv); + + err = crypto_aead_decrypt(fallback_req); + aead_request_free(fallback_req); + return err; + } + err = ccm_init_mac(req, mac, len); if (err) return err; - kernel_neon_begin_partial(6); + 
kernel_neon_begin(); if (req->assoclen) ccm_calculate_auth_mac(req, mac); @@ -228,13 +278,14 @@ static int ccm_decrypt(struct aead_request *req) tail = 0; ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr, - walk.nbytes - tail, ctx->key_enc, - num_rounds(ctx), mac, walk.iv); + walk.nbytes - tail, ctx->key.key_enc, + num_rounds(&ctx->key), mac, walk.iv); err = skcipher_walk_done(&walk, tail); } if (!err) - ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx)); + ce_aes_ccm_final(mac, buf, ctx->key.key_enc, + num_rounds(&ctx->key)); kernel_neon_end(); @@ -251,28 +302,53 @@ static int ccm_decrypt(struct aead_request *req) return 0; } +static int ccm_init(struct crypto_aead *aead) +{ + struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(aead); + struct crypto_aead *tfm; + + tfm = crypto_alloc_aead("ccm(aes)", 0, + CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK); + + if (IS_ERR(tfm)) + return PTR_ERR(tfm); + + ctx->fallback = tfm; + return 0; +} + +static void ccm_exit(struct crypto_aead *aead) +{ + struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(aead); + + crypto_free_aead(ctx->fallback); +} + static struct aead_alg ccm_aes_alg = { - .base = { - .cra_name = "ccm(aes)", - .cra_driver_name = "ccm-aes-ce", - .cra_priority = 300, - .cra_blocksize = 1, - .cra_ctxsize = sizeof(struct crypto_aes_ctx), - .cra_module = THIS_MODULE, - }, - .ivsize = AES_BLOCK_SIZE, - .chunksize = AES_BLOCK_SIZE, - .maxauthsize = AES_BLOCK_SIZE, - .setkey = ccm_setkey, - .setauthsize = ccm_setauthsize, - .encrypt = ccm_encrypt, - .decrypt = ccm_decrypt, + .base.cra_name = "ccm(aes)", + .base.cra_driver_name = "ccm-aes-ce", + .base.cra_priority = 300, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct crypto_aes_ccm_ctx), + .base.cra_module = THIS_MODULE, + .base.cra_flags = CRYPTO_ALG_NEED_FALLBACK, + + .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .maxauthsize = AES_BLOCK_SIZE, + .setkey = ccm_setkey, + .setauthsize = ccm_setauthsize, + .encrypt = ccm_encrypt, + .decrypt = ccm_decrypt, + .init = ccm_init, + .exit = ccm_exit, }; static int __init aes_mod_init(void) { if (!(elf_hwcap & HWCAP_AES)) return -ENODEV; + return crypto_register_aead(&ccm_aes_alg); } From patchwork Sat Jun 10 16:22:56 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 103565 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp270892qgd; Sat, 10 Jun 2017 09:23:31 -0700 (PDT) X-Received: by 10.98.16.72 with SMTP id y69mr21727493pfi.30.1497111811861; Sat, 10 Jun 2017 09:23:31 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1497111811; cv=none; d=google.com; s=arc-20160816; b=ThOkwuDboX2kbmouUD6gK/n5/r8Vh5S4ImPxzMm+YjydkAQCE8Zw/lM4gGpKf3Jtvw jQM39YolPtsO7zFm6EdIyzGdU8SSNMx/TF2QT69SkdX+w2luOU75cqZMyV5IW9/ejY64 vm1EY8rRUmE3nmkeWr+l/QYsrtDXzqe1DII/PFg3Btckn9pzHAnzUUyzVKwfTnbyiTRJ fN8W7y8o3aIo4d1LGG/tHNOImE6CS5avSOfKgsK5c+tCkaX8XZcfA7FTZSSKSP+Q8OaW DQENEanhCxG3eIHn+zSOWrl73RatVKOyMK/bAzT5uM6STPMHqGLI/kD7HGIOGdKrEPdB enmg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:arc-authentication-results; bh=me+hoaerXcnmqKC19wr10U7imjolSkqJ9ob5UxupJUw=; b=FAVKkJrT1Bx8cqoagcKFK/DGajcRY1V2Pbzko2yXklV+bn11nkKW/dX2hjiX3Jgwvh QR6iYt7DzSP+Dpfm67YacXhSw5sUud/ijJmx1nz5AXFrKoNHNd+OhhvDiLPcEyM6nHl0 8/TO9sCwLX5/8qHSjIG94iXQ6cE8+qyQlRaWgbzjMkLa8ieDxC891n2eyuIhJESMVShP 
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 10/12] crypto: arm64/aes-blk - add a non-SIMD fallback for synchronous CTR
Date: Sat, 10 Jun 2017 16:22:56 +0000
Message-Id: <1497111778-4210-11-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>
References:
<1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org To accommodate systems that may disallow use of the NEON in kernel mode in some circumstances, introduce a C fallback for synchronous AES in CTR mode, and use it if may_use_simd() returns false. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/Kconfig | 7 ++- arch/arm64/crypto/aes-ctr-fallback.h | 55 ++++++++++++++++++++ arch/arm64/crypto/aes-glue.c | 17 +++++- 3 files changed, 75 insertions(+), 4 deletions(-) -- 2.7.4 diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index c3b74db72cc8..6bd1921d8ca2 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -64,17 +64,20 @@ config CRYPTO_AES_ARM64_CE_CCM config CRYPTO_AES_ARM64_CE_BLK tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions" - depends on ARM64 && KERNEL_MODE_NEON + depends on KERNEL_MODE_NEON select CRYPTO_BLKCIPHER select CRYPTO_AES_ARM64_CE + select CRYPTO_AES select CRYPTO_SIMD + select CRYPTO_AES_ARM64 config CRYPTO_AES_ARM64_NEON_BLK tristate "AES in ECB/CBC/CTR/XTS modes using NEON instructions" - depends on ARM64 && KERNEL_MODE_NEON + depends on KERNEL_MODE_NEON select CRYPTO_BLKCIPHER select CRYPTO_AES select CRYPTO_SIMD + select CRYPTO_AES_ARM64 config CRYPTO_CHACHA20_NEON tristate "NEON accelerated ChaCha20 symmetric cipher" diff --git a/arch/arm64/crypto/aes-ctr-fallback.h b/arch/arm64/crypto/aes-ctr-fallback.h new file mode 100644 index 000000000000..4a6bfac6ecb5 --- /dev/null +++ b/arch/arm64/crypto/aes-ctr-fallback.h @@ -0,0 +1,55 @@ +/* + * Fallback for sync aes(ctr) in contexts where kernel mode NEON + * is not allowed + * + * Copyright (C) 2017 Linaro Ltd + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include +#include + +asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds); + +static inline int aes_ctr_encrypt_fallback(struct crypto_aes_ctx *ctx, + struct skcipher_request *req) +{ + struct skcipher_walk walk; + u8 buf[AES_BLOCK_SIZE]; + int err; + + err = skcipher_walk_virt(&walk, req, true); + + while (walk.nbytes > 0) { + u8 *dst = walk.dst.virt.addr; + u8 *src = walk.src.virt.addr; + int nbytes = walk.nbytes; + int tail = 0; + + if (nbytes < walk.total) { + nbytes = round_down(nbytes, AES_BLOCK_SIZE); + tail = walk.nbytes % AES_BLOCK_SIZE; + } + + do { + int bsize = min(nbytes, AES_BLOCK_SIZE); + + __aes_arm64_encrypt(ctx->key_enc, buf, walk.iv, + ctx->key_length / 4 + 6); + if (dst != src) + memcpy(dst, src, bsize); + crypto_xor(dst, buf, bsize); + crypto_inc(walk.iv, AES_BLOCK_SIZE); + + dst += AES_BLOCK_SIZE; + src += AES_BLOCK_SIZE; + nbytes -= AES_BLOCK_SIZE; + } while (nbytes > 0); + + err = skcipher_walk_done(&walk, tail); + } + return err; +} diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index bcf596b0197e..6806ad7d8dd4 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -10,6 +10,7 @@ #include #include +#include #include #include #include @@ -19,6 +20,7 @@ #include #include "aes-ce-setkey.h" +#include "aes-ctr-fallback.h" #ifdef USE_V8_CRYPTO_EXTENSIONS #define MODE "ce" @@ -251,6 +253,17 @@ static int ctr_encrypt(struct skcipher_request *req) return err; } +static int ctr_encrypt_sync(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + + if (!may_use_simd()) + return aes_ctr_encrypt_fallback(ctx, req); + + return ctr_encrypt(req); +} + static int xts_encrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); @@ -357,8 +370,8 @@ static struct skcipher_alg aes_algs[] = { { .ivsize = AES_BLOCK_SIZE, .chunksize = AES_BLOCK_SIZE, .setkey = skcipher_aes_setkey, - .encrypt = ctr_encrypt, - .decrypt = ctr_encrypt, + .encrypt = ctr_encrypt_sync, + .decrypt = ctr_encrypt_sync, }, { .base = { .cra_name = "__xts(aes)", From patchwork Sat Jun 10 16:22:57 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 103566 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp270906qgd; Sat, 10 Jun 2017 09:23:34 -0700 (PDT) X-Received: by 10.84.233.204 with SMTP id m12mr48206170pln.273.1497111814255; Sat, 10 Jun 2017 09:23:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1497111814; cv=none; d=google.com; s=arc-20160816; b=KRFfvs9KMZUoqwIqigzRvuhQIO2b1Qz6U0l8kIl640HzVh+ZwJGdVG0+qrEHCN9Rsv DJLt9gRLx8wocOlLtRi/PzH/65+p/xI6ow04/fvVH/G1rPHHQvH6NDy80spJfmkT4Eav WvPuge4ebIHBb/qW3aK0ujU7Y/78IptSKsdGqAhjnJw1gk7j0WtI7HfRB3DMERKVOqG4 QeIqE3VVBUBqkx8VHEpmVFRQsz+3rRj6lC7Bcg7HX3F3pbqloOCGA+doobzokfS+pcgt T8nWJOs9iRgc0b3tvzo9OTG7uHH0mipnxnQzhTey6886jsQFueyoKtbXKgT5iUvNBvNL r8wg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:arc-authentication-results; bh=Gjn54yeOEILt3asFCwopbWX4iaXQMbY9VZg15wikTMw=; b=SWEIhGfO03mFA0SibEAIcxYfbGv3PsvzG+fwW6IVkBXGXHPoB5b4GDsc43HuJTJwng f92XH1z3w1ZJZqyvV5AB+cpBIVi5EEKyWpj2mrJin38EPdkQndwfB63d9+tlRJBRPpLM xrQuttUjQoegBT9COHJEQXdSZv2Ig3zMS7FryHSvI84umXj6FrUpr8K1BHu0hfVGr6tB 
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 11/12] crypto: arm64/chacha20 - take may_use_simd() into account
Date: Sat, 10 Jun 2017 16:22:57 +0000
Message-Id: <1497111778-4210-12-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>
References:
<1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org To accommodate systems that disallow the use of kernel mode NEON in some circumstances, take the return value of may_use_simd into account when deciding whether to invoke the C fallback routine. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/chacha20-neon-glue.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) -- 2.7.4 diff --git a/arch/arm64/crypto/chacha20-neon-glue.c b/arch/arm64/crypto/chacha20-neon-glue.c index a7cd575ea223..cbdb75d15cd0 100644 --- a/arch/arm64/crypto/chacha20-neon-glue.c +++ b/arch/arm64/crypto/chacha20-neon-glue.c @@ -1,7 +1,7 @@ /* * ChaCha20 256-bit cipher algorithm, RFC7539, arm64 NEON functions * - * Copyright (C) 2016 Linaro, Ltd. + * Copyright (C) 2016 - 2017 Linaro, Ltd. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -26,6 +26,7 @@ #include #include +#include asmlinkage void chacha20_block_xor_neon(u32 *state, u8 *dst, const u8 *src); asmlinkage void chacha20_4block_xor_neon(u32 *state, u8 *dst, const u8 *src); @@ -64,7 +65,7 @@ static int chacha20_neon(struct skcipher_request *req) u32 state[16]; int err; - if (req->cryptlen <= CHACHA20_BLOCK_SIZE) + if (!may_use_simd() || req->cryptlen <= CHACHA20_BLOCK_SIZE) return crypto_chacha20_crypt(req); err = skcipher_walk_virt(&walk, req, true); From patchwork Sat Jun 10 16:22:58 2017 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 103567 Delivered-To: patch@linaro.org Received: by 10.140.91.77 with SMTP id y71csp270918qgd; Sat, 10 Jun 2017 09:23:38 -0700 (PDT) X-Received: by 10.84.238.139 with SMTP id v11mr47733582plk.182.1497111818167; Sat, 10 Jun 2017 09:23:38 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1497111818; cv=none; d=google.com; s=arc-20160816; b=ugMGTsOx9/jdf7lSBs8fu40oNnz7kW2w/cj+EZHr3HTHtoSG4Uk8fnOgKY69NZZ4yT LdYiTKkpded/lnMvaLkUoLFWsSoqT9BuKOxqLHddJ1yQYfJ8LyuPHzvF9pIz2hTEegXH h37rDSpAmjFW+sWBLdz+wjYIXJS0E0pxx/r6qIpN4ELI82ylghy6lyODPnUze7doqa+E pb7QEaadZONuxPOyOg7kA52Rd4pCRMGQLKFMIkVpz9GAJIZ+DxfYFMlrB9OpREl5wOuV LXi95DYAbrcphu55APJY4huArJzHt3jF/odjbnSYDo92YzGi0x5J6SURrYW1BOGOClcs b9zg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:arc-authentication-results; bh=rQ0pwNW5jf3skcY25/UJ7xqY4ig93QlAzOon8vDUJqg=; b=r+AhlwcXadvRFFSw/8EHxDE4fpcZPAoCBDa4zv+wpzt1jNARHv5welhSb+n6yrANGN IWgiu8SgGi80itufswsPAMPxaL2hMdI02tcVrQaLdOGomy4q/nakWtfxYyeFbwXtaRRF iNI8VU7gPWbrdAfr/ltt40tj8+6v/FjCG+J7g1NoFpMhcM16pSwtLHdd7TQwY4xhQ6FE mZSVgVa6zGQ4MIQmeZAIxLQmNAN6UlhviLDx1+WuXOnH+xzRFsEkuMv/aNO55cp2wMx3 8nL907hhNvynbkDH/jWbw9YzpkmGPL8hPs9ia1VcfTLG7MVQuTwwxWZM495UtqnJpPIF QZYw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com, will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 12/12] crypto: arm64/aes-bs - implement non-SIMD fallback for AES-CTR
Date: Sat, 10 Jun 2017 16:22:58 +0000
Message-Id: <1497111778-4210-13-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>

Of the various chaining modes implemented by the bit sliced AES driver,
only CTR is exposed as a synchronous cipher, and requires a fallback in
order to remain usable once we update the kernel mode NEON handling
logic to disallow nested use. So wire up the existing CTR fallback
C code.
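[Editor's note: concretely, the wiring below gives the synchronous CTR variant
its own context that carries a full struct crypto_aes_ctx next to the bit-sliced
keys, because the scalar fallback cannot consume the bit-sliced key schedule;
setkey expands the key for the fallback and converts that expansion for NEON,
and the encrypt path picks between the two based on may_use_simd(). A condensed
sketch of the dispatch half of that arrangement, with names taken from the hunks
and the NEON ctr_encrypt() routine assumed to exist as before:]

    struct aesbs_ctr_ctx {
            struct aesbs_ctx key;           /* bit-sliced round keys, must be first */
            struct crypto_aes_ctx fallback; /* regular schedule for the scalar path */
    };

    static int ctr_encrypt_sync(struct skcipher_request *req)
    {
            struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
            struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);

            if (!may_use_simd())
                    /* walk the request, xoring in scalar-AES keystream blocks */
                    return aes_ctr_encrypt_fallback(&ctx->fallback, req);

            return ctr_encrypt(req);        /* NEON bit-sliced path */
    }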
Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/aes-neonbs-glue.c | 48 ++++++++++++++++++-- 1 file changed, 43 insertions(+), 5 deletions(-) -- 2.7.4 diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c index db2501d93550..5fe442c26ff1 100644 --- a/arch/arm64/crypto/aes-neonbs-glue.c +++ b/arch/arm64/crypto/aes-neonbs-glue.c @@ -1,7 +1,7 @@ /* * Bit sliced AES using NEON instructions * - * Copyright (C) 2016 Linaro Ltd + * Copyright (C) 2016 - 2017 Linaro Ltd * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -9,12 +9,15 @@ */ #include +#include #include #include #include #include #include +#include "aes-ctr-fallback.h" + MODULE_AUTHOR("Ard Biesheuvel "); MODULE_LICENSE("GPL v2"); @@ -58,6 +61,11 @@ struct aesbs_cbc_ctx { u32 enc[AES_MAX_KEYLENGTH_U32]; }; +struct aesbs_ctr_ctx { + struct aesbs_ctx key; /* must be first member */ + struct crypto_aes_ctx fallback; +}; + struct aesbs_xts_ctx { struct aesbs_ctx key; u32 twkey[AES_MAX_KEYLENGTH_U32]; @@ -196,6 +204,25 @@ static int cbc_decrypt(struct skcipher_request *req) return err; } +static int aesbs_ctr_setkey_sync(struct crypto_skcipher *tfm, const u8 *in_key, + unsigned int key_len) +{ + struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm); + int err; + + err = crypto_aes_expand_key(&ctx->fallback, in_key, key_len); + if (err) + return err; + + ctx->key.rounds = 6 + key_len / 4; + + kernel_neon_begin(); + aesbs_convert_key(ctx->key.rk, ctx->fallback.key_enc, ctx->key.rounds); + kernel_neon_end(); + + return 0; +} + static int ctr_encrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); @@ -260,6 +287,17 @@ static int aesbs_xts_setkey(struct crypto_skcipher *tfm, const u8 *in_key, return aesbs_setkey(tfm, in_key, key_len); } +static int ctr_encrypt_sync(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm); + + if (!may_use_simd()) + return aes_ctr_encrypt_fallback(&ctx->fallback, req); + + return ctr_encrypt(req); +} + static int __xts_crypt(struct skcipher_request *req, void (*fn)(u8 out[], u8 const in[], u8 const rk[], int rounds, int blocks, u8 iv[])) @@ -356,7 +394,7 @@ static struct skcipher_alg aes_algs[] = { { .base.cra_driver_name = "ctr-aes-neonbs", .base.cra_priority = 250 - 1, .base.cra_blocksize = 1, - .base.cra_ctxsize = sizeof(struct aesbs_ctx), + .base.cra_ctxsize = sizeof(struct aesbs_ctr_ctx), .base.cra_module = THIS_MODULE, .min_keysize = AES_MIN_KEY_SIZE, @@ -364,9 +402,9 @@ static struct skcipher_alg aes_algs[] = { { .chunksize = AES_BLOCK_SIZE, .walksize = 8 * AES_BLOCK_SIZE, .ivsize = AES_BLOCK_SIZE, - .setkey = aesbs_setkey, - .encrypt = ctr_encrypt, - .decrypt = ctr_encrypt, + .setkey = aesbs_ctr_setkey_sync, + .encrypt = ctr_encrypt_sync, + .decrypt = ctr_encrypt_sync, }, { .base.cra_name = "__xts(aes)", .base.cra_driver_name = "__xts-aes-neonbs",