From patchwork Sat Jun 10 16:22:52 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 103561
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
	linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com,
	will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH 06/12] crypto: arm64/sha2-ce - add non-SIMD scalar fallback
Date: Sat, 10 Jun 2017 16:22:52 +0000
Message-Id: <1497111778-4210-7-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>

The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar code that can be invoked in that case.
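For reviewers skimming the diff, every shash hook in the glue code now
follows the same shape, sketched below (illustrative only, using the
helpers named in this patch; see the actual hunks for the exact code):

	static int sha256_ce_update(struct shash_desc *desc, const u8 *data,
				    unsigned int len)
	{
		struct sha256_ce_state *sctx = shash_desc_ctx(desc);

		/*
		 * The NEON register file is off limits here (e.g. we were
		 * invoked from a context that interrupted other kernel mode
		 * NEON code), so process the data with the scalar routine
		 * provided by the sha256-arm64 module instead.
		 */
		if (!may_use_simd())
			return sha256_base_do_update(desc, data, len,
					(sha256_block_fn *)sha256_block_data_order);

		sctx->finalize = 0;
		kernel_neon_begin();
		sha256_base_do_update(desc, data, len,
				      (sha256_block_fn *)sha2_ce_transform);
		kernel_neon_end();

		return 0;
	}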
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/Kconfig        |  3 +-
 arch/arm64/crypto/sha2-ce-glue.c | 30 +++++++++++++++++---
 arch/arm64/crypto/sha256-glue.c  |  1 +
 3 files changed, 29 insertions(+), 5 deletions(-)

-- 
2.7.4

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 5d5953545dad..8cd145f9c1ff 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -24,8 +24,9 @@ config CRYPTO_SHA1_ARM64_CE
 
 config CRYPTO_SHA2_ARM64_CE
 	tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON
 	select CRYPTO_HASH
+	select CRYPTO_SHA256_ARM64
 
 config CRYPTO_GHASH_ARM64_CE
 	tristate "GHASH (for GCM chaining mode) using ARMv8 Crypto Extensions"
diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
index 7cd587564a41..eb71543568b6 100644
--- a/arch/arm64/crypto/sha2-ce-glue.c
+++ b/arch/arm64/crypto/sha2-ce-glue.c
@@ -1,7 +1,7 @@
 /*
  * sha2-ce-glue.c - SHA-224/SHA-256 using ARMv8 Crypto Extensions
  *
- * Copyright (C) 2014 Linaro Ltd <ard.biesheuvel@linaro.org>
+ * Copyright (C) 2014 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -9,6 +9,7 @@
  */
 
 #include <asm/neon.h>
+#include <asm/simd.h>
 #include <asm/unaligned.h>
 #include <crypto/internal/hash.h>
 #include <crypto/sha.h>
@@ -32,13 +33,19 @@ struct sha256_ce_state {
 asmlinkage void sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
 				  int blocks);
 
+asmlinkage void sha256_block_data_order(u32 *digest, u8 const *src, int blocks);
+
 static int sha256_ce_update(struct shash_desc *desc, const u8 *data,
 			    unsigned int len)
 {
 	struct sha256_ce_state *sctx = shash_desc_ctx(desc);
 
+	if (!may_use_simd())
+		return sha256_base_do_update(desc, data, len,
+				(sha256_block_fn *)sha256_block_data_order);
+
 	sctx->finalize = 0;
-	kernel_neon_begin_partial(28);
+	kernel_neon_begin();
 	sha256_base_do_update(desc, data, len,
 			      (sha256_block_fn *)sha2_ce_transform);
 	kernel_neon_end();
@@ -57,13 +64,22 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
 	ASM_EXPORT(sha256_ce_offsetof_finalize,
 		   offsetof(struct sha256_ce_state, finalize));
 
+	if (!may_use_simd()) {
+		if (len)
+			sha256_base_do_update(desc, data, len,
+				(sha256_block_fn *)sha256_block_data_order);
+		sha256_base_do_finalize(desc,
+				(sha256_block_fn *)sha256_block_data_order);
+		return sha256_base_finish(desc, out);
+	}
+
 	/*
 	 * Allow the asm code to perform the finalization if there is no
 	 * partial data and the input is a round multiple of the block size.
 	 */
 	sctx->finalize = finalize;
 
-	kernel_neon_begin_partial(28);
+	kernel_neon_begin();
 	sha256_base_do_update(desc, data, len,
 			      (sha256_block_fn *)sha2_ce_transform);
 	if (!finalize)
@@ -77,8 +93,14 @@ static int sha256_ce_final(struct shash_desc *desc, u8 *out)
 {
 	struct sha256_ce_state *sctx = shash_desc_ctx(desc);
 
+	if (!may_use_simd()) {
+		sha256_base_do_finalize(desc,
+				(sha256_block_fn *)sha256_block_data_order);
+		return sha256_base_finish(desc, out);
+	}
+
 	sctx->finalize = 0;
-	kernel_neon_begin_partial(28);
+	kernel_neon_begin();
 	sha256_base_do_finalize(desc, (sha256_block_fn *)sha2_ce_transform);
 	kernel_neon_end();
 	return sha256_base_finish(desc, out);
diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index a2226f841960..b064d925fe2a 100644
--- a/arch/arm64/crypto/sha256-glue.c
+++ b/arch/arm64/crypto/sha256-glue.c
@@ -29,6 +29,7 @@ MODULE_ALIAS_CRYPTO("sha256");
 
 asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
 					unsigned int num_blks);
+EXPORT_SYMBOL(sha256_block_data_order);
 
 asmlinkage void sha256_block_neon(u32 *digest, const void *data,
 				  unsigned int num_blks);
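A note on the Kconfig and sha256-glue.c changes above: the CE glue code
now calls sha256_block_data_order(), which lives in the scalar/NEON
SHA-256 module built from sha256-glue.c (CRYPTO_SHA256_ARM64), so that
option must be selected and the symbol exported for the sha2-ce module
to link and load. Callers see no difference either way; a hypothetical
in-kernel user (function name and error handling invented here purely
for illustration) keeps going through the generic shash API and is
routed to the scalar path automatically whenever SIMD is not usable:

	#include <crypto/hash.h>
	#include <linux/err.h>

	/* Compute a SHA-256 digest; out must hold SHA256_DIGEST_SIZE bytes. */
	static int sha256_digest_example(const u8 *data, unsigned int len, u8 *out)
	{
		struct crypto_shash *tfm;
		int err;

		/* Picks the highest-priority "sha256", i.e. sha2-ce on CE-capable CPUs. */
		tfm = crypto_alloc_shash("sha256", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		{
			SHASH_DESC_ON_STACK(desc, tfm);

			desc->tfm = tfm;
			desc->flags = 0;	/* shash_desc::flags still exists in kernels of this era */
			err = crypto_shash_digest(desc, data, len, out);
		}

		crypto_free_shash(tfm);
		return err;
	}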