From patchwork Mon Mar 30 09:48:27 2015
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 46496
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org,
	samitolvanen@google.com, herbert@gondor.apana.org.au,
	jussi.kivilinna@iki.fi, stockhausen@collogia.de, x86@kernel.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v2 resend 08/14] crypto/arm: move SHA-1 ARMv8 implementation to base layer
Date: Mon, 30 Mar 2015 11:48:27 +0200
Message-Id: <1427708913-29678-9-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1427708913-29678-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1427708913-29678-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org
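
The conversion below relies on the shared SHA-1 base layer added earlier in this
series: the glue code only decides when NEON may be used and which block
transform to run, while block buffering, padding and finalisation live in
common code. As a rough sketch of what that common update helper does (the name
crypto_sha1_base_do_update() and its argument list are inferred from the calls
in the glue code below, and the body simply mirrors the partial-block buffering
that this patch deletes from sha1_update(), so it is illustrative rather than
the series' actual implementation):

/*
 * Illustrative sketch only: the real helper is introduced by an earlier
 * patch in this series. Name and signature are assumptions based on how
 * the glue code below calls it; the body restates the buffering logic
 * removed from sha1_update() by this patch.
 */
#include <crypto/internal/hash.h>
#include <crypto/sha.h>
#include <linux/string.h>

typedef void (sha1_block_fn)(int blocks, u8 const *src, u32 *state,
			     const u8 *head, void *p);

static int sha1_base_do_update_sketch(struct shash_desc *desc, const u8 *data,
				      unsigned int len,
				      sha1_block_fn *block_fn, void *p)
{
	struct sha1_state *sctx = shash_desc_ctx(desc);
	unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;

	sctx->count += len;

	if (partial + len >= SHA1_BLOCK_SIZE) {
		int blocks;

		if (partial) {
			int copy = SHA1_BLOCK_SIZE - partial;

			/* top up the buffered partial block first */
			memcpy(sctx->buffer + partial, data, copy);
			data += copy;
			len -= copy;
		}

		blocks = len / SHA1_BLOCK_SIZE;
		len %= SHA1_BLOCK_SIZE;

		/*
		 * Hand the full blocks (plus any buffered head block)
		 * to the CPU-specific transform.
		 */
		block_fn(blocks, data, sctx->state,
			 partial ? sctx->buffer : NULL, p);

		data += blocks * SHA1_BLOCK_SIZE;
		partial = 0;
	}
	if (len)
		memcpy(sctx->buffer + partial, data, len);

	return 0;
}

The finup/final paths follow the same shape: feed any trailing data through the
update helper, let crypto_sha1_base_do_finalize() handle padding and the length
block, and copy out the digest with crypto_sha1_base_finish().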

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/crypto/Kconfig        |   2 +-
 arch/arm/crypto/sha1-ce-glue.c | 110 +++++++++++------------------------------
 2 files changed, 31 insertions(+), 81 deletions(-)

diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
index c111d8992afb..31ad19f18af2 100644
--- a/arch/arm/crypto/Kconfig
+++ b/arch/arm/crypto/Kconfig
@@ -32,7 +32,7 @@ config CRYPTO_SHA1_ARM_CE
 	tristate "SHA1 digest algorithm (ARM v8 Crypto Extensions)"
 	depends on KERNEL_MODE_NEON
 	select CRYPTO_SHA1_ARM
-	select CRYPTO_SHA1
+	select CRYPTO_SHA1_BASE
 	select CRYPTO_HASH
 	help
 	  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2) implemented

diff --git a/arch/arm/crypto/sha1-ce-glue.c b/arch/arm/crypto/sha1-ce-glue.c
index a9dd90df9fd7..29039d1bcdf9 100644
--- a/arch/arm/crypto/sha1-ce-glue.c
+++ b/arch/arm/crypto/sha1-ce-glue.c
@@ -13,114 +13,64 @@
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
 #include 
+#include "sha1.h"
+
 MODULE_DESCRIPTION("SHA1 secure hash using ARMv8 Crypto Extensions");
 MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
 MODULE_LICENSE("GPL v2");
 
-asmlinkage void sha1_ce_transform(int blocks, u8 const *src, u32 *state,
-				  u8 *head);
-
-static int sha1_init(struct shash_desc *desc)
-{
-	struct sha1_state *sctx = shash_desc_ctx(desc);
-
-	*sctx = (struct sha1_state){
-		.state = { SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4 },
-	};
-	return 0;
-}
+asmlinkage void sha1_ce_transform(int blocks, u8 const *src, u32 *state,
+				  const u8 *head, void *p);
 
-static int sha1_update(struct shash_desc *desc, const u8 *data,
-		       unsigned int len)
+static int sha1_ce_update(struct shash_desc *desc, const u8 *data,
+			  unsigned int len)
 {
 	struct sha1_state *sctx = shash_desc_ctx(desc);
-	unsigned int partial;
 
-	if (!may_use_simd())
+	if (!may_use_simd() ||
+	    (sctx->count % SHA1_BLOCK_SIZE) + len < SHA1_BLOCK_SIZE)
 		return sha1_update_arm(desc, data, len);
 
-	partial = sctx->count % SHA1_BLOCK_SIZE;
-	sctx->count += len;
+	kernel_neon_begin();
+	crypto_sha1_base_do_update(desc, data, len, sha1_ce_transform, NULL);
+	kernel_neon_end();
 
-	if ((partial + len) >= SHA1_BLOCK_SIZE) {
-		int blocks;
-
-		if (partial) {
-			int p = SHA1_BLOCK_SIZE - partial;
-
-			memcpy(sctx->buffer + partial, data, p);
-			data += p;
-			len -= p;
-		}
-
-		blocks = len / SHA1_BLOCK_SIZE;
-		len %= SHA1_BLOCK_SIZE;
-
-		kernel_neon_begin();
-		sha1_ce_transform(blocks, data, sctx->state,
-				  partial ? sctx->buffer : NULL);
-		kernel_neon_end();
-
-		data += blocks * SHA1_BLOCK_SIZE;
-		partial = 0;
-	}
-	if (len)
-		memcpy(sctx->buffer + partial, data, len);
 	return 0;
 }
 
-static int sha1_final(struct shash_desc *desc, u8 *out)
+static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
+			 unsigned int len, u8 *out)
 {
-	static const u8 padding[SHA1_BLOCK_SIZE] = { 0x80, };
-
-	struct sha1_state *sctx = shash_desc_ctx(desc);
-	__be64 bits = cpu_to_be64(sctx->count << 3);
-	__be32 *dst = (__be32 *)out;
-	int i;
-
-	u32 padlen = SHA1_BLOCK_SIZE
-		     - ((sctx->count + sizeof(bits)) % SHA1_BLOCK_SIZE);
-
-	sha1_update(desc, padding, padlen);
-	sha1_update(desc, (const u8 *)&bits, sizeof(bits));
-
-	for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++)
-		put_unaligned_be32(sctx->state[i], dst++);
-
-	*sctx = (struct sha1_state){};
-	return 0;
-}
+	if (!may_use_simd())
+		return sha1_finup_arm(desc, data, len, out);
 
-static int sha1_export(struct shash_desc *desc, void *out)
-{
-	struct sha1_state *sctx = shash_desc_ctx(desc);
-	struct sha1_state *dst = out;
+	kernel_neon_begin();
+	if (len)
+		crypto_sha1_base_do_update(desc, data, len,
+					   sha1_ce_transform, NULL);
+	crypto_sha1_base_do_finalize(desc, sha1_ce_transform, NULL);
+	kernel_neon_end();
 
-	*dst = *sctx;
-	return 0;
+	return crypto_sha1_base_finish(desc, out);
 }
 
-static int sha1_import(struct shash_desc *desc, const void *in)
+static int sha1_ce_final(struct shash_desc *desc, u8 *out)
 {
-	struct sha1_state *sctx = shash_desc_ctx(desc);
-	struct sha1_state const *src = in;
-
-	*sctx = *src;
-	return 0;
+	return sha1_ce_finup(desc, NULL, 0, out);
 }
 
 static struct shash_alg alg = {
-	.init			= sha1_init,
-	.update			= sha1_update,
-	.final			= sha1_final,
-	.export			= sha1_export,
-	.import			= sha1_import,
+	.init			= crypto_sha1_base_init,
+	.update			= sha1_ce_update,
+	.final			= sha1_ce_final,
+	.finup			= sha1_ce_finup,
+	.export			= crypto_sha1_base_export,
+	.import			= crypto_sha1_base_import,
 	.descsize		= sizeof(struct sha1_state),
 	.digestsize		= SHA1_DIGEST_SIZE,
 	.statesize		= sizeof(struct sha1_state),