From patchwork Wed Dec 6 19:43:35 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 120893
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org,
    Ard Biesheuvel, Dave Martin, Russell King - ARM Linux,
    Sebastian Andrzej Siewior, Mark Rutland, linux-rt-users@vger.kernel.org,
    Peter Zijlstra, Catalin Marinas, Will Deacon, Steven Rostedt,
    Thomas Gleixner
Subject: [PATCH v3 09/20] crypto: arm64/sha256-neon - play nice with CONFIG_PREEMPT kernels
Date: Wed, 6 Dec 2017 19:43:35 +0000
Message-Id: <20171206194346.24393-10-ard.biesheuvel@linaro.org>
In-Reply-To: <20171206194346.24393-1-ard.biesheuvel@linaro.org>
References: <20171206194346.24393-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Tweak the SHA256 update routines to invoke the SHA256 block transform
block by block, to avoid excessive scheduling delays caused by the
NEON algorithm running with preemption disabled.

Also, remove a stale comment which no longer applies now that kernel
mode NEON is actually disallowed in some contexts.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/sha256-glue.c | 36 +++++++++++++-------
 1 file changed, 23 insertions(+), 13 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index b064d925fe2a..e8880ccdc71f 100644
--- a/arch/arm64/crypto/sha256-glue.c
+++ b/arch/arm64/crypto/sha256-glue.c
@@ -89,21 +89,32 @@ static struct shash_alg algs[] = { {
 static int sha256_update_neon(struct shash_desc *desc, const u8 *data,
 			      unsigned int len)
 {
-	/*
-	 * Stacking and unstacking a substantial slice of the NEON register
-	 * file may significantly affect performance for small updates when
-	 * executing in interrupt context, so fall back to the scalar code
-	 * in that case.
-	 */
+	struct sha256_state *sctx = shash_desc_ctx(desc);
+
 	if (!may_use_simd())
 		return sha256_base_do_update(desc, data, len,
 				(sha256_block_fn *)sha256_block_data_order);
 
-	kernel_neon_begin();
-	sha256_base_do_update(desc, data, len,
-			(sha256_block_fn *)sha256_block_neon);
-	kernel_neon_end();
+	while (len > 0) {
+		unsigned int chunk = len;
+
+		/*
+		 * Don't hog the CPU for the entire time it takes to process all
+		 * input when running on a preemptible kernel, but process the
+		 * data block by block instead.
+		 */
+		if (IS_ENABLED(CONFIG_PREEMPT) &&
+		    chunk + sctx->count % SHA256_BLOCK_SIZE > SHA256_BLOCK_SIZE)
+			chunk = SHA256_BLOCK_SIZE -
+				sctx->count % SHA256_BLOCK_SIZE;
+		kernel_neon_begin();
+		sha256_base_do_update(desc, data, chunk,
+				      (sha256_block_fn *)sha256_block_neon);
+		kernel_neon_end();
+		data += chunk;
+		len -= chunk;
+	}
 	return 0;
 }
 
@@ -117,10 +128,9 @@ static int sha256_finup_neon(struct shash_desc *desc, const u8 *data,
 		sha256_base_do_finalize(desc,
 				(sha256_block_fn *)sha256_block_data_order);
 	} else {
-		kernel_neon_begin();
 		if (len)
-			sha256_base_do_update(desc, data, len,
-				(sha256_block_fn *)sha256_block_neon);
+			sha256_update_neon(desc, data, len);
+		kernel_neon_begin();
 		sha256_base_do_finalize(desc,
 				(sha256_block_fn *)sha256_block_neon);
 		kernel_neon_end();