From patchwork Thu May 1 15:51:26 2014
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 29522
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, steve.capper@linaro.org,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH resend 15/15] arm64/crypto: add voluntary preemption to
	Crypto Extensions GHASH
Date: Thu, 1 May 2014 17:51:26 +0200
Message-Id: <1398959486-8222-6-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1398959486-8222-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1398959486-8222-1-git-send-email-ard.biesheuvel@linaro.org>

The Crypto Extensions based GHASH implementation uses the NEON register
file, and hence runs with preemption disabled. This patch adds a
TIF_NEED_RESCHED check to its inner loop so we at least give up the CPU
voluntarily when we are running in process context and have been tagged
for preemption by the scheduler.
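Note for reviewers: the b_if_no_resched macro used in the ghash-ce-core.S
hunk below is defined by an earlier patch in this series, not by this
diff. The following is only a sketch of its intended semantics, assuming
asm-offsets exposes the thread_info flags offset as TI_FLAGS and that
TIF_NEED_RESCHED names a bit number, as it does on arm64:

	/*
	 * Illustrative sketch only; the real macro lives elsewhere in
	 * this series.  Branch back to \lbl (i.e. keep looping) unless
	 * \ti is a non-NULL thread_info pointer whose flags word has
	 * TIF_NEED_RESCHED set.
	 */
	.macro	b_if_no_resched, ti, tmp, lbl
	cbz	\ti, \lbl			// NULL ti: never yield
	ldr	\tmp, [\ti, #TI_FLAGS]		// tmp = ti->flags
	tbz	\tmp, #TIF_NEED_RESCHED, \lbl	// no resched pending: loop
	.endm

In the core loop below, x5 holds the ti argument and x7 is a scratch
register, so a NULL ti keeps the old run-to-completion behaviour, while
a live ti makes the routine return early with w0 holding the number of
blocks it did not process.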
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/ghash-ce-core.S | 10 ++++++----
 arch/arm64/crypto/ghash-ce-glue.c | 33 +++++++++++++++++++++++++--------
 2 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index b9e6eaf41c9b..523432f24ed2 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -31,8 +31,9 @@
 	.arch		armv8-a+crypto
 
 	/*
-	 * void pmull_ghash_update(int blocks, u64 dg[], const char *src,
-	 *			   struct ghash_key const *k, const char *head)
+	 * int pmull_ghash_update(int blocks, u64 dg[], const char *src,
+	 *			  struct ghash_key const *k, const char *head,
+	 *			  struct thread_info *ti)
 	 */
 ENTRY(pmull_ghash_update)
 	ld1		{DATA.16b}, [x1]
@@ -88,8 +89,9 @@ CPU_LE(	rev64		IN1.16b, IN1.16b	)
 	eor		T1.16b, T1.16b, T2.16b
 	eor		DATA.16b, DATA.16b, T1.16b
 
-	cbnz		w0, 0b
+	cbz		w0, 2f
+	b_if_no_resched	x5, x7, 0b
 
-	st1		{DATA.16b}, [x1]
+2:	st1		{DATA.16b}, [x1]
 	ret
 ENDPROC(pmull_ghash_update)
diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index b92baf3f68c7..4df64832617d 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -33,8 +33,9 @@ struct ghash_desc_ctx {
 	u32 count;
 };
 
-asmlinkage void pmull_ghash_update(int blocks, u64 dg[], const char *src,
-				   struct ghash_key const *k, const char *head);
+asmlinkage int pmull_ghash_update(int blocks, u64 dg[], const char *src,
+				  struct ghash_key const *k, const char *head,
+				  struct thread_info *ti);
 
 static int ghash_init(struct shash_desc *desc)
 {
@@ -54,6 +55,7 @@ static int ghash_update(struct shash_desc *desc, const u8 *src,
 
 	if ((partial + len) >= GHASH_BLOCK_SIZE) {
 		struct ghash_key *key = crypto_shash_ctx(desc->tfm);
+		struct thread_info *ti = NULL;
 		int blocks;
 
 		if (partial) {
@@ -64,14 +66,29 @@ static int ghash_update(struct shash_desc *desc, const u8 *src,
 			len -= p;
 		}
 
+		/*
+		 * Pass current's thread info pointer to pmull_ghash_update()
+		 * below if we want it to play nice under preemption.
+		 */
+		if ((IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY) ||
+		     IS_ENABLED(CONFIG_PREEMPT)) && !in_interrupt())
+			ti = current_thread_info();
+
 		blocks = len / GHASH_BLOCK_SIZE;
 		len %= GHASH_BLOCK_SIZE;
 
-		kernel_neon_begin_partial(6);
-		pmull_ghash_update(blocks, ctx->digest, src, key,
-				   partial ? ctx->buf : NULL);
-		kernel_neon_end();
-		src += blocks * GHASH_BLOCK_SIZE;
+		do {
+			int rem;
+
+			kernel_neon_begin_partial(6);
+			rem = pmull_ghash_update(blocks, ctx->digest, src, key,
+						 partial ? ctx->buf : NULL, ti);
+			kernel_neon_end();
+
+			src += (blocks - rem) * GHASH_BLOCK_SIZE;
+			blocks = rem;
+			partial = 0;
+		} while (unlikely(ti && blocks > 0));
 	}
 	if (len)
 		memcpy(ctx->buf + partial, src, len);
@@ -89,7 +106,7 @@ static int ghash_final(struct shash_desc *desc, u8 *dst)
 
 		memset(ctx->buf + partial, 0, GHASH_BLOCK_SIZE - partial);
 		kernel_neon_begin_partial(6);
-		pmull_ghash_update(1, ctx->digest, ctx->buf, key, NULL);
+		pmull_ghash_update(1, ctx->digest, ctx->buf, key, NULL, NULL);
 		kernel_neon_end();
 	}
 	put_unaligned_be64(ctx->digest[1], dst);
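A footnote on the calling convention: pmull_ghash_update() now returns
the number of blocks it did not process, and since it always completes
at least one block before checking TIF_NEED_RESCHED, the do/while loop
in ghash_update() is guaranteed to make forward progress. Here is a
hypothetical user-space model of that contract; every name in it is
invented for illustration and none of it is kernel code:

#include <stdbool.h>
#include <stdio.h>

static bool fake_need_resched;	/* stands in for TIF_NEED_RESCHED */

/* models pmull_ghash_update(): returns how many blocks remain */
static int fake_ghash_update(int blocks, bool check_resched)
{
	do {
		blocks--;	/* one 16-byte block of GHASH work */
	} while (blocks > 0 && !(check_resched && fake_need_resched));
	return blocks;
}

int main(void)
{
	int blocks = 4;

	fake_need_resched = true;	/* scheduler wants the CPU back */
	do {
		int rem = fake_ghash_update(blocks, true);

		/* the kernel would call kernel_neon_end() here, giving
		 * the scheduler a chance to run */
		printf("did %d block(s), %d left\n", blocks - rem, rem);
		fake_need_resched = false;
		blocks = rem;
	} while (blocks > 0);
	return 0;
}

The same reasoning shows why passing a NULL ti, as ghash_final() does,
simply keeps the old run-to-completion behaviour.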