From patchwork Mon Dec 4 12:26:32 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 120516
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org,
    Ard Biesheuvel, Dave Martin, Russell King - ARM Linux,
    Sebastian Andrzej Siewior, Mark Rutland, linux-rt-users@vger.kernel.org,
    Peter Zijlstra, Catalin Marinas, Will Deacon, Steven Rostedt,
    Thomas Gleixner
Subject: [PATCH v2 06/19] crypto: arm64/ghash - move kernel mode neon en/disable into loop
Date: Mon, 4 Dec 2017 12:26:32 +0000
Message-Id: <20171204122645.31535-7-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20171204122645.31535-1-ard.biesheuvel@linaro.org>
References: <20171204122645.31535-1-ard.biesheuvel@linaro.org>

When kernel mode NEON was first introduced on arm64, the preserve and
restore of the userland NEON state was completely unoptimized, and
involved saving all registers on each call to kernel_neon_begin(), and
restoring them on each call to
kernel_neon_end(). For this reason, the NEON crypto code that was
introduced at the time keeps the NEON enabled throughout the execution
of the crypto API methods, which may include calls back into the crypto
API that could result in memory allocation or other actions that we
should avoid when running with preemption disabled.

Since then, we have optimized the kernel mode NEON handling, which now
restores lazily (upon return to userland), and so the preserve action
is only costly the first time it is called after entering the kernel.

So let's put the kernel_neon_begin() and kernel_neon_end() calls around
the actual invocations of the NEON crypto code, and run the remainder
of the code with kernel mode NEON disabled (and preemption enabled).

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/ghash-ce-glue.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index cfc9c92814fd..cb39503673d4 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -368,26 +368,28 @@ static int gcm_encrypt(struct aead_request *req)
 		pmull_gcm_encrypt_block(ks, iv, NULL,
 					num_rounds(&ctx->aes_key));
 		put_unaligned_be32(3, iv + GCM_IV_SIZE);
+		kernel_neon_end();
 
-		err = skcipher_walk_aead_encrypt(&walk, req, true);
+		err = skcipher_walk_aead_encrypt(&walk, req, false);
 
 		while (walk.nbytes >= AES_BLOCK_SIZE) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
+			kernel_neon_begin();
 			pmull_gcm_encrypt(blocks, dg, walk.dst.virt.addr,
 					  walk.src.virt.addr, &ctx->ghash_key,
 					  iv, num_rounds(&ctx->aes_key), ks);
+			kernel_neon_end();
 
 			err = skcipher_walk_done(&walk,
 					walk.nbytes % AES_BLOCK_SIZE);
 		}
-		kernel_neon_end();
 	} else {
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv,
 				    num_rounds(&ctx->aes_key));
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 
-		err = skcipher_walk_aead_encrypt(&walk, req, true);
+		err = skcipher_walk_aead_encrypt(&walk, req, false);
 
 		while (walk.nbytes >= AES_BLOCK_SIZE) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
@@ -467,15 +469,18 @@ static int gcm_decrypt(struct aead_request *req)
 		pmull_gcm_encrypt_block(tag, iv, ctx->aes_key.key_enc,
 					num_rounds(&ctx->aes_key));
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
+		kernel_neon_end();
 
-		err = skcipher_walk_aead_decrypt(&walk, req, true);
+		err = skcipher_walk_aead_decrypt(&walk, req, false);
 
 		while (walk.nbytes >= AES_BLOCK_SIZE) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
+			kernel_neon_begin();
 			pmull_gcm_decrypt(blocks, dg, walk.dst.virt.addr,
 					  walk.src.virt.addr, &ctx->ghash_key,
 					  iv, num_rounds(&ctx->aes_key));
+			kernel_neon_end();
 
 			err = skcipher_walk_done(&walk,
 					walk.nbytes % AES_BLOCK_SIZE);
@@ -483,14 +488,12 @@
 		if (walk.nbytes)
 			pmull_gcm_encrypt_block(iv, iv, NULL,
 						num_rounds(&ctx->aes_key));
-
-		kernel_neon_end();
 	} else {
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv,
 				    num_rounds(&ctx->aes_key));
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 
-		err = skcipher_walk_aead_decrypt(&walk, req, true);
+		err = skcipher_walk_aead_decrypt(&walk, req, false);
 
 		while (walk.nbytes >= AES_BLOCK_SIZE) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
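
As an illustration of the pattern the patch moves to (not part of the patch
itself): below is a minimal sketch of an skcipher walk loop that holds the
NEON unit only around the SIMD work on each chunk. The function name
gcm_encrypt_sketch() is a placeholder, the actual NEON call site is elided
to a comment, and the key setup, IV and tag handling of the real
gcm_encrypt() are omitted.

#include <asm/neon.h>                   /* kernel_neon_begin()/kernel_neon_end() */
#include <crypto/aes.h>                 /* AES_BLOCK_SIZE */
#include <crypto/aead.h>                /* struct aead_request */
#include <crypto/internal/skcipher.h>   /* skcipher_walk_aead_encrypt(), skcipher_walk_done() */

/* Minimal sketch with a placeholder name, not the patched gcm_encrypt(). */
static int gcm_encrypt_sketch(struct aead_request *req)
{
	struct skcipher_walk walk;
	int err;

	/* 'false': the walk may sleep, since the NEON unit is not held here */
	err = skcipher_walk_aead_encrypt(&walk, req, false);

	while (walk.nbytes >= AES_BLOCK_SIZE) {
		/*
		 * Claim the NEON unit (which also disables preemption) only
		 * around the SIMD processing of this chunk; the walk
		 * machinery runs with NEON released and preemption enabled.
		 */
		kernel_neon_begin();
		/*
		 * ... process walk.nbytes / AES_BLOCK_SIZE blocks here,
		 * e.g. a pmull_gcm_encrypt() call operating on
		 * walk.src.virt.addr / walk.dst.virt.addr ...
		 */
		kernel_neon_end();

		err = skcipher_walk_done(&walk,
					 walk.nbytes % AES_BLOCK_SIZE);
	}
	return err;
}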