From patchwork Mon Dec 4 12:26:28 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 120512
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Dave Martin, Russell King - ARM Linux, Sebastian Andrzej Siewior, Mark Rutland, linux-rt-users@vger.kernel.org, Peter Zijlstra, Catalin Marinas, Will Deacon, Steven Rostedt, Thomas Gleixner
Subject: [PATCH v2 02/19] crypto: arm64/aes-ce-ccm - move kernel mode neon en/disable into loop
Date: Mon, 4 Dec 2017 12:26:28 +0000
Message-Id: <20171204122645.31535-3-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20171204122645.31535-1-ard.biesheuvel@linaro.org>
References: <20171204122645.31535-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-rt-users@vger.kernel.org

When kernel mode NEON was first introduced on arm64, the preserve and
restore of the userland NEON state was completely unoptimized, and
involved saving all registers on each call to
kernel_neon_begin(), and restoring them on each call to
kernel_neon_end(). For this reason, the NEON crypto code that was
introduced at the time keeps the NEON enabled throughout the execution
of the crypto API methods, which may include calls back into the crypto
API that could result in memory allocation or other actions that we
should avoid when running with preemption disabled.

Since then, we have optimized the kernel mode NEON handling, which now
restores lazily (upon return to userland), and so the preserve action
is only costly the first time it is called after entering the kernel.

So let's put the kernel_neon_begin() and kernel_neon_end() calls around
the actual invocations of the NEON crypto code, and run the remainder
of the code with kernel mode NEON disabled (and preemption enabled).

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/aes-ce-ccm-glue.c | 47 ++++++++++----------
 1 file changed, 23 insertions(+), 24 deletions(-)

--
2.11.0

diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
index a1254036f2b1..68b11aa690e4 100644
--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
+++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
@@ -107,11 +107,13 @@ static int ccm_init_mac(struct aead_request *req, u8 maciv[], u32 msglen)
 }
 
 static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
-			   u32 abytes, u32 *macp, bool use_neon)
+			   u32 abytes, u32 *macp)
 {
-	if (likely(use_neon)) {
+	if (may_use_simd()) {
+		kernel_neon_begin();
 		ce_aes_ccm_auth_data(mac, in, abytes, macp, key->key_enc,
 				     num_rounds(key));
+		kernel_neon_end();
 	} else {
 		if (*macp > 0 && *macp < AES_BLOCK_SIZE) {
 			int added = min(abytes, AES_BLOCK_SIZE - *macp);
@@ -143,8 +145,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
 	}
 }
 
-static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[],
-				   bool use_neon)
+static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead);
@@ -163,7 +164,7 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[],
 		ltag.len = 6;
 	}
 
-	ccm_update_mac(ctx, mac, (u8 *)&ltag, ltag.len, &macp, use_neon);
+	ccm_update_mac(ctx, mac, (u8 *)&ltag, ltag.len, &macp);
 	scatterwalk_start(&walk, req->src);
 
 	do {
@@ -175,7 +176,7 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[],
 			n = scatterwalk_clamp(&walk, len);
 		}
 		p = scatterwalk_map(&walk);
-		ccm_update_mac(ctx, mac, p, n, &macp, use_neon);
+		ccm_update_mac(ctx, mac, p, n, &macp);
 		len -= n;
 
 		scatterwalk_unmap(p);
@@ -242,43 +243,42 @@ static int ccm_encrypt(struct aead_request *req)
 	u8 __aligned(8) mac[AES_BLOCK_SIZE];
 	u8 buf[AES_BLOCK_SIZE];
 	u32 len = req->cryptlen;
-	bool use_neon = may_use_simd();
 	int err;
 
 	err = ccm_init_mac(req, mac, len);
 	if (err)
 		return err;
 
-	if (likely(use_neon))
-		kernel_neon_begin();
-
 	if (req->assoclen)
-		ccm_calculate_auth_mac(req, mac, use_neon);
+		ccm_calculate_auth_mac(req, mac);
 
 	/* preserve the original iv for the final round */
 	memcpy(buf, req->iv, AES_BLOCK_SIZE);
 
 	err = skcipher_walk_aead_encrypt(&walk, req, true);
 
-	if (likely(use_neon)) {
+	if (may_use_simd()) {
 		while (walk.nbytes) {
 			u32 tail = walk.nbytes % AES_BLOCK_SIZE;
 
 			if (walk.nbytes == walk.total)
 				tail = 0;
 
+			kernel_neon_begin();
 			ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 					   walk.nbytes - tail, ctx->key_enc,
 					   num_rounds(ctx), mac, walk.iv);
+			kernel_neon_end();
 
 			err = skcipher_walk_done(&walk, tail);
 		}
-		if (!err)
+		if (!err) {
+			kernel_neon_begin();
 			ce_aes_ccm_final(mac, buf, ctx->key_enc,
 					 num_rounds(ctx));
-
-		kernel_neon_end();
+			kernel_neon_end();
+		}
 	} else {
 		err = ccm_crypt_fallback(&walk, mac, buf, ctx, true);
 	}
@@ -301,43 +301,42 @@ static int ccm_decrypt(struct aead_request *req)
 	u8 __aligned(8) mac[AES_BLOCK_SIZE];
 	u8 buf[AES_BLOCK_SIZE];
 	u32 len = req->cryptlen - authsize;
-	bool use_neon = may_use_simd();
 	int err;
 
 	err = ccm_init_mac(req, mac, len);
 	if (err)
 		return err;
 
-	if (likely(use_neon))
-		kernel_neon_begin();
-
 	if (req->assoclen)
-		ccm_calculate_auth_mac(req, mac, use_neon);
+		ccm_calculate_auth_mac(req, mac);
 
 	/* preserve the original iv for the final round */
 	memcpy(buf, req->iv, AES_BLOCK_SIZE);
 
 	err = skcipher_walk_aead_decrypt(&walk, req, true);
 
-	if (likely(use_neon)) {
+	if (may_use_simd()) {
 		while (walk.nbytes) {
 			u32 tail = walk.nbytes % AES_BLOCK_SIZE;
 
 			if (walk.nbytes == walk.total)
 				tail = 0;
 
+			kernel_neon_begin();
 			ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
 					   walk.nbytes - tail, ctx->key_enc,
 					   num_rounds(ctx), mac, walk.iv);
+			kernel_neon_end();
 
 			err = skcipher_walk_done(&walk, tail);
 		}
-		if (!err)
+		if (!err) {
+			kernel_neon_begin();
 			ce_aes_ccm_final(mac, buf, ctx->key_enc,
 					 num_rounds(ctx));
-
-		kernel_neon_end();
+			kernel_neon_end();
+		}
 	} else {
 		err = ccm_crypt_fallback(&walk, mac, buf, ctx, false);
 	}
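
For readers skimming the diff, the change boils down to entering and
leaving kernel mode NEON once per walk step rather than once per
request, so that preemption stays enabled around the scatterwalk and
crypto API calls. A minimal sketch of the resulting pattern follows;
do_simd_chunk() is a hypothetical stand-in for the ce_aes_ccm_*
assembler helpers and the sketch is illustrative only, not part of the
patch:

	/*
	 * Sketch of the per-chunk begin/end pattern (illustrative only).
	 * do_simd_chunk() is a hypothetical placeholder for the
	 * ce_aes_ccm_* helpers.
	 */
	while (walk.nbytes) {
		u32 tail = walk.nbytes % AES_BLOCK_SIZE;

		if (walk.nbytes == walk.total)
			tail = 0;

		kernel_neon_begin();	/* preemption disabled, NEON usable */
		do_simd_chunk(walk.dst.virt.addr, walk.src.virt.addr,
			      walk.nbytes - tail);
		kernel_neon_end();	/* preemption possible again */

		/* may sleep or allocate; safe now that NEON is released */
		err = skcipher_walk_done(&walk, tail);
	}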