From patchwork Fri Oct 11 17:08:22 2019
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 176016
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, herbert@gondor.apana.org.au,
 Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH 1/2] crypto: aegis128 - avoid function pointers for parameterization
Date: Fri, 11 Oct 2019 19:08:22 +0200
Message-Id: <20191011170823.6713-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20191011170823.6713-1-ard.biesheuvel@linaro.org>
References: <20191011170823.6713-1-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.20.1
X-Mailing-List: linux-crypto@vger.kernel.org

Instead of passing around an ops structure with function pointers, which
forces indirect calls to be used, refactor the code slightly so we can use
ordinary function calls. At the same time, switch to a static key to
decide whether or not the SIMD code path may be used.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 crypto/aegis128-core.c | 105 +++++++++-----------
 1 file changed, 46 insertions(+), 59 deletions(-)

-- 
2.20.1

diff --git a/crypto/aegis128-core.c b/crypto/aegis128-core.c
index 80e73611bd5c..fe7ab66dd8f9 100644
--- a/crypto/aegis128-core.c
+++ b/crypto/aegis128-core.c
@@ -13,6 +13,7 @@
 #include <crypto/scatterwalk.h>
 #include <linux/err.h>
 #include <linux/init.h>
+#include <linux/jump_label.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/scatterlist.h>
@@ -35,15 +36,7 @@ struct aegis_ctx {
 	union aegis_block key;
 };
 
-struct aegis128_ops {
-	int (*skcipher_walk_init)(struct skcipher_walk *walk,
-				  struct aead_request *req, bool atomic);
-
-	void (*crypt_chunk)(struct aegis_state *state, u8 *dst,
-			    const u8 *src, unsigned int size);
-};
-
-static bool have_simd;
+static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_simd);
 
 static const union aegis_block crypto_aegis_const[2] = {
 	{ .words64 = {
@@ -59,7 +52,7 @@ static const union aegis_block crypto_aegis_const[2] = {
 static bool aegis128_do_simd(void)
 {
 #ifdef CONFIG_CRYPTO_AEGIS128_SIMD
-	if (have_simd)
+	if (static_branch_likely(&have_simd))
 		return crypto_simd_usable();
 #endif
 	return false;
@@ -323,25 +316,27 @@ static void crypto_aegis128_process_ad(struct aegis_state *state,
 	}
 }
 
-static void crypto_aegis128_process_crypt(struct aegis_state *state,
-					  struct aead_request *req,
-					  const struct aegis128_ops *ops)
+static __always_inline
+int crypto_aegis128_process_crypt(struct aegis_state *state,
+				  struct aead_request *req,
+				  struct skcipher_walk *walk,
+				  void (*crypt)(struct aegis_state *state,
+						u8 *dst, const u8 *src,
+						unsigned int size))
 {
-	struct skcipher_walk walk;
+	int err = 0;
 
-	ops->skcipher_walk_init(&walk, req, false);
+	while (walk->nbytes) {
+		unsigned int nbytes = walk->nbytes;
 
-	while (walk.nbytes) {
-		unsigned int nbytes = walk.nbytes;
+		if (nbytes < walk->total)
+			nbytes = round_down(nbytes, walk->stride);
 
-		if (nbytes < walk.total)
-			nbytes = round_down(nbytes, walk.stride);
+		crypt(state, walk->dst.virt.addr, walk->src.virt.addr, nbytes);
 
-		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
-				 nbytes);
-
-		skcipher_walk_done(&walk, walk.nbytes - nbytes);
+		err = skcipher_walk_done(walk, walk->nbytes - nbytes);
 	}
+	return err;
 }
 
 static void crypto_aegis128_final(struct aegis_state *state,
@@ -390,39 +385,27 @@ static int crypto_aegis128_setauthsize(struct crypto_aead *tfm,
 	return 0;
 }
 
-static void crypto_aegis128_crypt(struct aead_request *req,
-				  union aegis_block *tag_xor,
-				  unsigned int cryptlen,
-				  const struct aegis128_ops *ops)
+static int crypto_aegis128_encrypt(struct aead_request *req)
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	union aegis_block tag = {};
+	unsigned int authsize = crypto_aead_authsize(tfm);
 	struct aegis_ctx *ctx = crypto_aead_ctx(tfm);
+	unsigned int cryptlen = req->cryptlen;
+	struct skcipher_walk walk;
 	struct aegis_state state;
 
 	crypto_aegis128_init(&state, &ctx->key, req->iv);
 	crypto_aegis128_process_ad(&state, req->src, req->assoclen);
-	crypto_aegis128_process_crypt(&state, req, ops);
-	crypto_aegis128_final(&state, tag_xor, req->assoclen, cryptlen);
-}
-
-static int crypto_aegis128_encrypt(struct aead_request *req)
-{
-	const struct aegis128_ops *ops = &(struct aegis128_ops){
-		.skcipher_walk_init = skcipher_walk_aead_encrypt,
-		.crypt_chunk = crypto_aegis128_encrypt_chunk,
-	};
-
-	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-	union aegis_block tag = {};
-	unsigned int authsize = crypto_aead_authsize(tfm);
-	unsigned int cryptlen = req->cryptlen;
+	skcipher_walk_aead_encrypt(&walk, req, false);
 
 	if (aegis128_do_simd())
-		ops = &(struct aegis128_ops){
-			.skcipher_walk_init = skcipher_walk_aead_encrypt,
-			.crypt_chunk = crypto_aegis128_encrypt_chunk_simd };
-
-	crypto_aegis128_crypt(req, &tag, cryptlen, ops);
+		crypto_aegis128_process_crypt(&state, req, &walk,
+					      crypto_aegis128_encrypt_chunk_simd);
+	else
+		crypto_aegis128_process_crypt(&state, req, &walk,
+					      crypto_aegis128_encrypt_chunk);
+	crypto_aegis128_final(&state, &tag, req->assoclen, cryptlen);
 
 	scatterwalk_map_and_copy(tag.bytes, req->dst, req->assoclen + cryptlen,
 				 authsize, 1);
@@ -431,26 +414,29 @@ static int crypto_aegis128_encrypt(struct aead_request *req)
 
 static int crypto_aegis128_decrypt(struct aead_request *req)
 {
-	const struct aegis128_ops *ops = &(struct aegis128_ops){
-		.skcipher_walk_init = skcipher_walk_aead_decrypt,
-		.crypt_chunk = crypto_aegis128_decrypt_chunk,
-	};
 	static const u8 zeros[AEGIS128_MAX_AUTH_SIZE] = {};
-
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	union aegis_block tag;
 	unsigned int authsize = crypto_aead_authsize(tfm);
 	unsigned int cryptlen = req->cryptlen - authsize;
+	struct aegis_ctx *ctx = crypto_aead_ctx(tfm);
+	struct skcipher_walk walk;
+	struct aegis_state state;
 
 	scatterwalk_map_and_copy(tag.bytes, req->src, req->assoclen + cryptlen,
 				 authsize, 0);
 
-	if (aegis128_do_simd())
-		ops = &(struct aegis128_ops){
-			.skcipher_walk_init = skcipher_walk_aead_decrypt,
-			.crypt_chunk = crypto_aegis128_decrypt_chunk_simd };
+	crypto_aegis128_init(&state, &ctx->key, req->iv);
+	crypto_aegis128_process_ad(&state, req->src, req->assoclen);
 
-	crypto_aegis128_crypt(req, &tag, cryptlen, ops);
+	skcipher_walk_aead_decrypt(&walk, req, false);
+	if (aegis128_do_simd())
+		crypto_aegis128_process_crypt(&state, req, &walk,
+					      crypto_aegis128_decrypt_chunk_simd);
+	else
+		crypto_aegis128_process_crypt(&state, req, &walk,
+					      crypto_aegis128_decrypt_chunk);
+	crypto_aegis128_final(&state, &tag, req->assoclen, cryptlen);
 
 	return crypto_memneq(tag.bytes, zeros, authsize) ? -EBADMSG : 0;
 }
@@ -481,8 +467,9 @@ static struct aead_alg crypto_aegis128_alg = {
 
 static int __init crypto_aegis128_module_init(void)
 {
-	if (IS_ENABLED(CONFIG_CRYPTO_AEGIS128_SIMD))
-		have_simd = crypto_aegis128_have_simd();
+	if (IS_ENABLED(CONFIG_CRYPTO_AEGIS128_SIMD) &&
+	    crypto_aegis128_have_simd())
+		static_branch_enable(&have_simd);
 
 	return crypto_register_aead(&crypto_aegis128_alg);
 }
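
A quick illustration of why this removes the indirect calls:
crypto_aegis128_process_crypt() still receives the per-variant chunk routine
as a function pointer, but the helper is __always_inline and every call site
passes a compile-time-constant function, so each caller gets its own
specialized copy in which the compiler can resolve the pointer to a direct
call (and usually inline the chunk routine outright). The userspace sketch
below shows the same pattern; toy_process(), toy_chunk_scalar() and
toy_chunk_simd() are made-up names for illustration only, and the plain
have_simd bool stands in for the static key used in the kernel
(DEFINE_STATIC_KEY_FALSE / static_branch_likely / static_branch_enable from
<linux/jump_label.h>), which patches the branch at runtime instead of
testing a variable on every request.

/* toy.c - minimal sketch of the __always_inline parameterization trick.
 * All names here are made up for illustration; build with: gcc -O2 toy.c
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* stand-in for the generic chunk routine */
static void toy_chunk_scalar(unsigned char *dst, const unsigned char *src,
			     size_t len)
{
	for (size_t i = 0; i < len; i++)
		dst[i] = src[i] ^ 0x5a;
}

/* stand-in for the SIMD chunk routine (same result in this toy) */
static void toy_chunk_simd(unsigned char *dst, const unsigned char *src,
			   size_t len)
{
	for (size_t i = 0; i < len; i++)
		dst[i] = src[i] ^ 0x5a;
}

/*
 * The helper takes the variant as a function pointer, but because it is
 * always inlined into call sites that pass a constant function, the
 * compiler can turn the "indirect" call into a direct one.
 */
static inline __attribute__((__always_inline__))
void toy_process(unsigned char *dst, const unsigned char *src, size_t len,
		 void (*crypt)(unsigned char *, const unsigned char *, size_t))
{
	crypt(dst, src, len);
}

int main(int argc, char **argv)
{
	unsigned char in[4] = { 1, 2, 3, 4 }, out[4];
	bool have_simd = argc > 1;	/* runtime choice; the kernel uses a static key */

	(void)argv;
	if (have_simd)
		toy_process(out, in, sizeof(in), toy_chunk_simd);
	else
		toy_process(out, in, sizeof(in), toy_chunk_scalar);

	printf("%02x %02x %02x %02x\n", out[0], out[1], out[2], out[3]);
	return 0;
}

Building the sketch with gcc -O2 and inspecting the assembly should show the
toy_chunk_* routines being called directly (or folded into main), which is
the effect the patch achieves for the encrypt/decrypt chunk routines.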