From patchwork Mon Oct 17 22:13:36 2016
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 101672
From: Arnd Bergmann
To: Herbert Xu, x86@kernel.org
Cc: Linus Torvalds, linux-kernel@vger.kernel.org, Arnd Bergmann,
    "David S. Miller", Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
    Borislav Petkov, Stephan Mueller, linux-crypto@vger.kernel.org
Subject: [PATCH 15/28] crypto: aesni: avoid -Wmaybe-uninitialized warning
Date: Tue, 18 Oct 2016 00:13:36 +0200
Message-Id: <20161017221355.1861551-3-arnd@arndb.de>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20161017220342.1627073-1-arnd@arndb.de>
References: <20161017220342.1627073-1-arnd@arndb.de>
X-Mailing-List: linux-crypto@vger.kernel.org

The rfc4106 encrypt/decrypt helper functions cause an annoying
false-positive warning in allmodconfig if we turn on
-Wmaybe-uninitialized warnings again:

arch/x86/crypto/aesni-intel_glue.c: In function ‘helper_rfc4106_decrypt’:
include/linux/scatterlist.h:67:31: warning: ‘dst_sg_walk.sg’ may be used
uninitialized in this function [-Wmaybe-uninitialized]

The problem seems to be that the compiler doesn't
track the state of the 'one_entry_in_sg' variable across the
kernel_fpu_begin/kernel_fpu_end section. This reorganizes the code to
avoid that variable and to move the shared code into separate helper
functions, which removes some of the conditional branches. The
resulting functions are a bit longer but also slightly less complex,
leaving no room for speculation on the part of the compiler.

Cc: Herbert Xu
Signed-off-by: Arnd Bergmann
---
The conversion is nontrivial, and I have only build-tested it, so this
could use a careful review and testing.
---
 arch/x86/crypto/aesni-intel_glue.c | 121 ++++++++++++++++++++++---------------
 1 file changed, 73 insertions(+), 48 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 0ab5ee1..054155b 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -269,6 +269,34 @@ static void (*aesni_gcm_dec_tfm)(void *ctx, u8 *out,
 			u8 *hash_subkey, const u8 *aad, unsigned long aad_len,
 			u8 *auth_tag, unsigned long auth_tag_len);
 
+static inline void aesni_do_gcm_enc_tfm(void *ctx, u8 *out,
+			const u8 *in, unsigned long plaintext_len, u8 *iv,
+			u8 *hash_subkey, const u8 *aad, unsigned long aad_len,
+			u8 *auth_tag, unsigned long auth_tag_len)
+{
+	kernel_fpu_begin();
+	aesni_gcm_enc_tfm(ctx, out, in, plaintext_len, iv, hash_subkey,
+			  aad, aad_len, auth_tag, auth_tag_len);
+	kernel_fpu_end();
+}
+
+static inline int aesni_do_gcm_dec_tfm(void *ctx, u8 *out,
+			const u8 *in, unsigned long ciphertext_len, u8 *iv,
+			u8 *hash_subkey, const u8 *aad, unsigned long aad_len,
+			u8 *auth_tag, unsigned long auth_tag_len)
+{
+	kernel_fpu_begin();
+	aesni_gcm_dec_tfm(ctx, out, in, ciphertext_len, iv, hash_subkey, aad,
+			  aad_len, auth_tag, auth_tag_len);
+	kernel_fpu_end();
+
+	/* Compare generated tag with passed in tag. */
+	if (crypto_memneq(in + ciphertext_len, auth_tag, auth_tag_len))
+		return -EBADMSG;
+
+	return 0;
+}
+
 static inline struct
 aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
 {
@@ -879,7 +907,6 @@ static int rfc4106_set_authsize(struct crypto_aead *parent,
 
 static int helper_rfc4106_encrypt(struct aead_request *req)
 {
-	u8 one_entry_in_sg = 0;
 	u8 *src, *dst, *assoc;
 	__be32 counter = cpu_to_be32(1);
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
@@ -908,7 +935,6 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
 	    req->src->offset + req->src->length <= PAGE_SIZE &&
 	    sg_is_last(req->dst) &&
 	    req->dst->offset + req->dst->length <= PAGE_SIZE) {
-		one_entry_in_sg = 1;
 		scatterwalk_start(&src_sg_walk, req->src);
 		assoc = scatterwalk_map(&src_sg_walk);
 		src = assoc + req->assoclen;
@@ -916,7 +942,23 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
 		if (unlikely(req->src != req->dst)) {
 			scatterwalk_start(&dst_sg_walk, req->dst);
 			dst = scatterwalk_map(&dst_sg_walk) + req->assoclen;
+
+			aesni_do_gcm_enc_tfm(aes_ctx, dst, src, req->cryptlen, iv,
+					ctx->hash_subkey, assoc, req->assoclen - 8,
+					dst + req->cryptlen, auth_tag_len);
+
+			scatterwalk_unmap(dst - req->assoclen);
+			scatterwalk_advance(&dst_sg_walk, req->dst->length);
+			scatterwalk_done(&dst_sg_walk, 1, 0);
+		} else {
+			aesni_do_gcm_enc_tfm(aes_ctx, dst, src, req->cryptlen, iv,
+					ctx->hash_subkey, assoc, req->assoclen - 8,
+					dst + req->cryptlen, auth_tag_len);
 		}
+
+		scatterwalk_unmap(assoc);
+		scatterwalk_advance(&src_sg_walk, req->src->length);
+		scatterwalk_done(&src_sg_walk, req->src == req->dst, 0);
 	} else {
 		/* Allocate memory for src, dst, assoc */
 		assoc = kmalloc(req->cryptlen + auth_tag_len + req->assoclen,
@@ -925,28 +967,14 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
 			return -ENOMEM;
 		scatterwalk_map_and_copy(assoc, req->src, 0,
 					 req->assoclen + req->cryptlen, 0);
-		src = assoc + req->assoclen;
-		dst = src;
-	}
+		dst = src = assoc + req->assoclen;
 
-	kernel_fpu_begin();
-	aesni_gcm_enc_tfm(aes_ctx, dst, src, req->cryptlen, iv,
-			ctx->hash_subkey, assoc, req->assoclen - 8,
-			dst + req->cryptlen, auth_tag_len);
-	kernel_fpu_end();
+		aesni_do_gcm_enc_tfm(aes_ctx, dst, src, req->cryptlen, iv,
+				ctx->hash_subkey, assoc, req->assoclen - 8,
+				dst + req->cryptlen, auth_tag_len);
 
-	/* The authTag (aka the Integrity Check Value) needs to be written
-	 * back to the packet. */
-	if (one_entry_in_sg) {
-		if (unlikely(req->src != req->dst)) {
-			scatterwalk_unmap(dst - req->assoclen);
-			scatterwalk_advance(&dst_sg_walk, req->dst->length);
-			scatterwalk_done(&dst_sg_walk, 1, 0);
-		}
-		scatterwalk_unmap(assoc);
-		scatterwalk_advance(&src_sg_walk, req->src->length);
-		scatterwalk_done(&src_sg_walk, req->src == req->dst, 0);
-	} else {
+		/* The authTag (aka the Integrity Check Value) needs to be written
+		 * back to the packet. */
 		scatterwalk_map_and_copy(dst, req->dst, req->assoclen,
 					 req->cryptlen + auth_tag_len, 1);
 		kfree(assoc);
@@ -956,7 +984,6 @@ static int helper_rfc4106_encrypt(struct aead_request *req)
 
 static int helper_rfc4106_decrypt(struct aead_request *req)
 {
-	u8 one_entry_in_sg = 0;
 	u8 *src, *dst, *assoc;
 	unsigned long tempCipherLen = 0;
 	__be32 counter = cpu_to_be32(1);
@@ -990,47 +1017,45 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
 	    req->src->offset + req->src->length <= PAGE_SIZE &&
 	    sg_is_last(req->dst) &&
 	    req->dst->offset + req->dst->length <= PAGE_SIZE) {
-		one_entry_in_sg = 1;
 		scatterwalk_start(&src_sg_walk, req->src);
 		assoc = scatterwalk_map(&src_sg_walk);
 		src = assoc + req->assoclen;
-		dst = src;
 		if (unlikely(req->src != req->dst)) {
 			scatterwalk_start(&dst_sg_walk, req->dst);
 			dst = scatterwalk_map(&dst_sg_walk) + req->assoclen;
-		}
-
-	} else {
-		/* Allocate memory for src, dst, assoc */
-		assoc = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC);
-		if (!assoc)
-			return -ENOMEM;
-		scatterwalk_map_and_copy(assoc, req->src, 0,
-			req->assoclen + req->cryptlen, 0);
-		src = assoc + req->assoclen;
-		dst = src;
-	}
-	kernel_fpu_begin();
-	aesni_gcm_dec_tfm(aes_ctx, dst, src, tempCipherLen, iv,
-			ctx->hash_subkey, assoc, req->assoclen - 8,
-			authTag, auth_tag_len);
-	kernel_fpu_end();
-
-	/* Compare generated tag with passed in tag. */
-	retval = crypto_memneq(src + tempCipherLen, authTag, auth_tag_len) ?
-		-EBADMSG : 0;
+
+			retval = aesni_do_gcm_dec_tfm(aes_ctx, dst, src,
+					tempCipherLen, iv, ctx->hash_subkey,
+					assoc, req->assoclen - 8, authTag,
+					auth_tag_len);
 
-	if (one_entry_in_sg) {
-		if (unlikely(req->src != req->dst)) {
 			scatterwalk_unmap(dst - req->assoclen);
 			scatterwalk_advance(&dst_sg_walk, req->dst->length);
 			scatterwalk_done(&dst_sg_walk, 1, 0);
+		} else {
+			dst = src;
+			retval = aesni_do_gcm_dec_tfm(aes_ctx, dst, src,
+					tempCipherLen, iv, ctx->hash_subkey,
+					assoc, req->assoclen - 8, authTag,
+					auth_tag_len);
 		}
 		scatterwalk_unmap(assoc);
 		scatterwalk_advance(&src_sg_walk, req->src->length);
 		scatterwalk_done(&src_sg_walk, req->src == req->dst, 0);
 	} else {
+		/* Allocate memory for src, dst, assoc */
+		assoc = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC);
+		if (!assoc)
+			return -ENOMEM;
+		scatterwalk_map_and_copy(assoc, req->src, 0,
+					 req->assoclen + req->cryptlen, 0);
+		dst = src = assoc + req->assoclen;
+
+		retval = aesni_do_gcm_dec_tfm(aes_ctx, dst, src, tempCipherLen,
+				iv, ctx->hash_subkey, assoc,
+				req->assoclen - 8, authTag,
+				auth_tag_len);
+
 		scatterwalk_map_and_copy(dst, req->dst, req->assoclen,
 					 tempCipherLen, 1);
 		kfree(assoc);