From patchwork Fri Sep 5 23:01:47 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Behan Webster
X-Patchwork-Id: 36905
From: behanw@converseincode.com
To: davem@davemloft.net, herbert@gondor.apana.org.au, tadeusz.struk@intel.com
Cc: bruce.w.allan@intel.com, john.griffin@intel.com, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, qat-linux@intel.com, torvalds@linux-foundation.org, Behan Webster, Mark Charlebois, Jan-Simon Möller
Subject: [PATCH] crypto: LLVMLinux: Remove VLAIS from crypto/.../qat_algs.c
Date: Fri, 5 Sep 2014 16:01:47 -0700
Message-Id: <1409958107-1805-1-git-send-email-behanw@converseincode.com>
X-Mailer: git-send-email 1.9.1
Sender: linux-crypto-owner@vger.kernel.org
X-Mailing-List: linux-crypto@vger.kernel.org

From: Behan Webster

Replace the use of a Variable Length Array In Struct (VLAIS) with a C99-compliant equivalent. This patch allocates the appropriate amount of memory using a char array. The new code can be compiled with both gcc and clang.

struct shash_desc contains a flexible array member, ctx, declared with CRYPTO_MINALIGN_ATTR, so sizeof(struct shash_desc) aligns the beginning of the array declared after struct shash_desc with long long. No trailing padding is required because the allocation is not a struct type that could be placed in an array.
The CRYPTO_MINALIGN_ATTR is required so that desc is aligned with long long, as would be the case for a struct containing a member with CRYPTO_MINALIGN_ATTR.

Signed-off-by: Behan Webster
Signed-off-by: Mark Charlebois
Signed-off-by: Jan-Simon Möller
---
 drivers/crypto/qat/qat_common/qat_algs.c | 33 ++++++++++++++++-----------------
 1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 59df488..3090333 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -152,10 +152,9 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 				  const uint8_t *auth_key,
 				  unsigned int auth_keylen, uint8_t *auth_state)
 {
-	struct {
-		struct shash_desc shash;
-		char ctx[crypto_shash_descsize(ctx->hash_tfm)];
-	} desc;
+	char desc[sizeof(struct shash_desc) +
+		  crypto_shash_descsize(ctx->hash_tfm)] CRYPTO_MINALIGN_ATTR;
+	struct shash_desc *shash = (struct shash_desc *)desc;
 	struct sha1_state sha1;
 	struct sha256_state sha256;
 	struct sha512_state sha512;
@@ -167,12 +166,12 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 	__be64 *hash512_state_out;
 	int i, offset;
 
-	desc.shash.tfm = ctx->hash_tfm;
-	desc.shash.flags = 0x0;
+	shash->tfm = ctx->hash_tfm;
+	shash->flags = 0x0;
 
 	if (auth_keylen > block_size) {
 		char buff[SHA512_BLOCK_SIZE];
-		int ret = crypto_shash_digest(&desc.shash, auth_key,
+		int ret = crypto_shash_digest(shash, auth_key,
 					      auth_keylen, buff);
 		if (ret)
 			return ret;
@@ -195,10 +194,10 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 		*opad_ptr ^= 0x5C;
 	}
 
-	if (crypto_shash_init(&desc.shash))
+	if (crypto_shash_init(shash))
 		return -EFAULT;
 
-	if (crypto_shash_update(&desc.shash, ipad, block_size))
+	if (crypto_shash_update(shash, ipad, block_size))
 		return -EFAULT;
 
 	hash_state_out = (__be32 *)hash->sha.state1;
@@ -206,19 +205,19 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 
 	switch (ctx->qat_hash_alg) {
 	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		if (crypto_shash_export(&desc.shash, &sha1))
+		if (crypto_shash_export(shash, &sha1))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha1.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		if (crypto_shash_export(&desc.shash, &sha256))
+		if (crypto_shash_export(shash, &sha256))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha256.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		if (crypto_shash_export(&desc.shash, &sha512))
+		if (crypto_shash_export(shash, &sha512))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
 			*hash512_state_out = cpu_to_be64(*(sha512.state + i));
@@ -227,10 +226,10 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 		return -EFAULT;
 	}
 
-	if (crypto_shash_init(&desc.shash))
+	if (crypto_shash_init(shash))
 		return -EFAULT;
 
-	if (crypto_shash_update(&desc.shash, opad, block_size))
+	if (crypto_shash_update(shash, opad, block_size))
 		return -EFAULT;
 
 	offset = round_up(qat_get_inter_state_size(ctx->qat_hash_alg), 8);
@@ -239,19 +238,19 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 
 	switch (ctx->qat_hash_alg) {
 	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		if (crypto_shash_export(&desc.shash, &sha1))
+		if (crypto_shash_export(shash, &sha1))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha1.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		if (crypto_shash_export(&desc.shash, &sha256))
+		if (crypto_shash_export(shash, &sha256))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha256.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		if (crypto_shash_export(&desc.shash, &sha512))
+		if (crypto_shash_export(shash, &sha512))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
 			*hash512_state_out = cpu_to_be64(*(sha512.state + i));