From patchwork Tue Sep 23 04:42:12 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Behan Webster
X-Patchwork-Id: 37721
From: behanw@converseincode.com
To: agk@redhat.com, clm@fb.com, davem@davemloft.net, dm-devel@redhat.com,
	fabf@skynet.be, herbert@gondor.apana.org.au, jbacik@fb.com,
	snitzer@redhat.com, tadeusz.struk@intel.com
Cc: akpm@linux-foundation.org, bruce.w.allan@intel.com, d.kasatkin@samsung.com,
	james.l.morris@oracle.com, john.griffin@intel.com, linux-btrfs@vger.kernel.org,
	linux-crypto@vger.kernel.org, linux-ima-devel@lists.sourceforge.net,
	linux-ima-user@lists.sourceforge.net, linux-kernel@vger.kernel.org,
	linux-raid@vger.kernel.org, linux-security-module@vger.kernel.org,
	neilb@suse.de, qat-linux@intel.com, serge@hallyn.com,
	thomas.lendacky@amd.com, zohar@linux.vnet.ibm.com,
	torvalds@linux-foundation.org, Behan Webster
Subject: [PATCH v4 07/12] crypto: LLVMLinux: Remove VLAIS from crypto/.../qat_algs.c
Date: Mon, 22 Sep 2014 21:42:12 -0700
Message-Id: <1411447337-22362-8-git-send-email-behanw@converseincode.com>
In-Reply-To: <1411447337-22362-1-git-send-email-behanw@converseincode.com>
References: <1411447337-22362-1-git-send-email-behanw@converseincode.com>

From: Behan Webster

Replace the use of a Variable Length Array In Struct (VLAIS) with a
C99-compliant equivalent. This patch allocates the appropriate amount of
memory on the stack as a char array, using the SHASH_DESC_ON_STACK macro.
The new code can be compiled with both gcc and clang.
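For context, the VLAIS construct being removed and the shape of its
replacement are sketched below. This is illustrative only, based on the
diff that follows and on the SHASH_DESC_ON_STACK definition in
include/crypto/hash.h at the time; it is not part of the patch itself:

	/* Before: VLAIS -- a GNU C extension that clang rejects.
	 * The last struct member is an array sized at run time. */
	struct {
		struct shash_desc shash;
		char ctx[crypto_shash_descsize(ctx->hash_tfm)];
	} desc;

	/* After: SHASH_DESC_ON_STACK(shash, ctx->hash_tfm) expands to a
	 * plain char array (a C99-legal VLA) plus a pointer into it,
	 * roughly: */
	char __shash_desc[sizeof(struct shash_desc) +
			  crypto_shash_descsize(ctx->hash_tfm)] CRYPTO_MINALIGN_ATTR;
	struct shash_desc *shash = (struct shash_desc *)__shash_desc;

Call sites then use shash-> in place of desc.shash, as in the hunks below.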
Signed-off-by: Behan Webster
Reviewed-by: Mark Charlebois
Reviewed-by: Jan-Simon Möller
Acked-by: Herbert Xu
---
 drivers/crypto/qat/qat_common/qat_algs.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 59df488..9cabadd 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -152,10 +152,7 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 				  const uint8_t *auth_key,
 				  unsigned int auth_keylen, uint8_t *auth_state)
 {
-	struct {
-		struct shash_desc shash;
-		char ctx[crypto_shash_descsize(ctx->hash_tfm)];
-	} desc;
+	SHASH_DESC_ON_STACK(shash, ctx->hash_tfm);
 	struct sha1_state sha1;
 	struct sha256_state sha256;
 	struct sha512_state sha512;
@@ -167,12 +164,12 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 	__be64 *hash512_state_out;
 	int i, offset;
 
-	desc.shash.tfm = ctx->hash_tfm;
-	desc.shash.flags = 0x0;
+	shash->tfm = ctx->hash_tfm;
+	shash->flags = 0x0;
 
 	if (auth_keylen > block_size) {
 		char buff[SHA512_BLOCK_SIZE];
-		int ret = crypto_shash_digest(&desc.shash, auth_key,
+		int ret = crypto_shash_digest(shash, auth_key,
 					      auth_keylen, buff);
 		if (ret)
 			return ret;
@@ -195,10 +192,10 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 		*opad_ptr ^= 0x5C;
 	}
 
-	if (crypto_shash_init(&desc.shash))
+	if (crypto_shash_init(shash))
 		return -EFAULT;
 
-	if (crypto_shash_update(&desc.shash, ipad, block_size))
+	if (crypto_shash_update(shash, ipad, block_size))
 		return -EFAULT;
 
 	hash_state_out = (__be32 *)hash->sha.state1;
@@ -206,19 +203,19 @@
 
 	switch (ctx->qat_hash_alg) {
 	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		if (crypto_shash_export(&desc.shash, &sha1))
+		if (crypto_shash_export(shash, &sha1))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha1.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		if (crypto_shash_export(&desc.shash, &sha256))
+		if (crypto_shash_export(shash, &sha256))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha256.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		if (crypto_shash_export(&desc.shash, &sha512))
+		if (crypto_shash_export(shash, &sha512))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
 			*hash512_state_out = cpu_to_be64(*(sha512.state + i));
@@ -227,10 +224,10 @@
 		return -EFAULT;
 	}
 
-	if (crypto_shash_init(&desc.shash))
+	if (crypto_shash_init(shash))
 		return -EFAULT;
 
-	if (crypto_shash_update(&desc.shash, opad, block_size))
+	if (crypto_shash_update(shash, opad, block_size))
 		return -EFAULT;
 
 	offset = round_up(qat_get_inter_state_size(ctx->qat_hash_alg), 8);
@@ -239,19 +236,19 @@
 
 	switch (ctx->qat_hash_alg) {
 	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		if (crypto_shash_export(&desc.shash, &sha1))
+		if (crypto_shash_export(shash, &sha1))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha1.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		if (crypto_shash_export(&desc.shash, &sha256))
+		if (crypto_shash_export(shash, &sha256))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha256.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		if (crypto_shash_export(&desc.shash, &sha512))
+		if (crypto_shash_export(shash, &sha512))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
 			*hash512_state_out = cpu_to_be64(*(sha512.state + i));
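
As a usage note, any shash consumer can adopt the same pattern. A minimal
sketch follows; the function name and surrounding code are illustrative and
not from this patch, only SHASH_DESC_ON_STACK and the crypto_shash_* calls
are the kernel API as it stood at the time:

	static int example_shash_digest(struct crypto_shash *tfm, const u8 *data,
					unsigned int len, u8 *out)
	{
		/* char buffer + desc pointer, sized via crypto_shash_descsize() */
		SHASH_DESC_ON_STACK(desc, tfm);

		desc->tfm = tfm;
		desc->flags = 0;	/* no CRYPTO_TFM_REQ_MAY_SLEEP */

		/* init/update/final in one call; all hash state lives in
		 * the stack buffer declared by the macro above */
		return crypto_shash_digest(desc, data, len, out);
	}

The buffer is a plain VLA, legal C99, so the declaration compiles under both
gcc and clang while still avoiding a heap allocation on this path.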