From patchwork Tue Sep 13 08:48:52 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 76045
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
    linux-arm-kernel@lists.infradead.org
Cc: xiakaixu@huawei.com, Ard Biesheuvel
Subject: [PATCH 1/2] crypto: arm/aes-ctr: fix NULL dereference in tail processing
Date: Tue, 13 Sep 2016 09:48:52 +0100
Message-Id: <1473756533-21078-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: linux-crypto@vger.kernel.org

The AES-CTR glue code avoids calling into the blkcipher API for the
tail portion of the walk, by comparing the remainder of
walk.nbytes modulo AES_BLOCK_SIZE with the residual nbytes, and jumping
straight into the tail processing block if they are equal. This tail
processing block checks whether nbytes != 0, and does nothing otherwise.

However, in case of an allocation failure in the blkcipher layer, we may
enter this code with walk.nbytes == 0, while nbytes > 0. In this case,
we should not dereference the source and destination pointers, since
they may be NULL. So instead of checking for nbytes != 0, check for
(walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in
non-error conditions.

Fixes: 86464859cc77 ("crypto: arm - AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions")
Reported-by: xiakaixu
Signed-off-by: Ard Biesheuvel
---
 arch/arm/crypto/aes-ce-glue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--
2.7.4

diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c
index da3c0428507b..aef022a87c53 100644
--- a/arch/arm/crypto/aes-ce-glue.c
+++ b/arch/arm/crypto/aes-ce-glue.c
@@ -284,7 +284,7 @@ static int ctr_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 		err = blkcipher_walk_done(desc, &walk,
 					  walk.nbytes % AES_BLOCK_SIZE);
 	}
-	if (nbytes) {
+	if (walk.nbytes % AES_BLOCK_SIZE) {
 		u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
 		u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;
 		u8 __aligned(8) tail[AES_BLOCK_SIZE];
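
Note: for readers who want to see the failure mode in isolation, below is a
minimal, self-contained userspace sketch of the guard logic only. The struct
fake_walk, the tail_old()/tail_new() helpers and the values in main() are
illustrative stand-ins invented for this sketch, not the kernel's blkcipher
API; the only thing it demonstrates is that a guard on the caller's residual
nbytes lets the error case (walk.nbytes == 0 with NULL mapped pointers) reach
the tail pointer arithmetic, while a guard on walk.nbytes % AES_BLOCK_SIZE
skips it.

#include <stdio.h>

#define AES_BLOCK_SIZE 16

/* Simplified, hypothetical stand-in for the walk state used by the CTR
 * glue code; NOT the kernel's struct blkcipher_walk. */
struct fake_walk {
	unsigned int nbytes;	/* bytes mapped for this step; 0 after a walk error */
	unsigned char *src;	/* may be NULL when nbytes == 0 */
	unsigned char *dst;
};

/* Old guard: only checks the caller's residual byte count. */
static void tail_old(struct fake_walk *walk, unsigned int nbytes,
		     unsigned int blocks)
{
	if (nbytes) {
		/* With walk->nbytes == 0 this is NULL plus an offset; the
		 * real code would go on to dereference it. */
		unsigned char *tsrc = walk->src + blocks * AES_BLOCK_SIZE;
		printf("old check passes, would touch %p\n", (void *)tsrc);
	} else {
		printf("old check skips the tail\n");
	}
}

/* New guard: derives the tail length from the walk itself, so a failed
 * walk (walk->nbytes == 0) never reaches the pointer arithmetic. */
static void tail_new(struct fake_walk *walk, unsigned int nbytes,
		     unsigned int blocks)
{
	(void)nbytes;
	if (walk->nbytes % AES_BLOCK_SIZE) {
		unsigned char *tsrc = walk->src + blocks * AES_BLOCK_SIZE;
		printf("new check passes, would touch %p\n", (void *)tsrc);
	} else {
		printf("new check skips the tail, no dereference\n");
	}
}

int main(void)
{
	/* The error case the patch addresses: the walk failed, so nothing
	 * is mapped (nbytes == 0, NULL pointers), but the caller still has
	 * residual bytes to process. */
	struct fake_walk walk = { .nbytes = 0, .src = NULL, .dst = NULL };
	unsigned int nbytes = 5, blocks = 0;

	tail_old(&walk, nbytes, blocks);	/* takes the tail path anyway */
	tail_new(&walk, nbytes, blocks);	/* rejects the error case */
	return 0;
}

Building and running this with any C compiler (e.g. cc -Wall sketch.c &&
./a.out) shows the old check taking the tail path on the error case while
the new check does not, which is exactly what the one-line hunk above
changes in ctr_encrypt().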