From patchwork Sat Jun 10 16:22:56 2017
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 103565
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
	linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com,
	will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH 10/12] crypto: arm64/aes-blk - add a non-SIMD fallback for synchronous CTR
Date: Sat, 10 Jun 2017 16:22:56 +0000
Message-Id: <1497111778-4210-11-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

To accommodate systems that may disallow use of the NEON in kernel mode
in some circumstances, introduce a C fallback for synchronous AES in CTR
mode, and use it if may_use_simd() returns false.
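The reason a plain C fallback suffices is that CTR mode only ever runs the
block cipher in the encrypt direction over a counter block and XORs the
resulting keystream into the data, so one routine covers both encryption and
decryption and no SIMD state is needed. The standalone userspace sketch below
is not part of the patch; toy_block_encrypt(), ctr_inc() and ctr_crypt() are
made-up names standing in for the scalar AES core (__aes_arm64_encrypt),
crypto_inc() and the walk loop added in aes-ctr-fallback.h.

/*
 * Standalone illustration of the CTR construction (userspace, not kernel
 * code).  toy_block_encrypt() is a placeholder permutation standing in
 * for the real AES block function.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16

/* Stand-in for E_K(); the kernel fallback calls __aes_arm64_encrypt() here. */
static void toy_block_encrypt(uint8_t out[BLOCK_SIZE], const uint8_t in[BLOCK_SIZE])
{
	for (int i = 0; i < BLOCK_SIZE; i++)
		out[i] = in[i] ^ (uint8_t)(0xa5 + 3 * i);
}

/* Big-endian counter increment, equivalent to the kernel's crypto_inc(). */
static void ctr_inc(uint8_t ctr[BLOCK_SIZE])
{
	for (int i = BLOCK_SIZE - 1; i >= 0; i--)
		if (++ctr[i] != 0)
			break;
}

/* Encrypt or decrypt len bytes; a trailing partial block is handled naturally. */
static void ctr_crypt(uint8_t *dst, const uint8_t *src, size_t len,
		      uint8_t ctr[BLOCK_SIZE])
{
	uint8_t ks[BLOCK_SIZE];

	while (len > 0) {
		size_t n = len < BLOCK_SIZE ? len : BLOCK_SIZE;

		toy_block_encrypt(ks, ctr);	/* keystream block = E_K(counter) */
		for (size_t i = 0; i < n; i++)
			dst[i] = src[i] ^ ks[i];
		ctr_inc(ctr);

		dst += n;
		src += n;
		len -= n;
	}
}

int main(void)
{
	uint8_t iv1[BLOCK_SIZE] = { 0 }, iv2[BLOCK_SIZE] = { 0 };
	uint8_t msg[] = "a 21-byte test input";	/* 21 bytes incl. NUL */
	uint8_t enc[sizeof(msg)], dec[sizeof(msg)];

	ctr_crypt(enc, msg, sizeof(msg), iv1);
	ctr_crypt(dec, enc, sizeof(dec), iv2);	/* same routine decrypts */
	printf("round trip %s\n", memcmp(msg, dec, sizeof(msg)) ? "failed" : "ok");
	return 0;
}

Because the operation is its own inverse, the skcipher entry in the patch
below can point both .encrypt and .decrypt at the same ctr_encrypt_sync()
routine.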
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/Kconfig            |  7 ++-
 arch/arm64/crypto/aes-ctr-fallback.h | 55 ++++++++++++++++++++
 arch/arm64/crypto/aes-glue.c         | 17 +++++-
 3 files changed, 75 insertions(+), 4 deletions(-)

-- 
2.7.4

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index c3b74db72cc8..6bd1921d8ca2 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -64,17 +64,20 @@ config CRYPTO_AES_ARM64_CE_CCM
 
 config CRYPTO_AES_ARM64_CE_BLK
 	tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_AES_ARM64_CE
+	select CRYPTO_AES
 	select CRYPTO_SIMD
+	select CRYPTO_AES_ARM64
 
 config CRYPTO_AES_ARM64_NEON_BLK
 	tristate "AES in ECB/CBC/CTR/XTS modes using NEON instructions"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_AES
 	select CRYPTO_SIMD
+	select CRYPTO_AES_ARM64
 
 config CRYPTO_CHACHA20_NEON
 	tristate "NEON accelerated ChaCha20 symmetric cipher"
diff --git a/arch/arm64/crypto/aes-ctr-fallback.h b/arch/arm64/crypto/aes-ctr-fallback.h
new file mode 100644
index 000000000000..4a6bfac6ecb5
--- /dev/null
+++ b/arch/arm64/crypto/aes-ctr-fallback.h
@@ -0,0 +1,55 @@
+/*
+ * Fallback for sync aes(ctr) in contexts where kernel mode NEON
+ * is not allowed
+ *
+ * Copyright (C) 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <crypto/aes.h>
+#include <crypto/internal/skcipher.h>
+
+asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
+
+static inline int aes_ctr_encrypt_fallback(struct crypto_aes_ctx *ctx,
+					   struct skcipher_request *req)
+{
+	struct skcipher_walk walk;
+	u8 buf[AES_BLOCK_SIZE];
+	int err;
+
+	err = skcipher_walk_virt(&walk, req, true);
+
+	while (walk.nbytes > 0) {
+		u8 *dst = walk.dst.virt.addr;
+		u8 *src = walk.src.virt.addr;
+		int nbytes = walk.nbytes;
+		int tail = 0;
+
+		if (nbytes < walk.total) {
+			nbytes = round_down(nbytes, AES_BLOCK_SIZE);
+			tail = walk.nbytes % AES_BLOCK_SIZE;
+		}
+
+		do {
+			int bsize = min(nbytes, AES_BLOCK_SIZE);
+
+			__aes_arm64_encrypt(ctx->key_enc, buf, walk.iv,
+					    ctx->key_length / 4 + 6);
+			if (dst != src)
+				memcpy(dst, src, bsize);
+			crypto_xor(dst, buf, bsize);
+			crypto_inc(walk.iv, AES_BLOCK_SIZE);
+
+			dst += AES_BLOCK_SIZE;
+			src += AES_BLOCK_SIZE;
+			nbytes -= AES_BLOCK_SIZE;
+		} while (nbytes > 0);
+
+		err = skcipher_walk_done(&walk, tail);
+	}
+	return err;
+}
diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index bcf596b0197e..6806ad7d8dd4 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -10,6 +10,7 @@
 
 #include <asm/neon.h>
 #include <asm/hwcap.h>
+#include <asm/simd.h>
 #include <crypto/aes.h>
 #include <crypto/internal/hash.h>
 #include <crypto/internal/simd.h>
@@ -19,6 +20,7 @@
 #include <crypto/xts.h>
 
 #include "aes-ce-setkey.h"
+#include "aes-ctr-fallback.h"
 
 #ifdef USE_V8_CRYPTO_EXTENSIONS
 #define MODE "ce"
@@ -251,6 +253,17 @@ static int ctr_encrypt(struct skcipher_request *req)
 	return err;
 }
 
+static int ctr_encrypt_sync(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	if (!may_use_simd())
+		return aes_ctr_encrypt_fallback(ctx, req);
+
+	return ctr_encrypt(req);
+}
+
 static int xts_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -357,8 +370,8 @@ static struct skcipher_alg aes_algs[] = { {
 		.ivsize		= AES_BLOCK_SIZE,
 		.chunksize	= AES_BLOCK_SIZE,
 		.setkey		= skcipher_aes_setkey,
-		.encrypt	= ctr_encrypt,
-		.decrypt	= ctr_encrypt,
+		.encrypt	= ctr_encrypt_sync,
+		.decrypt	= ctr_encrypt_sync,
 	}, {
 		.base = {
 			.cra_name		= "__xts(aes)",
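
As for which callers reach the new ctr_encrypt_sync() path: kernel code that
passes CRYPTO_ALG_ASYNC in the mask to crypto_alloc_skcipher() gets a
synchronous transform and may invoke it from contexts where the NEON register
file cannot be used, which is exactly the case the may_use_simd() check
covers. Below is a rough sketch of such a caller against the 4.12-era
skcipher API; example_ctr_encrypt() is a hypothetical helper for illustration
only and does not appear in this patch.

#include <crypto/aes.h>
#include <crypto/skcipher.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/*
 * Hypothetical helper: CTR-encrypt len bytes of buf in place.
 * buf must not live on the stack; sg_init_one() needs linear-mapped memory.
 */
static int example_ctr_encrypt(const u8 *key, unsigned int keylen,
			       u8 iv[AES_BLOCK_SIZE], u8 *buf, unsigned int len)
{
	struct crypto_skcipher *tfm;
	struct scatterlist sg;
	int err;

	/* CRYPTO_ALG_ASYNC in the mask selects a synchronous implementation. */
	tfm = crypto_alloc_skcipher("ctr(aes)", 0, CRYPTO_ALG_ASYNC);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, keylen);
	if (!err) {
		SKCIPHER_REQUEST_ON_STACK(req, tfm);

		skcipher_request_set_tfm(req, tfm);
		skcipher_request_set_callback(req, 0, NULL, NULL);
		sg_init_one(&sg, buf, len);
		skcipher_request_set_crypt(req, &sg, &sg, len, iv);

		/* Completes synchronously; may end up in ctr_encrypt_sync(). */
		err = crypto_skcipher_encrypt(req);
		skcipher_request_zero(req);
	}

	crypto_free_skcipher(tfm);
	return err;
}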