From patchwork Sat Jun 10 16:22:55 2017
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 103564
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
    linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com,
    will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH 09/12] crypto: arm64/aes-ce-ccm: add non-SIMD generic fallback
Date: Sat, 10 Jun 2017 16:22:55 +0000
Message-Id: <1497111778-4210-10-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

The arm64 kernel will shortly disallow nested kernel mode NEON. So
honour this in the ARMv8 Crypto Extensions implementation of CCM-AES,
and fall back to a dynamically instantiated ccm(aes) implementation if
necessary (which will in all likelihood be produced by the generic CCM,
CTR and AES drivers).

Since this may break the boot-time algo tests, this driver can now only
be built as a module.
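Concretely, both ccm_encrypt() and ccm_decrypt() gain the same non-SIMD
gate before entering the kernel mode NEON section. A condensed sketch of
the shape of the change (names as in the diff below; the Crypto
Extensions fast path and the MAC handling are elided):

	static int ccm_encrypt(struct aead_request *req)
	{
		struct crypto_aes_ccm_ctx *ctx =
			crypto_aead_ctx(crypto_aead_reqtfm(req));

		if (!may_use_simd()) {
			/* NEON is off limits here (e.g., we interrupted
			 * other kernel mode NEON code), so clone the
			 * request onto the generic ccm(aes) fallback.
			 */
			struct aead_request *sub;
			int err;

			sub = aead_request_alloc(ctx->fallback, GFP_ATOMIC);
			if (!sub)
				return -ENOMEM;
			aead_request_set_ad(sub, req->assoclen);
			aead_request_set_crypt(sub, req->src, req->dst,
					       req->cryptlen, req->iv);
			err = crypto_aead_encrypt(sub);
			aead_request_free(sub);
			return err;
		}

		kernel_neon_begin();
		/* ... Crypto Extensions fast path, as in the full diff ... */
		kernel_neon_end();
		return 0;
	}

GFP_ATOMIC is used for the fallback request because the transform may be
invoked from a context that cannot sleep.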
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/Kconfig           |   3 +-
 arch/arm64/crypto/aes-ce-ccm-glue.c | 152 ++++++++++++++++++-----
 2 files changed, 116 insertions(+), 39 deletions(-)

-- 
2.7.4

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 772801f263d9..c3b74db72cc8 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -56,10 +56,11 @@ config CRYPTO_AES_ARM64_CE
 
 config CRYPTO_AES_ARM64_CE_CCM
 	tristate "AES in CCM mode using ARMv8 Crypto Extensions"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && m
 	select CRYPTO_ALGAPI
 	select CRYPTO_AES_ARM64_CE
 	select CRYPTO_AEAD
+	select CRYPTO_CCM
 
 config CRYPTO_AES_ARM64_CE_BLK
 	tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions"
diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
index 6a7dbc7c83a6..c5ae50141988 100644
--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
+++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
@@ -1,7 +1,7 @@
 /*
  * aes-ccm-glue.c - AES-CCM transform for ARMv8 with Crypto Extensions
  *
- * Copyright (C) 2013 - 2014 Linaro Ltd <ard.biesheuvel@linaro.org>
+ * Copyright (C) 2013 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -9,6 +9,7 @@
  */
 
 #include <asm/neon.h>
+#include <asm/simd.h>
 #include <asm/unaligned.h>
 #include <crypto/aes.h>
 #include <crypto/scatterwalk.h>
@@ -18,6 +19,11 @@
 
 #include "aes-ce-setkey.h"
 
+struct crypto_aes_ccm_ctx {
+	struct crypto_aes_ctx	key;
+	struct crypto_aead	*fallback;
+};
+
 static int num_rounds(struct crypto_aes_ctx *ctx)
 {
 	/*
@@ -47,22 +53,33 @@ asmlinkage void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u32 const rk[],
 static int ccm_setkey(struct crypto_aead *tfm, const u8 *in_key,
 		      unsigned int key_len)
 {
-	struct crypto_aes_ctx *ctx = crypto_aead_ctx(tfm);
+	struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(tfm);
 	int ret;
 
-	ret = ce_aes_expandkey(ctx, in_key, key_len);
-	if (!ret)
-		return 0;
+	ret = ce_aes_expandkey(&ctx->key, in_key, key_len);
+	if (ret) {
+		tfm->base.crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+		return ret;
+	}
+
+	ret = crypto_aead_setkey(ctx->fallback, in_key, key_len);
+	if (ret) {
+		if (ctx->fallback->base.crt_flags & CRYPTO_TFM_RES_BAD_KEY_LEN)
+			tfm->base.crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+		return ret;
+	}
 
-	tfm->base.crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
-	return -EINVAL;
+	return 0;
 }
 
 static int ccm_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
 {
+	struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(tfm);
+
 	if ((authsize & 1) || authsize < 4)
 		return -EINVAL;
-	return 0;
+
+	return crypto_aead_setauthsize(ctx->fallback, authsize);
 }
 
 static int ccm_init_mac(struct aead_request *req, u8 maciv[], u32 msglen)
@@ -106,7 +123,7 @@ static int ccm_init_mac(struct aead_request *req, u8 maciv[], u32 msglen)
 static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
-	struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead);
+	struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(aead);
 	struct __packed { __be16 l; __be32 h; u16 len; } ltag;
 	struct scatter_walk walk;
 	u32 len = req->assoclen;
@@ -122,8 +139,8 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 		ltag.len = 6;
 	}
 
-	ce_aes_ccm_auth_data(mac, (u8 *)&ltag, ltag.len, &macp, ctx->key_enc,
-			     num_rounds(ctx));
+	ce_aes_ccm_auth_data(mac, (u8 *)&ltag, ltag.len, &macp,
+			     ctx->key.key_enc, num_rounds(&ctx->key));
 	scatterwalk_start(&walk, req->src);
 
 	do {
@@ -135,8 +152,8 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 			n = scatterwalk_clamp(&walk, len);
 		}
 		p = scatterwalk_map(&walk);
-		ce_aes_ccm_auth_data(mac, p, n, &macp, ctx->key_enc,
-				     num_rounds(ctx));
+		ce_aes_ccm_auth_data(mac, p, n, &macp, ctx->key.key_enc,
+				     num_rounds(&ctx->key));
 		len -= n;
 
 		scatterwalk_unmap(p);
@@ -148,18 +165,34 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 static int ccm_encrypt(struct aead_request *req)
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
-	struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead);
+	struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(aead);
 	struct skcipher_walk walk;
 	u8 __aligned(8) mac[AES_BLOCK_SIZE];
 	u8 buf[AES_BLOCK_SIZE];
 	u32 len = req->cryptlen;
 	int err;
 
+	if (!may_use_simd()) {
+		struct aead_request *fallback_req;
+
+		fallback_req = aead_request_alloc(ctx->fallback, GFP_ATOMIC);
+		if (!fallback_req)
+			return -ENOMEM;
+
+		aead_request_set_ad(fallback_req, req->assoclen);
+		aead_request_set_crypt(fallback_req, req->src, req->dst,
+				       req->cryptlen, req->iv);
+
+		err = crypto_aead_encrypt(fallback_req);
+		aead_request_free(fallback_req);
+		return err;
+	}
+
 	err = ccm_init_mac(req, mac, len);
 	if (err)
 		return err;
 
-	kernel_neon_begin_partial(6);
+	kernel_neon_begin();
 
 	if (req->assoclen)
 		ccm_calculate_auth_mac(req, mac);
@@ -176,13 +209,14 @@ static int ccm_encrypt(struct aead_request *req)
 			tail = 0;
 
 		ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
-				   walk.nbytes - tail, ctx->key_enc,
-				   num_rounds(ctx), mac, walk.iv);
+				   walk.nbytes - tail, ctx->key.key_enc,
+				   num_rounds(&ctx->key), mac, walk.iv);
 
 		err = skcipher_walk_done(&walk, tail);
 	}
 	if (!err)
-		ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
+		ce_aes_ccm_final(mac, buf, ctx->key.key_enc,
+				 num_rounds(&ctx->key));
 
 	kernel_neon_end();
 
@@ -199,7 +233,7 @@ static int ccm_encrypt(struct aead_request *req)
 static int ccm_decrypt(struct aead_request *req)
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
-	struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead);
+	struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(aead);
 	unsigned int authsize = crypto_aead_authsize(aead);
 	struct skcipher_walk walk;
 	u8 __aligned(8) mac[AES_BLOCK_SIZE];
@@ -207,11 +241,27 @@ static int ccm_decrypt(struct aead_request *req)
 	u32 len = req->cryptlen - authsize;
 	int err;
 
+	if (!may_use_simd()) {
+		struct aead_request *fallback_req;
+
+		fallback_req = aead_request_alloc(ctx->fallback, GFP_ATOMIC);
+		if (!fallback_req)
+			return -ENOMEM;
+
+		aead_request_set_ad(fallback_req, req->assoclen);
+		aead_request_set_crypt(fallback_req, req->src, req->dst,
+				       req->cryptlen, req->iv);
+
+		err = crypto_aead_decrypt(fallback_req);
+		aead_request_free(fallback_req);
+		return err;
+	}
+
 	err = ccm_init_mac(req, mac, len);
 	if (err)
 		return err;
 
-	kernel_neon_begin_partial(6);
+	kernel_neon_begin();
 
 	if (req->assoclen)
 		ccm_calculate_auth_mac(req, mac);
@@ -228,13 +278,14 @@ static int ccm_decrypt(struct aead_request *req)
 			tail = 0;
 
 		ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
-				   walk.nbytes - tail, ctx->key_enc,
-				   num_rounds(ctx), mac, walk.iv);
+				   walk.nbytes - tail, ctx->key.key_enc,
+				   num_rounds(&ctx->key), mac, walk.iv);
 
 		err = skcipher_walk_done(&walk, tail);
 	}
 	if (!err)
-		ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
+		ce_aes_ccm_final(mac, buf, ctx->key.key_enc,
+				 num_rounds(&ctx->key));
 
 	kernel_neon_end();
 
@@ -251,28 +302,53 @@ static int ccm_decrypt(struct aead_request *req)
 	return 0;
 }
 
+static int ccm_init(struct crypto_aead *aead)
+{
+	struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(aead);
+	struct crypto_aead *tfm;
+
+	tfm = crypto_alloc_aead("ccm(aes)", 0,
+				CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+
+	if (IS_ERR(tfm))
+		return PTR_ERR(tfm);
+
+	ctx->fallback = tfm;
+	return 0;
+}
+
+static void ccm_exit(struct crypto_aead *aead)
+{
+	struct crypto_aes_ccm_ctx *ctx = crypto_aead_ctx(aead);
+
+	crypto_free_aead(ctx->fallback);
+}
+
 static struct aead_alg ccm_aes_alg = {
-	.base = {
-		.cra_name		= "ccm(aes)",
-		.cra_driver_name	= "ccm-aes-ce",
-		.cra_priority		= 300,
-		.cra_blocksize		= 1,
-		.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
-		.cra_module		= THIS_MODULE,
-	},
-	.ivsize		= AES_BLOCK_SIZE,
-	.chunksize	= AES_BLOCK_SIZE,
-	.maxauthsize	= AES_BLOCK_SIZE,
-	.setkey		= ccm_setkey,
-	.setauthsize	= ccm_setauthsize,
-	.encrypt	= ccm_encrypt,
-	.decrypt	= ccm_decrypt,
+	.base.cra_name		= "ccm(aes)",
+	.base.cra_driver_name	= "ccm-aes-ce",
+	.base.cra_priority	= 300,
+	.base.cra_blocksize	= 1,
+	.base.cra_ctxsize	= sizeof(struct crypto_aes_ccm_ctx),
+	.base.cra_module	= THIS_MODULE,
+	.base.cra_flags		= CRYPTO_ALG_NEED_FALLBACK,
+
+	.ivsize		= AES_BLOCK_SIZE,
+	.chunksize	= AES_BLOCK_SIZE,
+	.maxauthsize	= AES_BLOCK_SIZE,
+	.setkey		= ccm_setkey,
+	.setauthsize	= ccm_setauthsize,
+	.encrypt	= ccm_encrypt,
+	.decrypt	= ccm_decrypt,
+	.init		= ccm_init,
+	.exit		= ccm_exit,
 };
 
 static int __init aes_mod_init(void)
 {
 	if (!(elf_hwcap & HWCAP_AES))
 		return -ENODEV;
+
 	return crypto_register_aead(&ccm_aes_alg);
 }
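
Note that the fallback is transparent to users of the API: a consumer
still allocates "ccm(aes)" and the crypto core selects the highest
priority usable implementation. As a purely hypothetical illustration
(ccm_oneshot_encrypt() and its parameters are made up for this sketch;
the calls themselves are the standard kernel AEAD interface):

	#include <crypto/aead.h>
	#include <linux/scatterlist.h>

	/* Hypothetical one-shot CCM encryption; whether ccm-aes-ce or
	 * its generic fallback does the work is invisible here.
	 */
	static int ccm_oneshot_encrypt(const u8 *key, unsigned int keylen,
				       u8 *buf, unsigned int textlen, u8 *iv)
	{
		struct crypto_aead *tfm;
		struct aead_request *req;
		struct scatterlist sg;
		int err;

		/* Masking CRYPTO_ALG_ASYNC requests a synchronous
		 * transform, as ccm_init() above does for its fallback.
		 */
		tfm = crypto_alloc_aead("ccm(aes)", 0, CRYPTO_ALG_ASYNC);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		err = crypto_aead_setkey(tfm, key, keylen);
		if (!err)
			err = crypto_aead_setauthsize(tfm, 8); /* even, >= 4 */
		if (err)
			goto out_tfm;

		req = aead_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			err = -ENOMEM;
			goto out_tfm;
		}

		/* buf must have room for textlen bytes plus the 8-byte
		 * tag; iv[] is the 16-byte CCM IV, with iv[0] holding
		 * L - 1 and the nonce following it (RFC 3610 layout).
		 */
		sg_init_one(&sg, buf, textlen + 8);
		aead_request_set_callback(req, 0, NULL, NULL);
		aead_request_set_ad(req, 0);
		aead_request_set_crypt(req, &sg, &sg, textlen, iv);

		err = crypto_aead_encrypt(req);

		aead_request_free(req);
	out_tfm:
		crypto_free_aead(tfm);
		return err;
	}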