From patchwork Sat Aug 10 09:40:47 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 171013
Delivered-To: patch@linaro.org
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel , Herbert Xu , Eric Biggers , dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef , Milan Broz
Subject: [PATCH v9 1/7] crypto: essiv - create wrapper template for ESSIV generation
Date: Sat, 10 Aug 2019 12:40:47 +0300
Message-Id: <20190810094053.7423-2-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190810094053.7423-1-ard.biesheuvel@linaro.org>
References: <20190810094053.7423-1-ard.biesheuvel@linaro.org>
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: linux-crypto@vger.kernel.org

Implement a template that wraps a (skcipher,cipher,shash) or (aead,cipher,shash) tuple so that we can consolidate the ESSIV handling in fscrypt and dm-crypt and move it into the crypto API.
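For reference, the construction being wrapped here is small: hash the block encryption key to obtain a salt, key a block cipher with that salt, and encrypt the little-endian, zero-padded sector number to obtain the per-sector IV. The stand-alone sketch below illustrates this for the essiv(cbc(aes),aes,sha256) case; it uses OpenSSL's libcrypto rather than the kernel crypto API, is not part of this series, and the helper name and sector value are made up for the example.

/*
 * Illustrative user-space sketch of ESSIV (not part of this series).
 * Build with: gcc essiv_demo.c -lcrypto
 */
#include <stdint.h>
#include <stdio.h>
#include <openssl/aes.h>
#include <openssl/sha.h>

/* iv = AES-256_{SHA-256(key)}(sector number, LE, zero-padded to 16 bytes) */
static void essiv_for_sector(const uint8_t *key, size_t keylen,
			     uint64_t sector, uint8_t iv[16])
{
	uint8_t salt[SHA256_DIGEST_LENGTH];
	uint8_t block[16] = { 0 };
	AES_KEY enc;
	int i;

	SHA256(key, keylen, salt);		/* salt = hash of the bulk key */
	AES_set_encrypt_key(salt, 256, &enc);	/* ESSIV cipher keyed with the salt */

	for (i = 0; i < 8; i++)			/* 64-bit sector number, LE order */
		block[i] = (uint8_t)(sector >> (8 * i));

	AES_encrypt(block, iv, &enc);		/* encrypting it yields the IV */
}

int main(void)
{
	/* 128-bit example key (same bytes as the first cbc(aes) test vector in patch 4/7) */
	static const uint8_t key[16] = {
		0x06, 0xa9, 0x21, 0x40, 0x36, 0xb8, 0xa1, 0x5b,
		0x51, 0x2e, 0x03, 0xd5, 0x34, 0x12, 0x00, 0x06
	};
	uint8_t iv[16];
	int i;

	essiv_for_sector(key, sizeof(key), 42, iv);
	for (i = 0; i < 16; i++)
		printf("%02x", iv[i]);
	printf("\n");
	return 0;
}

Note that keying the ESSIV cipher with a SHA-256 digest means AES-256 is used for IV generation even when the bulk key is AES-128, which matches the behaviour of the fscrypt code that patch 2/7 removes.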
This will result in better test coverage, and will allow future changes to make the bare cipher interface internal to the crypto subsystem, in order to increase robustness of the API against misuse. Signed-off-by: Ard Biesheuvel --- crypto/Kconfig | 28 + crypto/Makefile | 1 + crypto/essiv.c | 665 ++++++++++++++++++++ 3 files changed, 694 insertions(+) -- 2.17.1 diff --git a/crypto/Kconfig b/crypto/Kconfig index 8880c1fc51d8..dbceaed65e52 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -482,6 +482,34 @@ config CRYPTO_ADIANTUM If unsure, say N. +config CRYPTO_ESSIV + tristate "ESSIV support for block encryption" + select CRYPTO_AUTHENC + help + Encrypted salt-sector initialization vector (ESSIV) is an IV + generation method that is used in some cases by fscrypt and/or + dm-crypt. It uses the hash of the block encryption key as the + symmetric key for a block encryption pass applied to the input + IV, making low entropy IV sources more suitable for block + encryption. + + This driver implements a crypto API template that can be + instantiated either as a skcipher or as a aead (depending on the + type of the first template argument), and which defers encryption + and decryption requests to the encapsulated cipher after applying + ESSIV to the input IV. Note that in the aead case, it is assumed + that the keys are presented in the same format used by the authenc + template, and that the IV appears at the end of the authenticated + associated data (AAD) region (which is how dm-crypt uses it.) + + Note that the use of ESSIV is not recommended for new deployments, + and so this only needs to be enabled when interoperability with + existing encrypted volumes of filesystems is required, or when + building for a particular system that requires it (e.g., when + the SoC in question has accelerated CBC but not XTS, making CBC + combined with ESSIV the only feasible mode for h/w accelerated + block encryption) + comment "Hash modes" config CRYPTO_CMAC diff --git a/crypto/Makefile b/crypto/Makefile index cfcc954e59f9..c204029f21ab 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -145,6 +145,7 @@ obj-$(CONFIG_CRYPTO_USER_API_AEAD) += algif_aead.o obj-$(CONFIG_CRYPTO_ZSTD) += zstd.o obj-$(CONFIG_CRYPTO_OFB) += ofb.o obj-$(CONFIG_CRYPTO_ECC) += ecc.o +obj-$(CONFIG_CRYPTO_ESSIV) += essiv.o ecdh_generic-y += ecdh.o ecdh_generic-y += ecdh_helper.o diff --git a/crypto/essiv.c b/crypto/essiv.c new file mode 100644 index 000000000000..ed9a1eae434e --- /dev/null +++ b/crypto/essiv.c @@ -0,0 +1,665 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * ESSIV skcipher and aead template for block encryption + * + * This template encapsulates the ESSIV IV generation algorithm used by + * dm-crypt and fscrypt, which converts the initial vector for the skcipher + * used for block encryption, by encrypting it using the hash of the + * skcipher key as encryption key. Usually, the input IV is a 64-bit sector + * number in LE representation zero-padded to the size of the IV, but this + * is not assumed by this driver. + * + * The typical use of this template is to instantiate the skcipher + * 'essiv(cbc(aes),aes,sha256)', which is the only instantiation used by + * fscrypt, and the most relevant one for dm-crypt. 
However, dm-crypt + * also permits ESSIV to be used in combination with the authenc template, + * e.g., 'essiv(authenc(hmac(sha256),cbc(aes)),aes,sha256)', in which case + * we need to instantiate an aead that accepts the same special key format + * as the authenc template, and deals with the way the encrypted IV is + * embedded into the AAD area of the aead request. This means the AEAD + * flavor produced by this template is tightly coupled to the way dm-crypt + * happens to use it. + * + * Copyright (c) 2019 Linaro, Ltd. + * + * Heavily based on: + * adiantum length-preserving encryption mode + * + * Copyright 2018 Google LLC + */ + +#include +#include +#include +#include +#include +#include + +#include "internal.h" + +struct essiv_instance_ctx { + union { + struct crypto_skcipher_spawn skcipher_spawn; + struct crypto_aead_spawn aead_spawn; + } u; + struct crypto_spawn essiv_cipher_spawn; + struct crypto_shash_spawn hash_spawn; +}; + +struct essiv_tfm_ctx { + union { + struct crypto_skcipher *skcipher; + struct crypto_aead *aead; + } u; + struct crypto_cipher *essiv_cipher; + struct crypto_shash *hash; + int ivoffset; +}; + +struct essiv_aead_request_ctx { + struct scatterlist sg[4]; + u8 *assoc; + struct aead_request aead_req; +}; + +static int essiv_skcipher_setkey(struct crypto_skcipher *tfm, + const u8 *key, unsigned int keylen) +{ + struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + SHASH_DESC_ON_STACK(desc, tctx->hash); + u8 salt[HASH_MAX_DIGESTSIZE]; + int err; + + crypto_skcipher_clear_flags(tctx->u.skcipher, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(tctx->u.skcipher, + crypto_skcipher_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + err = crypto_skcipher_setkey(tctx->u.skcipher, key, keylen); + crypto_skcipher_set_flags(tfm, + crypto_skcipher_get_flags(tctx->u.skcipher) & + CRYPTO_TFM_RES_MASK); + if (err) + return err; + + desc->tfm = tctx->hash; + err = crypto_shash_digest(desc, key, keylen, salt); + if (err) + return err; + + crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK); + crypto_cipher_set_flags(tctx->essiv_cipher, + crypto_skcipher_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + err = crypto_cipher_setkey(tctx->essiv_cipher, salt, + crypto_shash_digestsize(tctx->hash)); + crypto_skcipher_set_flags(tfm, + crypto_cipher_get_flags(tctx->essiv_cipher) & + CRYPTO_TFM_RES_MASK); + + return err; +} + +static int essiv_aead_setkey(struct crypto_aead *tfm, const u8 *key, + unsigned int keylen) +{ + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + SHASH_DESC_ON_STACK(desc, tctx->hash); + struct crypto_authenc_keys keys; + u8 salt[HASH_MAX_DIGESTSIZE]; + int err; + + crypto_aead_clear_flags(tctx->u.aead, CRYPTO_TFM_REQ_MASK); + crypto_aead_set_flags(tctx->u.aead, crypto_aead_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + err = crypto_aead_setkey(tctx->u.aead, key, keylen); + crypto_aead_set_flags(tfm, crypto_aead_get_flags(tctx->u.aead) & + CRYPTO_TFM_RES_MASK); + if (err) + return err; + + if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) { + crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + + desc->tfm = tctx->hash; + err = crypto_shash_init(desc) ?: + crypto_shash_update(desc, keys.enckey, keys.enckeylen) ?: + crypto_shash_finup(desc, keys.authkey, keys.authkeylen, salt); + if (err) + return err; + + crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK); + crypto_cipher_set_flags(tctx->essiv_cipher, crypto_aead_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + err = crypto_cipher_setkey(tctx->essiv_cipher, salt, + 
crypto_shash_digestsize(tctx->hash)); + crypto_aead_set_flags(tfm, crypto_cipher_get_flags(tctx->essiv_cipher) & + CRYPTO_TFM_RES_MASK); + + return err; +} + +static int essiv_aead_setauthsize(struct crypto_aead *tfm, + unsigned int authsize) +{ + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + + return crypto_aead_setauthsize(tctx->u.aead, authsize); +} + +static void essiv_skcipher_done(struct crypto_async_request *areq, int err) +{ + struct skcipher_request *req = areq->data; + + skcipher_request_complete(req, err); +} + +static int essiv_skcipher_crypt(struct skcipher_request *req, bool enc) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct skcipher_request *subreq = skcipher_request_ctx(req); + + crypto_cipher_encrypt_one(tctx->essiv_cipher, req->iv, req->iv); + + skcipher_request_set_tfm(subreq, tctx->u.skcipher); + skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, + req->iv); + skcipher_request_set_callback(subreq, skcipher_request_flags(req), + essiv_skcipher_done, req); + + return enc ? crypto_skcipher_encrypt(subreq) : + crypto_skcipher_decrypt(subreq); +} + +static int essiv_skcipher_encrypt(struct skcipher_request *req) +{ + return essiv_skcipher_crypt(req, true); +} + +static int essiv_skcipher_decrypt(struct skcipher_request *req) +{ + return essiv_skcipher_crypt(req, false); +} + +static void essiv_aead_done(struct crypto_async_request *areq, int err) +{ + struct aead_request *req = areq->data; + struct essiv_aead_request_ctx *rctx = aead_request_ctx(req); + + if (rctx->assoc) + kfree(rctx->assoc); + aead_request_complete(req, err); +} + +static int essiv_aead_crypt(struct aead_request *req, bool enc) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + const struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + struct essiv_aead_request_ctx *rctx = aead_request_ctx(req); + struct aead_request *subreq = &rctx->aead_req; + struct scatterlist *src = req->src; + int err; + + crypto_cipher_encrypt_one(tctx->essiv_cipher, req->iv, req->iv); + + /* + * dm-crypt embeds the sector number and the IV in the AAD region, so + * we have to copy the converted IV into the right scatterlist before + * we pass it on. + */ + rctx->assoc = NULL; + if (req->src == req->dst || !enc) { + scatterwalk_map_and_copy(req->iv, req->dst, + req->assoclen - crypto_aead_ivsize(tfm), + crypto_aead_ivsize(tfm), 1); + } else { + u8 *iv = (u8 *)aead_request_ctx(req) + tctx->ivoffset; + int ivsize = crypto_aead_ivsize(tfm); + int ssize = req->assoclen - ivsize; + struct scatterlist *sg; + int nents; + + if (ssize < 0) + return -EINVAL; + + nents = sg_nents_for_len(req->src, ssize); + if (nents < 0) + return -EINVAL; + + memcpy(iv, req->iv, ivsize); + sg_init_table(rctx->sg, 4); + + if (unlikely(nents > 1)) { + /* + * This is a case that rarely occurs in practice, but + * for correctness, we have to deal with it nonetheless. 
+ */ + rctx->assoc = kmalloc(ssize, GFP_ATOMIC); + if (!rctx->assoc) + return -ENOMEM; + + scatterwalk_map_and_copy(rctx->assoc, req->src, 0, + ssize, 0); + sg_set_buf(rctx->sg, rctx->assoc, ssize); + } else { + sg_set_page(rctx->sg, sg_page(req->src), ssize, + req->src->offset); + } + + sg_set_buf(rctx->sg + 1, iv, ivsize); + sg = scatterwalk_ffwd(rctx->sg + 2, req->src, req->assoclen); + if (sg != rctx->sg + 2) + sg_chain(rctx->sg, 3, sg); + + src = rctx->sg; + } + + aead_request_set_tfm(subreq, tctx->u.aead); + aead_request_set_ad(subreq, req->assoclen); + aead_request_set_callback(subreq, aead_request_flags(req), + essiv_aead_done, req); + aead_request_set_crypt(subreq, src, req->dst, req->cryptlen, req->iv); + + err = enc ? crypto_aead_encrypt(subreq) : + crypto_aead_decrypt(subreq); + + if (rctx->assoc && err != -EINPROGRESS) + kfree(rctx->assoc); + return err; +} + +static int essiv_aead_encrypt(struct aead_request *req) +{ + return essiv_aead_crypt(req, true); +} + +static int essiv_aead_decrypt(struct aead_request *req) +{ + return essiv_aead_crypt(req, false); +} + +static int essiv_init_tfm(struct essiv_instance_ctx *ictx, + struct essiv_tfm_ctx *tctx) +{ + struct crypto_cipher *essiv_cipher; + struct crypto_shash *hash; + int err; + + essiv_cipher = crypto_spawn_cipher(&ictx->essiv_cipher_spawn); + if (IS_ERR(essiv_cipher)) + return PTR_ERR(essiv_cipher); + + hash = crypto_spawn_shash(&ictx->hash_spawn); + if (IS_ERR(hash)) { + err = PTR_ERR(hash); + goto err_free_essiv_cipher; + } + + tctx->essiv_cipher = essiv_cipher; + tctx->hash = hash; + + return 0; + +err_free_essiv_cipher: + crypto_free_cipher(essiv_cipher); + return err; +} + +static int essiv_skcipher_init_tfm(struct crypto_skcipher *tfm) +{ + struct skcipher_instance *inst = skcipher_alg_instance(tfm); + struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst); + struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct crypto_skcipher *skcipher; + int err; + + skcipher = crypto_spawn_skcipher(&ictx->u.skcipher_spawn); + if (IS_ERR(skcipher)) + return PTR_ERR(skcipher); + + crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) + + crypto_skcipher_reqsize(skcipher)); + + err = essiv_init_tfm(ictx, tctx); + if (err) { + crypto_free_skcipher(skcipher); + return err; + } + + tctx->u.skcipher = skcipher; + return 0; +} + +static int essiv_aead_init_tfm(struct crypto_aead *tfm) +{ + struct aead_instance *inst = aead_alg_instance(tfm); + struct essiv_instance_ctx *ictx = aead_instance_ctx(inst); + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + struct crypto_aead *aead; + unsigned int subreq_size; + int err; + + BUILD_BUG_ON(offsetofend(struct essiv_aead_request_ctx, aead_req) != + sizeof(struct essiv_aead_request_ctx)); + + aead = crypto_spawn_aead(&ictx->u.aead_spawn); + if (IS_ERR(aead)) + return PTR_ERR(aead); + + subreq_size = FIELD_SIZEOF(struct essiv_aead_request_ctx, aead_req) + + crypto_aead_reqsize(aead); + + tctx->ivoffset = offsetof(struct essiv_aead_request_ctx, aead_req) + + subreq_size; + crypto_aead_set_reqsize(tfm, tctx->ivoffset + crypto_aead_ivsize(aead)); + + err = essiv_init_tfm(ictx, tctx); + if (err) { + crypto_free_aead(aead); + return err; + } + + tctx->u.aead = aead; + return 0; +} + +static void essiv_skcipher_exit_tfm(struct crypto_skcipher *tfm) +{ + struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + + crypto_free_skcipher(tctx->u.skcipher); + crypto_free_cipher(tctx->essiv_cipher); + crypto_free_shash(tctx->hash); +} + +static void essiv_aead_exit_tfm(struct 
crypto_aead *tfm) +{ + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + + crypto_free_aead(tctx->u.aead); + crypto_free_cipher(tctx->essiv_cipher); + crypto_free_shash(tctx->hash); +} + +static void essiv_skcipher_free_instance(struct skcipher_instance *inst) +{ + struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst); + + crypto_drop_skcipher(&ictx->u.skcipher_spawn); + crypto_drop_spawn(&ictx->essiv_cipher_spawn); + crypto_drop_shash(&ictx->hash_spawn); + kfree(inst); +} + +static void essiv_aead_free_instance(struct aead_instance *inst) +{ + struct essiv_instance_ctx *ictx = aead_instance_ctx(inst); + + crypto_drop_aead(&ictx->u.aead_spawn); + crypto_drop_spawn(&ictx->essiv_cipher_spawn); + crypto_drop_shash(&ictx->hash_spawn); + kfree(inst); +} + +static bool parse_cipher_name(char *essiv_cipher_name, const char *cra_name) +{ + const char *p, *q; + int len; + + /* find the last opening parens */ + p = strrchr(cra_name, '('); + if (!p++) + return false; + + /* find the first closing parens in the tail of the string */ + q = strchr(p, ')'); + if (!q) + return false; + + len = q - p; + if (len >= CRYPTO_MAX_ALG_NAME) + return false; + + memcpy(essiv_cipher_name, p, len); + essiv_cipher_name[len] = '\0'; + return true; +} + +static bool essiv_supported_algorithms(struct crypto_alg *essiv_cipher_alg, + struct shash_alg *hash_alg, + int ivsize) +{ + if (hash_alg->digestsize < essiv_cipher_alg->cra_cipher.cia_min_keysize || + hash_alg->digestsize > essiv_cipher_alg->cra_cipher.cia_max_keysize) + return false; + + if (ivsize != essiv_cipher_alg->cra_blocksize) + return false; + + if (crypto_shash_alg_has_setkey(hash_alg)) + return false; + + return true; +} + +static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb) +{ + struct crypto_attr_type *algt; + const char *inner_cipher_name; + const char *shash_name; + char essiv_cipher_name[CRYPTO_MAX_ALG_NAME]; + struct skcipher_instance *skcipher_inst = NULL; + struct aead_instance *aead_inst = NULL; + struct crypto_instance *inst; + struct crypto_alg *base, *block_base; + struct essiv_instance_ctx *ictx; + struct skcipher_alg *skcipher_alg = NULL; + struct aead_alg *aead_alg = NULL; + struct crypto_alg *essiv_cipher_alg; + struct crypto_alg *_hash_alg; + struct shash_alg *hash_alg; + int ivsize; + u32 type; + int err; + + algt = crypto_get_attr_type(tb); + if (IS_ERR(algt)) + return PTR_ERR(algt); + + inner_cipher_name = crypto_attr_alg_name(tb[1]); + if (IS_ERR(inner_cipher_name)) + return PTR_ERR(inner_cipher_name); + + shash_name = crypto_attr_alg_name(tb[2]); + if (IS_ERR(shash_name)) + return PTR_ERR(shash_name); + + type = algt->type & algt->mask; + + switch (type) { + case CRYPTO_ALG_TYPE_BLKCIPHER: + skcipher_inst = kzalloc(sizeof(*skcipher_inst) + + sizeof(*ictx), GFP_KERNEL); + if (!skcipher_inst) + return -ENOMEM; + inst = skcipher_crypto_instance(skcipher_inst); + base = &skcipher_inst->alg.base; + ictx = crypto_instance_ctx(inst); + + /* Block cipher, e.g. 
"cbc(aes)" */ + crypto_set_skcipher_spawn(&ictx->u.skcipher_spawn, inst); + err = crypto_grab_skcipher(&ictx->u.skcipher_spawn, + inner_cipher_name, 0, + crypto_requires_sync(algt->type, + algt->mask)); + if (err) + goto out_free_inst; + skcipher_alg = crypto_spawn_skcipher_alg(&ictx->u.skcipher_spawn); + block_base = &skcipher_alg->base; + ivsize = crypto_skcipher_alg_ivsize(skcipher_alg); + break; + + case CRYPTO_ALG_TYPE_AEAD: + aead_inst = kzalloc(sizeof(*aead_inst) + + sizeof(*ictx), GFP_KERNEL); + if (!aead_inst) + return -ENOMEM; + inst = aead_crypto_instance(aead_inst); + base = &aead_inst->alg.base; + ictx = crypto_instance_ctx(inst); + + /* AEAD cipher, e.g. "authenc(hmac(sha256),cbc(aes))" */ + crypto_set_aead_spawn(&ictx->u.aead_spawn, inst); + err = crypto_grab_aead(&ictx->u.aead_spawn, + inner_cipher_name, 0, + crypto_requires_sync(algt->type, + algt->mask)); + if (err) + goto out_free_inst; + aead_alg = crypto_spawn_aead_alg(&ictx->u.aead_spawn); + block_base = &aead_alg->base; + if (!strstarts(block_base->cra_name, "authenc(")) { + pr_warn("Only authenc() type AEADs are supported by ESSIV\n"); + goto out_drop_skcipher; + } + ivsize = aead_alg->ivsize; + break; + + default: + return -EINVAL; + } + + if (!parse_cipher_name(essiv_cipher_name, block_base->cra_name)) { + pr_warn("Failed to parse ESSIV cipher name from skcipher cra_name\n"); + goto out_drop_skcipher; + } + + /* Block cipher, e.g. "aes" */ + crypto_set_spawn(&ictx->essiv_cipher_spawn, inst); + err = crypto_grab_spawn(&ictx->essiv_cipher_spawn, essiv_cipher_name, + CRYPTO_ALG_TYPE_CIPHER, CRYPTO_ALG_TYPE_MASK); + if (err) + goto out_drop_skcipher; + essiv_cipher_alg = ictx->essiv_cipher_spawn.alg; + + /* Synchronous hash, e.g., "sha256" */ + _hash_alg = crypto_alg_mod_lookup(shash_name, + CRYPTO_ALG_TYPE_SHASH, + CRYPTO_ALG_TYPE_MASK); + if (IS_ERR(_hash_alg)) { + err = PTR_ERR(_hash_alg); + goto out_drop_essiv_cipher; + } + hash_alg = __crypto_shash_alg(_hash_alg); + err = crypto_init_shash_spawn(&ictx->hash_spawn, hash_alg, inst); + if (err) + goto out_put_hash; + + /* Check the set of algorithms */ + if (!essiv_supported_algorithms(essiv_cipher_alg, hash_alg, ivsize)) { + pr_warn("Unsupported essiv instantiation: essiv(%s,%s)\n", + block_base->cra_name, + hash_alg->base.cra_name); + err = -EINVAL; + goto out_drop_hash; + } + + /* Instance fields */ + + err = -ENAMETOOLONG; + if (snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, + "essiv(%s,%s)", block_base->cra_name, + hash_alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME) + goto out_drop_hash; + if (snprintf(base->cra_driver_name, CRYPTO_MAX_ALG_NAME, + "essiv(%s,%s)", block_base->cra_driver_name, + hash_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME) + goto out_drop_hash; + + base->cra_flags = block_base->cra_flags & CRYPTO_ALG_ASYNC; + base->cra_blocksize = block_base->cra_blocksize; + base->cra_ctxsize = sizeof(struct essiv_tfm_ctx); + base->cra_alignmask = block_base->cra_alignmask; + base->cra_priority = block_base->cra_priority; + + if (type == CRYPTO_ALG_TYPE_BLKCIPHER) { + skcipher_inst->alg.setkey = essiv_skcipher_setkey; + skcipher_inst->alg.encrypt = essiv_skcipher_encrypt; + skcipher_inst->alg.decrypt = essiv_skcipher_decrypt; + skcipher_inst->alg.init = essiv_skcipher_init_tfm; + skcipher_inst->alg.exit = essiv_skcipher_exit_tfm; + + skcipher_inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(skcipher_alg); + skcipher_inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(skcipher_alg); + skcipher_inst->alg.ivsize = 
crypto_skcipher_alg_ivsize(skcipher_alg); + skcipher_inst->alg.chunksize = crypto_skcipher_alg_chunksize(skcipher_alg); + skcipher_inst->alg.walksize = crypto_skcipher_alg_walksize(skcipher_alg); + + skcipher_inst->free = essiv_skcipher_free_instance; + + err = skcipher_register_instance(tmpl, skcipher_inst); + } else { + aead_inst->alg.setkey = essiv_aead_setkey; + aead_inst->alg.setauthsize = essiv_aead_setauthsize; + aead_inst->alg.encrypt = essiv_aead_encrypt; + aead_inst->alg.decrypt = essiv_aead_decrypt; + aead_inst->alg.init = essiv_aead_init_tfm; + aead_inst->alg.exit = essiv_aead_exit_tfm; + + aead_inst->alg.ivsize = crypto_aead_alg_ivsize(aead_alg); + aead_inst->alg.maxauthsize = crypto_aead_alg_maxauthsize(aead_alg); + aead_inst->alg.chunksize = crypto_aead_alg_chunksize(aead_alg); + + aead_inst->free = essiv_aead_free_instance; + + err = aead_register_instance(tmpl, aead_inst); + } + + if (err) + goto out_drop_hash; + + crypto_mod_put(_hash_alg); + return 0; + +out_drop_hash: + crypto_drop_shash(&ictx->hash_spawn); +out_put_hash: + crypto_mod_put(_hash_alg); +out_drop_essiv_cipher: + crypto_drop_spawn(&ictx->essiv_cipher_spawn); +out_drop_skcipher: + if (type == CRYPTO_ALG_TYPE_BLKCIPHER) + crypto_drop_skcipher(&ictx->u.skcipher_spawn); + else + crypto_drop_aead(&ictx->u.aead_spawn); +out_free_inst: + kfree(skcipher_inst); + kfree(aead_inst); + return err; +} + +/* essiv(cipher_name, shash_name) */ +static struct crypto_template essiv_tmpl = { + .name = "essiv", + .create = essiv_create, + .module = THIS_MODULE, +}; + +static int __init essiv_module_init(void) +{ + return crypto_register_template(&essiv_tmpl); +} + +static void __exit essiv_module_exit(void) +{ + crypto_unregister_template(&essiv_tmpl); +} + +subsys_initcall(essiv_module_init); +module_exit(essiv_module_exit); + +MODULE_DESCRIPTION("ESSIV skcipher/aead wrapper for block encryption"); +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS_CRYPTO("essiv"); From patchwork Sat Aug 10 09:40:48 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 171012 Delivered-To: patch@linaro.org Received: by 2002:a92:d204:0:0:0:0:0 with SMTP id y4csp328420ily; Sat, 10 Aug 2019 02:41:22 -0700 (PDT) X-Google-Smtp-Source: APXvYqzRz7F/E+KGQGh2+Ba7rDeKFcstAKBBkduEB7kQd3OwXkLAQ4j+C7GeYTyqAg9GLj1Tk5wO X-Received: by 2002:a62:ae02:: with SMTP id q2mr25614946pff.1.1565430082163; Sat, 10 Aug 2019 02:41:22 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1565430082; cv=none; d=google.com; s=arc-20160816; b=v39VX+luwzGlXMSfhKytoZWYgd8VuUS3CxcLW0SU3AqVHbLubJn1+RfAgoSaZIhdUS i5oirpzsjuuEN5g0r/a1SWZLhs6QYNU09hrh+vkssHbaHRC9oaM8niYfG6PoqJPlEqBE myIiEb0iCCJGQgpNZX4oPxTwzMkHxFJsV4odC4NfB8ZITUqMkXaLEiKSjddBKSv7vW2Y +D0+icM8s6/SGh14k3X41w1cl6vREb5PHouNnbHpPRnVp2yosJ1jw1Ah1mQ/Dz41TyAi HfVyvBDmGToIUfz8cqCZOZ0XMtmr2Xlx3/f3w24CIRPpTSCw7I6Q44dghKfpAln1jOHR 51wg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature; bh=WCWl+tpGIKbnP/rNTSW1y6zpS02IMHAFrYZrL23PudA=; b=IAgvjiImwGLP1nZjC4AvewW/79jE0DM1VtlrWw1DYFaj9ofM0EGUJz1sU1kpca4VpE A/wIFpA/OtjYqfhVfvYAJtj9pWi181q5qKkOrFisp2hK9jxMBsjpMqIMmXAxKrPRMTHI jxubJ+ju7XQCXf0yR3HxXuDRKL+hnLXjYgJ/9GpW64uJI7JhfcA0/VNsG3jg2FqONoyJ YSgtinQLACkJ2sSqV1qCtsVCqr4aiNTgVy8ljHe9six3jI1/X6A1Bns8CH/UmUYNsfA7 FtLbz+aF7GvmR0qRTAhdvd7W3X/LHBzMlwUBYl+EjlMvUzeDevyfsSkPFq24sKpX5AAU MtOg== ARC-Authentication-Results: 
i=1; mx.google.com; dkim=pass; spf=pass; dmarc=pass header.from=linaro.org
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel , Herbert Xu , Eric Biggers , dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef , Milan Broz
Subject: [PATCH v9 2/7] fs: crypto: invoke crypto API for ESSIV handling
Date: Sat, 10 Aug 2019 12:40:48 +0300
Message-Id: <20190810094053.7423-3-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To:
<20190810094053.7423-1-ard.biesheuvel@linaro.org> References: <20190810094053.7423-1-ard.biesheuvel@linaro.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Instead of open coding the calculations for ESSIV handling, use a ESSIV skcipher which does all of this under the hood. Signed-off-by: Ard Biesheuvel --- fs/crypto/Kconfig | 1 + fs/crypto/crypto.c | 5 -- fs/crypto/fscrypt_private.h | 9 -- fs/crypto/keyinfo.c | 92 +------------------- 4 files changed, 4 insertions(+), 103 deletions(-) -- 2.17.1 diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig index 5fdf24877c17..6f3d59b880b7 100644 --- a/fs/crypto/Kconfig +++ b/fs/crypto/Kconfig @@ -5,6 +5,7 @@ config FS_ENCRYPTION select CRYPTO_AES select CRYPTO_CBC select CRYPTO_ECB + select CRYPTO_ESSIV select CRYPTO_XTS select CRYPTO_CTS select KEYS diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c index 45c3d0427fb2..fd13231c5ff6 100644 --- a/fs/crypto/crypto.c +++ b/fs/crypto/crypto.c @@ -143,9 +143,6 @@ void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num, if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) memcpy(iv->nonce, ci->ci_nonce, FS_KEY_DERIVATION_NONCE_SIZE); - - if (ci->ci_essiv_tfm != NULL) - crypto_cipher_encrypt_one(ci->ci_essiv_tfm, iv->raw, iv->raw); } /* Encrypt or decrypt a single filesystem block of file contents */ @@ -523,8 +520,6 @@ static void __exit fscrypt_exit(void) destroy_workqueue(fscrypt_read_workqueue); kmem_cache_destroy(fscrypt_ctx_cachep); kmem_cache_destroy(fscrypt_info_cachep); - - fscrypt_essiv_cleanup(); } module_exit(fscrypt_exit); diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h index 8978eec9d766..2fc6f0bd2d13 100644 --- a/fs/crypto/fscrypt_private.h +++ b/fs/crypto/fscrypt_private.h @@ -61,12 +61,6 @@ struct fscrypt_info { /* The actual crypto transform used for encryption and decryption */ struct crypto_skcipher *ci_ctfm; - /* - * Cipher for ESSIV IV generation. Only set for CBC contents - * encryption, otherwise is NULL. - */ - struct crypto_cipher *ci_essiv_tfm; - /* * Encryption mode used for this inode. It corresponds to either * ci_data_mode or ci_filename_mode, depending on the inode type. 
@@ -163,9 +157,6 @@ struct fscrypt_mode { int keysize; int ivsize; bool logged_impl_name; - bool needs_essiv; }; -extern void __exit fscrypt_essiv_cleanup(void); - #endif /* _FSCRYPT_PRIVATE_H */ diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c index 207ebed918c1..80924a0f72ca 100644 --- a/fs/crypto/keyinfo.c +++ b/fs/crypto/keyinfo.c @@ -14,12 +14,9 @@ #include #include #include -#include #include #include "fscrypt_private.h" -static struct crypto_shash *essiv_hash_tfm; - /* Table of keys referenced by FS_POLICY_FLAG_DIRECT_KEY policies */ static DEFINE_HASHTABLE(fscrypt_master_keys, 6); /* 6 bits = 64 buckets */ static DEFINE_SPINLOCK(fscrypt_master_keys_lock); @@ -143,10 +140,9 @@ static struct fscrypt_mode available_modes[] = { }, [FS_ENCRYPTION_MODE_AES_128_CBC] = { .friendly_name = "AES-128-CBC", - .cipher_str = "cbc(aes)", + .cipher_str = "essiv(cbc(aes),sha256)", .keysize = 16, .ivsize = 16, - .needs_essiv = true, }, [FS_ENCRYPTION_MODE_AES_128_CTS] = { .friendly_name = "AES-128-CTS-CBC", @@ -376,72 +372,6 @@ fscrypt_get_master_key(const struct fscrypt_info *ci, struct fscrypt_mode *mode, return ERR_PTR(err); } -static int derive_essiv_salt(const u8 *key, int keysize, u8 *salt) -{ - struct crypto_shash *tfm = READ_ONCE(essiv_hash_tfm); - - /* init hash transform on demand */ - if (unlikely(!tfm)) { - struct crypto_shash *prev_tfm; - - tfm = crypto_alloc_shash("sha256", 0, 0); - if (IS_ERR(tfm)) { - fscrypt_warn(NULL, - "error allocating SHA-256 transform: %ld", - PTR_ERR(tfm)); - return PTR_ERR(tfm); - } - prev_tfm = cmpxchg(&essiv_hash_tfm, NULL, tfm); - if (prev_tfm) { - crypto_free_shash(tfm); - tfm = prev_tfm; - } - } - - { - SHASH_DESC_ON_STACK(desc, tfm); - desc->tfm = tfm; - - return crypto_shash_digest(desc, key, keysize, salt); - } -} - -static int init_essiv_generator(struct fscrypt_info *ci, const u8 *raw_key, - int keysize) -{ - int err; - struct crypto_cipher *essiv_tfm; - u8 salt[SHA256_DIGEST_SIZE]; - - essiv_tfm = crypto_alloc_cipher("aes", 0, 0); - if (IS_ERR(essiv_tfm)) - return PTR_ERR(essiv_tfm); - - ci->ci_essiv_tfm = essiv_tfm; - - err = derive_essiv_salt(raw_key, keysize, salt); - if (err) - goto out; - - /* - * Using SHA256 to derive the salt/key will result in AES-256 being - * used for IV generation. File contents encryption will still use the - * configured keysize (AES-128) nevertheless. 
- */ - err = crypto_cipher_setkey(essiv_tfm, salt, sizeof(salt)); - if (err) - goto out; - -out: - memzero_explicit(salt, sizeof(salt)); - return err; -} - -void __exit fscrypt_essiv_cleanup(void) -{ - crypto_free_shash(essiv_hash_tfm); -} - /* * Given the encryption mode and key (normally the derived key, but for * FS_POLICY_FLAG_DIRECT_KEY mode it's the master key), set up the inode's @@ -453,7 +383,6 @@ static int setup_crypto_transform(struct fscrypt_info *ci, { struct fscrypt_master_key *mk; struct crypto_skcipher *ctfm; - int err; if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) { mk = fscrypt_get_master_key(ci, mode, raw_key, inode); @@ -469,19 +398,6 @@ static int setup_crypto_transform(struct fscrypt_info *ci, ci->ci_master_key = mk; ci->ci_ctfm = ctfm; - if (mode->needs_essiv) { - /* ESSIV implies 16-byte IVs which implies !DIRECT_KEY */ - WARN_ON(mode->ivsize != AES_BLOCK_SIZE); - WARN_ON(ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY); - - err = init_essiv_generator(ci, raw_key, mode->keysize); - if (err) { - fscrypt_warn(inode->i_sb, - "error initializing ESSIV generator for inode %lu: %d", - inode->i_ino, err); - return err; - } - } return 0; } @@ -490,12 +406,10 @@ static void put_crypt_info(struct fscrypt_info *ci) if (!ci) return; - if (ci->ci_master_key) { + if (ci->ci_master_key) put_master_key(ci->ci_master_key); - } else { + else crypto_free_skcipher(ci->ci_ctfm); - crypto_free_cipher(ci->ci_essiv_tfm); - } kmem_cache_free(fscrypt_info_cachep, ci); } From patchwork Sat Aug 10 09:40:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 171014 Delivered-To: patch@linaro.org Received: by 2002:a92:d204:0:0:0:0:0 with SMTP id y4csp328431ily; Sat, 10 Aug 2019 02:41:23 -0700 (PDT) X-Google-Smtp-Source: APXvYqx14M+nl8LlfxnYLvSDY67ZXc8xQvFJ8dwz0tNdq09jKfVA0mGPelPmish/GtVja/na3mvd X-Received: by 2002:aa7:9799:: with SMTP id o25mr1328040pfp.74.1565430083560; Sat, 10 Aug 2019 02:41:23 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1565430083; cv=none; d=google.com; s=arc-20160816; b=f9KRMIBSSeGQbuVGhHsB6kQ6LYzcCB69XBugtyZ6zmIpdvgSosAWRLoufI8FPyZH91 7dNd31sF2m3BLR6h6YjdIcWaDR5YFAs/jIaseT9EoKWUI+Bkjqdm9MkN1P5hIJps37Q9 H81sfSIQsVtKS69pKULyqPOFV/KbJDXFwY8aGxFFnaNkZ03dCH8PMvYUxFFz9n9Gw1wK KsJGp4A9EqFkNqbsOY2fM25nLBVw8tAaK/KEUWQmyfmiTFBTjlv6Ol8zHtU5O+JWy1dR deIVcZQ0G9aODUBLjOIY3fVW07UyZvLTilOYqSxBu6JDnmmXG2LxU6rfROVu5JDF1or0 EEMA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature; bh=56ampqLUPOTSSPtRfg1LVltpdcwpm+t0SppiwG6MdWQ=; b=PRhzjfDtNuxFHMFlTkGsLExRsNYS+TZkNxdAaAk1U6dONqKJqtO3Aqp64hvTGUT3oX TN95NLrYPiomnVQLsIamWRn7WTTW/Sah42GKGnr6ytSBfjcjxyHlbj+dILgDiqxK+fA8 CLhZVfPi7m2h5ZxmCRVryO1BfdpwKsHGH5lFQ7eMiHSLzJ7nomKq5UfLPAREfm7zgSo2 HP0VayocmgF/3eafM0QgKa6znY0p9G7SPR+V2gDZdCX8E9+zcMg07guOyhUncDrp2AQ/ sAst9mxSDt4sSmAxjJ6KL2fotLMLnRz1B0FfNoYeyPAoVSP32pnHJRzR7j25yvXGK005 5u9w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=GtQ0dmnE; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP; Sat, 10 Aug 2019 02:41:23 -0700 (PDT)
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel , Herbert Xu , Eric Biggers , dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef , Milan Broz
Subject: [PATCH v9 3/7] md: dm-crypt: switch to ESSIV crypto API template
Date: Sat, 10 Aug 2019 12:40:49 +0300
Message-Id: <20190810094053.7423-4-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190810094053.7423-1-ard.biesheuvel@linaro.org>
References: <20190810094053.7423-1-ard.biesheuvel@linaro.org>
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: linux-crypto@vger.kernel.org

Replace the explicit ESSIV handling in the dm-crypt driver with calls into the crypto API, which now possesses the capability to perform this processing within the crypto subsystem.
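To make the resulting mapping concrete: a table using the "essiv" IV mode with a digest option (e.g. an aes-cbc-essiv:sha256 volume) no longer causes dm-crypt to allocate its own hash and ESSIV cipher transforms; the constructor simply folds the IV mode and its option into the algorithm name it requests from the crypto API. The fragment below is a hypothetical stand-alone distillation of that string handling (the real logic is in crypt_ctr_cipher_old()/crypt_ctr_cipher_new() in the diff that follows); the helper name is invented for illustration.

/* Hypothetical distillation of the cipher-name composition in this patch. */
#include <stdio.h>
#include <string.h>

#define CRYPTO_MAX_ALG_NAME 128		/* limit used by the kernel crypto API */

/* chainmode "cbc", cipher "aes", ivmode "essiv", ivopts "sha256"
 * compose to "essiv(cbc(aes),sha256)"; other IV modes keep plain "cbc(aes)".
 */
static int build_cipher_api_name(char *buf, size_t len, const char *chainmode,
				 const char *cipher, const char *ivmode,
				 const char *ivopts)
{
	int ret;

	if (ivmode && !strcmp(ivmode, "essiv")) {
		if (!ivopts)	/* digest algorithm missing for ESSIV mode */
			return -1;
		ret = snprintf(buf, len, "essiv(%s(%s),%s)",
			       chainmode, cipher, ivopts);
	} else {
		ret = snprintf(buf, len, "%s(%s)", chainmode, cipher);
	}
	return (ret < 0 || (size_t)ret >= len) ? -1 : 0;
}

int main(void)
{
	char buf[CRYPTO_MAX_ALG_NAME];

	if (!build_cipher_api_name(buf, sizeof(buf), "cbc", "aes",
				   "essiv", "sha256"))
		printf("%s\n", buf);	/* prints: essiv(cbc(aes),sha256) */
	return 0;
}

The generator callback that remains (crypt_iv_essiv_gen() in the diff below) then only has to write the plain little-endian sector number into the IV buffer, since the essiv() template performs the encryption step itself.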
Signed-off-by: Ard Biesheuvel --- drivers/md/Kconfig | 1 + drivers/md/dm-crypt.c | 194 ++++---------------- 2 files changed, 33 insertions(+), 162 deletions(-) -- 2.17.1 diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig index 3834332f4963..b727e8f15264 100644 --- a/drivers/md/Kconfig +++ b/drivers/md/Kconfig @@ -271,6 +271,7 @@ config DM_CRYPT depends on BLK_DEV_DM select CRYPTO select CRYPTO_CBC + select CRYPTO_ESSIV ---help--- This device-mapper target allows you to create a device that transparently encrypts the data on it. You'll need to activate diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index 48cd76c88d77..d3f2634f41a8 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -98,11 +98,6 @@ struct crypt_iv_operations { struct dm_crypt_request *dmreq); }; -struct iv_essiv_private { - struct crypto_shash *hash_tfm; - u8 *salt; -}; - struct iv_benbi_private { int shift; }; @@ -155,7 +150,6 @@ struct crypt_config { const struct crypt_iv_operations *iv_gen_ops; union { - struct iv_essiv_private essiv; struct iv_benbi_private benbi; struct iv_lmk_private lmk; struct iv_tcw_private tcw; @@ -165,8 +159,6 @@ struct crypt_config { unsigned short int sector_size; unsigned char sector_shift; - /* ESSIV: struct crypto_cipher *essiv_tfm */ - void *iv_private; union { struct crypto_skcipher **tfms; struct crypto_aead **tfms_aead; @@ -324,157 +316,15 @@ static int crypt_iv_plain64be_gen(struct crypt_config *cc, u8 *iv, return 0; } -/* Initialise ESSIV - compute salt but no local memory allocations */ -static int crypt_iv_essiv_init(struct crypt_config *cc) -{ - struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv; - SHASH_DESC_ON_STACK(desc, essiv->hash_tfm); - struct crypto_cipher *essiv_tfm; - int err; - - desc->tfm = essiv->hash_tfm; - - err = crypto_shash_digest(desc, cc->key, cc->key_size, essiv->salt); - shash_desc_zero(desc); - if (err) - return err; - - essiv_tfm = cc->iv_private; - - err = crypto_cipher_setkey(essiv_tfm, essiv->salt, - crypto_shash_digestsize(essiv->hash_tfm)); - if (err) - return err; - - return 0; -} - -/* Wipe salt and reset key derived from volume key */ -static int crypt_iv_essiv_wipe(struct crypt_config *cc) -{ - struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv; - unsigned salt_size = crypto_shash_digestsize(essiv->hash_tfm); - struct crypto_cipher *essiv_tfm; - int r, err = 0; - - memset(essiv->salt, 0, salt_size); - - essiv_tfm = cc->iv_private; - r = crypto_cipher_setkey(essiv_tfm, essiv->salt, salt_size); - if (r) - err = r; - - return err; -} - -/* Allocate the cipher for ESSIV */ -static struct crypto_cipher *alloc_essiv_cipher(struct crypt_config *cc, - struct dm_target *ti, - const u8 *salt, - unsigned int saltsize) -{ - struct crypto_cipher *essiv_tfm; - int err; - - /* Setup the essiv_tfm with the given salt */ - essiv_tfm = crypto_alloc_cipher(cc->cipher, 0, 0); - if (IS_ERR(essiv_tfm)) { - ti->error = "Error allocating crypto tfm for ESSIV"; - return essiv_tfm; - } - - if (crypto_cipher_blocksize(essiv_tfm) != cc->iv_size) { - ti->error = "Block size of ESSIV cipher does " - "not match IV size of block cipher"; - crypto_free_cipher(essiv_tfm); - return ERR_PTR(-EINVAL); - } - - err = crypto_cipher_setkey(essiv_tfm, salt, saltsize); - if (err) { - ti->error = "Failed to set key for ESSIV cipher"; - crypto_free_cipher(essiv_tfm); - return ERR_PTR(err); - } - - return essiv_tfm; -} - -static void crypt_iv_essiv_dtr(struct crypt_config *cc) -{ - struct crypto_cipher *essiv_tfm; - struct iv_essiv_private *essiv = 
&cc->iv_gen_private.essiv; - - crypto_free_shash(essiv->hash_tfm); - essiv->hash_tfm = NULL; - - kzfree(essiv->salt); - essiv->salt = NULL; - - essiv_tfm = cc->iv_private; - - if (essiv_tfm) - crypto_free_cipher(essiv_tfm); - - cc->iv_private = NULL; -} - -static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti, - const char *opts) -{ - struct crypto_cipher *essiv_tfm = NULL; - struct crypto_shash *hash_tfm = NULL; - u8 *salt = NULL; - int err; - - if (!opts) { - ti->error = "Digest algorithm missing for ESSIV mode"; - return -EINVAL; - } - - /* Allocate hash algorithm */ - hash_tfm = crypto_alloc_shash(opts, 0, 0); - if (IS_ERR(hash_tfm)) { - ti->error = "Error initializing ESSIV hash"; - err = PTR_ERR(hash_tfm); - goto bad; - } - - salt = kzalloc(crypto_shash_digestsize(hash_tfm), GFP_KERNEL); - if (!salt) { - ti->error = "Error kmallocing salt storage in ESSIV"; - err = -ENOMEM; - goto bad; - } - - cc->iv_gen_private.essiv.salt = salt; - cc->iv_gen_private.essiv.hash_tfm = hash_tfm; - - essiv_tfm = alloc_essiv_cipher(cc, ti, salt, - crypto_shash_digestsize(hash_tfm)); - if (IS_ERR(essiv_tfm)) { - crypt_iv_essiv_dtr(cc); - return PTR_ERR(essiv_tfm); - } - cc->iv_private = essiv_tfm; - - return 0; - -bad: - if (hash_tfm && !IS_ERR(hash_tfm)) - crypto_free_shash(hash_tfm); - kfree(salt); - return err; -} - static int crypt_iv_essiv_gen(struct crypt_config *cc, u8 *iv, struct dm_crypt_request *dmreq) { - struct crypto_cipher *essiv_tfm = cc->iv_private; - + /* + * ESSIV encryption of the IV is now handled by the crypto API, + * so just pass the plain sector number here. + */ memset(iv, 0, cc->iv_size); *(__le64 *)iv = cpu_to_le64(dmreq->iv_sector); - crypto_cipher_encrypt_one(essiv_tfm, iv, iv); return 0; } @@ -898,10 +748,6 @@ static const struct crypt_iv_operations crypt_iv_plain64be_ops = { }; static const struct crypt_iv_operations crypt_iv_essiv_ops = { - .ctr = crypt_iv_essiv_ctr, - .dtr = crypt_iv_essiv_dtr, - .init = crypt_iv_essiv_init, - .wipe = crypt_iv_essiv_wipe, .generator = crypt_iv_essiv_gen }; @@ -2464,7 +2310,7 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key char **ivmode, char **ivopts) { struct crypt_config *cc = ti->private; - char *tmp, *cipher_api; + char *tmp, *cipher_api, buf[CRYPTO_MAX_ALG_NAME]; int ret = -EINVAL; cc->tfms_count = 1; @@ -2493,6 +2339,20 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key if (*ivmode && !strcmp(*ivmode, "lmk")) cc->tfms_count = 64; + if (*ivmode && !strcmp(*ivmode, "essiv")) { + if (!*ivopts) { + ti->error = "Digest algorithm missing for ESSIV mode"; + return -EINVAL; + } + ret = snprintf(buf, CRYPTO_MAX_ALG_NAME, "essiv(%s,%s)", + cipher_api, *ivopts); + if (ret < 0 || ret >= CRYPTO_MAX_ALG_NAME) { + ti->error = "Cannot allocate cipher string"; + return -ENOMEM; + } + cipher_api = buf; + } + cc->key_parts = cc->tfms_count; /* Allocate cipher */ @@ -2579,9 +2439,19 @@ static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key if (!cipher_api) goto bad_mem; - ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME, - "%s(%s)", chainmode, cipher); - if (ret < 0) { + if (*ivmode && !strcmp(*ivmode, "essiv")) { + if (!*ivopts) { + ti->error = "Digest algorithm missing for ESSIV mode"; + kfree(cipher_api); + return -EINVAL; + } + ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME, + "essiv(%s(%s),%s)", chainmode, cipher, *ivopts); + } else { + ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME, + "%s(%s)", chainmode, cipher); + } + if (ret < 0 
|| ret >= CRYPTO_MAX_ALG_NAME) { kfree(cipher_api); goto bad_mem; } From patchwork Sat Aug 10 09:40:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 171016 Delivered-To: patch@linaro.org Received: by 2002:a92:d204:0:0:0:0:0 with SMTP id y4csp328502ily; Sat, 10 Aug 2019 02:41:29 -0700 (PDT) X-Google-Smtp-Source: APXvYqzS0ly98xCcpfviytaDh++hjAXQ2aX6IpLx/Ezey2BnhfWlCas5KbqYccnZVKzMk7ddnHrM X-Received: by 2002:a65:504c:: with SMTP id k12mr21627970pgo.252.1565430089572; Sat, 10 Aug 2019 02:41:29 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1565430089; cv=none; d=google.com; s=arc-20160816; b=mlvdjsFi/n1u5jgyYpWu4WTSlQNG+Q/TuwOy3Ee8sYVTJHTa8EL9Uiv5jDeJlRpb2h hoTK6i4jeMt69d9rgRjfeGtrIVVK2IKZmTwbfCL4hY/0/DC9VOdT43RGQEckcsBTqn4m yX6+7+F9MrxF6KPVgBWtWXv2VG8dU8LVNaqwU11zHfTdDM6bWvF3yFy+fXvyg0PaPQmp 4tSF2HkVGzmjdFOZwWfV09aZ68o7A1DWbg9yHYZLov7P4kqQFaYVzv4j7Mv95B4ZzD0e bbrZ2es+fOG0G6Nt//QCyiDbUXNvBnE46vHw9sAy+MsSkyruUWPildnJDuNQJLeS5rbF 7dBQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature; bh=3rQo0bpFIGskyQ3+XxqfNAvEoIX1CtYPoAeZeVPe5Ag=; b=p5iZhp+LUlp0ujqS433RsNhxBXQ3U/EKf6/x5gwNM1jIvhSnnQZovDyzsFjRXT5bkr JXqLbcbfCTBU63gwtyXBw86NOk3YzJGv3W6Z4SCctsYmEtUh8shtIzTYmeeUvVFzciSy hnHQyNwLtkKzqN7Z4jyBBVRwyU6oL17N5IWqdD25OuESjYDRy0gs41JpRkGjxriZ6Y9B zOWk64LNCLAuOO+gXw+pJnfVvEOzyRQFwhDJM57/bTY2frX3YxnB6Kvd8SOgFmBP60Ol SKxiOfGxaDKOTIz5IpmCkVk1IkPZ10J5rCYI1ecDZQirlLlch7Pr/xE1U5R5C53KS3ux Xl9g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=r0d5eyHS; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP; Sat, 10 Aug 2019 02:41:29 -0700 (PDT)
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel , Herbert Xu , Eric Biggers , dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef , Milan Broz
Subject: [PATCH v9 4/7] crypto: essiv - add tests for essiv in cbc(aes)+sha256 mode
Date: Sat, 10 Aug 2019 12:40:50 +0300
Message-Id: <20190810094053.7423-5-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190810094053.7423-1-ard.biesheuvel@linaro.org>
References: <20190810094053.7423-1-ard.biesheuvel@linaro.org>
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: linux-crypto@vger.kernel.org

Add a test vector for the ESSIV mode that is the most widely used, i.e., using cbc(aes) and sha256, in both skcipher and AEAD modes (the latter is used by tcrypt to encapsulate the authenc
template or h/w instantiations of the same) Signed-off-by: Ard Biesheuvel --- crypto/tcrypt.c | 9 + crypto/testmgr.c | 14 + crypto/testmgr.h | 497 ++++++++++++++++++++ 3 files changed, 520 insertions(+) -- 2.17.1 diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index c578ccd92c57..83ad0b1fab30 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -2327,6 +2327,15 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb) 0, speed_template_32); break; + case 220: + test_acipher_speed("essiv(cbc(aes),sha256)", + ENCRYPT, sec, NULL, 0, + speed_template_16_24_32); + test_acipher_speed("essiv(cbc(aes),sha256)", + DECRYPT, sec, NULL, 0, + speed_template_16_24_32); + break; + case 221: test_aead_speed("aegis128", ENCRYPT, sec, NULL, 0, 16, 8, speed_template_16); diff --git a/crypto/testmgr.c b/crypto/testmgr.c index d990eba723cd..c39e39e55dc2 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -4544,6 +4544,20 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .akcipher = __VECS(ecrdsa_tv_template) } + }, { + .alg = "essiv(authenc(hmac(sha256),cbc(aes)),sha256)", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = __VECS(essiv_hmac_sha256_aes_cbc_tv_temp) + } + }, { + .alg = "essiv(cbc(aes),sha256)", + .test = alg_test_skcipher, + .fips_allowed = 1, + .suite = { + .cipher = __VECS(essiv_aes_cbc_tv_template) + } }, { .alg = "gcm(aes)", .generic_driver = "gcm_base(ctr(aes-generic),ghash-generic)", diff --git a/crypto/testmgr.h b/crypto/testmgr.h index 154052d07818..ef7d21f39d4a 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -31070,4 +31070,501 @@ static const struct comp_testvec zstd_decomp_tv_template[] = { "functions.", }, }; + +/* based on aes_cbc_tv_template */ +static const struct cipher_testvec essiv_aes_cbc_tv_template[] = { + { + .key = "\x06\xa9\x21\x40\x36\xb8\xa1\x5b" + "\x51\x2e\x03\xd5\x34\x12\x00\x06", + .klen = 16, + .iv = "\x3d\xaf\xba\x42\x9d\x9e\xb4\x30" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "Single block msg", + .ctext = "\xfa\x59\xe7\x5f\x41\x56\x65\xc3" + "\x36\xca\x6b\x72\x10\x9f\x8c\xd4", + .len = 16, + }, { + .key = "\xc2\x86\x69\x6d\x88\x7c\x9a\xa0" + "\x61\x1b\xbb\x3e\x20\x25\xa4\x5a", + .klen = 16, + .iv = "\x56\x2e\x17\x99\x6d\x09\x3d\x28" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" + "\x10\x11\x12\x13\x14\x15\x16\x17" + "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f", + .ctext = "\xc8\x59\x9a\xfe\x79\xe6\x7b\x20" + "\x06\x7d\x55\x0a\x5e\xc7\xb5\xa7" + "\x0b\x9c\x80\xd2\x15\xa1\xb8\x6d" + "\xc6\xab\x7b\x65\xd9\xfd\x88\xeb", + .len = 32, + }, { + .key = "\x8e\x73\xb0\xf7\xda\x0e\x64\x52" + "\xc8\x10\xf3\x2b\x80\x90\x79\xe5" + "\x62\xf8\xea\xd2\x52\x2c\x6b\x7b", + .klen = 24, + .iv = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" + "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" + "\xae\x2d\x8a\x57\x1e\x03\xac\x9c" + "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51" + "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11" + "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef" + "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17" + "\xad\x2b\x41\x7b\xe6\x6c\x37\x10", + .ctext = "\x96\x6d\xa9\x7a\x42\xe6\x01\xc7" + "\x17\xfc\xa7\x41\xd3\x38\x0b\xe5" + "\x51\x48\xf7\x7e\x5e\x26\xa9\xfe" + "\x45\x72\x1c\xd9\xde\xab\xf3\x4d" + "\x39\x47\xc5\x4f\x97\x3a\x55\x63" + "\x80\x29\x64\x4c\x33\xe8\x21\x8a" + "\x6a\xef\x6b\x6a\x8f\x43\xc0\xcb" + "\xf0\xf3\x6e\x74\x54\x44\x92\x44", + .len = 64, + }, { + .key = "\x60\x3d\xeb\x10\x15\xca\x71\xbe" + 
"\x2b\x73\xae\xf0\x85\x7d\x77\x81" + "\x1f\x35\x2c\x07\x3b\x61\x08\xd7" + "\x2d\x98\x10\xa3\x09\x14\xdf\xf4", + .klen = 32, + .iv = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" + "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" + "\xae\x2d\x8a\x57\x1e\x03\xac\x9c" + "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51" + "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11" + "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef" + "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17" + "\xad\x2b\x41\x7b\xe6\x6c\x37\x10", + .ctext = "\x24\x52\xf1\x48\x74\xd0\xa7\x93" + "\x75\x9b\x63\x46\xc0\x1c\x1e\x17" + "\x4d\xdc\x5b\x3a\x27\x93\x2a\x63" + "\xf7\xf1\xc7\xb3\x54\x56\x5b\x50" + "\xa3\x31\xa5\x8b\xd6\xfd\xb6\x3c" + "\x8b\xf6\xf2\x45\x05\x0c\xc8\xbb" + "\x32\x0b\x26\x1c\xe9\x8b\x02\xc0" + "\xb2\x6f\x37\xa7\x5b\xa8\xa9\x42", + .len = 64, + }, { + .key = "\xC9\x83\xA6\xC9\xEC\x0F\x32\x55" + "\x0F\x32\x55\x78\x9B\xBE\x78\x9B" + "\xBE\xE1\x04\x27\xE1\x04\x27\x4A" + "\x6D\x90\x4A\x6D\x90\xB3\xD6\xF9", + .klen = 32, + .iv = "\xE7\x82\x1D\xB8\x53\x11\xAC\x47" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x50\xB9\x22\xAE\x17\x80\x0C\x75" + "\xDE\x47\xD3\x3C\xA5\x0E\x9A\x03" + "\x6C\xF8\x61\xCA\x33\xBF\x28\x91" + "\x1D\x86\xEF\x58\xE4\x4D\xB6\x1F" + "\xAB\x14\x7D\x09\x72\xDB\x44\xD0" + "\x39\xA2\x0B\x97\x00\x69\xF5\x5E" + "\xC7\x30\xBC\x25\x8E\x1A\x83\xEC" + "\x55\xE1\x4A\xB3\x1C\xA8\x11\x7A" + "\x06\x6F\xD8\x41\xCD\x36\x9F\x08" + "\x94\xFD\x66\xF2\x5B\xC4\x2D\xB9" + "\x22\x8B\x17\x80\xE9\x52\xDE\x47" + "\xB0\x19\xA5\x0E\x77\x03\x6C\xD5" + "\x3E\xCA\x33\x9C\x05\x91\xFA\x63" + "\xEF\x58\xC1\x2A\xB6\x1F\x88\x14" + "\x7D\xE6\x4F\xDB\x44\xAD\x16\xA2" + "\x0B\x74\x00\x69\xD2\x3B\xC7\x30" + "\x99\x02\x8E\xF7\x60\xEC\x55\xBE" + "\x27\xB3\x1C\x85\x11\x7A\xE3\x4C" + "\xD8\x41\xAA\x13\x9F\x08\x71\xFD" + "\x66\xCF\x38\xC4\x2D\x96\x22\x8B" + "\xF4\x5D\xE9\x52\xBB\x24\xB0\x19" + "\x82\x0E\x77\xE0\x49\xD5\x3E\xA7" + "\x10\x9C\x05\x6E\xFA\x63\xCC\x35" + "\xC1\x2A\x93\x1F\x88\xF1\x5A\xE6" + "\x4F\xB8\x21\xAD\x16\x7F\x0B\x74" + "\xDD\x46\xD2\x3B\xA4\x0D\x99\x02" + "\x6B\xF7\x60\xC9\x32\xBE\x27\x90" + "\x1C\x85\xEE\x57\xE3\x4C\xB5\x1E" + "\xAA\x13\x7C\x08\x71\xDA\x43\xCF" + "\x38\xA1\x0A\x96\xFF\x68\xF4\x5D" + "\xC6\x2F\xBB\x24\x8D\x19\x82\xEB" + "\x54\xE0\x49\xB2\x1B\xA7\x10\x79" + "\x05\x6E\xD7\x40\xCC\x35\x9E\x07" + "\x93\xFC\x65\xF1\x5A\xC3\x2C\xB8" + "\x21\x8A\x16\x7F\xE8\x51\xDD\x46" + "\xAF\x18\xA4\x0D\x76\x02\x6B\xD4" + "\x3D\xC9\x32\x9B\x04\x90\xF9\x62" + "\xEE\x57\xC0\x29\xB5\x1E\x87\x13" + "\x7C\xE5\x4E\xDA\x43\xAC\x15\xA1" + "\x0A\x73\xFF\x68\xD1\x3A\xC6\x2F" + "\x98\x01\x8D\xF6\x5F\xEB\x54\xBD" + "\x26\xB2\x1B\x84\x10\x79\xE2\x4B" + "\xD7\x40\xA9\x12\x9E\x07\x70\xFC" + "\x65\xCE\x37\xC3\x2C\x95\x21\x8A" + "\xF3\x5C\xE8\x51\xBA\x23\xAF\x18" + "\x81\x0D\x76\xDF\x48\xD4\x3D\xA6" + "\x0F\x9B\x04\x6D\xF9\x62\xCB\x34" + "\xC0\x29\x92\x1E\x87\xF0\x59\xE5" + "\x4E\xB7\x20\xAC\x15\x7E\x0A\x73" + "\xDC\x45\xD1\x3A\xA3\x0C\x98\x01" + "\x6A\xF6\x5F\xC8\x31\xBD\x26\x8F" + "\x1B\x84\xED\x56\xE2\x4B\xB4\x1D" + "\xA9\x12\x7B\x07\x70\xD9\x42\xCE" + "\x37\xA0\x09\x95\xFE\x67\xF3\x5C" + "\xC5\x2E\xBA\x23\x8C\x18\x81\xEA" + "\x53\xDF\x48\xB1\x1A\xA6\x0F\x78" + "\x04\x6D\xD6\x3F\xCB\x34\x9D\x06" + "\x92\xFB\x64\xF0\x59\xC2\x2B\xB7" + "\x20\x89\x15\x7E\xE7\x50\xDC\x45" + "\xAE\x17\xA3\x0C\x75\x01\x6A\xD3" + "\x3C\xC8\x31\x9A\x03\x8F\xF8\x61" + "\xED\x56\xBF\x28\xB4\x1D\x86\x12", + .ctext = "\x97\x7f\x69\x0f\x0f\x34\xa6\x33" + "\x66\x49\x7e\xd0\x4d\x1b\xc9\x64" + "\xf9\x61\x95\x98\x11\x00\x88\xf8" + "\x2e\x88\x01\x0f\x2b\xe1\xae\x3e" + 
"\xfe\xd6\x47\x30\x11\x68\x7d\x99" + "\xad\x69\x6a\xe8\x41\x5f\x1e\x16" + "\x00\x3a\x47\xdf\x8e\x7d\x23\x1c" + "\x19\x5b\x32\x76\x60\x03\x05\xc1" + "\xa0\xff\xcf\xcc\x74\x39\x46\x63" + "\xfe\x5f\xa6\x35\xa7\xb4\xc1\xf9" + "\x4b\x5e\x38\xcc\x8c\xc1\xa2\xcf" + "\x9a\xc3\xae\x55\x42\x46\x93\xd9" + "\xbd\x22\xd3\x8a\x19\x96\xc3\xb3" + "\x7d\x03\x18\xf9\x45\x09\x9c\xc8" + "\x90\xf3\x22\xb3\x25\x83\x9a\x75" + "\xbb\x04\x48\x97\x3a\x63\x08\x04" + "\xa0\x69\xf6\x52\xd4\x89\x93\x69" + "\xb4\x33\xa2\x16\x58\xec\x4b\x26" + "\x76\x54\x10\x0b\x6e\x53\x1e\xbc" + "\x16\x18\x42\xb1\xb1\xd3\x4b\xda" + "\x06\x9f\x8b\x77\xf7\xab\xd6\xed" + "\xa3\x1d\x90\xda\x49\x38\x20\xb8" + "\x6c\xee\xae\x3e\xae\x6c\x03\xb8" + "\x0b\xed\xc8\xaa\x0e\xc5\x1f\x90" + "\x60\xe2\xec\x1b\x76\xd0\xcf\xda" + "\x29\x1b\xb8\x5a\xbc\xf4\xba\x13" + "\x91\xa6\xcb\x83\x3f\xeb\xe9\x7b" + "\x03\xba\x40\x9e\xe6\x7a\xb2\x4a" + "\x73\x49\xfc\xed\xfb\x55\xa4\x24" + "\xc7\xa4\xd7\x4b\xf5\xf7\x16\x62" + "\x80\xd3\x19\x31\x52\x25\xa8\x69" + "\xda\x9a\x87\xf5\xf2\xee\x5d\x61" + "\xc1\x12\x72\x3e\x52\x26\x45\x3a" + "\xd8\x9d\x57\xfa\x14\xe2\x9b\x2f" + "\xd4\xaa\x5e\x31\xf4\x84\x89\xa4" + "\xe3\x0e\xb0\x58\x41\x75\x6a\xcb" + "\x30\x01\x98\x90\x15\x80\xf5\x27" + "\x92\x13\x81\xf0\x1c\x1e\xfc\xb1" + "\x33\xf7\x63\xb0\x67\xec\x2e\x5c" + "\x85\xe3\x5b\xd0\x43\x8a\xb8\x5f" + "\x44\x9f\xec\x19\xc9\x8f\xde\xdf" + "\x79\xef\xf8\xee\x14\x87\xb3\x34" + "\x76\x00\x3a\x9b\xc7\xed\xb1\x3d" + "\xef\x07\xb0\xe4\xfd\x68\x9e\xeb" + "\xc2\xb4\x1a\x85\x9a\x7d\x11\x88" + "\xf8\xab\x43\x55\x2b\x8a\x4f\x60" + "\x85\x9a\xf4\xba\xae\x48\x81\xeb" + "\x93\x07\x97\x9e\xde\x2a\xfc\x4e" + "\x31\xde\xaa\x44\xf7\x2a\xc3\xee" + "\x60\xa2\x98\x2c\x0a\x88\x50\xc5" + "\x6d\x89\xd3\xe4\xb6\xa7\xf4\xb0" + "\xcf\x0e\x89\xe3\x5e\x8f\x82\xf4" + "\x9d\xd1\xa9\x51\x50\x8a\xd2\x18" + "\x07\xb2\xaa\x3b\x7f\x58\x9b\xf4" + "\xb7\x24\x39\xd3\x66\x2f\x1e\xc0" + "\x11\xa3\x56\x56\x2a\x10\x73\xbc" + "\xe1\x23\xbf\xa9\x37\x07\x9c\xc3" + "\xb2\xc9\xa8\x1c\x5b\x5c\x58\xa4" + "\x77\x02\x26\xad\xc3\x40\x11\x53" + "\x93\x68\x72\xde\x05\x8b\x10\xbc" + "\xa6\xd4\x1b\xd9\x27\xd8\x16\x12" + "\x61\x2b\x31\x2a\x44\x87\x96\x58", + .len = 496, + }, +}; + +/* based on hmac_sha256_aes_cbc_tv_temp */ +static const struct aead_testvec essiv_hmac_sha256_aes_cbc_tv_temp[] = { + { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x10" /* enc key length */ + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x06\xa9\x21\x40\x36\xb8\xa1\x5b" + "\x51\x2e\x03\xd5\x34\x12\x00\x06", + .klen = 8 + 32 + 16, + .iv = "\xb3\x0c\x5a\x11\x41\xad\xc1\x04" + "\xbc\x1e\x7e\x35\xb0\x5d\x78\x29", + .assoc = "\x3d\xaf\xba\x42\x9d\x9e\xb4\x30" + "\xb4\x22\xda\x80\x2c\x9f\xac\x41", + .alen = 16, + .ptext = "Single block msg", + .plen = 16, + .ctext = "\xe3\x53\x77\x9c\x10\x79\xae\xb8" + "\x27\x08\x94\x2d\xbe\x77\x18\x1a" + "\xcc\xde\x2d\x6a\xae\xf1\x0b\xcc" + "\x38\x06\x38\x51\xb4\xb8\xf3\x5b" + "\x5c\x34\xa6\xa3\x6e\x0b\x05\xe5" + "\x6a\x6d\x44\xaa\x26\xa8\x44\xa5", + .clen = 16 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x10" /* enc key length */ + "\x20\x21\x22\x23\x24\x25\x26\x27" + "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f" + "\x30\x31\x32\x33\x34\x35\x36\x37" + 
"\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f" + "\xc2\x86\x69\x6d\x88\x7c\x9a\xa0" + "\x61\x1b\xbb\x3e\x20\x25\xa4\x5a", + .klen = 8 + 32 + 16, + .iv = "\x56\xe8\x14\xa5\x74\x18\x75\x13" + "\x2f\x79\xe7\xc8\x65\xe3\x48\x45", + .assoc = "\x56\x2e\x17\x99\x6d\x09\x3d\x28" + "\xdd\xb3\xba\x69\x5a\x2e\x6f\x58", + .alen = 16, + .ptext = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" + "\x10\x11\x12\x13\x14\x15\x16\x17" + "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f", + .plen = 32, + .ctext = "\xd2\x96\xcd\x94\xc2\xcc\xcf\x8a" + "\x3a\x86\x30\x28\xb5\xe1\xdc\x0a" + "\x75\x86\x60\x2d\x25\x3c\xff\xf9" + "\x1b\x82\x66\xbe\xa6\xd6\x1a\xb1" + "\xf5\x33\x53\xf3\x68\x85\x2a\x99" + "\x0e\x06\x58\x8f\xba\xf6\x06\xda" + "\x49\x69\x0d\x5b\xd4\x36\x06\x62" + "\x35\x5e\x54\x58\x53\x4d\xdf\xbf", + .clen = 32 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x10" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x6c\x3e\xa0\x47\x76\x30\xce\x21" + "\xa2\xce\x33\x4a\xa7\x46\xc2\xcd", + .klen = 8 + 32 + 16, + .iv = "\x1f\x6b\xfb\xd6\x6b\x72\x2f\xc9" + "\xb6\x9f\x8c\x10\xa8\x96\x15\x64", + .assoc = "\xc7\x82\xdc\x4c\x09\x8c\x66\xcb" + "\xd9\xcd\x27\xd8\x25\x68\x2c\x81", + .alen = 16, + .ptext = "This is a 48-byte message (exactly 3 AES blocks)", + .plen = 48, + .ctext = "\xd0\xa0\x2b\x38\x36\x45\x17\x53" + "\xd4\x93\x66\x5d\x33\xf0\xe8\x86" + "\x2d\xea\x54\xcd\xb2\x93\xab\xc7" + "\x50\x69\x39\x27\x67\x72\xf8\xd5" + "\x02\x1c\x19\x21\x6b\xad\x52\x5c" + "\x85\x79\x69\x5d\x83\xba\x26\x84" + "\x68\xb9\x3e\x90\x38\xa0\x88\x01" + "\xe7\xc6\xce\x10\x31\x2f\x9b\x1d" + "\x24\x78\xfb\xbe\x02\xe0\x4f\x40" + "\x10\xbd\xaa\xc6\xa7\x79\xe0\x1a", + .clen = 48 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x10" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x56\xe4\x7a\x38\xc5\x59\x89\x74" + "\xbc\x46\x90\x3d\xba\x29\x03\x49", + .klen = 8 + 32 + 16, + .iv = "\x13\xe5\xf2\xef\x61\x97\x59\x35" + "\x9b\x36\x84\x46\x4e\x63\xd1\x41", + .assoc = "\x8c\xe8\x2e\xef\xbe\xa0\xda\x3c" + "\x44\x69\x9e\xd7\xdb\x51\xb7\xd9", + .alen = 16, + .ptext = "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7" + "\xa8\xa9\xaa\xab\xac\xad\xae\xaf" + "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7" + "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf" + "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7" + "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf" + "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7" + "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf", + .plen = 64, + .ctext = "\xc3\x0e\x32\xff\xed\xc0\x77\x4e" + "\x6a\xff\x6a\xf0\x86\x9f\x71\xaa" + "\x0f\x3a\xf0\x7a\x9a\x31\xa9\xc6" + "\x84\xdb\x20\x7e\xb0\xef\x8e\x4e" + "\x35\x90\x7a\xa6\x32\xc3\xff\xdf" + "\x86\x8b\xb7\xb2\x9d\x3d\x46\xad" + "\x83\xce\x9f\x9a\x10\x2e\xe9\x9d" + "\x49\xa5\x3e\x87\xf4\xc3\xda\x55" + "\x7a\x1b\xd4\x3c\xdb\x17\x95\xe2" + "\xe0\x93\xec\xc9\x9f\xf7\xce\xd8" + "\x3f\x54\xe2\x49\x39\xe3\x71\x25" + "\x2b\x6c\xe9\x5d\xec\xec\x2b\x64", + .clen = 64 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + 
"\x00\x00\x00\x10" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x90\xd3\x82\xb4\x10\xee\xba\x7a" + "\xd9\x38\xc4\x6c\xec\x1a\x82\xbf", + .klen = 8 + 32 + 16, + .iv = "\xe4\x13\xa1\x15\xe9\x6b\xb8\x23" + "\x81\x7a\x94\x29\xab\xfd\xd2\x2c", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01" + "\xe9\x6e\x8c\x08\xab\x46\x57\x63" + "\xfd\x09\x8d\x45\xdd\x3f\xf8\x93", + .alen = 24, + .ptext = "\x08\x00\x0e\xbd\xa7\x0a\x00\x00" + "\x8e\x9c\x08\x3d\xb9\x5b\x07\x00" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" + "\x10\x11\x12\x13\x14\x15\x16\x17" + "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f" + "\x20\x21\x22\x23\x24\x25\x26\x27" + "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f" + "\x30\x31\x32\x33\x34\x35\x36\x37" + "\x01\x02\x03\x04\x05\x06\x07\x08" + "\x09\x0a\x0b\x0c\x0d\x0e\x0e\x01", + .plen = 80, + .ctext = "\xf6\x63\xc2\x5d\x32\x5c\x18\xc6" + "\xa9\x45\x3e\x19\x4e\x12\x08\x49" + "\xa4\x87\x0b\x66\xcc\x6b\x99\x65" + "\x33\x00\x13\xb4\x89\x8d\xc8\x56" + "\xa4\x69\x9e\x52\x3a\x55\xdb\x08" + "\x0b\x59\xec\x3a\x8e\x4b\x7e\x52" + "\x77\x5b\x07\xd1\xdb\x34\xed\x9c" + "\x53\x8a\xb5\x0c\x55\x1b\x87\x4a" + "\xa2\x69\xad\xd0\x47\xad\x2d\x59" + "\x13\xac\x19\xb7\xcf\xba\xd4\xa6" + "\xbb\xd4\x0f\xbe\xa3\x3b\x4c\xb8" + "\x3a\xd2\xe1\x03\x86\xa5\x59\xb7" + "\x73\xc3\x46\x20\x2c\xb1\xef\x68" + "\xbb\x8a\x32\x7e\x12\x8c\x69\xcf", + .clen = 80 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x18" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x8e\x73\xb0\xf7\xda\x0e\x64\x52" + "\xc8\x10\xf3\x2b\x80\x90\x79\xe5" + "\x62\xf8\xea\xd2\x52\x2c\x6b\x7b", + .klen = 8 + 32 + 24, + .iv = "\x49\xca\x41\xc9\x6b\xbf\x6c\x98" + "\x38\x2f\xa7\x3d\x4d\x80\x49\xb0", + .assoc = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f", + .alen = 16, + .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" + "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" + "\xae\x2d\x8a\x57\x1e\x03\xac\x9c" + "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51" + "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11" + "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef" + "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17" + "\xad\x2b\x41\x7b\xe6\x6c\x37\x10", + .plen = 64, + .ctext = "\x4f\x02\x1d\xb2\x43\xbc\x63\x3d" + "\x71\x78\x18\x3a\x9f\xa0\x71\xe8" + "\xb4\xd9\xad\xa9\xad\x7d\xed\xf4" + "\xe5\xe7\x38\x76\x3f\x69\x14\x5a" + "\x57\x1b\x24\x20\x12\xfb\x7a\xe0" + "\x7f\xa9\xba\xac\x3d\xf1\x02\xe0" + "\x08\xb0\xe2\x79\x88\x59\x88\x81" + "\xd9\x20\xa9\xe6\x4f\x56\x15\xcd" + "\x2f\xee\x5f\xdb\x66\xfe\x79\x09" + "\x61\x81\x31\xea\x5b\x3d\x8e\xfb" + "\xca\x71\x85\x93\xf7\x85\x55\x8b" + "\x7a\xe4\x94\xca\x8b\xba\x19\x33", + .clen = 64 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x20" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x60\x3d\xeb\x10\x15\xca\x71\xbe" + "\x2b\x73\xae\xf0\x85\x7d\x77\x81" + "\x1f\x35\x2c\x07\x3b\x61\x08\xd7" + "\x2d\x98\x10\xa3\x09\x14\xdf\xf4", + .klen = 8 + 32 + 32, + .iv = "\xdf\xab\xf2\x7c\xdc\xe0\x33\x4c" + "\xf9\x75\xaf\xf9\x2f\x60\x3a\x9b", + .assoc = 
"\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f", + .alen = 16, + .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" + "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" + "\xae\x2d\x8a\x57\x1e\x03\xac\x9c" + "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51" + "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11" + "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef" + "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17" + "\xad\x2b\x41\x7b\xe6\x6c\x37\x10", + .plen = 64, + .ctext = "\xf5\x8c\x4c\x04\xd6\xe5\xf1\xba" + "\x77\x9e\xab\xfb\x5f\x7b\xfb\xd6" + "\x9c\xfc\x4e\x96\x7e\xdb\x80\x8d" + "\x67\x9f\x77\x7b\xc6\x70\x2c\x7d" + "\x39\xf2\x33\x69\xa9\xd9\xba\xcf" + "\xa5\x30\xe2\x63\x04\x23\x14\x61" + "\xb2\xeb\x05\xe2\xc3\x9b\xe9\xfc" + "\xda\x6c\x19\x07\x8c\x6a\x9d\x1b" + "\x24\x29\xed\xc2\x31\x49\xdb\xb1" + "\x8f\x74\xbd\x17\x92\x03\xbe\x8f" + "\xf3\x61\xde\x1c\xe9\xdb\xcd\xd0" + "\xcc\xce\xe9\x85\x57\xcf\x6f\x5f", + .clen = 64 + 32, + }, +}; + #endif /* _CRYPTO_TESTMGR_H */ From patchwork Sat Aug 10 09:40:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 171015 Delivered-To: patch@linaro.org Received: by 2002:a92:d204:0:0:0:0:0 with SMTP id y4csp328489ily; Sat, 10 Aug 2019 02:41:28 -0700 (PDT) X-Google-Smtp-Source: APXvYqwAVAbe/pYM8Th275QvWxE4H2A397AzOeYuwqQO7ZK/D+i6JDzT6hAsSRTPDbqfMRT8WAkB X-Received: by 2002:a17:902:b48f:: with SMTP id y15mr23953341plr.268.1565430088620; Sat, 10 Aug 2019 02:41:28 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1565430088; cv=none; d=google.com; s=arc-20160816; b=G3xximQODK0d6NLZwswOlWWRkqZnTlCVSJElztSSgdNPXxkDkqzNHeDu21kwQl2nVV teobvMULujZRDi89VaF/lBDW0gna5ntS2hQnksxNBpVt2ndCjU2Bi1Kp9zikQQlls6Hv IEBFY51eQ5yFxNZad9Bhy79kcIMszbKpgLJc2oJ2Tn7TfQaEbsi03xAf5GHRZZbBofFP GUTzWRKr9tUFifa6AfNKRvfrvjdCH9MpmltPQbJYTg4TQxQtmLfYWC454VQJpZiK855n z3gDlaRuLnJrQcB0uRh3icH0C4oPVw1OeCRHrcMrtgOqnU3OzWiexNzlp+0PWDxAaO4z REVw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature; bh=huVoFUpMc5dzWwrmJm9Qwvn6MCUViGlDp2QJ3557fZk=; b=aCM4oxTjxhZ2I4uC0xFV4Rz9RHL8DqysVHGmHS3npJCC7YGXBzLQZO9AjWlzjr4KxD N7lwWnE2/fHJUSW4NPijj0JYAAi4KC8XaiMxVlXsosKjGIXl/Oz9JD4d43B7EgAwoAGv kwVQRuHfDVs1X5bhrY1aAhiTG+mNMGOW7VaH58AJ5RZejcz+7dWm5ahfJhBt8P3QJbr7 6Gx6+sLyx6O80ABwJjPdEI83Wcyamvj81y96yCh2y2fFnxBtcxS9hleimrGWLvr/i0cx aaPhxhnMl67KXr/huyRLyJYl/KyCxmgiC0geGKkVPRHk8LXoWZVBYNXDtx1i4Zc1Xxwc BqFw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=BsZM7Ak8; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id x20si17621959plm.61.2019.08.10.02.41.28; Sat, 10 Aug 2019 02:41:28 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: Ard Biesheuvel , Herbert Xu , Eric Biggers , dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef , Milan Broz Subject: [PATCH v9 5/7] crypto: arm64/aes-cts-cbc - factor out CBC en/decryption of a walk Date: Sat, 10 Aug 2019 12:40:51 +0300 Message-Id: <20190810094053.7423-6-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190810094053.7423-1-ard.biesheuvel@linaro.org> References: <20190810094053.7423-1-ard.biesheuvel@linaro.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The plain CBC driver and the CTS one share some code that iterates over a scatterwalk and invokes the CBC asm code to do the processing.
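In outline, that shared code is a skcipher walk loop: the walk presents the request as virtually mapped chunks, the NEON asm routine processes every whole AES block of the current chunk, and the sub-block tail is handed back to the walk. The following is only a rough sketch of that shape (do_aes_cbc_neon() is a placeholder for the real asm entry points aes_cbc_encrypt()/aes_cbc_decrypt(), and the sketch assumes <crypto/aes.h>, <crypto/internal/skcipher.h> and <asm/neon.h>); the helpers actually introduced by this patch are cbc_encrypt_walk() and cbc_decrypt_walk() in the diff below.

static int cbc_walk_sketch(struct skcipher_walk *walk,
			   struct crypto_aes_ctx *ctx, int rounds)
{
	int err = 0;
	unsigned int blocks;

	while ((blocks = walk->nbytes / AES_BLOCK_SIZE)) {
		kernel_neon_begin();
		/* process all full blocks of this chunk in one asm call */
		do_aes_cbc_neon(walk->dst.virt.addr, walk->src.virt.addr,
				ctx->key_enc, rounds, blocks, walk->iv);
		kernel_neon_end();
		/* hand the sub-block tail back to the walk */
		err = skcipher_walk_done(walk, walk->nbytes % AES_BLOCK_SIZE);
	}
	return err;
}

In the CTS paths the walk is then started and chained straight into the helper using the GNU C shorthand 'a ?: b' (err = skcipher_walk_virt(&walk, &rctx->subreq, false) ?: cbc_encrypt_walk(&rctx->subreq, &walk)), so the loop body only exists in one place.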
The upcoming ESSIV/CBC mode will clone that pattern for the third time, so let's factor it out first. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/aes-glue.c | 82 ++++++++++---------- 1 file changed, 40 insertions(+), 42 deletions(-) -- 2.17.1 diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index 55d6d4838708..23abf335f1ee 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -186,46 +186,64 @@ static int ecb_decrypt(struct skcipher_request *req) return err; } -static int cbc_encrypt(struct skcipher_request *req) +static int cbc_encrypt_walk(struct skcipher_request *req, + struct skcipher_walk *walk) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); - int err, rounds = 6 + ctx->key_length / 4; - struct skcipher_walk walk; + int err = 0, rounds = 6 + ctx->key_length / 4; unsigned int blocks; - err = skcipher_walk_virt(&walk, req, false); - - while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) { + while ((blocks = (walk->nbytes / AES_BLOCK_SIZE))) { kernel_neon_begin(); - aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr, - ctx->key_enc, rounds, blocks, walk.iv); + aes_cbc_encrypt(walk->dst.virt.addr, walk->src.virt.addr, + ctx->key_enc, rounds, blocks, walk->iv); kernel_neon_end(); - err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE); + err = skcipher_walk_done(walk, walk->nbytes % AES_BLOCK_SIZE); } return err; } -static int cbc_decrypt(struct skcipher_request *req) +static int cbc_encrypt(struct skcipher_request *req) { - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); - int err, rounds = 6 + ctx->key_length / 4; struct skcipher_walk walk; - unsigned int blocks; + int err; err = skcipher_walk_virt(&walk, req, false); + if (err) + return err; + return cbc_encrypt_walk(req, &walk); +} - while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) { +static int cbc_decrypt_walk(struct skcipher_request *req, + struct skcipher_walk *walk) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + int err = 0, rounds = 6 + ctx->key_length / 4; + unsigned int blocks; + + while ((blocks = (walk->nbytes / AES_BLOCK_SIZE))) { kernel_neon_begin(); - aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr, - ctx->key_dec, rounds, blocks, walk.iv); + aes_cbc_decrypt(walk->dst.virt.addr, walk->src.virt.addr, + ctx->key_dec, rounds, blocks, walk->iv); kernel_neon_end(); - err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE); + err = skcipher_walk_done(walk, walk->nbytes % AES_BLOCK_SIZE); } return err; } +static int cbc_decrypt(struct skcipher_request *req) +{ + struct skcipher_walk walk; + int err; + + err = skcipher_walk_virt(&walk, req, false); + if (err) + return err; + return cbc_decrypt_walk(req, &walk); +} + static int cts_cbc_init_tfm(struct crypto_skcipher *tfm) { crypto_skcipher_set_reqsize(tfm, sizeof(struct cts_cbc_req_ctx)); @@ -251,22 +269,12 @@ static int cts_cbc_encrypt(struct skcipher_request *req) } if (cbc_blocks > 0) { - unsigned int blocks; - skcipher_request_set_crypt(&rctx->subreq, req->src, req->dst, cbc_blocks * AES_BLOCK_SIZE, req->iv); - err = skcipher_walk_virt(&walk, &rctx->subreq, false); - - while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) { - kernel_neon_begin(); - aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr, - ctx->key_enc, rounds, blocks, walk.iv); - kernel_neon_end(); - err = 
skcipher_walk_done(&walk, - walk.nbytes % AES_BLOCK_SIZE); - } + err = skcipher_walk_virt(&walk, &rctx->subreq, false) ?: + cbc_encrypt_walk(&rctx->subreq, &walk); if (err) return err; @@ -316,22 +324,12 @@ static int cts_cbc_decrypt(struct skcipher_request *req) } if (cbc_blocks > 0) { - unsigned int blocks; - skcipher_request_set_crypt(&rctx->subreq, req->src, req->dst, cbc_blocks * AES_BLOCK_SIZE, req->iv); - err = skcipher_walk_virt(&walk, &rctx->subreq, false); - - while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) { - kernel_neon_begin(); - aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr, - ctx->key_dec, rounds, blocks, walk.iv); - kernel_neon_end(); - err = skcipher_walk_done(&walk, - walk.nbytes % AES_BLOCK_SIZE); - } + err = skcipher_walk_virt(&walk, &rctx->subreq, false) ?: + cbc_decrypt_walk(&rctx->subreq, &walk); if (err) return err; From patchwork Sat Aug 10 09:40:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 171017 Delivered-To: patch@linaro.org Received: by 2002:a92:d204:0:0:0:0:0 with SMTP id y4csp328509ily; Sat, 10 Aug 2019 02:41:30 -0700 (PDT) X-Google-Smtp-Source: APXvYqy9sfhG4otNbiyv5FHE9YxUrQxJ/rVLy/5nsVHFRUXLvxV1Kc6uXq4TJhgDOOivgzR01Sm5 X-Received: by 2002:a62:e403:: with SMTP id r3mr25118275pfh.37.1565430090085; Sat, 10 Aug 2019 02:41:30 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1565430090; cv=none; d=google.com; s=arc-20160816; b=tc/DOVdUeGdyKJvAXVpteKkw2Ec57O3KwJX8vZH7pG5VSjOZzYFaF5baCI0Iq10vpp D0Fd0kTDRmfwplQjbHHVaQgMM648qAI/xsP1PW39nCmVoHVcNzP2liAFAZD6NTJXazMu L0VrvJlcCD/UOC/+KNg1U/v8Dppt6XXTDDvoEBrQ4KvuqUvVM38P5Q06fW7tNgwVf0W/ S68+rXEjFZ3PicVA5AOttHQHQw0URGNZhg3jggk6/Sr70HZzCQbOkwk1sdvyf1kytUeH B3NVrfkt23pFEjuAcX+flPc/xFfZWL/nVn+/jgIQQGaRJ+5LyDkh+nMw++eJ3BZzU2mL 7PJw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature; bh=WtUW4kMtV0FuBm9YSr9KHz4FtfpiQZk/mVyKFBxk8RY=; b=umAPZXmLLvE75ULCNhmMF2KavHa3zD0a9davUICNJ2Tkpiip4hDa/pPiLkPa/1mxIp QrYteKmsVwQDQ8G0zNwWROK7mxBrNtY8lEy9qDQZddTKt/nA817w1GMerq4IqKwR6A5N uPHg6oswxIMqUcdya5bw1tm6VFfTvMek0ewCQpUvBt+nzhr9IaCzDUxzerBP6tYuwBWU d973FvwWFTCpwyD81IrnVw4Q0ryiu6YxHy4wbQbozavq4xzWdtclnX+306goOy26y8VC be2P8m4ZkSmo1tqJf+ar3C32aYf5fNz2x6157sl9r5mbXc+7g6ngugbbBk81NTmLT0pK CHJQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=AyrhBJu8; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id x20si17621959plm.61.2019.08.10.02.41.29; Sat, 10 Aug 2019 02:41:30 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: Ard Biesheuvel , Herbert Xu , Eric Biggers , dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef , Milan Broz Subject: [PATCH v9 6/7] crypto: arm64/aes - implement accelerated ESSIV/CBC mode Date: Sat, 10 Aug 2019 12:40:52 +0300 Message-Id: <20190810094053.7423-7-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190810094053.7423-1-ard.biesheuvel@linaro.org> References: <20190810094053.7423-1-ard.biesheuvel@linaro.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Add an accelerated version of the 'essiv(cbc(aes),sha256)' skcipher, which is used by fscrypt or dm-crypt on systems where CBC mode is significantly more performant than XTS mode (e.g., when using a
h/w accelerator which supports the former but not the latter) This avoids a separate call into the AES cipher for every invocation. Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/aes-glue.c | 123 ++++++++++++++++++++ arch/arm64/crypto/aes-modes.S | 28 +++++ 2 files changed, 151 insertions(+) -- 2.17.1 diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index 23abf335f1ee..45fca7f8cd75 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -30,6 +31,8 @@ #define aes_cbc_decrypt ce_aes_cbc_decrypt #define aes_cbc_cts_encrypt ce_aes_cbc_cts_encrypt #define aes_cbc_cts_decrypt ce_aes_cbc_cts_decrypt +#define aes_essiv_cbc_encrypt ce_aes_essiv_cbc_encrypt +#define aes_essiv_cbc_decrypt ce_aes_essiv_cbc_decrypt #define aes_ctr_encrypt ce_aes_ctr_encrypt #define aes_xts_encrypt ce_aes_xts_encrypt #define aes_xts_decrypt ce_aes_xts_decrypt @@ -44,6 +47,8 @@ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions"); #define aes_cbc_decrypt neon_aes_cbc_decrypt #define aes_cbc_cts_encrypt neon_aes_cbc_cts_encrypt #define aes_cbc_cts_decrypt neon_aes_cbc_cts_decrypt +#define aes_essiv_cbc_encrypt neon_aes_essiv_cbc_encrypt +#define aes_essiv_cbc_decrypt neon_aes_essiv_cbc_decrypt #define aes_ctr_encrypt neon_aes_ctr_encrypt #define aes_xts_encrypt neon_aes_xts_encrypt #define aes_xts_decrypt neon_aes_xts_decrypt @@ -87,6 +92,13 @@ asmlinkage void aes_xts_decrypt(u8 out[], u8 const in[], u32 const rk1[], int rounds, int blocks, u32 const rk2[], u8 iv[], int first); +asmlinkage void aes_essiv_cbc_encrypt(u8 out[], u8 const in[], u32 const rk1[], + int rounds, int blocks, u8 iv[], + u32 const rk2[]); +asmlinkage void aes_essiv_cbc_decrypt(u8 out[], u8 const in[], u32 const rk1[], + int rounds, int blocks, u8 iv[], + u32 const rk2[]); + asmlinkage void aes_mac_update(u8 const in[], u32 const rk[], int rounds, int blocks, u8 dg[], int enc_before, int enc_after); @@ -102,6 +114,12 @@ struct crypto_aes_xts_ctx { struct crypto_aes_ctx __aligned(8) key2; }; +struct crypto_aes_essiv_cbc_ctx { + struct crypto_aes_ctx key1; + struct crypto_aes_ctx __aligned(8) key2; + struct crypto_shash *hash; +}; + struct mac_tfm_ctx { struct crypto_aes_ctx key; u8 __aligned(8) consts[]; @@ -146,6 +164,31 @@ static int xts_set_key(struct crypto_skcipher *tfm, const u8 *in_key, return -EINVAL; } +static int essiv_cbc_set_key(struct crypto_skcipher *tfm, const u8 *in_key, + unsigned int key_len) +{ + struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm); + SHASH_DESC_ON_STACK(desc, ctx->hash); + u8 digest[SHA256_DIGEST_SIZE]; + int ret; + + ret = aes_expandkey(&ctx->key1, in_key, key_len); + if (ret) + goto out; + + desc->tfm = ctx->hash; + crypto_shash_digest(desc, in_key, key_len, digest); + + ret = aes_expandkey(&ctx->key2, digest, sizeof(digest)); + if (ret) + goto out; + + return 0; +out: + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; +} + static int ecb_encrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); @@ -360,6 +403,68 @@ static int cts_cbc_decrypt(struct skcipher_request *req) return skcipher_walk_done(&walk, 0); } +static int essiv_cbc_init_tfm(struct crypto_skcipher *tfm) +{ + struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm); + + ctx->hash = crypto_alloc_shash("sha256", 0, 0); + if (IS_ERR(ctx->hash)) + return PTR_ERR(ctx->hash); + + return 0; +} + +static void 
essiv_cbc_exit_tfm(struct crypto_skcipher *tfm) +{ + struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm); + + crypto_free_shash(ctx->hash); +} + +static int essiv_cbc_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm); + int err, rounds = 6 + ctx->key1.key_length / 4; + struct skcipher_walk walk; + unsigned int blocks; + + err = skcipher_walk_virt(&walk, req, false); + + blocks = walk.nbytes / AES_BLOCK_SIZE; + if (blocks) { + kernel_neon_begin(); + aes_essiv_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr, + ctx->key1.key_enc, rounds, blocks, + req->iv, ctx->key2.key_enc); + kernel_neon_end(); + err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE); + } + return err ?: cbc_encrypt_walk(req, &walk); +} + +static int essiv_cbc_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm); + int err, rounds = 6 + ctx->key1.key_length / 4; + struct skcipher_walk walk; + unsigned int blocks; + + err = skcipher_walk_virt(&walk, req, false); + + blocks = walk.nbytes / AES_BLOCK_SIZE; + if (blocks) { + kernel_neon_begin(); + aes_essiv_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr, + ctx->key1.key_dec, rounds, blocks, + req->iv, ctx->key2.key_enc); + kernel_neon_end(); + err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE); + } + return err ?: cbc_decrypt_walk(req, &walk); +} + static int ctr_encrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); @@ -515,6 +620,24 @@ static struct skcipher_alg aes_algs[] = { { .encrypt = cts_cbc_encrypt, .decrypt = cts_cbc_decrypt, .init = cts_cbc_init_tfm, +}, { + .base = { + .cra_name = "__essiv(cbc(aes),sha256)", + .cra_driver_name = "__essiv-cbc-aes-sha256-" MODE, + .cra_priority = PRIO + 1, + .cra_flags = CRYPTO_ALG_INTERNAL, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct crypto_aes_essiv_cbc_ctx), + .cra_module = THIS_MODULE, + }, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .setkey = essiv_cbc_set_key, + .encrypt = essiv_cbc_encrypt, + .decrypt = essiv_cbc_decrypt, + .init = essiv_cbc_init_tfm, + .exit = essiv_cbc_exit_tfm, }, { .base = { .cra_name = "__ctr(aes)", diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-modes.S index 324039b72094..2879f030a749 100644 --- a/arch/arm64/crypto/aes-modes.S +++ b/arch/arm64/crypto/aes-modes.S @@ -118,8 +118,23 @@ AES_ENDPROC(aes_ecb_decrypt) * int blocks, u8 iv[]) * aes_cbc_decrypt(u8 out[], u8 const in[], u8 const rk[], int rounds, * int blocks, u8 iv[]) + * aes_essiv_cbc_encrypt(u8 out[], u8 const in[], u32 const rk1[], + * int rounds, int blocks, u8 iv[], + * u32 const rk2[]); + * aes_essiv_cbc_decrypt(u8 out[], u8 const in[], u32 const rk1[], + * int rounds, int blocks, u8 iv[], + * u32 const rk2[]); */ +AES_ENTRY(aes_essiv_cbc_encrypt) + ld1 {v4.16b}, [x5] /* get iv */ + + mov w8, #14 /* AES-256: 14 rounds */ + enc_prepare w8, x6, x7 + encrypt_block v4, w8, x6, x7, w9 + enc_switch_key w3, x2, x6 + b .Lcbcencloop4x + AES_ENTRY(aes_cbc_encrypt) ld1 {v4.16b}, [x5] /* get iv */ enc_prepare w3, x2, x6 @@ -153,13 +168,25 @@ AES_ENTRY(aes_cbc_encrypt) st1 {v4.16b}, [x5] /* return iv */ ret AES_ENDPROC(aes_cbc_encrypt) +AES_ENDPROC(aes_essiv_cbc_encrypt) + +AES_ENTRY(aes_essiv_cbc_decrypt) + stp x29, x30, [sp, #-16]! 
+ mov x29, sp + + ld1 {cbciv.16b}, [x5] /* get iv */ + mov w8, #14 /* AES-256: 14 rounds */ + enc_prepare w8, x6, x7 + encrypt_block cbciv, w8, x6, x7, w9 + b .Lessivcbcdecstart AES_ENTRY(aes_cbc_decrypt) stp x29, x30, [sp, #-16]! mov x29, sp ld1 {cbciv.16b}, [x5] /* get iv */ +.Lessivcbcdecstart: dec_prepare w3, x2, x6 .LcbcdecloopNx: @@ -212,6 +239,7 @@ ST5( st1 {v4.16b}, [x0], #16 ) ldp x29, x30, [sp], #16 ret AES_ENDPROC(aes_cbc_decrypt) +AES_ENDPROC(aes_essiv_cbc_decrypt) /* From patchwork Sat Aug 10 09:40:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 171018 Delivered-To: patch@linaro.org Received: by 2002:a92:d204:0:0:0:0:0 with SMTP id y4csp328531ily; Sat, 10 Aug 2019 02:41:31 -0700 (PDT) X-Google-Smtp-Source: APXvYqzeKDVnjH2VZ3jZjVHLWk6RQAvqMSuueGQGr2DSni+HCK4Cr5elB5A2QFIDtA8FZem0Bu8Q X-Received: by 2002:a63:5d54:: with SMTP id o20mr21546563pgm.413.1565430090980; Sat, 10 Aug 2019 02:41:30 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1565430090; cv=none; d=google.com; s=arc-20160816; b=uIOWmwN3N4s3SsyINQv5liV7KRRtLD9ljtpvVlRenGhyRxmL/aCXnNJ6ZqbcCVflLs ifC4kJ9qeQuBZWtK7C6NWxRhX5UkyqvFiw2Ay8pDEPKGd/1o8JJNR+SZPJw6EcWTAqBr rDKrIFTRfd3QuiygEAIVqczXbN3US40yTmppwsvnceUYXca06mm0As1kZs+VWbhrtjn9 iuAQg7xJLZztrECl/+/KNpnOtHi7XT8YY4v1bLQAEI2PJcG6rhSf1I9QiGzMKmsPNjKJ Ksyj+sMl+ZCQcUFJgSOgR+PAgLgBU4f//aNOsc5e/Qezr3XNqwn+BOWlNktnfenJIMTC P9Fw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature; bh=pDwlcEgRHMP1A+eWN1ls3VEq1WPb0aAZmMnRS26btp0=; b=sLYfRrH1AIQt939MMf1le8jXHTBaDpwD5jHXzgDk6SnclOpVqDNP9rhkIl4kBvlKcx H/5acBzi5sdx8rTEZHerVyNYZu5BWeQMWHSML+yn1aSwQxaaRzflh6/c9z0RVi2fNxXm K7seTiq+Tf7xMkqUc5MwAKvtII+S47n1Jgs9+jU0Yfe5nVmzNJYvbUfao3q/75C54PqQ tX+kScB6LVhBur3KzP/kFl1TzWZn8bjMfId4dIp1nsG3TTCL7o95TcAVDfpz0Vke3pnI 8153LzQF1HPWQ4XsGXC+BvKVlCT3lD2bMHKQ6zmq6Qwq0M4qnkq5FEURvRU0TtAfYpw2 aBjw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=iy+v26NR; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id x20si17621959plm.61.2019.08.10.02.41.30; Sat, 10 Aug 2019 02:41:30 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: Ard Biesheuvel , Herbert Xu , Eric Biggers , dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef , Milan Broz Subject: [PATCH v9 7/7] md: dm-crypt: omit parsing of the encapsulated cipher Date: Sat, 10 Aug 2019 12:40:53 +0300 Message-Id: <20190810094053.7423-8-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190810094053.7423-1-ard.biesheuvel@linaro.org> References: <20190810094053.7423-1-ard.biesheuvel@linaro.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Only the ESSIV IV generation mode used to use cc->cipher so it could instantiate the bare cipher used to encrypt the IV.
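For context, that bare-cipher usage worked roughly as follows: at setup time the legacy ESSIV code keyed a standalone cipher with a hash of the bulk key, and per request it encrypted the little-endian sector number with that cipher to obtain the IV. The sketch below is illustrative only; the symbol names are not the actual dm-crypt ones, it assumes <linux/crypto.h> and <linux/string.h>, and essiv_tfm is assumed to have been keyed with the digest of the bulk key already.

static void essiv_gen_iv_sketch(struct crypto_cipher *essiv_tfm,
				u8 *iv, unsigned int iv_size, u64 sector)
{
	/* seed the IV with the 64-bit sector number, zero-padded */
	memset(iv, 0, iv_size);
	*(__le64 *)iv = cpu_to_le64(sector);
	/* IV = E_{H(K)}(sector), computed with the bare cipher handle */
	crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
}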
However, this is now taken care of by the ESSIV template, and so no users of cc->cipher remain. So remove it altogether. Signed-off-by: Ard Biesheuvel --- drivers/md/dm-crypt.c | 58 -------------------- 1 file changed, 58 deletions(-) -- 2.17.1 diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index d3f2634f41a8..e5ad3e596639 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -143,7 +143,6 @@ struct crypt_config { struct task_struct *write_thread; struct rb_root write_tree; - char *cipher; char *cipher_string; char *cipher_auth; char *key_string; @@ -2140,7 +2139,6 @@ static void crypt_dtr(struct dm_target *ti) if (cc->dev) dm_put_device(ti, cc->dev); - kzfree(cc->cipher); kzfree(cc->cipher_string); kzfree(cc->key_string); kzfree(cc->cipher_auth); @@ -2221,52 +2219,6 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode) return 0; } -/* - * Workaround to parse cipher algorithm from crypto API spec. - * The cc->cipher is currently used only in ESSIV. - * This should be probably done by crypto-api calls (once available...) - */ -static int crypt_ctr_blkdev_cipher(struct crypt_config *cc) -{ - const char *alg_name = NULL; - char *start, *end; - - if (crypt_integrity_aead(cc)) { - alg_name = crypto_tfm_alg_name(crypto_aead_tfm(any_tfm_aead(cc))); - if (!alg_name) - return -EINVAL; - if (crypt_integrity_hmac(cc)) { - alg_name = strchr(alg_name, ','); - if (!alg_name) - return -EINVAL; - } - alg_name++; - } else { - alg_name = crypto_tfm_alg_name(crypto_skcipher_tfm(any_tfm(cc))); - if (!alg_name) - return -EINVAL; - } - - start = strchr(alg_name, '('); - end = strchr(alg_name, ')'); - - if (!start && !end) { - cc->cipher = kstrdup(alg_name, GFP_KERNEL); - return cc->cipher ? 0 : -ENOMEM; - } - - if (!start || !end || ++start >= end) - return -EINVAL; - - cc->cipher = kzalloc(end - start + 1, GFP_KERNEL); - if (!cc->cipher) - return -ENOMEM; - - strncpy(cc->cipher, start, end - start); - - return 0; -} - /* * Workaround to parse HMAC algorithm from AEAD crypto API spec. * The HMAC is needed to calculate tag size (HMAC digest size). @@ -2373,12 +2325,6 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key } else cc->iv_size = crypto_skcipher_ivsize(any_tfm(cc)); - ret = crypt_ctr_blkdev_cipher(cc); - if (ret < 0) { - ti->error = "Cannot allocate cipher string"; - return -ENOMEM; - } - return 0; } @@ -2413,10 +2359,6 @@ static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key } cc->key_parts = cc->tfms_count; - cc->cipher = kstrdup(cipher, GFP_KERNEL); - if (!cc->cipher) - goto bad_mem; - chainmode = strsep(&tmp, "-"); *ivmode = strsep(&tmp, ":"); *ivopts = tmp;
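With the whole series applied, a block-layer user obtains the complete ESSIV construction by name from the crypto API instead of parsing cipher strings and deriving the second key by hand. A minimal sketch of that usage follows; essiv_cbc_alloc() is a hypothetical helper written for illustration, not an interface added by this series.

#include <linux/err.h>
#include <crypto/skcipher.h>

/*
 * Allocate the encapsulated ESSIV transform and set the bulk key. The
 * template derives the ESSIV key as sha256(key) internally and encrypts
 * the caller-supplied sector IV with it before running cbc(aes), so no
 * separate bare cipher handle is needed any more.
 */
static struct crypto_skcipher *essiv_cbc_alloc(const u8 *key,
					       unsigned int keylen)
{
	struct crypto_skcipher *tfm;
	int err;

	tfm = crypto_alloc_skcipher("essiv(cbc(aes),sha256)", 0, 0);
	if (IS_ERR(tfm))
		return tfm;

	err = crypto_skcipher_setkey(tfm, key, keylen);
	if (err) {
		crypto_free_skcipher(tfm);
		return ERR_PTR(err);
	}
	return tfm;
}

This is the same "essiv(cbc(aes),sha256)" name that the test vectors in patch 4 exercise and that the arm64 driver in patch 6 accelerates.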