From patchwork Tue Jun 18 21:27:46 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 167198
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel, Herbert Xu, Eric Biggers, dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef, Milan Broz
Subject: [PATCH v2 1/4] crypto: essiv - create wrapper template for ESSIV generation
Date: Tue, 18 Jun 2019 23:27:46 +0200
Message-Id: <20190618212749.8995-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20190618212749.8995-1-ard.biesheuvel@linaro.org>

Implement a template that wraps a (skcipher,cipher,shash) or (aead,cipher,shash) tuple so that we can consolidate the ESSIV handling in fscrypt and dm-crypt and move it into the crypto API. This results in better test coverage, and allows future changes to make the bare cipher interface internal to the crypto subsystem, increasing the robustness of the API against misuse.

Note that the AEAD handling in particular is complex, and is tightly coupled to the way dm-crypt combines authenc()-based AEAD with the ESSIV handling.
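For reference, the construction this template implements can be sketched in a few lines. This is an illustrative userspace sketch, not kernel code: the salt derivation mirrors essiv_skcipher_setkey() (the inner cipher key is the hash of the outer key), and the IV widening mirrors essiv_skcipher_prepare_subreq() (the 64-bit little-endian sector number is zero-padded to the ESSIV cipher's block size before being encrypted in place with the salt key). The final crypto_cipher_encrypt_one() step is elided here because the Python standard library has no AES primitive.

```python
import hashlib
import struct

ESSIV_IV_SIZE = 8        # sizeof(u64): the outer algo's IV is a sector number
MAX_INNER_IV_SIZE = 16   # AES block size: width of the IV fed to cbc(aes)

def essiv_salt(key: bytes) -> bytes:
    """Key for the inner ESSIV cipher: the hash of the outer key
    (sha256 here, matching the essiv(cbc(aes),aes,sha256) instantiation)."""
    return hashlib.sha256(key).digest()

def widen_sector_iv(sector: int) -> bytes:
    """Zero-pad the little-endian 64-bit sector number to the ESSIV
    cipher's block size; the template then encrypts this buffer in place
    with the salt-keyed cipher to produce the actual CBC IV."""
    return struct.pack("<Q", sector).ljust(MAX_INNER_IV_SIZE, b"\x00")
```

Because the salt (and hence the IV sequence) is a deterministic function of the key, two transforms keyed identically derive identical per-sector IVs, which is what lets this logic move out of dm-crypt and fscrypt without changing the on-disk format.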
Signed-off-by: Ard Biesheuvel --- crypto/Kconfig | 4 + crypto/Makefile | 1 + crypto/essiv.c | 624 ++++++++++++++++++++ 3 files changed, 629 insertions(+) -- 2.17.1 diff --git a/crypto/Kconfig b/crypto/Kconfig index 3d056e7da65f..1aa47087c1a2 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1917,6 +1917,10 @@ config CRYPTO_STATS config CRYPTO_HASH_INFO bool +config CRYPTO_ESSIV + tristate + select CRYPTO_AUTHENC + source "drivers/crypto/Kconfig" source "crypto/asymmetric_keys/Kconfig" source "certs/Kconfig" diff --git a/crypto/Makefile b/crypto/Makefile index 266a4cdbb9e2..ad1d99ba6d56 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -148,6 +148,7 @@ obj-$(CONFIG_CRYPTO_USER_API_AEAD) += algif_aead.o obj-$(CONFIG_CRYPTO_ZSTD) += zstd.o obj-$(CONFIG_CRYPTO_OFB) += ofb.o obj-$(CONFIG_CRYPTO_ECC) += ecc.o +obj-$(CONFIG_CRYPTO_ESSIV) += essiv.o ecdh_generic-y += ecdh.o ecdh_generic-y += ecdh_helper.o diff --git a/crypto/essiv.c b/crypto/essiv.c new file mode 100644 index 000000000000..029a65afb4d7 --- /dev/null +++ b/crypto/essiv.c @@ -0,0 +1,624 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * ESSIV skcipher template for block encryption + * + * Copyright (c) 2019 Linaro, Ltd. 
+ * + * Heavily based on: + * adiantum length-preserving encryption mode + * + * Copyright 2018 Google LLC + */ + +#include +#include +#include +#include +#include +#include + +#include "internal.h" + +#define ESSIV_IV_SIZE sizeof(u64) // IV size of the outer algo +#define MAX_INNER_IV_SIZE 16 // max IV size of inner algo + +struct essiv_instance_ctx { + union { + struct crypto_skcipher_spawn blockcipher_spawn; + struct crypto_aead_spawn aead_spawn; + } u; + struct crypto_spawn essiv_cipher_spawn; + struct crypto_shash_spawn hash_spawn; +}; + +struct essiv_tfm_ctx { + union { + struct crypto_skcipher *blockcipher; + struct crypto_aead *aead; + } u; + struct crypto_cipher *essiv_cipher; + struct crypto_shash *hash; +}; + +struct essiv_skcipher_request_ctx { + u8 iv[MAX_INNER_IV_SIZE]; + struct skcipher_request blockcipher_req; +}; + +struct essiv_aead_request_ctx { + u8 iv[MAX_INNER_IV_SIZE]; + struct scatterlist src[4], dst[4]; + struct aead_request aead_req; +}; + +static int essiv_skcipher_setkey(struct crypto_skcipher *tfm, + const u8 *key, unsigned int keylen) +{ + u32 flags = crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_REQ_MASK; + struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + SHASH_DESC_ON_STACK(desc, tctx->hash); + unsigned int saltsize; + u8 *salt; + int err; + + crypto_skcipher_clear_flags(tctx->u.blockcipher, CRYPTO_TFM_REQ_MASK); + crypto_skcipher_set_flags(tctx->u.blockcipher, flags); + err = crypto_skcipher_setkey(tctx->u.blockcipher, key, keylen); + crypto_skcipher_set_flags(tfm, + crypto_skcipher_get_flags(tctx->u.blockcipher) & + CRYPTO_TFM_RES_MASK); + if (err) + return err; + + saltsize = crypto_shash_digestsize(tctx->hash); + salt = kmalloc(saltsize, GFP_KERNEL); + if (!salt) + return -ENOMEM; + + desc->tfm = tctx->hash; + crypto_shash_digest(desc, key, keylen, salt); + + crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK); + crypto_cipher_set_flags(tctx->essiv_cipher, flags & CRYPTO_TFM_REQ_MASK); + err = 
crypto_cipher_setkey(tctx->essiv_cipher, salt, saltsize); + flags = crypto_cipher_get_flags(tctx->essiv_cipher) & CRYPTO_TFM_RES_MASK; + crypto_skcipher_set_flags(tfm, flags); + + kzfree(salt); + return err; +} + +static int essiv_aead_setkey(struct crypto_aead *tfm, const u8 *key, + unsigned int keylen) +{ + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + SHASH_DESC_ON_STACK(desc, tctx->hash); + struct crypto_authenc_keys keys; + unsigned int saltsize; + u8 *salt; + int err; + + crypto_aead_clear_flags(tctx->u.aead, CRYPTO_TFM_REQ_MASK); + crypto_aead_set_flags(tctx->u.aead, crypto_aead_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + err = crypto_aead_setkey(tctx->u.aead, key, keylen); + crypto_aead_set_flags(tfm, crypto_aead_get_flags(tctx->u.aead) & + CRYPTO_TFM_RES_MASK); + if (err) + return err; + + if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) { + crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); + return -EINVAL; + } + + saltsize = crypto_shash_digestsize(tctx->hash); + salt = kmalloc(saltsize, GFP_KERNEL); + if (!salt) + return -ENOMEM; + + desc->tfm = tctx->hash; + crypto_shash_init(desc); + crypto_shash_update(desc, keys.enckey, keys.enckeylen); + crypto_shash_finup(desc, keys.authkey, keys.authkeylen, salt); + + crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK); + crypto_cipher_set_flags(tctx->essiv_cipher, crypto_aead_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + err = crypto_cipher_setkey(tctx->essiv_cipher, salt, saltsize); + crypto_aead_set_flags(tfm, crypto_cipher_get_flags(tctx->essiv_cipher) & + CRYPTO_TFM_RES_MASK); + + kzfree(salt); + return err; +} + +static int essiv_aead_setauthsize(struct crypto_aead *tfm, + unsigned int authsize) +{ + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + + return crypto_aead_setauthsize(tctx->u.aead, authsize); +} + +static void essiv_skcipher_done(struct crypto_async_request *areq, int err) +{ + struct skcipher_request *req = areq->data; + + skcipher_request_complete(req, 
err); +} + +static void essiv_aead_done(struct crypto_async_request *areq, int err) +{ + struct aead_request *req = areq->data; + + aead_request_complete(req, err); +} + +static void essiv_skcipher_prepare_subreq(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req); + struct skcipher_request *subreq = &rctx->blockcipher_req; + + memset(rctx->iv, 0, crypto_cipher_blocksize(tctx->essiv_cipher)); + memcpy(rctx->iv, req->iv, crypto_skcipher_ivsize(tfm)); + + crypto_cipher_encrypt_one(tctx->essiv_cipher, rctx->iv, rctx->iv); + + skcipher_request_set_tfm(subreq, tctx->u.blockcipher); + skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, + rctx->iv); + skcipher_request_set_callback(subreq, req->base.flags, + essiv_skcipher_done, req); +} + +static int essiv_aead_prepare_subreq(struct aead_request *req) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + const struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + struct essiv_aead_request_ctx *rctx = aead_request_ctx(req); + int ivsize = crypto_cipher_blocksize(tctx->essiv_cipher); + int ssize = req->assoclen - crypto_aead_ivsize(tfm); + struct aead_request *subreq = &rctx->aead_req; + struct scatterlist *sg; + + /* + * dm-crypt embeds the sector number and the IV in the AAD region so we + * have to splice the converted IV into the subrequest that we pass on + * to the AEAD transform. This means we are tightly coupled to dm-crypt, + * but that should be the only user of this code in AEAD mode. 
+ */ + if (ssize < 0 || sg_nents_for_len(req->src, ssize) != 1) + return -EINVAL; + + memset(rctx->iv, 0, ivsize); + memcpy(rctx->iv, req->iv, crypto_aead_ivsize(tfm)); + + crypto_cipher_encrypt_one(tctx->essiv_cipher, rctx->iv, rctx->iv); + + sg_init_table(rctx->src, 4); + sg_set_page(rctx->src, sg_page(req->src), ssize, req->src->offset); + sg_set_buf(rctx->src + 1, rctx->iv, ivsize); + sg = scatterwalk_ffwd(rctx->src + 2, req->src, req->assoclen); + if (sg != rctx->src + 2) + sg_chain(rctx->src, 3, sg); + + sg_init_table(rctx->dst, 4); + sg_set_page(rctx->dst, sg_page(req->dst), ssize, req->dst->offset); + sg_set_buf(rctx->dst + 1, rctx->iv, ivsize); + sg = scatterwalk_ffwd(rctx->dst + 2, req->dst, req->assoclen); + if (sg != rctx->dst + 2) + sg_chain(rctx->dst, 3, sg); + + aead_request_set_tfm(subreq, tctx->u.aead); + aead_request_set_crypt(subreq, rctx->src, rctx->dst, req->cryptlen, + rctx->iv); + aead_request_set_ad(subreq, ssize + ivsize); + aead_request_set_callback(subreq, req->base.flags, essiv_aead_done, req); + + return 0; +} + +static int essiv_skcipher_encrypt(struct skcipher_request *req) +{ + struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req); + + essiv_skcipher_prepare_subreq(req); + return crypto_skcipher_encrypt(&rctx->blockcipher_req); +} + +static int essiv_aead_encrypt(struct aead_request *req) +{ + struct essiv_aead_request_ctx *rctx = aead_request_ctx(req); + int err; + + err = essiv_aead_prepare_subreq(req); + if (err) + return err; + return crypto_aead_encrypt(&rctx->aead_req); +} + +static int essiv_skcipher_decrypt(struct skcipher_request *req) +{ + struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req); + + essiv_skcipher_prepare_subreq(req); + return crypto_skcipher_decrypt(&rctx->blockcipher_req); +} + +static int essiv_aead_decrypt(struct aead_request *req) +{ + struct essiv_aead_request_ctx *rctx = aead_request_ctx(req); + int err; + + err = essiv_aead_prepare_subreq(req); + if (err) + return err; + + return 
crypto_aead_decrypt(&rctx->aead_req); +} + +static int essiv_init_tfm(struct essiv_instance_ctx *ictx, + struct essiv_tfm_ctx *tctx) +{ + struct crypto_cipher *essiv_cipher; + struct crypto_shash *hash; + int err; + + essiv_cipher = crypto_spawn_cipher(&ictx->essiv_cipher_spawn); + if (IS_ERR(essiv_cipher)) + return PTR_ERR(essiv_cipher); + + hash = crypto_spawn_shash(&ictx->hash_spawn); + if (IS_ERR(hash)) { + err = PTR_ERR(hash); + goto err_free_essiv_cipher; + } + + tctx->essiv_cipher = essiv_cipher; + tctx->hash = hash; + + return 0; + +err_free_essiv_cipher: + crypto_free_cipher(essiv_cipher); + return err; +} + +static int essiv_skcipher_init_tfm(struct crypto_skcipher *tfm) +{ + struct skcipher_instance *inst = skcipher_alg_instance(tfm); + struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst); + struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct crypto_skcipher *blockcipher; + unsigned int subreq_size; + int err; + + BUILD_BUG_ON(offsetofend(struct essiv_skcipher_request_ctx, + blockcipher_req) != + sizeof(struct essiv_skcipher_request_ctx)); + + blockcipher = crypto_spawn_skcipher(&ictx->u.blockcipher_spawn); + if (IS_ERR(blockcipher)) + return PTR_ERR(blockcipher); + + subreq_size = FIELD_SIZEOF(struct essiv_skcipher_request_ctx, + blockcipher_req) + + crypto_skcipher_reqsize(blockcipher); + + crypto_skcipher_set_reqsize(tfm, offsetof(struct essiv_skcipher_request_ctx, + blockcipher_req) + subreq_size); + + err = essiv_init_tfm(ictx, tctx); + if (err) + crypto_free_skcipher(blockcipher); + + tctx->u.blockcipher = blockcipher; + return err; +} + +static int essiv_aead_init_tfm(struct crypto_aead *tfm) +{ + struct aead_instance *inst = aead_alg_instance(tfm); + struct essiv_instance_ctx *ictx = aead_instance_ctx(inst); + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + struct crypto_aead *aead; + unsigned int subreq_size; + int err; + + BUILD_BUG_ON(offsetofend(struct essiv_aead_request_ctx, aead_req) != + sizeof(struct 
essiv_aead_request_ctx)); + + aead = crypto_spawn_aead(&ictx->u.aead_spawn); + if (IS_ERR(aead)) + return PTR_ERR(aead); + + subreq_size = FIELD_SIZEOF(struct essiv_aead_request_ctx, aead_req) + + crypto_aead_reqsize(aead); + + crypto_aead_set_reqsize(tfm, offsetof(struct essiv_aead_request_ctx, + aead_req) + subreq_size); + + err = essiv_init_tfm(ictx, tctx); + if (err) + crypto_free_aead(aead); + + tctx->u.aead = aead; + return err; +} + +static void essiv_skcipher_exit_tfm(struct crypto_skcipher *tfm) +{ + struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + + crypto_free_skcipher(tctx->u.blockcipher); + crypto_free_cipher(tctx->essiv_cipher); + crypto_free_shash(tctx->hash); +} + +static void essiv_aead_exit_tfm(struct crypto_aead *tfm) +{ + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + + crypto_free_aead(tctx->u.aead); + crypto_free_cipher(tctx->essiv_cipher); + crypto_free_shash(tctx->hash); +} + +static void essiv_skcipher_free_instance(struct skcipher_instance *inst) +{ + struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst); + + crypto_drop_skcipher(&ictx->u.blockcipher_spawn); + crypto_drop_spawn(&ictx->essiv_cipher_spawn); + crypto_drop_shash(&ictx->hash_spawn); + kfree(inst); +} + +static void essiv_aead_free_instance(struct aead_instance *inst) +{ + struct essiv_instance_ctx *ictx = aead_instance_ctx(inst); + + crypto_drop_aead(&ictx->u.aead_spawn); + crypto_drop_spawn(&ictx->essiv_cipher_spawn); + crypto_drop_shash(&ictx->hash_spawn); + kfree(inst); +} + +static bool essiv_supported_algorithms(struct crypto_alg *essiv_cipher_alg, + struct shash_alg *hash_alg, + int ivsize) +{ + if (hash_alg->digestsize < essiv_cipher_alg->cra_cipher.cia_min_keysize || + hash_alg->digestsize > essiv_cipher_alg->cra_cipher.cia_max_keysize) + return false; + + if (ivsize != essiv_cipher_alg->cra_blocksize) + return false; + + if (ivsize > MAX_INNER_IV_SIZE) + return false; + + return true; +} + +static int essiv_create(struct crypto_template *tmpl, 
struct rtattr **tb) +{ + struct crypto_attr_type *algt; + const char *blockcipher_name; + const char *essiv_cipher_name; + const char *shash_name; + struct skcipher_instance *skcipher_inst = NULL; + struct aead_instance *aead_inst = NULL; + struct crypto_instance *inst; + struct crypto_alg *base, *block_base; + struct essiv_instance_ctx *ictx; + struct skcipher_alg *blockcipher_alg = NULL; + struct aead_alg *aead_alg = NULL; + struct crypto_alg *essiv_cipher_alg; + struct crypto_alg *_hash_alg; + struct shash_alg *hash_alg; + int ivsize; + u32 type; + int err; + + algt = crypto_get_attr_type(tb); + if (IS_ERR(algt)) + return PTR_ERR(algt); + + blockcipher_name = crypto_attr_alg_name(tb[1]); + if (IS_ERR(blockcipher_name)) + return PTR_ERR(blockcipher_name); + + essiv_cipher_name = crypto_attr_alg_name(tb[2]); + if (IS_ERR(essiv_cipher_name)) + return PTR_ERR(essiv_cipher_name); + + shash_name = crypto_attr_alg_name(tb[3]); + if (IS_ERR(shash_name)) + return PTR_ERR(shash_name); + + type = algt->type & algt->mask; + + switch (type) { + case CRYPTO_ALG_TYPE_BLKCIPHER: + skcipher_inst = kzalloc(sizeof(*skcipher_inst) + + sizeof(*ictx), GFP_KERNEL); + if (!skcipher_inst) + return -ENOMEM; + inst = skcipher_crypto_instance(skcipher_inst); + base = &skcipher_inst->alg.base; + ictx = crypto_instance_ctx(inst); + + /* Block cipher, e.g. 
"cbc(aes)" */ + crypto_set_skcipher_spawn(&ictx->u.blockcipher_spawn, inst); + err = crypto_grab_skcipher(&ictx->u.blockcipher_spawn, + blockcipher_name, 0, + crypto_requires_sync(algt->type, + algt->mask)); + if (err) + goto out_free_inst; + blockcipher_alg = crypto_spawn_skcipher_alg(&ictx->u.blockcipher_spawn); + block_base = &blockcipher_alg->base; + ivsize = blockcipher_alg->ivsize; + break; + + case CRYPTO_ALG_TYPE_AEAD: + aead_inst = kzalloc(sizeof(*aead_inst) + + sizeof(*ictx), GFP_KERNEL); + if (!aead_inst) + return -ENOMEM; + inst = aead_crypto_instance(aead_inst); + base = &aead_inst->alg.base; + ictx = crypto_instance_ctx(inst); + + /* AEAD cipher, e.g. "authenc(hmac(sha256),cbc(aes))" */ + crypto_set_aead_spawn(&ictx->u.aead_spawn, inst); + err = crypto_grab_aead(&ictx->u.aead_spawn, + blockcipher_name, 0, + crypto_requires_sync(algt->type, + algt->mask)); + if (err) + goto out_free_inst; + aead_alg = crypto_spawn_aead_alg(&ictx->u.aead_spawn); + block_base = &aead_alg->base; + ivsize = aead_alg->ivsize; + break; + + default: + return -EINVAL; + } + + /* Block cipher, e.g. 
"aes" */ + crypto_set_spawn(&ictx->essiv_cipher_spawn, inst); + err = crypto_grab_spawn(&ictx->essiv_cipher_spawn, essiv_cipher_name, + CRYPTO_ALG_TYPE_CIPHER, CRYPTO_ALG_TYPE_MASK); + if (err) + goto out_drop_blockcipher; + essiv_cipher_alg = ictx->essiv_cipher_spawn.alg; + + /* Synchronous hash, e.g., "sha256" */ + _hash_alg = crypto_alg_mod_lookup(shash_name, + CRYPTO_ALG_TYPE_SHASH, + CRYPTO_ALG_TYPE_MASK); + if (IS_ERR(_hash_alg)) { + err = PTR_ERR(_hash_alg); + goto out_drop_essiv_cipher; + } + hash_alg = __crypto_shash_alg(_hash_alg); + err = crypto_init_shash_spawn(&ictx->hash_spawn, hash_alg, inst); + if (err) + goto out_put_hash; + + /* Check the set of algorithms */ + if (!essiv_supported_algorithms(essiv_cipher_alg, hash_alg, ivsize)) { + pr_warn("Unsupported essiv instantiation: (%s,%s,%s)\n", + block_base->cra_name, + essiv_cipher_alg->cra_name, + hash_alg->base.cra_name); + err = -EINVAL; + goto out_drop_hash; + } + + /* Instance fields */ + + err = -ENAMETOOLONG; + if (snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, + "essiv(%s,%s,%s)", block_base->cra_name, + essiv_cipher_alg->cra_name, + hash_alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME) + goto out_drop_hash; + if (snprintf(base->cra_driver_name, CRYPTO_MAX_ALG_NAME, + "essiv(%s,%s,%s)", + block_base->cra_driver_name, + essiv_cipher_alg->cra_driver_name, + hash_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME) + goto out_drop_hash; + + base->cra_flags = block_base->cra_flags & CRYPTO_ALG_ASYNC; + base->cra_blocksize = block_base->cra_blocksize; + base->cra_ctxsize = sizeof(struct essiv_tfm_ctx); + base->cra_alignmask = block_base->cra_alignmask; + base->cra_priority = block_base->cra_priority; + + if (type == CRYPTO_ALG_TYPE_BLKCIPHER) { + skcipher_inst->alg.setkey = essiv_skcipher_setkey; + skcipher_inst->alg.encrypt = essiv_skcipher_encrypt; + skcipher_inst->alg.decrypt = essiv_skcipher_decrypt; + skcipher_inst->alg.init = essiv_skcipher_init_tfm; + skcipher_inst->alg.exit = 
essiv_skcipher_exit_tfm; + + skcipher_inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(blockcipher_alg); + skcipher_inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(blockcipher_alg); + skcipher_inst->alg.ivsize = ESSIV_IV_SIZE; + skcipher_inst->alg.chunksize = blockcipher_alg->chunksize; + skcipher_inst->alg.walksize = blockcipher_alg->walksize; + + skcipher_inst->free = essiv_skcipher_free_instance; + + err = skcipher_register_instance(tmpl, skcipher_inst); + } else { + aead_inst->alg.setkey = essiv_aead_setkey; + aead_inst->alg.setauthsize = essiv_aead_setauthsize; + aead_inst->alg.encrypt = essiv_aead_encrypt; + aead_inst->alg.decrypt = essiv_aead_decrypt; + aead_inst->alg.init = essiv_aead_init_tfm; + aead_inst->alg.exit = essiv_aead_exit_tfm; + + aead_inst->alg.ivsize = ESSIV_IV_SIZE; + aead_inst->alg.maxauthsize = aead_alg->maxauthsize; + aead_inst->alg.chunksize = aead_alg->chunksize; + + aead_inst->free = essiv_aead_free_instance; + + err = aead_register_instance(tmpl, aead_inst); + } + + if (err) + goto out_drop_hash; + + crypto_mod_put(_hash_alg); + return 0; + +out_drop_hash: + crypto_drop_shash(&ictx->hash_spawn); +out_put_hash: + crypto_mod_put(_hash_alg); +out_drop_essiv_cipher: + crypto_drop_spawn(&ictx->essiv_cipher_spawn); +out_drop_blockcipher: + if (type == CRYPTO_ALG_TYPE_BLKCIPHER) { + crypto_drop_skcipher(&ictx->u.blockcipher_spawn); + } else { + crypto_drop_aead(&ictx->u.aead_spawn); + } +out_free_inst: + kfree(skcipher_inst); + kfree(aead_inst); + return err; +} + +/* essiv(blockcipher_name, essiv_cipher_name, shash_name) */ +static struct crypto_template essiv_tmpl = { + .name = "essiv", + .create = essiv_create, + .module = THIS_MODULE, +}; + +static int __init essiv_module_init(void) +{ + return crypto_register_template(&essiv_tmpl); +} + +static void __exit essiv_module_exit(void) +{ + crypto_unregister_template(&essiv_tmpl); +} + +subsys_initcall(essiv_module_init); +module_exit(essiv_module_exit); + 
+MODULE_DESCRIPTION("ESSIV skcipher/aead wrapper for block encryption"); +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS_CRYPTO("essiv");

From patchwork Tue Jun 18 21:27:47 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 167199
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel, Herbert Xu, Eric Biggers, dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef, Milan Broz
Subject: [PATCH v2 2/4] fs: crypto: invoke crypto API for ESSIV handling
Date: Tue, 18 Jun 2019 23:27:47 +0200
Message-Id: <20190618212749.8995-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20190618212749.8995-1-ard.biesheuvel@linaro.org>

Instead of open-coding the ESSIV calculations, use an ESSIV skcipher, which does all of this under the hood.
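With this change, fscrypt requests a single "essiv(cbc(aes),aes,sha256)" instance with an 8-byte IV instead of cbc(aes) plus a hand-rolled ESSIV cipher. A brief sketch of how such an instance name is composed and length-checked, mirroring the snprintf() calls in essiv_create() from patch 1/4 (the CRYPTO_MAX_ALG_NAME value of 128 is an assumption taken from the kernel's crypto.h, not from this patch):

```python
CRYPTO_MAX_ALG_NAME = 128  # assumed value of the kernel constant

def essiv_instance_name(blockcipher: str, essiv_cipher: str, shash: str) -> str:
    # Mirrors the name construction in essiv_create(): the instance is
    # named essiv(<blockcipher>,<essiv cipher>,<shash>), and registration
    # fails with -ENAMETOOLONG if the result does not fit the name buffer.
    name = f"essiv({blockcipher},{essiv_cipher},{shash})"
    if len(name) >= CRYPTO_MAX_ALG_NAME:
        raise ValueError("ENAMETOOLONG")
    return name
```

For example, essiv_instance_name("cbc(aes)", "aes", "sha256") yields exactly the cipher_str this patch puts in the FS_ENCRYPTION_MODE_AES_128_CBC mode table entry.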
Signed-off-by: Ard Biesheuvel
---
 fs/crypto/Kconfig           |  1 +
 fs/crypto/crypto.c          |  5 --
 fs/crypto/fscrypt_private.h |  9 --
 fs/crypto/keyinfo.c         | 88 +-------------------
 4 files changed, 3 insertions(+), 100 deletions(-)

-- 
2.17.1

diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig
index 24ed99e2eca0..b0292da8613c 100644
--- a/fs/crypto/Kconfig
+++ b/fs/crypto/Kconfig
@@ -5,6 +5,7 @@ config FS_ENCRYPTION
 	select CRYPTO_AES
 	select CRYPTO_CBC
 	select CRYPTO_ECB
+	select CRYPTO_ESSIV
 	select CRYPTO_XTS
 	select CRYPTO_CTS
 	select CRYPTO_SHA256
diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index 335a362ee446..c53ce262a06c 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -136,9 +136,6 @@ void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
 	if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY)
 		memcpy(iv->nonce, ci->ci_nonce, FS_KEY_DERIVATION_NONCE_SIZE);
-
-	if (ci->ci_essiv_tfm != NULL)
-		crypto_cipher_encrypt_one(ci->ci_essiv_tfm, iv->raw, iv->raw);
 }
 
 int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
@@ -492,8 +489,6 @@ static void __exit fscrypt_exit(void)
 	destroy_workqueue(fscrypt_read_workqueue);
 	kmem_cache_destroy(fscrypt_ctx_cachep);
 	kmem_cache_destroy(fscrypt_info_cachep);
-
-	fscrypt_essiv_cleanup();
 }
 module_exit(fscrypt_exit);
diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
index 7da276159593..59d0cba9cfb9 100644
--- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h
@@ -61,12 +61,6 @@ struct fscrypt_info {
 	/* The actual crypto transform used for encryption and decryption */
 	struct crypto_skcipher *ci_ctfm;
 
-	/*
-	 * Cipher for ESSIV IV generation. Only set for CBC contents
-	 * encryption, otherwise is NULL.
-	 */
-	struct crypto_cipher *ci_essiv_tfm;
-
 	/*
 	 * Encryption mode used for this inode. It corresponds to either
 	 * ci_data_mode or ci_filename_mode, depending on the inode type.
@@ -166,9 +160,6 @@ struct fscrypt_mode {
 	int keysize;
 	int ivsize;
 	bool logged_impl_name;
-	bool needs_essiv;
 };
 
-extern void __exit fscrypt_essiv_cleanup(void);
-
 #endif /* _FSCRYPT_PRIVATE_H */
diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
index dcd91a3fbe49..82c7eb86ca00 100644
--- a/fs/crypto/keyinfo.c
+++ b/fs/crypto/keyinfo.c
@@ -19,8 +19,6 @@
 #include
 #include "fscrypt_private.h"
 
-static struct crypto_shash *essiv_hash_tfm;
-
 /* Table of keys referenced by FS_POLICY_FLAG_DIRECT_KEY policies */
 static DEFINE_HASHTABLE(fscrypt_master_keys, 6); /* 6 bits = 64 buckets */
 static DEFINE_SPINLOCK(fscrypt_master_keys_lock);
@@ -144,10 +142,9 @@ static struct fscrypt_mode available_modes[] = {
 	},
 	[FS_ENCRYPTION_MODE_AES_128_CBC] = {
 		.friendly_name = "AES-128-CBC",
-		.cipher_str = "cbc(aes)",
+		.cipher_str = "essiv(cbc(aes),aes,sha256)",
 		.keysize = 16,
-		.ivsize = 16,
-		.needs_essiv = true,
+		.ivsize = 8,
 	},
 	[FS_ENCRYPTION_MODE_AES_128_CTS] = {
 		.friendly_name = "AES-128-CTS-CBC",
@@ -377,72 +374,6 @@ fscrypt_get_master_key(const struct fscrypt_info *ci, struct fscrypt_mode *mode,
 	return ERR_PTR(err);
 }
 
-static int derive_essiv_salt(const u8 *key, int keysize, u8 *salt)
-{
-	struct crypto_shash *tfm = READ_ONCE(essiv_hash_tfm);
-
-	/* init hash transform on demand */
-	if (unlikely(!tfm)) {
-		struct crypto_shash *prev_tfm;
-
-		tfm = crypto_alloc_shash("sha256", 0, 0);
-		if (IS_ERR(tfm)) {
-			fscrypt_warn(NULL,
-				     "error allocating SHA-256 transform: %ld",
-				     PTR_ERR(tfm));
-			return PTR_ERR(tfm);
-		}
-		prev_tfm = cmpxchg(&essiv_hash_tfm, NULL, tfm);
-		if (prev_tfm) {
-			crypto_free_shash(tfm);
-			tfm = prev_tfm;
-		}
-	}
-
-	{
-		SHASH_DESC_ON_STACK(desc, tfm);
-		desc->tfm = tfm;
-
-		return crypto_shash_digest(desc, key, keysize, salt);
-	}
-}
-
-static int init_essiv_generator(struct fscrypt_info *ci, const u8 *raw_key,
-				int keysize)
-{
-	int err;
-	struct crypto_cipher *essiv_tfm;
-	u8 salt[SHA256_DIGEST_SIZE];
-
-	essiv_tfm = crypto_alloc_cipher("aes", 0, 0);
-	if (IS_ERR(essiv_tfm))
-		return PTR_ERR(essiv_tfm);
-
-	ci->ci_essiv_tfm = essiv_tfm;
-
-	err = derive_essiv_salt(raw_key, keysize, salt);
-	if (err)
-		goto out;
-
-	/*
-	 * Using SHA256 to derive the salt/key will result in AES-256 being
-	 * used for IV generation. File contents encryption will still use the
-	 * configured keysize (AES-128) nevertheless.
-	 */
-	err = crypto_cipher_setkey(essiv_tfm, salt, sizeof(salt));
-	if (err)
-		goto out;
-
-out:
-	memzero_explicit(salt, sizeof(salt));
-	return err;
-}
-
-void __exit fscrypt_essiv_cleanup(void)
-{
-	crypto_free_shash(essiv_hash_tfm);
-}
-
 /*
  * Given the encryption mode and key (normally the derived key, but for
  * FS_POLICY_FLAG_DIRECT_KEY mode it's the master key), set up the inode's
@@ -454,7 +385,6 @@ static int setup_crypto_transform(struct fscrypt_info *ci,
 {
 	struct fscrypt_master_key *mk;
 	struct crypto_skcipher *ctfm;
-	int err;
 
 	if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) {
 		mk = fscrypt_get_master_key(ci, mode, raw_key, inode);
@@ -470,19 +400,6 @@ static int setup_crypto_transform(struct fscrypt_info *ci,
 	ci->ci_master_key = mk;
 	ci->ci_ctfm = ctfm;
 
-	if (mode->needs_essiv) {
-		/* ESSIV implies 16-byte IVs which implies !DIRECT_KEY */
-		WARN_ON(mode->ivsize != AES_BLOCK_SIZE);
-		WARN_ON(ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY);
-
-		err = init_essiv_generator(ci, raw_key, mode->keysize);
-		if (err) {
-			fscrypt_warn(inode->i_sb,
-				     "error initializing ESSIV generator for inode %lu: %d",
-				     inode->i_ino, err);
-			return err;
-		}
-	}
 	return 0;
 }
 
@@ -495,7 +412,6 @@ static void put_crypt_info(struct fscrypt_info *ci)
 		put_master_key(ci->ci_master_key);
 	} else {
 		crypto_free_skcipher(ci->ci_ctfm);
-		crypto_free_cipher(ci->ci_essiv_tfm);
 	}
 	kmem_cache_free(fscrypt_info_cachep, ci);
 }

From patchwork Tue Jun 18 21:27:48 2019
X-Patchwork-Id: 167200
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel, Herbert Xu, Eric Biggers, dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef, Milan Broz
Subject: [PATCH v2 3/4] md: dm-crypt: infer ESSIV block cipher from cipher string directly
Date: Tue, 18 Jun 2019 23:27:48 +0200
Message-Id: <20190618212749.8995-4-ard.biesheuvel@linaro.org>
In-Reply-To: <20190618212749.8995-1-ard.biesheuvel@linaro.org>
References: <20190618212749.8995-1-ard.biesheuvel@linaro.org>

Instead of allocating a crypto skcipher tfm 'foo' and attempting to
infer the encapsulated block cipher from the driver's 'name' field,
directly parse the string that we used to allocate the tfm. These are
always identical (unless the allocation failed, in which case we bail
anyway), but using the string allows us to use it in the allocation,
which is something we will need when switching to the 'essiv' crypto
API template.
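The parsing that crypt_ctr_blkdev_cipher() performs on the cipher string can be sketched as follows. This is a simplified Python stand-in for the strchr() walk in the C code (error handling for malformed strings is omitted, and the function name mirrors the C one only loosely):

```python
def blkdev_cipher(alg_name: str, integrity_hmac: bool = False) -> str:
    """Extract the bare block cipher name from a crypto API spec."""
    if integrity_hmac:
        # "authenc(hmac(sha256),cbc(aes))": skip past the comma so the
        # search below lands on the encryption half of the spec.
        alg_name = alg_name.split(",", 1)[1]
    start = alg_name.index("(")   # strchr(alg_name, '(')
    end = alg_name.index(")")     # strchr(alg_name, ')')
    return alg_name[start + 1:end]

print(blkdev_cipher("cbc(aes)"))                              # aes
print(blkdev_cipher("authenc(hmac(sha256),cbc(aes))", True))  # aes
```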
Signed-off-by: Ard Biesheuvel
---
 drivers/md/dm-crypt.c | 35 +++++++++----------
 1 file changed, 15 insertions(+), 20 deletions(-)

-- 
2.17.1

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 1b16d34bb785..f001f1104cb5 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -2321,25 +2321,17 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
  * The cc->cipher is currently used only in ESSIV.
  * This should be probably done by crypto-api calls (once available...)
  */
-static int crypt_ctr_blkdev_cipher(struct crypt_config *cc)
+static int crypt_ctr_blkdev_cipher(struct crypt_config *cc, char *alg_name)
 {
-	const char *alg_name = NULL;
 	char *start, *end;
 
 	if (crypt_integrity_aead(cc)) {
-		alg_name = crypto_tfm_alg_name(crypto_aead_tfm(any_tfm_aead(cc)));
-		if (!alg_name)
-			return -EINVAL;
 		if (crypt_integrity_hmac(cc)) {
 			alg_name = strchr(alg_name, ',');
 			if (!alg_name)
 				return -EINVAL;
 		}
 		alg_name++;
-	} else {
-		alg_name = crypto_tfm_alg_name(crypto_skcipher_tfm(any_tfm(cc)));
-		if (!alg_name)
-			return -EINVAL;
 	}
 
 	start = strchr(alg_name, '(');
@@ -2434,6 +2426,20 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
 	if (*ivmode && !strcmp(*ivmode, "lmk"))
 		cc->tfms_count = 64;
 
+	if (crypt_integrity_aead(cc)) {
+		ret = crypt_ctr_auth_cipher(cc, cipher_api);
+		if (ret < 0) {
+			ti->error = "Invalid AEAD cipher spec";
+			return -ENOMEM;
+		}
+	}
+
+	ret = crypt_ctr_blkdev_cipher(cc, cipher_api);
+	if (ret < 0) {
+		ti->error = "Cannot allocate cipher string";
+		return -ENOMEM;
+	}
+
 	cc->key_parts = cc->tfms_count;
 
 	/* Allocate cipher */
@@ -2445,21 +2451,10 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
 	/* Alloc AEAD, can be used only in new format. */
 	if (crypt_integrity_aead(cc)) {
-		ret = crypt_ctr_auth_cipher(cc, cipher_api);
-		if (ret < 0) {
-			ti->error = "Invalid AEAD cipher spec";
-			return -ENOMEM;
-		}
 		cc->iv_size = crypto_aead_ivsize(any_tfm_aead(cc));
 	} else
 		cc->iv_size = crypto_skcipher_ivsize(any_tfm(cc));
 
-	ret = crypt_ctr_blkdev_cipher(cc);
-	if (ret < 0) {
-		ti->error = "Cannot allocate cipher string";
-		return -ENOMEM;
-	}
-
 	return 0;
 }

From patchwork Tue Jun 18 21:27:49 2019
X-Patchwork-Id: 167201
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel, Herbert Xu, Eric Biggers, dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef, Milan Broz
Subject: [PATCH v2 4/4] md: dm-crypt: switch to ESSIV crypto API template
Date: Tue, 18 Jun 2019 23:27:49 +0200
Message-Id: <20190618212749.8995-5-ard.biesheuvel@linaro.org>
In-Reply-To: <20190618212749.8995-1-ard.biesheuvel@linaro.org>
References: <20190618212749.8995-1-ard.biesheuvel@linaro.org>

Replace the explicit ESSIV handling in the dm-crypt driver with calls
into the crypto API, which now possesses the capability to perform
this processing within the crypto subsystem.

Signed-off-by: Ard Biesheuvel
---
 drivers/md/Kconfig    |   1 +
 drivers/md/dm-crypt.c | 208 +++-----------------
 2 files changed, 31 insertions(+), 178 deletions(-)

-- 
2.17.1

diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 45254b3ef715..30ca87cf25db 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -271,6 +271,7 @@ config DM_CRYPT
 	depends on BLK_DEV_DM
 	select CRYPTO
 	select CRYPTO_CBC
+	select CRYPTO_ESSIV
 	---help---
 	  This device-mapper target allows you to create a device that
 	  transparently encrypts the data on it.
	  You'll need to activate
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index f001f1104cb5..89efd7d249fd 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -98,11 +98,6 @@ struct crypt_iv_operations {
 			 struct dm_crypt_request *dmreq);
 };
 
-struct iv_essiv_private {
-	struct crypto_shash *hash_tfm;
-	u8 *salt;
-};
-
 struct iv_benbi_private {
 	int shift;
 };
@@ -155,7 +150,6 @@ struct crypt_config {
 	const struct crypt_iv_operations *iv_gen_ops;
 	union {
-		struct iv_essiv_private essiv;
 		struct iv_benbi_private benbi;
 		struct iv_lmk_private lmk;
 		struct iv_tcw_private tcw;
@@ -165,8 +159,6 @@ struct crypt_config {
 	unsigned short int sector_size;
 	unsigned char sector_shift;
 
-	/* ESSIV: struct crypto_cipher *essiv_tfm */
-	void *iv_private;
 	union {
 		struct crypto_skcipher **tfms;
 		struct crypto_aead **tfms_aead;
@@ -323,161 +315,6 @@ static int crypt_iv_plain64be_gen(struct crypt_config *cc, u8 *iv,
 	return 0;
 }
 
-/* Initialise ESSIV - compute salt but no local memory allocations */
-static int crypt_iv_essiv_init(struct crypt_config *cc)
-{
-	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-	SHASH_DESC_ON_STACK(desc, essiv->hash_tfm);
-	struct crypto_cipher *essiv_tfm;
-	int err;
-
-	desc->tfm = essiv->hash_tfm;
-
-	err = crypto_shash_digest(desc, cc->key, cc->key_size, essiv->salt);
-	shash_desc_zero(desc);
-	if (err)
-		return err;
-
-	essiv_tfm = cc->iv_private;
-
-	err = crypto_cipher_setkey(essiv_tfm, essiv->salt,
-				   crypto_shash_digestsize(essiv->hash_tfm));
-	if (err)
-		return err;
-
-	return 0;
-}
-
-/* Wipe salt and reset key derived from volume key */
-static int crypt_iv_essiv_wipe(struct crypt_config *cc)
-{
-	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-	unsigned salt_size = crypto_shash_digestsize(essiv->hash_tfm);
-	struct crypto_cipher *essiv_tfm;
-	int r, err = 0;
-
-	memset(essiv->salt, 0, salt_size);
-
-	essiv_tfm = cc->iv_private;
-	r = crypto_cipher_setkey(essiv_tfm, essiv->salt, salt_size);
-	if (r)
-		err = r;
-
-	return err;
-}
-
-/* Allocate the cipher for ESSIV */
-static struct crypto_cipher *alloc_essiv_cipher(struct crypt_config *cc,
-						struct dm_target *ti,
-						const u8 *salt,
-						unsigned int saltsize)
-{
-	struct crypto_cipher *essiv_tfm;
-	int err;
-
-	/* Setup the essiv_tfm with the given salt */
-	essiv_tfm = crypto_alloc_cipher(cc->cipher, 0, 0);
-	if (IS_ERR(essiv_tfm)) {
-		ti->error = "Error allocating crypto tfm for ESSIV";
-		return essiv_tfm;
-	}
-
-	if (crypto_cipher_blocksize(essiv_tfm) != cc->iv_size) {
-		ti->error = "Block size of ESSIV cipher does "
-			    "not match IV size of block cipher";
-		crypto_free_cipher(essiv_tfm);
-		return ERR_PTR(-EINVAL);
-	}
-
-	err = crypto_cipher_setkey(essiv_tfm, salt, saltsize);
-	if (err) {
-		ti->error = "Failed to set key for ESSIV cipher";
-		crypto_free_cipher(essiv_tfm);
-		return ERR_PTR(err);
-	}
-
-	return essiv_tfm;
-}
-
-static void crypt_iv_essiv_dtr(struct crypt_config *cc)
-{
-	struct crypto_cipher *essiv_tfm;
-	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-
-	crypto_free_shash(essiv->hash_tfm);
-	essiv->hash_tfm = NULL;
-
-	kzfree(essiv->salt);
-	essiv->salt = NULL;
-
-	essiv_tfm = cc->iv_private;
-
-	if (essiv_tfm)
-		crypto_free_cipher(essiv_tfm);
-
-	cc->iv_private = NULL;
-}
-
-static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti,
-			      const char *opts)
-{
-	struct crypto_cipher *essiv_tfm = NULL;
-	struct crypto_shash *hash_tfm = NULL;
-	u8 *salt = NULL;
-	int err;
-
-	if (!opts) {
-		ti->error = "Digest algorithm missing for ESSIV mode";
-		return -EINVAL;
-	}
-
-	/* Allocate hash algorithm */
-	hash_tfm = crypto_alloc_shash(opts, 0, 0);
-	if (IS_ERR(hash_tfm)) {
-		ti->error = "Error initializing ESSIV hash";
-		err = PTR_ERR(hash_tfm);
-		goto bad;
-	}
-
-	salt = kzalloc(crypto_shash_digestsize(hash_tfm), GFP_KERNEL);
-	if (!salt) {
-		ti->error = "Error kmallocing salt storage in ESSIV";
-		err = -ENOMEM;
-		goto bad;
-	}
-
-	cc->iv_gen_private.essiv.salt = salt;
-	cc->iv_gen_private.essiv.hash_tfm = hash_tfm;
-
-	essiv_tfm = alloc_essiv_cipher(cc, ti, salt,
-				       crypto_shash_digestsize(hash_tfm));
-	if (IS_ERR(essiv_tfm)) {
-		crypt_iv_essiv_dtr(cc);
-		return PTR_ERR(essiv_tfm);
-	}
-	cc->iv_private = essiv_tfm;
-
-	return 0;
-
-bad:
-	if (hash_tfm && !IS_ERR(hash_tfm))
-		crypto_free_shash(hash_tfm);
-	kfree(salt);
-	return err;
-}
-
-static int crypt_iv_essiv_gen(struct crypt_config *cc, u8 *iv,
-			      struct dm_crypt_request *dmreq)
-{
-	struct crypto_cipher *essiv_tfm = cc->iv_private;
-
-	memset(iv, 0, cc->iv_size);
-	*(__le64 *)iv = cpu_to_le64(dmreq->iv_sector);
-	crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
-
-	return 0;
-}
-
 static int crypt_iv_benbi_ctr(struct crypt_config *cc, struct dm_target *ti,
 			      const char *opts)
 {
@@ -853,14 +690,6 @@ static const struct crypt_iv_operations crypt_iv_plain64be_ops = {
 	.generator = crypt_iv_plain64be_gen
 };
 
-static const struct crypt_iv_operations crypt_iv_essiv_ops = {
-	.ctr       = crypt_iv_essiv_ctr,
-	.dtr       = crypt_iv_essiv_dtr,
-	.init      = crypt_iv_essiv_init,
-	.wipe      = crypt_iv_essiv_wipe,
-	.generator = crypt_iv_essiv_gen
-};
-
 static const struct crypt_iv_operations crypt_iv_benbi_ops = {
 	.ctr       = crypt_iv_benbi_ctr,
 	.dtr       = crypt_iv_benbi_dtr,
@@ -2283,7 +2112,7 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
 	else if (strcmp(ivmode, "plain64be") == 0)
 		cc->iv_gen_ops = &crypt_iv_plain64be_ops;
 	else if (strcmp(ivmode, "essiv") == 0)
-		cc->iv_gen_ops = &crypt_iv_essiv_ops;
+		cc->iv_gen_ops = &crypt_iv_plain64_ops;
 	else if (strcmp(ivmode, "benbi") == 0)
 		cc->iv_gen_ops = &crypt_iv_benbi_ops;
 	else if (strcmp(ivmode, "null") == 0)
@@ -2397,7 +2226,7 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
 			       char **ivmode, char **ivopts)
 {
 	struct crypt_config *cc = ti->private;
-	char *tmp, *cipher_api;
+	char *tmp, *cipher_api, buf[CRYPTO_MAX_ALG_NAME];
 	int ret = -EINVAL;
 
 	cc->tfms_count = 1;
@@ -2435,9 +2264,19 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
 	}
 
 	ret = crypt_ctr_blkdev_cipher(cc, cipher_api);
-	if (ret < 0) {
-		ti->error = "Cannot allocate cipher string";
-		return -ENOMEM;
+	if (ret < 0)
+		goto bad_mem;
+
+	if (*ivmode && !strcmp(*ivmode, "essiv")) {
+		if (!*ivopts) {
+			ti->error = "Digest algorithm missing for ESSIV mode";
+			return -EINVAL;
+		}
+		ret = snprintf(buf, CRYPTO_MAX_ALG_NAME, "essiv(%s,%s,%s)",
+			       cipher_api, cc->cipher, *ivopts);
+		if (ret < 0)
+			goto bad_mem;
+		cipher_api = buf;
 	}
 
 	cc->key_parts = cc->tfms_count;
@@ -2456,6 +2295,9 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
 		cc->iv_size = crypto_skcipher_ivsize(any_tfm(cc));
 
 	return 0;
+
+bad_mem:
+	ti->error = "Cannot allocate cipher string";
+	return -ENOMEM;
 }
 
 static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key,
@@ -2515,8 +2357,18 @@ static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key
 	if (!cipher_api)
 		goto bad_mem;
 
-	ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
-		       "%s(%s)", chainmode, cipher);
+	if (!strcmp(*ivmode, "essiv")) {
+		if (!*ivopts) {
+			ti->error = "Digest algorithm missing for ESSIV mode";
+			return -EINVAL;
+		}
+		ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
+			       "essiv(%s(%s),%s,%s)", chainmode, cipher,
+			       cipher, *ivopts);
+	} else {
+		ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
+			       "%s(%s)", chainmode, cipher);
+	}
 	if (ret < 0) {
 		kfree(cipher_api);
 		goto bad_mem;
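Taken together, the two snprintf() call sites map a dm-crypt cipher specification onto an 'essiv' template instance. A hypothetical Python rendering of the old-format mapping (simplified: keycount suffixes and multi-part ivopts are ignored, and the helper name is invented):

```python
def old_format_to_api(cipher_in: str) -> str:
    # "aes-cbc-essiv:sha256" -> "essiv(cbc(aes),aes,sha256)"
    # "aes-cbc-plain64"      -> "cbc(aes)"
    cipher, chainmode, ivspec = cipher_in.split("-", 2)
    ivmode, _, ivopts = ivspec.partition(":")
    if ivmode == "essiv":
        if not ivopts:
            # corresponds to ti->error above
            raise ValueError("Digest algorithm missing for ESSIV mode")
        return f"essiv({chainmode}({cipher}),{cipher},{ivopts})"
    return f"{chainmode}({cipher})"
```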