From patchwork Thu Aug 15 19:28:55 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 171443
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: Herbert Xu, Eric Biggers, dm-devel@redhat.com,
    linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef, Milan Broz
Subject: [PATCH v12 1/4] crypto: essiv - create wrapper template for ESSIV generation
Date: Thu, 15 Aug 2019 22:28:55 +0300
Message-Id: <20190815192858.28125-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20190815192858.28125-1-ard.biesheuvel@linaro.org>
References: <20190815192858.28125-1-ard.biesheuvel@linaro.org>

Implement a template that wraps a (skcipher,shash) or (aead,shash) tuple
so that we can consolidate the ESSIV handling in fscrypt and dm-crypt and
move it into the crypto API.

This will result in better test coverage, and will allow future changes
to make the bare cipher interface internal to the crypto subsystem, in
order to increase robustness of the API against misuse.
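(Editor's illustration, not part of the patch: in ESSIV terms the template
computes the IV handed to the inner mode as E(H(key), IV). A minimal sketch
of that derivation using the same crypto API helpers the template itself
uses; sha256_tfm and aes_tfm are assumed to be pre-allocated "sha256" shash
and "aes" single-block cipher handles:

	SHASH_DESC_ON_STACK(desc, sha256_tfm);
	u8 salt[SHA256_DIGEST_SIZE];

	desc->tfm = sha256_tfm;
	crypto_shash_digest(desc, key, keylen, salt);      /* salt = sha256(key)  */
	crypto_cipher_setkey(aes_tfm, salt, sizeof(salt)); /* key the IV cipher   */
	crypto_cipher_encrypt_one(aes_tfm, iv, iv);        /* iv = AES(salt, iv)  */
	/* the encrypted iv is then used as the IV of the wrapped cbc(aes) */

The digest size picks the cipher key size, e.g. a sha256 digest keys AES-256,
which is why the template checks the digest size against the cipher's key
size limits.)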
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 crypto/Kconfig  |  28 +
 crypto/Makefile |   1 +
 crypto/essiv.c  | 645 ++++++++++++++++++++
 3 files changed, 674 insertions(+)

-- 
2.17.1

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 455a3354e291..01f81c1f3138 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -487,6 +487,34 @@ config CRYPTO_ADIANTUM

 	  If unsure, say N.

+config CRYPTO_ESSIV
+	tristate "ESSIV support for block encryption"
+	select CRYPTO_AUTHENC
+	help
+	  Encrypted salt-sector initialization vector (ESSIV) is an IV
+	  generation method that is used in some cases by fscrypt and/or
+	  dm-crypt. It uses the hash of the block encryption key as the
+	  symmetric key for a block encryption pass applied to the input
+	  IV, making low entropy IV sources more suitable for block
+	  encryption.
+
+	  This driver implements a crypto API template that can be
+	  instantiated either as a skcipher or as an aead (depending on the
+	  type of the first template argument), and which defers encryption
+	  and decryption requests to the encapsulated cipher after applying
+	  ESSIV to the input IV. Note that in the aead case, it is assumed
+	  that the keys are presented in the same format used by the authenc
+	  template, and that the IV appears at the end of the authenticated
+	  associated data (AAD) region (which is how dm-crypt uses it).
+
+	  Note that the use of ESSIV is not recommended for new deployments,
+	  and so this only needs to be enabled when interoperability with
+	  existing encrypted volumes or filesystems is required, or when
+	  building for a particular system that requires it (e.g., when
+	  the SoC in question has accelerated CBC but not XTS, making CBC
+	  combined with ESSIV the only feasible mode for h/w accelerated
+	  block encryption).
+
 comment "Hash modes"

 config CRYPTO_CMAC
diff --git a/crypto/Makefile b/crypto/Makefile
index 0d2cdd523fd9..fcb1ee679782 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -165,6 +165,7 @@ obj-$(CONFIG_CRYPTO_USER_API_AEAD) += algif_aead.o
 obj-$(CONFIG_CRYPTO_ZSTD) += zstd.o
 obj-$(CONFIG_CRYPTO_OFB) += ofb.o
 obj-$(CONFIG_CRYPTO_ECC) += ecc.o
+obj-$(CONFIG_CRYPTO_ESSIV) += essiv.o

 ecdh_generic-y += ecdh.o
 ecdh_generic-y += ecdh_helper.o
diff --git a/crypto/essiv.c b/crypto/essiv.c
new file mode 100644
index 000000000000..38b18907ce25
--- /dev/null
+++ b/crypto/essiv.c
@@ -0,0 +1,645 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ESSIV skcipher and aead template for block encryption
+ *
+ * This template encapsulates the ESSIV IV generation algorithm used by
+ * dm-crypt and fscrypt, which converts the initial vector for the skcipher
+ * used for block encryption, by encrypting it using the hash of the
+ * skcipher key as encryption key. Usually, the input IV is a 64-bit sector
+ * number in LE representation zero-padded to the size of the IV, but this
+ * is not assumed by this driver.
+ *
+ * The typical use of this template is to instantiate the skcipher
+ * 'essiv(cbc(aes),sha256)', which is the only instantiation used by
+ * fscrypt, and the most relevant one for dm-crypt. However, dm-crypt
+ * also permits ESSIV to be used in combination with the authenc template,
+ * e.g., 'essiv(authenc(hmac(sha256),cbc(aes)),sha256)', in which case
+ * we need to instantiate an aead that accepts the same special key format
+ * as the authenc template, and deals with the way the encrypted IV is
+ * embedded into the AAD area of the aead request. This means the AEAD
+ * flavor produced by this template is tightly coupled to the way dm-crypt
+ * happens to use it.
+ *
+ * Copyright (c) 2019 Linaro, Ltd.
+ *
+ * Heavily based on:
+ * adiantum length-preserving encryption mode
+ *
+ * Copyright 2018 Google LLC
+ */
+
+#include <crypto/authenc.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/scatterwalk.h>
+#include <linux/module.h>
+
+#include "internal.h"
+
+struct essiv_instance_ctx {
+	union {
+		struct crypto_skcipher_spawn	skcipher_spawn;
+		struct crypto_aead_spawn	aead_spawn;
+	} u;
+	char	essiv_cipher_name[CRYPTO_MAX_ALG_NAME];
+	struct crypto_shash	*hash;
+};
+
+struct essiv_tfm_ctx {
+	union {
+		struct crypto_skcipher	*skcipher;
+		struct crypto_aead	*aead;
+	} u;
+	struct crypto_cipher	*essiv_cipher;
+	int	ivoffset;
+};
+
+struct essiv_aead_request_ctx {
+	struct scatterlist	sg[4];
+	u8			*assoc;
+	struct aead_request	aead_req;
+};
+
+static int essiv_skcipher_setkey(struct crypto_skcipher *tfm,
+				 const u8 *key, unsigned int keylen)
+{
+	struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+	struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
+	struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+	SHASH_DESC_ON_STACK(desc, ictx->hash);
+	u8 salt[HASH_MAX_DIGESTSIZE];
+	int err;
+
+	crypto_skcipher_clear_flags(tctx->u.skcipher, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(tctx->u.skcipher,
+				  crypto_skcipher_get_flags(tfm) &
+				  CRYPTO_TFM_REQ_MASK);
+	err = crypto_skcipher_setkey(tctx->u.skcipher, key, keylen);
+	crypto_skcipher_set_flags(tfm,
+				  crypto_skcipher_get_flags(tctx->u.skcipher) &
+				  CRYPTO_TFM_RES_MASK);
+	if (err)
+		return err;
+
+	desc->tfm = ictx->hash;
+	err = crypto_shash_digest(desc, key, keylen, salt);
+	if (err)
+		return err;
+
+	crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK);
+	crypto_cipher_set_flags(tctx->essiv_cipher,
+				crypto_skcipher_get_flags(tfm) &
+				CRYPTO_TFM_REQ_MASK);
+	err = crypto_cipher_setkey(tctx->essiv_cipher, salt,
+				   crypto_shash_digestsize(ictx->hash));
+	crypto_skcipher_set_flags(tfm,
+				  crypto_cipher_get_flags(tctx->essiv_cipher) &
+				  CRYPTO_TFM_RES_MASK);
+
+	return err;
+}
+
+static int essiv_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+			     unsigned int keylen)
+{
+	struct aead_instance *inst = aead_alg_instance(tfm);
+	struct essiv_instance_ctx *ictx = aead_instance_ctx(inst);
+	struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+	SHASH_DESC_ON_STACK(desc, ictx->hash);
+	struct crypto_authenc_keys keys;
+	u8 salt[HASH_MAX_DIGESTSIZE];
+	int err;
+
+	crypto_aead_clear_flags(tctx->u.aead, CRYPTO_TFM_REQ_MASK);
+	crypto_aead_set_flags(tctx->u.aead, crypto_aead_get_flags(tfm) &
+					    CRYPTO_TFM_REQ_MASK);
+	err = crypto_aead_setkey(tctx->u.aead, key, keylen);
+	crypto_aead_set_flags(tfm, crypto_aead_get_flags(tctx->u.aead) &
+				   CRYPTO_TFM_RES_MASK);
+	if (err)
+		return err;
+
+	if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) {
+		crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		return -EINVAL;
+	}
+
+	desc->tfm = ictx->hash;
+	err = crypto_shash_init(desc) ?:
+	      crypto_shash_update(desc, keys.enckey, keys.enckeylen) ?:
+	      crypto_shash_finup(desc, keys.authkey, keys.authkeylen, salt);
+	if (err)
+		return err;
+
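+	/*
+	 * The digest computed above over the concatenated enc and auth
+	 * keys becomes the key of the bare cipher that encrypts the IV.
+	 */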
crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK); + crypto_cipher_set_flags(tctx->essiv_cipher, crypto_aead_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + err = crypto_cipher_setkey(tctx->essiv_cipher, salt, + crypto_shash_digestsize(ictx->hash)); + crypto_aead_set_flags(tfm, crypto_cipher_get_flags(tctx->essiv_cipher) & + CRYPTO_TFM_RES_MASK); + + return err; +} + +static int essiv_aead_setauthsize(struct crypto_aead *tfm, + unsigned int authsize) +{ + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + + return crypto_aead_setauthsize(tctx->u.aead, authsize); +} + +static void essiv_skcipher_done(struct crypto_async_request *areq, int err) +{ + struct skcipher_request *req = areq->data; + + skcipher_request_complete(req, err); +} + +static int essiv_skcipher_crypt(struct skcipher_request *req, bool enc) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct skcipher_request *subreq = skcipher_request_ctx(req); + + crypto_cipher_encrypt_one(tctx->essiv_cipher, req->iv, req->iv); + + skcipher_request_set_tfm(subreq, tctx->u.skcipher); + skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, + req->iv); + skcipher_request_set_callback(subreq, skcipher_request_flags(req), + essiv_skcipher_done, req); + + return enc ? crypto_skcipher_encrypt(subreq) : + crypto_skcipher_decrypt(subreq); +} + +static int essiv_skcipher_encrypt(struct skcipher_request *req) +{ + return essiv_skcipher_crypt(req, true); +} + +static int essiv_skcipher_decrypt(struct skcipher_request *req) +{ + return essiv_skcipher_crypt(req, false); +} + +static void essiv_aead_done(struct crypto_async_request *areq, int err) +{ + struct aead_request *req = areq->data; + struct essiv_aead_request_ctx *rctx = aead_request_ctx(req); + + if (rctx->assoc) + kfree(rctx->assoc); + aead_request_complete(req, err); +} + +static int essiv_aead_crypt(struct aead_request *req, bool enc) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + const struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + struct essiv_aead_request_ctx *rctx = aead_request_ctx(req); + struct aead_request *subreq = &rctx->aead_req; + struct scatterlist *src = req->src; + int err; + + crypto_cipher_encrypt_one(tctx->essiv_cipher, req->iv, req->iv); + + /* + * dm-crypt embeds the sector number and the IV in the AAD region, so + * we have to copy the converted IV into the right scatterlist before + * we pass it on. + */ + rctx->assoc = NULL; + if (req->src == req->dst || !enc) { + scatterwalk_map_and_copy(req->iv, req->dst, + req->assoclen - crypto_aead_ivsize(tfm), + crypto_aead_ivsize(tfm), 1); + } else { + u8 *iv = (u8 *)aead_request_ctx(req) + tctx->ivoffset; + int ivsize = crypto_aead_ivsize(tfm); + int ssize = req->assoclen - ivsize; + struct scatterlist *sg; + int nents; + + if (ssize < 0) + return -EINVAL; + + nents = sg_nents_for_len(req->src, ssize); + if (nents < 0) + return -EINVAL; + + memcpy(iv, req->iv, ivsize); + sg_init_table(rctx->sg, 4); + + if (unlikely(nents > 1)) { + /* + * This is a case that rarely occurs in practice, but + * for correctness, we have to deal with it nonetheless. 
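+			 * It occurs when the associated data is spread
+			 * across multiple scatterlist entries, in which
+			 * case it is linearized into a bounce buffer.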
+ */ + rctx->assoc = kmalloc(ssize, GFP_ATOMIC); + if (!rctx->assoc) + return -ENOMEM; + + scatterwalk_map_and_copy(rctx->assoc, req->src, 0, + ssize, 0); + sg_set_buf(rctx->sg, rctx->assoc, ssize); + } else { + sg_set_page(rctx->sg, sg_page(req->src), ssize, + req->src->offset); + } + + sg_set_buf(rctx->sg + 1, iv, ivsize); + sg = scatterwalk_ffwd(rctx->sg + 2, req->src, req->assoclen); + if (sg != rctx->sg + 2) + sg_chain(rctx->sg, 3, sg); + + src = rctx->sg; + } + + aead_request_set_tfm(subreq, tctx->u.aead); + aead_request_set_ad(subreq, req->assoclen); + aead_request_set_callback(subreq, aead_request_flags(req), + essiv_aead_done, req); + aead_request_set_crypt(subreq, src, req->dst, req->cryptlen, req->iv); + + err = enc ? crypto_aead_encrypt(subreq) : + crypto_aead_decrypt(subreq); + + if (rctx->assoc && err != -EINPROGRESS) + kfree(rctx->assoc); + return err; +} + +static int essiv_aead_encrypt(struct aead_request *req) +{ + return essiv_aead_crypt(req, true); +} + +static int essiv_aead_decrypt(struct aead_request *req) +{ + return essiv_aead_crypt(req, false); +} + +static int essiv_init_tfm(struct essiv_instance_ctx *ictx, + struct essiv_tfm_ctx *tctx) +{ + struct crypto_cipher *essiv_cipher; + + essiv_cipher = crypto_alloc_cipher(ictx->essiv_cipher_name, 0, 0); + if (IS_ERR(essiv_cipher)) + return PTR_ERR(essiv_cipher); + + tctx->essiv_cipher = essiv_cipher; + + return 0; +} + +static int essiv_skcipher_init_tfm(struct crypto_skcipher *tfm) +{ + struct skcipher_instance *inst = skcipher_alg_instance(tfm); + struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst); + struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + struct crypto_skcipher *skcipher; + int err; + + skcipher = crypto_spawn_skcipher(&ictx->u.skcipher_spawn); + if (IS_ERR(skcipher)) + return PTR_ERR(skcipher); + + crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) + + crypto_skcipher_reqsize(skcipher)); + + err = essiv_init_tfm(ictx, tctx); + if (err) { + crypto_free_skcipher(skcipher); + return err; + } + + tctx->u.skcipher = skcipher; + return 0; +} + +static int essiv_aead_init_tfm(struct crypto_aead *tfm) +{ + struct aead_instance *inst = aead_alg_instance(tfm); + struct essiv_instance_ctx *ictx = aead_instance_ctx(inst); + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + struct crypto_aead *aead; + unsigned int subreq_size; + int err; + + BUILD_BUG_ON(offsetofend(struct essiv_aead_request_ctx, aead_req) != + sizeof(struct essiv_aead_request_ctx)); + + aead = crypto_spawn_aead(&ictx->u.aead_spawn); + if (IS_ERR(aead)) + return PTR_ERR(aead); + + subreq_size = FIELD_SIZEOF(struct essiv_aead_request_ctx, aead_req) + + crypto_aead_reqsize(aead); + + tctx->ivoffset = offsetof(struct essiv_aead_request_ctx, aead_req) + + subreq_size; + crypto_aead_set_reqsize(tfm, tctx->ivoffset + crypto_aead_ivsize(aead)); + + err = essiv_init_tfm(ictx, tctx); + if (err) { + crypto_free_aead(aead); + return err; + } + + tctx->u.aead = aead; + return 0; +} + +static void essiv_skcipher_exit_tfm(struct crypto_skcipher *tfm) +{ + struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm); + + crypto_free_skcipher(tctx->u.skcipher); + crypto_free_cipher(tctx->essiv_cipher); +} + +static void essiv_aead_exit_tfm(struct crypto_aead *tfm) +{ + struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm); + + crypto_free_aead(tctx->u.aead); + crypto_free_cipher(tctx->essiv_cipher); +} + +static void essiv_skcipher_free_instance(struct skcipher_instance *inst) +{ + struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst); 
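+	/* drop the inner skcipher spawn and the ESSIV digest tfm */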
+ + crypto_drop_skcipher(&ictx->u.skcipher_spawn); + crypto_free_shash(ictx->hash); + kfree(inst); +} + +static void essiv_aead_free_instance(struct aead_instance *inst) +{ + struct essiv_instance_ctx *ictx = aead_instance_ctx(inst); + + crypto_drop_aead(&ictx->u.aead_spawn); + crypto_free_shash(ictx->hash); + kfree(inst); +} + +static bool parse_cipher_name(char *essiv_cipher_name, const char *cra_name) +{ + const char *p, *q; + int len; + + /* find the last opening parens */ + p = strrchr(cra_name, '('); + if (!p++) + return false; + + /* find the first closing parens in the tail of the string */ + q = strchr(p, ')'); + if (!q) + return false; + + len = q - p; + if (len >= CRYPTO_MAX_ALG_NAME) + return false; + + memcpy(essiv_cipher_name, p, len); + essiv_cipher_name[len] = '\0'; + return true; +} + +static bool essiv_supported_algorithms(const char *essiv_cipher_name, + struct shash_alg *hash_alg, + int ivsize) +{ + struct crypto_alg *alg; + bool ret = false; + + alg = crypto_alg_mod_lookup(essiv_cipher_name, + CRYPTO_ALG_TYPE_CIPHER, + CRYPTO_ALG_TYPE_MASK); + if (IS_ERR(alg)) + return false; + + if (hash_alg->digestsize < alg->cra_cipher.cia_min_keysize || + hash_alg->digestsize > alg->cra_cipher.cia_max_keysize) + goto out; + + if (ivsize != alg->cra_blocksize) + goto out; + + if (crypto_shash_alg_has_setkey(hash_alg)) + goto out; + + ret = true; + +out: + crypto_mod_put(alg); + return ret; +} + +static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb) +{ + struct crypto_attr_type *algt; + const char *inner_cipher_name; + const char *shash_name; + struct skcipher_instance *skcipher_inst = NULL; + struct aead_instance *aead_inst = NULL; + struct crypto_instance *inst; + struct crypto_alg *base, *block_base; + struct essiv_instance_ctx *ictx; + struct skcipher_alg *skcipher_alg = NULL; + struct aead_alg *aead_alg = NULL; + struct shash_alg *hash_alg; + int ivsize; + u32 type; + int err; + + algt = crypto_get_attr_type(tb); + if (IS_ERR(algt)) + return PTR_ERR(algt); + + inner_cipher_name = crypto_attr_alg_name(tb[1]); + if (IS_ERR(inner_cipher_name)) + return PTR_ERR(inner_cipher_name); + + shash_name = crypto_attr_alg_name(tb[2]); + if (IS_ERR(shash_name)) + return PTR_ERR(shash_name); + + type = algt->type & algt->mask; + + switch (type) { + case CRYPTO_ALG_TYPE_BLKCIPHER: + skcipher_inst = kzalloc(sizeof(*skcipher_inst) + + sizeof(*ictx), GFP_KERNEL); + if (!skcipher_inst) + return -ENOMEM; + inst = skcipher_crypto_instance(skcipher_inst); + base = &skcipher_inst->alg.base; + ictx = crypto_instance_ctx(inst); + + /* Symmetric cipher, e.g., "cbc(aes)" */ + crypto_set_skcipher_spawn(&ictx->u.skcipher_spawn, inst); + err = crypto_grab_skcipher(&ictx->u.skcipher_spawn, + inner_cipher_name, 0, + crypto_requires_sync(algt->type, + algt->mask)); + if (err) + goto out_free_inst; + skcipher_alg = crypto_spawn_skcipher_alg(&ictx->u.skcipher_spawn); + block_base = &skcipher_alg->base; + ivsize = crypto_skcipher_alg_ivsize(skcipher_alg); + break; + + case CRYPTO_ALG_TYPE_AEAD: + aead_inst = kzalloc(sizeof(*aead_inst) + + sizeof(*ictx), GFP_KERNEL); + if (!aead_inst) + return -ENOMEM; + inst = aead_crypto_instance(aead_inst); + base = &aead_inst->alg.base; + ictx = crypto_instance_ctx(inst); + + /* AEAD cipher, e.g., "authenc(hmac(sha256),cbc(aes))" */ + crypto_set_aead_spawn(&ictx->u.aead_spawn, inst); + err = crypto_grab_aead(&ictx->u.aead_spawn, + inner_cipher_name, 0, + crypto_requires_sync(algt->type, + algt->mask)); + if (err) + goto out_free_inst; + aead_alg = 
crypto_spawn_aead_alg(&ictx->u.aead_spawn); + block_base = &aead_alg->base; + if (!strstarts(block_base->cra_name, "authenc(")) { + pr_warn("Only authenc() type AEADs are supported by ESSIV\n"); + err = -EINVAL; + goto out_drop_skcipher; + } + ivsize = aead_alg->ivsize; + break; + + default: + return -EINVAL; + } + + if (!parse_cipher_name(ictx->essiv_cipher_name, block_base->cra_name)) { + pr_warn("Failed to parse ESSIV cipher name from skcipher cra_name\n"); + err = -EINVAL; + goto out_drop_skcipher; + } + + /* Synchronous hash, e.g., "sha256" */ + ictx->hash = crypto_alloc_shash(shash_name, 0, 0); + if (IS_ERR(ictx->hash)) { + err = PTR_ERR(ictx->hash); + goto out_drop_skcipher; + } + hash_alg = crypto_shash_alg(ictx->hash); + + /* Check the set of algorithms */ + if (!essiv_supported_algorithms(ictx->essiv_cipher_name, hash_alg, + ivsize)) { + pr_warn("Unsupported essiv instantiation: essiv(%s,%s)\n", + block_base->cra_name, hash_alg->base.cra_name); + err = -EINVAL; + goto out_free_hash; + } + + /* Instance fields */ + + err = -ENAMETOOLONG; + if (snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, + "essiv(%s,%s)", block_base->cra_name, + hash_alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME) + goto out_free_hash; + if (snprintf(base->cra_driver_name, CRYPTO_MAX_ALG_NAME, + "essiv(%s,%s)", block_base->cra_driver_name, + hash_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME) + goto out_free_hash; + + base->cra_flags = block_base->cra_flags & CRYPTO_ALG_ASYNC; + base->cra_blocksize = block_base->cra_blocksize; + base->cra_ctxsize = sizeof(struct essiv_tfm_ctx); + base->cra_alignmask = block_base->cra_alignmask; + base->cra_priority = block_base->cra_priority; + + if (type == CRYPTO_ALG_TYPE_BLKCIPHER) { + skcipher_inst->alg.setkey = essiv_skcipher_setkey; + skcipher_inst->alg.encrypt = essiv_skcipher_encrypt; + skcipher_inst->alg.decrypt = essiv_skcipher_decrypt; + skcipher_inst->alg.init = essiv_skcipher_init_tfm; + skcipher_inst->alg.exit = essiv_skcipher_exit_tfm; + + skcipher_inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(skcipher_alg); + skcipher_inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(skcipher_alg); + skcipher_inst->alg.ivsize = ivsize; + skcipher_inst->alg.chunksize = crypto_skcipher_alg_chunksize(skcipher_alg); + skcipher_inst->alg.walksize = crypto_skcipher_alg_walksize(skcipher_alg); + + skcipher_inst->free = essiv_skcipher_free_instance; + + err = skcipher_register_instance(tmpl, skcipher_inst); + } else { + aead_inst->alg.setkey = essiv_aead_setkey; + aead_inst->alg.setauthsize = essiv_aead_setauthsize; + aead_inst->alg.encrypt = essiv_aead_encrypt; + aead_inst->alg.decrypt = essiv_aead_decrypt; + aead_inst->alg.init = essiv_aead_init_tfm; + aead_inst->alg.exit = essiv_aead_exit_tfm; + + aead_inst->alg.ivsize = ivsize; + aead_inst->alg.maxauthsize = crypto_aead_alg_maxauthsize(aead_alg); + aead_inst->alg.chunksize = crypto_aead_alg_chunksize(aead_alg); + + aead_inst->free = essiv_aead_free_instance; + + err = aead_register_instance(tmpl, aead_inst); + } + + if (err) + goto out_free_hash; + + return 0; + +out_free_hash: + crypto_free_shash(ictx->hash); +out_drop_skcipher: + if (type == CRYPTO_ALG_TYPE_BLKCIPHER) + crypto_drop_skcipher(&ictx->u.skcipher_spawn); + else + crypto_drop_aead(&ictx->u.aead_spawn); +out_free_inst: + kfree(skcipher_inst); + kfree(aead_inst); + return err; +} + +/* essiv(cipher_name, shash_name) */ +static struct crypto_template essiv_tmpl = { + .name = "essiv", + .create = essiv_create, + .module = THIS_MODULE, +}; + +static int __init 
essiv_module_init(void)
+{
+	return crypto_register_template(&essiv_tmpl);
+}
+
+static void __exit essiv_module_exit(void)
+{
+	crypto_unregister_template(&essiv_tmpl);
+}
+
+subsys_initcall(essiv_module_init);
+module_exit(essiv_module_exit);
+
+MODULE_DESCRIPTION("ESSIV skcipher/aead wrapper for block encryption");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_CRYPTO("essiv");
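(Editor's sketch, not part of the series: once the template above is
registered, a kernel user instantiates it by name through the regular
crypto API and only ever handles the outer tfm; "key" and "keylen" below
are placeholders for the caller's block encryption key.

	struct crypto_skcipher *tfm;
	int err;

	/* instantiate ESSIV around cbc(aes), deriving the IV key via sha256 */
	tfm = crypto_alloc_skcipher("essiv(cbc(aes),sha256)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* one setkey keys the inner cbc(aes) and, internally, the IV cipher */
	err = crypto_skcipher_setkey(tfm, key, keylen);

Requests are then issued exactly as for any other skcipher; the template
encrypts req->iv in place before handing the request down.)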
From patchwork Thu Aug 15 19:28:56 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 171445
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: Herbert Xu, Eric Biggers, dm-devel@redhat.com,
    linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef, Milan Broz
Subject: [PATCH v12 2/4] crypto: essiv - add tests for essiv in cbc(aes)+sha256 mode
Date: Thu, 15 Aug 2019 22:28:56 +0300
Message-Id: <20190815192858.28125-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20190815192858.28125-1-ard.biesheuvel@linaro.org>
References: <20190815192858.28125-1-ard.biesheuvel@linaro.org>

Add a test vector for the ESSIV mode that is the most widely used,
i.e., using cbc(aes) and sha256, in both skcipher and AEAD modes
(the latter is used by tcrypt to encapsulate the authenc
template or h/w instantiations of the same) Signed-off-by: Ard Biesheuvel --- crypto/tcrypt.c | 9 + crypto/testmgr.c | 14 + crypto/testmgr.h | 497 ++++++++++++++++++++ 3 files changed, 520 insertions(+) -- 2.17.1 diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index c578ccd92c57..83ad0b1fab30 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -2327,6 +2327,15 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb) 0, speed_template_32); break; + case 220: + test_acipher_speed("essiv(cbc(aes),sha256)", + ENCRYPT, sec, NULL, 0, + speed_template_16_24_32); + test_acipher_speed("essiv(cbc(aes),sha256)", + DECRYPT, sec, NULL, 0, + speed_template_16_24_32); + break; + case 221: test_aead_speed("aegis128", ENCRYPT, sec, NULL, 0, 16, 8, speed_template_16); diff --git a/crypto/testmgr.c b/crypto/testmgr.c index d990eba723cd..c39e39e55dc2 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -4544,6 +4544,20 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .akcipher = __VECS(ecrdsa_tv_template) } + }, { + .alg = "essiv(authenc(hmac(sha256),cbc(aes)),sha256)", + .test = alg_test_aead, + .fips_allowed = 1, + .suite = { + .aead = __VECS(essiv_hmac_sha256_aes_cbc_tv_temp) + } + }, { + .alg = "essiv(cbc(aes),sha256)", + .test = alg_test_skcipher, + .fips_allowed = 1, + .suite = { + .cipher = __VECS(essiv_aes_cbc_tv_template) + } }, { .alg = "gcm(aes)", .generic_driver = "gcm_base(ctr(aes-generic),ghash-generic)", diff --git a/crypto/testmgr.h b/crypto/testmgr.h index 154052d07818..ef7d21f39d4a 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -31070,4 +31070,501 @@ static const struct comp_testvec zstd_decomp_tv_template[] = { "functions.", }, }; + +/* based on aes_cbc_tv_template */ +static const struct cipher_testvec essiv_aes_cbc_tv_template[] = { + { + .key = "\x06\xa9\x21\x40\x36\xb8\xa1\x5b" + "\x51\x2e\x03\xd5\x34\x12\x00\x06", + .klen = 16, + .iv = "\x3d\xaf\xba\x42\x9d\x9e\xb4\x30" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "Single block msg", + .ctext = "\xfa\x59\xe7\x5f\x41\x56\x65\xc3" + "\x36\xca\x6b\x72\x10\x9f\x8c\xd4", + .len = 16, + }, { + .key = "\xc2\x86\x69\x6d\x88\x7c\x9a\xa0" + "\x61\x1b\xbb\x3e\x20\x25\xa4\x5a", + .klen = 16, + .iv = "\x56\x2e\x17\x99\x6d\x09\x3d\x28" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" + "\x10\x11\x12\x13\x14\x15\x16\x17" + "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f", + .ctext = "\xc8\x59\x9a\xfe\x79\xe6\x7b\x20" + "\x06\x7d\x55\x0a\x5e\xc7\xb5\xa7" + "\x0b\x9c\x80\xd2\x15\xa1\xb8\x6d" + "\xc6\xab\x7b\x65\xd9\xfd\x88\xeb", + .len = 32, + }, { + .key = "\x8e\x73\xb0\xf7\xda\x0e\x64\x52" + "\xc8\x10\xf3\x2b\x80\x90\x79\xe5" + "\x62\xf8\xea\xd2\x52\x2c\x6b\x7b", + .klen = 24, + .iv = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" + "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" + "\xae\x2d\x8a\x57\x1e\x03\xac\x9c" + "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51" + "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11" + "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef" + "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17" + "\xad\x2b\x41\x7b\xe6\x6c\x37\x10", + .ctext = "\x96\x6d\xa9\x7a\x42\xe6\x01\xc7" + "\x17\xfc\xa7\x41\xd3\x38\x0b\xe5" + "\x51\x48\xf7\x7e\x5e\x26\xa9\xfe" + "\x45\x72\x1c\xd9\xde\xab\xf3\x4d" + "\x39\x47\xc5\x4f\x97\x3a\x55\x63" + "\x80\x29\x64\x4c\x33\xe8\x21\x8a" + "\x6a\xef\x6b\x6a\x8f\x43\xc0\xcb" + "\xf0\xf3\x6e\x74\x54\x44\x92\x44", + .len = 64, + }, { + .key = "\x60\x3d\xeb\x10\x15\xca\x71\xbe" + 
"\x2b\x73\xae\xf0\x85\x7d\x77\x81" + "\x1f\x35\x2c\x07\x3b\x61\x08\xd7" + "\x2d\x98\x10\xa3\x09\x14\xdf\xf4", + .klen = 32, + .iv = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" + "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" + "\xae\x2d\x8a\x57\x1e\x03\xac\x9c" + "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51" + "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11" + "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef" + "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17" + "\xad\x2b\x41\x7b\xe6\x6c\x37\x10", + .ctext = "\x24\x52\xf1\x48\x74\xd0\xa7\x93" + "\x75\x9b\x63\x46\xc0\x1c\x1e\x17" + "\x4d\xdc\x5b\x3a\x27\x93\x2a\x63" + "\xf7\xf1\xc7\xb3\x54\x56\x5b\x50" + "\xa3\x31\xa5\x8b\xd6\xfd\xb6\x3c" + "\x8b\xf6\xf2\x45\x05\x0c\xc8\xbb" + "\x32\x0b\x26\x1c\xe9\x8b\x02\xc0" + "\xb2\x6f\x37\xa7\x5b\xa8\xa9\x42", + .len = 64, + }, { + .key = "\xC9\x83\xA6\xC9\xEC\x0F\x32\x55" + "\x0F\x32\x55\x78\x9B\xBE\x78\x9B" + "\xBE\xE1\x04\x27\xE1\x04\x27\x4A" + "\x6D\x90\x4A\x6D\x90\xB3\xD6\xF9", + .klen = 32, + .iv = "\xE7\x82\x1D\xB8\x53\x11\xAC\x47" + "\x00\x00\x00\x00\x00\x00\x00\x00", + .ptext = "\x50\xB9\x22\xAE\x17\x80\x0C\x75" + "\xDE\x47\xD3\x3C\xA5\x0E\x9A\x03" + "\x6C\xF8\x61\xCA\x33\xBF\x28\x91" + "\x1D\x86\xEF\x58\xE4\x4D\xB6\x1F" + "\xAB\x14\x7D\x09\x72\xDB\x44\xD0" + "\x39\xA2\x0B\x97\x00\x69\xF5\x5E" + "\xC7\x30\xBC\x25\x8E\x1A\x83\xEC" + "\x55\xE1\x4A\xB3\x1C\xA8\x11\x7A" + "\x06\x6F\xD8\x41\xCD\x36\x9F\x08" + "\x94\xFD\x66\xF2\x5B\xC4\x2D\xB9" + "\x22\x8B\x17\x80\xE9\x52\xDE\x47" + "\xB0\x19\xA5\x0E\x77\x03\x6C\xD5" + "\x3E\xCA\x33\x9C\x05\x91\xFA\x63" + "\xEF\x58\xC1\x2A\xB6\x1F\x88\x14" + "\x7D\xE6\x4F\xDB\x44\xAD\x16\xA2" + "\x0B\x74\x00\x69\xD2\x3B\xC7\x30" + "\x99\x02\x8E\xF7\x60\xEC\x55\xBE" + "\x27\xB3\x1C\x85\x11\x7A\xE3\x4C" + "\xD8\x41\xAA\x13\x9F\x08\x71\xFD" + "\x66\xCF\x38\xC4\x2D\x96\x22\x8B" + "\xF4\x5D\xE9\x52\xBB\x24\xB0\x19" + "\x82\x0E\x77\xE0\x49\xD5\x3E\xA7" + "\x10\x9C\x05\x6E\xFA\x63\xCC\x35" + "\xC1\x2A\x93\x1F\x88\xF1\x5A\xE6" + "\x4F\xB8\x21\xAD\x16\x7F\x0B\x74" + "\xDD\x46\xD2\x3B\xA4\x0D\x99\x02" + "\x6B\xF7\x60\xC9\x32\xBE\x27\x90" + "\x1C\x85\xEE\x57\xE3\x4C\xB5\x1E" + "\xAA\x13\x7C\x08\x71\xDA\x43\xCF" + "\x38\xA1\x0A\x96\xFF\x68\xF4\x5D" + "\xC6\x2F\xBB\x24\x8D\x19\x82\xEB" + "\x54\xE0\x49\xB2\x1B\xA7\x10\x79" + "\x05\x6E\xD7\x40\xCC\x35\x9E\x07" + "\x93\xFC\x65\xF1\x5A\xC3\x2C\xB8" + "\x21\x8A\x16\x7F\xE8\x51\xDD\x46" + "\xAF\x18\xA4\x0D\x76\x02\x6B\xD4" + "\x3D\xC9\x32\x9B\x04\x90\xF9\x62" + "\xEE\x57\xC0\x29\xB5\x1E\x87\x13" + "\x7C\xE5\x4E\xDA\x43\xAC\x15\xA1" + "\x0A\x73\xFF\x68\xD1\x3A\xC6\x2F" + "\x98\x01\x8D\xF6\x5F\xEB\x54\xBD" + "\x26\xB2\x1B\x84\x10\x79\xE2\x4B" + "\xD7\x40\xA9\x12\x9E\x07\x70\xFC" + "\x65\xCE\x37\xC3\x2C\x95\x21\x8A" + "\xF3\x5C\xE8\x51\xBA\x23\xAF\x18" + "\x81\x0D\x76\xDF\x48\xD4\x3D\xA6" + "\x0F\x9B\x04\x6D\xF9\x62\xCB\x34" + "\xC0\x29\x92\x1E\x87\xF0\x59\xE5" + "\x4E\xB7\x20\xAC\x15\x7E\x0A\x73" + "\xDC\x45\xD1\x3A\xA3\x0C\x98\x01" + "\x6A\xF6\x5F\xC8\x31\xBD\x26\x8F" + "\x1B\x84\xED\x56\xE2\x4B\xB4\x1D" + "\xA9\x12\x7B\x07\x70\xD9\x42\xCE" + "\x37\xA0\x09\x95\xFE\x67\xF3\x5C" + "\xC5\x2E\xBA\x23\x8C\x18\x81\xEA" + "\x53\xDF\x48\xB1\x1A\xA6\x0F\x78" + "\x04\x6D\xD6\x3F\xCB\x34\x9D\x06" + "\x92\xFB\x64\xF0\x59\xC2\x2B\xB7" + "\x20\x89\x15\x7E\xE7\x50\xDC\x45" + "\xAE\x17\xA3\x0C\x75\x01\x6A\xD3" + "\x3C\xC8\x31\x9A\x03\x8F\xF8\x61" + "\xED\x56\xBF\x28\xB4\x1D\x86\x12", + .ctext = "\x97\x7f\x69\x0f\x0f\x34\xa6\x33" + "\x66\x49\x7e\xd0\x4d\x1b\xc9\x64" + "\xf9\x61\x95\x98\x11\x00\x88\xf8" + "\x2e\x88\x01\x0f\x2b\xe1\xae\x3e" + 
"\xfe\xd6\x47\x30\x11\x68\x7d\x99" + "\xad\x69\x6a\xe8\x41\x5f\x1e\x16" + "\x00\x3a\x47\xdf\x8e\x7d\x23\x1c" + "\x19\x5b\x32\x76\x60\x03\x05\xc1" + "\xa0\xff\xcf\xcc\x74\x39\x46\x63" + "\xfe\x5f\xa6\x35\xa7\xb4\xc1\xf9" + "\x4b\x5e\x38\xcc\x8c\xc1\xa2\xcf" + "\x9a\xc3\xae\x55\x42\x46\x93\xd9" + "\xbd\x22\xd3\x8a\x19\x96\xc3\xb3" + "\x7d\x03\x18\xf9\x45\x09\x9c\xc8" + "\x90\xf3\x22\xb3\x25\x83\x9a\x75" + "\xbb\x04\x48\x97\x3a\x63\x08\x04" + "\xa0\x69\xf6\x52\xd4\x89\x93\x69" + "\xb4\x33\xa2\x16\x58\xec\x4b\x26" + "\x76\x54\x10\x0b\x6e\x53\x1e\xbc" + "\x16\x18\x42\xb1\xb1\xd3\x4b\xda" + "\x06\x9f\x8b\x77\xf7\xab\xd6\xed" + "\xa3\x1d\x90\xda\x49\x38\x20\xb8" + "\x6c\xee\xae\x3e\xae\x6c\x03\xb8" + "\x0b\xed\xc8\xaa\x0e\xc5\x1f\x90" + "\x60\xe2\xec\x1b\x76\xd0\xcf\xda" + "\x29\x1b\xb8\x5a\xbc\xf4\xba\x13" + "\x91\xa6\xcb\x83\x3f\xeb\xe9\x7b" + "\x03\xba\x40\x9e\xe6\x7a\xb2\x4a" + "\x73\x49\xfc\xed\xfb\x55\xa4\x24" + "\xc7\xa4\xd7\x4b\xf5\xf7\x16\x62" + "\x80\xd3\x19\x31\x52\x25\xa8\x69" + "\xda\x9a\x87\xf5\xf2\xee\x5d\x61" + "\xc1\x12\x72\x3e\x52\x26\x45\x3a" + "\xd8\x9d\x57\xfa\x14\xe2\x9b\x2f" + "\xd4\xaa\x5e\x31\xf4\x84\x89\xa4" + "\xe3\x0e\xb0\x58\x41\x75\x6a\xcb" + "\x30\x01\x98\x90\x15\x80\xf5\x27" + "\x92\x13\x81\xf0\x1c\x1e\xfc\xb1" + "\x33\xf7\x63\xb0\x67\xec\x2e\x5c" + "\x85\xe3\x5b\xd0\x43\x8a\xb8\x5f" + "\x44\x9f\xec\x19\xc9\x8f\xde\xdf" + "\x79\xef\xf8\xee\x14\x87\xb3\x34" + "\x76\x00\x3a\x9b\xc7\xed\xb1\x3d" + "\xef\x07\xb0\xe4\xfd\x68\x9e\xeb" + "\xc2\xb4\x1a\x85\x9a\x7d\x11\x88" + "\xf8\xab\x43\x55\x2b\x8a\x4f\x60" + "\x85\x9a\xf4\xba\xae\x48\x81\xeb" + "\x93\x07\x97\x9e\xde\x2a\xfc\x4e" + "\x31\xde\xaa\x44\xf7\x2a\xc3\xee" + "\x60\xa2\x98\x2c\x0a\x88\x50\xc5" + "\x6d\x89\xd3\xe4\xb6\xa7\xf4\xb0" + "\xcf\x0e\x89\xe3\x5e\x8f\x82\xf4" + "\x9d\xd1\xa9\x51\x50\x8a\xd2\x18" + "\x07\xb2\xaa\x3b\x7f\x58\x9b\xf4" + "\xb7\x24\x39\xd3\x66\x2f\x1e\xc0" + "\x11\xa3\x56\x56\x2a\x10\x73\xbc" + "\xe1\x23\xbf\xa9\x37\x07\x9c\xc3" + "\xb2\xc9\xa8\x1c\x5b\x5c\x58\xa4" + "\x77\x02\x26\xad\xc3\x40\x11\x53" + "\x93\x68\x72\xde\x05\x8b\x10\xbc" + "\xa6\xd4\x1b\xd9\x27\xd8\x16\x12" + "\x61\x2b\x31\x2a\x44\x87\x96\x58", + .len = 496, + }, +}; + +/* based on hmac_sha256_aes_cbc_tv_temp */ +static const struct aead_testvec essiv_hmac_sha256_aes_cbc_tv_temp[] = { + { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x10" /* enc key length */ + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x06\xa9\x21\x40\x36\xb8\xa1\x5b" + "\x51\x2e\x03\xd5\x34\x12\x00\x06", + .klen = 8 + 32 + 16, + .iv = "\xb3\x0c\x5a\x11\x41\xad\xc1\x04" + "\xbc\x1e\x7e\x35\xb0\x5d\x78\x29", + .assoc = "\x3d\xaf\xba\x42\x9d\x9e\xb4\x30" + "\xb4\x22\xda\x80\x2c\x9f\xac\x41", + .alen = 16, + .ptext = "Single block msg", + .plen = 16, + .ctext = "\xe3\x53\x77\x9c\x10\x79\xae\xb8" + "\x27\x08\x94\x2d\xbe\x77\x18\x1a" + "\xcc\xde\x2d\x6a\xae\xf1\x0b\xcc" + "\x38\x06\x38\x51\xb4\xb8\xf3\x5b" + "\x5c\x34\xa6\xa3\x6e\x0b\x05\xe5" + "\x6a\x6d\x44\xaa\x26\xa8\x44\xa5", + .clen = 16 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x10" /* enc key length */ + "\x20\x21\x22\x23\x24\x25\x26\x27" + "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f" + "\x30\x31\x32\x33\x34\x35\x36\x37" + 
"\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f" + "\xc2\x86\x69\x6d\x88\x7c\x9a\xa0" + "\x61\x1b\xbb\x3e\x20\x25\xa4\x5a", + .klen = 8 + 32 + 16, + .iv = "\x56\xe8\x14\xa5\x74\x18\x75\x13" + "\x2f\x79\xe7\xc8\x65\xe3\x48\x45", + .assoc = "\x56\x2e\x17\x99\x6d\x09\x3d\x28" + "\xdd\xb3\xba\x69\x5a\x2e\x6f\x58", + .alen = 16, + .ptext = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" + "\x10\x11\x12\x13\x14\x15\x16\x17" + "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f", + .plen = 32, + .ctext = "\xd2\x96\xcd\x94\xc2\xcc\xcf\x8a" + "\x3a\x86\x30\x28\xb5\xe1\xdc\x0a" + "\x75\x86\x60\x2d\x25\x3c\xff\xf9" + "\x1b\x82\x66\xbe\xa6\xd6\x1a\xb1" + "\xf5\x33\x53\xf3\x68\x85\x2a\x99" + "\x0e\x06\x58\x8f\xba\xf6\x06\xda" + "\x49\x69\x0d\x5b\xd4\x36\x06\x62" + "\x35\x5e\x54\x58\x53\x4d\xdf\xbf", + .clen = 32 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x10" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x6c\x3e\xa0\x47\x76\x30\xce\x21" + "\xa2\xce\x33\x4a\xa7\x46\xc2\xcd", + .klen = 8 + 32 + 16, + .iv = "\x1f\x6b\xfb\xd6\x6b\x72\x2f\xc9" + "\xb6\x9f\x8c\x10\xa8\x96\x15\x64", + .assoc = "\xc7\x82\xdc\x4c\x09\x8c\x66\xcb" + "\xd9\xcd\x27\xd8\x25\x68\x2c\x81", + .alen = 16, + .ptext = "This is a 48-byte message (exactly 3 AES blocks)", + .plen = 48, + .ctext = "\xd0\xa0\x2b\x38\x36\x45\x17\x53" + "\xd4\x93\x66\x5d\x33\xf0\xe8\x86" + "\x2d\xea\x54\xcd\xb2\x93\xab\xc7" + "\x50\x69\x39\x27\x67\x72\xf8\xd5" + "\x02\x1c\x19\x21\x6b\xad\x52\x5c" + "\x85\x79\x69\x5d\x83\xba\x26\x84" + "\x68\xb9\x3e\x90\x38\xa0\x88\x01" + "\xe7\xc6\xce\x10\x31\x2f\x9b\x1d" + "\x24\x78\xfb\xbe\x02\xe0\x4f\x40" + "\x10\xbd\xaa\xc6\xa7\x79\xe0\x1a", + .clen = 48 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x10" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x56\xe4\x7a\x38\xc5\x59\x89\x74" + "\xbc\x46\x90\x3d\xba\x29\x03\x49", + .klen = 8 + 32 + 16, + .iv = "\x13\xe5\xf2\xef\x61\x97\x59\x35" + "\x9b\x36\x84\x46\x4e\x63\xd1\x41", + .assoc = "\x8c\xe8\x2e\xef\xbe\xa0\xda\x3c" + "\x44\x69\x9e\xd7\xdb\x51\xb7\xd9", + .alen = 16, + .ptext = "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7" + "\xa8\xa9\xaa\xab\xac\xad\xae\xaf" + "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7" + "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf" + "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7" + "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf" + "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7" + "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf", + .plen = 64, + .ctext = "\xc3\x0e\x32\xff\xed\xc0\x77\x4e" + "\x6a\xff\x6a\xf0\x86\x9f\x71\xaa" + "\x0f\x3a\xf0\x7a\x9a\x31\xa9\xc6" + "\x84\xdb\x20\x7e\xb0\xef\x8e\x4e" + "\x35\x90\x7a\xa6\x32\xc3\xff\xdf" + "\x86\x8b\xb7\xb2\x9d\x3d\x46\xad" + "\x83\xce\x9f\x9a\x10\x2e\xe9\x9d" + "\x49\xa5\x3e\x87\xf4\xc3\xda\x55" + "\x7a\x1b\xd4\x3c\xdb\x17\x95\xe2" + "\xe0\x93\xec\xc9\x9f\xf7\xce\xd8" + "\x3f\x54\xe2\x49\x39\xe3\x71\x25" + "\x2b\x6c\xe9\x5d\xec\xec\x2b\x64", + .clen = 64 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + 
"\x00\x00\x00\x10" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x90\xd3\x82\xb4\x10\xee\xba\x7a" + "\xd9\x38\xc4\x6c\xec\x1a\x82\xbf", + .klen = 8 + 32 + 16, + .iv = "\xe4\x13\xa1\x15\xe9\x6b\xb8\x23" + "\x81\x7a\x94\x29\xab\xfd\xd2\x2c", + .assoc = "\x00\x00\x43\x21\x00\x00\x00\x01" + "\xe9\x6e\x8c\x08\xab\x46\x57\x63" + "\xfd\x09\x8d\x45\xdd\x3f\xf8\x93", + .alen = 24, + .ptext = "\x08\x00\x0e\xbd\xa7\x0a\x00\x00" + "\x8e\x9c\x08\x3d\xb9\x5b\x07\x00" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" + "\x10\x11\x12\x13\x14\x15\x16\x17" + "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f" + "\x20\x21\x22\x23\x24\x25\x26\x27" + "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f" + "\x30\x31\x32\x33\x34\x35\x36\x37" + "\x01\x02\x03\x04\x05\x06\x07\x08" + "\x09\x0a\x0b\x0c\x0d\x0e\x0e\x01", + .plen = 80, + .ctext = "\xf6\x63\xc2\x5d\x32\x5c\x18\xc6" + "\xa9\x45\x3e\x19\x4e\x12\x08\x49" + "\xa4\x87\x0b\x66\xcc\x6b\x99\x65" + "\x33\x00\x13\xb4\x89\x8d\xc8\x56" + "\xa4\x69\x9e\x52\x3a\x55\xdb\x08" + "\x0b\x59\xec\x3a\x8e\x4b\x7e\x52" + "\x77\x5b\x07\xd1\xdb\x34\xed\x9c" + "\x53\x8a\xb5\x0c\x55\x1b\x87\x4a" + "\xa2\x69\xad\xd0\x47\xad\x2d\x59" + "\x13\xac\x19\xb7\xcf\xba\xd4\xa6" + "\xbb\xd4\x0f\xbe\xa3\x3b\x4c\xb8" + "\x3a\xd2\xe1\x03\x86\xa5\x59\xb7" + "\x73\xc3\x46\x20\x2c\xb1\xef\x68" + "\xbb\x8a\x32\x7e\x12\x8c\x69\xcf", + .clen = 80 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x18" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x8e\x73\xb0\xf7\xda\x0e\x64\x52" + "\xc8\x10\xf3\x2b\x80\x90\x79\xe5" + "\x62\xf8\xea\xd2\x52\x2c\x6b\x7b", + .klen = 8 + 32 + 24, + .iv = "\x49\xca\x41\xc9\x6b\xbf\x6c\x98" + "\x38\x2f\xa7\x3d\x4d\x80\x49\xb0", + .assoc = "\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f", + .alen = 16, + .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" + "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" + "\xae\x2d\x8a\x57\x1e\x03\xac\x9c" + "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51" + "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11" + "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef" + "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17" + "\xad\x2b\x41\x7b\xe6\x6c\x37\x10", + .plen = 64, + .ctext = "\x4f\x02\x1d\xb2\x43\xbc\x63\x3d" + "\x71\x78\x18\x3a\x9f\xa0\x71\xe8" + "\xb4\xd9\xad\xa9\xad\x7d\xed\xf4" + "\xe5\xe7\x38\x76\x3f\x69\x14\x5a" + "\x57\x1b\x24\x20\x12\xfb\x7a\xe0" + "\x7f\xa9\xba\xac\x3d\xf1\x02\xe0" + "\x08\xb0\xe2\x79\x88\x59\x88\x81" + "\xd9\x20\xa9\xe6\x4f\x56\x15\xcd" + "\x2f\xee\x5f\xdb\x66\xfe\x79\x09" + "\x61\x81\x31\xea\x5b\x3d\x8e\xfb" + "\xca\x71\x85\x93\xf7\x85\x55\x8b" + "\x7a\xe4\x94\xca\x8b\xba\x19\x33", + .clen = 64 + 32, + }, { +#ifdef __LITTLE_ENDIAN + .key = "\x08\x00" /* rta length */ + "\x01\x00" /* rta type */ +#else + .key = "\x00\x08" /* rta length */ + "\x00\x01" /* rta type */ +#endif + "\x00\x00\x00\x20" /* enc key length */ + "\x11\x22\x33\x44\x55\x66\x77\x88" + "\x99\xaa\xbb\xcc\xdd\xee\xff\x11" + "\x22\x33\x44\x55\x66\x77\x88\x99" + "\xaa\xbb\xcc\xdd\xee\xff\x11\x22" + "\x60\x3d\xeb\x10\x15\xca\x71\xbe" + "\x2b\x73\xae\xf0\x85\x7d\x77\x81" + "\x1f\x35\x2c\x07\x3b\x61\x08\xd7" + "\x2d\x98\x10\xa3\x09\x14\xdf\xf4", + .klen = 8 + 32 + 32, + .iv = "\xdf\xab\xf2\x7c\xdc\xe0\x33\x4c" + "\xf9\x75\xaf\xf9\x2f\x60\x3a\x9b", + .assoc = 
"\x00\x01\x02\x03\x04\x05\x06\x07" + "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f", + .alen = 16, + .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" + "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" + "\xae\x2d\x8a\x57\x1e\x03\xac\x9c" + "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51" + "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11" + "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef" + "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17" + "\xad\x2b\x41\x7b\xe6\x6c\x37\x10", + .plen = 64, + .ctext = "\xf5\x8c\x4c\x04\xd6\xe5\xf1\xba" + "\x77\x9e\xab\xfb\x5f\x7b\xfb\xd6" + "\x9c\xfc\x4e\x96\x7e\xdb\x80\x8d" + "\x67\x9f\x77\x7b\xc6\x70\x2c\x7d" + "\x39\xf2\x33\x69\xa9\xd9\xba\xcf" + "\xa5\x30\xe2\x63\x04\x23\x14\x61" + "\xb2\xeb\x05\xe2\xc3\x9b\xe9\xfc" + "\xda\x6c\x19\x07\x8c\x6a\x9d\x1b" + "\x24\x29\xed\xc2\x31\x49\xdb\xb1" + "\x8f\x74\xbd\x17\x92\x03\xbe\x8f" + "\xf3\x61\xde\x1c\xe9\xdb\xcd\xd0" + "\xcc\xce\xe9\x85\x57\xcf\x6f\x5f", + .clen = 64 + 32, + }, +}; + #endif /* _CRYPTO_TESTMGR_H */ From patchwork Thu Aug 15 19:28:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 171444 Delivered-To: patch@linaro.org Received: by 2002:a92:d204:0:0:0:0:0 with SMTP id y4csp2592405ily; Thu, 15 Aug 2019 12:29:23 -0700 (PDT) X-Google-Smtp-Source: APXvYqyZ/8EwF3zDclMqMzTKBnDG8LIBfyuDHIwdB5MPPUTKd0Gckz4jHRZ4P6+nHLn9F6l44FvI X-Received: by 2002:a63:3387:: with SMTP id z129mr1760293pgz.177.1565897363281; Thu, 15 Aug 2019 12:29:23 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1565897363; cv=none; d=google.com; s=arc-20160816; b=OP2q+e6Ah9JDJLCHtsct/i5Nj5en3DyHE+V56DFpLWzXVL1Ogajg2U33M/qp2JI48k ZV9RpRsFli9xiFVowpEDY+PgljM1szhJvZ6iUcjlMaWnvAVMU0+y0n9ojzszSDV9q3rs H3zyFrXCeh26crF4OQT/qXmJnHXJoqMoPZosVj96KM+jaIbDxAej4Us/3hEW3SBicOkw h0gUhCTbXtfNJBwPUG5pY916uwvsQ0ayFk7jkCd4XeieYEZzsSPp6DOQH0hIaPhD+IPy 5DHAhwbU7HlDdwTIWUCruxrYr3wOEgd+Vu95zO4zbDx8mYJmj4R7gourFoVPbQJvGeGC 8M6Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature; bh=huVoFUpMc5dzWwrmJm9Qwvn6MCUViGlDp2QJ3557fZk=; b=mgsAVApnAtwdUSYqh8bEGdyqNueG5y5viYAMCsdYnIWQHnL+ftI0jYevjttNJi4zNF Tce66945toz17KxhrLXcJ1DrjU3+GrMqczxpL34OBBG+WoEUU/86OzUHpnkUnXO9MZVA I22qgqk2LAMcezakKZ63vTrvPpAZCCdbBBfnCW0L74zWf7wUv3haZt7QCdvIHkc+jJf+ Gz0N77H/FXSgCOktR4UDXcFe/fblm0Jrt4MD75etA5n63g3c5LRy4RkiSQrJBWbGmmWI 8U/3r12NgGsULEsMHAkzwLpKneAUh79eoWOKKKr5sk0IEsa04cn/xePb8dByfxXmGpke Iq0g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=noy61VfH; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From patchwork Thu Aug 15 19:28:57 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 171444
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: Herbert Xu, Eric Biggers, dm-devel@redhat.com,
    linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef, Milan Broz
Subject: [PATCH v12 3/4] crypto: arm64/aes-cts-cbc - factor out CBC en/decryption of a walk
Date: Thu, 15 Aug 2019 22:28:57 +0300
Message-Id: <20190815192858.28125-4-ard.biesheuvel@linaro.org>
In-Reply-To: <20190815192858.28125-1-ard.biesheuvel@linaro.org>
References: <20190815192858.28125-1-ard.biesheuvel@linaro.org>

The plain CBC driver and the CTS one share some code that iterates over
a scatterwalk and invokes the CBC asm code to do the processing. The
upcoming ESSIV/CBC mode will clone that pattern for the third time, so
let's factor it out first.
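(Editor's note: the refactored call sites below chain the walk setup and
the walk loop with GCC's two-operand conditional operator, so the second
call only runs when the first returns 0:

	err = skcipher_walk_virt(&walk, &rctx->subreq, false) ?:
	      cbc_encrypt_walk(&rctx->subreq, &walk);

	/* equivalent, spelled out: */
	err = skcipher_walk_virt(&walk, &rctx->subreq, false);
	if (!err)
		err = cbc_encrypt_walk(&rctx->subreq, &walk);
)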
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/aes-glue.c | 82 ++++++++++----------
 1 file changed, 40 insertions(+), 42 deletions(-)

-- 
2.17.1

diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index 55d6d4838708..23abf335f1ee 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -186,46 +186,64 @@ static int ecb_decrypt(struct skcipher_request *req)
 	return err;
 }

-static int cbc_encrypt(struct skcipher_request *req)
+static int cbc_encrypt_walk(struct skcipher_request *req,
+			    struct skcipher_walk *walk)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
-	int err, rounds = 6 + ctx->key_length / 4;
-	struct skcipher_walk walk;
+	int err = 0, rounds = 6 + ctx->key_length / 4;
 	unsigned int blocks;

-	err = skcipher_walk_virt(&walk, req, false);
-
-	while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+	while ((blocks = (walk->nbytes / AES_BLOCK_SIZE))) {
 		kernel_neon_begin();
-		aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
-				ctx->key_enc, rounds, blocks, walk.iv);
+		aes_cbc_encrypt(walk->dst.virt.addr, walk->src.virt.addr,
+				ctx->key_enc, rounds, blocks, walk->iv);
 		kernel_neon_end();
-		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
+		err = skcipher_walk_done(walk, walk->nbytes % AES_BLOCK_SIZE);
 	}
 	return err;
 }

-static int cbc_decrypt(struct skcipher_request *req)
+static int cbc_encrypt(struct skcipher_request *req)
 {
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
-	int err, rounds = 6 + ctx->key_length / 4;
 	struct skcipher_walk walk;
-	unsigned int blocks;
+	int err;

 	err = skcipher_walk_virt(&walk, req, false);
+	if (err)
+		return err;
+	return cbc_encrypt_walk(req, &walk);
+}

-	while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+static int cbc_decrypt_walk(struct skcipher_request *req,
+			    struct skcipher_walk *walk)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+	int err = 0, rounds = 6 + ctx->key_length / 4;
+	unsigned int blocks;
+
+	while ((blocks = (walk->nbytes / AES_BLOCK_SIZE))) {
 		kernel_neon_begin();
-		aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
-				ctx->key_dec, rounds, blocks, walk.iv);
+		aes_cbc_decrypt(walk->dst.virt.addr, walk->src.virt.addr,
+				ctx->key_dec, rounds, blocks, walk->iv);
 		kernel_neon_end();
-		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
+		err = skcipher_walk_done(walk, walk->nbytes % AES_BLOCK_SIZE);
 	}
 	return err;
 }

+static int cbc_decrypt(struct skcipher_request *req)
+{
+	struct skcipher_walk walk;
+	int err;
+
+	err = skcipher_walk_virt(&walk, req, false);
+	if (err)
+		return err;
+	return cbc_decrypt_walk(req, &walk);
+}
+
 static int cts_cbc_init_tfm(struct crypto_skcipher *tfm)
 {
 	crypto_skcipher_set_reqsize(tfm, sizeof(struct cts_cbc_req_ctx));
@@ -251,22 +269,12 @@ static int cts_cbc_encrypt(struct skcipher_request *req)
 	}

 	if (cbc_blocks > 0) {
-		unsigned int blocks;
-
 		skcipher_request_set_crypt(&rctx->subreq, req->src, req->dst,
 					   cbc_blocks * AES_BLOCK_SIZE,
 					   req->iv);

-		err = skcipher_walk_virt(&walk, &rctx->subreq, false);
-
-		while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
-			kernel_neon_begin();
-			aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
-					ctx->key_enc, rounds, blocks, walk.iv);
-			kernel_neon_end();
-			err = skcipher_walk_done(&walk,
-						 walk.nbytes % AES_BLOCK_SIZE);
-		}
+		err = skcipher_walk_virt(&walk, &rctx->subreq, false) ?:
+		      cbc_encrypt_walk(&rctx->subreq, &walk);
 		if (err)
 			return err;
skcipher_walk_done(&walk, - walk.nbytes % AES_BLOCK_SIZE); - } + err = skcipher_walk_virt(&walk, &rctx->subreq, false) ?: + cbc_encrypt_walk(&rctx->subreq, &walk); if (err) return err; @@ -316,22 +324,12 @@ static int cts_cbc_decrypt(struct skcipher_request *req) } if (cbc_blocks > 0) { - unsigned int blocks; - skcipher_request_set_crypt(&rctx->subreq, req->src, req->dst, cbc_blocks * AES_BLOCK_SIZE, req->iv); - err = skcipher_walk_virt(&walk, &rctx->subreq, false); - - while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) { - kernel_neon_begin(); - aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr, - ctx->key_dec, rounds, blocks, walk.iv); - kernel_neon_end(); - err = skcipher_walk_done(&walk, - walk.nbytes % AES_BLOCK_SIZE); - } + err = skcipher_walk_virt(&walk, &rctx->subreq, false) ?: + cbc_decrypt_walk(&rctx->subreq, &walk); if (err) return err; From patchwork Thu Aug 15 19:28:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 171446 Delivered-To: patch@linaro.org Received: by 2002:a92:d204:0:0:0:0:0 with SMTP id y4csp2592458ily; Thu, 15 Aug 2019 12:29:26 -0700 (PDT) X-Google-Smtp-Source: APXvYqxlf5sZVyGM3lXBB0hcu9oeTMZfKn5o76yS4W+0oxXXVJK61RDf+iJG2n0QzP9lXF88QiJc X-Received: by 2002:a17:902:b710:: with SMTP id d16mr1584782pls.165.1565897366872; Thu, 15 Aug 2019 12:29:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1565897366; cv=none; d=google.com; s=arc-20160816; b=eG5rwZTwy3KwYJNHjXHP1jXJed+B1/DWnNSEGnLUW48dP+uxxQkCu+xzEjfy5UQSFZ s2Ly9WXlPzbCm9f1+4FFQeGCdyVH+Mou6LWtQttJOXWZt1Qz6gvKsVuCv+lQBUPhFIDM zOUYzb3VRsH/Adq7dqN2swhfn1+hqmlZIyWg1Vb2UDFq8Vyr5W6/tYOTZtDlFRmM4ItP lq37SDVUp/Dwuj8Z8khDT38QpxGCDJd/Fw1iZFvrLSRI16tM0zzPiQ32ThzYvRdLVpdt dGjQrpI+RnoVIQ2y10ypD4mBbEO5F2zkx7Te7CwZK0/tnYYlh4mjf/o76Q7H1zklWV7S NPdw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature; bh=hXvMZ2KwYb5qrlz7PRq35hMlmHi2bXsSYLcsF+NixYQ=; b=dAE2LB5VMUDgcll4Jbo9MB2hdXW9nvEmHjCz4lHBqu4aYszNuTrQQKlYzfQCM+/9vA 5Z/ZoFhhT4y2c3dH1eI2TmkBSE/LU6KLIWHs2v3pDg1oPm5ZJsOaqV4Ze/MSj3/pqvby Q1ZWomAl0aD+1UX1lSHdzt1/5e4Ls3JDHw03MpnHsCKNPiOrNXFauneJ+jHW3iKZB1vU 8s7AIwGDFQTHJAz+JJQBfZ0Mirg/TIW8mQm0zOd3MI7zUIe0df58pbKfmy3bJ6xQoTW5 5IjXr4AkdFlPyGQo6pbDGbOEt62SW4ziJy9FSv7kJOEk0xiCryc+PUsAU0gFB4EPO/R/ lw3Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=K3ZI1EgE; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
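The refactored CTS paths above rely on the GNU C conditional operator with
an omitted middle operand: "a ?: b" evaluates to a when a is nonzero and to
b otherwise, so a nonzero error from the walk setup short-circuits the walk
processing. A minimal sketch of the idiom, with hypothetical setup() and
process() helpers standing in for skcipher_walk_virt() and
cbc_encrypt_walk():

	/* GNU C extension: x ?: y means x ? x : y, with x evaluated once. */
	static int setup(void)   { return 0; }	/* hypothetical: 0 or -errno */
	static int process(void) { return 0; }	/* hypothetical: 0 or -errno */

	static int do_op(void)
	{
		/* return setup()'s error if it fails, else process()'s result */
		return setup() ?: process();
	}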
From patchwork Thu Aug 15 19:28:58 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 171446
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel, Herbert Xu, Eric Biggers, dm-devel@redhat.com, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef, Milan Broz
Subject: [PATCH v12 4/4] crypto: arm64/aes - implement accelerated ESSIV/CBC mode
Date: Thu, 15 Aug 2019 22:28:58 +0300
Message-Id: <20190815192858.28125-5-ard.biesheuvel@linaro.org>
In-Reply-To: <20190815192858.28125-1-ard.biesheuvel@linaro.org>
References: <20190815192858.28125-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Add an accelerated version of the 'essiv(cbc(aes),sha256)' skcipher, which
is used by fscrypt or dm-crypt on systems where CBC mode is significantly
more performant than XTS mode (e.g., when using a
h/w accelerator which supports the former but not the latter). This avoids
a separate call into the AES cipher for every invocation.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/aes-glue.c  | 124 ++++++++++++++
 arch/arm64/crypto/aes-modes.S |  28 +++++
 2 files changed, 152 insertions(+)

--
2.17.1

diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index 23abf335f1ee..ca0c84d56cba 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -30,6 +31,8 @@
 #define aes_cbc_decrypt		ce_aes_cbc_decrypt
 #define aes_cbc_cts_encrypt	ce_aes_cbc_cts_encrypt
 #define aes_cbc_cts_decrypt	ce_aes_cbc_cts_decrypt
+#define aes_essiv_cbc_encrypt	ce_aes_essiv_cbc_encrypt
+#define aes_essiv_cbc_decrypt	ce_aes_essiv_cbc_decrypt
 #define aes_ctr_encrypt		ce_aes_ctr_encrypt
 #define aes_xts_encrypt		ce_aes_xts_encrypt
 #define aes_xts_decrypt		ce_aes_xts_decrypt
@@ -44,6 +47,8 @@ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions");
 #define aes_cbc_decrypt		neon_aes_cbc_decrypt
 #define aes_cbc_cts_encrypt	neon_aes_cbc_cts_encrypt
 #define aes_cbc_cts_decrypt	neon_aes_cbc_cts_decrypt
+#define aes_essiv_cbc_encrypt	neon_aes_essiv_cbc_encrypt
+#define aes_essiv_cbc_decrypt	neon_aes_essiv_cbc_decrypt
 #define aes_ctr_encrypt		neon_aes_ctr_encrypt
 #define aes_xts_encrypt		neon_aes_xts_encrypt
 #define aes_xts_decrypt		neon_aes_xts_decrypt
@@ -51,6 +56,7 @@ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions");
 MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 NEON");
 MODULE_ALIAS_CRYPTO("ecb(aes)");
 MODULE_ALIAS_CRYPTO("cbc(aes)");
+MODULE_ALIAS_CRYPTO("essiv(cbc(aes),sha256)");
 MODULE_ALIAS_CRYPTO("ctr(aes)");
 MODULE_ALIAS_CRYPTO("xts(aes)");
 MODULE_ALIAS_CRYPTO("cmac(aes)");
@@ -87,6 +93,13 @@ asmlinkage void aes_xts_decrypt(u8 out[], u8 const in[], u32 const rk1[],
 				int rounds, int blocks, u32 const rk2[], u8 iv[],
 				int first);
 
+asmlinkage void aes_essiv_cbc_encrypt(u8 out[], u8 const in[], u32 const rk1[],
+				      int rounds, int blocks, u8 iv[],
+				      u32 const rk2[]);
+asmlinkage void aes_essiv_cbc_decrypt(u8 out[], u8 const in[], u32 const rk1[],
+				      int rounds, int blocks, u8 iv[],
+				      u32 const rk2[]);
+
 asmlinkage void aes_mac_update(u8 const in[], u32 const rk[], int rounds,
 			       int blocks, u8 dg[], int enc_before,
 			       int enc_after);
@@ -102,6 +115,12 @@ struct crypto_aes_xts_ctx {
 	struct crypto_aes_ctx __aligned(8) key2;
 };
 
+struct crypto_aes_essiv_cbc_ctx {
+	struct crypto_aes_ctx key1;
+	struct crypto_aes_ctx __aligned(8) key2;
+	struct crypto_shash *hash;
+};
+
 struct mac_tfm_ctx {
 	struct crypto_aes_ctx key;
 	u8 __aligned(8) consts[];
@@ -146,6 +165,31 @@ static int xts_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
 	return -EINVAL;
 }
 
+static int essiv_cbc_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
+			     unsigned int key_len)
+{
+	struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+	SHASH_DESC_ON_STACK(desc, ctx->hash);
+	u8 digest[SHA256_DIGEST_SIZE];
+	int ret;
+
+	ret = aes_expandkey(&ctx->key1, in_key, key_len);
+	if (ret)
+		goto out;
+
+	desc->tfm = ctx->hash;
+	crypto_shash_digest(desc, in_key, key_len, digest);
+
+	ret = aes_expandkey(&ctx->key2, digest, sizeof(digest));
+	if (ret)
+		goto out;
+
+	return 0;
+out:
+	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+	return -EINVAL;
+}
+
 static int ecb_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -360,6 +404,68 @@ static int cts_cbc_decrypt(struct skcipher_request *req)
 	return skcipher_walk_done(&walk, 0);
 }
 
+static int essiv_cbc_init_tfm(struct crypto_skcipher *tfm)
+{
+	struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	ctx->hash = crypto_alloc_shash("sha256", 0, 0);
+	if (IS_ERR(ctx->hash))
+		return PTR_ERR(ctx->hash);
+
+	return 0;
+}
+
+static void essiv_cbc_exit_tfm(struct crypto_skcipher *tfm)
+{
+	struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	crypto_free_shash(ctx->hash);
+}
+
+static int essiv_cbc_encrypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+	int err, rounds = 6 + ctx->key1.key_length / 4;
+	struct skcipher_walk walk;
+	unsigned int blocks;
+
+	err = skcipher_walk_virt(&walk, req, false);
+
+	blocks = walk.nbytes / AES_BLOCK_SIZE;
+	if (blocks) {
+		kernel_neon_begin();
+		aes_essiv_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
+				      ctx->key1.key_enc, rounds, blocks,
+				      req->iv, ctx->key2.key_enc);
+		kernel_neon_end();
+		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
+	}
+	return err ?: cbc_encrypt_walk(req, &walk);
+}
+
+static int essiv_cbc_decrypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_aes_essiv_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+	int err, rounds = 6 + ctx->key1.key_length / 4;
+	struct skcipher_walk walk;
+	unsigned int blocks;
+
+	err = skcipher_walk_virt(&walk, req, false);
+
+	blocks = walk.nbytes / AES_BLOCK_SIZE;
+	if (blocks) {
+		kernel_neon_begin();
+		aes_essiv_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
+				      ctx->key1.key_dec, rounds, blocks,
+				      req->iv, ctx->key2.key_enc);
+		kernel_neon_end();
+		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
+	}
+	return err ?: cbc_decrypt_walk(req, &walk);
+}
+
 static int ctr_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -515,6 +621,24 @@ static struct skcipher_alg aes_algs[] = { {
 	.encrypt	= cts_cbc_encrypt,
 	.decrypt	= cts_cbc_decrypt,
 	.init		= cts_cbc_init_tfm,
+}, {
+	.base = {
+		.cra_name		= "__essiv(cbc(aes),sha256)",
+		.cra_driver_name	= "__essiv-cbc-aes-sha256-" MODE,
+		.cra_priority		= PRIO + 1,
+		.cra_flags		= CRYPTO_ALG_INTERNAL,
+		.cra_blocksize		= AES_BLOCK_SIZE,
+		.cra_ctxsize		= sizeof(struct crypto_aes_essiv_cbc_ctx),
+		.cra_module		= THIS_MODULE,
+	},
+	.min_keysize	= AES_MIN_KEY_SIZE,
+	.max_keysize	= AES_MAX_KEY_SIZE,
+	.ivsize		= AES_BLOCK_SIZE,
+	.setkey		= essiv_cbc_set_key,
+	.encrypt	= essiv_cbc_encrypt,
+	.decrypt	= essiv_cbc_decrypt,
+	.init		= essiv_cbc_init_tfm,
+	.exit		= essiv_cbc_exit_tfm,
 }, {
 	.base = {
 		.cra_name		= "__ctr(aes)",
diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-modes.S
index 324039b72094..2879f030a749 100644
--- a/arch/arm64/crypto/aes-modes.S
+++ b/arch/arm64/crypto/aes-modes.S
@@ -118,8 +118,23 @@ AES_ENDPROC(aes_ecb_decrypt)
 *		   int blocks, u8 iv[])
 * aes_cbc_decrypt(u8 out[], u8 const in[], u8 const rk[], int rounds,
 *		   int blocks, u8 iv[])
+ * aes_essiv_cbc_encrypt(u8 out[], u8 const in[], u32 const rk1[],
+ *			 int rounds, int blocks, u8 iv[],
+ *			 u32 const rk2[]);
+ * aes_essiv_cbc_decrypt(u8 out[], u8 const in[], u32 const rk1[],
+ *			 int rounds, int blocks, u8 iv[],
+ *			 u32 const rk2[]);
 */
 
+AES_ENTRY(aes_essiv_cbc_encrypt)
+	ld1	{v4.16b}, [x5]			/* get iv */
+
+	mov	w8, #14				/* AES-256: 14 rounds */
+	enc_prepare	w8, x6, x7
+	encrypt_block	v4, w8, x6, x7, w9
+	enc_switch_key	w3, x2, x6
+	b	.Lcbcencloop4x
+
 AES_ENTRY(aes_cbc_encrypt)
 	ld1	{v4.16b}, [x5]			/* get iv */
 	enc_prepare	w3, x2, x6
@@ -153,13 +168,25 @@ AES_ENTRY(aes_cbc_encrypt)
 	st1	{v4.16b}, [x5]			/* return iv */
 	ret
 AES_ENDPROC(aes_cbc_encrypt)
+AES_ENDPROC(aes_essiv_cbc_encrypt)
+
+AES_ENTRY(aes_essiv_cbc_decrypt)
+	stp	x29, x30, [sp, #-16]!
+	mov	x29, sp
+
+	ld1	{cbciv.16b}, [x5]		/* get iv */
+	mov	w8, #14				/* AES-256: 14 rounds */
+	enc_prepare	w8, x6, x7
+	encrypt_block	cbciv, w8, x6, x7, w9
+	b	.Lessivcbcdecstart
 
 AES_ENTRY(aes_cbc_decrypt)
 	stp	x29, x30, [sp, #-16]!
 	mov	x29, sp
 
 	ld1	{cbciv.16b}, [x5]		/* get iv */
+.Lessivcbcdecstart:
 	dec_prepare	w3, x2, x6
 
 .LcbcdecloopNx:
@@ -212,6 +239,7 @@ ST5(	st1	{v4.16b}, [x0], #16	)
 	ldp	x29, x30, [sp], #16
 	ret
 AES_ENDPROC(aes_cbc_decrypt)
+AES_ENDPROC(aes_essiv_cbc_decrypt)
 
 /*
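For reference, the ESSIV construction implemented above derives a second
key by hashing the first, then encrypts the IV supplied with the request
(for dm-crypt, an encoding of the sector number) under that derived key:
K2 = SHA-256(K1) once at setkey time in essiv_cbc_set_key(), and
IV' = AES-256-Encrypt(K2, IV) per request in the asm prologue. Because a
SHA-256 digest is always 32 bytes, the K2 step is always AES-256, which is
why the aes_essiv_cbc_* entry points hard-code 14 rounds. A standalone
userspace sketch of the derivation, using OpenSSL's legacy one-shot APIs
purely for illustration (essiv_iv() is hypothetical, not part of the
patch):

	#include <stddef.h>
	#include <openssl/aes.h>
	#include <openssl/sha.h>

	/*
	 * Compute IV' = AES-256-Enc(SHA-256(k1), iv_in). The kernel driver
	 * hashes once at setkey time and encrypts once per request.
	 */
	static void essiv_iv(const unsigned char *k1, size_t k1_len,
			     const unsigned char iv_in[16],
			     unsigned char iv_out[16])
	{
		unsigned char digest[SHA256_DIGEST_LENGTH];	/* 32 bytes */
		AES_KEY k2;

		SHA256(k1, k1_len, digest);		/* K2 = SHA-256(K1) */
		AES_set_encrypt_key(digest, 256, &k2);	/* 32-byte key: AES-256 */
		AES_encrypt(iv_in, iv_out, &k2);	/* one-block encryption */
	}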