From patchwork Mon Apr 22 20:35:37 2024
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 791044
From: Eric Biggers
To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev,
 dm-devel@lists.linux.dev
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel,
 Sami Tolvanen, Bart Van Assche
Subject: [PATCH v2 1/8] crypto: shash - add support for finup2x
Date: Mon, 22 Apr 2024 13:35:37 -0700
Message-ID: <20240422203544.195390-2-ebiggers@kernel.org>
In-Reply-To: <20240422203544.195390-1-ebiggers@kernel.org>

From: Eric Biggers

Most cryptographic hash functions are serialized, in the sense that
they have an internal block size and the blocks must be processed
serially.  (BLAKE3 is a notable exception that has tree-based hashing
built-in, but all the more common choices such as the SHAs and BLAKE2
are serialized.  ParallelHash and Sakura are parallel hashes based on
SHA3, but SHA3 is much slower than SHA256 in software even with the
ARMv8 SHA3 extension.)  This limits the performance of computing a
single hash.  Yet, computing multiple hashes simultaneously does not
have this limitation.

Modern CPUs are superscalar and often can execute independent
instructions in parallel.
As a result, on many modern CPUs, it is possible to hash two
equal-length messages in about the same time as a single message, if
all the instructions are interleaved.

Meanwhile, a very common use case for hashing in the Linux kernel is
dm-verity and fs-verity.  Both use a Merkle tree that has a fixed block
size, usually 4096 bytes with an empty or 32-byte salt prepended.  The
hash algorithm is usually SHA-256.  Usually, many blocks need to be
hashed at a time.  This is an ideal scenario for multibuffer hashing.

Linux actually used to support SHA-256 multibuffer hashing on x86_64,
before it was removed by commit ab8085c130ed ("crypto: x86 - remove SHA
multibuffer routines and mcryptd").  However, it was integrated with
the crypto API in a weird way, where it behaved as an asynchronous hash
that queued up and executed all requests on a global queue.  This made
it very complex, buggy, and virtually unusable.

This patch takes a new approach of just adding an API
crypto_shash_finup2x() that synchronously computes the hash of two
equal-length messages, starting from a common state that represents the
(possibly empty) common prefix shared by the two messages.  The new API
is part of the "shash" algorithm type, as it does not make sense in
"ahash".  It does a "finup" operation rather than a "digest" operation
in order to support the salt that is used by dm-verity and fs-verity.
There is no fallback implementation that does two regular finups if the
underlying algorithm doesn't support finup2x, since users probably will
want to avoid the overhead of queueing up multiple hashes when
multibuffer hashing won't actually be used anyway.

For now the API only supports 2-way interleaving, as the usefulness and
practicality seem to drop off dramatically after 2.  The arm64 CPUs I
tested don't support more than 2 concurrent SHA-256 hashes.  On x86_64,
AMD's Zen 4 can do 4 concurrent SHA-256 hashes (at least based on a
microbenchmark of the sha256rnds2 instruction), and it's been reported
that the highest SHA-256 throughput on Intel processors comes from
using AVX512 to compute 16 hashes in parallel.  However, higher
interleaving factors would involve tradeoffs such as no longer being
able to cache the round constants in registers, further increasing the
code size (both source and binary), further increasing the amount of
state that users need to keep track of, and causing there to be more
"leftover" hashes.

Signed-off-by: Eric Biggers
---
 include/crypto/hash.h | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 0014bdd81ab7..66d93c940861 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -177,10 +177,13 @@ struct shash_desc {
  * @finup: see struct ahash_alg
  * @digest: see struct ahash_alg
  * @export: see struct ahash_alg
  * @import: see struct ahash_alg
  * @setkey: see struct ahash_alg
+ * @finup2x: **[optional]** Finish calculating the digests of two equal-length
+ *	     messages, interleaving the instructions to potentially achieve
+ *	     better performance than hashing each message individually.
  * @init_tfm: Initialize the cryptographic transformation object.
  *	      This function is called only once at the instantiation
  *	      time, right after the transformation context was
  *	      allocated.  In case the cryptographic hardware has
  *	      some special requirements which need to be handled
@@ -208,10 +211,12 @@ struct shash_alg {
 			      unsigned int len, u8 *out);
 	int (*export)(struct shash_desc *desc, void *out);
 	int (*import)(struct shash_desc *desc, const void *in);
 	int (*setkey)(struct crypto_shash *tfm, const u8 *key,
 		      unsigned int keylen);
+	int (*finup2x)(struct shash_desc *desc, const u8 *data1,
+		       const u8 *data2, unsigned int len, u8 *out1, u8 *out2);
 	int (*init_tfm)(struct crypto_shash *tfm);
 	void (*exit_tfm)(struct crypto_shash *tfm);
 	int (*clone_tfm)(struct crypto_shash *dst, struct crypto_shash *src);
 
 	unsigned int descsize;
@@ -749,10 +754,15 @@ static inline unsigned int crypto_shash_digestsize(struct crypto_shash *tfm)
 
 static inline unsigned int crypto_shash_statesize(struct crypto_shash *tfm)
 {
 	return crypto_shash_alg(tfm)->statesize;
 }
 
+static inline bool crypto_shash_supports_finup2x(struct crypto_shash *tfm)
+{
+	return crypto_shash_alg(tfm)->finup2x != NULL;
+}
+
 static inline u32 crypto_shash_get_flags(struct crypto_shash *tfm)
 {
 	return crypto_tfm_get_flags(crypto_shash_tfm(tfm));
 }
 
@@ -842,10 +852,34 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
  * Return: 0 on success; < 0 if an error occurred.
  */
 int crypto_shash_tfm_digest(struct crypto_shash *tfm, const u8 *data,
 			    unsigned int len, u8 *out);
 
+/**
+ * crypto_shash_finup2x() - finish hashing two equal-length messages
+ * @desc: the hash state that will be forked for the two messages.  This
+ *	  contains the state after hashing a (possibly-empty) common prefix of
+ *	  the two messages.
+ * @data1: the first message (not including any common prefix from @desc)
+ * @data2: the second message (not including any common prefix from @desc)
+ * @len: length of @data1 and @data2 in bytes
+ * @out1: output buffer for first message digest
+ * @out2: output buffer for second message digest
+ *
+ * Users must check crypto_shash_supports_finup2x(tfm) before calling this.
+ *
+ * Context: Any context.
+ * Return: 0 on success; a negative errno value on failure.
+ */
+static inline int crypto_shash_finup2x(struct shash_desc *desc,
+				       const u8 *data1, const u8 *data2,
+				       unsigned int len, u8 *out1, u8 *out2)
+{
+	return crypto_shash_alg(desc->tfm)->finup2x(desc, data1, data2, len,
+						    out1, out2);
+}
+
 /**
  * crypto_shash_export() - extract operational state for message digest
  * @desc: reference to the operational state handle whose state is exported
  * @out: output buffer of sufficient size that can hold the hash state
  *
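To make the calling convention concrete, here is a minimal sketch of a
caller, assuming a shash tfm that has already been allocated.  The
helper name is illustrative, and the export/import fork at the end is
just one way a caller can finish each message separately when finup2x
is unsupported, since the API deliberately provides no fallback:

static int hash_two_messages(struct crypto_shash *tfm,
			     const u8 *prefix, unsigned int prefix_len,
			     const u8 *data1, const u8 *data2,
			     unsigned int len, u8 *out1, u8 *out2)
{
	SHASH_DESC_ON_STACK(desc, tfm);
	u8 state[HASH_MAX_STATESIZE];
	int err;

	desc->tfm = tfm;

	/* Hash the (possibly empty) common prefix once. */
	err = crypto_shash_init(desc) ?:
	      crypto_shash_update(desc, prefix, prefix_len);
	if (err)
		return err;

	if (crypto_shash_supports_finup2x(tfm))
		return crypto_shash_finup2x(desc, data1, data2, len,
					    out1, out2);

	/*
	 * No fallback is provided by the API, so finish each message
	 * separately, saving and restoring the common state in between.
	 */
	return crypto_shash_export(desc, state) ?:
	       crypto_shash_finup(desc, data1, len, out1) ?:
	       crypto_shash_import(desc, state) ?:
	       crypto_shash_finup(desc, data2, len, out2);
}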
From patchwork Mon Apr 22 20:35:39 2024
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 791043
From: Eric Biggers
To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev,
 dm-devel@lists.linux.dev
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel,
 Sami Tolvanen, Bart Van Assche
Subject: [PATCH v2 3/8] crypto: testmgr - add tests for finup2x
Date: Mon, 22 Apr 2024 13:35:39 -0700
Message-ID: <20240422203544.195390-4-ebiggers@kernel.org>
In-Reply-To: <20240422203544.195390-1-ebiggers@kernel.org>

From: Eric Biggers

Update the shash self-tests to test the new finup2x method when
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y.

Signed-off-by: Eric Biggers
---
 crypto/testmgr.c | 56 +++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 46 insertions(+), 10 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 2c57ebcaf368..b49fa88c95e1 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -227,10 +227,12 @@ enum flush_type {
 
 /* finalization function for hash algorithms */
 enum finalization_type {
 	FINALIZATION_TYPE_FINAL,	/* use final() */
 	FINALIZATION_TYPE_FINUP,	/* use finup() */
+	FINALIZATION_TYPE_FINUP2X_BUF1,	/* use 1st buffer of finup2x() */
+	FINALIZATION_TYPE_FINUP2X_BUF2,	/* use 2nd buffer of finup2x() */
 	FINALIZATION_TYPE_DIGEST,	/* use digest() */
 };
 
 /*
  * Whether the crypto operation will occur in-place, and if so whether the
@@ -1109,19 +1111,28 @@ static void generate_random_testvec_config(struct rnd_state *rng,
 	if (prandom_bool(rng)) {
 		cfg->req_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
 		p += scnprintf(p, end - p, " may_sleep");
 	}
 
-	switch (prandom_u32_below(rng, 4)) {
+	switch (prandom_u32_below(rng, 8)) {
 	case 0:
+	case 1:
 		cfg->finalization_type = FINALIZATION_TYPE_FINAL;
 		p += scnprintf(p, end - p, " use_final");
 		break;
-	case 1:
+	case 2:
 		cfg->finalization_type = FINALIZATION_TYPE_FINUP;
 		p += scnprintf(p, end - p, " use_finup");
 		break;
+	case 3:
+		cfg->finalization_type = FINALIZATION_TYPE_FINUP2X_BUF1;
+		p += scnprintf(p, end - p, " use_finup2x_buf1");
+		break;
+	case 4:
+		cfg->finalization_type = FINALIZATION_TYPE_FINUP2X_BUF2;
+		p += scnprintf(p, end - p, " use_finup2x_buf2");
+		break;
 	default:
 		cfg->finalization_type = FINALIZATION_TYPE_DIGEST;
 		p += scnprintf(p, end - p, " use_digest");
 		break;
 	}
@@ -1346,11 +1357,14 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec,
 			return -EINVAL;
 		}
 		goto result_ready;
 	}
 
-	/* Using init(), zero or more update(), then final() or finup() */
+	/*
+	 * Using init(), zero or more update(), then either final(), finup(), or
+	 * finup2x().
+	 */
 	if (cfg->nosimd)
 		crypto_disable_simd_for_test();
 	err = crypto_shash_init(desc);
 	if (cfg->nosimd)
@@ -1358,28 +1372,50 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec,
 	err = check_shash_op("init", err, driver, vec_name, cfg);
 	if (err)
 		return err;
 
 	for (i = 0; i < tsgl->nents; i++) {
+		const u8 *data = sg_virt(&tsgl->sgl[i]);
+		unsigned int len = tsgl->sgl[i].length;
+
 		if (i + 1 == tsgl->nents &&
-		    cfg->finalization_type == FINALIZATION_TYPE_FINUP) {
+		    (cfg->finalization_type == FINALIZATION_TYPE_FINUP ||
+		     cfg->finalization_type == FINALIZATION_TYPE_FINUP2X_BUF1 ||
+		     cfg->finalization_type == FINALIZATION_TYPE_FINUP2X_BUF2)) {
+			const u8 *unused_data = tsgl->bufs[XBUFSIZE - 1];
+			u8 unused_result[HASH_MAX_DIGESTSIZE];
+			const char *op;
+
 			if (divs[i]->nosimd)
 				crypto_disable_simd_for_test();
-			err = crypto_shash_finup(desc, sg_virt(&tsgl->sgl[i]),
-						 tsgl->sgl[i].length, result);
+			if (cfg->finalization_type == FINALIZATION_TYPE_FINUP ||
+			    !crypto_shash_supports_finup2x(tfm)) {
+				err = crypto_shash_finup(desc, data, len,
+							 result);
+				op = "finup";
+			} else if (cfg->finalization_type ==
+				   FINALIZATION_TYPE_FINUP2X_BUF1) {
+				err = crypto_shash_finup2x(
+					desc, data, unused_data, len,
+					result, unused_result);
+				op = "finup2x_buf1";
+			} else { /* FINALIZATION_TYPE_FINUP2X_BUF2 */
+				err = crypto_shash_finup2x(
+					desc, unused_data, data, len,
+					unused_result, result);
+				op = "finup2x_buf2";
+			}
 			if (divs[i]->nosimd)
 				crypto_reenable_simd_for_test();
-			err = check_shash_op("finup", err, driver, vec_name,
-					     cfg);
+			err = check_shash_op(op, err, driver, vec_name, cfg);
 			if (err)
 				return err;
 			goto result_ready;
 		}
 		if (divs[i]->nosimd)
 			crypto_disable_simd_for_test();
-		err = crypto_shash_update(desc, sg_virt(&tsgl->sgl[i]),
-					  tsgl->sgl[i].length);
+		err = crypto_shash_update(desc, data, len);
 		if (divs[i]->nosimd)
 			crypto_reenable_simd_for_test();
 		err = check_shash_op("update", err, driver, vec_name, cfg);
 		if (err)
 			return err;
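The invariant these tests pin down is that finup2x must produce exactly
the digests that the single-message code paths produce.  A standalone
sketch of that check, outside testmgr's harness (the helper name is
illustrative and error handling is abbreviated):

static int check_finup2x_matches_finup(struct crypto_shash *tfm,
				       const u8 *data1, const u8 *data2,
				       unsigned int len)
{
	SHASH_DESC_ON_STACK(desc, tfm);
	u8 ref1[HASH_MAX_DIGESTSIZE], ref2[HASH_MAX_DIGESTSIZE];
	u8 out1[HASH_MAX_DIGESTSIZE], out2[HASH_MAX_DIGESTSIZE];
	unsigned int ds = crypto_shash_digestsize(tfm);
	int err;

	if (!crypto_shash_supports_finup2x(tfm))
		return 0;

	/* Reference digests via the ordinary one-message-at-a-time path */
	err = crypto_shash_tfm_digest(tfm, data1, len, ref1) ?:
	      crypto_shash_tfm_digest(tfm, data2, len, ref2);
	if (err)
		return err;

	/* finup2x starting from the empty common prefix */
	desc->tfm = tfm;
	err = crypto_shash_init(desc) ?:
	      crypto_shash_finup2x(desc, data1, data2, len, out1, out2);
	if (err)
		return err;

	return (memcmp(out1, ref1, ds) || memcmp(out2, ref2, ds)) ?
		-EINVAL : 0;
}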
From patchwork Mon Apr 22 20:35:41 2024
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 791042
From: Eric Biggers
To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev,
 dm-devel@lists.linux.dev
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel,
 Sami Tolvanen, Bart Van Assche
Subject: [PATCH v2 5/8] crypto: arm64/sha256-ce - add support for finup2x
Date: Mon, 22 Apr 2024 13:35:41 -0700
Message-ID: <20240422203544.195390-6-ebiggers@kernel.org>
In-Reply-To: <20240422203544.195390-1-ebiggers@kernel.org>

From: Eric Biggers

Add an implementation of finup2x to sha256-ce.  finup2x interleaves a
finup operation for two equal-length messages that share a common
prefix.  dm-verity and fs-verity will take advantage of this for
significantly improved performance on capable CPUs.

On an ARM Cortex-X1, this increases the throughput of SHA-256 hashing
4096-byte messages by 70%.

Signed-off-by: Eric Biggers
---
 arch/arm64/crypto/sha2-ce-core.S | 281 ++++++++++++++++++++++++++++++-
 arch/arm64/crypto/sha2-ce-glue.c |  41 +++++
 2 files changed, 316 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S
index fce84d88ddb2..fb5d5227e585 100644
--- a/arch/arm64/crypto/sha2-ce-core.S
+++ b/arch/arm64/crypto/sha2-ce-core.S
@@ -68,22 +68,26 @@
 	.word		0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5
 	.word		0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3
 	.word		0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208
 	.word		0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
 
+	.macro load_round_constants tmp
+	adr_l		\tmp, .Lsha2_rcon
+	ld1		{ v0.4s- v3.4s}, [\tmp], #64
+	ld1		{ v4.4s- v7.4s}, [\tmp], #64
+	ld1		{ v8.4s-v11.4s}, [\tmp], #64
+	ld1		{v12.4s-v15.4s}, [\tmp]
+	.endm
+
 	/*
 	 * int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
 	 *			     int blocks)
 	 */
 	.text
 SYM_FUNC_START(__sha256_ce_transform)
-	/* load round constants */
-	adr_l		x8, .Lsha2_rcon
-	ld1		{ v0.4s- v3.4s}, [x8], #64
-	ld1		{ v4.4s- v7.4s}, [x8], #64
-	ld1		{ v8.4s-v11.4s}, [x8], #64
-	ld1		{v12.4s-v15.4s}, [x8]
+
+	load_round_constants x8
 
 	/* load state */
 	ld1		{dgav.4s, dgbv.4s}, [x0]
 
 	/* load sha256_ce_state::finalize */
@@ -153,5 +157,270 @@ CPU_LE(	rev32		v19.16b, v19.16b	)
 
 	/* store new state */
 3:	st1		{dgav.4s, dgbv.4s}, [x0]
 	mov		w0, w2
 	ret
 SYM_FUNC_END(__sha256_ce_transform)
+
+	.unreq		dga
+	.unreq		dgav
+	.unreq		dgb
+	.unreq		dgbv
+	.unreq		t0
+	.unreq		t1
+	.unreq		dg0q
+	.unreq		dg0v
+	.unreq		dg1q
+	.unreq		dg1v
+	.unreq		dg2q
+	.unreq		dg2v
+
+	// parameters for __sha256_ce_finup2x()
+	sctx		.req	x0
+	data1		.req	x1
+	data2		.req	x2
+	len		.req	w3
+	out1		.req	x4
+	out2		.req	x5
+
+	// other scalar variables
+	count		.req	x6
+	final_step	.req	w7
+
+	// x8-x9 are used as temporaries.
+
+	// v0-v15 are used to cache the SHA-256 round constants.
+ // v16-v19 are used for the message schedule for the first message. + // v20-v23 are used for the message schedule for the second message. + // v24-v31 are used for the state and temporaries as given below. + // *_a are for the first message and *_b for the second. + state0_a_q .req q24 + state0_a .req v24 + state1_a_q .req q25 + state1_a .req v25 + state0_b_q .req q26 + state0_b .req v26 + state1_b_q .req q27 + state1_b .req v27 + t0_a .req v28 + t0_b .req v29 + t1_a_q .req q30 + t1_a .req v30 + t1_b_q .req q31 + t1_b .req v31 + +#define OFFSETOF_COUNT 32 // offsetof(struct sha256_state, count) +#define OFFSETOF_BUF 40 // offsetof(struct sha256_state, buf) +// offsetof(struct sha256_state, state) is assumed to be 0. + + // Do 4 rounds of SHA-256 for each of two messages (interleaved). m0_a + // and m0_b contain the current 4 message schedule words for the first + // and second message respectively. + // + // If not all the message schedule words have been computed yet, then + // this also computes 4 more message schedule words for each message. + // m1_a-m3_a contain the next 3 groups of 4 message schedule words for + // the first message, and likewise m1_b-m3_b for the second. After + // consuming the current value of m0_a, this macro computes the group + // after m3_a and writes it to m0_a, and likewise for *_b. This means + // that the next (m0_a, m1_a, m2_a, m3_a) is the current (m1_a, m2_a, + // m3_a, m0_a), and likewise for *_b, so the caller must cycle through + // the registers accordingly. + .macro do_4rounds_2x i, k, m0_a, m1_a, m2_a, m3_a, \ + m0_b, m1_b, m2_b, m3_b + add t0_a\().4s, \m0_a\().4s, \k\().4s + add t0_b\().4s, \m0_b\().4s, \k\().4s + .if \i < 48 + sha256su0 \m0_a\().4s, \m1_a\().4s + sha256su0 \m0_b\().4s, \m1_b\().4s + sha256su1 \m0_a\().4s, \m2_a\().4s, \m3_a\().4s + sha256su1 \m0_b\().4s, \m2_b\().4s, \m3_b\().4s + .endif + mov t1_a.16b, state0_a.16b + mov t1_b.16b, state0_b.16b + sha256h state0_a_q, state1_a_q, t0_a\().4s + sha256h state0_b_q, state1_b_q, t0_b\().4s + sha256h2 state1_a_q, t1_a_q, t0_a\().4s + sha256h2 state1_b_q, t1_b_q, t0_b\().4s + .endm + + .macro do_16rounds_2x i, k0, k1, k2, k3 + do_4rounds_2x \i + 0, \k0, v16, v17, v18, v19, v20, v21, v22, v23 + do_4rounds_2x \i + 4, \k1, v17, v18, v19, v16, v21, v22, v23, v20 + do_4rounds_2x \i + 8, \k2, v18, v19, v16, v17, v22, v23, v20, v21 + do_4rounds_2x \i + 12, \k3, v19, v16, v17, v18, v23, v20, v21, v22 + .endm + +// +// void __sha256_ce_finup2x(const struct sha256_state *sctx, +// const u8 *data1, const u8 *data2, int len, +// u8 out1[SHA256_DIGEST_SIZE], +// u8 out2[SHA256_DIGEST_SIZE]); +// +// This function computes the SHA-256 digests of two messages |data1| and +// |data2| that are both |len| bytes long, starting from the initial state +// |sctx|. |len| must be at least SHA256_BLOCK_SIZE. +// +// The instructions for the two SHA-256 operations are interleaved. On many +// CPUs, this is almost twice as fast as hashing each message individually due +// to taking better advantage of the CPU's SHA-256 and SIMD throughput. +// +SYM_FUNC_START(__sha256_ce_finup2x) + sub sp, sp, #128 + mov final_step, #0 + load_round_constants x8 + + // Load the initial state from sctx->state. + ld1 {state0_a.4s-state1_a.4s}, [sctx] + + // Load sctx->count. Take the mod 64 of it to get the number of bytes + // that are buffered in sctx->buf. Also save it in a register with len + // added to it. 
+ ldr x8, [sctx, #OFFSETOF_COUNT] + add count, x8, len, sxtw + and x8, x8, #63 + cbz x8, .Lfinup2x_enter_loop // No bytes buffered? + + // x8 bytes (1 to 63) are currently buffered in sctx->buf. Load them + // followed by the first 64 - x8 bytes of data. Since len >= 64, we + // just load 64 bytes from each of sctx->buf, data1, and data2 + // unconditionally and rearrange the data as needed. + add x9, sctx, #OFFSETOF_BUF + ld1 {v16.16b-v19.16b}, [x9] + st1 {v16.16b-v19.16b}, [sp] + + ld1 {v16.16b-v19.16b}, [data1], #64 + add x9, sp, x8 + st1 {v16.16b-v19.16b}, [x9] + ld1 {v16.4s-v19.4s}, [sp] + + ld1 {v20.16b-v23.16b}, [data2], #64 + st1 {v20.16b-v23.16b}, [x9] + ld1 {v20.4s-v23.4s}, [sp] + + sub len, len, #64 + sub data1, data1, x8 + sub data2, data2, x8 + add len, len, w8 + mov state0_b.16b, state0_a.16b + mov state1_b.16b, state1_a.16b + b .Lfinup2x_loop_have_data + +.Lfinup2x_enter_loop: + sub len, len, #64 + mov state0_b.16b, state0_a.16b + mov state1_b.16b, state1_a.16b +.Lfinup2x_loop: + // Load the next two data blocks. + ld1 {v16.4s-v19.4s}, [data1], #64 + ld1 {v20.4s-v23.4s}, [data2], #64 +.Lfinup2x_loop_have_data: + // Convert the words of the data blocks from big endian. +CPU_LE( rev32 v16.16b, v16.16b ) +CPU_LE( rev32 v17.16b, v17.16b ) +CPU_LE( rev32 v18.16b, v18.16b ) +CPU_LE( rev32 v19.16b, v19.16b ) +CPU_LE( rev32 v20.16b, v20.16b ) +CPU_LE( rev32 v21.16b, v21.16b ) +CPU_LE( rev32 v22.16b, v22.16b ) +CPU_LE( rev32 v23.16b, v23.16b ) +.Lfinup2x_loop_have_bswapped_data: + + // Save the original state for each block. + st1 {state0_a.4s-state1_b.4s}, [sp] + + // Do the SHA-256 rounds on each block. + do_16rounds_2x 0, v0, v1, v2, v3 + do_16rounds_2x 16, v4, v5, v6, v7 + do_16rounds_2x 32, v8, v9, v10, v11 + do_16rounds_2x 48, v12, v13, v14, v15 + + // Add the original state for each block. + ld1 {v16.4s-v19.4s}, [sp] + add state0_a.4s, state0_a.4s, v16.4s + add state1_a.4s, state1_a.4s, v17.4s + add state0_b.4s, state0_b.4s, v18.4s + add state1_b.4s, state1_b.4s, v19.4s + + // Update len and loop back if more blocks remain. + sub len, len, #64 + tbz len, #31, .Lfinup2x_loop // len >= 0? + + // Check if any final blocks need to be handled. + // final_step = 2: all done + // final_step = 1: need to do count-only padding block + // final_step = 0: need to do the block with 0x80 padding byte + tbnz final_step, #1, .Lfinup2x_done + tbnz final_step, #0, .Lfinup2x_finalize_countonly + add len, len, #64 + cbz len, .Lfinup2x_finalize_blockaligned + + // Not block-aligned; 1 <= len <= 63 data bytes remain. Pad the block. + // To do this, write the padding starting with the 0x80 byte to + // &sp[64]. Then for each message, copy the last 64 data bytes to sp + // and load from &sp[64 - len] to get the needed padding block. This + // code relies on the data buffers being >= 64 bytes in length. + sub w8, len, #64 // w8 = len - 64 + add data1, data1, w8, sxtw // data1 += len - 64 + add data2, data2, w8, sxtw // data2 += len - 64 + mov x9, 0x80 + fmov d16, x9 + movi v17.16b, #0 + stp q16, q17, [sp, #64] + stp q17, q17, [sp, #96] + sub x9, sp, w8, sxtw // x9 = &sp[64 - len] + cmp len, #56 + b.ge 1f // will count spill into its own block? 
+ lsl count, count, #3 + rev count, count + str count, [x9, #56] + mov final_step, #2 // won't need count-only block + b 2f +1: + mov final_step, #1 // will need count-only block +2: + ld1 {v16.16b-v19.16b}, [data1] + st1 {v16.16b-v19.16b}, [sp] + ld1 {v16.4s-v19.4s}, [x9] + ld1 {v20.16b-v23.16b}, [data2] + st1 {v20.16b-v23.16b}, [sp] + ld1 {v20.4s-v23.4s}, [x9] + b .Lfinup2x_loop_have_data + + // Prepare a padding block, either: + // + // {0x80, 0, 0, 0, ..., count (as __be64)} + // This is for a block aligned message. + // + // { 0, 0, 0, 0, ..., count (as __be64)} + // This is for a message whose length mod 64 is >= 56. + // + // Pre-swap the endianness of the words. +.Lfinup2x_finalize_countonly: + movi v16.2d, #0 + b 1f +.Lfinup2x_finalize_blockaligned: + mov x8, #0x80000000 + fmov d16, x8 +1: + movi v17.2d, #0 + movi v18.2d, #0 + ror count, count, #29 // ror(lsl(count, 3), 32) + mov v19.d[0], xzr + mov v19.d[1], count + mov v20.16b, v16.16b + movi v21.2d, #0 + movi v22.2d, #0 + mov v23.16b, v19.16b + mov final_step, #2 + b .Lfinup2x_loop_have_bswapped_data + +.Lfinup2x_done: + // Write the two digests with all bytes in the correct order. +CPU_LE( rev32 state0_a.16b, state0_a.16b ) +CPU_LE( rev32 state1_a.16b, state1_a.16b ) +CPU_LE( rev32 state0_b.16b, state0_b.16b ) +CPU_LE( rev32 state1_b.16b, state1_b.16b ) + st1 {state0_a.4s-state1_a.4s}, [out1] + st1 {state0_b.4s-state1_b.4s}, [out2] + add sp, sp, #128 + ret +SYM_FUNC_END(__sha256_ce_finup2x) diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c index 0a44d2e7ee1f..c77d75395cc4 100644 --- a/arch/arm64/crypto/sha2-ce-glue.c +++ b/arch/arm64/crypto/sha2-ce-glue.c @@ -31,10 +31,15 @@ extern const u32 sha256_ce_offsetof_count; extern const u32 sha256_ce_offsetof_finalize; asmlinkage int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src, int blocks); +asmlinkage void __sha256_ce_finup2x(const struct sha256_state *sctx, + const u8 *data1, const u8 *data2, int len, + u8 out1[SHA256_DIGEST_SIZE], + u8 out2[SHA256_DIGEST_SIZE]); + static void sha256_ce_transform(struct sha256_state *sst, u8 const *src, int blocks) { while (blocks) { int rem; @@ -122,10 +127,45 @@ static int sha256_ce_digest(struct shash_desc *desc, const u8 *data, { sha256_base_init(desc); return sha256_ce_finup(desc, data, len, out); } +static noinline_for_stack int +sha256_finup2x_fallback(struct sha256_state *sctx, const u8 *data1, + const u8 *data2, unsigned int len, u8 *out1, u8 *out2) +{ + struct sha256_state sctx2 = *sctx; + + sha256_update(sctx, data1, len); + sha256_final(sctx, out1); + sha256_update(&sctx2, data2, len); + sha256_final(&sctx2, out2); + return 0; +} + +static int sha256_ce_finup2x(struct shash_desc *desc, + const u8 *data1, const u8 *data2, + unsigned int len, u8 *out1, u8 *out2) +{ + struct sha256_ce_state *sctx = shash_desc_ctx(desc); + + if (unlikely(!crypto_simd_usable() || len < SHA256_BLOCK_SIZE || + len > INT_MAX)) + return sha256_finup2x_fallback(&sctx->sst, data1, data2, len, + out1, out2); + + /* __sha256_ce_finup2x() assumes the following offsets. 
+	 */
+	BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
+	BUILD_BUG_ON(offsetof(struct sha256_state, count) != 32);
+	BUILD_BUG_ON(offsetof(struct sha256_state, buf) != 40);
+
+	kernel_neon_begin();
+	__sha256_ce_finup2x(&sctx->sst, data1, data2, len, out1, out2);
+	kernel_neon_end();
+	return 0;
+}
+
 static int sha256_ce_export(struct shash_desc *desc, void *out)
 {
 	struct sha256_ce_state *sctx = shash_desc_ctx(desc);
 
 	memcpy(out, &sctx->sst, sizeof(struct sha256_state));
@@ -162,10 +202,11 @@ static struct shash_alg algs[] = { {
 	.init			= sha256_base_init,
 	.update			= sha256_ce_update,
 	.final			= sha256_ce_final,
 	.finup			= sha256_ce_finup,
 	.digest			= sha256_ce_digest,
+	.finup2x		= sha256_ce_finup2x,
 	.export			= sha256_ce_export,
 	.import			= sha256_ce_import,
 	.descsize		= sizeof(struct sha256_ce_state),
 	.statesize		= sizeof(struct sha256_state),
 	.digestsize		= SHA256_DIGEST_SIZE,
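The quoted 70% comes from microbenchmarking.  A rough sketch of the
kind of comparison involved is below; this is an assumption about the
methodology, not the actual benchmark.  A real measurement would also
pin the CPU, warm up the caches, take kernel_neon_begin()/end() into
account, and aggregate multiple trials:

static void bench_finup2x(struct crypto_shash *tfm,
			  const u8 *buf1, const u8 *buf2, unsigned int len)
{
	SHASH_DESC_ON_STACK(desc, tfm);
	u8 out1[SHA256_DIGEST_SIZE], out2[SHA256_DIGEST_SIZE];
	u64 t0, t1, t2;
	int i;

	desc->tfm = tfm;

	/* Hash the two buffers one at a time... */
	t0 = ktime_get_ns();
	for (i = 0; i < 100000; i++) {
		crypto_shash_init(desc);
		crypto_shash_finup(desc, buf1, len, out1);
		crypto_shash_init(desc);
		crypto_shash_finup(desc, buf2, len, out2);
	}
	/* ...then both at once with the interleaved implementation. */
	t1 = ktime_get_ns();
	for (i = 0; i < 100000; i++) {
		crypto_shash_init(desc);
		crypto_shash_finup2x(desc, buf1, buf2, len, out1, out2);
	}
	t2 = ktime_get_ns();

	pr_info("2x finup: %llu ns, finup2x: %llu ns\n", t1 - t0, t2 - t1);
}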
From patchwork Mon Apr 22 20:35:43 2024
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 791041
From: Eric Biggers
To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev,
 dm-devel@lists.linux.dev
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel,
 Sami Tolvanen, Bart Van Assche
Subject: [PATCH v2 7/8] dm-verity: hash blocks with shash import+finup when
 possible
Date: Mon, 22 Apr 2024 13:35:43 -0700
Message-ID: <20240422203544.195390-8-ebiggers@kernel.org>
In-Reply-To: <20240422203544.195390-1-ebiggers@kernel.org>

From: Eric Biggers

Currently dm-verity computes the hash of each block by using multiple
calls to the "ahash" crypto API.  While the exact sequence depends on
the chosen dm-verity settings, in the vast majority of cases it is:

    1. crypto_ahash_init()
    2. crypto_ahash_update() [salt]
    3. crypto_ahash_update() [data]
    4. crypto_ahash_final()

This is inefficient for two main reasons:

- It makes multiple indirect calls, which is expensive on modern CPUs
  especially when mitigations for CPU vulnerabilities are enabled.
  Since the salt is the same across all blocks on a given dm-verity
  device, a much more efficient sequence would be to do an import of
  the pre-salted state, then a finup.

- It uses the ahash (asynchronous hash) API, despite the fact that
  CPU-based hashing is almost always used in practice, and therefore it
  experiences the overhead of the ahash-based wrapper for shash.  This
  also means that the new function crypto_shash_finup2x(), which is
  specifically designed for fast CPU-based hashing, is unavailable.

Since dm-verity was intentionally converted to ahash to support off-CPU
crypto accelerators, wholesale conversion to shash (reverting that
change) might not be acceptable.  Yet, we should still provide a fast
path for shash with the most common dm-verity settings.

Therefore, this patch adds a new shash import+finup based fast path to
dm-verity.  It is used automatically when appropriate, i.e. when the
ahash API and shash APIs resolve to the same underlying algorithm, the
dm-verity version is not 0 (so that the salt is hashed before the
data), and the data block size is not greater than the page size.

This makes dm-verity optimized for what the vast majority of users
want: CPU-based hashing with the most common settings, while still
retaining support for rarer settings and off-CPU crypto accelerators.

In benchmarks with veritysetup's default parameters (SHA-256, 4K data
and hash block sizes, 32-byte salt), which also match the parameters
that Android currently uses, this patch improves block hashing
performance by about 15% on an x86_64 system that supports the SHA-NI
instructions, or by about 5% on an arm64 system that supports the ARMv8
SHA2 instructions.  This was with CONFIG_CRYPTO_STATS disabled; an even
larger improvement can be expected if that option is enabled.

Note that another benefit of using "import" to handle the salt is that
if the salt size is equal to the input size of the hash algorithm's
compression function, e.g. 64 bytes for SHA-256, then the performance
is exactly the same as no salt.  (This doesn't seem to be much better
than veritysetup's current default of 32-byte salts, due to the way
SHA-256's finalization padding works, but it should be marginally
better.)

In addition to the benchmarks mentioned above, I've tested this patch
with cryptsetup's 'verity-compat-test' script.  I've also lightly
tested this patch with Android, where the new shash-based code gets
used.
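The import+finup sequence that this patch adopts can be summarized in
isolation as follows.  The helper names here are hypothetical; the real
dm-verity code in the diff below additionally handles the ahash path,
bio iterators, and error reporting:

/* Done once at table construction time. */
static int make_salted_state(struct crypto_shash *tfm, const u8 *salt,
			     unsigned int salt_size, u8 *hashstate)
{
	SHASH_DESC_ON_STACK(desc, tfm);

	/* hashstate must hold crypto_shash_statesize(tfm) bytes */
	desc->tfm = tfm;
	return crypto_shash_init(desc) ?:
	       crypto_shash_update(desc, salt, salt_size) ?:
	       crypto_shash_export(desc, hashstate);
}

/* Done per data block: two calls instead of init+update+update+final. */
static int hash_block(struct crypto_shash *tfm, const u8 *hashstate,
		      const u8 *data, unsigned int len, u8 *digest)
{
	SHASH_DESC_ON_STACK(desc, tfm);

	desc->tfm = tfm;
	return crypto_shash_import(desc, hashstate) ?:
	       crypto_shash_finup(desc, data, len, digest);
}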
Signed-off-by: Eric Biggers --- drivers/md/dm-verity-fec.c | 13 +- drivers/md/dm-verity-target.c | 336 ++++++++++++++++++++++++---------- drivers/md/dm-verity.h | 27 ++- 3 files changed, 263 insertions(+), 113 deletions(-) diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c index e46aee6f932e..b436b8e4d750 100644 --- a/drivers/md/dm-verity-fec.c +++ b/drivers/md/dm-verity-fec.c @@ -184,13 +184,14 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io, * Locate data block erasures using verity hashes. */ static int fec_is_erasure(struct dm_verity *v, struct dm_verity_io *io, u8 *want_digest, u8 *data) { - if (unlikely(verity_hash(v, verity_io_hash_req(v, io), - data, 1 << v->data_dev_block_bits, - verity_io_real_digest(v, io), true))) + if (unlikely(verity_compute_hash_virt(v, io, data, + 1 << v->data_dev_block_bits, + verity_io_real_digest(v, io), + true))) return 0; return memcmp(verity_io_real_digest(v, io), want_digest, v->digest_size) != 0; } @@ -386,13 +387,13 @@ static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io, pos += fio->nbufs << DM_VERITY_FEC_BUF_RS_BITS; } /* Always re-validate the corrected block against the expected hash */ - r = verity_hash(v, verity_io_hash_req(v, io), fio->output, - 1 << v->data_dev_block_bits, - verity_io_real_digest(v, io), true); + r = verity_compute_hash_virt(v, io, fio->output, + 1 << v->data_dev_block_bits, + verity_io_real_digest(v, io), true); if (unlikely(r < 0)) return r; if (memcmp(verity_io_real_digest(v, io), verity_io_want_digest(v, io), v->digest_size)) { diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index bb5da66da4c1..2dd15f5e91b7 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -44,12 +44,16 @@ static unsigned int dm_verity_prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE; module_param_named(prefetch_cluster, dm_verity_prefetch_cluster, uint, 0644); +/* Is at least one dm-verity instance using the bh workqueue? */ static DEFINE_STATIC_KEY_FALSE(use_bh_wq_enabled); +/* Is at least one dm-verity instance using ahash_tfm instead of shash_tfm? */ +static DEFINE_STATIC_KEY_FALSE(ahash_enabled); + struct dm_verity_prefetch_work { struct work_struct work; struct dm_verity *v; unsigned short ioprio; sector_t block; @@ -100,13 +104,13 @@ static sector_t verity_position_at_level(struct dm_verity *v, sector_t block, int level) { return block >> (level * v->hash_per_block_bits); } -static int verity_hash_update(struct dm_verity *v, struct ahash_request *req, - const u8 *data, size_t len, - struct crypto_wait *wait) +static int verity_ahash_update(struct dm_verity *v, struct ahash_request *req, + const u8 *data, size_t len, + struct crypto_wait *wait) { struct scatterlist sg; if (likely(!is_vmalloc_addr(data))) { sg_init_one(&sg, data, len); @@ -133,16 +137,16 @@ static int verity_hash_update(struct dm_verity *v, struct ahash_request *req, } /* * Wrapper for crypto_ahash_init, which handles verity salting. */ -static int verity_hash_init(struct dm_verity *v, struct ahash_request *req, +static int verity_ahash_init(struct dm_verity *v, struct ahash_request *req, struct crypto_wait *wait, bool may_sleep) { int r; - ahash_request_set_tfm(req, v->tfm); + ahash_request_set_tfm(req, v->ahash_tfm); ahash_request_set_callback(req, may_sleep ? 
CRYPTO_TFM_REQ_MAY_SLEEP | CRYPTO_TFM_REQ_MAY_BACKLOG : 0, crypto_req_done, (void *)wait); crypto_init_wait(wait); @@ -153,22 +157,22 @@ static int verity_hash_init(struct dm_verity *v, struct ahash_request *req, DMERR("crypto_ahash_init failed: %d", r); return r; } if (likely(v->salt_size && (v->version >= 1))) - r = verity_hash_update(v, req, v->salt, v->salt_size, wait); + r = verity_ahash_update(v, req, v->salt, v->salt_size, wait); return r; } -static int verity_hash_final(struct dm_verity *v, struct ahash_request *req, - u8 *digest, struct crypto_wait *wait) +static int verity_ahash_final(struct dm_verity *v, struct ahash_request *req, + u8 *digest, struct crypto_wait *wait) { int r; if (unlikely(v->salt_size && (!v->version))) { - r = verity_hash_update(v, req, v->salt, v->salt_size, wait); + r = verity_ahash_update(v, req, v->salt, v->salt_size, wait); if (r < 0) { DMERR("%s failed updating salt: %d", __func__, r); goto out; } @@ -178,27 +182,47 @@ static int verity_hash_final(struct dm_verity *v, struct ahash_request *req, r = crypto_wait_req(crypto_ahash_final(req), wait); out: return r; } -int verity_hash(struct dm_verity *v, struct ahash_request *req, - const u8 *data, size_t len, u8 *digest, bool may_sleep) +int verity_compute_hash_virt(struct dm_verity *v, struct dm_verity_io *io, + const u8 *data, size_t len, u8 *digest, + bool may_sleep) { int r; - struct crypto_wait wait; - r = verity_hash_init(v, req, &wait, may_sleep); - if (unlikely(r < 0)) - goto out; + if (static_branch_unlikely(&ahash_enabled) && !v->shash_tfm) { + struct ahash_request *req = verity_io_hash_req(v, io); + struct crypto_wait wait; - r = verity_hash_update(v, req, data, len, &wait); - if (unlikely(r < 0)) - goto out; + r = verity_ahash_init(v, req, &wait, may_sleep); + if (unlikely(r)) + goto error; - r = verity_hash_final(v, req, digest, &wait); + r = verity_ahash_update(v, req, data, len, &wait); + if (unlikely(r)) + goto error; -out: + r = verity_ahash_final(v, req, digest, &wait); + if (unlikely(r)) + goto error; + } else { + struct shash_desc *desc = verity_io_hash_req(v, io); + + desc->tfm = v->shash_tfm; + r = crypto_shash_import(desc, v->initial_hashstate); + if (unlikely(r)) + goto error; + + r = crypto_shash_finup(desc, data, len, digest); + if (unlikely(r)) + goto error; + } + return 0; + +error: + DMERR("Error hashing block from virt buffer: %d", r); return r; } static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level, sector_t *hash_block, unsigned int *offset) @@ -323,13 +347,14 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io, if (skip_unverified) { r = 1; goto release_ret_r; } - r = verity_hash(v, verity_io_hash_req(v, io), - data, 1 << v->hash_dev_block_bits, - verity_io_real_digest(v, io), !io->in_bh); + r = verity_compute_hash_virt(v, io, data, + 1 << v->hash_dev_block_bits, + verity_io_real_digest(v, io), + !io->in_bh); if (unlikely(r < 0)) goto release_ret_r; if (likely(memcmp(verity_io_real_digest(v, io), want_digest, v->digest_size) == 0)) @@ -403,14 +428,17 @@ int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io, return r; } /* - * Calculates the digest for the given bio + * Update the ahash_request of @io with the next data block from @iter, and + * advance @iter accordingly. 
*/ -static int verity_for_io_block(struct dm_verity *v, struct dm_verity_io *io, - struct bvec_iter *iter, struct crypto_wait *wait) +static int verity_ahash_update_block(struct dm_verity *v, + struct dm_verity_io *io, + struct bvec_iter *iter, + struct crypto_wait *wait) { unsigned int todo = 1 << v->data_dev_block_bits; struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); struct scatterlist sg; struct ahash_request *req = verity_io_hash_req(v, io); @@ -445,10 +473,71 @@ static int verity_for_io_block(struct dm_verity *v, struct dm_verity_io *io, } while (todo); return 0; } +static int verity_compute_hash(struct dm_verity *v, struct dm_verity_io *io, + struct bvec_iter *iter, u8 *digest, + bool may_sleep) +{ + int r; + + if (static_branch_unlikely(&ahash_enabled) && !v->shash_tfm) { + struct ahash_request *req = verity_io_hash_req(v, io); + struct crypto_wait wait; + + r = verity_ahash_init(v, req, &wait, may_sleep); + if (unlikely(r)) + goto error; + + r = verity_ahash_update_block(v, io, iter, &wait); + if (unlikely(r)) + goto error; + + r = verity_ahash_final(v, req, digest, &wait); + if (unlikely(r)) + goto error; + } else { + struct shash_desc *desc = verity_io_hash_req(v, io); + struct bio *bio = + dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); + struct bio_vec bv = bio_iter_iovec(bio, *iter); + const unsigned int len = 1 << v->data_dev_block_bits; + const void *virt; + + if (unlikely(len > bv.bv_len)) { + /* + * Data block spans pages. This should not happen, + * since this code path is not used if the data block + * size is greater than the page size, and all I/O + * should be data block aligned because dm-verity sets + * logical_block_size to the data block size. + */ + DMERR_LIMIT("unaligned io (data block spans pages)"); + return -EIO; + } + + desc->tfm = v->shash_tfm; + r = crypto_shash_import(desc, v->initial_hashstate); + if (unlikely(r)) + goto error; + + virt = bvec_kmap_local(&bv); + r = crypto_shash_finup(desc, virt, len, digest); + kunmap_local(virt); + if (unlikely(r)) + goto error; + + bio_advance_iter(bio, iter, len); + } + return 0; + +error: + DMERR("Error hashing block from bio iter: %d", r); + return r; +} + /* * Calls function process for 1 << v->data_dev_block_bits bytes in the bio_vec * starting from iter. 
*/ int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io, @@ -516,13 +605,12 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, io_loc.count = 1 << (v->data_dev_block_bits - SECTOR_SHIFT); r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT); if (unlikely(r)) goto free_ret; - r = verity_hash(v, verity_io_hash_req(v, io), buffer, - 1 << v->data_dev_block_bits, - verity_io_real_digest(v, io), true); + r = verity_compute_hash_virt(v, io, buffer, 1 << v->data_dev_block_bits, + verity_io_real_digest(v, io), true); if (unlikely(r)) goto free_ret; if (memcmp(verity_io_real_digest(v, io), verity_io_want_digest(v, io), v->digest_size)) { @@ -569,11 +657,10 @@ static int verity_verify_io(struct dm_verity_io *io) bool is_zero; struct dm_verity *v = io->v; struct bvec_iter start; struct bvec_iter iter_copy; struct bvec_iter *iter; - struct crypto_wait wait; struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); unsigned int b; if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) { /* @@ -586,11 +673,10 @@ static int verity_verify_io(struct dm_verity_io *io) iter = &io->iter; for (b = 0; b < io->n_blocks; b++) { int r; sector_t cur_block = io->block + b; - struct ahash_request *req = verity_io_hash_req(v, io); if (v->validated_blocks && bio->bi_status == BLK_STS_OK && likely(test_bit(cur_block, v->validated_blocks))) { verity_bv_skip_block(v, io, iter); continue; @@ -613,21 +699,14 @@ static int verity_verify_io(struct dm_verity_io *io) return r; continue; } - r = verity_hash_init(v, req, &wait, !io->in_bh); - if (unlikely(r < 0)) - return r; - start = *iter; - r = verity_for_io_block(v, io, iter, &wait); - if (unlikely(r < 0)) - return r; - - r = verity_hash_final(v, req, verity_io_real_digest(v, io), - &wait); + r = verity_compute_hash(v, io, iter, + verity_io_real_digest(v, io), + !io->in_bh); if (unlikely(r < 0)) return r; if (likely(memcmp(verity_io_real_digest(v, io), verity_io_want_digest(v, io), v->digest_size) == 0)) { @@ -1031,15 +1110,20 @@ static void verity_dtr(struct dm_target *ti) if (v->bufio) dm_bufio_client_destroy(v->bufio); kvfree(v->validated_blocks); kfree(v->salt); + kfree(v->initial_hashstate); kfree(v->root_digest); kfree(v->zero_digest); - if (v->tfm) - crypto_free_ahash(v->tfm); + if (v->ahash_tfm) { + static_branch_dec(&ahash_enabled); + crypto_free_ahash(v->ahash_tfm); + } else { + crypto_free_shash(v->shash_tfm); + } kfree(v->alg_name); if (v->hash_dev) dm_put_device(ti, v->hash_dev); @@ -1081,33 +1165,33 @@ static int verity_alloc_most_once(struct dm_verity *v) } static int verity_alloc_zero_digest(struct dm_verity *v) { int r = -ENOMEM; - struct ahash_request *req; + struct dm_verity_io *io; u8 *zero_data; v->zero_digest = kmalloc(v->digest_size, GFP_KERNEL); if (!v->zero_digest) return r; - req = kmalloc(v->ahash_reqsize, GFP_KERNEL); + io = kmalloc(sizeof(*io) + v->hash_reqsize, GFP_KERNEL); - if (!req) + if (!io) return r; /* verity_dtr will free zero_digest */ zero_data = kzalloc(1 << v->data_dev_block_bits, GFP_KERNEL); if (!zero_data) goto out; - r = verity_hash(v, req, zero_data, 1 << v->data_dev_block_bits, - v->zero_digest, true); - + r = verity_compute_hash_virt(v, io, zero_data, + 1 << v->data_dev_block_bits, + v->zero_digest, true); out: - kfree(req); + kfree(io); kfree(zero_data); return r; } @@ -1224,10 +1308,109 @@ static int verity_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v, } while (argc && !r); return r; } +static int verity_setup_hash_alg(struct dm_verity *v, const 
char *alg_name) +{ + struct dm_target *ti = v->ti; + struct crypto_ahash *ahash; + struct crypto_shash *shash = NULL; + const char *driver_name; + + v->alg_name = kstrdup(alg_name, GFP_KERNEL); + if (!v->alg_name) { + ti->error = "Cannot allocate algorithm name"; + return -ENOMEM; + } + + ahash = crypto_alloc_ahash(alg_name, 0, + v->use_bh_wq ? CRYPTO_ALG_ASYNC : 0); + if (IS_ERR(ahash)) { + ti->error = "Cannot initialize hash function"; + return PTR_ERR(ahash); + } + driver_name = crypto_ahash_driver_name(ahash); + if (v->version >= 1 /* salt prepended, not appended? */ && + 1 << v->data_dev_block_bits <= PAGE_SIZE) { + shash = crypto_alloc_shash(alg_name, 0, 0); + if (!IS_ERR(shash) && + strcmp(crypto_shash_driver_name(shash), driver_name) != 0) { + /* + * ahash gave a different driver than shash, so probably + * this is a case of real hardware offload. Use ahash. + */ + crypto_free_shash(shash); + shash = NULL; + } + } + if (!IS_ERR_OR_NULL(shash)) { + crypto_free_ahash(ahash); + ahash = NULL; + v->shash_tfm = shash; + v->digest_size = crypto_shash_digestsize(shash); + v->hash_reqsize = sizeof(struct shash_desc) + + crypto_shash_descsize(shash); + DMINFO("%s using shash \"%s\"", alg_name, driver_name); + } else { + v->ahash_tfm = ahash; + static_branch_inc(&ahash_enabled); + v->digest_size = crypto_ahash_digestsize(ahash); + v->hash_reqsize = sizeof(struct ahash_request) + + crypto_ahash_reqsize(ahash); + DMINFO("%s using ahash \"%s\"", alg_name, driver_name); + } + if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) { + ti->error = "Digest size too big"; + return -EINVAL; + } + return 0; +} + +static int verity_setup_salt_and_hashstate(struct dm_verity *v, const char *arg) +{ + struct dm_target *ti = v->ti; + + if (strcmp(arg, "-") != 0) { + v->salt_size = strlen(arg) / 2; + v->salt = kmalloc(v->salt_size, GFP_KERNEL); + if (!v->salt) { + ti->error = "Cannot allocate salt"; + return -ENOMEM; + } + if (strlen(arg) != v->salt_size * 2 || + hex2bin(v->salt, arg, v->salt_size)) { + ti->error = "Invalid salt"; + return -EINVAL; + } + } + /* + * If the "shash with import+finup sequence" method has been selected + * (see verity_setup_hash_alg()), then create the initial hash state. + */ + if (v->shash_tfm) { + SHASH_DESC_ON_STACK(desc, v->shash_tfm); + int r; + + v->initial_hashstate = kmalloc( + crypto_shash_statesize(v->shash_tfm), GFP_KERNEL); + if (!v->initial_hashstate) { + ti->error = "Cannot allocate initial hash state"; + return -ENOMEM; + } + desc->tfm = v->shash_tfm; + r = crypto_shash_init(desc) ?: + crypto_shash_update(desc, v->salt, v->salt_size) ?: + crypto_shash_export(desc, v->initial_hashstate); + if (r) { + ti->error = "Cannot set up initial hash state"; + return r; + } + } + return 0; +} + /* * Target parameters: * The current format is version 1. * Vsn 0 is compatible with original Chromium OS releases. * @@ -1348,42 +1531,13 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv) r = -EINVAL; goto bad; } v->hash_start = num_ll; - v->alg_name = kstrdup(argv[7], GFP_KERNEL); - if (!v->alg_name) { - ti->error = "Cannot allocate algorithm name"; - r = -ENOMEM; - goto bad; - } - - v->tfm = crypto_alloc_ahash(v->alg_name, 0, - v->use_bh_wq ? CRYPTO_ALG_ASYNC : 0); - if (IS_ERR(v->tfm)) { - ti->error = "Cannot initialize hash function"; - r = PTR_ERR(v->tfm); - v->tfm = NULL; - goto bad; - } - - /* - * dm-verity performance can vary greatly depending on which hash - * algorithm implementation is used. 
Help people debug performance - * problems by logging the ->cra_driver_name. - */ - DMINFO("%s using implementation \"%s\"", v->alg_name, - crypto_hash_alg_common(v->tfm)->base.cra_driver_name); - - v->digest_size = crypto_ahash_digestsize(v->tfm); - if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) { - ti->error = "Digest size too big"; - r = -EINVAL; + r = verity_setup_hash_alg(v, argv[7]); + if (r) goto bad; - } - v->ahash_reqsize = sizeof(struct ahash_request) + - crypto_ahash_reqsize(v->tfm); v->root_digest = kmalloc(v->digest_size, GFP_KERNEL); if (!v->root_digest) { ti->error = "Cannot allocate root digest"; r = -ENOMEM; @@ -1395,25 +1549,13 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv) r = -EINVAL; goto bad; } root_hash_digest_to_validate = argv[8]; - if (strcmp(argv[9], "-")) { - v->salt_size = strlen(argv[9]) / 2; - v->salt = kmalloc(v->salt_size, GFP_KERNEL); - if (!v->salt) { - ti->error = "Cannot allocate salt"; - r = -ENOMEM; - goto bad; - } - if (strlen(argv[9]) != v->salt_size * 2 || - hex2bin(v->salt, argv[9], v->salt_size)) { - ti->error = "Invalid salt"; - r = -EINVAL; - goto bad; - } - } + r = verity_setup_salt_and_hashstate(v, argv[9]); + if (r) + goto bad; argv += 10; argc -= 10; /* Optional parameters */ @@ -1512,11 +1654,11 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv) r = -ENOMEM; goto bad; } ti->per_io_data_size = sizeof(struct dm_verity_io) + - v->ahash_reqsize + v->digest_size * 2; + v->hash_reqsize + v->digest_size * 2; r = verity_fec_ctr(v); if (r) goto bad; diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h index 20b1bcf03474..15ffb0881cc9 100644 --- a/drivers/md/dm-verity.h +++ b/drivers/md/dm-verity.h @@ -37,13 +37,15 @@ struct dm_verity { struct dm_dev *data_dev; struct dm_dev *hash_dev; struct dm_target *ti; struct dm_bufio_client *bufio; char *alg_name; - struct crypto_ahash *tfm; + struct crypto_ahash *ahash_tfm; /* either this or shash_tfm is set */ + struct crypto_shash *shash_tfm; /* either this or ahash_tfm is set */ u8 *root_digest; /* digest of the root block */ u8 *salt; /* salt: its size is salt_size */ + u8 *initial_hashstate; /* salted initial state, if shash_tfm is set */ u8 *zero_digest; /* digest for a zero block */ unsigned int salt_size; sector_t data_start; /* data offset in 512-byte sectors */ sector_t hash_start; /* hash start in blocks */ sector_t data_blocks; /* the number of data blocks */ @@ -54,11 +56,11 @@ struct dm_verity { unsigned char levels; /* the number of tree levels */ unsigned char version; bool hash_failed:1; /* set if hash of any block failed */ bool use_bh_wq:1; /* try to verify in BH wq before normal work-queue */ unsigned int digest_size; /* digest size for the current hash algorithm */ - unsigned int ahash_reqsize;/* the size of temporary space for crypto */ + unsigned int hash_reqsize; /* the size of temporary space for crypto */ enum verity_mode mode; /* mode for handling verification errors */ unsigned int corrupted_errs;/* Number of errors for corrupted blocks */ struct workqueue_struct *verify_wq; @@ -92,45 +94,50 @@ struct dm_verity_io { char *recheck_buffer; /* * Three variably-size fields follow this struct: * - * u8 hash_req[v->ahash_reqsize]; + * u8 hash_req[v->hash_reqsize]; * u8 real_digest[v->digest_size]; * u8 want_digest[v->digest_size]; * * To access them use: verity_io_hash_req(), verity_io_real_digest() * and verity_io_want_digest(). 
+ * + * hash_req is either a struct ahash_request or a struct shash_desc, + * depending on whether ahash_tfm or shash_tfm is being used. */ }; -static inline struct ahash_request *verity_io_hash_req(struct dm_verity *v, - struct dm_verity_io *io) +static inline void *verity_io_hash_req(struct dm_verity *v, + struct dm_verity_io *io) { - return (struct ahash_request *)(io + 1); + return io + 1; } static inline u8 *verity_io_real_digest(struct dm_verity *v, struct dm_verity_io *io) { - return (u8 *)(io + 1) + v->ahash_reqsize; + return (u8 *)(io + 1) + v->hash_reqsize; } static inline u8 *verity_io_want_digest(struct dm_verity *v, struct dm_verity_io *io) { - return (u8 *)(io + 1) + v->ahash_reqsize + v->digest_size; + return (u8 *)(io + 1) + v->hash_reqsize + v->digest_size; } extern int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io, struct bvec_iter *iter, int (*process)(struct dm_verity *v, struct dm_verity_io *io, u8 *data, size_t len)); -extern int verity_hash(struct dm_verity *v, struct ahash_request *req, - const u8 *data, size_t len, u8 *digest, bool may_sleep); +extern int verity_compute_hash_virt(struct dm_verity *v, + struct dm_verity_io *io, + const u8 *data, size_t len, u8 *digest, + bool may_sleep); extern int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io, sector_t block, u8 *digest, bool *is_zero); extern bool dm_is_verity_target(struct dm_target *ti);