From patchwork Sat Dec 7 19:57:45 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 1/8] crypto: anubis - stop using cra_alignmask
Date: Sat, 7 Dec 2024 11:57:45 -0800
Message-ID: <20241207195752.87654-2-ebiggers@kernel.org>
In-Reply-To: <20241207195752.87654-1-ebiggers@kernel.org>
References: <20241207195752.87654-1-ebiggers@kernel.org>

Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most
CPUs, which can handle unaligned accesses efficiently, and brings us a
step closer to eventually removing support for the alignmask field.
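The pattern applied throughout this series is illustrated by the
following minimal sketch. It is not code from the patch: the
load_be32_cast()/load_be32_unaligned() wrappers are made up for
illustration, and the <linux/unaligned.h> location of the helpers is
assumed; only get_unaligned_be32()/put_unaligned_be32() are the real
kernel API being adopted here.

#include <linux/unaligned.h>

/*
 * Old style: the cast is only valid if 'buf' is 4-byte aligned, which
 * the crypto API had to guarantee by setting cra_alignmask = 3.
 */
static u32 load_be32_cast(const u8 *buf)
{
        const __be32 *p = (const __be32 *)buf;

        return be32_to_cpu(p[0]);
}

/*
 * New style: correct for any alignment. On CPUs that handle unaligned
 * accesses efficiently this compiles to the same plain load.
 */
static u32 load_be32_unaligned(const u8 *buf)
{
        return get_unaligned_be32(buf);
}

Once no accessor requires alignment, the nonzero cra_alignmask can
simply be dropped from the algorithm definition, as each patch below
does.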
Signed-off-by: Eric Biggers
---
 crypto/anubis.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/crypto/anubis.c b/crypto/anubis.c
index 9f0cf61bbc6e2..886e7c9136886 100644
--- a/crypto/anubis.c
+++ b/crypto/anubis.c
@@ -31,11 +31,11 @@
 #include
 #include
 #include
 #include
-#include
+#include
 #include

 #define ANUBIS_MIN_KEY_SIZE 16
 #define ANUBIS_MAX_KEY_SIZE 40
 #define ANUBIS_BLOCK_SIZE 16
@@ -461,11 +461,10 @@ static const u32 rc[] = {
 static int anubis_setkey(struct crypto_tfm *tfm, const u8 *in_key,
                          unsigned int key_len)
 {
        struct anubis_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __be32 *key = (const __be32 *)in_key;
        int N, R, i, r;
        u32 kappa[ANUBIS_MAX_N];
        u32 inter[ANUBIS_MAX_N];

        switch (key_len) {
@@ -480,11 +479,11 @@ static int anubis_setkey(struct crypto_tfm *tfm, const u8 *in_key,
        N = ctx->key_len >> 5;
        ctx->R = R = 8 + N;

        /*
         * map cipher key to initial key state (mu):
         */
        for (i = 0; i < N; i++)
-               kappa[i] = be32_to_cpu(key[i]);
+               kappa[i] = get_unaligned_be32(&in_key[4 * i]);

        /*
         * generate R + 1 round keys:
         */
        for (r = 0; r <= R; r++) {
@@ -568,24 +567,22 @@ static int anubis_setkey(struct crypto_tfm *tfm, const u8 *in_key,

        return 0;
 }

 static void anubis_crypt(u32 roundKey[ANUBIS_MAX_ROUNDS + 1][4],
-                        u8 *ciphertext, const u8 *plaintext, const int R)
+                        u8 *dst, const u8 *src, const int R)
 {
-       const __be32 *src = (const __be32 *)plaintext;
-       __be32 *dst = (__be32 *)ciphertext;
        int i, r;
        u32 state[4];
        u32 inter[4];

        /*
         * map plaintext block to cipher state (mu)
         * and add initial round key (sigma[K^0]):
         */
        for (i = 0; i < 4; i++)
-               state[i] = be32_to_cpu(src[i]) ^ roundKey[0][i];
+               state[i] = get_unaligned_be32(&src[4 * i]) ^ roundKey[0][i];

        /*
         * R - 1 full rounds:
         */
@@ -652,11 +649,11 @@ static void anubis_crypt(u32 roundKey[ANUBIS_MAX_ROUNDS + 1][4],

        /*
         * map cipher state to ciphertext block (mu^{-1}):
         */
        for (i = 0; i < 4; i++)
-               dst[i] = cpu_to_be32(inter[i]);
+               put_unaligned_be32(inter[i], &dst[4 * i]);
 }

 static void anubis_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
        struct anubis_ctx *ctx = crypto_tfm_ctx(tfm);
@@ -673,11 +670,10 @@ static struct crypto_alg anubis_alg = {
        .cra_name = "anubis",
        .cra_driver_name = "anubis-generic",
        .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
        .cra_blocksize = ANUBIS_BLOCK_SIZE,
        .cra_ctxsize = sizeof (struct anubis_ctx),
-       .cra_alignmask = 3,
        .cra_module = THIS_MODULE,
        .cra_u = { .cipher = {
        .cia_min_keysize = ANUBIS_MIN_KEY_SIZE,
        .cia_max_keysize = ANUBIS_MAX_KEY_SIZE,
        .cia_setkey = anubis_setkey,

From patchwork Sat Dec 7 19:57:46 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 2/8] crypto: aria - stop using cra_alignmask
Date: Sat, 7 Dec 2024 11:57:46 -0800
Message-ID: <20241207195752.87654-3-ebiggers@kernel.org>
In-Reply-To: <20241207195752.87654-1-ebiggers@kernel.org>
References: <20241207195752.87654-1-ebiggers@kernel.org>

Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most
CPUs, which can handle unaligned accesses efficiently, and brings us a
step closer to eventually removing support for the alignmask field.

Signed-off-by: Eric Biggers
---
 crypto/aria_generic.c | 37 +++++++++++++++++--------------------
 1 file changed, 17 insertions(+), 20 deletions(-)

diff --git a/crypto/aria_generic.c b/crypto/aria_generic.c
index d96dfc4fdde67..bd359d3313c22 100644
--- a/crypto/aria_generic.c
+++ b/crypto/aria_generic.c
@@ -13,10 +13,11 @@
  *
  * Public domain version is distributed above.
  */

 #include
+#include

 static const u32 key_rc[20] = {
        0x517cc1b7, 0x27220a94, 0xfe13abe8, 0xfa9a6ee0,
        0x6db14acc, 0x9e21c820, 0xff28b1d5, 0xef5de2b0,
        0xdb92371d, 0x2126e970, 0x03249775, 0x04e8c90e,
@@ -25,36 +26,35 @@ static const u32 key_rc[20] = {
 };

 static void aria_set_encrypt_key(struct aria_ctx *ctx, const u8 *in_key,
                                 unsigned int key_len)
 {
-       const __be32 *key = (const __be32 *)in_key;
        u32 w0[4], w1[4], w2[4], w3[4];
        u32 reg0, reg1, reg2, reg3;
        const u32 *ck;
        int rkidx = 0;

        ck = &key_rc[(key_len - 16) / 2];

-       w0[0] = be32_to_cpu(key[0]);
-       w0[1] = be32_to_cpu(key[1]);
-       w0[2] = be32_to_cpu(key[2]);
-       w0[3] = be32_to_cpu(key[3]);
+       w0[0] = get_unaligned_be32(&in_key[0]);
+       w0[1] = get_unaligned_be32(&in_key[4]);
+       w0[2] = get_unaligned_be32(&in_key[8]);
+       w0[3] = get_unaligned_be32(&in_key[12]);

        reg0 = w0[0] ^ ck[0];
        reg1 = w0[1] ^ ck[1];
        reg2 = w0[2] ^ ck[2];
        reg3 = w0[3] ^ ck[3];

        aria_subst_diff_odd(&reg0, &reg1, &reg2, &reg3);

        if (key_len > 16) {
-               w1[0] = be32_to_cpu(key[4]);
-               w1[1] = be32_to_cpu(key[5]);
+               w1[0] = get_unaligned_be32(&in_key[16]);
+               w1[1] = get_unaligned_be32(&in_key[20]);
                if (key_len > 24) {
-                       w1[2] = be32_to_cpu(key[6]);
-                       w1[3] = be32_to_cpu(key[7]);
+                       w1[2] = get_unaligned_be32(&in_key[24]);
+                       w1[3] = get_unaligned_be32(&in_key[28]);
                } else {
                        w1[2] = 0;
                        w1[3] = 0;
                }
        } else {
@@ -193,21 +193,19 @@ int aria_set_key(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len)
 EXPORT_SYMBOL_GPL(aria_set_key);

 static void __aria_crypt(struct aria_ctx *ctx, u8 *out, const u8 *in,
                         u32 key[][ARIA_RD_KEY_WORDS])
 {
-       const __be32 *src = (const __be32 *)in;
-       __be32 *dst = (__be32 *)out;
        u32 reg0, reg1, reg2, reg3;
        int rounds, rkidx = 0;

        rounds = ctx->rounds;

-       reg0 = be32_to_cpu(src[0]);
-       reg1 = be32_to_cpu(src[1]);
-       reg2 = be32_to_cpu(src[2]);
-       reg3 = be32_to_cpu(src[3]);
+       reg0 = get_unaligned_be32(&in[0]);
+       reg1 = get_unaligned_be32(&in[4]);
+       reg2 = get_unaligned_be32(&in[8]);
+       reg3 = get_unaligned_be32(&in[12]);

        aria_add_round_key(key[rkidx], &reg0, &reg1, &reg2, &reg3);
        rkidx++;

        aria_subst_diff_odd(&reg0, &reg1, &reg2, &reg3);
@@ -239,14 +237,14 @@ static void __aria_crypt(struct aria_ctx *ctx, u8 *out, const u8 *in,
        reg3 = key[rkidx][3] ^ make_u32((u8)(x1[get_u8(reg3, 0)]),
                                        (u8)(x2[get_u8(reg3, 1)] >> 8),
                                        (u8)(s1[get_u8(reg3, 2)]),
                                        (u8)(s2[get_u8(reg3, 3)]));

-       dst[0] = cpu_to_be32(reg0);
-       dst[1] = cpu_to_be32(reg1);
-       dst[2] = cpu_to_be32(reg2);
-       dst[3] = cpu_to_be32(reg3);
+       put_unaligned_be32(reg0, &out[0]);
+       put_unaligned_be32(reg1, &out[4]);
+       put_unaligned_be32(reg2, &out[8]);
+       put_unaligned_be32(reg3, &out[12]);
 }

 void aria_encrypt(void *_ctx, u8 *out, const u8 *in)
 {
        struct aria_ctx *ctx = (struct aria_ctx *)_ctx;
@@ -282,11 +280,10 @@ static struct crypto_alg aria_alg = {
        .cra_driver_name = "aria-generic",
        .cra_priority = 100,
        .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
        .cra_blocksize = ARIA_BLOCK_SIZE,
        .cra_ctxsize = sizeof(struct aria_ctx),
-       .cra_alignmask = 3,
        .cra_module = THIS_MODULE,
        .cra_u = {
                .cipher = {
                        .cia_min_keysize = ARIA_MIN_KEY_SIZE,
                        .cia_max_keysize = ARIA_MAX_KEY_SIZE,

From patchwork Sat Dec 7 19:57:47 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 3/8] crypto: tea - stop using cra_alignmask
Date: Sat, 7 Dec 2024 11:57:47 -0800
Message-ID: <20241207195752.87654-4-ebiggers@kernel.org>
In-Reply-To: <20241207195752.87654-1-ebiggers@kernel.org>
References: <20241207195752.87654-1-ebiggers@kernel.org>

Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most
CPUs, which can handle unaligned accesses efficiently, and brings us a
step closer to eventually removing support for the alignmask field.
Signed-off-by: Eric Biggers
---
 crypto/tea.c | 83 +++++++++++++++++++++-------------------
 1 file changed, 33 insertions(+), 50 deletions(-)

diff --git a/crypto/tea.c b/crypto/tea.c
index 896f863f3067c..b315da8c89ebc 100644
--- a/crypto/tea.c
+++ b/crypto/tea.c
@@ -16,11 +16,11 @@
 #include
 #include
 #include
 #include
-#include
+#include
 #include

 #define TEA_KEY_SIZE 16
 #define TEA_BLOCK_SIZE 8
 #define TEA_ROUNDS 32
@@ -41,31 +41,28 @@ struct xtea_ctx {

 static int tea_setkey(struct crypto_tfm *tfm, const u8 *in_key,
                      unsigned int key_len)
 {
        struct tea_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __le32 *key = (const __le32 *)in_key;

-       ctx->KEY[0] = le32_to_cpu(key[0]);
-       ctx->KEY[1] = le32_to_cpu(key[1]);
-       ctx->KEY[2] = le32_to_cpu(key[2]);
-       ctx->KEY[3] = le32_to_cpu(key[3]);
+       ctx->KEY[0] = get_unaligned_le32(&in_key[0]);
+       ctx->KEY[1] = get_unaligned_le32(&in_key[4]);
+       ctx->KEY[2] = get_unaligned_le32(&in_key[8]);
+       ctx->KEY[3] = get_unaligned_le32(&in_key[12]);

        return 0;
 }

 static void tea_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
        u32 y, z, n, sum = 0;
        u32 k0, k1, k2, k3;
        struct tea_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __le32 *in = (const __le32 *)src;
-       __le32 *out = (__le32 *)dst;

-       y = le32_to_cpu(in[0]);
-       z = le32_to_cpu(in[1]);
+       y = get_unaligned_le32(&src[0]);
+       z = get_unaligned_le32(&src[4]);

        k0 = ctx->KEY[0];
        k1 = ctx->KEY[1];
        k2 = ctx->KEY[2];
        k3 = ctx->KEY[3];
@@ -76,24 +73,22 @@ static void tea_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
                sum += TEA_DELTA;
                y += ((z << 4) + k0) ^ (z + sum) ^ ((z >> 5) + k1);
                z += ((y << 4) + k2) ^ (y + sum) ^ ((y >> 5) + k3);
        }

-       out[0] = cpu_to_le32(y);
-       out[1] = cpu_to_le32(z);
+       put_unaligned_le32(y, &dst[0]);
+       put_unaligned_le32(z, &dst[4]);
 }

 static void tea_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
        u32 y, z, n, sum;
        u32 k0, k1, k2, k3;
        struct tea_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __le32 *in = (const __le32 *)src;
-       __le32 *out = (__le32 *)dst;

-       y = le32_to_cpu(in[0]);
-       z = le32_to_cpu(in[1]);
+       y = get_unaligned_le32(&src[0]);
+       z = get_unaligned_le32(&src[4]);

        k0 = ctx->KEY[0];
        k1 = ctx->KEY[1];
        k2 = ctx->KEY[2];
        k3 = ctx->KEY[3];
@@ -106,123 +101,113 @@ static void tea_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
                z -= ((y << 4) + k2) ^ (y + sum) ^ ((y >> 5) + k3);
                y -= ((z << 4) + k0) ^ (z + sum) ^ ((z >> 5) + k1);
                sum -= TEA_DELTA;
        }

-       out[0] = cpu_to_le32(y);
-       out[1] = cpu_to_le32(z);
+       put_unaligned_le32(y, &dst[0]);
+       put_unaligned_le32(z, &dst[4]);
 }

 static int xtea_setkey(struct crypto_tfm *tfm, const u8 *in_key,
                       unsigned int key_len)
 {
        struct xtea_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __le32 *key = (const __le32 *)in_key;

-       ctx->KEY[0] = le32_to_cpu(key[0]);
-       ctx->KEY[1] = le32_to_cpu(key[1]);
-       ctx->KEY[2] = le32_to_cpu(key[2]);
-       ctx->KEY[3] = le32_to_cpu(key[3]);
+       ctx->KEY[0] = get_unaligned_le32(&in_key[0]);
+       ctx->KEY[1] = get_unaligned_le32(&in_key[4]);
+       ctx->KEY[2] = get_unaligned_le32(&in_key[8]);
+       ctx->KEY[3] = get_unaligned_le32(&in_key[12]);

        return 0;
 }

 static void xtea_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
        u32 y, z, sum = 0;
        u32 limit = XTEA_DELTA * XTEA_ROUNDS;
        struct xtea_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __le32 *in = (const __le32 *)src;
-       __le32 *out = (__le32 *)dst;

-       y = le32_to_cpu(in[0]);
-       z = le32_to_cpu(in[1]);
+       y = get_unaligned_le32(&src[0]);
+       z = get_unaligned_le32(&src[4]);

        while (sum != limit) {
                y += ((z << 4 ^ z >> 5) + z) ^ (sum + ctx->KEY[sum&3]);
                sum += XTEA_DELTA;
                z += ((y << 4 ^ y >> 5) + y) ^ (sum + ctx->KEY[sum>>11 &3]);
        }

-       out[0] = cpu_to_le32(y);
-       out[1] = cpu_to_le32(z);
+       put_unaligned_le32(y, &dst[0]);
+       put_unaligned_le32(z, &dst[4]);
 }

 static void xtea_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
        u32 y, z, sum;
        struct tea_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __le32 *in = (const __le32 *)src;
-       __le32 *out = (__le32 *)dst;

-       y = le32_to_cpu(in[0]);
-       z = le32_to_cpu(in[1]);
+       y = get_unaligned_le32(&src[0]);
+       z = get_unaligned_le32(&src[4]);

        sum = XTEA_DELTA * XTEA_ROUNDS;

        while (sum) {
                z -= ((y << 4 ^ y >> 5) + y) ^ (sum + ctx->KEY[sum>>11 & 3]);
                sum -= XTEA_DELTA;
                y -= ((z << 4 ^ z >> 5) + z) ^ (sum + ctx->KEY[sum & 3]);
        }

-       out[0] = cpu_to_le32(y);
-       out[1] = cpu_to_le32(z);
+       put_unaligned_le32(y, &dst[0]);
+       put_unaligned_le32(z, &dst[4]);
 }

 static void xeta_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
        u32 y, z, sum = 0;
        u32 limit = XTEA_DELTA * XTEA_ROUNDS;
        struct xtea_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __le32 *in = (const __le32 *)src;
-       __le32 *out = (__le32 *)dst;

-       y = le32_to_cpu(in[0]);
-       z = le32_to_cpu(in[1]);
+       y = get_unaligned_le32(&src[0]);
+       z = get_unaligned_le32(&src[4]);

        while (sum != limit) {
                y += (z << 4 ^ z >> 5) + (z ^ sum) + ctx->KEY[sum&3];
                sum += XTEA_DELTA;
                z += (y << 4 ^ y >> 5) + (y ^ sum) + ctx->KEY[sum>>11 &3];
        }

-       out[0] = cpu_to_le32(y);
-       out[1] = cpu_to_le32(z);
+       put_unaligned_le32(y, &dst[0]);
+       put_unaligned_le32(z, &dst[4]);
 }

 static void xeta_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
        u32 y, z, sum;
        struct tea_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __le32 *in = (const __le32 *)src;
-       __le32 *out = (__le32 *)dst;

-       y = le32_to_cpu(in[0]);
-       z = le32_to_cpu(in[1]);
+       y = get_unaligned_le32(&src[0]);
+       z = get_unaligned_le32(&src[4]);

        sum = XTEA_DELTA * XTEA_ROUNDS;

        while (sum) {
                z -= (y << 4 ^ y >> 5) + (y ^ sum) + ctx->KEY[sum>>11 & 3];
                sum -= XTEA_DELTA;
                y -= (z << 4 ^ z >> 5) + (z ^ sum) + ctx->KEY[sum & 3];
        }

-       out[0] = cpu_to_le32(y);
-       out[1] = cpu_to_le32(z);
+       put_unaligned_le32(y, &dst[0]);
+       put_unaligned_le32(z, &dst[4]);
 }

 static struct crypto_alg tea_algs[3] = { {
        .cra_name = "tea",
        .cra_driver_name = "tea-generic",
        .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
        .cra_blocksize = TEA_BLOCK_SIZE,
        .cra_ctxsize = sizeof (struct tea_ctx),
-       .cra_alignmask = 3,
        .cra_module = THIS_MODULE,
        .cra_u = { .cipher = {
        .cia_min_keysize = TEA_KEY_SIZE,
        .cia_max_keysize = TEA_KEY_SIZE,
        .cia_setkey = tea_setkey,
@@ -232,11 +217,10 @@ static struct crypto_alg tea_algs[3] = { {
        .cra_name = "xtea",
        .cra_driver_name = "xtea-generic",
        .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
        .cra_blocksize = XTEA_BLOCK_SIZE,
        .cra_ctxsize = sizeof (struct xtea_ctx),
-       .cra_alignmask = 3,
        .cra_module = THIS_MODULE,
        .cra_u = { .cipher = {
        .cia_min_keysize = XTEA_KEY_SIZE,
        .cia_max_keysize = XTEA_KEY_SIZE,
        .cia_setkey = xtea_setkey,
@@ -246,11 +230,10 @@ static struct crypto_alg tea_algs[3] = { {
        .cra_name = "xeta",
        .cra_driver_name = "xeta-generic",
        .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
        .cra_blocksize = XTEA_BLOCK_SIZE,
        .cra_ctxsize = sizeof (struct xtea_ctx),
-       .cra_alignmask = 3,
        .cra_module = THIS_MODULE,
        .cra_u = { .cipher = {
        .cia_min_keysize = XTEA_KEY_SIZE,
        .cia_max_keysize = XTEA_KEY_SIZE,
        .cia_setkey = xtea_setkey,

From patchwork Sat Dec 7 19:57:48 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 4/8] crypto: khazad - stop using cra_alignmask
Date: Sat, 7 Dec 2024 11:57:48 -0800
Message-ID: <20241207195752.87654-5-ebiggers@kernel.org>
In-Reply-To: <20241207195752.87654-1-ebiggers@kernel.org>
References: <20241207195752.87654-1-ebiggers@kernel.org>

Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most
CPUs, which can handle unaligned accesses efficiently, and brings us a
step closer to eventually removing support for the alignmask field.
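One detail specific to this patch: the old code assembled each 64-bit
key half from two 32-bit big-endian loads, while the new code uses a
single get_unaligned_be64(). A sketch of the equivalence follows; it
is illustrative only, the load_half_*() names are made up, and the
<linux/unaligned.h> header location is assumed.

#include <linux/unaligned.h>

/* Old approach: requires an aligned key buffer. */
static u64 load_half_old(const u8 *key)
{
        const __be32 *p = (const __be32 *)key;

        /* Bytes 0..3 become the high 32 bits, bytes 4..7 the low. */
        return ((u64)be32_to_cpu(p[0]) << 32) | be32_to_cpu(p[1]);
}

/* New approach: one big-endian 8-byte load, any alignment. */
static u64 load_half_new(const u8 *key)
{
        return get_unaligned_be64(key);
}

Both functions return the same value for the same 8 key bytes, since a
big-endian 64-bit load also puts byte 0 in the most significant
position; only the alignment requirement goes away.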
Signed-off-by: Eric Biggers
---
 crypto/khazad.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/crypto/khazad.c b/crypto/khazad.c
index 70cafe73f9740..7ad338ca2c18f 100644
--- a/crypto/khazad.c
+++ b/crypto/khazad.c
@@ -21,11 +21,11 @@
 #include
 #include
 #include
 #include
-#include
+#include
 #include

 #define KHAZAD_KEY_SIZE 16
 #define KHAZAD_BLOCK_SIZE 8
 #define KHAZAD_ROUNDS 8
@@ -755,18 +755,16 @@ static const u64 c[KHAZAD_ROUNDS + 1] = {
 static int khazad_setkey(struct crypto_tfm *tfm, const u8 *in_key,
                         unsigned int key_len)
 {
        struct khazad_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __be32 *key = (const __be32 *)in_key;
        int r;
        const u64 *S = T7;
        u64 K2, K1;

-       /* key is supposed to be 32-bit aligned */
-       K2 = ((u64)be32_to_cpu(key[0]) << 32) | be32_to_cpu(key[1]);
-       K1 = ((u64)be32_to_cpu(key[2]) << 32) | be32_to_cpu(key[3]);
+       K2 = get_unaligned_be64(&in_key[0]);
+       K1 = get_unaligned_be64(&in_key[8]);

        /* setup the encrypt key */
        for (r = 0; r <= KHAZAD_ROUNDS; r++) {
                ctx->E[r] = T0[(int)(K1 >> 56)       ] ^
                            T1[(int)(K1 >> 48) & 0xff] ^
@@ -798,18 +796,16 @@ static int khazad_setkey(struct crypto_tfm *tfm, const u8 *in_key,

        return 0;
 }

 static void khazad_crypt(const u64 roundKey[KHAZAD_ROUNDS + 1],
-                        u8 *ciphertext, const u8 *plaintext)
+                        u8 *dst, const u8 *src)
 {
-       const __be64 *src = (const __be64 *)plaintext;
-       __be64 *dst = (__be64 *)ciphertext;
        int r;
        u64 state;

-       state = be64_to_cpu(*src) ^ roundKey[0];
+       state = get_unaligned_be64(src) ^ roundKey[0];

        for (r = 1; r < KHAZAD_ROUNDS; r++) {
                state = T0[(int)(state >> 56)       ] ^
                        T1[(int)(state >> 48) & 0xff] ^
                        T2[(int)(state >> 40) & 0xff] ^
@@ -829,11 +825,11 @@ static void khazad_crypt(const u64 roundKey[KHAZAD_ROUNDS + 1],
                (T5[(int)(state >> 16) & 0xff] & 0x0000000000ff0000ULL) ^
                (T6[(int)(state >>  8) & 0xff] & 0x000000000000ff00ULL) ^
                (T7[(int)(state      ) & 0xff] & 0x00000000000000ffULL) ^
                roundKey[KHAZAD_ROUNDS];

-       *dst = cpu_to_be64(state);
+       put_unaligned_be64(state, dst);
 }

 static void khazad_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
        struct khazad_ctx *ctx = crypto_tfm_ctx(tfm);
@@ -850,11 +846,10 @@ static struct crypto_alg khazad_alg = {
        .cra_name = "khazad",
        .cra_driver_name = "khazad-generic",
        .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
        .cra_blocksize = KHAZAD_BLOCK_SIZE,
        .cra_ctxsize = sizeof (struct khazad_ctx),
-       .cra_alignmask = 7,
        .cra_module = THIS_MODULE,
        .cra_u = { .cipher = {
        .cia_min_keysize = KHAZAD_KEY_SIZE,
        .cia_max_keysize = KHAZAD_KEY_SIZE,
        .cia_setkey = khazad_setkey,

From patchwork Sat Dec 7 19:57:49 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 5/8] crypto: seed - stop using cra_alignmask
Date: Sat, 7 Dec 2024 11:57:49 -0800
Message-ID: <20241207195752.87654-6-ebiggers@kernel.org>
In-Reply-To: <20241207195752.87654-1-ebiggers@kernel.org>
References: <20241207195752.87654-1-ebiggers@kernel.org>

Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most
CPUs, which can handle unaligned accesses efficiently, and brings us a
step closer to eventually removing support for the alignmask field.
Signed-off-by: Eric Biggers
---
 crypto/seed.c | 48 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 27 deletions(-)

diff --git a/crypto/seed.c b/crypto/seed.c
index d0506ade2a5f8..d05d8ed909fa7 100644
--- a/crypto/seed.c
+++ b/crypto/seed.c
@@ -11,11 +11,11 @@
 #include
 #include
 #include
 #include
 #include
-#include
+#include

 #define SEED_NUM_KCONSTANTS 16
 #define SEED_KEY_SIZE 16
 #define SEED_BLOCK_SIZE 16
 #define SEED_KEYSCHED_LEN 32
@@ -327,17 +327,16 @@ static const u32 KC[SEED_NUM_KCONSTANTS] = {
 static int seed_set_key(struct crypto_tfm *tfm, const u8 *in_key,
                        unsigned int key_len)
 {
        struct seed_ctx *ctx = crypto_tfm_ctx(tfm);
        u32 *keyout = ctx->keysched;
-       const __be32 *key = (const __be32 *)in_key;
        u32 i, t0, t1, x1, x2, x3, x4;

-       x1 = be32_to_cpu(key[0]);
-       x2 = be32_to_cpu(key[1]);
-       x3 = be32_to_cpu(key[2]);
-       x4 = be32_to_cpu(key[3]);
+       x1 = get_unaligned_be32(&in_key[0]);
+       x2 = get_unaligned_be32(&in_key[4]);
+       x3 = get_unaligned_be32(&in_key[8]);
+       x4 = get_unaligned_be32(&in_key[12]);

        for (i = 0; i < SEED_NUM_KCONSTANTS; i++) {
                t0 = x1 + x3 - KC[i];
                t1 = x2 + KC[i] - x4;
                *(keyout++) = SS0[byte(t0, 0)] ^ SS1[byte(t0, 1)] ^
@@ -362,19 +361,17 @@ static int seed_set_key(struct crypto_tfm *tfm, const u8 *in_key,

 /* encrypt a block of text */
 static void seed_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
        const struct seed_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __be32 *src = (const __be32 *)in;
-       __be32 *dst = (__be32 *)out;
        u32 x1, x2, x3, x4, t0, t1;
        const u32 *ks = ctx->keysched;

-       x1 = be32_to_cpu(src[0]);
-       x2 = be32_to_cpu(src[1]);
-       x3 = be32_to_cpu(src[2]);
-       x4 = be32_to_cpu(src[3]);
+       x1 = get_unaligned_be32(&in[0]);
+       x2 = get_unaligned_be32(&in[4]);
+       x3 = get_unaligned_be32(&in[8]);
+       x4 = get_unaligned_be32(&in[12]);

        OP(x1, x2, x3, x4, 0);
        OP(x3, x4, x1, x2, 2);
        OP(x1, x2, x3, x4, 4);
        OP(x3, x4, x1, x2, 6);
@@ -389,30 +386,28 @@ static void seed_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
        OP(x1, x2, x3, x4, 24);
        OP(x3, x4, x1, x2, 26);
        OP(x1, x2, x3, x4, 28);
        OP(x3, x4, x1, x2, 30);

-       dst[0] = cpu_to_be32(x3);
-       dst[1] = cpu_to_be32(x4);
-       dst[2] = cpu_to_be32(x1);
-       dst[3] = cpu_to_be32(x2);
+       put_unaligned_be32(x3, &out[0]);
+       put_unaligned_be32(x4, &out[4]);
+       put_unaligned_be32(x1, &out[8]);
+       put_unaligned_be32(x2, &out[12]);
 }

 /* decrypt a block of text */
 static void seed_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
        const struct seed_ctx *ctx = crypto_tfm_ctx(tfm);
-       const __be32 *src = (const __be32 *)in;
-       __be32 *dst = (__be32 *)out;
        u32 x1, x2, x3, x4, t0, t1;
        const u32 *ks = ctx->keysched;

-       x1 = be32_to_cpu(src[0]);
-       x2 = be32_to_cpu(src[1]);
-       x3 = be32_to_cpu(src[2]);
-       x4 = be32_to_cpu(src[3]);
+       x1 = get_unaligned_be32(&in[0]);
+       x2 = get_unaligned_be32(&in[4]);
+       x3 = get_unaligned_be32(&in[8]);
+       x4 = get_unaligned_be32(&in[12]);

        OP(x1, x2, x3, x4, 30);
        OP(x3, x4, x1, x2, 28);
        OP(x1, x2, x3, x4, 26);
        OP(x3, x4, x1, x2, 24);
@@ -427,25 +422,24 @@ static void seed_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
        OP(x1, x2, x3, x4, 6);
        OP(x3, x4, x1, x2, 4);
        OP(x1, x2, x3, x4, 2);
        OP(x3, x4, x1, x2, 0);

-       dst[0] = cpu_to_be32(x3);
-       dst[1] = cpu_to_be32(x4);
-       dst[2] = cpu_to_be32(x1);
-       dst[3] = cpu_to_be32(x2);
+       put_unaligned_be32(x3, &out[0]);
+       put_unaligned_be32(x4, &out[4]);
+       put_unaligned_be32(x1, &out[8]);
+       put_unaligned_be32(x2, &out[12]);
 }

 static struct crypto_alg seed_alg = {
        .cra_name = "seed",
        .cra_driver_name = "seed-generic",
        .cra_priority = 100,
        .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
        .cra_blocksize = SEED_BLOCK_SIZE,
        .cra_ctxsize = sizeof(struct seed_ctx),
-       .cra_alignmask = 3,
        .cra_module = THIS_MODULE,
        .cra_u = { .cipher = {
        .cia_min_keysize = SEED_KEY_SIZE,
        .cia_max_keysize = SEED_KEY_SIZE,

From patchwork Sat Dec 7 19:57:50 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 6/8] crypto: x86 - remove assignments of 0 to cra_alignmask
Date: Sat, 7 Dec 2024 11:57:50 -0800
Message-ID: <20241207195752.87654-7-ebiggers@kernel.org>
In-Reply-To: <20241207195752.87654-1-ebiggers@kernel.org>
References: <20241207195752.87654-1-ebiggers@kernel.org>

Struct fields are zero by default, so these lines of code have no
effect. Remove them to reduce the number of matches that are found
when grepping for cra_alignmask.
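Background for the "zero by default" claim, as a small illustration
(the struct here is hypothetical; the language rule is standard C):
any field omitted from the initializer of an object with static
storage duration is zero-initialized, so an explicit 0 changes
nothing. The same holds for the keywrap case later in the series,
where the instance is allocated zeroed before its fields are assigned.

/* Hypothetical struct, for illustration only. */
struct example_alg {
        int cra_priority;
        unsigned int cra_alignmask;
};

/* These two definitions produce identical objects: */
static struct example_alg a = { .cra_priority = 200 };
static struct example_alg b = {
        .cra_priority = 200,
        .cra_alignmask = 0,     /* redundant: already zero */
};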
Signed-off-by: Eric Biggers
---
 arch/x86/crypto/aegis128-aesni-glue.c | 1 -
 arch/x86/crypto/blowfish_glue.c       | 1 -
 arch/x86/crypto/camellia_glue.c       | 1 -
 arch/x86/crypto/des3_ede_glue.c       | 1 -
 arch/x86/crypto/twofish_glue.c        | 1 -
 5 files changed, 5 deletions(-)

diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c
index c19d8e3d96a35..01fa568dc5fc4 100644
--- a/arch/x86/crypto/aegis128-aesni-glue.c
+++ b/arch/x86/crypto/aegis128-aesni-glue.c
@@ -238,11 +238,10 @@ static struct aead_alg crypto_aegis128_aesni_alg = {
        .base = {
                .cra_flags = CRYPTO_ALG_INTERNAL,
                .cra_blocksize = 1,
                .cra_ctxsize = sizeof(struct aegis_ctx) +
                               __alignof__(struct aegis_ctx),
-               .cra_alignmask = 0,
                .cra_priority = 400,

                .cra_name = "__aegis128",
                .cra_driver_name = "__aegis128-aesni",
diff --git a/arch/x86/crypto/blowfish_glue.c b/arch/x86/crypto/blowfish_glue.c
index 552f2df0643f2..26c5f2ee5d103 100644
--- a/arch/x86/crypto/blowfish_glue.c
+++ b/arch/x86/crypto/blowfish_glue.c
@@ -92,11 +92,10 @@ static struct crypto_alg bf_cipher_alg = {
        .cra_driver_name = "blowfish-asm",
        .cra_priority = 200,
        .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
        .cra_blocksize = BF_BLOCK_SIZE,
        .cra_ctxsize = sizeof(struct bf_ctx),
-       .cra_alignmask = 0,
        .cra_module = THIS_MODULE,
        .cra_u = {
                .cipher = {
                        .cia_min_keysize = BF_MIN_KEY_SIZE,
                        .cia_max_keysize = BF_MAX_KEY_SIZE,
diff --git a/arch/x86/crypto/camellia_glue.c b/arch/x86/crypto/camellia_glue.c
index f110708c8038c..3bd37d6641216 100644
--- a/arch/x86/crypto/camellia_glue.c
+++ b/arch/x86/crypto/camellia_glue.c
@@ -1311,11 +1311,10 @@ static struct crypto_alg camellia_cipher_alg = {
        .cra_driver_name = "camellia-asm",
        .cra_priority = 200,
        .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
        .cra_blocksize = CAMELLIA_BLOCK_SIZE,
        .cra_ctxsize = sizeof(struct camellia_ctx),
-       .cra_alignmask = 0,
        .cra_module = THIS_MODULE,
        .cra_u = {
                .cipher = {
                        .cia_min_keysize = CAMELLIA_MIN_KEY_SIZE,
                        .cia_max_keysize = CAMELLIA_MAX_KEY_SIZE,
diff --git a/arch/x86/crypto/des3_ede_glue.c b/arch/x86/crypto/des3_ede_glue.c
index abb8b1fe123b4..e88439d3828ea 100644
--- a/arch/x86/crypto/des3_ede_glue.c
+++ b/arch/x86/crypto/des3_ede_glue.c
@@ -289,11 +289,10 @@ static struct crypto_alg des3_ede_cipher = {
        .cra_driver_name = "des3_ede-asm",
        .cra_priority = 200,
        .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
        .cra_blocksize = DES3_EDE_BLOCK_SIZE,
        .cra_ctxsize = sizeof(struct des3_ede_x86_ctx),
-       .cra_alignmask = 0,
        .cra_module = THIS_MODULE,
        .cra_u = {
                .cipher = {
                        .cia_min_keysize = DES3_EDE_KEY_SIZE,
                        .cia_max_keysize = DES3_EDE_KEY_SIZE,
diff --git a/arch/x86/crypto/twofish_glue.c b/arch/x86/crypto/twofish_glue.c
index 0614beece2793..4c67184dc573e 100644
--- a/arch/x86/crypto/twofish_glue.c
+++ b/arch/x86/crypto/twofish_glue.c
@@ -66,11 +66,10 @@ static struct crypto_alg alg = {
        .cra_driver_name = "twofish-asm",
        .cra_priority = 200,
        .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
        .cra_blocksize = TF_BLOCK_SIZE,
        .cra_ctxsize = sizeof(struct twofish_ctx),
-       .cra_alignmask = 0,
        .cra_module = THIS_MODULE,
        .cra_u = {
                .cipher = {
                        .cia_min_keysize = TF_MIN_KEY_SIZE,
                        .cia_max_keysize = TF_MAX_KEY_SIZE,

From patchwork Sat Dec 7 19:57:51 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 7/8] crypto: aegis - remove assignments of 0 to cra_alignmask
Date: Sat, 7 Dec 2024 11:57:51 -0800
Message-ID: <20241207195752.87654-8-ebiggers@kernel.org>
In-Reply-To: <20241207195752.87654-1-ebiggers@kernel.org>
References: <20241207195752.87654-1-ebiggers@kernel.org>

Struct fields are zero by default, so these lines of code have no
effect. Remove them to reduce the number of matches that are found
when grepping for cra_alignmask.
Signed-off-by: Eric Biggers
---
 crypto/aegis128-core.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/crypto/aegis128-core.c b/crypto/aegis128-core.c
index 4fdb53435827e..6cbff298722b4 100644
--- a/crypto/aegis128-core.c
+++ b/crypto/aegis128-core.c
@@ -514,11 +514,10 @@ static struct aead_alg crypto_aegis128_alg_generic = {
        .maxauthsize = AEGIS128_MAX_AUTH_SIZE,
        .chunksize = AEGIS_BLOCK_SIZE,

        .base.cra_blocksize = 1,
        .base.cra_ctxsize = sizeof(struct aegis_ctx),
-       .base.cra_alignmask = 0,
        .base.cra_priority = 100,
        .base.cra_name = "aegis128",
        .base.cra_driver_name = "aegis128-generic",
        .base.cra_module = THIS_MODULE,
 };
@@ -533,11 +532,10 @@ static struct aead_alg crypto_aegis128_alg_simd = {
        .maxauthsize = AEGIS128_MAX_AUTH_SIZE,
        .chunksize = AEGIS_BLOCK_SIZE,

        .base.cra_blocksize = 1,
        .base.cra_ctxsize = sizeof(struct aegis_ctx),
-       .base.cra_alignmask = 0,
        .base.cra_priority = 200,
        .base.cra_name = "aegis128",
        .base.cra_driver_name = "aegis128-simd",
        .base.cra_module = THIS_MODULE,
 };

From patchwork Sat Dec 7 19:57:52 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 8/8] crypto: keywrap - remove assignment of 0 to cra_alignmask
Date: Sat, 7 Dec 2024 11:57:52 -0800
Message-ID: <20241207195752.87654-9-ebiggers@kernel.org>
In-Reply-To: <20241207195752.87654-1-ebiggers@kernel.org>
References: <20241207195752.87654-1-ebiggers@kernel.org>

Since this code is zero-initializing the algorithm struct, the
assignment of 0 to cra_alignmask is redundant. Remove it to reduce
the number of matches that are found when grepping for cra_alignmask.

Signed-off-by: Eric Biggers
---
 crypto/keywrap.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/crypto/keywrap.c b/crypto/keywrap.c
index 385ffdfd5a9b4..5ec4f94d46bd0 100644
--- a/crypto/keywrap.c
+++ b/crypto/keywrap.c
@@ -277,11 +277,10 @@ static int crypto_kw_create(struct crypto_template *tmpl, struct rtattr **tb)
        /* Section 5.1 requirement for KW */
        if (alg->cra_blocksize != sizeof(struct crypto_kw_block))
                goto out_free_inst;

        inst->alg.base.cra_blocksize = SEMIBSIZE;
-       inst->alg.base.cra_alignmask = 0;
        inst->alg.ivsize = SEMIBSIZE;

        inst->alg.encrypt = crypto_kw_encrypt;
        inst->alg.decrypt = crypto_kw_decrypt;