From patchwork Mon Oct 24 06:30:50 2022
X-Patchwork-Submitter: Ard Biesheuvel <ardb@kernel.org>
X-Patchwork-Id: 618372
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, keescook@chromium.org,
 jason@zx2c4.com, nikunj@amd.com, Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH v4 1/3] crypto: move gf128mul library into lib/crypto
Date: Mon, 24 Oct 2022 08:30:50 +0200
Message-Id: <20221024063052.109148-2-ardb@kernel.org>
In-Reply-To: <20221024063052.109148-1-ardb@kernel.org>
References: <20221024063052.109148-1-ardb@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

The gf128mul library does not depend on the crypto API at all, so it
can be moved into lib/crypto.
This will allow us to use it in other
library code in a subsequent patch without having to depend on
CONFIG_CRYPTO.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/Kconfig                    | 3 ---
 crypto/Makefile                   | 1 -
 lib/crypto/Kconfig                | 3 +++
 lib/crypto/Makefile               | 2 ++
 {crypto => lib/crypto}/gf128mul.c | 0
 5 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 2589ad5357df..a466beb64a69 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -175,9 +175,6 @@ config CRYPTO_MANAGER_EXTRA_TESTS
 	  This is intended for developer use only, as these tests take much
 	  longer to run than the normal self tests.
 
-config CRYPTO_GF128MUL
-	tristate
-
 config CRYPTO_NULL
 	tristate "Null algorithms"
 	select CRYPTO_NULL2
diff --git a/crypto/Makefile b/crypto/Makefile
index 303b21c43df0..d0126c915834 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -85,7 +85,6 @@ obj-$(CONFIG_CRYPTO_WP512) += wp512.o
 CFLAGS_wp512.o := $(call cc-option,-fno-schedule-insns)        # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
 obj-$(CONFIG_CRYPTO_BLAKE2B) += blake2b_generic.o
 CFLAGS_blake2b_generic.o := -Wframe-larger-than=4096 #  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105930
-obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
 obj-$(CONFIG_CRYPTO_ECB) += ecb.o
 obj-$(CONFIG_CRYPTO_CBC) += cbc.o
 obj-$(CONFIG_CRYPTO_CFB) += cfb.o
diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
index 7e9683e9f5c6..2a4b57779fd7 100644
--- a/lib/crypto/Kconfig
+++ b/lib/crypto/Kconfig
@@ -8,6 +8,9 @@ config CRYPTO_LIB_UTILS
 config CRYPTO_LIB_AES
 	tristate
 
+config CRYPTO_GF128MUL
+	tristate
+
 config CRYPTO_LIB_ARC4
 	tristate
 
diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile
index c852f067ab06..60bb566eed78 100644
--- a/lib/crypto/Makefile
+++ b/lib/crypto/Makefile
@@ -10,6 +10,8 @@ obj-$(CONFIG_CRYPTO_LIB_CHACHA_GENERIC)	+= libchacha.o
 obj-$(CONFIG_CRYPTO_LIB_AES)			+= libaes.o
 libaes-y					:= aes.o
 
+obj-$(CONFIG_CRYPTO_GF128MUL)			+= gf128mul.o
+
 obj-$(CONFIG_CRYPTO_LIB_ARC4)			+= libarc4.o
 libarc4-y					:= arc4.o
 
diff --git a/crypto/gf128mul.c b/lib/crypto/gf128mul.c
similarity index 100%
rename from crypto/gf128mul.c
rename to lib/crypto/gf128mul.c
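As an aside, the kind of CONFIG_CRYPTO-free user that this move enables
looks roughly like the sketch below. This is illustrative only and not
part of the series: the ghash_fold_block() helper is made up, while
gf128mul_lle() and the be128 type are the actual library interface
declared in <crypto/gf128mul.h> (and visible in the next patch).

/* Hypothetical lib/crypto user: fold one 16-byte block into a GHASH digest. */
#include <crypto/gf128mul.h>	/* be128, gf128mul_lle() */
#include <linux/types.h>

static void ghash_fold_block(be128 *digest, const be128 *key,
			     const u8 block[16])
{
	int i;

	/* GHASH folds each block as: digest = (digest ^ block) * H */
	for (i = 0; i < 16; i++)
		((u8 *)digest)[i] ^= block[i];

	gf128mul_lle(digest, key);	/* multiply by the hash key H */
}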
From patchwork Mon Oct 24 06:30:51 2022
X-Patchwork-Submitter: Ard Biesheuvel <ardb@kernel.org>
X-Patchwork-Id: 617986
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, keescook@chromium.org,
 jason@zx2c4.com, nikunj@amd.com, Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH v4 2/3] crypto: gf128mul - make gf128mul_lle time invariant
Date: Mon, 24 Oct 2022 08:30:51 +0200
Message-Id: <20221024063052.109148-3-ardb@kernel.org>
In-Reply-To: <20221024063052.109148-1-ardb@kernel.org>
References: <20221024063052.109148-1-ardb@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

The gf128mul library has different variants with different
memory/performance tradeoffs, where the faster ones use 4k or 64k lookup
tables precomputed at runtime, based on one of the multiplication
factors, which is commonly the key for keyed hash algorithms such as
GHASH.

The slowest variant is gf128mul_lle() [and its bbe/ble counterparts],
which does not use precomputed lookup tables. However, it still relies on
a single u16[256] lookup table whose contents are input independent, but
which is indexed by values derived from the input. Such data dependent
table lookups may cause the execution time of gf128mul_lle() to correlate
with the value of the inputs, which is generally something that must be
avoided for cryptographic algorithms. On top of that, the function uses a
sequence of if () statements that conditionally invoke be128_xor() based
on which bits are set in the second argument of the function, which is
usually a pointer to the multiplication factor that represents the key.

In order to remove the correlation between the execution time of
gf128mul_lle() and the value of its inputs, let's address the identified
shortcomings:

- add a time invariant version of gf128mul_x8_lle() that replaces the
  table lookup with the expression that is used at compile time to
  populate the lookup table;

- make the invocations of be128_xor() unconditional, but pass a zero
  vector as the third argument if the associated bit in the key is
  cleared.

The resulting code is likely to be significantly slower.
However, given that this
is the slowest version already, making it even slower in order to make it
more secure is assumed to be justified.

The bbe and ble counterparts could receive the same treatment, but the
former is never used anywhere in the kernel, and the latter is only used
in the driver for an asynchronous crypto h/w accelerator (Chelsio), where
timing variances are unlikely to matter.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 lib/crypto/gf128mul.c | 58 +++++++++++++-------
 1 file changed, 39 insertions(+), 19 deletions(-)

diff --git a/lib/crypto/gf128mul.c b/lib/crypto/gf128mul.c
index a69ae3e6c16c..cd0cb1d3bf36 100644
--- a/lib/crypto/gf128mul.c
+++ b/lib/crypto/gf128mul.c
@@ -146,6 +146,17 @@ static void gf128mul_x8_lle(be128 *x)
 	x->a = cpu_to_be64((a >> 8) ^ (_tt << 48));
 }
 
+/* time invariant version of gf128mul_x8_lle */
+static void gf128mul_x8_lle_ti(be128 *x)
+{
+	u64 a = be64_to_cpu(x->a);
+	u64 b = be64_to_cpu(x->b);
+	u64 _tt = xda_le(b & 0xff);	/* avoid table lookup */
+
+	x->b = cpu_to_be64((b >> 8) | (a << 56));
+	x->a = cpu_to_be64((a >> 8) ^ (_tt << 48));
+}
+
 static void gf128mul_x8_bbe(be128 *x)
 {
 	u64 a = be64_to_cpu(x->a);
@@ -169,38 +180,47 @@ EXPORT_SYMBOL(gf128mul_x8_ble);
 
 void gf128mul_lle(be128 *r, const be128 *b)
 {
-	be128 p[8];
+	/*
+	 * The p array should be aligned to twice the size of its element type,
+	 * so that every even/odd pair is guaranteed to share a cacheline
+	 * (assuming a cacheline size of 32 bytes or more, which is by far the
+	 * most common). This ensures that each be128_xor() call in the loop
+	 * takes the same amount of time regardless of the value of 'ch', which
+	 * is derived from function parameter 'b', which is commonly used as a
+	 * key, e.g., for GHASH. The odd array elements are all set to zero,
+	 * making each be128_xor() a NOP if its associated bit in 'ch' is not
+	 * set, and this is equivalent to calling be128_xor() conditionally.
+	 * This approach aims to avoid leaking information about such keys
+	 * through execution time variances.
+	 *
+	 * Unfortunately, __aligned(16) or higher does not work on x86 for
+	 * variables on the stack, so we need to perform the alignment by hand.
+	 */
+	be128 array[16 + 3] = {};
+	be128 *p = PTR_ALIGN(&array[0], 2 * sizeof(be128));
 	int i;
 
 	p[0] = *r;
 	for (i = 0; i < 7; ++i)
-		gf128mul_x_lle(&p[i + 1], &p[i]);
+		gf128mul_x_lle(&p[2 * i + 2], &p[2 * i]);
 
 	memset(r, 0, sizeof(*r));
 	for (i = 0;;) {
 		u8 ch = ((u8 *)b)[15 - i];
 
-		if (ch & 0x80)
-			be128_xor(r, r, &p[0]);
-		if (ch & 0x40)
-			be128_xor(r, r, &p[1]);
-		if (ch & 0x20)
-			be128_xor(r, r, &p[2]);
-		if (ch & 0x10)
-			be128_xor(r, r, &p[3]);
-		if (ch & 0x08)
-			be128_xor(r, r, &p[4]);
-		if (ch & 0x04)
-			be128_xor(r, r, &p[5]);
-		if (ch & 0x02)
-			be128_xor(r, r, &p[6]);
-		if (ch & 0x01)
-			be128_xor(r, r, &p[7]);
+		be128_xor(r, r, &p[ 0 + !(ch & 0x80)]);
+		be128_xor(r, r, &p[ 2 + !(ch & 0x40)]);
+		be128_xor(r, r, &p[ 4 + !(ch & 0x20)]);
+		be128_xor(r, r, &p[ 6 + !(ch & 0x10)]);
+		be128_xor(r, r, &p[ 8 + !(ch & 0x08)]);
+		be128_xor(r, r, &p[10 + !(ch & 0x04)]);
+		be128_xor(r, r, &p[12 + !(ch & 0x02)]);
+		be128_xor(r, r, &p[14 + !(ch & 0x01)]);
 
 		if (++i >= 16)
 			break;
 
-		gf128mul_x8_lle(r);
+		gf128mul_x8_lle_ti(r);	/* use the time invariant version */
 	}
 }
 EXPORT_SYMBOL(gf128mul_lle);
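To see why the even/odd layout above removes the data dependent branches,
consider the stand-alone sketch below (plain user-space C with made-up
values, not part of the patch): a cleared selector bit simply steers each
unconditional XOR at the all-zero neighbour of the live operand, so the
same instruction sequence executes for every value of the key byte.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Even slots hold live operands, odd slots stay all-zero,
	 * mirroring the p[] layout in gf128mul_lle() above. */
	uint64_t pairs[4] = { 0xdeadbeef, 0, 0xcafef00d, 0 };
	uint64_t acc = 0;
	uint8_t ch = 0x02;	/* selector bits, e.g. one byte of the key */

	/* !(ch & mask) is 0 when the bit is set (pick the live slot) and
	 * 1 when it is cleared (pick the zero slot): no branch needed. */
	acc ^= pairs[0 + !(ch & 0x02)];
	acc ^= pairs[2 + !(ch & 0x01)];

	printf("%llx\n", (unsigned long long)acc);	/* prints deadbeef */
	return 0;
}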
From patchwork Mon Oct 24 06:30:52 2022
X-Patchwork-Submitter: Ard Biesheuvel <ardb@kernel.org>
X-Patchwork-Id: 618371
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, keescook@chromium.org,
 jason@zx2c4.com, nikunj@amd.com, Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH v4 3/3] crypto: aesgcm - Provide minimal library implementation
Date: Mon, 24 Oct 2022 08:30:52 +0200
Message-Id: <20221024063052.109148-4-ardb@kernel.org>
In-Reply-To: <20221024063052.109148-1-ardb@kernel.org>
References: <20221024063052.109148-1-ardb@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Implement a minimal library version of AES-GCM based on the existing
library implementations of AES and multiplication in GF(2^128). Using
these primitives, GCM can be implemented in a straightforward manner.

GCM has a couple of sharp edges, i.e., the amount of input data processed
with the same initialization vector (IV) should be capped to protect the
counter from 32-bit rollover (or carry), and the size of the
authentication tag should be fixed for a given key. [0]

The former concern is addressed trivially, given that the function call
API uses 32-bit signed types for the input lengths. It is still up to the
caller to avoid IV reuse in general, but this is not something we can
police at the implementation level.

As for the latter concern, let's make the authentication tag size part of
the key schedule, and only permit it to be configured as part of the key
expansion routine.

Note that table based AES implementations are susceptible to known
plaintext timing attacks on the encryption key. The AES library already
attempts to mitigate this to some extent, but given that the counter mode
encryption used by GCM operates exclusively on known plaintext by
construction (the IV and therefore the initial counter value are known to
an attacker), let's take some extra care to mitigate this, by calling the
AES library with interrupts disabled.

[0] https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-38d.pdf
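For reference, a caller of the resulting interface ends up looking
roughly like the sketch below. This is hypothetical code, not part of the
patch: the seal_and_check() helper, the random IV and the 16-byte tag are
illustrative choices, while the aesgcm_*() signatures are the ones
declared in <crypto/gcm.h> by the diff that follows.

#include <crypto/gcm.h>		/* aesgcm_ctx, aesgcm_*(), GCM_AES_IV_SIZE */
#include <linux/random.h>

static int seal_and_check(u8 *buf, int len, const u8 *key,
			  unsigned int keylen)
{
	struct aesgcm_ctx ctx;
	u8 iv[GCM_AES_IV_SIZE];	/* 12 bytes; never reuse one with a key */
	u8 tag[16];
	int ret;

	/* The tag size is fixed at key expansion time, as explained above. */
	ret = aesgcm_expandkey(&ctx, key, keylen, sizeof(tag));
	if (ret)
		return ret;

	get_random_bytes(iv, sizeof(iv));

	/* Encrypt in place, without associated data. */
	aesgcm_encrypt(&ctx, buf, buf, len, NULL, 0, iv, tag);

	/* Decryption fails with -EBADMSG unless the tag authenticates. */
	return aesgcm_decrypt(&ctx, buf, buf, len, NULL, 0, iv, tag);
}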
Link: https://lore.kernel.org/all/c6fb9b25-a4b6-2e4a-2dd1-63adda055a49@amd.com/
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Nikunj A Dadhania <nikunj@amd.com>
---
 include/crypto/gcm.h |  22 +
 lib/crypto/Kconfig   |   6 +
 lib/crypto/Makefile  |   3 +
 lib/crypto/aesgcm.c  | 720 ++++++++++++++++++++
 4 files changed, 751 insertions(+)

diff --git a/include/crypto/gcm.h b/include/crypto/gcm.h
index 9d7eff04f224..1155d326cb89 100644
--- a/include/crypto/gcm.h
+++ b/include/crypto/gcm.h
@@ -3,6 +3,9 @@
 
 #include <linux/errno.h>
 
+#include <crypto/aes.h>
+#include <crypto/gf128mul.h>
+
 #define GCM_AES_IV_SIZE 12
 #define GCM_RFC4106_IV_SIZE 8
 #define GCM_RFC4543_IV_SIZE 8
@@ -60,4 +63,23 @@ static inline int crypto_ipsec_check_assoclen(unsigned int assoclen)
 
 	return 0;
 }
+
+struct aesgcm_ctx {
+	be128			ghash_key;
+	struct crypto_aes_ctx	aes_ctx;
+	unsigned int		authsize;
+};
+
+int aesgcm_expandkey(struct aesgcm_ctx *ctx, const u8 *key,
+		     unsigned int keysize, unsigned int authsize);
+
+void aesgcm_encrypt(const struct aesgcm_ctx *ctx, u8 *dst, const u8 *src,
+		    int crypt_len, const u8 *assoc, int assoc_len,
+		    const u8 iv[GCM_AES_IV_SIZE], u8 *authtag);
+
+int __must_check aesgcm_decrypt(const struct aesgcm_ctx *ctx, u8 *dst,
+				const u8 *src, int crypt_len, const u8 *assoc,
+				int assoc_len, const u8 iv[GCM_AES_IV_SIZE],
+				const u8 *authtag);
+
 #endif
diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
index 2a4b57779fd7..5355a835428f 100644
--- a/lib/crypto/Kconfig
+++ b/lib/crypto/Kconfig
@@ -11,6 +11,12 @@ config CRYPTO_LIB_AES
 config CRYPTO_GF128MUL
 	tristate
 
+config CRYPTO_LIB_AESGCM
+	tristate
+	select CRYPTO_GF128MUL
+	select CRYPTO_LIB_AES
+	select CRYPTO_LIB_UTILS
+
 config CRYPTO_LIB_ARC4
 	tristate
 
diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile
index 60bb566eed78..c5f4502c5e12 100644
--- a/lib/crypto/Makefile
+++ b/lib/crypto/Makefile
@@ -12,6 +12,9 @@ libaes-y					:= aes.o
 
 obj-$(CONFIG_CRYPTO_GF128MUL)			+= gf128mul.o
 
+obj-$(CONFIG_CRYPTO_LIB_AESGCM)			+= libaesgcm.o
+libaesgcm-y					:= aesgcm.o
+
 obj-$(CONFIG_CRYPTO_LIB_ARC4)			+= libarc4.o
 libarc4-y					:= arc4.o
 
diff --git a/lib/crypto/aesgcm.c b/lib/crypto/aesgcm.c
new file mode 100644
index 000000000000..2533692a9563
--- /dev/null
+++ b/lib/crypto/aesgcm.c
@@ -0,0 +1,720 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Minimal library implementation of GCM
+ *
+ * Copyright 2022 Google LLC
+ */
+
+#include <linux/module.h>
+
+#include <crypto/algapi.h>
+#include <crypto/gcm.h>
+#include <crypto/ghash.h>
+
+#include <asm/irqflags.h>
+
+static void aesgcm_encrypt_block(const struct crypto_aes_ctx *ctx, void *dst,
+				 const void *src)
+{
+	unsigned long flags;
+
+	/*
+	 * In AES-GCM, both the GHASH key derivation and the CTR mode
+	 * encryption operate on known plaintext, making them susceptible to
+	 * timing attacks on the encryption key. The AES library already
+	 * mitigates this risk to some extent by pulling the entire S-box into
+	 * the caches before doing any substitutions, but this strategy is more
+	 * effective when running with interrupts disabled.
+	 */
+	local_irq_save(flags);
+	aes_encrypt(ctx, dst, src);
+	local_irq_restore(flags);
+}
+
+/**
+ * aesgcm_expandkey - Expands the AES and GHASH keys for the AES-GCM key
+ *                    schedule
+ *
+ * @ctx:      The data structure that will hold the AES-GCM key schedule
+ * @key:      The AES encryption input key
+ * @keysize:  The length in bytes of the input key
+ * @authsize: The size in bytes of the GCM authentication tag
+ *
+ * Returns: 0 on success, or -EINVAL if @keysize or @authsize contain values
+ * that are not permitted by the GCM specification.
+ */
+int aesgcm_expandkey(struct aesgcm_ctx *ctx, const u8 *key,
+		     unsigned int keysize, unsigned int authsize)
+{
+	u8 kin[AES_BLOCK_SIZE] = {};
+	int ret;
+
+	ret = crypto_gcm_check_authsize(authsize) ?:
+	      aes_expandkey(&ctx->aes_ctx, key, keysize);
+	if (ret)
+		return ret;
+
+	ctx->authsize = authsize;
+	aesgcm_encrypt_block(&ctx->aes_ctx, &ctx->ghash_key, kin);
+
+	return 0;
+}
+EXPORT_SYMBOL(aesgcm_expandkey);
+
+static void aesgcm_ghash(be128 *ghash, const be128 *key, const void *src,
+			 int len)
+{
+	while (len > 0) {
+		crypto_xor((u8 *)ghash, src, min(len, GHASH_BLOCK_SIZE));
+		gf128mul_lle(ghash, key);
+
+		src += GHASH_BLOCK_SIZE;
+		len -= GHASH_BLOCK_SIZE;
+	}
+}
+
+static void aesgcm_mac(const struct aesgcm_ctx *ctx, const u8 *src, int src_len,
+		       const u8 *assoc, int assoc_len, __be32 *ctr, u8 *authtag)
+{
+	be128 tail = { cpu_to_be64(assoc_len * 8), cpu_to_be64(src_len * 8) };
+	u8 buf[AES_BLOCK_SIZE];
+	be128 ghash = {};
+
+	aesgcm_ghash(&ghash, &ctx->ghash_key, assoc, assoc_len);
+	aesgcm_ghash(&ghash, &ctx->ghash_key, src, src_len);
+	aesgcm_ghash(&ghash, &ctx->ghash_key, &tail, sizeof(tail));
+
+	ctr[3] = cpu_to_be32(1);
+	aesgcm_encrypt_block(&ctx->aes_ctx, buf, ctr);
+	crypto_xor_cpy(authtag, buf, (u8 *)&ghash, ctx->authsize);
+
+	memzero_explicit(&ghash, sizeof(ghash));
+	memzero_explicit(buf, sizeof(buf));
+}
+
+static void aesgcm_crypt(const struct aesgcm_ctx *ctx, u8 *dst, const u8 *src,
+			 int len, __be32 *ctr)
+{
+	u8 buf[AES_BLOCK_SIZE];
+	unsigned int n = 2;
+
+	while (len > 0) {
+		ctr[3] = cpu_to_be32(n++);
+		aesgcm_encrypt_block(&ctx->aes_ctx, buf, ctr);
+		crypto_xor_cpy(dst, src, buf, min(len, AES_BLOCK_SIZE));
+
+		dst += AES_BLOCK_SIZE;
+		src += AES_BLOCK_SIZE;
+		len -= AES_BLOCK_SIZE;
+	}
+	memzero_explicit(buf, sizeof(buf));
+}
+
+/**
+ * aesgcm_encrypt - Perform AES-GCM encryption on a block of data
+ *
+ * @ctx:       The AES-GCM key schedule
+ * @dst:       Pointer to the ciphertext output buffer
+ * @src:       Pointer to the plaintext (may equal @dst for encryption in
+ *             place)
+ * @crypt_len: The size in bytes of the plaintext and ciphertext
+ * @assoc:     Pointer to the associated data
+ * @assoc_len: The size in bytes of the associated data
+ * @iv:        The initialization vector (IV) to use for this block of data
+ *             (must be 12 bytes in size as per the GCM spec recommendation)
+ * @authtag:   The address of the buffer in memory where the authentication
+ *             tag should be stored. The buffer is assumed to have space for
+ *             @ctx->authsize bytes.
+ */
+void aesgcm_encrypt(const struct aesgcm_ctx *ctx, u8 *dst, const u8 *src,
+		    int crypt_len, const u8 *assoc, int assoc_len,
+		    const u8 iv[GCM_AES_IV_SIZE], u8 *authtag)
+{
+	__be32 ctr[4];
+
+	memcpy(ctr, iv, GCM_AES_IV_SIZE);
+
+	aesgcm_crypt(ctx, dst, src, crypt_len, ctr);
+	aesgcm_mac(ctx, dst, crypt_len, assoc, assoc_len, ctr, authtag);
+}
+EXPORT_SYMBOL(aesgcm_encrypt);
+
+/**
+ * aesgcm_decrypt - Perform AES-GCM decryption on a block of data
+ *
+ * @ctx:       The AES-GCM key schedule
+ * @dst:       Pointer to the plaintext output buffer
+ * @src:       Pointer to the ciphertext (may equal @dst for decryption in
+ *             place)
+ * @crypt_len: The size in bytes of the plaintext and ciphertext
+ * @assoc:     Pointer to the associated data
+ * @assoc_len: The size in bytes of the associated data
+ * @iv:        The initialization vector (IV) to use for this block of data
+ *             (must be 12 bytes in size as per the GCM spec recommendation)
+ * @authtag:   The address of the buffer in memory where the authentication
+ *             tag is stored.
+ *
+ * Returns: 0 on success, or -EBADMSG if the ciphertext failed authentication.
+ * On failure, no plaintext will be returned.
+ */
+int __must_check aesgcm_decrypt(const struct aesgcm_ctx *ctx, u8 *dst,
+				const u8 *src, int crypt_len, const u8 *assoc,
+				int assoc_len, const u8 iv[GCM_AES_IV_SIZE],
+				const u8 *authtag)
+{
+	u8 tagbuf[AES_BLOCK_SIZE];
+	__be32 ctr[4];
+
+	memcpy(ctr, iv, GCM_AES_IV_SIZE);
+
+	aesgcm_mac(ctx, src, crypt_len, assoc, assoc_len, ctr, tagbuf);
+	if (crypto_memneq(authtag, tagbuf, ctx->authsize)) {
+		memzero_explicit(tagbuf, sizeof(tagbuf));
+		return -EBADMSG;
+	}
+	aesgcm_crypt(ctx, dst, src, crypt_len, ctr);
+	return 0;
+}
+EXPORT_SYMBOL(aesgcm_decrypt);
+
+MODULE_DESCRIPTION("Generic AES-GCM library");
+MODULE_AUTHOR("Ard Biesheuvel <ardb@kernel.org>");
+MODULE_LICENSE("GPL");
+
+#ifndef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS
+
+/*
+ * Test code below. Vectors taken from crypto/testmgr.h
+ */
+
+static const u8 __initconst ctext0[16] =
+	"\x58\xe2\xfc\xce\xfa\x7e\x30\x61"
+	"\x36\x7f\x1d\x57\xa4\xe7\x45\x5a";
+
+static const u8 __initconst ptext1[16];
+
+static const u8 __initconst ctext1[32] =
+	"\x03\x88\xda\xce\x60\xb6\xa3\x92"
+	"\xf3\x28\xc2\xb9\x71\xb2\xfe\x78"
+	"\xab\x6e\x47\xd4\x2c\xec\x13\xbd"
+	"\xf5\x3a\x67\xb2\x12\x57\xbd\xdf";
+
+static const u8 __initconst ptext2[64] =
+	"\xd9\x31\x32\x25\xf8\x84\x06\xe5"
+	"\xa5\x59\x09\xc5\xaf\xf5\x26\x9a"
+	"\x86\xa7\xa9\x53\x15\x34\xf7\xda"
+	"\x2e\x4c\x30\x3d\x8a\x31\x8a\x72"
+	"\x1c\x3c\x0c\x95\x95\x68\x09\x53"
+	"\x2f\xcf\x0e\x24\x49\xa6\xb5\x25"
+	"\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57"
+	"\xba\x63\x7b\x39\x1a\xaf\xd2\x55";
+
+static const u8 __initconst ctext2[80] =
+	"\x42\x83\x1e\xc2\x21\x77\x74\x24"
+	"\x4b\x72\x21\xb7\x84\xd0\xd4\x9c"
+	"\xe3\xaa\x21\x2f\x2c\x02\xa4\xe0"
+	"\x35\xc1\x7e\x23\x29\xac\xa1\x2e"
+	"\x21\xd5\x14\xb2\x54\x66\x93\x1c"
+	"\x7d\x8f\x6a\x5a\xac\x84\xaa\x05"
+	"\x1b\xa3\x0b\x39\x6a\x0a\xac\x97"
+	"\x3d\x58\xe0\x91\x47\x3f\x59\x85"
+	"\x4d\x5c\x2a\xf3\x27\xcd\x64\xa6"
+	"\x2c\xf3\x5a\xbd\x2b\xa6\xfa\xb4";
+
+static const u8 __initconst ptext3[60] =
+	"\xd9\x31\x32\x25\xf8\x84\x06\xe5"
+	"\xa5\x59\x09\xc5\xaf\xf5\x26\x9a"
+	"\x86\xa7\xa9\x53\x15\x34\xf7\xda"
+	"\x2e\x4c\x30\x3d\x8a\x31\x8a\x72"
+	"\x1c\x3c\x0c\x95\x95\x68\x09\x53"
+	"\x2f\xcf\x0e\x24\x49\xa6\xb5\x25"
+	"\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57"
+	"\xba\x63\x7b\x39";
+
+static const u8 __initconst ctext3[76] =
+	"\x42\x83\x1e\xc2\x21\x77\x74\x24"
+	"\x4b\x72\x21\xb7\x84\xd0\xd4\x9c"
+	"\xe3\xaa\x21\x2f\x2c\x02\xa4\xe0"
+	"\x35\xc1\x7e\x23\x29\xac\xa1\x2e"
+	"\x21\xd5\x14\xb2\x54\x66\x93\x1c"
+	"\x7d\x8f\x6a\x5a\xac\x84\xaa\x05"
+	"\x1b\xa3\x0b\x39\x6a\x0a\xac\x97"
+	"\x3d\x58\xe0\x91"
+	"\x5b\xc9\x4f\xbc\x32\x21\xa5\xdb"
+	"\x94\xfa\xe9\x5a\xe7\x12\x1a\x47";
+
+static const u8 __initconst ctext4[16] =
+	"\xcd\x33\xb2\x8a\xc7\x73\xf7\x4b"
+	"\xa0\x0e\xd1\xf3\x12\x57\x24\x35";
+
+static const u8 __initconst ctext5[32] =
+	"\x98\xe7\x24\x7c\x07\xf0\xfe\x41"
+	"\x1c\x26\x7e\x43\x84\xb0\xf6\x00"
+	"\x2f\xf5\x8d\x80\x03\x39\x27\xab"
+	"\x8e\xf4\xd4\x58\x75\x14\xf0\xfb";
+
+static const u8 __initconst ptext6[64] =
+	"\xd9\x31\x32\x25\xf8\x84\x06\xe5"
+	"\xa5\x59\x09\xc5\xaf\xf5\x26\x9a"
+	"\x86\xa7\xa9\x53\x15\x34\xf7\xda"
+	"\x2e\x4c\x30\x3d\x8a\x31\x8a\x72"
+	"\x1c\x3c\x0c\x95\x95\x68\x09\x53"
+	"\x2f\xcf\x0e\x24\x49\xa6\xb5\x25"
+	"\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57"
+	"\xba\x63\x7b\x39\x1a\xaf\xd2\x55";
+
+static const u8 __initconst ctext6[80] =
+	"\x39\x80\xca\x0b\x3c\x00\xe8\x41"
+	"\xeb\x06\xfa\xc4\x87\x2a\x27\x57"
"\x85\x9e\x1c\xea\xa6\xef\xd9\x84" + "\x62\x85\x93\xb4\x0c\xa1\xe1\x9c" + "\x7d\x77\x3d\x00\xc1\x44\xc5\x25" + "\xac\x61\x9d\x18\xc8\x4a\x3f\x47" + "\x18\xe2\x44\x8b\x2f\xe3\x24\xd9" + "\xcc\xda\x27\x10\xac\xad\xe2\x56" + "\x99\x24\xa7\xc8\x58\x73\x36\xbf" + "\xb1\x18\x02\x4d\xb8\x67\x4a\x14"; + +static const u8 __initconst ctext7[16] = + "\x53\x0f\x8a\xfb\xc7\x45\x36\xb9" + "\xa9\x63\xb4\xf1\xc4\xcb\x73\x8b"; + +static const u8 __initconst ctext8[32] = + "\xce\xa7\x40\x3d\x4d\x60\x6b\x6e" + "\x07\x4e\xc5\xd3\xba\xf3\x9d\x18" + "\xd0\xd1\xc8\xa7\x99\x99\x6b\xf0" + "\x26\x5b\x98\xb5\xd4\x8a\xb9\x19"; + +static const u8 __initconst ptext9[64] = + "\xd9\x31\x32\x25\xf8\x84\x06\xe5" + "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" + "\x86\xa7\xa9\x53\x15\x34\xf7\xda" + "\x2e\x4c\x30\x3d\x8a\x31\x8a\x72" + "\x1c\x3c\x0c\x95\x95\x68\x09\x53" + "\x2f\xcf\x0e\x24\x49\xa6\xb5\x25" + "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" + "\xba\x63\x7b\x39\x1a\xaf\xd2\x55"; + +static const u8 __initconst ctext9[80] = + "\x52\x2d\xc1\xf0\x99\x56\x7d\x07" + "\xf4\x7f\x37\xa3\x2a\x84\x42\x7d" + "\x64\x3a\x8c\xdc\xbf\xe5\xc0\xc9" + "\x75\x98\xa2\xbd\x25\x55\xd1\xaa" + "\x8c\xb0\x8e\x48\x59\x0d\xbb\x3d" + "\xa7\xb0\x8b\x10\x56\x82\x88\x38" + "\xc5\xf6\x1e\x63\x93\xba\x7a\x0a" + "\xbc\xc9\xf6\x62\x89\x80\x15\xad" + "\xb0\x94\xda\xc5\xd9\x34\x71\xbd" + "\xec\x1a\x50\x22\x70\xe3\xcc\x6c"; + +static const u8 __initconst ptext10[60] = + "\xd9\x31\x32\x25\xf8\x84\x06\xe5" + "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" + "\x86\xa7\xa9\x53\x15\x34\xf7\xda" + "\x2e\x4c\x30\x3d\x8a\x31\x8a\x72" + "\x1c\x3c\x0c\x95\x95\x68\x09\x53" + "\x2f\xcf\x0e\x24\x49\xa6\xb5\x25" + "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" + "\xba\x63\x7b\x39"; + +static const u8 __initconst ctext10[76] = + "\x52\x2d\xc1\xf0\x99\x56\x7d\x07" + "\xf4\x7f\x37\xa3\x2a\x84\x42\x7d" + "\x64\x3a\x8c\xdc\xbf\xe5\xc0\xc9" + "\x75\x98\xa2\xbd\x25\x55\xd1\xaa" + "\x8c\xb0\x8e\x48\x59\x0d\xbb\x3d" + "\xa7\xb0\x8b\x10\x56\x82\x88\x38" + "\xc5\xf6\x1e\x63\x93\xba\x7a\x0a" + "\xbc\xc9\xf6\x62" + "\x76\xfc\x6e\xce\x0f\x4e\x17\x68" + "\xcd\xdf\x88\x53\xbb\x2d\x55\x1b"; + +static const u8 __initconst ptext11[60] = + "\xd9\x31\x32\x25\xf8\x84\x06\xe5" + "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" + "\x86\xa7\xa9\x53\x15\x34\xf7\xda" + "\x2e\x4c\x30\x3d\x8a\x31\x8a\x72" + "\x1c\x3c\x0c\x95\x95\x68\x09\x53" + "\x2f\xcf\x0e\x24\x49\xa6\xb5\x25" + "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" + "\xba\x63\x7b\x39"; + +static const u8 __initconst ctext11[76] = + "\x39\x80\xca\x0b\x3c\x00\xe8\x41" + "\xeb\x06\xfa\xc4\x87\x2a\x27\x57" + "\x85\x9e\x1c\xea\xa6\xef\xd9\x84" + "\x62\x85\x93\xb4\x0c\xa1\xe1\x9c" + "\x7d\x77\x3d\x00\xc1\x44\xc5\x25" + "\xac\x61\x9d\x18\xc8\x4a\x3f\x47" + "\x18\xe2\x44\x8b\x2f\xe3\x24\xd9" + "\xcc\xda\x27\x10" + "\x25\x19\x49\x8e\x80\xf1\x47\x8f" + "\x37\xba\x55\xbd\x6d\x27\x61\x8c"; + +static const u8 __initconst ptext12[719] = + "\x42\xc1\xcc\x08\x48\x6f\x41\x3f" + "\x2f\x11\x66\x8b\x2a\x16\xf0\xe0" + "\x58\x83\xf0\xc3\x70\x14\xc0\x5b" + "\x3f\xec\x1d\x25\x3c\x51\xd2\x03" + "\xcf\x59\x74\x1f\xb2\x85\xb4\x07" + "\xc6\x6a\x63\x39\x8a\x5b\xde\xcb" + "\xaf\x08\x44\xbd\x6f\x91\x15\xe1" + "\xf5\x7a\x6e\x18\xbd\xdd\x61\x50" + "\x59\xa9\x97\xab\xbb\x0e\x74\x5c" + "\x00\xa4\x43\x54\x04\x54\x9b\x3b" + "\x77\xec\xfd\x5c\xa6\xe8\x7b\x08" + "\xae\xe6\x10\x3f\x32\x65\xd1\xfc" + "\xa4\x1d\x2c\x31\xfb\x33\x7a\xb3" + "\x35\x23\xf4\x20\x41\xd4\xad\x82" + "\x8b\xa4\xad\x96\x1c\x20\x53\xbe" + "\x0e\xa6\xf4\xdc\x78\x49\x3e\x72" + "\xb1\xa9\xb5\x83\xcb\x08\x54\xb7" + "\xad\x49\x3a\xae\x98\xce\xa6\x66" + 
"\x10\x30\x90\x8c\x55\x83\xd7\x7c" + "\x8b\xe6\x53\xde\xd2\x6e\x18\x21" + "\x01\x52\xd1\x9f\x9d\xbb\x9c\x73" + "\x57\xcc\x89\x09\x75\x9b\x78\x70" + "\xed\x26\x97\x4d\xb4\xe4\x0c\xa5" + "\xfa\x70\x04\x70\xc6\x96\x1c\x7d" + "\x54\x41\x77\xa8\xe3\xb0\x7e\x96" + "\x82\xd9\xec\xa2\x87\x68\x55\xf9" + "\x8f\x9e\x73\x43\x47\x6a\x08\x36" + "\x93\x67\xa8\x2d\xde\xac\x41\xa9" + "\x5c\x4d\x73\x97\x0f\x70\x68\xfa" + "\x56\x4d\x00\xc2\x3b\x1f\xc8\xb9" + "\x78\x1f\x51\x07\xe3\x9a\x13\x4e" + "\xed\x2b\x2e\xa3\xf7\x44\xb2\xe7" + "\xab\x19\x37\xd9\xba\x76\x5e\xd2" + "\xf2\x53\x15\x17\x4c\x6b\x16\x9f" + "\x02\x66\x49\xca\x7c\x91\x05\xf2" + "\x45\x36\x1e\xf5\x77\xad\x1f\x46" + "\xa8\x13\xfb\x63\xb6\x08\x99\x63" + "\x82\xa2\xed\xb3\xac\xdf\x43\x19" + "\x45\xea\x78\x73\xd9\xb7\x39\x11" + "\xa3\x13\x7c\xf8\x3f\xf7\xad\x81" + "\x48\x2f\xa9\x5c\x5f\xa0\xf0\x79" + "\xa4\x47\x7d\x80\x20\x26\xfd\x63" + "\x0a\xc7\x7e\x6d\x75\x47\xff\x76" + "\x66\x2e\x8a\x6c\x81\x35\xaf\x0b" + "\x2e\x6a\x49\x60\xc1\x10\xe1\xe1" + "\x54\x03\xa4\x09\x0c\x37\x7a\x15" + "\x23\x27\x5b\x8b\x4b\xa5\x64\x97" + "\xae\x4a\x50\x73\x1f\x66\x1c\x5c" + "\x03\x25\x3c\x8d\x48\x58\x71\x34" + "\x0e\xec\x4e\x55\x1a\x03\x6a\xe5" + "\xb6\x19\x2b\x84\x2a\x20\xd1\xea" + "\x80\x6f\x96\x0e\x05\x62\xc7\x78" + "\x87\x79\x60\x38\x46\xb4\x25\x57" + "\x6e\x16\x63\xf8\xad\x6e\xd7\x42" + "\x69\xe1\x88\xef\x6e\xd5\xb4\x9a" + "\x3c\x78\x6c\x3b\xe5\xa0\x1d\x22" + "\x86\x5c\x74\x3a\xeb\x24\x26\xc7" + "\x09\xfc\x91\x96\x47\x87\x4f\x1a" + "\xd6\x6b\x2c\x18\x47\xc0\xb8\x24" + "\xa8\x5a\x4a\x9e\xcb\x03\xe7\x2a" + "\x09\xe6\x4d\x9c\x6d\x86\x60\xf5" + "\x2f\x48\x69\x37\x9f\xf2\xd2\xcb" + "\x0e\x5a\xdd\x6e\x8a\xfb\x6a\xfe" + "\x0b\x63\xde\x87\x42\x79\x8a\x68" + "\x51\x28\x9b\x7a\xeb\xaf\xb8\x2f" + "\x9d\xd1\xc7\x45\x90\x08\xc9\x83" + "\xe9\x83\x84\xcb\x28\x69\x09\x69" + "\xce\x99\x46\x00\x54\xcb\xd8\x38" + "\xf9\x53\x4a\xbf\x31\xce\x57\x15" + "\x33\xfa\x96\x04\x33\x42\xe3\xc0" + "\xb7\x54\x4a\x65\x7a\x7c\x02\xe6" + "\x19\x95\xd0\x0e\x82\x07\x63\xf9" + "\xe1\x2b\x2a\xfc\x55\x92\x52\xc9" + "\xb5\x9f\x23\x28\x60\xe7\x20\x51" + "\x10\xd3\xed\x6d\x9b\xab\xb8\xe2" + "\x5d\x9a\x34\xb3\xbe\x9c\x64\xcb" + "\x78\xc6\x91\x22\x40\x91\x80\xbe" + "\xd7\x78\x5c\x0e\x0a\xdc\x08\xe9" + "\x67\x10\xa4\x83\x98\x79\x23\xe7" + "\x92\xda\xa9\x22\x16\xb1\xe7\x78" + "\xa3\x1c\x6c\x8f\x35\x7c\x4d\x37" + "\x2f\x6e\x0b\x50\x5c\x34\xb9\xf9" + "\xe6\x3d\x91\x0d\x32\x95\xaa\x3d" + "\x48\x11\x06\xbb\x2d\xf2\x63\x88" + "\x3f\x73\x09\xe2\x45\x56\x31\x51" + "\xfa\x5e\x4e\x62\xf7\x90\xf9\xa9" + "\x7d\x7b\x1b\xb1\xc8\x26\x6e\x66" + "\xf6\x90\x9a\x7f\xf2\x57\xcc\x23" + "\x59\xfa\xfa\xaa\x44\x04\x01\xa7" + "\xa4\x78\xdb\x74\x3d\x8b\xb5"; + +static const u8 __initconst ctext12[735] = + "\x84\x0b\xdb\xd5\xb7\xa8\xfe\x20" + "\xbb\xb1\x12\x7f\x41\xea\xb3\xc0" + "\xa2\xb4\x37\x19\x11\x58\xb6\x0b" + "\x4c\x1d\x38\x05\x54\xd1\x16\x73" + "\x8e\x1c\x20\x90\xa2\x9a\xb7\x74" + "\x47\xe6\xd8\xfc\x18\x3a\xb4\xea" + "\xd5\x16\x5a\x2c\x53\x01\x46\xb3" + "\x18\x33\x74\x6c\x50\xf2\xe8\xc0" + "\x73\xda\x60\x22\xeb\xe3\xe5\x9b" + "\x20\x93\x6c\x4b\x37\x99\xb8\x23" + "\x3b\x4e\xac\xe8\x5b\xe8\x0f\xb7" + "\xc3\x8f\xfb\x4a\x37\xd9\x39\x95" + "\x34\xf1\xdb\x8f\x71\xd9\xc7\x0b" + "\x02\xf1\x63\xfc\x9b\xfc\xc5\xab" + "\xb9\x14\x13\x21\xdf\xce\xaa\x88" + "\x44\x30\x1e\xce\x26\x01\x92\xf8" + "\x9f\x00\x4b\x0c\x4b\xf7\x5f\xe0" + "\x89\xca\x94\x66\x11\x21\x97\xca" + "\x3e\x83\x74\x2d\xdb\x4d\x11\xeb" + "\x97\xc2\x14\xff\x9e\x1e\xa0\x6b" + "\x08\xb4\x31\x2b\x85\xc6\x85\x6c" + "\x90\xec\x39\xc0\xec\xb3\xb5\x4e" + 
"\xf3\x9c\xe7\x83\x3a\x77\x0a\xf4" + "\x56\xfe\xce\x18\x33\x6d\x0b\x2d" + "\x33\xda\xc8\x05\x5c\xb4\x09\x2a" + "\xde\x6b\x52\x98\x01\xef\x36\x3d" + "\xbd\xf9\x8f\xa8\x3e\xaa\xcd\xd1" + "\x01\x2d\x42\x49\xc3\xb6\x84\xbb" + "\x48\x96\xe0\x90\x93\x6c\x48\x64" + "\xd4\xfa\x7f\x93\x2c\xa6\x21\xc8" + "\x7a\x23\x7b\xaa\x20\x56\x12\xae" + "\x16\x9d\x94\x0f\x54\xa1\xec\xca" + "\x51\x4e\xf2\x39\xf4\xf8\x5f\x04" + "\x5a\x0d\xbf\xf5\x83\xa1\x15\xe1" + "\xf5\x3c\xd8\x62\xa3\xed\x47\x89" + "\x85\x4c\xe5\xdb\xac\x9e\x17\x1d" + "\x0c\x09\xe3\x3e\x39\x5b\x4d\x74" + "\x0e\xf5\x34\xee\x70\x11\x4c\xfd" + "\xdb\x34\xb1\xb5\x10\x3f\x73\xb7" + "\xf5\xfa\xed\xb0\x1f\xa5\xcd\x3c" + "\x8d\x35\x83\xd4\x11\x44\x6e\x6c" + "\x5b\xe0\x0e\x69\xa5\x39\xe5\xbb" + "\xa9\x57\x24\x37\xe6\x1f\xdd\xcf" + "\x16\x2a\x13\xf9\x6a\x2d\x90\xa0" + "\x03\x60\x7a\xed\x69\xd5\x00\x8b" + "\x7e\x4f\xcb\xb9\xfa\x91\xb9\x37" + "\xc1\x26\xce\x90\x97\x22\x64\x64" + "\xc1\x72\x43\x1b\xf6\xac\xc1\x54" + "\x8a\x10\x9c\xdd\x8d\xd5\x8e\xb2" + "\xe4\x85\xda\xe0\x20\x5f\xf4\xb4" + "\x15\xb5\xa0\x8d\x12\x74\x49\x23" + "\x3a\xdf\x4a\xd3\xf0\x3b\x89\xeb" + "\xf8\xcc\x62\x7b\xfb\x93\x07\x41" + "\x61\x26\x94\x58\x70\xa6\x3c\xe4" + "\xff\x58\xc4\x13\x3d\xcb\x36\x6b" + "\x32\xe5\xb2\x6d\x03\x74\x6f\x76" + "\x93\x77\xde\x48\xc4\xfa\x30\x4a" + "\xda\x49\x80\x77\x0f\x1c\xbe\x11" + "\xc8\x48\xb1\xe5\xbb\xf2\x8a\xe1" + "\x96\x2f\x9f\xd1\x8e\x8a\x5c\xe2" + "\xf7\xd7\xd8\x54\xf3\x3f\xc4\x91" + "\xb8\xfb\x86\xdc\x46\x24\x91\x60" + "\x6c\x2f\xc9\x41\x37\x51\x49\x54" + "\x09\x81\x21\xf3\x03\x9f\x2b\xe3" + "\x1f\x39\x63\xaf\xf4\xd7\x53\x60" + "\xa7\xc7\x54\xf9\xee\xb1\xb1\x7d" + "\x75\x54\x65\x93\xfe\xb1\x68\x6b" + "\x57\x02\xf9\xbb\x0e\xf9\xf8\xbf" + "\x01\x12\x27\xb4\xfe\xe4\x79\x7a" + "\x40\x5b\x51\x4b\xdf\x38\xec\xb1" + "\x6a\x56\xff\x35\x4d\x42\x33\xaa" + "\x6f\x1b\xe4\xdc\xe0\xdb\x85\x35" + "\x62\x10\xd4\xec\xeb\xc5\x7e\x45" + "\x1c\x6f\x17\xca\x3b\x8e\x2d\x66" + "\x4f\x4b\x36\x56\xcd\x1b\x59\xaa" + "\xd2\x9b\x17\xb9\x58\xdf\x7b\x64" + "\x8a\xff\x3b\x9c\xa6\xb5\x48\x9e" + "\xaa\xe2\x5d\x09\x71\x32\x5f\xb6" + "\x29\xbe\xe7\xc7\x52\x7e\x91\x82" + "\x6b\x6d\x33\xe1\x34\x06\x36\x21" + "\x5e\xbe\x1e\x2f\x3e\xc1\xfb\xea" + "\x49\x2c\xb5\xca\xf7\xb0\x37\xea" + "\x1f\xed\x10\x04\xd9\x48\x0d\x1a" + "\x1c\xfb\xe7\x84\x0e\x83\x53\x74" + "\xc7\x65\xe2\x5c\xe5\xba\x73\x4c" + "\x0e\xe1\xb5\x11\x45\x61\x43\x46" + "\xaa\x25\x8f\xbd\x85\x08\xfa\x4c" + "\x15\xc1\xc0\xd8\xf5\xdc\x16\xbb" + "\x7b\x1d\xe3\x87\x57\xa7\x2a\x1d" + "\x38\x58\x9e\x8a\x43\xdc\x57" + "\xd1\x81\x7d\x2b\xe9\xff\x99\x3a" + "\x4b\x24\x52\x58\x55\xe1\x49\x14"; + +static struct { + const u8 *ptext; + const u8 *ctext; + + u8 key[AES_MAX_KEY_SIZE]; + u8 iv[GCM_AES_IV_SIZE]; + u8 assoc[20]; + + int klen; + int clen; + int plen; + int alen; +} const aesgcm_tv[] __initconst = { + { /* From McGrew & Viega - http://citeseer.ist.psu.edu/656989.html */ + .klen = 16, + .ctext = ctext0, + .clen = sizeof(ctext0), + }, { + .klen = 16, + .ptext = ptext1, + .plen = sizeof(ptext1), + .ctext = ctext1, + .clen = sizeof(ctext1), + }, { + .key = "\xfe\xff\xe9\x92\x86\x65\x73\x1c" + "\x6d\x6a\x8f\x94\x67\x30\x83\x08", + .klen = 16, + .iv = "\xca\xfe\xba\xbe\xfa\xce\xdb\xad" + "\xde\xca\xf8\x88", + .ptext = ptext2, + .plen = sizeof(ptext2), + .ctext = ctext2, + .clen = sizeof(ctext2), + }, { + .key = "\xfe\xff\xe9\x92\x86\x65\x73\x1c" + "\x6d\x6a\x8f\x94\x67\x30\x83\x08", + .klen = 16, + .iv = "\xca\xfe\xba\xbe\xfa\xce\xdb\xad" + "\xde\xca\xf8\x88", + .ptext = ptext3, + .plen = sizeof(ptext3), + .assoc = 
"\xfe\xed\xfa\xce\xde\xad\xbe\xef" + "\xfe\xed\xfa\xce\xde\xad\xbe\xef" + "\xab\xad\xda\xd2", + .alen = 20, + .ctext = ctext3, + .clen = sizeof(ctext3), + }, { + .klen = 24, + .ctext = ctext4, + .clen = sizeof(ctext4), + }, { + .klen = 24, + .ptext = ptext1, + .plen = sizeof(ptext1), + .ctext = ctext5, + .clen = sizeof(ctext5), + }, { + .key = "\xfe\xff\xe9\x92\x86\x65\x73\x1c" + "\x6d\x6a\x8f\x94\x67\x30\x83\x08" + "\xfe\xff\xe9\x92\x86\x65\x73\x1c", + .klen = 24, + .iv = "\xca\xfe\xba\xbe\xfa\xce\xdb\xad" + "\xde\xca\xf8\x88", + .ptext = ptext6, + .plen = sizeof(ptext6), + .ctext = ctext6, + .clen = sizeof(ctext6), + }, { + .klen = 32, + .ctext = ctext7, + .clen = sizeof(ctext7), + }, { + .klen = 32, + .ptext = ptext1, + .plen = sizeof(ptext1), + .ctext = ctext8, + .clen = sizeof(ctext8), + }, { + .key = "\xfe\xff\xe9\x92\x86\x65\x73\x1c" + "\x6d\x6a\x8f\x94\x67\x30\x83\x08" + "\xfe\xff\xe9\x92\x86\x65\x73\x1c" + "\x6d\x6a\x8f\x94\x67\x30\x83\x08", + .klen = 32, + .iv = "\xca\xfe\xba\xbe\xfa\xce\xdb\xad" + "\xde\xca\xf8\x88", + .ptext = ptext9, + .plen = sizeof(ptext9), + .ctext = ctext9, + .clen = sizeof(ctext9), + }, { + .key = "\xfe\xff\xe9\x92\x86\x65\x73\x1c" + "\x6d\x6a\x8f\x94\x67\x30\x83\x08" + "\xfe\xff\xe9\x92\x86\x65\x73\x1c" + "\x6d\x6a\x8f\x94\x67\x30\x83\x08", + .klen = 32, + .iv = "\xca\xfe\xba\xbe\xfa\xce\xdb\xad" + "\xde\xca\xf8\x88", + .ptext = ptext10, + .plen = sizeof(ptext10), + .assoc = "\xfe\xed\xfa\xce\xde\xad\xbe\xef" + "\xfe\xed\xfa\xce\xde\xad\xbe\xef" + "\xab\xad\xda\xd2", + .alen = 20, + .ctext = ctext10, + .clen = sizeof(ctext10), + }, { + .key = "\xfe\xff\xe9\x92\x86\x65\x73\x1c" + "\x6d\x6a\x8f\x94\x67\x30\x83\x08" + "\xfe\xff\xe9\x92\x86\x65\x73\x1c", + .klen = 24, + .iv = "\xca\xfe\xba\xbe\xfa\xce\xdb\xad" + "\xde\xca\xf8\x88", + .ptext = ptext11, + .plen = sizeof(ptext11), + .assoc = "\xfe\xed\xfa\xce\xde\xad\xbe\xef" + "\xfe\xed\xfa\xce\xde\xad\xbe\xef" + "\xab\xad\xda\xd2", + .alen = 20, + .ctext = ctext11, + .clen = sizeof(ctext11), + }, { + .key = "\x62\x35\xf8\x95\xfc\xa5\xeb\xf6" + "\x0e\x92\x12\x04\xd3\xa1\x3f\x2e" + "\x8b\x32\xcf\xe7\x44\xed\x13\x59" + "\x04\x38\x77\xb0\xb9\xad\xb4\x38", + .klen = 32, + .iv = "\x00\xff\xff\xff\xff\x00\x00\xff" + "\xff\xff\x00\xff", + .ptext = ptext12, + .plen = sizeof(ptext12), + .ctext = ctext12, + .clen = sizeof(ctext12), + } +}; + +static int __init libaesgcm_init(void) +{ + for (int i = 0; i < ARRAY_SIZE(aesgcm_tv); i++) { + u8 tagbuf[AES_BLOCK_SIZE]; + int plen = aesgcm_tv[i].plen; + struct aesgcm_ctx ctx; + u8 buf[sizeof(ptext12)]; + + if (aesgcm_expandkey(&ctx, aesgcm_tv[i].key, aesgcm_tv[i].klen, + aesgcm_tv[i].clen - plen)) { + WARN(1, "aesgcm_expandkey() failed on vector %d\n", i); + return -EINVAL; + } + + if (aesgcm_decrypt(&ctx, buf, aesgcm_tv[i].ctext, plen, + aesgcm_tv[i].assoc, aesgcm_tv[i].alen, + aesgcm_tv[i].iv, aesgcm_tv[i].ctext + plen) + || memcmp(buf, aesgcm_tv[i].ptext, plen)) { + WARN(1, "aesgcm_decrypt() #1 failed on vector %d\n", i); + return -EINVAL; + } + + /* encrypt in place */ + aesgcm_encrypt(&ctx, buf, buf, plen, aesgcm_tv[i].assoc, + aesgcm_tv[i].alen, aesgcm_tv[i].iv, tagbuf); + if (memcmp(buf, aesgcm_tv[i].ctext, plen)) { + WARN(1, "aesgcm_encrypt() failed on vector %d\n", i); + return -EINVAL; + } + + /* decrypt in place */ + if (aesgcm_decrypt(&ctx, buf, buf, plen, aesgcm_tv[i].assoc, + aesgcm_tv[i].alen, aesgcm_tv[i].iv, tagbuf) + || memcmp(buf, aesgcm_tv[i].ptext, plen)) { + WARN(1, "aesgcm_decrypt() #2 failed on vector %d\n", i); + return -EINVAL; + } + } + return 0; +} 
+module_init(libaesgcm_init);
+
+static void __exit libaesgcm_exit(void)
+{
+}
+module_exit(libaesgcm_exit);
+#endif