From patchwork Fri Mar 22 06:29:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 160840 Subject: [PATCH 1/17] asm: simd context helper API References: <20190322062740.nrwfx2rvmt7lzotj@gondor.apana.org.au> To: Linus Torvalds , "David S. Miller" , "Jason A. 
Donenfeld" , Eric Biggers , Ard Biesheuvel , Linux Crypto Mailing List , linux-fscrypt@vger.kernel.org, linux-arm-kernel@lists.infradead.org, LKML , Paul Crowley , Greg Kaiser , Samuel Neves , Tomer Ashur , Martin Willi Message-Id: From: Herbert Xu Date: Fri, 22 Mar 2019 14:29:37 +0800 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Jason A. Donenfeld Sometimes it's useful to amortize calls to XSAVE/XRSTOR and the related FPU/SIMD functions over a number of calls, because FPU restoration is quite expensive. This adds a simple header for carrying out this pattern: simd_context_t simd_context; simd_get(&simd_context); while ((item = get_item_from_queue()) != NULL) { encrypt_item(item, &simd_context); simd_relax(&simd_context); } simd_put(&simd_context); The relaxation step ensures that we don't trample over preemption, and the get/put API should be a familiar paradigm in the kernel. On the other end, code that actually wants to use SIMD instructions can accept this as a parameter and check it via: void encrypt_item(struct item *item, simd_context_t *simd_context) { if (item->len > LARGE_FOR_SIMD && simd_use(simd_context)) wild_simd_code(item); else boring_scalar_code(item); } The actual XSAVE happens during simd_use (and only on the first time), so that if the context is never actually used, no performance penalty is hit. Signed-off-by: Jason A. Donenfeld Cc: Samuel Neves Cc: Andy Lutomirski Cc: Thomas Gleixner Cc: Greg KH Cc: linux-arch@vger.kernel.org Signed-off-by: Herbert Xu --- arch/alpha/include/asm/Kbuild | 1 arch/arc/include/asm/Kbuild | 1 arch/arm/include/asm/Kbuild | 1 arch/arm/include/asm/simd.h | 63 +++++++++++++++++++++++++++++++++++++ arch/arm64/include/asm/simd.h | 51 ++++++++++++++++++++++++++--- arch/c6x/include/asm/Kbuild | 1 arch/h8300/include/asm/Kbuild | 1 arch/hexagon/include/asm/Kbuild | 1 arch/ia64/include/asm/Kbuild | 1 arch/m68k/include/asm/Kbuild | 1 arch/microblaze/include/asm/Kbuild | 1 arch/mips/include/asm/Kbuild | 1 arch/nds32/include/asm/Kbuild | 1 arch/nios2/include/asm/Kbuild | 1 arch/openrisc/include/asm/Kbuild | 1 arch/parisc/include/asm/Kbuild | 1 arch/powerpc/include/asm/Kbuild | 1 arch/riscv/include/asm/Kbuild | 1 arch/s390/include/asm/Kbuild | 1 arch/sh/include/asm/Kbuild | 1 arch/sparc/include/asm/Kbuild | 1 arch/um/include/asm/Kbuild | 1 arch/unicore32/include/asm/Kbuild | 1 arch/x86/include/asm/simd.h | 44 +++++++++++++++++++++++++ arch/xtensa/include/asm/Kbuild | 1 include/asm-generic/simd.h | 20 +++++++++++ include/linux/simd.h | 32 ++++++++++++++++++ 27 files changed, 224 insertions(+), 8 deletions(-) diff --git a/arch/alpha/include/asm/Kbuild b/arch/alpha/include/asm/Kbuild index dc0ab28baca1..b385c2036290 100644 --- a/arch/alpha/include/asm/Kbuild +++ b/arch/alpha/include/asm/Kbuild @@ -13,3 +13,4 @@ generic-y += sections.h generic-y += trace_clock.h generic-y += current.h generic-y += kprobes.h +generic-y += simd.h diff --git a/arch/arc/include/asm/Kbuild b/arch/arc/include/asm/Kbuild index b41f8881ecc8..3028a2246ab4 100644 --- a/arch/arc/include/asm/Kbuild +++ b/arch/arc/include/asm/Kbuild @@ -19,6 +19,7 @@ generic-y += msi.h generic-y += parport.h generic-y += percpu.h generic-y += preempt.h +generic-y += simd.h generic-y += topology.h generic-y += trace_clock.h generic-y += user.h diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild index a8a4eb7f6dae..27ce7c7a795a 100644 --- a/arch/arm/include/asm/Kbuild +++ b/arch/arm/include/asm/Kbuild @@ -16,7 
+16,6 @@ generic-y += rwsem.h generic-y += seccomp.h generic-y += segment.h generic-y += serial.h -generic-y += simd.h generic-y += sizes.h generic-y += trace_clock.h diff --git a/arch/arm/include/asm/simd.h b/arch/arm/include/asm/simd.h new file mode 100644 index 000000000000..264ed84b41d8 --- /dev/null +++ b/arch/arm/include/asm/simd.h @@ -0,0 +1,63 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. + */ + +#include +#ifndef _ASM_SIMD_H +#define _ASM_SIMD_H + +#ifdef CONFIG_KERNEL_MODE_NEON +#include + +static __must_check inline bool may_use_simd(void) +{ + return !in_nmi() && !in_irq() && !in_serving_softirq(); +} + +static inline void simd_get(simd_context_t *ctx) +{ + *ctx = may_use_simd() ? HAVE_FULL_SIMD : HAVE_NO_SIMD; +} + +static inline void simd_put(simd_context_t *ctx) +{ + if (*ctx & HAVE_SIMD_IN_USE) + kernel_neon_end(); + *ctx = HAVE_NO_SIMD; +} + +static __must_check inline bool simd_use(simd_context_t *ctx) +{ + if (!(*ctx & HAVE_FULL_SIMD)) + return false; + if (*ctx & HAVE_SIMD_IN_USE) + return true; + kernel_neon_begin(); + *ctx |= HAVE_SIMD_IN_USE; + return true; +} + +#else + +static __must_check inline bool may_use_simd(void) +{ + return false; +} + +static inline void simd_get(simd_context_t *ctx) +{ + *ctx = HAVE_NO_SIMD; +} + +static inline void simd_put(simd_context_t *ctx) +{ +} + +static __must_check inline bool simd_use(simd_context_t *ctx) +{ + return false; +} +#endif + +#endif /* _ASM_SIMD_H */ diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h index 6495cc51246f..a45ff1600040 100644 --- a/arch/arm64/include/asm/simd.h +++ b/arch/arm64/include/asm/simd.h @@ -1,11 +1,10 @@ -/* - * Copyright (C) 2017 Linaro Ltd. +/* SPDX-License-Identifier: GPL-2.0 * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 as published - * by the Free Software Foundation. + * Copyright (C) 2017 Linaro Ltd. + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. */ +#include #ifndef __ASM_SIMD_H #define __ASM_SIMD_H @@ -16,6 +15,8 @@ #include #ifdef CONFIG_KERNEL_MODE_NEON +#include +#include DECLARE_PER_CPU(bool, kernel_neon_busy); @@ -40,9 +41,47 @@ static __must_check inline bool may_use_simd(void) !this_cpu_read(kernel_neon_busy); } +static inline void simd_get(simd_context_t *ctx) +{ + *ctx = may_use_simd() ? HAVE_FULL_SIMD : HAVE_NO_SIMD; +} + +static inline void simd_put(simd_context_t *ctx) +{ + if (*ctx & HAVE_SIMD_IN_USE) + kernel_neon_end(); + *ctx = HAVE_NO_SIMD; +} + +static __must_check inline bool simd_use(simd_context_t *ctx) +{ + if (!(*ctx & HAVE_FULL_SIMD)) + return false; + if (*ctx & HAVE_SIMD_IN_USE) + return true; + kernel_neon_begin(); + *ctx |= HAVE_SIMD_IN_USE; + return true; +} + #else /* ! 
CONFIG_KERNEL_MODE_NEON */ -static __must_check inline bool may_use_simd(void) { +static __must_check inline bool may_use_simd(void) +{ + return false; +} + +static inline void simd_get(simd_context_t *ctx) +{ + *ctx = HAVE_NO_SIMD; +} + +static inline void simd_put(simd_context_t *ctx) +{ +} + +static __must_check inline bool simd_use(simd_context_t *ctx) +{ return false; } diff --git a/arch/c6x/include/asm/Kbuild b/arch/c6x/include/asm/Kbuild index 63b4a1705182..b0aa9d783d2b 100644 --- a/arch/c6x/include/asm/Kbuild +++ b/arch/c6x/include/asm/Kbuild @@ -31,6 +31,7 @@ generic-y += preempt.h generic-y += segment.h generic-y += serial.h generic-y += shmparam.h +generic-y += simd.h generic-y += tlbflush.h generic-y += topology.h generic-y += trace_clock.h diff --git a/arch/h8300/include/asm/Kbuild b/arch/h8300/include/asm/Kbuild index 3e7c8ecf151e..a10bbf3c67e9 100644 --- a/arch/h8300/include/asm/Kbuild +++ b/arch/h8300/include/asm/Kbuild @@ -40,6 +40,7 @@ generic-y += scatterlist.h generic-y += sections.h generic-y += serial.h generic-y += shmparam.h +generic-y += simd.h generic-y += sizes.h generic-y += spinlock.h generic-y += timex.h diff --git a/arch/hexagon/include/asm/Kbuild b/arch/hexagon/include/asm/Kbuild index b25fd42aa0f4..843ae8d5c4d9 100644 --- a/arch/hexagon/include/asm/Kbuild +++ b/arch/hexagon/include/asm/Kbuild @@ -31,6 +31,7 @@ generic-y += sections.h generic-y += segment.h generic-y += serial.h generic-y += shmparam.h +generic-y += simd.h generic-y += sizes.h generic-y += topology.h generic-y += trace_clock.h diff --git a/arch/ia64/include/asm/Kbuild b/arch/ia64/include/asm/Kbuild index 43e21fe3499c..4f4a969c4611 100644 --- a/arch/ia64/include/asm/Kbuild +++ b/arch/ia64/include/asm/Kbuild @@ -5,6 +5,7 @@ generic-y += irq_work.h generic-y += mcs_spinlock.h generic-y += mm-arch-hooks.h generic-y += preempt.h +generic-y += simd.h generic-y += trace_clock.h generic-y += vtime.h generic-y += word-at-a-time.h diff --git a/arch/m68k/include/asm/Kbuild b/arch/m68k/include/asm/Kbuild index 95f8f631c4df..d60a481e6f73 100644 --- a/arch/m68k/include/asm/Kbuild +++ b/arch/m68k/include/asm/Kbuild @@ -21,6 +21,7 @@ generic-y += percpu.h generic-y += preempt.h generic-y += sections.h generic-y += shmparam.h +generic-y += simd.h generic-y += spinlock.h generic-y += topology.h generic-y += trace_clock.h diff --git a/arch/microblaze/include/asm/Kbuild b/arch/microblaze/include/asm/Kbuild index 791cc8d54d0a..103ed3592214 100644 --- a/arch/microblaze/include/asm/Kbuild +++ b/arch/microblaze/include/asm/Kbuild @@ -27,6 +27,7 @@ generic-y += percpu.h generic-y += preempt.h generic-y += serial.h generic-y += shmparam.h +generic-y += simd.h generic-y += syscalls.h generic-y += topology.h generic-y += trace_clock.h diff --git a/arch/mips/include/asm/Kbuild b/arch/mips/include/asm/Kbuild index 87b86cdf126a..fb3345c870c6 100644 --- a/arch/mips/include/asm/Kbuild +++ b/arch/mips/include/asm/Kbuild @@ -20,6 +20,7 @@ generic-y += qrwlock.h generic-y += qspinlock.h generic-y += sections.h generic-y += segment.h +generic-y += simd.h generic-y += trace_clock.h generic-y += unaligned.h generic-y += user.h diff --git a/arch/nds32/include/asm/Kbuild b/arch/nds32/include/asm/Kbuild index 64ceff7ab99b..887923fdbb65 100644 --- a/arch/nds32/include/asm/Kbuild +++ b/arch/nds32/include/asm/Kbuild @@ -38,6 +38,7 @@ generic-y += preempt.h generic-y += sections.h generic-y += segment.h generic-y += serial.h +generic-y += simd.h generic-y += sizes.h generic-y += switch_to.h generic-y += timex.h diff --git 
a/arch/nios2/include/asm/Kbuild b/arch/nios2/include/asm/Kbuild index 8fde4fa2c34f..571a9d9ad107 100644 --- a/arch/nios2/include/asm/Kbuild +++ b/arch/nios2/include/asm/Kbuild @@ -33,6 +33,7 @@ generic-y += preempt.h generic-y += sections.h generic-y += segment.h generic-y += serial.h +generic-y += simd.h generic-y += spinlock.h generic-y += topology.h generic-y += trace_clock.h diff --git a/arch/openrisc/include/asm/Kbuild b/arch/openrisc/include/asm/Kbuild index 5a73e2956ac4..f0abc1c93be0 100644 --- a/arch/openrisc/include/asm/Kbuild +++ b/arch/openrisc/include/asm/Kbuild @@ -34,6 +34,7 @@ generic-y += qrwlock.h generic-y += sections.h generic-y += segment.h generic-y += shmparam.h +generic-y += simd.h generic-y += switch_to.h generic-y += topology.h generic-y += trace_clock.h diff --git a/arch/parisc/include/asm/Kbuild b/arch/parisc/include/asm/Kbuild index 6f49e77d82a2..8aec86fd5393 100644 --- a/arch/parisc/include/asm/Kbuild +++ b/arch/parisc/include/asm/Kbuild @@ -19,6 +19,7 @@ generic-y += percpu.h generic-y += preempt.h generic-y += seccomp.h generic-y += segment.h +generic-y += simd.h generic-y += trace_clock.h generic-y += user.h generic-y += vga.h diff --git a/arch/powerpc/include/asm/Kbuild b/arch/powerpc/include/asm/Kbuild index a0c132bedfae..5ac3dead6952 100644 --- a/arch/powerpc/include/asm/Kbuild +++ b/arch/powerpc/include/asm/Kbuild @@ -11,3 +11,4 @@ generic-y += preempt.h generic-y += rwsem.h generic-y += vtime.h generic-y += msi.h +generic-y += simd.h diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild index cccd12cf27d4..525bd8707c7d 100644 --- a/arch/riscv/include/asm/Kbuild +++ b/arch/riscv/include/asm/Kbuild @@ -28,6 +28,7 @@ generic-y += scatterlist.h generic-y += sections.h generic-y += serial.h generic-y += shmparam.h +generic-y += simd.h generic-y += topology.h generic-y += trace_clock.h generic-y += unaligned.h diff --git a/arch/s390/include/asm/Kbuild b/arch/s390/include/asm/Kbuild index 12d77cb11fe5..6bc0fdc57783 100644 --- a/arch/s390/include/asm/Kbuild +++ b/arch/s390/include/asm/Kbuild @@ -21,6 +21,7 @@ generic-y += local64.h generic-y += mcs_spinlock.h generic-y += mm-arch-hooks.h generic-y += rwsem.h +generic-y += simd.h generic-y += trace_clock.h generic-y += unaligned.h generic-y += word-at-a-time.h diff --git a/arch/sh/include/asm/Kbuild b/arch/sh/include/asm/Kbuild index a6ef3fee5f85..74b7bb3ba96a 100644 --- a/arch/sh/include/asm/Kbuild +++ b/arch/sh/include/asm/Kbuild @@ -18,6 +18,7 @@ generic-y += percpu.h generic-y += preempt.h generic-y += rwsem.h generic-y += serial.h +generic-y += simd.h generic-y += sizes.h generic-y += trace_clock.h generic-y += xor.h diff --git a/arch/sparc/include/asm/Kbuild b/arch/sparc/include/asm/Kbuild index b82f64e28f55..d053d5bad630 100644 --- a/arch/sparc/include/asm/Kbuild +++ b/arch/sparc/include/asm/Kbuild @@ -19,5 +19,6 @@ generic-y += msi.h generic-y += preempt.h generic-y += rwsem.h generic-y += serial.h +generic-y += simd.h generic-y += trace_clock.h generic-y += word-at-a-time.h diff --git a/arch/um/include/asm/Kbuild b/arch/um/include/asm/Kbuild index 00bcbe2326d9..a4dfffb2c22b 100644 --- a/arch/um/include/asm/Kbuild +++ b/arch/um/include/asm/Kbuild @@ -20,6 +20,7 @@ generic-y += param.h generic-y += pci.h generic-y += percpu.h generic-y += preempt.h +generic-y += simd.h generic-y += switch_to.h generic-y += topology.h generic-y += trace_clock.h diff --git a/arch/unicore32/include/asm/Kbuild b/arch/unicore32/include/asm/Kbuild index 1d1544b6ca74..c428158a3a19 100644 --- 
a/arch/unicore32/include/asm/Kbuild +++ b/arch/unicore32/include/asm/Kbuild @@ -29,6 +29,7 @@ generic-y += sections.h generic-y += segment.h generic-y += serial.h generic-y += shmparam.h +generic-y += simd.h generic-y += sizes.h generic-y += syscalls.h generic-y += topology.h diff --git a/arch/x86/include/asm/simd.h b/arch/x86/include/asm/simd.h index a341c878e977..4aad7f158dcb 100644 --- a/arch/x86/include/asm/simd.h +++ b/arch/x86/include/asm/simd.h @@ -1,4 +1,11 @@ -/* SPDX-License-Identifier: GPL-2.0 */ +/* SPDX-License-Identifier: GPL-2.0 + * + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. + */ + +#include +#ifndef _ASM_SIMD_H +#define _ASM_SIMD_H #include @@ -10,3 +17,38 @@ static __must_check inline bool may_use_simd(void) { return irq_fpu_usable(); } + +static inline void simd_get(simd_context_t *ctx) +{ +#if !defined(CONFIG_UML) + *ctx = may_use_simd() ? HAVE_FULL_SIMD : HAVE_NO_SIMD; +#else + *ctx = HAVE_NO_SIMD; +#endif +} + +static inline void simd_put(simd_context_t *ctx) +{ +#if !defined(CONFIG_UML) + if (*ctx & HAVE_SIMD_IN_USE) + kernel_fpu_end(); +#endif + *ctx = HAVE_NO_SIMD; +} + +static __must_check inline bool simd_use(simd_context_t *ctx) +{ +#if !defined(CONFIG_UML) + if (!(*ctx & HAVE_FULL_SIMD)) + return false; + if (*ctx & HAVE_SIMD_IN_USE) + return true; + kernel_fpu_begin(); + *ctx |= HAVE_SIMD_IN_USE; + return true; +#else + return false; +#endif +} + +#endif /* _ASM_SIMD_H */ diff --git a/arch/xtensa/include/asm/Kbuild b/arch/xtensa/include/asm/Kbuild index 42b6cb3d16f7..318cad148bea 100644 --- a/arch/xtensa/include/asm/Kbuild +++ b/arch/xtensa/include/asm/Kbuild @@ -26,6 +26,7 @@ generic-y += qrwlock.h generic-y += qspinlock.h generic-y += rwsem.h generic-y += sections.h +generic-y += simd.h generic-y += socket.h generic-y += topology.h generic-y += trace_clock.h diff --git a/include/asm-generic/simd.h b/include/asm-generic/simd.h index d0343d58a74a..b3dd61ac010e 100644 --- a/include/asm-generic/simd.h +++ b/include/asm-generic/simd.h @@ -1,5 +1,9 @@ /* SPDX-License-Identifier: GPL-2.0 */ +#include +#ifndef _ASM_SIMD_H +#define _ASM_SIMD_H + #include /* @@ -13,3 +17,19 @@ static __must_check inline bool may_use_simd(void) { return !in_interrupt(); } + +static inline void simd_get(simd_context_t *ctx) +{ + *ctx = HAVE_NO_SIMD; +} + +static inline void simd_put(simd_context_t *ctx) +{ +} + +static __must_check inline bool simd_use(simd_context_t *ctx) +{ + return false; +} + +#endif /* _ASM_SIMD_H */ diff --git a/include/linux/simd.h b/include/linux/simd.h new file mode 100644 index 000000000000..4e0b8a9bdc14 --- /dev/null +++ b/include/linux/simd.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0 + * + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. 
+ */ + +#ifndef _SIMD_H +#define _SIMD_H + +typedef enum { + HAVE_NO_SIMD = 1 << 0, + HAVE_FULL_SIMD = 1 << 1, + HAVE_SIMD_IN_USE = 1 << 31 +} simd_context_t; + +#define DONT_USE_SIMD ((simd_context_t []){ HAVE_NO_SIMD }) + +#include +#include + +static inline bool simd_relax(simd_context_t *ctx) +{ +#ifdef CONFIG_PREEMPT + if ((*ctx & HAVE_SIMD_IN_USE) && need_resched()) { + simd_put(ctx); + simd_get(ctx); + return true; + } +#endif + return false; +} + +#endif /* _SIMD_H */
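Putting the two fragments from the commit message of this patch together, a caller built on these helpers would look roughly like the sketch below. struct item, get_item_from_queue(), LARGE_FOR_SIMD, wild_simd_code() and boring_scalar_code() are illustrative placeholders carried over from the commit message; only the simd_get()/simd_put()/simd_relax()/simd_use() helpers are what the patch actually adds.

#include <linux/types.h>
#include <linux/simd.h>

/* Hypothetical work item and helpers, standing in for a real caller: */
struct item {
	size_t len;
	u8 *data;
};
extern struct item *get_item_from_queue(void);
extern void wild_simd_code(struct item *item);
extern void boring_scalar_code(struct item *item);
#define LARGE_FOR_SIMD 256	/* arbitrary cut-off for this example */

static void encrypt_item(struct item *item, simd_context_t *simd_context)
{
	/* simd_use() performs the expensive kernel_fpu_begin()/
	 * kernel_neon_begin() lazily, the first time it is called on this
	 * context, so a context that is never used costs nothing. */
	if (item->len > LARGE_FOR_SIMD && simd_use(simd_context))
		wild_simd_code(item);
	else
		boring_scalar_code(item);
}

static void encrypt_queue(void)
{
	simd_context_t simd_context;
	struct item *item;

	simd_get(&simd_context);	/* record once whether SIMD may be used here */
	while ((item = get_item_from_queue()) != NULL) {
		encrypt_item(item, &simd_context);
		simd_relax(&simd_context);	/* briefly give the FPU back if a reschedule is pending */
	}
	simd_put(&simd_context);	/* ends FPU/NEON use if simd_use() ever began it */
}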
From patchwork Fri Mar 22 06:29:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 160844 Subject: [PATCH 11/17] zinc: BLAKE2s generic C implementation and selftest References: <20190322062740.nrwfx2rvmt7lzotj@gondor.apana.org.au> To: Linus Torvalds , "David S. Miller" , "Jason A. Donenfeld" , Eric Biggers , Ard Biesheuvel , Linux Crypto Mailing List , linux-fscrypt@vger.kernel.org, linux-arm-kernel@lists.infradead.org, LKML , Paul Crowley , Greg Kaiser , Samuel Neves , Tomer Ashur , Martin Willi Message-Id: From: Herbert Xu Date: Fri, 22 Mar 2019 14:29:50 +0800 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Jason A. Donenfeld The C implementation was originally based on Samuel Neves' public domain reference implementation but has since been heavily modified for the kernel. We're able to do compile-time optimizations by moving some scaffolding around the final function into the header file. Information: https://blake2.net/ Signed-off-by: Jason A. Donenfeld Signed-off-by: Samuel Neves Co-developed-by: Samuel Neves Cc: Jean-Philippe Aumasson Cc: Andy Lutomirski Cc: Greg KH Cc: Andrew Morton Cc: Linus Torvalds Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org Signed-off-by: Herbert Xu --- include/zinc/blake2s.h | 56 + lib/zinc/Kconfig | 3 lib/zinc/Makefile | 3 lib/zinc/blake2s/blake2s.c | 295 ++ lib/zinc/selftest/blake2s.c | 2090 ++++++++++++++++++++++++++++++++++++++++++++ 5 files changed, 2447 insertions(+) diff --git a/include/zinc/blake2s.h b/include/zinc/blake2s.h new file mode 100644 index 000000000000..701a08ba47c1 --- /dev/null +++ b/include/zinc/blake2s.h @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: GPL-2.0 OR MIT */ +/* + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. 
+ */ + +#ifndef _ZINC_BLAKE2S_H +#define _ZINC_BLAKE2S_H + +#include +#include +#include + +enum blake2s_lengths { + BLAKE2S_BLOCK_SIZE = 64, + BLAKE2S_HASH_SIZE = 32, + BLAKE2S_KEY_SIZE = 32 +}; + +struct blake2s_state { + u32 h[8]; + u32 t[2]; + u32 f[2]; + u8 buf[BLAKE2S_BLOCK_SIZE]; + size_t buflen; + u8 last_node; +}; + +void blake2s_init(struct blake2s_state *state, const size_t outlen); +void blake2s_init_key(struct blake2s_state *state, const size_t outlen, + const void *key, const size_t keylen); +void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen); +void blake2s_final(struct blake2s_state *state, u8 *out, const size_t outlen); + +static inline void blake2s(u8 *out, const u8 *in, const u8 *key, + const size_t outlen, const size_t inlen, + const size_t keylen) +{ + struct blake2s_state state; + + WARN_ON(IS_ENABLED(DEBUG) && ((!in && inlen > 0) || !out || !outlen || + outlen > BLAKE2S_HASH_SIZE || keylen > BLAKE2S_KEY_SIZE || + (!key && keylen))); + + if (keylen) + blake2s_init_key(&state, outlen, key, keylen); + else + blake2s_init(&state, outlen); + + blake2s_update(&state, in, inlen); + blake2s_final(&state, out, outlen); +} + +void blake2s_hmac(u8 *out, const u8 *in, const u8 *key, const size_t outlen, + const size_t inlen, const size_t keylen); + +#endif /* _ZINC_BLAKE2S_H */ diff --git a/lib/zinc/Kconfig b/lib/zinc/Kconfig index 76a3bc0b073a..8c25042ac014 100644 --- a/lib/zinc/Kconfig +++ b/lib/zinc/Kconfig @@ -16,6 +16,9 @@ config ZINC_CHACHA20POLY1305 select ZINC_POLY1305 select CRYPTO_BLKCIPHER +config ZINC_BLAKE2S + tristate + config ZINC_SELFTEST bool "Zinc cryptography library self-tests" help diff --git a/lib/zinc/Makefile b/lib/zinc/Makefile index 300996a4cf2f..c1cf678e4068 100644 --- a/lib/zinc/Makefile +++ b/lib/zinc/Makefile @@ -10,3 +10,6 @@ obj-$(CONFIG_ZINC_POLY1305) += zinc_poly1305.o zinc_chacha20poly1305-y := chacha20poly1305.o obj-$(CONFIG_ZINC_CHACHA20POLY1305) += zinc_chacha20poly1305.o + +zinc_blake2s-y := blake2s/blake2s.o +obj-$(CONFIG_ZINC_BLAKE2S) += zinc_blake2s.o diff --git a/lib/zinc/blake2s/blake2s.c b/lib/zinc/blake2s/blake2s.c new file mode 100644 index 000000000000..58d7e9378bd4 --- /dev/null +++ b/lib/zinc/blake2s/blake2s.c @@ -0,0 +1,295 @@ +// SPDX-License-Identifier: GPL-2.0 OR MIT +/* + * Copyright (C) 2012 Samuel Neves . All Rights Reserved. + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. + * + * This is an implementation of the BLAKE2s hash and PRF functions. 
+ * + * Information: https://blake2.net/ + * + */ + +#include +#include "../selftest/run.h" + +#include +#include +#include +#include +#include +#include +#include + +typedef union { + struct { + u8 digest_length; + u8 key_length; + u8 fanout; + u8 depth; + u32 leaf_length; + u32 node_offset; + u16 xof_length; + u8 node_depth; + u8 inner_length; + u8 salt[8]; + u8 personal[8]; + }; + __le32 words[8]; +} __packed blake2s_param; + +static const u32 blake2s_iv[8] = { + 0x6A09E667UL, 0xBB67AE85UL, 0x3C6EF372UL, 0xA54FF53AUL, + 0x510E527FUL, 0x9B05688CUL, 0x1F83D9ABUL, 0x5BE0CD19UL +}; + +static const u8 blake2s_sigma[10][16] = { + { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 }, + { 14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3 }, + { 11, 8, 12, 0, 5, 2, 15, 13, 10, 14, 3, 6, 7, 1, 9, 4 }, + { 7, 9, 3, 1, 13, 12, 11, 14, 2, 6, 5, 10, 4, 0, 15, 8 }, + { 9, 0, 5, 7, 2, 4, 10, 15, 14, 1, 11, 12, 6, 8, 3, 13 }, + { 2, 12, 6, 10, 0, 11, 8, 3, 4, 13, 7, 5, 15, 14, 1, 9 }, + { 12, 5, 1, 15, 14, 13, 4, 10, 0, 7, 6, 3, 9, 2, 8, 11 }, + { 13, 11, 7, 14, 12, 1, 3, 9, 5, 0, 15, 4, 8, 6, 2, 10 }, + { 6, 15, 14, 9, 11, 3, 0, 8, 12, 2, 13, 7, 1, 4, 10, 5 }, + { 10, 2, 8, 4, 7, 6, 1, 5, 15, 11, 9, 14, 3, 12, 13, 0 }, +}; + +static inline void blake2s_set_lastblock(struct blake2s_state *state) +{ + if (state->last_node) + state->f[1] = -1; + state->f[0] = -1; +} + +static inline void blake2s_increment_counter(struct blake2s_state *state, + const u32 inc) +{ + state->t[0] += inc; + state->t[1] += (state->t[0] < inc); +} + +static inline void blake2s_init_param(struct blake2s_state *state, + const blake2s_param *param) +{ + int i; + + memset(state, 0, sizeof(*state)); + for (i = 0; i < 8; ++i) + state->h[i] = blake2s_iv[i] ^ le32_to_cpu(param->words[i]); +} + +void blake2s_init(struct blake2s_state *state, const size_t outlen) +{ + blake2s_param param __aligned(__alignof__(u32)) = { + .digest_length = outlen, + .fanout = 1, + .depth = 1 + }; + + WARN_ON(IS_ENABLED(DEBUG) && (!outlen || outlen > BLAKE2S_HASH_SIZE)); + blake2s_init_param(state, ¶m); +} +EXPORT_SYMBOL(blake2s_init); + +void blake2s_init_key(struct blake2s_state *state, const size_t outlen, + const void *key, const size_t keylen) +{ + blake2s_param param = { .digest_length = outlen, + .key_length = keylen, + .fanout = 1, + .depth = 1 }; + u8 block[BLAKE2S_BLOCK_SIZE] = { 0 }; + + WARN_ON(IS_ENABLED(DEBUG) && (!outlen || outlen > BLAKE2S_HASH_SIZE || + !key || !keylen || keylen > BLAKE2S_KEY_SIZE)); + blake2s_init_param(state, ¶m); + memcpy(block, key, keylen); + blake2s_update(state, block, BLAKE2S_BLOCK_SIZE); + memzero_explicit(block, BLAKE2S_BLOCK_SIZE); +} +EXPORT_SYMBOL(blake2s_init_key); + +static bool *const blake2s_nobs[] __initconst = { }; +static void __init blake2s_fpu_init(void) +{ +} +static inline bool blake2s_compress_arch(struct blake2s_state *state, + const u8 *block, size_t nblocks, + const u32 inc) +{ + return false; +} + +static inline void blake2s_compress(struct blake2s_state *state, + const u8 *block, size_t nblocks, + const u32 inc) +{ + u32 m[16]; + u32 v[16]; + int i; + + WARN_ON(IS_ENABLED(DEBUG) && + (nblocks > 1 && inc != BLAKE2S_BLOCK_SIZE)); + + if (blake2s_compress_arch(state, block, nblocks, inc)) + return; + + while (nblocks > 0) { + blake2s_increment_counter(state, inc); + memcpy(m, block, BLAKE2S_BLOCK_SIZE); + le32_to_cpu_array(m, ARRAY_SIZE(m)); + memcpy(v, state->h, 32); + v[ 8] = blake2s_iv[0]; + v[ 9] = blake2s_iv[1]; + v[10] = blake2s_iv[2]; + v[11] = blake2s_iv[3]; + v[12] = blake2s_iv[4] ^ 
state->t[0]; + v[13] = blake2s_iv[5] ^ state->t[1]; + v[14] = blake2s_iv[6] ^ state->f[0]; + v[15] = blake2s_iv[7] ^ state->f[1]; + +#define G(r, i, a, b, c, d) do { \ + a += b + m[blake2s_sigma[r][2 * i + 0]]; \ + d = ror32(d ^ a, 16); \ + c += d; \ + b = ror32(b ^ c, 12); \ + a += b + m[blake2s_sigma[r][2 * i + 1]]; \ + d = ror32(d ^ a, 8); \ + c += d; \ + b = ror32(b ^ c, 7); \ +} while (0) + +#define ROUND(r) do { \ + G(r, 0, v[0], v[ 4], v[ 8], v[12]); \ + G(r, 1, v[1], v[ 5], v[ 9], v[13]); \ + G(r, 2, v[2], v[ 6], v[10], v[14]); \ + G(r, 3, v[3], v[ 7], v[11], v[15]); \ + G(r, 4, v[0], v[ 5], v[10], v[15]); \ + G(r, 5, v[1], v[ 6], v[11], v[12]); \ + G(r, 6, v[2], v[ 7], v[ 8], v[13]); \ + G(r, 7, v[3], v[ 4], v[ 9], v[14]); \ +} while (0) + ROUND(0); + ROUND(1); + ROUND(2); + ROUND(3); + ROUND(4); + ROUND(5); + ROUND(6); + ROUND(7); + ROUND(8); + ROUND(9); + +#undef G +#undef ROUND + + for (i = 0; i < 8; ++i) + state->h[i] ^= v[i] ^ v[i + 8]; + + block += BLAKE2S_BLOCK_SIZE; + --nblocks; + } +} + +void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen) +{ + const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen; + + if (unlikely(!inlen)) + return; + if (inlen > fill) { + memcpy(state->buf + state->buflen, in, fill); + blake2s_compress(state, state->buf, 1, BLAKE2S_BLOCK_SIZE); + state->buflen = 0; + in += fill; + inlen -= fill; + } + if (inlen > BLAKE2S_BLOCK_SIZE) { + const size_t nblocks = + (inlen + BLAKE2S_BLOCK_SIZE - 1) / BLAKE2S_BLOCK_SIZE; + /* Hash one less (full) block than strictly possible */ + blake2s_compress(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE); + in += BLAKE2S_BLOCK_SIZE * (nblocks - 1); + inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1); + } + memcpy(state->buf + state->buflen, in, inlen); + state->buflen += inlen; +} +EXPORT_SYMBOL(blake2s_update); + +void blake2s_final(struct blake2s_state *state, u8 *out, const size_t outlen) +{ + WARN_ON(IS_ENABLED(DEBUG) && + (!out || !outlen || outlen > BLAKE2S_HASH_SIZE)); + blake2s_set_lastblock(state); + memset(state->buf + state->buflen, 0, + BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */ + blake2s_compress(state, state->buf, 1, state->buflen); + cpu_to_le32_array(state->h, ARRAY_SIZE(state->h)); + memcpy(out, state->h, outlen); + memzero_explicit(state, sizeof(*state)); +} +EXPORT_SYMBOL(blake2s_final); + +void blake2s_hmac(u8 *out, const u8 *in, const u8 *key, const size_t outlen, + const size_t inlen, const size_t keylen) +{ + struct blake2s_state state; + u8 x_key[BLAKE2S_BLOCK_SIZE] __aligned(__alignof__(u32)) = { 0 }; + u8 i_hash[BLAKE2S_HASH_SIZE] __aligned(__alignof__(u32)); + int i; + + if (keylen > BLAKE2S_BLOCK_SIZE) { + blake2s_init(&state, BLAKE2S_HASH_SIZE); + blake2s_update(&state, key, keylen); + blake2s_final(&state, x_key, BLAKE2S_HASH_SIZE); + } else + memcpy(x_key, key, keylen); + + for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i) + x_key[i] ^= 0x36; + + blake2s_init(&state, BLAKE2S_HASH_SIZE); + blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE); + blake2s_update(&state, in, inlen); + blake2s_final(&state, i_hash, BLAKE2S_HASH_SIZE); + + for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i) + x_key[i] ^= 0x5c ^ 0x36; + + blake2s_init(&state, BLAKE2S_HASH_SIZE); + blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE); + blake2s_update(&state, i_hash, BLAKE2S_HASH_SIZE); + blake2s_final(&state, i_hash, BLAKE2S_HASH_SIZE); + + memcpy(out, i_hash, outlen); + memzero_explicit(x_key, BLAKE2S_BLOCK_SIZE); + memzero_explicit(i_hash, BLAKE2S_HASH_SIZE); +} +EXPORT_SYMBOL(blake2s_hmac); + +#include 
"../selftest/blake2s.c" + +static bool nosimd __initdata = false; + +static int __init mod_init(void) +{ + if (!nosimd) + blake2s_fpu_init(); + if (!selftest_run("blake2s", blake2s_selftest, blake2s_nobs, + ARRAY_SIZE(blake2s_nobs))) + return -ENOTRECOVERABLE; + return 0; +} + +static void __exit mod_exit(void) +{ +} + +module_param(nosimd, bool, 0); +module_init(mod_init); +module_exit(mod_exit); +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("BLAKE2s hash function"); +MODULE_AUTHOR("Jason A. Donenfeld "); diff --git a/lib/zinc/selftest/blake2s.c b/lib/zinc/selftest/blake2s.c new file mode 100644 index 000000000000..7325a42334aa --- /dev/null +++ b/lib/zinc/selftest/blake2s.c @@ -0,0 +1,2090 @@ +// SPDX-License-Identifier: GPL-2.0 OR MIT +/* + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. + */ + +static const u8 blake2s_testvecs[][BLAKE2S_HASH_SIZE] __initconst = { + { 0x69, 0x21, 0x7a, 0x30, 0x79, 0x90, 0x80, 0x94, + 0xe1, 0x11, 0x21, 0xd0, 0x42, 0x35, 0x4a, 0x7c, + 0x1f, 0x55, 0xb6, 0x48, 0x2c, 0xa1, 0xa5, 0x1e, + 0x1b, 0x25, 0x0d, 0xfd, 0x1e, 0xd0, 0xee, 0xf9 }, + { 0xe3, 0x4d, 0x74, 0xdb, 0xaf, 0x4f, 0xf4, 0xc6, + 0xab, 0xd8, 0x71, 0xcc, 0x22, 0x04, 0x51, 0xd2, + 0xea, 0x26, 0x48, 0x84, 0x6c, 0x77, 0x57, 0xfb, + 0xaa, 0xc8, 0x2f, 0xe5, 0x1a, 0xd6, 0x4b, 0xea }, + { 0xdd, 0xad, 0x9a, 0xb1, 0x5d, 0xac, 0x45, 0x49, + 0xba, 0x42, 0xf4, 0x9d, 0x26, 0x24, 0x96, 0xbe, + 0xf6, 0xc0, 0xba, 0xe1, 0xdd, 0x34, 0x2a, 0x88, + 0x08, 0xf8, 0xea, 0x26, 0x7c, 0x6e, 0x21, 0x0c }, + { 0xe8, 0xf9, 0x1c, 0x6e, 0xf2, 0x32, 0xa0, 0x41, + 0x45, 0x2a, 0xb0, 0xe1, 0x49, 0x07, 0x0c, 0xdd, + 0x7d, 0xd1, 0x76, 0x9e, 0x75, 0xb3, 0xa5, 0x92, + 0x1b, 0xe3, 0x78, 0x76, 0xc4, 0x5c, 0x99, 0x00 }, + { 0x0c, 0xc7, 0x0e, 0x00, 0x34, 0x8b, 0x86, 0xba, + 0x29, 0x44, 0xd0, 0xc3, 0x20, 0x38, 0xb2, 0x5c, + 0x55, 0x58, 0x4f, 0x90, 0xdf, 0x23, 0x04, 0xf5, + 0x5f, 0xa3, 0x32, 0xaf, 0x5f, 0xb0, 0x1e, 0x20 }, + { 0xec, 0x19, 0x64, 0x19, 0x10, 0x87, 0xa4, 0xfe, + 0x9d, 0xf1, 0xc7, 0x95, 0x34, 0x2a, 0x02, 0xff, + 0xc1, 0x91, 0xa5, 0xb2, 0x51, 0x76, 0x48, 0x56, + 0xae, 0x5b, 0x8b, 0x57, 0x69, 0xf0, 0xc6, 0xcd }, + { 0xe1, 0xfa, 0x51, 0x61, 0x8d, 0x7d, 0xf4, 0xeb, + 0x70, 0xcf, 0x0d, 0x5a, 0x9e, 0x90, 0x6f, 0x80, + 0x6e, 0x9d, 0x19, 0xf7, 0xf4, 0xf0, 0x1e, 0x3b, + 0x62, 0x12, 0x88, 0xe4, 0x12, 0x04, 0x05, 0xd6 }, + { 0x59, 0x80, 0x01, 0xfa, 0xfb, 0xe8, 0xf9, 0x4e, + 0xc6, 0x6d, 0xc8, 0x27, 0xd0, 0x12, 0xcf, 0xcb, + 0xba, 0x22, 0x28, 0x56, 0x9f, 0x44, 0x8e, 0x89, + 0xea, 0x22, 0x08, 0xc8, 0xbf, 0x76, 0x92, 0x93 }, + { 0xc7, 0xe8, 0x87, 0xb5, 0x46, 0x62, 0x36, 0x35, + 0xe9, 0x3e, 0x04, 0x95, 0x59, 0x8f, 0x17, 0x26, + 0x82, 0x19, 0x96, 0xc2, 0x37, 0x77, 0x05, 0xb9, + 0x3a, 0x1f, 0x63, 0x6f, 0x87, 0x2b, 0xfa, 0x2d }, + { 0xc3, 0x15, 0xa4, 0x37, 0xdd, 0x28, 0x06, 0x2a, + 0x77, 0x0d, 0x48, 0x19, 0x67, 0x13, 0x6b, 0x1b, + 0x5e, 0xb8, 0x8b, 0x21, 0xee, 0x53, 0xd0, 0x32, + 0x9c, 0x58, 0x97, 0x12, 0x6e, 0x9d, 0xb0, 0x2c }, + { 0xbb, 0x47, 0x3d, 0xed, 0xdc, 0x05, 0x5f, 0xea, + 0x62, 0x28, 0xf2, 0x07, 0xda, 0x57, 0x53, 0x47, + 0xbb, 0x00, 0x40, 0x4c, 0xd3, 0x49, 0xd3, 0x8c, + 0x18, 0x02, 0x63, 0x07, 0xa2, 0x24, 0xcb, 0xff }, + { 0x68, 0x7e, 0x18, 0x73, 0xa8, 0x27, 0x75, 0x91, + 0xbb, 0x33, 0xd9, 0xad, 0xf9, 0xa1, 0x39, 0x12, + 0xef, 0xef, 0xe5, 0x57, 0xca, 0xfc, 0x39, 0xa7, + 0x95, 0x26, 0x23, 0xe4, 0x72, 0x55, 0xf1, 0x6d }, + { 0x1a, 0xc7, 0xba, 0x75, 0x4d, 0x6e, 0x2f, 0x94, + 0xe0, 0xe8, 0x6c, 0x46, 0xbf, 0xb2, 0x62, 0xab, + 0xbb, 0x74, 0xf4, 0x50, 0xef, 0x45, 0x6d, 0x6b, + 0x4d, 0x97, 0xaa, 0x80, 0xce, 0x6d, 0xa7, 0x67 }, + 
{ 0x01, 0x2c, 0x97, 0x80, 0x96, 0x14, 0x81, 0x6b, + 0x5d, 0x94, 0x94, 0x47, 0x7d, 0x4b, 0x68, 0x7d, + 0x15, 0xb9, 0x6e, 0xb6, 0x9c, 0x0e, 0x80, 0x74, + 0xa8, 0x51, 0x6f, 0x31, 0x22, 0x4b, 0x5c, 0x98 }, + { 0x91, 0xff, 0xd2, 0x6c, 0xfa, 0x4d, 0xa5, 0x13, + 0x4c, 0x7e, 0xa2, 0x62, 0xf7, 0x88, 0x9c, 0x32, + 0x9f, 0x61, 0xf6, 0xa6, 0x57, 0x22, 0x5c, 0xc2, + 0x12, 0xf4, 0x00, 0x56, 0xd9, 0x86, 0xb3, 0xf4 }, + { 0xd9, 0x7c, 0x82, 0x8d, 0x81, 0x82, 0xa7, 0x21, + 0x80, 0xa0, 0x6a, 0x78, 0x26, 0x83, 0x30, 0x67, + 0x3f, 0x7c, 0x4e, 0x06, 0x35, 0x94, 0x7c, 0x04, + 0xc0, 0x23, 0x23, 0xfd, 0x45, 0xc0, 0xa5, 0x2d }, + { 0xef, 0xc0, 0x4c, 0xdc, 0x39, 0x1c, 0x7e, 0x91, + 0x19, 0xbd, 0x38, 0x66, 0x8a, 0x53, 0x4e, 0x65, + 0xfe, 0x31, 0x03, 0x6d, 0x6a, 0x62, 0x11, 0x2e, + 0x44, 0xeb, 0xeb, 0x11, 0xf9, 0xc5, 0x70, 0x80 }, + { 0x99, 0x2c, 0xf5, 0xc0, 0x53, 0x44, 0x2a, 0x5f, + 0xbc, 0x4f, 0xaf, 0x58, 0x3e, 0x04, 0xe5, 0x0b, + 0xb7, 0x0d, 0x2f, 0x39, 0xfb, 0xb6, 0xa5, 0x03, + 0xf8, 0x9e, 0x56, 0xa6, 0x3e, 0x18, 0x57, 0x8a }, + { 0x38, 0x64, 0x0e, 0x9f, 0x21, 0x98, 0x3e, 0x67, + 0xb5, 0x39, 0xca, 0xcc, 0xae, 0x5e, 0xcf, 0x61, + 0x5a, 0xe2, 0x76, 0x4f, 0x75, 0xa0, 0x9c, 0x9c, + 0x59, 0xb7, 0x64, 0x83, 0xc1, 0xfb, 0xc7, 0x35 }, + { 0x21, 0x3d, 0xd3, 0x4c, 0x7e, 0xfe, 0x4f, 0xb2, + 0x7a, 0x6b, 0x35, 0xf6, 0xb4, 0x00, 0x0d, 0x1f, + 0xe0, 0x32, 0x81, 0xaf, 0x3c, 0x72, 0x3e, 0x5c, + 0x9f, 0x94, 0x74, 0x7a, 0x5f, 0x31, 0xcd, 0x3b }, + { 0xec, 0x24, 0x6e, 0xee, 0xb9, 0xce, 0xd3, 0xf7, + 0xad, 0x33, 0xed, 0x28, 0x66, 0x0d, 0xd9, 0xbb, + 0x07, 0x32, 0x51, 0x3d, 0xb4, 0xe2, 0xfa, 0x27, + 0x8b, 0x60, 0xcd, 0xe3, 0x68, 0x2a, 0x4c, 0xcd }, + { 0xac, 0x9b, 0x61, 0xd4, 0x46, 0x64, 0x8c, 0x30, + 0x05, 0xd7, 0x89, 0x2b, 0xf3, 0xa8, 0x71, 0x9f, + 0x4c, 0x81, 0x81, 0xcf, 0xdc, 0xbc, 0x2b, 0x79, + 0xfe, 0xf1, 0x0a, 0x27, 0x9b, 0x91, 0x10, 0x95 }, + { 0x7b, 0xf8, 0xb2, 0x29, 0x59, 0xe3, 0x4e, 0x3a, + 0x43, 0xf7, 0x07, 0x92, 0x23, 0xe8, 0x3a, 0x97, + 0x54, 0x61, 0x7d, 0x39, 0x1e, 0x21, 0x3d, 0xfd, + 0x80, 0x8e, 0x41, 0xb9, 0xbe, 0xad, 0x4c, 0xe7 }, + { 0x68, 0xd4, 0xb5, 0xd4, 0xfa, 0x0e, 0x30, 0x2b, + 0x64, 0xcc, 0xc5, 0xaf, 0x79, 0x29, 0x13, 0xac, + 0x4c, 0x88, 0xec, 0x95, 0xc0, 0x7d, 0xdf, 0x40, + 0x69, 0x42, 0x56, 0xeb, 0x88, 0xce, 0x9f, 0x3d }, + { 0xb2, 0xc2, 0x42, 0x0f, 0x05, 0xf9, 0xab, 0xe3, + 0x63, 0x15, 0x91, 0x93, 0x36, 0xb3, 0x7e, 0x4e, + 0x0f, 0xa3, 0x3f, 0xf7, 0xe7, 0x6a, 0x49, 0x27, + 0x67, 0x00, 0x6f, 0xdb, 0x5d, 0x93, 0x54, 0x62 }, + { 0x13, 0x4f, 0x61, 0xbb, 0xd0, 0xbb, 0xb6, 0x9a, + 0xed, 0x53, 0x43, 0x90, 0x45, 0x51, 0xa3, 0xe6, + 0xc1, 0xaa, 0x7d, 0xcd, 0xd7, 0x7e, 0x90, 0x3e, + 0x70, 0x23, 0xeb, 0x7c, 0x60, 0x32, 0x0a, 0xa7 }, + { 0x46, 0x93, 0xf9, 0xbf, 0xf7, 0xd4, 0xf3, 0x98, + 0x6a, 0x7d, 0x17, 0x6e, 0x6e, 0x06, 0xf7, 0x2a, + 0xd1, 0x49, 0x0d, 0x80, 0x5c, 0x99, 0xe2, 0x53, + 0x47, 0xb8, 0xde, 0x77, 0xb4, 0xdb, 0x6d, 0x9b }, + { 0x85, 0x3e, 0x26, 0xf7, 0x41, 0x95, 0x3b, 0x0f, + 0xd5, 0xbd, 0xb4, 0x24, 0xe8, 0xab, 0x9e, 0x8b, + 0x37, 0x50, 0xea, 0xa8, 0xef, 0x61, 0xe4, 0x79, + 0x02, 0xc9, 0x1e, 0x55, 0x4e, 0x9c, 0x73, 0xb9 }, + { 0xf7, 0xde, 0x53, 0x63, 0x61, 0xab, 0xaa, 0x0e, + 0x15, 0x81, 0x56, 0xcf, 0x0e, 0xa4, 0xf6, 0x3a, + 0x99, 0xb5, 0xe4, 0x05, 0x4f, 0x8f, 0xa4, 0xc9, + 0xd4, 0x5f, 0x62, 0x85, 0xca, 0xd5, 0x56, 0x94 }, + { 0x4c, 0x23, 0x06, 0x08, 0x86, 0x0a, 0x99, 0xae, + 0x8d, 0x7b, 0xd5, 0xc2, 0xcc, 0x17, 0xfa, 0x52, + 0x09, 0x6b, 0x9a, 0x61, 0xbe, 0xdb, 0x17, 0xcb, + 0x76, 0x17, 0x86, 0x4a, 0xd2, 0x9c, 0xa7, 0xa6 }, + { 0xae, 0xb9, 0x20, 0xea, 0x87, 0x95, 0x2d, 0xad, + 0xb1, 0xfb, 0x75, 0x92, 0x91, 
0xe3, 0x38, 0x81, + 0x39, 0xa8, 0x72, 0x86, 0x50, 0x01, 0x88, 0x6e, + 0xd8, 0x47, 0x52, 0xe9, 0x3c, 0x25, 0x0c, 0x2a }, + { 0xab, 0xa4, 0xad, 0x9b, 0x48, 0x0b, 0x9d, 0xf3, + 0xd0, 0x8c, 0xa5, 0xe8, 0x7b, 0x0c, 0x24, 0x40, + 0xd4, 0xe4, 0xea, 0x21, 0x22, 0x4c, 0x2e, 0xb4, + 0x2c, 0xba, 0xe4, 0x69, 0xd0, 0x89, 0xb9, 0x31 }, + { 0x05, 0x82, 0x56, 0x07, 0xd7, 0xfd, 0xf2, 0xd8, + 0x2e, 0xf4, 0xc3, 0xc8, 0xc2, 0xae, 0xa9, 0x61, + 0xad, 0x98, 0xd6, 0x0e, 0xdf, 0xf7, 0xd0, 0x18, + 0x98, 0x3e, 0x21, 0x20, 0x4c, 0x0d, 0x93, 0xd1 }, + { 0xa7, 0x42, 0xf8, 0xb6, 0xaf, 0x82, 0xd8, 0xa6, + 0xca, 0x23, 0x57, 0xc5, 0xf1, 0xcf, 0x91, 0xde, + 0xfb, 0xd0, 0x66, 0x26, 0x7d, 0x75, 0xc0, 0x48, + 0xb3, 0x52, 0x36, 0x65, 0x85, 0x02, 0x59, 0x62 }, + { 0x2b, 0xca, 0xc8, 0x95, 0x99, 0x00, 0x0b, 0x42, + 0xc9, 0x5a, 0xe2, 0x38, 0x35, 0xa7, 0x13, 0x70, + 0x4e, 0xd7, 0x97, 0x89, 0xc8, 0x4f, 0xef, 0x14, + 0x9a, 0x87, 0x4f, 0xf7, 0x33, 0xf0, 0x17, 0xa2 }, + { 0xac, 0x1e, 0xd0, 0x7d, 0x04, 0x8f, 0x10, 0x5a, + 0x9e, 0x5b, 0x7a, 0xb8, 0x5b, 0x09, 0xa4, 0x92, + 0xd5, 0xba, 0xff, 0x14, 0xb8, 0xbf, 0xb0, 0xe9, + 0xfd, 0x78, 0x94, 0x86, 0xee, 0xa2, 0xb9, 0x74 }, + { 0xe4, 0x8d, 0x0e, 0xcf, 0xaf, 0x49, 0x7d, 0x5b, + 0x27, 0xc2, 0x5d, 0x99, 0xe1, 0x56, 0xcb, 0x05, + 0x79, 0xd4, 0x40, 0xd6, 0xe3, 0x1f, 0xb6, 0x24, + 0x73, 0x69, 0x6d, 0xbf, 0x95, 0xe0, 0x10, 0xe4 }, + { 0x12, 0xa9, 0x1f, 0xad, 0xf8, 0xb2, 0x16, 0x44, + 0xfd, 0x0f, 0x93, 0x4f, 0x3c, 0x4a, 0x8f, 0x62, + 0xba, 0x86, 0x2f, 0xfd, 0x20, 0xe8, 0xe9, 0x61, + 0x15, 0x4c, 0x15, 0xc1, 0x38, 0x84, 0xed, 0x3d }, + { 0x7c, 0xbe, 0xe9, 0x6e, 0x13, 0x98, 0x97, 0xdc, + 0x98, 0xfb, 0xef, 0x3b, 0xe8, 0x1a, 0xd4, 0xd9, + 0x64, 0xd2, 0x35, 0xcb, 0x12, 0x14, 0x1f, 0xb6, + 0x67, 0x27, 0xe6, 0xe5, 0xdf, 0x73, 0xa8, 0x78 }, + { 0xeb, 0xf6, 0x6a, 0xbb, 0x59, 0x7a, 0xe5, 0x72, + 0xa7, 0x29, 0x7c, 0xb0, 0x87, 0x1e, 0x35, 0x5a, + 0xcc, 0xaf, 0xad, 0x83, 0x77, 0xb8, 0xe7, 0x8b, + 0xf1, 0x64, 0xce, 0x2a, 0x18, 0xde, 0x4b, 0xaf }, + { 0x71, 0xb9, 0x33, 0xb0, 0x7e, 0x4f, 0xf7, 0x81, + 0x8c, 0xe0, 0x59, 0xd0, 0x08, 0x82, 0x9e, 0x45, + 0x3c, 0x6f, 0xf0, 0x2e, 0xc0, 0xa7, 0xdb, 0x39, + 0x3f, 0xc2, 0xd8, 0x70, 0xf3, 0x7a, 0x72, 0x86 }, + { 0x7c, 0xf7, 0xc5, 0x13, 0x31, 0x22, 0x0b, 0x8d, + 0x3e, 0xba, 0xed, 0x9c, 0x29, 0x39, 0x8a, 0x16, + 0xd9, 0x81, 0x56, 0xe2, 0x61, 0x3c, 0xb0, 0x88, + 0xf2, 0xb0, 0xe0, 0x8a, 0x1b, 0xe4, 0xcf, 0x4f }, + { 0x3e, 0x41, 0xa1, 0x08, 0xe0, 0xf6, 0x4a, 0xd2, + 0x76, 0xb9, 0x79, 0xe1, 0xce, 0x06, 0x82, 0x79, + 0xe1, 0x6f, 0x7b, 0xc7, 0xe4, 0xaa, 0x1d, 0x21, + 0x1e, 0x17, 0xb8, 0x11, 0x61, 0xdf, 0x16, 0x02 }, + { 0x88, 0x65, 0x02, 0xa8, 0x2a, 0xb4, 0x7b, 0xa8, + 0xd8, 0x67, 0x10, 0xaa, 0x9d, 0xe3, 0xd4, 0x6e, + 0xa6, 0x5c, 0x47, 0xaf, 0x6e, 0xe8, 0xde, 0x45, + 0x0c, 0xce, 0xb8, 0xb1, 0x1b, 0x04, 0x5f, 0x50 }, + { 0xc0, 0x21, 0xbc, 0x5f, 0x09, 0x54, 0xfe, 0xe9, + 0x4f, 0x46, 0xea, 0x09, 0x48, 0x7e, 0x10, 0xa8, + 0x48, 0x40, 0xd0, 0x2f, 0x64, 0x81, 0x0b, 0xc0, + 0x8d, 0x9e, 0x55, 0x1f, 0x7d, 0x41, 0x68, 0x14 }, + { 0x20, 0x30, 0x51, 0x6e, 0x8a, 0x5f, 0xe1, 0x9a, + 0xe7, 0x9c, 0x33, 0x6f, 0xce, 0x26, 0x38, 0x2a, + 0x74, 0x9d, 0x3f, 0xd0, 0xec, 0x91, 0xe5, 0x37, + 0xd4, 0xbd, 0x23, 0x58, 0xc1, 0x2d, 0xfb, 0x22 }, + { 0x55, 0x66, 0x98, 0xda, 0xc8, 0x31, 0x7f, 0xd3, + 0x6d, 0xfb, 0xdf, 0x25, 0xa7, 0x9c, 0xb1, 0x12, + 0xd5, 0x42, 0x58, 0x60, 0x60, 0x5c, 0xba, 0xf5, + 0x07, 0xf2, 0x3b, 0xf7, 0xe9, 0xf4, 0x2a, 0xfe }, + { 0x2f, 0x86, 0x7b, 0xa6, 0x77, 0x73, 0xfd, 0xc3, + 0xe9, 0x2f, 0xce, 0xd9, 0x9a, 0x64, 0x09, 0xad, + 0x39, 0xd0, 0xb8, 0x80, 0xfd, 0xe8, 0xf1, 0x09, + 0xa8, 0x17, 
0x30, 0xc4, 0x45, 0x1d, 0x01, 0x78 }, + { 0x17, 0x2e, 0xc2, 0x18, 0xf1, 0x19, 0xdf, 0xae, + 0x98, 0x89, 0x6d, 0xff, 0x29, 0xdd, 0x98, 0x76, + 0xc9, 0x4a, 0xf8, 0x74, 0x17, 0xf9, 0xae, 0x4c, + 0x70, 0x14, 0xbb, 0x4e, 0x4b, 0x96, 0xaf, 0xc7 }, + { 0x3f, 0x85, 0x81, 0x4a, 0x18, 0x19, 0x5f, 0x87, + 0x9a, 0xa9, 0x62, 0xf9, 0x5d, 0x26, 0xbd, 0x82, + 0xa2, 0x78, 0xf2, 0xb8, 0x23, 0x20, 0x21, 0x8f, + 0x6b, 0x3b, 0xd6, 0xf7, 0xf6, 0x67, 0xa6, 0xd9 }, + { 0x1b, 0x61, 0x8f, 0xba, 0xa5, 0x66, 0xb3, 0xd4, + 0x98, 0xc1, 0x2e, 0x98, 0x2c, 0x9e, 0xc5, 0x2e, + 0x4d, 0xa8, 0x5a, 0x8c, 0x54, 0xf3, 0x8f, 0x34, + 0xc0, 0x90, 0x39, 0x4f, 0x23, 0xc1, 0x84, 0xc1 }, + { 0x0c, 0x75, 0x8f, 0xb5, 0x69, 0x2f, 0xfd, 0x41, + 0xa3, 0x57, 0x5d, 0x0a, 0xf0, 0x0c, 0xc7, 0xfb, + 0xf2, 0xcb, 0xe5, 0x90, 0x5a, 0x58, 0x32, 0x3a, + 0x88, 0xae, 0x42, 0x44, 0xf6, 0xe4, 0xc9, 0x93 }, + { 0xa9, 0x31, 0x36, 0x0c, 0xad, 0x62, 0x8c, 0x7f, + 0x12, 0xa6, 0xc1, 0xc4, 0xb7, 0x53, 0xb0, 0xf4, + 0x06, 0x2a, 0xef, 0x3c, 0xe6, 0x5a, 0x1a, 0xe3, + 0xf1, 0x93, 0x69, 0xda, 0xdf, 0x3a, 0xe2, 0x3d }, + { 0xcb, 0xac, 0x7d, 0x77, 0x3b, 0x1e, 0x3b, 0x3c, + 0x66, 0x91, 0xd7, 0xab, 0xb7, 0xe9, 0xdf, 0x04, + 0x5c, 0x8b, 0xa1, 0x92, 0x68, 0xde, 0xd1, 0x53, + 0x20, 0x7f, 0x5e, 0x80, 0x43, 0x52, 0xec, 0x5d }, + { 0x23, 0xa1, 0x96, 0xd3, 0x80, 0x2e, 0xd3, 0xc1, + 0xb3, 0x84, 0x01, 0x9a, 0x82, 0x32, 0x58, 0x40, + 0xd3, 0x2f, 0x71, 0x95, 0x0c, 0x45, 0x80, 0xb0, + 0x34, 0x45, 0xe0, 0x89, 0x8e, 0x14, 0x05, 0x3c }, + { 0xf4, 0x49, 0x54, 0x70, 0xf2, 0x26, 0xc8, 0xc2, + 0x14, 0xbe, 0x08, 0xfd, 0xfa, 0xd4, 0xbc, 0x4a, + 0x2a, 0x9d, 0xbe, 0xa9, 0x13, 0x6a, 0x21, 0x0d, + 0xf0, 0xd4, 0xb6, 0x49, 0x29, 0xe6, 0xfc, 0x14 }, + { 0xe2, 0x90, 0xdd, 0x27, 0x0b, 0x46, 0x7f, 0x34, + 0xab, 0x1c, 0x00, 0x2d, 0x34, 0x0f, 0xa0, 0x16, + 0x25, 0x7f, 0xf1, 0x9e, 0x58, 0x33, 0xfd, 0xbb, + 0xf2, 0xcb, 0x40, 0x1c, 0x3b, 0x28, 0x17, 0xde }, + { 0x9f, 0xc7, 0xb5, 0xde, 0xd3, 0xc1, 0x50, 0x42, + 0xb2, 0xa6, 0x58, 0x2d, 0xc3, 0x9b, 0xe0, 0x16, + 0xd2, 0x4a, 0x68, 0x2d, 0x5e, 0x61, 0xad, 0x1e, + 0xff, 0x9c, 0x63, 0x30, 0x98, 0x48, 0xf7, 0x06 }, + { 0x8c, 0xca, 0x67, 0xa3, 0x6d, 0x17, 0xd5, 0xe6, + 0x34, 0x1c, 0xb5, 0x92, 0xfd, 0x7b, 0xef, 0x99, + 0x26, 0xc9, 0xe3, 0xaa, 0x10, 0x27, 0xea, 0x11, + 0xa7, 0xd8, 0xbd, 0x26, 0x0b, 0x57, 0x6e, 0x04 }, + { 0x40, 0x93, 0x92, 0xf5, 0x60, 0xf8, 0x68, 0x31, + 0xda, 0x43, 0x73, 0xee, 0x5e, 0x00, 0x74, 0x26, + 0x05, 0x95, 0xd7, 0xbc, 0x24, 0x18, 0x3b, 0x60, + 0xed, 0x70, 0x0d, 0x45, 0x83, 0xd3, 0xf6, 0xf0 }, + { 0x28, 0x02, 0x16, 0x5d, 0xe0, 0x90, 0x91, 0x55, + 0x46, 0xf3, 0x39, 0x8c, 0xd8, 0x49, 0x16, 0x4a, + 0x19, 0xf9, 0x2a, 0xdb, 0xc3, 0x61, 0xad, 0xc9, + 0x9b, 0x0f, 0x20, 0xc8, 0xea, 0x07, 0x10, 0x54 }, + { 0xad, 0x83, 0x91, 0x68, 0xd9, 0xf8, 0xa4, 0xbe, + 0x95, 0xba, 0x9e, 0xf9, 0xa6, 0x92, 0xf0, 0x72, + 0x56, 0xae, 0x43, 0xfe, 0x6f, 0x98, 0x64, 0xe2, + 0x90, 0x69, 0x1b, 0x02, 0x56, 0xce, 0x50, 0xa9 }, + { 0x75, 0xfd, 0xaa, 0x50, 0x38, 0xc2, 0x84, 0xb8, + 0x6d, 0x6e, 0x8a, 0xff, 0xe8, 0xb2, 0x80, 0x7e, + 0x46, 0x7b, 0x86, 0x60, 0x0e, 0x79, 0xaf, 0x36, + 0x89, 0xfb, 0xc0, 0x63, 0x28, 0xcb, 0xf8, 0x94 }, + { 0xe5, 0x7c, 0xb7, 0x94, 0x87, 0xdd, 0x57, 0x90, + 0x24, 0x32, 0xb2, 0x50, 0x73, 0x38, 0x13, 0xbd, + 0x96, 0xa8, 0x4e, 0xfc, 0xe5, 0x9f, 0x65, 0x0f, + 0xac, 0x26, 0xe6, 0x69, 0x6a, 0xef, 0xaf, 0xc3 }, + { 0x56, 0xf3, 0x4e, 0x8b, 0x96, 0x55, 0x7e, 0x90, + 0xc1, 0xf2, 0x4b, 0x52, 0xd0, 0xc8, 0x9d, 0x51, + 0x08, 0x6a, 0xcf, 0x1b, 0x00, 0xf6, 0x34, 0xcf, + 0x1d, 0xde, 0x92, 0x33, 0xb8, 0xea, 0xaa, 0x3e }, + { 0x1b, 0x53, 0xee, 0x94, 0xaa, 0xf3, 0x4e, 
0x4b, + 0x15, 0x9d, 0x48, 0xde, 0x35, 0x2c, 0x7f, 0x06, + 0x61, 0xd0, 0xa4, 0x0e, 0xdf, 0xf9, 0x5a, 0x0b, + 0x16, 0x39, 0xb4, 0x09, 0x0e, 0x97, 0x44, 0x72 }, + { 0x05, 0x70, 0x5e, 0x2a, 0x81, 0x75, 0x7c, 0x14, + 0xbd, 0x38, 0x3e, 0xa9, 0x8d, 0xda, 0x54, 0x4e, + 0xb1, 0x0e, 0x6b, 0xc0, 0x7b, 0xae, 0x43, 0x5e, + 0x25, 0x18, 0xdb, 0xe1, 0x33, 0x52, 0x53, 0x75 }, + { 0xd8, 0xb2, 0x86, 0x6e, 0x8a, 0x30, 0x9d, 0xb5, + 0x3e, 0x52, 0x9e, 0xc3, 0x29, 0x11, 0xd8, 0x2f, + 0x5c, 0xa1, 0x6c, 0xff, 0x76, 0x21, 0x68, 0x91, + 0xa9, 0x67, 0x6a, 0xa3, 0x1a, 0xaa, 0x6c, 0x42 }, + { 0xf5, 0x04, 0x1c, 0x24, 0x12, 0x70, 0xeb, 0x04, + 0xc7, 0x1e, 0xc2, 0xc9, 0x5d, 0x4c, 0x38, 0xd8, + 0x03, 0xb1, 0x23, 0x7b, 0x0f, 0x29, 0xfd, 0x4d, + 0xb3, 0xeb, 0x39, 0x76, 0x69, 0xe8, 0x86, 0x99 }, + { 0x9a, 0x4c, 0xe0, 0x77, 0xc3, 0x49, 0x32, 0x2f, + 0x59, 0x5e, 0x0e, 0xe7, 0x9e, 0xd0, 0xda, 0x5f, + 0xab, 0x66, 0x75, 0x2c, 0xbf, 0xef, 0x8f, 0x87, + 0xd0, 0xe9, 0xd0, 0x72, 0x3c, 0x75, 0x30, 0xdd }, + { 0x65, 0x7b, 0x09, 0xf3, 0xd0, 0xf5, 0x2b, 0x5b, + 0x8f, 0x2f, 0x97, 0x16, 0x3a, 0x0e, 0xdf, 0x0c, + 0x04, 0xf0, 0x75, 0x40, 0x8a, 0x07, 0xbb, 0xeb, + 0x3a, 0x41, 0x01, 0xa8, 0x91, 0x99, 0x0d, 0x62 }, + { 0x1e, 0x3f, 0x7b, 0xd5, 0xa5, 0x8f, 0xa5, 0x33, + 0x34, 0x4a, 0xa8, 0xed, 0x3a, 0xc1, 0x22, 0xbb, + 0x9e, 0x70, 0xd4, 0xef, 0x50, 0xd0, 0x04, 0x53, + 0x08, 0x21, 0x94, 0x8f, 0x5f, 0xe6, 0x31, 0x5a }, + { 0x80, 0xdc, 0xcf, 0x3f, 0xd8, 0x3d, 0xfd, 0x0d, + 0x35, 0xaa, 0x28, 0x58, 0x59, 0x22, 0xab, 0x89, + 0xd5, 0x31, 0x39, 0x97, 0x67, 0x3e, 0xaf, 0x90, + 0x5c, 0xea, 0x9c, 0x0b, 0x22, 0x5c, 0x7b, 0x5f }, + { 0x8a, 0x0d, 0x0f, 0xbf, 0x63, 0x77, 0xd8, 0x3b, + 0xb0, 0x8b, 0x51, 0x4b, 0x4b, 0x1c, 0x43, 0xac, + 0xc9, 0x5d, 0x75, 0x17, 0x14, 0xf8, 0x92, 0x56, + 0x45, 0xcb, 0x6b, 0xc8, 0x56, 0xca, 0x15, 0x0a }, + { 0x9f, 0xa5, 0xb4, 0x87, 0x73, 0x8a, 0xd2, 0x84, + 0x4c, 0xc6, 0x34, 0x8a, 0x90, 0x19, 0x18, 0xf6, + 0x59, 0xa3, 0xb8, 0x9e, 0x9c, 0x0d, 0xfe, 0xea, + 0xd3, 0x0d, 0xd9, 0x4b, 0xcf, 0x42, 0xef, 0x8e }, + { 0x80, 0x83, 0x2c, 0x4a, 0x16, 0x77, 0xf5, 0xea, + 0x25, 0x60, 0xf6, 0x68, 0xe9, 0x35, 0x4d, 0xd3, + 0x69, 0x97, 0xf0, 0x37, 0x28, 0xcf, 0xa5, 0x5e, + 0x1b, 0x38, 0x33, 0x7c, 0x0c, 0x9e, 0xf8, 0x18 }, + { 0xab, 0x37, 0xdd, 0xb6, 0x83, 0x13, 0x7e, 0x74, + 0x08, 0x0d, 0x02, 0x6b, 0x59, 0x0b, 0x96, 0xae, + 0x9b, 0xb4, 0x47, 0x72, 0x2f, 0x30, 0x5a, 0x5a, + 0xc5, 0x70, 0xec, 0x1d, 0xf9, 0xb1, 0x74, 0x3c }, + { 0x3e, 0xe7, 0x35, 0xa6, 0x94, 0xc2, 0x55, 0x9b, + 0x69, 0x3a, 0xa6, 0x86, 0x29, 0x36, 0x1e, 0x15, + 0xd1, 0x22, 0x65, 0xad, 0x6a, 0x3d, 0xed, 0xf4, + 0x88, 0xb0, 0xb0, 0x0f, 0xac, 0x97, 0x54, 0xba }, + { 0xd6, 0xfc, 0xd2, 0x32, 0x19, 0xb6, 0x47, 0xe4, + 0xcb, 0xd5, 0xeb, 0x2d, 0x0a, 0xd0, 0x1e, 0xc8, + 0x83, 0x8a, 0x4b, 0x29, 0x01, 0xfc, 0x32, 0x5c, + 0xc3, 0x70, 0x19, 0x81, 0xca, 0x6c, 0x88, 0x8b }, + { 0x05, 0x20, 0xec, 0x2f, 0x5b, 0xf7, 0xa7, 0x55, + 0xda, 0xcb, 0x50, 0xc6, 0xbf, 0x23, 0x3e, 0x35, + 0x15, 0x43, 0x47, 0x63, 0xdb, 0x01, 0x39, 0xcc, + 0xd9, 0xfa, 0xef, 0xbb, 0x82, 0x07, 0x61, 0x2d }, + { 0xaf, 0xf3, 0xb7, 0x5f, 0x3f, 0x58, 0x12, 0x64, + 0xd7, 0x66, 0x16, 0x62, 0xb9, 0x2f, 0x5a, 0xd3, + 0x7c, 0x1d, 0x32, 0xbd, 0x45, 0xff, 0x81, 0xa4, + 0xed, 0x8a, 0xdc, 0x9e, 0xf3, 0x0d, 0xd9, 0x89 }, + { 0xd0, 0xdd, 0x65, 0x0b, 0xef, 0xd3, 0xba, 0x63, + 0xdc, 0x25, 0x10, 0x2c, 0x62, 0x7c, 0x92, 0x1b, + 0x9c, 0xbe, 0xb0, 0xb1, 0x30, 0x68, 0x69, 0x35, + 0xb5, 0xc9, 0x27, 0xcb, 0x7c, 0xcd, 0x5e, 0x3b }, + { 0xe1, 0x14, 0x98, 0x16, 0xb1, 0x0a, 0x85, 0x14, + 0xfb, 0x3e, 0x2c, 0xab, 0x2c, 0x08, 0xbe, 0xe9, + 0xf7, 0x3c, 0xe7, 0x62, 
0x21, 0x70, 0x12, 0x46, + 0xa5, 0x89, 0xbb, 0xb6, 0x73, 0x02, 0xd8, 0xa9 }, + { 0x7d, 0xa3, 0xf4, 0x41, 0xde, 0x90, 0x54, 0x31, + 0x7e, 0x72, 0xb5, 0xdb, 0xf9, 0x79, 0xda, 0x01, + 0xe6, 0xbc, 0xee, 0xbb, 0x84, 0x78, 0xea, 0xe6, + 0xa2, 0x28, 0x49, 0xd9, 0x02, 0x92, 0x63, 0x5c }, + { 0x12, 0x30, 0xb1, 0xfc, 0x8a, 0x7d, 0x92, 0x15, + 0xed, 0xc2, 0xd4, 0xa2, 0xde, 0xcb, 0xdd, 0x0a, + 0x6e, 0x21, 0x6c, 0x92, 0x42, 0x78, 0xc9, 0x1f, + 0xc5, 0xd1, 0x0e, 0x7d, 0x60, 0x19, 0x2d, 0x94 }, + { 0x57, 0x50, 0xd7, 0x16, 0xb4, 0x80, 0x8f, 0x75, + 0x1f, 0xeb, 0xc3, 0x88, 0x06, 0xba, 0x17, 0x0b, + 0xf6, 0xd5, 0x19, 0x9a, 0x78, 0x16, 0xbe, 0x51, + 0x4e, 0x3f, 0x93, 0x2f, 0xbe, 0x0c, 0xb8, 0x71 }, + { 0x6f, 0xc5, 0x9b, 0x2f, 0x10, 0xfe, 0xba, 0x95, + 0x4a, 0xa6, 0x82, 0x0b, 0x3c, 0xa9, 0x87, 0xee, + 0x81, 0xd5, 0xcc, 0x1d, 0xa3, 0xc6, 0x3c, 0xe8, + 0x27, 0x30, 0x1c, 0x56, 0x9d, 0xfb, 0x39, 0xce }, + { 0xc7, 0xc3, 0xfe, 0x1e, 0xeb, 0xdc, 0x7b, 0x5a, + 0x93, 0x93, 0x26, 0xe8, 0xdd, 0xb8, 0x3e, 0x8b, + 0xf2, 0xb7, 0x80, 0xb6, 0x56, 0x78, 0xcb, 0x62, + 0xf2, 0x08, 0xb0, 0x40, 0xab, 0xdd, 0x35, 0xe2 }, + { 0x0c, 0x75, 0xc1, 0xa1, 0x5c, 0xf3, 0x4a, 0x31, + 0x4e, 0xe4, 0x78, 0xf4, 0xa5, 0xce, 0x0b, 0x8a, + 0x6b, 0x36, 0x52, 0x8e, 0xf7, 0xa8, 0x20, 0x69, + 0x6c, 0x3e, 0x42, 0x46, 0xc5, 0xa1, 0x58, 0x64 }, + { 0x21, 0x6d, 0xc1, 0x2a, 0x10, 0x85, 0x69, 0xa3, + 0xc7, 0xcd, 0xde, 0x4a, 0xed, 0x43, 0xa6, 0xc3, + 0x30, 0x13, 0x9d, 0xda, 0x3c, 0xcc, 0x4a, 0x10, + 0x89, 0x05, 0xdb, 0x38, 0x61, 0x89, 0x90, 0x50 }, + { 0xa5, 0x7b, 0xe6, 0xae, 0x67, 0x56, 0xf2, 0x8b, + 0x02, 0xf5, 0x9d, 0xad, 0xf7, 0xe0, 0xd7, 0xd8, + 0x80, 0x7f, 0x10, 0xfa, 0x15, 0xce, 0xd1, 0xad, + 0x35, 0x85, 0x52, 0x1a, 0x1d, 0x99, 0x5a, 0x89 }, + { 0x81, 0x6a, 0xef, 0x87, 0x59, 0x53, 0x71, 0x6c, + 0xd7, 0xa5, 0x81, 0xf7, 0x32, 0xf5, 0x3d, 0xd4, + 0x35, 0xda, 0xb6, 0x6d, 0x09, 0xc3, 0x61, 0xd2, + 0xd6, 0x59, 0x2d, 0xe1, 0x77, 0x55, 0xd8, 0xa8 }, + { 0x9a, 0x76, 0x89, 0x32, 0x26, 0x69, 0x3b, 0x6e, + 0xa9, 0x7e, 0x6a, 0x73, 0x8f, 0x9d, 0x10, 0xfb, + 0x3d, 0x0b, 0x43, 0xae, 0x0e, 0x8b, 0x7d, 0x81, + 0x23, 0xea, 0x76, 0xce, 0x97, 0x98, 0x9c, 0x7e }, + { 0x8d, 0xae, 0xdb, 0x9a, 0x27, 0x15, 0x29, 0xdb, + 0xb7, 0xdc, 0x3b, 0x60, 0x7f, 0xe5, 0xeb, 0x2d, + 0x32, 0x11, 0x77, 0x07, 0x58, 0xdd, 0x3b, 0x0a, + 0x35, 0x93, 0xd2, 0xd7, 0x95, 0x4e, 0x2d, 0x5b }, + { 0x16, 0xdb, 0xc0, 0xaa, 0x5d, 0xd2, 0xc7, 0x74, + 0xf5, 0x05, 0x10, 0x0f, 0x73, 0x37, 0x86, 0xd8, + 0xa1, 0x75, 0xfc, 0xbb, 0xb5, 0x9c, 0x43, 0xe1, + 0xfb, 0xff, 0x3e, 0x1e, 0xaf, 0x31, 0xcb, 0x4a }, + { 0x86, 0x06, 0xcb, 0x89, 0x9c, 0x6a, 0xea, 0xf5, + 0x1b, 0x9d, 0xb0, 0xfe, 0x49, 0x24, 0xa9, 0xfd, + 0x5d, 0xab, 0xc1, 0x9f, 0x88, 0x26, 0xf2, 0xbc, + 0x1c, 0x1d, 0x7d, 0xa1, 0x4d, 0x2c, 0x2c, 0x99 }, + { 0x84, 0x79, 0x73, 0x1a, 0xed, 0xa5, 0x7b, 0xd3, + 0x7e, 0xad, 0xb5, 0x1a, 0x50, 0x7e, 0x30, 0x7f, + 0x3b, 0xd9, 0x5e, 0x69, 0xdb, 0xca, 0x94, 0xf3, + 0xbc, 0x21, 0x72, 0x60, 0x66, 0xad, 0x6d, 0xfd }, + { 0x58, 0x47, 0x3a, 0x9e, 0xa8, 0x2e, 0xfa, 0x3f, + 0x3b, 0x3d, 0x8f, 0xc8, 0x3e, 0xd8, 0x86, 0x31, + 0x27, 0xb3, 0x3a, 0xe8, 0xde, 0xae, 0x63, 0x07, + 0x20, 0x1e, 0xdb, 0x6d, 0xde, 0x61, 0xde, 0x29 }, + { 0x9a, 0x92, 0x55, 0xd5, 0x3a, 0xf1, 0x16, 0xde, + 0x8b, 0xa2, 0x7c, 0xe3, 0x5b, 0x4c, 0x7e, 0x15, + 0x64, 0x06, 0x57, 0xa0, 0xfc, 0xb8, 0x88, 0xc7, + 0x0d, 0x95, 0x43, 0x1d, 0xac, 0xd8, 0xf8, 0x30 }, + { 0x9e, 0xb0, 0x5f, 0xfb, 0xa3, 0x9f, 0xd8, 0x59, + 0x6a, 0x45, 0x49, 0x3e, 0x18, 0xd2, 0x51, 0x0b, + 0xf3, 0xef, 0x06, 0x5c, 0x51, 0xd6, 0xe1, 0x3a, + 0xbe, 0x66, 0xaa, 0x57, 0xe0, 0x5c, 0xfd, 0xb7 }, + { 0x81, 
0xdc, 0xc3, 0xa5, 0x05, 0xea, 0xce, 0x3f, + 0x87, 0x9d, 0x8f, 0x70, 0x27, 0x76, 0x77, 0x0f, + 0x9d, 0xf5, 0x0e, 0x52, 0x1d, 0x14, 0x28, 0xa8, + 0x5d, 0xaf, 0x04, 0xf9, 0xad, 0x21, 0x50, 0xe0 }, + { 0xe3, 0xe3, 0xc4, 0xaa, 0x3a, 0xcb, 0xbc, 0x85, + 0x33, 0x2a, 0xf9, 0xd5, 0x64, 0xbc, 0x24, 0x16, + 0x5e, 0x16, 0x87, 0xf6, 0xb1, 0xad, 0xcb, 0xfa, + 0xe7, 0x7a, 0x8f, 0x03, 0xc7, 0x2a, 0xc2, 0x8c }, + { 0x67, 0x46, 0xc8, 0x0b, 0x4e, 0xb5, 0x6a, 0xea, + 0x45, 0xe6, 0x4e, 0x72, 0x89, 0xbb, 0xa3, 0xed, + 0xbf, 0x45, 0xec, 0xf8, 0x20, 0x64, 0x81, 0xff, + 0x63, 0x02, 0x12, 0x29, 0x84, 0xcd, 0x52, 0x6a }, + { 0x2b, 0x62, 0x8e, 0x52, 0x76, 0x4d, 0x7d, 0x62, + 0xc0, 0x86, 0x8b, 0x21, 0x23, 0x57, 0xcd, 0xd1, + 0x2d, 0x91, 0x49, 0x82, 0x2f, 0x4e, 0x98, 0x45, + 0xd9, 0x18, 0xa0, 0x8d, 0x1a, 0xe9, 0x90, 0xc0 }, + { 0xe4, 0xbf, 0xe8, 0x0d, 0x58, 0xc9, 0x19, 0x94, + 0x61, 0x39, 0x09, 0xdc, 0x4b, 0x1a, 0x12, 0x49, + 0x68, 0x96, 0xc0, 0x04, 0xaf, 0x7b, 0x57, 0x01, + 0x48, 0x3d, 0xe4, 0x5d, 0x28, 0x23, 0xd7, 0x8e }, + { 0xeb, 0xb4, 0xba, 0x15, 0x0c, 0xef, 0x27, 0x34, + 0x34, 0x5b, 0x5d, 0x64, 0x1b, 0xbe, 0xd0, 0x3a, + 0x21, 0xea, 0xfa, 0xe9, 0x33, 0xc9, 0x9e, 0x00, + 0x92, 0x12, 0xef, 0x04, 0x57, 0x4a, 0x85, 0x30 }, + { 0x39, 0x66, 0xec, 0x73, 0xb1, 0x54, 0xac, 0xc6, + 0x97, 0xac, 0x5c, 0xf5, 0xb2, 0x4b, 0x40, 0xbd, + 0xb0, 0xdb, 0x9e, 0x39, 0x88, 0x36, 0xd7, 0x6d, + 0x4b, 0x88, 0x0e, 0x3b, 0x2a, 0xf1, 0xaa, 0x27 }, + { 0xef, 0x7e, 0x48, 0x31, 0xb3, 0xa8, 0x46, 0x36, + 0x51, 0x8d, 0x6e, 0x4b, 0xfc, 0xe6, 0x4a, 0x43, + 0xdb, 0x2a, 0x5d, 0xda, 0x9c, 0xca, 0x2b, 0x44, + 0xf3, 0x90, 0x33, 0xbd, 0xc4, 0x0d, 0x62, 0x43 }, + { 0x7a, 0xbf, 0x6a, 0xcf, 0x5c, 0x8e, 0x54, 0x9d, + 0xdb, 0xb1, 0x5a, 0xe8, 0xd8, 0xb3, 0x88, 0xc1, + 0xc1, 0x97, 0xe6, 0x98, 0x73, 0x7c, 0x97, 0x85, + 0x50, 0x1e, 0xd1, 0xf9, 0x49, 0x30, 0xb7, 0xd9 }, + { 0x88, 0x01, 0x8d, 0xed, 0x66, 0x81, 0x3f, 0x0c, + 0xa9, 0x5d, 0xef, 0x47, 0x4c, 0x63, 0x06, 0x92, + 0x01, 0x99, 0x67, 0xb9, 0xe3, 0x68, 0x88, 0xda, + 0xdd, 0x94, 0x12, 0x47, 0x19, 0xb6, 0x82, 0xf6 }, + { 0x39, 0x30, 0x87, 0x6b, 0x9f, 0xc7, 0x52, 0x90, + 0x36, 0xb0, 0x08, 0xb1, 0xb8, 0xbb, 0x99, 0x75, + 0x22, 0xa4, 0x41, 0x63, 0x5a, 0x0c, 0x25, 0xec, + 0x02, 0xfb, 0x6d, 0x90, 0x26, 0xe5, 0x5a, 0x97 }, + { 0x0a, 0x40, 0x49, 0xd5, 0x7e, 0x83, 0x3b, 0x56, + 0x95, 0xfa, 0xc9, 0x3d, 0xd1, 0xfb, 0xef, 0x31, + 0x66, 0xb4, 0x4b, 0x12, 0xad, 0x11, 0x24, 0x86, + 0x62, 0x38, 0x3a, 0xe0, 0x51, 0xe1, 0x58, 0x27 }, + { 0x81, 0xdc, 0xc0, 0x67, 0x8b, 0xb6, 0xa7, 0x65, + 0xe4, 0x8c, 0x32, 0x09, 0x65, 0x4f, 0xe9, 0x00, + 0x89, 0xce, 0x44, 0xff, 0x56, 0x18, 0x47, 0x7e, + 0x39, 0xab, 0x28, 0x64, 0x76, 0xdf, 0x05, 0x2b }, + { 0xe6, 0x9b, 0x3a, 0x36, 0xa4, 0x46, 0x19, 0x12, + 0xdc, 0x08, 0x34, 0x6b, 0x11, 0xdd, 0xcb, 0x9d, + 0xb7, 0x96, 0xf8, 0x85, 0xfd, 0x01, 0x93, 0x6e, + 0x66, 0x2f, 0xe2, 0x92, 0x97, 0xb0, 0x99, 0xa4 }, + { 0x5a, 0xc6, 0x50, 0x3b, 0x0d, 0x8d, 0xa6, 0x91, + 0x76, 0x46, 0xe6, 0xdc, 0xc8, 0x7e, 0xdc, 0x58, + 0xe9, 0x42, 0x45, 0x32, 0x4c, 0xc2, 0x04, 0xf4, + 0xdd, 0x4a, 0xf0, 0x15, 0x63, 0xac, 0xd4, 0x27 }, + { 0xdf, 0x6d, 0xda, 0x21, 0x35, 0x9a, 0x30, 0xbc, + 0x27, 0x17, 0x80, 0x97, 0x1c, 0x1a, 0xbd, 0x56, + 0xa6, 0xef, 0x16, 0x7e, 0x48, 0x08, 0x87, 0x88, + 0x8e, 0x73, 0xa8, 0x6d, 0x3b, 0xf6, 0x05, 0xe9 }, + { 0xe8, 0xe6, 0xe4, 0x70, 0x71, 0xe7, 0xb7, 0xdf, + 0x25, 0x80, 0xf2, 0x25, 0xcf, 0xbb, 0xed, 0xf8, + 0x4c, 0xe6, 0x77, 0x46, 0x62, 0x66, 0x28, 0xd3, + 0x30, 0x97, 0xe4, 0xb7, 0xdc, 0x57, 0x11, 0x07 }, + { 0x53, 0xe4, 0x0e, 0xad, 0x62, 0x05, 0x1e, 0x19, + 0xcb, 0x9b, 0xa8, 0x13, 0x3e, 0x3e, 0x5c, 
0x1c, + 0xe0, 0x0d, 0xdc, 0xad, 0x8a, 0xcf, 0x34, 0x2a, + 0x22, 0x43, 0x60, 0xb0, 0xac, 0xc1, 0x47, 0x77 }, + { 0x9c, 0xcd, 0x53, 0xfe, 0x80, 0xbe, 0x78, 0x6a, + 0xa9, 0x84, 0x63, 0x84, 0x62, 0xfb, 0x28, 0xaf, + 0xdf, 0x12, 0x2b, 0x34, 0xd7, 0x8f, 0x46, 0x87, + 0xec, 0x63, 0x2b, 0xb1, 0x9d, 0xe2, 0x37, 0x1a }, + { 0xcb, 0xd4, 0x80, 0x52, 0xc4, 0x8d, 0x78, 0x84, + 0x66, 0xa3, 0xe8, 0x11, 0x8c, 0x56, 0xc9, 0x7f, + 0xe1, 0x46, 0xe5, 0x54, 0x6f, 0xaa, 0xf9, 0x3e, + 0x2b, 0xc3, 0xc4, 0x7e, 0x45, 0x93, 0x97, 0x53 }, + { 0x25, 0x68, 0x83, 0xb1, 0x4e, 0x2a, 0xf4, 0x4d, + 0xad, 0xb2, 0x8e, 0x1b, 0x34, 0xb2, 0xac, 0x0f, + 0x0f, 0x4c, 0x91, 0xc3, 0x4e, 0xc9, 0x16, 0x9e, + 0x29, 0x03, 0x61, 0x58, 0xac, 0xaa, 0x95, 0xb9 }, + { 0x44, 0x71, 0xb9, 0x1a, 0xb4, 0x2d, 0xb7, 0xc4, + 0xdd, 0x84, 0x90, 0xab, 0x95, 0xa2, 0xee, 0x8d, + 0x04, 0xe3, 0xef, 0x5c, 0x3d, 0x6f, 0xc7, 0x1a, + 0xc7, 0x4b, 0x2b, 0x26, 0x91, 0x4d, 0x16, 0x41 }, + { 0xa5, 0xeb, 0x08, 0x03, 0x8f, 0x8f, 0x11, 0x55, + 0xed, 0x86, 0xe6, 0x31, 0x90, 0x6f, 0xc1, 0x30, + 0x95, 0xf6, 0xbb, 0xa4, 0x1d, 0xe5, 0xd4, 0xe7, + 0x95, 0x75, 0x8e, 0xc8, 0xc8, 0xdf, 0x8a, 0xf1 }, + { 0xdc, 0x1d, 0xb6, 0x4e, 0xd8, 0xb4, 0x8a, 0x91, + 0x0e, 0x06, 0x0a, 0x6b, 0x86, 0x63, 0x74, 0xc5, + 0x78, 0x78, 0x4e, 0x9a, 0xc4, 0x9a, 0xb2, 0x77, + 0x40, 0x92, 0xac, 0x71, 0x50, 0x19, 0x34, 0xac }, + { 0x28, 0x54, 0x13, 0xb2, 0xf2, 0xee, 0x87, 0x3d, + 0x34, 0x31, 0x9e, 0xe0, 0xbb, 0xfb, 0xb9, 0x0f, + 0x32, 0xda, 0x43, 0x4c, 0xc8, 0x7e, 0x3d, 0xb5, + 0xed, 0x12, 0x1b, 0xb3, 0x98, 0xed, 0x96, 0x4b }, + { 0x02, 0x16, 0xe0, 0xf8, 0x1f, 0x75, 0x0f, 0x26, + 0xf1, 0x99, 0x8b, 0xc3, 0x93, 0x4e, 0x3e, 0x12, + 0x4c, 0x99, 0x45, 0xe6, 0x85, 0xa6, 0x0b, 0x25, + 0xe8, 0xfb, 0xd9, 0x62, 0x5a, 0xb6, 0xb5, 0x99 }, + { 0x38, 0xc4, 0x10, 0xf5, 0xb9, 0xd4, 0x07, 0x20, + 0x50, 0x75, 0x5b, 0x31, 0xdc, 0xa8, 0x9f, 0xd5, + 0x39, 0x5c, 0x67, 0x85, 0xee, 0xb3, 0xd7, 0x90, + 0xf3, 0x20, 0xff, 0x94, 0x1c, 0x5a, 0x93, 0xbf }, + { 0xf1, 0x84, 0x17, 0xb3, 0x9d, 0x61, 0x7a, 0xb1, + 0xc1, 0x8f, 0xdf, 0x91, 0xeb, 0xd0, 0xfc, 0x6d, + 0x55, 0x16, 0xbb, 0x34, 0xcf, 0x39, 0x36, 0x40, + 0x37, 0xbc, 0xe8, 0x1f, 0xa0, 0x4c, 0xec, 0xb1 }, + { 0x1f, 0xa8, 0x77, 0xde, 0x67, 0x25, 0x9d, 0x19, + 0x86, 0x3a, 0x2a, 0x34, 0xbc, 0xc6, 0x96, 0x2a, + 0x2b, 0x25, 0xfc, 0xbf, 0x5c, 0xbe, 0xcd, 0x7e, + 0xde, 0x8f, 0x1f, 0xa3, 0x66, 0x88, 0xa7, 0x96 }, + { 0x5b, 0xd1, 0x69, 0xe6, 0x7c, 0x82, 0xc2, 0xc2, + 0xe9, 0x8e, 0xf7, 0x00, 0x8b, 0xdf, 0x26, 0x1f, + 0x2d, 0xdf, 0x30, 0xb1, 0xc0, 0x0f, 0x9e, 0x7f, + 0x27, 0x5b, 0xb3, 0xe8, 0xa2, 0x8d, 0xc9, 0xa2 }, + { 0xc8, 0x0a, 0xbe, 0xeb, 0xb6, 0x69, 0xad, 0x5d, + 0xee, 0xb5, 0xf5, 0xec, 0x8e, 0xa6, 0xb7, 0xa0, + 0x5d, 0xdf, 0x7d, 0x31, 0xec, 0x4c, 0x0a, 0x2e, + 0xe2, 0x0b, 0x0b, 0x98, 0xca, 0xec, 0x67, 0x46 }, + { 0xe7, 0x6d, 0x3f, 0xbd, 0xa5, 0xba, 0x37, 0x4e, + 0x6b, 0xf8, 0xe5, 0x0f, 0xad, 0xc3, 0xbb, 0xb9, + 0xba, 0x5c, 0x20, 0x6e, 0xbd, 0xec, 0x89, 0xa3, + 0xa5, 0x4c, 0xf3, 0xdd, 0x84, 0xa0, 0x70, 0x16 }, + { 0x7b, 0xba, 0x9d, 0xc5, 0xb5, 0xdb, 0x20, 0x71, + 0xd1, 0x77, 0x52, 0xb1, 0x04, 0x4c, 0x1e, 0xce, + 0xd9, 0x6a, 0xaf, 0x2d, 0xd4, 0x6e, 0x9b, 0x43, + 0x37, 0x50, 0xe8, 0xea, 0x0d, 0xcc, 0x18, 0x70 }, + { 0xf2, 0x9b, 0x1b, 0x1a, 0xb9, 0xba, 0xb1, 0x63, + 0x01, 0x8e, 0xe3, 0xda, 0x15, 0x23, 0x2c, 0xca, + 0x78, 0xec, 0x52, 0xdb, 0xc3, 0x4e, 0xda, 0x5b, + 0x82, 0x2e, 0xc1, 0xd8, 0x0f, 0xc2, 0x1b, 0xd0 }, + { 0x9e, 0xe3, 0xe3, 0xe7, 0xe9, 0x00, 0xf1, 0xe1, + 0x1d, 0x30, 0x8c, 0x4b, 0x2b, 0x30, 0x76, 0xd2, + 0x72, 0xcf, 0x70, 0x12, 0x4f, 0x9f, 0x51, 0xe1, + 0xda, 0x60, 0xf3, 0x78, 
0x46, 0xcd, 0xd2, 0xf4 }, + { 0x70, 0xea, 0x3b, 0x01, 0x76, 0x92, 0x7d, 0x90, + 0x96, 0xa1, 0x85, 0x08, 0xcd, 0x12, 0x3a, 0x29, + 0x03, 0x25, 0x92, 0x0a, 0x9d, 0x00, 0xa8, 0x9b, + 0x5d, 0xe0, 0x42, 0x73, 0xfb, 0xc7, 0x6b, 0x85 }, + { 0x67, 0xde, 0x25, 0xc0, 0x2a, 0x4a, 0xab, 0xa2, + 0x3b, 0xdc, 0x97, 0x3c, 0x8b, 0xb0, 0xb5, 0x79, + 0x6d, 0x47, 0xcc, 0x06, 0x59, 0xd4, 0x3d, 0xff, + 0x1f, 0x97, 0xde, 0x17, 0x49, 0x63, 0xb6, 0x8e }, + { 0xb2, 0x16, 0x8e, 0x4e, 0x0f, 0x18, 0xb0, 0xe6, + 0x41, 0x00, 0xb5, 0x17, 0xed, 0x95, 0x25, 0x7d, + 0x73, 0xf0, 0x62, 0x0d, 0xf8, 0x85, 0xc1, 0x3d, + 0x2e, 0xcf, 0x79, 0x36, 0x7b, 0x38, 0x4c, 0xee }, + { 0x2e, 0x7d, 0xec, 0x24, 0x28, 0x85, 0x3b, 0x2c, + 0x71, 0x76, 0x07, 0x45, 0x54, 0x1f, 0x7a, 0xfe, + 0x98, 0x25, 0xb5, 0xdd, 0x77, 0xdf, 0x06, 0x51, + 0x1d, 0x84, 0x41, 0xa9, 0x4b, 0xac, 0xc9, 0x27 }, + { 0xca, 0x9f, 0xfa, 0xc4, 0xc4, 0x3f, 0x0b, 0x48, + 0x46, 0x1d, 0xc5, 0xc2, 0x63, 0xbe, 0xa3, 0xf6, + 0xf0, 0x06, 0x11, 0xce, 0xac, 0xab, 0xf6, 0xf8, + 0x95, 0xba, 0x2b, 0x01, 0x01, 0xdb, 0xb6, 0x8d }, + { 0x74, 0x10, 0xd4, 0x2d, 0x8f, 0xd1, 0xd5, 0xe9, + 0xd2, 0xf5, 0x81, 0x5c, 0xb9, 0x34, 0x17, 0x99, + 0x88, 0x28, 0xef, 0x3c, 0x42, 0x30, 0xbf, 0xbd, + 0x41, 0x2d, 0xf0, 0xa4, 0xa7, 0xa2, 0x50, 0x7a }, + { 0x50, 0x10, 0xf6, 0x84, 0x51, 0x6d, 0xcc, 0xd0, + 0xb6, 0xee, 0x08, 0x52, 0xc2, 0x51, 0x2b, 0x4d, + 0xc0, 0x06, 0x6c, 0xf0, 0xd5, 0x6f, 0x35, 0x30, + 0x29, 0x78, 0xdb, 0x8a, 0xe3, 0x2c, 0x6a, 0x81 }, + { 0xac, 0xaa, 0xb5, 0x85, 0xf7, 0xb7, 0x9b, 0x71, + 0x99, 0x35, 0xce, 0xb8, 0x95, 0x23, 0xdd, 0xc5, + 0x48, 0x27, 0xf7, 0x5c, 0x56, 0x88, 0x38, 0x56, + 0x15, 0x4a, 0x56, 0xcd, 0xcd, 0x5e, 0xe9, 0x88 }, + { 0x66, 0x6d, 0xe5, 0xd1, 0x44, 0x0f, 0xee, 0x73, + 0x31, 0xaa, 0xf0, 0x12, 0x3a, 0x62, 0xef, 0x2d, + 0x8b, 0xa5, 0x74, 0x53, 0xa0, 0x76, 0x96, 0x35, + 0xac, 0x6c, 0xd0, 0x1e, 0x63, 0x3f, 0x77, 0x12 }, + { 0xa6, 0xf9, 0x86, 0x58, 0xf6, 0xea, 0xba, 0xf9, + 0x02, 0xd8, 0xb3, 0x87, 0x1a, 0x4b, 0x10, 0x1d, + 0x16, 0x19, 0x6e, 0x8a, 0x4b, 0x24, 0x1e, 0x15, + 0x58, 0xfe, 0x29, 0x96, 0x6e, 0x10, 0x3e, 0x8d }, + { 0x89, 0x15, 0x46, 0xa8, 0xb2, 0x9f, 0x30, 0x47, + 0xdd, 0xcf, 0xe5, 0xb0, 0x0e, 0x45, 0xfd, 0x55, + 0x75, 0x63, 0x73, 0x10, 0x5e, 0xa8, 0x63, 0x7d, + 0xfc, 0xff, 0x54, 0x7b, 0x6e, 0xa9, 0x53, 0x5f }, + { 0x18, 0xdf, 0xbc, 0x1a, 0xc5, 0xd2, 0x5b, 0x07, + 0x61, 0x13, 0x7d, 0xbd, 0x22, 0xc1, 0x7c, 0x82, + 0x9d, 0x0f, 0x0e, 0xf1, 0xd8, 0x23, 0x44, 0xe9, + 0xc8, 0x9c, 0x28, 0x66, 0x94, 0xda, 0x24, 0xe8 }, + { 0xb5, 0x4b, 0x9b, 0x67, 0xf8, 0xfe, 0xd5, 0x4b, + 0xbf, 0x5a, 0x26, 0x66, 0xdb, 0xdf, 0x4b, 0x23, + 0xcf, 0xf1, 0xd1, 0xb6, 0xf4, 0xaf, 0xc9, 0x85, + 0xb2, 0xe6, 0xd3, 0x30, 0x5a, 0x9f, 0xf8, 0x0f }, + { 0x7d, 0xb4, 0x42, 0xe1, 0x32, 0xba, 0x59, 0xbc, + 0x12, 0x89, 0xaa, 0x98, 0xb0, 0xd3, 0xe8, 0x06, + 0x00, 0x4f, 0x8e, 0xc1, 0x28, 0x11, 0xaf, 0x1e, + 0x2e, 0x33, 0xc6, 0x9b, 0xfd, 0xe7, 0x29, 0xe1 }, + { 0x25, 0x0f, 0x37, 0xcd, 0xc1, 0x5e, 0x81, 0x7d, + 0x2f, 0x16, 0x0d, 0x99, 0x56, 0xc7, 0x1f, 0xe3, + 0xeb, 0x5d, 0xb7, 0x45, 0x56, 0xe4, 0xad, 0xf9, + 0xa4, 0xff, 0xaf, 0xba, 0x74, 0x01, 0x03, 0x96 }, + { 0x4a, 0xb8, 0xa3, 0xdd, 0x1d, 0xdf, 0x8a, 0xd4, + 0x3d, 0xab, 0x13, 0xa2, 0x7f, 0x66, 0xa6, 0x54, + 0x4f, 0x29, 0x05, 0x97, 0xfa, 0x96, 0x04, 0x0e, + 0x0e, 0x1d, 0xb9, 0x26, 0x3a, 0xa4, 0x79, 0xf8 }, + { 0xee, 0x61, 0x72, 0x7a, 0x07, 0x66, 0xdf, 0x93, + 0x9c, 0xcd, 0xc8, 0x60, 0x33, 0x40, 0x44, 0xc7, + 0x9a, 0x3c, 0x9b, 0x15, 0x62, 0x00, 0xbc, 0x3a, + 0xa3, 0x29, 0x73, 0x48, 0x3d, 0x83, 0x41, 0xae }, + { 0x3f, 0x68, 0xc7, 0xec, 0x63, 0xac, 0x11, 0xeb, + 0xb9, 
0x8f, 0x94, 0xb3, 0x39, 0xb0, 0x5c, 0x10, + 0x49, 0x84, 0xfd, 0xa5, 0x01, 0x03, 0x06, 0x01, + 0x44, 0xe5, 0xa2, 0xbf, 0xcc, 0xc9, 0xda, 0x95 }, + { 0x05, 0x6f, 0x29, 0x81, 0x6b, 0x8a, 0xf8, 0xf5, + 0x66, 0x82, 0xbc, 0x4d, 0x7c, 0xf0, 0x94, 0x11, + 0x1d, 0xa7, 0x73, 0x3e, 0x72, 0x6c, 0xd1, 0x3d, + 0x6b, 0x3e, 0x8e, 0xa0, 0x3e, 0x92, 0xa0, 0xd5 }, + { 0xf5, 0xec, 0x43, 0xa2, 0x8a, 0xcb, 0xef, 0xf1, + 0xf3, 0x31, 0x8a, 0x5b, 0xca, 0xc7, 0xc6, 0x6d, + 0xdb, 0x52, 0x30, 0xb7, 0x9d, 0xb2, 0xd1, 0x05, + 0xbc, 0xbe, 0x15, 0xf3, 0xc1, 0x14, 0x8d, 0x69 }, + { 0x2a, 0x69, 0x60, 0xad, 0x1d, 0x8d, 0xd5, 0x47, + 0x55, 0x5c, 0xfb, 0xd5, 0xe4, 0x60, 0x0f, 0x1e, + 0xaa, 0x1c, 0x8e, 0xda, 0x34, 0xde, 0x03, 0x74, + 0xec, 0x4a, 0x26, 0xea, 0xaa, 0xa3, 0x3b, 0x4e }, + { 0xdc, 0xc1, 0xea, 0x7b, 0xaa, 0xb9, 0x33, 0x84, + 0xf7, 0x6b, 0x79, 0x68, 0x66, 0x19, 0x97, 0x54, + 0x74, 0x2f, 0x7b, 0x96, 0xd6, 0xb4, 0xc1, 0x20, + 0x16, 0x5c, 0x04, 0xa6, 0xc4, 0xf5, 0xce, 0x10 }, + { 0x13, 0xd5, 0xdf, 0x17, 0x92, 0x21, 0x37, 0x9c, + 0x6a, 0x78, 0xc0, 0x7c, 0x79, 0x3f, 0xf5, 0x34, + 0x87, 0xca, 0xe6, 0xbf, 0x9f, 0xe8, 0x82, 0x54, + 0x1a, 0xb0, 0xe7, 0x35, 0xe3, 0xea, 0xda, 0x3b }, + { 0x8c, 0x59, 0xe4, 0x40, 0x76, 0x41, 0xa0, 0x1e, + 0x8f, 0xf9, 0x1f, 0x99, 0x80, 0xdc, 0x23, 0x6f, + 0x4e, 0xcd, 0x6f, 0xcf, 0x52, 0x58, 0x9a, 0x09, + 0x9a, 0x96, 0x16, 0x33, 0x96, 0x77, 0x14, 0xe1 }, + { 0x83, 0x3b, 0x1a, 0xc6, 0xa2, 0x51, 0xfd, 0x08, + 0xfd, 0x6d, 0x90, 0x8f, 0xea, 0x2a, 0x4e, 0xe1, + 0xe0, 0x40, 0xbc, 0xa9, 0x3f, 0xc1, 0xa3, 0x8e, + 0xc3, 0x82, 0x0e, 0x0c, 0x10, 0xbd, 0x82, 0xea }, + { 0xa2, 0x44, 0xf9, 0x27, 0xf3, 0xb4, 0x0b, 0x8f, + 0x6c, 0x39, 0x15, 0x70, 0xc7, 0x65, 0x41, 0x8f, + 0x2f, 0x6e, 0x70, 0x8e, 0xac, 0x90, 0x06, 0xc5, + 0x1a, 0x7f, 0xef, 0xf4, 0xaf, 0x3b, 0x2b, 0x9e }, + { 0x3d, 0x99, 0xed, 0x95, 0x50, 0xcf, 0x11, 0x96, + 0xe6, 0xc4, 0xd2, 0x0c, 0x25, 0x96, 0x20, 0xf8, + 0x58, 0xc3, 0xd7, 0x03, 0x37, 0x4c, 0x12, 0x8c, + 0xe7, 0xb5, 0x90, 0x31, 0x0c, 0x83, 0x04, 0x6d }, + { 0x2b, 0x35, 0xc4, 0x7d, 0x7b, 0x87, 0x76, 0x1f, + 0x0a, 0xe4, 0x3a, 0xc5, 0x6a, 0xc2, 0x7b, 0x9f, + 0x25, 0x83, 0x03, 0x67, 0xb5, 0x95, 0xbe, 0x8c, + 0x24, 0x0e, 0x94, 0x60, 0x0c, 0x6e, 0x33, 0x12 }, + { 0x5d, 0x11, 0xed, 0x37, 0xd2, 0x4d, 0xc7, 0x67, + 0x30, 0x5c, 0xb7, 0xe1, 0x46, 0x7d, 0x87, 0xc0, + 0x65, 0xac, 0x4b, 0xc8, 0xa4, 0x26, 0xde, 0x38, + 0x99, 0x1f, 0xf5, 0x9a, 0xa8, 0x73, 0x5d, 0x02 }, + { 0xb8, 0x36, 0x47, 0x8e, 0x1c, 0xa0, 0x64, 0x0d, + 0xce, 0x6f, 0xd9, 0x10, 0xa5, 0x09, 0x62, 0x72, + 0xc8, 0x33, 0x09, 0x90, 0xcd, 0x97, 0x86, 0x4a, + 0xc2, 0xbf, 0x14, 0xef, 0x6b, 0x23, 0x91, 0x4a }, + { 0x91, 0x00, 0xf9, 0x46, 0xd6, 0xcc, 0xde, 0x3a, + 0x59, 0x7f, 0x90, 0xd3, 0x9f, 0xc1, 0x21, 0x5b, + 0xad, 0xdc, 0x74, 0x13, 0x64, 0x3d, 0x85, 0xc2, + 0x1c, 0x3e, 0xee, 0x5d, 0x2d, 0xd3, 0x28, 0x94 }, + { 0xda, 0x70, 0xee, 0xdd, 0x23, 0xe6, 0x63, 0xaa, + 0x1a, 0x74, 0xb9, 0x76, 0x69, 0x35, 0xb4, 0x79, + 0x22, 0x2a, 0x72, 0xaf, 0xba, 0x5c, 0x79, 0x51, + 0x58, 0xda, 0xd4, 0x1a, 0x3b, 0xd7, 0x7e, 0x40 }, + { 0xf0, 0x67, 0xed, 0x6a, 0x0d, 0xbd, 0x43, 0xaa, + 0x0a, 0x92, 0x54, 0xe6, 0x9f, 0xd6, 0x6b, 0xdd, + 0x8a, 0xcb, 0x87, 0xde, 0x93, 0x6c, 0x25, 0x8c, + 0xfb, 0x02, 0x28, 0x5f, 0x2c, 0x11, 0xfa, 0x79 }, + { 0x71, 0x5c, 0x99, 0xc7, 0xd5, 0x75, 0x80, 0xcf, + 0x97, 0x53, 0xb4, 0xc1, 0xd7, 0x95, 0xe4, 0x5a, + 0x83, 0xfb, 0xb2, 0x28, 0xc0, 0xd3, 0x6f, 0xbe, + 0x20, 0xfa, 0xf3, 0x9b, 0xdd, 0x6d, 0x4e, 0x85 }, + { 0xe4, 0x57, 0xd6, 0xad, 0x1e, 0x67, 0xcb, 0x9b, + 0xbd, 0x17, 0xcb, 0xd6, 0x98, 0xfa, 0x6d, 0x7d, + 0xae, 0x0c, 0x9b, 0x7a, 0xd6, 0xcb, 0xd6, 
0x53, + 0x96, 0x34, 0xe3, 0x2a, 0x71, 0x9c, 0x84, 0x92 }, + { 0xec, 0xe3, 0xea, 0x81, 0x03, 0xe0, 0x24, 0x83, + 0xc6, 0x4a, 0x70, 0xa4, 0xbd, 0xce, 0xe8, 0xce, + 0xb6, 0x27, 0x8f, 0x25, 0x33, 0xf3, 0xf4, 0x8d, + 0xbe, 0xed, 0xfb, 0xa9, 0x45, 0x31, 0xd4, 0xae }, + { 0x38, 0x8a, 0xa5, 0xd3, 0x66, 0x7a, 0x97, 0xc6, + 0x8d, 0x3d, 0x56, 0xf8, 0xf3, 0xee, 0x8d, 0x3d, + 0x36, 0x09, 0x1f, 0x17, 0xfe, 0x5d, 0x1b, 0x0d, + 0x5d, 0x84, 0xc9, 0x3b, 0x2f, 0xfe, 0x40, 0xbd }, + { 0x8b, 0x6b, 0x31, 0xb9, 0xad, 0x7c, 0x3d, 0x5c, + 0xd8, 0x4b, 0xf9, 0x89, 0x47, 0xb9, 0xcd, 0xb5, + 0x9d, 0xf8, 0xa2, 0x5f, 0xf7, 0x38, 0x10, 0x10, + 0x13, 0xbe, 0x4f, 0xd6, 0x5e, 0x1d, 0xd1, 0xa3 }, + { 0x06, 0x62, 0x91, 0xf6, 0xbb, 0xd2, 0x5f, 0x3c, + 0x85, 0x3d, 0xb7, 0xd8, 0xb9, 0x5c, 0x9a, 0x1c, + 0xfb, 0x9b, 0xf1, 0xc1, 0xc9, 0x9f, 0xb9, 0x5a, + 0x9b, 0x78, 0x69, 0xd9, 0x0f, 0x1c, 0x29, 0x03 }, + { 0xa7, 0x07, 0xef, 0xbc, 0xcd, 0xce, 0xed, 0x42, + 0x96, 0x7a, 0x66, 0xf5, 0x53, 0x9b, 0x93, 0xed, + 0x75, 0x60, 0xd4, 0x67, 0x30, 0x40, 0x16, 0xc4, + 0x78, 0x0d, 0x77, 0x55, 0xa5, 0x65, 0xd4, 0xc4 }, + { 0x38, 0xc5, 0x3d, 0xfb, 0x70, 0xbe, 0x7e, 0x79, + 0x2b, 0x07, 0xa6, 0xa3, 0x5b, 0x8a, 0x6a, 0x0a, + 0xba, 0x02, 0xc5, 0xc5, 0xf3, 0x8b, 0xaf, 0x5c, + 0x82, 0x3f, 0xdf, 0xd9, 0xe4, 0x2d, 0x65, 0x7e }, + { 0xf2, 0x91, 0x13, 0x86, 0x50, 0x1d, 0x9a, 0xb9, + 0xd7, 0x20, 0xcf, 0x8a, 0xd1, 0x05, 0x03, 0xd5, + 0x63, 0x4b, 0xf4, 0xb7, 0xd1, 0x2b, 0x56, 0xdf, + 0xb7, 0x4f, 0xec, 0xc6, 0xe4, 0x09, 0x3f, 0x68 }, + { 0xc6, 0xf2, 0xbd, 0xd5, 0x2b, 0x81, 0xe6, 0xe4, + 0xf6, 0x59, 0x5a, 0xbd, 0x4d, 0x7f, 0xb3, 0x1f, + 0x65, 0x11, 0x69, 0xd0, 0x0f, 0xf3, 0x26, 0x92, + 0x6b, 0x34, 0x94, 0x7b, 0x28, 0xa8, 0x39, 0x59 }, + { 0x29, 0x3d, 0x94, 0xb1, 0x8c, 0x98, 0xbb, 0x32, + 0x23, 0x36, 0x6b, 0x8c, 0xe7, 0x4c, 0x28, 0xfb, + 0xdf, 0x28, 0xe1, 0xf8, 0x4a, 0x33, 0x50, 0xb0, + 0xeb, 0x2d, 0x18, 0x04, 0xa5, 0x77, 0x57, 0x9b }, + { 0x2c, 0x2f, 0xa5, 0xc0, 0xb5, 0x15, 0x33, 0x16, + 0x5b, 0xc3, 0x75, 0xc2, 0x2e, 0x27, 0x81, 0x76, + 0x82, 0x70, 0xa3, 0x83, 0x98, 0x5d, 0x13, 0xbd, + 0x6b, 0x67, 0xb6, 0xfd, 0x67, 0xf8, 0x89, 0xeb }, + { 0xca, 0xa0, 0x9b, 0x82, 0xb7, 0x25, 0x62, 0xe4, + 0x3f, 0x4b, 0x22, 0x75, 0xc0, 0x91, 0x91, 0x8e, + 0x62, 0x4d, 0x91, 0x16, 0x61, 0xcc, 0x81, 0x1b, + 0xb5, 0xfa, 0xec, 0x51, 0xf6, 0x08, 0x8e, 0xf7 }, + { 0x24, 0x76, 0x1e, 0x45, 0xe6, 0x74, 0x39, 0x53, + 0x79, 0xfb, 0x17, 0x72, 0x9c, 0x78, 0xcb, 0x93, + 0x9e, 0x6f, 0x74, 0xc5, 0xdf, 0xfb, 0x9c, 0x96, + 0x1f, 0x49, 0x59, 0x82, 0xc3, 0xed, 0x1f, 0xe3 }, + { 0x55, 0xb7, 0x0a, 0x82, 0x13, 0x1e, 0xc9, 0x48, + 0x88, 0xd7, 0xab, 0x54, 0xa7, 0xc5, 0x15, 0x25, + 0x5c, 0x39, 0x38, 0xbb, 0x10, 0xbc, 0x78, 0x4d, + 0xc9, 0xb6, 0x7f, 0x07, 0x6e, 0x34, 0x1a, 0x73 }, + { 0x6a, 0xb9, 0x05, 0x7b, 0x97, 0x7e, 0xbc, 0x3c, + 0xa4, 0xd4, 0xce, 0x74, 0x50, 0x6c, 0x25, 0xcc, + 0xcd, 0xc5, 0x66, 0x49, 0x7c, 0x45, 0x0b, 0x54, + 0x15, 0xa3, 0x94, 0x86, 0xf8, 0x65, 0x7a, 0x03 }, + { 0x24, 0x06, 0x6d, 0xee, 0xe0, 0xec, 0xee, 0x15, + 0xa4, 0x5f, 0x0a, 0x32, 0x6d, 0x0f, 0x8d, 0xbc, + 0x79, 0x76, 0x1e, 0xbb, 0x93, 0xcf, 0x8c, 0x03, + 0x77, 0xaf, 0x44, 0x09, 0x78, 0xfc, 0xf9, 0x94 }, + { 0x20, 0x00, 0x0d, 0x3f, 0x66, 0xba, 0x76, 0x86, + 0x0d, 0x5a, 0x95, 0x06, 0x88, 0xb9, 0xaa, 0x0d, + 0x76, 0xcf, 0xea, 0x59, 0xb0, 0x05, 0xd8, 0x59, + 0x91, 0x4b, 0x1a, 0x46, 0x65, 0x3a, 0x93, 0x9b }, + { 0xb9, 0x2d, 0xaa, 0x79, 0x60, 0x3e, 0x3b, 0xdb, + 0xc3, 0xbf, 0xe0, 0xf4, 0x19, 0xe4, 0x09, 0xb2, + 0xea, 0x10, 0xdc, 0x43, 0x5b, 0xee, 0xfe, 0x29, + 0x59, 0xda, 0x16, 0x89, 0x5d, 0x5d, 0xca, 0x1c }, + { 0xe9, 0x47, 0x94, 0x87, 
0x05, 0xb2, 0x06, 0xd5, + 0x72, 0xb0, 0xe8, 0xf6, 0x2f, 0x66, 0xa6, 0x55, + 0x1c, 0xbd, 0x6b, 0xc3, 0x05, 0xd2, 0x6c, 0xe7, + 0x53, 0x9a, 0x12, 0xf9, 0xaa, 0xdf, 0x75, 0x71 }, + { 0x3d, 0x67, 0xc1, 0xb3, 0xf9, 0xb2, 0x39, 0x10, + 0xe3, 0xd3, 0x5e, 0x6b, 0x0f, 0x2c, 0xcf, 0x44, + 0xa0, 0xb5, 0x40, 0xa4, 0x5c, 0x18, 0xba, 0x3c, + 0x36, 0x26, 0x4d, 0xd4, 0x8e, 0x96, 0xaf, 0x6a }, + { 0xc7, 0x55, 0x8b, 0xab, 0xda, 0x04, 0xbc, 0xcb, + 0x76, 0x4d, 0x0b, 0xbf, 0x33, 0x58, 0x42, 0x51, + 0x41, 0x90, 0x2d, 0x22, 0x39, 0x1d, 0x9f, 0x8c, + 0x59, 0x15, 0x9f, 0xec, 0x9e, 0x49, 0xb1, 0x51 }, + { 0x0b, 0x73, 0x2b, 0xb0, 0x35, 0x67, 0x5a, 0x50, + 0xff, 0x58, 0xf2, 0xc2, 0x42, 0xe4, 0x71, 0x0a, + 0xec, 0xe6, 0x46, 0x70, 0x07, 0x9c, 0x13, 0x04, + 0x4c, 0x79, 0xc9, 0xb7, 0x49, 0x1f, 0x70, 0x00 }, + { 0xd1, 0x20, 0xb5, 0xef, 0x6d, 0x57, 0xeb, 0xf0, + 0x6e, 0xaf, 0x96, 0xbc, 0x93, 0x3c, 0x96, 0x7b, + 0x16, 0xcb, 0xe6, 0xe2, 0xbf, 0x00, 0x74, 0x1c, + 0x30, 0xaa, 0x1c, 0x54, 0xba, 0x64, 0x80, 0x1f }, + { 0x58, 0xd2, 0x12, 0xad, 0x6f, 0x58, 0xae, 0xf0, + 0xf8, 0x01, 0x16, 0xb4, 0x41, 0xe5, 0x7f, 0x61, + 0x95, 0xbf, 0xef, 0x26, 0xb6, 0x14, 0x63, 0xed, + 0xec, 0x11, 0x83, 0xcd, 0xb0, 0x4f, 0xe7, 0x6d }, + { 0xb8, 0x83, 0x6f, 0x51, 0xd1, 0xe2, 0x9b, 0xdf, + 0xdb, 0xa3, 0x25, 0x56, 0x53, 0x60, 0x26, 0x8b, + 0x8f, 0xad, 0x62, 0x74, 0x73, 0xed, 0xec, 0xef, + 0x7e, 0xae, 0xfe, 0xe8, 0x37, 0xc7, 0x40, 0x03 }, + { 0xc5, 0x47, 0xa3, 0xc1, 0x24, 0xae, 0x56, 0x85, + 0xff, 0xa7, 0xb8, 0xed, 0xaf, 0x96, 0xec, 0x86, + 0xf8, 0xb2, 0xd0, 0xd5, 0x0c, 0xee, 0x8b, 0xe3, + 0xb1, 0xf0, 0xc7, 0x67, 0x63, 0x06, 0x9d, 0x9c }, + { 0x5d, 0x16, 0x8b, 0x76, 0x9a, 0x2f, 0x67, 0x85, + 0x3d, 0x62, 0x95, 0xf7, 0x56, 0x8b, 0xe4, 0x0b, + 0xb7, 0xa1, 0x6b, 0x8d, 0x65, 0xba, 0x87, 0x63, + 0x5d, 0x19, 0x78, 0xd2, 0xab, 0x11, 0xba, 0x2a }, + { 0xa2, 0xf6, 0x75, 0xdc, 0x73, 0x02, 0x63, 0x8c, + 0xb6, 0x02, 0x01, 0x06, 0x4c, 0xa5, 0x50, 0x77, + 0x71, 0x4d, 0x71, 0xfe, 0x09, 0x6a, 0x31, 0x5f, + 0x2f, 0xe7, 0x40, 0x12, 0x77, 0xca, 0xa5, 0xaf }, + { 0xc8, 0xaa, 0xb5, 0xcd, 0x01, 0x60, 0xae, 0x78, + 0xcd, 0x2e, 0x8a, 0xc5, 0xfb, 0x0e, 0x09, 0x3c, + 0xdb, 0x5c, 0x4b, 0x60, 0x52, 0xa0, 0xa9, 0x7b, + 0xb0, 0x42, 0x16, 0x82, 0x6f, 0xa7, 0xa4, 0x37 }, + { 0xff, 0x68, 0xca, 0x40, 0x35, 0xbf, 0xeb, 0x43, + 0xfb, 0xf1, 0x45, 0xfd, 0xdd, 0x5e, 0x43, 0xf1, + 0xce, 0xa5, 0x4f, 0x11, 0xf7, 0xbe, 0xe1, 0x30, + 0x58, 0xf0, 0x27, 0x32, 0x9a, 0x4a, 0x5f, 0xa4 }, + { 0x1d, 0x4e, 0x54, 0x87, 0xae, 0x3c, 0x74, 0x0f, + 0x2b, 0xa6, 0xe5, 0x41, 0xac, 0x91, 0xbc, 0x2b, + 0xfc, 0xd2, 0x99, 0x9c, 0x51, 0x8d, 0x80, 0x7b, + 0x42, 0x67, 0x48, 0x80, 0x3a, 0x35, 0x0f, 0xd4 }, + { 0x6d, 0x24, 0x4e, 0x1a, 0x06, 0xce, 0x4e, 0xf5, + 0x78, 0xdd, 0x0f, 0x63, 0xaf, 0xf0, 0x93, 0x67, + 0x06, 0x73, 0x51, 0x19, 0xca, 0x9c, 0x8d, 0x22, + 0xd8, 0x6c, 0x80, 0x14, 0x14, 0xab, 0x97, 0x41 }, + { 0xde, 0xcf, 0x73, 0x29, 0xdb, 0xcc, 0x82, 0x7b, + 0x8f, 0xc5, 0x24, 0xc9, 0x43, 0x1e, 0x89, 0x98, + 0x02, 0x9e, 0xce, 0x12, 0xce, 0x93, 0xb7, 0xb2, + 0xf3, 0xe7, 0x69, 0xa9, 0x41, 0xfb, 0x8c, 0xea }, + { 0x2f, 0xaf, 0xcc, 0x0f, 0x2e, 0x63, 0xcb, 0xd0, + 0x77, 0x55, 0xbe, 0x7b, 0x75, 0xec, 0xea, 0x0a, + 0xdf, 0xf9, 0xaa, 0x5e, 0xde, 0x2a, 0x52, 0xfd, + 0xab, 0x4d, 0xfd, 0x03, 0x74, 0xcd, 0x48, 0x3f }, + { 0xaa, 0x85, 0x01, 0x0d, 0xd4, 0x6a, 0x54, 0x6b, + 0x53, 0x5e, 0xf4, 0xcf, 0x5f, 0x07, 0xd6, 0x51, + 0x61, 0xe8, 0x98, 0x28, 0xf3, 0xa7, 0x7d, 0xb7, + 0xb9, 0xb5, 0x6f, 0x0d, 0xf5, 0x9a, 0xae, 0x45 }, + { 0x07, 0xe8, 0xe1, 0xee, 0x73, 0x2c, 0xb0, 0xd3, + 0x56, 0xc9, 0xc0, 0xd1, 0x06, 0x9c, 0x89, 0xd1, + 0x7a, 
0xdf, 0x6a, 0x9a, 0x33, 0x4f, 0x74, 0x5e, + 0xc7, 0x86, 0x73, 0x32, 0x54, 0x8c, 0xa8, 0xe9 }, + { 0x0e, 0x01, 0xe8, 0x1c, 0xad, 0xa8, 0x16, 0x2b, + 0xfd, 0x5f, 0x8a, 0x8c, 0x81, 0x8a, 0x6c, 0x69, + 0xfe, 0xdf, 0x02, 0xce, 0xb5, 0x20, 0x85, 0x23, + 0xcb, 0xe5, 0x31, 0x3b, 0x89, 0xca, 0x10, 0x53 }, + { 0x6b, 0xb6, 0xc6, 0x47, 0x26, 0x55, 0x08, 0x43, + 0x99, 0x85, 0x2e, 0x00, 0x24, 0x9f, 0x8c, 0xb2, + 0x47, 0x89, 0x6d, 0x39, 0x2b, 0x02, 0xd7, 0x3b, + 0x7f, 0x0d, 0xd8, 0x18, 0xe1, 0xe2, 0x9b, 0x07 }, + { 0x42, 0xd4, 0x63, 0x6e, 0x20, 0x60, 0xf0, 0x8f, + 0x41, 0xc8, 0x82, 0xe7, 0x6b, 0x39, 0x6b, 0x11, + 0x2e, 0xf6, 0x27, 0xcc, 0x24, 0xc4, 0x3d, 0xd5, + 0xf8, 0x3a, 0x1d, 0x1a, 0x7e, 0xad, 0x71, 0x1a }, + { 0x48, 0x58, 0xc9, 0xa1, 0x88, 0xb0, 0x23, 0x4f, + 0xb9, 0xa8, 0xd4, 0x7d, 0x0b, 0x41, 0x33, 0x65, + 0x0a, 0x03, 0x0b, 0xd0, 0x61, 0x1b, 0x87, 0xc3, + 0x89, 0x2e, 0x94, 0x95, 0x1f, 0x8d, 0xf8, 0x52 }, + { 0x3f, 0xab, 0x3e, 0x36, 0x98, 0x8d, 0x44, 0x5a, + 0x51, 0xc8, 0x78, 0x3e, 0x53, 0x1b, 0xe3, 0xa0, + 0x2b, 0xe4, 0x0c, 0xd0, 0x47, 0x96, 0xcf, 0xb6, + 0x1d, 0x40, 0x34, 0x74, 0x42, 0xd3, 0xf7, 0x94 }, + { 0xeb, 0xab, 0xc4, 0x96, 0x36, 0xbd, 0x43, 0x3d, + 0x2e, 0xc8, 0xf0, 0xe5, 0x18, 0x73, 0x2e, 0xf8, + 0xfa, 0x21, 0xd4, 0xd0, 0x71, 0xcc, 0x3b, 0xc4, + 0x6c, 0xd7, 0x9f, 0xa3, 0x8a, 0x28, 0xb8, 0x10 }, + { 0xa1, 0xd0, 0x34, 0x35, 0x23, 0xb8, 0x93, 0xfc, + 0xa8, 0x4f, 0x47, 0xfe, 0xb4, 0xa6, 0x4d, 0x35, + 0x0a, 0x17, 0xd8, 0xee, 0xf5, 0x49, 0x7e, 0xce, + 0x69, 0x7d, 0x02, 0xd7, 0x91, 0x78, 0xb5, 0x91 }, + { 0x26, 0x2e, 0xbf, 0xd9, 0x13, 0x0b, 0x7d, 0x28, + 0x76, 0x0d, 0x08, 0xef, 0x8b, 0xfd, 0x3b, 0x86, + 0xcd, 0xd3, 0xb2, 0x11, 0x3d, 0x2c, 0xae, 0xf7, + 0xea, 0x95, 0x1a, 0x30, 0x3d, 0xfa, 0x38, 0x46 }, + { 0xf7, 0x61, 0x58, 0xed, 0xd5, 0x0a, 0x15, 0x4f, + 0xa7, 0x82, 0x03, 0xed, 0x23, 0x62, 0x93, 0x2f, + 0xcb, 0x82, 0x53, 0xaa, 0xe3, 0x78, 0x90, 0x3e, + 0xde, 0xd1, 0xe0, 0x3f, 0x70, 0x21, 0xa2, 0x57 }, + { 0x26, 0x17, 0x8e, 0x95, 0x0a, 0xc7, 0x22, 0xf6, + 0x7a, 0xe5, 0x6e, 0x57, 0x1b, 0x28, 0x4c, 0x02, + 0x07, 0x68, 0x4a, 0x63, 0x34, 0xa1, 0x77, 0x48, + 0xa9, 0x4d, 0x26, 0x0b, 0xc5, 0xf5, 0x52, 0x74 }, + { 0xc3, 0x78, 0xd1, 0xe4, 0x93, 0xb4, 0x0e, 0xf1, + 0x1f, 0xe6, 0xa1, 0x5d, 0x9c, 0x27, 0x37, 0xa3, + 0x78, 0x09, 0x63, 0x4c, 0x5a, 0xba, 0xd5, 0xb3, + 0x3d, 0x7e, 0x39, 0x3b, 0x4a, 0xe0, 0x5d, 0x03 }, + { 0x98, 0x4b, 0xd8, 0x37, 0x91, 0x01, 0xbe, 0x8f, + 0xd8, 0x06, 0x12, 0xd8, 0xea, 0x29, 0x59, 0xa7, + 0x86, 0x5e, 0xc9, 0x71, 0x85, 0x23, 0x55, 0x01, + 0x07, 0xae, 0x39, 0x38, 0xdf, 0x32, 0x01, 0x1b }, + { 0xc6, 0xf2, 0x5a, 0x81, 0x2a, 0x14, 0x48, 0x58, + 0xac, 0x5c, 0xed, 0x37, 0xa9, 0x3a, 0x9f, 0x47, + 0x59, 0xba, 0x0b, 0x1c, 0x0f, 0xdc, 0x43, 0x1d, + 0xce, 0x35, 0xf9, 0xec, 0x1f, 0x1f, 0x4a, 0x99 }, + { 0x92, 0x4c, 0x75, 0xc9, 0x44, 0x24, 0xff, 0x75, + 0xe7, 0x4b, 0x8b, 0x4e, 0x94, 0x35, 0x89, 0x58, + 0xb0, 0x27, 0xb1, 0x71, 0xdf, 0x5e, 0x57, 0x89, + 0x9a, 0xd0, 0xd4, 0xda, 0xc3, 0x73, 0x53, 0xb6 }, + { 0x0a, 0xf3, 0x58, 0x92, 0xa6, 0x3f, 0x45, 0x93, + 0x1f, 0x68, 0x46, 0xed, 0x19, 0x03, 0x61, 0xcd, + 0x07, 0x30, 0x89, 0xe0, 0x77, 0x16, 0x57, 0x14, + 0xb5, 0x0b, 0x81, 0xa2, 0xe3, 0xdd, 0x9b, 0xa1 }, + { 0xcc, 0x80, 0xce, 0xfb, 0x26, 0xc3, 0xb2, 0xb0, + 0xda, 0xef, 0x23, 0x3e, 0x60, 0x6d, 0x5f, 0xfc, + 0x80, 0xfa, 0x17, 0x42, 0x7d, 0x18, 0xe3, 0x04, + 0x89, 0x67, 0x3e, 0x06, 0xef, 0x4b, 0x87, 0xf7 }, + { 0xc2, 0xf8, 0xc8, 0x11, 0x74, 0x47, 0xf3, 0x97, + 0x8b, 0x08, 0x18, 0xdc, 0xf6, 0xf7, 0x01, 0x16, + 0xac, 0x56, 0xfd, 0x18, 0x4d, 0xd1, 0x27, 0x84, + 0x94, 0xe1, 0x03, 0xfc, 0x6d, 0x74, 0xa8, 
0x87 }, + { 0xbd, 0xec, 0xf6, 0xbf, 0xc1, 0xba, 0x0d, 0xf6, + 0xe8, 0x62, 0xc8, 0x31, 0x99, 0x22, 0x07, 0x79, + 0x6a, 0xcc, 0x79, 0x79, 0x68, 0x35, 0x88, 0x28, + 0xc0, 0x6e, 0x7a, 0x51, 0xe0, 0x90, 0x09, 0x8f }, + { 0x24, 0xd1, 0xa2, 0x6e, 0x3d, 0xab, 0x02, 0xfe, + 0x45, 0x72, 0xd2, 0xaa, 0x7d, 0xbd, 0x3e, 0xc3, + 0x0f, 0x06, 0x93, 0xdb, 0x26, 0xf2, 0x73, 0xd0, + 0xab, 0x2c, 0xb0, 0xc1, 0x3b, 0x5e, 0x64, 0x51 }, + { 0xec, 0x56, 0xf5, 0x8b, 0x09, 0x29, 0x9a, 0x30, + 0x0b, 0x14, 0x05, 0x65, 0xd7, 0xd3, 0xe6, 0x87, + 0x82, 0xb6, 0xe2, 0xfb, 0xeb, 0x4b, 0x7e, 0xa9, + 0x7a, 0xc0, 0x57, 0x98, 0x90, 0x61, 0xdd, 0x3f }, + { 0x11, 0xa4, 0x37, 0xc1, 0xab, 0xa3, 0xc1, 0x19, + 0xdd, 0xfa, 0xb3, 0x1b, 0x3e, 0x8c, 0x84, 0x1d, + 0xee, 0xeb, 0x91, 0x3e, 0xf5, 0x7f, 0x7e, 0x48, + 0xf2, 0xc9, 0xcf, 0x5a, 0x28, 0xfa, 0x42, 0xbc }, + { 0x53, 0xc7, 0xe6, 0x11, 0x4b, 0x85, 0x0a, 0x2c, + 0xb4, 0x96, 0xc9, 0xb3, 0xc6, 0x9a, 0x62, 0x3e, + 0xae, 0xa2, 0xcb, 0x1d, 0x33, 0xdd, 0x81, 0x7e, + 0x47, 0x65, 0xed, 0xaa, 0x68, 0x23, 0xc2, 0x28 }, + { 0x15, 0x4c, 0x3e, 0x96, 0xfe, 0xe5, 0xdb, 0x14, + 0xf8, 0x77, 0x3e, 0x18, 0xaf, 0x14, 0x85, 0x79, + 0x13, 0x50, 0x9d, 0xa9, 0x99, 0xb4, 0x6c, 0xdd, + 0x3d, 0x4c, 0x16, 0x97, 0x60, 0xc8, 0x3a, 0xd2 }, + { 0x40, 0xb9, 0x91, 0x6f, 0x09, 0x3e, 0x02, 0x7a, + 0x87, 0x86, 0x64, 0x18, 0x18, 0x92, 0x06, 0x20, + 0x47, 0x2f, 0xbc, 0xf6, 0x8f, 0x70, 0x1d, 0x1b, + 0x68, 0x06, 0x32, 0xe6, 0x99, 0x6b, 0xde, 0xd3 }, + { 0x24, 0xc4, 0xcb, 0xba, 0x07, 0x11, 0x98, 0x31, + 0xa7, 0x26, 0xb0, 0x53, 0x05, 0xd9, 0x6d, 0xa0, + 0x2f, 0xf8, 0xb1, 0x48, 0xf0, 0xda, 0x44, 0x0f, + 0xe2, 0x33, 0xbc, 0xaa, 0x32, 0xc7, 0x2f, 0x6f }, + { 0x5d, 0x20, 0x15, 0x10, 0x25, 0x00, 0x20, 0xb7, + 0x83, 0x68, 0x96, 0x88, 0xab, 0xbf, 0x8e, 0xcf, + 0x25, 0x94, 0xa9, 0x6a, 0x08, 0xf2, 0xbf, 0xec, + 0x6c, 0xe0, 0x57, 0x44, 0x65, 0xdd, 0xed, 0x71 }, + { 0x04, 0x3b, 0x97, 0xe3, 0x36, 0xee, 0x6f, 0xdb, + 0xbe, 0x2b, 0x50, 0xf2, 0x2a, 0xf8, 0x32, 0x75, + 0xa4, 0x08, 0x48, 0x05, 0xd2, 0xd5, 0x64, 0x59, + 0x62, 0x45, 0x4b, 0x6c, 0x9b, 0x80, 0x53, 0xa0 }, + { 0x56, 0x48, 0x35, 0xcb, 0xae, 0xa7, 0x74, 0x94, + 0x85, 0x68, 0xbe, 0x36, 0xcf, 0x52, 0xfc, 0xdd, + 0x83, 0x93, 0x4e, 0xb0, 0xa2, 0x75, 0x12, 0xdb, + 0xe3, 0xe2, 0xdb, 0x47, 0xb9, 0xe6, 0x63, 0x5a }, + { 0xf2, 0x1c, 0x33, 0xf4, 0x7b, 0xde, 0x40, 0xa2, + 0xa1, 0x01, 0xc9, 0xcd, 0xe8, 0x02, 0x7a, 0xaf, + 0x61, 0xa3, 0x13, 0x7d, 0xe2, 0x42, 0x2b, 0x30, + 0x03, 0x5a, 0x04, 0xc2, 0x70, 0x89, 0x41, 0x83 }, + { 0x9d, 0xb0, 0xef, 0x74, 0xe6, 0x6c, 0xbb, 0x84, + 0x2e, 0xb0, 0xe0, 0x73, 0x43, 0xa0, 0x3c, 0x5c, + 0x56, 0x7e, 0x37, 0x2b, 0x3f, 0x23, 0xb9, 0x43, + 0xc7, 0x88, 0xa4, 0xf2, 0x50, 0xf6, 0x78, 0x91 }, + { 0xab, 0x8d, 0x08, 0x65, 0x5f, 0xf1, 0xd3, 0xfe, + 0x87, 0x58, 0xd5, 0x62, 0x23, 0x5f, 0xd2, 0x3e, + 0x7c, 0xf9, 0xdc, 0xaa, 0xd6, 0x58, 0x87, 0x2a, + 0x49, 0xe5, 0xd3, 0x18, 0x3b, 0x6c, 0xce, 0xbd }, + { 0x6f, 0x27, 0xf7, 0x7e, 0x7b, 0xcf, 0x46, 0xa1, + 0xe9, 0x63, 0xad, 0xe0, 0x30, 0x97, 0x33, 0x54, + 0x30, 0x31, 0xdc, 0xcd, 0xd4, 0x7c, 0xaa, 0xc1, + 0x74, 0xd7, 0xd2, 0x7c, 0xe8, 0x07, 0x7e, 0x8b }, + { 0xe3, 0xcd, 0x54, 0xda, 0x7e, 0x44, 0x4c, 0xaa, + 0x62, 0x07, 0x56, 0x95, 0x25, 0xa6, 0x70, 0xeb, + 0xae, 0x12, 0x78, 0xde, 0x4e, 0x3f, 0xe2, 0x68, + 0x4b, 0x3e, 0x33, 0xf5, 0xef, 0x90, 0xcc, 0x1b }, + { 0xb2, 0xc3, 0xe3, 0x3a, 0x51, 0xd2, 0x2c, 0x4c, + 0x08, 0xfc, 0x09, 0x89, 0xc8, 0x73, 0xc9, 0xcc, + 0x41, 0x50, 0x57, 0x9b, 0x1e, 0x61, 0x63, 0xfa, + 0x69, 0x4a, 0xd5, 0x1d, 0x53, 0xd7, 0x12, 0xdc }, + { 0xbe, 0x7f, 0xda, 0x98, 0x3e, 0x13, 0x18, 0x9b, + 0x4c, 0x77, 0xe0, 0xa8, 
0x09, 0x20, 0xb6, 0xe0, + 0xe0, 0xea, 0x80, 0xc3, 0xb8, 0x4d, 0xbe, 0x7e, + 0x71, 0x17, 0xd2, 0x53, 0xf4, 0x81, 0x12, 0xf4 }, + { 0xb6, 0x00, 0x8c, 0x28, 0xfa, 0xe0, 0x8a, 0xa4, + 0x27, 0xe5, 0xbd, 0x3a, 0xad, 0x36, 0xf1, 0x00, + 0x21, 0xf1, 0x6c, 0x77, 0xcf, 0xea, 0xbe, 0xd0, + 0x7f, 0x97, 0xcc, 0x7d, 0xc1, 0xf1, 0x28, 0x4a }, + { 0x6e, 0x4e, 0x67, 0x60, 0xc5, 0x38, 0xf2, 0xe9, + 0x7b, 0x3a, 0xdb, 0xfb, 0xbc, 0xde, 0x57, 0xf8, + 0x96, 0x6b, 0x7e, 0xa8, 0xfc, 0xb5, 0xbf, 0x7e, + 0xfe, 0xc9, 0x13, 0xfd, 0x2a, 0x2b, 0x0c, 0x55 }, + { 0x4a, 0xe5, 0x1f, 0xd1, 0x83, 0x4a, 0xa5, 0xbd, + 0x9a, 0x6f, 0x7e, 0xc3, 0x9f, 0xc6, 0x63, 0x33, + 0x8d, 0xc5, 0xd2, 0xe2, 0x07, 0x61, 0x56, 0x6d, + 0x90, 0xcc, 0x68, 0xb1, 0xcb, 0x87, 0x5e, 0xd8 }, + { 0xb6, 0x73, 0xaa, 0xd7, 0x5a, 0xb1, 0xfd, 0xb5, + 0x40, 0x1a, 0xbf, 0xa1, 0xbf, 0x89, 0xf3, 0xad, + 0xd2, 0xeb, 0xc4, 0x68, 0xdf, 0x36, 0x24, 0xa4, + 0x78, 0xf4, 0xfe, 0x85, 0x9d, 0x8d, 0x55, 0xe2 }, + { 0x13, 0xc9, 0x47, 0x1a, 0x98, 0x55, 0x91, 0x35, + 0x39, 0x83, 0x66, 0x60, 0x39, 0x8d, 0xa0, 0xf3, + 0xf9, 0x9a, 0xda, 0x08, 0x47, 0x9c, 0x69, 0xd1, + 0xb7, 0xfc, 0xaa, 0x34, 0x61, 0xdd, 0x7e, 0x59 }, + { 0x2c, 0x11, 0xf4, 0xa7, 0xf9, 0x9a, 0x1d, 0x23, + 0xa5, 0x8b, 0xb6, 0x36, 0x35, 0x0f, 0xe8, 0x49, + 0xf2, 0x9c, 0xba, 0xc1, 0xb2, 0xa1, 0x11, 0x2d, + 0x9f, 0x1e, 0xd5, 0xbc, 0x5b, 0x31, 0x3c, 0xcd }, + { 0xc7, 0xd3, 0xc0, 0x70, 0x6b, 0x11, 0xae, 0x74, + 0x1c, 0x05, 0xa1, 0xef, 0x15, 0x0d, 0xd6, 0x5b, + 0x54, 0x94, 0xd6, 0xd5, 0x4c, 0x9a, 0x86, 0xe2, + 0x61, 0x78, 0x54, 0xe6, 0xae, 0xee, 0xbb, 0xd9 }, + { 0x19, 0x4e, 0x10, 0xc9, 0x38, 0x93, 0xaf, 0xa0, + 0x64, 0xc3, 0xac, 0x04, 0xc0, 0xdd, 0x80, 0x8d, + 0x79, 0x1c, 0x3d, 0x4b, 0x75, 0x56, 0xe8, 0x9d, + 0x8d, 0x9c, 0xb2, 0x25, 0xc4, 0xb3, 0x33, 0x39 }, + { 0x6f, 0xc4, 0x98, 0x8b, 0x8f, 0x78, 0x54, 0x6b, + 0x16, 0x88, 0x99, 0x18, 0x45, 0x90, 0x8f, 0x13, + 0x4b, 0x6a, 0x48, 0x2e, 0x69, 0x94, 0xb3, 0xd4, + 0x83, 0x17, 0xbf, 0x08, 0xdb, 0x29, 0x21, 0x85 }, + { 0x56, 0x65, 0xbe, 0xb8, 0xb0, 0x95, 0x55, 0x25, + 0x81, 0x3b, 0x59, 0x81, 0xcd, 0x14, 0x2e, 0xd4, + 0xd0, 0x3f, 0xba, 0x38, 0xa6, 0xf3, 0xe5, 0xad, + 0x26, 0x8e, 0x0c, 0xc2, 0x70, 0xd1, 0xcd, 0x11 }, + { 0xb8, 0x83, 0xd6, 0x8f, 0x5f, 0xe5, 0x19, 0x36, + 0x43, 0x1b, 0xa4, 0x25, 0x67, 0x38, 0x05, 0x3b, + 0x1d, 0x04, 0x26, 0xd4, 0xcb, 0x64, 0xb1, 0x6e, + 0x83, 0xba, 0xdc, 0x5e, 0x9f, 0xbe, 0x3b, 0x81 }, + { 0x53, 0xe7, 0xb2, 0x7e, 0xa5, 0x9c, 0x2f, 0x6d, + 0xbb, 0x50, 0x76, 0x9e, 0x43, 0x55, 0x4d, 0xf3, + 0x5a, 0xf8, 0x9f, 0x48, 0x22, 0xd0, 0x46, 0x6b, + 0x00, 0x7d, 0xd6, 0xf6, 0xde, 0xaf, 0xff, 0x02 }, + { 0x1f, 0x1a, 0x02, 0x29, 0xd4, 0x64, 0x0f, 0x01, + 0x90, 0x15, 0x88, 0xd9, 0xde, 0xc2, 0x2d, 0x13, + 0xfc, 0x3e, 0xb3, 0x4a, 0x61, 0xb3, 0x29, 0x38, + 0xef, 0xbf, 0x53, 0x34, 0xb2, 0x80, 0x0a, 0xfa }, + { 0xc2, 0xb4, 0x05, 0xaf, 0xa0, 0xfa, 0x66, 0x68, + 0x85, 0x2a, 0xee, 0x4d, 0x88, 0x04, 0x08, 0x53, + 0xfa, 0xb8, 0x00, 0xe7, 0x2b, 0x57, 0x58, 0x14, + 0x18, 0xe5, 0x50, 0x6f, 0x21, 0x4c, 0x7d, 0x1f }, + { 0xc0, 0x8a, 0xa1, 0xc2, 0x86, 0xd7, 0x09, 0xfd, + 0xc7, 0x47, 0x37, 0x44, 0x97, 0x71, 0x88, 0xc8, + 0x95, 0xba, 0x01, 0x10, 0x14, 0x24, 0x7e, 0x4e, + 0xfa, 0x8d, 0x07, 0xe7, 0x8f, 0xec, 0x69, 0x5c }, + { 0xf0, 0x3f, 0x57, 0x89, 0xd3, 0x33, 0x6b, 0x80, + 0xd0, 0x02, 0xd5, 0x9f, 0xdf, 0x91, 0x8b, 0xdb, + 0x77, 0x5b, 0x00, 0x95, 0x6e, 0xd5, 0x52, 0x8e, + 0x86, 0xaa, 0x99, 0x4a, 0xcb, 0x38, 0xfe, 0x2d } +}; + +static const u8 blake2s_keyed_testvecs[][BLAKE2S_HASH_SIZE] __initconst = { + { 0x48, 0xa8, 0x99, 0x7d, 0xa4, 0x07, 0x87, 0x6b, + 0x3d, 0x79, 0xc0, 0xd9, 
0x23, 0x25, 0xad, 0x3b, + 0x89, 0xcb, 0xb7, 0x54, 0xd8, 0x6a, 0xb7, 0x1a, + 0xee, 0x04, 0x7a, 0xd3, 0x45, 0xfd, 0x2c, 0x49 }, + { 0x40, 0xd1, 0x5f, 0xee, 0x7c, 0x32, 0x88, 0x30, + 0x16, 0x6a, 0xc3, 0xf9, 0x18, 0x65, 0x0f, 0x80, + 0x7e, 0x7e, 0x01, 0xe1, 0x77, 0x25, 0x8c, 0xdc, + 0x0a, 0x39, 0xb1, 0x1f, 0x59, 0x80, 0x66, 0xf1 }, + { 0x6b, 0xb7, 0x13, 0x00, 0x64, 0x4c, 0xd3, 0x99, + 0x1b, 0x26, 0xcc, 0xd4, 0xd2, 0x74, 0xac, 0xd1, + 0xad, 0xea, 0xb8, 0xb1, 0xd7, 0x91, 0x45, 0x46, + 0xc1, 0x19, 0x8b, 0xbe, 0x9f, 0xc9, 0xd8, 0x03 }, + { 0x1d, 0x22, 0x0d, 0xbe, 0x2e, 0xe1, 0x34, 0x66, + 0x1f, 0xdf, 0x6d, 0x9e, 0x74, 0xb4, 0x17, 0x04, + 0x71, 0x05, 0x56, 0xf2, 0xf6, 0xe5, 0xa0, 0x91, + 0xb2, 0x27, 0x69, 0x74, 0x45, 0xdb, 0xea, 0x6b }, + { 0xf6, 0xc3, 0xfb, 0xad, 0xb4, 0xcc, 0x68, 0x7a, + 0x00, 0x64, 0xa5, 0xbe, 0x6e, 0x79, 0x1b, 0xec, + 0x63, 0xb8, 0x68, 0xad, 0x62, 0xfb, 0xa6, 0x1b, + 0x37, 0x57, 0xef, 0x9c, 0xa5, 0x2e, 0x05, 0xb2 }, + { 0x49, 0xc1, 0xf2, 0x11, 0x88, 0xdf, 0xd7, 0x69, + 0xae, 0xa0, 0xe9, 0x11, 0xdd, 0x6b, 0x41, 0xf1, + 0x4d, 0xab, 0x10, 0x9d, 0x2b, 0x85, 0x97, 0x7a, + 0xa3, 0x08, 0x8b, 0x5c, 0x70, 0x7e, 0x85, 0x98 }, + { 0xfd, 0xd8, 0x99, 0x3d, 0xcd, 0x43, 0xf6, 0x96, + 0xd4, 0x4f, 0x3c, 0xea, 0x0f, 0xf3, 0x53, 0x45, + 0x23, 0x4e, 0xc8, 0xee, 0x08, 0x3e, 0xb3, 0xca, + 0xda, 0x01, 0x7c, 0x7f, 0x78, 0xc1, 0x71, 0x43 }, + { 0xe6, 0xc8, 0x12, 0x56, 0x37, 0x43, 0x8d, 0x09, + 0x05, 0xb7, 0x49, 0xf4, 0x65, 0x60, 0xac, 0x89, + 0xfd, 0x47, 0x1c, 0xf8, 0x69, 0x2e, 0x28, 0xfa, + 0xb9, 0x82, 0xf7, 0x3f, 0x01, 0x9b, 0x83, 0xa9 }, + { 0x19, 0xfc, 0x8c, 0xa6, 0x97, 0x9d, 0x60, 0xe6, + 0xed, 0xd3, 0xb4, 0x54, 0x1e, 0x2f, 0x96, 0x7c, + 0xed, 0x74, 0x0d, 0xf6, 0xec, 0x1e, 0xae, 0xbb, + 0xfe, 0x81, 0x38, 0x32, 0xe9, 0x6b, 0x29, 0x74 }, + { 0xa6, 0xad, 0x77, 0x7c, 0xe8, 0x81, 0xb5, 0x2b, + 0xb5, 0xa4, 0x42, 0x1a, 0xb6, 0xcd, 0xd2, 0xdf, + 0xba, 0x13, 0xe9, 0x63, 0x65, 0x2d, 0x4d, 0x6d, + 0x12, 0x2a, 0xee, 0x46, 0x54, 0x8c, 0x14, 0xa7 }, + { 0xf5, 0xc4, 0xb2, 0xba, 0x1a, 0x00, 0x78, 0x1b, + 0x13, 0xab, 0xa0, 0x42, 0x52, 0x42, 0xc6, 0x9c, + 0xb1, 0x55, 0x2f, 0x3f, 0x71, 0xa9, 0xa3, 0xbb, + 0x22, 0xb4, 0xa6, 0xb4, 0x27, 0x7b, 0x46, 0xdd }, + { 0xe3, 0x3c, 0x4c, 0x9b, 0xd0, 0xcc, 0x7e, 0x45, + 0xc8, 0x0e, 0x65, 0xc7, 0x7f, 0xa5, 0x99, 0x7f, + 0xec, 0x70, 0x02, 0x73, 0x85, 0x41, 0x50, 0x9e, + 0x68, 0xa9, 0x42, 0x38, 0x91, 0xe8, 0x22, 0xa3 }, + { 0xfb, 0xa1, 0x61, 0x69, 0xb2, 0xc3, 0xee, 0x10, + 0x5b, 0xe6, 0xe1, 0xe6, 0x50, 0xe5, 0xcb, 0xf4, + 0x07, 0x46, 0xb6, 0x75, 0x3d, 0x03, 0x6a, 0xb5, + 0x51, 0x79, 0x01, 0x4a, 0xd7, 0xef, 0x66, 0x51 }, + { 0xf5, 0xc4, 0xbe, 0xc6, 0xd6, 0x2f, 0xc6, 0x08, + 0xbf, 0x41, 0xcc, 0x11, 0x5f, 0x16, 0xd6, 0x1c, + 0x7e, 0xfd, 0x3f, 0xf6, 0xc6, 0x56, 0x92, 0xbb, + 0xe0, 0xaf, 0xff, 0xb1, 0xfe, 0xde, 0x74, 0x75 }, + { 0xa4, 0x86, 0x2e, 0x76, 0xdb, 0x84, 0x7f, 0x05, + 0xba, 0x17, 0xed, 0xe5, 0xda, 0x4e, 0x7f, 0x91, + 0xb5, 0x92, 0x5c, 0xf1, 0xad, 0x4b, 0xa1, 0x27, + 0x32, 0xc3, 0x99, 0x57, 0x42, 0xa5, 0xcd, 0x6e }, + { 0x65, 0xf4, 0xb8, 0x60, 0xcd, 0x15, 0xb3, 0x8e, + 0xf8, 0x14, 0xa1, 0xa8, 0x04, 0x31, 0x4a, 0x55, + 0xbe, 0x95, 0x3c, 0xaa, 0x65, 0xfd, 0x75, 0x8a, + 0xd9, 0x89, 0xff, 0x34, 0xa4, 0x1c, 0x1e, 0xea }, + { 0x19, 0xba, 0x23, 0x4f, 0x0a, 0x4f, 0x38, 0x63, + 0x7d, 0x18, 0x39, 0xf9, 0xd9, 0xf7, 0x6a, 0xd9, + 0x1c, 0x85, 0x22, 0x30, 0x71, 0x43, 0xc9, 0x7d, + 0x5f, 0x93, 0xf6, 0x92, 0x74, 0xce, 0xc9, 0xa7 }, + { 0x1a, 0x67, 0x18, 0x6c, 0xa4, 0xa5, 0xcb, 0x8e, + 0x65, 0xfc, 0xa0, 0xe2, 0xec, 0xbc, 0x5d, 0xdc, + 0x14, 0xae, 0x38, 0x1b, 0xb8, 0xbf, 0xfe, 0xb9, + 0xe0, 
0xa1, 0x03, 0x44, 0x9e, 0x3e, 0xf0, 0x3c }, + { 0xaf, 0xbe, 0xa3, 0x17, 0xb5, 0xa2, 0xe8, 0x9c, + 0x0b, 0xd9, 0x0c, 0xcf, 0x5d, 0x7f, 0xd0, 0xed, + 0x57, 0xfe, 0x58, 0x5e, 0x4b, 0xe3, 0x27, 0x1b, + 0x0a, 0x6b, 0xf0, 0xf5, 0x78, 0x6b, 0x0f, 0x26 }, + { 0xf1, 0xb0, 0x15, 0x58, 0xce, 0x54, 0x12, 0x62, + 0xf5, 0xec, 0x34, 0x29, 0x9d, 0x6f, 0xb4, 0x09, + 0x00, 0x09, 0xe3, 0x43, 0x4b, 0xe2, 0xf4, 0x91, + 0x05, 0xcf, 0x46, 0xaf, 0x4d, 0x2d, 0x41, 0x24 }, + { 0x13, 0xa0, 0xa0, 0xc8, 0x63, 0x35, 0x63, 0x5e, + 0xaa, 0x74, 0xca, 0x2d, 0x5d, 0x48, 0x8c, 0x79, + 0x7b, 0xbb, 0x4f, 0x47, 0xdc, 0x07, 0x10, 0x50, + 0x15, 0xed, 0x6a, 0x1f, 0x33, 0x09, 0xef, 0xce }, + { 0x15, 0x80, 0xaf, 0xee, 0xbe, 0xbb, 0x34, 0x6f, + 0x94, 0xd5, 0x9f, 0xe6, 0x2d, 0xa0, 0xb7, 0x92, + 0x37, 0xea, 0xd7, 0xb1, 0x49, 0x1f, 0x56, 0x67, + 0xa9, 0x0e, 0x45, 0xed, 0xf6, 0xca, 0x8b, 0x03 }, + { 0x20, 0xbe, 0x1a, 0x87, 0x5b, 0x38, 0xc5, 0x73, + 0xdd, 0x7f, 0xaa, 0xa0, 0xde, 0x48, 0x9d, 0x65, + 0x5c, 0x11, 0xef, 0xb6, 0xa5, 0x52, 0x69, 0x8e, + 0x07, 0xa2, 0xd3, 0x31, 0xb5, 0xf6, 0x55, 0xc3 }, + { 0xbe, 0x1f, 0xe3, 0xc4, 0xc0, 0x40, 0x18, 0xc5, + 0x4c, 0x4a, 0x0f, 0x6b, 0x9a, 0x2e, 0xd3, 0xc5, + 0x3a, 0xbe, 0x3a, 0x9f, 0x76, 0xb4, 0xd2, 0x6d, + 0xe5, 0x6f, 0xc9, 0xae, 0x95, 0x05, 0x9a, 0x99 }, + { 0xe3, 0xe3, 0xac, 0xe5, 0x37, 0xeb, 0x3e, 0xdd, + 0x84, 0x63, 0xd9, 0xad, 0x35, 0x82, 0xe1, 0x3c, + 0xf8, 0x65, 0x33, 0xff, 0xde, 0x43, 0xd6, 0x68, + 0xdd, 0x2e, 0x93, 0xbb, 0xdb, 0xd7, 0x19, 0x5a }, + { 0x11, 0x0c, 0x50, 0xc0, 0xbf, 0x2c, 0x6e, 0x7a, + 0xeb, 0x7e, 0x43, 0x5d, 0x92, 0xd1, 0x32, 0xab, + 0x66, 0x55, 0x16, 0x8e, 0x78, 0xa2, 0xde, 0xcd, + 0xec, 0x33, 0x30, 0x77, 0x76, 0x84, 0xd9, 0xc1 }, + { 0xe9, 0xba, 0x8f, 0x50, 0x5c, 0x9c, 0x80, 0xc0, + 0x86, 0x66, 0xa7, 0x01, 0xf3, 0x36, 0x7e, 0x6c, + 0xc6, 0x65, 0xf3, 0x4b, 0x22, 0xe7, 0x3c, 0x3c, + 0x04, 0x17, 0xeb, 0x1c, 0x22, 0x06, 0x08, 0x2f }, + { 0x26, 0xcd, 0x66, 0xfc, 0xa0, 0x23, 0x79, 0xc7, + 0x6d, 0xf1, 0x23, 0x17, 0x05, 0x2b, 0xca, 0xfd, + 0x6c, 0xd8, 0xc3, 0xa7, 0xb8, 0x90, 0xd8, 0x05, + 0xf3, 0x6c, 0x49, 0x98, 0x97, 0x82, 0x43, 0x3a }, + { 0x21, 0x3f, 0x35, 0x96, 0xd6, 0xe3, 0xa5, 0xd0, + 0xe9, 0x93, 0x2c, 0xd2, 0x15, 0x91, 0x46, 0x01, + 0x5e, 0x2a, 0xbc, 0x94, 0x9f, 0x47, 0x29, 0xee, + 0x26, 0x32, 0xfe, 0x1e, 0xdb, 0x78, 0xd3, 0x37 }, + { 0x10, 0x15, 0xd7, 0x01, 0x08, 0xe0, 0x3b, 0xe1, + 0xc7, 0x02, 0xfe, 0x97, 0x25, 0x36, 0x07, 0xd1, + 0x4a, 0xee, 0x59, 0x1f, 0x24, 0x13, 0xea, 0x67, + 0x87, 0x42, 0x7b, 0x64, 0x59, 0xff, 0x21, 0x9a }, + { 0x3c, 0xa9, 0x89, 0xde, 0x10, 0xcf, 0xe6, 0x09, + 0x90, 0x94, 0x72, 0xc8, 0xd3, 0x56, 0x10, 0x80, + 0x5b, 0x2f, 0x97, 0x77, 0x34, 0xcf, 0x65, 0x2c, + 0xc6, 0x4b, 0x3b, 0xfc, 0x88, 0x2d, 0x5d, 0x89 }, + { 0xb6, 0x15, 0x6f, 0x72, 0xd3, 0x80, 0xee, 0x9e, + 0xa6, 0xac, 0xd1, 0x90, 0x46, 0x4f, 0x23, 0x07, + 0xa5, 0xc1, 0x79, 0xef, 0x01, 0xfd, 0x71, 0xf9, + 0x9f, 0x2d, 0x0f, 0x7a, 0x57, 0x36, 0x0a, 0xea }, + { 0xc0, 0x3b, 0xc6, 0x42, 0xb2, 0x09, 0x59, 0xcb, + 0xe1, 0x33, 0xa0, 0x30, 0x3e, 0x0c, 0x1a, 0xbf, + 0xf3, 0xe3, 0x1e, 0xc8, 0xe1, 0xa3, 0x28, 0xec, + 0x85, 0x65, 0xc3, 0x6d, 0xec, 0xff, 0x52, 0x65 }, + { 0x2c, 0x3e, 0x08, 0x17, 0x6f, 0x76, 0x0c, 0x62, + 0x64, 0xc3, 0xa2, 0xcd, 0x66, 0xfe, 0xc6, 0xc3, + 0xd7, 0x8d, 0xe4, 0x3f, 0xc1, 0x92, 0x45, 0x7b, + 0x2a, 0x4a, 0x66, 0x0a, 0x1e, 0x0e, 0xb2, 0x2b }, + { 0xf7, 0x38, 0xc0, 0x2f, 0x3c, 0x1b, 0x19, 0x0c, + 0x51, 0x2b, 0x1a, 0x32, 0xde, 0xab, 0xf3, 0x53, + 0x72, 0x8e, 0x0e, 0x9a, 0xb0, 0x34, 0x49, 0x0e, + 0x3c, 0x34, 0x09, 0x94, 0x6a, 0x97, 0xae, 0xec }, + { 0x8b, 0x18, 0x80, 0xdf, 0x30, 0x1c, 
0xc9, 0x63, + 0x41, 0x88, 0x11, 0x08, 0x89, 0x64, 0x83, 0x92, + 0x87, 0xff, 0x7f, 0xe3, 0x1c, 0x49, 0xea, 0x6e, + 0xbd, 0x9e, 0x48, 0xbd, 0xee, 0xe4, 0x97, 0xc5 }, + { 0x1e, 0x75, 0xcb, 0x21, 0xc6, 0x09, 0x89, 0x02, + 0x03, 0x75, 0xf1, 0xa7, 0xa2, 0x42, 0x83, 0x9f, + 0x0b, 0x0b, 0x68, 0x97, 0x3a, 0x4c, 0x2a, 0x05, + 0xcf, 0x75, 0x55, 0xed, 0x5a, 0xae, 0xc4, 0xc1 }, + { 0x62, 0xbf, 0x8a, 0x9c, 0x32, 0xa5, 0xbc, 0xcf, + 0x29, 0x0b, 0x6c, 0x47, 0x4d, 0x75, 0xb2, 0xa2, + 0xa4, 0x09, 0x3f, 0x1a, 0x9e, 0x27, 0x13, 0x94, + 0x33, 0xa8, 0xf2, 0xb3, 0xbc, 0xe7, 0xb8, 0xd7 }, + { 0x16, 0x6c, 0x83, 0x50, 0xd3, 0x17, 0x3b, 0x5e, + 0x70, 0x2b, 0x78, 0x3d, 0xfd, 0x33, 0xc6, 0x6e, + 0xe0, 0x43, 0x27, 0x42, 0xe9, 0xb9, 0x2b, 0x99, + 0x7f, 0xd2, 0x3c, 0x60, 0xdc, 0x67, 0x56, 0xca }, + { 0x04, 0x4a, 0x14, 0xd8, 0x22, 0xa9, 0x0c, 0xac, + 0xf2, 0xf5, 0xa1, 0x01, 0x42, 0x8a, 0xdc, 0x8f, + 0x41, 0x09, 0x38, 0x6c, 0xcb, 0x15, 0x8b, 0xf9, + 0x05, 0xc8, 0x61, 0x8b, 0x8e, 0xe2, 0x4e, 0xc3 }, + { 0x38, 0x7d, 0x39, 0x7e, 0xa4, 0x3a, 0x99, 0x4b, + 0xe8, 0x4d, 0x2d, 0x54, 0x4a, 0xfb, 0xe4, 0x81, + 0xa2, 0x00, 0x0f, 0x55, 0x25, 0x26, 0x96, 0xbb, + 0xa2, 0xc5, 0x0c, 0x8e, 0xbd, 0x10, 0x13, 0x47 }, + { 0x56, 0xf8, 0xcc, 0xf1, 0xf8, 0x64, 0x09, 0xb4, + 0x6c, 0xe3, 0x61, 0x66, 0xae, 0x91, 0x65, 0x13, + 0x84, 0x41, 0x57, 0x75, 0x89, 0xdb, 0x08, 0xcb, + 0xc5, 0xf6, 0x6c, 0xa2, 0x97, 0x43, 0xb9, 0xfd }, + { 0x97, 0x06, 0xc0, 0x92, 0xb0, 0x4d, 0x91, 0xf5, + 0x3d, 0xff, 0x91, 0xfa, 0x37, 0xb7, 0x49, 0x3d, + 0x28, 0xb5, 0x76, 0xb5, 0xd7, 0x10, 0x46, 0x9d, + 0xf7, 0x94, 0x01, 0x66, 0x22, 0x36, 0xfc, 0x03 }, + { 0x87, 0x79, 0x68, 0x68, 0x6c, 0x06, 0x8c, 0xe2, + 0xf7, 0xe2, 0xad, 0xcf, 0xf6, 0x8b, 0xf8, 0x74, + 0x8e, 0xdf, 0x3c, 0xf8, 0x62, 0xcf, 0xb4, 0xd3, + 0x94, 0x7a, 0x31, 0x06, 0x95, 0x80, 0x54, 0xe3 }, + { 0x88, 0x17, 0xe5, 0x71, 0x98, 0x79, 0xac, 0xf7, + 0x02, 0x47, 0x87, 0xec, 0xcd, 0xb2, 0x71, 0x03, + 0x55, 0x66, 0xcf, 0xa3, 0x33, 0xe0, 0x49, 0x40, + 0x7c, 0x01, 0x78, 0xcc, 0xc5, 0x7a, 0x5b, 0x9f }, + { 0x89, 0x38, 0x24, 0x9e, 0x4b, 0x50, 0xca, 0xda, + 0xcc, 0xdf, 0x5b, 0x18, 0x62, 0x13, 0x26, 0xcb, + 0xb1, 0x52, 0x53, 0xe3, 0x3a, 0x20, 0xf5, 0x63, + 0x6e, 0x99, 0x5d, 0x72, 0x47, 0x8d, 0xe4, 0x72 }, + { 0xf1, 0x64, 0xab, 0xba, 0x49, 0x63, 0xa4, 0x4d, + 0x10, 0x72, 0x57, 0xe3, 0x23, 0x2d, 0x90, 0xac, + 0xa5, 0xe6, 0x6a, 0x14, 0x08, 0x24, 0x8c, 0x51, + 0x74, 0x1e, 0x99, 0x1d, 0xb5, 0x22, 0x77, 0x56 }, + { 0xd0, 0x55, 0x63, 0xe2, 0xb1, 0xcb, 0xa0, 0xc4, + 0xa2, 0xa1, 0xe8, 0xbd, 0xe3, 0xa1, 0xa0, 0xd9, + 0xf5, 0xb4, 0x0c, 0x85, 0xa0, 0x70, 0xd6, 0xf5, + 0xfb, 0x21, 0x06, 0x6e, 0xad, 0x5d, 0x06, 0x01 }, + { 0x03, 0xfb, 0xb1, 0x63, 0x84, 0xf0, 0xa3, 0x86, + 0x6f, 0x4c, 0x31, 0x17, 0x87, 0x76, 0x66, 0xef, + 0xbf, 0x12, 0x45, 0x97, 0x56, 0x4b, 0x29, 0x3d, + 0x4a, 0xab, 0x0d, 0x26, 0x9f, 0xab, 0xdd, 0xfa }, + { 0x5f, 0xa8, 0x48, 0x6a, 0xc0, 0xe5, 0x29, 0x64, + 0xd1, 0x88, 0x1b, 0xbe, 0x33, 0x8e, 0xb5, 0x4b, + 0xe2, 0xf7, 0x19, 0x54, 0x92, 0x24, 0x89, 0x20, + 0x57, 0xb4, 0xda, 0x04, 0xba, 0x8b, 0x34, 0x75 }, + { 0xcd, 0xfa, 0xbc, 0xee, 0x46, 0x91, 0x11, 0x11, + 0x23, 0x6a, 0x31, 0x70, 0x8b, 0x25, 0x39, 0xd7, + 0x1f, 0xc2, 0x11, 0xd9, 0xb0, 0x9c, 0x0d, 0x85, + 0x30, 0xa1, 0x1e, 0x1d, 0xbf, 0x6e, 0xed, 0x01 }, + { 0x4f, 0x82, 0xde, 0x03, 0xb9, 0x50, 0x47, 0x93, + 0xb8, 0x2a, 0x07, 0xa0, 0xbd, 0xcd, 0xff, 0x31, + 0x4d, 0x75, 0x9e, 0x7b, 0x62, 0xd2, 0x6b, 0x78, + 0x49, 0x46, 0xb0, 0xd3, 0x6f, 0x91, 0x6f, 0x52 }, + { 0x25, 0x9e, 0xc7, 0xf1, 0x73, 0xbc, 0xc7, 0x6a, + 0x09, 0x94, 0xc9, 0x67, 0xb4, 0xf5, 0xf0, 0x24, + 0xc5, 0x60, 0x57, 
0xfb, 0x79, 0xc9, 0x65, 0xc4, + 0xfa, 0xe4, 0x18, 0x75, 0xf0, 0x6a, 0x0e, 0x4c }, + { 0x19, 0x3c, 0xc8, 0xe7, 0xc3, 0xe0, 0x8b, 0xb3, + 0x0f, 0x54, 0x37, 0xaa, 0x27, 0xad, 0xe1, 0xf1, + 0x42, 0x36, 0x9b, 0x24, 0x6a, 0x67, 0x5b, 0x23, + 0x83, 0xe6, 0xda, 0x9b, 0x49, 0xa9, 0x80, 0x9e }, + { 0x5c, 0x10, 0x89, 0x6f, 0x0e, 0x28, 0x56, 0xb2, + 0xa2, 0xee, 0xe0, 0xfe, 0x4a, 0x2c, 0x16, 0x33, + 0x56, 0x5d, 0x18, 0xf0, 0xe9, 0x3e, 0x1f, 0xab, + 0x26, 0xc3, 0x73, 0xe8, 0xf8, 0x29, 0x65, 0x4d }, + { 0xf1, 0x60, 0x12, 0xd9, 0x3f, 0x28, 0x85, 0x1a, + 0x1e, 0xb9, 0x89, 0xf5, 0xd0, 0xb4, 0x3f, 0x3f, + 0x39, 0xca, 0x73, 0xc9, 0xa6, 0x2d, 0x51, 0x81, + 0xbf, 0xf2, 0x37, 0x53, 0x6b, 0xd3, 0x48, 0xc3 }, + { 0x29, 0x66, 0xb3, 0xcf, 0xae, 0x1e, 0x44, 0xea, + 0x99, 0x6d, 0xc5, 0xd6, 0x86, 0xcf, 0x25, 0xfa, + 0x05, 0x3f, 0xb6, 0xf6, 0x72, 0x01, 0xb9, 0xe4, + 0x6e, 0xad, 0xe8, 0x5d, 0x0a, 0xd6, 0xb8, 0x06 }, + { 0xdd, 0xb8, 0x78, 0x24, 0x85, 0xe9, 0x00, 0xbc, + 0x60, 0xbc, 0xf4, 0xc3, 0x3a, 0x6f, 0xd5, 0x85, + 0x68, 0x0c, 0xc6, 0x83, 0xd5, 0x16, 0xef, 0xa0, + 0x3e, 0xb9, 0x98, 0x5f, 0xad, 0x87, 0x15, 0xfb }, + { 0x4c, 0x4d, 0x6e, 0x71, 0xae, 0xa0, 0x57, 0x86, + 0x41, 0x31, 0x48, 0xfc, 0x7a, 0x78, 0x6b, 0x0e, + 0xca, 0xf5, 0x82, 0xcf, 0xf1, 0x20, 0x9f, 0x5a, + 0x80, 0x9f, 0xba, 0x85, 0x04, 0xce, 0x66, 0x2c }, + { 0xfb, 0x4c, 0x5e, 0x86, 0xd7, 0xb2, 0x22, 0x9b, + 0x99, 0xb8, 0xba, 0x6d, 0x94, 0xc2, 0x47, 0xef, + 0x96, 0x4a, 0xa3, 0xa2, 0xba, 0xe8, 0xed, 0xc7, + 0x75, 0x69, 0xf2, 0x8d, 0xbb, 0xff, 0x2d, 0x4e }, + { 0xe9, 0x4f, 0x52, 0x6d, 0xe9, 0x01, 0x96, 0x33, + 0xec, 0xd5, 0x4a, 0xc6, 0x12, 0x0f, 0x23, 0x95, + 0x8d, 0x77, 0x18, 0xf1, 0xe7, 0x71, 0x7b, 0xf3, + 0x29, 0x21, 0x1a, 0x4f, 0xae, 0xed, 0x4e, 0x6d }, + { 0xcb, 0xd6, 0x66, 0x0a, 0x10, 0xdb, 0x3f, 0x23, + 0xf7, 0xa0, 0x3d, 0x4b, 0x9d, 0x40, 0x44, 0xc7, + 0x93, 0x2b, 0x28, 0x01, 0xac, 0x89, 0xd6, 0x0b, + 0xc9, 0xeb, 0x92, 0xd6, 0x5a, 0x46, 0xc2, 0xa0 }, + { 0x88, 0x18, 0xbb, 0xd3, 0xdb, 0x4d, 0xc1, 0x23, + 0xb2, 0x5c, 0xbb, 0xa5, 0xf5, 0x4c, 0x2b, 0xc4, + 0xb3, 0xfc, 0xf9, 0xbf, 0x7d, 0x7a, 0x77, 0x09, + 0xf4, 0xae, 0x58, 0x8b, 0x26, 0x7c, 0x4e, 0xce }, + { 0xc6, 0x53, 0x82, 0x51, 0x3f, 0x07, 0x46, 0x0d, + 0xa3, 0x98, 0x33, 0xcb, 0x66, 0x6c, 0x5e, 0xd8, + 0x2e, 0x61, 0xb9, 0xe9, 0x98, 0xf4, 0xb0, 0xc4, + 0x28, 0x7c, 0xee, 0x56, 0xc3, 0xcc, 0x9b, 0xcd }, + { 0x89, 0x75, 0xb0, 0x57, 0x7f, 0xd3, 0x55, 0x66, + 0xd7, 0x50, 0xb3, 0x62, 0xb0, 0x89, 0x7a, 0x26, + 0xc3, 0x99, 0x13, 0x6d, 0xf0, 0x7b, 0xab, 0xab, + 0xbd, 0xe6, 0x20, 0x3f, 0xf2, 0x95, 0x4e, 0xd4 }, + { 0x21, 0xfe, 0x0c, 0xeb, 0x00, 0x52, 0xbe, 0x7f, + 0xb0, 0xf0, 0x04, 0x18, 0x7c, 0xac, 0xd7, 0xde, + 0x67, 0xfa, 0x6e, 0xb0, 0x93, 0x8d, 0x92, 0x76, + 0x77, 0xf2, 0x39, 0x8c, 0x13, 0x23, 0x17, 0xa8 }, + { 0x2e, 0xf7, 0x3f, 0x3c, 0x26, 0xf1, 0x2d, 0x93, + 0x88, 0x9f, 0x3c, 0x78, 0xb6, 0xa6, 0x6c, 0x1d, + 0x52, 0xb6, 0x49, 0xdc, 0x9e, 0x85, 0x6e, 0x2c, + 0x17, 0x2e, 0xa7, 0xc5, 0x8a, 0xc2, 0xb5, 0xe3 }, + { 0x38, 0x8a, 0x3c, 0xd5, 0x6d, 0x73, 0x86, 0x7a, + 0xbb, 0x5f, 0x84, 0x01, 0x49, 0x2b, 0x6e, 0x26, + 0x81, 0xeb, 0x69, 0x85, 0x1e, 0x76, 0x7f, 0xd8, + 0x42, 0x10, 0xa5, 0x60, 0x76, 0xfb, 0x3d, 0xd3 }, + { 0xaf, 0x53, 0x3e, 0x02, 0x2f, 0xc9, 0x43, 0x9e, + 0x4e, 0x3c, 0xb8, 0x38, 0xec, 0xd1, 0x86, 0x92, + 0x23, 0x2a, 0xdf, 0x6f, 0xe9, 0x83, 0x95, 0x26, + 0xd3, 0xc3, 0xdd, 0x1b, 0x71, 0x91, 0x0b, 0x1a }, + { 0x75, 0x1c, 0x09, 0xd4, 0x1a, 0x93, 0x43, 0x88, + 0x2a, 0x81, 0xcd, 0x13, 0xee, 0x40, 0x81, 0x8d, + 0x12, 0xeb, 0x44, 0xc6, 0xc7, 0xf4, 0x0d, 0xf1, + 0x6e, 0x4a, 0xea, 0x8f, 0xab, 0x91, 0x97, 0x2a }, + { 
0x5b, 0x73, 0xdd, 0xb6, 0x8d, 0x9d, 0x2b, 0x0a, + 0xa2, 0x65, 0xa0, 0x79, 0x88, 0xd6, 0xb8, 0x8a, + 0xe9, 0xaa, 0xc5, 0x82, 0xaf, 0x83, 0x03, 0x2f, + 0x8a, 0x9b, 0x21, 0xa2, 0xe1, 0xb7, 0xbf, 0x18 }, + { 0x3d, 0xa2, 0x91, 0x26, 0xc7, 0xc5, 0xd7, 0xf4, + 0x3e, 0x64, 0x24, 0x2a, 0x79, 0xfe, 0xaa, 0x4e, + 0xf3, 0x45, 0x9c, 0xde, 0xcc, 0xc8, 0x98, 0xed, + 0x59, 0xa9, 0x7f, 0x6e, 0xc9, 0x3b, 0x9d, 0xab }, + { 0x56, 0x6d, 0xc9, 0x20, 0x29, 0x3d, 0xa5, 0xcb, + 0x4f, 0xe0, 0xaa, 0x8a, 0xbd, 0xa8, 0xbb, 0xf5, + 0x6f, 0x55, 0x23, 0x13, 0xbf, 0xf1, 0x90, 0x46, + 0x64, 0x1e, 0x36, 0x15, 0xc1, 0xe3, 0xed, 0x3f }, + { 0x41, 0x15, 0xbe, 0xa0, 0x2f, 0x73, 0xf9, 0x7f, + 0x62, 0x9e, 0x5c, 0x55, 0x90, 0x72, 0x0c, 0x01, + 0xe7, 0xe4, 0x49, 0xae, 0x2a, 0x66, 0x97, 0xd4, + 0xd2, 0x78, 0x33, 0x21, 0x30, 0x36, 0x92, 0xf9 }, + { 0x4c, 0xe0, 0x8f, 0x47, 0x62, 0x46, 0x8a, 0x76, + 0x70, 0x01, 0x21, 0x64, 0x87, 0x8d, 0x68, 0x34, + 0x0c, 0x52, 0xa3, 0x5e, 0x66, 0xc1, 0x88, 0x4d, + 0x5c, 0x86, 0x48, 0x89, 0xab, 0xc9, 0x66, 0x77 }, + { 0x81, 0xea, 0x0b, 0x78, 0x04, 0x12, 0x4e, 0x0c, + 0x22, 0xea, 0x5f, 0xc7, 0x11, 0x04, 0xa2, 0xaf, + 0xcb, 0x52, 0xa1, 0xfa, 0x81, 0x6f, 0x3e, 0xcb, + 0x7d, 0xcb, 0x5d, 0x9d, 0xea, 0x17, 0x86, 0xd0 }, + { 0xfe, 0x36, 0x27, 0x33, 0xb0, 0x5f, 0x6b, 0xed, + 0xaf, 0x93, 0x79, 0xd7, 0xf7, 0x93, 0x6e, 0xde, + 0x20, 0x9b, 0x1f, 0x83, 0x23, 0xc3, 0x92, 0x25, + 0x49, 0xd9, 0xe7, 0x36, 0x81, 0xb5, 0xdb, 0x7b }, + { 0xef, 0xf3, 0x7d, 0x30, 0xdf, 0xd2, 0x03, 0x59, + 0xbe, 0x4e, 0x73, 0xfd, 0xf4, 0x0d, 0x27, 0x73, + 0x4b, 0x3d, 0xf9, 0x0a, 0x97, 0xa5, 0x5e, 0xd7, + 0x45, 0x29, 0x72, 0x94, 0xca, 0x85, 0xd0, 0x9f }, + { 0x17, 0x2f, 0xfc, 0x67, 0x15, 0x3d, 0x12, 0xe0, + 0xca, 0x76, 0xa8, 0xb6, 0xcd, 0x5d, 0x47, 0x31, + 0x88, 0x5b, 0x39, 0xce, 0x0c, 0xac, 0x93, 0xa8, + 0x97, 0x2a, 0x18, 0x00, 0x6c, 0x8b, 0x8b, 0xaf }, + { 0xc4, 0x79, 0x57, 0xf1, 0xcc, 0x88, 0xe8, 0x3e, + 0xf9, 0x44, 0x58, 0x39, 0x70, 0x9a, 0x48, 0x0a, + 0x03, 0x6b, 0xed, 0x5f, 0x88, 0xac, 0x0f, 0xcc, + 0x8e, 0x1e, 0x70, 0x3f, 0xfa, 0xac, 0x13, 0x2c }, + { 0x30, 0xf3, 0x54, 0x83, 0x70, 0xcf, 0xdc, 0xed, + 0xa5, 0xc3, 0x7b, 0x56, 0x9b, 0x61, 0x75, 0xe7, + 0x99, 0xee, 0xf1, 0xa6, 0x2a, 0xaa, 0x94, 0x32, + 0x45, 0xae, 0x76, 0x69, 0xc2, 0x27, 0xa7, 0xb5 }, + { 0xc9, 0x5d, 0xcb, 0x3c, 0xf1, 0xf2, 0x7d, 0x0e, + 0xef, 0x2f, 0x25, 0xd2, 0x41, 0x38, 0x70, 0x90, + 0x4a, 0x87, 0x7c, 0x4a, 0x56, 0xc2, 0xde, 0x1e, + 0x83, 0xe2, 0xbc, 0x2a, 0xe2, 0xe4, 0x68, 0x21 }, + { 0xd5, 0xd0, 0xb5, 0xd7, 0x05, 0x43, 0x4c, 0xd4, + 0x6b, 0x18, 0x57, 0x49, 0xf6, 0x6b, 0xfb, 0x58, + 0x36, 0xdc, 0xdf, 0x6e, 0xe5, 0x49, 0xa2, 0xb7, + 0xa4, 0xae, 0xe7, 0xf5, 0x80, 0x07, 0xca, 0xaf }, + { 0xbb, 0xc1, 0x24, 0xa7, 0x12, 0xf1, 0x5d, 0x07, + 0xc3, 0x00, 0xe0, 0x5b, 0x66, 0x83, 0x89, 0xa4, + 0x39, 0xc9, 0x17, 0x77, 0xf7, 0x21, 0xf8, 0x32, + 0x0c, 0x1c, 0x90, 0x78, 0x06, 0x6d, 0x2c, 0x7e }, + { 0xa4, 0x51, 0xb4, 0x8c, 0x35, 0xa6, 0xc7, 0x85, + 0x4c, 0xfa, 0xae, 0x60, 0x26, 0x2e, 0x76, 0x99, + 0x08, 0x16, 0x38, 0x2a, 0xc0, 0x66, 0x7e, 0x5a, + 0x5c, 0x9e, 0x1b, 0x46, 0xc4, 0x34, 0x2d, 0xdf }, + { 0xb0, 0xd1, 0x50, 0xfb, 0x55, 0xe7, 0x78, 0xd0, + 0x11, 0x47, 0xf0, 0xb5, 0xd8, 0x9d, 0x99, 0xec, + 0xb2, 0x0f, 0xf0, 0x7e, 0x5e, 0x67, 0x60, 0xd6, + 0xb6, 0x45, 0xeb, 0x5b, 0x65, 0x4c, 0x62, 0x2b }, + { 0x34, 0xf7, 0x37, 0xc0, 0xab, 0x21, 0x99, 0x51, + 0xee, 0xe8, 0x9a, 0x9f, 0x8d, 0xac, 0x29, 0x9c, + 0x9d, 0x4c, 0x38, 0xf3, 0x3f, 0xa4, 0x94, 0xc5, + 0xc6, 0xee, 0xfc, 0x92, 0xb6, 0xdb, 0x08, 0xbc }, + { 0x1a, 0x62, 0xcc, 0x3a, 0x00, 0x80, 0x0d, 0xcb, + 0xd9, 0x98, 0x91, 0x08, 0x0c, 0x1e, 
0x09, 0x84, + 0x58, 0x19, 0x3a, 0x8c, 0xc9, 0xf9, 0x70, 0xea, + 0x99, 0xfb, 0xef, 0xf0, 0x03, 0x18, 0xc2, 0x89 }, + { 0xcf, 0xce, 0x55, 0xeb, 0xaf, 0xc8, 0x40, 0xd7, + 0xae, 0x48, 0x28, 0x1c, 0x7f, 0xd5, 0x7e, 0xc8, + 0xb4, 0x82, 0xd4, 0xb7, 0x04, 0x43, 0x74, 0x95, + 0x49, 0x5a, 0xc4, 0x14, 0xcf, 0x4a, 0x37, 0x4b }, + { 0x67, 0x46, 0xfa, 0xcf, 0x71, 0x14, 0x6d, 0x99, + 0x9d, 0xab, 0xd0, 0x5d, 0x09, 0x3a, 0xe5, 0x86, + 0x64, 0x8d, 0x1e, 0xe2, 0x8e, 0x72, 0x61, 0x7b, + 0x99, 0xd0, 0xf0, 0x08, 0x6e, 0x1e, 0x45, 0xbf }, + { 0x57, 0x1c, 0xed, 0x28, 0x3b, 0x3f, 0x23, 0xb4, + 0xe7, 0x50, 0xbf, 0x12, 0xa2, 0xca, 0xf1, 0x78, + 0x18, 0x47, 0xbd, 0x89, 0x0e, 0x43, 0x60, 0x3c, + 0xdc, 0x59, 0x76, 0x10, 0x2b, 0x7b, 0xb1, 0x1b }, + { 0xcf, 0xcb, 0x76, 0x5b, 0x04, 0x8e, 0x35, 0x02, + 0x2c, 0x5d, 0x08, 0x9d, 0x26, 0xe8, 0x5a, 0x36, + 0xb0, 0x05, 0xa2, 0xb8, 0x04, 0x93, 0xd0, 0x3a, + 0x14, 0x4e, 0x09, 0xf4, 0x09, 0xb6, 0xaf, 0xd1 }, + { 0x40, 0x50, 0xc7, 0xa2, 0x77, 0x05, 0xbb, 0x27, + 0xf4, 0x20, 0x89, 0xb2, 0x99, 0xf3, 0xcb, 0xe5, + 0x05, 0x4e, 0xad, 0x68, 0x72, 0x7e, 0x8e, 0xf9, + 0x31, 0x8c, 0xe6, 0xf2, 0x5c, 0xd6, 0xf3, 0x1d }, + { 0x18, 0x40, 0x70, 0xbd, 0x5d, 0x26, 0x5f, 0xbd, + 0xc1, 0x42, 0xcd, 0x1c, 0x5c, 0xd0, 0xd7, 0xe4, + 0x14, 0xe7, 0x03, 0x69, 0xa2, 0x66, 0xd6, 0x27, + 0xc8, 0xfb, 0xa8, 0x4f, 0xa5, 0xe8, 0x4c, 0x34 }, + { 0x9e, 0xdd, 0xa9, 0xa4, 0x44, 0x39, 0x02, 0xa9, + 0x58, 0x8c, 0x0d, 0x0c, 0xcc, 0x62, 0xb9, 0x30, + 0x21, 0x84, 0x79, 0xa6, 0x84, 0x1e, 0x6f, 0xe7, + 0xd4, 0x30, 0x03, 0xf0, 0x4b, 0x1f, 0xd6, 0x43 }, + { 0xe4, 0x12, 0xfe, 0xef, 0x79, 0x08, 0x32, 0x4a, + 0x6d, 0xa1, 0x84, 0x16, 0x29, 0xf3, 0x5d, 0x3d, + 0x35, 0x86, 0x42, 0x01, 0x93, 0x10, 0xec, 0x57, + 0xc6, 0x14, 0x83, 0x6b, 0x63, 0xd3, 0x07, 0x63 }, + { 0x1a, 0x2b, 0x8e, 0xdf, 0xf3, 0xf9, 0xac, 0xc1, + 0x55, 0x4f, 0xcb, 0xae, 0x3c, 0xf1, 0xd6, 0x29, + 0x8c, 0x64, 0x62, 0xe2, 0x2e, 0x5e, 0xb0, 0x25, + 0x96, 0x84, 0xf8, 0x35, 0x01, 0x2b, 0xd1, 0x3f }, + { 0x28, 0x8c, 0x4a, 0xd9, 0xb9, 0x40, 0x97, 0x62, + 0xea, 0x07, 0xc2, 0x4a, 0x41, 0xf0, 0x4f, 0x69, + 0xa7, 0xd7, 0x4b, 0xee, 0x2d, 0x95, 0x43, 0x53, + 0x74, 0xbd, 0xe9, 0x46, 0xd7, 0x24, 0x1c, 0x7b }, + { 0x80, 0x56, 0x91, 0xbb, 0x28, 0x67, 0x48, 0xcf, + 0xb5, 0x91, 0xd3, 0xae, 0xbe, 0x7e, 0x6f, 0x4e, + 0x4d, 0xc6, 0xe2, 0x80, 0x8c, 0x65, 0x14, 0x3c, + 0xc0, 0x04, 0xe4, 0xeb, 0x6f, 0xd0, 0x9d, 0x43 }, + { 0xd4, 0xac, 0x8d, 0x3a, 0x0a, 0xfc, 0x6c, 0xfa, + 0x7b, 0x46, 0x0a, 0xe3, 0x00, 0x1b, 0xae, 0xb3, + 0x6d, 0xad, 0xb3, 0x7d, 0xa0, 0x7d, 0x2e, 0x8a, + 0xc9, 0x18, 0x22, 0xdf, 0x34, 0x8a, 0xed, 0x3d }, + { 0xc3, 0x76, 0x61, 0x70, 0x14, 0xd2, 0x01, 0x58, + 0xbc, 0xed, 0x3d, 0x3b, 0xa5, 0x52, 0xb6, 0xec, + 0xcf, 0x84, 0xe6, 0x2a, 0xa3, 0xeb, 0x65, 0x0e, + 0x90, 0x02, 0x9c, 0x84, 0xd1, 0x3e, 0xea, 0x69 }, + { 0xc4, 0x1f, 0x09, 0xf4, 0x3c, 0xec, 0xae, 0x72, + 0x93, 0xd6, 0x00, 0x7c, 0xa0, 0xa3, 0x57, 0x08, + 0x7d, 0x5a, 0xe5, 0x9b, 0xe5, 0x00, 0xc1, 0xcd, + 0x5b, 0x28, 0x9e, 0xe8, 0x10, 0xc7, 0xb0, 0x82 }, + { 0x03, 0xd1, 0xce, 0xd1, 0xfb, 0xa5, 0xc3, 0x91, + 0x55, 0xc4, 0x4b, 0x77, 0x65, 0xcb, 0x76, 0x0c, + 0x78, 0x70, 0x8d, 0xcf, 0xc8, 0x0b, 0x0b, 0xd8, + 0xad, 0xe3, 0xa5, 0x6d, 0xa8, 0x83, 0x0b, 0x29 }, + { 0x09, 0xbd, 0xe6, 0xf1, 0x52, 0x21, 0x8d, 0xc9, + 0x2c, 0x41, 0xd7, 0xf4, 0x53, 0x87, 0xe6, 0x3e, + 0x58, 0x69, 0xd8, 0x07, 0xec, 0x70, 0xb8, 0x21, + 0x40, 0x5d, 0xbd, 0x88, 0x4b, 0x7f, 0xcf, 0x4b }, + { 0x71, 0xc9, 0x03, 0x6e, 0x18, 0x17, 0x9b, 0x90, + 0xb3, 0x7d, 0x39, 0xe9, 0xf0, 0x5e, 0xb8, 0x9c, + 0xc5, 0xfc, 0x34, 0x1f, 0xd7, 0xc4, 0x77, 0xd0, + 0xd7, 0x49, 0x32, 
0x85, 0xfa, 0xca, 0x08, 0xa4 }, + { 0x59, 0x16, 0x83, 0x3e, 0xbb, 0x05, 0xcd, 0x91, + 0x9c, 0xa7, 0xfe, 0x83, 0xb6, 0x92, 0xd3, 0x20, + 0x5b, 0xef, 0x72, 0x39, 0x2b, 0x2c, 0xf6, 0xbb, + 0x0a, 0x6d, 0x43, 0xf9, 0x94, 0xf9, 0x5f, 0x11 }, + { 0xf6, 0x3a, 0xab, 0x3e, 0xc6, 0x41, 0xb3, 0xb0, + 0x24, 0x96, 0x4c, 0x2b, 0x43, 0x7c, 0x04, 0xf6, + 0x04, 0x3c, 0x4c, 0x7e, 0x02, 0x79, 0x23, 0x99, + 0x95, 0x40, 0x19, 0x58, 0xf8, 0x6b, 0xbe, 0x54 }, + { 0xf1, 0x72, 0xb1, 0x80, 0xbf, 0xb0, 0x97, 0x40, + 0x49, 0x31, 0x20, 0xb6, 0x32, 0x6c, 0xbd, 0xc5, + 0x61, 0xe4, 0x77, 0xde, 0xf9, 0xbb, 0xcf, 0xd2, + 0x8c, 0xc8, 0xc1, 0xc5, 0xe3, 0x37, 0x9a, 0x31 }, + { 0xcb, 0x9b, 0x89, 0xcc, 0x18, 0x38, 0x1d, 0xd9, + 0x14, 0x1a, 0xde, 0x58, 0x86, 0x54, 0xd4, 0xe6, + 0xa2, 0x31, 0xd5, 0xbf, 0x49, 0xd4, 0xd5, 0x9a, + 0xc2, 0x7d, 0x86, 0x9c, 0xbe, 0x10, 0x0c, 0xf3 }, + { 0x7b, 0xd8, 0x81, 0x50, 0x46, 0xfd, 0xd8, 0x10, + 0xa9, 0x23, 0xe1, 0x98, 0x4a, 0xae, 0xbd, 0xcd, + 0xf8, 0x4d, 0x87, 0xc8, 0x99, 0x2d, 0x68, 0xb5, + 0xee, 0xb4, 0x60, 0xf9, 0x3e, 0xb3, 0xc8, 0xd7 }, + { 0x60, 0x7b, 0xe6, 0x68, 0x62, 0xfd, 0x08, 0xee, + 0x5b, 0x19, 0xfa, 0xca, 0xc0, 0x9d, 0xfd, 0xbc, + 0xd4, 0x0c, 0x31, 0x21, 0x01, 0xd6, 0x6e, 0x6e, + 0xbd, 0x2b, 0x84, 0x1f, 0x1b, 0x9a, 0x93, 0x25 }, + { 0x9f, 0xe0, 0x3b, 0xbe, 0x69, 0xab, 0x18, 0x34, + 0xf5, 0x21, 0x9b, 0x0d, 0xa8, 0x8a, 0x08, 0xb3, + 0x0a, 0x66, 0xc5, 0x91, 0x3f, 0x01, 0x51, 0x96, + 0x3c, 0x36, 0x05, 0x60, 0xdb, 0x03, 0x87, 0xb3 }, + { 0x90, 0xa8, 0x35, 0x85, 0x71, 0x7b, 0x75, 0xf0, + 0xe9, 0xb7, 0x25, 0xe0, 0x55, 0xee, 0xee, 0xb9, + 0xe7, 0xa0, 0x28, 0xea, 0x7e, 0x6c, 0xbc, 0x07, + 0xb2, 0x09, 0x17, 0xec, 0x03, 0x63, 0xe3, 0x8c }, + { 0x33, 0x6e, 0xa0, 0x53, 0x0f, 0x4a, 0x74, 0x69, + 0x12, 0x6e, 0x02, 0x18, 0x58, 0x7e, 0xbb, 0xde, + 0x33, 0x58, 0xa0, 0xb3, 0x1c, 0x29, 0xd2, 0x00, + 0xf7, 0xdc, 0x7e, 0xb1, 0x5c, 0x6a, 0xad, 0xd8 }, + { 0xa7, 0x9e, 0x76, 0xdc, 0x0a, 0xbc, 0xa4, 0x39, + 0x6f, 0x07, 0x47, 0xcd, 0x7b, 0x74, 0x8d, 0xf9, + 0x13, 0x00, 0x76, 0x26, 0xb1, 0xd6, 0x59, 0xda, + 0x0c, 0x1f, 0x78, 0xb9, 0x30, 0x3d, 0x01, 0xa3 }, + { 0x44, 0xe7, 0x8a, 0x77, 0x37, 0x56, 0xe0, 0x95, + 0x15, 0x19, 0x50, 0x4d, 0x70, 0x38, 0xd2, 0x8d, + 0x02, 0x13, 0xa3, 0x7e, 0x0c, 0xe3, 0x75, 0x37, + 0x17, 0x57, 0xbc, 0x99, 0x63, 0x11, 0xe3, 0xb8 }, + { 0x77, 0xac, 0x01, 0x2a, 0x3f, 0x75, 0x4d, 0xcf, + 0xea, 0xb5, 0xeb, 0x99, 0x6b, 0xe9, 0xcd, 0x2d, + 0x1f, 0x96, 0x11, 0x1b, 0x6e, 0x49, 0xf3, 0x99, + 0x4d, 0xf1, 0x81, 0xf2, 0x85, 0x69, 0xd8, 0x25 }, + { 0xce, 0x5a, 0x10, 0xdb, 0x6f, 0xcc, 0xda, 0xf1, + 0x40, 0xaa, 0xa4, 0xde, 0xd6, 0x25, 0x0a, 0x9c, + 0x06, 0xe9, 0x22, 0x2b, 0xc9, 0xf9, 0xf3, 0x65, + 0x8a, 0x4a, 0xff, 0x93, 0x5f, 0x2b, 0x9f, 0x3a }, + { 0xec, 0xc2, 0x03, 0xa7, 0xfe, 0x2b, 0xe4, 0xab, + 0xd5, 0x5b, 0xb5, 0x3e, 0x6e, 0x67, 0x35, 0x72, + 0xe0, 0x07, 0x8d, 0xa8, 0xcd, 0x37, 0x5e, 0xf4, + 0x30, 0xcc, 0x97, 0xf9, 0xf8, 0x00, 0x83, 0xaf }, + { 0x14, 0xa5, 0x18, 0x6d, 0xe9, 0xd7, 0xa1, 0x8b, + 0x04, 0x12, 0xb8, 0x56, 0x3e, 0x51, 0xcc, 0x54, + 0x33, 0x84, 0x0b, 0x4a, 0x12, 0x9a, 0x8f, 0xf9, + 0x63, 0xb3, 0x3a, 0x3c, 0x4a, 0xfe, 0x8e, 0xbb }, + { 0x13, 0xf8, 0xef, 0x95, 0xcb, 0x86, 0xe6, 0xa6, + 0x38, 0x93, 0x1c, 0x8e, 0x10, 0x76, 0x73, 0xeb, + 0x76, 0xba, 0x10, 0xd7, 0xc2, 0xcd, 0x70, 0xb9, + 0xd9, 0x92, 0x0b, 0xbe, 0xed, 0x92, 0x94, 0x09 }, + { 0x0b, 0x33, 0x8f, 0x4e, 0xe1, 0x2f, 0x2d, 0xfc, + 0xb7, 0x87, 0x13, 0x37, 0x79, 0x41, 0xe0, 0xb0, + 0x63, 0x21, 0x52, 0x58, 0x1d, 0x13, 0x32, 0x51, + 0x6e, 0x4a, 0x2c, 0xab, 0x19, 0x42, 0xcc, 0xa4 }, + { 0xea, 0xab, 0x0e, 0xc3, 0x7b, 0x3b, 0x8a, 0xb7, + 
0x96, 0xe9, 0xf5, 0x72, 0x38, 0xde, 0x14, 0xa2, + 0x64, 0xa0, 0x76, 0xf3, 0x88, 0x7d, 0x86, 0xe2, + 0x9b, 0xb5, 0x90, 0x6d, 0xb5, 0xa0, 0x0e, 0x02 }, + { 0x23, 0xcb, 0x68, 0xb8, 0xc0, 0xe6, 0xdc, 0x26, + 0xdc, 0x27, 0x76, 0x6d, 0xdc, 0x0a, 0x13, 0xa9, + 0x94, 0x38, 0xfd, 0x55, 0x61, 0x7a, 0xa4, 0x09, + 0x5d, 0x8f, 0x96, 0x97, 0x20, 0xc8, 0x72, 0xdf }, + { 0x09, 0x1d, 0x8e, 0xe3, 0x0d, 0x6f, 0x29, 0x68, + 0xd4, 0x6b, 0x68, 0x7d, 0xd6, 0x52, 0x92, 0x66, + 0x57, 0x42, 0xde, 0x0b, 0xb8, 0x3d, 0xcc, 0x00, + 0x04, 0xc7, 0x2c, 0xe1, 0x00, 0x07, 0xa5, 0x49 }, + { 0x7f, 0x50, 0x7a, 0xbc, 0x6d, 0x19, 0xba, 0x00, + 0xc0, 0x65, 0xa8, 0x76, 0xec, 0x56, 0x57, 0x86, + 0x88, 0x82, 0xd1, 0x8a, 0x22, 0x1b, 0xc4, 0x6c, + 0x7a, 0x69, 0x12, 0x54, 0x1f, 0x5b, 0xc7, 0xba }, + { 0xa0, 0x60, 0x7c, 0x24, 0xe1, 0x4e, 0x8c, 0x22, + 0x3d, 0xb0, 0xd7, 0x0b, 0x4d, 0x30, 0xee, 0x88, + 0x01, 0x4d, 0x60, 0x3f, 0x43, 0x7e, 0x9e, 0x02, + 0xaa, 0x7d, 0xaf, 0xa3, 0xcd, 0xfb, 0xad, 0x94 }, + { 0xdd, 0xbf, 0xea, 0x75, 0xcc, 0x46, 0x78, 0x82, + 0xeb, 0x34, 0x83, 0xce, 0x5e, 0x2e, 0x75, 0x6a, + 0x4f, 0x47, 0x01, 0xb7, 0x6b, 0x44, 0x55, 0x19, + 0xe8, 0x9f, 0x22, 0xd6, 0x0f, 0xa8, 0x6e, 0x06 }, + { 0x0c, 0x31, 0x1f, 0x38, 0xc3, 0x5a, 0x4f, 0xb9, + 0x0d, 0x65, 0x1c, 0x28, 0x9d, 0x48, 0x68, 0x56, + 0xcd, 0x14, 0x13, 0xdf, 0x9b, 0x06, 0x77, 0xf5, + 0x3e, 0xce, 0x2c, 0xd9, 0xe4, 0x77, 0xc6, 0x0a }, + { 0x46, 0xa7, 0x3a, 0x8d, 0xd3, 0xe7, 0x0f, 0x59, + 0xd3, 0x94, 0x2c, 0x01, 0xdf, 0x59, 0x9d, 0xef, + 0x78, 0x3c, 0x9d, 0xa8, 0x2f, 0xd8, 0x32, 0x22, + 0xcd, 0x66, 0x2b, 0x53, 0xdc, 0xe7, 0xdb, 0xdf }, + { 0xad, 0x03, 0x8f, 0xf9, 0xb1, 0x4d, 0xe8, 0x4a, + 0x80, 0x1e, 0x4e, 0x62, 0x1c, 0xe5, 0xdf, 0x02, + 0x9d, 0xd9, 0x35, 0x20, 0xd0, 0xc2, 0xfa, 0x38, + 0xbf, 0xf1, 0x76, 0xa8, 0xb1, 0xd1, 0x69, 0x8c }, + { 0xab, 0x70, 0xc5, 0xdf, 0xbd, 0x1e, 0xa8, 0x17, + 0xfe, 0xd0, 0xcd, 0x06, 0x72, 0x93, 0xab, 0xf3, + 0x19, 0xe5, 0xd7, 0x90, 0x1c, 0x21, 0x41, 0xd5, + 0xd9, 0x9b, 0x23, 0xf0, 0x3a, 0x38, 0xe7, 0x48 }, + { 0x1f, 0xff, 0xda, 0x67, 0x93, 0x2b, 0x73, 0xc8, + 0xec, 0xaf, 0x00, 0x9a, 0x34, 0x91, 0xa0, 0x26, + 0x95, 0x3b, 0xab, 0xfe, 0x1f, 0x66, 0x3b, 0x06, + 0x97, 0xc3, 0xc4, 0xae, 0x8b, 0x2e, 0x7d, 0xcb }, + { 0xb0, 0xd2, 0xcc, 0x19, 0x47, 0x2d, 0xd5, 0x7f, + 0x2b, 0x17, 0xef, 0xc0, 0x3c, 0x8d, 0x58, 0xc2, + 0x28, 0x3d, 0xbb, 0x19, 0xda, 0x57, 0x2f, 0x77, + 0x55, 0x85, 0x5a, 0xa9, 0x79, 0x43, 0x17, 0xa0 }, + { 0xa0, 0xd1, 0x9a, 0x6e, 0xe3, 0x39, 0x79, 0xc3, + 0x25, 0x51, 0x0e, 0x27, 0x66, 0x22, 0xdf, 0x41, + 0xf7, 0x15, 0x83, 0xd0, 0x75, 0x01, 0xb8, 0x70, + 0x71, 0x12, 0x9a, 0x0a, 0xd9, 0x47, 0x32, 0xa5 }, + { 0x72, 0x46, 0x42, 0xa7, 0x03, 0x2d, 0x10, 0x62, + 0xb8, 0x9e, 0x52, 0xbe, 0xa3, 0x4b, 0x75, 0xdf, + 0x7d, 0x8f, 0xe7, 0x72, 0xd9, 0xfe, 0x3c, 0x93, + 0xdd, 0xf3, 0xc4, 0x54, 0x5a, 0xb5, 0xa9, 0x9b }, + { 0xad, 0xe5, 0xea, 0xa7, 0xe6, 0x1f, 0x67, 0x2d, + 0x58, 0x7e, 0xa0, 0x3d, 0xae, 0x7d, 0x7b, 0x55, + 0x22, 0x9c, 0x01, 0xd0, 0x6b, 0xc0, 0xa5, 0x70, + 0x14, 0x36, 0xcb, 0xd1, 0x83, 0x66, 0xa6, 0x26 }, + { 0x01, 0x3b, 0x31, 0xeb, 0xd2, 0x28, 0xfc, 0xdd, + 0xa5, 0x1f, 0xab, 0xb0, 0x3b, 0xb0, 0x2d, 0x60, + 0xac, 0x20, 0xca, 0x21, 0x5a, 0xaf, 0xa8, 0x3b, + 0xdd, 0x85, 0x5e, 0x37, 0x55, 0xa3, 0x5f, 0x0b }, + { 0x33, 0x2e, 0xd4, 0x0b, 0xb1, 0x0d, 0xde, 0x3c, + 0x95, 0x4a, 0x75, 0xd7, 0xb8, 0x99, 0x9d, 0x4b, + 0x26, 0xa1, 0xc0, 0x63, 0xc1, 0xdc, 0x6e, 0x32, + 0xc1, 0xd9, 0x1b, 0xab, 0x7b, 0xbb, 0x7d, 0x16 }, + { 0xc7, 0xa1, 0x97, 0xb3, 0xa0, 0x5b, 0x56, 0x6b, + 0xcc, 0x9f, 0xac, 0xd2, 0x0e, 0x44, 0x1d, 0x6f, + 0x6c, 0x28, 0x60, 0xac, 0x96, 0x51, 
0xcd, 0x51, + 0xd6, 0xb9, 0xd2, 0xcd, 0xee, 0xea, 0x03, 0x90 }, + { 0xbd, 0x9c, 0xf6, 0x4e, 0xa8, 0x95, 0x3c, 0x03, + 0x71, 0x08, 0xe6, 0xf6, 0x54, 0x91, 0x4f, 0x39, + 0x58, 0xb6, 0x8e, 0x29, 0xc1, 0x67, 0x00, 0xdc, + 0x18, 0x4d, 0x94, 0xa2, 0x17, 0x08, 0xff, 0x60 }, + { 0x88, 0x35, 0xb0, 0xac, 0x02, 0x11, 0x51, 0xdf, + 0x71, 0x64, 0x74, 0xce, 0x27, 0xce, 0x4d, 0x3c, + 0x15, 0xf0, 0xb2, 0xda, 0xb4, 0x80, 0x03, 0xcf, + 0x3f, 0x3e, 0xfd, 0x09, 0x45, 0x10, 0x6b, 0x9a }, + { 0x3b, 0xfe, 0xfa, 0x33, 0x01, 0xaa, 0x55, 0xc0, + 0x80, 0x19, 0x0c, 0xff, 0xda, 0x8e, 0xae, 0x51, + 0xd9, 0xaf, 0x48, 0x8b, 0x4c, 0x1f, 0x24, 0xc3, + 0xd9, 0xa7, 0x52, 0x42, 0xfd, 0x8e, 0xa0, 0x1d }, + { 0x08, 0x28, 0x4d, 0x14, 0x99, 0x3c, 0xd4, 0x7d, + 0x53, 0xeb, 0xae, 0xcf, 0x0d, 0xf0, 0x47, 0x8c, + 0xc1, 0x82, 0xc8, 0x9c, 0x00, 0xe1, 0x85, 0x9c, + 0x84, 0x85, 0x16, 0x86, 0xdd, 0xf2, 0xc1, 0xb7 }, + { 0x1e, 0xd7, 0xef, 0x9f, 0x04, 0xc2, 0xac, 0x8d, + 0xb6, 0xa8, 0x64, 0xdb, 0x13, 0x10, 0x87, 0xf2, + 0x70, 0x65, 0x09, 0x8e, 0x69, 0xc3, 0xfe, 0x78, + 0x71, 0x8d, 0x9b, 0x94, 0x7f, 0x4a, 0x39, 0xd0 }, + { 0xc1, 0x61, 0xf2, 0xdc, 0xd5, 0x7e, 0x9c, 0x14, + 0x39, 0xb3, 0x1a, 0x9d, 0xd4, 0x3d, 0x8f, 0x3d, + 0x7d, 0xd8, 0xf0, 0xeb, 0x7c, 0xfa, 0xc6, 0xfb, + 0x25, 0xa0, 0xf2, 0x8e, 0x30, 0x6f, 0x06, 0x61 }, + { 0xc0, 0x19, 0x69, 0xad, 0x34, 0xc5, 0x2c, 0xaf, + 0x3d, 0xc4, 0xd8, 0x0d, 0x19, 0x73, 0x5c, 0x29, + 0x73, 0x1a, 0xc6, 0xe7, 0xa9, 0x20, 0x85, 0xab, + 0x92, 0x50, 0xc4, 0x8d, 0xea, 0x48, 0xa3, 0xfc }, + { 0x17, 0x20, 0xb3, 0x65, 0x56, 0x19, 0xd2, 0xa5, + 0x2b, 0x35, 0x21, 0xae, 0x0e, 0x49, 0xe3, 0x45, + 0xcb, 0x33, 0x89, 0xeb, 0xd6, 0x20, 0x8a, 0xca, + 0xf9, 0xf1, 0x3f, 0xda, 0xcc, 0xa8, 0xbe, 0x49 }, + { 0x75, 0x62, 0x88, 0x36, 0x1c, 0x83, 0xe2, 0x4c, + 0x61, 0x7c, 0xf9, 0x5c, 0x90, 0x5b, 0x22, 0xd0, + 0x17, 0xcd, 0xc8, 0x6f, 0x0b, 0xf1, 0xd6, 0x58, + 0xf4, 0x75, 0x6c, 0x73, 0x79, 0x87, 0x3b, 0x7f }, + { 0xe7, 0xd0, 0xed, 0xa3, 0x45, 0x26, 0x93, 0xb7, + 0x52, 0xab, 0xcd, 0xa1, 0xb5, 0x5e, 0x27, 0x6f, + 0x82, 0x69, 0x8f, 0x5f, 0x16, 0x05, 0x40, 0x3e, + 0xff, 0x83, 0x0b, 0xea, 0x00, 0x71, 0xa3, 0x94 }, + { 0x2c, 0x82, 0xec, 0xaa, 0x6b, 0x84, 0x80, 0x3e, + 0x04, 0x4a, 0xf6, 0x31, 0x18, 0xaf, 0xe5, 0x44, + 0x68, 0x7c, 0xb6, 0xe6, 0xc7, 0xdf, 0x49, 0xed, + 0x76, 0x2d, 0xfd, 0x7c, 0x86, 0x93, 0xa1, 0xbc }, + { 0x61, 0x36, 0xcb, 0xf4, 0xb4, 0x41, 0x05, 0x6f, + 0xa1, 0xe2, 0x72, 0x24, 0x98, 0x12, 0x5d, 0x6d, + 0xed, 0x45, 0xe1, 0x7b, 0x52, 0x14, 0x39, 0x59, + 0xc7, 0xf4, 0xd4, 0xe3, 0x95, 0x21, 0x8a, 0xc2 }, + { 0x72, 0x1d, 0x32, 0x45, 0xaa, 0xfe, 0xf2, 0x7f, + 0x6a, 0x62, 0x4f, 0x47, 0x95, 0x4b, 0x6c, 0x25, + 0x50, 0x79, 0x52, 0x6f, 0xfa, 0x25, 0xe9, 0xff, + 0x77, 0xe5, 0xdc, 0xff, 0x47, 0x3b, 0x15, 0x97 }, + { 0x9d, 0xd2, 0xfb, 0xd8, 0xce, 0xf1, 0x6c, 0x35, + 0x3c, 0x0a, 0xc2, 0x11, 0x91, 0xd5, 0x09, 0xeb, + 0x28, 0xdd, 0x9e, 0x3e, 0x0d, 0x8c, 0xea, 0x5d, + 0x26, 0xca, 0x83, 0x93, 0x93, 0x85, 0x1c, 0x3a }, + { 0xb2, 0x39, 0x4c, 0xea, 0xcd, 0xeb, 0xf2, 0x1b, + 0xf9, 0xdf, 0x2c, 0xed, 0x98, 0xe5, 0x8f, 0x1c, + 0x3a, 0x4b, 0xbb, 0xff, 0x66, 0x0d, 0xd9, 0x00, + 0xf6, 0x22, 0x02, 0xd6, 0x78, 0x5c, 0xc4, 0x6e }, + { 0x57, 0x08, 0x9f, 0x22, 0x27, 0x49, 0xad, 0x78, + 0x71, 0x76, 0x5f, 0x06, 0x2b, 0x11, 0x4f, 0x43, + 0xba, 0x20, 0xec, 0x56, 0x42, 0x2a, 0x8b, 0x1e, + 0x3f, 0x87, 0x19, 0x2c, 0x0e, 0xa7, 0x18, 0xc6 }, + { 0xe4, 0x9a, 0x94, 0x59, 0x96, 0x1c, 0xd3, 0x3c, + 0xdf, 0x4a, 0xae, 0x1b, 0x10, 0x78, 0xa5, 0xde, + 0xa7, 0xc0, 0x40, 0xe0, 0xfe, 0xa3, 0x40, 0xc9, + 0x3a, 0x72, 0x48, 0x72, 0xfc, 0x4a, 0xf8, 0x06 }, + { 0xed, 0xe6, 0x7f, 
0x72, 0x0e, 0xff, 0xd2, 0xca, + 0x9c, 0x88, 0x99, 0x41, 0x52, 0xd0, 0x20, 0x1d, + 0xee, 0x6b, 0x0a, 0x2d, 0x2c, 0x07, 0x7a, 0xca, + 0x6d, 0xae, 0x29, 0xf7, 0x3f, 0x8b, 0x63, 0x09 }, + { 0xe0, 0xf4, 0x34, 0xbf, 0x22, 0xe3, 0x08, 0x80, + 0x39, 0xc2, 0x1f, 0x71, 0x9f, 0xfc, 0x67, 0xf0, + 0xf2, 0xcb, 0x5e, 0x98, 0xa7, 0xa0, 0x19, 0x4c, + 0x76, 0xe9, 0x6b, 0xf4, 0xe8, 0xe1, 0x7e, 0x61 }, + { 0x27, 0x7c, 0x04, 0xe2, 0x85, 0x34, 0x84, 0xa4, + 0xeb, 0xa9, 0x10, 0xad, 0x33, 0x6d, 0x01, 0xb4, + 0x77, 0xb6, 0x7c, 0xc2, 0x00, 0xc5, 0x9f, 0x3c, + 0x8d, 0x77, 0xee, 0xf8, 0x49, 0x4f, 0x29, 0xcd }, + { 0x15, 0x6d, 0x57, 0x47, 0xd0, 0xc9, 0x9c, 0x7f, + 0x27, 0x09, 0x7d, 0x7b, 0x7e, 0x00, 0x2b, 0x2e, + 0x18, 0x5c, 0xb7, 0x2d, 0x8d, 0xd7, 0xeb, 0x42, + 0x4a, 0x03, 0x21, 0x52, 0x81, 0x61, 0x21, 0x9f }, + { 0x20, 0xdd, 0xd1, 0xed, 0x9b, 0x1c, 0xa8, 0x03, + 0x94, 0x6d, 0x64, 0xa8, 0x3a, 0xe4, 0x65, 0x9d, + 0xa6, 0x7f, 0xba, 0x7a, 0x1a, 0x3e, 0xdd, 0xb1, + 0xe1, 0x03, 0xc0, 0xf5, 0xe0, 0x3e, 0x3a, 0x2c }, + { 0xf0, 0xaf, 0x60, 0x4d, 0x3d, 0xab, 0xbf, 0x9a, + 0x0f, 0x2a, 0x7d, 0x3d, 0xda, 0x6b, 0xd3, 0x8b, + 0xba, 0x72, 0xc6, 0xd0, 0x9b, 0xe4, 0x94, 0xfc, + 0xef, 0x71, 0x3f, 0xf1, 0x01, 0x89, 0xb6, 0xe6 }, + { 0x98, 0x02, 0xbb, 0x87, 0xde, 0xf4, 0xcc, 0x10, + 0xc4, 0xa5, 0xfd, 0x49, 0xaa, 0x58, 0xdf, 0xe2, + 0xf3, 0xfd, 0xdb, 0x46, 0xb4, 0x70, 0x88, 0x14, + 0xea, 0xd8, 0x1d, 0x23, 0xba, 0x95, 0x13, 0x9b }, + { 0x4f, 0x8c, 0xe1, 0xe5, 0x1d, 0x2f, 0xe7, 0xf2, + 0x40, 0x43, 0xa9, 0x04, 0xd8, 0x98, 0xeb, 0xfc, + 0x91, 0x97, 0x54, 0x18, 0x75, 0x34, 0x13, 0xaa, + 0x09, 0x9b, 0x79, 0x5e, 0xcb, 0x35, 0xce, 0xdb }, + { 0xbd, 0xdc, 0x65, 0x14, 0xd7, 0xee, 0x6a, 0xce, + 0x0a, 0x4a, 0xc1, 0xd0, 0xe0, 0x68, 0x11, 0x22, + 0x88, 0xcb, 0xcf, 0x56, 0x04, 0x54, 0x64, 0x27, + 0x05, 0x63, 0x01, 0x77, 0xcb, 0xa6, 0x08, 0xbd }, + { 0xd6, 0x35, 0x99, 0x4f, 0x62, 0x91, 0x51, 0x7b, + 0x02, 0x81, 0xff, 0xdd, 0x49, 0x6a, 0xfa, 0x86, + 0x27, 0x12, 0xe5, 0xb3, 0xc4, 0xe5, 0x2e, 0x4c, + 0xd5, 0xfd, 0xae, 0x8c, 0x0e, 0x72, 0xfb, 0x08 }, + { 0x87, 0x8d, 0x9c, 0xa6, 0x00, 0xcf, 0x87, 0xe7, + 0x69, 0xcc, 0x30, 0x5c, 0x1b, 0x35, 0x25, 0x51, + 0x86, 0x61, 0x5a, 0x73, 0xa0, 0xda, 0x61, 0x3b, + 0x5f, 0x1c, 0x98, 0xdb, 0xf8, 0x12, 0x83, 0xea }, + { 0xa6, 0x4e, 0xbe, 0x5d, 0xc1, 0x85, 0xde, 0x9f, + 0xdd, 0xe7, 0x60, 0x7b, 0x69, 0x98, 0x70, 0x2e, + 0xb2, 0x34, 0x56, 0x18, 0x49, 0x57, 0x30, 0x7d, + 0x2f, 0xa7, 0x2e, 0x87, 0xa4, 0x77, 0x02, 0xd6 }, + { 0xce, 0x50, 0xea, 0xb7, 0xb5, 0xeb, 0x52, 0xbd, + 0xc9, 0xad, 0x8e, 0x5a, 0x48, 0x0a, 0xb7, 0x80, + 0xca, 0x93, 0x20, 0xe4, 0x43, 0x60, 0xb1, 0xfe, + 0x37, 0xe0, 0x3f, 0x2f, 0x7a, 0xd7, 0xde, 0x01 }, + { 0xee, 0xdd, 0xb7, 0xc0, 0xdb, 0x6e, 0x30, 0xab, + 0xe6, 0x6d, 0x79, 0xe3, 0x27, 0x51, 0x1e, 0x61, + 0xfc, 0xeb, 0xbc, 0x29, 0xf1, 0x59, 0xb4, 0x0a, + 0x86, 0xb0, 0x46, 0xec, 0xf0, 0x51, 0x38, 0x23 }, + { 0x78, 0x7f, 0xc9, 0x34, 0x40, 0xc1, 0xec, 0x96, + 0xb5, 0xad, 0x01, 0xc1, 0x6c, 0xf7, 0x79, 0x16, + 0xa1, 0x40, 0x5f, 0x94, 0x26, 0x35, 0x6e, 0xc9, + 0x21, 0xd8, 0xdf, 0xf3, 0xea, 0x63, 0xb7, 0xe0 }, + { 0x7f, 0x0d, 0x5e, 0xab, 0x47, 0xee, 0xfd, 0xa6, + 0x96, 0xc0, 0xbf, 0x0f, 0xbf, 0x86, 0xab, 0x21, + 0x6f, 0xce, 0x46, 0x1e, 0x93, 0x03, 0xab, 0xa6, + 0xac, 0x37, 0x41, 0x20, 0xe8, 0x90, 0xe8, 0xdf }, + { 0xb6, 0x80, 0x04, 0xb4, 0x2f, 0x14, 0xad, 0x02, + 0x9f, 0x4c, 0x2e, 0x03, 0xb1, 0xd5, 0xeb, 0x76, + 0xd5, 0x71, 0x60, 0xe2, 0x64, 0x76, 0xd2, 0x11, + 0x31, 0xbe, 0xf2, 0x0a, 0xda, 0x7d, 0x27, 0xf4 }, + { 0xb0, 0xc4, 0xeb, 0x18, 0xae, 0x25, 0x0b, 0x51, + 0xa4, 0x13, 0x82, 0xea, 0xd9, 0x2d, 0x0d, 0xc7, + 
0x45, 0x5f, 0x93, 0x79, 0xfc, 0x98, 0x84, 0x42, + 0x8e, 0x47, 0x70, 0x60, 0x8d, 0xb0, 0xfa, 0xec }, + { 0xf9, 0x2b, 0x7a, 0x87, 0x0c, 0x05, 0x9f, 0x4d, + 0x46, 0x46, 0x4c, 0x82, 0x4e, 0xc9, 0x63, 0x55, + 0x14, 0x0b, 0xdc, 0xe6, 0x81, 0x32, 0x2c, 0xc3, + 0xa9, 0x92, 0xff, 0x10, 0x3e, 0x3f, 0xea, 0x52 }, + { 0x53, 0x64, 0x31, 0x26, 0x14, 0x81, 0x33, 0x98, + 0xcc, 0x52, 0x5d, 0x4c, 0x4e, 0x14, 0x6e, 0xde, + 0xb3, 0x71, 0x26, 0x5f, 0xba, 0x19, 0x13, 0x3a, + 0x2c, 0x3d, 0x21, 0x59, 0x29, 0x8a, 0x17, 0x42 }, + { 0xf6, 0x62, 0x0e, 0x68, 0xd3, 0x7f, 0xb2, 0xaf, + 0x50, 0x00, 0xfc, 0x28, 0xe2, 0x3b, 0x83, 0x22, + 0x97, 0xec, 0xd8, 0xbc, 0xe9, 0x9e, 0x8b, 0xe4, + 0xd0, 0x4e, 0x85, 0x30, 0x9e, 0x3d, 0x33, 0x74 }, + { 0x53, 0x16, 0xa2, 0x79, 0x69, 0xd7, 0xfe, 0x04, + 0xff, 0x27, 0xb2, 0x83, 0x96, 0x1b, 0xff, 0xc3, + 0xbf, 0x5d, 0xfb, 0x32, 0xfb, 0x6a, 0x89, 0xd1, + 0x01, 0xc6, 0xc3, 0xb1, 0x93, 0x7c, 0x28, 0x71 }, + { 0x81, 0xd1, 0x66, 0x4f, 0xdf, 0x3c, 0xb3, 0x3c, + 0x24, 0xee, 0xba, 0xc0, 0xbd, 0x64, 0x24, 0x4b, + 0x77, 0xc4, 0xab, 0xea, 0x90, 0xbb, 0xe8, 0xb5, + 0xee, 0x0b, 0x2a, 0xaf, 0xcf, 0x2d, 0x6a, 0x53 }, + { 0x34, 0x57, 0x82, 0xf2, 0x95, 0xb0, 0x88, 0x03, + 0x52, 0xe9, 0x24, 0xa0, 0x46, 0x7b, 0x5f, 0xbc, + 0x3e, 0x8f, 0x3b, 0xfb, 0xc3, 0xc7, 0xe4, 0x8b, + 0x67, 0x09, 0x1f, 0xb5, 0xe8, 0x0a, 0x94, 0x42 }, + { 0x79, 0x41, 0x11, 0xea, 0x6c, 0xd6, 0x5e, 0x31, + 0x1f, 0x74, 0xee, 0x41, 0xd4, 0x76, 0xcb, 0x63, + 0x2c, 0xe1, 0xe4, 0xb0, 0x51, 0xdc, 0x1d, 0x9e, + 0x9d, 0x06, 0x1a, 0x19, 0xe1, 0xd0, 0xbb, 0x49 }, + { 0x2a, 0x85, 0xda, 0xf6, 0x13, 0x88, 0x16, 0xb9, + 0x9b, 0xf8, 0xd0, 0x8b, 0xa2, 0x11, 0x4b, 0x7a, + 0xb0, 0x79, 0x75, 0xa7, 0x84, 0x20, 0xc1, 0xa3, + 0xb0, 0x6a, 0x77, 0x7c, 0x22, 0xdd, 0x8b, 0xcb }, + { 0x89, 0xb0, 0xd5, 0xf2, 0x89, 0xec, 0x16, 0x40, + 0x1a, 0x06, 0x9a, 0x96, 0x0d, 0x0b, 0x09, 0x3e, + 0x62, 0x5d, 0xa3, 0xcf, 0x41, 0xee, 0x29, 0xb5, + 0x9b, 0x93, 0x0c, 0x58, 0x20, 0x14, 0x54, 0x55 }, + { 0xd0, 0xfd, 0xcb, 0x54, 0x39, 0x43, 0xfc, 0x27, + 0xd2, 0x08, 0x64, 0xf5, 0x21, 0x81, 0x47, 0x1b, + 0x94, 0x2c, 0xc7, 0x7c, 0xa6, 0x75, 0xbc, 0xb3, + 0x0d, 0xf3, 0x1d, 0x35, 0x8e, 0xf7, 0xb1, 0xeb }, + { 0xb1, 0x7e, 0xa8, 0xd7, 0x70, 0x63, 0xc7, 0x09, + 0xd4, 0xdc, 0x6b, 0x87, 0x94, 0x13, 0xc3, 0x43, + 0xe3, 0x79, 0x0e, 0x9e, 0x62, 0xca, 0x85, 0xb7, + 0x90, 0x0b, 0x08, 0x6f, 0x6b, 0x75, 0xc6, 0x72 }, + { 0xe7, 0x1a, 0x3e, 0x2c, 0x27, 0x4d, 0xb8, 0x42, + 0xd9, 0x21, 0x14, 0xf2, 0x17, 0xe2, 0xc0, 0xea, + 0xc8, 0xb4, 0x50, 0x93, 0xfd, 0xfd, 0x9d, 0xf4, + 0xca, 0x71, 0x62, 0x39, 0x48, 0x62, 0xd5, 0x01 }, + { 0xc0, 0x47, 0x67, 0x59, 0xab, 0x7a, 0xa3, 0x33, + 0x23, 0x4f, 0x6b, 0x44, 0xf5, 0xfd, 0x85, 0x83, + 0x90, 0xec, 0x23, 0x69, 0x4c, 0x62, 0x2c, 0xb9, + 0x86, 0xe7, 0x69, 0xc7, 0x8e, 0xdd, 0x73, 0x3e }, + { 0x9a, 0xb8, 0xea, 0xbb, 0x14, 0x16, 0x43, 0x4d, + 0x85, 0x39, 0x13, 0x41, 0xd5, 0x69, 0x93, 0xc5, + 0x54, 0x58, 0x16, 0x7d, 0x44, 0x18, 0xb1, 0x9a, + 0x0f, 0x2a, 0xd8, 0xb7, 0x9a, 0x83, 0xa7, 0x5b }, + { 0x79, 0x92, 0xd0, 0xbb, 0xb1, 0x5e, 0x23, 0x82, + 0x6f, 0x44, 0x3e, 0x00, 0x50, 0x5d, 0x68, 0xd3, + 0xed, 0x73, 0x72, 0x99, 0x5a, 0x5c, 0x3e, 0x49, + 0x86, 0x54, 0x10, 0x2f, 0xbc, 0xd0, 0x96, 0x4e }, + { 0xc0, 0x21, 0xb3, 0x00, 0x85, 0x15, 0x14, 0x35, + 0xdf, 0x33, 0xb0, 0x07, 0xcc, 0xec, 0xc6, 0x9d, + 0xf1, 0x26, 0x9f, 0x39, 0xba, 0x25, 0x09, 0x2b, + 0xed, 0x59, 0xd9, 0x32, 0xac, 0x0f, 0xdc, 0x28 }, + { 0x91, 0xa2, 0x5e, 0xc0, 0xec, 0x0d, 0x9a, 0x56, + 0x7f, 0x89, 0xc4, 0xbf, 0xe1, 0xa6, 0x5a, 0x0e, + 0x43, 0x2d, 0x07, 0x06, 0x4b, 0x41, 0x90, 0xe2, + 0x7d, 0xfb, 0x81, 0x90, 0x1f, 0xd3, 
0x13, 0x9b }, + { 0x59, 0x50, 0xd3, 0x9a, 0x23, 0xe1, 0x54, 0x5f, + 0x30, 0x12, 0x70, 0xaa, 0x1a, 0x12, 0xf2, 0xe6, + 0xc4, 0x53, 0x77, 0x6e, 0x4d, 0x63, 0x55, 0xde, + 0x42, 0x5c, 0xc1, 0x53, 0xf9, 0x81, 0x88, 0x67 }, + { 0xd7, 0x9f, 0x14, 0x72, 0x0c, 0x61, 0x0a, 0xf1, + 0x79, 0xa3, 0x76, 0x5d, 0x4b, 0x7c, 0x09, 0x68, + 0xf9, 0x77, 0x96, 0x2d, 0xbf, 0x65, 0x5b, 0x52, + 0x12, 0x72, 0xb6, 0xf1, 0xe1, 0x94, 0x48, 0x8e }, + { 0xe9, 0x53, 0x1b, 0xfc, 0x8b, 0x02, 0x99, 0x5a, + 0xea, 0xa7, 0x5b, 0xa2, 0x70, 0x31, 0xfa, 0xdb, + 0xcb, 0xf4, 0xa0, 0xda, 0xb8, 0x96, 0x1d, 0x92, + 0x96, 0xcd, 0x7e, 0x84, 0xd2, 0x5d, 0x60, 0x06 }, + { 0x34, 0xe9, 0xc2, 0x6a, 0x01, 0xd7, 0xf1, 0x61, + 0x81, 0xb4, 0x54, 0xa9, 0xd1, 0x62, 0x3c, 0x23, + 0x3c, 0xb9, 0x9d, 0x31, 0xc6, 0x94, 0x65, 0x6e, + 0x94, 0x13, 0xac, 0xa3, 0xe9, 0x18, 0x69, 0x2f }, + { 0xd9, 0xd7, 0x42, 0x2f, 0x43, 0x7b, 0xd4, 0x39, + 0xdd, 0xd4, 0xd8, 0x83, 0xda, 0xe2, 0xa0, 0x83, + 0x50, 0x17, 0x34, 0x14, 0xbe, 0x78, 0x15, 0x51, + 0x33, 0xff, 0xf1, 0x96, 0x4c, 0x3d, 0x79, 0x72 }, + { 0x4a, 0xee, 0x0c, 0x7a, 0xaf, 0x07, 0x54, 0x14, + 0xff, 0x17, 0x93, 0xea, 0xd7, 0xea, 0xca, 0x60, + 0x17, 0x75, 0xc6, 0x15, 0xdb, 0xd6, 0x0b, 0x64, + 0x0b, 0x0a, 0x9f, 0x0c, 0xe5, 0x05, 0xd4, 0x35 }, + { 0x6b, 0xfd, 0xd1, 0x54, 0x59, 0xc8, 0x3b, 0x99, + 0xf0, 0x96, 0xbf, 0xb4, 0x9e, 0xe8, 0x7b, 0x06, + 0x3d, 0x69, 0xc1, 0x97, 0x4c, 0x69, 0x28, 0xac, + 0xfc, 0xfb, 0x40, 0x99, 0xf8, 0xc4, 0xef, 0x67 }, + { 0x9f, 0xd1, 0xc4, 0x08, 0xfd, 0x75, 0xc3, 0x36, + 0x19, 0x3a, 0x2a, 0x14, 0xd9, 0x4f, 0x6a, 0xf5, + 0xad, 0xf0, 0x50, 0xb8, 0x03, 0x87, 0xb4, 0xb0, + 0x10, 0xfb, 0x29, 0xf4, 0xcc, 0x72, 0x70, 0x7c }, + { 0x13, 0xc8, 0x84, 0x80, 0xa5, 0xd0, 0x0d, 0x6c, + 0x8c, 0x7a, 0xd2, 0x11, 0x0d, 0x76, 0xa8, 0x2d, + 0x9b, 0x70, 0xf4, 0xfa, 0x66, 0x96, 0xd4, 0xe5, + 0xdd, 0x42, 0xa0, 0x66, 0xdc, 0xaf, 0x99, 0x20 }, + { 0x82, 0x0e, 0x72, 0x5e, 0xe2, 0x5f, 0xe8, 0xfd, + 0x3a, 0x8d, 0x5a, 0xbe, 0x4c, 0x46, 0xc3, 0xba, + 0x88, 0x9d, 0xe6, 0xfa, 0x91, 0x91, 0xaa, 0x22, + 0xba, 0x67, 0xd5, 0x70, 0x54, 0x21, 0x54, 0x2b }, + { 0x32, 0xd9, 0x3a, 0x0e, 0xb0, 0x2f, 0x42, 0xfb, + 0xbc, 0xaf, 0x2b, 0xad, 0x00, 0x85, 0xb2, 0x82, + 0xe4, 0x60, 0x46, 0xa4, 0xdf, 0x7a, 0xd1, 0x06, + 0x57, 0xc9, 0xd6, 0x47, 0x63, 0x75, 0xb9, 0x3e }, + { 0xad, 0xc5, 0x18, 0x79, 0x05, 0xb1, 0x66, 0x9c, + 0xd8, 0xec, 0x9c, 0x72, 0x1e, 0x19, 0x53, 0x78, + 0x6b, 0x9d, 0x89, 0xa9, 0xba, 0xe3, 0x07, 0x80, + 0xf1, 0xe1, 0xea, 0xb2, 0x4a, 0x00, 0x52, 0x3c }, + { 0xe9, 0x07, 0x56, 0xff, 0x7f, 0x9a, 0xd8, 0x10, + 0xb2, 0x39, 0xa1, 0x0c, 0xed, 0x2c, 0xf9, 0xb2, + 0x28, 0x43, 0x54, 0xc1, 0xf8, 0xc7, 0xe0, 0xac, + 0xcc, 0x24, 0x61, 0xdc, 0x79, 0x6d, 0x6e, 0x89 }, + { 0x12, 0x51, 0xf7, 0x6e, 0x56, 0x97, 0x84, 0x81, + 0x87, 0x53, 0x59, 0x80, 0x1d, 0xb5, 0x89, 0xa0, + 0xb2, 0x2f, 0x86, 0xd8, 0xd6, 0x34, 0xdc, 0x04, + 0x50, 0x6f, 0x32, 0x2e, 0xd7, 0x8f, 0x17, 0xe8 }, + { 0x3a, 0xfa, 0x89, 0x9f, 0xd9, 0x80, 0xe7, 0x3e, + 0xcb, 0x7f, 0x4d, 0x8b, 0x8f, 0x29, 0x1d, 0xc9, + 0xaf, 0x79, 0x6b, 0xc6, 0x5d, 0x27, 0xf9, 0x74, + 0xc6, 0xf1, 0x93, 0xc9, 0x19, 0x1a, 0x09, 0xfd }, + { 0xaa, 0x30, 0x5b, 0xe2, 0x6e, 0x5d, 0xed, 0xdc, + 0x3c, 0x10, 0x10, 0xcb, 0xc2, 0x13, 0xf9, 0x5f, + 0x05, 0x1c, 0x78, 0x5c, 0x5b, 0x43, 0x1e, 0x6a, + 0x7c, 0xd0, 0x48, 0xf1, 0x61, 0x78, 0x75, 0x28 }, + { 0x8e, 0xa1, 0x88, 0x4f, 0xf3, 0x2e, 0x9d, 0x10, + 0xf0, 0x39, 0xb4, 0x07, 0xd0, 0xd4, 0x4e, 0x7e, + 0x67, 0x0a, 0xbd, 0x88, 0x4a, 0xee, 0xe0, 0xfb, + 0x75, 0x7a, 0xe9, 0x4e, 0xaa, 0x97, 0x37, 0x3d }, + { 0xd4, 0x82, 0xb2, 0x15, 0x5d, 0x4d, 0xec, 0x6b, + 0x47, 0x36, 0xa1, 
0xf1, 0x61, 0x7b, 0x53, 0xaa, + 0xa3, 0x73, 0x10, 0x27, 0x7d, 0x3f, 0xef, 0x0c, + 0x37, 0xad, 0x41, 0x76, 0x8f, 0xc2, 0x35, 0xb4 }, + { 0x4d, 0x41, 0x39, 0x71, 0x38, 0x7e, 0x7a, 0x88, + 0x98, 0xa8, 0xdc, 0x2a, 0x27, 0x50, 0x07, 0x78, + 0x53, 0x9e, 0xa2, 0x14, 0xa2, 0xdf, 0xe9, 0xb3, + 0xd7, 0xe8, 0xeb, 0xdc, 0xe5, 0xcf, 0x3d, 0xb3 }, + { 0x69, 0x6e, 0x5d, 0x46, 0xe6, 0xc5, 0x7e, 0x87, + 0x96, 0xe4, 0x73, 0x5d, 0x08, 0x91, 0x6e, 0x0b, + 0x79, 0x29, 0xb3, 0xcf, 0x29, 0x8c, 0x29, 0x6d, + 0x22, 0xe9, 0xd3, 0x01, 0x96, 0x53, 0x37, 0x1c }, + { 0x1f, 0x56, 0x47, 0xc1, 0xd3, 0xb0, 0x88, 0x22, + 0x88, 0x85, 0x86, 0x5c, 0x89, 0x40, 0x90, 0x8b, + 0xf4, 0x0d, 0x1a, 0x82, 0x72, 0x82, 0x19, 0x73, + 0xb1, 0x60, 0x00, 0x8e, 0x7a, 0x3c, 0xe2, 0xeb }, + { 0xb6, 0xe7, 0x6c, 0x33, 0x0f, 0x02, 0x1a, 0x5b, + 0xda, 0x65, 0x87, 0x50, 0x10, 0xb0, 0xed, 0xf0, + 0x91, 0x26, 0xc0, 0xf5, 0x10, 0xea, 0x84, 0x90, + 0x48, 0x19, 0x20, 0x03, 0xae, 0xf4, 0xc6, 0x1c }, + { 0x3c, 0xd9, 0x52, 0xa0, 0xbe, 0xad, 0xa4, 0x1a, + 0xbb, 0x42, 0x4c, 0xe4, 0x7f, 0x94, 0xb4, 0x2b, + 0xe6, 0x4e, 0x1f, 0xfb, 0x0f, 0xd0, 0x78, 0x22, + 0x76, 0x80, 0x79, 0x46, 0xd0, 0xd0, 0xbc, 0x55 }, + { 0x98, 0xd9, 0x26, 0x77, 0x43, 0x9b, 0x41, 0xb7, + 0xbb, 0x51, 0x33, 0x12, 0xaf, 0xb9, 0x2b, 0xcc, + 0x8e, 0xe9, 0x68, 0xb2, 0xe3, 0xb2, 0x38, 0xce, + 0xcb, 0x9b, 0x0f, 0x34, 0xc9, 0xbb, 0x63, 0xd0 }, + { 0xec, 0xbc, 0xa2, 0xcf, 0x08, 0xae, 0x57, 0xd5, + 0x17, 0xad, 0x16, 0x15, 0x8a, 0x32, 0xbf, 0xa7, + 0xdc, 0x03, 0x82, 0xea, 0xed, 0xa1, 0x28, 0xe9, + 0x18, 0x86, 0x73, 0x4c, 0x24, 0xa0, 0xb2, 0x9d }, + { 0x94, 0x2c, 0xc7, 0xc0, 0xb5, 0x2e, 0x2b, 0x16, + 0xa4, 0xb8, 0x9f, 0xa4, 0xfc, 0x7e, 0x0b, 0xf6, + 0x09, 0xe2, 0x9a, 0x08, 0xc1, 0xa8, 0x54, 0x34, + 0x52, 0xb7, 0x7c, 0x7b, 0xfd, 0x11, 0xbb, 0x28 }, + { 0x8a, 0x06, 0x5d, 0x8b, 0x61, 0xa0, 0xdf, 0xfb, + 0x17, 0x0d, 0x56, 0x27, 0x73, 0x5a, 0x76, 0xb0, + 0xe9, 0x50, 0x60, 0x37, 0x80, 0x8c, 0xba, 0x16, + 0xc3, 0x45, 0x00, 0x7c, 0x9f, 0x79, 0xcf, 0x8f }, + { 0x1b, 0x9f, 0xa1, 0x97, 0x14, 0x65, 0x9c, 0x78, + 0xff, 0x41, 0x38, 0x71, 0x84, 0x92, 0x15, 0x36, + 0x10, 0x29, 0xac, 0x80, 0x2b, 0x1c, 0xbc, 0xd5, + 0x4e, 0x40, 0x8b, 0xd8, 0x72, 0x87, 0xf8, 0x1f }, + { 0x8d, 0xab, 0x07, 0x1b, 0xcd, 0x6c, 0x72, 0x92, + 0xa9, 0xef, 0x72, 0x7b, 0x4a, 0xe0, 0xd8, 0x67, + 0x13, 0x30, 0x1d, 0xa8, 0x61, 0x8d, 0x9a, 0x48, + 0xad, 0xce, 0x55, 0xf3, 0x03, 0xa8, 0x69, 0xa1 }, + { 0x82, 0x53, 0xe3, 0xe7, 0xc7, 0xb6, 0x84, 0xb9, + 0xcb, 0x2b, 0xeb, 0x01, 0x4c, 0xe3, 0x30, 0xff, + 0x3d, 0x99, 0xd1, 0x7a, 0xbb, 0xdb, 0xab, 0xe4, + 0xf4, 0xd6, 0x74, 0xde, 0xd5, 0x3f, 0xfc, 0x6b }, + { 0xf1, 0x95, 0xf3, 0x21, 0xe9, 0xe3, 0xd6, 0xbd, + 0x7d, 0x07, 0x45, 0x04, 0xdd, 0x2a, 0xb0, 0xe6, + 0x24, 0x1f, 0x92, 0xe7, 0x84, 0xb1, 0xaa, 0x27, + 0x1f, 0xf6, 0x48, 0xb1, 0xca, 0xb6, 0xd7, 0xf6 }, + { 0x27, 0xe4, 0xcc, 0x72, 0x09, 0x0f, 0x24, 0x12, + 0x66, 0x47, 0x6a, 0x7c, 0x09, 0x49, 0x5f, 0x2d, + 0xb1, 0x53, 0xd5, 0xbc, 0xbd, 0x76, 0x19, 0x03, + 0xef, 0x79, 0x27, 0x5e, 0xc5, 0x6b, 0x2e, 0xd8 }, + { 0x89, 0x9c, 0x24, 0x05, 0x78, 0x8e, 0x25, 0xb9, + 0x9a, 0x18, 0x46, 0x35, 0x5e, 0x64, 0x6d, 0x77, + 0xcf, 0x40, 0x00, 0x83, 0x41, 0x5f, 0x7d, 0xc5, + 0xaf, 0xe6, 0x9d, 0x6e, 0x17, 0xc0, 0x00, 0x23 }, + { 0xa5, 0x9b, 0x78, 0xc4, 0x90, 0x57, 0x44, 0x07, + 0x6b, 0xfe, 0xe8, 0x94, 0xde, 0x70, 0x7d, 0x4f, + 0x12, 0x0b, 0x5c, 0x68, 0x93, 0xea, 0x04, 0x00, + 0x29, 0x7d, 0x0b, 0xb8, 0x34, 0x72, 0x76, 0x32 }, + { 0x59, 0xdc, 0x78, 0xb1, 0x05, 0x64, 0x97, 0x07, + 0xa2, 0xbb, 0x44, 0x19, 0xc4, 0x8f, 0x00, 0x54, + 0x00, 0xd3, 0x97, 0x3d, 0xe3, 0x73, 0x66, 0x10, + 
0x23, 0x04, 0x35, 0xb1, 0x04, 0x24, 0xb2, 0x4f }, + { 0xc0, 0x14, 0x9d, 0x1d, 0x7e, 0x7a, 0x63, 0x53, + 0xa6, 0xd9, 0x06, 0xef, 0xe7, 0x28, 0xf2, 0xf3, + 0x29, 0xfe, 0x14, 0xa4, 0x14, 0x9a, 0x3e, 0xa7, + 0x76, 0x09, 0xbc, 0x42, 0xb9, 0x75, 0xdd, 0xfa }, + { 0xa3, 0x2f, 0x24, 0x14, 0x74, 0xa6, 0xc1, 0x69, + 0x32, 0xe9, 0x24, 0x3b, 0xe0, 0xcf, 0x09, 0xbc, + 0xdc, 0x7e, 0x0c, 0xa0, 0xe7, 0xa6, 0xa1, 0xb9, + 0xb1, 0xa0, 0xf0, 0x1e, 0x41, 0x50, 0x23, 0x77 }, + { 0xb2, 0x39, 0xb2, 0xe4, 0xf8, 0x18, 0x41, 0x36, + 0x1c, 0x13, 0x39, 0xf6, 0x8e, 0x2c, 0x35, 0x9f, + 0x92, 0x9a, 0xf9, 0xad, 0x9f, 0x34, 0xe0, 0x1a, + 0xab, 0x46, 0x31, 0xad, 0x6d, 0x55, 0x00, 0xb0 }, + { 0x85, 0xfb, 0x41, 0x9c, 0x70, 0x02, 0xa3, 0xe0, + 0xb4, 0xb6, 0xea, 0x09, 0x3b, 0x4c, 0x1a, 0xc6, + 0x93, 0x66, 0x45, 0xb6, 0x5d, 0xac, 0x5a, 0xc1, + 0x5a, 0x85, 0x28, 0xb7, 0xb9, 0x4c, 0x17, 0x54 }, + { 0x96, 0x19, 0x72, 0x06, 0x25, 0xf1, 0x90, 0xb9, + 0x3a, 0x3f, 0xad, 0x18, 0x6a, 0xb3, 0x14, 0x18, + 0x96, 0x33, 0xc0, 0xd3, 0xa0, 0x1e, 0x6f, 0x9b, + 0xc8, 0xc4, 0xa8, 0xf8, 0x2f, 0x38, 0x3d, 0xbf }, + { 0x7d, 0x62, 0x0d, 0x90, 0xfe, 0x69, 0xfa, 0x46, + 0x9a, 0x65, 0x38, 0x38, 0x89, 0x70, 0xa1, 0xaa, + 0x09, 0xbb, 0x48, 0xa2, 0xd5, 0x9b, 0x34, 0x7b, + 0x97, 0xe8, 0xce, 0x71, 0xf4, 0x8c, 0x7f, 0x46 }, + { 0x29, 0x43, 0x83, 0x56, 0x85, 0x96, 0xfb, 0x37, + 0xc7, 0x5b, 0xba, 0xcd, 0x97, 0x9c, 0x5f, 0xf6, + 0xf2, 0x0a, 0x55, 0x6b, 0xf8, 0x87, 0x9c, 0xc7, + 0x29, 0x24, 0x85, 0x5d, 0xf9, 0xb8, 0x24, 0x0e }, + { 0x16, 0xb1, 0x8a, 0xb3, 0x14, 0x35, 0x9c, 0x2b, + 0x83, 0x3c, 0x1c, 0x69, 0x86, 0xd4, 0x8c, 0x55, + 0xa9, 0xfc, 0x97, 0xcd, 0xe9, 0xa3, 0xc1, 0xf1, + 0x0a, 0x31, 0x77, 0x14, 0x0f, 0x73, 0xf7, 0x38 }, + { 0x8c, 0xbb, 0xdd, 0x14, 0xbc, 0x33, 0xf0, 0x4c, + 0xf4, 0x58, 0x13, 0xe4, 0xa1, 0x53, 0xa2, 0x73, + 0xd3, 0x6a, 0xda, 0xd5, 0xce, 0x71, 0xf4, 0x99, + 0xee, 0xb8, 0x7f, 0xb8, 0xac, 0x63, 0xb7, 0x29 }, + { 0x69, 0xc9, 0xa4, 0x98, 0xdb, 0x17, 0x4e, 0xca, + 0xef, 0xcc, 0x5a, 0x3a, 0xc9, 0xfd, 0xed, 0xf0, + 0xf8, 0x13, 0xa5, 0xbe, 0xc7, 0x27, 0xf1, 0xe7, + 0x75, 0xba, 0xbd, 0xec, 0x77, 0x18, 0x81, 0x6e }, + { 0xb4, 0x62, 0xc3, 0xbe, 0x40, 0x44, 0x8f, 0x1d, + 0x4f, 0x80, 0x62, 0x62, 0x54, 0xe5, 0x35, 0xb0, + 0x8b, 0xc9, 0xcd, 0xcf, 0xf5, 0x99, 0xa7, 0x68, + 0x57, 0x8d, 0x4b, 0x28, 0x81, 0xa8, 0xe3, 0xf0 }, + { 0x55, 0x3e, 0x9d, 0x9c, 0x5f, 0x36, 0x0a, 0xc0, + 0xb7, 0x4a, 0x7d, 0x44, 0xe5, 0xa3, 0x91, 0xda, + 0xd4, 0xce, 0xd0, 0x3e, 0x0c, 0x24, 0x18, 0x3b, + 0x7e, 0x8e, 0xca, 0xbd, 0xf1, 0x71, 0x5a, 0x64 }, + { 0x7a, 0x7c, 0x55, 0xa5, 0x6f, 0xa9, 0xae, 0x51, + 0xe6, 0x55, 0xe0, 0x19, 0x75, 0xd8, 0xa6, 0xff, + 0x4a, 0xe9, 0xe4, 0xb4, 0x86, 0xfc, 0xbe, 0x4e, + 0xac, 0x04, 0x45, 0x88, 0xf2, 0x45, 0xeb, 0xea }, + { 0x2a, 0xfd, 0xf3, 0xc8, 0x2a, 0xbc, 0x48, 0x67, + 0xf5, 0xde, 0x11, 0x12, 0x86, 0xc2, 0xb3, 0xbe, + 0x7d, 0x6e, 0x48, 0x65, 0x7b, 0xa9, 0x23, 0xcf, + 0xbf, 0x10, 0x1a, 0x6d, 0xfc, 0xf9, 0xdb, 0x9a }, + { 0x41, 0x03, 0x7d, 0x2e, 0xdc, 0xdc, 0xe0, 0xc4, + 0x9b, 0x7f, 0xb4, 0xa6, 0xaa, 0x09, 0x99, 0xca, + 0x66, 0x97, 0x6c, 0x74, 0x83, 0xaf, 0xe6, 0x31, + 0xd4, 0xed, 0xa2, 0x83, 0x14, 0x4f, 0x6d, 0xfc }, + { 0xc4, 0x46, 0x6f, 0x84, 0x97, 0xca, 0x2e, 0xeb, + 0x45, 0x83, 0xa0, 0xb0, 0x8e, 0x9d, 0x9a, 0xc7, + 0x43, 0x95, 0x70, 0x9f, 0xda, 0x10, 0x9d, 0x24, + 0xf2, 0xe4, 0x46, 0x21, 0x96, 0x77, 0x9c, 0x5d }, + { 0x75, 0xf6, 0x09, 0x33, 0x8a, 0xa6, 0x7d, 0x96, + 0x9a, 0x2a, 0xe2, 0xa2, 0x36, 0x2b, 0x2d, 0xa9, + 0xd7, 0x7c, 0x69, 0x5d, 0xfd, 0x1d, 0xf7, 0x22, + 0x4a, 0x69, 0x01, 0xdb, 0x93, 0x2c, 0x33, 0x64 }, + { 0x68, 0x60, 0x6c, 0xeb, 0x98, 
0x9d, 0x54, 0x88, + 0xfc, 0x7c, 0xf6, 0x49, 0xf3, 0xd7, 0xc2, 0x72, + 0xef, 0x05, 0x5d, 0xa1, 0xa9, 0x3f, 0xae, 0xcd, + 0x55, 0xfe, 0x06, 0xf6, 0x96, 0x70, 0x98, 0xca }, + { 0x44, 0x34, 0x6b, 0xde, 0xb7, 0xe0, 0x52, 0xf6, + 0x25, 0x50, 0x48, 0xf0, 0xd9, 0xb4, 0x2c, 0x42, + 0x5b, 0xab, 0x9c, 0x3d, 0xd2, 0x41, 0x68, 0x21, + 0x2c, 0x3e, 0xcf, 0x1e, 0xbf, 0x34, 0xe6, 0xae }, + { 0x8e, 0x9c, 0xf6, 0xe1, 0xf3, 0x66, 0x47, 0x1f, + 0x2a, 0xc7, 0xd2, 0xee, 0x9b, 0x5e, 0x62, 0x66, + 0xfd, 0xa7, 0x1f, 0x8f, 0x2e, 0x41, 0x09, 0xf2, + 0x23, 0x7e, 0xd5, 0xf8, 0x81, 0x3f, 0xc7, 0x18 }, + { 0x84, 0xbb, 0xeb, 0x84, 0x06, 0xd2, 0x50, 0x95, + 0x1f, 0x8c, 0x1b, 0x3e, 0x86, 0xa7, 0xc0, 0x10, + 0x08, 0x29, 0x21, 0x83, 0x3d, 0xfd, 0x95, 0x55, + 0xa2, 0xf9, 0x09, 0xb1, 0x08, 0x6e, 0xb4, 0xb8 }, + { 0xee, 0x66, 0x6f, 0x3e, 0xef, 0x0f, 0x7e, 0x2a, + 0x9c, 0x22, 0x29, 0x58, 0xc9, 0x7e, 0xaf, 0x35, + 0xf5, 0x1c, 0xed, 0x39, 0x3d, 0x71, 0x44, 0x85, + 0xab, 0x09, 0xa0, 0x69, 0x34, 0x0f, 0xdf, 0x88 }, + { 0xc1, 0x53, 0xd3, 0x4a, 0x65, 0xc4, 0x7b, 0x4a, + 0x62, 0xc5, 0xca, 0xcf, 0x24, 0x01, 0x09, 0x75, + 0xd0, 0x35, 0x6b, 0x2f, 0x32, 0xc8, 0xf5, 0xda, + 0x53, 0x0d, 0x33, 0x88, 0x16, 0xad, 0x5d, 0xe6 }, + { 0x9f, 0xc5, 0x45, 0x01, 0x09, 0xe1, 0xb7, 0x79, + 0xf6, 0xc7, 0xae, 0x79, 0xd5, 0x6c, 0x27, 0x63, + 0x5c, 0x8d, 0xd4, 0x26, 0xc5, 0xa9, 0xd5, 0x4e, + 0x25, 0x78, 0xdb, 0x98, 0x9b, 0x8c, 0x3b, 0x4e }, + { 0xd1, 0x2b, 0xf3, 0x73, 0x2e, 0xf4, 0xaf, 0x5c, + 0x22, 0xfa, 0x90, 0x35, 0x6a, 0xf8, 0xfc, 0x50, + 0xfc, 0xb4, 0x0f, 0x8f, 0x2e, 0xa5, 0xc8, 0x59, + 0x47, 0x37, 0xa3, 0xb3, 0xd5, 0xab, 0xdb, 0xd7 }, + { 0x11, 0x03, 0x0b, 0x92, 0x89, 0xbb, 0xa5, 0xaf, + 0x65, 0x26, 0x06, 0x72, 0xab, 0x6f, 0xee, 0x88, + 0xb8, 0x74, 0x20, 0xac, 0xef, 0x4a, 0x17, 0x89, + 0xa2, 0x07, 0x3b, 0x7e, 0xc2, 0xf2, 0xa0, 0x9e }, + { 0x69, 0xcb, 0x19, 0x2b, 0x84, 0x44, 0x00, 0x5c, + 0x8c, 0x0c, 0xeb, 0x12, 0xc8, 0x46, 0x86, 0x07, + 0x68, 0x18, 0x8c, 0xda, 0x0a, 0xec, 0x27, 0xa9, + 0xc8, 0xa5, 0x5c, 0xde, 0xe2, 0x12, 0x36, 0x32 }, + { 0xdb, 0x44, 0x4c, 0x15, 0x59, 0x7b, 0x5f, 0x1a, + 0x03, 0xd1, 0xf9, 0xed, 0xd1, 0x6e, 0x4a, 0x9f, + 0x43, 0xa6, 0x67, 0xcc, 0x27, 0x51, 0x75, 0xdf, + 0xa2, 0xb7, 0x04, 0xe3, 0xbb, 0x1a, 0x9b, 0x83 }, + { 0x3f, 0xb7, 0x35, 0x06, 0x1a, 0xbc, 0x51, 0x9d, + 0xfe, 0x97, 0x9e, 0x54, 0xc1, 0xee, 0x5b, 0xfa, + 0xd0, 0xa9, 0xd8, 0x58, 0xb3, 0x31, 0x5b, 0xad, + 0x34, 0xbd, 0xe9, 0x99, 0xef, 0xd7, 0x24, 0xdd } +}; + +static bool __init blake2s_selftest(void) +{ + u8 key[BLAKE2S_KEY_SIZE]; + u8 buf[ARRAY_SIZE(blake2s_testvecs)]; + u8 hash[BLAKE2S_HASH_SIZE]; + size_t i; + bool success = true; + + for (i = 0; i < BLAKE2S_KEY_SIZE; ++i) + key[i] = (u8)i; + + for (i = 0; i < ARRAY_SIZE(blake2s_testvecs); ++i) + buf[i] = (u8)i; + + for (i = 0; i < ARRAY_SIZE(blake2s_keyed_testvecs); ++i) { + blake2s(hash, buf, key, BLAKE2S_HASH_SIZE, i, BLAKE2S_KEY_SIZE); + if (memcmp(hash, blake2s_keyed_testvecs[i], BLAKE2S_HASH_SIZE)) { + pr_err("blake2s keyed self-test %zu: FAIL\n", i + 1); + success = false; + } + } + + for (i = 0; i < ARRAY_SIZE(blake2s_testvecs); ++i) { + blake2s(hash, buf, NULL, BLAKE2S_HASH_SIZE, i, 0); + if (memcmp(hash, blake2s_testvecs[i], BLAKE2S_HASH_SIZE)) { + pr_err("blake2s unkeyed self-test %zu: FAIL\n", i + i); + success = false; + } + } + return success; +} From patchwork Fri Mar 22 06:29:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 160848 Delivered-To: patch@linaro.org Received: by 2002:a02:c6d8:0:0:0:0:0 with 
SMTP id r24csp454437jan; Thu, 21 Mar 2019 23:42:41 -0700 (PDT) X-Google-Smtp-Source: APXvYqz2GSbYnN0kHkwAMziV7xHGT5l4Jzqcx7Mi3TOr3erqgay9d4K4GKOQGO+7Uvcpu1RhvoZs X-Received: by 2002:a17:902:bd90:: with SMTP id q16mr1633002pls.162.1553236961211; Thu, 21 Mar 2019 23:42:41 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1553236961; cv=none; d=google.com; s=arc-20160816; b=IcBQn9cp9Xf+2By9Q4T+VR/fWALQsUUDzect08i/cOeYIUF6L9T6SaUAP0XdhJ0asr 28t36t/oES6jZiYxOg2Pi/tsDNH2QJxR5GCFNEXuS5fFJt1RT6HBrAxXWyeGKbSD53y8 gs0kOrhLFZ8ALENTjJGihBXiHvGaO27JkEzuehUzfDJM7FMDoLvFuQyCfUV2BFWBztnU B4J629yQlJsfxvOK+fQHyfeechgA3gdqLY+0EkzDxRKtgdtS5PdsBI9dI5QzM41Dw8Is eTT83LcPCVXUiJKr4VnazGveJeoQzgJuNl0cAOig7ff2FxlJ1yMfd7qifPdqM7yGkEqP PAZg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:date:from:message-id:to:references :subject; bh=3yMshxk0RdqNZIVSjSTeXsznh3/37zWcU6zSepnlK6g=; b=Q6YXm4lCQQ/vaF/sY1xJCEP9JV6T2YSeZYaNuXjePj/vmA91zRkS7Ee3D1qC6RH+FG zWkT1Pji1K/VYhAvknj9skKWnaZ5Ch5ueaBhAb8+KEIAh6muc7qelPCtUgi/urXOUvHc HJUZrp7UBK9rJ6+2rcf7GVN+vpvGc0SiHaLFVsLsKroVmAGL6wqdOge4fFdNj7+4z7a7 AybJoyv9feWPDpLumf4rcqyWXVsPEVdkBILR1hY+DGse5rqhPz4UWeXpq8ayVuKDkHiM MzRAsEL50jDXkkgj6R/hghlMccLwo2nj41HTdbbQcoxUa8mPX6Nzig/Hpmi7UeL9TRRg AplQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id b38si6642527plb.249.2019.03.21.23.42.40; Thu, 21 Mar 2019 23:42:41 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727710AbfCVGmk (ORCPT + 3 others); Fri, 22 Mar 2019 02:42:40 -0400 Received: from orcrist.hmeau.com ([104.223.48.154]:44492 "EHLO deadmen.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727607AbfCVGmk (ORCPT ); Fri, 22 Mar 2019 02:42:40 -0400 Received: from gondobar.mordor.me.apana.org.au ([192.168.128.4] helo=gondobar) by deadmen.hmeau.com with esmtps (Exim 4.89 #2 (Debian)) id 1h7Dgo-0002Z2-AU; Fri, 22 Mar 2019 14:30:03 +0800 Received: from herbert by gondobar with local (Exim 4.89) (envelope-from ) id 1h7Dgd-0001J8-N3; Fri, 22 Mar 2019 14:29:51 +0800 Subject: [PATCH 12/17] zinc: BLAKE2s x86_64 implementation References: <20190322062740.nrwfx2rvmt7lzotj@gondor.apana.org.au> To: Linus Torvalds , "David S. Miller" , "Jason A. Donenfeld" , Eric Biggers , Ard Biesheuvel , Linux Crypto Mailing List , linux-fscrypt@vger.kernel.org, linux-arm-kernel@lists.infradead.org, LKML , Paul Crowley , Greg Kaiser , Samuel Neves , Tomer Ashur , Martin Willi Message-Id: From: Herbert Xu Date: Fri, 22 Mar 2019 14:29:51 +0800 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Jason A. Donenfeld These implementations from Samuel Neves support AVX and AVX-512VL. 
Originally this used AVX-512F, but Skylake thermal throttling made AVX-512VL more attractive; the switch was possible with a negligible performance difference. Signed-off-by: Jason A. Donenfeld Signed-off-by: Samuel Neves Co-developed-by: Samuel Neves Cc: Thomas Gleixner Cc: Ingo Molnar Cc: x86@kernel.org Cc: Jean-Philippe Aumasson Cc: Andy Lutomirski Cc: Greg KH Cc: Andrew Morton Cc: Linus Torvalds Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org Signed-off-by: Herbert Xu --- lib/zinc/Makefile | 1 lib/zinc/blake2s/blake2s-x86_64-glue.c | 71 +++ lib/zinc/blake2s/blake2s-x86_64.S | 685 +++++++++++++++++++++++++++++++++ lib/zinc/blake2s/blake2s.c | 4 4 files changed, 761 insertions(+) diff --git a/lib/zinc/Makefile b/lib/zinc/Makefile index c1cf678e4068..213e38a88560 100644 --- a/lib/zinc/Makefile +++ b/lib/zinc/Makefile @@ -12,4 +12,5 @@ zinc_chacha20poly1305-y := chacha20poly1305.o obj-$(CONFIG_ZINC_CHACHA20POLY1305) += zinc_chacha20poly1305.o zinc_blake2s-y := blake2s/blake2s.o +zinc_blake2s-$(CONFIG_ZINC_ARCH_X86_64) += blake2s/blake2s-x86_64.o obj-$(CONFIG_ZINC_BLAKE2S) += zinc_blake2s.o diff --git a/lib/zinc/blake2s/blake2s-x86_64-glue.c b/lib/zinc/blake2s/blake2s-x86_64-glue.c new file mode 100644 index 000000000000..615c38861579 --- /dev/null +++ b/lib/zinc/blake2s/blake2s-x86_64-glue.c @@ -0,0 +1,71 @@ +// SPDX-License-Identifier: GPL-2.0 OR MIT +/* + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. + */ + +#include +#include +#include +#include + +asmlinkage void blake2s_compress_avx(struct blake2s_state *state, + const u8 *block, const size_t nblocks, + const u32 inc); +asmlinkage void blake2s_compress_avx512(struct blake2s_state *state, + const u8 *block, const size_t nblocks, + const u32 inc); + +static bool blake2s_use_avx __ro_after_init; +static bool blake2s_use_avx512 __ro_after_init; +static bool *const blake2s_nobs[] __initconst = { &blake2s_use_avx512 }; + +static void __init blake2s_fpu_init(void) +{ + blake2s_use_avx = + boot_cpu_has(X86_FEATURE_AVX) && + cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL); + blake2s_use_avx512 = + boot_cpu_has(X86_FEATURE_AVX) && + boot_cpu_has(X86_FEATURE_AVX2) && + boot_cpu_has(X86_FEATURE_AVX512F) && + boot_cpu_has(X86_FEATURE_AVX512VL) && + cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM | + XFEATURE_MASK_AVX512, NULL); +} + +static inline bool blake2s_compress_arch(struct blake2s_state *state, + const u8 *block, size_t nblocks, + const u32 inc) +{ + simd_context_t simd_context; + bool used_arch = false; + + /* SIMD disables preemption, so relax after processing each page.
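+ * Each loop iteration below therefore hashes at most PAGE_SIZE / BLAKE2S_BLOCK_SIZE blocks before simd_relax() is called again; the BUILD_BUG_ON merely asserts that such a chunk is at least 8 blocks, so the cost of relaxing stays amortized.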
*/ + BUILD_BUG_ON(PAGE_SIZE / BLAKE2S_BLOCK_SIZE < 8); + + simd_get(&simd_context); + + if (!IS_ENABLED(CONFIG_AS_AVX) || !blake2s_use_avx || + !simd_use(&simd_context)) + goto out; + used_arch = true; + + for (;;) { + const size_t blocks = min_t(size_t, nblocks, + PAGE_SIZE / BLAKE2S_BLOCK_SIZE); + + if (IS_ENABLED(CONFIG_AS_AVX512) && blake2s_use_avx512) + blake2s_compress_avx512(state, block, blocks, inc); + else + blake2s_compress_avx(state, block, blocks, inc); + + nblocks -= blocks; + if (!nblocks) + break; + block += blocks * BLAKE2S_BLOCK_SIZE; + simd_relax(&simd_context); + } +out: + simd_put(&simd_context); + return used_arch; +} diff --git a/lib/zinc/blake2s/blake2s-x86_64.S b/lib/zinc/blake2s/blake2s-x86_64.S new file mode 100644 index 000000000000..1407a5ffb5c2 --- /dev/null +++ b/lib/zinc/blake2s/blake2s-x86_64.S @@ -0,0 +1,685 @@ +/* SPDX-License-Identifier: GPL-2.0 OR MIT */ +/* + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. + * Copyright (C) 2017 Samuel Neves . All Rights Reserved. + */ + +#include + +.section .rodata.cst32.BLAKE2S_IV, "aM", @progbits, 32 +.align 32 +IV: .octa 0xA54FF53A3C6EF372BB67AE856A09E667 + .octa 0x5BE0CD191F83D9AB9B05688C510E527F +.section .rodata.cst16.ROT16, "aM", @progbits, 16 +.align 16 +ROT16: .octa 0x0D0C0F0E09080B0A0504070601000302 +.section .rodata.cst16.ROR328, "aM", @progbits, 16 +.align 16 +ROR328: .octa 0x0C0F0E0D080B0A090407060500030201 +#ifdef CONFIG_AS_AVX512 +.section .rodata.cst64.BLAKE2S_SIGMA, "aM", @progbits, 640 +.align 64 +SIGMA: +.long 0, 2, 4, 6, 1, 3, 5, 7, 8, 10, 12, 14, 9, 11, 13, 15 +.long 11, 2, 12, 14, 9, 8, 15, 3, 4, 0, 13, 6, 10, 1, 7, 5 +.long 10, 12, 11, 6, 5, 9, 13, 3, 4, 15, 14, 2, 0, 7, 8, 1 +.long 10, 9, 7, 0, 11, 14, 1, 12, 6, 2, 15, 3, 13, 8, 5, 4 +.long 4, 9, 8, 13, 14, 0, 10, 11, 7, 3, 12, 1, 5, 6, 15, 2 +.long 2, 10, 4, 14, 13, 3, 9, 11, 6, 5, 7, 12, 15, 1, 8, 0 +.long 4, 11, 14, 8, 13, 10, 12, 5, 2, 1, 15, 3, 9, 7, 0, 6 +.long 6, 12, 0, 13, 15, 2, 1, 10, 4, 5, 11, 14, 8, 3, 9, 7 +.long 14, 5, 4, 12, 9, 7, 3, 10, 2, 0, 6, 15, 11, 1, 13, 8 +.long 11, 7, 13, 10, 12, 14, 0, 15, 4, 5, 6, 9, 2, 1, 8, 3 +#endif /* CONFIG_AS_AVX512 */ + +.text +#ifdef CONFIG_AS_AVX +ENTRY(blake2s_compress_avx) + movl %ecx, %ecx + testq %rdx, %rdx + je .Lendofloop + .align 32 +.Lbeginofloop: + addq %rcx, 32(%rdi) + vmovdqu IV+16(%rip), %xmm1 + vmovdqu (%rsi), %xmm4 + vpxor 32(%rdi), %xmm1, %xmm1 + vmovdqu 16(%rsi), %xmm3 + vshufps $136, %xmm3, %xmm4, %xmm6 + vmovdqa ROT16(%rip), %xmm7 + vpaddd (%rdi), %xmm6, %xmm6 + vpaddd 16(%rdi), %xmm6, %xmm6 + vpxor %xmm6, %xmm1, %xmm1 + vmovdqu IV(%rip), %xmm8 + vpshufb %xmm7, %xmm1, %xmm1 + vmovdqu 48(%rsi), %xmm5 + vpaddd %xmm1, %xmm8, %xmm8 + vpxor 16(%rdi), %xmm8, %xmm9 + vmovdqu 32(%rsi), %xmm2 + vpblendw $12, %xmm3, %xmm5, %xmm13 + vshufps $221, %xmm5, %xmm2, %xmm12 + vpunpckhqdq %xmm2, %xmm4, %xmm14 + vpslld $20, %xmm9, %xmm0 + vpsrld $12, %xmm9, %xmm9 + vpxor %xmm0, %xmm9, %xmm0 + vshufps $221, %xmm3, %xmm4, %xmm9 + vpaddd %xmm9, %xmm6, %xmm9 + vpaddd %xmm0, %xmm9, %xmm9 + vpxor %xmm9, %xmm1, %xmm1 + vmovdqa ROR328(%rip), %xmm6 + vpshufb %xmm6, %xmm1, %xmm1 + vpaddd %xmm1, %xmm8, %xmm8 + vpxor %xmm8, %xmm0, %xmm0 + vpshufd $147, %xmm1, %xmm1 + vpshufd $78, %xmm8, %xmm8 + vpslld $25, %xmm0, %xmm10 + vpsrld $7, %xmm0, %xmm0 + vpxor %xmm10, %xmm0, %xmm0 + vshufps $136, %xmm5, %xmm2, %xmm10 + vpshufd $57, %xmm0, %xmm0 + vpaddd %xmm10, %xmm9, %xmm9 + vpaddd %xmm0, %xmm9, %xmm9 + vpxor %xmm9, %xmm1, %xmm1 + vpaddd %xmm12, %xmm9, %xmm9 + vpblendw $12, %xmm2, %xmm3, 
%xmm12 + vpshufb %xmm7, %xmm1, %xmm1 + vpaddd %xmm1, %xmm8, %xmm8 + vpxor %xmm8, %xmm0, %xmm10 + vpslld $20, %xmm10, %xmm0 + vpsrld $12, %xmm10, %xmm10 + vpxor %xmm0, %xmm10, %xmm0 + vpaddd %xmm0, %xmm9, %xmm9 + vpxor %xmm9, %xmm1, %xmm1 + vpshufb %xmm6, %xmm1, %xmm1 + vpaddd %xmm1, %xmm8, %xmm8 + vpxor %xmm8, %xmm0, %xmm0 + vpshufd $57, %xmm1, %xmm1 + vpshufd $78, %xmm8, %xmm8 + vpslld $25, %xmm0, %xmm10 + vpsrld $7, %xmm0, %xmm0 + vpxor %xmm10, %xmm0, %xmm0 + vpslldq $4, %xmm5, %xmm10 + vpblendw $240, %xmm10, %xmm12, %xmm12 + vpshufd $147, %xmm0, %xmm0 + vpshufd $147, %xmm12, %xmm12 + vpaddd %xmm9, %xmm12, %xmm12 + vpaddd %xmm0, %xmm12, %xmm12 + vpxor %xmm12, %xmm1, %xmm1 + vpshufb %xmm7, %xmm1, %xmm1 + vpaddd %xmm1, %xmm8, %xmm8 + vpxor %xmm8, %xmm0, %xmm11 + vpslld $20, %xmm11, %xmm9 + vpsrld $12, %xmm11, %xmm11 + vpxor %xmm9, %xmm11, %xmm0 + vpshufd $8, %xmm2, %xmm9 + vpblendw $192, %xmm5, %xmm3, %xmm11 + vpblendw $240, %xmm11, %xmm9, %xmm9 + vpshufd $177, %xmm9, %xmm9 + vpaddd %xmm12, %xmm9, %xmm9 + vpaddd %xmm0, %xmm9, %xmm11 + vpxor %xmm11, %xmm1, %xmm1 + vpshufb %xmm6, %xmm1, %xmm1 + vpaddd %xmm1, %xmm8, %xmm8 + vpxor %xmm8, %xmm0, %xmm9 + vpshufd $147, %xmm1, %xmm1 + vpshufd $78, %xmm8, %xmm8 + vpslld $25, %xmm9, %xmm0 + vpsrld $7, %xmm9, %xmm9 + vpxor %xmm0, %xmm9, %xmm0 + vpslldq $4, %xmm3, %xmm9 + vpblendw $48, %xmm9, %xmm2, %xmm9 + vpblendw $240, %xmm9, %xmm4, %xmm9 + vpshufd $57, %xmm0, %xmm0 + vpshufd $177, %xmm9, %xmm9 + vpaddd %xmm11, %xmm9, %xmm9 + vpaddd %xmm0, %xmm9, %xmm9 + vpxor %xmm9, %xmm1, %xmm1 + vpshufb %xmm7, %xmm1, %xmm1 + vpaddd %xmm1, %xmm8, %xmm11 + vpxor %xmm11, %xmm0, %xmm0 + vpslld $20, %xmm0, %xmm8 + vpsrld $12, %xmm0, %xmm0 + vpxor %xmm8, %xmm0, %xmm0 + vpunpckhdq %xmm3, %xmm4, %xmm8 + vpblendw $12, %xmm10, %xmm8, %xmm12 + vpshufd $177, %xmm12, %xmm12 + vpaddd %xmm9, %xmm12, %xmm9 + vpaddd %xmm0, %xmm9, %xmm9 + vpxor %xmm9, %xmm1, %xmm1 + vpshufb %xmm6, %xmm1, %xmm1 + vpaddd %xmm1, %xmm11, %xmm11 + vpxor %xmm11, %xmm0, %xmm0 + vpshufd $57, %xmm1, %xmm1 + vpshufd $78, %xmm11, %xmm11 + vpslld $25, %xmm0, %xmm12 + vpsrld $7, %xmm0, %xmm0 + vpxor %xmm12, %xmm0, %xmm0 + vpunpckhdq %xmm5, %xmm2, %xmm12 + vpshufd $147, %xmm0, %xmm0 + vpblendw $15, %xmm13, %xmm12, %xmm12 + vpslldq $8, %xmm5, %xmm13 + vpshufd $210, %xmm12, %xmm12 + vpaddd %xmm9, %xmm12, %xmm9 + vpaddd %xmm0, %xmm9, %xmm9 + vpxor %xmm9, %xmm1, %xmm1 + vpshufb %xmm7, %xmm1, %xmm1 + vpaddd %xmm1, %xmm11, %xmm11 + vpxor %xmm11, %xmm0, %xmm0 + vpslld $20, %xmm0, %xmm12 + vpsrld $12, %xmm0, %xmm0 + vpxor %xmm12, %xmm0, %xmm0 + vpunpckldq %xmm4, %xmm2, %xmm12 + vpblendw $240, %xmm4, %xmm12, %xmm12 + vpblendw $192, %xmm13, %xmm12, %xmm12 + vpsrldq $12, %xmm3, %xmm13 + vpaddd %xmm12, %xmm9, %xmm9 + vpaddd %xmm0, %xmm9, %xmm9 + vpxor %xmm9, %xmm1, %xmm1 + vpshufb %xmm6, %xmm1, %xmm1 + vpaddd %xmm1, %xmm11, %xmm11 + vpxor %xmm11, %xmm0, %xmm0 + vpshufd $147, %xmm1, %xmm1 + vpshufd $78, %xmm11, %xmm11 + vpslld $25, %xmm0, %xmm12 + vpsrld $7, %xmm0, %xmm0 + vpxor %xmm12, %xmm0, %xmm0 + vpblendw $60, %xmm2, %xmm4, %xmm12 + vpblendw $3, %xmm13, %xmm12, %xmm12 + vpshufd $57, %xmm0, %xmm0 + vpshufd $78, %xmm12, %xmm12 + vpaddd %xmm9, %xmm12, %xmm9 + vpaddd %xmm0, %xmm9, %xmm9 + vpxor %xmm9, %xmm1, %xmm1 + vpshufb %xmm7, %xmm1, %xmm1 + vpaddd %xmm1, %xmm11, %xmm11 + vpxor %xmm11, %xmm0, %xmm12 + vpslld $20, %xmm12, %xmm13 + vpsrld $12, %xmm12, %xmm0 + vpblendw $51, %xmm3, %xmm4, %xmm12 + vpxor %xmm13, %xmm0, %xmm0 + vpblendw $192, %xmm10, %xmm12, %xmm10 + vpslldq $8, %xmm2, %xmm12 + vpshufd $27, %xmm10, %xmm10 + 
vpaddd %xmm9, %xmm10, %xmm9 + vpaddd %xmm0, %xmm9, %xmm9 + vpxor %xmm9, %xmm1, %xmm1 + vpshufb %xmm6, %xmm1, %xmm1 + vpaddd %xmm1, %xmm11, %xmm11 + vpxor %xmm11, %xmm0, %xmm0 + vpshufd $57, %xmm1, %xmm1 + vpshufd $78, %xmm11, %xmm11 + vpslld $25, %xmm0, %xmm10 + vpsrld $7, %xmm0, %xmm0 + vpxor %xmm10, %xmm0, %xmm0 + vpunpckhdq %xmm2, %xmm8, %xmm10 + vpshufd $147, %xmm0, %xmm0 + vpblendw $12, %xmm5, %xmm10, %xmm10 + vpshufd $210, %xmm10, %xmm10 + vpaddd %xmm9, %xmm10, %xmm9 + vpaddd %xmm0, %xmm9, %xmm9 + vpxor %xmm9, %xmm1, %xmm1 + vpshufb %xmm7, %xmm1, %xmm1 + vpaddd %xmm1, %xmm11, %xmm11 + vpxor %xmm11, %xmm0, %xmm10 + vpslld $20, %xmm10, %xmm0 + vpsrld $12, %xmm10, %xmm10 + vpxor %xmm0, %xmm10, %xmm0 + vpblendw $12, %xmm4, %xmm5, %xmm10 + vpblendw $192, %xmm12, %xmm10, %xmm10 + vpunpckldq %xmm2, %xmm4, %xmm12 + vpshufd $135, %xmm10, %xmm10 + vpaddd %xmm9, %xmm10, %xmm9 + vpaddd %xmm0, %xmm9, %xmm9 + vpxor %xmm9, %xmm1, %xmm1 + vpshufb %xmm6, %xmm1, %xmm1 + vpaddd %xmm1, %xmm11, %xmm13 + vpxor %xmm13, %xmm0, %xmm0 + vpshufd $147, %xmm1, %xmm1 + vpshufd $78, %xmm13, %xmm13 + vpslld $25, %xmm0, %xmm10 + vpsrld $7, %xmm0, %xmm0 + vpxor %xmm10, %xmm0, %xmm0 + vpblendw $15, %xmm3, %xmm4, %xmm10 + vpblendw $192, %xmm5, %xmm10, %xmm10 + vpshufd $57, %xmm0, %xmm0 + vpshufd $198, %xmm10, %xmm10 + vpaddd %xmm9, %xmm10, %xmm10 + vpaddd %xmm0, %xmm10, %xmm10 + vpxor %xmm10, %xmm1, %xmm1 + vpshufb %xmm7, %xmm1, %xmm1 + vpaddd %xmm1, %xmm13, %xmm13 + vpxor %xmm13, %xmm0, %xmm9 + vpslld $20, %xmm9, %xmm0 + vpsrld $12, %xmm9, %xmm9 + vpxor %xmm0, %xmm9, %xmm0 + vpunpckhdq %xmm2, %xmm3, %xmm9 + vpunpcklqdq %xmm12, %xmm9, %xmm15 + vpunpcklqdq %xmm12, %xmm8, %xmm12 + vpblendw $15, %xmm5, %xmm8, %xmm8 + vpaddd %xmm15, %xmm10, %xmm15 + vpaddd %xmm0, %xmm15, %xmm15 + vpxor %xmm15, %xmm1, %xmm1 + vpshufd $141, %xmm8, %xmm8 + vpshufb %xmm6, %xmm1, %xmm1 + vpaddd %xmm1, %xmm13, %xmm13 + vpxor %xmm13, %xmm0, %xmm0 + vpshufd $57, %xmm1, %xmm1 + vpshufd $78, %xmm13, %xmm13 + vpslld $25, %xmm0, %xmm10 + vpsrld $7, %xmm0, %xmm0 + vpxor %xmm10, %xmm0, %xmm0 + vpunpcklqdq %xmm2, %xmm3, %xmm10 + vpshufd $147, %xmm0, %xmm0 + vpblendw $51, %xmm14, %xmm10, %xmm14 + vpshufd $135, %xmm14, %xmm14 + vpaddd %xmm15, %xmm14, %xmm14 + vpaddd %xmm0, %xmm14, %xmm14 + vpxor %xmm14, %xmm1, %xmm1 + vpunpcklqdq %xmm3, %xmm4, %xmm15 + vpshufb %xmm7, %xmm1, %xmm1 + vpaddd %xmm1, %xmm13, %xmm13 + vpxor %xmm13, %xmm0, %xmm0 + vpslld $20, %xmm0, %xmm11 + vpsrld $12, %xmm0, %xmm0 + vpxor %xmm11, %xmm0, %xmm0 + vpunpckhqdq %xmm5, %xmm3, %xmm11 + vpblendw $51, %xmm15, %xmm11, %xmm11 + vpunpckhqdq %xmm3, %xmm5, %xmm15 + vpaddd %xmm11, %xmm14, %xmm11 + vpaddd %xmm0, %xmm11, %xmm11 + vpxor %xmm11, %xmm1, %xmm1 + vpshufb %xmm6, %xmm1, %xmm1 + vpaddd %xmm1, %xmm13, %xmm13 + vpxor %xmm13, %xmm0, %xmm0 + vpshufd $147, %xmm1, %xmm1 + vpshufd $78, %xmm13, %xmm13 + vpslld $25, %xmm0, %xmm14 + vpsrld $7, %xmm0, %xmm0 + vpxor %xmm14, %xmm0, %xmm14 + vpunpckhqdq %xmm4, %xmm2, %xmm0 + vpshufd $57, %xmm14, %xmm14 + vpblendw $51, %xmm15, %xmm0, %xmm15 + vpaddd %xmm15, %xmm11, %xmm15 + vpaddd %xmm14, %xmm15, %xmm15 + vpxor %xmm15, %xmm1, %xmm1 + vpshufb %xmm7, %xmm1, %xmm1 + vpaddd %xmm1, %xmm13, %xmm13 + vpxor %xmm13, %xmm14, %xmm14 + vpslld $20, %xmm14, %xmm11 + vpsrld $12, %xmm14, %xmm14 + vpxor %xmm11, %xmm14, %xmm14 + vpblendw $3, %xmm2, %xmm4, %xmm11 + vpslldq $8, %xmm11, %xmm0 + vpblendw $15, %xmm5, %xmm0, %xmm0 + vpshufd $99, %xmm0, %xmm0 + vpaddd %xmm15, %xmm0, %xmm15 + vpaddd %xmm14, %xmm15, %xmm15 + vpxor %xmm15, %xmm1, %xmm0 + vpaddd %xmm12, %xmm15, 
%xmm15 + vpshufb %xmm6, %xmm0, %xmm0 + vpaddd %xmm0, %xmm13, %xmm13 + vpxor %xmm13, %xmm14, %xmm14 + vpshufd $57, %xmm0, %xmm0 + vpshufd $78, %xmm13, %xmm13 + vpslld $25, %xmm14, %xmm1 + vpsrld $7, %xmm14, %xmm14 + vpxor %xmm1, %xmm14, %xmm14 + vpblendw $3, %xmm5, %xmm4, %xmm1 + vpshufd $147, %xmm14, %xmm14 + vpaddd %xmm14, %xmm15, %xmm15 + vpxor %xmm15, %xmm0, %xmm0 + vpshufb %xmm7, %xmm0, %xmm0 + vpaddd %xmm0, %xmm13, %xmm13 + vpxor %xmm13, %xmm14, %xmm14 + vpslld $20, %xmm14, %xmm12 + vpsrld $12, %xmm14, %xmm14 + vpxor %xmm12, %xmm14, %xmm14 + vpsrldq $4, %xmm2, %xmm12 + vpblendw $60, %xmm12, %xmm1, %xmm1 + vpaddd %xmm1, %xmm15, %xmm15 + vpaddd %xmm14, %xmm15, %xmm15 + vpxor %xmm15, %xmm0, %xmm0 + vpblendw $12, %xmm4, %xmm3, %xmm1 + vpshufb %xmm6, %xmm0, %xmm0 + vpaddd %xmm0, %xmm13, %xmm13 + vpxor %xmm13, %xmm14, %xmm14 + vpshufd $147, %xmm0, %xmm0 + vpshufd $78, %xmm13, %xmm13 + vpslld $25, %xmm14, %xmm12 + vpsrld $7, %xmm14, %xmm14 + vpxor %xmm12, %xmm14, %xmm14 + vpsrldq $4, %xmm5, %xmm12 + vpblendw $48, %xmm12, %xmm1, %xmm1 + vpshufd $33, %xmm5, %xmm12 + vpshufd $57, %xmm14, %xmm14 + vpshufd $108, %xmm1, %xmm1 + vpblendw $51, %xmm12, %xmm10, %xmm12 + vpaddd %xmm15, %xmm1, %xmm15 + vpaddd %xmm14, %xmm15, %xmm15 + vpxor %xmm15, %xmm0, %xmm0 + vpaddd %xmm12, %xmm15, %xmm15 + vpshufb %xmm7, %xmm0, %xmm0 + vpaddd %xmm0, %xmm13, %xmm1 + vpxor %xmm1, %xmm14, %xmm14 + vpslld $20, %xmm14, %xmm13 + vpsrld $12, %xmm14, %xmm14 + vpxor %xmm13, %xmm14, %xmm14 + vpslldq $12, %xmm3, %xmm13 + vpaddd %xmm14, %xmm15, %xmm15 + vpxor %xmm15, %xmm0, %xmm0 + vpshufb %xmm6, %xmm0, %xmm0 + vpaddd %xmm0, %xmm1, %xmm1 + vpxor %xmm1, %xmm14, %xmm14 + vpshufd $57, %xmm0, %xmm0 + vpshufd $78, %xmm1, %xmm1 + vpslld $25, %xmm14, %xmm12 + vpsrld $7, %xmm14, %xmm14 + vpxor %xmm12, %xmm14, %xmm14 + vpblendw $51, %xmm5, %xmm4, %xmm12 + vpshufd $147, %xmm14, %xmm14 + vpblendw $192, %xmm13, %xmm12, %xmm12 + vpaddd %xmm12, %xmm15, %xmm15 + vpaddd %xmm14, %xmm15, %xmm15 + vpxor %xmm15, %xmm0, %xmm0 + vpsrldq $4, %xmm3, %xmm12 + vpshufb %xmm7, %xmm0, %xmm0 + vpaddd %xmm0, %xmm1, %xmm1 + vpxor %xmm1, %xmm14, %xmm14 + vpslld $20, %xmm14, %xmm13 + vpsrld $12, %xmm14, %xmm14 + vpxor %xmm13, %xmm14, %xmm14 + vpblendw $48, %xmm2, %xmm5, %xmm13 + vpblendw $3, %xmm12, %xmm13, %xmm13 + vpshufd $156, %xmm13, %xmm13 + vpaddd %xmm15, %xmm13, %xmm15 + vpaddd %xmm14, %xmm15, %xmm15 + vpxor %xmm15, %xmm0, %xmm0 + vpshufb %xmm6, %xmm0, %xmm0 + vpaddd %xmm0, %xmm1, %xmm1 + vpxor %xmm1, %xmm14, %xmm14 + vpshufd $147, %xmm0, %xmm0 + vpshufd $78, %xmm1, %xmm1 + vpslld $25, %xmm14, %xmm13 + vpsrld $7, %xmm14, %xmm14 + vpxor %xmm13, %xmm14, %xmm14 + vpunpcklqdq %xmm2, %xmm4, %xmm13 + vpshufd $57, %xmm14, %xmm14 + vpblendw $12, %xmm12, %xmm13, %xmm12 + vpshufd $180, %xmm12, %xmm12 + vpaddd %xmm15, %xmm12, %xmm15 + vpaddd %xmm14, %xmm15, %xmm15 + vpxor %xmm15, %xmm0, %xmm0 + vpshufb %xmm7, %xmm0, %xmm0 + vpaddd %xmm0, %xmm1, %xmm1 + vpxor %xmm1, %xmm14, %xmm14 + vpslld $20, %xmm14, %xmm12 + vpsrld $12, %xmm14, %xmm14 + vpxor %xmm12, %xmm14, %xmm14 + vpunpckhqdq %xmm9, %xmm4, %xmm12 + vpshufd $198, %xmm12, %xmm12 + vpaddd %xmm15, %xmm12, %xmm15 + vpaddd %xmm14, %xmm15, %xmm15 + vpxor %xmm15, %xmm0, %xmm0 + vpaddd %xmm15, %xmm8, %xmm15 + vpshufb %xmm6, %xmm0, %xmm0 + vpaddd %xmm0, %xmm1, %xmm1 + vpxor %xmm1, %xmm14, %xmm14 + vpshufd $57, %xmm0, %xmm0 + vpshufd $78, %xmm1, %xmm1 + vpslld $25, %xmm14, %xmm12 + vpsrld $7, %xmm14, %xmm14 + vpxor %xmm12, %xmm14, %xmm14 + vpsrldq $4, %xmm4, %xmm12 + vpshufd $147, %xmm14, %xmm14 + vpaddd %xmm14, %xmm15, 
%xmm15 + vpxor %xmm15, %xmm0, %xmm0 + vpshufb %xmm7, %xmm0, %xmm0 + vpaddd %xmm0, %xmm1, %xmm1 + vpxor %xmm1, %xmm14, %xmm14 + vpslld $20, %xmm14, %xmm8 + vpsrld $12, %xmm14, %xmm14 + vpxor %xmm14, %xmm8, %xmm14 + vpblendw $48, %xmm5, %xmm2, %xmm8 + vpblendw $3, %xmm12, %xmm8, %xmm8 + vpunpckhqdq %xmm5, %xmm4, %xmm12 + vpshufd $75, %xmm8, %xmm8 + vpblendw $60, %xmm10, %xmm12, %xmm10 + vpaddd %xmm15, %xmm8, %xmm15 + vpaddd %xmm14, %xmm15, %xmm15 + vpxor %xmm0, %xmm15, %xmm0 + vpshufd $45, %xmm10, %xmm10 + vpshufb %xmm6, %xmm0, %xmm0 + vpaddd %xmm15, %xmm10, %xmm15 + vpaddd %xmm0, %xmm1, %xmm1 + vpxor %xmm1, %xmm14, %xmm14 + vpshufd $147, %xmm0, %xmm0 + vpshufd $78, %xmm1, %xmm1 + vpslld $25, %xmm14, %xmm8 + vpsrld $7, %xmm14, %xmm14 + vpxor %xmm14, %xmm8, %xmm8 + vpshufd $57, %xmm8, %xmm8 + vpaddd %xmm8, %xmm15, %xmm15 + vpxor %xmm0, %xmm15, %xmm0 + vpshufb %xmm7, %xmm0, %xmm0 + vpaddd %xmm0, %xmm1, %xmm1 + vpxor %xmm8, %xmm1, %xmm8 + vpslld $20, %xmm8, %xmm10 + vpsrld $12, %xmm8, %xmm8 + vpxor %xmm8, %xmm10, %xmm10 + vpunpckldq %xmm3, %xmm4, %xmm8 + vpunpcklqdq %xmm9, %xmm8, %xmm9 + vpaddd %xmm9, %xmm15, %xmm9 + vpaddd %xmm10, %xmm9, %xmm9 + vpxor %xmm0, %xmm9, %xmm8 + vpshufb %xmm6, %xmm8, %xmm8 + vpaddd %xmm8, %xmm1, %xmm1 + vpxor %xmm1, %xmm10, %xmm10 + vpshufd $57, %xmm8, %xmm8 + vpshufd $78, %xmm1, %xmm1 + vpslld $25, %xmm10, %xmm12 + vpsrld $7, %xmm10, %xmm10 + vpxor %xmm10, %xmm12, %xmm10 + vpblendw $48, %xmm4, %xmm3, %xmm12 + vpshufd $147, %xmm10, %xmm0 + vpunpckhdq %xmm5, %xmm3, %xmm10 + vpshufd $78, %xmm12, %xmm12 + vpunpcklqdq %xmm4, %xmm10, %xmm10 + vpblendw $192, %xmm2, %xmm10, %xmm10 + vpshufhw $78, %xmm10, %xmm10 + vpaddd %xmm10, %xmm9, %xmm10 + vpaddd %xmm0, %xmm10, %xmm10 + vpxor %xmm8, %xmm10, %xmm8 + vpshufb %xmm7, %xmm8, %xmm8 + vpaddd %xmm8, %xmm1, %xmm1 + vpxor %xmm0, %xmm1, %xmm9 + vpslld $20, %xmm9, %xmm0 + vpsrld $12, %xmm9, %xmm9 + vpxor %xmm9, %xmm0, %xmm0 + vpunpckhdq %xmm5, %xmm4, %xmm9 + vpblendw $240, %xmm9, %xmm2, %xmm13 + vpshufd $39, %xmm13, %xmm13 + vpaddd %xmm10, %xmm13, %xmm10 + vpaddd %xmm0, %xmm10, %xmm10 + vpxor %xmm8, %xmm10, %xmm8 + vpblendw $12, %xmm4, %xmm2, %xmm13 + vpshufb %xmm6, %xmm8, %xmm8 + vpslldq $4, %xmm13, %xmm13 + vpblendw $15, %xmm5, %xmm13, %xmm13 + vpaddd %xmm8, %xmm1, %xmm1 + vpxor %xmm1, %xmm0, %xmm0 + vpaddd %xmm13, %xmm10, %xmm13 + vpshufd $147, %xmm8, %xmm8 + vpshufd $78, %xmm1, %xmm1 + vpslld $25, %xmm0, %xmm14 + vpsrld $7, %xmm0, %xmm0 + vpxor %xmm0, %xmm14, %xmm14 + vpshufd $57, %xmm14, %xmm14 + vpaddd %xmm14, %xmm13, %xmm13 + vpxor %xmm8, %xmm13, %xmm8 + vpaddd %xmm13, %xmm12, %xmm12 + vpshufb %xmm7, %xmm8, %xmm8 + vpaddd %xmm8, %xmm1, %xmm1 + vpxor %xmm14, %xmm1, %xmm14 + vpslld $20, %xmm14, %xmm10 + vpsrld $12, %xmm14, %xmm14 + vpxor %xmm14, %xmm10, %xmm10 + vpaddd %xmm10, %xmm12, %xmm12 + vpxor %xmm8, %xmm12, %xmm8 + vpshufb %xmm6, %xmm8, %xmm8 + vpaddd %xmm8, %xmm1, %xmm1 + vpxor %xmm1, %xmm10, %xmm0 + vpshufd $57, %xmm8, %xmm8 + vpshufd $78, %xmm1, %xmm1 + vpslld $25, %xmm0, %xmm10 + vpsrld $7, %xmm0, %xmm0 + vpxor %xmm0, %xmm10, %xmm10 + vpblendw $48, %xmm2, %xmm3, %xmm0 + vpblendw $15, %xmm11, %xmm0, %xmm0 + vpshufd $147, %xmm10, %xmm10 + vpshufd $114, %xmm0, %xmm0 + vpaddd %xmm12, %xmm0, %xmm0 + vpaddd %xmm10, %xmm0, %xmm0 + vpxor %xmm8, %xmm0, %xmm8 + vpshufb %xmm7, %xmm8, %xmm8 + vpaddd %xmm8, %xmm1, %xmm1 + vpxor %xmm10, %xmm1, %xmm10 + vpslld $20, %xmm10, %xmm11 + vpsrld $12, %xmm10, %xmm10 + vpxor %xmm10, %xmm11, %xmm10 + vpslldq $4, %xmm4, %xmm11 + vpblendw $192, %xmm11, %xmm3, %xmm3 + vpunpckldq %xmm5, %xmm4, 
%xmm4 + vpshufd $99, %xmm3, %xmm3 + vpaddd %xmm0, %xmm3, %xmm3 + vpaddd %xmm10, %xmm3, %xmm3 + vpxor %xmm8, %xmm3, %xmm11 + vpunpckldq %xmm5, %xmm2, %xmm0 + vpblendw $192, %xmm2, %xmm5, %xmm2 + vpshufb %xmm6, %xmm11, %xmm11 + vpunpckhqdq %xmm0, %xmm9, %xmm0 + vpblendw $15, %xmm4, %xmm2, %xmm4 + vpaddd %xmm11, %xmm1, %xmm1 + vpxor %xmm1, %xmm10, %xmm10 + vpshufd $147, %xmm11, %xmm11 + vpshufd $201, %xmm0, %xmm0 + vpslld $25, %xmm10, %xmm8 + vpsrld $7, %xmm10, %xmm10 + vpxor %xmm10, %xmm8, %xmm10 + vpshufd $78, %xmm1, %xmm1 + vpaddd %xmm3, %xmm0, %xmm0 + vpshufd $27, %xmm4, %xmm4 + vpshufd $57, %xmm10, %xmm10 + vpaddd %xmm10, %xmm0, %xmm0 + vpxor %xmm11, %xmm0, %xmm11 + vpaddd %xmm0, %xmm4, %xmm0 + vpshufb %xmm7, %xmm11, %xmm7 + vpaddd %xmm7, %xmm1, %xmm1 + vpxor %xmm10, %xmm1, %xmm10 + vpslld $20, %xmm10, %xmm8 + vpsrld $12, %xmm10, %xmm10 + vpxor %xmm10, %xmm8, %xmm8 + vpaddd %xmm8, %xmm0, %xmm0 + vpxor %xmm7, %xmm0, %xmm7 + vpshufb %xmm6, %xmm7, %xmm6 + vpaddd %xmm6, %xmm1, %xmm1 + vpxor %xmm1, %xmm8, %xmm8 + vpshufd $78, %xmm1, %xmm1 + vpshufd $57, %xmm6, %xmm6 + vpslld $25, %xmm8, %xmm2 + vpsrld $7, %xmm8, %xmm8 + vpxor %xmm8, %xmm2, %xmm8 + vpxor (%rdi), %xmm1, %xmm1 + vpshufd $147, %xmm8, %xmm8 + vpxor %xmm0, %xmm1, %xmm0 + vmovups %xmm0, (%rdi) + vpxor 16(%rdi), %xmm8, %xmm0 + vpxor %xmm6, %xmm0, %xmm6 + vmovups %xmm6, 16(%rdi) + addq $64, %rsi + decq %rdx + jnz .Lbeginofloop +.Lendofloop: + ret +ENDPROC(blake2s_compress_avx) +#endif /* CONFIG_AS_AVX */ + +#ifdef CONFIG_AS_AVX512 +ENTRY(blake2s_compress_avx512) + vmovdqu (%rdi),%xmm0 + vmovdqu 0x10(%rdi),%xmm1 + vmovdqu 0x20(%rdi),%xmm4 + vmovq %rcx,%xmm5 + vmovdqa IV(%rip),%xmm14 + vmovdqa IV+16(%rip),%xmm15 + jmp .Lblake2s_compress_avx512_mainloop +.align 32 +.Lblake2s_compress_avx512_mainloop: + vmovdqa %xmm0,%xmm10 + vmovdqa %xmm1,%xmm11 + vpaddq %xmm5,%xmm4,%xmm4 + vmovdqa %xmm14,%xmm2 + vpxor %xmm15,%xmm4,%xmm3 + vmovdqu (%rsi),%ymm6 + vmovdqu 0x20(%rsi),%ymm7 + addq $0x40,%rsi + leaq SIGMA(%rip),%rax + movb $0xa,%cl +.Lblake2s_compress_avx512_roundloop: + addq $0x40,%rax + vmovdqa -0x40(%rax),%ymm8 + vmovdqa -0x20(%rax),%ymm9 + vpermi2d %ymm7,%ymm6,%ymm8 + vpermi2d %ymm7,%ymm6,%ymm9 + vmovdqa %ymm8,%ymm6 + vmovdqa %ymm9,%ymm7 + vpaddd %xmm8,%xmm0,%xmm0 + vpaddd %xmm1,%xmm0,%xmm0 + vpxor %xmm0,%xmm3,%xmm3 + vprord $0x10,%xmm3,%xmm3 + vpaddd %xmm3,%xmm2,%xmm2 + vpxor %xmm2,%xmm1,%xmm1 + vprord $0xc,%xmm1,%xmm1 + vextracti128 $0x1,%ymm8,%xmm8 + vpaddd %xmm8,%xmm0,%xmm0 + vpaddd %xmm1,%xmm0,%xmm0 + vpxor %xmm0,%xmm3,%xmm3 + vprord $0x8,%xmm3,%xmm3 + vpaddd %xmm3,%xmm2,%xmm2 + vpxor %xmm2,%xmm1,%xmm1 + vprord $0x7,%xmm1,%xmm1 + vpshufd $0x39,%xmm1,%xmm1 + vpshufd $0x4e,%xmm2,%xmm2 + vpshufd $0x93,%xmm3,%xmm3 + vpaddd %xmm9,%xmm0,%xmm0 + vpaddd %xmm1,%xmm0,%xmm0 + vpxor %xmm0,%xmm3,%xmm3 + vprord $0x10,%xmm3,%xmm3 + vpaddd %xmm3,%xmm2,%xmm2 + vpxor %xmm2,%xmm1,%xmm1 + vprord $0xc,%xmm1,%xmm1 + vextracti128 $0x1,%ymm9,%xmm9 + vpaddd %xmm9,%xmm0,%xmm0 + vpaddd %xmm1,%xmm0,%xmm0 + vpxor %xmm0,%xmm3,%xmm3 + vprord $0x8,%xmm3,%xmm3 + vpaddd %xmm3,%xmm2,%xmm2 + vpxor %xmm2,%xmm1,%xmm1 + vprord $0x7,%xmm1,%xmm1 + vpshufd $0x93,%xmm1,%xmm1 + vpshufd $0x4e,%xmm2,%xmm2 + vpshufd $0x39,%xmm3,%xmm3 + decb %cl + jne .Lblake2s_compress_avx512_roundloop + vpxor %xmm10,%xmm0,%xmm0 + vpxor %xmm11,%xmm1,%xmm1 + vpxor %xmm2,%xmm0,%xmm0 + vpxor %xmm3,%xmm1,%xmm1 + decq %rdx + jne .Lblake2s_compress_avx512_mainloop + vmovdqu %xmm0,(%rdi) + vmovdqu %xmm1,0x10(%rdi) + vmovdqu %xmm4,0x20(%rdi) + vzeroupper + retq +ENDPROC(blake2s_compress_avx512) +#endif 
/* CONFIG_AS_AVX512 */ diff --git a/lib/zinc/blake2s/blake2s.c b/lib/zinc/blake2s/blake2s.c index 58d7e9378bd4..59db1ce2f7ef 100644 --- a/lib/zinc/blake2s/blake2s.c +++ b/lib/zinc/blake2s/blake2s.c @@ -110,6 +110,9 @@ void blake2s_init_key(struct blake2s_state *state, const size_t outlen, } EXPORT_SYMBOL(blake2s_init_key); +#if defined(CONFIG_ZINC_ARCH_X86_64) +#include "blake2s-x86_64-glue.c" +#else static bool *const blake2s_nobs[] __initconst = { }; static void __init blake2s_fpu_init(void) { @@ -120,6 +123,7 @@ static inline bool blake2s_compress_arch(struct blake2s_state *state, { return false; } +#endif static inline void blake2s_compress(struct blake2s_state *state, const u8 *block, size_t nblocks, From patchwork Fri Mar 22 06:29:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 160841 Delivered-To: patch@linaro.org Received: by 2002:a02:c6d8:0:0:0:0:0 with SMTP id r24csp451170jan; Thu, 21 Mar 2019 23:38:19 -0700 (PDT) X-Google-Smtp-Source: APXvYqww/qCFKIjk7JMH4tcgLqoicl0frMn3EUOikRa8d11xkxMV/X13rMtUUFW3y4MCcIGHmY/y X-Received: by 2002:a17:902:7c8f:: with SMTP id y15mr7706035pll.44.1553236699841; Thu, 21 Mar 2019 23:38:19 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1553236699; cv=none; d=google.com; s=arc-20160816; b=pZiP5RJbBG2Qwcefn3Y7i5volsJ/sFDkRg7eWb7o4w57yhAX7LfM/Y0fQWNWG4Tcil ixYyjpkH+A/L6CysRXvFrj2JD2iUuEDqTXBL587vH0thy55pbqC3n+vZcnMG0jk5tY0s By91t5ecvr2I7bH4rmpydiIZ5/TfmQjY4g7p0zM8MoL/Sw6uOamuNFspoO8qocEvQOKp Pyqq54kMk7Gb8xg9r1UMMGEMTxMvNnxJIqUmz/7toDjdONOnu3fFJURTlHaM/G5yTxwa 42R8SP8nY50Sau9c8rHgSWCpZSVAJE8Fo1ErFeTdZr1gzNUg27/TigixihwgIH3t2ok0 QpHQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:date:from:message-id:to:references :subject; bh=9XlR2xUhTHJYWZsn9kWCLkfQiMK+Hxb9DBeEAXceMBA=; b=Btt4wcWR9IAt5IjhpwbbfvTGN30tlT6EsL28mclI633TvSGUsNDC6QLxvsiU8ONlky aTEc+3YKUeVqqdCzsH+Ga2uynOM1JQupPcyGalLfOQoRnSeLzy1l+0ze88BR3tjMQNZs Z36UBVNglo+23oyVtB83AvGrbbM34t1CJJyU0LFNHFl6dMRXZqj0HnZ/ORT+wWcJSRY7 KXiJDuxS3FlxTFLrr1D9VVWfdqqDmpaJbsFIgZsdmmwgOZ9dx1V8gVIkwNvSLKeeeyac QNnuJSWy+2uZvO7989pBhFZ3MWw7E3Z1nSJLJVBVKlu8ElHZNoXt7gj8K59T5T1fHO6j fPaQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id w14si5793784pga.584.2019.03.21.23.38.18; Thu, 21 Mar 2019 23:38:19 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727501AbfCVGiR (ORCPT + 3 others); Fri, 22 Mar 2019 02:38:17 -0400 Received: from orcrist.hmeau.com ([104.223.48.154]:44222 "EHLO deadmen.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727461AbfCVGiR (ORCPT ); Fri, 22 Mar 2019 02:38:17 -0400 Received: from gondobar.mordor.me.apana.org.au ([192.168.128.4] helo=gondobar) by deadmen.hmeau.com with esmtps (Exim 4.89 #2 (Debian)) id 1h7Dgo-0002Z4-13; Fri, 22 Mar 2019 14:30:02 +0800 Received: from herbert by gondobar with local (Exim 4.89) (envelope-from ) id 1h7Dgf-0001JM-4n; Fri, 22 Mar 2019 14:29:53 +0800 Subject: [PATCH 13/17] zinc: Curve25519 generic C implementations and selftest References: <20190322062740.nrwfx2rvmt7lzotj@gondor.apana.org.au> To: Linus Torvalds , "David S. Miller" , "Jason A. Donenfeld" , Eric Biggers , Ard Biesheuvel , Linux Crypto Mailing List , linux-fscrypt@vger.kernel.org, linux-arm-kernel@lists.infradead.org, LKML , Paul Crowley , Greg Kaiser , Samuel Neves , Tomer Ashur , Martin Willi Message-Id: From: Herbert Xu Date: Fri, 22 Mar 2019 14:29:53 +0800 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Jason A. Donenfeld This contains two formally verified C implementations of the Curve25519 scalar multiplication function, one for 32-bit systems, and one for 64-bit systems whose compiler supports efficient 128-bit integer types. Not only are these implementations formally verified, but they are also the fastest available C implementations. They have been modified to be friendly to kernel space and to be generally less horrendous looking, but still an effort has been made to retain their formally verified characteristic, and so the C might look slightly unidiomatic. The 64-bit version comes from HACL*: https://github.com/project-everest/hacl-star The 32-bit version comes from Fiat: https://github.com/mit-plv/fiat-crypto Information: https://cr.yp.to/ecdh.html Signed-off-by: Jason A. Donenfeld Cc: Karthikeyan Bhargavan Cc: Samuel Neves Cc: Jean-Philippe Aumasson Cc: Andy Lutomirski Cc: Greg KH Cc: Andrew Morton Cc: Linus Torvalds Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org Signed-off-by: Herbert Xu --- include/zinc/curve25519.h | 22 lib/zinc/Kconfig | 4 lib/zinc/Makefile | 3 lib/zinc/curve25519/curve25519-fiat32.c | 860 ++++++++++++++++++++ lib/zinc/curve25519/curve25519-hacl64.c | 784 +++++++++++++++++++ lib/zinc/curve25519/curve25519.c | 108 ++ lib/zinc/selftest/curve25519.c | 1315 ++++++++++++++++++++++++++++++++ 7 files changed, 3096 insertions(+) diff --git a/include/zinc/curve25519.h b/include/zinc/curve25519.h new file mode 100644 index 000000000000..def173e736fc --- /dev/null +++ b/include/zinc/curve25519.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0 OR MIT */ +/* + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. 
+ */ + +#ifndef _ZINC_CURVE25519_H +#define _ZINC_CURVE25519_H + +#include + +enum curve25519_lengths { + CURVE25519_KEY_SIZE = 32 +}; + +bool __must_check curve25519(u8 mypublic[CURVE25519_KEY_SIZE], + const u8 secret[CURVE25519_KEY_SIZE], + const u8 basepoint[CURVE25519_KEY_SIZE]); +void curve25519_generate_secret(u8 secret[CURVE25519_KEY_SIZE]); +bool __must_check curve25519_generate_public( + u8 pub[CURVE25519_KEY_SIZE], const u8 secret[CURVE25519_KEY_SIZE]); + +#endif /* _ZINC_CURVE25519_H */
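For orientation, a minimal usage sketch of the interface declared above follows; it is not part of this patch. The caller name, the use of -EINVAL, and the <zinc/curve25519.h> include path are illustrative assumptions, and the boolean __must_check results are read here as indicating a degenerate (all-zero) output.

#include <zinc/curve25519.h>	/* assumed include path for the header above */
#include <linux/errno.h>

static int example_curve25519_handshake(u8 shared[CURVE25519_KEY_SIZE],
					u8 pub[CURVE25519_KEY_SIZE],
					const u8 peer_pub[CURVE25519_KEY_SIZE])
{
	u8 secret[CURVE25519_KEY_SIZE];

	/* Fresh random private key from the library. */
	curve25519_generate_secret(secret);

	/* Our public key, to be sent to the peer. */
	if (!curve25519_generate_public(pub, secret))
		return -EINVAL;

	/* Shared secret from our private key and the peer's public key. */
	if (!curve25519(shared, secret, peer_pub))
		return -EINVAL;

	return 0;
}

Checking both boolean results mirrors the __must_check annotations on the declarations above.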
26 */ + h[1] = (a0>>26) | ((a1&((1<<19)-1))<< 6); /* (32-26) + 19 = 6+19 = 25 */ + h[2] = (a1>>19) | ((a2&((1<<13)-1))<<13); /* (32-19) + 13 = 13+13 = 26 */ + h[3] = (a2>>13) | ((a3&((1<< 6)-1))<<19); /* (32-13) + 6 = 19+ 6 = 25 */ + h[4] = (a3>> 6); /* (32- 6) = 26 */ + h[5] = a4&((1<<25)-1); /* 25 */ + h[6] = (a4>>25) | ((a5&((1<<19)-1))<< 7); /* (32-25) + 19 = 7+19 = 26 */ + h[7] = (a5>>19) | ((a6&((1<<12)-1))<<13); /* (32-19) + 12 = 13+12 = 25 */ + h[8] = (a6>>12) | ((a7&((1<< 6)-1))<<20); /* (32-12) + 6 = 20+ 6 = 26 */ + h[9] = (a7>> 6)&((1<<25)-1); /* 25 */ +} + +static __always_inline void fe_frombytes(fe *h, const u8 *s) +{ + fe_frombytes_impl(h->v, s); +} + +static __always_inline u8 /*bool*/ +addcarryx_u25(u8 /*bool*/ c, u32 a, u32 b, u32 *low) +{ + /* This function extracts 25 bits of result and 1 bit of carry + * (26 total), so a 32-bit intermediate is sufficient. + */ + u32 x = a + b + c; + *low = x & ((1 << 25) - 1); + return (x >> 25) & 1; +} + +static __always_inline u8 /*bool*/ +addcarryx_u26(u8 /*bool*/ c, u32 a, u32 b, u32 *low) +{ + /* This function extracts 26 bits of result and 1 bit of carry + * (27 total), so a 32-bit intermediate is sufficient. + */ + u32 x = a + b + c; + *low = x & ((1 << 26) - 1); + return (x >> 26) & 1; +} + +static __always_inline u8 /*bool*/ +subborrow_u25(u8 /*bool*/ c, u32 a, u32 b, u32 *low) +{ + /* This function extracts 25 bits of result and 1 bit of borrow + * (26 total), so a 32-bit intermediate is sufficient. + */ + u32 x = a - b - c; + *low = x & ((1 << 25) - 1); + return x >> 31; +} + +static __always_inline u8 /*bool*/ +subborrow_u26(u8 /*bool*/ c, u32 a, u32 b, u32 *low) +{ + /* This function extracts 26 bits of result and 1 bit of borrow + *(27 total), so a 32-bit intermediate is sufficient. 
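+ * The borrow is recovered from the sign bit of the wrapped 32-bit
+ * subtraction, so the return value is always exactly 0 or 1.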
+ */ + u32 x = a - b - c; + *low = x & ((1 << 26) - 1); + return x >> 31; +} + +static __always_inline u32 cmovznz32(u32 t, u32 z, u32 nz) +{ + t = -!!t; /* all set if nonzero, 0 if 0 */ + return (t&nz) | ((~t)&z); +} + +static __always_inline void fe_freeze(u32 out[10], const u32 in1[10]) +{ + { const u32 x17 = in1[9]; + { const u32 x18 = in1[8]; + { const u32 x16 = in1[7]; + { const u32 x14 = in1[6]; + { const u32 x12 = in1[5]; + { const u32 x10 = in1[4]; + { const u32 x8 = in1[3]; + { const u32 x6 = in1[2]; + { const u32 x4 = in1[1]; + { const u32 x2 = in1[0]; + { u32 x20; u8/*bool*/ x21 = subborrow_u26(0x0, x2, 0x3ffffed, &x20); + { u32 x23; u8/*bool*/ x24 = subborrow_u25(x21, x4, 0x1ffffff, &x23); + { u32 x26; u8/*bool*/ x27 = subborrow_u26(x24, x6, 0x3ffffff, &x26); + { u32 x29; u8/*bool*/ x30 = subborrow_u25(x27, x8, 0x1ffffff, &x29); + { u32 x32; u8/*bool*/ x33 = subborrow_u26(x30, x10, 0x3ffffff, &x32); + { u32 x35; u8/*bool*/ x36 = subborrow_u25(x33, x12, 0x1ffffff, &x35); + { u32 x38; u8/*bool*/ x39 = subborrow_u26(x36, x14, 0x3ffffff, &x38); + { u32 x41; u8/*bool*/ x42 = subborrow_u25(x39, x16, 0x1ffffff, &x41); + { u32 x44; u8/*bool*/ x45 = subborrow_u26(x42, x18, 0x3ffffff, &x44); + { u32 x47; u8/*bool*/ x48 = subborrow_u25(x45, x17, 0x1ffffff, &x47); + { u32 x49 = cmovznz32(x48, 0x0, 0xffffffff); + { u32 x50 = (x49 & 0x3ffffed); + { u32 x52; u8/*bool*/ x53 = addcarryx_u26(0x0, x20, x50, &x52); + { u32 x54 = (x49 & 0x1ffffff); + { u32 x56; u8/*bool*/ x57 = addcarryx_u25(x53, x23, x54, &x56); + { u32 x58 = (x49 & 0x3ffffff); + { u32 x60; u8/*bool*/ x61 = addcarryx_u26(x57, x26, x58, &x60); + { u32 x62 = (x49 & 0x1ffffff); + { u32 x64; u8/*bool*/ x65 = addcarryx_u25(x61, x29, x62, &x64); + { u32 x66 = (x49 & 0x3ffffff); + { u32 x68; u8/*bool*/ x69 = addcarryx_u26(x65, x32, x66, &x68); + { u32 x70 = (x49 & 0x1ffffff); + { u32 x72; u8/*bool*/ x73 = addcarryx_u25(x69, x35, x70, &x72); + { u32 x74 = (x49 & 0x3ffffff); + { u32 x76; u8/*bool*/ x77 = addcarryx_u26(x73, x38, x74, &x76); + { u32 x78 = (x49 & 0x1ffffff); + { u32 x80; u8/*bool*/ x81 = addcarryx_u25(x77, x41, x78, &x80); + { u32 x82 = (x49 & 0x3ffffff); + { u32 x84; u8/*bool*/ x85 = addcarryx_u26(x81, x44, x82, &x84); + { u32 x86 = (x49 & 0x1ffffff); + { u32 x88; addcarryx_u25(x85, x47, x86, &x88); + out[0] = x52; + out[1] = x56; + out[2] = x60; + out[3] = x64; + out[4] = x68; + out[5] = x72; + out[6] = x76; + out[7] = x80; + out[8] = x84; + out[9] = x88; + }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} +} + +static __always_inline void fe_tobytes(u8 s[32], const fe *f) +{ + u32 h[10]; + fe_freeze(h, f->v); + s[0] = h[0] >> 0; + s[1] = h[0] >> 8; + s[2] = h[0] >> 16; + s[3] = (h[0] >> 24) | (h[1] << 2); + s[4] = h[1] >> 6; + s[5] = h[1] >> 14; + s[6] = (h[1] >> 22) | (h[2] << 3); + s[7] = h[2] >> 5; + s[8] = h[2] >> 13; + s[9] = (h[2] >> 21) | (h[3] << 5); + s[10] = h[3] >> 3; + s[11] = h[3] >> 11; + s[12] = (h[3] >> 19) | (h[4] << 6); + s[13] = h[4] >> 2; + s[14] = h[4] >> 10; + s[15] = h[4] >> 18; + s[16] = h[5] >> 0; + s[17] = h[5] >> 8; + s[18] = h[5] >> 16; + s[19] = (h[5] >> 24) | (h[6] << 1); + s[20] = h[6] >> 7; + s[21] = h[6] >> 15; + s[22] = (h[6] >> 23) | (h[7] << 3); + s[23] = h[7] >> 5; + s[24] = h[7] >> 13; + s[25] = (h[7] >> 21) | (h[8] << 4); + s[26] = h[8] >> 4; + s[27] = h[8] >> 12; + s[28] = (h[8] >> 20) | (h[9] << 6); + s[29] = h[9] >> 2; + s[30] = h[9] >> 10; + s[31] = h[9] >> 18; +} + +/* h = f */ +static __always_inline void fe_copy(fe *h, const fe *f) +{ + memmove(h, f, sizeof(u32) * 10); +} + +static 
__always_inline void fe_copy_lt(fe_loose *h, const fe *f) +{ + memmove(h, f, sizeof(u32) * 10); +} + +/* h = 0 */ +static __always_inline void fe_0(fe *h) +{ + memset(h, 0, sizeof(u32) * 10); +} + +/* h = 1 */ +static __always_inline void fe_1(fe *h) +{ + memset(h, 0, sizeof(u32) * 10); + h->v[0] = 1; +} + +static void fe_add_impl(u32 out[10], const u32 in1[10], const u32 in2[10]) +{ + { const u32 x20 = in1[9]; + { const u32 x21 = in1[8]; + { const u32 x19 = in1[7]; + { const u32 x17 = in1[6]; + { const u32 x15 = in1[5]; + { const u32 x13 = in1[4]; + { const u32 x11 = in1[3]; + { const u32 x9 = in1[2]; + { const u32 x7 = in1[1]; + { const u32 x5 = in1[0]; + { const u32 x38 = in2[9]; + { const u32 x39 = in2[8]; + { const u32 x37 = in2[7]; + { const u32 x35 = in2[6]; + { const u32 x33 = in2[5]; + { const u32 x31 = in2[4]; + { const u32 x29 = in2[3]; + { const u32 x27 = in2[2]; + { const u32 x25 = in2[1]; + { const u32 x23 = in2[0]; + out[0] = (x5 + x23); + out[1] = (x7 + x25); + out[2] = (x9 + x27); + out[3] = (x11 + x29); + out[4] = (x13 + x31); + out[5] = (x15 + x33); + out[6] = (x17 + x35); + out[7] = (x19 + x37); + out[8] = (x21 + x39); + out[9] = (x20 + x38); + }}}}}}}}}}}}}}}}}}}} +} + +/* h = f + g + * Can overlap h with f or g. + */ +static __always_inline void fe_add(fe_loose *h, const fe *f, const fe *g) +{ + fe_add_impl(h->v, f->v, g->v); +} + +static void fe_sub_impl(u32 out[10], const u32 in1[10], const u32 in2[10]) +{ + { const u32 x20 = in1[9]; + { const u32 x21 = in1[8]; + { const u32 x19 = in1[7]; + { const u32 x17 = in1[6]; + { const u32 x15 = in1[5]; + { const u32 x13 = in1[4]; + { const u32 x11 = in1[3]; + { const u32 x9 = in1[2]; + { const u32 x7 = in1[1]; + { const u32 x5 = in1[0]; + { const u32 x38 = in2[9]; + { const u32 x39 = in2[8]; + { const u32 x37 = in2[7]; + { const u32 x35 = in2[6]; + { const u32 x33 = in2[5]; + { const u32 x31 = in2[4]; + { const u32 x29 = in2[3]; + { const u32 x27 = in2[2]; + { const u32 x25 = in2[1]; + { const u32 x23 = in2[0]; + out[0] = ((0x7ffffda + x5) - x23); + out[1] = ((0x3fffffe + x7) - x25); + out[2] = ((0x7fffffe + x9) - x27); + out[3] = ((0x3fffffe + x11) - x29); + out[4] = ((0x7fffffe + x13) - x31); + out[5] = ((0x3fffffe + x15) - x33); + out[6] = ((0x7fffffe + x17) - x35); + out[7] = ((0x3fffffe + x19) - x37); + out[8] = ((0x7fffffe + x21) - x39); + out[9] = ((0x3fffffe + x20) - x38); + }}}}}}}}}}}}}}}}}}}} +} + +/* h = f - g + * Can overlap h with f or g. 
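+ * The constants added in fe_sub_impl above (0x7ffffda, 0x3fffffe,
+ * 0x7fffffe, ...) are the limbs of 2*p, so every limb stays non-negative
+ * before the subtraction; the output is therefore only fe_loose.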
+ */ +static __always_inline void fe_sub(fe_loose *h, const fe *f, const fe *g) +{ + fe_sub_impl(h->v, f->v, g->v); +} + +static void fe_mul_impl(u32 out[10], const u32 in1[10], const u32 in2[10]) +{ + { const u32 x20 = in1[9]; + { const u32 x21 = in1[8]; + { const u32 x19 = in1[7]; + { const u32 x17 = in1[6]; + { const u32 x15 = in1[5]; + { const u32 x13 = in1[4]; + { const u32 x11 = in1[3]; + { const u32 x9 = in1[2]; + { const u32 x7 = in1[1]; + { const u32 x5 = in1[0]; + { const u32 x38 = in2[9]; + { const u32 x39 = in2[8]; + { const u32 x37 = in2[7]; + { const u32 x35 = in2[6]; + { const u32 x33 = in2[5]; + { const u32 x31 = in2[4]; + { const u32 x29 = in2[3]; + { const u32 x27 = in2[2]; + { const u32 x25 = in2[1]; + { const u32 x23 = in2[0]; + { u64 x40 = ((u64)x23 * x5); + { u64 x41 = (((u64)x23 * x7) + ((u64)x25 * x5)); + { u64 x42 = ((((u64)(0x2 * x25) * x7) + ((u64)x23 * x9)) + ((u64)x27 * x5)); + { u64 x43 = (((((u64)x25 * x9) + ((u64)x27 * x7)) + ((u64)x23 * x11)) + ((u64)x29 * x5)); + { u64 x44 = (((((u64)x27 * x9) + (0x2 * (((u64)x25 * x11) + ((u64)x29 * x7)))) + ((u64)x23 * x13)) + ((u64)x31 * x5)); + { u64 x45 = (((((((u64)x27 * x11) + ((u64)x29 * x9)) + ((u64)x25 * x13)) + ((u64)x31 * x7)) + ((u64)x23 * x15)) + ((u64)x33 * x5)); + { u64 x46 = (((((0x2 * ((((u64)x29 * x11) + ((u64)x25 * x15)) + ((u64)x33 * x7))) + ((u64)x27 * x13)) + ((u64)x31 * x9)) + ((u64)x23 * x17)) + ((u64)x35 * x5)); + { u64 x47 = (((((((((u64)x29 * x13) + ((u64)x31 * x11)) + ((u64)x27 * x15)) + ((u64)x33 * x9)) + ((u64)x25 * x17)) + ((u64)x35 * x7)) + ((u64)x23 * x19)) + ((u64)x37 * x5)); + { u64 x48 = (((((((u64)x31 * x13) + (0x2 * (((((u64)x29 * x15) + ((u64)x33 * x11)) + ((u64)x25 * x19)) + ((u64)x37 * x7)))) + ((u64)x27 * x17)) + ((u64)x35 * x9)) + ((u64)x23 * x21)) + ((u64)x39 * x5)); + { u64 x49 = (((((((((((u64)x31 * x15) + ((u64)x33 * x13)) + ((u64)x29 * x17)) + ((u64)x35 * x11)) + ((u64)x27 * x19)) + ((u64)x37 * x9)) + ((u64)x25 * x21)) + ((u64)x39 * x7)) + ((u64)x23 * x20)) + ((u64)x38 * x5)); + { u64 x50 = (((((0x2 * ((((((u64)x33 * x15) + ((u64)x29 * x19)) + ((u64)x37 * x11)) + ((u64)x25 * x20)) + ((u64)x38 * x7))) + ((u64)x31 * x17)) + ((u64)x35 * x13)) + ((u64)x27 * x21)) + ((u64)x39 * x9)); + { u64 x51 = (((((((((u64)x33 * x17) + ((u64)x35 * x15)) + ((u64)x31 * x19)) + ((u64)x37 * x13)) + ((u64)x29 * x21)) + ((u64)x39 * x11)) + ((u64)x27 * x20)) + ((u64)x38 * x9)); + { u64 x52 = (((((u64)x35 * x17) + (0x2 * (((((u64)x33 * x19) + ((u64)x37 * x15)) + ((u64)x29 * x20)) + ((u64)x38 * x11)))) + ((u64)x31 * x21)) + ((u64)x39 * x13)); + { u64 x53 = (((((((u64)x35 * x19) + ((u64)x37 * x17)) + ((u64)x33 * x21)) + ((u64)x39 * x15)) + ((u64)x31 * x20)) + ((u64)x38 * x13)); + { u64 x54 = (((0x2 * ((((u64)x37 * x19) + ((u64)x33 * x20)) + ((u64)x38 * x15))) + ((u64)x35 * x21)) + ((u64)x39 * x17)); + { u64 x55 = (((((u64)x37 * x21) + ((u64)x39 * x19)) + ((u64)x35 * x20)) + ((u64)x38 * x17)); + { u64 x56 = (((u64)x39 * x21) + (0x2 * (((u64)x37 * x20) + ((u64)x38 * x19)))); + { u64 x57 = (((u64)x39 * x20) + ((u64)x38 * x21)); + { u64 x58 = ((u64)(0x2 * x38) * x20); + { u64 x59 = (x48 + (x58 << 0x4)); + { u64 x60 = (x59 + (x58 << 0x1)); + { u64 x61 = (x60 + x58); + { u64 x62 = (x47 + (x57 << 0x4)); + { u64 x63 = (x62 + (x57 << 0x1)); + { u64 x64 = (x63 + x57); + { u64 x65 = (x46 + (x56 << 0x4)); + { u64 x66 = (x65 + (x56 << 0x1)); + { u64 x67 = (x66 + x56); + { u64 x68 = (x45 + (x55 << 0x4)); + { u64 x69 = (x68 + (x55 << 0x1)); + { u64 x70 = (x69 + x55); + { u64 x71 = (x44 + (x54 << 0x4)); + { u64 x72 = 
(x71 + (x54 << 0x1)); + { u64 x73 = (x72 + x54); + { u64 x74 = (x43 + (x53 << 0x4)); + { u64 x75 = (x74 + (x53 << 0x1)); + { u64 x76 = (x75 + x53); + { u64 x77 = (x42 + (x52 << 0x4)); + { u64 x78 = (x77 + (x52 << 0x1)); + { u64 x79 = (x78 + x52); + { u64 x80 = (x41 + (x51 << 0x4)); + { u64 x81 = (x80 + (x51 << 0x1)); + { u64 x82 = (x81 + x51); + { u64 x83 = (x40 + (x50 << 0x4)); + { u64 x84 = (x83 + (x50 << 0x1)); + { u64 x85 = (x84 + x50); + { u64 x86 = (x85 >> 0x1a); + { u32 x87 = ((u32)x85 & 0x3ffffff); + { u64 x88 = (x86 + x82); + { u64 x89 = (x88 >> 0x19); + { u32 x90 = ((u32)x88 & 0x1ffffff); + { u64 x91 = (x89 + x79); + { u64 x92 = (x91 >> 0x1a); + { u32 x93 = ((u32)x91 & 0x3ffffff); + { u64 x94 = (x92 + x76); + { u64 x95 = (x94 >> 0x19); + { u32 x96 = ((u32)x94 & 0x1ffffff); + { u64 x97 = (x95 + x73); + { u64 x98 = (x97 >> 0x1a); + { u32 x99 = ((u32)x97 & 0x3ffffff); + { u64 x100 = (x98 + x70); + { u64 x101 = (x100 >> 0x19); + { u32 x102 = ((u32)x100 & 0x1ffffff); + { u64 x103 = (x101 + x67); + { u64 x104 = (x103 >> 0x1a); + { u32 x105 = ((u32)x103 & 0x3ffffff); + { u64 x106 = (x104 + x64); + { u64 x107 = (x106 >> 0x19); + { u32 x108 = ((u32)x106 & 0x1ffffff); + { u64 x109 = (x107 + x61); + { u64 x110 = (x109 >> 0x1a); + { u32 x111 = ((u32)x109 & 0x3ffffff); + { u64 x112 = (x110 + x49); + { u64 x113 = (x112 >> 0x19); + { u32 x114 = ((u32)x112 & 0x1ffffff); + { u64 x115 = (x87 + (0x13 * x113)); + { u32 x116 = (u32) (x115 >> 0x1a); + { u32 x117 = ((u32)x115 & 0x3ffffff); + { u32 x118 = (x116 + x90); + { u32 x119 = (x118 >> 0x19); + { u32 x120 = (x118 & 0x1ffffff); + out[0] = x117; + out[1] = x120; + out[2] = (x119 + x93); + out[3] = x96; + out[4] = x99; + out[5] = x102; + out[6] = x105; + out[7] = x108; + out[8] = x111; + out[9] = x114; + }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} +} + +static __always_inline void fe_mul_ttt(fe *h, const fe *f, const fe *g) +{ + fe_mul_impl(h->v, f->v, g->v); +} + +static __always_inline void fe_mul_tlt(fe *h, const fe_loose *f, const fe *g) +{ + fe_mul_impl(h->v, f->v, g->v); +} + +static __always_inline void +fe_mul_tll(fe *h, const fe_loose *f, const fe_loose *g) +{ + fe_mul_impl(h->v, f->v, g->v); +} + +static void fe_sqr_impl(u32 out[10], const u32 in1[10]) +{ + { const u32 x17 = in1[9]; + { const u32 x18 = in1[8]; + { const u32 x16 = in1[7]; + { const u32 x14 = in1[6]; + { const u32 x12 = in1[5]; + { const u32 x10 = in1[4]; + { const u32 x8 = in1[3]; + { const u32 x6 = in1[2]; + { const u32 x4 = in1[1]; + { const u32 x2 = in1[0]; + { u64 x19 = ((u64)x2 * x2); + { u64 x20 = ((u64)(0x2 * x2) * x4); + { u64 x21 = (0x2 * (((u64)x4 * x4) + ((u64)x2 * x6))); + { u64 x22 = (0x2 * (((u64)x4 * x6) + ((u64)x2 * x8))); + { u64 x23 = ((((u64)x6 * x6) + ((u64)(0x4 * x4) * x8)) + ((u64)(0x2 * x2) * x10)); + { u64 x24 = (0x2 * ((((u64)x6 * x8) + ((u64)x4 * x10)) + ((u64)x2 * x12))); + { u64 x25 = (0x2 * (((((u64)x8 * x8) + ((u64)x6 * x10)) + ((u64)x2 * x14)) + ((u64)(0x2 * x4) * x12))); + { u64 x26 = (0x2 * (((((u64)x8 * x10) + ((u64)x6 * x12)) + ((u64)x4 * x14)) + ((u64)x2 * x16))); + { u64 x27 = (((u64)x10 * x10) + (0x2 * ((((u64)x6 * x14) + ((u64)x2 * x18)) + (0x2 * (((u64)x4 * x16) + ((u64)x8 * x12)))))); + { u64 x28 = (0x2 * ((((((u64)x10 * x12) + ((u64)x8 * x14)) + ((u64)x6 * x16)) + ((u64)x4 * x18)) + ((u64)x2 * x17))); + { u64 x29 = (0x2 * (((((u64)x12 * x12) + ((u64)x10 * x14)) + ((u64)x6 * x18)) + (0x2 * (((u64)x8 * x16) + ((u64)x4 * x17))))); + { u64 x30 = (0x2 * (((((u64)x12 * x14) + 
((u64)x10 * x16)) + ((u64)x8 * x18)) + ((u64)x6 * x17))); + { u64 x31 = (((u64)x14 * x14) + (0x2 * (((u64)x10 * x18) + (0x2 * (((u64)x12 * x16) + ((u64)x8 * x17)))))); + { u64 x32 = (0x2 * ((((u64)x14 * x16) + ((u64)x12 * x18)) + ((u64)x10 * x17))); + { u64 x33 = (0x2 * ((((u64)x16 * x16) + ((u64)x14 * x18)) + ((u64)(0x2 * x12) * x17))); + { u64 x34 = (0x2 * (((u64)x16 * x18) + ((u64)x14 * x17))); + { u64 x35 = (((u64)x18 * x18) + ((u64)(0x4 * x16) * x17)); + { u64 x36 = ((u64)(0x2 * x18) * x17); + { u64 x37 = ((u64)(0x2 * x17) * x17); + { u64 x38 = (x27 + (x37 << 0x4)); + { u64 x39 = (x38 + (x37 << 0x1)); + { u64 x40 = (x39 + x37); + { u64 x41 = (x26 + (x36 << 0x4)); + { u64 x42 = (x41 + (x36 << 0x1)); + { u64 x43 = (x42 + x36); + { u64 x44 = (x25 + (x35 << 0x4)); + { u64 x45 = (x44 + (x35 << 0x1)); + { u64 x46 = (x45 + x35); + { u64 x47 = (x24 + (x34 << 0x4)); + { u64 x48 = (x47 + (x34 << 0x1)); + { u64 x49 = (x48 + x34); + { u64 x50 = (x23 + (x33 << 0x4)); + { u64 x51 = (x50 + (x33 << 0x1)); + { u64 x52 = (x51 + x33); + { u64 x53 = (x22 + (x32 << 0x4)); + { u64 x54 = (x53 + (x32 << 0x1)); + { u64 x55 = (x54 + x32); + { u64 x56 = (x21 + (x31 << 0x4)); + { u64 x57 = (x56 + (x31 << 0x1)); + { u64 x58 = (x57 + x31); + { u64 x59 = (x20 + (x30 << 0x4)); + { u64 x60 = (x59 + (x30 << 0x1)); + { u64 x61 = (x60 + x30); + { u64 x62 = (x19 + (x29 << 0x4)); + { u64 x63 = (x62 + (x29 << 0x1)); + { u64 x64 = (x63 + x29); + { u64 x65 = (x64 >> 0x1a); + { u32 x66 = ((u32)x64 & 0x3ffffff); + { u64 x67 = (x65 + x61); + { u64 x68 = (x67 >> 0x19); + { u32 x69 = ((u32)x67 & 0x1ffffff); + { u64 x70 = (x68 + x58); + { u64 x71 = (x70 >> 0x1a); + { u32 x72 = ((u32)x70 & 0x3ffffff); + { u64 x73 = (x71 + x55); + { u64 x74 = (x73 >> 0x19); + { u32 x75 = ((u32)x73 & 0x1ffffff); + { u64 x76 = (x74 + x52); + { u64 x77 = (x76 >> 0x1a); + { u32 x78 = ((u32)x76 & 0x3ffffff); + { u64 x79 = (x77 + x49); + { u64 x80 = (x79 >> 0x19); + { u32 x81 = ((u32)x79 & 0x1ffffff); + { u64 x82 = (x80 + x46); + { u64 x83 = (x82 >> 0x1a); + { u32 x84 = ((u32)x82 & 0x3ffffff); + { u64 x85 = (x83 + x43); + { u64 x86 = (x85 >> 0x19); + { u32 x87 = ((u32)x85 & 0x1ffffff); + { u64 x88 = (x86 + x40); + { u64 x89 = (x88 >> 0x1a); + { u32 x90 = ((u32)x88 & 0x3ffffff); + { u64 x91 = (x89 + x28); + { u64 x92 = (x91 >> 0x19); + { u32 x93 = ((u32)x91 & 0x1ffffff); + { u64 x94 = (x66 + (0x13 * x92)); + { u32 x95 = (u32) (x94 >> 0x1a); + { u32 x96 = ((u32)x94 & 0x3ffffff); + { u32 x97 = (x95 + x69); + { u32 x98 = (x97 >> 0x19); + { u32 x99 = (x97 & 0x1ffffff); + out[0] = x96; + out[1] = x99; + out[2] = (x98 + x72); + out[3] = x75; + out[4] = x78; + out[5] = x81; + out[6] = x84; + out[7] = x87; + out[8] = x90; + out[9] = x93; + }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} +} + +static __always_inline void fe_sq_tl(fe *h, const fe_loose *f) +{ + fe_sqr_impl(h->v, f->v); +} + +static __always_inline void fe_sq_tt(fe *h, const fe *f) +{ + fe_sqr_impl(h->v, f->v); +} + +static __always_inline void fe_loose_invert(fe *out, const fe_loose *z) +{ + fe t0; + fe t1; + fe t2; + fe t3; + int i; + + fe_sq_tl(&t0, z); + fe_sq_tt(&t1, &t0); + for (i = 1; i < 2; ++i) + fe_sq_tt(&t1, &t1); + fe_mul_tlt(&t1, z, &t1); + fe_mul_ttt(&t0, &t0, &t1); + fe_sq_tt(&t2, &t0); + fe_mul_ttt(&t1, &t1, &t2); + fe_sq_tt(&t2, &t1); + for (i = 1; i < 5; ++i) + fe_sq_tt(&t2, &t2); + fe_mul_ttt(&t1, &t2, &t1); + fe_sq_tt(&t2, &t1); + for (i = 1; i < 10; ++i) + fe_sq_tt(&t2, &t2); + fe_mul_ttt(&t2, &t2, &t1); + fe_sq_tt(&t3, &t2); + for (i = 
1; i < 20; ++i) + fe_sq_tt(&t3, &t3); + fe_mul_ttt(&t2, &t3, &t2); + fe_sq_tt(&t2, &t2); + for (i = 1; i < 10; ++i) + fe_sq_tt(&t2, &t2); + fe_mul_ttt(&t1, &t2, &t1); + fe_sq_tt(&t2, &t1); + for (i = 1; i < 50; ++i) + fe_sq_tt(&t2, &t2); + fe_mul_ttt(&t2, &t2, &t1); + fe_sq_tt(&t3, &t2); + for (i = 1; i < 100; ++i) + fe_sq_tt(&t3, &t3); + fe_mul_ttt(&t2, &t3, &t2); + fe_sq_tt(&t2, &t2); + for (i = 1; i < 50; ++i) + fe_sq_tt(&t2, &t2); + fe_mul_ttt(&t1, &t2, &t1); + fe_sq_tt(&t1, &t1); + for (i = 1; i < 5; ++i) + fe_sq_tt(&t1, &t1); + fe_mul_ttt(out, &t1, &t0); +} + +static __always_inline void fe_invert(fe *out, const fe *z) +{ + fe_loose l; + fe_copy_lt(&l, z); + fe_loose_invert(out, &l); +} + +/* Replace (f,g) with (g,f) if b == 1; + * replace (f,g) with (f,g) if b == 0. + * + * Preconditions: b in {0,1} + */ +static __always_inline void fe_cswap(fe *f, fe *g, unsigned int b) +{ + unsigned i; + b = 0-b; + for (i = 0; i < 10; i++) { + u32 x = f->v[i] ^ g->v[i]; + x &= b; + f->v[i] ^= x; + g->v[i] ^= x; + } +} + +/* NOTE: based on fiat-crypto fe_mul, edited for in2=121666, 0, 0.*/ +static __always_inline void fe_mul_121666_impl(u32 out[10], const u32 in1[10]) +{ + { const u32 x20 = in1[9]; + { const u32 x21 = in1[8]; + { const u32 x19 = in1[7]; + { const u32 x17 = in1[6]; + { const u32 x15 = in1[5]; + { const u32 x13 = in1[4]; + { const u32 x11 = in1[3]; + { const u32 x9 = in1[2]; + { const u32 x7 = in1[1]; + { const u32 x5 = in1[0]; + { const u32 x38 = 0; + { const u32 x39 = 0; + { const u32 x37 = 0; + { const u32 x35 = 0; + { const u32 x33 = 0; + { const u32 x31 = 0; + { const u32 x29 = 0; + { const u32 x27 = 0; + { const u32 x25 = 0; + { const u32 x23 = 121666; + { u64 x40 = ((u64)x23 * x5); + { u64 x41 = (((u64)x23 * x7) + ((u64)x25 * x5)); + { u64 x42 = ((((u64)(0x2 * x25) * x7) + ((u64)x23 * x9)) + ((u64)x27 * x5)); + { u64 x43 = (((((u64)x25 * x9) + ((u64)x27 * x7)) + ((u64)x23 * x11)) + ((u64)x29 * x5)); + { u64 x44 = (((((u64)x27 * x9) + (0x2 * (((u64)x25 * x11) + ((u64)x29 * x7)))) + ((u64)x23 * x13)) + ((u64)x31 * x5)); + { u64 x45 = (((((((u64)x27 * x11) + ((u64)x29 * x9)) + ((u64)x25 * x13)) + ((u64)x31 * x7)) + ((u64)x23 * x15)) + ((u64)x33 * x5)); + { u64 x46 = (((((0x2 * ((((u64)x29 * x11) + ((u64)x25 * x15)) + ((u64)x33 * x7))) + ((u64)x27 * x13)) + ((u64)x31 * x9)) + ((u64)x23 * x17)) + ((u64)x35 * x5)); + { u64 x47 = (((((((((u64)x29 * x13) + ((u64)x31 * x11)) + ((u64)x27 * x15)) + ((u64)x33 * x9)) + ((u64)x25 * x17)) + ((u64)x35 * x7)) + ((u64)x23 * x19)) + ((u64)x37 * x5)); + { u64 x48 = (((((((u64)x31 * x13) + (0x2 * (((((u64)x29 * x15) + ((u64)x33 * x11)) + ((u64)x25 * x19)) + ((u64)x37 * x7)))) + ((u64)x27 * x17)) + ((u64)x35 * x9)) + ((u64)x23 * x21)) + ((u64)x39 * x5)); + { u64 x49 = (((((((((((u64)x31 * x15) + ((u64)x33 * x13)) + ((u64)x29 * x17)) + ((u64)x35 * x11)) + ((u64)x27 * x19)) + ((u64)x37 * x9)) + ((u64)x25 * x21)) + ((u64)x39 * x7)) + ((u64)x23 * x20)) + ((u64)x38 * x5)); + { u64 x50 = (((((0x2 * ((((((u64)x33 * x15) + ((u64)x29 * x19)) + ((u64)x37 * x11)) + ((u64)x25 * x20)) + ((u64)x38 * x7))) + ((u64)x31 * x17)) + ((u64)x35 * x13)) + ((u64)x27 * x21)) + ((u64)x39 * x9)); + { u64 x51 = (((((((((u64)x33 * x17) + ((u64)x35 * x15)) + ((u64)x31 * x19)) + ((u64)x37 * x13)) + ((u64)x29 * x21)) + ((u64)x39 * x11)) + ((u64)x27 * x20)) + ((u64)x38 * x9)); + { u64 x52 = (((((u64)x35 * x17) + (0x2 * (((((u64)x33 * x19) + ((u64)x37 * x15)) + ((u64)x29 * x20)) + ((u64)x38 * x11)))) + ((u64)x31 * x21)) + ((u64)x39 * x13)); + { u64 x53 = (((((((u64)x35 * x19) + 
((u64)x37 * x17)) + ((u64)x33 * x21)) + ((u64)x39 * x15)) + ((u64)x31 * x20)) + ((u64)x38 * x13)); + { u64 x54 = (((0x2 * ((((u64)x37 * x19) + ((u64)x33 * x20)) + ((u64)x38 * x15))) + ((u64)x35 * x21)) + ((u64)x39 * x17)); + { u64 x55 = (((((u64)x37 * x21) + ((u64)x39 * x19)) + ((u64)x35 * x20)) + ((u64)x38 * x17)); + { u64 x56 = (((u64)x39 * x21) + (0x2 * (((u64)x37 * x20) + ((u64)x38 * x19)))); + { u64 x57 = (((u64)x39 * x20) + ((u64)x38 * x21)); + { u64 x58 = ((u64)(0x2 * x38) * x20); + { u64 x59 = (x48 + (x58 << 0x4)); + { u64 x60 = (x59 + (x58 << 0x1)); + { u64 x61 = (x60 + x58); + { u64 x62 = (x47 + (x57 << 0x4)); + { u64 x63 = (x62 + (x57 << 0x1)); + { u64 x64 = (x63 + x57); + { u64 x65 = (x46 + (x56 << 0x4)); + { u64 x66 = (x65 + (x56 << 0x1)); + { u64 x67 = (x66 + x56); + { u64 x68 = (x45 + (x55 << 0x4)); + { u64 x69 = (x68 + (x55 << 0x1)); + { u64 x70 = (x69 + x55); + { u64 x71 = (x44 + (x54 << 0x4)); + { u64 x72 = (x71 + (x54 << 0x1)); + { u64 x73 = (x72 + x54); + { u64 x74 = (x43 + (x53 << 0x4)); + { u64 x75 = (x74 + (x53 << 0x1)); + { u64 x76 = (x75 + x53); + { u64 x77 = (x42 + (x52 << 0x4)); + { u64 x78 = (x77 + (x52 << 0x1)); + { u64 x79 = (x78 + x52); + { u64 x80 = (x41 + (x51 << 0x4)); + { u64 x81 = (x80 + (x51 << 0x1)); + { u64 x82 = (x81 + x51); + { u64 x83 = (x40 + (x50 << 0x4)); + { u64 x84 = (x83 + (x50 << 0x1)); + { u64 x85 = (x84 + x50); + { u64 x86 = (x85 >> 0x1a); + { u32 x87 = ((u32)x85 & 0x3ffffff); + { u64 x88 = (x86 + x82); + { u64 x89 = (x88 >> 0x19); + { u32 x90 = ((u32)x88 & 0x1ffffff); + { u64 x91 = (x89 + x79); + { u64 x92 = (x91 >> 0x1a); + { u32 x93 = ((u32)x91 & 0x3ffffff); + { u64 x94 = (x92 + x76); + { u64 x95 = (x94 >> 0x19); + { u32 x96 = ((u32)x94 & 0x1ffffff); + { u64 x97 = (x95 + x73); + { u64 x98 = (x97 >> 0x1a); + { u32 x99 = ((u32)x97 & 0x3ffffff); + { u64 x100 = (x98 + x70); + { u64 x101 = (x100 >> 0x19); + { u32 x102 = ((u32)x100 & 0x1ffffff); + { u64 x103 = (x101 + x67); + { u64 x104 = (x103 >> 0x1a); + { u32 x105 = ((u32)x103 & 0x3ffffff); + { u64 x106 = (x104 + x64); + { u64 x107 = (x106 >> 0x19); + { u32 x108 = ((u32)x106 & 0x1ffffff); + { u64 x109 = (x107 + x61); + { u64 x110 = (x109 >> 0x1a); + { u32 x111 = ((u32)x109 & 0x3ffffff); + { u64 x112 = (x110 + x49); + { u64 x113 = (x112 >> 0x19); + { u32 x114 = ((u32)x112 & 0x1ffffff); + { u64 x115 = (x87 + (0x13 * x113)); + { u32 x116 = (u32) (x115 >> 0x1a); + { u32 x117 = ((u32)x115 & 0x3ffffff); + { u32 x118 = (x116 + x90); + { u32 x119 = (x118 >> 0x19); + { u32 x120 = (x118 & 0x1ffffff); + out[0] = x117; + out[1] = x120; + out[2] = (x119 + x93); + out[3] = x96; + out[4] = x99; + out[5] = x102; + out[6] = x105; + out[7] = x108; + out[8] = x111; + out[9] = x114; + }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} +} + +static __always_inline void fe_mul121666(fe *h, const fe_loose *f) +{ + fe_mul_121666_impl(h->v, f->v); +} + +static void curve25519_generic(u8 out[CURVE25519_KEY_SIZE], + const u8 scalar[CURVE25519_KEY_SIZE], + const u8 point[CURVE25519_KEY_SIZE]) +{ + fe x1, x2, z2, x3, z3; + fe_loose x2l, z2l, x3l; + unsigned swap = 0; + int pos; + u8 e[32]; + + memcpy(e, scalar, 32); + normalize_secret(e); + + /* The following implementation was transcribed to Coq and proven to + * correspond to unary scalar multiplication in affine coordinates given + * that x1 != 0 is the x coordinate of some point on the curve. 
It was + * also checked in Coq that doing a ladderstep with x1 = x3 = 0 gives + * z2' = z3' = 0, and z2 = z3 = 0 gives z2' = z3' = 0. The statement was + * quantified over the underlying field, so it applies to Curve25519 + * itself and the quadratic twist of Curve25519. It was not proven in + * Coq that prime-field arithmetic correctly simulates extension-field + * arithmetic on prime-field values. The decoding of the byte array + * representation of e was not considered. + * + * Specification of Montgomery curves in affine coordinates: + * + * + * Proof that these form a group that is isomorphic to a Weierstrass + * curve: + * + * + * Coq transcription and correctness proof of the loop + * (where scalarbits=255): + * + * + * preconditions: 0 <= e < 2^255 (not necessarily e < order), + * fe_invert(0) = 0 + */ + fe_frombytes(&x1, point); + fe_1(&x2); + fe_0(&z2); + fe_copy(&x3, &x1); + fe_1(&z3); + + for (pos = 254; pos >= 0; --pos) { + fe tmp0, tmp1; + fe_loose tmp0l, tmp1l; + /* loop invariant as of right before the test, for the case + * where x1 != 0: + * pos >= -1; if z2 = 0 then x2 is nonzero; if z3 = 0 then x3 + * is nonzero + * let r := e >> (pos+1) in the following equalities of + * projective points: + * to_xz (r*P) === if swap then (x3, z3) else (x2, z2) + * to_xz ((r+1)*P) === if swap then (x2, z2) else (x3, z3) + * x1 is the nonzero x coordinate of the nonzero + * point (r*P-(r+1)*P) + */ + unsigned b = 1 & (e[pos / 8] >> (pos & 7)); + swap ^= b; + fe_cswap(&x2, &x3, swap); + fe_cswap(&z2, &z3, swap); + swap = b; + /* Coq transcription of ladderstep formula (called from + * transcribed loop): + * + * + * x1 != 0 + * x1 = 0 + */ + fe_sub(&tmp0l, &x3, &z3); + fe_sub(&tmp1l, &x2, &z2); + fe_add(&x2l, &x2, &z2); + fe_add(&z2l, &x3, &z3); + fe_mul_tll(&z3, &tmp0l, &x2l); + fe_mul_tll(&z2, &z2l, &tmp1l); + fe_sq_tl(&tmp0, &tmp1l); + fe_sq_tl(&tmp1, &x2l); + fe_add(&x3l, &z3, &z2); + fe_sub(&z2l, &z3, &z2); + fe_mul_ttt(&x2, &tmp1, &tmp0); + fe_sub(&tmp1l, &tmp1, &tmp0); + fe_sq_tl(&z2, &z2l); + fe_mul121666(&z3, &tmp1l); + fe_sq_tl(&x3, &x3l); + fe_add(&tmp0l, &tmp0, &z3); + fe_mul_ttt(&z3, &x1, &z2); + fe_mul_tll(&z2, &tmp1l, &tmp0l); + } + /* here pos=-1, so r=e, so to_xz (e*P) === if swap then (x3, z3) + * else (x2, z2) + */ + fe_cswap(&x2, &x3, swap); + fe_cswap(&z2, &z3, swap); + + fe_invert(&z2, &z2); + fe_mul_ttt(&x2, &x2, &z2); + fe_tobytes(out, &x2); + + memzero_explicit(&x1, sizeof(x1)); + memzero_explicit(&x2, sizeof(x2)); + memzero_explicit(&z2, sizeof(z2)); + memzero_explicit(&x3, sizeof(x3)); + memzero_explicit(&z3, sizeof(z3)); + memzero_explicit(&x2l, sizeof(x2l)); + memzero_explicit(&z2l, sizeof(z2l)); + memzero_explicit(&x3l, sizeof(x3l)); + memzero_explicit(&e, sizeof(e)); +} diff --git a/lib/zinc/curve25519/curve25519-hacl64.c b/lib/zinc/curve25519/curve25519-hacl64.c new file mode 100644 index 000000000000..ce7dfcf064b1 --- /dev/null +++ b/lib/zinc/curve25519/curve25519-hacl64.c @@ -0,0 +1,784 @@ +// SPDX-License-Identifier: GPL-2.0 OR MIT +/* + * Copyright (C) 2016-2017 INRIA and Microsoft Corporation. + * Copyright (C) 2018 Jason A. Donenfeld . All Rights Reserved. + * + * This is a machine-generated formally verified implementation of Curve25519 + * ECDH from: . Though originally machine + * generated, it has been tweaked to be suitable for use in the kernel. It is + * optimized for 64-bit machines that can efficiently work with 128-bit + * integer types. 
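+ *
+ * Field elements are held in five u64 limbs in radix 2^51 (masked with
+ * 0x7ffffffffffff), and limb products are accumulated in unsigned 128-bit
+ * integers before the carries are propagated.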
+ */ + +typedef __uint128_t u128; + +static __always_inline u64 u64_eq_mask(u64 a, u64 b) +{ + u64 x = a ^ b; + u64 minus_x = ~x + (u64)1U; + u64 x_or_minus_x = x | minus_x; + u64 xnx = x_or_minus_x >> (u32)63U; + u64 c = xnx - (u64)1U; + return c; +} + +static __always_inline u64 u64_gte_mask(u64 a, u64 b) +{ + u64 x = a; + u64 y = b; + u64 x_xor_y = x ^ y; + u64 x_sub_y = x - y; + u64 x_sub_y_xor_y = x_sub_y ^ y; + u64 q = x_xor_y | x_sub_y_xor_y; + u64 x_xor_q = x ^ q; + u64 x_xor_q_ = x_xor_q >> (u32)63U; + u64 c = x_xor_q_ - (u64)1U; + return c; +} + +static __always_inline void modulo_carry_top(u64 *b) +{ + u64 b4 = b[4]; + u64 b0 = b[0]; + u64 b4_ = b4 & 0x7ffffffffffffLLU; + u64 b0_ = b0 + 19 * (b4 >> 51); + b[4] = b4_; + b[0] = b0_; +} + +static __always_inline void fproduct_copy_from_wide_(u64 *output, u128 *input) +{ + { + u128 xi = input[0]; + output[0] = ((u64)(xi)); + } + { + u128 xi = input[1]; + output[1] = ((u64)(xi)); + } + { + u128 xi = input[2]; + output[2] = ((u64)(xi)); + } + { + u128 xi = input[3]; + output[3] = ((u64)(xi)); + } + { + u128 xi = input[4]; + output[4] = ((u64)(xi)); + } +} + +static __always_inline void +fproduct_sum_scalar_multiplication_(u128 *output, u64 *input, u64 s) +{ + output[0] += (u128)input[0] * s; + output[1] += (u128)input[1] * s; + output[2] += (u128)input[2] * s; + output[3] += (u128)input[3] * s; + output[4] += (u128)input[4] * s; +} + +static __always_inline void fproduct_carry_wide_(u128 *tmp) +{ + { + u32 ctr = 0; + u128 tctr = tmp[ctr]; + u128 tctrp1 = tmp[ctr + 1]; + u64 r0 = ((u64)(tctr)) & 0x7ffffffffffffLLU; + u128 c = ((tctr) >> (51)); + tmp[ctr] = ((u128)(r0)); + tmp[ctr + 1] = ((tctrp1) + (c)); + } + { + u32 ctr = 1; + u128 tctr = tmp[ctr]; + u128 tctrp1 = tmp[ctr + 1]; + u64 r0 = ((u64)(tctr)) & 0x7ffffffffffffLLU; + u128 c = ((tctr) >> (51)); + tmp[ctr] = ((u128)(r0)); + tmp[ctr + 1] = ((tctrp1) + (c)); + } + + { + u32 ctr = 2; + u128 tctr = tmp[ctr]; + u128 tctrp1 = tmp[ctr + 1]; + u64 r0 = ((u64)(tctr)) & 0x7ffffffffffffLLU; + u128 c = ((tctr) >> (51)); + tmp[ctr] = ((u128)(r0)); + tmp[ctr + 1] = ((tctrp1) + (c)); + } + { + u32 ctr = 3; + u128 tctr = tmp[ctr]; + u128 tctrp1 = tmp[ctr + 1]; + u64 r0 = ((u64)(tctr)) & 0x7ffffffffffffLLU; + u128 c = ((tctr) >> (51)); + tmp[ctr] = ((u128)(r0)); + tmp[ctr + 1] = ((tctrp1) + (c)); + } +} + +static __always_inline void fmul_shift_reduce(u64 *output) +{ + u64 tmp = output[4]; + u64 b0; + { + u32 ctr = 5 - 0 - 1; + u64 z = output[ctr - 1]; + output[ctr] = z; + } + { + u32 ctr = 5 - 1 - 1; + u64 z = output[ctr - 1]; + output[ctr] = z; + } + { + u32 ctr = 5 - 2 - 1; + u64 z = output[ctr - 1]; + output[ctr] = z; + } + { + u32 ctr = 5 - 3 - 1; + u64 z = output[ctr - 1]; + output[ctr] = z; + } + output[0] = tmp; + b0 = output[0]; + output[0] = 19 * b0; +} + +static __always_inline void fmul_mul_shift_reduce_(u128 *output, u64 *input, + u64 *input21) +{ + u32 i; + u64 input2i; + { + u64 input2i = input21[0]; + fproduct_sum_scalar_multiplication_(output, input, input2i); + fmul_shift_reduce(input); + } + { + u64 input2i = input21[1]; + fproduct_sum_scalar_multiplication_(output, input, input2i); + fmul_shift_reduce(input); + } + { + u64 input2i = input21[2]; + fproduct_sum_scalar_multiplication_(output, input, input2i); + fmul_shift_reduce(input); + } + { + u64 input2i = input21[3]; + fproduct_sum_scalar_multiplication_(output, input, input2i); + fmul_shift_reduce(input); + } + i = 4; + input2i = input21[i]; + fproduct_sum_scalar_multiplication_(output, input, input2i); +} + +static 
__always_inline void fmul_fmul(u64 *output, u64 *input, u64 *input21) +{ + u64 tmp[5] = { input[0], input[1], input[2], input[3], input[4] }; + { + u128 b4; + u128 b0; + u128 b4_; + u128 b0_; + u64 i0; + u64 i1; + u64 i0_; + u64 i1_; + u128 t[5] = { 0 }; + fmul_mul_shift_reduce_(t, tmp, input21); + fproduct_carry_wide_(t); + b4 = t[4]; + b0 = t[0]; + b4_ = ((b4) & (((u128)(0x7ffffffffffffLLU)))); + b0_ = ((b0) + (((u128)(19) * (((u64)(((b4) >> (51)))))))); + t[4] = b4_; + t[0] = b0_; + fproduct_copy_from_wide_(output, t); + i0 = output[0]; + i1 = output[1]; + i0_ = i0 & 0x7ffffffffffffLLU; + i1_ = i1 + (i0 >> 51); + output[0] = i0_; + output[1] = i1_; + } +} + +static __always_inline void fsquare_fsquare__(u128 *tmp, u64 *output) +{ + u64 r0 = output[0]; + u64 r1 = output[1]; + u64 r2 = output[2]; + u64 r3 = output[3]; + u64 r4 = output[4]; + u64 d0 = r0 * 2; + u64 d1 = r1 * 2; + u64 d2 = r2 * 2 * 19; + u64 d419 = r4 * 19; + u64 d4 = d419 * 2; + u128 s0 = ((((((u128)(r0) * (r0))) + (((u128)(d4) * (r1))))) + + (((u128)(d2) * (r3)))); + u128 s1 = ((((((u128)(d0) * (r1))) + (((u128)(d4) * (r2))))) + + (((u128)(r3 * 19) * (r3)))); + u128 s2 = ((((((u128)(d0) * (r2))) + (((u128)(r1) * (r1))))) + + (((u128)(d4) * (r3)))); + u128 s3 = ((((((u128)(d0) * (r3))) + (((u128)(d1) * (r2))))) + + (((u128)(r4) * (d419)))); + u128 s4 = ((((((u128)(d0) * (r4))) + (((u128)(d1) * (r3))))) + + (((u128)(r2) * (r2)))); + tmp[0] = s0; + tmp[1] = s1; + tmp[2] = s2; + tmp[3] = s3; + tmp[4] = s4; +} + +static __always_inline void fsquare_fsquare_(u128 *tmp, u64 *output) +{ + u128 b4; + u128 b0; + u128 b4_; + u128 b0_; + u64 i0; + u64 i1; + u64 i0_; + u64 i1_; + fsquare_fsquare__(tmp, output); + fproduct_carry_wide_(tmp); + b4 = tmp[4]; + b0 = tmp[0]; + b4_ = ((b4) & (((u128)(0x7ffffffffffffLLU)))); + b0_ = ((b0) + (((u128)(19) * (((u64)(((b4) >> (51)))))))); + tmp[4] = b4_; + tmp[0] = b0_; + fproduct_copy_from_wide_(output, tmp); + i0 = output[0]; + i1 = output[1]; + i0_ = i0 & 0x7ffffffffffffLLU; + i1_ = i1 + (i0 >> 51); + output[0] = i0_; + output[1] = i1_; +} + +static __always_inline void fsquare_fsquare_times_(u64 *output, u128 *tmp, + u32 count1) +{ + u32 i; + fsquare_fsquare_(tmp, output); + for (i = 1; i < count1; ++i) + fsquare_fsquare_(tmp, output); +} + +static __always_inline void fsquare_fsquare_times(u64 *output, u64 *input, + u32 count1) +{ + u128 t[5]; + memcpy(output, input, 5 * sizeof(*input)); + fsquare_fsquare_times_(output, t, count1); +} + +static __always_inline void fsquare_fsquare_times_inplace(u64 *output, + u32 count1) +{ + u128 t[5]; + fsquare_fsquare_times_(output, t, count1); +} + +static __always_inline void crecip_crecip(u64 *out, u64 *z) +{ + u64 buf[20] = { 0 }; + u64 *a0 = buf; + u64 *t00 = buf + 5; + u64 *b0 = buf + 10; + u64 *t01; + u64 *b1; + u64 *c0; + u64 *a; + u64 *t0; + u64 *b; + u64 *c; + fsquare_fsquare_times(a0, z, 1); + fsquare_fsquare_times(t00, a0, 2); + fmul_fmul(b0, t00, z); + fmul_fmul(a0, b0, a0); + fsquare_fsquare_times(t00, a0, 1); + fmul_fmul(b0, t00, b0); + fsquare_fsquare_times(t00, b0, 5); + t01 = buf + 5; + b1 = buf + 10; + c0 = buf + 15; + fmul_fmul(b1, t01, b1); + fsquare_fsquare_times(t01, b1, 10); + fmul_fmul(c0, t01, b1); + fsquare_fsquare_times(t01, c0, 20); + fmul_fmul(t01, t01, c0); + fsquare_fsquare_times_inplace(t01, 10); + fmul_fmul(b1, t01, b1); + fsquare_fsquare_times(t01, b1, 50); + a = buf; + t0 = buf + 5; + b = buf + 10; + c = buf + 15; + fmul_fmul(c, t0, b); + fsquare_fsquare_times(t0, c, 100); + fmul_fmul(t0, t0, c); + 
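+ /* The remaining squarings and multiplications complete the
+ * z^(2^255 - 21) = z^(p - 2) exponentiation, i.e. Fermat inversion
+ * modulo p = 2^255 - 19.
+ */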
fsquare_fsquare_times_inplace(t0, 50); + fmul_fmul(t0, t0, b); + fsquare_fsquare_times_inplace(t0, 5); + fmul_fmul(out, t0, a); +} + +static __always_inline void fsum(u64 *a, u64 *b) +{ + a[0] += b[0]; + a[1] += b[1]; + a[2] += b[2]; + a[3] += b[3]; + a[4] += b[4]; +} + +static __always_inline void fdifference(u64 *a, u64 *b) +{ + u64 tmp[5] = { 0 }; + u64 b0; + u64 b1; + u64 b2; + u64 b3; + u64 b4; + memcpy(tmp, b, 5 * sizeof(*b)); + b0 = tmp[0]; + b1 = tmp[1]; + b2 = tmp[2]; + b3 = tmp[3]; + b4 = tmp[4]; + tmp[0] = b0 + 0x3fffffffffff68LLU; + tmp[1] = b1 + 0x3ffffffffffff8LLU; + tmp[2] = b2 + 0x3ffffffffffff8LLU; + tmp[3] = b3 + 0x3ffffffffffff8LLU; + tmp[4] = b4 + 0x3ffffffffffff8LLU; + { + u64 xi = a[0]; + u64 yi = tmp[0]; + a[0] = yi - xi; + } + { + u64 xi = a[1]; + u64 yi = tmp[1]; + a[1] = yi - xi; + } + { + u64 xi = a[2]; + u64 yi = tmp[2]; + a[2] = yi - xi; + } + { + u64 xi = a[3]; + u64 yi = tmp[3]; + a[3] = yi - xi; + } + { + u64 xi = a[4]; + u64 yi = tmp[4]; + a[4] = yi - xi; + } +} + +static __always_inline void fscalar(u64 *output, u64 *b, u64 s) +{ + u128 tmp[5]; + u128 b4; + u128 b0; + u128 b4_; + u128 b0_; + { + u64 xi = b[0]; + tmp[0] = ((u128)(xi) * (s)); + } + { + u64 xi = b[1]; + tmp[1] = ((u128)(xi) * (s)); + } + { + u64 xi = b[2]; + tmp[2] = ((u128)(xi) * (s)); + } + { + u64 xi = b[3]; + tmp[3] = ((u128)(xi) * (s)); + } + { + u64 xi = b[4]; + tmp[4] = ((u128)(xi) * (s)); + } + fproduct_carry_wide_(tmp); + b4 = tmp[4]; + b0 = tmp[0]; + b4_ = ((b4) & (((u128)(0x7ffffffffffffLLU)))); + b0_ = ((b0) + (((u128)(19) * (((u64)(((b4) >> (51)))))))); + tmp[4] = b4_; + tmp[0] = b0_; + fproduct_copy_from_wide_(output, tmp); +} + +static __always_inline void fmul(u64 *output, u64 *a, u64 *b) +{ + fmul_fmul(output, a, b); +} + +static __always_inline void crecip(u64 *output, u64 *input) +{ + crecip_crecip(output, input); +} + +static __always_inline void point_swap_conditional_step(u64 *a, u64 *b, + u64 swap1, u32 ctr) +{ + u32 i = ctr - 1; + u64 ai = a[i]; + u64 bi = b[i]; + u64 x = swap1 & (ai ^ bi); + u64 ai1 = ai ^ x; + u64 bi1 = bi ^ x; + a[i] = ai1; + b[i] = bi1; +} + +static __always_inline void point_swap_conditional5(u64 *a, u64 *b, u64 swap1) +{ + point_swap_conditional_step(a, b, swap1, 5); + point_swap_conditional_step(a, b, swap1, 4); + point_swap_conditional_step(a, b, swap1, 3); + point_swap_conditional_step(a, b, swap1, 2); + point_swap_conditional_step(a, b, swap1, 1); +} + +static __always_inline void point_swap_conditional(u64 *a, u64 *b, u64 iswap) +{ + u64 swap1 = 0 - iswap; + point_swap_conditional5(a, b, swap1); + point_swap_conditional5(a + 5, b + 5, swap1); +} + +static __always_inline void point_copy(u64 *output, u64 *input) +{ + memcpy(output, input, 5 * sizeof(*input)); + memcpy(output + 5, input + 5, 5 * sizeof(*input)); +} + +static __always_inline void addanddouble_fmonty(u64 *pp, u64 *ppq, u64 *p, + u64 *pq, u64 *qmqp) +{ + u64 *qx = qmqp; + u64 *x2 = pp; + u64 *z2 = pp + 5; + u64 *x3 = ppq; + u64 *z3 = ppq + 5; + u64 *x = p; + u64 *z = p + 5; + u64 *xprime = pq; + u64 *zprime = pq + 5; + u64 buf[40] = { 0 }; + u64 *origx = buf; + u64 *origxprime0 = buf + 5; + u64 *xxprime0; + u64 *zzprime0; + u64 *origxprime; + xxprime0 = buf + 25; + zzprime0 = buf + 30; + memcpy(origx, x, 5 * sizeof(*x)); + fsum(x, z); + fdifference(z, origx); + memcpy(origxprime0, xprime, 5 * sizeof(*xprime)); + fsum(xprime, zprime); + fdifference(zprime, origxprime0); + fmul(xxprime0, xprime, z); + fmul(zzprime0, x, zprime); + origxprime = buf + 5; + { + u64 *xx0; + u64 *zz0; + 
u64 *xxprime; + u64 *zzprime; + u64 *zzzprime; + xx0 = buf + 15; + zz0 = buf + 20; + xxprime = buf + 25; + zzprime = buf + 30; + zzzprime = buf + 35; + memcpy(origxprime, xxprime, 5 * sizeof(*xxprime)); + fsum(xxprime, zzprime); + fdifference(zzprime, origxprime); + fsquare_fsquare_times(x3, xxprime, 1); + fsquare_fsquare_times(zzzprime, zzprime, 1); + fmul(z3, zzzprime, qx); + fsquare_fsquare_times(xx0, x, 1); + fsquare_fsquare_times(zz0, z, 1); + { + u64 *zzz; + u64 *xx; + u64 *zz; + u64 scalar; + zzz = buf + 10; + xx = buf + 15; + zz = buf + 20; + fmul(x2, xx, zz); + fdifference(zz, xx); + scalar = 121665; + fscalar(zzz, zz, scalar); + fsum(zzz, xx); + fmul(z2, zzz, zz); + } + } +} + +static __always_inline void +ladder_smallloop_cmult_small_loop_step(u64 *nq, u64 *nqpq, u64 *nq2, u64 *nqpq2, + u64 *q, u8 byt) +{ + u64 bit0 = (u64)(byt >> 7); + u64 bit; + point_swap_conditional(nq, nqpq, bit0); + addanddouble_fmonty(nq2, nqpq2, nq, nqpq, q); + bit = (u64)(byt >> 7); + point_swap_conditional(nq2, nqpq2, bit); +} + +static __always_inline void +ladder_smallloop_cmult_small_loop_double_step(u64 *nq, u64 *nqpq, u64 *nq2, + u64 *nqpq2, u64 *q, u8 byt) +{ + u8 byt1; + ladder_smallloop_cmult_small_loop_step(nq, nqpq, nq2, nqpq2, q, byt); + byt1 = byt << 1; + ladder_smallloop_cmult_small_loop_step(nq2, nqpq2, nq, nqpq, q, byt1); +} + +static __always_inline void +ladder_smallloop_cmult_small_loop(u64 *nq, u64 *nqpq, u64 *nq2, u64 *nqpq2, + u64 *q, u8 byt, u32 i) +{ + while (i--) { + ladder_smallloop_cmult_small_loop_double_step(nq, nqpq, nq2, + nqpq2, q, byt); + byt <<= 2; + } +} + +static __always_inline void ladder_bigloop_cmult_big_loop(u8 *n1, u64 *nq, + u64 *nqpq, u64 *nq2, + u64 *nqpq2, u64 *q, + u32 i) +{ + while (i--) { + u8 byte = n1[i]; + ladder_smallloop_cmult_small_loop(nq, nqpq, nq2, nqpq2, q, + byte, 4); + } +} + +static void ladder_cmult(u64 *result, u8 *n1, u64 *q) +{ + u64 point_buf[40] = { 0 }; + u64 *nq = point_buf; + u64 *nqpq = point_buf + 10; + u64 *nq2 = point_buf + 20; + u64 *nqpq2 = point_buf + 30; + point_copy(nqpq, q); + nq[0] = 1; + ladder_bigloop_cmult_big_loop(n1, nq, nqpq, nq2, nqpq2, q, 32); + point_copy(result, nq); +} + +static __always_inline void format_fexpand(u64 *output, const u8 *input) +{ + const u8 *x00 = input + 6; + const u8 *x01 = input + 12; + const u8 *x02 = input + 19; + const u8 *x0 = input + 24; + u64 i0, i1, i2, i3, i4, output0, output1, output2, output3, output4; + i0 = get_unaligned_le64(input); + i1 = get_unaligned_le64(x00); + i2 = get_unaligned_le64(x01); + i3 = get_unaligned_le64(x02); + i4 = get_unaligned_le64(x0); + output0 = i0 & 0x7ffffffffffffLLU; + output1 = i1 >> 3 & 0x7ffffffffffffLLU; + output2 = i2 >> 6 & 0x7ffffffffffffLLU; + output3 = i3 >> 1 & 0x7ffffffffffffLLU; + output4 = i4 >> 12 & 0x7ffffffffffffLLU; + output[0] = output0; + output[1] = output1; + output[2] = output2; + output[3] = output3; + output[4] = output4; +} + +static __always_inline void format_fcontract_first_carry_pass(u64 *input) +{ + u64 t0 = input[0]; + u64 t1 = input[1]; + u64 t2 = input[2]; + u64 t3 = input[3]; + u64 t4 = input[4]; + u64 t1_ = t1 + (t0 >> 51); + u64 t0_ = t0 & 0x7ffffffffffffLLU; + u64 t2_ = t2 + (t1_ >> 51); + u64 t1__ = t1_ & 0x7ffffffffffffLLU; + u64 t3_ = t3 + (t2_ >> 51); + u64 t2__ = t2_ & 0x7ffffffffffffLLU; + u64 t4_ = t4 + (t3_ >> 51); + u64 t3__ = t3_ & 0x7ffffffffffffLLU; + input[0] = t0_; + input[1] = t1__; + input[2] = t2__; + input[3] = t3__; + input[4] = t4_; +} + +static __always_inline void 
format_fcontract_first_carry_full(u64 *input) +{ + format_fcontract_first_carry_pass(input); + modulo_carry_top(input); +} + +static __always_inline void format_fcontract_second_carry_pass(u64 *input) +{ + u64 t0 = input[0]; + u64 t1 = input[1]; + u64 t2 = input[2]; + u64 t3 = input[3]; + u64 t4 = input[4]; + u64 t1_ = t1 + (t0 >> 51); + u64 t0_ = t0 & 0x7ffffffffffffLLU; + u64 t2_ = t2 + (t1_ >> 51); + u64 t1__ = t1_ & 0x7ffffffffffffLLU; + u64 t3_ = t3 + (t2_ >> 51); + u64 t2__ = t2_ & 0x7ffffffffffffLLU; + u64 t4_ = t4 + (t3_ >> 51); + u64 t3__ = t3_ & 0x7ffffffffffffLLU; + input[0] = t0_; + input[1] = t1__; + input[2] = t2__; + input[3] = t3__; + input[4] = t4_; +} + +static __always_inline void format_fcontract_second_carry_full(u64 *input) +{ + u64 i0; + u64 i1; + u64 i0_; + u64 i1_; + format_fcontract_second_carry_pass(input); + modulo_carry_top(input); + i0 = input[0]; + i1 = input[1]; + i0_ = i0 & 0x7ffffffffffffLLU; + i1_ = i1 + (i0 >> 51); + input[0] = i0_; + input[1] = i1_; +} + +static __always_inline void format_fcontract_trim(u64 *input) +{ + u64 a0 = input[0]; + u64 a1 = input[1]; + u64 a2 = input[2]; + u64 a3 = input[3]; + u64 a4 = input[4]; + u64 mask0 = u64_gte_mask(a0, 0x7ffffffffffedLLU); + u64 mask1 = u64_eq_mask(a1, 0x7ffffffffffffLLU); + u64 mask2 = u64_eq_mask(a2, 0x7ffffffffffffLLU); + u64 mask3 = u64_eq_mask(a3, 0x7ffffffffffffLLU); + u64 mask4 = u64_eq_mask(a4, 0x7ffffffffffffLLU); + u64 mask = (((mask0 & mask1) & mask2) & mask3) & mask4; + u64 a0_ = a0 - (0x7ffffffffffedLLU & mask); + u64 a1_ = a1 - (0x7ffffffffffffLLU & mask); + u64 a2_ = a2 - (0x7ffffffffffffLLU & mask); + u64 a3_ = a3 - (0x7ffffffffffffLLU & mask); + u64 a4_ = a4 - (0x7ffffffffffffLLU & mask); + input[0] = a0_; + input[1] = a1_; + input[2] = a2_; + input[3] = a3_; + input[4] = a4_; +} + +static __always_inline void format_fcontract_store(u8 *output, u64 *input) +{ + u64 t0 = input[0]; + u64 t1 = input[1]; + u64 t2 = input[2]; + u64 t3 = input[3]; + u64 t4 = input[4]; + u64 o0 = t1 << 51 | t0; + u64 o1 = t2 << 38 | t1 >> 13; + u64 o2 = t3 << 25 | t2 >> 26; + u64 o3 = t4 << 12 | t3 >> 39; + u8 *b0 = output; + u8 *b1 = output + 8; + u8 *b2 = output + 16; + u8 *b3 = output + 24; + put_unaligned_le64(o0, b0); + put_unaligned_le64(o1, b1); + put_unaligned_le64(o2, b2); + put_unaligned_le64(o3, b3); +} + +static __always_inline void format_fcontract(u8 *output, u64 *input) +{ + format_fcontract_first_carry_full(input); + format_fcontract_second_carry_full(input); + format_fcontract_trim(input); + format_fcontract_store(output, input); +} + +static __always_inline void format_scalar_of_point(u8 *scalar, u64 *point) +{ + u64 *x = point; + u64 *z = point + 5; + u64 buf[10] __aligned(32) = { 0 }; + u64 *zmone = buf; + u64 *sc = buf + 5; + crecip(zmone, z); + fmul(sc, x, zmone); + format_fcontract(scalar, sc); +} + +static void curve25519_generic(u8 mypublic[CURVE25519_KEY_SIZE], + const u8 secret[CURVE25519_KEY_SIZE], + const u8 basepoint[CURVE25519_KEY_SIZE]) +{ + u64 buf0[10] __aligned(32) = { 0 }; + u64 *x0 = buf0; + u64 *z = buf0 + 5; + u64 *q; + format_fexpand(x0, basepoint); + z[0] = 1; + q = buf0; + { + u8 e[32] __aligned(32) = { 0 }; + u8 *scalar; + memcpy(e, secret, 32); + normalize_secret(e); + scalar = e; + { + u64 buf[15] = { 0 }; + u64 *nq = buf; + u64 *x = nq; + x[0] = 1; + ladder_cmult(nq, scalar, q); + format_scalar_of_point(mypublic, nq); + memzero_explicit(buf, sizeof(buf)); + } + memzero_explicit(e, sizeof(e)); + } + memzero_explicit(buf0, sizeof(buf0)); +} diff --git 
a/lib/zinc/curve25519/curve25519.c b/lib/zinc/curve25519/curve25519.c new file mode 100644 index 000000000000..8d9d8fc0ca38 --- /dev/null +++ b/lib/zinc/curve25519/curve25519.c @@ -0,0 +1,108 @@ +// SPDX-License-Identifier: GPL-2.0 OR MIT +/* + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. + * + * This is an implementation of the Curve25519 ECDH algorithm, using either + * a 32-bit implementation or a 64-bit implementation with 128-bit integers, + * depending on what is supported by the target compiler. + * + * Information: https://cr.yp.to/ecdh.html + */ + +#include +#include "../selftest/run.h" + +#include +#include +#include +#include +#include +#include +#include // For crypto_memneq. + +static bool *const curve25519_nobs[] __initconst = { }; +static void __init curve25519_fpu_init(void) +{ +} +static inline bool curve25519_arch(u8 mypublic[CURVE25519_KEY_SIZE], + const u8 secret[CURVE25519_KEY_SIZE], + const u8 basepoint[CURVE25519_KEY_SIZE]) +{ + return false; +} +static inline bool curve25519_base_arch(u8 pub[CURVE25519_KEY_SIZE], + const u8 secret[CURVE25519_KEY_SIZE]) +{ + return false; +} + +static __always_inline void normalize_secret(u8 secret[CURVE25519_KEY_SIZE]) +{ + secret[0] &= 248; + secret[31] &= 127; + secret[31] |= 64; +} + +#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) +#include "curve25519-hacl64.c" +#else +#include "curve25519-fiat32.c" +#endif + +static const u8 null_point[CURVE25519_KEY_SIZE] = { 0 }; + +bool curve25519(u8 mypublic[CURVE25519_KEY_SIZE], + const u8 secret[CURVE25519_KEY_SIZE], + const u8 basepoint[CURVE25519_KEY_SIZE]) +{ + if (!curve25519_arch(mypublic, secret, basepoint)) + curve25519_generic(mypublic, secret, basepoint); + return crypto_memneq(mypublic, null_point, CURVE25519_KEY_SIZE); +} +EXPORT_SYMBOL(curve25519); + +bool curve25519_generate_public(u8 pub[CURVE25519_KEY_SIZE], + const u8 secret[CURVE25519_KEY_SIZE]) +{ + static const u8 basepoint[CURVE25519_KEY_SIZE] __aligned(32) = { 9 }; + + if (unlikely(!crypto_memneq(secret, null_point, CURVE25519_KEY_SIZE))) + return false; + + if (curve25519_base_arch(pub, secret)) + return crypto_memneq(pub, null_point, CURVE25519_KEY_SIZE); + return curve25519(pub, secret, basepoint); +} +EXPORT_SYMBOL(curve25519_generate_public); + +void curve25519_generate_secret(u8 secret[CURVE25519_KEY_SIZE]) +{ + get_random_bytes_wait(secret, CURVE25519_KEY_SIZE); + normalize_secret(secret); +} +EXPORT_SYMBOL(curve25519_generate_secret); + +#include "../selftest/curve25519.c" + +static bool nosimd __initdata = false; + +static int __init mod_init(void) +{ + if (!nosimd) + curve25519_fpu_init(); + if (!selftest_run("curve25519", curve25519_selftest, curve25519_nobs, + ARRAY_SIZE(curve25519_nobs))) + return -ENOTRECOVERABLE; + return 0; +} + +static void __exit mod_exit(void) +{ +} + +module_param(nosimd, bool, 0); +module_init(mod_init); +module_exit(mod_exit); +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("Curve25519 scalar multiplication"); +MODULE_AUTHOR("Jason A. Donenfeld "); diff --git a/lib/zinc/selftest/curve25519.c b/lib/zinc/selftest/curve25519.c new file mode 100644 index 000000000000..fa653d47df5a --- /dev/null +++ b/lib/zinc/selftest/curve25519.c @@ -0,0 +1,1315 @@ +// SPDX-License-Identifier: GPL-2.0 OR MIT +/* + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. 
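+ *
+ * The vectors below cover the classic Curve25519 ECDH example (the same
+ * values appear in RFC 7748), all-zero and small-order public keys that
+ * must be rejected, and test cases imported from Project Wycheproof.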
+ */ + +struct curve25519_test_vector { + u8 private[CURVE25519_KEY_SIZE]; + u8 public[CURVE25519_KEY_SIZE]; + u8 result[CURVE25519_KEY_SIZE]; + bool valid; +}; +static const struct curve25519_test_vector curve25519_test_vectors[] __initconst = { + { + .private = { 0x77, 0x07, 0x6d, 0x0a, 0x73, 0x18, 0xa5, 0x7d, + 0x3c, 0x16, 0xc1, 0x72, 0x51, 0xb2, 0x66, 0x45, + 0xdf, 0x4c, 0x2f, 0x87, 0xeb, 0xc0, 0x99, 0x2a, + 0xb1, 0x77, 0xfb, 0xa5, 0x1d, 0xb9, 0x2c, 0x2a }, + .public = { 0xde, 0x9e, 0xdb, 0x7d, 0x7b, 0x7d, 0xc1, 0xb4, + 0xd3, 0x5b, 0x61, 0xc2, 0xec, 0xe4, 0x35, 0x37, + 0x3f, 0x83, 0x43, 0xc8, 0x5b, 0x78, 0x67, 0x4d, + 0xad, 0xfc, 0x7e, 0x14, 0x6f, 0x88, 0x2b, 0x4f }, + .result = { 0x4a, 0x5d, 0x9d, 0x5b, 0xa4, 0xce, 0x2d, 0xe1, + 0x72, 0x8e, 0x3b, 0xf4, 0x80, 0x35, 0x0f, 0x25, + 0xe0, 0x7e, 0x21, 0xc9, 0x47, 0xd1, 0x9e, 0x33, + 0x76, 0xf0, 0x9b, 0x3c, 0x1e, 0x16, 0x17, 0x42 }, + .valid = true + }, + { + .private = { 0x5d, 0xab, 0x08, 0x7e, 0x62, 0x4a, 0x8a, 0x4b, + 0x79, 0xe1, 0x7f, 0x8b, 0x83, 0x80, 0x0e, 0xe6, + 0x6f, 0x3b, 0xb1, 0x29, 0x26, 0x18, 0xb6, 0xfd, + 0x1c, 0x2f, 0x8b, 0x27, 0xff, 0x88, 0xe0, 0xeb }, + .public = { 0x85, 0x20, 0xf0, 0x09, 0x89, 0x30, 0xa7, 0x54, + 0x74, 0x8b, 0x7d, 0xdc, 0xb4, 0x3e, 0xf7, 0x5a, + 0x0d, 0xbf, 0x3a, 0x0d, 0x26, 0x38, 0x1a, 0xf4, + 0xeb, 0xa4, 0xa9, 0x8e, 0xaa, 0x9b, 0x4e, 0x6a }, + .result = { 0x4a, 0x5d, 0x9d, 0x5b, 0xa4, 0xce, 0x2d, 0xe1, + 0x72, 0x8e, 0x3b, 0xf4, 0x80, 0x35, 0x0f, 0x25, + 0xe0, 0x7e, 0x21, 0xc9, 0x47, 0xd1, 0x9e, 0x33, + 0x76, 0xf0, 0x9b, 0x3c, 0x1e, 0x16, 0x17, 0x42 }, + .valid = true + }, + { + .private = { 1 }, + .public = { 0x25, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .result = { 0x3c, 0x77, 0x77, 0xca, 0xf9, 0x97, 0xb2, 0x64, + 0x41, 0x60, 0x77, 0x66, 0x5b, 0x4e, 0x22, 0x9d, + 0x0b, 0x95, 0x48, 0xdc, 0x0c, 0xd8, 0x19, 0x98, + 0xdd, 0xcd, 0xc5, 0xc8, 0x53, 0x3c, 0x79, 0x7f }, + .valid = true + }, + { + .private = { 1 }, + .public = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0xb3, 0x2d, 0x13, 0x62, 0xc2, 0x48, 0xd6, 0x2f, + 0xe6, 0x26, 0x19, 0xcf, 0xf0, 0x4d, 0xd4, 0x3d, + 0xb7, 0x3f, 0xfc, 0x1b, 0x63, 0x08, 0xed, 0xe3, + 0x0b, 0x78, 0xd8, 0x73, 0x80, 0xf1, 0xe8, 0x34 }, + .valid = true + }, + { + .private = { 0xa5, 0x46, 0xe3, 0x6b, 0xf0, 0x52, 0x7c, 0x9d, + 0x3b, 0x16, 0x15, 0x4b, 0x82, 0x46, 0x5e, 0xdd, + 0x62, 0x14, 0x4c, 0x0a, 0xc1, 0xfc, 0x5a, 0x18, + 0x50, 0x6a, 0x22, 0x44, 0xba, 0x44, 0x9a, 0xc4 }, + .public = { 0xe6, 0xdb, 0x68, 0x67, 0x58, 0x30, 0x30, 0xdb, + 0x35, 0x94, 0xc1, 0xa4, 0x24, 0xb1, 0x5f, 0x7c, + 0x72, 0x66, 0x24, 0xec, 0x26, 0xb3, 0x35, 0x3b, + 0x10, 0xa9, 0x03, 0xa6, 0xd0, 0xab, 0x1c, 0x4c }, + .result = { 0xc3, 0xda, 0x55, 0x37, 0x9d, 0xe9, 0xc6, 0x90, + 0x8e, 0x94, 0xea, 0x4d, 0xf2, 0x8d, 0x08, 0x4f, + 0x32, 0xec, 0xcf, 0x03, 0x49, 0x1c, 0x71, 0xf7, + 0x54, 0xb4, 0x07, 0x55, 0x77, 0xa2, 0x85, 0x52 }, + .valid = true + }, + { + .private = { 1, 2, 3, 4 }, + .public = { 0 }, + .result = { 0 }, + .valid = false + }, + { + .private = { 2, 4, 6, 8 }, + .public = { 0xe0, 0xeb, 0x7a, 0x7c, 0x3b, 0x41, 0xb8, 0xae, + 0x16, 0x56, 0xe3, 0xfa, 0xf1, 0x9f, 0xc4, 0x6a, + 0xda, 0x09, 0x8d, 0xeb, 0x9c, 0x32, 0xb1, 0xfd, + 0x86, 0x62, 0x05, 0x16, 0x5f, 0x49, 0xb8 }, + .result = { 0 }, + 
.valid = false + }, + { + .private = { 0xff, 0xff, 0xff, 0xff, 0x0a, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .public = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0x0a, 0x00, 0xfb, 0x9f }, + .result = { 0x77, 0x52, 0xb6, 0x18, 0xc1, 0x2d, 0x48, 0xd2, + 0xc6, 0x93, 0x46, 0x83, 0x81, 0x7c, 0xc6, 0x57, + 0xf3, 0x31, 0x03, 0x19, 0x49, 0x48, 0x20, 0x05, + 0x42, 0x2b, 0x4e, 0xae, 0x8d, 0x1d, 0x43, 0x23 }, + .valid = true + }, + { + .private = { 0x8e, 0x0a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .public = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x8e, 0x06 }, + .result = { 0x5a, 0xdf, 0xaa, 0x25, 0x86, 0x8e, 0x32, 0x3d, + 0xae, 0x49, 0x62, 0xc1, 0x01, 0x5c, 0xb3, 0x12, + 0xe1, 0xc5, 0xc7, 0x9e, 0x95, 0x3f, 0x03, 0x99, + 0xb0, 0xba, 0x16, 0x22, 0xf3, 0xb6, 0xf7, 0x0c }, + .valid = true + }, + /* wycheproof - normal case */ + { + .private = { 0x48, 0x52, 0x83, 0x4d, 0x9d, 0x6b, 0x77, 0xda, + 0xde, 0xab, 0xaa, 0xf2, 0xe1, 0x1d, 0xca, 0x66, + 0xd1, 0x9f, 0xe7, 0x49, 0x93, 0xa7, 0xbe, 0xc3, + 0x6c, 0x6e, 0x16, 0xa0, 0x98, 0x3f, 0xea, 0xba }, + .public = { 0x9c, 0x64, 0x7d, 0x9a, 0xe5, 0x89, 0xb9, 0xf5, + 0x8f, 0xdc, 0x3c, 0xa4, 0x94, 0x7e, 0xfb, 0xc9, + 0x15, 0xc4, 0xb2, 0xe0, 0x8e, 0x74, 0x4a, 0x0e, + 0xdf, 0x46, 0x9d, 0xac, 0x59, 0xc8, 0xf8, 0x5a }, + .result = { 0x87, 0xb7, 0xf2, 0x12, 0xb6, 0x27, 0xf7, 0xa5, + 0x4c, 0xa5, 0xe0, 0xbc, 0xda, 0xdd, 0xd5, 0x38, + 0x9d, 0x9d, 0xe6, 0x15, 0x6c, 0xdb, 0xcf, 0x8e, + 0xbe, 0x14, 0xff, 0xbc, 0xfb, 0x43, 0x65, 0x51 }, + .valid = true + }, + /* wycheproof - public key on twist */ + { + .private = { 0x58, 0x8c, 0x06, 0x1a, 0x50, 0x80, 0x4a, 0xc4, + 0x88, 0xad, 0x77, 0x4a, 0xc7, 0x16, 0xc3, 0xf5, + 0xba, 0x71, 0x4b, 0x27, 0x12, 0xe0, 0x48, 0x49, + 0x13, 0x79, 0xa5, 0x00, 0x21, 0x19, 0x98, 0xa8 }, + .public = { 0x63, 0xaa, 0x40, 0xc6, 0xe3, 0x83, 0x46, 0xc5, + 0xca, 0xf2, 0x3a, 0x6d, 0xf0, 0xa5, 0xe6, 0xc8, + 0x08, 0x89, 0xa0, 0x86, 0x47, 0xe5, 0x51, 0xb3, + 0x56, 0x34, 0x49, 0xbe, 0xfc, 0xfc, 0x97, 0x33 }, + .result = { 0xb1, 0xa7, 0x07, 0x51, 0x94, 0x95, 0xff, 0xff, + 0xb2, 0x98, 0xff, 0x94, 0x17, 0x16, 0xb0, 0x6d, + 0xfa, 0xb8, 0x7c, 0xf8, 0xd9, 0x11, 0x23, 0xfe, + 0x2b, 0xe9, 0xa2, 0x33, 0xdd, 0xa2, 0x22, 0x12 }, + .valid = true + }, + /* wycheproof - public key on twist */ + { + .private = { 0xb0, 0x5b, 0xfd, 0x32, 0xe5, 0x53, 0x25, 0xd9, + 0xfd, 0x64, 0x8c, 0xb3, 0x02, 0x84, 0x80, 0x39, + 0x00, 0x0b, 0x39, 0x0e, 0x44, 0xd5, 0x21, 0xe5, + 0x8a, 0xab, 0x3b, 0x29, 0xa6, 0x96, 0x0b, 0xa8 }, + .public = { 0x0f, 0x83, 0xc3, 0x6f, 0xde, 0xd9, 0xd3, 0x2f, + 0xad, 0xf4, 0xef, 0xa3, 0xae, 0x93, 0xa9, 0x0b, + 0xb5, 0xcf, 0xa6, 0x68, 0x93, 0xbc, 0x41, 0x2c, + 0x43, 0xfa, 0x72, 0x87, 0xdb, 0xb9, 0x97, 0x79 }, + .result = { 0x67, 0xdd, 0x4a, 0x6e, 0x16, 0x55, 0x33, 0x53, + 0x4c, 0x0e, 0x3f, 0x17, 0x2e, 0x4a, 0xb8, 0x57, + 0x6b, 0xca, 0x92, 0x3a, 0x5f, 0x07, 0xb2, 0xc0, + 0x69, 0xb4, 0xc3, 0x10, 0xff, 0x2e, 0x93, 0x5b }, + .valid = true + }, + /* wycheproof - public key on twist */ + { + .private = { 0x70, 0xe3, 0x4b, 
0xcb, 0xe1, 0xf4, 0x7f, 0xbc, + 0x0f, 0xdd, 0xfd, 0x7c, 0x1e, 0x1a, 0xa5, 0x3d, + 0x57, 0xbf, 0xe0, 0xf6, 0x6d, 0x24, 0x30, 0x67, + 0xb4, 0x24, 0xbb, 0x62, 0x10, 0xbe, 0xd1, 0x9c }, + .public = { 0x0b, 0x82, 0x11, 0xa2, 0xb6, 0x04, 0x90, 0x97, + 0xf6, 0x87, 0x1c, 0x6c, 0x05, 0x2d, 0x3c, 0x5f, + 0xc1, 0xba, 0x17, 0xda, 0x9e, 0x32, 0xae, 0x45, + 0x84, 0x03, 0xb0, 0x5b, 0xb2, 0x83, 0x09, 0x2a }, + .result = { 0x4a, 0x06, 0x38, 0xcf, 0xaa, 0x9e, 0xf1, 0x93, + 0x3b, 0x47, 0xf8, 0x93, 0x92, 0x96, 0xa6, 0xb2, + 0x5b, 0xe5, 0x41, 0xef, 0x7f, 0x70, 0xe8, 0x44, + 0xc0, 0xbc, 0xc0, 0x0b, 0x13, 0x4d, 0xe6, 0x4a }, + .valid = true + }, + /* wycheproof - public key on twist */ + { + .private = { 0x68, 0xc1, 0xf3, 0xa6, 0x53, 0xa4, 0xcd, 0xb1, + 0xd3, 0x7b, 0xba, 0x94, 0x73, 0x8f, 0x8b, 0x95, + 0x7a, 0x57, 0xbe, 0xb2, 0x4d, 0x64, 0x6e, 0x99, + 0x4d, 0xc2, 0x9a, 0x27, 0x6a, 0xad, 0x45, 0x8d }, + .public = { 0x34, 0x3a, 0xc2, 0x0a, 0x3b, 0x9c, 0x6a, 0x27, + 0xb1, 0x00, 0x81, 0x76, 0x50, 0x9a, 0xd3, 0x07, + 0x35, 0x85, 0x6e, 0xc1, 0xc8, 0xd8, 0xfc, 0xae, + 0x13, 0x91, 0x2d, 0x08, 0xd1, 0x52, 0xf4, 0x6c }, + .result = { 0x39, 0x94, 0x91, 0xfc, 0xe8, 0xdf, 0xab, 0x73, + 0xb4, 0xf9, 0xf6, 0x11, 0xde, 0x8e, 0xa0, 0xb2, + 0x7b, 0x28, 0xf8, 0x59, 0x94, 0x25, 0x0b, 0x0f, + 0x47, 0x5d, 0x58, 0x5d, 0x04, 0x2a, 0xc2, 0x07 }, + .valid = true + }, + /* wycheproof - public key on twist */ + { + .private = { 0xd8, 0x77, 0xb2, 0x6d, 0x06, 0xdf, 0xf9, 0xd9, + 0xf7, 0xfd, 0x4c, 0x5b, 0x37, 0x69, 0xf8, 0xcd, + 0xd5, 0xb3, 0x05, 0x16, 0xa5, 0xab, 0x80, 0x6b, + 0xe3, 0x24, 0xff, 0x3e, 0xb6, 0x9e, 0xa0, 0xb2 }, + .public = { 0xfa, 0x69, 0x5f, 0xc7, 0xbe, 0x8d, 0x1b, 0xe5, + 0xbf, 0x70, 0x48, 0x98, 0xf3, 0x88, 0xc4, 0x52, + 0xba, 0xfd, 0xd3, 0xb8, 0xea, 0xe8, 0x05, 0xf8, + 0x68, 0x1a, 0x8d, 0x15, 0xc2, 0xd4, 0xe1, 0x42 }, + .result = { 0x2c, 0x4f, 0xe1, 0x1d, 0x49, 0x0a, 0x53, 0x86, + 0x17, 0x76, 0xb1, 0x3b, 0x43, 0x54, 0xab, 0xd4, + 0xcf, 0x5a, 0x97, 0x69, 0x9d, 0xb6, 0xe6, 0xc6, + 0x8c, 0x16, 0x26, 0xd0, 0x76, 0x62, 0xf7, 0x58 }, + .valid = true + }, + /* wycheproof - public key = 0 */ + { + .private = { 0x20, 0x74, 0x94, 0x03, 0x8f, 0x2b, 0xb8, 0x11, + 0xd4, 0x78, 0x05, 0xbc, 0xdf, 0x04, 0xa2, 0xac, + 0x58, 0x5a, 0xda, 0x7f, 0x2f, 0x23, 0x38, 0x9b, + 0xfd, 0x46, 0x58, 0xf9, 0xdd, 0xd4, 0xde, 0xbc }, + .public = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key = 1 */ + { + .private = { 0x20, 0x2e, 0x89, 0x72, 0xb6, 0x1c, 0x7e, 0x61, + 0x93, 0x0e, 0xb9, 0x45, 0x0b, 0x50, 0x70, 0xea, + 0xe1, 0xc6, 0x70, 0x47, 0x56, 0x85, 0x54, 0x1f, + 0x04, 0x76, 0x21, 0x7e, 0x48, 0x18, 0xcf, 0xab }, + .public = { 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - edge case on twist */ + { + .private = { 0x38, 0xdd, 0xe9, 0xf3, 0xe7, 0xb7, 
0x99, 0x04, + 0x5f, 0x9a, 0xc3, 0x79, 0x3d, 0x4a, 0x92, 0x77, + 0xda, 0xde, 0xad, 0xc4, 0x1b, 0xec, 0x02, 0x90, + 0xf8, 0x1f, 0x74, 0x4f, 0x73, 0x77, 0x5f, 0x84 }, + .public = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .result = { 0x9a, 0x2c, 0xfe, 0x84, 0xff, 0x9c, 0x4a, 0x97, + 0x39, 0x62, 0x5c, 0xae, 0x4a, 0x3b, 0x82, 0xa9, + 0x06, 0x87, 0x7a, 0x44, 0x19, 0x46, 0xf8, 0xd7, + 0xb3, 0xd7, 0x95, 0xfe, 0x8f, 0x5d, 0x16, 0x39 }, + .valid = true + }, + /* wycheproof - edge case on twist */ + { + .private = { 0x98, 0x57, 0xa9, 0x14, 0xe3, 0xc2, 0x90, 0x36, + 0xfd, 0x9a, 0x44, 0x2b, 0xa5, 0x26, 0xb5, 0xcd, + 0xcd, 0xf2, 0x82, 0x16, 0x15, 0x3e, 0x63, 0x6c, + 0x10, 0x67, 0x7a, 0xca, 0xb6, 0xbd, 0x6a, 0xa5 }, + .public = { 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .result = { 0x4d, 0xa4, 0xe0, 0xaa, 0x07, 0x2c, 0x23, 0x2e, + 0xe2, 0xf0, 0xfa, 0x4e, 0x51, 0x9a, 0xe5, 0x0b, + 0x52, 0xc1, 0xed, 0xd0, 0x8a, 0x53, 0x4d, 0x4e, + 0xf3, 0x46, 0xc2, 0xe1, 0x06, 0xd2, 0x1d, 0x60 }, + .valid = true + }, + /* wycheproof - edge case on twist */ + { + .private = { 0x48, 0xe2, 0x13, 0x0d, 0x72, 0x33, 0x05, 0xed, + 0x05, 0xe6, 0xe5, 0x89, 0x4d, 0x39, 0x8a, 0x5e, + 0x33, 0x36, 0x7a, 0x8c, 0x6a, 0xac, 0x8f, 0xcd, + 0xf0, 0xa8, 0x8e, 0x4b, 0x42, 0x82, 0x0d, 0xb7 }, + .public = { 0xff, 0xff, 0xff, 0x03, 0x00, 0x00, 0xf8, 0xff, + 0xff, 0x1f, 0x00, 0x00, 0xc0, 0xff, 0xff, 0xff, + 0x00, 0x00, 0x00, 0xfe, 0xff, 0xff, 0x07, 0x00, + 0x00, 0xf0, 0xff, 0xff, 0x3f, 0x00, 0x00, 0x00 }, + .result = { 0x9e, 0xd1, 0x0c, 0x53, 0x74, 0x7f, 0x64, 0x7f, + 0x82, 0xf4, 0x51, 0x25, 0xd3, 0xde, 0x15, 0xa1, + 0xe6, 0xb8, 0x24, 0x49, 0x6a, 0xb4, 0x04, 0x10, + 0xff, 0xcc, 0x3c, 0xfe, 0x95, 0x76, 0x0f, 0x3b }, + .valid = true + }, + /* wycheproof - edge case on twist */ + { + .private = { 0x28, 0xf4, 0x10, 0x11, 0x69, 0x18, 0x51, 0xb3, + 0xa6, 0x2b, 0x64, 0x15, 0x53, 0xb3, 0x0d, 0x0d, + 0xfd, 0xdc, 0xb8, 0xff, 0xfc, 0xf5, 0x37, 0x00, + 0xa7, 0xbe, 0x2f, 0x6a, 0x87, 0x2e, 0x9f, 0xb0 }, + .public = { 0x00, 0x00, 0x00, 0xfc, 0xff, 0xff, 0x07, 0x00, + 0x00, 0xe0, 0xff, 0xff, 0x3f, 0x00, 0x00, 0x00, + 0xff, 0xff, 0xff, 0x01, 0x00, 0x00, 0xf8, 0xff, + 0xff, 0x0f, 0x00, 0x00, 0xc0, 0xff, 0xff, 0x7f }, + .result = { 0xcf, 0x72, 0xb4, 0xaa, 0x6a, 0xa1, 0xc9, 0xf8, + 0x94, 0xf4, 0x16, 0x5b, 0x86, 0x10, 0x9a, 0xa4, + 0x68, 0x51, 0x76, 0x48, 0xe1, 0xf0, 0xcc, 0x70, + 0xe1, 0xab, 0x08, 0x46, 0x01, 0x76, 0x50, 0x6b }, + .valid = true + }, + /* wycheproof - edge case on twist */ + { + .private = { 0x18, 0xa9, 0x3b, 0x64, 0x99, 0xb9, 0xf6, 0xb3, + 0x22, 0x5c, 0xa0, 0x2f, 0xef, 0x41, 0x0e, 0x0a, + 0xde, 0xc2, 0x35, 0x32, 0x32, 0x1d, 0x2d, 0x8e, + 0xf1, 0xa6, 0xd6, 0x02, 0xa8, 0xc6, 0x5b, 0x83 }, + .public = { 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff, + 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff, + 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff, + 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0x7f }, + .result = { 0x5d, 0x50, 0xb6, 0x28, 0x36, 0xbb, 0x69, 0x57, + 0x94, 0x10, 0x38, 0x6c, 0xf7, 0xbb, 0x81, 0x1c, + 0x14, 0xbf, 0x85, 0xb1, 0xc7, 0xb1, 0x7e, 0x59, + 0x24, 0xc7, 0xff, 0xea, 0x91, 0xef, 0x9e, 0x12 }, + .valid = true + }, + /* wycheproof - edge case on twist */ + { + .private = { 0xc0, 0x1d, 0x13, 0x05, 0xa1, 0x33, 0x8a, 0x1f, + 
0xca, 0xc2, 0xba, 0x7e, 0x2e, 0x03, 0x2b, 0x42, + 0x7e, 0x0b, 0x04, 0x90, 0x31, 0x65, 0xac, 0xa9, + 0x57, 0xd8, 0xd0, 0x55, 0x3d, 0x87, 0x17, 0xb0 }, + .public = { 0xea, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .result = { 0x19, 0x23, 0x0e, 0xb1, 0x48, 0xd5, 0xd6, 0x7c, + 0x3c, 0x22, 0xab, 0x1d, 0xae, 0xff, 0x80, 0xa5, + 0x7e, 0xae, 0x42, 0x65, 0xce, 0x28, 0x72, 0x65, + 0x7b, 0x2c, 0x80, 0x99, 0xfc, 0x69, 0x8e, 0x50 }, + .valid = true + }, + /* wycheproof - edge case for public key */ + { + .private = { 0x38, 0x6f, 0x7f, 0x16, 0xc5, 0x07, 0x31, 0xd6, + 0x4f, 0x82, 0xe6, 0xa1, 0x70, 0xb1, 0x42, 0xa4, + 0xe3, 0x4f, 0x31, 0xfd, 0x77, 0x68, 0xfc, 0xb8, + 0x90, 0x29, 0x25, 0xe7, 0xd1, 0xe2, 0x1a, 0xbe }, + .public = { 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .result = { 0x0f, 0xca, 0xb5, 0xd8, 0x42, 0xa0, 0x78, 0xd7, + 0xa7, 0x1f, 0xc5, 0x9b, 0x57, 0xbf, 0xb4, 0xca, + 0x0b, 0xe6, 0x87, 0x3b, 0x49, 0xdc, 0xdb, 0x9f, + 0x44, 0xe1, 0x4a, 0xe8, 0xfb, 0xdf, 0xa5, 0x42 }, + .valid = true + }, + /* wycheproof - edge case for public key */ + { + .private = { 0xe0, 0x23, 0xa2, 0x89, 0xbd, 0x5e, 0x90, 0xfa, + 0x28, 0x04, 0xdd, 0xc0, 0x19, 0xa0, 0x5e, 0xf3, + 0xe7, 0x9d, 0x43, 0x4b, 0xb6, 0xea, 0x2f, 0x52, + 0x2e, 0xcb, 0x64, 0x3a, 0x75, 0x29, 0x6e, 0x95 }, + .public = { 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, + 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, + 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, + 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00 }, + .result = { 0x54, 0xce, 0x8f, 0x22, 0x75, 0xc0, 0x77, 0xe3, + 0xb1, 0x30, 0x6a, 0x39, 0x39, 0xc5, 0xe0, 0x3e, + 0xef, 0x6b, 0xbb, 0x88, 0x06, 0x05, 0x44, 0x75, + 0x8d, 0x9f, 0xef, 0x59, 0xb0, 0xbc, 0x3e, 0x4f }, + .valid = true + }, + /* wycheproof - edge case for public key */ + { + .private = { 0x68, 0xf0, 0x10, 0xd6, 0x2e, 0xe8, 0xd9, 0x26, + 0x05, 0x3a, 0x36, 0x1c, 0x3a, 0x75, 0xc6, 0xea, + 0x4e, 0xbd, 0xc8, 0x60, 0x6a, 0xb2, 0x85, 0x00, + 0x3a, 0x6f, 0x8f, 0x40, 0x76, 0xb0, 0x1e, 0x83 }, + .public = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x03 }, + .result = { 0xf1, 0x36, 0x77, 0x5c, 0x5b, 0xeb, 0x0a, 0xf8, + 0x11, 0x0a, 0xf1, 0x0b, 0x20, 0x37, 0x23, 0x32, + 0x04, 0x3c, 0xab, 0x75, 0x24, 0x19, 0x67, 0x87, + 0x75, 0xa2, 0x23, 0xdf, 0x57, 0xc9, 0xd3, 0x0d }, + .valid = true + }, + /* wycheproof - edge case for public key */ + { + .private = { 0x58, 0xeb, 0xcb, 0x35, 0xb0, 0xf8, 0x84, 0x5c, + 0xaf, 0x1e, 0xc6, 0x30, 0xf9, 0x65, 0x76, 0xb6, + 0x2c, 0x4b, 0x7b, 0x6c, 0x36, 0xb2, 0x9d, 0xeb, + 0x2c, 0xb0, 0x08, 0x46, 0x51, 0x75, 0x5c, 0x96 }, + .public = { 0xff, 0xff, 0xff, 0xfb, 0xff, 0xff, 0xfb, 0xff, + 0xff, 0xdf, 0xff, 0xff, 0xdf, 0xff, 0xff, 0xff, + 0xfe, 0xff, 0xff, 0xfe, 0xff, 0xff, 0xf7, 0xff, + 0xff, 0xf7, 0xff, 0xff, 0xbf, 0xff, 0xff, 0x3f }, + .result = { 0xbf, 0x9a, 0xff, 0xd0, 0x6b, 0x84, 0x40, 0x85, + 0x58, 0x64, 0x60, 0x96, 0x2e, 0xf2, 0x14, 0x6f, + 0xf3, 0xd4, 0x53, 0x3d, 0x94, 0x44, 0xaa, 0xb0, + 0x06, 0xeb, 0x88, 0xcc, 0x30, 0x54, 0x40, 0x7d }, + .valid = true + }, + /* wycheproof - edge case for public key */ + { + .private = { 0x18, 0x8c, 0x4b, 0xc5, 0xb9, 
0xc4, 0x4b, 0x38, + 0xbb, 0x65, 0x8b, 0x9b, 0x2a, 0xe8, 0x2d, 0x5b, + 0x01, 0x01, 0x5e, 0x09, 0x31, 0x84, 0xb1, 0x7c, + 0xb7, 0x86, 0x35, 0x03, 0xa7, 0x83, 0xe1, 0xbb }, + .public = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f }, + .result = { 0xd4, 0x80, 0xde, 0x04, 0xf6, 0x99, 0xcb, 0x3b, + 0xe0, 0x68, 0x4a, 0x9c, 0xc2, 0xe3, 0x12, 0x81, + 0xea, 0x0b, 0xc5, 0xa9, 0xdc, 0xc1, 0x57, 0xd3, + 0xd2, 0x01, 0x58, 0xd4, 0x6c, 0xa5, 0x24, 0x6d }, + .valid = true + }, + /* wycheproof - edge case for public key */ + { + .private = { 0xe0, 0x6c, 0x11, 0xbb, 0x2e, 0x13, 0xce, 0x3d, + 0xc7, 0x67, 0x3f, 0x67, 0xf5, 0x48, 0x22, 0x42, + 0x90, 0x94, 0x23, 0xa9, 0xae, 0x95, 0xee, 0x98, + 0x6a, 0x98, 0x8d, 0x98, 0xfa, 0xee, 0x23, 0xa2 }, + .public = { 0xff, 0xff, 0xff, 0xff, 0xfe, 0xff, 0xff, 0x7f, + 0xff, 0xff, 0xff, 0xff, 0xfe, 0xff, 0xff, 0x7f, + 0xff, 0xff, 0xff, 0xff, 0xfe, 0xff, 0xff, 0x7f, + 0xff, 0xff, 0xff, 0xff, 0xfe, 0xff, 0xff, 0x7f }, + .result = { 0x4c, 0x44, 0x01, 0xcc, 0xe6, 0xb5, 0x1e, 0x4c, + 0xb1, 0x8f, 0x27, 0x90, 0x24, 0x6c, 0x9b, 0xf9, + 0x14, 0xdb, 0x66, 0x77, 0x50, 0xa1, 0xcb, 0x89, + 0x06, 0x90, 0x92, 0xaf, 0x07, 0x29, 0x22, 0x76 }, + .valid = true + }, + /* wycheproof - edge case for public key */ + { + .private = { 0xc0, 0x65, 0x8c, 0x46, 0xdd, 0xe1, 0x81, 0x29, + 0x29, 0x38, 0x77, 0x53, 0x5b, 0x11, 0x62, 0xb6, + 0xf9, 0xf5, 0x41, 0x4a, 0x23, 0xcf, 0x4d, 0x2c, + 0xbc, 0x14, 0x0a, 0x4d, 0x99, 0xda, 0x2b, 0x8f }, + .public = { 0xeb, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .result = { 0x57, 0x8b, 0xa8, 0xcc, 0x2d, 0xbd, 0xc5, 0x75, + 0xaf, 0xcf, 0x9d, 0xf2, 0xb3, 0xee, 0x61, 0x89, + 0xf5, 0x33, 0x7d, 0x68, 0x54, 0xc7, 0x9b, 0x4c, + 0xe1, 0x65, 0xea, 0x12, 0x29, 0x3b, 0x3a, 0x0f }, + .valid = true + }, + /* wycheproof - public key with low order */ + { + .private = { 0x10, 0x25, 0x5c, 0x92, 0x30, 0xa9, 0x7a, 0x30, + 0xa4, 0x58, 0xca, 0x28, 0x4a, 0x62, 0x96, 0x69, + 0x29, 0x3a, 0x31, 0x89, 0x0c, 0xda, 0x9d, 0x14, + 0x7f, 0xeb, 0xc7, 0xd1, 0xe2, 0x2d, 0x6b, 0xb1 }, + .public = { 0xe0, 0xeb, 0x7a, 0x7c, 0x3b, 0x41, 0xb8, 0xae, + 0x16, 0x56, 0xe3, 0xfa, 0xf1, 0x9f, 0xc4, 0x6a, + 0xda, 0x09, 0x8d, 0xeb, 0x9c, 0x32, 0xb1, 0xfd, + 0x86, 0x62, 0x05, 0x16, 0x5f, 0x49, 0xb8, 0x00 }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key with low order */ + { + .private = { 0x78, 0xf1, 0xe8, 0xed, 0xf1, 0x44, 0x81, 0xb3, + 0x89, 0x44, 0x8d, 0xac, 0x8f, 0x59, 0xc7, 0x0b, + 0x03, 0x8e, 0x7c, 0xf9, 0x2e, 0xf2, 0xc7, 0xef, + 0xf5, 0x7a, 0x72, 0x46, 0x6e, 0x11, 0x52, 0x96 }, + .public = { 0x5f, 0x9c, 0x95, 0xbc, 0xa3, 0x50, 0x8c, 0x24, + 0xb1, 0xd0, 0xb1, 0x55, 0x9c, 0x83, 0xef, 0x5b, + 0x04, 0x44, 0x5c, 0xc4, 0x58, 0x1c, 0x8e, 0x86, + 0xd8, 0x22, 0x4e, 0xdd, 0xd0, 0x9f, 0x11, 0x57 }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key with low order */ + { + .private = { 0xa0, 
0xa0, 0x5a, 0x3e, 0x8f, 0x9f, 0x44, 0x20, + 0x4d, 0x5f, 0x80, 0x59, 0xa9, 0x4a, 0xc7, 0xdf, + 0xc3, 0x9a, 0x49, 0xac, 0x01, 0x6d, 0xd7, 0x43, + 0xdb, 0xfa, 0x43, 0xc5, 0xd6, 0x71, 0xfd, 0x88 }, + .public = { 0xec, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key with low order */ + { + .private = { 0xd0, 0xdb, 0xb3, 0xed, 0x19, 0x06, 0x66, 0x3f, + 0x15, 0x42, 0x0a, 0xf3, 0x1f, 0x4e, 0xaf, 0x65, + 0x09, 0xd9, 0xa9, 0x94, 0x97, 0x23, 0x50, 0x06, + 0x05, 0xad, 0x7c, 0x1c, 0x6e, 0x74, 0x50, 0xa9 }, + .public = { 0xed, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key with low order */ + { + .private = { 0xc0, 0xb1, 0xd0, 0xeb, 0x22, 0xb2, 0x44, 0xfe, + 0x32, 0x91, 0x14, 0x00, 0x72, 0xcd, 0xd9, 0xd9, + 0x89, 0xb5, 0xf0, 0xec, 0xd9, 0x6c, 0x10, 0x0f, + 0xeb, 0x5b, 0xca, 0x24, 0x1c, 0x1d, 0x9f, 0x8f }, + .public = { 0xee, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key with low order */ + { + .private = { 0x48, 0x0b, 0xf4, 0x5f, 0x59, 0x49, 0x42, 0xa8, + 0xbc, 0x0f, 0x33, 0x53, 0xc6, 0xe8, 0xb8, 0x85, + 0x3d, 0x77, 0xf3, 0x51, 0xf1, 0xc2, 0xca, 0x6c, + 0x2d, 0x1a, 0xbf, 0x8a, 0x00, 0xb4, 0x22, 0x9c }, + .public = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80 }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key with low order */ + { + .private = { 0x30, 0xf9, 0x93, 0xfc, 0xf8, 0x51, 0x4f, 0xc8, + 0x9b, 0xd8, 0xdb, 0x14, 0xcd, 0x43, 0xba, 0x0d, + 0x4b, 0x25, 0x30, 0xe7, 0x3c, 0x42, 0x76, 0xa0, + 0x5e, 0x1b, 0x14, 0x5d, 0x42, 0x0c, 0xed, 0xb4 }, + .public = { 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80 }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key with low order */ 
+ { + .private = { 0xc0, 0x49, 0x74, 0xb7, 0x58, 0x38, 0x0e, 0x2a, + 0x5b, 0x5d, 0xf6, 0xeb, 0x09, 0xbb, 0x2f, 0x6b, + 0x34, 0x34, 0xf9, 0x82, 0x72, 0x2a, 0x8e, 0x67, + 0x6d, 0x3d, 0xa2, 0x51, 0xd1, 0xb3, 0xde, 0x83 }, + .public = { 0xe0, 0xeb, 0x7a, 0x7c, 0x3b, 0x41, 0xb8, 0xae, + 0x16, 0x56, 0xe3, 0xfa, 0xf1, 0x9f, 0xc4, 0x6a, + 0xda, 0x09, 0x8d, 0xeb, 0x9c, 0x32, 0xb1, 0xfd, + 0x86, 0x62, 0x05, 0x16, 0x5f, 0x49, 0xb8, 0x80 }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key with low order */ + { + .private = { 0x50, 0x2a, 0x31, 0x37, 0x3d, 0xb3, 0x24, 0x46, + 0x84, 0x2f, 0xe5, 0xad, 0xd3, 0xe0, 0x24, 0x02, + 0x2e, 0xa5, 0x4f, 0x27, 0x41, 0x82, 0xaf, 0xc3, + 0xd9, 0xf1, 0xbb, 0x3d, 0x39, 0x53, 0x4e, 0xb5 }, + .public = { 0x5f, 0x9c, 0x95, 0xbc, 0xa3, 0x50, 0x8c, 0x24, + 0xb1, 0xd0, 0xb1, 0x55, 0x9c, 0x83, 0xef, 0x5b, + 0x04, 0x44, 0x5c, 0xc4, 0x58, 0x1c, 0x8e, 0x86, + 0xd8, 0x22, 0x4e, 0xdd, 0xd0, 0x9f, 0x11, 0xd7 }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key with low order */ + { + .private = { 0x90, 0xfa, 0x64, 0x17, 0xb0, 0xe3, 0x70, 0x30, + 0xfd, 0x6e, 0x43, 0xef, 0xf2, 0xab, 0xae, 0xf1, + 0x4c, 0x67, 0x93, 0x11, 0x7a, 0x03, 0x9c, 0xf6, + 0x21, 0x31, 0x8b, 0xa9, 0x0f, 0x4e, 0x98, 0xbe }, + .public = { 0xec, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key with low order */ + { + .private = { 0x78, 0xad, 0x3f, 0x26, 0x02, 0x7f, 0x1c, 0x9f, + 0xdd, 0x97, 0x5a, 0x16, 0x13, 0xb9, 0x47, 0x77, + 0x9b, 0xad, 0x2c, 0xf2, 0xb7, 0x41, 0xad, 0xe0, + 0x18, 0x40, 0x88, 0x5a, 0x30, 0xbb, 0x97, 0x9c }, + .public = { 0xed, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - public key with low order */ + { + .private = { 0x98, 0xe2, 0x3d, 0xe7, 0xb1, 0xe0, 0x92, 0x6e, + 0xd9, 0xc8, 0x7e, 0x7b, 0x14, 0xba, 0xf5, 0x5f, + 0x49, 0x7a, 0x1d, 0x70, 0x96, 0xf9, 0x39, 0x77, + 0x68, 0x0e, 0x44, 0xdc, 0x1c, 0x7b, 0x7b, 0x8b }, + .public = { 0xee, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = false + }, + /* wycheproof - 
public key >= p */ + { + .private = { 0xf0, 0x1e, 0x48, 0xda, 0xfa, 0xc9, 0xd7, 0xbc, + 0xf5, 0x89, 0xcb, 0xc3, 0x82, 0xc8, 0x78, 0xd1, + 0x8b, 0xda, 0x35, 0x50, 0x58, 0x9f, 0xfb, 0x5d, + 0x50, 0xb5, 0x23, 0xbe, 0xbe, 0x32, 0x9d, 0xae }, + .public = { 0xef, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .result = { 0xbd, 0x36, 0xa0, 0x79, 0x0e, 0xb8, 0x83, 0x09, + 0x8c, 0x98, 0x8b, 0x21, 0x78, 0x67, 0x73, 0xde, + 0x0b, 0x3a, 0x4d, 0xf1, 0x62, 0x28, 0x2c, 0xf1, + 0x10, 0xde, 0x18, 0xdd, 0x48, 0x4c, 0xe7, 0x4b }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0x28, 0x87, 0x96, 0xbc, 0x5a, 0xff, 0x4b, 0x81, + 0xa3, 0x75, 0x01, 0x75, 0x7b, 0xc0, 0x75, 0x3a, + 0x3c, 0x21, 0x96, 0x47, 0x90, 0xd3, 0x86, 0x99, + 0x30, 0x8d, 0xeb, 0xc1, 0x7a, 0x6e, 0xaf, 0x8d }, + .public = { 0xf0, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .result = { 0xb4, 0xe0, 0xdd, 0x76, 0xda, 0x7b, 0x07, 0x17, + 0x28, 0xb6, 0x1f, 0x85, 0x67, 0x71, 0xaa, 0x35, + 0x6e, 0x57, 0xed, 0xa7, 0x8a, 0x5b, 0x16, 0x55, + 0xcc, 0x38, 0x20, 0xfb, 0x5f, 0x85, 0x4c, 0x5c }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0x98, 0xdf, 0x84, 0x5f, 0x66, 0x51, 0xbf, 0x11, + 0x38, 0x22, 0x1f, 0x11, 0x90, 0x41, 0xf7, 0x2b, + 0x6d, 0xbc, 0x3c, 0x4a, 0xce, 0x71, 0x43, 0xd9, + 0x9f, 0xd5, 0x5a, 0xd8, 0x67, 0x48, 0x0d, 0xa8 }, + .public = { 0xf1, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .result = { 0x6f, 0xdf, 0x6c, 0x37, 0x61, 0x1d, 0xbd, 0x53, + 0x04, 0xdc, 0x0f, 0x2e, 0xb7, 0xc9, 0x51, 0x7e, + 0xb3, 0xc5, 0x0e, 0x12, 0xfd, 0x05, 0x0a, 0xc6, + 0xde, 0xc2, 0x70, 0x71, 0xd4, 0xbf, 0xc0, 0x34 }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0xf0, 0x94, 0x98, 0xe4, 0x6f, 0x02, 0xf8, 0x78, + 0x82, 0x9e, 0x78, 0xb8, 0x03, 0xd3, 0x16, 0xa2, + 0xed, 0x69, 0x5d, 0x04, 0x98, 0xa0, 0x8a, 0xbd, + 0xf8, 0x27, 0x69, 0x30, 0xe2, 0x4e, 0xdc, 0xb0 }, + .public = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .result = { 0x4c, 0x8f, 0xc4, 0xb1, 0xc6, 0xab, 0x88, 0xfb, + 0x21, 0xf1, 0x8f, 0x6d, 0x4c, 0x81, 0x02, 0x40, + 0xd4, 0xe9, 0x46, 0x51, 0xba, 0x44, 0xf7, 0xa2, + 0xc8, 0x63, 0xce, 0xc7, 0xdc, 0x56, 0x60, 0x2d }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0x18, 0x13, 0xc1, 0x0a, 0x5c, 0x7f, 0x21, 0xf9, + 0x6e, 0x17, 0xf2, 0x88, 0xc0, 0xcc, 0x37, 0x60, + 0x7c, 0x04, 0xc5, 0xf5, 0xae, 0xa2, 0xdb, 0x13, + 0x4f, 0x9e, 0x2f, 0xfc, 0x66, 0xbd, 0x9d, 0xb8 }, + .public = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80 }, + .result = { 0x1c, 0xd0, 0xb2, 0x82, 0x67, 0xdc, 0x54, 0x1c, + 0x64, 0x2d, 0x6d, 0x7d, 0xca, 0x44, 0xa8, 0xb3, + 0x8a, 0x63, 0x73, 0x6e, 0xef, 0x5c, 0x4e, 0x65, + 0x01, 0xff, 0xbb, 0xb1, 0x78, 0x0c, 0x03, 0x3c }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + 
.private = { 0x78, 0x57, 0xfb, 0x80, 0x86, 0x53, 0x64, 0x5a, + 0x0b, 0xeb, 0x13, 0x8a, 0x64, 0xf5, 0xf4, 0xd7, + 0x33, 0xa4, 0x5e, 0xa8, 0x4c, 0x3c, 0xda, 0x11, + 0xa9, 0xc0, 0x6f, 0x7e, 0x71, 0x39, 0x14, 0x9e }, + .public = { 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80 }, + .result = { 0x87, 0x55, 0xbe, 0x01, 0xc6, 0x0a, 0x7e, 0x82, + 0x5c, 0xff, 0x3e, 0x0e, 0x78, 0xcb, 0x3a, 0xa4, + 0x33, 0x38, 0x61, 0x51, 0x6a, 0xa5, 0x9b, 0x1c, + 0x51, 0xa8, 0xb2, 0xa5, 0x43, 0xdf, 0xa8, 0x22 }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0xe0, 0x3a, 0xa8, 0x42, 0xe2, 0xab, 0xc5, 0x6e, + 0x81, 0xe8, 0x7b, 0x8b, 0x9f, 0x41, 0x7b, 0x2a, + 0x1e, 0x59, 0x13, 0xc7, 0x23, 0xee, 0xd2, 0x8d, + 0x75, 0x2f, 0x8d, 0x47, 0xa5, 0x9f, 0x49, 0x8f }, + .public = { 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80 }, + .result = { 0x54, 0xc9, 0xa1, 0xed, 0x95, 0xe5, 0x46, 0xd2, + 0x78, 0x22, 0xa3, 0x60, 0x93, 0x1d, 0xda, 0x60, + 0xa1, 0xdf, 0x04, 0x9d, 0xa6, 0xf9, 0x04, 0x25, + 0x3c, 0x06, 0x12, 0xbb, 0xdc, 0x08, 0x74, 0x76 }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0xf8, 0xf7, 0x07, 0xb7, 0x99, 0x9b, 0x18, 0xcb, + 0x0d, 0x6b, 0x96, 0x12, 0x4f, 0x20, 0x45, 0x97, + 0x2c, 0xa2, 0x74, 0xbf, 0xc1, 0x54, 0xad, 0x0c, + 0x87, 0x03, 0x8c, 0x24, 0xc6, 0xd0, 0xd4, 0xb2 }, + .public = { 0xda, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0xcc, 0x1f, 0x40, 0xd7, 0x43, 0xcd, 0xc2, 0x23, + 0x0e, 0x10, 0x43, 0xda, 0xba, 0x8b, 0x75, 0xe8, + 0x10, 0xf1, 0xfb, 0xab, 0x7f, 0x25, 0x52, 0x69, + 0xbd, 0x9e, 0xbb, 0x29, 0xe6, 0xbf, 0x49, 0x4f }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0xa0, 0x34, 0xf6, 0x84, 0xfa, 0x63, 0x1e, 0x1a, + 0x34, 0x81, 0x18, 0xc1, 0xce, 0x4c, 0x98, 0x23, + 0x1f, 0x2d, 0x9e, 0xec, 0x9b, 0xa5, 0x36, 0x5b, + 0x4a, 0x05, 0xd6, 0x9a, 0x78, 0x5b, 0x07, 0x96 }, + .public = { 0xdb, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0x54, 0x99, 0x8e, 0xe4, 0x3a, 0x5b, 0x00, 0x7b, + 0xf4, 0x99, 0xf0, 0x78, 0xe7, 0x36, 0x52, 0x44, + 0x00, 0xa8, 0xb5, 0xc7, 0xe9, 0xb9, 0xb4, 0x37, + 0x71, 0x74, 0x8c, 0x7c, 0xdf, 0x88, 0x04, 0x12 }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0x30, 0xb6, 0xc6, 0xa0, 0xf2, 0xff, 0xa6, 0x80, + 0x76, 0x8f, 0x99, 0x2b, 0xa8, 0x9e, 0x15, 0x2d, + 0x5b, 0xc9, 0x89, 0x3d, 0x38, 0xc9, 0x11, 0x9b, + 0xe4, 0xf7, 0x67, 0xbf, 0xab, 0x6e, 0x0c, 0xa5 }, + .public = { 0xdc, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0xea, 0xd9, 0xb3, 0x8e, 0xfd, 0xd7, 0x23, 0x63, + 0x79, 0x34, 0xe5, 0x5a, 0xb7, 0x17, 0xa7, 0xae, + 0x09, 0xeb, 0x86, 0xa2, 0x1d, 0xc3, 0x6a, 0x3f, + 0xee, 0xb8, 0x8b, 0x75, 0x9e, 0x39, 0x1e, 0x09 }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0x90, 0x1b, 
0x9d, 0xcf, 0x88, 0x1e, 0x01, 0xe0, + 0x27, 0x57, 0x50, 0x35, 0xd4, 0x0b, 0x43, 0xbd, + 0xc1, 0xc5, 0x24, 0x2e, 0x03, 0x08, 0x47, 0x49, + 0x5b, 0x0c, 0x72, 0x86, 0x46, 0x9b, 0x65, 0x91 }, + .public = { 0xea, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0x60, 0x2f, 0xf4, 0x07, 0x89, 0xb5, 0x4b, 0x41, + 0x80, 0x59, 0x15, 0xfe, 0x2a, 0x62, 0x21, 0xf0, + 0x7a, 0x50, 0xff, 0xc2, 0xc3, 0xfc, 0x94, 0xcf, + 0x61, 0xf1, 0x3d, 0x79, 0x04, 0xe8, 0x8e, 0x0e }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0x80, 0x46, 0x67, 0x7c, 0x28, 0xfd, 0x82, 0xc9, + 0xa1, 0xbd, 0xb7, 0x1a, 0x1a, 0x1a, 0x34, 0xfa, + 0xba, 0x12, 0x25, 0xe2, 0x50, 0x7f, 0xe3, 0xf5, + 0x4d, 0x10, 0xbd, 0x5b, 0x0d, 0x86, 0x5f, 0x8e }, + .public = { 0xeb, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0xe0, 0x0a, 0xe8, 0xb1, 0x43, 0x47, 0x12, 0x47, + 0xba, 0x24, 0xf1, 0x2c, 0x88, 0x55, 0x36, 0xc3, + 0xcb, 0x98, 0x1b, 0x58, 0xe1, 0xe5, 0x6b, 0x2b, + 0xaf, 0x35, 0xc1, 0x2a, 0xe1, 0xf7, 0x9c, 0x26 }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0x60, 0x2f, 0x7e, 0x2f, 0x68, 0xa8, 0x46, 0xb8, + 0x2c, 0xc2, 0x69, 0xb1, 0xd4, 0x8e, 0x93, 0x98, + 0x86, 0xae, 0x54, 0xfd, 0x63, 0x6c, 0x1f, 0xe0, + 0x74, 0xd7, 0x10, 0x12, 0x7d, 0x47, 0x24, 0x91 }, + .public = { 0xef, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0x98, 0xcb, 0x9b, 0x50, 0xdd, 0x3f, 0xc2, 0xb0, + 0xd4, 0xf2, 0xd2, 0xbf, 0x7c, 0x5c, 0xfd, 0xd1, + 0x0c, 0x8f, 0xcd, 0x31, 0xfc, 0x40, 0xaf, 0x1a, + 0xd4, 0x4f, 0x47, 0xc1, 0x31, 0x37, 0x63, 0x62 }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0x60, 0x88, 0x7b, 0x3d, 0xc7, 0x24, 0x43, 0x02, + 0x6e, 0xbe, 0xdb, 0xbb, 0xb7, 0x06, 0x65, 0xf4, + 0x2b, 0x87, 0xad, 0xd1, 0x44, 0x0e, 0x77, 0x68, + 0xfb, 0xd7, 0xe8, 0xe2, 0xce, 0x5f, 0x63, 0x9d }, + .public = { 0xf0, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0x38, 0xd6, 0x30, 0x4c, 0x4a, 0x7e, 0x6d, 0x9f, + 0x79, 0x59, 0x33, 0x4f, 0xb5, 0x24, 0x5b, 0xd2, + 0xc7, 0x54, 0x52, 0x5d, 0x4c, 0x91, 0xdb, 0x95, + 0x02, 0x06, 0x92, 0x62, 0x34, 0xc1, 0xf6, 0x33 }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0x78, 0xd3, 0x1d, 0xfa, 0x85, 0x44, 0x97, 0xd7, + 0x2d, 0x8d, 0xef, 0x8a, 0x1b, 0x7f, 0xb0, 0x06, + 0xce, 0xc2, 0xd8, 0xc4, 0x92, 0x46, 0x47, 0xc9, + 0x38, 0x14, 0xae, 0x56, 0xfa, 0xed, 0xa4, 0x95 }, + .public = { 0xf1, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0x78, 0x6c, 0xd5, 0x49, 0x96, 0xf0, 0x14, 0xa5, + 0xa0, 0x31, 0xec, 0x14, 0xdb, 0x81, 0x2e, 0xd0, + 0x83, 0x55, 0x06, 0x1f, 0xdb, 0x5d, 0xe6, 0x80, + 0xa8, 0x00, 0xac, 0x52, 0x1f, 0x31, 0x8e, 0x23 }, + .valid = true + }, + /* wycheproof - public key >= p */ + { + .private = { 0xc0, 0x4c, 0x5b, 0xae, 0xfa, 0x83, 0x02, 
0xdd, + 0xde, 0xd6, 0xa4, 0xbb, 0x95, 0x77, 0x61, 0xb4, + 0xeb, 0x97, 0xae, 0xfa, 0x4f, 0xc3, 0xb8, 0x04, + 0x30, 0x85, 0xf9, 0x6a, 0x56, 0x59, 0xb3, 0xa5 }, + .public = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .result = { 0x29, 0xae, 0x8b, 0xc7, 0x3e, 0x9b, 0x10, 0xa0, + 0x8b, 0x4f, 0x68, 0x1c, 0x43, 0xc3, 0xe0, 0xac, + 0x1a, 0x17, 0x1d, 0x31, 0xb3, 0x8f, 0x1a, 0x48, + 0xef, 0xba, 0x29, 0xae, 0x63, 0x9e, 0xa1, 0x34 }, + .valid = true + }, + /* wycheproof - RFC 7748 */ + { + .private = { 0xa0, 0x46, 0xe3, 0x6b, 0xf0, 0x52, 0x7c, 0x9d, + 0x3b, 0x16, 0x15, 0x4b, 0x82, 0x46, 0x5e, 0xdd, + 0x62, 0x14, 0x4c, 0x0a, 0xc1, 0xfc, 0x5a, 0x18, + 0x50, 0x6a, 0x22, 0x44, 0xba, 0x44, 0x9a, 0x44 }, + .public = { 0xe6, 0xdb, 0x68, 0x67, 0x58, 0x30, 0x30, 0xdb, + 0x35, 0x94, 0xc1, 0xa4, 0x24, 0xb1, 0x5f, 0x7c, + 0x72, 0x66, 0x24, 0xec, 0x26, 0xb3, 0x35, 0x3b, + 0x10, 0xa9, 0x03, 0xa6, 0xd0, 0xab, 0x1c, 0x4c }, + .result = { 0xc3, 0xda, 0x55, 0x37, 0x9d, 0xe9, 0xc6, 0x90, + 0x8e, 0x94, 0xea, 0x4d, 0xf2, 0x8d, 0x08, 0x4f, + 0x32, 0xec, 0xcf, 0x03, 0x49, 0x1c, 0x71, 0xf7, + 0x54, 0xb4, 0x07, 0x55, 0x77, 0xa2, 0x85, 0x52 }, + .valid = true + }, + /* wycheproof - RFC 7748 */ + { + .private = { 0x48, 0x66, 0xe9, 0xd4, 0xd1, 0xb4, 0x67, 0x3c, + 0x5a, 0xd2, 0x26, 0x91, 0x95, 0x7d, 0x6a, 0xf5, + 0xc1, 0x1b, 0x64, 0x21, 0xe0, 0xea, 0x01, 0xd4, + 0x2c, 0xa4, 0x16, 0x9e, 0x79, 0x18, 0xba, 0x4d }, + .public = { 0xe5, 0x21, 0x0f, 0x12, 0x78, 0x68, 0x11, 0xd3, + 0xf4, 0xb7, 0x95, 0x9d, 0x05, 0x38, 0xae, 0x2c, + 0x31, 0xdb, 0xe7, 0x10, 0x6f, 0xc0, 0x3c, 0x3e, + 0xfc, 0x4c, 0xd5, 0x49, 0xc7, 0x15, 0xa4, 0x13 }, + .result = { 0x95, 0xcb, 0xde, 0x94, 0x76, 0xe8, 0x90, 0x7d, + 0x7a, 0xad, 0xe4, 0x5c, 0xb4, 0xb8, 0x73, 0xf8, + 0x8b, 0x59, 0x5a, 0x68, 0x79, 0x9f, 0xa1, 0x52, + 0xe6, 0xf8, 0xf7, 0x64, 0x7a, 0xac, 0x79, 0x57 }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0x0a, 0xb4, 0xe7, 0x63, 0x80, 0xd8, 0x4d, 0xde, + 0x4f, 0x68, 0x33, 0xc5, 0x8f, 0x2a, 0x9f, 0xb8, + 0xf8, 0x3b, 0xb0, 0x16, 0x9b, 0x17, 0x2b, 0xe4, + 0xb6, 0xe0, 0x59, 0x28, 0x87, 0x74, 0x1a, 0x36 }, + .result = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0x89, 0xe1, 0x0d, 0x57, 0x01, 0xb4, 0x33, 0x7d, + 0x2d, 0x03, 0x21, 0x81, 0x53, 0x8b, 0x10, 0x64, + 0xbd, 0x40, 0x84, 0x40, 0x1c, 0xec, 0xa1, 0xfd, + 0x12, 0x66, 0x3a, 0x19, 0x59, 0x38, 0x80, 0x00 }, + .result = { 0x09, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 
0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0x2b, 0x55, 0xd3, 0xaa, 0x4a, 0x8f, 0x80, 0xc8, + 0xc0, 0xb2, 0xae, 0x5f, 0x93, 0x3e, 0x85, 0xaf, + 0x49, 0xbe, 0xac, 0x36, 0xc2, 0xfa, 0x73, 0x94, + 0xba, 0xb7, 0x6c, 0x89, 0x33, 0xf8, 0xf8, 0x1d }, + .result = { 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0x63, 0xe5, 0xb1, 0xfe, 0x96, 0x01, 0xfe, 0x84, + 0x38, 0x5d, 0x88, 0x66, 0xb0, 0x42, 0x12, 0x62, + 0xf7, 0x8f, 0xbf, 0xa5, 0xaf, 0xf9, 0x58, 0x5e, + 0x62, 0x66, 0x79, 0xb1, 0x85, 0x47, 0xd9, 0x59 }, + .result = { 0xfe, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0xe4, 0x28, 0xf3, 0xda, 0xc1, 0x78, 0x09, 0xf8, + 0x27, 0xa5, 0x22, 0xce, 0x32, 0x35, 0x50, 0x58, + 0xd0, 0x73, 0x69, 0x36, 0x4a, 0xa7, 0x89, 0x02, + 0xee, 0x10, 0x13, 0x9b, 0x9f, 0x9d, 0xd6, 0x53 }, + .result = { 0xfc, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0xb3, 0xb5, 0x0e, 0x3e, 0xd3, 0xa4, 0x07, 0xb9, + 0x5d, 0xe9, 0x42, 0xef, 0x74, 0x57, 0x5b, 0x5a, + 0xb8, 0xa1, 0x0c, 0x09, 0xee, 0x10, 0x35, 0x44, + 0xd6, 0x0b, 0xdf, 0xed, 0x81, 0x38, 0xab, 0x2b }, + .result = { 0xf9, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0x21, 0x3f, 0xff, 0xe9, 0x3d, 0x5e, 0xa8, 0xcd, + 0x24, 0x2e, 0x46, 0x28, 0x44, 0x02, 0x99, 0x22, + 0xc4, 0x3c, 0x77, 0xc9, 0xe3, 0xe4, 0x2f, 0x56, + 0x2f, 0x48, 0x5d, 0x24, 0xc5, 0x01, 0xa2, 0x0b }, + .result = { 0xf3, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 
0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0x91, 0xb2, 0x32, 0xa1, 0x78, 0xb3, 0xcd, 0x53, + 0x09, 0x32, 0x44, 0x1e, 0x61, 0x39, 0x41, 0x8f, + 0x72, 0x17, 0x22, 0x92, 0xf1, 0xda, 0x4c, 0x18, + 0x34, 0xfc, 0x5e, 0xbf, 0xef, 0xb5, 0x1e, 0x3f }, + .result = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x03 }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0x04, 0x5c, 0x6e, 0x11, 0xc5, 0xd3, 0x32, 0x55, + 0x6c, 0x78, 0x22, 0xfe, 0x94, 0xeb, 0xf8, 0x9b, + 0x56, 0xa3, 0x87, 0x8d, 0xc2, 0x7c, 0xa0, 0x79, + 0x10, 0x30, 0x58, 0x84, 0x9f, 0xab, 0xcb, 0x4f }, + .result = { 0xe5, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0x1c, 0xa2, 0x19, 0x0b, 0x71, 0x16, 0x35, 0x39, + 0x06, 0x3c, 0x35, 0x77, 0x3b, 0xda, 0x0c, 0x9c, + 0x92, 0x8e, 0x91, 0x36, 0xf0, 0x62, 0x0a, 0xeb, + 0x09, 0x3f, 0x09, 0x91, 0x97, 0xb7, 0xf7, 0x4e }, + .result = { 0xe3, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0xf7, 0x6e, 0x90, 0x10, 0xac, 0x33, 0xc5, 0x04, + 0x3b, 0x2d, 0x3b, 0x76, 0xa8, 0x42, 0x17, 0x10, + 0x00, 0xc4, 0x91, 0x62, 0x22, 0xe9, 0xe8, 0x58, + 0x97, 0xa0, 0xae, 0xc7, 0xf6, 0x35, 0x0b, 0x3c }, + .result = { 0xdd, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0xbb, 0x72, 0x68, 0x8d, 0x8f, 0x8a, 0xa7, 0xa3, + 0x9c, 0xd6, 0x06, 0x0c, 0xd5, 0xc8, 0x09, 0x3c, + 0xde, 0xc6, 0xfe, 0x34, 0x19, 0x37, 0xc3, 0x88, + 0x6a, 0x99, 0x34, 0x6c, 0xd0, 0x7f, 0xaa, 0x55 }, + .result = { 0xdb, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + 
.private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0x88, 0xfd, 0xde, 0xa1, 0x93, 0x39, 0x1c, 0x6a, + 0x59, 0x33, 0xef, 0x9b, 0x71, 0x90, 0x15, 0x49, + 0x44, 0x72, 0x05, 0xaa, 0xe9, 0xda, 0x92, 0x8a, + 0x6b, 0x91, 0xa3, 0x52, 0xba, 0x10, 0xf4, 0x1f }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02 }, + .valid = true + }, + /* wycheproof - edge case for shared secret */ + { + .private = { 0xa0, 0xa4, 0xf1, 0x30, 0xb9, 0x8a, 0x5b, 0xe4, + 0xb1, 0xce, 0xdb, 0x7c, 0xb8, 0x55, 0x84, 0xa3, + 0x52, 0x0e, 0x14, 0x2d, 0x47, 0x4d, 0xc9, 0xcc, + 0xb9, 0x09, 0xa0, 0x73, 0xa9, 0x76, 0xbf, 0x63 }, + .public = { 0x30, 0x3b, 0x39, 0x2f, 0x15, 0x31, 0x16, 0xca, + 0xd9, 0xcc, 0x68, 0x2a, 0x00, 0xcc, 0xc4, 0x4c, + 0x95, 0xff, 0x0d, 0x3b, 0xbe, 0x56, 0x8b, 0xeb, + 0x6c, 0x4e, 0x73, 0x9b, 0xaf, 0xdc, 0x2c, 0x68 }, + .result = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x00 }, + .valid = true + }, + /* wycheproof - checking for overflow */ + { + .private = { 0xc8, 0x17, 0x24, 0x70, 0x40, 0x00, 0xb2, 0x6d, + 0x31, 0x70, 0x3c, 0xc9, 0x7e, 0x3a, 0x37, 0x8d, + 0x56, 0xfa, 0xd8, 0x21, 0x93, 0x61, 0xc8, 0x8c, + 0xca, 0x8b, 0xd7, 0xc5, 0x71, 0x9b, 0x12, 0xb2 }, + .public = { 0xfd, 0x30, 0x0a, 0xeb, 0x40, 0xe1, 0xfa, 0x58, + 0x25, 0x18, 0x41, 0x2b, 0x49, 0xb2, 0x08, 0xa7, + 0x84, 0x2b, 0x1e, 0x1f, 0x05, 0x6a, 0x04, 0x01, + 0x78, 0xea, 0x41, 0x41, 0x53, 0x4f, 0x65, 0x2d }, + .result = { 0xb7, 0x34, 0x10, 0x5d, 0xc2, 0x57, 0x58, 0x5d, + 0x73, 0xb5, 0x66, 0xcc, 0xb7, 0x6f, 0x06, 0x27, + 0x95, 0xcc, 0xbe, 0xc8, 0x91, 0x28, 0xe5, 0x2b, + 0x02, 0xf3, 0xe5, 0x96, 0x39, 0xf1, 0x3c, 0x46 }, + .valid = true + }, + /* wycheproof - checking for overflow */ + { + .private = { 0xc8, 0x17, 0x24, 0x70, 0x40, 0x00, 0xb2, 0x6d, + 0x31, 0x70, 0x3c, 0xc9, 0x7e, 0x3a, 0x37, 0x8d, + 0x56, 0xfa, 0xd8, 0x21, 0x93, 0x61, 0xc8, 0x8c, + 0xca, 0x8b, 0xd7, 0xc5, 0x71, 0x9b, 0x12, 0xb2 }, + .public = { 0xc8, 0xef, 0x79, 0xb5, 0x14, 0xd7, 0x68, 0x26, + 0x77, 0xbc, 0x79, 0x31, 0xe0, 0x6e, 0xe5, 0xc2, + 0x7c, 0x9b, 0x39, 0x2b, 0x4a, 0xe9, 0x48, 0x44, + 0x73, 0xf5, 0x54, 0xe6, 0x67, 0x8e, 0xcc, 0x2e }, + .result = { 0x64, 0x7a, 0x46, 0xb6, 0xfc, 0x3f, 0x40, 0xd6, + 0x21, 0x41, 0xee, 0x3c, 0xee, 0x70, 0x6b, 0x4d, + 0x7a, 0x92, 0x71, 0x59, 0x3a, 0x7b, 0x14, 0x3e, + 0x8e, 0x2e, 0x22, 0x79, 0x88, 0x3e, 0x45, 0x50 }, + .valid = true + }, + /* wycheproof - checking for overflow */ + { + .private = { 0xc8, 0x17, 0x24, 0x70, 0x40, 0x00, 0xb2, 0x6d, + 0x31, 0x70, 0x3c, 0xc9, 0x7e, 0x3a, 0x37, 0x8d, + 0x56, 0xfa, 0xd8, 0x21, 0x93, 0x61, 0xc8, 0x8c, + 0xca, 0x8b, 0xd7, 0xc5, 0x71, 0x9b, 0x12, 0xb2 }, + .public = { 0x64, 0xae, 0xac, 0x25, 0x04, 0x14, 0x48, 0x61, + 0x53, 0x2b, 0x7b, 0xbc, 0xb6, 0xc8, 0x7d, 0x67, + 0xdd, 0x4c, 0x1f, 0x07, 0xeb, 0xc2, 0xe0, 0x6e, + 0xff, 0xb9, 0x5a, 0xec, 0xc6, 0x17, 0x0b, 0x2c }, + .result = { 0x4f, 0xf0, 0x3d, 0x5f, 0xb4, 0x3c, 0xd8, 0x65, + 0x7a, 0x3c, 0xf3, 0x7c, 0x13, 0x8c, 0xad, 0xce, + 0xcc, 0xe5, 0x09, 0xe4, 0xeb, 0xa0, 0x89, 0xd0, + 0xef, 0x40, 0xb4, 0xe4, 0xfb, 0x94, 0x61, 0x55 }, + .valid = true + }, + /* wycheproof - checking for overflow */ 
+ { + .private = { 0xc8, 0x17, 0x24, 0x70, 0x40, 0x00, 0xb2, 0x6d, + 0x31, 0x70, 0x3c, 0xc9, 0x7e, 0x3a, 0x37, 0x8d, + 0x56, 0xfa, 0xd8, 0x21, 0x93, 0x61, 0xc8, 0x8c, + 0xca, 0x8b, 0xd7, 0xc5, 0x71, 0x9b, 0x12, 0xb2 }, + .public = { 0xbf, 0x68, 0xe3, 0x5e, 0x9b, 0xdb, 0x7e, 0xee, + 0x1b, 0x50, 0x57, 0x02, 0x21, 0x86, 0x0f, 0x5d, + 0xcd, 0xad, 0x8a, 0xcb, 0xab, 0x03, 0x1b, 0x14, + 0x97, 0x4c, 0xc4, 0x90, 0x13, 0xc4, 0x98, 0x31 }, + .result = { 0x21, 0xce, 0xe5, 0x2e, 0xfd, 0xbc, 0x81, 0x2e, + 0x1d, 0x02, 0x1a, 0x4a, 0xf1, 0xe1, 0xd8, 0xbc, + 0x4d, 0xb3, 0xc4, 0x00, 0xe4, 0xd2, 0xa2, 0xc5, + 0x6a, 0x39, 0x26, 0xdb, 0x4d, 0x99, 0xc6, 0x5b }, + .valid = true + }, + /* wycheproof - checking for overflow */ + { + .private = { 0xc8, 0x17, 0x24, 0x70, 0x40, 0x00, 0xb2, 0x6d, + 0x31, 0x70, 0x3c, 0xc9, 0x7e, 0x3a, 0x37, 0x8d, + 0x56, 0xfa, 0xd8, 0x21, 0x93, 0x61, 0xc8, 0x8c, + 0xca, 0x8b, 0xd7, 0xc5, 0x71, 0x9b, 0x12, 0xb2 }, + .public = { 0x53, 0x47, 0xc4, 0x91, 0x33, 0x1a, 0x64, 0xb4, + 0x3d, 0xdc, 0x68, 0x30, 0x34, 0xe6, 0x77, 0xf5, + 0x3d, 0xc3, 0x2b, 0x52, 0xa5, 0x2a, 0x57, 0x7c, + 0x15, 0xa8, 0x3b, 0xf2, 0x98, 0xe9, 0x9f, 0x19 }, + .result = { 0x18, 0xcb, 0x89, 0xe4, 0xe2, 0x0c, 0x0c, 0x2b, + 0xd3, 0x24, 0x30, 0x52, 0x45, 0x26, 0x6c, 0x93, + 0x27, 0x69, 0x0b, 0xbe, 0x79, 0xac, 0xb8, 0x8f, + 0x5b, 0x8f, 0xb3, 0xf7, 0x4e, 0xca, 0x3e, 0x52 }, + .valid = true + }, + /* wycheproof - private key == -1 (mod order) */ + { + .private = { 0xa0, 0x23, 0xcd, 0xd0, 0x83, 0xef, 0x5b, 0xb8, + 0x2f, 0x10, 0xd6, 0x2e, 0x59, 0xe1, 0x5a, 0x68, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x50 }, + .public = { 0x25, 0x8e, 0x04, 0x52, 0x3b, 0x8d, 0x25, 0x3e, + 0xe6, 0x57, 0x19, 0xfc, 0x69, 0x06, 0xc6, 0x57, + 0x19, 0x2d, 0x80, 0x71, 0x7e, 0xdc, 0x82, 0x8f, + 0xa0, 0xaf, 0x21, 0x68, 0x6e, 0x2f, 0xaa, 0x75 }, + .result = { 0x25, 0x8e, 0x04, 0x52, 0x3b, 0x8d, 0x25, 0x3e, + 0xe6, 0x57, 0x19, 0xfc, 0x69, 0x06, 0xc6, 0x57, + 0x19, 0x2d, 0x80, 0x71, 0x7e, 0xdc, 0x82, 0x8f, + 0xa0, 0xaf, 0x21, 0x68, 0x6e, 0x2f, 0xaa, 0x75 }, + .valid = true + }, + /* wycheproof - private key == 1 (mod order) on twist */ + { + .private = { 0x58, 0x08, 0x3d, 0xd2, 0x61, 0xad, 0x91, 0xef, + 0xf9, 0x52, 0x32, 0x2e, 0xc8, 0x24, 0xc6, 0x82, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x5f }, + .public = { 0x2e, 0xae, 0x5e, 0xc3, 0xdd, 0x49, 0x4e, 0x9f, + 0x2d, 0x37, 0xd2, 0x58, 0xf8, 0x73, 0xa8, 0xe6, + 0xe9, 0xd0, 0xdb, 0xd1, 0xe3, 0x83, 0xef, 0x64, + 0xd9, 0x8b, 0xb9, 0x1b, 0x3e, 0x0b, 0xe0, 0x35 }, + .result = { 0x2e, 0xae, 0x5e, 0xc3, 0xdd, 0x49, 0x4e, 0x9f, + 0x2d, 0x37, 0xd2, 0x58, 0xf8, 0x73, 0xa8, 0xe6, + 0xe9, 0xd0, 0xdb, 0xd1, 0xe3, 0x83, 0xef, 0x64, + 0xd9, 0x8b, 0xb9, 0x1b, 0x3e, 0x0b, 0xe0, 0x35 }, + .valid = true + } +}; + +static bool __init curve25519_selftest(void) +{ + bool success = true, ret, ret2; + size_t i = 0, j; + u8 in[CURVE25519_KEY_SIZE]; + u8 out[CURVE25519_KEY_SIZE], out2[CURVE25519_KEY_SIZE]; + + for (i = 0; i < ARRAY_SIZE(curve25519_test_vectors); ++i) { + memset(out, 0, CURVE25519_KEY_SIZE); + ret = curve25519(out, curve25519_test_vectors[i].private, + curve25519_test_vectors[i].public); + if (ret != curve25519_test_vectors[i].valid || + memcmp(out, curve25519_test_vectors[i].result, + CURVE25519_KEY_SIZE)) { + pr_err("curve25519 self-test %zu: FAIL\n", i + 1); + success = false; + } + } + + for (i = 0; i < 5; ++i) { + get_random_bytes(in, sizeof(in)); + ret = curve25519_generate_public(out, in); + ret2 = 
curve25519(out2, in, (u8[CURVE25519_KEY_SIZE]){ 9 }); + if (ret != ret2 || memcmp(out, out2, CURVE25519_KEY_SIZE)) { + pr_err("curve25519 basepoint self-test %zu: FAIL: input - 0x", + i + 1); + for (j = CURVE25519_KEY_SIZE; j-- > 0;) + printk(KERN_CONT "%02x", in[j]); + printk(KERN_CONT "\n"); + success = false; + } + } + + return success; +}
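For context before the next patch in the series: the selftest above exercises the plain C interface that the x86_64 code below accelerates. curve25519_generate_public() derives a public key from a secret scalar (equivalent to curve25519() against the basepoint 9, which is exactly what the second loop cross-checks), and curve25519() computes a shared secret, returning false when the result is the all-zero point so low-order inputs can be rejected. A minimal Diffie-Hellman round-trip sketch using only those two calls follows; the <zinc/curve25519.h> header name and the standalone helper are illustrative assumptions, not part of the patch.

#include <linux/random.h>
#include <linux/string.h>
#include <linux/types.h>
#include <zinc/curve25519.h>	/* assumed header name for the zinc curve25519 API */

static bool curve25519_dh_roundtrip(void)
{
	u8 a_secret[CURVE25519_KEY_SIZE], a_public[CURVE25519_KEY_SIZE];
	u8 b_secret[CURVE25519_KEY_SIZE], b_public[CURVE25519_KEY_SIZE];
	u8 ab[CURVE25519_KEY_SIZE], ba[CURVE25519_KEY_SIZE];

	/* Random secret scalars for both sides. */
	get_random_bytes(a_secret, sizeof(a_secret));
	get_random_bytes(b_secret, sizeof(b_secret));

	/* Public keys: scalar multiplication by the basepoint 9, the same
	 * operation the selftest cross-checks against curve25519(..., {9}). */
	if (!curve25519_generate_public(a_public, a_secret) ||
	    !curve25519_generate_public(b_public, b_secret))
		return false;

	/* Shared secrets; a false return indicates the all-zero (low-order)
	 * result and must be treated as a failure, as in the selftest. */
	if (!curve25519(ab, a_secret, b_public) ||
	    !curve25519(ba, b_secret, a_public))
		return false;

	/* Both sides must agree on the shared secret. */
	return !memcmp(ab, ba, CURVE25519_KEY_SIZE);
}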
Donenfeld" , Eric Biggers , Ard Biesheuvel , Linux Crypto Mailing List , linux-fscrypt@vger.kernel.org, linux-arm-kernel@lists.infradead.org, LKML , Paul Crowley , Greg Kaiser , Samuel Neves , Tomer Ashur , Martin Willi Message-Id: From: Herbert Xu Date: Fri, 22 Mar 2019 14:29:54 +0800 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Jason A. Donenfeld This implementation is the fastest available x86_64 implementation, and unlike Sandy2x, it doesn't requie use of the floating point registers at all. Instead it makes use of BMI2 and ADX, available on recent microarchitectures. The implementation was written by Armando Faz-Hernández with contributions (upstream) from Samuel Neves and me, in addition to further changes in the kernel implementation from us. Signed-off-by: Jason A. Donenfeld Signed-off-by: Samuel Neves Cc: Armando Faz-Hernández Cc: Thomas Gleixner Cc: Ingo Molnar Cc: x86@kernel.org Cc: Samuel Neves Cc: Jean-Philippe Aumasson Cc: Andy Lutomirski Cc: Greg KH Cc: Andrew Morton Cc: Linus Torvalds Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org Signed-off-by: Herbert Xu --- lib/zinc/curve25519/curve25519-x86_64-glue.c | 48 lib/zinc/curve25519/curve25519-x86_64.c | 2333 +++++++++++++++++++++++++++ lib/zinc/curve25519/curve25519.c | 4 3 files changed, 2385 insertions(+) diff --git a/lib/zinc/curve25519/curve25519-x86_64-glue.c b/lib/zinc/curve25519/curve25519-x86_64-glue.c new file mode 100644 index 000000000000..f79c1422a106 --- /dev/null +++ b/lib/zinc/curve25519/curve25519-x86_64-glue.c @@ -0,0 +1,48 @@ +// SPDX-License-Identifier: GPL-2.0 OR MIT +/* + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. + */ + +#include +#include + +#include "curve25519-x86_64.c" + +static bool curve25519_use_bmi2 __ro_after_init; +static bool curve25519_use_adx __ro_after_init; +static bool *const curve25519_nobs[] __initconst = { + &curve25519_use_bmi2, &curve25519_use_adx }; + +static void __init curve25519_fpu_init(void) +{ + curve25519_use_bmi2 = boot_cpu_has(X86_FEATURE_BMI2); + curve25519_use_adx = boot_cpu_has(X86_FEATURE_BMI2) && + boot_cpu_has(X86_FEATURE_ADX); +} + +static inline bool curve25519_arch(u8 mypublic[CURVE25519_KEY_SIZE], + const u8 secret[CURVE25519_KEY_SIZE], + const u8 basepoint[CURVE25519_KEY_SIZE]) +{ + if (curve25519_use_adx) { + curve25519_adx(mypublic, secret, basepoint); + return true; + } else if (curve25519_use_bmi2) { + curve25519_bmi2(mypublic, secret, basepoint); + return true; + } + return false; +} + +static inline bool curve25519_base_arch(u8 pub[CURVE25519_KEY_SIZE], + const u8 secret[CURVE25519_KEY_SIZE]) +{ + if (curve25519_use_adx) { + curve25519_adx_base(pub, secret); + return true; + } else if (curve25519_use_bmi2) { + curve25519_bmi2_base(pub, secret); + return true; + } + return false; +} diff --git a/lib/zinc/curve25519/curve25519-x86_64.c b/lib/zinc/curve25519/curve25519-x86_64.c new file mode 100644 index 000000000000..db5e2e619e64 --- /dev/null +++ b/lib/zinc/curve25519/curve25519-x86_64.c @@ -0,0 +1,2333 @@ +// SPDX-License-Identifier: GPL-2.0 OR LGPL-2.1 +/* + * Copyright (c) 2017 Armando Faz . All Rights Reserved. + * Copyright (C) 2018 Jason A. Donenfeld . All Rights Reserved. + * Copyright (C) 2018 Samuel Neves . All Rights Reserved. 
+ */ + +enum { NUM_WORDS_ELTFP25519 = 4 }; +typedef __aligned(32) u64 eltfp25519_1w[NUM_WORDS_ELTFP25519]; +typedef __aligned(32) u64 eltfp25519_1w_buffer[2 * NUM_WORDS_ELTFP25519]; + +#define mul_eltfp25519_1w_adx(c, a, b) do { \ + mul_256x256_integer_adx(m.buffer, a, b); \ + red_eltfp25519_1w_adx(c, m.buffer); \ +} while (0) + +#define mul_eltfp25519_1w_bmi2(c, a, b) do { \ + mul_256x256_integer_bmi2(m.buffer, a, b); \ + red_eltfp25519_1w_bmi2(c, m.buffer); \ +} while (0) + +#define sqr_eltfp25519_1w_adx(a) do { \ + sqr_256x256_integer_adx(m.buffer, a); \ + red_eltfp25519_1w_adx(a, m.buffer); \ +} while (0) + +#define sqr_eltfp25519_1w_bmi2(a) do { \ + sqr_256x256_integer_bmi2(m.buffer, a); \ + red_eltfp25519_1w_bmi2(a, m.buffer); \ +} while (0) + +#define mul_eltfp25519_2w_adx(c, a, b) do { \ + mul2_256x256_integer_adx(m.buffer, a, b); \ + red_eltfp25519_2w_adx(c, m.buffer); \ +} while (0) + +#define mul_eltfp25519_2w_bmi2(c, a, b) do { \ + mul2_256x256_integer_bmi2(m.buffer, a, b); \ + red_eltfp25519_2w_bmi2(c, m.buffer); \ +} while (0) + +#define sqr_eltfp25519_2w_adx(a) do { \ + sqr2_256x256_integer_adx(m.buffer, a); \ + red_eltfp25519_2w_adx(a, m.buffer); \ +} while (0) + +#define sqr_eltfp25519_2w_bmi2(a) do { \ + sqr2_256x256_integer_bmi2(m.buffer, a); \ + red_eltfp25519_2w_bmi2(a, m.buffer); \ +} while (0) + +#define sqrn_eltfp25519_1w_adx(a, times) do { \ + int ____counter = (times); \ + while (____counter-- > 0) \ + sqr_eltfp25519_1w_adx(a); \ +} while (0) + +#define sqrn_eltfp25519_1w_bmi2(a, times) do { \ + int ____counter = (times); \ + while (____counter-- > 0) \ + sqr_eltfp25519_1w_bmi2(a); \ +} while (0) + +#define copy_eltfp25519_1w(C, A) do { \ + (C)[0] = (A)[0]; \ + (C)[1] = (A)[1]; \ + (C)[2] = (A)[2]; \ + (C)[3] = (A)[3]; \ +} while (0) + +#define setzero_eltfp25519_1w(C) do { \ + (C)[0] = 0; \ + (C)[1] = 0; \ + (C)[2] = 0; \ + (C)[3] = 0; \ +} while (0) + +__aligned(32) static const u64 table_ladder_8k[252 * NUM_WORDS_ELTFP25519] = { + /* 1 */ 0xfffffffffffffff3UL, 0xffffffffffffffffUL, + 0xffffffffffffffffUL, 0x5fffffffffffffffUL, + /* 2 */ 0x6b8220f416aafe96UL, 0x82ebeb2b4f566a34UL, + 0xd5a9a5b075a5950fUL, 0x5142b2cf4b2488f4UL, + /* 3 */ 0x6aaebc750069680cUL, 0x89cf7820a0f99c41UL, + 0x2a58d9183b56d0f4UL, 0x4b5aca80e36011a4UL, + /* 4 */ 0x329132348c29745dUL, 0xf4a2e616e1642fd7UL, + 0x1e45bb03ff67bc34UL, 0x306912d0f42a9b4aUL, + /* 5 */ 0xff886507e6af7154UL, 0x04f50e13dfeec82fUL, + 0xaa512fe82abab5ceUL, 0x174e251a68d5f222UL, + /* 6 */ 0xcf96700d82028898UL, 0x1743e3370a2c02c5UL, + 0x379eec98b4e86eaaUL, 0x0c59888a51e0482eUL, + /* 7 */ 0xfbcbf1d699b5d189UL, 0xacaef0d58e9fdc84UL, + 0xc1c20d06231f7614UL, 0x2938218da274f972UL, + /* 8 */ 0xf6af49beff1d7f18UL, 0xcc541c22387ac9c2UL, + 0x96fcc9ef4015c56bUL, 0x69c1627c690913a9UL, + /* 9 */ 0x7a86fd2f4733db0eUL, 0xfdb8c4f29e087de9UL, + 0x095e4b1a8ea2a229UL, 0x1ad7a7c829b37a79UL, + /* 10 */ 0x342d89cad17ea0c0UL, 0x67bedda6cced2051UL, + 0x19ca31bf2bb42f74UL, 0x3df7b4c84980acbbUL, + /* 11 */ 0xa8c6444dc80ad883UL, 0xb91e440366e3ab85UL, + 0xc215cda00164f6d8UL, 0x3d867c6ef247e668UL, + /* 12 */ 0xc7dd582bcc3e658cUL, 0xfd2c4748ee0e5528UL, + 0xa0fd9b95cc9f4f71UL, 0x7529d871b0675ddfUL, + /* 13 */ 0xb8f568b42d3cbd78UL, 0x1233011b91f3da82UL, + 0x2dce6ccd4a7c3b62UL, 0x75e7fc8e9e498603UL, + /* 14 */ 0x2f4f13f1fcd0b6ecUL, 0xf1a8ca1f29ff7a45UL, + 0xc249c1a72981e29bUL, 0x6ebe0dbb8c83b56aUL, + /* 15 */ 0x7114fa8d170bb222UL, 0x65a2dcd5bf93935fUL, + 0xbdc41f68b59c979aUL, 0x2f0eef79a2ce9289UL, + /* 16 */ 0x42ecbf0c083c37ceUL, 0x2930bc09ec496322UL, + 
0xf294b0c19cfeac0dUL, 0x3780aa4bedfabb80UL, + /* 17 */ 0x56c17d3e7cead929UL, 0xe7cb4beb2e5722c5UL, + 0x0ce931732dbfe15aUL, 0x41b883c7621052f8UL, + /* 18 */ 0xdbf75ca0c3d25350UL, 0x2936be086eb1e351UL, + 0xc936e03cb4a9b212UL, 0x1d45bf82322225aaUL, + /* 19 */ 0xe81ab1036a024cc5UL, 0xe212201c304c9a72UL, + 0xc5d73fba6832b1fcUL, 0x20ffdb5a4d839581UL, + /* 20 */ 0xa283d367be5d0fadUL, 0x6c2b25ca8b164475UL, + 0x9d4935467caaf22eUL, 0x5166408eee85ff49UL, + /* 21 */ 0x3c67baa2fab4e361UL, 0xb3e433c67ef35cefUL, + 0x5259729241159b1cUL, 0x6a621892d5b0ab33UL, + /* 22 */ 0x20b74a387555cdcbUL, 0x532aa10e1208923fUL, + 0xeaa17b7762281dd1UL, 0x61ab3443f05c44bfUL, + /* 23 */ 0x257a6c422324def8UL, 0x131c6c1017e3cf7fUL, + 0x23758739f630a257UL, 0x295a407a01a78580UL, + /* 24 */ 0xf8c443246d5da8d9UL, 0x19d775450c52fa5dUL, + 0x2afcfc92731bf83dUL, 0x7d10c8e81b2b4700UL, + /* 25 */ 0xc8e0271f70baa20bUL, 0x993748867ca63957UL, + 0x5412efb3cb7ed4bbUL, 0x3196d36173e62975UL, + /* 26 */ 0xde5bcad141c7dffcUL, 0x47cc8cd2b395c848UL, + 0xa34cd942e11af3cbUL, 0x0256dbf2d04ecec2UL, + /* 27 */ 0x875ab7e94b0e667fUL, 0xcad4dd83c0850d10UL, + 0x47f12e8f4e72c79fUL, 0x5f1a87bb8c85b19bUL, + /* 28 */ 0x7ae9d0b6437f51b8UL, 0x12c7ce5518879065UL, + 0x2ade09fe5cf77aeeUL, 0x23a05a2f7d2c5627UL, + /* 29 */ 0x5908e128f17c169aUL, 0xf77498dd8ad0852dUL, + 0x74b4c4ceab102f64UL, 0x183abadd10139845UL, + /* 30 */ 0xb165ba8daa92aaacUL, 0xd5c5ef9599386705UL, + 0xbe2f8f0cf8fc40d1UL, 0x2701e635ee204514UL, + /* 31 */ 0x629fa80020156514UL, 0xf223868764a8c1ceUL, + 0x5b894fff0b3f060eUL, 0x60d9944cf708a3faUL, + /* 32 */ 0xaeea001a1c7a201fUL, 0xebf16a633ee2ce63UL, + 0x6f7709594c7a07e1UL, 0x79b958150d0208cbUL, + /* 33 */ 0x24b55e5301d410e7UL, 0xe3a34edff3fdc84dUL, + 0xd88768e4904032d8UL, 0x131384427b3aaeecUL, + /* 34 */ 0x8405e51286234f14UL, 0x14dc4739adb4c529UL, + 0xb8a2b5b250634ffdUL, 0x2fe2a94ad8a7ff93UL, + /* 35 */ 0xec5c57efe843faddUL, 0x2843ce40f0bb9918UL, + 0xa4b561d6cf3d6305UL, 0x743629bde8fb777eUL, + /* 36 */ 0x343edd46bbaf738fUL, 0xed981828b101a651UL, + 0xa401760b882c797aUL, 0x1fc223e28dc88730UL, + /* 37 */ 0x48604e91fc0fba0eUL, 0xb637f78f052c6fa4UL, + 0x91ccac3d09e9239cUL, 0x23f7eed4437a687cUL, + /* 38 */ 0x5173b1118d9bd800UL, 0x29d641b63189d4a7UL, + 0xfdbf177988bbc586UL, 0x2959894fcad81df5UL, + /* 39 */ 0xaebc8ef3b4bbc899UL, 0x4148995ab26992b9UL, + 0x24e20b0134f92cfbUL, 0x40d158894a05dee8UL, + /* 40 */ 0x46b00b1185af76f6UL, 0x26bac77873187a79UL, + 0x3dc0bf95ab8fff5fUL, 0x2a608bd8945524d7UL, + /* 41 */ 0x26449588bd446302UL, 0x7c4bc21c0388439cUL, + 0x8e98a4f383bd11b2UL, 0x26218d7bc9d876b9UL, + /* 42 */ 0xe3081542997c178aUL, 0x3c2d29a86fb6606fUL, + 0x5c217736fa279374UL, 0x7dde05734afeb1faUL, + /* 43 */ 0x3bf10e3906d42babUL, 0xe4f7803e1980649cUL, + 0xe6053bf89595bf7aUL, 0x394faf38da245530UL, + /* 44 */ 0x7a8efb58896928f4UL, 0xfbc778e9cc6a113cUL, + 0x72670ce330af596fUL, 0x48f222a81d3d6cf7UL, + /* 45 */ 0xf01fce410d72caa7UL, 0x5a20ecc7213b5595UL, + 0x7bc21165c1fa1483UL, 0x07f89ae31da8a741UL, + /* 46 */ 0x05d2c2b4c6830ff9UL, 0xd43e330fc6316293UL, + 0xa5a5590a96d3a904UL, 0x705edb91a65333b6UL, + /* 47 */ 0x048ee15e0bb9a5f7UL, 0x3240cfca9e0aaf5dUL, + 0x8f4b71ceedc4a40bUL, 0x621c0da3de544a6dUL, + /* 48 */ 0x92872836a08c4091UL, 0xce8375b010c91445UL, + 0x8a72eb524f276394UL, 0x2667fcfa7ec83635UL, + /* 49 */ 0x7f4c173345e8752aUL, 0x061b47feee7079a5UL, + 0x25dd9afa9f86ff34UL, 0x3780cef5425dc89cUL, + /* 50 */ 0x1a46035a513bb4e9UL, 0x3e1ef379ac575adaUL, + 0xc78c5f1c5fa24b50UL, 0x321a967634fd9f22UL, + /* 51 */ 0x946707b8826e27faUL, 0x3dca84d64c506fd0UL, + 
0xc189218075e91436UL, 0x6d9284169b3b8484UL, + /* 52 */ 0x3a67e840383f2ddfUL, 0x33eec9a30c4f9b75UL, + 0x3ec7c86fa783ef47UL, 0x26ec449fbac9fbc4UL, + /* 53 */ 0x5c0f38cba09b9e7dUL, 0x81168cc762a3478cUL, + 0x3e23b0d306fc121cUL, 0x5a238aa0a5efdcddUL, + /* 54 */ 0x1ba26121c4ea43ffUL, 0x36f8c77f7c8832b5UL, + 0x88fbea0b0adcf99aUL, 0x5ca9938ec25bebf9UL, + /* 55 */ 0xd5436a5e51fccda0UL, 0x1dbc4797c2cd893bUL, + 0x19346a65d3224a08UL, 0x0f5034e49b9af466UL, + /* 56 */ 0xf23c3967a1e0b96eUL, 0xe58b08fa867a4d88UL, + 0xfb2fabc6a7341679UL, 0x2a75381eb6026946UL, + /* 57 */ 0xc80a3be4c19420acUL, 0x66b1f6c681f2b6dcUL, + 0x7cf7036761e93388UL, 0x25abbbd8a660a4c4UL, + /* 58 */ 0x91ea12ba14fd5198UL, 0x684950fc4a3cffa9UL, + 0xf826842130f5ad28UL, 0x3ea988f75301a441UL, + /* 59 */ 0xc978109a695f8c6fUL, 0x1746eb4a0530c3f3UL, + 0x444d6d77b4459995UL, 0x75952b8c054e5cc7UL, + /* 60 */ 0xa3703f7915f4d6aaUL, 0x66c346202f2647d8UL, + 0xd01469df811d644bUL, 0x77fea47d81a5d71fUL, + /* 61 */ 0xc5e9529ef57ca381UL, 0x6eeeb4b9ce2f881aUL, + 0xb6e91a28e8009bd6UL, 0x4b80be3e9afc3fecUL, + /* 62 */ 0x7e3773c526aed2c5UL, 0x1b4afcb453c9a49dUL, + 0xa920bdd7baffb24dUL, 0x7c54699f122d400eUL, + /* 63 */ 0xef46c8e14fa94bc8UL, 0xe0b074ce2952ed5eUL, + 0xbea450e1dbd885d5UL, 0x61b68649320f712cUL, + /* 64 */ 0x8a485f7309ccbdd1UL, 0xbd06320d7d4d1a2dUL, + 0x25232973322dbef4UL, 0x445dc4758c17f770UL, + /* 65 */ 0xdb0434177cc8933cUL, 0xed6fe82175ea059fUL, + 0x1efebefdc053db34UL, 0x4adbe867c65daf99UL, + /* 66 */ 0x3acd71a2a90609dfUL, 0xe5e991856dd04050UL, + 0x1ec69b688157c23cUL, 0x697427f6885cfe4dUL, + /* 67 */ 0xd7be7b9b65e1a851UL, 0xa03d28d522c536ddUL, + 0x28399d658fd2b645UL, 0x49e5b7e17c2641e1UL, + /* 68 */ 0x6f8c3a98700457a4UL, 0x5078f0a25ebb6778UL, + 0xd13c3ccbc382960fUL, 0x2e003258a7df84b1UL, + /* 69 */ 0x8ad1f39be6296a1cUL, 0xc1eeaa652a5fbfb2UL, + 0x33ee0673fd26f3cbUL, 0x59256173a69d2cccUL, + /* 70 */ 0x41ea07aa4e18fc41UL, 0xd9fc19527c87a51eUL, + 0xbdaacb805831ca6fUL, 0x445b652dc916694fUL, + /* 71 */ 0xce92a3a7f2172315UL, 0x1edc282de11b9964UL, + 0xa1823aafe04c314aUL, 0x790a2d94437cf586UL, + /* 72 */ 0x71c447fb93f6e009UL, 0x8922a56722845276UL, + 0xbf70903b204f5169UL, 0x2f7a89891ba319feUL, + /* 73 */ 0x02a08eb577e2140cUL, 0xed9a4ed4427bdcf4UL, + 0x5253ec44e4323cd1UL, 0x3e88363c14e9355bUL, + /* 74 */ 0xaa66c14277110b8cUL, 0x1ae0391610a23390UL, + 0x2030bd12c93fc2a2UL, 0x3ee141579555c7abUL, + /* 75 */ 0x9214de3a6d6e7d41UL, 0x3ccdd88607f17efeUL, + 0x674f1288f8e11217UL, 0x5682250f329f93d0UL, + /* 76 */ 0x6cf00b136d2e396eUL, 0x6e4cf86f1014debfUL, + 0x5930b1b5bfcc4e83UL, 0x047069b48aba16b6UL, + /* 77 */ 0x0d4ce4ab69b20793UL, 0xb24db91a97d0fb9eUL, + 0xcdfa50f54e00d01dUL, 0x221b1085368bddb5UL, + /* 78 */ 0xe7e59468b1e3d8d2UL, 0x53c56563bd122f93UL, + 0xeee8a903e0663f09UL, 0x61efa662cbbe3d42UL, + /* 79 */ 0x2cf8ddddde6eab2aUL, 0x9bf80ad51435f231UL, + 0x5deadacec9f04973UL, 0x29275b5d41d29b27UL, + /* 80 */ 0xcfde0f0895ebf14fUL, 0xb9aab96b054905a7UL, + 0xcae80dd9a1c420fdUL, 0x0a63bf2f1673bbc7UL, + /* 81 */ 0x092f6e11958fbc8cUL, 0x672a81e804822fadUL, + 0xcac8351560d52517UL, 0x6f3f7722c8f192f8UL, + /* 82 */ 0xf8ba90ccc2e894b7UL, 0x2c7557a438ff9f0dUL, + 0x894d1d855ae52359UL, 0x68e122157b743d69UL, + /* 83 */ 0xd87e5570cfb919f3UL, 0x3f2cdecd95798db9UL, + 0x2121154710c0a2ceUL, 0x3c66a115246dc5b2UL, + /* 84 */ 0xcbedc562294ecb72UL, 0xba7143c36a280b16UL, + 0x9610c2efd4078b67UL, 0x6144735d946a4b1eUL, + /* 85 */ 0x536f111ed75b3350UL, 0x0211db8c2041d81bUL, + 0xf93cb1000e10413cUL, 0x149dfd3c039e8876UL, + /* 86 */ 0xd479dde46b63155bUL, 0xb66e15e93c837976UL, + 
0xdafde43b1f13e038UL, 0x5fafda1a2e4b0b35UL, + /* 87 */ 0x3600bbdf17197581UL, 0x3972050bbe3cd2c2UL, + 0x5938906dbdd5be86UL, 0x34fce5e43f9b860fUL, + /* 88 */ 0x75a8a4cd42d14d02UL, 0x828dabc53441df65UL, + 0x33dcabedd2e131d3UL, 0x3ebad76fb814d25fUL, + /* 89 */ 0xd4906f566f70e10fUL, 0x5d12f7aa51690f5aUL, + 0x45adb16e76cefcf2UL, 0x01f768aead232999UL, + /* 90 */ 0x2b6cc77b6248febdUL, 0x3cd30628ec3aaffdUL, + 0xce1c0b80d4ef486aUL, 0x4c3bff2ea6f66c23UL, + /* 91 */ 0x3f2ec4094aeaeb5fUL, 0x61b19b286e372ca7UL, + 0x5eefa966de2a701dUL, 0x23b20565de55e3efUL, + /* 92 */ 0xe301ca5279d58557UL, 0x07b2d4ce27c2874fUL, + 0xa532cd8a9dcf1d67UL, 0x2a52fee23f2bff56UL, + /* 93 */ 0x8624efb37cd8663dUL, 0xbbc7ac20ffbd7594UL, + 0x57b85e9c82d37445UL, 0x7b3052cb86a6ec66UL, + /* 94 */ 0x3482f0ad2525e91eUL, 0x2cb68043d28edca0UL, + 0xaf4f6d052e1b003aUL, 0x185f8c2529781b0aUL, + /* 95 */ 0xaa41de5bd80ce0d6UL, 0x9407b2416853e9d6UL, + 0x563ec36e357f4c3aUL, 0x4cc4b8dd0e297bceUL, + /* 96 */ 0xa2fc1a52ffb8730eUL, 0x1811f16e67058e37UL, + 0x10f9a366cddf4ee1UL, 0x72f4a0c4a0b9f099UL, + /* 97 */ 0x8c16c06f663f4ea7UL, 0x693b3af74e970fbaUL, + 0x2102e7f1d69ec345UL, 0x0ba53cbc968a8089UL, + /* 98 */ 0xca3d9dc7fea15537UL, 0x4c6824bb51536493UL, + 0xb9886314844006b1UL, 0x40d2a72ab454cc60UL, + /* 99 */ 0x5936a1b712570975UL, 0x91b9d648debda657UL, + 0x3344094bb64330eaUL, 0x006ba10d12ee51d0UL, + /* 100 */ 0x19228468f5de5d58UL, 0x0eb12f4c38cc05b0UL, + 0xa1039f9dd5601990UL, 0x4502d4ce4fff0e0bUL, + /* 101 */ 0xeb2054106837c189UL, 0xd0f6544c6dd3b93cUL, + 0x40727064c416d74fUL, 0x6e15c6114b502ef0UL, + /* 102 */ 0x4df2a398cfb1a76bUL, 0x11256c7419f2f6b1UL, + 0x4a497962066e6043UL, 0x705b3aab41355b44UL, + /* 103 */ 0x365ef536d797b1d8UL, 0x00076bd622ddf0dbUL, + 0x3bbf33b0e0575a88UL, 0x3777aa05c8e4ca4dUL, + /* 104 */ 0x392745c85578db5fUL, 0x6fda4149dbae5ae2UL, + 0xb1f0b00b8adc9867UL, 0x09963437d36f1da3UL, + /* 105 */ 0x7e824e90a5dc3853UL, 0xccb5f6641f135cbdUL, + 0x6736d86c87ce8fccUL, 0x625f3ce26604249fUL, + /* 106 */ 0xaf8ac8059502f63fUL, 0x0c05e70a2e351469UL, + 0x35292e9c764b6305UL, 0x1a394360c7e23ac3UL, + /* 107 */ 0xd5c6d53251183264UL, 0x62065abd43c2b74fUL, + 0xb5fbf5d03b973f9bUL, 0x13a3da3661206e5eUL, + /* 108 */ 0xc6bd5837725d94e5UL, 0x18e30912205016c5UL, + 0x2088ce1570033c68UL, 0x7fba1f495c837987UL, + /* 109 */ 0x5a8c7423f2f9079dUL, 0x1735157b34023fc5UL, + 0xe4f9b49ad2fab351UL, 0x6691ff72c878e33cUL, + /* 110 */ 0x122c2adedc5eff3eUL, 0xf8dd4bf1d8956cf4UL, + 0xeb86205d9e9e5bdaUL, 0x049b92b9d975c743UL, + /* 111 */ 0xa5379730b0f6c05aUL, 0x72a0ffacc6f3a553UL, + 0xb0032c34b20dcd6dUL, 0x470e9dbc88d5164aUL, + /* 112 */ 0xb19cf10ca237c047UL, 0xb65466711f6c81a2UL, + 0xb3321bd16dd80b43UL, 0x48c14f600c5fbe8eUL, + /* 113 */ 0x66451c264aa6c803UL, 0xb66e3904a4fa7da6UL, + 0xd45f19b0b3128395UL, 0x31602627c3c9bc10UL, + /* 114 */ 0x3120dc4832e4e10dUL, 0xeb20c46756c717f7UL, + 0x00f52e3f67280294UL, 0x566d4fc14730c509UL, + /* 115 */ 0x7e3a5d40fd837206UL, 0xc1e926dc7159547aUL, + 0x216730fba68d6095UL, 0x22e8c3843f69cea7UL, + /* 116 */ 0x33d074e8930e4b2bUL, 0xb6e4350e84d15816UL, + 0x5534c26ad6ba2365UL, 0x7773c12f89f1f3f3UL, + /* 117 */ 0x8cba404da57962aaUL, 0x5b9897a81999ce56UL, + 0x508e862f121692fcUL, 0x3a81907fa093c291UL, + /* 118 */ 0x0dded0ff4725a510UL, 0x10d8cc10673fc503UL, + 0x5b9d151c9f1f4e89UL, 0x32a5c1d5cb09a44cUL, + /* 119 */ 0x1e0aa442b90541fbUL, 0x5f85eb7cc1b485dbUL, + 0xbee595ce8a9df2e5UL, 0x25e496c722422236UL, + /* 120 */ 0x5edf3c46cd0fe5b9UL, 0x34e75a7ed2a43388UL, + 0xe488de11d761e352UL, 0x0e878a01a085545cUL, + /* 121 */ 0xba493c77e021bb04UL, 0x2b4d1843c7df899aUL, 
+ 0x9ea37a487ae80d67UL, 0x67a9958011e41794UL, + /* 122 */ 0x4b58051a6697b065UL, 0x47e33f7d8d6ba6d4UL, + 0xbb4da8d483ca46c1UL, 0x68becaa181c2db0dUL, + /* 123 */ 0x8d8980e90b989aa5UL, 0xf95eb14a2c93c99bUL, + 0x51c6c7c4796e73a2UL, 0x6e228363b5efb569UL, + /* 124 */ 0xc6bbc0b02dd624c8UL, 0x777eb47dec8170eeUL, + 0x3cde15a004cfafa9UL, 0x1dc6bc087160bf9bUL, + /* 125 */ 0x2e07e043eec34002UL, 0x18e9fc677a68dc7fUL, + 0xd8da03188bd15b9aUL, 0x48fbc3bb00568253UL, + /* 126 */ 0x57547d4cfb654ce1UL, 0xd3565b82a058e2adUL, + 0xf63eaf0bbf154478UL, 0x47531ef114dfbb18UL, + /* 127 */ 0xe1ec630a4278c587UL, 0x5507d546ca8e83f3UL, + 0x85e135c63adc0c2bUL, 0x0aa7efa85682844eUL, + /* 128 */ 0x72691ba8b3e1f615UL, 0x32b4e9701fbe3ffaUL, + 0x97b6d92e39bb7868UL, 0x2cfe53dea02e39e8UL, + /* 129 */ 0x687392cd85cd52b0UL, 0x27ff66c910e29831UL, + 0x97134556a9832d06UL, 0x269bb0360a84f8a0UL, + /* 130 */ 0x706e55457643f85cUL, 0x3734a48c9b597d1bUL, + 0x7aee91e8c6efa472UL, 0x5cd6abc198a9d9e0UL, + /* 131 */ 0x0e04de06cb3ce41aUL, 0xd8c6eb893402e138UL, + 0x904659bb686e3772UL, 0x7215c371746ba8c8UL, + /* 132 */ 0xfd12a97eeae4a2d9UL, 0x9514b7516394f2c5UL, + 0x266fd5809208f294UL, 0x5c847085619a26b9UL, + /* 133 */ 0x52985410fed694eaUL, 0x3c905b934a2ed254UL, + 0x10bb47692d3be467UL, 0x063b3d2d69e5e9e1UL, + /* 134 */ 0x472726eedda57debUL, 0xefb6c4ae10f41891UL, + 0x2b1641917b307614UL, 0x117c554fc4f45b7cUL, + /* 135 */ 0xc07cf3118f9d8812UL, 0x01dbd82050017939UL, + 0xd7e803f4171b2827UL, 0x1015e87487d225eaUL, + /* 136 */ 0xc58de3fed23acc4dUL, 0x50db91c294a7be2dUL, + 0x0b94d43d1c9cf457UL, 0x6b1640fa6e37524aUL, + /* 137 */ 0x692f346c5fda0d09UL, 0x200b1c59fa4d3151UL, + 0xb8c46f760777a296UL, 0x4b38395f3ffdfbcfUL, + /* 138 */ 0x18d25e00be54d671UL, 0x60d50582bec8aba6UL, + 0x87ad8f263b78b982UL, 0x50fdf64e9cda0432UL, + /* 139 */ 0x90f567aac578dcf0UL, 0xef1e9b0ef2a3133bUL, + 0x0eebba9242d9de71UL, 0x15473c9bf03101c7UL, + /* 140 */ 0x7c77e8ae56b78095UL, 0xb678e7666e6f078eUL, + 0x2da0b9615348ba1fUL, 0x7cf931c1ff733f0bUL, + /* 141 */ 0x26b357f50a0a366cUL, 0xe9708cf42b87d732UL, + 0xc13aeea5f91cb2c0UL, 0x35d90c991143bb4cUL, + /* 142 */ 0x47c1c404a9a0d9dcUL, 0x659e58451972d251UL, + 0x3875a8c473b38c31UL, 0x1fbd9ed379561f24UL, + /* 143 */ 0x11fabc6fd41ec28dUL, 0x7ef8dfe3cd2a2dcaUL, + 0x72e73b5d8c404595UL, 0x6135fa4954b72f27UL, + /* 144 */ 0xccfc32a2de24b69cUL, 0x3f55698c1f095d88UL, + 0xbe3350ed5ac3f929UL, 0x5e9bf806ca477eebUL, + /* 145 */ 0xe9ce8fb63c309f68UL, 0x5376f63565e1f9f4UL, + 0xd1afcfb35a6393f1UL, 0x6632a1ede5623506UL, + /* 146 */ 0x0b7d6c390c2ded4cUL, 0x56cb3281df04cb1fUL, + 0x66305a1249ecc3c7UL, 0x5d588b60a38ca72aUL, + /* 147 */ 0xa6ecbf78e8e5f42dUL, 0x86eeb44b3c8a3eecUL, + 0xec219c48fbd21604UL, 0x1aaf1af517c36731UL, + /* 148 */ 0xc306a2836769bde7UL, 0x208280622b1e2adbUL, + 0x8027f51ffbff94a6UL, 0x76cfa1ce1124f26bUL, + /* 149 */ 0x18eb00562422abb6UL, 0xf377c4d58f8c29c3UL, + 0x4dbbc207f531561aUL, 0x0253b7f082128a27UL, + /* 150 */ 0x3d1f091cb62c17e0UL, 0x4860e1abd64628a9UL, + 0x52d17436309d4253UL, 0x356f97e13efae576UL, + /* 151 */ 0xd351e11aa150535bUL, 0x3e6b45bb1dd878ccUL, + 0x0c776128bed92c98UL, 0x1d34ae93032885b8UL, + /* 152 */ 0x4ba0488ca85ba4c3UL, 0x985348c33c9ce6ceUL, + 0x66124c6f97bda770UL, 0x0f81a0290654124aUL, + /* 153 */ 0x9ed09ca6569b86fdUL, 0x811009fd18af9a2dUL, + 0xff08d03f93d8c20aUL, 0x52a148199faef26bUL, + /* 154 */ 0x3e03f9dc2d8d1b73UL, 0x4205801873961a70UL, + 0xc0d987f041a35970UL, 0x07aa1f15a1c0d549UL, + /* 155 */ 0xdfd46ce08cd27224UL, 0x6d0a024f934e4239UL, + 0x808a7a6399897b59UL, 0x0a4556e9e13d95a2UL, + /* 156 */ 0xd21a991fe9c13045UL, 
0x9b0e8548fe7751b8UL, + 0x5da643cb4bf30035UL, 0x77db28d63940f721UL, + /* 157 */ 0xfc5eeb614adc9011UL, 0x5229419ae8c411ebUL, + 0x9ec3e7787d1dcf74UL, 0x340d053e216e4cb5UL, + /* 158 */ 0xcac7af39b48df2b4UL, 0xc0faec2871a10a94UL, + 0x140a69245ca575edUL, 0x0cf1c37134273a4cUL, + /* 159 */ 0xc8ee306ac224b8a5UL, 0x57eaee7ccb4930b0UL, + 0xa1e806bdaacbe74fUL, 0x7d9a62742eeb657dUL, + /* 160 */ 0x9eb6b6ef546c4830UL, 0x885cca1fddb36e2eUL, + 0xe6b9f383ef0d7105UL, 0x58654fef9d2e0412UL, + /* 161 */ 0xa905c4ffbe0e8e26UL, 0x942de5df9b31816eUL, + 0x497d723f802e88e1UL, 0x30684dea602f408dUL, + /* 162 */ 0x21e5a278a3e6cb34UL, 0xaefb6e6f5b151dc4UL, + 0xb30b8e049d77ca15UL, 0x28c3c9cf53b98981UL, + /* 163 */ 0x287fb721556cdd2aUL, 0x0d317ca897022274UL, + 0x7468c7423a543258UL, 0x4a7f11464eb5642fUL, + /* 164 */ 0xa237a4774d193aa6UL, 0xd865986ea92129a1UL, + 0x24c515ecf87c1a88UL, 0x604003575f39f5ebUL, + /* 165 */ 0x47b9f189570a9b27UL, 0x2b98cede465e4b78UL, + 0x026df551dbb85c20UL, 0x74fcd91047e21901UL, + /* 166 */ 0x13e2a90a23c1bfa3UL, 0x0cb0074e478519f6UL, + 0x5ff1cbbe3af6cf44UL, 0x67fe5438be812dbeUL, + /* 167 */ 0xd13cf64fa40f05b0UL, 0x054dfb2f32283787UL, + 0x4173915b7f0d2aeaUL, 0x482f144f1f610d4eUL, + /* 168 */ 0xf6210201b47f8234UL, 0x5d0ae1929e70b990UL, + 0xdcd7f455b049567cUL, 0x7e93d0f1f0916f01UL, + /* 169 */ 0xdd79cbf18a7db4faUL, 0xbe8391bf6f74c62fUL, + 0x027145d14b8291bdUL, 0x585a73ea2cbf1705UL, + /* 170 */ 0x485ca03e928a0db2UL, 0x10fc01a5742857e7UL, + 0x2f482edbd6d551a7UL, 0x0f0433b5048fdb8aUL, + /* 171 */ 0x60da2e8dd7dc6247UL, 0x88b4c9d38cd4819aUL, + 0x13033ac001f66697UL, 0x273b24fe3b367d75UL, + /* 172 */ 0xc6e8f66a31b3b9d4UL, 0x281514a494df49d5UL, + 0xd1726fdfc8b23da7UL, 0x4b3ae7d103dee548UL, + /* 173 */ 0xc6256e19ce4b9d7eUL, 0xff5c5cf186e3c61cUL, + 0xacc63ca34b8ec145UL, 0x74621888fee66574UL, + /* 174 */ 0x956f409645290a1eUL, 0xef0bf8e3263a962eUL, + 0xed6a50eb5ec2647bUL, 0x0694283a9dca7502UL, + /* 175 */ 0x769b963643a2dcd1UL, 0x42b7c8ea09fc5353UL, + 0x4f002aee13397eabUL, 0x63005e2c19b7d63aUL, + /* 176 */ 0xca6736da63023beaUL, 0x966c7f6db12a99b7UL, + 0xace09390c537c5e1UL, 0x0b696063a1aa89eeUL, + /* 177 */ 0xebb03e97288c56e5UL, 0x432a9f9f938c8be8UL, + 0xa6a5a93d5b717f71UL, 0x1a5fb4c3e18f9d97UL, + /* 178 */ 0x1c94e7ad1c60cdceUL, 0xee202a43fc02c4a0UL, + 0x8dafe4d867c46a20UL, 0x0a10263c8ac27b58UL, + /* 179 */ 0xd0dea9dfe4432a4aUL, 0x856af87bbe9277c5UL, + 0xce8472acc212c71aUL, 0x6f151b6d9bbb1e91UL, + /* 180 */ 0x26776c527ceed56aUL, 0x7d211cb7fbf8faecUL, + 0x37ae66a6fd4609ccUL, 0x1f81b702d2770c42UL, + /* 181 */ 0x2fb0b057eac58392UL, 0xe1dd89fe29744e9dUL, + 0xc964f8eb17beb4f8UL, 0x29571073c9a2d41eUL, + /* 182 */ 0xa948a18981c0e254UL, 0x2df6369b65b22830UL, + 0xa33eb2d75fcfd3c6UL, 0x078cd6ec4199a01fUL, + /* 183 */ 0x4a584a41ad900d2fUL, 0x32142b78e2c74c52UL, + 0x68c4e8338431c978UL, 0x7f69ea9008689fc2UL, + /* 184 */ 0x52f2c81e46a38265UL, 0xfd78072d04a832fdUL, + 0x8cd7d5fa25359e94UL, 0x4de71b7454cc29d2UL, + /* 185 */ 0x42eb60ad1eda6ac9UL, 0x0aad37dfdbc09c3aUL, + 0x81004b71e33cc191UL, 0x44e6be345122803cUL, + /* 186 */ 0x03fe8388ba1920dbUL, 0xf5d57c32150db008UL, + 0x49c8c4281af60c29UL, 0x21edb518de701aeeUL, + /* 187 */ 0x7fb63e418f06dc99UL, 0xa4460d99c166d7b8UL, + 0x24dd5248ce520a83UL, 0x5ec3ad712b928358UL, + /* 188 */ 0x15022a5fbd17930fUL, 0xa4f64a77d82570e3UL, + 0x12bc8d6915783712UL, 0x498194c0fc620abbUL, + /* 189 */ 0x38a2d9d255686c82UL, 0x785c6bd9193e21f0UL, + 0xe4d5c81ab24a5484UL, 0x56307860b2e20989UL, + /* 190 */ 0x429d55f78b4d74c4UL, 0x22f1834643350131UL, + 0x1e60c24598c71fffUL, 0x59f2f014979983efUL, + /* 191 */ 
0x46a47d56eb494a44UL, 0x3e22a854d636a18eUL, + 0xb346e15274491c3bUL, 0x2ceafd4e5390cde7UL, + /* 192 */ 0xba8a8538be0d6675UL, 0x4b9074bb50818e23UL, + 0xcbdab89085d304c3UL, 0x61a24fe0e56192c4UL, + /* 193 */ 0xcb7615e6db525bcbUL, 0xdd7d8c35a567e4caUL, + 0xe6b4153acafcdd69UL, 0x2d668e097f3c9766UL, + /* 194 */ 0xa57e7e265ce55ef0UL, 0x5d9f4e527cd4b967UL, + 0xfbc83606492fd1e5UL, 0x090d52beb7c3f7aeUL, + /* 195 */ 0x09b9515a1e7b4d7cUL, 0x1f266a2599da44c0UL, + 0xa1c49548e2c55504UL, 0x7ef04287126f15ccUL, + /* 196 */ 0xfed1659dbd30ef15UL, 0x8b4ab9eec4e0277bUL, + 0x884d6236a5df3291UL, 0x1fd96ea6bf5cf788UL, + /* 197 */ 0x42a161981f190d9aUL, 0x61d849507e6052c1UL, + 0x9fe113bf285a2cd5UL, 0x7c22d676dbad85d8UL, + /* 198 */ 0x82e770ed2bfbd27dUL, 0x4c05b2ece996f5a5UL, + 0xcd40a9c2b0900150UL, 0x5895319213d9bf64UL, + /* 199 */ 0xe7cc5d703fea2e08UL, 0xb50c491258e2188cUL, + 0xcce30baa48205bf0UL, 0x537c659ccfa32d62UL, + /* 200 */ 0x37b6623a98cfc088UL, 0xfe9bed1fa4d6aca4UL, + 0x04d29b8e56a8d1b0UL, 0x725f71c40b519575UL, + /* 201 */ 0x28c7f89cd0339ce6UL, 0x8367b14469ddc18bUL, + 0x883ada83a6a1652cUL, 0x585f1974034d6c17UL, + /* 202 */ 0x89cfb266f1b19188UL, 0xe63b4863e7c35217UL, + 0xd88c9da6b4c0526aUL, 0x3e035c9df0954635UL, + /* 203 */ 0xdd9d5412fb45de9dUL, 0xdd684532e4cff40dUL, + 0x4b5c999b151d671cUL, 0x2d8c2cc811e7f690UL, + /* 204 */ 0x7f54be1d90055d40UL, 0xa464c5df464aaf40UL, + 0x33979624f0e917beUL, 0x2c018dc527356b30UL, + /* 205 */ 0xa5415024e330b3d4UL, 0x73ff3d96691652d3UL, + 0x94ec42c4ef9b59f1UL, 0x0747201618d08e5aUL, + /* 206 */ 0x4d6ca48aca411c53UL, 0x66415f2fcfa66119UL, + 0x9c4dd40051e227ffUL, 0x59810bc09a02f7ebUL, + /* 207 */ 0x2a7eb171b3dc101dUL, 0x441c5ab99ffef68eUL, + 0x32025c9b93b359eaUL, 0x5e8ce0a71e9d112fUL, + /* 208 */ 0xbfcccb92429503fdUL, 0xd271ba752f095d55UL, + 0x345ead5e972d091eUL, 0x18c8df11a83103baUL, + /* 209 */ 0x90cd949a9aed0f4cUL, 0xc5d1f4cb6660e37eUL, + 0xb8cac52d56c52e0bUL, 0x6e42e400c5808e0dUL, + /* 210 */ 0xa3b46966eeaefd23UL, 0x0c4f1f0be39ecdcaUL, + 0x189dc8c9d683a51dUL, 0x51f27f054c09351bUL, + /* 211 */ 0x4c487ccd2a320682UL, 0x587ea95bb3df1c96UL, + 0xc8ccf79e555cb8e8UL, 0x547dc829a206d73dUL, + /* 212 */ 0xb822a6cd80c39b06UL, 0xe96d54732000d4c6UL, + 0x28535b6f91463b4dUL, 0x228f4660e2486e1dUL, + /* 213 */ 0x98799538de8d3abfUL, 0x8cd8330045ebca6eUL, + 0x79952a008221e738UL, 0x4322e1a7535cd2bbUL, + /* 214 */ 0xb114c11819d1801cUL, 0x2016e4d84f3f5ec7UL, + 0xdd0e2df409260f4cUL, 0x5ec362c0ae5f7266UL, + /* 215 */ 0xc0462b18b8b2b4eeUL, 0x7cc8d950274d1afbUL, + 0xf25f7105436b02d2UL, 0x43bbf8dcbff9ccd3UL, + /* 216 */ 0xb6ad1767a039e9dfUL, 0xb0714da8f69d3583UL, + 0x5e55fa18b42931f5UL, 0x4ed5558f33c60961UL, + /* 217 */ 0x1fe37901c647a5ddUL, 0x593ddf1f8081d357UL, + 0x0249a4fd813fd7a6UL, 0x69acca274e9caf61UL, + /* 218 */ 0x047ba3ea330721c9UL, 0x83423fc20e7e1ea0UL, + 0x1df4c0af01314a60UL, 0x09a62dab89289527UL, + /* 219 */ 0xa5b325a49cc6cb00UL, 0xe94b5dc654b56cb6UL, + 0x3be28779adc994a0UL, 0x4296e8f8ba3a4aadUL, + /* 220 */ 0x328689761e451eabUL, 0x2e4d598bff59594aUL, + 0x49b96853d7a7084aUL, 0x4980a319601420a8UL, + /* 221 */ 0x9565b9e12f552c42UL, 0x8a5318db7100fe96UL, + 0x05c90b4d43add0d7UL, 0x538b4cd66a5d4edaUL, + /* 222 */ 0xf4e94fc3e89f039fUL, 0x592c9af26f618045UL, + 0x08a36eb5fd4b9550UL, 0x25fffaf6c2ed1419UL, + /* 223 */ 0x34434459cc79d354UL, 0xeeecbfb4b1d5476bUL, + 0xddeb34a061615d99UL, 0x5129cecceb64b773UL, + /* 224 */ 0xee43215894993520UL, 0x772f9c7cf14c0b3bUL, + 0xd2e2fce306bedad5UL, 0x715f42b546f06a97UL, + /* 225 */ 0x434ecdceda5b5f1aUL, 0x0da17115a49741a9UL, + 0x680bd77c73edad2eUL, 
0x487c02354edd9041UL, + /* 226 */ 0xb8efeff3a70ed9c4UL, 0x56a32aa3e857e302UL, + 0xdf3a68bd48a2a5a0UL, 0x07f650b73176c444UL, + /* 227 */ 0xe38b9b1626e0ccb1UL, 0x79e053c18b09fb36UL, + 0x56d90319c9f94964UL, 0x1ca941e7ac9ff5c4UL, + /* 228 */ 0x49c4df29162fa0bbUL, 0x8488cf3282b33305UL, + 0x95dfda14cabb437dUL, 0x3391f78264d5ad86UL, + /* 229 */ 0x729ae06ae2b5095dUL, 0xd58a58d73259a946UL, + 0xe9834262d13921edUL, 0x27fedafaa54bb592UL, + /* 230 */ 0xa99dc5b829ad48bbUL, 0x5f025742499ee260UL, + 0x802c8ecd5d7513fdUL, 0x78ceb3ef3f6dd938UL, + /* 231 */ 0xc342f44f8a135d94UL, 0x7b9edb44828cdda3UL, + 0x9436d11a0537cfe7UL, 0x5064b164ec1ab4c8UL, + /* 232 */ 0x7020eccfd37eb2fcUL, 0x1f31ea3ed90d25fcUL, + 0x1b930d7bdfa1bb34UL, 0x5344467a48113044UL, + /* 233 */ 0x70073170f25e6dfbUL, 0xe385dc1a50114cc8UL, + 0x2348698ac8fc4f00UL, 0x2a77a55284dd40d8UL, + /* 234 */ 0xfe06afe0c98c6ce4UL, 0xc235df96dddfd6e4UL, + 0x1428d01e33bf1ed3UL, 0x785768ec9300bdafUL, + /* 235 */ 0x9702e57a91deb63bUL, 0x61bdb8bfe5ce8b80UL, + 0x645b426f3d1d58acUL, 0x4804a82227a557bcUL, + /* 236 */ 0x8e57048ab44d2601UL, 0x68d6501a4b3a6935UL, + 0xc39c9ec3f9e1c293UL, 0x4172f257d4de63e2UL, + /* 237 */ 0xd368b450330c6401UL, 0x040d3017418f2391UL, + 0x2c34bb6090b7d90dUL, 0x16f649228fdfd51fUL, + /* 238 */ 0xbea6818e2b928ef5UL, 0xe28ccf91cdc11e72UL, + 0x594aaa68e77a36cdUL, 0x313034806c7ffd0fUL, + /* 239 */ 0x8a9d27ac2249bd65UL, 0x19a3b464018e9512UL, + 0xc26ccff352b37ec7UL, 0x056f68341d797b21UL, + /* 240 */ 0x5e79d6757efd2327UL, 0xfabdbcb6553afe15UL, + 0xd3e7222c6eaf5a60UL, 0x7046c76d4dae743bUL, + /* 241 */ 0x660be872b18d4a55UL, 0x19992518574e1496UL, + 0xc103053a302bdcbbUL, 0x3ed8e9800b218e8eUL, + /* 242 */ 0x7b0b9239fa75e03eUL, 0xefe9fb684633c083UL, + 0x98a35fbe391a7793UL, 0x6065510fe2d0fe34UL, + /* 243 */ 0x55cb668548abad0cUL, 0xb4584548da87e527UL, + 0x2c43ecea0107c1ddUL, 0x526028809372de35UL, + /* 244 */ 0x3415c56af9213b1fUL, 0x5bee1a4d017e98dbUL, + 0x13f6b105b5cf709bUL, 0x5ff20e3482b29ab6UL, + /* 245 */ 0x0aa29c75cc2e6c90UL, 0xfc7d73ca3a70e206UL, + 0x899fc38fc4b5c515UL, 0x250386b124ffc207UL, + /* 246 */ 0x54ea28d5ae3d2b56UL, 0x9913149dd6de60ceUL, + 0x16694fc58f06d6c1UL, 0x46b23975eb018fc7UL, + /* 247 */ 0x470a6a0fb4b7b4e2UL, 0x5d92475a8f7253deUL, + 0xabeee5b52fbd3adbUL, 0x7fa20801a0806968UL, + /* 248 */ 0x76f3faf19f7714d2UL, 0xb3e840c12f4660c3UL, + 0x0fb4cd8df212744eUL, 0x4b065a251d3a2dd2UL, + /* 249 */ 0x5cebde383d77cd4aUL, 0x6adf39df882c9cb1UL, + 0xa2dd242eb09af759UL, 0x3147c0e50e5f6422UL, + /* 250 */ 0x164ca5101d1350dbUL, 0xf8d13479c33fc962UL, + 0xe640ce4d13e5da08UL, 0x4bdee0c45061f8baUL, + /* 251 */ 0xd7c46dc1a4edb1c9UL, 0x5514d7b6437fd98aUL, + 0x58942f6bb2a1c00bUL, 0x2dffb2ab1d70710eUL, + /* 252 */ 0xccdfcf2fc18b6d68UL, 0xa8ebcba8b7806167UL, + 0x980697f95e2937e3UL, 0x02fbba1cd0126e8cUL +}; + +/* c is two 512-bit products: c0[0:7]=a0[0:3]*b0[0:3] and c1[8:15]=a1[4:7]*b1[4:7] + * a is two 256-bit integers: a0[0:3] and a1[4:7] + * b is two 256-bit integers: b0[0:3] and b1[4:7] + */ +static void mul2_256x256_integer_adx(u64 *const c, const u64 *const a, + const u64 *const b) +{ + asm volatile( + "xorl %%r14d, %%r14d ;" + "movq (%1), %%rdx; " /* A[0] */ + "mulx (%2), %%r8, %%r15; " /* A[0]*B[0] */ + "xorl %%r10d, %%r10d ;" + "movq %%r8, (%0) ;" + "mulx 8(%2), %%r10, %%rax; " /* A[0]*B[1] */ + "adox %%r10, %%r15 ;" + "mulx 16(%2), %%r8, %%rbx; " /* A[0]*B[2] */ + "adox %%r8, %%rax ;" + "mulx 24(%2), %%r10, %%rcx; " /* A[0]*B[3] */ + "adox %%r10, %%rbx ;" + /******************************************/ + "adox %%r14, %%rcx ;" + + "movq 8(%1), %%rdx; " /* 
A[1] */ + "mulx (%2), %%r8, %%r9; " /* A[1]*B[0] */ + "adox %%r15, %%r8 ;" + "movq %%r8, 8(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[1]*B[1] */ + "adox %%r10, %%r9 ;" + "adcx %%r9, %%rax ;" + "mulx 16(%2), %%r8, %%r13; " /* A[1]*B[2] */ + "adox %%r8, %%r11 ;" + "adcx %%r11, %%rbx ;" + "mulx 24(%2), %%r10, %%r15; " /* A[1]*B[3] */ + "adox %%r10, %%r13 ;" + "adcx %%r13, %%rcx ;" + /******************************************/ + "adox %%r14, %%r15 ;" + "adcx %%r14, %%r15 ;" + + "movq 16(%1), %%rdx; " /* A[2] */ + "xorl %%r10d, %%r10d ;" + "mulx (%2), %%r8, %%r9; " /* A[2]*B[0] */ + "adox %%rax, %%r8 ;" + "movq %%r8, 16(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[2]*B[1] */ + "adox %%r10, %%r9 ;" + "adcx %%r9, %%rbx ;" + "mulx 16(%2), %%r8, %%r13; " /* A[2]*B[2] */ + "adox %%r8, %%r11 ;" + "adcx %%r11, %%rcx ;" + "mulx 24(%2), %%r10, %%rax; " /* A[2]*B[3] */ + "adox %%r10, %%r13 ;" + "adcx %%r13, %%r15 ;" + /******************************************/ + "adox %%r14, %%rax ;" + "adcx %%r14, %%rax ;" + + "movq 24(%1), %%rdx; " /* A[3] */ + "xorl %%r10d, %%r10d ;" + "mulx (%2), %%r8, %%r9; " /* A[3]*B[0] */ + "adox %%rbx, %%r8 ;" + "movq %%r8, 24(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[3]*B[1] */ + "adox %%r10, %%r9 ;" + "adcx %%r9, %%rcx ;" + "movq %%rcx, 32(%0) ;" + "mulx 16(%2), %%r8, %%r13; " /* A[3]*B[2] */ + "adox %%r8, %%r11 ;" + "adcx %%r11, %%r15 ;" + "movq %%r15, 40(%0) ;" + "mulx 24(%2), %%r10, %%rbx; " /* A[3]*B[3] */ + "adox %%r10, %%r13 ;" + "adcx %%r13, %%rax ;" + "movq %%rax, 48(%0) ;" + /******************************************/ + "adox %%r14, %%rbx ;" + "adcx %%r14, %%rbx ;" + "movq %%rbx, 56(%0) ;" + + "movq 32(%1), %%rdx; " /* C[0] */ + "mulx 32(%2), %%r8, %%r15; " /* C[0]*D[0] */ + "xorl %%r10d, %%r10d ;" + "movq %%r8, 64(%0);" + "mulx 40(%2), %%r10, %%rax; " /* C[0]*D[1] */ + "adox %%r10, %%r15 ;" + "mulx 48(%2), %%r8, %%rbx; " /* C[0]*D[2] */ + "adox %%r8, %%rax ;" + "mulx 56(%2), %%r10, %%rcx; " /* C[0]*D[3] */ + "adox %%r10, %%rbx ;" + /******************************************/ + "adox %%r14, %%rcx ;" + + "movq 40(%1), %%rdx; " /* C[1] */ + "xorl %%r10d, %%r10d ;" + "mulx 32(%2), %%r8, %%r9; " /* C[1]*D[0] */ + "adox %%r15, %%r8 ;" + "movq %%r8, 72(%0);" + "mulx 40(%2), %%r10, %%r11; " /* C[1]*D[1] */ + "adox %%r10, %%r9 ;" + "adcx %%r9, %%rax ;" + "mulx 48(%2), %%r8, %%r13; " /* C[1]*D[2] */ + "adox %%r8, %%r11 ;" + "adcx %%r11, %%rbx ;" + "mulx 56(%2), %%r10, %%r15; " /* C[1]*D[3] */ + "adox %%r10, %%r13 ;" + "adcx %%r13, %%rcx ;" + /******************************************/ + "adox %%r14, %%r15 ;" + "adcx %%r14, %%r15 ;" + + "movq 48(%1), %%rdx; " /* C[2] */ + "xorl %%r10d, %%r10d ;" + "mulx 32(%2), %%r8, %%r9; " /* C[2]*D[0] */ + "adox %%rax, %%r8 ;" + "movq %%r8, 80(%0);" + "mulx 40(%2), %%r10, %%r11; " /* C[2]*D[1] */ + "adox %%r10, %%r9 ;" + "adcx %%r9, %%rbx ;" + "mulx 48(%2), %%r8, %%r13; " /* C[2]*D[2] */ + "adox %%r8, %%r11 ;" + "adcx %%r11, %%rcx ;" + "mulx 56(%2), %%r10, %%rax; " /* C[2]*D[3] */ + "adox %%r10, %%r13 ;" + "adcx %%r13, %%r15 ;" + /******************************************/ + "adox %%r14, %%rax ;" + "adcx %%r14, %%rax ;" + + "movq 56(%1), %%rdx; " /* C[3] */ + "xorl %%r10d, %%r10d ;" + "mulx 32(%2), %%r8, %%r9; " /* C[3]*D[0] */ + "adox %%rbx, %%r8 ;" + "movq %%r8, 88(%0);" + "mulx 40(%2), %%r10, %%r11; " /* C[3]*D[1] */ + "adox %%r10, %%r9 ;" + "adcx %%r9, %%rcx ;" + "movq %%rcx, 96(%0) ;" + "mulx 48(%2), %%r8, %%r13; " /* C[3]*D[2] */ + "adox %%r8, %%r11 ;" + "adcx %%r11, %%r15 ;" + "movq %%r15, 104(%0) ;" + "mulx 56(%2), %%r10, 
%%rbx; " /* C[3]*D[3] */ + "adox %%r10, %%r13 ;" + "adcx %%r13, %%rax ;" + "movq %%rax, 112(%0) ;" + /******************************************/ + "adox %%r14, %%rbx ;" + "adcx %%r14, %%rbx ;" + "movq %%rbx, 120(%0) ;" + : + : "r"(c), "r"(a), "r"(b) + : "memory", "cc", "%rax", "%rbx", "%rcx", "%rdx", "%r8", "%r9", + "%r10", "%r11", "%r13", "%r14", "%r15"); +} + +static void mul2_256x256_integer_bmi2(u64 *const c, const u64 *const a, + const u64 *const b) +{ + asm volatile( + "movq (%1), %%rdx; " /* A[0] */ + "mulx (%2), %%r8, %%r15; " /* A[0]*B[0] */ + "movq %%r8, (%0) ;" + "mulx 8(%2), %%r10, %%rax; " /* A[0]*B[1] */ + "addq %%r10, %%r15 ;" + "mulx 16(%2), %%r8, %%rbx; " /* A[0]*B[2] */ + "adcq %%r8, %%rax ;" + "mulx 24(%2), %%r10, %%rcx; " /* A[0]*B[3] */ + "adcq %%r10, %%rbx ;" + /******************************************/ + "adcq $0, %%rcx ;" + + "movq 8(%1), %%rdx; " /* A[1] */ + "mulx (%2), %%r8, %%r9; " /* A[1]*B[0] */ + "addq %%r15, %%r8 ;" + "movq %%r8, 8(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[1]*B[1] */ + "adcq %%r10, %%r9 ;" + "mulx 16(%2), %%r8, %%r13; " /* A[1]*B[2] */ + "adcq %%r8, %%r11 ;" + "mulx 24(%2), %%r10, %%r15; " /* A[1]*B[3] */ + "adcq %%r10, %%r13 ;" + /******************************************/ + "adcq $0, %%r15 ;" + + "addq %%r9, %%rax ;" + "adcq %%r11, %%rbx ;" + "adcq %%r13, %%rcx ;" + "adcq $0, %%r15 ;" + + "movq 16(%1), %%rdx; " /* A[2] */ + "mulx (%2), %%r8, %%r9; " /* A[2]*B[0] */ + "addq %%rax, %%r8 ;" + "movq %%r8, 16(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[2]*B[1] */ + "adcq %%r10, %%r9 ;" + "mulx 16(%2), %%r8, %%r13; " /* A[2]*B[2] */ + "adcq %%r8, %%r11 ;" + "mulx 24(%2), %%r10, %%rax; " /* A[2]*B[3] */ + "adcq %%r10, %%r13 ;" + /******************************************/ + "adcq $0, %%rax ;" + + "addq %%r9, %%rbx ;" + "adcq %%r11, %%rcx ;" + "adcq %%r13, %%r15 ;" + "adcq $0, %%rax ;" + + "movq 24(%1), %%rdx; " /* A[3] */ + "mulx (%2), %%r8, %%r9; " /* A[3]*B[0] */ + "addq %%rbx, %%r8 ;" + "movq %%r8, 24(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[3]*B[1] */ + "adcq %%r10, %%r9 ;" + "mulx 16(%2), %%r8, %%r13; " /* A[3]*B[2] */ + "adcq %%r8, %%r11 ;" + "mulx 24(%2), %%r10, %%rbx; " /* A[3]*B[3] */ + "adcq %%r10, %%r13 ;" + /******************************************/ + "adcq $0, %%rbx ;" + + "addq %%r9, %%rcx ;" + "movq %%rcx, 32(%0) ;" + "adcq %%r11, %%r15 ;" + "movq %%r15, 40(%0) ;" + "adcq %%r13, %%rax ;" + "movq %%rax, 48(%0) ;" + "adcq $0, %%rbx ;" + "movq %%rbx, 56(%0) ;" + + "movq 32(%1), %%rdx; " /* C[0] */ + "mulx 32(%2), %%r8, %%r15; " /* C[0]*D[0] */ + "movq %%r8, 64(%0) ;" + "mulx 40(%2), %%r10, %%rax; " /* C[0]*D[1] */ + "addq %%r10, %%r15 ;" + "mulx 48(%2), %%r8, %%rbx; " /* C[0]*D[2] */ + "adcq %%r8, %%rax ;" + "mulx 56(%2), %%r10, %%rcx; " /* C[0]*D[3] */ + "adcq %%r10, %%rbx ;" + /******************************************/ + "adcq $0, %%rcx ;" + + "movq 40(%1), %%rdx; " /* C[1] */ + "mulx 32(%2), %%r8, %%r9; " /* C[1]*D[0] */ + "addq %%r15, %%r8 ;" + "movq %%r8, 72(%0) ;" + "mulx 40(%2), %%r10, %%r11; " /* C[1]*D[1] */ + "adcq %%r10, %%r9 ;" + "mulx 48(%2), %%r8, %%r13; " /* C[1]*D[2] */ + "adcq %%r8, %%r11 ;" + "mulx 56(%2), %%r10, %%r15; " /* C[1]*D[3] */ + "adcq %%r10, %%r13 ;" + /******************************************/ + "adcq $0, %%r15 ;" + + "addq %%r9, %%rax ;" + "adcq %%r11, %%rbx ;" + "adcq %%r13, %%rcx ;" + "adcq $0, %%r15 ;" + + "movq 48(%1), %%rdx; " /* C[2] */ + "mulx 32(%2), %%r8, %%r9; " /* C[2]*D[0] */ + "addq %%rax, %%r8 ;" + "movq %%r8, 80(%0) ;" + "mulx 40(%2), %%r10, %%r11; " /* C[2]*D[1] */ + "adcq 
%%r10, %%r9 ;" + "mulx 48(%2), %%r8, %%r13; " /* C[2]*D[2] */ + "adcq %%r8, %%r11 ;" + "mulx 56(%2), %%r10, %%rax; " /* C[2]*D[3] */ + "adcq %%r10, %%r13 ;" + /******************************************/ + "adcq $0, %%rax ;" + + "addq %%r9, %%rbx ;" + "adcq %%r11, %%rcx ;" + "adcq %%r13, %%r15 ;" + "adcq $0, %%rax ;" + + "movq 56(%1), %%rdx; " /* C[3] */ + "mulx 32(%2), %%r8, %%r9; " /* C[3]*D[0] */ + "addq %%rbx, %%r8 ;" + "movq %%r8, 88(%0) ;" + "mulx 40(%2), %%r10, %%r11; " /* C[3]*D[1] */ + "adcq %%r10, %%r9 ;" + "mulx 48(%2), %%r8, %%r13; " /* C[3]*D[2] */ + "adcq %%r8, %%r11 ;" + "mulx 56(%2), %%r10, %%rbx; " /* C[3]*D[3] */ + "adcq %%r10, %%r13 ;" + /******************************************/ + "adcq $0, %%rbx ;" + + "addq %%r9, %%rcx ;" + "movq %%rcx, 96(%0) ;" + "adcq %%r11, %%r15 ;" + "movq %%r15, 104(%0) ;" + "adcq %%r13, %%rax ;" + "movq %%rax, 112(%0) ;" + "adcq $0, %%rbx ;" + "movq %%rbx, 120(%0) ;" + : + : "r"(c), "r"(a), "r"(b) + : "memory", "cc", "%rax", "%rbx", "%rcx", "%rdx", "%r8", "%r9", + "%r10", "%r11", "%r13", "%r15"); +} + +static void sqr2_256x256_integer_adx(u64 *const c, const u64 *const a) +{ + asm volatile( + "movq (%1), %%rdx ;" /* A[0] */ + "mulx 8(%1), %%r8, %%r14 ;" /* A[1]*A[0] */ + "xorl %%r15d, %%r15d;" + "mulx 16(%1), %%r9, %%r10 ;" /* A[2]*A[0] */ + "adcx %%r14, %%r9 ;" + "mulx 24(%1), %%rax, %%rcx ;" /* A[3]*A[0] */ + "adcx %%rax, %%r10 ;" + "movq 24(%1), %%rdx ;" /* A[3] */ + "mulx 8(%1), %%r11, %%rbx ;" /* A[1]*A[3] */ + "adcx %%rcx, %%r11 ;" + "mulx 16(%1), %%rax, %%r13 ;" /* A[2]*A[3] */ + "adcx %%rax, %%rbx ;" + "movq 8(%1), %%rdx ;" /* A[1] */ + "adcx %%r15, %%r13 ;" + "mulx 16(%1), %%rax, %%rcx ;" /* A[2]*A[1] */ + "movq $0, %%r14 ;" + /******************************************/ + "adcx %%r15, %%r14 ;" + + "xorl %%r15d, %%r15d;" + "adox %%rax, %%r10 ;" + "adcx %%r8, %%r8 ;" + "adox %%rcx, %%r11 ;" + "adcx %%r9, %%r9 ;" + "adox %%r15, %%rbx ;" + "adcx %%r10, %%r10 ;" + "adox %%r15, %%r13 ;" + "adcx %%r11, %%r11 ;" + "adox %%r15, %%r14 ;" + "adcx %%rbx, %%rbx ;" + "adcx %%r13, %%r13 ;" + "adcx %%r14, %%r14 ;" + + "movq (%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* A[0]^2 */ + /*******************/ + "movq %%rax, 0(%0) ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, 8(%0) ;" + "movq 8(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* A[1]^2 */ + "adcq %%rax, %%r9 ;" + "movq %%r9, 16(%0) ;" + "adcq %%rcx, %%r10 ;" + "movq %%r10, 24(%0) ;" + "movq 16(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* A[2]^2 */ + "adcq %%rax, %%r11 ;" + "movq %%r11, 32(%0) ;" + "adcq %%rcx, %%rbx ;" + "movq %%rbx, 40(%0) ;" + "movq 24(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* A[3]^2 */ + "adcq %%rax, %%r13 ;" + "movq %%r13, 48(%0) ;" + "adcq %%rcx, %%r14 ;" + "movq %%r14, 56(%0) ;" + + + "movq 32(%1), %%rdx ;" /* B[0] */ + "mulx 40(%1), %%r8, %%r14 ;" /* B[1]*B[0] */ + "xorl %%r15d, %%r15d;" + "mulx 48(%1), %%r9, %%r10 ;" /* B[2]*B[0] */ + "adcx %%r14, %%r9 ;" + "mulx 56(%1), %%rax, %%rcx ;" /* B[3]*B[0] */ + "adcx %%rax, %%r10 ;" + "movq 56(%1), %%rdx ;" /* B[3] */ + "mulx 40(%1), %%r11, %%rbx ;" /* B[1]*B[3] */ + "adcx %%rcx, %%r11 ;" + "mulx 48(%1), %%rax, %%r13 ;" /* B[2]*B[3] */ + "adcx %%rax, %%rbx ;" + "movq 40(%1), %%rdx ;" /* B[1] */ + "adcx %%r15, %%r13 ;" + "mulx 48(%1), %%rax, %%rcx ;" /* B[2]*B[1] */ + "movq $0, %%r14 ;" + /******************************************/ + "adcx %%r15, %%r14 ;" + + "xorl %%r15d, %%r15d;" + "adox %%rax, %%r10 ;" + "adcx %%r8, %%r8 ;" + "adox %%rcx, %%r11 ;" + "adcx %%r9, %%r9 ;" + "adox %%r15, %%rbx ;" + "adcx %%r10, %%r10 ;" 
+ "adox %%r15, %%r13 ;" + "adcx %%r11, %%r11 ;" + "adox %%r15, %%r14 ;" + "adcx %%rbx, %%rbx ;" + "adcx %%r13, %%r13 ;" + "adcx %%r14, %%r14 ;" + + "movq 32(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* B[0]^2 */ + /*******************/ + "movq %%rax, 64(%0) ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, 72(%0) ;" + "movq 40(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* B[1]^2 */ + "adcq %%rax, %%r9 ;" + "movq %%r9, 80(%0) ;" + "adcq %%rcx, %%r10 ;" + "movq %%r10, 88(%0) ;" + "movq 48(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* B[2]^2 */ + "adcq %%rax, %%r11 ;" + "movq %%r11, 96(%0) ;" + "adcq %%rcx, %%rbx ;" + "movq %%rbx, 104(%0) ;" + "movq 56(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* B[3]^2 */ + "adcq %%rax, %%r13 ;" + "movq %%r13, 112(%0) ;" + "adcq %%rcx, %%r14 ;" + "movq %%r14, 120(%0) ;" + : + : "r"(c), "r"(a) + : "memory", "cc", "%rax", "%rbx", "%rcx", "%rdx", "%r8", "%r9", + "%r10", "%r11", "%r13", "%r14", "%r15"); +} + +static void sqr2_256x256_integer_bmi2(u64 *const c, const u64 *const a) +{ + asm volatile( + "movq 8(%1), %%rdx ;" /* A[1] */ + "mulx (%1), %%r8, %%r9 ;" /* A[0]*A[1] */ + "mulx 16(%1), %%r10, %%r11 ;" /* A[2]*A[1] */ + "mulx 24(%1), %%rcx, %%r14 ;" /* A[3]*A[1] */ + + "movq 16(%1), %%rdx ;" /* A[2] */ + "mulx 24(%1), %%r15, %%r13 ;" /* A[3]*A[2] */ + "mulx (%1), %%rax, %%rdx ;" /* A[0]*A[2] */ + + "addq %%rax, %%r9 ;" + "adcq %%rdx, %%r10 ;" + "adcq %%rcx, %%r11 ;" + "adcq %%r14, %%r15 ;" + "adcq $0, %%r13 ;" + "movq $0, %%r14 ;" + "adcq $0, %%r14 ;" + + "movq (%1), %%rdx ;" /* A[0] */ + "mulx 24(%1), %%rax, %%rcx ;" /* A[0]*A[3] */ + + "addq %%rax, %%r10 ;" + "adcq %%rcx, %%r11 ;" + "adcq $0, %%r15 ;" + "adcq $0, %%r13 ;" + "adcq $0, %%r14 ;" + + "shldq $1, %%r13, %%r14 ;" + "shldq $1, %%r15, %%r13 ;" + "shldq $1, %%r11, %%r15 ;" + "shldq $1, %%r10, %%r11 ;" + "shldq $1, %%r9, %%r10 ;" + "shldq $1, %%r8, %%r9 ;" + "shlq $1, %%r8 ;" + + /*******************/ + "mulx %%rdx, %%rax, %%rcx ; " /* A[0]^2 */ + /*******************/ + "movq %%rax, 0(%0) ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, 8(%0) ;" + "movq 8(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ; " /* A[1]^2 */ + "adcq %%rax, %%r9 ;" + "movq %%r9, 16(%0) ;" + "adcq %%rcx, %%r10 ;" + "movq %%r10, 24(%0) ;" + "movq 16(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ; " /* A[2]^2 */ + "adcq %%rax, %%r11 ;" + "movq %%r11, 32(%0) ;" + "adcq %%rcx, %%r15 ;" + "movq %%r15, 40(%0) ;" + "movq 24(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ; " /* A[3]^2 */ + "adcq %%rax, %%r13 ;" + "movq %%r13, 48(%0) ;" + "adcq %%rcx, %%r14 ;" + "movq %%r14, 56(%0) ;" + + "movq 40(%1), %%rdx ;" /* B[1] */ + "mulx 32(%1), %%r8, %%r9 ;" /* B[0]*B[1] */ + "mulx 48(%1), %%r10, %%r11 ;" /* B[2]*B[1] */ + "mulx 56(%1), %%rcx, %%r14 ;" /* B[3]*B[1] */ + + "movq 48(%1), %%rdx ;" /* B[2] */ + "mulx 56(%1), %%r15, %%r13 ;" /* B[3]*B[2] */ + "mulx 32(%1), %%rax, %%rdx ;" /* B[0]*B[2] */ + + "addq %%rax, %%r9 ;" + "adcq %%rdx, %%r10 ;" + "adcq %%rcx, %%r11 ;" + "adcq %%r14, %%r15 ;" + "adcq $0, %%r13 ;" + "movq $0, %%r14 ;" + "adcq $0, %%r14 ;" + + "movq 32(%1), %%rdx ;" /* B[0] */ + "mulx 56(%1), %%rax, %%rcx ;" /* B[0]*B[3] */ + + "addq %%rax, %%r10 ;" + "adcq %%rcx, %%r11 ;" + "adcq $0, %%r15 ;" + "adcq $0, %%r13 ;" + "adcq $0, %%r14 ;" + + "shldq $1, %%r13, %%r14 ;" + "shldq $1, %%r15, %%r13 ;" + "shldq $1, %%r11, %%r15 ;" + "shldq $1, %%r10, %%r11 ;" + "shldq $1, %%r9, %%r10 ;" + "shldq $1, %%r8, %%r9 ;" + "shlq $1, %%r8 ;" + + /*******************/ + "mulx %%rdx, %%rax, %%rcx ; " /* B[0]^2 */ + /*******************/ + "movq %%rax, 64(%0) ;" + 
"addq %%rcx, %%r8 ;" + "movq %%r8, 72(%0) ;" + "movq 40(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ; " /* B[1]^2 */ + "adcq %%rax, %%r9 ;" + "movq %%r9, 80(%0) ;" + "adcq %%rcx, %%r10 ;" + "movq %%r10, 88(%0) ;" + "movq 48(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ; " /* B[2]^2 */ + "adcq %%rax, %%r11 ;" + "movq %%r11, 96(%0) ;" + "adcq %%rcx, %%r15 ;" + "movq %%r15, 104(%0) ;" + "movq 56(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ; " /* B[3]^2 */ + "adcq %%rax, %%r13 ;" + "movq %%r13, 112(%0) ;" + "adcq %%rcx, %%r14 ;" + "movq %%r14, 120(%0) ;" + : + : "r"(c), "r"(a) + : "memory", "cc", "%rax", "%rcx", "%rdx", "%r8", "%r9", "%r10", + "%r11", "%r13", "%r14", "%r15"); +} + +static void red_eltfp25519_2w_adx(u64 *const c, const u64 *const a) +{ + asm volatile( + "movl $38, %%edx; " /* 2*c = 38 = 2^256 */ + "mulx 32(%1), %%r8, %%r10; " /* c*C[4] */ + "xorl %%ebx, %%ebx ;" + "adox (%1), %%r8 ;" + "mulx 40(%1), %%r9, %%r11; " /* c*C[5] */ + "adcx %%r10, %%r9 ;" + "adox 8(%1), %%r9 ;" + "mulx 48(%1), %%r10, %%rax; " /* c*C[6] */ + "adcx %%r11, %%r10 ;" + "adox 16(%1), %%r10 ;" + "mulx 56(%1), %%r11, %%rcx; " /* c*C[7] */ + "adcx %%rax, %%r11 ;" + "adox 24(%1), %%r11 ;" + /***************************************/ + "adcx %%rbx, %%rcx ;" + "adox %%rbx, %%rcx ;" + "imul %%rdx, %%rcx ;" /* c*C[4], cf=0, of=0 */ + "adcx %%rcx, %%r8 ;" + "adcx %%rbx, %%r9 ;" + "movq %%r9, 8(%0) ;" + "adcx %%rbx, %%r10 ;" + "movq %%r10, 16(%0) ;" + "adcx %%rbx, %%r11 ;" + "movq %%r11, 24(%0) ;" + "mov $0, %%ecx ;" + "cmovc %%edx, %%ecx ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, (%0) ;" + + "mulx 96(%1), %%r8, %%r10; " /* c*C[4] */ + "xorl %%ebx, %%ebx ;" + "adox 64(%1), %%r8 ;" + "mulx 104(%1), %%r9, %%r11; " /* c*C[5] */ + "adcx %%r10, %%r9 ;" + "adox 72(%1), %%r9 ;" + "mulx 112(%1), %%r10, %%rax; " /* c*C[6] */ + "adcx %%r11, %%r10 ;" + "adox 80(%1), %%r10 ;" + "mulx 120(%1), %%r11, %%rcx; " /* c*C[7] */ + "adcx %%rax, %%r11 ;" + "adox 88(%1), %%r11 ;" + /****************************************/ + "adcx %%rbx, %%rcx ;" + "adox %%rbx, %%rcx ;" + "imul %%rdx, %%rcx ;" /* c*C[4], cf=0, of=0 */ + "adcx %%rcx, %%r8 ;" + "adcx %%rbx, %%r9 ;" + "movq %%r9, 40(%0) ;" + "adcx %%rbx, %%r10 ;" + "movq %%r10, 48(%0) ;" + "adcx %%rbx, %%r11 ;" + "movq %%r11, 56(%0) ;" + "mov $0, %%ecx ;" + "cmovc %%edx, %%ecx ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, 32(%0) ;" + : + : "r"(c), "r"(a) + : "memory", "cc", "%rax", "%rbx", "%rcx", "%rdx", "%r8", "%r9", + "%r10", "%r11"); +} + +static void red_eltfp25519_2w_bmi2(u64 *const c, const u64 *const a) +{ + asm volatile( + "movl $38, %%edx ; " /* 2*c = 38 = 2^256 */ + "mulx 32(%1), %%r8, %%r10 ;" /* c*C[4] */ + "mulx 40(%1), %%r9, %%r11 ;" /* c*C[5] */ + "addq %%r10, %%r9 ;" + "mulx 48(%1), %%r10, %%rax ;" /* c*C[6] */ + "adcq %%r11, %%r10 ;" + "mulx 56(%1), %%r11, %%rcx ;" /* c*C[7] */ + "adcq %%rax, %%r11 ;" + /***************************************/ + "adcq $0, %%rcx ;" + "addq (%1), %%r8 ;" + "adcq 8(%1), %%r9 ;" + "adcq 16(%1), %%r10 ;" + "adcq 24(%1), %%r11 ;" + "adcq $0, %%rcx ;" + "imul %%rdx, %%rcx ;" /* c*C[4], cf=0 */ + "addq %%rcx, %%r8 ;" + "adcq $0, %%r9 ;" + "movq %%r9, 8(%0) ;" + "adcq $0, %%r10 ;" + "movq %%r10, 16(%0) ;" + "adcq $0, %%r11 ;" + "movq %%r11, 24(%0) ;" + "mov $0, %%ecx ;" + "cmovc %%edx, %%ecx ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, (%0) ;" + + "mulx 96(%1), %%r8, %%r10 ;" /* c*C[4] */ + "mulx 104(%1), %%r9, %%r11 ;" /* c*C[5] */ + "addq %%r10, %%r9 ;" + "mulx 112(%1), %%r10, %%rax ;" /* c*C[6] */ + "adcq %%r11, %%r10 ;" + "mulx 120(%1), %%r11, %%rcx ;" /* c*C[7] */ 
+ "adcq %%rax, %%r11 ;" + /****************************************/ + "adcq $0, %%rcx ;" + "addq 64(%1), %%r8 ;" + "adcq 72(%1), %%r9 ;" + "adcq 80(%1), %%r10 ;" + "adcq 88(%1), %%r11 ;" + "adcq $0, %%rcx ;" + "imul %%rdx, %%rcx ;" /* c*C[4], cf=0 */ + "addq %%rcx, %%r8 ;" + "adcq $0, %%r9 ;" + "movq %%r9, 40(%0) ;" + "adcq $0, %%r10 ;" + "movq %%r10, 48(%0) ;" + "adcq $0, %%r11 ;" + "movq %%r11, 56(%0) ;" + "mov $0, %%ecx ;" + "cmovc %%edx, %%ecx ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, 32(%0) ;" + : + : "r"(c), "r"(a) + : "memory", "cc", "%rax", "%rcx", "%rdx", "%r8", "%r9", "%r10", + "%r11"); +} + +static void mul_256x256_integer_adx(u64 *const c, const u64 *const a, + const u64 *const b) +{ + asm volatile( + "movq (%1), %%rdx; " /* A[0] */ + "mulx (%2), %%r8, %%r9; " /* A[0]*B[0] */ + "xorl %%r10d, %%r10d ;" + "movq %%r8, (%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[0]*B[1] */ + "adox %%r9, %%r10 ;" + "movq %%r10, 8(%0) ;" + "mulx 16(%2), %%r15, %%r13; " /* A[0]*B[2] */ + "adox %%r11, %%r15 ;" + "mulx 24(%2), %%r14, %%rdx; " /* A[0]*B[3] */ + "adox %%r13, %%r14 ;" + "movq $0, %%rax ;" + /******************************************/ + "adox %%rdx, %%rax ;" + + "movq 8(%1), %%rdx; " /* A[1] */ + "mulx (%2), %%r8, %%r9; " /* A[1]*B[0] */ + "xorl %%r10d, %%r10d ;" + "adcx 8(%0), %%r8 ;" + "movq %%r8, 8(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[1]*B[1] */ + "adox %%r9, %%r10 ;" + "adcx %%r15, %%r10 ;" + "movq %%r10, 16(%0) ;" + "mulx 16(%2), %%r15, %%r13; " /* A[1]*B[2] */ + "adox %%r11, %%r15 ;" + "adcx %%r14, %%r15 ;" + "movq $0, %%r8 ;" + "mulx 24(%2), %%r14, %%rdx; " /* A[1]*B[3] */ + "adox %%r13, %%r14 ;" + "adcx %%rax, %%r14 ;" + "movq $0, %%rax ;" + /******************************************/ + "adox %%rdx, %%rax ;" + "adcx %%r8, %%rax ;" + + "movq 16(%1), %%rdx; " /* A[2] */ + "mulx (%2), %%r8, %%r9; " /* A[2]*B[0] */ + "xorl %%r10d, %%r10d ;" + "adcx 16(%0), %%r8 ;" + "movq %%r8, 16(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[2]*B[1] */ + "adox %%r9, %%r10 ;" + "adcx %%r15, %%r10 ;" + "movq %%r10, 24(%0) ;" + "mulx 16(%2), %%r15, %%r13; " /* A[2]*B[2] */ + "adox %%r11, %%r15 ;" + "adcx %%r14, %%r15 ;" + "movq $0, %%r8 ;" + "mulx 24(%2), %%r14, %%rdx; " /* A[2]*B[3] */ + "adox %%r13, %%r14 ;" + "adcx %%rax, %%r14 ;" + "movq $0, %%rax ;" + /******************************************/ + "adox %%rdx, %%rax ;" + "adcx %%r8, %%rax ;" + + "movq 24(%1), %%rdx; " /* A[3] */ + "mulx (%2), %%r8, %%r9; " /* A[3]*B[0] */ + "xorl %%r10d, %%r10d ;" + "adcx 24(%0), %%r8 ;" + "movq %%r8, 24(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[3]*B[1] */ + "adox %%r9, %%r10 ;" + "adcx %%r15, %%r10 ;" + "movq %%r10, 32(%0) ;" + "mulx 16(%2), %%r15, %%r13; " /* A[3]*B[2] */ + "adox %%r11, %%r15 ;" + "adcx %%r14, %%r15 ;" + "movq %%r15, 40(%0) ;" + "movq $0, %%r8 ;" + "mulx 24(%2), %%r14, %%rdx; " /* A[3]*B[3] */ + "adox %%r13, %%r14 ;" + "adcx %%rax, %%r14 ;" + "movq %%r14, 48(%0) ;" + "movq $0, %%rax ;" + /******************************************/ + "adox %%rdx, %%rax ;" + "adcx %%r8, %%rax ;" + "movq %%rax, 56(%0) ;" + : + : "r"(c), "r"(a), "r"(b) + : "memory", "cc", "%rax", "%rdx", "%r8", "%r9", "%r10", "%r11", + "%r13", "%r14", "%r15"); +} + +static void mul_256x256_integer_bmi2(u64 *const c, const u64 *const a, + const u64 *const b) +{ + asm volatile( + "movq (%1), %%rdx; " /* A[0] */ + "mulx (%2), %%r8, %%r15; " /* A[0]*B[0] */ + "movq %%r8, (%0) ;" + "mulx 8(%2), %%r10, %%rax; " /* A[0]*B[1] */ + "addq %%r10, %%r15 ;" + "mulx 16(%2), %%r8, %%rbx; " /* A[0]*B[2] */ + "adcq %%r8, %%rax ;" + "mulx 24(%2), 
%%r10, %%rcx; " /* A[0]*B[3] */ + "adcq %%r10, %%rbx ;" + /******************************************/ + "adcq $0, %%rcx ;" + + "movq 8(%1), %%rdx; " /* A[1] */ + "mulx (%2), %%r8, %%r9; " /* A[1]*B[0] */ + "addq %%r15, %%r8 ;" + "movq %%r8, 8(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[1]*B[1] */ + "adcq %%r10, %%r9 ;" + "mulx 16(%2), %%r8, %%r13; " /* A[1]*B[2] */ + "adcq %%r8, %%r11 ;" + "mulx 24(%2), %%r10, %%r15; " /* A[1]*B[3] */ + "adcq %%r10, %%r13 ;" + /******************************************/ + "adcq $0, %%r15 ;" + + "addq %%r9, %%rax ;" + "adcq %%r11, %%rbx ;" + "adcq %%r13, %%rcx ;" + "adcq $0, %%r15 ;" + + "movq 16(%1), %%rdx; " /* A[2] */ + "mulx (%2), %%r8, %%r9; " /* A[2]*B[0] */ + "addq %%rax, %%r8 ;" + "movq %%r8, 16(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[2]*B[1] */ + "adcq %%r10, %%r9 ;" + "mulx 16(%2), %%r8, %%r13; " /* A[2]*B[2] */ + "adcq %%r8, %%r11 ;" + "mulx 24(%2), %%r10, %%rax; " /* A[2]*B[3] */ + "adcq %%r10, %%r13 ;" + /******************************************/ + "adcq $0, %%rax ;" + + "addq %%r9, %%rbx ;" + "adcq %%r11, %%rcx ;" + "adcq %%r13, %%r15 ;" + "adcq $0, %%rax ;" + + "movq 24(%1), %%rdx; " /* A[3] */ + "mulx (%2), %%r8, %%r9; " /* A[3]*B[0] */ + "addq %%rbx, %%r8 ;" + "movq %%r8, 24(%0) ;" + "mulx 8(%2), %%r10, %%r11; " /* A[3]*B[1] */ + "adcq %%r10, %%r9 ;" + "mulx 16(%2), %%r8, %%r13; " /* A[3]*B[2] */ + "adcq %%r8, %%r11 ;" + "mulx 24(%2), %%r10, %%rbx; " /* A[3]*B[3] */ + "adcq %%r10, %%r13 ;" + /******************************************/ + "adcq $0, %%rbx ;" + + "addq %%r9, %%rcx ;" + "movq %%rcx, 32(%0) ;" + "adcq %%r11, %%r15 ;" + "movq %%r15, 40(%0) ;" + "adcq %%r13, %%rax ;" + "movq %%rax, 48(%0) ;" + "adcq $0, %%rbx ;" + "movq %%rbx, 56(%0) ;" + : + : "r"(c), "r"(a), "r"(b) + : "memory", "cc", "%rax", "%rbx", "%rcx", "%rdx", "%r8", "%r9", + "%r10", "%r11", "%r13", "%r15"); +} + +static void sqr_256x256_integer_adx(u64 *const c, const u64 *const a) +{ + asm volatile( + "movq (%1), %%rdx ;" /* A[0] */ + "mulx 8(%1), %%r8, %%r14 ;" /* A[1]*A[0] */ + "xorl %%r15d, %%r15d;" + "mulx 16(%1), %%r9, %%r10 ;" /* A[2]*A[0] */ + "adcx %%r14, %%r9 ;" + "mulx 24(%1), %%rax, %%rcx ;" /* A[3]*A[0] */ + "adcx %%rax, %%r10 ;" + "movq 24(%1), %%rdx ;" /* A[3] */ + "mulx 8(%1), %%r11, %%rbx ;" /* A[1]*A[3] */ + "adcx %%rcx, %%r11 ;" + "mulx 16(%1), %%rax, %%r13 ;" /* A[2]*A[3] */ + "adcx %%rax, %%rbx ;" + "movq 8(%1), %%rdx ;" /* A[1] */ + "adcx %%r15, %%r13 ;" + "mulx 16(%1), %%rax, %%rcx ;" /* A[2]*A[1] */ + "movq $0, %%r14 ;" + /******************************************/ + "adcx %%r15, %%r14 ;" + + "xorl %%r15d, %%r15d;" + "adox %%rax, %%r10 ;" + "adcx %%r8, %%r8 ;" + "adox %%rcx, %%r11 ;" + "adcx %%r9, %%r9 ;" + "adox %%r15, %%rbx ;" + "adcx %%r10, %%r10 ;" + "adox %%r15, %%r13 ;" + "adcx %%r11, %%r11 ;" + "adox %%r15, %%r14 ;" + "adcx %%rbx, %%rbx ;" + "adcx %%r13, %%r13 ;" + "adcx %%r14, %%r14 ;" + + "movq (%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* A[0]^2 */ + /*******************/ + "movq %%rax, 0(%0) ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, 8(%0) ;" + "movq 8(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* A[1]^2 */ + "adcq %%rax, %%r9 ;" + "movq %%r9, 16(%0) ;" + "adcq %%rcx, %%r10 ;" + "movq %%r10, 24(%0) ;" + "movq 16(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* A[2]^2 */ + "adcq %%rax, %%r11 ;" + "movq %%r11, 32(%0) ;" + "adcq %%rcx, %%rbx ;" + "movq %%rbx, 40(%0) ;" + "movq 24(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* A[3]^2 */ + "adcq %%rax, %%r13 ;" + "movq %%r13, 48(%0) ;" + "adcq %%rcx, %%r14 ;" + "movq %%r14, 56(%0) ;" 
+ : + : "r"(c), "r"(a) + : "memory", "cc", "%rax", "%rbx", "%rcx", "%rdx", "%r8", "%r9", + "%r10", "%r11", "%r13", "%r14", "%r15"); +} + +static void sqr_256x256_integer_bmi2(u64 *const c, const u64 *const a) +{ + asm volatile( + "movq 8(%1), %%rdx ;" /* A[1] */ + "mulx (%1), %%r8, %%r9 ;" /* A[0]*A[1] */ + "mulx 16(%1), %%r10, %%r11 ;" /* A[2]*A[1] */ + "mulx 24(%1), %%rcx, %%r14 ;" /* A[3]*A[1] */ + + "movq 16(%1), %%rdx ;" /* A[2] */ + "mulx 24(%1), %%r15, %%r13 ;" /* A[3]*A[2] */ + "mulx (%1), %%rax, %%rdx ;" /* A[0]*A[2] */ + + "addq %%rax, %%r9 ;" + "adcq %%rdx, %%r10 ;" + "adcq %%rcx, %%r11 ;" + "adcq %%r14, %%r15 ;" + "adcq $0, %%r13 ;" + "movq $0, %%r14 ;" + "adcq $0, %%r14 ;" + + "movq (%1), %%rdx ;" /* A[0] */ + "mulx 24(%1), %%rax, %%rcx ;" /* A[0]*A[3] */ + + "addq %%rax, %%r10 ;" + "adcq %%rcx, %%r11 ;" + "adcq $0, %%r15 ;" + "adcq $0, %%r13 ;" + "adcq $0, %%r14 ;" + + "shldq $1, %%r13, %%r14 ;" + "shldq $1, %%r15, %%r13 ;" + "shldq $1, %%r11, %%r15 ;" + "shldq $1, %%r10, %%r11 ;" + "shldq $1, %%r9, %%r10 ;" + "shldq $1, %%r8, %%r9 ;" + "shlq $1, %%r8 ;" + + /*******************/ + "mulx %%rdx, %%rax, %%rcx ;" /* A[0]^2 */ + /*******************/ + "movq %%rax, 0(%0) ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, 8(%0) ;" + "movq 8(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* A[1]^2 */ + "adcq %%rax, %%r9 ;" + "movq %%r9, 16(%0) ;" + "adcq %%rcx, %%r10 ;" + "movq %%r10, 24(%0) ;" + "movq 16(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* A[2]^2 */ + "adcq %%rax, %%r11 ;" + "movq %%r11, 32(%0) ;" + "adcq %%rcx, %%r15 ;" + "movq %%r15, 40(%0) ;" + "movq 24(%1), %%rdx ;" + "mulx %%rdx, %%rax, %%rcx ;" /* A[3]^2 */ + "adcq %%rax, %%r13 ;" + "movq %%r13, 48(%0) ;" + "adcq %%rcx, %%r14 ;" + "movq %%r14, 56(%0) ;" + : + : "r"(c), "r"(a) + : "memory", "cc", "%rax", "%rcx", "%rdx", "%r8", "%r9", "%r10", + "%r11", "%r13", "%r14", "%r15"); +} + +static void red_eltfp25519_1w_adx(u64 *const c, const u64 *const a) +{ + asm volatile( + "movl $38, %%edx ;" /* 2*c = 38 = 2^256 */ + "mulx 32(%1), %%r8, %%r10 ;" /* c*C[4] */ + "xorl %%ebx, %%ebx ;" + "adox (%1), %%r8 ;" + "mulx 40(%1), %%r9, %%r11 ;" /* c*C[5] */ + "adcx %%r10, %%r9 ;" + "adox 8(%1), %%r9 ;" + "mulx 48(%1), %%r10, %%rax ;" /* c*C[6] */ + "adcx %%r11, %%r10 ;" + "adox 16(%1), %%r10 ;" + "mulx 56(%1), %%r11, %%rcx ;" /* c*C[7] */ + "adcx %%rax, %%r11 ;" + "adox 24(%1), %%r11 ;" + /***************************************/ + "adcx %%rbx, %%rcx ;" + "adox %%rbx, %%rcx ;" + "imul %%rdx, %%rcx ;" /* c*C[4], cf=0, of=0 */ + "adcx %%rcx, %%r8 ;" + "adcx %%rbx, %%r9 ;" + "movq %%r9, 8(%0) ;" + "adcx %%rbx, %%r10 ;" + "movq %%r10, 16(%0) ;" + "adcx %%rbx, %%r11 ;" + "movq %%r11, 24(%0) ;" + "mov $0, %%ecx ;" + "cmovc %%edx, %%ecx ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, (%0) ;" + : + : "r"(c), "r"(a) + : "memory", "cc", "%rax", "%rbx", "%rcx", "%rdx", "%r8", "%r9", + "%r10", "%r11"); +} + +static void red_eltfp25519_1w_bmi2(u64 *const c, const u64 *const a) +{ + asm volatile( + "movl $38, %%edx ;" /* 2*c = 38 = 2^256 */ + "mulx 32(%1), %%r8, %%r10 ;" /* c*C[4] */ + "mulx 40(%1), %%r9, %%r11 ;" /* c*C[5] */ + "addq %%r10, %%r9 ;" + "mulx 48(%1), %%r10, %%rax ;" /* c*C[6] */ + "adcq %%r11, %%r10 ;" + "mulx 56(%1), %%r11, %%rcx ;" /* c*C[7] */ + "adcq %%rax, %%r11 ;" + /***************************************/ + "adcq $0, %%rcx ;" + "addq (%1), %%r8 ;" + "adcq 8(%1), %%r9 ;" + "adcq 16(%1), %%r10 ;" + "adcq 24(%1), %%r11 ;" + "adcq $0, %%rcx ;" + "imul %%rdx, %%rcx ;" /* c*C[4], cf=0 */ + "addq %%rcx, %%r8 ;" + "adcq $0, %%r9 ;" + "movq %%r9, 
8(%0) ;" + "adcq $0, %%r10 ;" + "movq %%r10, 16(%0) ;" + "adcq $0, %%r11 ;" + "movq %%r11, 24(%0) ;" + "mov $0, %%ecx ;" + "cmovc %%edx, %%ecx ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, (%0) ;" + : + : "r"(c), "r"(a) + : "memory", "cc", "%rax", "%rcx", "%rdx", "%r8", "%r9", "%r10", + "%r11"); +} + +static __always_inline void +add_eltfp25519_1w_adx(u64 *const c, const u64 *const a, const u64 *const b) +{ + asm volatile( + "mov $38, %%eax ;" + "xorl %%ecx, %%ecx ;" + "movq (%2), %%r8 ;" + "adcx (%1), %%r8 ;" + "movq 8(%2), %%r9 ;" + "adcx 8(%1), %%r9 ;" + "movq 16(%2), %%r10 ;" + "adcx 16(%1), %%r10 ;" + "movq 24(%2), %%r11 ;" + "adcx 24(%1), %%r11 ;" + "cmovc %%eax, %%ecx ;" + "xorl %%eax, %%eax ;" + "adcx %%rcx, %%r8 ;" + "adcx %%rax, %%r9 ;" + "movq %%r9, 8(%0) ;" + "adcx %%rax, %%r10 ;" + "movq %%r10, 16(%0) ;" + "adcx %%rax, %%r11 ;" + "movq %%r11, 24(%0) ;" + "mov $38, %%ecx ;" + "cmovc %%ecx, %%eax ;" + "addq %%rax, %%r8 ;" + "movq %%r8, (%0) ;" + : + : "r"(c), "r"(a), "r"(b) + : "memory", "cc", "%rax", "%rcx", "%r8", "%r9", "%r10", "%r11"); +} + +static __always_inline void +add_eltfp25519_1w_bmi2(u64 *const c, const u64 *const a, const u64 *const b) +{ + asm volatile( + "mov $38, %%eax ;" + "movq (%2), %%r8 ;" + "addq (%1), %%r8 ;" + "movq 8(%2), %%r9 ;" + "adcq 8(%1), %%r9 ;" + "movq 16(%2), %%r10 ;" + "adcq 16(%1), %%r10 ;" + "movq 24(%2), %%r11 ;" + "adcq 24(%1), %%r11 ;" + "mov $0, %%ecx ;" + "cmovc %%eax, %%ecx ;" + "addq %%rcx, %%r8 ;" + "adcq $0, %%r9 ;" + "movq %%r9, 8(%0) ;" + "adcq $0, %%r10 ;" + "movq %%r10, 16(%0) ;" + "adcq $0, %%r11 ;" + "movq %%r11, 24(%0) ;" + "mov $0, %%ecx ;" + "cmovc %%eax, %%ecx ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, (%0) ;" + : + : "r"(c), "r"(a), "r"(b) + : "memory", "cc", "%rax", "%rcx", "%r8", "%r9", "%r10", "%r11"); +} + +static __always_inline void +sub_eltfp25519_1w(u64 *const c, const u64 *const a, const u64 *const b) +{ + asm volatile( + "mov $38, %%eax ;" + "movq (%1), %%r8 ;" + "subq (%2), %%r8 ;" + "movq 8(%1), %%r9 ;" + "sbbq 8(%2), %%r9 ;" + "movq 16(%1), %%r10 ;" + "sbbq 16(%2), %%r10 ;" + "movq 24(%1), %%r11 ;" + "sbbq 24(%2), %%r11 ;" + "mov $0, %%ecx ;" + "cmovc %%eax, %%ecx ;" + "subq %%rcx, %%r8 ;" + "sbbq $0, %%r9 ;" + "movq %%r9, 8(%0) ;" + "sbbq $0, %%r10 ;" + "movq %%r10, 16(%0) ;" + "sbbq $0, %%r11 ;" + "movq %%r11, 24(%0) ;" + "mov $0, %%ecx ;" + "cmovc %%eax, %%ecx ;" + "subq %%rcx, %%r8 ;" + "movq %%r8, (%0) ;" + : + : "r"(c), "r"(a), "r"(b) + : "memory", "cc", "%rax", "%rcx", "%r8", "%r9", "%r10", "%r11"); +} + +/* Multiplication by a24 = (A+2)/4 = (486662+2)/4 = 121666 */ +static __always_inline void +mul_a24_eltfp25519_1w(u64 *const c, const u64 *const a) +{ + const u64 a24 = 121666; + asm volatile( + "movq %2, %%rdx ;" + "mulx (%1), %%r8, %%r10 ;" + "mulx 8(%1), %%r9, %%r11 ;" + "addq %%r10, %%r9 ;" + "mulx 16(%1), %%r10, %%rax ;" + "adcq %%r11, %%r10 ;" + "mulx 24(%1), %%r11, %%rcx ;" + "adcq %%rax, %%r11 ;" + /**************************/ + "adcq $0, %%rcx ;" + "movl $38, %%edx ;" /* 2*c = 38 = 2^256 mod 2^255-19*/ + "imul %%rdx, %%rcx ;" + "addq %%rcx, %%r8 ;" + "adcq $0, %%r9 ;" + "movq %%r9, 8(%0) ;" + "adcq $0, %%r10 ;" + "movq %%r10, 16(%0) ;" + "adcq $0, %%r11 ;" + "movq %%r11, 24(%0) ;" + "mov $0, %%ecx ;" + "cmovc %%edx, %%ecx ;" + "addq %%rcx, %%r8 ;" + "movq %%r8, (%0) ;" + : + : "r"(c), "r"(a), "r"(a24) + : "memory", "cc", "%rax", "%rcx", "%rdx", "%r8", "%r9", "%r10", + "%r11"); +} + +static void inv_eltfp25519_1w_adx(u64 *const c, const u64 *const a) +{ + struct { + eltfp25519_1w_buffer buffer; + 
eltfp25519_1w x0, x1, x2; + } __aligned(32) m; + u64 *T[4]; + + T[0] = m.x0; + T[1] = c; /* x^(-1) */ + T[2] = m.x1; + T[3] = m.x2; + + copy_eltfp25519_1w(T[1], a); + sqrn_eltfp25519_1w_adx(T[1], 1); + copy_eltfp25519_1w(T[2], T[1]); + sqrn_eltfp25519_1w_adx(T[2], 2); + mul_eltfp25519_1w_adx(T[0], a, T[2]); + mul_eltfp25519_1w_adx(T[1], T[1], T[0]); + copy_eltfp25519_1w(T[2], T[1]); + sqrn_eltfp25519_1w_adx(T[2], 1); + mul_eltfp25519_1w_adx(T[0], T[0], T[2]); + copy_eltfp25519_1w(T[2], T[0]); + sqrn_eltfp25519_1w_adx(T[2], 5); + mul_eltfp25519_1w_adx(T[0], T[0], T[2]); + copy_eltfp25519_1w(T[2], T[0]); + sqrn_eltfp25519_1w_adx(T[2], 10); + mul_eltfp25519_1w_adx(T[2], T[2], T[0]); + copy_eltfp25519_1w(T[3], T[2]); + sqrn_eltfp25519_1w_adx(T[3], 20); + mul_eltfp25519_1w_adx(T[3], T[3], T[2]); + sqrn_eltfp25519_1w_adx(T[3], 10); + mul_eltfp25519_1w_adx(T[3], T[3], T[0]); + copy_eltfp25519_1w(T[0], T[3]); + sqrn_eltfp25519_1w_adx(T[0], 50); + mul_eltfp25519_1w_adx(T[0], T[0], T[3]); + copy_eltfp25519_1w(T[2], T[0]); + sqrn_eltfp25519_1w_adx(T[2], 100); + mul_eltfp25519_1w_adx(T[2], T[2], T[0]); + sqrn_eltfp25519_1w_adx(T[2], 50); + mul_eltfp25519_1w_adx(T[2], T[2], T[3]); + sqrn_eltfp25519_1w_adx(T[2], 5); + mul_eltfp25519_1w_adx(T[1], T[1], T[2]); + + memzero_explicit(&m, sizeof(m)); +} + +static void inv_eltfp25519_1w_bmi2(u64 *const c, const u64 *const a) +{ + struct { + eltfp25519_1w_buffer buffer; + eltfp25519_1w x0, x1, x2; + } __aligned(32) m; + u64 *T[5]; + + T[0] = m.x0; + T[1] = c; /* x^(-1) */ + T[2] = m.x1; + T[3] = m.x2; + + copy_eltfp25519_1w(T[1], a); + sqrn_eltfp25519_1w_bmi2(T[1], 1); + copy_eltfp25519_1w(T[2], T[1]); + sqrn_eltfp25519_1w_bmi2(T[2], 2); + mul_eltfp25519_1w_bmi2(T[0], a, T[2]); + mul_eltfp25519_1w_bmi2(T[1], T[1], T[0]); + copy_eltfp25519_1w(T[2], T[1]); + sqrn_eltfp25519_1w_bmi2(T[2], 1); + mul_eltfp25519_1w_bmi2(T[0], T[0], T[2]); + copy_eltfp25519_1w(T[2], T[0]); + sqrn_eltfp25519_1w_bmi2(T[2], 5); + mul_eltfp25519_1w_bmi2(T[0], T[0], T[2]); + copy_eltfp25519_1w(T[2], T[0]); + sqrn_eltfp25519_1w_bmi2(T[2], 10); + mul_eltfp25519_1w_bmi2(T[2], T[2], T[0]); + copy_eltfp25519_1w(T[3], T[2]); + sqrn_eltfp25519_1w_bmi2(T[3], 20); + mul_eltfp25519_1w_bmi2(T[3], T[3], T[2]); + sqrn_eltfp25519_1w_bmi2(T[3], 10); + mul_eltfp25519_1w_bmi2(T[3], T[3], T[0]); + copy_eltfp25519_1w(T[0], T[3]); + sqrn_eltfp25519_1w_bmi2(T[0], 50); + mul_eltfp25519_1w_bmi2(T[0], T[0], T[3]); + copy_eltfp25519_1w(T[2], T[0]); + sqrn_eltfp25519_1w_bmi2(T[2], 100); + mul_eltfp25519_1w_bmi2(T[2], T[2], T[0]); + sqrn_eltfp25519_1w_bmi2(T[2], 50); + mul_eltfp25519_1w_bmi2(T[2], T[2], T[3]); + sqrn_eltfp25519_1w_bmi2(T[2], 5); + mul_eltfp25519_1w_bmi2(T[1], T[1], T[2]); + + memzero_explicit(&m, sizeof(m)); +} + +/* Given c, a 256-bit number, fred_eltfp25519_1w updates c + * with a number such that 0 <= C < 2**255-19. + */ +static __always_inline void fred_eltfp25519_1w(u64 *const c) +{ + u64 tmp0 = 38, tmp1 = 19; + asm volatile( + "btrq $63, %3 ;" /* Put bit 255 in carry flag and clear */ + "cmovncl %k5, %k4 ;" /* c[255] ? 38 : 19 */ + + /* Add either 19 or 38 to c */ + "addq %4, %0 ;" + "adcq $0, %1 ;" + "adcq $0, %2 ;" + "adcq $0, %3 ;" + + /* Test for bit 255 again; only triggered on overflow modulo 2^255-19 */ + "movl $0, %k4 ;" + "cmovnsl %k5, %k4 ;" /* c[255] ? 
0 : 19 */ + "btrq $63, %3 ;" /* Clear bit 255 */ + + /* Subtract 19 if necessary */ + "subq %4, %0 ;" + "sbbq $0, %1 ;" + "sbbq $0, %2 ;" + "sbbq $0, %3 ;" + + : "+r"(c[0]), "+r"(c[1]), "+r"(c[2]), "+r"(c[3]), "+r"(tmp0), + "+r"(tmp1) + : + : "memory", "cc"); +} + +static __always_inline void cswap(u8 bit, u64 *const px, u64 *const py) +{ + u64 temp; + asm volatile( + "test %9, %9 ;" + "movq %0, %8 ;" + "cmovnzq %4, %0 ;" + "cmovnzq %8, %4 ;" + "movq %1, %8 ;" + "cmovnzq %5, %1 ;" + "cmovnzq %8, %5 ;" + "movq %2, %8 ;" + "cmovnzq %6, %2 ;" + "cmovnzq %8, %6 ;" + "movq %3, %8 ;" + "cmovnzq %7, %3 ;" + "cmovnzq %8, %7 ;" + : "+r"(px[0]), "+r"(px[1]), "+r"(px[2]), "+r"(px[3]), + "+r"(py[0]), "+r"(py[1]), "+r"(py[2]), "+r"(py[3]), + "=r"(temp) + : "r"(bit) + : "cc" + ); +} + +static __always_inline void cselect(u8 bit, u64 *const px, const u64 *const py) +{ + asm volatile( + "test %4, %4 ;" + "cmovnzq %5, %0 ;" + "cmovnzq %6, %1 ;" + "cmovnzq %7, %2 ;" + "cmovnzq %8, %3 ;" + : "+r"(px[0]), "+r"(px[1]), "+r"(px[2]), "+r"(px[3]) + : "r"(bit), "rm"(py[0]), "rm"(py[1]), "rm"(py[2]), "rm"(py[3]) + : "cc" + ); +} + +static __always_inline void clamp_secret(u8 secret[CURVE25519_KEY_SIZE]) +{ + secret[0] &= 248; + secret[31] &= 127; + secret[31] |= 64; +} + +static void curve25519_adx(u8 shared[CURVE25519_KEY_SIZE], + const u8 private_key[CURVE25519_KEY_SIZE], + const u8 session_key[CURVE25519_KEY_SIZE]) +{ + struct { + u64 buffer[4 * NUM_WORDS_ELTFP25519]; + u64 coordinates[4 * NUM_WORDS_ELTFP25519]; + u64 workspace[6 * NUM_WORDS_ELTFP25519]; + u8 session[CURVE25519_KEY_SIZE]; + u8 private[CURVE25519_KEY_SIZE]; + } __aligned(32) m; + + int i = 0, j = 0; + u64 prev = 0; + u64 *const X1 = (u64 *)m.session; + u64 *const key = (u64 *)m.private; + u64 *const Px = m.coordinates + 0; + u64 *const Pz = m.coordinates + 4; + u64 *const Qx = m.coordinates + 8; + u64 *const Qz = m.coordinates + 12; + u64 *const X2 = Qx; + u64 *const Z2 = Qz; + u64 *const X3 = Px; + u64 *const Z3 = Pz; + u64 *const X2Z2 = Qx; + u64 *const X3Z3 = Px; + + u64 *const A = m.workspace + 0; + u64 *const B = m.workspace + 4; + u64 *const D = m.workspace + 8; + u64 *const C = m.workspace + 12; + u64 *const DA = m.workspace + 16; + u64 *const CB = m.workspace + 20; + u64 *const AB = A; + u64 *const DC = D; + u64 *const DACB = DA; + + memcpy(m.private, private_key, sizeof(m.private)); + memcpy(m.session, session_key, sizeof(m.session)); + + clamp_secret(m.private); + + /* As in the draft: + * When receiving such an array, implementations of curve25519 + * MUST mask the most-significant bit in the final byte. 
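fred_eltfp25519_1w above reaches the canonical representative by adding 19 or 38, clearing bit 255 and conditionally subtracting 19 once more. The same contract can also be met, a bit more slowly, with two constant-time conditional subtractions, since any 256-bit input is below 3p; a minimal sketch of that alternative (the helper names and the unsigned __int128 borrow handling are assumptions, and it deliberately uses plain conditional subtraction instead of the add-and-mask trick the assembly uses):

#include <stdint.h>

/* subtract p = 2^255 - 19 once, but keep the original value when the
 * subtraction would underflow; the choice is made with an all-zeros or
 * all-ones mask instead of a branch */
static void fe25519_csub_p(uint64_t c[4])
{
        static const uint64_t p[4] = {
                0xffffffffffffffedULL, 0xffffffffffffffffULL,
                0xffffffffffffffffULL, 0x7fffffffffffffffULL
        };
        uint64_t t[4], borrow = 0, keep;
        int i;

        for (i = 0; i < 4; i++) {
                unsigned __int128 d = (unsigned __int128)c[i] - p[i] - borrow;
                t[i] = (uint64_t)d;
                borrow = (uint64_t)(d >> 64) & 1;       /* 1 iff we borrowed */
        }

        keep = 0 - borrow;      /* all-ones when c < p: keep c, else take t */
        for (i = 0; i < 4; i++)
                c[i] = (c[i] & keep) | (t[i] & ~keep);
}

/* a four-limb value is < 2^256 < 3p, so two conditional subtractions are
 * enough to land in [0, 2^255 - 19) */
static void fe25519_freduce_c(uint64_t c[4])
{
        fe25519_csub_p(c);
        fe25519_csub_p(c);
}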
This + * is done to preserve compatibility with point formats which + * reserve the sign bit for use in other protocols and to + * increase resistance to implementation fingerprinting + */ + m.session[CURVE25519_KEY_SIZE - 1] &= (1 << (255 % 8)) - 1; + + copy_eltfp25519_1w(Px, X1); + setzero_eltfp25519_1w(Pz); + setzero_eltfp25519_1w(Qx); + setzero_eltfp25519_1w(Qz); + + Pz[0] = 1; + Qx[0] = 1; + + /* main-loop */ + prev = 0; + j = 62; + for (i = 3; i >= 0; --i) { + while (j >= 0) { + u64 bit = (key[i] >> j) & 0x1; + u64 swap = bit ^ prev; + prev = bit; + + add_eltfp25519_1w_adx(A, X2, Z2); /* A = (X2+Z2) */ + sub_eltfp25519_1w(B, X2, Z2); /* B = (X2-Z2) */ + add_eltfp25519_1w_adx(C, X3, Z3); /* C = (X3+Z3) */ + sub_eltfp25519_1w(D, X3, Z3); /* D = (X3-Z3) */ + mul_eltfp25519_2w_adx(DACB, AB, DC); /* [DA|CB] = [A|B]*[D|C] */ + + cselect(swap, A, C); + cselect(swap, B, D); + + sqr_eltfp25519_2w_adx(AB); /* [AA|BB] = [A^2|B^2] */ + add_eltfp25519_1w_adx(X3, DA, CB); /* X3 = (DA+CB) */ + sub_eltfp25519_1w(Z3, DA, CB); /* Z3 = (DA-CB) */ + sqr_eltfp25519_2w_adx(X3Z3); /* [X3|Z3] = [(DA+CB)|(DA+CB)]^2 */ + + copy_eltfp25519_1w(X2, B); /* X2 = B^2 */ + sub_eltfp25519_1w(Z2, A, B); /* Z2 = E = AA-BB */ + + mul_a24_eltfp25519_1w(B, Z2); /* B = a24*E */ + add_eltfp25519_1w_adx(B, B, X2); /* B = a24*E+B */ + mul_eltfp25519_2w_adx(X2Z2, X2Z2, AB); /* [X2|Z2] = [B|E]*[A|a24*E+B] */ + mul_eltfp25519_1w_adx(Z3, Z3, X1); /* Z3 = Z3*X1 */ + --j; + } + j = 63; + } + + inv_eltfp25519_1w_adx(A, Qz); + mul_eltfp25519_1w_adx((u64 *)shared, Qx, A); + fred_eltfp25519_1w((u64 *)shared); + + memzero_explicit(&m, sizeof(m)); +} + +static void curve25519_adx_base(u8 session_key[CURVE25519_KEY_SIZE], + const u8 private_key[CURVE25519_KEY_SIZE]) +{ + struct { + u64 buffer[4 * NUM_WORDS_ELTFP25519]; + u64 coordinates[4 * NUM_WORDS_ELTFP25519]; + u64 workspace[4 * NUM_WORDS_ELTFP25519]; + u8 private[CURVE25519_KEY_SIZE]; + } __aligned(32) m; + + const int ite[4] = { 64, 64, 64, 63 }; + const int q = 3; + u64 swap = 1; + + int i = 0, j = 0, k = 0; + u64 *const key = (u64 *)m.private; + u64 *const Ur1 = m.coordinates + 0; + u64 *const Zr1 = m.coordinates + 4; + u64 *const Ur2 = m.coordinates + 8; + u64 *const Zr2 = m.coordinates + 12; + + u64 *const UZr1 = m.coordinates + 0; + u64 *const ZUr2 = m.coordinates + 8; + + u64 *const A = m.workspace + 0; + u64 *const B = m.workspace + 4; + u64 *const C = m.workspace + 8; + u64 *const D = m.workspace + 12; + + u64 *const AB = m.workspace + 0; + u64 *const CD = m.workspace + 8; + + const u64 *const P = table_ladder_8k; + + memcpy(m.private, private_key, sizeof(m.private)); + + clamp_secret(m.private); + + setzero_eltfp25519_1w(Ur1); + setzero_eltfp25519_1w(Zr1); + setzero_eltfp25519_1w(Zr2); + Ur1[0] = 1; + Zr1[0] = 1; + Zr2[0] = 1; + + /* G-S */ + Ur2[3] = 0x1eaecdeee27cab34UL; + Ur2[2] = 0xadc7a0b9235d48e2UL; + Ur2[1] = 0xbbf095ae14b2edf8UL; + Ur2[0] = 0x7e94e1fec82faabdUL; + + /* main-loop */ + j = q; + for (i = 0; i < NUM_WORDS_ELTFP25519; ++i) { + while (j < ite[i]) { + u64 bit = (key[i] >> j) & 0x1; + k = (64 * i + j - q); + swap = swap ^ bit; + cswap(swap, Ur1, Ur2); + cswap(swap, Zr1, Zr2); + swap = bit; + /* Addition */ + sub_eltfp25519_1w(B, Ur1, Zr1); /* B = Ur1-Zr1 */ + add_eltfp25519_1w_adx(A, Ur1, Zr1); /* A = Ur1+Zr1 */ + mul_eltfp25519_1w_adx(C, &P[4 * k], B); /* C = M0-B */ + sub_eltfp25519_1w(B, A, C); /* B = (Ur1+Zr1) - M*(Ur1-Zr1) */ + add_eltfp25519_1w_adx(A, A, C); /* A = (Ur1+Zr1) + M*(Ur1-Zr1) */ + sqr_eltfp25519_2w_adx(AB); /* A = A^2 | B = B^2 */ + 
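Note that the ladder above never branches on secret data: cswap()/cselect() are built on cmov, and a swap is only issued when the current key bit differs from the previous one (swap = bit ^ prev), so each bit costs at most one conditional swap. A portable sketch of the same swap with the usual mask trick (fe25519_cswap_c is an assumed name; bit must be 0 or 1):

#include <stdint.h>

/* branch-free conditional swap: exchange a and b when bit == 1, leave both
 * untouched when bit == 0, without a secret-dependent branch */
static void fe25519_cswap_c(uint64_t a[4], uint64_t b[4], uint64_t bit)
{
        uint64_t mask = 0 - bit;        /* 0 or 0xffffffffffffffff */
        int i;

        for (i = 0; i < 4; i++) {
                uint64_t t = mask & (a[i] ^ b[i]);
                a[i] ^= t;
                b[i] ^= t;
        }
}

cselect() is the one-sided form of the same idea: each limb of px either keeps its own value or takes py's, chosen by the same all-zeros/all-ones condition.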
mul_eltfp25519_2w_adx(UZr1, ZUr2, AB); /* Ur1 = Zr2*A | Zr1 = Ur2*B */ + ++j; + } + j = 0; + } + + /* Doubling */ + for (i = 0; i < q; ++i) { + add_eltfp25519_1w_adx(A, Ur1, Zr1); /* A = Ur1+Zr1 */ + sub_eltfp25519_1w(B, Ur1, Zr1); /* B = Ur1-Zr1 */ + sqr_eltfp25519_2w_adx(AB); /* A = A**2 B = B**2 */ + copy_eltfp25519_1w(C, B); /* C = B */ + sub_eltfp25519_1w(B, A, B); /* B = A-B */ + mul_a24_eltfp25519_1w(D, B); /* D = my_a24*B */ + add_eltfp25519_1w_adx(D, D, C); /* D = D+C */ + mul_eltfp25519_2w_adx(UZr1, AB, CD); /* Ur1 = A*B Zr1 = Zr1*A */ + } + + /* Convert to affine coordinates */ + inv_eltfp25519_1w_adx(A, Zr1); + mul_eltfp25519_1w_adx((u64 *)session_key, Ur1, A); + fred_eltfp25519_1w((u64 *)session_key); + + memzero_explicit(&m, sizeof(m)); +} + +static void curve25519_bmi2(u8 shared[CURVE25519_KEY_SIZE], + const u8 private_key[CURVE25519_KEY_SIZE], + const u8 session_key[CURVE25519_KEY_SIZE]) +{ + struct { + u64 buffer[4 * NUM_WORDS_ELTFP25519]; + u64 coordinates[4 * NUM_WORDS_ELTFP25519]; + u64 workspace[6 * NUM_WORDS_ELTFP25519]; + u8 session[CURVE25519_KEY_SIZE]; + u8 private[CURVE25519_KEY_SIZE]; + } __aligned(32) m; + + int i = 0, j = 0; + u64 prev = 0; + u64 *const X1 = (u64 *)m.session; + u64 *const key = (u64 *)m.private; + u64 *const Px = m.coordinates + 0; + u64 *const Pz = m.coordinates + 4; + u64 *const Qx = m.coordinates + 8; + u64 *const Qz = m.coordinates + 12; + u64 *const X2 = Qx; + u64 *const Z2 = Qz; + u64 *const X3 = Px; + u64 *const Z3 = Pz; + u64 *const X2Z2 = Qx; + u64 *const X3Z3 = Px; + + u64 *const A = m.workspace + 0; + u64 *const B = m.workspace + 4; + u64 *const D = m.workspace + 8; + u64 *const C = m.workspace + 12; + u64 *const DA = m.workspace + 16; + u64 *const CB = m.workspace + 20; + u64 *const AB = A; + u64 *const DC = D; + u64 *const DACB = DA; + + memcpy(m.private, private_key, sizeof(m.private)); + memcpy(m.session, session_key, sizeof(m.session)); + + clamp_secret(m.private); + + /* As in the draft: + * When receiving such an array, implementations of curve25519 + * MUST mask the most-significant bit in the final byte. 
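In the fixed-base curve25519_adx_base() above (and its bmi2 twin further down), clamp_secret() forces the three low bits of the scalar to zero, which appears to be why the ladder starts at bit q = 3 against the precomputed table_ladder_8k entries and then performs q extra doublings: the clamped scalar is a multiple of 8, so the skipped low bits are made up by doubling three times at the end. A tiny standalone check of that property of clamping, for one arbitrary example input, using the same masks as clamp_secret():

#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
        uint8_t secret[32];

        memset(secret, 0xff, sizeof(secret));   /* arbitrary example bytes */

        /* same clamping as clamp_secret() */
        secret[0] &= 248;
        secret[31] &= 127;
        secret[31] |= 64;

        /* low three bits clear (multiple of 8), bit 255 clear, bit 254 set */
        assert((secret[0] & 7) == 0);
        assert((secret[31] & 0x80) == 0);
        assert((secret[31] & 0x40) == 0x40);
        return 0;
}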
This + * is done to preserve compatibility with point formats which + * reserve the sign bit for use in other protocols and to + * increase resistance to implementation fingerprinting + */ + m.session[CURVE25519_KEY_SIZE - 1] &= (1 << (255 % 8)) - 1; + + copy_eltfp25519_1w(Px, X1); + setzero_eltfp25519_1w(Pz); + setzero_eltfp25519_1w(Qx); + setzero_eltfp25519_1w(Qz); + + Pz[0] = 1; + Qx[0] = 1; + + /* main-loop */ + prev = 0; + j = 62; + for (i = 3; i >= 0; --i) { + while (j >= 0) { + u64 bit = (key[i] >> j) & 0x1; + u64 swap = bit ^ prev; + prev = bit; + + add_eltfp25519_1w_bmi2(A, X2, Z2); /* A = (X2+Z2) */ + sub_eltfp25519_1w(B, X2, Z2); /* B = (X2-Z2) */ + add_eltfp25519_1w_bmi2(C, X3, Z3); /* C = (X3+Z3) */ + sub_eltfp25519_1w(D, X3, Z3); /* D = (X3-Z3) */ + mul_eltfp25519_2w_bmi2(DACB, AB, DC); /* [DA|CB] = [A|B]*[D|C] */ + + cselect(swap, A, C); + cselect(swap, B, D); + + sqr_eltfp25519_2w_bmi2(AB); /* [AA|BB] = [A^2|B^2] */ + add_eltfp25519_1w_bmi2(X3, DA, CB); /* X3 = (DA+CB) */ + sub_eltfp25519_1w(Z3, DA, CB); /* Z3 = (DA-CB) */ + sqr_eltfp25519_2w_bmi2(X3Z3); /* [X3|Z3] = [(DA+CB)|(DA+CB)]^2 */ + + copy_eltfp25519_1w(X2, B); /* X2 = B^2 */ + sub_eltfp25519_1w(Z2, A, B); /* Z2 = E = AA-BB */ + + mul_a24_eltfp25519_1w(B, Z2); /* B = a24*E */ + add_eltfp25519_1w_bmi2(B, B, X2); /* B = a24*E+B */ + mul_eltfp25519_2w_bmi2(X2Z2, X2Z2, AB); /* [X2|Z2] = [B|E]*[A|a24*E+B] */ + mul_eltfp25519_1w_bmi2(Z3, Z3, X1); /* Z3 = Z3*X1 */ + --j; + } + j = 63; + } + + inv_eltfp25519_1w_bmi2(A, Qz); + mul_eltfp25519_1w_bmi2((u64 *)shared, Qx, A); + fred_eltfp25519_1w((u64 *)shared); + + memzero_explicit(&m, sizeof(m)); +} + +static void curve25519_bmi2_base(u8 session_key[CURVE25519_KEY_SIZE], + const u8 private_key[CURVE25519_KEY_SIZE]) +{ + struct { + u64 buffer[4 * NUM_WORDS_ELTFP25519]; + u64 coordinates[4 * NUM_WORDS_ELTFP25519]; + u64 workspace[4 * NUM_WORDS_ELTFP25519]; + u8 private[CURVE25519_KEY_SIZE]; + } __aligned(32) m; + + const int ite[4] = { 64, 64, 64, 63 }; + const int q = 3; + u64 swap = 1; + + int i = 0, j = 0, k = 0; + u64 *const key = (u64 *)m.private; + u64 *const Ur1 = m.coordinates + 0; + u64 *const Zr1 = m.coordinates + 4; + u64 *const Ur2 = m.coordinates + 8; + u64 *const Zr2 = m.coordinates + 12; + + u64 *const UZr1 = m.coordinates + 0; + u64 *const ZUr2 = m.coordinates + 8; + + u64 *const A = m.workspace + 0; + u64 *const B = m.workspace + 4; + u64 *const C = m.workspace + 8; + u64 *const D = m.workspace + 12; + + u64 *const AB = m.workspace + 0; + u64 *const CD = m.workspace + 8; + + const u64 *const P = table_ladder_8k; + + memcpy(m.private, private_key, sizeof(m.private)); + + clamp_secret(m.private); + + setzero_eltfp25519_1w(Ur1); + setzero_eltfp25519_1w(Zr1); + setzero_eltfp25519_1w(Zr2); + Ur1[0] = 1; + Zr1[0] = 1; + Zr2[0] = 1; + + /* G-S */ + Ur2[3] = 0x1eaecdeee27cab34UL; + Ur2[2] = 0xadc7a0b9235d48e2UL; + Ur2[1] = 0xbbf095ae14b2edf8UL; + Ur2[0] = 0x7e94e1fec82faabdUL; + + /* main-loop */ + j = q; + for (i = 0; i < NUM_WORDS_ELTFP25519; ++i) { + while (j < ite[i]) { + u64 bit = (key[i] >> j) & 0x1; + k = (64 * i + j - q); + swap = swap ^ bit; + cswap(swap, Ur1, Ur2); + cswap(swap, Zr1, Zr2); + swap = bit; + /* Addition */ + sub_eltfp25519_1w(B, Ur1, Zr1); /* B = Ur1-Zr1 */ + add_eltfp25519_1w_bmi2(A, Ur1, Zr1); /* A = Ur1+Zr1 */ + mul_eltfp25519_1w_bmi2(C, &P[4 * k], B);/* C = M0-B */ + sub_eltfp25519_1w(B, A, C); /* B = (Ur1+Zr1) - M*(Ur1-Zr1) */ + add_eltfp25519_1w_bmi2(A, A, C); /* A = (Ur1+Zr1) + M*(Ur1-Zr1) */ + sqr_eltfp25519_2w_bmi2(AB); /* A = A^2 | B = 
B^2 */ + mul_eltfp25519_2w_bmi2(UZr1, ZUr2, AB); /* Ur1 = Zr2*A | Zr1 = Ur2*B */ + ++j; + } + j = 0; + } + + /* Doubling */ + for (i = 0; i < q; ++i) { + add_eltfp25519_1w_bmi2(A, Ur1, Zr1); /* A = Ur1+Zr1 */ + sub_eltfp25519_1w(B, Ur1, Zr1); /* B = Ur1-Zr1 */ + sqr_eltfp25519_2w_bmi2(AB); /* A = A**2 B = B**2 */ + copy_eltfp25519_1w(C, B); /* C = B */ + sub_eltfp25519_1w(B, A, B); /* B = A-B */ + mul_a24_eltfp25519_1w(D, B); /* D = my_a24*B */ + add_eltfp25519_1w_bmi2(D, D, C); /* D = D+C */ + mul_eltfp25519_2w_bmi2(UZr1, AB, CD); /* Ur1 = A*B Zr1 = Zr1*A */ + } + + /* Convert to affine coordinates */ + inv_eltfp25519_1w_bmi2(A, Zr1); + mul_eltfp25519_1w_bmi2((u64 *)session_key, Ur1, A); + fred_eltfp25519_1w((u64 *)session_key); + + memzero_explicit(&m, sizeof(m)); +} diff --git a/lib/zinc/curve25519/curve25519.c b/lib/zinc/curve25519/curve25519.c index 8d9d8fc0ca38..074fbb1c2149 100644 --- a/lib/zinc/curve25519/curve25519.c +++ b/lib/zinc/curve25519/curve25519.c @@ -20,6 +20,9 @@ #include #include // For crypto_memneq. +#if defined(CONFIG_ZINC_ARCH_X86_64) +#include "curve25519-x86_64-glue.c" +#else static bool *const curve25519_nobs[] __initconst = { }; static void __init curve25519_fpu_init(void) { @@ -35,6 +38,7 @@ static inline bool curve25519_base_arch(u8 pub[CURVE25519_KEY_SIZE], { return false; } +#endif static __always_inline void normalize_secret(u8 secret[CURVE25519_KEY_SIZE]) { From patchwork Fri Mar 22 06:29:56 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 160846 Delivered-To: patch@linaro.org Received: by 2002:a02:c6d8:0:0:0:0:0 with SMTP id r24csp453285jan; Thu, 21 Mar 2019 23:41:08 -0700 (PDT) X-Google-Smtp-Source: APXvYqyyR+MWk+XPfwj52azfrPrjnv+PeQ61PSsSBLJg2+YcJuJ7PYvVWDsSnEoj2CFcNju/MQ3W X-Received: by 2002:aa7:8243:: with SMTP id e3mr7685998pfn.40.1553236868333; Thu, 21 Mar 2019 23:41:08 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1553236868; cv=none; d=google.com; s=arc-20160816; b=inYtPehH7gVTyhNfIw2IJGldKJd7ua9d9MbXTBl+O+sUW8ai4Ts3PD9XOVkG7/OqYP vOymwKlAD8zjM3uPIKjuJld8TsKQmvosMgG0R6NZMoKx3AeQHAnYnqh2ifyPWIV1/5Qr jhJaAbF+Tu+0rBH4U6PwmSEfAlnDSRBW2sQ9oasOBCVR+w6LD0NcRA7Pt4WDYyeLOtnZ WrGadzGrOJIsCFtd40YXD5rkoAxcQoCorATcJlIJYt3LqMxkiWab0yYwOMOjmrkTcY0s n1zqavLzUR+arB1wkvEJe3wasCoRPU33hL6L27n6Epcfr0rxkDjdnoNSNvX+aCHpWifv 24EQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:date:from:message-id:to:references :subject; bh=Eh5sAowIbgNzZE4xRmSLhy+ofBdtuHiAVLkUTAS/UdQ=; b=ccXD5JOZ/PHphLeF0Qr5umszp8IL7Xt0/LOykWjBQIQ0mQ9Q6Vk5iLAtVxwhdiyGX8 fnrN79TcxIPh4lvDBFAtBngmuF1EOnExnkyJt1EtsX9MC9CYv+XPbYt6XBvI9ytw0MH9 +vsjUv11JDP74nlhx3CbhI8CQS9RLJHIocjm8hbpfplL8aGz6JyM/cDpw+2WyZz4Cz56 fP1o8czRuy8BGNtpHp4yVEG3O172+SHlwgr5rTJdKuQD7zzA0/wPbGNORZYEzUGKgetD ryuTzgXRbJ91+K/HGrMtEmGfhn9RbBr9qV0PYu4CHIvMq3a+HzILdqX648m6ejt3sj1Q 0QqA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id b7si6617755pla.195.2019.03.21.23.41.08; Thu, 21 Mar 2019 23:41:08 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727695AbfCVGlH (ORCPT + 3 others); Fri, 22 Mar 2019 02:41:07 -0400 Received: from orcrist.hmeau.com ([104.223.48.154]:44430 "EHLO deadmen.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727429AbfCVGlH (ORCPT ); Fri, 22 Mar 2019 02:41:07 -0400 Received: from gondobar.mordor.me.apana.org.au ([192.168.128.4] helo=gondobar) by deadmen.hmeau.com with esmtps (Exim 4.89 #2 (Debian)) id 1h7Dgo-0002Z9-0x; Fri, 22 Mar 2019 14:30:02 +0800 Received: from herbert by gondobar with local (Exim 4.89) (envelope-from ) id 1h7Dgi-0001Jh-2z; Fri, 22 Mar 2019 14:29:56 +0800 Subject: [PATCH 15/17] zinc: import Bernstein and Schwabe's Curve25519 ARM implementation References: <20190322062740.nrwfx2rvmt7lzotj@gondor.apana.org.au> To: Linus Torvalds , "David S. Miller" , "Jason A. Donenfeld" , Eric Biggers , Ard Biesheuvel , Linux Crypto Mailing List , linux-fscrypt@vger.kernel.org, linux-arm-kernel@lists.infradead.org, LKML , Paul Crowley , Greg Kaiser , Samuel Neves , Tomer Ashur , Martin Willi Message-Id: From: Herbert Xu Date: Fri, 22 Mar 2019 14:29:56 +0800 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Jason A. Donenfeld This comes from Dan Bernstein and Peter Schwabe's public domain NEON code, and is included here in raw form so that subsequent commits that fix these up for the kernel can see how it has changed. This code does have some entirely cosmetic formatting differences, adding indentation and so forth, so that when we actually port it for use in the kernel in the subsequent commit, it's obvious what's changed in the process. This code originates from SUPERCOP 20180818, available at . Signed-off-by: Jason A. Donenfeld Cc: Russell King Cc: linux-arm-kernel@lists.infradead.org Cc: Samuel Neves Cc: Jean-Philippe Aumasson Cc: Andy Lutomirski Cc: Greg KH Cc: Andrew Morton Cc: Linus Torvalds Cc: kernel-hardening@lists.openwall.com Cc: linux-crypto@vger.kernel.org Signed-off-by: Herbert Xu --- lib/zinc/curve25519/curve25519-arm-supercop.S | 2105 ++++++++++++++++++++++++++ 1 file changed, 2105 insertions(+) diff --git a/lib/zinc/curve25519/curve25519-arm-supercop.S b/lib/zinc/curve25519/curve25519-arm-supercop.S new file mode 100644 index 000000000000..f33b85fef382 --- /dev/null +++ b/lib/zinc/curve25519/curve25519-arm-supercop.S @@ -0,0 +1,2105 @@ +/* + * Public domain code from Daniel J. Bernstein and Peter Schwabe, from + * SUPERCOP's curve25519/neon2/scalarmult.s. 
+ */ + +.fpu neon +.text +.align 4 +.global _crypto_scalarmult_curve25519_neon2 +.global crypto_scalarmult_curve25519_neon2 +.type _crypto_scalarmult_curve25519_neon2 STT_FUNC +.type crypto_scalarmult_curve25519_neon2 STT_FUNC + _crypto_scalarmult_curve25519_neon2: + crypto_scalarmult_curve25519_neon2: + vpush {q4, q5, q6, q7} + mov r12, sp + sub sp, sp, #736 + and sp, sp, #0xffffffe0 + strd r4, [sp, #0] + strd r6, [sp, #8] + strd r8, [sp, #16] + strd r10, [sp, #24] + str r12, [sp, #480] + str r14, [sp, #484] + mov r0, r0 + mov r1, r1 + mov r2, r2 + add r3, sp, #32 + ldr r4, =0 + ldr r5, =254 + vmov.i32 q0, #1 + vshr.u64 q1, q0, #7 + vshr.u64 q0, q0, #8 + vmov.i32 d4, #19 + vmov.i32 d5, #38 + add r6, sp, #512 + vst1.8 {d2-d3}, [r6, : 128] + add r6, sp, #528 + vst1.8 {d0-d1}, [r6, : 128] + add r6, sp, #544 + vst1.8 {d4-d5}, [r6, : 128] + add r6, r3, #0 + vmov.i32 q2, #0 + vst1.8 {d4-d5}, [r6, : 128]! + vst1.8 {d4-d5}, [r6, : 128]! + vst1.8 d4, [r6, : 64] + add r6, r3, #0 + ldr r7, =960 + sub r7, r7, #2 + neg r7, r7 + sub r7, r7, r7, LSL #7 + str r7, [r6] + add r6, sp, #704 + vld1.8 {d4-d5}, [r1]! + vld1.8 {d6-d7}, [r1] + vst1.8 {d4-d5}, [r6, : 128]! + vst1.8 {d6-d7}, [r6, : 128] + sub r1, r6, #16 + ldrb r6, [r1] + and r6, r6, #248 + strb r6, [r1] + ldrb r6, [r1, #31] + and r6, r6, #127 + orr r6, r6, #64 + strb r6, [r1, #31] + vmov.i64 q2, #0xffffffff + vshr.u64 q3, q2, #7 + vshr.u64 q2, q2, #6 + vld1.8 {d8}, [r2] + vld1.8 {d10}, [r2] + add r2, r2, #6 + vld1.8 {d12}, [r2] + vld1.8 {d14}, [r2] + add r2, r2, #6 + vld1.8 {d16}, [r2] + add r2, r2, #4 + vld1.8 {d18}, [r2] + vld1.8 {d20}, [r2] + add r2, r2, #6 + vld1.8 {d22}, [r2] + add r2, r2, #2 + vld1.8 {d24}, [r2] + vld1.8 {d26}, [r2] + vshr.u64 q5, q5, #26 + vshr.u64 q6, q6, #3 + vshr.u64 q7, q7, #29 + vshr.u64 q8, q8, #6 + vshr.u64 q10, q10, #25 + vshr.u64 q11, q11, #3 + vshr.u64 q12, q12, #12 + vshr.u64 q13, q13, #38 + vand q4, q4, q2 + vand q6, q6, q2 + vand q8, q8, q2 + vand q10, q10, q2 + vand q2, q12, q2 + vand q5, q5, q3 + vand q7, q7, q3 + vand q9, q9, q3 + vand q11, q11, q3 + vand q3, q13, q3 + add r2, r3, #48 + vadd.i64 q12, q4, q1 + vadd.i64 q13, q10, q1 + vshr.s64 q12, q12, #26 + vshr.s64 q13, q13, #26 + vadd.i64 q5, q5, q12 + vshl.i64 q12, q12, #26 + vadd.i64 q14, q5, q0 + vadd.i64 q11, q11, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q15, q11, q0 + vsub.i64 q4, q4, q12 + vshr.s64 q12, q14, #25 + vsub.i64 q10, q10, q13 + vshr.s64 q13, q15, #25 + vadd.i64 q6, q6, q12 + vshl.i64 q12, q12, #25 + vadd.i64 q14, q6, q1 + vadd.i64 q2, q2, q13 + vsub.i64 q5, q5, q12 + vshr.s64 q12, q14, #26 + vshl.i64 q13, q13, #25 + vadd.i64 q14, q2, q1 + vadd.i64 q7, q7, q12 + vshl.i64 q12, q12, #26 + vadd.i64 q15, q7, q0 + vsub.i64 q11, q11, q13 + vshr.s64 q13, q14, #26 + vsub.i64 q6, q6, q12 + vshr.s64 q12, q15, #25 + vadd.i64 q3, q3, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q14, q3, q0 + vadd.i64 q8, q8, q12 + vshl.i64 q12, q12, #25 + vadd.i64 q15, q8, q1 + add r2, r2, #8 + vsub.i64 q2, q2, q13 + vshr.s64 q13, q14, #25 + vsub.i64 q7, q7, q12 + vshr.s64 q12, q15, #26 + vadd.i64 q14, q13, q13 + vadd.i64 q9, q9, q12 + vtrn.32 d12, d14 + vshl.i64 q12, q12, #26 + vtrn.32 d13, d15 + vadd.i64 q0, q9, q0 + vadd.i64 q4, q4, q14 + vst1.8 d12, [r2, : 64]! 
+ vshl.i64 q6, q13, #4 + vsub.i64 q7, q8, q12 + vshr.s64 q0, q0, #25 + vadd.i64 q4, q4, q6 + vadd.i64 q6, q10, q0 + vshl.i64 q0, q0, #25 + vadd.i64 q8, q6, q1 + vadd.i64 q4, q4, q13 + vshl.i64 q10, q13, #25 + vadd.i64 q1, q4, q1 + vsub.i64 q0, q9, q0 + vshr.s64 q8, q8, #26 + vsub.i64 q3, q3, q10 + vtrn.32 d14, d0 + vshr.s64 q1, q1, #26 + vtrn.32 d15, d1 + vadd.i64 q0, q11, q8 + vst1.8 d14, [r2, : 64] + vshl.i64 q7, q8, #26 + vadd.i64 q5, q5, q1 + vtrn.32 d4, d6 + vshl.i64 q1, q1, #26 + vtrn.32 d5, d7 + vsub.i64 q3, q6, q7 + add r2, r2, #16 + vsub.i64 q1, q4, q1 + vst1.8 d4, [r2, : 64] + vtrn.32 d6, d0 + vtrn.32 d7, d1 + sub r2, r2, #8 + vtrn.32 d2, d10 + vtrn.32 d3, d11 + vst1.8 d6, [r2, : 64] + sub r2, r2, #24 + vst1.8 d2, [r2, : 64] + add r2, r3, #96 + vmov.i32 q0, #0 + vmov.i64 d2, #0xff + vmov.i64 d3, #0 + vshr.u32 q1, q1, #7 + vst1.8 {d2-d3}, [r2, : 128]! + vst1.8 {d0-d1}, [r2, : 128]! + vst1.8 d0, [r2, : 64] + add r2, r3, #144 + vmov.i32 q0, #0 + vst1.8 {d0-d1}, [r2, : 128]! + vst1.8 {d0-d1}, [r2, : 128]! + vst1.8 d0, [r2, : 64] + add r2, r3, #240 + vmov.i32 q0, #0 + vmov.i64 d2, #0xff + vmov.i64 d3, #0 + vshr.u32 q1, q1, #7 + vst1.8 {d2-d3}, [r2, : 128]! + vst1.8 {d0-d1}, [r2, : 128]! + vst1.8 d0, [r2, : 64] + add r2, r3, #48 + add r6, r3, #192 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d4}, [r2, : 64] + vst1.8 {d0-d1}, [r6, : 128]! + vst1.8 {d2-d3}, [r6, : 128]! + vst1.8 d4, [r6, : 64] +._mainloop: + mov r2, r5, LSR #3 + and r6, r5, #7 + ldrb r2, [r1, r2] + mov r2, r2, LSR r6 + and r2, r2, #1 + str r5, [sp, #488] + eor r4, r4, r2 + str r2, [sp, #492] + neg r2, r4 + add r4, r3, #96 + add r5, r3, #192 + add r6, r3, #144 + vld1.8 {d8-d9}, [r4, : 128]! + add r7, r3, #240 + vld1.8 {d10-d11}, [r5, : 128]! + veor q6, q4, q5 + vld1.8 {d14-d15}, [r6, : 128]! + vdup.i32 q8, r2 + vld1.8 {d18-d19}, [r7, : 128]! + veor q10, q7, q9 + vld1.8 {d22-d23}, [r4, : 128]! + vand q6, q6, q8 + vld1.8 {d24-d25}, [r5, : 128]! + vand q10, q10, q8 + vld1.8 {d26-d27}, [r6, : 128]! + veor q4, q4, q6 + vld1.8 {d28-d29}, [r7, : 128]! + veor q5, q5, q6 + vld1.8 {d0}, [r4, : 64] + veor q6, q7, q10 + vld1.8 {d2}, [r5, : 64] + veor q7, q9, q10 + vld1.8 {d4}, [r6, : 64] + veor q9, q11, q12 + vld1.8 {d6}, [r7, : 64] + veor q10, q0, q1 + sub r2, r4, #32 + vand q9, q9, q8 + sub r4, r5, #32 + vand q10, q10, q8 + sub r5, r6, #32 + veor q11, q11, q9 + sub r6, r7, #32 + veor q0, q0, q10 + veor q9, q12, q9 + veor q1, q1, q10 + veor q10, q13, q14 + veor q12, q2, q3 + vand q10, q10, q8 + vand q8, q12, q8 + veor q12, q13, q10 + veor q2, q2, q8 + veor q10, q14, q10 + veor q3, q3, q8 + vadd.i32 q8, q4, q6 + vsub.i32 q4, q4, q6 + vst1.8 {d16-d17}, [r2, : 128]! + vadd.i32 q6, q11, q12 + vst1.8 {d8-d9}, [r5, : 128]! + vsub.i32 q4, q11, q12 + vst1.8 {d12-d13}, [r2, : 128]! + vadd.i32 q6, q0, q2 + vst1.8 {d8-d9}, [r5, : 128]! + vsub.i32 q0, q0, q2 + vst1.8 d12, [r2, : 64] + vadd.i32 q2, q5, q7 + vst1.8 d0, [r5, : 64] + vsub.i32 q0, q5, q7 + vst1.8 {d4-d5}, [r4, : 128]! + vadd.i32 q2, q9, q10 + vst1.8 {d0-d1}, [r6, : 128]! + vsub.i32 q0, q9, q10 + vst1.8 {d4-d5}, [r4, : 128]! + vadd.i32 q2, q1, q3 + vst1.8 {d0-d1}, [r6, : 128]! + vsub.i32 q0, q1, q3 + vst1.8 d4, [r4, : 64] + vst1.8 d0, [r6, : 64] + add r2, sp, #544 + add r4, r3, #96 + add r5, r3, #144 + vld1.8 {d0-d1}, [r2, : 128] + vld1.8 {d2-d3}, [r4, : 128]! + vld1.8 {d4-d5}, [r5, : 128]! + vzip.i32 q1, q2 + vld1.8 {d6-d7}, [r4, : 128]! + vld1.8 {d8-d9}, [r5, : 128]! 
+ vshl.i32 q5, q1, #1 + vzip.i32 q3, q4 + vshl.i32 q6, q2, #1 + vld1.8 {d14}, [r4, : 64] + vshl.i32 q8, q3, #1 + vld1.8 {d15}, [r5, : 64] + vshl.i32 q9, q4, #1 + vmul.i32 d21, d7, d1 + vtrn.32 d14, d15 + vmul.i32 q11, q4, q0 + vmul.i32 q0, q7, q0 + vmull.s32 q12, d2, d2 + vmlal.s32 q12, d11, d1 + vmlal.s32 q12, d12, d0 + vmlal.s32 q12, d13, d23 + vmlal.s32 q12, d16, d22 + vmlal.s32 q12, d7, d21 + vmull.s32 q10, d2, d11 + vmlal.s32 q10, d4, d1 + vmlal.s32 q10, d13, d0 + vmlal.s32 q10, d6, d23 + vmlal.s32 q10, d17, d22 + vmull.s32 q13, d10, d4 + vmlal.s32 q13, d11, d3 + vmlal.s32 q13, d13, d1 + vmlal.s32 q13, d16, d0 + vmlal.s32 q13, d17, d23 + vmlal.s32 q13, d8, d22 + vmull.s32 q1, d10, d5 + vmlal.s32 q1, d11, d4 + vmlal.s32 q1, d6, d1 + vmlal.s32 q1, d17, d0 + vmlal.s32 q1, d8, d23 + vmull.s32 q14, d10, d6 + vmlal.s32 q14, d11, d13 + vmlal.s32 q14, d4, d4 + vmlal.s32 q14, d17, d1 + vmlal.s32 q14, d18, d0 + vmlal.s32 q14, d9, d23 + vmull.s32 q11, d10, d7 + vmlal.s32 q11, d11, d6 + vmlal.s32 q11, d12, d5 + vmlal.s32 q11, d8, d1 + vmlal.s32 q11, d19, d0 + vmull.s32 q15, d10, d8 + vmlal.s32 q15, d11, d17 + vmlal.s32 q15, d12, d6 + vmlal.s32 q15, d13, d5 + vmlal.s32 q15, d19, d1 + vmlal.s32 q15, d14, d0 + vmull.s32 q2, d10, d9 + vmlal.s32 q2, d11, d8 + vmlal.s32 q2, d12, d7 + vmlal.s32 q2, d13, d6 + vmlal.s32 q2, d14, d1 + vmull.s32 q0, d15, d1 + vmlal.s32 q0, d10, d14 + vmlal.s32 q0, d11, d19 + vmlal.s32 q0, d12, d8 + vmlal.s32 q0, d13, d17 + vmlal.s32 q0, d6, d6 + add r2, sp, #512 + vld1.8 {d18-d19}, [r2, : 128] + vmull.s32 q3, d16, d7 + vmlal.s32 q3, d10, d15 + vmlal.s32 q3, d11, d14 + vmlal.s32 q3, d12, d9 + vmlal.s32 q3, d13, d8 + add r2, sp, #528 + vld1.8 {d8-d9}, [r2, : 128] + vadd.i64 q5, q12, q9 + vadd.i64 q6, q15, q9 + vshr.s64 q5, q5, #26 + vshr.s64 q6, q6, #26 + vadd.i64 q7, q10, q5 + vshl.i64 q5, q5, #26 + vadd.i64 q8, q7, q4 + vadd.i64 q2, q2, q6 + vshl.i64 q6, q6, #26 + vadd.i64 q10, q2, q4 + vsub.i64 q5, q12, q5 + vshr.s64 q8, q8, #25 + vsub.i64 q6, q15, q6 + vshr.s64 q10, q10, #25 + vadd.i64 q12, q13, q8 + vshl.i64 q8, q8, #25 + vadd.i64 q13, q12, q9 + vadd.i64 q0, q0, q10 + vsub.i64 q7, q7, q8 + vshr.s64 q8, q13, #26 + vshl.i64 q10, q10, #25 + vadd.i64 q13, q0, q9 + vadd.i64 q1, q1, q8 + vshl.i64 q8, q8, #26 + vadd.i64 q15, q1, q4 + vsub.i64 q2, q2, q10 + vshr.s64 q10, q13, #26 + vsub.i64 q8, q12, q8 + vshr.s64 q12, q15, #25 + vadd.i64 q3, q3, q10 + vshl.i64 q10, q10, #26 + vadd.i64 q13, q3, q4 + vadd.i64 q14, q14, q12 + add r2, r3, #288 + vshl.i64 q12, q12, #25 + add r4, r3, #336 + vadd.i64 q15, q14, q9 + add r2, r2, #8 + vsub.i64 q0, q0, q10 + add r4, r4, #8 + vshr.s64 q10, q13, #25 + vsub.i64 q1, q1, q12 + vshr.s64 q12, q15, #26 + vadd.i64 q13, q10, q10 + vadd.i64 q11, q11, q12 + vtrn.32 d16, d2 + vshl.i64 q12, q12, #26 + vtrn.32 d17, d3 + vadd.i64 q1, q11, q4 + vadd.i64 q4, q5, q13 + vst1.8 d16, [r2, : 64]! + vshl.i64 q5, q10, #4 + vst1.8 d17, [r4, : 64]! 
+ vsub.i64 q8, q14, q12 + vshr.s64 q1, q1, #25 + vadd.i64 q4, q4, q5 + vadd.i64 q5, q6, q1 + vshl.i64 q1, q1, #25 + vadd.i64 q6, q5, q9 + vadd.i64 q4, q4, q10 + vshl.i64 q10, q10, #25 + vadd.i64 q9, q4, q9 + vsub.i64 q1, q11, q1 + vshr.s64 q6, q6, #26 + vsub.i64 q3, q3, q10 + vtrn.32 d16, d2 + vshr.s64 q9, q9, #26 + vtrn.32 d17, d3 + vadd.i64 q1, q2, q6 + vst1.8 d16, [r2, : 64] + vshl.i64 q2, q6, #26 + vst1.8 d17, [r4, : 64] + vadd.i64 q6, q7, q9 + vtrn.32 d0, d6 + vshl.i64 q7, q9, #26 + vtrn.32 d1, d7 + vsub.i64 q2, q5, q2 + add r2, r2, #16 + vsub.i64 q3, q4, q7 + vst1.8 d0, [r2, : 64] + add r4, r4, #16 + vst1.8 d1, [r4, : 64] + vtrn.32 d4, d2 + vtrn.32 d5, d3 + sub r2, r2, #8 + sub r4, r4, #8 + vtrn.32 d6, d12 + vtrn.32 d7, d13 + vst1.8 d4, [r2, : 64] + vst1.8 d5, [r4, : 64] + sub r2, r2, #24 + sub r4, r4, #24 + vst1.8 d6, [r2, : 64] + vst1.8 d7, [r4, : 64] + add r2, r3, #240 + add r4, r3, #96 + vld1.8 {d0-d1}, [r4, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vld1.8 {d4}, [r4, : 64] + add r4, r3, #144 + vld1.8 {d6-d7}, [r4, : 128]! + vtrn.32 q0, q3 + vld1.8 {d8-d9}, [r4, : 128]! + vshl.i32 q5, q0, #4 + vtrn.32 q1, q4 + vshl.i32 q6, q3, #4 + vadd.i32 q5, q5, q0 + vadd.i32 q6, q6, q3 + vshl.i32 q7, q1, #4 + vld1.8 {d5}, [r4, : 64] + vshl.i32 q8, q4, #4 + vtrn.32 d4, d5 + vadd.i32 q7, q7, q1 + vadd.i32 q8, q8, q4 + vld1.8 {d18-d19}, [r2, : 128]! + vshl.i32 q10, q2, #4 + vld1.8 {d22-d23}, [r2, : 128]! + vadd.i32 q10, q10, q2 + vld1.8 {d24}, [r2, : 64] + vadd.i32 q5, q5, q0 + add r2, r3, #192 + vld1.8 {d26-d27}, [r2, : 128]! + vadd.i32 q6, q6, q3 + vld1.8 {d28-d29}, [r2, : 128]! + vadd.i32 q8, q8, q4 + vld1.8 {d25}, [r2, : 64] + vadd.i32 q10, q10, q2 + vtrn.32 q9, q13 + vadd.i32 q7, q7, q1 + vadd.i32 q5, q5, q0 + vtrn.32 q11, q14 + vadd.i32 q6, q6, q3 + add r2, sp, #560 + vadd.i32 q10, q10, q2 + vtrn.32 d24, d25 + vst1.8 {d12-d13}, [r2, : 128] + vshl.i32 q6, q13, #1 + add r2, sp, #576 + vst1.8 {d20-d21}, [r2, : 128] + vshl.i32 q10, q14, #1 + add r2, sp, #592 + vst1.8 {d12-d13}, [r2, : 128] + vshl.i32 q15, q12, #1 + vadd.i32 q8, q8, q4 + vext.32 d10, d31, d30, #0 + vadd.i32 q7, q7, q1 + add r2, sp, #608 + vst1.8 {d16-d17}, [r2, : 128] + vmull.s32 q8, d18, d5 + vmlal.s32 q8, d26, d4 + vmlal.s32 q8, d19, d9 + vmlal.s32 q8, d27, d3 + vmlal.s32 q8, d22, d8 + vmlal.s32 q8, d28, d2 + vmlal.s32 q8, d23, d7 + vmlal.s32 q8, d29, d1 + vmlal.s32 q8, d24, d6 + vmlal.s32 q8, d25, d0 + add r2, sp, #624 + vst1.8 {d14-d15}, [r2, : 128] + vmull.s32 q2, d18, d4 + vmlal.s32 q2, d12, d9 + vmlal.s32 q2, d13, d8 + vmlal.s32 q2, d19, d3 + vmlal.s32 q2, d22, d2 + vmlal.s32 q2, d23, d1 + vmlal.s32 q2, d24, d0 + add r2, sp, #640 + vst1.8 {d20-d21}, [r2, : 128] + vmull.s32 q7, d18, d9 + vmlal.s32 q7, d26, d3 + vmlal.s32 q7, d19, d8 + vmlal.s32 q7, d27, d2 + vmlal.s32 q7, d22, d7 + vmlal.s32 q7, d28, d1 + vmlal.s32 q7, d23, d6 + vmlal.s32 q7, d29, d0 + add r2, sp, #656 + vst1.8 {d10-d11}, [r2, : 128] + vmull.s32 q5, d18, d3 + vmlal.s32 q5, d19, d2 + vmlal.s32 q5, d22, d1 + vmlal.s32 q5, d23, d0 + vmlal.s32 q5, d12, d8 + add r2, sp, #672 + vst1.8 {d16-d17}, [r2, : 128] + vmull.s32 q4, d18, d8 + vmlal.s32 q4, d26, d2 + vmlal.s32 q4, d19, d7 + vmlal.s32 q4, d27, d1 + vmlal.s32 q4, d22, d6 + vmlal.s32 q4, d28, d0 + vmull.s32 q8, d18, d7 + vmlal.s32 q8, d26, d1 + vmlal.s32 q8, d19, d6 + vmlal.s32 q8, d27, d0 + add r2, sp, #576 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q7, d24, d21 + vmlal.s32 q7, d25, d20 + vmlal.s32 q4, d23, d21 + vmlal.s32 q4, d29, d20 + vmlal.s32 q8, d22, d21 + vmlal.s32 q8, d28, d20 + vmlal.s32 q5, d24, 
d20 + add r2, sp, #576 + vst1.8 {d14-d15}, [r2, : 128] + vmull.s32 q7, d18, d6 + vmlal.s32 q7, d26, d0 + add r2, sp, #656 + vld1.8 {d30-d31}, [r2, : 128] + vmlal.s32 q2, d30, d21 + vmlal.s32 q7, d19, d21 + vmlal.s32 q7, d27, d20 + add r2, sp, #624 + vld1.8 {d26-d27}, [r2, : 128] + vmlal.s32 q4, d25, d27 + vmlal.s32 q8, d29, d27 + vmlal.s32 q8, d25, d26 + vmlal.s32 q7, d28, d27 + vmlal.s32 q7, d29, d26 + add r2, sp, #608 + vld1.8 {d28-d29}, [r2, : 128] + vmlal.s32 q4, d24, d29 + vmlal.s32 q8, d23, d29 + vmlal.s32 q8, d24, d28 + vmlal.s32 q7, d22, d29 + vmlal.s32 q7, d23, d28 + add r2, sp, #608 + vst1.8 {d8-d9}, [r2, : 128] + add r2, sp, #560 + vld1.8 {d8-d9}, [r2, : 128] + vmlal.s32 q7, d24, d9 + vmlal.s32 q7, d25, d31 + vmull.s32 q1, d18, d2 + vmlal.s32 q1, d19, d1 + vmlal.s32 q1, d22, d0 + vmlal.s32 q1, d24, d27 + vmlal.s32 q1, d23, d20 + vmlal.s32 q1, d12, d7 + vmlal.s32 q1, d13, d6 + vmull.s32 q6, d18, d1 + vmlal.s32 q6, d19, d0 + vmlal.s32 q6, d23, d27 + vmlal.s32 q6, d22, d20 + vmlal.s32 q6, d24, d26 + vmull.s32 q0, d18, d0 + vmlal.s32 q0, d22, d27 + vmlal.s32 q0, d23, d26 + vmlal.s32 q0, d24, d31 + vmlal.s32 q0, d19, d20 + add r2, sp, #640 + vld1.8 {d18-d19}, [r2, : 128] + vmlal.s32 q2, d18, d7 + vmlal.s32 q2, d19, d6 + vmlal.s32 q5, d18, d6 + vmlal.s32 q5, d19, d21 + vmlal.s32 q1, d18, d21 + vmlal.s32 q1, d19, d29 + vmlal.s32 q0, d18, d28 + vmlal.s32 q0, d19, d9 + vmlal.s32 q6, d18, d29 + vmlal.s32 q6, d19, d28 + add r2, sp, #592 + vld1.8 {d18-d19}, [r2, : 128] + add r2, sp, #512 + vld1.8 {d22-d23}, [r2, : 128] + vmlal.s32 q5, d19, d7 + vmlal.s32 q0, d18, d21 + vmlal.s32 q0, d19, d29 + vmlal.s32 q6, d18, d6 + add r2, sp, #528 + vld1.8 {d6-d7}, [r2, : 128] + vmlal.s32 q6, d19, d21 + add r2, sp, #576 + vld1.8 {d18-d19}, [r2, : 128] + vmlal.s32 q0, d30, d8 + add r2, sp, #672 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q5, d30, d29 + add r2, sp, #608 + vld1.8 {d24-d25}, [r2, : 128] + vmlal.s32 q1, d30, d28 + vadd.i64 q13, q0, q11 + vadd.i64 q14, q5, q11 + vmlal.s32 q6, d30, d9 + vshr.s64 q4, q13, #26 + vshr.s64 q13, q14, #26 + vadd.i64 q7, q7, q4 + vshl.i64 q4, q4, #26 + vadd.i64 q14, q7, q3 + vadd.i64 q9, q9, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q15, q9, q3 + vsub.i64 q0, q0, q4 + vshr.s64 q4, q14, #25 + vsub.i64 q5, q5, q13 + vshr.s64 q13, q15, #25 + vadd.i64 q6, q6, q4 + vshl.i64 q4, q4, #25 + vadd.i64 q14, q6, q11 + vadd.i64 q2, q2, q13 + vsub.i64 q4, q7, q4 + vshr.s64 q7, q14, #26 + vshl.i64 q13, q13, #25 + vadd.i64 q14, q2, q11 + vadd.i64 q8, q8, q7 + vshl.i64 q7, q7, #26 + vadd.i64 q15, q8, q3 + vsub.i64 q9, q9, q13 + vshr.s64 q13, q14, #26 + vsub.i64 q6, q6, q7 + vshr.s64 q7, q15, #25 + vadd.i64 q10, q10, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q14, q10, q3 + vadd.i64 q1, q1, q7 + add r2, r3, #144 + vshl.i64 q7, q7, #25 + add r4, r3, #96 + vadd.i64 q15, q1, q11 + add r2, r2, #8 + vsub.i64 q2, q2, q13 + add r4, r4, #8 + vshr.s64 q13, q14, #25 + vsub.i64 q7, q8, q7 + vshr.s64 q8, q15, #26 + vadd.i64 q14, q13, q13 + vadd.i64 q12, q12, q8 + vtrn.32 d12, d14 + vshl.i64 q8, q8, #26 + vtrn.32 d13, d15 + vadd.i64 q3, q12, q3 + vadd.i64 q0, q0, q14 + vst1.8 d12, [r2, : 64]! + vshl.i64 q7, q13, #4 + vst1.8 d13, [r4, : 64]! 
+ vsub.i64 q1, q1, q8 + vshr.s64 q3, q3, #25 + vadd.i64 q0, q0, q7 + vadd.i64 q5, q5, q3 + vshl.i64 q3, q3, #25 + vadd.i64 q6, q5, q11 + vadd.i64 q0, q0, q13 + vshl.i64 q7, q13, #25 + vadd.i64 q8, q0, q11 + vsub.i64 q3, q12, q3 + vshr.s64 q6, q6, #26 + vsub.i64 q7, q10, q7 + vtrn.32 d2, d6 + vshr.s64 q8, q8, #26 + vtrn.32 d3, d7 + vadd.i64 q3, q9, q6 + vst1.8 d2, [r2, : 64] + vshl.i64 q6, q6, #26 + vst1.8 d3, [r4, : 64] + vadd.i64 q1, q4, q8 + vtrn.32 d4, d14 + vshl.i64 q4, q8, #26 + vtrn.32 d5, d15 + vsub.i64 q5, q5, q6 + add r2, r2, #16 + vsub.i64 q0, q0, q4 + vst1.8 d4, [r2, : 64] + add r4, r4, #16 + vst1.8 d5, [r4, : 64] + vtrn.32 d10, d6 + vtrn.32 d11, d7 + sub r2, r2, #8 + sub r4, r4, #8 + vtrn.32 d0, d2 + vtrn.32 d1, d3 + vst1.8 d10, [r2, : 64] + vst1.8 d11, [r4, : 64] + sub r2, r2, #24 + sub r4, r4, #24 + vst1.8 d0, [r2, : 64] + vst1.8 d1, [r4, : 64] + add r2, r3, #288 + add r4, r3, #336 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vsub.i32 q0, q0, q1 + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d4-d5}, [r4, : 128]! + vsub.i32 q1, q1, q2 + add r5, r3, #240 + vld1.8 {d4}, [r2, : 64] + vld1.8 {d6}, [r4, : 64] + vsub.i32 q2, q2, q3 + vst1.8 {d0-d1}, [r5, : 128]! + vst1.8 {d2-d3}, [r5, : 128]! + vst1.8 d4, [r5, : 64] + add r2, r3, #144 + add r4, r3, #96 + add r5, r3, #144 + add r6, r3, #192 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vsub.i32 q2, q0, q1 + vadd.i32 q0, q0, q1 + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d6-d7}, [r4, : 128]! + vsub.i32 q4, q1, q3 + vadd.i32 q1, q1, q3 + vld1.8 {d6}, [r2, : 64] + vld1.8 {d10}, [r4, : 64] + vsub.i32 q6, q3, q5 + vadd.i32 q3, q3, q5 + vst1.8 {d4-d5}, [r5, : 128]! + vst1.8 {d0-d1}, [r6, : 128]! + vst1.8 {d8-d9}, [r5, : 128]! + vst1.8 {d2-d3}, [r6, : 128]! + vst1.8 d12, [r5, : 64] + vst1.8 d6, [r6, : 64] + add r2, r3, #0 + add r4, r3, #240 + vld1.8 {d0-d1}, [r4, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vld1.8 {d4}, [r4, : 64] + add r4, r3, #336 + vld1.8 {d6-d7}, [r4, : 128]! + vtrn.32 q0, q3 + vld1.8 {d8-d9}, [r4, : 128]! + vshl.i32 q5, q0, #4 + vtrn.32 q1, q4 + vshl.i32 q6, q3, #4 + vadd.i32 q5, q5, q0 + vadd.i32 q6, q6, q3 + vshl.i32 q7, q1, #4 + vld1.8 {d5}, [r4, : 64] + vshl.i32 q8, q4, #4 + vtrn.32 d4, d5 + vadd.i32 q7, q7, q1 + vadd.i32 q8, q8, q4 + vld1.8 {d18-d19}, [r2, : 128]! + vshl.i32 q10, q2, #4 + vld1.8 {d22-d23}, [r2, : 128]! + vadd.i32 q10, q10, q2 + vld1.8 {d24}, [r2, : 64] + vadd.i32 q5, q5, q0 + add r2, r3, #288 + vld1.8 {d26-d27}, [r2, : 128]! + vadd.i32 q6, q6, q3 + vld1.8 {d28-d29}, [r2, : 128]! 
+ vadd.i32 q8, q8, q4 + vld1.8 {d25}, [r2, : 64] + vadd.i32 q10, q10, q2 + vtrn.32 q9, q13 + vadd.i32 q7, q7, q1 + vadd.i32 q5, q5, q0 + vtrn.32 q11, q14 + vadd.i32 q6, q6, q3 + add r2, sp, #560 + vadd.i32 q10, q10, q2 + vtrn.32 d24, d25 + vst1.8 {d12-d13}, [r2, : 128] + vshl.i32 q6, q13, #1 + add r2, sp, #576 + vst1.8 {d20-d21}, [r2, : 128] + vshl.i32 q10, q14, #1 + add r2, sp, #592 + vst1.8 {d12-d13}, [r2, : 128] + vshl.i32 q15, q12, #1 + vadd.i32 q8, q8, q4 + vext.32 d10, d31, d30, #0 + vadd.i32 q7, q7, q1 + add r2, sp, #608 + vst1.8 {d16-d17}, [r2, : 128] + vmull.s32 q8, d18, d5 + vmlal.s32 q8, d26, d4 + vmlal.s32 q8, d19, d9 + vmlal.s32 q8, d27, d3 + vmlal.s32 q8, d22, d8 + vmlal.s32 q8, d28, d2 + vmlal.s32 q8, d23, d7 + vmlal.s32 q8, d29, d1 + vmlal.s32 q8, d24, d6 + vmlal.s32 q8, d25, d0 + add r2, sp, #624 + vst1.8 {d14-d15}, [r2, : 128] + vmull.s32 q2, d18, d4 + vmlal.s32 q2, d12, d9 + vmlal.s32 q2, d13, d8 + vmlal.s32 q2, d19, d3 + vmlal.s32 q2, d22, d2 + vmlal.s32 q2, d23, d1 + vmlal.s32 q2, d24, d0 + add r2, sp, #640 + vst1.8 {d20-d21}, [r2, : 128] + vmull.s32 q7, d18, d9 + vmlal.s32 q7, d26, d3 + vmlal.s32 q7, d19, d8 + vmlal.s32 q7, d27, d2 + vmlal.s32 q7, d22, d7 + vmlal.s32 q7, d28, d1 + vmlal.s32 q7, d23, d6 + vmlal.s32 q7, d29, d0 + add r2, sp, #656 + vst1.8 {d10-d11}, [r2, : 128] + vmull.s32 q5, d18, d3 + vmlal.s32 q5, d19, d2 + vmlal.s32 q5, d22, d1 + vmlal.s32 q5, d23, d0 + vmlal.s32 q5, d12, d8 + add r2, sp, #672 + vst1.8 {d16-d17}, [r2, : 128] + vmull.s32 q4, d18, d8 + vmlal.s32 q4, d26, d2 + vmlal.s32 q4, d19, d7 + vmlal.s32 q4, d27, d1 + vmlal.s32 q4, d22, d6 + vmlal.s32 q4, d28, d0 + vmull.s32 q8, d18, d7 + vmlal.s32 q8, d26, d1 + vmlal.s32 q8, d19, d6 + vmlal.s32 q8, d27, d0 + add r2, sp, #576 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q7, d24, d21 + vmlal.s32 q7, d25, d20 + vmlal.s32 q4, d23, d21 + vmlal.s32 q4, d29, d20 + vmlal.s32 q8, d22, d21 + vmlal.s32 q8, d28, d20 + vmlal.s32 q5, d24, d20 + add r2, sp, #576 + vst1.8 {d14-d15}, [r2, : 128] + vmull.s32 q7, d18, d6 + vmlal.s32 q7, d26, d0 + add r2, sp, #656 + vld1.8 {d30-d31}, [r2, : 128] + vmlal.s32 q2, d30, d21 + vmlal.s32 q7, d19, d21 + vmlal.s32 q7, d27, d20 + add r2, sp, #624 + vld1.8 {d26-d27}, [r2, : 128] + vmlal.s32 q4, d25, d27 + vmlal.s32 q8, d29, d27 + vmlal.s32 q8, d25, d26 + vmlal.s32 q7, d28, d27 + vmlal.s32 q7, d29, d26 + add r2, sp, #608 + vld1.8 {d28-d29}, [r2, : 128] + vmlal.s32 q4, d24, d29 + vmlal.s32 q8, d23, d29 + vmlal.s32 q8, d24, d28 + vmlal.s32 q7, d22, d29 + vmlal.s32 q7, d23, d28 + add r2, sp, #608 + vst1.8 {d8-d9}, [r2, : 128] + add r2, sp, #560 + vld1.8 {d8-d9}, [r2, : 128] + vmlal.s32 q7, d24, d9 + vmlal.s32 q7, d25, d31 + vmull.s32 q1, d18, d2 + vmlal.s32 q1, d19, d1 + vmlal.s32 q1, d22, d0 + vmlal.s32 q1, d24, d27 + vmlal.s32 q1, d23, d20 + vmlal.s32 q1, d12, d7 + vmlal.s32 q1, d13, d6 + vmull.s32 q6, d18, d1 + vmlal.s32 q6, d19, d0 + vmlal.s32 q6, d23, d27 + vmlal.s32 q6, d22, d20 + vmlal.s32 q6, d24, d26 + vmull.s32 q0, d18, d0 + vmlal.s32 q0, d22, d27 + vmlal.s32 q0, d23, d26 + vmlal.s32 q0, d24, d31 + vmlal.s32 q0, d19, d20 + add r2, sp, #640 + vld1.8 {d18-d19}, [r2, : 128] + vmlal.s32 q2, d18, d7 + vmlal.s32 q2, d19, d6 + vmlal.s32 q5, d18, d6 + vmlal.s32 q5, d19, d21 + vmlal.s32 q1, d18, d21 + vmlal.s32 q1, d19, d29 + vmlal.s32 q0, d18, d28 + vmlal.s32 q0, d19, d9 + vmlal.s32 q6, d18, d29 + vmlal.s32 q6, d19, d28 + add r2, sp, #592 + vld1.8 {d18-d19}, [r2, : 128] + add r2, sp, #512 + vld1.8 {d22-d23}, [r2, : 128] + vmlal.s32 q5, d19, d7 + vmlal.s32 q0, d18, d21 + 
vmlal.s32 q0, d19, d29 + vmlal.s32 q6, d18, d6 + add r2, sp, #528 + vld1.8 {d6-d7}, [r2, : 128] + vmlal.s32 q6, d19, d21 + add r2, sp, #576 + vld1.8 {d18-d19}, [r2, : 128] + vmlal.s32 q0, d30, d8 + add r2, sp, #672 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q5, d30, d29 + add r2, sp, #608 + vld1.8 {d24-d25}, [r2, : 128] + vmlal.s32 q1, d30, d28 + vadd.i64 q13, q0, q11 + vadd.i64 q14, q5, q11 + vmlal.s32 q6, d30, d9 + vshr.s64 q4, q13, #26 + vshr.s64 q13, q14, #26 + vadd.i64 q7, q7, q4 + vshl.i64 q4, q4, #26 + vadd.i64 q14, q7, q3 + vadd.i64 q9, q9, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q15, q9, q3 + vsub.i64 q0, q0, q4 + vshr.s64 q4, q14, #25 + vsub.i64 q5, q5, q13 + vshr.s64 q13, q15, #25 + vadd.i64 q6, q6, q4 + vshl.i64 q4, q4, #25 + vadd.i64 q14, q6, q11 + vadd.i64 q2, q2, q13 + vsub.i64 q4, q7, q4 + vshr.s64 q7, q14, #26 + vshl.i64 q13, q13, #25 + vadd.i64 q14, q2, q11 + vadd.i64 q8, q8, q7 + vshl.i64 q7, q7, #26 + vadd.i64 q15, q8, q3 + vsub.i64 q9, q9, q13 + vshr.s64 q13, q14, #26 + vsub.i64 q6, q6, q7 + vshr.s64 q7, q15, #25 + vadd.i64 q10, q10, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q14, q10, q3 + vadd.i64 q1, q1, q7 + add r2, r3, #288 + vshl.i64 q7, q7, #25 + add r4, r3, #96 + vadd.i64 q15, q1, q11 + add r2, r2, #8 + vsub.i64 q2, q2, q13 + add r4, r4, #8 + vshr.s64 q13, q14, #25 + vsub.i64 q7, q8, q7 + vshr.s64 q8, q15, #26 + vadd.i64 q14, q13, q13 + vadd.i64 q12, q12, q8 + vtrn.32 d12, d14 + vshl.i64 q8, q8, #26 + vtrn.32 d13, d15 + vadd.i64 q3, q12, q3 + vadd.i64 q0, q0, q14 + vst1.8 d12, [r2, : 64]! + vshl.i64 q7, q13, #4 + vst1.8 d13, [r4, : 64]! + vsub.i64 q1, q1, q8 + vshr.s64 q3, q3, #25 + vadd.i64 q0, q0, q7 + vadd.i64 q5, q5, q3 + vshl.i64 q3, q3, #25 + vadd.i64 q6, q5, q11 + vadd.i64 q0, q0, q13 + vshl.i64 q7, q13, #25 + vadd.i64 q8, q0, q11 + vsub.i64 q3, q12, q3 + vshr.s64 q6, q6, #26 + vsub.i64 q7, q10, q7 + vtrn.32 d2, d6 + vshr.s64 q8, q8, #26 + vtrn.32 d3, d7 + vadd.i64 q3, q9, q6 + vst1.8 d2, [r2, : 64] + vshl.i64 q6, q6, #26 + vst1.8 d3, [r4, : 64] + vadd.i64 q1, q4, q8 + vtrn.32 d4, d14 + vshl.i64 q4, q8, #26 + vtrn.32 d5, d15 + vsub.i64 q5, q5, q6 + add r2, r2, #16 + vsub.i64 q0, q0, q4 + vst1.8 d4, [r2, : 64] + add r4, r4, #16 + vst1.8 d5, [r4, : 64] + vtrn.32 d10, d6 + vtrn.32 d11, d7 + sub r2, r2, #8 + sub r4, r4, #8 + vtrn.32 d0, d2 + vtrn.32 d1, d3 + vst1.8 d10, [r2, : 64] + vst1.8 d11, [r4, : 64] + sub r2, r2, #24 + sub r4, r4, #24 + vst1.8 d0, [r2, : 64] + vst1.8 d1, [r4, : 64] + add r2, sp, #544 + add r4, r3, #144 + add r5, r3, #192 + vld1.8 {d0-d1}, [r2, : 128] + vld1.8 {d2-d3}, [r4, : 128]! + vld1.8 {d4-d5}, [r5, : 128]! + vzip.i32 q1, q2 + vld1.8 {d6-d7}, [r4, : 128]! + vld1.8 {d8-d9}, [r5, : 128]! 
+ vshl.i32 q5, q1, #1 + vzip.i32 q3, q4 + vshl.i32 q6, q2, #1 + vld1.8 {d14}, [r4, : 64] + vshl.i32 q8, q3, #1 + vld1.8 {d15}, [r5, : 64] + vshl.i32 q9, q4, #1 + vmul.i32 d21, d7, d1 + vtrn.32 d14, d15 + vmul.i32 q11, q4, q0 + vmul.i32 q0, q7, q0 + vmull.s32 q12, d2, d2 + vmlal.s32 q12, d11, d1 + vmlal.s32 q12, d12, d0 + vmlal.s32 q12, d13, d23 + vmlal.s32 q12, d16, d22 + vmlal.s32 q12, d7, d21 + vmull.s32 q10, d2, d11 + vmlal.s32 q10, d4, d1 + vmlal.s32 q10, d13, d0 + vmlal.s32 q10, d6, d23 + vmlal.s32 q10, d17, d22 + vmull.s32 q13, d10, d4 + vmlal.s32 q13, d11, d3 + vmlal.s32 q13, d13, d1 + vmlal.s32 q13, d16, d0 + vmlal.s32 q13, d17, d23 + vmlal.s32 q13, d8, d22 + vmull.s32 q1, d10, d5 + vmlal.s32 q1, d11, d4 + vmlal.s32 q1, d6, d1 + vmlal.s32 q1, d17, d0 + vmlal.s32 q1, d8, d23 + vmull.s32 q14, d10, d6 + vmlal.s32 q14, d11, d13 + vmlal.s32 q14, d4, d4 + vmlal.s32 q14, d17, d1 + vmlal.s32 q14, d18, d0 + vmlal.s32 q14, d9, d23 + vmull.s32 q11, d10, d7 + vmlal.s32 q11, d11, d6 + vmlal.s32 q11, d12, d5 + vmlal.s32 q11, d8, d1 + vmlal.s32 q11, d19, d0 + vmull.s32 q15, d10, d8 + vmlal.s32 q15, d11, d17 + vmlal.s32 q15, d12, d6 + vmlal.s32 q15, d13, d5 + vmlal.s32 q15, d19, d1 + vmlal.s32 q15, d14, d0 + vmull.s32 q2, d10, d9 + vmlal.s32 q2, d11, d8 + vmlal.s32 q2, d12, d7 + vmlal.s32 q2, d13, d6 + vmlal.s32 q2, d14, d1 + vmull.s32 q0, d15, d1 + vmlal.s32 q0, d10, d14 + vmlal.s32 q0, d11, d19 + vmlal.s32 q0, d12, d8 + vmlal.s32 q0, d13, d17 + vmlal.s32 q0, d6, d6 + add r2, sp, #512 + vld1.8 {d18-d19}, [r2, : 128] + vmull.s32 q3, d16, d7 + vmlal.s32 q3, d10, d15 + vmlal.s32 q3, d11, d14 + vmlal.s32 q3, d12, d9 + vmlal.s32 q3, d13, d8 + add r2, sp, #528 + vld1.8 {d8-d9}, [r2, : 128] + vadd.i64 q5, q12, q9 + vadd.i64 q6, q15, q9 + vshr.s64 q5, q5, #26 + vshr.s64 q6, q6, #26 + vadd.i64 q7, q10, q5 + vshl.i64 q5, q5, #26 + vadd.i64 q8, q7, q4 + vadd.i64 q2, q2, q6 + vshl.i64 q6, q6, #26 + vadd.i64 q10, q2, q4 + vsub.i64 q5, q12, q5 + vshr.s64 q8, q8, #25 + vsub.i64 q6, q15, q6 + vshr.s64 q10, q10, #25 + vadd.i64 q12, q13, q8 + vshl.i64 q8, q8, #25 + vadd.i64 q13, q12, q9 + vadd.i64 q0, q0, q10 + vsub.i64 q7, q7, q8 + vshr.s64 q8, q13, #26 + vshl.i64 q10, q10, #25 + vadd.i64 q13, q0, q9 + vadd.i64 q1, q1, q8 + vshl.i64 q8, q8, #26 + vadd.i64 q15, q1, q4 + vsub.i64 q2, q2, q10 + vshr.s64 q10, q13, #26 + vsub.i64 q8, q12, q8 + vshr.s64 q12, q15, #25 + vadd.i64 q3, q3, q10 + vshl.i64 q10, q10, #26 + vadd.i64 q13, q3, q4 + vadd.i64 q14, q14, q12 + add r2, r3, #144 + vshl.i64 q12, q12, #25 + add r4, r3, #192 + vadd.i64 q15, q14, q9 + add r2, r2, #8 + vsub.i64 q0, q0, q10 + add r4, r4, #8 + vshr.s64 q10, q13, #25 + vsub.i64 q1, q1, q12 + vshr.s64 q12, q15, #26 + vadd.i64 q13, q10, q10 + vadd.i64 q11, q11, q12 + vtrn.32 d16, d2 + vshl.i64 q12, q12, #26 + vtrn.32 d17, d3 + vadd.i64 q1, q11, q4 + vadd.i64 q4, q5, q13 + vst1.8 d16, [r2, : 64]! + vshl.i64 q5, q10, #4 + vst1.8 d17, [r4, : 64]! 
+ vsub.i64 q8, q14, q12 + vshr.s64 q1, q1, #25 + vadd.i64 q4, q4, q5 + vadd.i64 q5, q6, q1 + vshl.i64 q1, q1, #25 + vadd.i64 q6, q5, q9 + vadd.i64 q4, q4, q10 + vshl.i64 q10, q10, #25 + vadd.i64 q9, q4, q9 + vsub.i64 q1, q11, q1 + vshr.s64 q6, q6, #26 + vsub.i64 q3, q3, q10 + vtrn.32 d16, d2 + vshr.s64 q9, q9, #26 + vtrn.32 d17, d3 + vadd.i64 q1, q2, q6 + vst1.8 d16, [r2, : 64] + vshl.i64 q2, q6, #26 + vst1.8 d17, [r4, : 64] + vadd.i64 q6, q7, q9 + vtrn.32 d0, d6 + vshl.i64 q7, q9, #26 + vtrn.32 d1, d7 + vsub.i64 q2, q5, q2 + add r2, r2, #16 + vsub.i64 q3, q4, q7 + vst1.8 d0, [r2, : 64] + add r4, r4, #16 + vst1.8 d1, [r4, : 64] + vtrn.32 d4, d2 + vtrn.32 d5, d3 + sub r2, r2, #8 + sub r4, r4, #8 + vtrn.32 d6, d12 + vtrn.32 d7, d13 + vst1.8 d4, [r2, : 64] + vst1.8 d5, [r4, : 64] + sub r2, r2, #24 + sub r4, r4, #24 + vst1.8 d6, [r2, : 64] + vst1.8 d7, [r4, : 64] + add r2, r3, #336 + add r4, r3, #288 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vadd.i32 q0, q0, q1 + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d4-d5}, [r4, : 128]! + vadd.i32 q1, q1, q2 + add r5, r3, #288 + vld1.8 {d4}, [r2, : 64] + vld1.8 {d6}, [r4, : 64] + vadd.i32 q2, q2, q3 + vst1.8 {d0-d1}, [r5, : 128]! + vst1.8 {d2-d3}, [r5, : 128]! + vst1.8 d4, [r5, : 64] + add r2, r3, #48 + add r4, r3, #144 + vld1.8 {d0-d1}, [r4, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vld1.8 {d4}, [r4, : 64] + add r4, r3, #288 + vld1.8 {d6-d7}, [r4, : 128]! + vtrn.32 q0, q3 + vld1.8 {d8-d9}, [r4, : 128]! + vshl.i32 q5, q0, #4 + vtrn.32 q1, q4 + vshl.i32 q6, q3, #4 + vadd.i32 q5, q5, q0 + vadd.i32 q6, q6, q3 + vshl.i32 q7, q1, #4 + vld1.8 {d5}, [r4, : 64] + vshl.i32 q8, q4, #4 + vtrn.32 d4, d5 + vadd.i32 q7, q7, q1 + vadd.i32 q8, q8, q4 + vld1.8 {d18-d19}, [r2, : 128]! + vshl.i32 q10, q2, #4 + vld1.8 {d22-d23}, [r2, : 128]! + vadd.i32 q10, q10, q2 + vld1.8 {d24}, [r2, : 64] + vadd.i32 q5, q5, q0 + add r2, r3, #240 + vld1.8 {d26-d27}, [r2, : 128]! + vadd.i32 q6, q6, q3 + vld1.8 {d28-d29}, [r2, : 128]! 
+ vadd.i32 q8, q8, q4 + vld1.8 {d25}, [r2, : 64] + vadd.i32 q10, q10, q2 + vtrn.32 q9, q13 + vadd.i32 q7, q7, q1 + vadd.i32 q5, q5, q0 + vtrn.32 q11, q14 + vadd.i32 q6, q6, q3 + add r2, sp, #560 + vadd.i32 q10, q10, q2 + vtrn.32 d24, d25 + vst1.8 {d12-d13}, [r2, : 128] + vshl.i32 q6, q13, #1 + add r2, sp, #576 + vst1.8 {d20-d21}, [r2, : 128] + vshl.i32 q10, q14, #1 + add r2, sp, #592 + vst1.8 {d12-d13}, [r2, : 128] + vshl.i32 q15, q12, #1 + vadd.i32 q8, q8, q4 + vext.32 d10, d31, d30, #0 + vadd.i32 q7, q7, q1 + add r2, sp, #608 + vst1.8 {d16-d17}, [r2, : 128] + vmull.s32 q8, d18, d5 + vmlal.s32 q8, d26, d4 + vmlal.s32 q8, d19, d9 + vmlal.s32 q8, d27, d3 + vmlal.s32 q8, d22, d8 + vmlal.s32 q8, d28, d2 + vmlal.s32 q8, d23, d7 + vmlal.s32 q8, d29, d1 + vmlal.s32 q8, d24, d6 + vmlal.s32 q8, d25, d0 + add r2, sp, #624 + vst1.8 {d14-d15}, [r2, : 128] + vmull.s32 q2, d18, d4 + vmlal.s32 q2, d12, d9 + vmlal.s32 q2, d13, d8 + vmlal.s32 q2, d19, d3 + vmlal.s32 q2, d22, d2 + vmlal.s32 q2, d23, d1 + vmlal.s32 q2, d24, d0 + add r2, sp, #640 + vst1.8 {d20-d21}, [r2, : 128] + vmull.s32 q7, d18, d9 + vmlal.s32 q7, d26, d3 + vmlal.s32 q7, d19, d8 + vmlal.s32 q7, d27, d2 + vmlal.s32 q7, d22, d7 + vmlal.s32 q7, d28, d1 + vmlal.s32 q7, d23, d6 + vmlal.s32 q7, d29, d0 + add r2, sp, #656 + vst1.8 {d10-d11}, [r2, : 128] + vmull.s32 q5, d18, d3 + vmlal.s32 q5, d19, d2 + vmlal.s32 q5, d22, d1 + vmlal.s32 q5, d23, d0 + vmlal.s32 q5, d12, d8 + add r2, sp, #672 + vst1.8 {d16-d17}, [r2, : 128] + vmull.s32 q4, d18, d8 + vmlal.s32 q4, d26, d2 + vmlal.s32 q4, d19, d7 + vmlal.s32 q4, d27, d1 + vmlal.s32 q4, d22, d6 + vmlal.s32 q4, d28, d0 + vmull.s32 q8, d18, d7 + vmlal.s32 q8, d26, d1 + vmlal.s32 q8, d19, d6 + vmlal.s32 q8, d27, d0 + add r2, sp, #576 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q7, d24, d21 + vmlal.s32 q7, d25, d20 + vmlal.s32 q4, d23, d21 + vmlal.s32 q4, d29, d20 + vmlal.s32 q8, d22, d21 + vmlal.s32 q8, d28, d20 + vmlal.s32 q5, d24, d20 + add r2, sp, #576 + vst1.8 {d14-d15}, [r2, : 128] + vmull.s32 q7, d18, d6 + vmlal.s32 q7, d26, d0 + add r2, sp, #656 + vld1.8 {d30-d31}, [r2, : 128] + vmlal.s32 q2, d30, d21 + vmlal.s32 q7, d19, d21 + vmlal.s32 q7, d27, d20 + add r2, sp, #624 + vld1.8 {d26-d27}, [r2, : 128] + vmlal.s32 q4, d25, d27 + vmlal.s32 q8, d29, d27 + vmlal.s32 q8, d25, d26 + vmlal.s32 q7, d28, d27 + vmlal.s32 q7, d29, d26 + add r2, sp, #608 + vld1.8 {d28-d29}, [r2, : 128] + vmlal.s32 q4, d24, d29 + vmlal.s32 q8, d23, d29 + vmlal.s32 q8, d24, d28 + vmlal.s32 q7, d22, d29 + vmlal.s32 q7, d23, d28 + add r2, sp, #608 + vst1.8 {d8-d9}, [r2, : 128] + add r2, sp, #560 + vld1.8 {d8-d9}, [r2, : 128] + vmlal.s32 q7, d24, d9 + vmlal.s32 q7, d25, d31 + vmull.s32 q1, d18, d2 + vmlal.s32 q1, d19, d1 + vmlal.s32 q1, d22, d0 + vmlal.s32 q1, d24, d27 + vmlal.s32 q1, d23, d20 + vmlal.s32 q1, d12, d7 + vmlal.s32 q1, d13, d6 + vmull.s32 q6, d18, d1 + vmlal.s32 q6, d19, d0 + vmlal.s32 q6, d23, d27 + vmlal.s32 q6, d22, d20 + vmlal.s32 q6, d24, d26 + vmull.s32 q0, d18, d0 + vmlal.s32 q0, d22, d27 + vmlal.s32 q0, d23, d26 + vmlal.s32 q0, d24, d31 + vmlal.s32 q0, d19, d20 + add r2, sp, #640 + vld1.8 {d18-d19}, [r2, : 128] + vmlal.s32 q2, d18, d7 + vmlal.s32 q2, d19, d6 + vmlal.s32 q5, d18, d6 + vmlal.s32 q5, d19, d21 + vmlal.s32 q1, d18, d21 + vmlal.s32 q1, d19, d29 + vmlal.s32 q0, d18, d28 + vmlal.s32 q0, d19, d9 + vmlal.s32 q6, d18, d29 + vmlal.s32 q6, d19, d28 + add r2, sp, #592 + vld1.8 {d18-d19}, [r2, : 128] + add r2, sp, #512 + vld1.8 {d22-d23}, [r2, : 128] + vmlal.s32 q5, d19, d7 + vmlal.s32 q0, d18, d21 + 
vmlal.s32 q0, d19, d29 + vmlal.s32 q6, d18, d6 + add r2, sp, #528 + vld1.8 {d6-d7}, [r2, : 128] + vmlal.s32 q6, d19, d21 + add r2, sp, #576 + vld1.8 {d18-d19}, [r2, : 128] + vmlal.s32 q0, d30, d8 + add r2, sp, #672 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q5, d30, d29 + add r2, sp, #608 + vld1.8 {d24-d25}, [r2, : 128] + vmlal.s32 q1, d30, d28 + vadd.i64 q13, q0, q11 + vadd.i64 q14, q5, q11 + vmlal.s32 q6, d30, d9 + vshr.s64 q4, q13, #26 + vshr.s64 q13, q14, #26 + vadd.i64 q7, q7, q4 + vshl.i64 q4, q4, #26 + vadd.i64 q14, q7, q3 + vadd.i64 q9, q9, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q15, q9, q3 + vsub.i64 q0, q0, q4 + vshr.s64 q4, q14, #25 + vsub.i64 q5, q5, q13 + vshr.s64 q13, q15, #25 + vadd.i64 q6, q6, q4 + vshl.i64 q4, q4, #25 + vadd.i64 q14, q6, q11 + vadd.i64 q2, q2, q13 + vsub.i64 q4, q7, q4 + vshr.s64 q7, q14, #26 + vshl.i64 q13, q13, #25 + vadd.i64 q14, q2, q11 + vadd.i64 q8, q8, q7 + vshl.i64 q7, q7, #26 + vadd.i64 q15, q8, q3 + vsub.i64 q9, q9, q13 + vshr.s64 q13, q14, #26 + vsub.i64 q6, q6, q7 + vshr.s64 q7, q15, #25 + vadd.i64 q10, q10, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q14, q10, q3 + vadd.i64 q1, q1, q7 + add r2, r3, #240 + vshl.i64 q7, q7, #25 + add r4, r3, #144 + vadd.i64 q15, q1, q11 + add r2, r2, #8 + vsub.i64 q2, q2, q13 + add r4, r4, #8 + vshr.s64 q13, q14, #25 + vsub.i64 q7, q8, q7 + vshr.s64 q8, q15, #26 + vadd.i64 q14, q13, q13 + vadd.i64 q12, q12, q8 + vtrn.32 d12, d14 + vshl.i64 q8, q8, #26 + vtrn.32 d13, d15 + vadd.i64 q3, q12, q3 + vadd.i64 q0, q0, q14 + vst1.8 d12, [r2, : 64]! + vshl.i64 q7, q13, #4 + vst1.8 d13, [r4, : 64]! + vsub.i64 q1, q1, q8 + vshr.s64 q3, q3, #25 + vadd.i64 q0, q0, q7 + vadd.i64 q5, q5, q3 + vshl.i64 q3, q3, #25 + vadd.i64 q6, q5, q11 + vadd.i64 q0, q0, q13 + vshl.i64 q7, q13, #25 + vadd.i64 q8, q0, q11 + vsub.i64 q3, q12, q3 + vshr.s64 q6, q6, #26 + vsub.i64 q7, q10, q7 + vtrn.32 d2, d6 + vshr.s64 q8, q8, #26 + vtrn.32 d3, d7 + vadd.i64 q3, q9, q6 + vst1.8 d2, [r2, : 64] + vshl.i64 q6, q6, #26 + vst1.8 d3, [r4, : 64] + vadd.i64 q1, q4, q8 + vtrn.32 d4, d14 + vshl.i64 q4, q8, #26 + vtrn.32 d5, d15 + vsub.i64 q5, q5, q6 + add r2, r2, #16 + vsub.i64 q0, q0, q4 + vst1.8 d4, [r2, : 64] + add r4, r4, #16 + vst1.8 d5, [r4, : 64] + vtrn.32 d10, d6 + vtrn.32 d11, d7 + sub r2, r2, #8 + sub r4, r4, #8 + vtrn.32 d0, d2 + vtrn.32 d1, d3 + vst1.8 d10, [r2, : 64] + vst1.8 d11, [r4, : 64] + sub r2, r2, #24 + sub r4, r4, #24 + vst1.8 d0, [r2, : 64] + vst1.8 d1, [r4, : 64] + ldr r2, [sp, #488] + ldr r4, [sp, #492] + subs r5, r2, #1 + bge ._mainloop + add r1, r3, #144 + add r2, r3, #336 + vld1.8 {d0-d1}, [r1, : 128]! + vld1.8 {d2-d3}, [r1, : 128]! + vld1.8 {d4}, [r1, : 64] + vst1.8 {d0-d1}, [r2, : 128]! + vst1.8 {d2-d3}, [r2, : 128]! + vst1.8 d4, [r2, : 64] + ldr r1, =0 +._invertloop: + add r2, r3, #144 + ldr r4, =0 + ldr r5, =2 + cmp r1, #1 + ldreq r5, =1 + addeq r2, r3, #336 + addeq r4, r3, #48 + cmp r1, #2 + ldreq r5, =1 + addeq r2, r3, #48 + cmp r1, #3 + ldreq r5, =5 + addeq r4, r3, #336 + cmp r1, #4 + ldreq r5, =10 + cmp r1, #5 + ldreq r5, =20 + cmp r1, #6 + ldreq r5, =10 + addeq r2, r3, #336 + addeq r4, r3, #336 + cmp r1, #7 + ldreq r5, =50 + cmp r1, #8 + ldreq r5, =100 + cmp r1, #9 + ldreq r5, =50 + addeq r2, r3, #336 + cmp r1, #10 + ldreq r5, =5 + addeq r2, r3, #48 + cmp r1, #11 + ldreq r5, =0 + addeq r2, r3, #96 + add r6, r3, #144 + add r7, r3, #288 + vld1.8 {d0-d1}, [r6, : 128]! + vld1.8 {d2-d3}, [r6, : 128]! + vld1.8 {d4}, [r6, : 64] + vst1.8 {d0-d1}, [r7, : 128]! + vst1.8 {d2-d3}, [r7, : 128]! 
+ vst1.8 d4, [r7, : 64] + cmp r5, #0 + beq ._skipsquaringloop +._squaringloop: + add r6, r3, #288 + add r7, r3, #288 + add r8, r3, #288 + vmov.i32 q0, #19 + vmov.i32 q1, #0 + vmov.i32 q2, #1 + vzip.i32 q1, q2 + vld1.8 {d4-d5}, [r7, : 128]! + vld1.8 {d6-d7}, [r7, : 128]! + vld1.8 {d9}, [r7, : 64] + vld1.8 {d10-d11}, [r6, : 128]! + add r7, sp, #416 + vld1.8 {d12-d13}, [r6, : 128]! + vmul.i32 q7, q2, q0 + vld1.8 {d8}, [r6, : 64] + vext.32 d17, d11, d10, #1 + vmul.i32 q9, q3, q0 + vext.32 d16, d10, d8, #1 + vshl.u32 q10, q5, q1 + vext.32 d22, d14, d4, #1 + vext.32 d24, d18, d6, #1 + vshl.u32 q13, q6, q1 + vshl.u32 d28, d8, d2 + vrev64.i32 d22, d22 + vmul.i32 d1, d9, d1 + vrev64.i32 d24, d24 + vext.32 d29, d8, d13, #1 + vext.32 d0, d1, d9, #1 + vrev64.i32 d0, d0 + vext.32 d2, d9, d1, #1 + vext.32 d23, d15, d5, #1 + vmull.s32 q4, d20, d4 + vrev64.i32 d23, d23 + vmlal.s32 q4, d21, d1 + vrev64.i32 d2, d2 + vmlal.s32 q4, d26, d19 + vext.32 d3, d5, d15, #1 + vmlal.s32 q4, d27, d18 + vrev64.i32 d3, d3 + vmlal.s32 q4, d28, d15 + vext.32 d14, d12, d11, #1 + vmull.s32 q5, d16, d23 + vext.32 d15, d13, d12, #1 + vmlal.s32 q5, d17, d4 + vst1.8 d8, [r7, : 64]! + vmlal.s32 q5, d14, d1 + vext.32 d12, d9, d8, #0 + vmlal.s32 q5, d15, d19 + vmov.i64 d13, #0 + vmlal.s32 q5, d29, d18 + vext.32 d25, d19, d7, #1 + vmlal.s32 q6, d20, d5 + vrev64.i32 d25, d25 + vmlal.s32 q6, d21, d4 + vst1.8 d11, [r7, : 64]! + vmlal.s32 q6, d26, d1 + vext.32 d9, d10, d10, #0 + vmlal.s32 q6, d27, d19 + vmov.i64 d8, #0 + vmlal.s32 q6, d28, d18 + vmlal.s32 q4, d16, d24 + vmlal.s32 q4, d17, d5 + vmlal.s32 q4, d14, d4 + vst1.8 d12, [r7, : 64]! + vmlal.s32 q4, d15, d1 + vext.32 d10, d13, d12, #0 + vmlal.s32 q4, d29, d19 + vmov.i64 d11, #0 + vmlal.s32 q5, d20, d6 + vmlal.s32 q5, d21, d5 + vmlal.s32 q5, d26, d4 + vext.32 d13, d8, d8, #0 + vmlal.s32 q5, d27, d1 + vmov.i64 d12, #0 + vmlal.s32 q5, d28, d19 + vst1.8 d9, [r7, : 64]! + vmlal.s32 q6, d16, d25 + vmlal.s32 q6, d17, d6 + vst1.8 d10, [r7, : 64] + vmlal.s32 q6, d14, d5 + vext.32 d8, d11, d10, #0 + vmlal.s32 q6, d15, d4 + vmov.i64 d9, #0 + vmlal.s32 q6, d29, d1 + vmlal.s32 q4, d20, d7 + vmlal.s32 q4, d21, d6 + vmlal.s32 q4, d26, d5 + vext.32 d11, d12, d12, #0 + vmlal.s32 q4, d27, d4 + vmov.i64 d10, #0 + vmlal.s32 q4, d28, d1 + vmlal.s32 q5, d16, d0 + sub r6, r7, #32 + vmlal.s32 q5, d17, d7 + vmlal.s32 q5, d14, d6 + vext.32 d30, d9, d8, #0 + vmlal.s32 q5, d15, d5 + vld1.8 {d31}, [r6, : 64]! + vmlal.s32 q5, d29, d4 + vmlal.s32 q15, d20, d0 + vext.32 d0, d6, d18, #1 + vmlal.s32 q15, d21, d25 + vrev64.i32 d0, d0 + vmlal.s32 q15, d26, d24 + vext.32 d1, d7, d19, #1 + vext.32 d7, d10, d10, #0 + vmlal.s32 q15, d27, d23 + vrev64.i32 d1, d1 + vld1.8 {d6}, [r6, : 64] + vmlal.s32 q15, d28, d22 + vmlal.s32 q3, d16, d4 + add r6, r6, #24 + vmlal.s32 q3, d17, d2 + vext.32 d4, d31, d30, #0 + vmov d17, d11 + vmlal.s32 q3, d14, d1 + vext.32 d11, d13, d13, #0 + vext.32 d13, d30, d30, #0 + vmlal.s32 q3, d15, d0 + vext.32 d1, d8, d8, #0 + vmlal.s32 q3, d29, d3 + vld1.8 {d5}, [r6, : 64] + sub r6, r6, #16 + vext.32 d10, d6, d6, #0 + vmov.i32 q1, #0xffffffff + vshl.i64 q4, q1, #25 + add r7, sp, #512 + vld1.8 {d14-d15}, [r7, : 128] + vadd.i64 q9, q2, q7 + vshl.i64 q1, q1, #26 + vshr.s64 q10, q9, #26 + vld1.8 {d0}, [r6, : 64]! + vadd.i64 q5, q5, q10 + vand q9, q9, q1 + vld1.8 {d16}, [r6, : 64]! 
+ add r6, sp, #528 + vld1.8 {d20-d21}, [r6, : 128] + vadd.i64 q11, q5, q10 + vsub.i64 q2, q2, q9 + vshr.s64 q9, q11, #25 + vext.32 d12, d5, d4, #0 + vand q11, q11, q4 + vadd.i64 q0, q0, q9 + vmov d19, d7 + vadd.i64 q3, q0, q7 + vsub.i64 q5, q5, q11 + vshr.s64 q11, q3, #26 + vext.32 d18, d11, d10, #0 + vand q3, q3, q1 + vadd.i64 q8, q8, q11 + vadd.i64 q11, q8, q10 + vsub.i64 q0, q0, q3 + vshr.s64 q3, q11, #25 + vand q11, q11, q4 + vadd.i64 q3, q6, q3 + vadd.i64 q6, q3, q7 + vsub.i64 q8, q8, q11 + vshr.s64 q11, q6, #26 + vand q6, q6, q1 + vadd.i64 q9, q9, q11 + vadd.i64 d25, d19, d21 + vsub.i64 q3, q3, q6 + vshr.s64 d23, d25, #25 + vand q4, q12, q4 + vadd.i64 d21, d23, d23 + vshl.i64 d25, d23, #4 + vadd.i64 d21, d21, d23 + vadd.i64 d25, d25, d21 + vadd.i64 d4, d4, d25 + vzip.i32 q0, q8 + vadd.i64 d12, d4, d14 + add r6, r8, #8 + vst1.8 d0, [r6, : 64] + vsub.i64 d19, d19, d9 + add r6, r6, #16 + vst1.8 d16, [r6, : 64] + vshr.s64 d22, d12, #26 + vand q0, q6, q1 + vadd.i64 d10, d10, d22 + vzip.i32 q3, q9 + vsub.i64 d4, d4, d0 + sub r6, r6, #8 + vst1.8 d6, [r6, : 64] + add r6, r6, #16 + vst1.8 d18, [r6, : 64] + vzip.i32 q2, q5 + sub r6, r6, #32 + vst1.8 d4, [r6, : 64] + subs r5, r5, #1 + bhi ._squaringloop +._skipsquaringloop: + mov r2, r2 + add r5, r3, #288 + add r6, r3, #144 + vmov.i32 q0, #19 + vmov.i32 q1, #0 + vmov.i32 q2, #1 + vzip.i32 q1, q2 + vld1.8 {d4-d5}, [r5, : 128]! + vld1.8 {d6-d7}, [r5, : 128]! + vld1.8 {d9}, [r5, : 64] + vld1.8 {d10-d11}, [r2, : 128]! + add r5, sp, #416 + vld1.8 {d12-d13}, [r2, : 128]! + vmul.i32 q7, q2, q0 + vld1.8 {d8}, [r2, : 64] + vext.32 d17, d11, d10, #1 + vmul.i32 q9, q3, q0 + vext.32 d16, d10, d8, #1 + vshl.u32 q10, q5, q1 + vext.32 d22, d14, d4, #1 + vext.32 d24, d18, d6, #1 + vshl.u32 q13, q6, q1 + vshl.u32 d28, d8, d2 + vrev64.i32 d22, d22 + vmul.i32 d1, d9, d1 + vrev64.i32 d24, d24 + vext.32 d29, d8, d13, #1 + vext.32 d0, d1, d9, #1 + vrev64.i32 d0, d0 + vext.32 d2, d9, d1, #1 + vext.32 d23, d15, d5, #1 + vmull.s32 q4, d20, d4 + vrev64.i32 d23, d23 + vmlal.s32 q4, d21, d1 + vrev64.i32 d2, d2 + vmlal.s32 q4, d26, d19 + vext.32 d3, d5, d15, #1 + vmlal.s32 q4, d27, d18 + vrev64.i32 d3, d3 + vmlal.s32 q4, d28, d15 + vext.32 d14, d12, d11, #1 + vmull.s32 q5, d16, d23 + vext.32 d15, d13, d12, #1 + vmlal.s32 q5, d17, d4 + vst1.8 d8, [r5, : 64]! + vmlal.s32 q5, d14, d1 + vext.32 d12, d9, d8, #0 + vmlal.s32 q5, d15, d19 + vmov.i64 d13, #0 + vmlal.s32 q5, d29, d18 + vext.32 d25, d19, d7, #1 + vmlal.s32 q6, d20, d5 + vrev64.i32 d25, d25 + vmlal.s32 q6, d21, d4 + vst1.8 d11, [r5, : 64]! + vmlal.s32 q6, d26, d1 + vext.32 d9, d10, d10, #0 + vmlal.s32 q6, d27, d19 + vmov.i64 d8, #0 + vmlal.s32 q6, d28, d18 + vmlal.s32 q4, d16, d24 + vmlal.s32 q4, d17, d5 + vmlal.s32 q4, d14, d4 + vst1.8 d12, [r5, : 64]! + vmlal.s32 q4, d15, d1 + vext.32 d10, d13, d12, #0 + vmlal.s32 q4, d29, d19 + vmov.i64 d11, #0 + vmlal.s32 q5, d20, d6 + vmlal.s32 q5, d21, d5 + vmlal.s32 q5, d26, d4 + vext.32 d13, d8, d8, #0 + vmlal.s32 q5, d27, d1 + vmov.i64 d12, #0 + vmlal.s32 q5, d28, d19 + vst1.8 d9, [r5, : 64]! 
+ vmlal.s32 q6, d16, d25 + vmlal.s32 q6, d17, d6 + vst1.8 d10, [r5, : 64] + vmlal.s32 q6, d14, d5 + vext.32 d8, d11, d10, #0 + vmlal.s32 q6, d15, d4 + vmov.i64 d9, #0 + vmlal.s32 q6, d29, d1 + vmlal.s32 q4, d20, d7 + vmlal.s32 q4, d21, d6 + vmlal.s32 q4, d26, d5 + vext.32 d11, d12, d12, #0 + vmlal.s32 q4, d27, d4 + vmov.i64 d10, #0 + vmlal.s32 q4, d28, d1 + vmlal.s32 q5, d16, d0 + sub r2, r5, #32 + vmlal.s32 q5, d17, d7 + vmlal.s32 q5, d14, d6 + vext.32 d30, d9, d8, #0 + vmlal.s32 q5, d15, d5 + vld1.8 {d31}, [r2, : 64]! + vmlal.s32 q5, d29, d4 + vmlal.s32 q15, d20, d0 + vext.32 d0, d6, d18, #1 + vmlal.s32 q15, d21, d25 + vrev64.i32 d0, d0 + vmlal.s32 q15, d26, d24 + vext.32 d1, d7, d19, #1 + vext.32 d7, d10, d10, #0 + vmlal.s32 q15, d27, d23 + vrev64.i32 d1, d1 + vld1.8 {d6}, [r2, : 64] + vmlal.s32 q15, d28, d22 + vmlal.s32 q3, d16, d4 + add r2, r2, #24 + vmlal.s32 q3, d17, d2 + vext.32 d4, d31, d30, #0 + vmov d17, d11 + vmlal.s32 q3, d14, d1 + vext.32 d11, d13, d13, #0 + vext.32 d13, d30, d30, #0 + vmlal.s32 q3, d15, d0 + vext.32 d1, d8, d8, #0 + vmlal.s32 q3, d29, d3 + vld1.8 {d5}, [r2, : 64] + sub r2, r2, #16 + vext.32 d10, d6, d6, #0 + vmov.i32 q1, #0xffffffff + vshl.i64 q4, q1, #25 + add r5, sp, #512 + vld1.8 {d14-d15}, [r5, : 128] + vadd.i64 q9, q2, q7 + vshl.i64 q1, q1, #26 + vshr.s64 q10, q9, #26 + vld1.8 {d0}, [r2, : 64]! + vadd.i64 q5, q5, q10 + vand q9, q9, q1 + vld1.8 {d16}, [r2, : 64]! + add r2, sp, #528 + vld1.8 {d20-d21}, [r2, : 128] + vadd.i64 q11, q5, q10 + vsub.i64 q2, q2, q9 + vshr.s64 q9, q11, #25 + vext.32 d12, d5, d4, #0 + vand q11, q11, q4 + vadd.i64 q0, q0, q9 + vmov d19, d7 + vadd.i64 q3, q0, q7 + vsub.i64 q5, q5, q11 + vshr.s64 q11, q3, #26 + vext.32 d18, d11, d10, #0 + vand q3, q3, q1 + vadd.i64 q8, q8, q11 + vadd.i64 q11, q8, q10 + vsub.i64 q0, q0, q3 + vshr.s64 q3, q11, #25 + vand q11, q11, q4 + vadd.i64 q3, q6, q3 + vadd.i64 q6, q3, q7 + vsub.i64 q8, q8, q11 + vshr.s64 q11, q6, #26 + vand q6, q6, q1 + vadd.i64 q9, q9, q11 + vadd.i64 d25, d19, d21 + vsub.i64 q3, q3, q6 + vshr.s64 d23, d25, #25 + vand q4, q12, q4 + vadd.i64 d21, d23, d23 + vshl.i64 d25, d23, #4 + vadd.i64 d21, d21, d23 + vadd.i64 d25, d25, d21 + vadd.i64 d4, d4, d25 + vzip.i32 q0, q8 + vadd.i64 d12, d4, d14 + add r2, r6, #8 + vst1.8 d0, [r2, : 64] + vsub.i64 d19, d19, d9 + add r2, r2, #16 + vst1.8 d16, [r2, : 64] + vshr.s64 d22, d12, #26 + vand q0, q6, q1 + vadd.i64 d10, d10, d22 + vzip.i32 q3, q9 + vsub.i64 d4, d4, d0 + sub r2, r2, #8 + vst1.8 d6, [r2, : 64] + add r2, r2, #16 + vst1.8 d18, [r2, : 64] + vzip.i32 q2, q5 + sub r2, r2, #32 + vst1.8 d4, [r2, : 64] + cmp r4, #0 + beq ._skippostcopy + add r2, r3, #144 + mov r4, r4 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d4}, [r2, : 64] + vst1.8 {d0-d1}, [r4, : 128]! + vst1.8 {d2-d3}, [r4, : 128]! + vst1.8 d4, [r4, : 64] +._skippostcopy: + cmp r1, #1 + bne ._skipfinalcopy + add r2, r3, #288 + add r4, r3, #144 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d4}, [r2, : 64] + vst1.8 {d0-d1}, [r4, : 128]! + vst1.8 {d2-d3}, [r4, : 128]! 
+ vst1.8 d4, [r4, : 64] +._skipfinalcopy: + add r1, r1, #1 + cmp r1, #12 + blo ._invertloop + add r1, r3, #144 + ldr r2, [r1], #4 + ldr r3, [r1], #4 + ldr r4, [r1], #4 + ldr r5, [r1], #4 + ldr r6, [r1], #4 + ldr r7, [r1], #4 + ldr r8, [r1], #4 + ldr r9, [r1], #4 + ldr r10, [r1], #4 + ldr r1, [r1] + add r11, r1, r1, LSL #4 + add r11, r11, r1, LSL #1 + add r11, r11, #16777216 + mov r11, r11, ASR #25 + add r11, r11, r2 + mov r11, r11, ASR #26 + add r11, r11, r3 + mov r11, r11, ASR #25 + add r11, r11, r4 + mov r11, r11, ASR #26 + add r11, r11, r5 + mov r11, r11, ASR #25 + add r11, r11, r6 + mov r11, r11, ASR #26 + add r11, r11, r7 + mov r11, r11, ASR #25 + add r11, r11, r8 + mov r11, r11, ASR #26 + add r11, r11, r9 + mov r11, r11, ASR #25 + add r11, r11, r10 + mov r11, r11, ASR #26 + add r11, r11, r1 + mov r11, r11, ASR #25 + add r2, r2, r11 + add r2, r2, r11, LSL #1 + add r2, r2, r11, LSL #4 + mov r11, r2, ASR #26 + add r3, r3, r11 + sub r2, r2, r11, LSL #26 + mov r11, r3, ASR #25 + add r4, r4, r11 + sub r3, r3, r11, LSL #25 + mov r11, r4, ASR #26 + add r5, r5, r11 + sub r4, r4, r11, LSL #26 + mov r11, r5, ASR #25 + add r6, r6, r11 + sub r5, r5, r11, LSL #25 + mov r11, r6, ASR #26 + add r7, r7, r11 + sub r6, r6, r11, LSL #26 + mov r11, r7, ASR #25 + add r8, r8, r11 + sub r7, r7, r11, LSL #25 + mov r11, r8, ASR #26 + add r9, r9, r11 + sub r8, r8, r11, LSL #26 + mov r11, r9, ASR #25 + add r10, r10, r11 + sub r9, r9, r11, LSL #25 + mov r11, r10, ASR #26 + add r1, r1, r11 + sub r10, r10, r11, LSL #26 + mov r11, r1, ASR #25 + sub r1, r1, r11, LSL #25 + add r2, r2, r3, LSL #26 + mov r3, r3, LSR #6 + add r3, r3, r4, LSL #19 + mov r4, r4, LSR #13 + add r4, r4, r5, LSL #13 + mov r5, r5, LSR #19 + add r5, r5, r6, LSL #6 + add r6, r7, r8, LSL #25 + mov r7, r8, LSR #7 + add r7, r7, r9, LSL #19 + mov r8, r9, LSR #13 + add r8, r8, r10, LSL #12 + mov r9, r10, LSR #20 + add r1, r9, r1, LSL #6 + str r2, [r0], #4 + str r3, [r0], #4 + str r4, [r0], #4 + str r5, [r0], #4 + str r6, [r0], #4 + str r7, [r0], #4 + str r8, [r0], #4 + str r1, [r0] + ldrd r4, [sp, #0] + ldrd r6, [sp, #8] + ldrd r8, [sp, #16] + ldrd r10, [sp, #24] + ldr r12, [sp, #480] + ldr r14, [sp, #484] + ldr r0, =0 + mov sp, r12 + vpop {q4, q5, q6, q7} + bx lr From patchwork Fri Mar 22 06:29:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 160845 Delivered-To: patch@linaro.org Received: by 2002:a02:c6d8:0:0:0:0:0 with SMTP id r24csp453044jan; Thu, 21 Mar 2019 23:40:48 -0700 (PDT) X-Google-Smtp-Source: APXvYqxgibZu5ju+6LD2rLobJby4aGJt5Mz2BJbfV+pG34fMC/00E47ckAxuQniky0u0wu+9sc+q X-Received: by 2002:a63:bd51:: with SMTP id d17mr7295138pgp.117.1553236848130; Thu, 21 Mar 2019 23:40:48 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1553236848; cv=none; d=google.com; s=arc-20160816; b=nN+WLPO8w+GK7LVnAmY/In9jBSCEGAKRDcOFfnK2gfAc8GRv2b0dxcKiTj3UvpoyGk OCAcDTlBkYG/wg6yiNJVA9WXaWl1uxyzTn/u5UP0yMRimb671k6LN/G+/wAx/ysID86Y XVciniOW8YiSpat7/FxfOR0Rn74J3wkRdsz9CVwBjXI5/0ELNwRwxFiGbCe+secDhh8L GDiNovIAbhEzKsE5Plcgs1s24Bi0klBzaEWR1a2A3FjFHNTkw5pddUXeHnTdN1SEN9b8 y1znxlzvbx86U0t74kmgAz+UwiPg0h/8vnfFWA4ykyoNII0Nor8htRVXoLfumqmJRyqS /Kvg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:date:from:message-id:to:references :subject; bh=Ns67dE3lrhSypWObP0q5gUYHsHJ5gkjPCRNRiuqLvWQ=; b=YdIIxGg8Wj+oNDgABQlwnNQtuEk6LiYW2uX7kf+i9eWolJp6EVNUGkrb8stUqANZKx 
q6urC7xX+vWccEIe4Sdtix/nYPXhje2gchzVZYkEfo+LXx12Vj/dk1ksfJ+SoE5dP+sj L2fHMyiXbNMuMLxAOG+UVwBzENtRuU24vV0IBUIbBWdWghonpchRWNpn89upjiNywpmn XIAf0qSl5gRX8/MMH5kOU5bu49uCW+1qa8vjBuNDGDJXx9yBzyqZfR6bWyHdH+lKAXIp ZCvbankXp8IfS1cp66d2eTj4bVIP1w/zZMS4MZMDnXWkLZgVJzyqAulYo+fSH2bqoh7v C1Dw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id r6si3875081pfn.165.2019.03.21.23.40.47; Thu, 21 Mar 2019 23:40:48 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726096AbfCVGkr (ORCPT + 3 others); Fri, 22 Mar 2019 02:40:47 -0400 Received: from orcrist.hmeau.com ([104.223.48.154]:44404 "EHLO deadmen.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727429AbfCVGkq (ORCPT ); Fri, 22 Mar 2019 02:40:46 -0400 Received: from gondobar.mordor.me.apana.org.au ([192.168.128.4] helo=gondobar) by deadmen.hmeau.com with esmtps (Exim 4.89 #2 (Debian)) id 1h7Dh5-0002ZL-Aq; Fri, 22 Mar 2019 14:30:21 +0800 Received: from herbert by gondobar with local (Exim 4.89) (envelope-from ) id 1h7Dgj-0001Jv-F3; Fri, 22 Mar 2019 14:29:57 +0800 Subject: [PATCH 16/17] zinc: Curve25519 ARM implementation References: <20190322062740.nrwfx2rvmt7lzotj@gondor.apana.org.au> To: Linus Torvalds , "David S. Miller" , "Jason A. Donenfeld" , Eric Biggers , Ard Biesheuvel , Linux Crypto Mailing List , linux-fscrypt@vger.kernel.org, linux-arm-kernel@lists.infradead.org, LKML , Paul Crowley , Greg Kaiser , Samuel Neves , Tomer Ashur , Martin Willi Message-Id: From: Herbert Xu Date: Fri, 22 Mar 2019 14:29:57 +0800 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Jason A. Donenfeld This ports the SUPERCOP implementation for usage in kernel space. In addition to the usual header, macro, and style changes required for kernel space, it makes a few small changes to the code: - The stack alignment is relaxed to 16 bytes. - Superfluous mov statements have been removed. - ldr for constants has been replaced with movw. - ldreq has been replaced with moveq. - The str epilogue has been made more idiomatic. - SIMD registers are not pushed and popped at the beginning and end. - The prologue and epilogue have been made idiomatic. - A hole has been removed from the stack, saving 32 bytes. - We write-back the base register whenever possible for vld1.8. - Some multiplications have been reordered for better A7 performance. There are more opportunities for cleanup, since this code is from qhasm, which doesn't always do the most opportune thing. 
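For a concrete sense of several of the items above (the idiomatic prologue, the relaxed 16-byte stack alignment, the removed 32-byte stack hole, and movw in place of the ldr-"=constant" pseudo-instruction), the abridged excerpts below are taken from the old and new function entry sequences that appear later in this diff; they are shown here purely as an illustration of the listed changes, not as additional ones:

    @ old: curve25519-arm-supercop.S (removed below)
    vpush {q4, q5, q6, q7}      @ SIMD registers saved/restored by the function itself
    mov r12, sp
    sub sp, sp, #736            @ 736-byte frame, including the 32-byte hole
    and sp, sp, #0xffffffe0     @ 32-byte stack alignment
    ...
    ldr r4, =0                  @ constants via the ldr pseudo-instruction
    ldr r5, =254

    @ new: curve25519-arm.S (added below)
    ENTRY(curve25519_neon)
    push {r4-r11, lr}           @ idiomatic kernel prologue, no vpush/vpop
    mov ip, sp
    sub r3, sp, #704            @ 704-byte frame, hole removed
    and r3, r3, #0xfffffff0     @ 16-byte stack alignment
    mov sp, r3
    movw r4, #0                 @ constants materialized with movw
    movw r5, #254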
But even prior to extensive hand optimizations, this code delivers
significant performance improvements (given in get_cycles() per call):

             ----------- -------------
             | generic C | this commit |
------------ ----------- -------------
| Cortex-A7  |     49136 |       22395 |
------------ ----------- -------------
| Cortex-A17 |     17326 |        4983 |
------------ ----------- -------------

Signed-off-by: Jason A. Donenfeld
Cc: Russell King
Cc: linux-arm-kernel@lists.infradead.org
Cc: Samuel Neves
Cc: Jean-Philippe Aumasson
Cc: Andy Lutomirski
Cc: Greg KH
Cc: Andrew Morton
Cc: Linus Torvalds
Cc: kernel-hardening@lists.openwall.com
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu
---
 lib/zinc/Makefile                             |    1
 lib/zinc/curve25519/curve25519-arm-glue.c     |   43
 lib/zinc/curve25519/curve25519-arm-supercop.S | 2105 --------------------------
 lib/zinc/curve25519/curve25519-arm.S          | 2064 +++++++++++++++++++++++++
 lib/zinc/curve25519/curve25519.c              |    2
 5 files changed, 2110 insertions(+), 2105 deletions(-)

diff --git a/lib/zinc/Makefile b/lib/zinc/Makefile
index 9050089e1b5a..0edb0c1fbb0a 100644
--- a/lib/zinc/Makefile
+++ b/lib/zinc/Makefile
@@ -16,4 +16,5 @@ zinc_blake2s-$(CONFIG_ZINC_ARCH_X86_64) += blake2s/blake2s-x86_64.o
 obj-$(CONFIG_ZINC_BLAKE2S) += zinc_blake2s.o
 
 zinc_curve25519-y := curve25519/curve25519.o
+zinc_curve25519-$(CONFIG_ZINC_ARCH_ARM) += curve25519/curve25519-arm.o
 obj-$(CONFIG_ZINC_CURVE25519) += zinc_curve25519.o
diff --git a/lib/zinc/curve25519/curve25519-arm-glue.c b/lib/zinc/curve25519/curve25519-arm-glue.c
new file mode 100644
index 000000000000..c71c981c3ba9
--- /dev/null
+++ b/lib/zinc/curve25519/curve25519-arm-glue.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/*
+ * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved.
+ */
+
+#include
+#include
+#include
+
+asmlinkage void curve25519_neon(u8 mypublic[CURVE25519_KEY_SIZE],
+                                const u8 secret[CURVE25519_KEY_SIZE],
+                                const u8 basepoint[CURVE25519_KEY_SIZE]);
+
+static bool curve25519_use_neon __ro_after_init;
+static bool *const curve25519_nobs[] __initconst = { &curve25519_use_neon };
+static void __init curve25519_fpu_init(void)
+{
+        curve25519_use_neon = elf_hwcap & HWCAP_NEON;
+}
+
+static inline bool curve25519_arch(u8 mypublic[CURVE25519_KEY_SIZE],
+                                   const u8 secret[CURVE25519_KEY_SIZE],
+                                   const u8 basepoint[CURVE25519_KEY_SIZE])
+{
+        simd_context_t simd_context;
+        bool used_arch = false;
+
+        simd_get(&simd_context);
+        if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
+            !IS_ENABLED(CONFIG_CPU_BIG_ENDIAN) && curve25519_use_neon &&
+            simd_use(&simd_context)) {
+                curve25519_neon(mypublic, secret, basepoint);
+                used_arch = true;
+        }
+        simd_put(&simd_context);
+        return used_arch;
+}
+
+static inline bool curve25519_base_arch(u8 pub[CURVE25519_KEY_SIZE],
+                                        const u8 secret[CURVE25519_KEY_SIZE])
+{
+        return false;
+}
diff --git a/lib/zinc/curve25519/curve25519-arm-supercop.S b/lib/zinc/curve25519/curve25519-arm-supercop.S
deleted file mode 100644
index f33b85fef382..000000000000
--- a/lib/zinc/curve25519/curve25519-arm-supercop.S
+++ /dev/null
@@ -1,2105 +0,0 @@
-/*
- * Public domain code from Daniel J. Bernstein and Peter Schwabe, from
- * SUPERCOP's curve25519/neon2/scalarmult.s.
- */ - -.fpu neon -.text -.align 4 -.global _crypto_scalarmult_curve25519_neon2 -.global crypto_scalarmult_curve25519_neon2 -.type _crypto_scalarmult_curve25519_neon2 STT_FUNC -.type crypto_scalarmult_curve25519_neon2 STT_FUNC - _crypto_scalarmult_curve25519_neon2: - crypto_scalarmult_curve25519_neon2: - vpush {q4, q5, q6, q7} - mov r12, sp - sub sp, sp, #736 - and sp, sp, #0xffffffe0 - strd r4, [sp, #0] - strd r6, [sp, #8] - strd r8, [sp, #16] - strd r10, [sp, #24] - str r12, [sp, #480] - str r14, [sp, #484] - mov r0, r0 - mov r1, r1 - mov r2, r2 - add r3, sp, #32 - ldr r4, =0 - ldr r5, =254 - vmov.i32 q0, #1 - vshr.u64 q1, q0, #7 - vshr.u64 q0, q0, #8 - vmov.i32 d4, #19 - vmov.i32 d5, #38 - add r6, sp, #512 - vst1.8 {d2-d3}, [r6, : 128] - add r6, sp, #528 - vst1.8 {d0-d1}, [r6, : 128] - add r6, sp, #544 - vst1.8 {d4-d5}, [r6, : 128] - add r6, r3, #0 - vmov.i32 q2, #0 - vst1.8 {d4-d5}, [r6, : 128]! - vst1.8 {d4-d5}, [r6, : 128]! - vst1.8 d4, [r6, : 64] - add r6, r3, #0 - ldr r7, =960 - sub r7, r7, #2 - neg r7, r7 - sub r7, r7, r7, LSL #7 - str r7, [r6] - add r6, sp, #704 - vld1.8 {d4-d5}, [r1]! - vld1.8 {d6-d7}, [r1] - vst1.8 {d4-d5}, [r6, : 128]! - vst1.8 {d6-d7}, [r6, : 128] - sub r1, r6, #16 - ldrb r6, [r1] - and r6, r6, #248 - strb r6, [r1] - ldrb r6, [r1, #31] - and r6, r6, #127 - orr r6, r6, #64 - strb r6, [r1, #31] - vmov.i64 q2, #0xffffffff - vshr.u64 q3, q2, #7 - vshr.u64 q2, q2, #6 - vld1.8 {d8}, [r2] - vld1.8 {d10}, [r2] - add r2, r2, #6 - vld1.8 {d12}, [r2] - vld1.8 {d14}, [r2] - add r2, r2, #6 - vld1.8 {d16}, [r2] - add r2, r2, #4 - vld1.8 {d18}, [r2] - vld1.8 {d20}, [r2] - add r2, r2, #6 - vld1.8 {d22}, [r2] - add r2, r2, #2 - vld1.8 {d24}, [r2] - vld1.8 {d26}, [r2] - vshr.u64 q5, q5, #26 - vshr.u64 q6, q6, #3 - vshr.u64 q7, q7, #29 - vshr.u64 q8, q8, #6 - vshr.u64 q10, q10, #25 - vshr.u64 q11, q11, #3 - vshr.u64 q12, q12, #12 - vshr.u64 q13, q13, #38 - vand q4, q4, q2 - vand q6, q6, q2 - vand q8, q8, q2 - vand q10, q10, q2 - vand q2, q12, q2 - vand q5, q5, q3 - vand q7, q7, q3 - vand q9, q9, q3 - vand q11, q11, q3 - vand q3, q13, q3 - add r2, r3, #48 - vadd.i64 q12, q4, q1 - vadd.i64 q13, q10, q1 - vshr.s64 q12, q12, #26 - vshr.s64 q13, q13, #26 - vadd.i64 q5, q5, q12 - vshl.i64 q12, q12, #26 - vadd.i64 q14, q5, q0 - vadd.i64 q11, q11, q13 - vshl.i64 q13, q13, #26 - vadd.i64 q15, q11, q0 - vsub.i64 q4, q4, q12 - vshr.s64 q12, q14, #25 - vsub.i64 q10, q10, q13 - vshr.s64 q13, q15, #25 - vadd.i64 q6, q6, q12 - vshl.i64 q12, q12, #25 - vadd.i64 q14, q6, q1 - vadd.i64 q2, q2, q13 - vsub.i64 q5, q5, q12 - vshr.s64 q12, q14, #26 - vshl.i64 q13, q13, #25 - vadd.i64 q14, q2, q1 - vadd.i64 q7, q7, q12 - vshl.i64 q12, q12, #26 - vadd.i64 q15, q7, q0 - vsub.i64 q11, q11, q13 - vshr.s64 q13, q14, #26 - vsub.i64 q6, q6, q12 - vshr.s64 q12, q15, #25 - vadd.i64 q3, q3, q13 - vshl.i64 q13, q13, #26 - vadd.i64 q14, q3, q0 - vadd.i64 q8, q8, q12 - vshl.i64 q12, q12, #25 - vadd.i64 q15, q8, q1 - add r2, r2, #8 - vsub.i64 q2, q2, q13 - vshr.s64 q13, q14, #25 - vsub.i64 q7, q7, q12 - vshr.s64 q12, q15, #26 - vadd.i64 q14, q13, q13 - vadd.i64 q9, q9, q12 - vtrn.32 d12, d14 - vshl.i64 q12, q12, #26 - vtrn.32 d13, d15 - vadd.i64 q0, q9, q0 - vadd.i64 q4, q4, q14 - vst1.8 d12, [r2, : 64]! 
- vshl.i64 q6, q13, #4 - vsub.i64 q7, q8, q12 - vshr.s64 q0, q0, #25 - vadd.i64 q4, q4, q6 - vadd.i64 q6, q10, q0 - vshl.i64 q0, q0, #25 - vadd.i64 q8, q6, q1 - vadd.i64 q4, q4, q13 - vshl.i64 q10, q13, #25 - vadd.i64 q1, q4, q1 - vsub.i64 q0, q9, q0 - vshr.s64 q8, q8, #26 - vsub.i64 q3, q3, q10 - vtrn.32 d14, d0 - vshr.s64 q1, q1, #26 - vtrn.32 d15, d1 - vadd.i64 q0, q11, q8 - vst1.8 d14, [r2, : 64] - vshl.i64 q7, q8, #26 - vadd.i64 q5, q5, q1 - vtrn.32 d4, d6 - vshl.i64 q1, q1, #26 - vtrn.32 d5, d7 - vsub.i64 q3, q6, q7 - add r2, r2, #16 - vsub.i64 q1, q4, q1 - vst1.8 d4, [r2, : 64] - vtrn.32 d6, d0 - vtrn.32 d7, d1 - sub r2, r2, #8 - vtrn.32 d2, d10 - vtrn.32 d3, d11 - vst1.8 d6, [r2, : 64] - sub r2, r2, #24 - vst1.8 d2, [r2, : 64] - add r2, r3, #96 - vmov.i32 q0, #0 - vmov.i64 d2, #0xff - vmov.i64 d3, #0 - vshr.u32 q1, q1, #7 - vst1.8 {d2-d3}, [r2, : 128]! - vst1.8 {d0-d1}, [r2, : 128]! - vst1.8 d0, [r2, : 64] - add r2, r3, #144 - vmov.i32 q0, #0 - vst1.8 {d0-d1}, [r2, : 128]! - vst1.8 {d0-d1}, [r2, : 128]! - vst1.8 d0, [r2, : 64] - add r2, r3, #240 - vmov.i32 q0, #0 - vmov.i64 d2, #0xff - vmov.i64 d3, #0 - vshr.u32 q1, q1, #7 - vst1.8 {d2-d3}, [r2, : 128]! - vst1.8 {d0-d1}, [r2, : 128]! - vst1.8 d0, [r2, : 64] - add r2, r3, #48 - add r6, r3, #192 - vld1.8 {d0-d1}, [r2, : 128]! - vld1.8 {d2-d3}, [r2, : 128]! - vld1.8 {d4}, [r2, : 64] - vst1.8 {d0-d1}, [r6, : 128]! - vst1.8 {d2-d3}, [r6, : 128]! - vst1.8 d4, [r6, : 64] -._mainloop: - mov r2, r5, LSR #3 - and r6, r5, #7 - ldrb r2, [r1, r2] - mov r2, r2, LSR r6 - and r2, r2, #1 - str r5, [sp, #488] - eor r4, r4, r2 - str r2, [sp, #492] - neg r2, r4 - add r4, r3, #96 - add r5, r3, #192 - add r6, r3, #144 - vld1.8 {d8-d9}, [r4, : 128]! - add r7, r3, #240 - vld1.8 {d10-d11}, [r5, : 128]! - veor q6, q4, q5 - vld1.8 {d14-d15}, [r6, : 128]! - vdup.i32 q8, r2 - vld1.8 {d18-d19}, [r7, : 128]! - veor q10, q7, q9 - vld1.8 {d22-d23}, [r4, : 128]! - vand q6, q6, q8 - vld1.8 {d24-d25}, [r5, : 128]! - vand q10, q10, q8 - vld1.8 {d26-d27}, [r6, : 128]! - veor q4, q4, q6 - vld1.8 {d28-d29}, [r7, : 128]! - veor q5, q5, q6 - vld1.8 {d0}, [r4, : 64] - veor q6, q7, q10 - vld1.8 {d2}, [r5, : 64] - veor q7, q9, q10 - vld1.8 {d4}, [r6, : 64] - veor q9, q11, q12 - vld1.8 {d6}, [r7, : 64] - veor q10, q0, q1 - sub r2, r4, #32 - vand q9, q9, q8 - sub r4, r5, #32 - vand q10, q10, q8 - sub r5, r6, #32 - veor q11, q11, q9 - sub r6, r7, #32 - veor q0, q0, q10 - veor q9, q12, q9 - veor q1, q1, q10 - veor q10, q13, q14 - veor q12, q2, q3 - vand q10, q10, q8 - vand q8, q12, q8 - veor q12, q13, q10 - veor q2, q2, q8 - veor q10, q14, q10 - veor q3, q3, q8 - vadd.i32 q8, q4, q6 - vsub.i32 q4, q4, q6 - vst1.8 {d16-d17}, [r2, : 128]! - vadd.i32 q6, q11, q12 - vst1.8 {d8-d9}, [r5, : 128]! - vsub.i32 q4, q11, q12 - vst1.8 {d12-d13}, [r2, : 128]! - vadd.i32 q6, q0, q2 - vst1.8 {d8-d9}, [r5, : 128]! - vsub.i32 q0, q0, q2 - vst1.8 d12, [r2, : 64] - vadd.i32 q2, q5, q7 - vst1.8 d0, [r5, : 64] - vsub.i32 q0, q5, q7 - vst1.8 {d4-d5}, [r4, : 128]! - vadd.i32 q2, q9, q10 - vst1.8 {d0-d1}, [r6, : 128]! - vsub.i32 q0, q9, q10 - vst1.8 {d4-d5}, [r4, : 128]! - vadd.i32 q2, q1, q3 - vst1.8 {d0-d1}, [r6, : 128]! - vsub.i32 q0, q1, q3 - vst1.8 d4, [r4, : 64] - vst1.8 d0, [r6, : 64] - add r2, sp, #544 - add r4, r3, #96 - add r5, r3, #144 - vld1.8 {d0-d1}, [r2, : 128] - vld1.8 {d2-d3}, [r4, : 128]! - vld1.8 {d4-d5}, [r5, : 128]! - vzip.i32 q1, q2 - vld1.8 {d6-d7}, [r4, : 128]! - vld1.8 {d8-d9}, [r5, : 128]! 
- vshl.i32 q5, q1, #1 - vzip.i32 q3, q4 - vshl.i32 q6, q2, #1 - vld1.8 {d14}, [r4, : 64] - vshl.i32 q8, q3, #1 - vld1.8 {d15}, [r5, : 64] - vshl.i32 q9, q4, #1 - vmul.i32 d21, d7, d1 - vtrn.32 d14, d15 - vmul.i32 q11, q4, q0 - vmul.i32 q0, q7, q0 - vmull.s32 q12, d2, d2 - vmlal.s32 q12, d11, d1 - vmlal.s32 q12, d12, d0 - vmlal.s32 q12, d13, d23 - vmlal.s32 q12, d16, d22 - vmlal.s32 q12, d7, d21 - vmull.s32 q10, d2, d11 - vmlal.s32 q10, d4, d1 - vmlal.s32 q10, d13, d0 - vmlal.s32 q10, d6, d23 - vmlal.s32 q10, d17, d22 - vmull.s32 q13, d10, d4 - vmlal.s32 q13, d11, d3 - vmlal.s32 q13, d13, d1 - vmlal.s32 q13, d16, d0 - vmlal.s32 q13, d17, d23 - vmlal.s32 q13, d8, d22 - vmull.s32 q1, d10, d5 - vmlal.s32 q1, d11, d4 - vmlal.s32 q1, d6, d1 - vmlal.s32 q1, d17, d0 - vmlal.s32 q1, d8, d23 - vmull.s32 q14, d10, d6 - vmlal.s32 q14, d11, d13 - vmlal.s32 q14, d4, d4 - vmlal.s32 q14, d17, d1 - vmlal.s32 q14, d18, d0 - vmlal.s32 q14, d9, d23 - vmull.s32 q11, d10, d7 - vmlal.s32 q11, d11, d6 - vmlal.s32 q11, d12, d5 - vmlal.s32 q11, d8, d1 - vmlal.s32 q11, d19, d0 - vmull.s32 q15, d10, d8 - vmlal.s32 q15, d11, d17 - vmlal.s32 q15, d12, d6 - vmlal.s32 q15, d13, d5 - vmlal.s32 q15, d19, d1 - vmlal.s32 q15, d14, d0 - vmull.s32 q2, d10, d9 - vmlal.s32 q2, d11, d8 - vmlal.s32 q2, d12, d7 - vmlal.s32 q2, d13, d6 - vmlal.s32 q2, d14, d1 - vmull.s32 q0, d15, d1 - vmlal.s32 q0, d10, d14 - vmlal.s32 q0, d11, d19 - vmlal.s32 q0, d12, d8 - vmlal.s32 q0, d13, d17 - vmlal.s32 q0, d6, d6 - add r2, sp, #512 - vld1.8 {d18-d19}, [r2, : 128] - vmull.s32 q3, d16, d7 - vmlal.s32 q3, d10, d15 - vmlal.s32 q3, d11, d14 - vmlal.s32 q3, d12, d9 - vmlal.s32 q3, d13, d8 - add r2, sp, #528 - vld1.8 {d8-d9}, [r2, : 128] - vadd.i64 q5, q12, q9 - vadd.i64 q6, q15, q9 - vshr.s64 q5, q5, #26 - vshr.s64 q6, q6, #26 - vadd.i64 q7, q10, q5 - vshl.i64 q5, q5, #26 - vadd.i64 q8, q7, q4 - vadd.i64 q2, q2, q6 - vshl.i64 q6, q6, #26 - vadd.i64 q10, q2, q4 - vsub.i64 q5, q12, q5 - vshr.s64 q8, q8, #25 - vsub.i64 q6, q15, q6 - vshr.s64 q10, q10, #25 - vadd.i64 q12, q13, q8 - vshl.i64 q8, q8, #25 - vadd.i64 q13, q12, q9 - vadd.i64 q0, q0, q10 - vsub.i64 q7, q7, q8 - vshr.s64 q8, q13, #26 - vshl.i64 q10, q10, #25 - vadd.i64 q13, q0, q9 - vadd.i64 q1, q1, q8 - vshl.i64 q8, q8, #26 - vadd.i64 q15, q1, q4 - vsub.i64 q2, q2, q10 - vshr.s64 q10, q13, #26 - vsub.i64 q8, q12, q8 - vshr.s64 q12, q15, #25 - vadd.i64 q3, q3, q10 - vshl.i64 q10, q10, #26 - vadd.i64 q13, q3, q4 - vadd.i64 q14, q14, q12 - add r2, r3, #288 - vshl.i64 q12, q12, #25 - add r4, r3, #336 - vadd.i64 q15, q14, q9 - add r2, r2, #8 - vsub.i64 q0, q0, q10 - add r4, r4, #8 - vshr.s64 q10, q13, #25 - vsub.i64 q1, q1, q12 - vshr.s64 q12, q15, #26 - vadd.i64 q13, q10, q10 - vadd.i64 q11, q11, q12 - vtrn.32 d16, d2 - vshl.i64 q12, q12, #26 - vtrn.32 d17, d3 - vadd.i64 q1, q11, q4 - vadd.i64 q4, q5, q13 - vst1.8 d16, [r2, : 64]! - vshl.i64 q5, q10, #4 - vst1.8 d17, [r4, : 64]! 
- vsub.i64 q8, q14, q12 - vshr.s64 q1, q1, #25 - vadd.i64 q4, q4, q5 - vadd.i64 q5, q6, q1 - vshl.i64 q1, q1, #25 - vadd.i64 q6, q5, q9 - vadd.i64 q4, q4, q10 - vshl.i64 q10, q10, #25 - vadd.i64 q9, q4, q9 - vsub.i64 q1, q11, q1 - vshr.s64 q6, q6, #26 - vsub.i64 q3, q3, q10 - vtrn.32 d16, d2 - vshr.s64 q9, q9, #26 - vtrn.32 d17, d3 - vadd.i64 q1, q2, q6 - vst1.8 d16, [r2, : 64] - vshl.i64 q2, q6, #26 - vst1.8 d17, [r4, : 64] - vadd.i64 q6, q7, q9 - vtrn.32 d0, d6 - vshl.i64 q7, q9, #26 - vtrn.32 d1, d7 - vsub.i64 q2, q5, q2 - add r2, r2, #16 - vsub.i64 q3, q4, q7 - vst1.8 d0, [r2, : 64] - add r4, r4, #16 - vst1.8 d1, [r4, : 64] - vtrn.32 d4, d2 - vtrn.32 d5, d3 - sub r2, r2, #8 - sub r4, r4, #8 - vtrn.32 d6, d12 - vtrn.32 d7, d13 - vst1.8 d4, [r2, : 64] - vst1.8 d5, [r4, : 64] - sub r2, r2, #24 - sub r4, r4, #24 - vst1.8 d6, [r2, : 64] - vst1.8 d7, [r4, : 64] - add r2, r3, #240 - add r4, r3, #96 - vld1.8 {d0-d1}, [r4, : 128]! - vld1.8 {d2-d3}, [r4, : 128]! - vld1.8 {d4}, [r4, : 64] - add r4, r3, #144 - vld1.8 {d6-d7}, [r4, : 128]! - vtrn.32 q0, q3 - vld1.8 {d8-d9}, [r4, : 128]! - vshl.i32 q5, q0, #4 - vtrn.32 q1, q4 - vshl.i32 q6, q3, #4 - vadd.i32 q5, q5, q0 - vadd.i32 q6, q6, q3 - vshl.i32 q7, q1, #4 - vld1.8 {d5}, [r4, : 64] - vshl.i32 q8, q4, #4 - vtrn.32 d4, d5 - vadd.i32 q7, q7, q1 - vadd.i32 q8, q8, q4 - vld1.8 {d18-d19}, [r2, : 128]! - vshl.i32 q10, q2, #4 - vld1.8 {d22-d23}, [r2, : 128]! - vadd.i32 q10, q10, q2 - vld1.8 {d24}, [r2, : 64] - vadd.i32 q5, q5, q0 - add r2, r3, #192 - vld1.8 {d26-d27}, [r2, : 128]! - vadd.i32 q6, q6, q3 - vld1.8 {d28-d29}, [r2, : 128]! - vadd.i32 q8, q8, q4 - vld1.8 {d25}, [r2, : 64] - vadd.i32 q10, q10, q2 - vtrn.32 q9, q13 - vadd.i32 q7, q7, q1 - vadd.i32 q5, q5, q0 - vtrn.32 q11, q14 - vadd.i32 q6, q6, q3 - add r2, sp, #560 - vadd.i32 q10, q10, q2 - vtrn.32 d24, d25 - vst1.8 {d12-d13}, [r2, : 128] - vshl.i32 q6, q13, #1 - add r2, sp, #576 - vst1.8 {d20-d21}, [r2, : 128] - vshl.i32 q10, q14, #1 - add r2, sp, #592 - vst1.8 {d12-d13}, [r2, : 128] - vshl.i32 q15, q12, #1 - vadd.i32 q8, q8, q4 - vext.32 d10, d31, d30, #0 - vadd.i32 q7, q7, q1 - add r2, sp, #608 - vst1.8 {d16-d17}, [r2, : 128] - vmull.s32 q8, d18, d5 - vmlal.s32 q8, d26, d4 - vmlal.s32 q8, d19, d9 - vmlal.s32 q8, d27, d3 - vmlal.s32 q8, d22, d8 - vmlal.s32 q8, d28, d2 - vmlal.s32 q8, d23, d7 - vmlal.s32 q8, d29, d1 - vmlal.s32 q8, d24, d6 - vmlal.s32 q8, d25, d0 - add r2, sp, #624 - vst1.8 {d14-d15}, [r2, : 128] - vmull.s32 q2, d18, d4 - vmlal.s32 q2, d12, d9 - vmlal.s32 q2, d13, d8 - vmlal.s32 q2, d19, d3 - vmlal.s32 q2, d22, d2 - vmlal.s32 q2, d23, d1 - vmlal.s32 q2, d24, d0 - add r2, sp, #640 - vst1.8 {d20-d21}, [r2, : 128] - vmull.s32 q7, d18, d9 - vmlal.s32 q7, d26, d3 - vmlal.s32 q7, d19, d8 - vmlal.s32 q7, d27, d2 - vmlal.s32 q7, d22, d7 - vmlal.s32 q7, d28, d1 - vmlal.s32 q7, d23, d6 - vmlal.s32 q7, d29, d0 - add r2, sp, #656 - vst1.8 {d10-d11}, [r2, : 128] - vmull.s32 q5, d18, d3 - vmlal.s32 q5, d19, d2 - vmlal.s32 q5, d22, d1 - vmlal.s32 q5, d23, d0 - vmlal.s32 q5, d12, d8 - add r2, sp, #672 - vst1.8 {d16-d17}, [r2, : 128] - vmull.s32 q4, d18, d8 - vmlal.s32 q4, d26, d2 - vmlal.s32 q4, d19, d7 - vmlal.s32 q4, d27, d1 - vmlal.s32 q4, d22, d6 - vmlal.s32 q4, d28, d0 - vmull.s32 q8, d18, d7 - vmlal.s32 q8, d26, d1 - vmlal.s32 q8, d19, d6 - vmlal.s32 q8, d27, d0 - add r2, sp, #576 - vld1.8 {d20-d21}, [r2, : 128] - vmlal.s32 q7, d24, d21 - vmlal.s32 q7, d25, d20 - vmlal.s32 q4, d23, d21 - vmlal.s32 q4, d29, d20 - vmlal.s32 q8, d22, d21 - vmlal.s32 q8, d28, d20 - vmlal.s32 q5, d24, 
d20 - add r2, sp, #576 - vst1.8 {d14-d15}, [r2, : 128] - vmull.s32 q7, d18, d6 - vmlal.s32 q7, d26, d0 - add r2, sp, #656 - vld1.8 {d30-d31}, [r2, : 128] - vmlal.s32 q2, d30, d21 - vmlal.s32 q7, d19, d21 - vmlal.s32 q7, d27, d20 - add r2, sp, #624 - vld1.8 {d26-d27}, [r2, : 128] - vmlal.s32 q4, d25, d27 - vmlal.s32 q8, d29, d27 - vmlal.s32 q8, d25, d26 - vmlal.s32 q7, d28, d27 - vmlal.s32 q7, d29, d26 - add r2, sp, #608 - vld1.8 {d28-d29}, [r2, : 128] - vmlal.s32 q4, d24, d29 - vmlal.s32 q8, d23, d29 - vmlal.s32 q8, d24, d28 - vmlal.s32 q7, d22, d29 - vmlal.s32 q7, d23, d28 - add r2, sp, #608 - vst1.8 {d8-d9}, [r2, : 128] - add r2, sp, #560 - vld1.8 {d8-d9}, [r2, : 128] - vmlal.s32 q7, d24, d9 - vmlal.s32 q7, d25, d31 - vmull.s32 q1, d18, d2 - vmlal.s32 q1, d19, d1 - vmlal.s32 q1, d22, d0 - vmlal.s32 q1, d24, d27 - vmlal.s32 q1, d23, d20 - vmlal.s32 q1, d12, d7 - vmlal.s32 q1, d13, d6 - vmull.s32 q6, d18, d1 - vmlal.s32 q6, d19, d0 - vmlal.s32 q6, d23, d27 - vmlal.s32 q6, d22, d20 - vmlal.s32 q6, d24, d26 - vmull.s32 q0, d18, d0 - vmlal.s32 q0, d22, d27 - vmlal.s32 q0, d23, d26 - vmlal.s32 q0, d24, d31 - vmlal.s32 q0, d19, d20 - add r2, sp, #640 - vld1.8 {d18-d19}, [r2, : 128] - vmlal.s32 q2, d18, d7 - vmlal.s32 q2, d19, d6 - vmlal.s32 q5, d18, d6 - vmlal.s32 q5, d19, d21 - vmlal.s32 q1, d18, d21 - vmlal.s32 q1, d19, d29 - vmlal.s32 q0, d18, d28 - vmlal.s32 q0, d19, d9 - vmlal.s32 q6, d18, d29 - vmlal.s32 q6, d19, d28 - add r2, sp, #592 - vld1.8 {d18-d19}, [r2, : 128] - add r2, sp, #512 - vld1.8 {d22-d23}, [r2, : 128] - vmlal.s32 q5, d19, d7 - vmlal.s32 q0, d18, d21 - vmlal.s32 q0, d19, d29 - vmlal.s32 q6, d18, d6 - add r2, sp, #528 - vld1.8 {d6-d7}, [r2, : 128] - vmlal.s32 q6, d19, d21 - add r2, sp, #576 - vld1.8 {d18-d19}, [r2, : 128] - vmlal.s32 q0, d30, d8 - add r2, sp, #672 - vld1.8 {d20-d21}, [r2, : 128] - vmlal.s32 q5, d30, d29 - add r2, sp, #608 - vld1.8 {d24-d25}, [r2, : 128] - vmlal.s32 q1, d30, d28 - vadd.i64 q13, q0, q11 - vadd.i64 q14, q5, q11 - vmlal.s32 q6, d30, d9 - vshr.s64 q4, q13, #26 - vshr.s64 q13, q14, #26 - vadd.i64 q7, q7, q4 - vshl.i64 q4, q4, #26 - vadd.i64 q14, q7, q3 - vadd.i64 q9, q9, q13 - vshl.i64 q13, q13, #26 - vadd.i64 q15, q9, q3 - vsub.i64 q0, q0, q4 - vshr.s64 q4, q14, #25 - vsub.i64 q5, q5, q13 - vshr.s64 q13, q15, #25 - vadd.i64 q6, q6, q4 - vshl.i64 q4, q4, #25 - vadd.i64 q14, q6, q11 - vadd.i64 q2, q2, q13 - vsub.i64 q4, q7, q4 - vshr.s64 q7, q14, #26 - vshl.i64 q13, q13, #25 - vadd.i64 q14, q2, q11 - vadd.i64 q8, q8, q7 - vshl.i64 q7, q7, #26 - vadd.i64 q15, q8, q3 - vsub.i64 q9, q9, q13 - vshr.s64 q13, q14, #26 - vsub.i64 q6, q6, q7 - vshr.s64 q7, q15, #25 - vadd.i64 q10, q10, q13 - vshl.i64 q13, q13, #26 - vadd.i64 q14, q10, q3 - vadd.i64 q1, q1, q7 - add r2, r3, #144 - vshl.i64 q7, q7, #25 - add r4, r3, #96 - vadd.i64 q15, q1, q11 - add r2, r2, #8 - vsub.i64 q2, q2, q13 - add r4, r4, #8 - vshr.s64 q13, q14, #25 - vsub.i64 q7, q8, q7 - vshr.s64 q8, q15, #26 - vadd.i64 q14, q13, q13 - vadd.i64 q12, q12, q8 - vtrn.32 d12, d14 - vshl.i64 q8, q8, #26 - vtrn.32 d13, d15 - vadd.i64 q3, q12, q3 - vadd.i64 q0, q0, q14 - vst1.8 d12, [r2, : 64]! - vshl.i64 q7, q13, #4 - vst1.8 d13, [r4, : 64]! 
- vsub.i64 q1, q1, q8 - vshr.s64 q3, q3, #25 - vadd.i64 q0, q0, q7 - vadd.i64 q5, q5, q3 - vshl.i64 q3, q3, #25 - vadd.i64 q6, q5, q11 - vadd.i64 q0, q0, q13 - vshl.i64 q7, q13, #25 - vadd.i64 q8, q0, q11 - vsub.i64 q3, q12, q3 - vshr.s64 q6, q6, #26 - vsub.i64 q7, q10, q7 - vtrn.32 d2, d6 - vshr.s64 q8, q8, #26 - vtrn.32 d3, d7 - vadd.i64 q3, q9, q6 - vst1.8 d2, [r2, : 64] - vshl.i64 q6, q6, #26 - vst1.8 d3, [r4, : 64] - vadd.i64 q1, q4, q8 - vtrn.32 d4, d14 - vshl.i64 q4, q8, #26 - vtrn.32 d5, d15 - vsub.i64 q5, q5, q6 - add r2, r2, #16 - vsub.i64 q0, q0, q4 - vst1.8 d4, [r2, : 64] - add r4, r4, #16 - vst1.8 d5, [r4, : 64] - vtrn.32 d10, d6 - vtrn.32 d11, d7 - sub r2, r2, #8 - sub r4, r4, #8 - vtrn.32 d0, d2 - vtrn.32 d1, d3 - vst1.8 d10, [r2, : 64] - vst1.8 d11, [r4, : 64] - sub r2, r2, #24 - sub r4, r4, #24 - vst1.8 d0, [r2, : 64] - vst1.8 d1, [r4, : 64] - add r2, r3, #288 - add r4, r3, #336 - vld1.8 {d0-d1}, [r2, : 128]! - vld1.8 {d2-d3}, [r4, : 128]! - vsub.i32 q0, q0, q1 - vld1.8 {d2-d3}, [r2, : 128]! - vld1.8 {d4-d5}, [r4, : 128]! - vsub.i32 q1, q1, q2 - add r5, r3, #240 - vld1.8 {d4}, [r2, : 64] - vld1.8 {d6}, [r4, : 64] - vsub.i32 q2, q2, q3 - vst1.8 {d0-d1}, [r5, : 128]! - vst1.8 {d2-d3}, [r5, : 128]! - vst1.8 d4, [r5, : 64] - add r2, r3, #144 - add r4, r3, #96 - add r5, r3, #144 - add r6, r3, #192 - vld1.8 {d0-d1}, [r2, : 128]! - vld1.8 {d2-d3}, [r4, : 128]! - vsub.i32 q2, q0, q1 - vadd.i32 q0, q0, q1 - vld1.8 {d2-d3}, [r2, : 128]! - vld1.8 {d6-d7}, [r4, : 128]! - vsub.i32 q4, q1, q3 - vadd.i32 q1, q1, q3 - vld1.8 {d6}, [r2, : 64] - vld1.8 {d10}, [r4, : 64] - vsub.i32 q6, q3, q5 - vadd.i32 q3, q3, q5 - vst1.8 {d4-d5}, [r5, : 128]! - vst1.8 {d0-d1}, [r6, : 128]! - vst1.8 {d8-d9}, [r5, : 128]! - vst1.8 {d2-d3}, [r6, : 128]! - vst1.8 d12, [r5, : 64] - vst1.8 d6, [r6, : 64] - add r2, r3, #0 - add r4, r3, #240 - vld1.8 {d0-d1}, [r4, : 128]! - vld1.8 {d2-d3}, [r4, : 128]! - vld1.8 {d4}, [r4, : 64] - add r4, r3, #336 - vld1.8 {d6-d7}, [r4, : 128]! - vtrn.32 q0, q3 - vld1.8 {d8-d9}, [r4, : 128]! - vshl.i32 q5, q0, #4 - vtrn.32 q1, q4 - vshl.i32 q6, q3, #4 - vadd.i32 q5, q5, q0 - vadd.i32 q6, q6, q3 - vshl.i32 q7, q1, #4 - vld1.8 {d5}, [r4, : 64] - vshl.i32 q8, q4, #4 - vtrn.32 d4, d5 - vadd.i32 q7, q7, q1 - vadd.i32 q8, q8, q4 - vld1.8 {d18-d19}, [r2, : 128]! - vshl.i32 q10, q2, #4 - vld1.8 {d22-d23}, [r2, : 128]! - vadd.i32 q10, q10, q2 - vld1.8 {d24}, [r2, : 64] - vadd.i32 q5, q5, q0 - add r2, r3, #288 - vld1.8 {d26-d27}, [r2, : 128]! - vadd.i32 q6, q6, q3 - vld1.8 {d28-d29}, [r2, : 128]! 
- vadd.i32 q8, q8, q4 - vld1.8 {d25}, [r2, : 64] - vadd.i32 q10, q10, q2 - vtrn.32 q9, q13 - vadd.i32 q7, q7, q1 - vadd.i32 q5, q5, q0 - vtrn.32 q11, q14 - vadd.i32 q6, q6, q3 - add r2, sp, #560 - vadd.i32 q10, q10, q2 - vtrn.32 d24, d25 - vst1.8 {d12-d13}, [r2, : 128] - vshl.i32 q6, q13, #1 - add r2, sp, #576 - vst1.8 {d20-d21}, [r2, : 128] - vshl.i32 q10, q14, #1 - add r2, sp, #592 - vst1.8 {d12-d13}, [r2, : 128] - vshl.i32 q15, q12, #1 - vadd.i32 q8, q8, q4 - vext.32 d10, d31, d30, #0 - vadd.i32 q7, q7, q1 - add r2, sp, #608 - vst1.8 {d16-d17}, [r2, : 128] - vmull.s32 q8, d18, d5 - vmlal.s32 q8, d26, d4 - vmlal.s32 q8, d19, d9 - vmlal.s32 q8, d27, d3 - vmlal.s32 q8, d22, d8 - vmlal.s32 q8, d28, d2 - vmlal.s32 q8, d23, d7 - vmlal.s32 q8, d29, d1 - vmlal.s32 q8, d24, d6 - vmlal.s32 q8, d25, d0 - add r2, sp, #624 - vst1.8 {d14-d15}, [r2, : 128] - vmull.s32 q2, d18, d4 - vmlal.s32 q2, d12, d9 - vmlal.s32 q2, d13, d8 - vmlal.s32 q2, d19, d3 - vmlal.s32 q2, d22, d2 - vmlal.s32 q2, d23, d1 - vmlal.s32 q2, d24, d0 - add r2, sp, #640 - vst1.8 {d20-d21}, [r2, : 128] - vmull.s32 q7, d18, d9 - vmlal.s32 q7, d26, d3 - vmlal.s32 q7, d19, d8 - vmlal.s32 q7, d27, d2 - vmlal.s32 q7, d22, d7 - vmlal.s32 q7, d28, d1 - vmlal.s32 q7, d23, d6 - vmlal.s32 q7, d29, d0 - add r2, sp, #656 - vst1.8 {d10-d11}, [r2, : 128] - vmull.s32 q5, d18, d3 - vmlal.s32 q5, d19, d2 - vmlal.s32 q5, d22, d1 - vmlal.s32 q5, d23, d0 - vmlal.s32 q5, d12, d8 - add r2, sp, #672 - vst1.8 {d16-d17}, [r2, : 128] - vmull.s32 q4, d18, d8 - vmlal.s32 q4, d26, d2 - vmlal.s32 q4, d19, d7 - vmlal.s32 q4, d27, d1 - vmlal.s32 q4, d22, d6 - vmlal.s32 q4, d28, d0 - vmull.s32 q8, d18, d7 - vmlal.s32 q8, d26, d1 - vmlal.s32 q8, d19, d6 - vmlal.s32 q8, d27, d0 - add r2, sp, #576 - vld1.8 {d20-d21}, [r2, : 128] - vmlal.s32 q7, d24, d21 - vmlal.s32 q7, d25, d20 - vmlal.s32 q4, d23, d21 - vmlal.s32 q4, d29, d20 - vmlal.s32 q8, d22, d21 - vmlal.s32 q8, d28, d20 - vmlal.s32 q5, d24, d20 - add r2, sp, #576 - vst1.8 {d14-d15}, [r2, : 128] - vmull.s32 q7, d18, d6 - vmlal.s32 q7, d26, d0 - add r2, sp, #656 - vld1.8 {d30-d31}, [r2, : 128] - vmlal.s32 q2, d30, d21 - vmlal.s32 q7, d19, d21 - vmlal.s32 q7, d27, d20 - add r2, sp, #624 - vld1.8 {d26-d27}, [r2, : 128] - vmlal.s32 q4, d25, d27 - vmlal.s32 q8, d29, d27 - vmlal.s32 q8, d25, d26 - vmlal.s32 q7, d28, d27 - vmlal.s32 q7, d29, d26 - add r2, sp, #608 - vld1.8 {d28-d29}, [r2, : 128] - vmlal.s32 q4, d24, d29 - vmlal.s32 q8, d23, d29 - vmlal.s32 q8, d24, d28 - vmlal.s32 q7, d22, d29 - vmlal.s32 q7, d23, d28 - add r2, sp, #608 - vst1.8 {d8-d9}, [r2, : 128] - add r2, sp, #560 - vld1.8 {d8-d9}, [r2, : 128] - vmlal.s32 q7, d24, d9 - vmlal.s32 q7, d25, d31 - vmull.s32 q1, d18, d2 - vmlal.s32 q1, d19, d1 - vmlal.s32 q1, d22, d0 - vmlal.s32 q1, d24, d27 - vmlal.s32 q1, d23, d20 - vmlal.s32 q1, d12, d7 - vmlal.s32 q1, d13, d6 - vmull.s32 q6, d18, d1 - vmlal.s32 q6, d19, d0 - vmlal.s32 q6, d23, d27 - vmlal.s32 q6, d22, d20 - vmlal.s32 q6, d24, d26 - vmull.s32 q0, d18, d0 - vmlal.s32 q0, d22, d27 - vmlal.s32 q0, d23, d26 - vmlal.s32 q0, d24, d31 - vmlal.s32 q0, d19, d20 - add r2, sp, #640 - vld1.8 {d18-d19}, [r2, : 128] - vmlal.s32 q2, d18, d7 - vmlal.s32 q2, d19, d6 - vmlal.s32 q5, d18, d6 - vmlal.s32 q5, d19, d21 - vmlal.s32 q1, d18, d21 - vmlal.s32 q1, d19, d29 - vmlal.s32 q0, d18, d28 - vmlal.s32 q0, d19, d9 - vmlal.s32 q6, d18, d29 - vmlal.s32 q6, d19, d28 - add r2, sp, #592 - vld1.8 {d18-d19}, [r2, : 128] - add r2, sp, #512 - vld1.8 {d22-d23}, [r2, : 128] - vmlal.s32 q5, d19, d7 - vmlal.s32 q0, d18, d21 - 
vmlal.s32 q0, d19, d29 - vmlal.s32 q6, d18, d6 - add r2, sp, #528 - vld1.8 {d6-d7}, [r2, : 128] - vmlal.s32 q6, d19, d21 - add r2, sp, #576 - vld1.8 {d18-d19}, [r2, : 128] - vmlal.s32 q0, d30, d8 - add r2, sp, #672 - vld1.8 {d20-d21}, [r2, : 128] - vmlal.s32 q5, d30, d29 - add r2, sp, #608 - vld1.8 {d24-d25}, [r2, : 128] - vmlal.s32 q1, d30, d28 - vadd.i64 q13, q0, q11 - vadd.i64 q14, q5, q11 - vmlal.s32 q6, d30, d9 - vshr.s64 q4, q13, #26 - vshr.s64 q13, q14, #26 - vadd.i64 q7, q7, q4 - vshl.i64 q4, q4, #26 - vadd.i64 q14, q7, q3 - vadd.i64 q9, q9, q13 - vshl.i64 q13, q13, #26 - vadd.i64 q15, q9, q3 - vsub.i64 q0, q0, q4 - vshr.s64 q4, q14, #25 - vsub.i64 q5, q5, q13 - vshr.s64 q13, q15, #25 - vadd.i64 q6, q6, q4 - vshl.i64 q4, q4, #25 - vadd.i64 q14, q6, q11 - vadd.i64 q2, q2, q13 - vsub.i64 q4, q7, q4 - vshr.s64 q7, q14, #26 - vshl.i64 q13, q13, #25 - vadd.i64 q14, q2, q11 - vadd.i64 q8, q8, q7 - vshl.i64 q7, q7, #26 - vadd.i64 q15, q8, q3 - vsub.i64 q9, q9, q13 - vshr.s64 q13, q14, #26 - vsub.i64 q6, q6, q7 - vshr.s64 q7, q15, #25 - vadd.i64 q10, q10, q13 - vshl.i64 q13, q13, #26 - vadd.i64 q14, q10, q3 - vadd.i64 q1, q1, q7 - add r2, r3, #288 - vshl.i64 q7, q7, #25 - add r4, r3, #96 - vadd.i64 q15, q1, q11 - add r2, r2, #8 - vsub.i64 q2, q2, q13 - add r4, r4, #8 - vshr.s64 q13, q14, #25 - vsub.i64 q7, q8, q7 - vshr.s64 q8, q15, #26 - vadd.i64 q14, q13, q13 - vadd.i64 q12, q12, q8 - vtrn.32 d12, d14 - vshl.i64 q8, q8, #26 - vtrn.32 d13, d15 - vadd.i64 q3, q12, q3 - vadd.i64 q0, q0, q14 - vst1.8 d12, [r2, : 64]! - vshl.i64 q7, q13, #4 - vst1.8 d13, [r4, : 64]! - vsub.i64 q1, q1, q8 - vshr.s64 q3, q3, #25 - vadd.i64 q0, q0, q7 - vadd.i64 q5, q5, q3 - vshl.i64 q3, q3, #25 - vadd.i64 q6, q5, q11 - vadd.i64 q0, q0, q13 - vshl.i64 q7, q13, #25 - vadd.i64 q8, q0, q11 - vsub.i64 q3, q12, q3 - vshr.s64 q6, q6, #26 - vsub.i64 q7, q10, q7 - vtrn.32 d2, d6 - vshr.s64 q8, q8, #26 - vtrn.32 d3, d7 - vadd.i64 q3, q9, q6 - vst1.8 d2, [r2, : 64] - vshl.i64 q6, q6, #26 - vst1.8 d3, [r4, : 64] - vadd.i64 q1, q4, q8 - vtrn.32 d4, d14 - vshl.i64 q4, q8, #26 - vtrn.32 d5, d15 - vsub.i64 q5, q5, q6 - add r2, r2, #16 - vsub.i64 q0, q0, q4 - vst1.8 d4, [r2, : 64] - add r4, r4, #16 - vst1.8 d5, [r4, : 64] - vtrn.32 d10, d6 - vtrn.32 d11, d7 - sub r2, r2, #8 - sub r4, r4, #8 - vtrn.32 d0, d2 - vtrn.32 d1, d3 - vst1.8 d10, [r2, : 64] - vst1.8 d11, [r4, : 64] - sub r2, r2, #24 - sub r4, r4, #24 - vst1.8 d0, [r2, : 64] - vst1.8 d1, [r4, : 64] - add r2, sp, #544 - add r4, r3, #144 - add r5, r3, #192 - vld1.8 {d0-d1}, [r2, : 128] - vld1.8 {d2-d3}, [r4, : 128]! - vld1.8 {d4-d5}, [r5, : 128]! - vzip.i32 q1, q2 - vld1.8 {d6-d7}, [r4, : 128]! - vld1.8 {d8-d9}, [r5, : 128]! 
- vshl.i32 q5, q1, #1 - vzip.i32 q3, q4 - vshl.i32 q6, q2, #1 - vld1.8 {d14}, [r4, : 64] - vshl.i32 q8, q3, #1 - vld1.8 {d15}, [r5, : 64] - vshl.i32 q9, q4, #1 - vmul.i32 d21, d7, d1 - vtrn.32 d14, d15 - vmul.i32 q11, q4, q0 - vmul.i32 q0, q7, q0 - vmull.s32 q12, d2, d2 - vmlal.s32 q12, d11, d1 - vmlal.s32 q12, d12, d0 - vmlal.s32 q12, d13, d23 - vmlal.s32 q12, d16, d22 - vmlal.s32 q12, d7, d21 - vmull.s32 q10, d2, d11 - vmlal.s32 q10, d4, d1 - vmlal.s32 q10, d13, d0 - vmlal.s32 q10, d6, d23 - vmlal.s32 q10, d17, d22 - vmull.s32 q13, d10, d4 - vmlal.s32 q13, d11, d3 - vmlal.s32 q13, d13, d1 - vmlal.s32 q13, d16, d0 - vmlal.s32 q13, d17, d23 - vmlal.s32 q13, d8, d22 - vmull.s32 q1, d10, d5 - vmlal.s32 q1, d11, d4 - vmlal.s32 q1, d6, d1 - vmlal.s32 q1, d17, d0 - vmlal.s32 q1, d8, d23 - vmull.s32 q14, d10, d6 - vmlal.s32 q14, d11, d13 - vmlal.s32 q14, d4, d4 - vmlal.s32 q14, d17, d1 - vmlal.s32 q14, d18, d0 - vmlal.s32 q14, d9, d23 - vmull.s32 q11, d10, d7 - vmlal.s32 q11, d11, d6 - vmlal.s32 q11, d12, d5 - vmlal.s32 q11, d8, d1 - vmlal.s32 q11, d19, d0 - vmull.s32 q15, d10, d8 - vmlal.s32 q15, d11, d17 - vmlal.s32 q15, d12, d6 - vmlal.s32 q15, d13, d5 - vmlal.s32 q15, d19, d1 - vmlal.s32 q15, d14, d0 - vmull.s32 q2, d10, d9 - vmlal.s32 q2, d11, d8 - vmlal.s32 q2, d12, d7 - vmlal.s32 q2, d13, d6 - vmlal.s32 q2, d14, d1 - vmull.s32 q0, d15, d1 - vmlal.s32 q0, d10, d14 - vmlal.s32 q0, d11, d19 - vmlal.s32 q0, d12, d8 - vmlal.s32 q0, d13, d17 - vmlal.s32 q0, d6, d6 - add r2, sp, #512 - vld1.8 {d18-d19}, [r2, : 128] - vmull.s32 q3, d16, d7 - vmlal.s32 q3, d10, d15 - vmlal.s32 q3, d11, d14 - vmlal.s32 q3, d12, d9 - vmlal.s32 q3, d13, d8 - add r2, sp, #528 - vld1.8 {d8-d9}, [r2, : 128] - vadd.i64 q5, q12, q9 - vadd.i64 q6, q15, q9 - vshr.s64 q5, q5, #26 - vshr.s64 q6, q6, #26 - vadd.i64 q7, q10, q5 - vshl.i64 q5, q5, #26 - vadd.i64 q8, q7, q4 - vadd.i64 q2, q2, q6 - vshl.i64 q6, q6, #26 - vadd.i64 q10, q2, q4 - vsub.i64 q5, q12, q5 - vshr.s64 q8, q8, #25 - vsub.i64 q6, q15, q6 - vshr.s64 q10, q10, #25 - vadd.i64 q12, q13, q8 - vshl.i64 q8, q8, #25 - vadd.i64 q13, q12, q9 - vadd.i64 q0, q0, q10 - vsub.i64 q7, q7, q8 - vshr.s64 q8, q13, #26 - vshl.i64 q10, q10, #25 - vadd.i64 q13, q0, q9 - vadd.i64 q1, q1, q8 - vshl.i64 q8, q8, #26 - vadd.i64 q15, q1, q4 - vsub.i64 q2, q2, q10 - vshr.s64 q10, q13, #26 - vsub.i64 q8, q12, q8 - vshr.s64 q12, q15, #25 - vadd.i64 q3, q3, q10 - vshl.i64 q10, q10, #26 - vadd.i64 q13, q3, q4 - vadd.i64 q14, q14, q12 - add r2, r3, #144 - vshl.i64 q12, q12, #25 - add r4, r3, #192 - vadd.i64 q15, q14, q9 - add r2, r2, #8 - vsub.i64 q0, q0, q10 - add r4, r4, #8 - vshr.s64 q10, q13, #25 - vsub.i64 q1, q1, q12 - vshr.s64 q12, q15, #26 - vadd.i64 q13, q10, q10 - vadd.i64 q11, q11, q12 - vtrn.32 d16, d2 - vshl.i64 q12, q12, #26 - vtrn.32 d17, d3 - vadd.i64 q1, q11, q4 - vadd.i64 q4, q5, q13 - vst1.8 d16, [r2, : 64]! - vshl.i64 q5, q10, #4 - vst1.8 d17, [r4, : 64]! 
- vsub.i64 q8, q14, q12 - vshr.s64 q1, q1, #25 - vadd.i64 q4, q4, q5 - vadd.i64 q5, q6, q1 - vshl.i64 q1, q1, #25 - vadd.i64 q6, q5, q9 - vadd.i64 q4, q4, q10 - vshl.i64 q10, q10, #25 - vadd.i64 q9, q4, q9 - vsub.i64 q1, q11, q1 - vshr.s64 q6, q6, #26 - vsub.i64 q3, q3, q10 - vtrn.32 d16, d2 - vshr.s64 q9, q9, #26 - vtrn.32 d17, d3 - vadd.i64 q1, q2, q6 - vst1.8 d16, [r2, : 64] - vshl.i64 q2, q6, #26 - vst1.8 d17, [r4, : 64] - vadd.i64 q6, q7, q9 - vtrn.32 d0, d6 - vshl.i64 q7, q9, #26 - vtrn.32 d1, d7 - vsub.i64 q2, q5, q2 - add r2, r2, #16 - vsub.i64 q3, q4, q7 - vst1.8 d0, [r2, : 64] - add r4, r4, #16 - vst1.8 d1, [r4, : 64] - vtrn.32 d4, d2 - vtrn.32 d5, d3 - sub r2, r2, #8 - sub r4, r4, #8 - vtrn.32 d6, d12 - vtrn.32 d7, d13 - vst1.8 d4, [r2, : 64] - vst1.8 d5, [r4, : 64] - sub r2, r2, #24 - sub r4, r4, #24 - vst1.8 d6, [r2, : 64] - vst1.8 d7, [r4, : 64] - add r2, r3, #336 - add r4, r3, #288 - vld1.8 {d0-d1}, [r2, : 128]! - vld1.8 {d2-d3}, [r4, : 128]! - vadd.i32 q0, q0, q1 - vld1.8 {d2-d3}, [r2, : 128]! - vld1.8 {d4-d5}, [r4, : 128]! - vadd.i32 q1, q1, q2 - add r5, r3, #288 - vld1.8 {d4}, [r2, : 64] - vld1.8 {d6}, [r4, : 64] - vadd.i32 q2, q2, q3 - vst1.8 {d0-d1}, [r5, : 128]! - vst1.8 {d2-d3}, [r5, : 128]! - vst1.8 d4, [r5, : 64] - add r2, r3, #48 - add r4, r3, #144 - vld1.8 {d0-d1}, [r4, : 128]! - vld1.8 {d2-d3}, [r4, : 128]! - vld1.8 {d4}, [r4, : 64] - add r4, r3, #288 - vld1.8 {d6-d7}, [r4, : 128]! - vtrn.32 q0, q3 - vld1.8 {d8-d9}, [r4, : 128]! - vshl.i32 q5, q0, #4 - vtrn.32 q1, q4 - vshl.i32 q6, q3, #4 - vadd.i32 q5, q5, q0 - vadd.i32 q6, q6, q3 - vshl.i32 q7, q1, #4 - vld1.8 {d5}, [r4, : 64] - vshl.i32 q8, q4, #4 - vtrn.32 d4, d5 - vadd.i32 q7, q7, q1 - vadd.i32 q8, q8, q4 - vld1.8 {d18-d19}, [r2, : 128]! - vshl.i32 q10, q2, #4 - vld1.8 {d22-d23}, [r2, : 128]! - vadd.i32 q10, q10, q2 - vld1.8 {d24}, [r2, : 64] - vadd.i32 q5, q5, q0 - add r2, r3, #240 - vld1.8 {d26-d27}, [r2, : 128]! - vadd.i32 q6, q6, q3 - vld1.8 {d28-d29}, [r2, : 128]! 
- vadd.i32 q8, q8, q4 - vld1.8 {d25}, [r2, : 64] - vadd.i32 q10, q10, q2 - vtrn.32 q9, q13 - vadd.i32 q7, q7, q1 - vadd.i32 q5, q5, q0 - vtrn.32 q11, q14 - vadd.i32 q6, q6, q3 - add r2, sp, #560 - vadd.i32 q10, q10, q2 - vtrn.32 d24, d25 - vst1.8 {d12-d13}, [r2, : 128] - vshl.i32 q6, q13, #1 - add r2, sp, #576 - vst1.8 {d20-d21}, [r2, : 128] - vshl.i32 q10, q14, #1 - add r2, sp, #592 - vst1.8 {d12-d13}, [r2, : 128] - vshl.i32 q15, q12, #1 - vadd.i32 q8, q8, q4 - vext.32 d10, d31, d30, #0 - vadd.i32 q7, q7, q1 - add r2, sp, #608 - vst1.8 {d16-d17}, [r2, : 128] - vmull.s32 q8, d18, d5 - vmlal.s32 q8, d26, d4 - vmlal.s32 q8, d19, d9 - vmlal.s32 q8, d27, d3 - vmlal.s32 q8, d22, d8 - vmlal.s32 q8, d28, d2 - vmlal.s32 q8, d23, d7 - vmlal.s32 q8, d29, d1 - vmlal.s32 q8, d24, d6 - vmlal.s32 q8, d25, d0 - add r2, sp, #624 - vst1.8 {d14-d15}, [r2, : 128] - vmull.s32 q2, d18, d4 - vmlal.s32 q2, d12, d9 - vmlal.s32 q2, d13, d8 - vmlal.s32 q2, d19, d3 - vmlal.s32 q2, d22, d2 - vmlal.s32 q2, d23, d1 - vmlal.s32 q2, d24, d0 - add r2, sp, #640 - vst1.8 {d20-d21}, [r2, : 128] - vmull.s32 q7, d18, d9 - vmlal.s32 q7, d26, d3 - vmlal.s32 q7, d19, d8 - vmlal.s32 q7, d27, d2 - vmlal.s32 q7, d22, d7 - vmlal.s32 q7, d28, d1 - vmlal.s32 q7, d23, d6 - vmlal.s32 q7, d29, d0 - add r2, sp, #656 - vst1.8 {d10-d11}, [r2, : 128] - vmull.s32 q5, d18, d3 - vmlal.s32 q5, d19, d2 - vmlal.s32 q5, d22, d1 - vmlal.s32 q5, d23, d0 - vmlal.s32 q5, d12, d8 - add r2, sp, #672 - vst1.8 {d16-d17}, [r2, : 128] - vmull.s32 q4, d18, d8 - vmlal.s32 q4, d26, d2 - vmlal.s32 q4, d19, d7 - vmlal.s32 q4, d27, d1 - vmlal.s32 q4, d22, d6 - vmlal.s32 q4, d28, d0 - vmull.s32 q8, d18, d7 - vmlal.s32 q8, d26, d1 - vmlal.s32 q8, d19, d6 - vmlal.s32 q8, d27, d0 - add r2, sp, #576 - vld1.8 {d20-d21}, [r2, : 128] - vmlal.s32 q7, d24, d21 - vmlal.s32 q7, d25, d20 - vmlal.s32 q4, d23, d21 - vmlal.s32 q4, d29, d20 - vmlal.s32 q8, d22, d21 - vmlal.s32 q8, d28, d20 - vmlal.s32 q5, d24, d20 - add r2, sp, #576 - vst1.8 {d14-d15}, [r2, : 128] - vmull.s32 q7, d18, d6 - vmlal.s32 q7, d26, d0 - add r2, sp, #656 - vld1.8 {d30-d31}, [r2, : 128] - vmlal.s32 q2, d30, d21 - vmlal.s32 q7, d19, d21 - vmlal.s32 q7, d27, d20 - add r2, sp, #624 - vld1.8 {d26-d27}, [r2, : 128] - vmlal.s32 q4, d25, d27 - vmlal.s32 q8, d29, d27 - vmlal.s32 q8, d25, d26 - vmlal.s32 q7, d28, d27 - vmlal.s32 q7, d29, d26 - add r2, sp, #608 - vld1.8 {d28-d29}, [r2, : 128] - vmlal.s32 q4, d24, d29 - vmlal.s32 q8, d23, d29 - vmlal.s32 q8, d24, d28 - vmlal.s32 q7, d22, d29 - vmlal.s32 q7, d23, d28 - add r2, sp, #608 - vst1.8 {d8-d9}, [r2, : 128] - add r2, sp, #560 - vld1.8 {d8-d9}, [r2, : 128] - vmlal.s32 q7, d24, d9 - vmlal.s32 q7, d25, d31 - vmull.s32 q1, d18, d2 - vmlal.s32 q1, d19, d1 - vmlal.s32 q1, d22, d0 - vmlal.s32 q1, d24, d27 - vmlal.s32 q1, d23, d20 - vmlal.s32 q1, d12, d7 - vmlal.s32 q1, d13, d6 - vmull.s32 q6, d18, d1 - vmlal.s32 q6, d19, d0 - vmlal.s32 q6, d23, d27 - vmlal.s32 q6, d22, d20 - vmlal.s32 q6, d24, d26 - vmull.s32 q0, d18, d0 - vmlal.s32 q0, d22, d27 - vmlal.s32 q0, d23, d26 - vmlal.s32 q0, d24, d31 - vmlal.s32 q0, d19, d20 - add r2, sp, #640 - vld1.8 {d18-d19}, [r2, : 128] - vmlal.s32 q2, d18, d7 - vmlal.s32 q2, d19, d6 - vmlal.s32 q5, d18, d6 - vmlal.s32 q5, d19, d21 - vmlal.s32 q1, d18, d21 - vmlal.s32 q1, d19, d29 - vmlal.s32 q0, d18, d28 - vmlal.s32 q0, d19, d9 - vmlal.s32 q6, d18, d29 - vmlal.s32 q6, d19, d28 - add r2, sp, #592 - vld1.8 {d18-d19}, [r2, : 128] - add r2, sp, #512 - vld1.8 {d22-d23}, [r2, : 128] - vmlal.s32 q5, d19, d7 - vmlal.s32 q0, d18, d21 - 
vmlal.s32 q0, d19, d29 - vmlal.s32 q6, d18, d6 - add r2, sp, #528 - vld1.8 {d6-d7}, [r2, : 128] - vmlal.s32 q6, d19, d21 - add r2, sp, #576 - vld1.8 {d18-d19}, [r2, : 128] - vmlal.s32 q0, d30, d8 - add r2, sp, #672 - vld1.8 {d20-d21}, [r2, : 128] - vmlal.s32 q5, d30, d29 - add r2, sp, #608 - vld1.8 {d24-d25}, [r2, : 128] - vmlal.s32 q1, d30, d28 - vadd.i64 q13, q0, q11 - vadd.i64 q14, q5, q11 - vmlal.s32 q6, d30, d9 - vshr.s64 q4, q13, #26 - vshr.s64 q13, q14, #26 - vadd.i64 q7, q7, q4 - vshl.i64 q4, q4, #26 - vadd.i64 q14, q7, q3 - vadd.i64 q9, q9, q13 - vshl.i64 q13, q13, #26 - vadd.i64 q15, q9, q3 - vsub.i64 q0, q0, q4 - vshr.s64 q4, q14, #25 - vsub.i64 q5, q5, q13 - vshr.s64 q13, q15, #25 - vadd.i64 q6, q6, q4 - vshl.i64 q4, q4, #25 - vadd.i64 q14, q6, q11 - vadd.i64 q2, q2, q13 - vsub.i64 q4, q7, q4 - vshr.s64 q7, q14, #26 - vshl.i64 q13, q13, #25 - vadd.i64 q14, q2, q11 - vadd.i64 q8, q8, q7 - vshl.i64 q7, q7, #26 - vadd.i64 q15, q8, q3 - vsub.i64 q9, q9, q13 - vshr.s64 q13, q14, #26 - vsub.i64 q6, q6, q7 - vshr.s64 q7, q15, #25 - vadd.i64 q10, q10, q13 - vshl.i64 q13, q13, #26 - vadd.i64 q14, q10, q3 - vadd.i64 q1, q1, q7 - add r2, r3, #240 - vshl.i64 q7, q7, #25 - add r4, r3, #144 - vadd.i64 q15, q1, q11 - add r2, r2, #8 - vsub.i64 q2, q2, q13 - add r4, r4, #8 - vshr.s64 q13, q14, #25 - vsub.i64 q7, q8, q7 - vshr.s64 q8, q15, #26 - vadd.i64 q14, q13, q13 - vadd.i64 q12, q12, q8 - vtrn.32 d12, d14 - vshl.i64 q8, q8, #26 - vtrn.32 d13, d15 - vadd.i64 q3, q12, q3 - vadd.i64 q0, q0, q14 - vst1.8 d12, [r2, : 64]! - vshl.i64 q7, q13, #4 - vst1.8 d13, [r4, : 64]! - vsub.i64 q1, q1, q8 - vshr.s64 q3, q3, #25 - vadd.i64 q0, q0, q7 - vadd.i64 q5, q5, q3 - vshl.i64 q3, q3, #25 - vadd.i64 q6, q5, q11 - vadd.i64 q0, q0, q13 - vshl.i64 q7, q13, #25 - vadd.i64 q8, q0, q11 - vsub.i64 q3, q12, q3 - vshr.s64 q6, q6, #26 - vsub.i64 q7, q10, q7 - vtrn.32 d2, d6 - vshr.s64 q8, q8, #26 - vtrn.32 d3, d7 - vadd.i64 q3, q9, q6 - vst1.8 d2, [r2, : 64] - vshl.i64 q6, q6, #26 - vst1.8 d3, [r4, : 64] - vadd.i64 q1, q4, q8 - vtrn.32 d4, d14 - vshl.i64 q4, q8, #26 - vtrn.32 d5, d15 - vsub.i64 q5, q5, q6 - add r2, r2, #16 - vsub.i64 q0, q0, q4 - vst1.8 d4, [r2, : 64] - add r4, r4, #16 - vst1.8 d5, [r4, : 64] - vtrn.32 d10, d6 - vtrn.32 d11, d7 - sub r2, r2, #8 - sub r4, r4, #8 - vtrn.32 d0, d2 - vtrn.32 d1, d3 - vst1.8 d10, [r2, : 64] - vst1.8 d11, [r4, : 64] - sub r2, r2, #24 - sub r4, r4, #24 - vst1.8 d0, [r2, : 64] - vst1.8 d1, [r4, : 64] - ldr r2, [sp, #488] - ldr r4, [sp, #492] - subs r5, r2, #1 - bge ._mainloop - add r1, r3, #144 - add r2, r3, #336 - vld1.8 {d0-d1}, [r1, : 128]! - vld1.8 {d2-d3}, [r1, : 128]! - vld1.8 {d4}, [r1, : 64] - vst1.8 {d0-d1}, [r2, : 128]! - vst1.8 {d2-d3}, [r2, : 128]! - vst1.8 d4, [r2, : 64] - ldr r1, =0 -._invertloop: - add r2, r3, #144 - ldr r4, =0 - ldr r5, =2 - cmp r1, #1 - ldreq r5, =1 - addeq r2, r3, #336 - addeq r4, r3, #48 - cmp r1, #2 - ldreq r5, =1 - addeq r2, r3, #48 - cmp r1, #3 - ldreq r5, =5 - addeq r4, r3, #336 - cmp r1, #4 - ldreq r5, =10 - cmp r1, #5 - ldreq r5, =20 - cmp r1, #6 - ldreq r5, =10 - addeq r2, r3, #336 - addeq r4, r3, #336 - cmp r1, #7 - ldreq r5, =50 - cmp r1, #8 - ldreq r5, =100 - cmp r1, #9 - ldreq r5, =50 - addeq r2, r3, #336 - cmp r1, #10 - ldreq r5, =5 - addeq r2, r3, #48 - cmp r1, #11 - ldreq r5, =0 - addeq r2, r3, #96 - add r6, r3, #144 - add r7, r3, #288 - vld1.8 {d0-d1}, [r6, : 128]! - vld1.8 {d2-d3}, [r6, : 128]! - vld1.8 {d4}, [r6, : 64] - vst1.8 {d0-d1}, [r7, : 128]! - vst1.8 {d2-d3}, [r7, : 128]! 
- vst1.8 d4, [r7, : 64] - cmp r5, #0 - beq ._skipsquaringloop -._squaringloop: - add r6, r3, #288 - add r7, r3, #288 - add r8, r3, #288 - vmov.i32 q0, #19 - vmov.i32 q1, #0 - vmov.i32 q2, #1 - vzip.i32 q1, q2 - vld1.8 {d4-d5}, [r7, : 128]! - vld1.8 {d6-d7}, [r7, : 128]! - vld1.8 {d9}, [r7, : 64] - vld1.8 {d10-d11}, [r6, : 128]! - add r7, sp, #416 - vld1.8 {d12-d13}, [r6, : 128]! - vmul.i32 q7, q2, q0 - vld1.8 {d8}, [r6, : 64] - vext.32 d17, d11, d10, #1 - vmul.i32 q9, q3, q0 - vext.32 d16, d10, d8, #1 - vshl.u32 q10, q5, q1 - vext.32 d22, d14, d4, #1 - vext.32 d24, d18, d6, #1 - vshl.u32 q13, q6, q1 - vshl.u32 d28, d8, d2 - vrev64.i32 d22, d22 - vmul.i32 d1, d9, d1 - vrev64.i32 d24, d24 - vext.32 d29, d8, d13, #1 - vext.32 d0, d1, d9, #1 - vrev64.i32 d0, d0 - vext.32 d2, d9, d1, #1 - vext.32 d23, d15, d5, #1 - vmull.s32 q4, d20, d4 - vrev64.i32 d23, d23 - vmlal.s32 q4, d21, d1 - vrev64.i32 d2, d2 - vmlal.s32 q4, d26, d19 - vext.32 d3, d5, d15, #1 - vmlal.s32 q4, d27, d18 - vrev64.i32 d3, d3 - vmlal.s32 q4, d28, d15 - vext.32 d14, d12, d11, #1 - vmull.s32 q5, d16, d23 - vext.32 d15, d13, d12, #1 - vmlal.s32 q5, d17, d4 - vst1.8 d8, [r7, : 64]! - vmlal.s32 q5, d14, d1 - vext.32 d12, d9, d8, #0 - vmlal.s32 q5, d15, d19 - vmov.i64 d13, #0 - vmlal.s32 q5, d29, d18 - vext.32 d25, d19, d7, #1 - vmlal.s32 q6, d20, d5 - vrev64.i32 d25, d25 - vmlal.s32 q6, d21, d4 - vst1.8 d11, [r7, : 64]! - vmlal.s32 q6, d26, d1 - vext.32 d9, d10, d10, #0 - vmlal.s32 q6, d27, d19 - vmov.i64 d8, #0 - vmlal.s32 q6, d28, d18 - vmlal.s32 q4, d16, d24 - vmlal.s32 q4, d17, d5 - vmlal.s32 q4, d14, d4 - vst1.8 d12, [r7, : 64]! - vmlal.s32 q4, d15, d1 - vext.32 d10, d13, d12, #0 - vmlal.s32 q4, d29, d19 - vmov.i64 d11, #0 - vmlal.s32 q5, d20, d6 - vmlal.s32 q5, d21, d5 - vmlal.s32 q5, d26, d4 - vext.32 d13, d8, d8, #0 - vmlal.s32 q5, d27, d1 - vmov.i64 d12, #0 - vmlal.s32 q5, d28, d19 - vst1.8 d9, [r7, : 64]! - vmlal.s32 q6, d16, d25 - vmlal.s32 q6, d17, d6 - vst1.8 d10, [r7, : 64] - vmlal.s32 q6, d14, d5 - vext.32 d8, d11, d10, #0 - vmlal.s32 q6, d15, d4 - vmov.i64 d9, #0 - vmlal.s32 q6, d29, d1 - vmlal.s32 q4, d20, d7 - vmlal.s32 q4, d21, d6 - vmlal.s32 q4, d26, d5 - vext.32 d11, d12, d12, #0 - vmlal.s32 q4, d27, d4 - vmov.i64 d10, #0 - vmlal.s32 q4, d28, d1 - vmlal.s32 q5, d16, d0 - sub r6, r7, #32 - vmlal.s32 q5, d17, d7 - vmlal.s32 q5, d14, d6 - vext.32 d30, d9, d8, #0 - vmlal.s32 q5, d15, d5 - vld1.8 {d31}, [r6, : 64]! - vmlal.s32 q5, d29, d4 - vmlal.s32 q15, d20, d0 - vext.32 d0, d6, d18, #1 - vmlal.s32 q15, d21, d25 - vrev64.i32 d0, d0 - vmlal.s32 q15, d26, d24 - vext.32 d1, d7, d19, #1 - vext.32 d7, d10, d10, #0 - vmlal.s32 q15, d27, d23 - vrev64.i32 d1, d1 - vld1.8 {d6}, [r6, : 64] - vmlal.s32 q15, d28, d22 - vmlal.s32 q3, d16, d4 - add r6, r6, #24 - vmlal.s32 q3, d17, d2 - vext.32 d4, d31, d30, #0 - vmov d17, d11 - vmlal.s32 q3, d14, d1 - vext.32 d11, d13, d13, #0 - vext.32 d13, d30, d30, #0 - vmlal.s32 q3, d15, d0 - vext.32 d1, d8, d8, #0 - vmlal.s32 q3, d29, d3 - vld1.8 {d5}, [r6, : 64] - sub r6, r6, #16 - vext.32 d10, d6, d6, #0 - vmov.i32 q1, #0xffffffff - vshl.i64 q4, q1, #25 - add r7, sp, #512 - vld1.8 {d14-d15}, [r7, : 128] - vadd.i64 q9, q2, q7 - vshl.i64 q1, q1, #26 - vshr.s64 q10, q9, #26 - vld1.8 {d0}, [r6, : 64]! - vadd.i64 q5, q5, q10 - vand q9, q9, q1 - vld1.8 {d16}, [r6, : 64]! 
- add r6, sp, #528 - vld1.8 {d20-d21}, [r6, : 128] - vadd.i64 q11, q5, q10 - vsub.i64 q2, q2, q9 - vshr.s64 q9, q11, #25 - vext.32 d12, d5, d4, #0 - vand q11, q11, q4 - vadd.i64 q0, q0, q9 - vmov d19, d7 - vadd.i64 q3, q0, q7 - vsub.i64 q5, q5, q11 - vshr.s64 q11, q3, #26 - vext.32 d18, d11, d10, #0 - vand q3, q3, q1 - vadd.i64 q8, q8, q11 - vadd.i64 q11, q8, q10 - vsub.i64 q0, q0, q3 - vshr.s64 q3, q11, #25 - vand q11, q11, q4 - vadd.i64 q3, q6, q3 - vadd.i64 q6, q3, q7 - vsub.i64 q8, q8, q11 - vshr.s64 q11, q6, #26 - vand q6, q6, q1 - vadd.i64 q9, q9, q11 - vadd.i64 d25, d19, d21 - vsub.i64 q3, q3, q6 - vshr.s64 d23, d25, #25 - vand q4, q12, q4 - vadd.i64 d21, d23, d23 - vshl.i64 d25, d23, #4 - vadd.i64 d21, d21, d23 - vadd.i64 d25, d25, d21 - vadd.i64 d4, d4, d25 - vzip.i32 q0, q8 - vadd.i64 d12, d4, d14 - add r6, r8, #8 - vst1.8 d0, [r6, : 64] - vsub.i64 d19, d19, d9 - add r6, r6, #16 - vst1.8 d16, [r6, : 64] - vshr.s64 d22, d12, #26 - vand q0, q6, q1 - vadd.i64 d10, d10, d22 - vzip.i32 q3, q9 - vsub.i64 d4, d4, d0 - sub r6, r6, #8 - vst1.8 d6, [r6, : 64] - add r6, r6, #16 - vst1.8 d18, [r6, : 64] - vzip.i32 q2, q5 - sub r6, r6, #32 - vst1.8 d4, [r6, : 64] - subs r5, r5, #1 - bhi ._squaringloop -._skipsquaringloop: - mov r2, r2 - add r5, r3, #288 - add r6, r3, #144 - vmov.i32 q0, #19 - vmov.i32 q1, #0 - vmov.i32 q2, #1 - vzip.i32 q1, q2 - vld1.8 {d4-d5}, [r5, : 128]! - vld1.8 {d6-d7}, [r5, : 128]! - vld1.8 {d9}, [r5, : 64] - vld1.8 {d10-d11}, [r2, : 128]! - add r5, sp, #416 - vld1.8 {d12-d13}, [r2, : 128]! - vmul.i32 q7, q2, q0 - vld1.8 {d8}, [r2, : 64] - vext.32 d17, d11, d10, #1 - vmul.i32 q9, q3, q0 - vext.32 d16, d10, d8, #1 - vshl.u32 q10, q5, q1 - vext.32 d22, d14, d4, #1 - vext.32 d24, d18, d6, #1 - vshl.u32 q13, q6, q1 - vshl.u32 d28, d8, d2 - vrev64.i32 d22, d22 - vmul.i32 d1, d9, d1 - vrev64.i32 d24, d24 - vext.32 d29, d8, d13, #1 - vext.32 d0, d1, d9, #1 - vrev64.i32 d0, d0 - vext.32 d2, d9, d1, #1 - vext.32 d23, d15, d5, #1 - vmull.s32 q4, d20, d4 - vrev64.i32 d23, d23 - vmlal.s32 q4, d21, d1 - vrev64.i32 d2, d2 - vmlal.s32 q4, d26, d19 - vext.32 d3, d5, d15, #1 - vmlal.s32 q4, d27, d18 - vrev64.i32 d3, d3 - vmlal.s32 q4, d28, d15 - vext.32 d14, d12, d11, #1 - vmull.s32 q5, d16, d23 - vext.32 d15, d13, d12, #1 - vmlal.s32 q5, d17, d4 - vst1.8 d8, [r5, : 64]! - vmlal.s32 q5, d14, d1 - vext.32 d12, d9, d8, #0 - vmlal.s32 q5, d15, d19 - vmov.i64 d13, #0 - vmlal.s32 q5, d29, d18 - vext.32 d25, d19, d7, #1 - vmlal.s32 q6, d20, d5 - vrev64.i32 d25, d25 - vmlal.s32 q6, d21, d4 - vst1.8 d11, [r5, : 64]! - vmlal.s32 q6, d26, d1 - vext.32 d9, d10, d10, #0 - vmlal.s32 q6, d27, d19 - vmov.i64 d8, #0 - vmlal.s32 q6, d28, d18 - vmlal.s32 q4, d16, d24 - vmlal.s32 q4, d17, d5 - vmlal.s32 q4, d14, d4 - vst1.8 d12, [r5, : 64]! - vmlal.s32 q4, d15, d1 - vext.32 d10, d13, d12, #0 - vmlal.s32 q4, d29, d19 - vmov.i64 d11, #0 - vmlal.s32 q5, d20, d6 - vmlal.s32 q5, d21, d5 - vmlal.s32 q5, d26, d4 - vext.32 d13, d8, d8, #0 - vmlal.s32 q5, d27, d1 - vmov.i64 d12, #0 - vmlal.s32 q5, d28, d19 - vst1.8 d9, [r5, : 64]! 
- vmlal.s32 q6, d16, d25 - vmlal.s32 q6, d17, d6 - vst1.8 d10, [r5, : 64] - vmlal.s32 q6, d14, d5 - vext.32 d8, d11, d10, #0 - vmlal.s32 q6, d15, d4 - vmov.i64 d9, #0 - vmlal.s32 q6, d29, d1 - vmlal.s32 q4, d20, d7 - vmlal.s32 q4, d21, d6 - vmlal.s32 q4, d26, d5 - vext.32 d11, d12, d12, #0 - vmlal.s32 q4, d27, d4 - vmov.i64 d10, #0 - vmlal.s32 q4, d28, d1 - vmlal.s32 q5, d16, d0 - sub r2, r5, #32 - vmlal.s32 q5, d17, d7 - vmlal.s32 q5, d14, d6 - vext.32 d30, d9, d8, #0 - vmlal.s32 q5, d15, d5 - vld1.8 {d31}, [r2, : 64]! - vmlal.s32 q5, d29, d4 - vmlal.s32 q15, d20, d0 - vext.32 d0, d6, d18, #1 - vmlal.s32 q15, d21, d25 - vrev64.i32 d0, d0 - vmlal.s32 q15, d26, d24 - vext.32 d1, d7, d19, #1 - vext.32 d7, d10, d10, #0 - vmlal.s32 q15, d27, d23 - vrev64.i32 d1, d1 - vld1.8 {d6}, [r2, : 64] - vmlal.s32 q15, d28, d22 - vmlal.s32 q3, d16, d4 - add r2, r2, #24 - vmlal.s32 q3, d17, d2 - vext.32 d4, d31, d30, #0 - vmov d17, d11 - vmlal.s32 q3, d14, d1 - vext.32 d11, d13, d13, #0 - vext.32 d13, d30, d30, #0 - vmlal.s32 q3, d15, d0 - vext.32 d1, d8, d8, #0 - vmlal.s32 q3, d29, d3 - vld1.8 {d5}, [r2, : 64] - sub r2, r2, #16 - vext.32 d10, d6, d6, #0 - vmov.i32 q1, #0xffffffff - vshl.i64 q4, q1, #25 - add r5, sp, #512 - vld1.8 {d14-d15}, [r5, : 128] - vadd.i64 q9, q2, q7 - vshl.i64 q1, q1, #26 - vshr.s64 q10, q9, #26 - vld1.8 {d0}, [r2, : 64]! - vadd.i64 q5, q5, q10 - vand q9, q9, q1 - vld1.8 {d16}, [r2, : 64]! - add r2, sp, #528 - vld1.8 {d20-d21}, [r2, : 128] - vadd.i64 q11, q5, q10 - vsub.i64 q2, q2, q9 - vshr.s64 q9, q11, #25 - vext.32 d12, d5, d4, #0 - vand q11, q11, q4 - vadd.i64 q0, q0, q9 - vmov d19, d7 - vadd.i64 q3, q0, q7 - vsub.i64 q5, q5, q11 - vshr.s64 q11, q3, #26 - vext.32 d18, d11, d10, #0 - vand q3, q3, q1 - vadd.i64 q8, q8, q11 - vadd.i64 q11, q8, q10 - vsub.i64 q0, q0, q3 - vshr.s64 q3, q11, #25 - vand q11, q11, q4 - vadd.i64 q3, q6, q3 - vadd.i64 q6, q3, q7 - vsub.i64 q8, q8, q11 - vshr.s64 q11, q6, #26 - vand q6, q6, q1 - vadd.i64 q9, q9, q11 - vadd.i64 d25, d19, d21 - vsub.i64 q3, q3, q6 - vshr.s64 d23, d25, #25 - vand q4, q12, q4 - vadd.i64 d21, d23, d23 - vshl.i64 d25, d23, #4 - vadd.i64 d21, d21, d23 - vadd.i64 d25, d25, d21 - vadd.i64 d4, d4, d25 - vzip.i32 q0, q8 - vadd.i64 d12, d4, d14 - add r2, r6, #8 - vst1.8 d0, [r2, : 64] - vsub.i64 d19, d19, d9 - add r2, r2, #16 - vst1.8 d16, [r2, : 64] - vshr.s64 d22, d12, #26 - vand q0, q6, q1 - vadd.i64 d10, d10, d22 - vzip.i32 q3, q9 - vsub.i64 d4, d4, d0 - sub r2, r2, #8 - vst1.8 d6, [r2, : 64] - add r2, r2, #16 - vst1.8 d18, [r2, : 64] - vzip.i32 q2, q5 - sub r2, r2, #32 - vst1.8 d4, [r2, : 64] - cmp r4, #0 - beq ._skippostcopy - add r2, r3, #144 - mov r4, r4 - vld1.8 {d0-d1}, [r2, : 128]! - vld1.8 {d2-d3}, [r2, : 128]! - vld1.8 {d4}, [r2, : 64] - vst1.8 {d0-d1}, [r4, : 128]! - vst1.8 {d2-d3}, [r4, : 128]! - vst1.8 d4, [r4, : 64] -._skippostcopy: - cmp r1, #1 - bne ._skipfinalcopy - add r2, r3, #288 - add r4, r3, #144 - vld1.8 {d0-d1}, [r2, : 128]! - vld1.8 {d2-d3}, [r2, : 128]! - vld1.8 {d4}, [r2, : 64] - vst1.8 {d0-d1}, [r4, : 128]! - vst1.8 {d2-d3}, [r4, : 128]! 
- vst1.8 d4, [r4, : 64] -._skipfinalcopy: - add r1, r1, #1 - cmp r1, #12 - blo ._invertloop - add r1, r3, #144 - ldr r2, [r1], #4 - ldr r3, [r1], #4 - ldr r4, [r1], #4 - ldr r5, [r1], #4 - ldr r6, [r1], #4 - ldr r7, [r1], #4 - ldr r8, [r1], #4 - ldr r9, [r1], #4 - ldr r10, [r1], #4 - ldr r1, [r1] - add r11, r1, r1, LSL #4 - add r11, r11, r1, LSL #1 - add r11, r11, #16777216 - mov r11, r11, ASR #25 - add r11, r11, r2 - mov r11, r11, ASR #26 - add r11, r11, r3 - mov r11, r11, ASR #25 - add r11, r11, r4 - mov r11, r11, ASR #26 - add r11, r11, r5 - mov r11, r11, ASR #25 - add r11, r11, r6 - mov r11, r11, ASR #26 - add r11, r11, r7 - mov r11, r11, ASR #25 - add r11, r11, r8 - mov r11, r11, ASR #26 - add r11, r11, r9 - mov r11, r11, ASR #25 - add r11, r11, r10 - mov r11, r11, ASR #26 - add r11, r11, r1 - mov r11, r11, ASR #25 - add r2, r2, r11 - add r2, r2, r11, LSL #1 - add r2, r2, r11, LSL #4 - mov r11, r2, ASR #26 - add r3, r3, r11 - sub r2, r2, r11, LSL #26 - mov r11, r3, ASR #25 - add r4, r4, r11 - sub r3, r3, r11, LSL #25 - mov r11, r4, ASR #26 - add r5, r5, r11 - sub r4, r4, r11, LSL #26 - mov r11, r5, ASR #25 - add r6, r6, r11 - sub r5, r5, r11, LSL #25 - mov r11, r6, ASR #26 - add r7, r7, r11 - sub r6, r6, r11, LSL #26 - mov r11, r7, ASR #25 - add r8, r8, r11 - sub r7, r7, r11, LSL #25 - mov r11, r8, ASR #26 - add r9, r9, r11 - sub r8, r8, r11, LSL #26 - mov r11, r9, ASR #25 - add r10, r10, r11 - sub r9, r9, r11, LSL #25 - mov r11, r10, ASR #26 - add r1, r1, r11 - sub r10, r10, r11, LSL #26 - mov r11, r1, ASR #25 - sub r1, r1, r11, LSL #25 - add r2, r2, r3, LSL #26 - mov r3, r3, LSR #6 - add r3, r3, r4, LSL #19 - mov r4, r4, LSR #13 - add r4, r4, r5, LSL #13 - mov r5, r5, LSR #19 - add r5, r5, r6, LSL #6 - add r6, r7, r8, LSL #25 - mov r7, r8, LSR #7 - add r7, r7, r9, LSL #19 - mov r8, r9, LSR #13 - add r8, r8, r10, LSL #12 - mov r9, r10, LSR #20 - add r1, r9, r1, LSL #6 - str r2, [r0], #4 - str r3, [r0], #4 - str r4, [r0], #4 - str r5, [r0], #4 - str r6, [r0], #4 - str r7, [r0], #4 - str r8, [r0], #4 - str r1, [r0] - ldrd r4, [sp, #0] - ldrd r6, [sp, #8] - ldrd r8, [sp, #16] - ldrd r10, [sp, #24] - ldr r12, [sp, #480] - ldr r14, [sp, #484] - ldr r0, =0 - mov sp, r12 - vpop {q4, q5, q6, q7} - bx lr diff --git a/lib/zinc/curve25519/curve25519-arm.S b/lib/zinc/curve25519/curve25519-arm.S new file mode 100644 index 000000000000..b63ac48e7f8d --- /dev/null +++ b/lib/zinc/curve25519/curve25519-arm.S @@ -0,0 +1,2064 @@ +/* SPDX-License-Identifier: GPL-2.0 OR MIT */ +/* + * Copyright (C) 2015-2018 Jason A. Donenfeld . All Rights Reserved. + * + * Based on public domain code from Daniel J. Bernstein and Peter Schwabe. This + * began from SUPERCOP's curve25519/neon2/scalarmult.s, but has subsequently been + * manually reworked for use in kernel space. + */ + +#if defined(CONFIG_KERNEL_MODE_NEON) && !defined(__ARMEB__) +#include + +.text +.fpu neon +.arch armv7-a +.align 4 + +ENTRY(curve25519_neon) + push {r4-r11, lr} + mov ip, sp + sub r3, sp, #704 + and r3, r3, #0xfffffff0 + mov sp, r3 + movw r4, #0 + movw r5, #254 + vmov.i32 q0, #1 + vshr.u64 q1, q0, #7 + vshr.u64 q0, q0, #8 + vmov.i32 d4, #19 + vmov.i32 d5, #38 + add r6, sp, #480 + vst1.8 {d2-d3}, [r6, : 128]! + vst1.8 {d0-d1}, [r6, : 128]! + vst1.8 {d4-d5}, [r6, : 128] + add r6, r3, #0 + vmov.i32 q2, #0 + vst1.8 {d4-d5}, [r6, : 128]! + vst1.8 {d4-d5}, [r6, : 128]! + vst1.8 d4, [r6, : 64] + add r6, r3, #0 + movw r7, #960 + sub r7, r7, #2 + neg r7, r7 + sub r7, r7, r7, LSL #7 + str r7, [r6] + add r6, sp, #672 + vld1.8 {d4-d5}, [r1]! 
+ vld1.8 {d6-d7}, [r1] + vst1.8 {d4-d5}, [r6, : 128]! + vst1.8 {d6-d7}, [r6, : 128] + sub r1, r6, #16 + ldrb r6, [r1] + and r6, r6, #248 + strb r6, [r1] + ldrb r6, [r1, #31] + and r6, r6, #127 + orr r6, r6, #64 + strb r6, [r1, #31] + vmov.i64 q2, #0xffffffff + vshr.u64 q3, q2, #7 + vshr.u64 q2, q2, #6 + vld1.8 {d8}, [r2] + vld1.8 {d10}, [r2] + add r2, r2, #6 + vld1.8 {d12}, [r2] + vld1.8 {d14}, [r2] + add r2, r2, #6 + vld1.8 {d16}, [r2] + add r2, r2, #4 + vld1.8 {d18}, [r2] + vld1.8 {d20}, [r2] + add r2, r2, #6 + vld1.8 {d22}, [r2] + add r2, r2, #2 + vld1.8 {d24}, [r2] + vld1.8 {d26}, [r2] + vshr.u64 q5, q5, #26 + vshr.u64 q6, q6, #3 + vshr.u64 q7, q7, #29 + vshr.u64 q8, q8, #6 + vshr.u64 q10, q10, #25 + vshr.u64 q11, q11, #3 + vshr.u64 q12, q12, #12 + vshr.u64 q13, q13, #38 + vand q4, q4, q2 + vand q6, q6, q2 + vand q8, q8, q2 + vand q10, q10, q2 + vand q2, q12, q2 + vand q5, q5, q3 + vand q7, q7, q3 + vand q9, q9, q3 + vand q11, q11, q3 + vand q3, q13, q3 + add r2, r3, #48 + vadd.i64 q12, q4, q1 + vadd.i64 q13, q10, q1 + vshr.s64 q12, q12, #26 + vshr.s64 q13, q13, #26 + vadd.i64 q5, q5, q12 + vshl.i64 q12, q12, #26 + vadd.i64 q14, q5, q0 + vadd.i64 q11, q11, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q15, q11, q0 + vsub.i64 q4, q4, q12 + vshr.s64 q12, q14, #25 + vsub.i64 q10, q10, q13 + vshr.s64 q13, q15, #25 + vadd.i64 q6, q6, q12 + vshl.i64 q12, q12, #25 + vadd.i64 q14, q6, q1 + vadd.i64 q2, q2, q13 + vsub.i64 q5, q5, q12 + vshr.s64 q12, q14, #26 + vshl.i64 q13, q13, #25 + vadd.i64 q14, q2, q1 + vadd.i64 q7, q7, q12 + vshl.i64 q12, q12, #26 + vadd.i64 q15, q7, q0 + vsub.i64 q11, q11, q13 + vshr.s64 q13, q14, #26 + vsub.i64 q6, q6, q12 + vshr.s64 q12, q15, #25 + vadd.i64 q3, q3, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q14, q3, q0 + vadd.i64 q8, q8, q12 + vshl.i64 q12, q12, #25 + vadd.i64 q15, q8, q1 + add r2, r2, #8 + vsub.i64 q2, q2, q13 + vshr.s64 q13, q14, #25 + vsub.i64 q7, q7, q12 + vshr.s64 q12, q15, #26 + vadd.i64 q14, q13, q13 + vadd.i64 q9, q9, q12 + vtrn.32 d12, d14 + vshl.i64 q12, q12, #26 + vtrn.32 d13, d15 + vadd.i64 q0, q9, q0 + vadd.i64 q4, q4, q14 + vst1.8 d12, [r2, : 64]! + vshl.i64 q6, q13, #4 + vsub.i64 q7, q8, q12 + vshr.s64 q0, q0, #25 + vadd.i64 q4, q4, q6 + vadd.i64 q6, q10, q0 + vshl.i64 q0, q0, #25 + vadd.i64 q8, q6, q1 + vadd.i64 q4, q4, q13 + vshl.i64 q10, q13, #25 + vadd.i64 q1, q4, q1 + vsub.i64 q0, q9, q0 + vshr.s64 q8, q8, #26 + vsub.i64 q3, q3, q10 + vtrn.32 d14, d0 + vshr.s64 q1, q1, #26 + vtrn.32 d15, d1 + vadd.i64 q0, q11, q8 + vst1.8 d14, [r2, : 64] + vshl.i64 q7, q8, #26 + vadd.i64 q5, q5, q1 + vtrn.32 d4, d6 + vshl.i64 q1, q1, #26 + vtrn.32 d5, d7 + vsub.i64 q3, q6, q7 + add r2, r2, #16 + vsub.i64 q1, q4, q1 + vst1.8 d4, [r2, : 64] + vtrn.32 d6, d0 + vtrn.32 d7, d1 + sub r2, r2, #8 + vtrn.32 d2, d10 + vtrn.32 d3, d11 + vst1.8 d6, [r2, : 64] + sub r2, r2, #24 + vst1.8 d2, [r2, : 64] + add r2, r3, #96 + vmov.i32 q0, #0 + vmov.i64 d2, #0xff + vmov.i64 d3, #0 + vshr.u32 q1, q1, #7 + vst1.8 {d2-d3}, [r2, : 128]! + vst1.8 {d0-d1}, [r2, : 128]! + vst1.8 d0, [r2, : 64] + add r2, r3, #144 + vmov.i32 q0, #0 + vst1.8 {d0-d1}, [r2, : 128]! + vst1.8 {d0-d1}, [r2, : 128]! + vst1.8 d0, [r2, : 64] + add r2, r3, #240 + vmov.i32 q0, #0 + vmov.i64 d2, #0xff + vmov.i64 d3, #0 + vshr.u32 q1, q1, #7 + vst1.8 {d2-d3}, [r2, : 128]! + vst1.8 {d0-d1}, [r2, : 128]! + vst1.8 d0, [r2, : 64] + add r2, r3, #48 + add r6, r3, #192 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d4}, [r2, : 64] + vst1.8 {d0-d1}, [r6, : 128]! + vst1.8 {d2-d3}, [r6, : 128]! 
+ vst1.8 d4, [r6, : 64] +.Lmainloop: + mov r2, r5, LSR #3 + and r6, r5, #7 + ldrb r2, [r1, r2] + mov r2, r2, LSR r6 + and r2, r2, #1 + str r5, [sp, #456] + eor r4, r4, r2 + str r2, [sp, #460] + neg r2, r4 + add r4, r3, #96 + add r5, r3, #192 + add r6, r3, #144 + vld1.8 {d8-d9}, [r4, : 128]! + add r7, r3, #240 + vld1.8 {d10-d11}, [r5, : 128]! + veor q6, q4, q5 + vld1.8 {d14-d15}, [r6, : 128]! + vdup.i32 q8, r2 + vld1.8 {d18-d19}, [r7, : 128]! + veor q10, q7, q9 + vld1.8 {d22-d23}, [r4, : 128]! + vand q6, q6, q8 + vld1.8 {d24-d25}, [r5, : 128]! + vand q10, q10, q8 + vld1.8 {d26-d27}, [r6, : 128]! + veor q4, q4, q6 + vld1.8 {d28-d29}, [r7, : 128]! + veor q5, q5, q6 + vld1.8 {d0}, [r4, : 64] + veor q6, q7, q10 + vld1.8 {d2}, [r5, : 64] + veor q7, q9, q10 + vld1.8 {d4}, [r6, : 64] + veor q9, q11, q12 + vld1.8 {d6}, [r7, : 64] + veor q10, q0, q1 + sub r2, r4, #32 + vand q9, q9, q8 + sub r4, r5, #32 + vand q10, q10, q8 + sub r5, r6, #32 + veor q11, q11, q9 + sub r6, r7, #32 + veor q0, q0, q10 + veor q9, q12, q9 + veor q1, q1, q10 + veor q10, q13, q14 + veor q12, q2, q3 + vand q10, q10, q8 + vand q8, q12, q8 + veor q12, q13, q10 + veor q2, q2, q8 + veor q10, q14, q10 + veor q3, q3, q8 + vadd.i32 q8, q4, q6 + vsub.i32 q4, q4, q6 + vst1.8 {d16-d17}, [r2, : 128]! + vadd.i32 q6, q11, q12 + vst1.8 {d8-d9}, [r5, : 128]! + vsub.i32 q4, q11, q12 + vst1.8 {d12-d13}, [r2, : 128]! + vadd.i32 q6, q0, q2 + vst1.8 {d8-d9}, [r5, : 128]! + vsub.i32 q0, q0, q2 + vst1.8 d12, [r2, : 64] + vadd.i32 q2, q5, q7 + vst1.8 d0, [r5, : 64] + vsub.i32 q0, q5, q7 + vst1.8 {d4-d5}, [r4, : 128]! + vadd.i32 q2, q9, q10 + vst1.8 {d0-d1}, [r6, : 128]! + vsub.i32 q0, q9, q10 + vst1.8 {d4-d5}, [r4, : 128]! + vadd.i32 q2, q1, q3 + vst1.8 {d0-d1}, [r6, : 128]! + vsub.i32 q0, q1, q3 + vst1.8 d4, [r4, : 64] + vst1.8 d0, [r6, : 64] + add r2, sp, #512 + add r4, r3, #96 + add r5, r3, #144 + vld1.8 {d0-d1}, [r2, : 128] + vld1.8 {d2-d3}, [r4, : 128]! + vld1.8 {d4-d5}, [r5, : 128]! + vzip.i32 q1, q2 + vld1.8 {d6-d7}, [r4, : 128]! + vld1.8 {d8-d9}, [r5, : 128]! 
+ vshl.i32 q5, q1, #1 + vzip.i32 q3, q4 + vshl.i32 q6, q2, #1 + vld1.8 {d14}, [r4, : 64] + vshl.i32 q8, q3, #1 + vld1.8 {d15}, [r5, : 64] + vshl.i32 q9, q4, #1 + vmul.i32 d21, d7, d1 + vtrn.32 d14, d15 + vmul.i32 q11, q4, q0 + vmul.i32 q0, q7, q0 + vmull.s32 q12, d2, d2 + vmlal.s32 q12, d11, d1 + vmlal.s32 q12, d12, d0 + vmlal.s32 q12, d13, d23 + vmlal.s32 q12, d16, d22 + vmlal.s32 q12, d7, d21 + vmull.s32 q10, d2, d11 + vmlal.s32 q10, d4, d1 + vmlal.s32 q10, d13, d0 + vmlal.s32 q10, d6, d23 + vmlal.s32 q10, d17, d22 + vmull.s32 q13, d10, d4 + vmlal.s32 q13, d11, d3 + vmlal.s32 q13, d13, d1 + vmlal.s32 q13, d16, d0 + vmlal.s32 q13, d17, d23 + vmlal.s32 q13, d8, d22 + vmull.s32 q1, d10, d5 + vmlal.s32 q1, d11, d4 + vmlal.s32 q1, d6, d1 + vmlal.s32 q1, d17, d0 + vmlal.s32 q1, d8, d23 + vmull.s32 q14, d10, d6 + vmlal.s32 q14, d11, d13 + vmlal.s32 q14, d4, d4 + vmlal.s32 q14, d17, d1 + vmlal.s32 q14, d18, d0 + vmlal.s32 q14, d9, d23 + vmull.s32 q11, d10, d7 + vmlal.s32 q11, d11, d6 + vmlal.s32 q11, d12, d5 + vmlal.s32 q11, d8, d1 + vmlal.s32 q11, d19, d0 + vmull.s32 q15, d10, d8 + vmlal.s32 q15, d11, d17 + vmlal.s32 q15, d12, d6 + vmlal.s32 q15, d13, d5 + vmlal.s32 q15, d19, d1 + vmlal.s32 q15, d14, d0 + vmull.s32 q2, d10, d9 + vmlal.s32 q2, d11, d8 + vmlal.s32 q2, d12, d7 + vmlal.s32 q2, d13, d6 + vmlal.s32 q2, d14, d1 + vmull.s32 q0, d15, d1 + vmlal.s32 q0, d10, d14 + vmlal.s32 q0, d11, d19 + vmlal.s32 q0, d12, d8 + vmlal.s32 q0, d13, d17 + vmlal.s32 q0, d6, d6 + add r2, sp, #480 + vld1.8 {d18-d19}, [r2, : 128]! + vmull.s32 q3, d16, d7 + vmlal.s32 q3, d10, d15 + vmlal.s32 q3, d11, d14 + vmlal.s32 q3, d12, d9 + vmlal.s32 q3, d13, d8 + vld1.8 {d8-d9}, [r2, : 128] + vadd.i64 q5, q12, q9 + vadd.i64 q6, q15, q9 + vshr.s64 q5, q5, #26 + vshr.s64 q6, q6, #26 + vadd.i64 q7, q10, q5 + vshl.i64 q5, q5, #26 + vadd.i64 q8, q7, q4 + vadd.i64 q2, q2, q6 + vshl.i64 q6, q6, #26 + vadd.i64 q10, q2, q4 + vsub.i64 q5, q12, q5 + vshr.s64 q8, q8, #25 + vsub.i64 q6, q15, q6 + vshr.s64 q10, q10, #25 + vadd.i64 q12, q13, q8 + vshl.i64 q8, q8, #25 + vadd.i64 q13, q12, q9 + vadd.i64 q0, q0, q10 + vsub.i64 q7, q7, q8 + vshr.s64 q8, q13, #26 + vshl.i64 q10, q10, #25 + vadd.i64 q13, q0, q9 + vadd.i64 q1, q1, q8 + vshl.i64 q8, q8, #26 + vadd.i64 q15, q1, q4 + vsub.i64 q2, q2, q10 + vshr.s64 q10, q13, #26 + vsub.i64 q8, q12, q8 + vshr.s64 q12, q15, #25 + vadd.i64 q3, q3, q10 + vshl.i64 q10, q10, #26 + vadd.i64 q13, q3, q4 + vadd.i64 q14, q14, q12 + add r2, r3, #288 + vshl.i64 q12, q12, #25 + add r4, r3, #336 + vadd.i64 q15, q14, q9 + add r2, r2, #8 + vsub.i64 q0, q0, q10 + add r4, r4, #8 + vshr.s64 q10, q13, #25 + vsub.i64 q1, q1, q12 + vshr.s64 q12, q15, #26 + vadd.i64 q13, q10, q10 + vadd.i64 q11, q11, q12 + vtrn.32 d16, d2 + vshl.i64 q12, q12, #26 + vtrn.32 d17, d3 + vadd.i64 q1, q11, q4 + vadd.i64 q4, q5, q13 + vst1.8 d16, [r2, : 64]! + vshl.i64 q5, q10, #4 + vst1.8 d17, [r4, : 64]! 
+ vsub.i64 q8, q14, q12 + vshr.s64 q1, q1, #25 + vadd.i64 q4, q4, q5 + vadd.i64 q5, q6, q1 + vshl.i64 q1, q1, #25 + vadd.i64 q6, q5, q9 + vadd.i64 q4, q4, q10 + vshl.i64 q10, q10, #25 + vadd.i64 q9, q4, q9 + vsub.i64 q1, q11, q1 + vshr.s64 q6, q6, #26 + vsub.i64 q3, q3, q10 + vtrn.32 d16, d2 + vshr.s64 q9, q9, #26 + vtrn.32 d17, d3 + vadd.i64 q1, q2, q6 + vst1.8 d16, [r2, : 64] + vshl.i64 q2, q6, #26 + vst1.8 d17, [r4, : 64] + vadd.i64 q6, q7, q9 + vtrn.32 d0, d6 + vshl.i64 q7, q9, #26 + vtrn.32 d1, d7 + vsub.i64 q2, q5, q2 + add r2, r2, #16 + vsub.i64 q3, q4, q7 + vst1.8 d0, [r2, : 64] + add r4, r4, #16 + vst1.8 d1, [r4, : 64] + vtrn.32 d4, d2 + vtrn.32 d5, d3 + sub r2, r2, #8 + sub r4, r4, #8 + vtrn.32 d6, d12 + vtrn.32 d7, d13 + vst1.8 d4, [r2, : 64] + vst1.8 d5, [r4, : 64] + sub r2, r2, #24 + sub r4, r4, #24 + vst1.8 d6, [r2, : 64] + vst1.8 d7, [r4, : 64] + add r2, r3, #240 + add r4, r3, #96 + vld1.8 {d0-d1}, [r4, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vld1.8 {d4}, [r4, : 64] + add r4, r3, #144 + vld1.8 {d6-d7}, [r4, : 128]! + vtrn.32 q0, q3 + vld1.8 {d8-d9}, [r4, : 128]! + vshl.i32 q5, q0, #4 + vtrn.32 q1, q4 + vshl.i32 q6, q3, #4 + vadd.i32 q5, q5, q0 + vadd.i32 q6, q6, q3 + vshl.i32 q7, q1, #4 + vld1.8 {d5}, [r4, : 64] + vshl.i32 q8, q4, #4 + vtrn.32 d4, d5 + vadd.i32 q7, q7, q1 + vadd.i32 q8, q8, q4 + vld1.8 {d18-d19}, [r2, : 128]! + vshl.i32 q10, q2, #4 + vld1.8 {d22-d23}, [r2, : 128]! + vadd.i32 q10, q10, q2 + vld1.8 {d24}, [r2, : 64] + vadd.i32 q5, q5, q0 + add r2, r3, #192 + vld1.8 {d26-d27}, [r2, : 128]! + vadd.i32 q6, q6, q3 + vld1.8 {d28-d29}, [r2, : 128]! + vadd.i32 q8, q8, q4 + vld1.8 {d25}, [r2, : 64] + vadd.i32 q10, q10, q2 + vtrn.32 q9, q13 + vadd.i32 q7, q7, q1 + vadd.i32 q5, q5, q0 + vtrn.32 q11, q14 + vadd.i32 q6, q6, q3 + add r2, sp, #528 + vadd.i32 q10, q10, q2 + vtrn.32 d24, d25 + vst1.8 {d12-d13}, [r2, : 128]! + vshl.i32 q6, q13, #1 + vst1.8 {d20-d21}, [r2, : 128]! + vshl.i32 q10, q14, #1 + vst1.8 {d12-d13}, [r2, : 128]! + vshl.i32 q15, q12, #1 + vadd.i32 q8, q8, q4 + vext.32 d10, d31, d30, #0 + vadd.i32 q7, q7, q1 + vst1.8 {d16-d17}, [r2, : 128]! + vmull.s32 q8, d18, d5 + vmlal.s32 q8, d26, d4 + vmlal.s32 q8, d19, d9 + vmlal.s32 q8, d27, d3 + vmlal.s32 q8, d22, d8 + vmlal.s32 q8, d28, d2 + vmlal.s32 q8, d23, d7 + vmlal.s32 q8, d29, d1 + vmlal.s32 q8, d24, d6 + vmlal.s32 q8, d25, d0 + vst1.8 {d14-d15}, [r2, : 128]! + vmull.s32 q2, d18, d4 + vmlal.s32 q2, d12, d9 + vmlal.s32 q2, d13, d8 + vmlal.s32 q2, d19, d3 + vmlal.s32 q2, d22, d2 + vmlal.s32 q2, d23, d1 + vmlal.s32 q2, d24, d0 + vst1.8 {d20-d21}, [r2, : 128]! + vmull.s32 q7, d18, d9 + vmlal.s32 q7, d26, d3 + vmlal.s32 q7, d19, d8 + vmlal.s32 q7, d27, d2 + vmlal.s32 q7, d22, d7 + vmlal.s32 q7, d28, d1 + vmlal.s32 q7, d23, d6 + vmlal.s32 q7, d29, d0 + vst1.8 {d10-d11}, [r2, : 128]! 
+ vmull.s32 q5, d18, d3 + vmlal.s32 q5, d19, d2 + vmlal.s32 q5, d22, d1 + vmlal.s32 q5, d23, d0 + vmlal.s32 q5, d12, d8 + vst1.8 {d16-d17}, [r2, : 128] + vmull.s32 q4, d18, d8 + vmlal.s32 q4, d26, d2 + vmlal.s32 q4, d19, d7 + vmlal.s32 q4, d27, d1 + vmlal.s32 q4, d22, d6 + vmlal.s32 q4, d28, d0 + vmull.s32 q8, d18, d7 + vmlal.s32 q8, d26, d1 + vmlal.s32 q8, d19, d6 + vmlal.s32 q8, d27, d0 + add r2, sp, #544 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q7, d24, d21 + vmlal.s32 q7, d25, d20 + vmlal.s32 q4, d23, d21 + vmlal.s32 q4, d29, d20 + vmlal.s32 q8, d22, d21 + vmlal.s32 q8, d28, d20 + vmlal.s32 q5, d24, d20 + vst1.8 {d14-d15}, [r2, : 128] + vmull.s32 q7, d18, d6 + vmlal.s32 q7, d26, d0 + add r2, sp, #624 + vld1.8 {d30-d31}, [r2, : 128] + vmlal.s32 q2, d30, d21 + vmlal.s32 q7, d19, d21 + vmlal.s32 q7, d27, d20 + add r2, sp, #592 + vld1.8 {d26-d27}, [r2, : 128] + vmlal.s32 q4, d25, d27 + vmlal.s32 q8, d29, d27 + vmlal.s32 q8, d25, d26 + vmlal.s32 q7, d28, d27 + vmlal.s32 q7, d29, d26 + add r2, sp, #576 + vld1.8 {d28-d29}, [r2, : 128] + vmlal.s32 q4, d24, d29 + vmlal.s32 q8, d23, d29 + vmlal.s32 q8, d24, d28 + vmlal.s32 q7, d22, d29 + vmlal.s32 q7, d23, d28 + vst1.8 {d8-d9}, [r2, : 128] + add r2, sp, #528 + vld1.8 {d8-d9}, [r2, : 128] + vmlal.s32 q7, d24, d9 + vmlal.s32 q7, d25, d31 + vmull.s32 q1, d18, d2 + vmlal.s32 q1, d19, d1 + vmlal.s32 q1, d22, d0 + vmlal.s32 q1, d24, d27 + vmlal.s32 q1, d23, d20 + vmlal.s32 q1, d12, d7 + vmlal.s32 q1, d13, d6 + vmull.s32 q6, d18, d1 + vmlal.s32 q6, d19, d0 + vmlal.s32 q6, d23, d27 + vmlal.s32 q6, d22, d20 + vmlal.s32 q6, d24, d26 + vmull.s32 q0, d18, d0 + vmlal.s32 q0, d22, d27 + vmlal.s32 q0, d23, d26 + vmlal.s32 q0, d24, d31 + vmlal.s32 q0, d19, d20 + add r2, sp, #608 + vld1.8 {d18-d19}, [r2, : 128] + vmlal.s32 q2, d18, d7 + vmlal.s32 q5, d18, d6 + vmlal.s32 q1, d18, d21 + vmlal.s32 q0, d18, d28 + vmlal.s32 q6, d18, d29 + vmlal.s32 q2, d19, d6 + vmlal.s32 q5, d19, d21 + vmlal.s32 q1, d19, d29 + vmlal.s32 q0, d19, d9 + vmlal.s32 q6, d19, d28 + add r2, sp, #560 + vld1.8 {d18-d19}, [r2, : 128] + add r2, sp, #480 + vld1.8 {d22-d23}, [r2, : 128] + vmlal.s32 q5, d19, d7 + vmlal.s32 q0, d18, d21 + vmlal.s32 q0, d19, d29 + vmlal.s32 q6, d18, d6 + add r2, sp, #496 + vld1.8 {d6-d7}, [r2, : 128] + vmlal.s32 q6, d19, d21 + add r2, sp, #544 + vld1.8 {d18-d19}, [r2, : 128] + vmlal.s32 q0, d30, d8 + add r2, sp, #640 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q5, d30, d29 + add r2, sp, #576 + vld1.8 {d24-d25}, [r2, : 128] + vmlal.s32 q1, d30, d28 + vadd.i64 q13, q0, q11 + vadd.i64 q14, q5, q11 + vmlal.s32 q6, d30, d9 + vshr.s64 q4, q13, #26 + vshr.s64 q13, q14, #26 + vadd.i64 q7, q7, q4 + vshl.i64 q4, q4, #26 + vadd.i64 q14, q7, q3 + vadd.i64 q9, q9, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q15, q9, q3 + vsub.i64 q0, q0, q4 + vshr.s64 q4, q14, #25 + vsub.i64 q5, q5, q13 + vshr.s64 q13, q15, #25 + vadd.i64 q6, q6, q4 + vshl.i64 q4, q4, #25 + vadd.i64 q14, q6, q11 + vadd.i64 q2, q2, q13 + vsub.i64 q4, q7, q4 + vshr.s64 q7, q14, #26 + vshl.i64 q13, q13, #25 + vadd.i64 q14, q2, q11 + vadd.i64 q8, q8, q7 + vshl.i64 q7, q7, #26 + vadd.i64 q15, q8, q3 + vsub.i64 q9, q9, q13 + vshr.s64 q13, q14, #26 + vsub.i64 q6, q6, q7 + vshr.s64 q7, q15, #25 + vadd.i64 q10, q10, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q14, q10, q3 + vadd.i64 q1, q1, q7 + add r2, r3, #144 + vshl.i64 q7, q7, #25 + add r4, r3, #96 + vadd.i64 q15, q1, q11 + add r2, r2, #8 + vsub.i64 q2, q2, q13 + add r4, r4, #8 + vshr.s64 q13, q14, #25 + vsub.i64 q7, q8, q7 + vshr.s64 q8, q15, #26 + vadd.i64 q14, q13, q13 
+ vadd.i64 q12, q12, q8 + vtrn.32 d12, d14 + vshl.i64 q8, q8, #26 + vtrn.32 d13, d15 + vadd.i64 q3, q12, q3 + vadd.i64 q0, q0, q14 + vst1.8 d12, [r2, : 64]! + vshl.i64 q7, q13, #4 + vst1.8 d13, [r4, : 64]! + vsub.i64 q1, q1, q8 + vshr.s64 q3, q3, #25 + vadd.i64 q0, q0, q7 + vadd.i64 q5, q5, q3 + vshl.i64 q3, q3, #25 + vadd.i64 q6, q5, q11 + vadd.i64 q0, q0, q13 + vshl.i64 q7, q13, #25 + vadd.i64 q8, q0, q11 + vsub.i64 q3, q12, q3 + vshr.s64 q6, q6, #26 + vsub.i64 q7, q10, q7 + vtrn.32 d2, d6 + vshr.s64 q8, q8, #26 + vtrn.32 d3, d7 + vadd.i64 q3, q9, q6 + vst1.8 d2, [r2, : 64] + vshl.i64 q6, q6, #26 + vst1.8 d3, [r4, : 64] + vadd.i64 q1, q4, q8 + vtrn.32 d4, d14 + vshl.i64 q4, q8, #26 + vtrn.32 d5, d15 + vsub.i64 q5, q5, q6 + add r2, r2, #16 + vsub.i64 q0, q0, q4 + vst1.8 d4, [r2, : 64] + add r4, r4, #16 + vst1.8 d5, [r4, : 64] + vtrn.32 d10, d6 + vtrn.32 d11, d7 + sub r2, r2, #8 + sub r4, r4, #8 + vtrn.32 d0, d2 + vtrn.32 d1, d3 + vst1.8 d10, [r2, : 64] + vst1.8 d11, [r4, : 64] + sub r2, r2, #24 + sub r4, r4, #24 + vst1.8 d0, [r2, : 64] + vst1.8 d1, [r4, : 64] + add r2, r3, #288 + add r4, r3, #336 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vsub.i32 q0, q0, q1 + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d4-d5}, [r4, : 128]! + vsub.i32 q1, q1, q2 + add r5, r3, #240 + vld1.8 {d4}, [r2, : 64] + vld1.8 {d6}, [r4, : 64] + vsub.i32 q2, q2, q3 + vst1.8 {d0-d1}, [r5, : 128]! + vst1.8 {d2-d3}, [r5, : 128]! + vst1.8 d4, [r5, : 64] + add r2, r3, #144 + add r4, r3, #96 + add r5, r3, #144 + add r6, r3, #192 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vsub.i32 q2, q0, q1 + vadd.i32 q0, q0, q1 + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d6-d7}, [r4, : 128]! + vsub.i32 q4, q1, q3 + vadd.i32 q1, q1, q3 + vld1.8 {d6}, [r2, : 64] + vld1.8 {d10}, [r4, : 64] + vsub.i32 q6, q3, q5 + vadd.i32 q3, q3, q5 + vst1.8 {d4-d5}, [r5, : 128]! + vst1.8 {d0-d1}, [r6, : 128]! + vst1.8 {d8-d9}, [r5, : 128]! + vst1.8 {d2-d3}, [r6, : 128]! + vst1.8 d12, [r5, : 64] + vst1.8 d6, [r6, : 64] + add r2, r3, #0 + add r4, r3, #240 + vld1.8 {d0-d1}, [r4, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vld1.8 {d4}, [r4, : 64] + add r4, r3, #336 + vld1.8 {d6-d7}, [r4, : 128]! + vtrn.32 q0, q3 + vld1.8 {d8-d9}, [r4, : 128]! + vshl.i32 q5, q0, #4 + vtrn.32 q1, q4 + vshl.i32 q6, q3, #4 + vadd.i32 q5, q5, q0 + vadd.i32 q6, q6, q3 + vshl.i32 q7, q1, #4 + vld1.8 {d5}, [r4, : 64] + vshl.i32 q8, q4, #4 + vtrn.32 d4, d5 + vadd.i32 q7, q7, q1 + vadd.i32 q8, q8, q4 + vld1.8 {d18-d19}, [r2, : 128]! + vshl.i32 q10, q2, #4 + vld1.8 {d22-d23}, [r2, : 128]! + vadd.i32 q10, q10, q2 + vld1.8 {d24}, [r2, : 64] + vadd.i32 q5, q5, q0 + add r2, r3, #288 + vld1.8 {d26-d27}, [r2, : 128]! + vadd.i32 q6, q6, q3 + vld1.8 {d28-d29}, [r2, : 128]! + vadd.i32 q8, q8, q4 + vld1.8 {d25}, [r2, : 64] + vadd.i32 q10, q10, q2 + vtrn.32 q9, q13 + vadd.i32 q7, q7, q1 + vadd.i32 q5, q5, q0 + vtrn.32 q11, q14 + vadd.i32 q6, q6, q3 + add r2, sp, #528 + vadd.i32 q10, q10, q2 + vtrn.32 d24, d25 + vst1.8 {d12-d13}, [r2, : 128]! + vshl.i32 q6, q13, #1 + vst1.8 {d20-d21}, [r2, : 128]! + vshl.i32 q10, q14, #1 + vst1.8 {d12-d13}, [r2, : 128]! + vshl.i32 q15, q12, #1 + vadd.i32 q8, q8, q4 + vext.32 d10, d31, d30, #0 + vadd.i32 q7, q7, q1 + vst1.8 {d16-d17}, [r2, : 128]! + vmull.s32 q8, d18, d5 + vmlal.s32 q8, d26, d4 + vmlal.s32 q8, d19, d9 + vmlal.s32 q8, d27, d3 + vmlal.s32 q8, d22, d8 + vmlal.s32 q8, d28, d2 + vmlal.s32 q8, d23, d7 + vmlal.s32 q8, d29, d1 + vmlal.s32 q8, d24, d6 + vmlal.s32 q8, d25, d0 + vst1.8 {d14-d15}, [r2, : 128]! 
+ vmull.s32 q2, d18, d4 + vmlal.s32 q2, d12, d9 + vmlal.s32 q2, d13, d8 + vmlal.s32 q2, d19, d3 + vmlal.s32 q2, d22, d2 + vmlal.s32 q2, d23, d1 + vmlal.s32 q2, d24, d0 + vst1.8 {d20-d21}, [r2, : 128]! + vmull.s32 q7, d18, d9 + vmlal.s32 q7, d26, d3 + vmlal.s32 q7, d19, d8 + vmlal.s32 q7, d27, d2 + vmlal.s32 q7, d22, d7 + vmlal.s32 q7, d28, d1 + vmlal.s32 q7, d23, d6 + vmlal.s32 q7, d29, d0 + vst1.8 {d10-d11}, [r2, : 128]! + vmull.s32 q5, d18, d3 + vmlal.s32 q5, d19, d2 + vmlal.s32 q5, d22, d1 + vmlal.s32 q5, d23, d0 + vmlal.s32 q5, d12, d8 + vst1.8 {d16-d17}, [r2, : 128]! + vmull.s32 q4, d18, d8 + vmlal.s32 q4, d26, d2 + vmlal.s32 q4, d19, d7 + vmlal.s32 q4, d27, d1 + vmlal.s32 q4, d22, d6 + vmlal.s32 q4, d28, d0 + vmull.s32 q8, d18, d7 + vmlal.s32 q8, d26, d1 + vmlal.s32 q8, d19, d6 + vmlal.s32 q8, d27, d0 + add r2, sp, #544 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q7, d24, d21 + vmlal.s32 q7, d25, d20 + vmlal.s32 q4, d23, d21 + vmlal.s32 q4, d29, d20 + vmlal.s32 q8, d22, d21 + vmlal.s32 q8, d28, d20 + vmlal.s32 q5, d24, d20 + vst1.8 {d14-d15}, [r2, : 128] + vmull.s32 q7, d18, d6 + vmlal.s32 q7, d26, d0 + add r2, sp, #624 + vld1.8 {d30-d31}, [r2, : 128] + vmlal.s32 q2, d30, d21 + vmlal.s32 q7, d19, d21 + vmlal.s32 q7, d27, d20 + add r2, sp, #592 + vld1.8 {d26-d27}, [r2, : 128] + vmlal.s32 q4, d25, d27 + vmlal.s32 q8, d29, d27 + vmlal.s32 q8, d25, d26 + vmlal.s32 q7, d28, d27 + vmlal.s32 q7, d29, d26 + add r2, sp, #576 + vld1.8 {d28-d29}, [r2, : 128] + vmlal.s32 q4, d24, d29 + vmlal.s32 q8, d23, d29 + vmlal.s32 q8, d24, d28 + vmlal.s32 q7, d22, d29 + vmlal.s32 q7, d23, d28 + vst1.8 {d8-d9}, [r2, : 128] + add r2, sp, #528 + vld1.8 {d8-d9}, [r2, : 128] + vmlal.s32 q7, d24, d9 + vmlal.s32 q7, d25, d31 + vmull.s32 q1, d18, d2 + vmlal.s32 q1, d19, d1 + vmlal.s32 q1, d22, d0 + vmlal.s32 q1, d24, d27 + vmlal.s32 q1, d23, d20 + vmlal.s32 q1, d12, d7 + vmlal.s32 q1, d13, d6 + vmull.s32 q6, d18, d1 + vmlal.s32 q6, d19, d0 + vmlal.s32 q6, d23, d27 + vmlal.s32 q6, d22, d20 + vmlal.s32 q6, d24, d26 + vmull.s32 q0, d18, d0 + vmlal.s32 q0, d22, d27 + vmlal.s32 q0, d23, d26 + vmlal.s32 q0, d24, d31 + vmlal.s32 q0, d19, d20 + add r2, sp, #608 + vld1.8 {d18-d19}, [r2, : 128] + vmlal.s32 q2, d18, d7 + vmlal.s32 q5, d18, d6 + vmlal.s32 q1, d18, d21 + vmlal.s32 q0, d18, d28 + vmlal.s32 q6, d18, d29 + vmlal.s32 q2, d19, d6 + vmlal.s32 q5, d19, d21 + vmlal.s32 q1, d19, d29 + vmlal.s32 q0, d19, d9 + vmlal.s32 q6, d19, d28 + add r2, sp, #560 + vld1.8 {d18-d19}, [r2, : 128] + add r2, sp, #480 + vld1.8 {d22-d23}, [r2, : 128] + vmlal.s32 q5, d19, d7 + vmlal.s32 q0, d18, d21 + vmlal.s32 q0, d19, d29 + vmlal.s32 q6, d18, d6 + add r2, sp, #496 + vld1.8 {d6-d7}, [r2, : 128] + vmlal.s32 q6, d19, d21 + add r2, sp, #544 + vld1.8 {d18-d19}, [r2, : 128] + vmlal.s32 q0, d30, d8 + add r2, sp, #640 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q5, d30, d29 + add r2, sp, #576 + vld1.8 {d24-d25}, [r2, : 128] + vmlal.s32 q1, d30, d28 + vadd.i64 q13, q0, q11 + vadd.i64 q14, q5, q11 + vmlal.s32 q6, d30, d9 + vshr.s64 q4, q13, #26 + vshr.s64 q13, q14, #26 + vadd.i64 q7, q7, q4 + vshl.i64 q4, q4, #26 + vadd.i64 q14, q7, q3 + vadd.i64 q9, q9, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q15, q9, q3 + vsub.i64 q0, q0, q4 + vshr.s64 q4, q14, #25 + vsub.i64 q5, q5, q13 + vshr.s64 q13, q15, #25 + vadd.i64 q6, q6, q4 + vshl.i64 q4, q4, #25 + vadd.i64 q14, q6, q11 + vadd.i64 q2, q2, q13 + vsub.i64 q4, q7, q4 + vshr.s64 q7, q14, #26 + vshl.i64 q13, q13, #25 + vadd.i64 q14, q2, q11 + vadd.i64 q8, q8, q7 + vshl.i64 q7, q7, #26 + vadd.i64 q15, q8, q3 
+ vsub.i64 q9, q9, q13 + vshr.s64 q13, q14, #26 + vsub.i64 q6, q6, q7 + vshr.s64 q7, q15, #25 + vadd.i64 q10, q10, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q14, q10, q3 + vadd.i64 q1, q1, q7 + add r2, r3, #288 + vshl.i64 q7, q7, #25 + add r4, r3, #96 + vadd.i64 q15, q1, q11 + add r2, r2, #8 + vsub.i64 q2, q2, q13 + add r4, r4, #8 + vshr.s64 q13, q14, #25 + vsub.i64 q7, q8, q7 + vshr.s64 q8, q15, #26 + vadd.i64 q14, q13, q13 + vadd.i64 q12, q12, q8 + vtrn.32 d12, d14 + vshl.i64 q8, q8, #26 + vtrn.32 d13, d15 + vadd.i64 q3, q12, q3 + vadd.i64 q0, q0, q14 + vst1.8 d12, [r2, : 64]! + vshl.i64 q7, q13, #4 + vst1.8 d13, [r4, : 64]! + vsub.i64 q1, q1, q8 + vshr.s64 q3, q3, #25 + vadd.i64 q0, q0, q7 + vadd.i64 q5, q5, q3 + vshl.i64 q3, q3, #25 + vadd.i64 q6, q5, q11 + vadd.i64 q0, q0, q13 + vshl.i64 q7, q13, #25 + vadd.i64 q8, q0, q11 + vsub.i64 q3, q12, q3 + vshr.s64 q6, q6, #26 + vsub.i64 q7, q10, q7 + vtrn.32 d2, d6 + vshr.s64 q8, q8, #26 + vtrn.32 d3, d7 + vadd.i64 q3, q9, q6 + vst1.8 d2, [r2, : 64] + vshl.i64 q6, q6, #26 + vst1.8 d3, [r4, : 64] + vadd.i64 q1, q4, q8 + vtrn.32 d4, d14 + vshl.i64 q4, q8, #26 + vtrn.32 d5, d15 + vsub.i64 q5, q5, q6 + add r2, r2, #16 + vsub.i64 q0, q0, q4 + vst1.8 d4, [r2, : 64] + add r4, r4, #16 + vst1.8 d5, [r4, : 64] + vtrn.32 d10, d6 + vtrn.32 d11, d7 + sub r2, r2, #8 + sub r4, r4, #8 + vtrn.32 d0, d2 + vtrn.32 d1, d3 + vst1.8 d10, [r2, : 64] + vst1.8 d11, [r4, : 64] + sub r2, r2, #24 + sub r4, r4, #24 + vst1.8 d0, [r2, : 64] + vst1.8 d1, [r4, : 64] + add r2, sp, #512 + add r4, r3, #144 + add r5, r3, #192 + vld1.8 {d0-d1}, [r2, : 128] + vld1.8 {d2-d3}, [r4, : 128]! + vld1.8 {d4-d5}, [r5, : 128]! + vzip.i32 q1, q2 + vld1.8 {d6-d7}, [r4, : 128]! + vld1.8 {d8-d9}, [r5, : 128]! + vshl.i32 q5, q1, #1 + vzip.i32 q3, q4 + vshl.i32 q6, q2, #1 + vld1.8 {d14}, [r4, : 64] + vshl.i32 q8, q3, #1 + vld1.8 {d15}, [r5, : 64] + vshl.i32 q9, q4, #1 + vmul.i32 d21, d7, d1 + vtrn.32 d14, d15 + vmul.i32 q11, q4, q0 + vmul.i32 q0, q7, q0 + vmull.s32 q12, d2, d2 + vmlal.s32 q12, d11, d1 + vmlal.s32 q12, d12, d0 + vmlal.s32 q12, d13, d23 + vmlal.s32 q12, d16, d22 + vmlal.s32 q12, d7, d21 + vmull.s32 q10, d2, d11 + vmlal.s32 q10, d4, d1 + vmlal.s32 q10, d13, d0 + vmlal.s32 q10, d6, d23 + vmlal.s32 q10, d17, d22 + vmull.s32 q13, d10, d4 + vmlal.s32 q13, d11, d3 + vmlal.s32 q13, d13, d1 + vmlal.s32 q13, d16, d0 + vmlal.s32 q13, d17, d23 + vmlal.s32 q13, d8, d22 + vmull.s32 q1, d10, d5 + vmlal.s32 q1, d11, d4 + vmlal.s32 q1, d6, d1 + vmlal.s32 q1, d17, d0 + vmlal.s32 q1, d8, d23 + vmull.s32 q14, d10, d6 + vmlal.s32 q14, d11, d13 + vmlal.s32 q14, d4, d4 + vmlal.s32 q14, d17, d1 + vmlal.s32 q14, d18, d0 + vmlal.s32 q14, d9, d23 + vmull.s32 q11, d10, d7 + vmlal.s32 q11, d11, d6 + vmlal.s32 q11, d12, d5 + vmlal.s32 q11, d8, d1 + vmlal.s32 q11, d19, d0 + vmull.s32 q15, d10, d8 + vmlal.s32 q15, d11, d17 + vmlal.s32 q15, d12, d6 + vmlal.s32 q15, d13, d5 + vmlal.s32 q15, d19, d1 + vmlal.s32 q15, d14, d0 + vmull.s32 q2, d10, d9 + vmlal.s32 q2, d11, d8 + vmlal.s32 q2, d12, d7 + vmlal.s32 q2, d13, d6 + vmlal.s32 q2, d14, d1 + vmull.s32 q0, d15, d1 + vmlal.s32 q0, d10, d14 + vmlal.s32 q0, d11, d19 + vmlal.s32 q0, d12, d8 + vmlal.s32 q0, d13, d17 + vmlal.s32 q0, d6, d6 + add r2, sp, #480 + vld1.8 {d18-d19}, [r2, : 128]! 
+ vmull.s32 q3, d16, d7 + vmlal.s32 q3, d10, d15 + vmlal.s32 q3, d11, d14 + vmlal.s32 q3, d12, d9 + vmlal.s32 q3, d13, d8 + vld1.8 {d8-d9}, [r2, : 128] + vadd.i64 q5, q12, q9 + vadd.i64 q6, q15, q9 + vshr.s64 q5, q5, #26 + vshr.s64 q6, q6, #26 + vadd.i64 q7, q10, q5 + vshl.i64 q5, q5, #26 + vadd.i64 q8, q7, q4 + vadd.i64 q2, q2, q6 + vshl.i64 q6, q6, #26 + vadd.i64 q10, q2, q4 + vsub.i64 q5, q12, q5 + vshr.s64 q8, q8, #25 + vsub.i64 q6, q15, q6 + vshr.s64 q10, q10, #25 + vadd.i64 q12, q13, q8 + vshl.i64 q8, q8, #25 + vadd.i64 q13, q12, q9 + vadd.i64 q0, q0, q10 + vsub.i64 q7, q7, q8 + vshr.s64 q8, q13, #26 + vshl.i64 q10, q10, #25 + vadd.i64 q13, q0, q9 + vadd.i64 q1, q1, q8 + vshl.i64 q8, q8, #26 + vadd.i64 q15, q1, q4 + vsub.i64 q2, q2, q10 + vshr.s64 q10, q13, #26 + vsub.i64 q8, q12, q8 + vshr.s64 q12, q15, #25 + vadd.i64 q3, q3, q10 + vshl.i64 q10, q10, #26 + vadd.i64 q13, q3, q4 + vadd.i64 q14, q14, q12 + add r2, r3, #144 + vshl.i64 q12, q12, #25 + add r4, r3, #192 + vadd.i64 q15, q14, q9 + add r2, r2, #8 + vsub.i64 q0, q0, q10 + add r4, r4, #8 + vshr.s64 q10, q13, #25 + vsub.i64 q1, q1, q12 + vshr.s64 q12, q15, #26 + vadd.i64 q13, q10, q10 + vadd.i64 q11, q11, q12 + vtrn.32 d16, d2 + vshl.i64 q12, q12, #26 + vtrn.32 d17, d3 + vadd.i64 q1, q11, q4 + vadd.i64 q4, q5, q13 + vst1.8 d16, [r2, : 64]! + vshl.i64 q5, q10, #4 + vst1.8 d17, [r4, : 64]! + vsub.i64 q8, q14, q12 + vshr.s64 q1, q1, #25 + vadd.i64 q4, q4, q5 + vadd.i64 q5, q6, q1 + vshl.i64 q1, q1, #25 + vadd.i64 q6, q5, q9 + vadd.i64 q4, q4, q10 + vshl.i64 q10, q10, #25 + vadd.i64 q9, q4, q9 + vsub.i64 q1, q11, q1 + vshr.s64 q6, q6, #26 + vsub.i64 q3, q3, q10 + vtrn.32 d16, d2 + vshr.s64 q9, q9, #26 + vtrn.32 d17, d3 + vadd.i64 q1, q2, q6 + vst1.8 d16, [r2, : 64] + vshl.i64 q2, q6, #26 + vst1.8 d17, [r4, : 64] + vadd.i64 q6, q7, q9 + vtrn.32 d0, d6 + vshl.i64 q7, q9, #26 + vtrn.32 d1, d7 + vsub.i64 q2, q5, q2 + add r2, r2, #16 + vsub.i64 q3, q4, q7 + vst1.8 d0, [r2, : 64] + add r4, r4, #16 + vst1.8 d1, [r4, : 64] + vtrn.32 d4, d2 + vtrn.32 d5, d3 + sub r2, r2, #8 + sub r4, r4, #8 + vtrn.32 d6, d12 + vtrn.32 d7, d13 + vst1.8 d4, [r2, : 64] + vst1.8 d5, [r4, : 64] + sub r2, r2, #24 + sub r4, r4, #24 + vst1.8 d6, [r2, : 64] + vst1.8 d7, [r4, : 64] + add r2, r3, #336 + add r4, r3, #288 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vadd.i32 q0, q0, q1 + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d4-d5}, [r4, : 128]! + vadd.i32 q1, q1, q2 + add r5, r3, #288 + vld1.8 {d4}, [r2, : 64] + vld1.8 {d6}, [r4, : 64] + vadd.i32 q2, q2, q3 + vst1.8 {d0-d1}, [r5, : 128]! + vst1.8 {d2-d3}, [r5, : 128]! + vst1.8 d4, [r5, : 64] + add r2, r3, #48 + add r4, r3, #144 + vld1.8 {d0-d1}, [r4, : 128]! + vld1.8 {d2-d3}, [r4, : 128]! + vld1.8 {d4}, [r4, : 64] + add r4, r3, #288 + vld1.8 {d6-d7}, [r4, : 128]! + vtrn.32 q0, q3 + vld1.8 {d8-d9}, [r4, : 128]! + vshl.i32 q5, q0, #4 + vtrn.32 q1, q4 + vshl.i32 q6, q3, #4 + vadd.i32 q5, q5, q0 + vadd.i32 q6, q6, q3 + vshl.i32 q7, q1, #4 + vld1.8 {d5}, [r4, : 64] + vshl.i32 q8, q4, #4 + vtrn.32 d4, d5 + vadd.i32 q7, q7, q1 + vadd.i32 q8, q8, q4 + vld1.8 {d18-d19}, [r2, : 128]! + vshl.i32 q10, q2, #4 + vld1.8 {d22-d23}, [r2, : 128]! + vadd.i32 q10, q10, q2 + vld1.8 {d24}, [r2, : 64] + vadd.i32 q5, q5, q0 + add r2, r3, #240 + vld1.8 {d26-d27}, [r2, : 128]! + vadd.i32 q6, q6, q3 + vld1.8 {d28-d29}, [r2, : 128]! 
+ vadd.i32 q8, q8, q4 + vld1.8 {d25}, [r2, : 64] + vadd.i32 q10, q10, q2 + vtrn.32 q9, q13 + vadd.i32 q7, q7, q1 + vadd.i32 q5, q5, q0 + vtrn.32 q11, q14 + vadd.i32 q6, q6, q3 + add r2, sp, #528 + vadd.i32 q10, q10, q2 + vtrn.32 d24, d25 + vst1.8 {d12-d13}, [r2, : 128]! + vshl.i32 q6, q13, #1 + vst1.8 {d20-d21}, [r2, : 128]! + vshl.i32 q10, q14, #1 + vst1.8 {d12-d13}, [r2, : 128]! + vshl.i32 q15, q12, #1 + vadd.i32 q8, q8, q4 + vext.32 d10, d31, d30, #0 + vadd.i32 q7, q7, q1 + vst1.8 {d16-d17}, [r2, : 128]! + vmull.s32 q8, d18, d5 + vmlal.s32 q8, d26, d4 + vmlal.s32 q8, d19, d9 + vmlal.s32 q8, d27, d3 + vmlal.s32 q8, d22, d8 + vmlal.s32 q8, d28, d2 + vmlal.s32 q8, d23, d7 + vmlal.s32 q8, d29, d1 + vmlal.s32 q8, d24, d6 + vmlal.s32 q8, d25, d0 + vst1.8 {d14-d15}, [r2, : 128]! + vmull.s32 q2, d18, d4 + vmlal.s32 q2, d12, d9 + vmlal.s32 q2, d13, d8 + vmlal.s32 q2, d19, d3 + vmlal.s32 q2, d22, d2 + vmlal.s32 q2, d23, d1 + vmlal.s32 q2, d24, d0 + vst1.8 {d20-d21}, [r2, : 128]! + vmull.s32 q7, d18, d9 + vmlal.s32 q7, d26, d3 + vmlal.s32 q7, d19, d8 + vmlal.s32 q7, d27, d2 + vmlal.s32 q7, d22, d7 + vmlal.s32 q7, d28, d1 + vmlal.s32 q7, d23, d6 + vmlal.s32 q7, d29, d0 + vst1.8 {d10-d11}, [r2, : 128]! + vmull.s32 q5, d18, d3 + vmlal.s32 q5, d19, d2 + vmlal.s32 q5, d22, d1 + vmlal.s32 q5, d23, d0 + vmlal.s32 q5, d12, d8 + vst1.8 {d16-d17}, [r2, : 128]! + vmull.s32 q4, d18, d8 + vmlal.s32 q4, d26, d2 + vmlal.s32 q4, d19, d7 + vmlal.s32 q4, d27, d1 + vmlal.s32 q4, d22, d6 + vmlal.s32 q4, d28, d0 + vmull.s32 q8, d18, d7 + vmlal.s32 q8, d26, d1 + vmlal.s32 q8, d19, d6 + vmlal.s32 q8, d27, d0 + add r2, sp, #544 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q7, d24, d21 + vmlal.s32 q7, d25, d20 + vmlal.s32 q4, d23, d21 + vmlal.s32 q4, d29, d20 + vmlal.s32 q8, d22, d21 + vmlal.s32 q8, d28, d20 + vmlal.s32 q5, d24, d20 + vst1.8 {d14-d15}, [r2, : 128] + vmull.s32 q7, d18, d6 + vmlal.s32 q7, d26, d0 + add r2, sp, #624 + vld1.8 {d30-d31}, [r2, : 128] + vmlal.s32 q2, d30, d21 + vmlal.s32 q7, d19, d21 + vmlal.s32 q7, d27, d20 + add r2, sp, #592 + vld1.8 {d26-d27}, [r2, : 128] + vmlal.s32 q4, d25, d27 + vmlal.s32 q8, d29, d27 + vmlal.s32 q8, d25, d26 + vmlal.s32 q7, d28, d27 + vmlal.s32 q7, d29, d26 + add r2, sp, #576 + vld1.8 {d28-d29}, [r2, : 128] + vmlal.s32 q4, d24, d29 + vmlal.s32 q8, d23, d29 + vmlal.s32 q8, d24, d28 + vmlal.s32 q7, d22, d29 + vmlal.s32 q7, d23, d28 + vst1.8 {d8-d9}, [r2, : 128] + add r2, sp, #528 + vld1.8 {d8-d9}, [r2, : 128] + vmlal.s32 q7, d24, d9 + vmlal.s32 q7, d25, d31 + vmull.s32 q1, d18, d2 + vmlal.s32 q1, d19, d1 + vmlal.s32 q1, d22, d0 + vmlal.s32 q1, d24, d27 + vmlal.s32 q1, d23, d20 + vmlal.s32 q1, d12, d7 + vmlal.s32 q1, d13, d6 + vmull.s32 q6, d18, d1 + vmlal.s32 q6, d19, d0 + vmlal.s32 q6, d23, d27 + vmlal.s32 q6, d22, d20 + vmlal.s32 q6, d24, d26 + vmull.s32 q0, d18, d0 + vmlal.s32 q0, d22, d27 + vmlal.s32 q0, d23, d26 + vmlal.s32 q0, d24, d31 + vmlal.s32 q0, d19, d20 + add r2, sp, #608 + vld1.8 {d18-d19}, [r2, : 128] + vmlal.s32 q2, d18, d7 + vmlal.s32 q5, d18, d6 + vmlal.s32 q1, d18, d21 + vmlal.s32 q0, d18, d28 + vmlal.s32 q6, d18, d29 + vmlal.s32 q2, d19, d6 + vmlal.s32 q5, d19, d21 + vmlal.s32 q1, d19, d29 + vmlal.s32 q0, d19, d9 + vmlal.s32 q6, d19, d28 + add r2, sp, #560 + vld1.8 {d18-d19}, [r2, : 128] + add r2, sp, #480 + vld1.8 {d22-d23}, [r2, : 128] + vmlal.s32 q5, d19, d7 + vmlal.s32 q0, d18, d21 + vmlal.s32 q0, d19, d29 + vmlal.s32 q6, d18, d6 + add r2, sp, #496 + vld1.8 {d6-d7}, [r2, : 128] + vmlal.s32 q6, d19, d21 + add r2, sp, #544 + vld1.8 {d18-d19}, [r2, : 
128] + vmlal.s32 q0, d30, d8 + add r2, sp, #640 + vld1.8 {d20-d21}, [r2, : 128] + vmlal.s32 q5, d30, d29 + add r2, sp, #576 + vld1.8 {d24-d25}, [r2, : 128] + vmlal.s32 q1, d30, d28 + vadd.i64 q13, q0, q11 + vadd.i64 q14, q5, q11 + vmlal.s32 q6, d30, d9 + vshr.s64 q4, q13, #26 + vshr.s64 q13, q14, #26 + vadd.i64 q7, q7, q4 + vshl.i64 q4, q4, #26 + vadd.i64 q14, q7, q3 + vadd.i64 q9, q9, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q15, q9, q3 + vsub.i64 q0, q0, q4 + vshr.s64 q4, q14, #25 + vsub.i64 q5, q5, q13 + vshr.s64 q13, q15, #25 + vadd.i64 q6, q6, q4 + vshl.i64 q4, q4, #25 + vadd.i64 q14, q6, q11 + vadd.i64 q2, q2, q13 + vsub.i64 q4, q7, q4 + vshr.s64 q7, q14, #26 + vshl.i64 q13, q13, #25 + vadd.i64 q14, q2, q11 + vadd.i64 q8, q8, q7 + vshl.i64 q7, q7, #26 + vadd.i64 q15, q8, q3 + vsub.i64 q9, q9, q13 + vshr.s64 q13, q14, #26 + vsub.i64 q6, q6, q7 + vshr.s64 q7, q15, #25 + vadd.i64 q10, q10, q13 + vshl.i64 q13, q13, #26 + vadd.i64 q14, q10, q3 + vadd.i64 q1, q1, q7 + add r2, r3, #240 + vshl.i64 q7, q7, #25 + add r4, r3, #144 + vadd.i64 q15, q1, q11 + add r2, r2, #8 + vsub.i64 q2, q2, q13 + add r4, r4, #8 + vshr.s64 q13, q14, #25 + vsub.i64 q7, q8, q7 + vshr.s64 q8, q15, #26 + vadd.i64 q14, q13, q13 + vadd.i64 q12, q12, q8 + vtrn.32 d12, d14 + vshl.i64 q8, q8, #26 + vtrn.32 d13, d15 + vadd.i64 q3, q12, q3 + vadd.i64 q0, q0, q14 + vst1.8 d12, [r2, : 64]! + vshl.i64 q7, q13, #4 + vst1.8 d13, [r4, : 64]! + vsub.i64 q1, q1, q8 + vshr.s64 q3, q3, #25 + vadd.i64 q0, q0, q7 + vadd.i64 q5, q5, q3 + vshl.i64 q3, q3, #25 + vadd.i64 q6, q5, q11 + vadd.i64 q0, q0, q13 + vshl.i64 q7, q13, #25 + vadd.i64 q8, q0, q11 + vsub.i64 q3, q12, q3 + vshr.s64 q6, q6, #26 + vsub.i64 q7, q10, q7 + vtrn.32 d2, d6 + vshr.s64 q8, q8, #26 + vtrn.32 d3, d7 + vadd.i64 q3, q9, q6 + vst1.8 d2, [r2, : 64] + vshl.i64 q6, q6, #26 + vst1.8 d3, [r4, : 64] + vadd.i64 q1, q4, q8 + vtrn.32 d4, d14 + vshl.i64 q4, q8, #26 + vtrn.32 d5, d15 + vsub.i64 q5, q5, q6 + add r2, r2, #16 + vsub.i64 q0, q0, q4 + vst1.8 d4, [r2, : 64] + add r4, r4, #16 + vst1.8 d5, [r4, : 64] + vtrn.32 d10, d6 + vtrn.32 d11, d7 + sub r2, r2, #8 + sub r4, r4, #8 + vtrn.32 d0, d2 + vtrn.32 d1, d3 + vst1.8 d10, [r2, : 64] + vst1.8 d11, [r4, : 64] + sub r2, r2, #24 + sub r4, r4, #24 + vst1.8 d0, [r2, : 64] + vst1.8 d1, [r4, : 64] + ldr r2, [sp, #456] + ldr r4, [sp, #460] + subs r5, r2, #1 + bge .Lmainloop + add r1, r3, #144 + add r2, r3, #336 + vld1.8 {d0-d1}, [r1, : 128]! + vld1.8 {d2-d3}, [r1, : 128]! + vld1.8 {d4}, [r1, : 64] + vst1.8 {d0-d1}, [r2, : 128]! + vst1.8 {d2-d3}, [r2, : 128]! + vst1.8 d4, [r2, : 64] + movw r1, #0 +.Linvertloop: + add r2, r3, #144 + movw r4, #0 + movw r5, #2 + cmp r1, #1 + moveq r5, #1 + addeq r2, r3, #336 + addeq r4, r3, #48 + cmp r1, #2 + moveq r5, #1 + addeq r2, r3, #48 + cmp r1, #3 + moveq r5, #5 + addeq r4, r3, #336 + cmp r1, #4 + moveq r5, #10 + cmp r1, #5 + moveq r5, #20 + cmp r1, #6 + moveq r5, #10 + addeq r2, r3, #336 + addeq r4, r3, #336 + cmp r1, #7 + moveq r5, #50 + cmp r1, #8 + moveq r5, #100 + cmp r1, #9 + moveq r5, #50 + addeq r2, r3, #336 + cmp r1, #10 + moveq r5, #5 + addeq r2, r3, #48 + cmp r1, #11 + moveq r5, #0 + addeq r2, r3, #96 + add r6, r3, #144 + add r7, r3, #288 + vld1.8 {d0-d1}, [r6, : 128]! + vld1.8 {d2-d3}, [r6, : 128]! + vld1.8 {d4}, [r6, : 64] + vst1.8 {d0-d1}, [r7, : 128]! + vst1.8 {d2-d3}, [r7, : 128]! 
+ vst1.8 d4, [r7, : 64] + cmp r5, #0 + beq .Lskipsquaringloop +.Lsquaringloop: + add r6, r3, #288 + add r7, r3, #288 + add r8, r3, #288 + vmov.i32 q0, #19 + vmov.i32 q1, #0 + vmov.i32 q2, #1 + vzip.i32 q1, q2 + vld1.8 {d4-d5}, [r7, : 128]! + vld1.8 {d6-d7}, [r7, : 128]! + vld1.8 {d9}, [r7, : 64] + vld1.8 {d10-d11}, [r6, : 128]! + add r7, sp, #384 + vld1.8 {d12-d13}, [r6, : 128]! + vmul.i32 q7, q2, q0 + vld1.8 {d8}, [r6, : 64] + vext.32 d17, d11, d10, #1 + vmul.i32 q9, q3, q0 + vext.32 d16, d10, d8, #1 + vshl.u32 q10, q5, q1 + vext.32 d22, d14, d4, #1 + vext.32 d24, d18, d6, #1 + vshl.u32 q13, q6, q1 + vshl.u32 d28, d8, d2 + vrev64.i32 d22, d22 + vmul.i32 d1, d9, d1 + vrev64.i32 d24, d24 + vext.32 d29, d8, d13, #1 + vext.32 d0, d1, d9, #1 + vrev64.i32 d0, d0 + vext.32 d2, d9, d1, #1 + vext.32 d23, d15, d5, #1 + vmull.s32 q4, d20, d4 + vrev64.i32 d23, d23 + vmlal.s32 q4, d21, d1 + vrev64.i32 d2, d2 + vmlal.s32 q4, d26, d19 + vext.32 d3, d5, d15, #1 + vmlal.s32 q4, d27, d18 + vrev64.i32 d3, d3 + vmlal.s32 q4, d28, d15 + vext.32 d14, d12, d11, #1 + vmull.s32 q5, d16, d23 + vext.32 d15, d13, d12, #1 + vmlal.s32 q5, d17, d4 + vst1.8 d8, [r7, : 64]! + vmlal.s32 q5, d14, d1 + vext.32 d12, d9, d8, #0 + vmlal.s32 q5, d15, d19 + vmov.i64 d13, #0 + vmlal.s32 q5, d29, d18 + vext.32 d25, d19, d7, #1 + vmlal.s32 q6, d20, d5 + vrev64.i32 d25, d25 + vmlal.s32 q6, d21, d4 + vst1.8 d11, [r7, : 64]! + vmlal.s32 q6, d26, d1 + vext.32 d9, d10, d10, #0 + vmlal.s32 q6, d27, d19 + vmov.i64 d8, #0 + vmlal.s32 q6, d28, d18 + vmlal.s32 q4, d16, d24 + vmlal.s32 q4, d17, d5 + vmlal.s32 q4, d14, d4 + vst1.8 d12, [r7, : 64]! + vmlal.s32 q4, d15, d1 + vext.32 d10, d13, d12, #0 + vmlal.s32 q4, d29, d19 + vmov.i64 d11, #0 + vmlal.s32 q5, d20, d6 + vmlal.s32 q5, d21, d5 + vmlal.s32 q5, d26, d4 + vext.32 d13, d8, d8, #0 + vmlal.s32 q5, d27, d1 + vmov.i64 d12, #0 + vmlal.s32 q5, d28, d19 + vst1.8 d9, [r7, : 64]! + vmlal.s32 q6, d16, d25 + vmlal.s32 q6, d17, d6 + vst1.8 d10, [r7, : 64] + vmlal.s32 q6, d14, d5 + vext.32 d8, d11, d10, #0 + vmlal.s32 q6, d15, d4 + vmov.i64 d9, #0 + vmlal.s32 q6, d29, d1 + vmlal.s32 q4, d20, d7 + vmlal.s32 q4, d21, d6 + vmlal.s32 q4, d26, d5 + vext.32 d11, d12, d12, #0 + vmlal.s32 q4, d27, d4 + vmov.i64 d10, #0 + vmlal.s32 q4, d28, d1 + vmlal.s32 q5, d16, d0 + sub r6, r7, #32 + vmlal.s32 q5, d17, d7 + vmlal.s32 q5, d14, d6 + vext.32 d30, d9, d8, #0 + vmlal.s32 q5, d15, d5 + vld1.8 {d31}, [r6, : 64]! + vmlal.s32 q5, d29, d4 + vmlal.s32 q15, d20, d0 + vext.32 d0, d6, d18, #1 + vmlal.s32 q15, d21, d25 + vrev64.i32 d0, d0 + vmlal.s32 q15, d26, d24 + vext.32 d1, d7, d19, #1 + vext.32 d7, d10, d10, #0 + vmlal.s32 q15, d27, d23 + vrev64.i32 d1, d1 + vld1.8 {d6}, [r6, : 64] + vmlal.s32 q15, d28, d22 + vmlal.s32 q3, d16, d4 + add r6, r6, #24 + vmlal.s32 q3, d17, d2 + vext.32 d4, d31, d30, #0 + vmov d17, d11 + vmlal.s32 q3, d14, d1 + vext.32 d11, d13, d13, #0 + vext.32 d13, d30, d30, #0 + vmlal.s32 q3, d15, d0 + vext.32 d1, d8, d8, #0 + vmlal.s32 q3, d29, d3 + vld1.8 {d5}, [r6, : 64] + sub r6, r6, #16 + vext.32 d10, d6, d6, #0 + vmov.i32 q1, #0xffffffff + vshl.i64 q4, q1, #25 + add r7, sp, #480 + vld1.8 {d14-d15}, [r7, : 128] + vadd.i64 q9, q2, q7 + vshl.i64 q1, q1, #26 + vshr.s64 q10, q9, #26 + vld1.8 {d0}, [r6, : 64]! + vadd.i64 q5, q5, q10 + vand q9, q9, q1 + vld1.8 {d16}, [r6, : 64]! 
+ add r6, sp, #496 + vld1.8 {d20-d21}, [r6, : 128] + vadd.i64 q11, q5, q10 + vsub.i64 q2, q2, q9 + vshr.s64 q9, q11, #25 + vext.32 d12, d5, d4, #0 + vand q11, q11, q4 + vadd.i64 q0, q0, q9 + vmov d19, d7 + vadd.i64 q3, q0, q7 + vsub.i64 q5, q5, q11 + vshr.s64 q11, q3, #26 + vext.32 d18, d11, d10, #0 + vand q3, q3, q1 + vadd.i64 q8, q8, q11 + vadd.i64 q11, q8, q10 + vsub.i64 q0, q0, q3 + vshr.s64 q3, q11, #25 + vand q11, q11, q4 + vadd.i64 q3, q6, q3 + vadd.i64 q6, q3, q7 + vsub.i64 q8, q8, q11 + vshr.s64 q11, q6, #26 + vand q6, q6, q1 + vadd.i64 q9, q9, q11 + vadd.i64 d25, d19, d21 + vsub.i64 q3, q3, q6 + vshr.s64 d23, d25, #25 + vand q4, q12, q4 + vadd.i64 d21, d23, d23 + vshl.i64 d25, d23, #4 + vadd.i64 d21, d21, d23 + vadd.i64 d25, d25, d21 + vadd.i64 d4, d4, d25 + vzip.i32 q0, q8 + vadd.i64 d12, d4, d14 + add r6, r8, #8 + vst1.8 d0, [r6, : 64] + vsub.i64 d19, d19, d9 + add r6, r6, #16 + vst1.8 d16, [r6, : 64] + vshr.s64 d22, d12, #26 + vand q0, q6, q1 + vadd.i64 d10, d10, d22 + vzip.i32 q3, q9 + vsub.i64 d4, d4, d0 + sub r6, r6, #8 + vst1.8 d6, [r6, : 64] + add r6, r6, #16 + vst1.8 d18, [r6, : 64] + vzip.i32 q2, q5 + sub r6, r6, #32 + vst1.8 d4, [r6, : 64] + subs r5, r5, #1 + bhi .Lsquaringloop +.Lskipsquaringloop: + mov r2, r2 + add r5, r3, #288 + add r6, r3, #144 + vmov.i32 q0, #19 + vmov.i32 q1, #0 + vmov.i32 q2, #1 + vzip.i32 q1, q2 + vld1.8 {d4-d5}, [r5, : 128]! + vld1.8 {d6-d7}, [r5, : 128]! + vld1.8 {d9}, [r5, : 64] + vld1.8 {d10-d11}, [r2, : 128]! + add r5, sp, #384 + vld1.8 {d12-d13}, [r2, : 128]! + vmul.i32 q7, q2, q0 + vld1.8 {d8}, [r2, : 64] + vext.32 d17, d11, d10, #1 + vmul.i32 q9, q3, q0 + vext.32 d16, d10, d8, #1 + vshl.u32 q10, q5, q1 + vext.32 d22, d14, d4, #1 + vext.32 d24, d18, d6, #1 + vshl.u32 q13, q6, q1 + vshl.u32 d28, d8, d2 + vrev64.i32 d22, d22 + vmul.i32 d1, d9, d1 + vrev64.i32 d24, d24 + vext.32 d29, d8, d13, #1 + vext.32 d0, d1, d9, #1 + vrev64.i32 d0, d0 + vext.32 d2, d9, d1, #1 + vext.32 d23, d15, d5, #1 + vmull.s32 q4, d20, d4 + vrev64.i32 d23, d23 + vmlal.s32 q4, d21, d1 + vrev64.i32 d2, d2 + vmlal.s32 q4, d26, d19 + vext.32 d3, d5, d15, #1 + vmlal.s32 q4, d27, d18 + vrev64.i32 d3, d3 + vmlal.s32 q4, d28, d15 + vext.32 d14, d12, d11, #1 + vmull.s32 q5, d16, d23 + vext.32 d15, d13, d12, #1 + vmlal.s32 q5, d17, d4 + vst1.8 d8, [r5, : 64]! + vmlal.s32 q5, d14, d1 + vext.32 d12, d9, d8, #0 + vmlal.s32 q5, d15, d19 + vmov.i64 d13, #0 + vmlal.s32 q5, d29, d18 + vext.32 d25, d19, d7, #1 + vmlal.s32 q6, d20, d5 + vrev64.i32 d25, d25 + vmlal.s32 q6, d21, d4 + vst1.8 d11, [r5, : 64]! + vmlal.s32 q6, d26, d1 + vext.32 d9, d10, d10, #0 + vmlal.s32 q6, d27, d19 + vmov.i64 d8, #0 + vmlal.s32 q6, d28, d18 + vmlal.s32 q4, d16, d24 + vmlal.s32 q4, d17, d5 + vmlal.s32 q4, d14, d4 + vst1.8 d12, [r5, : 64]! + vmlal.s32 q4, d15, d1 + vext.32 d10, d13, d12, #0 + vmlal.s32 q4, d29, d19 + vmov.i64 d11, #0 + vmlal.s32 q5, d20, d6 + vmlal.s32 q5, d21, d5 + vmlal.s32 q5, d26, d4 + vext.32 d13, d8, d8, #0 + vmlal.s32 q5, d27, d1 + vmov.i64 d12, #0 + vmlal.s32 q5, d28, d19 + vst1.8 d9, [r5, : 64]! 
+ vmlal.s32 q6, d16, d25 + vmlal.s32 q6, d17, d6 + vst1.8 d10, [r5, : 64] + vmlal.s32 q6, d14, d5 + vext.32 d8, d11, d10, #0 + vmlal.s32 q6, d15, d4 + vmov.i64 d9, #0 + vmlal.s32 q6, d29, d1 + vmlal.s32 q4, d20, d7 + vmlal.s32 q4, d21, d6 + vmlal.s32 q4, d26, d5 + vext.32 d11, d12, d12, #0 + vmlal.s32 q4, d27, d4 + vmov.i64 d10, #0 + vmlal.s32 q4, d28, d1 + vmlal.s32 q5, d16, d0 + sub r2, r5, #32 + vmlal.s32 q5, d17, d7 + vmlal.s32 q5, d14, d6 + vext.32 d30, d9, d8, #0 + vmlal.s32 q5, d15, d5 + vld1.8 {d31}, [r2, : 64]! + vmlal.s32 q5, d29, d4 + vmlal.s32 q15, d20, d0 + vext.32 d0, d6, d18, #1 + vmlal.s32 q15, d21, d25 + vrev64.i32 d0, d0 + vmlal.s32 q15, d26, d24 + vext.32 d1, d7, d19, #1 + vext.32 d7, d10, d10, #0 + vmlal.s32 q15, d27, d23 + vrev64.i32 d1, d1 + vld1.8 {d6}, [r2, : 64] + vmlal.s32 q15, d28, d22 + vmlal.s32 q3, d16, d4 + add r2, r2, #24 + vmlal.s32 q3, d17, d2 + vext.32 d4, d31, d30, #0 + vmov d17, d11 + vmlal.s32 q3, d14, d1 + vext.32 d11, d13, d13, #0 + vext.32 d13, d30, d30, #0 + vmlal.s32 q3, d15, d0 + vext.32 d1, d8, d8, #0 + vmlal.s32 q3, d29, d3 + vld1.8 {d5}, [r2, : 64] + sub r2, r2, #16 + vext.32 d10, d6, d6, #0 + vmov.i32 q1, #0xffffffff + vshl.i64 q4, q1, #25 + add r5, sp, #480 + vld1.8 {d14-d15}, [r5, : 128] + vadd.i64 q9, q2, q7 + vshl.i64 q1, q1, #26 + vshr.s64 q10, q9, #26 + vld1.8 {d0}, [r2, : 64]! + vadd.i64 q5, q5, q10 + vand q9, q9, q1 + vld1.8 {d16}, [r2, : 64]! + add r2, sp, #496 + vld1.8 {d20-d21}, [r2, : 128] + vadd.i64 q11, q5, q10 + vsub.i64 q2, q2, q9 + vshr.s64 q9, q11, #25 + vext.32 d12, d5, d4, #0 + vand q11, q11, q4 + vadd.i64 q0, q0, q9 + vmov d19, d7 + vadd.i64 q3, q0, q7 + vsub.i64 q5, q5, q11 + vshr.s64 q11, q3, #26 + vext.32 d18, d11, d10, #0 + vand q3, q3, q1 + vadd.i64 q8, q8, q11 + vadd.i64 q11, q8, q10 + vsub.i64 q0, q0, q3 + vshr.s64 q3, q11, #25 + vand q11, q11, q4 + vadd.i64 q3, q6, q3 + vadd.i64 q6, q3, q7 + vsub.i64 q8, q8, q11 + vshr.s64 q11, q6, #26 + vand q6, q6, q1 + vadd.i64 q9, q9, q11 + vadd.i64 d25, d19, d21 + vsub.i64 q3, q3, q6 + vshr.s64 d23, d25, #25 + vand q4, q12, q4 + vadd.i64 d21, d23, d23 + vshl.i64 d25, d23, #4 + vadd.i64 d21, d21, d23 + vadd.i64 d25, d25, d21 + vadd.i64 d4, d4, d25 + vzip.i32 q0, q8 + vadd.i64 d12, d4, d14 + add r2, r6, #8 + vst1.8 d0, [r2, : 64] + vsub.i64 d19, d19, d9 + add r2, r2, #16 + vst1.8 d16, [r2, : 64] + vshr.s64 d22, d12, #26 + vand q0, q6, q1 + vadd.i64 d10, d10, d22 + vzip.i32 q3, q9 + vsub.i64 d4, d4, d0 + sub r2, r2, #8 + vst1.8 d6, [r2, : 64] + add r2, r2, #16 + vst1.8 d18, [r2, : 64] + vzip.i32 q2, q5 + sub r2, r2, #32 + vst1.8 d4, [r2, : 64] + cmp r4, #0 + beq .Lskippostcopy + add r2, r3, #144 + mov r4, r4 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d4}, [r2, : 64] + vst1.8 {d0-d1}, [r4, : 128]! + vst1.8 {d2-d3}, [r4, : 128]! + vst1.8 d4, [r4, : 64] +.Lskippostcopy: + cmp r1, #1 + bne .Lskipfinalcopy + add r2, r3, #288 + add r4, r3, #144 + vld1.8 {d0-d1}, [r2, : 128]! + vld1.8 {d2-d3}, [r2, : 128]! + vld1.8 {d4}, [r2, : 64] + vst1.8 {d0-d1}, [r4, : 128]! + vst1.8 {d2-d3}, [r4, : 128]! 
+ vst1.8 d4, [r4, : 64] +.Lskipfinalcopy: + add r1, r1, #1 + cmp r1, #12 + blo .Linvertloop + add r1, r3, #144 + ldr r2, [r1], #4 + ldr r3, [r1], #4 + ldr r4, [r1], #4 + ldr r5, [r1], #4 + ldr r6, [r1], #4 + ldr r7, [r1], #4 + ldr r8, [r1], #4 + ldr r9, [r1], #4 + ldr r10, [r1], #4 + ldr r1, [r1] + add r11, r1, r1, LSL #4 + add r11, r11, r1, LSL #1 + add r11, r11, #16777216 + mov r11, r11, ASR #25 + add r11, r11, r2 + mov r11, r11, ASR #26 + add r11, r11, r3 + mov r11, r11, ASR #25 + add r11, r11, r4 + mov r11, r11, ASR #26 + add r11, r11, r5 + mov r11, r11, ASR #25 + add r11, r11, r6 + mov r11, r11, ASR #26 + add r11, r11, r7 + mov r11, r11, ASR #25 + add r11, r11, r8 + mov r11, r11, ASR #26 + add r11, r11, r9 + mov r11, r11, ASR #25 + add r11, r11, r10 + mov r11, r11, ASR #26 + add r11, r11, r1 + mov r11, r11, ASR #25 + add r2, r2, r11 + add r2, r2, r11, LSL #1 + add r2, r2, r11, LSL #4 + mov r11, r2, ASR #26 + add r3, r3, r11 + sub r2, r2, r11, LSL #26 + mov r11, r3, ASR #25 + add r4, r4, r11 + sub r3, r3, r11, LSL #25 + mov r11, r4, ASR #26 + add r5, r5, r11 + sub r4, r4, r11, LSL #26 + mov r11, r5, ASR #25 + add r6, r6, r11 + sub r5, r5, r11, LSL #25 + mov r11, r6, ASR #26 + add r7, r7, r11 + sub r6, r6, r11, LSL #26 + mov r11, r7, ASR #25 + add r8, r8, r11 + sub r7, r7, r11, LSL #25 + mov r11, r8, ASR #26 + add r9, r9, r11 + sub r8, r8, r11, LSL #26 + mov r11, r9, ASR #25 + add r10, r10, r11 + sub r9, r9, r11, LSL #25 + mov r11, r10, ASR #26 + add r1, r1, r11 + sub r10, r10, r11, LSL #26 + mov r11, r1, ASR #25 + sub r1, r1, r11, LSL #25 + add r2, r2, r3, LSL #26 + mov r3, r3, LSR #6 + add r3, r3, r4, LSL #19 + mov r4, r4, LSR #13 + add r4, r4, r5, LSL #13 + mov r5, r5, LSR #19 + add r5, r5, r6, LSL #6 + add r6, r7, r8, LSL #25 + mov r7, r8, LSR #7 + add r7, r7, r9, LSL #19 + mov r8, r9, LSR #13 + add r8, r8, r10, LSL #12 + mov r9, r10, LSR #20 + add r1, r9, r1, LSL #6 + str r2, [r0] + str r3, [r0, #4] + str r4, [r0, #8] + str r5, [r0, #12] + str r6, [r0, #16] + str r7, [r0, #20] + str r8, [r0, #24] + str r1, [r0, #28] + movw r0, #0 + mov sp, ip + pop {r4-r11, pc} +ENDPROC(curve25519_neon) +#endif diff --git a/lib/zinc/curve25519/curve25519.c b/lib/zinc/curve25519/curve25519.c index 074fbb1c2149..0ebe16f8c638 100644 --- a/lib/zinc/curve25519/curve25519.c +++ b/lib/zinc/curve25519/curve25519.c @@ -22,6 +22,8 @@ #if defined(CONFIG_ZINC_ARCH_X86_64) #include "curve25519-x86_64-glue.c" +#elif defined(CONFIG_ZINC_ARCH_ARM) +#include "curve25519-arm-glue.c" #else static bool *const curve25519_nobs[] __initconst = { }; static void __init curve25519_fpu_init(void) From patchwork Fri Mar 22 06:29:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Herbert Xu X-Patchwork-Id: 160843 Delivered-To: patch@linaro.org Received: by 2002:a02:c6d8:0:0:0:0:0 with SMTP id r24csp451853jan; Thu, 21 Mar 2019 23:39:16 -0700 (PDT) X-Google-Smtp-Source: APXvYqyJyuJDFc4VyLf/Ihbt27X6GWnQdOBFrA2yYccLqVBf5vskyc9cHYHXVq8cgUSn/Ik7oW9C X-Received: by 2002:a63:6e02:: with SMTP id j2mr7035982pgc.229.1553236755974; Thu, 21 Mar 2019 23:39:15 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1553236755; cv=none; d=google.com; s=arc-20160816; b=vYBKz1WNqPqsuaAodgVdaEVEogwamctwPzP6WNbpNrS07OuerxpmoTsW1RPUJRWhKT H0kRTvZCY145CzWgxCgdSbHqoMM8jJutqQsNcYatvGrzACY9YP1tpo7YHtoPU8is4XfF 3Xyqz9j4sSKHJZvaREydv3Udjc+zRGLmbb4Fel1PRUsRwzXtXt+NSJKdAtLC0SN6oi3n jKgOGrOnERgp2t97JmgKpN7lNxz2OE0v0FxieOt+Yr2Q7QLn/SCxvKgxyPz5HyE3vSwY 
/OqpAqwgN45kM4F0KFTZlpnHJjlRUo9iO5IIHxdHipoGy1h8y4UnlaTCQIEWObKaCkqA DZFQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:date:from:message-id:to:references :subject; bh=EeTX8ff8xhYtSMVmpPGJ+0VG0OdW7Gic80dH4K9z6XE=; b=bMahO+ZpAUJ+en/IvkoJ875BLpWhz9oUOw/ziUjLzhv43num+WlPIkSewy8dZ2wQnn qfcxNuGo7kVy5LItL8PmZit/wmy9M8Dg+CvIwH9k19wB2q+fD1wiqBSkXFM2qecrCBaS LDeEhrb6ovgy7nbbrAb00iaQzxWaImTxf6pzhxw7jnd8iHisRwEgEtllC0RTq/ssPHD2 AXZ41c9hXX0EsIPhIxrNCoXg0HoG2cXhqzRPTH0OCbxHlW1ONiLbFmbws5Lv0Hfyn5Gd dAm/DM7UpIdhAw+kh6LYkX0UO/gFvAhTy0J82vTLZUfjoC/W4mFNPPN2wT6Jv0zw3tUW VUyw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id b4si6164618pfj.16.2019.03.21.23.39.15; Thu, 21 Mar 2019 23:39:15 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-crypto-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-crypto-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727745AbfCVGjP (ORCPT + 3 others); Fri, 22 Mar 2019 02:39:15 -0400 Received: from orcrist.hmeau.com ([104.223.48.154]:44304 "EHLO deadmen.hmeau.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727461AbfCVGjP (ORCPT ); Fri, 22 Mar 2019 02:39:15 -0400 Received: from gondobar.mordor.me.apana.org.au ([192.168.128.4] helo=gondobar) by deadmen.hmeau.com with esmtps (Exim 4.89 #2 (Debian)) id 1h7Dh3-0002ZM-Aw; Fri, 22 Mar 2019 14:30:20 +0800 Received: from herbert by gondobar with local (Exim 4.89) (envelope-from ) id 1h7Dgk-0001K4-Mg; Fri, 22 Mar 2019 14:29:59 +0800 Subject: [PATCH 17/17] security/keys: rewrite big_key crypto to use Zinc References: <20190322062740.nrwfx2rvmt7lzotj@gondor.apana.org.au> To: Linus Torvalds , "David S. Miller" , "Jason A. Donenfeld" , Eric Biggers , Ard Biesheuvel , Linux Crypto Mailing List , linux-fscrypt@vger.kernel.org, linux-arm-kernel@lists.infradead.org, LKML , Paul Crowley , Greg Kaiser , Samuel Neves , Tomer Ashur , Martin Willi Message-Id: From: Herbert Xu Date: Fri, 22 Mar 2019 14:29:58 +0800 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Jason A. Donenfeld A while back, I noticed that the crypto and crypto API usage in big_keys was entirely broken in multiple ways, so I rewrote it. Now, I'm rewriting it again, but this time using Zinc's ChaCha20Poly1305 function. This makes the file considerably simpler; the diffstat alone should justify this commit. It should also be faster, since it no longer requires a mutex around the "aead api object" (nor allocations), allowing us to encrypt multiple items in parallel. We also benefit from being able to pass any type of pointer to Zinc, so we can get rid of the ridiculously complex custom page allocator that big_key really doesn't need. Signed-off-by: Jason A.
Cc: David Howells
Cc: Samuel Neves
Cc: Jean-Philippe Aumasson
Cc: Andy Lutomirski
Cc: Greg KH
Cc: Andrew Morton
Cc: Linus Torvalds
Cc: kernel-hardening@lists.openwall.com
Signed-off-by: Herbert Xu
---
 security/keys/Kconfig   |   4
 security/keys/big_key.c | 230 +++++------------------------------------------
 2 files changed, 28 insertions(+), 206 deletions(-)

diff --git a/security/keys/Kconfig b/security/keys/Kconfig
index 6462e6654ccf..66ff26298fb3 100644
--- a/security/keys/Kconfig
+++ b/security/keys/Kconfig
@@ -45,9 +45,7 @@ config BIG_KEYS
 	bool "Large payload keys"
 	depends on KEYS
 	depends on TMPFS
-	select CRYPTO
-	select CRYPTO_AES
-	select CRYPTO_GCM
+	select ZINC_CHACHA20POLY1305
 	help
 	  This option provides support for holding large keys within the kernel
 	  (for example Kerberos ticket caches).  The data may be stored out to
diff --git a/security/keys/big_key.c b/security/keys/big_key.c
index 2806e70d7f8f..bb15360221d8 100644
--- a/security/keys/big_key.c
+++ b/security/keys/big_key.c
@@ -1,6 +1,6 @@
 /* Large capacity key type
  *
- * Copyright (C) 2017 Jason A. Donenfeld . All Rights Reserved.
+ * Copyright (C) 2017-2018 Jason A. Donenfeld . All Rights Reserved.
  * Copyright (C) 2013 Red Hat, Inc. All Rights Reserved.
  * Written by David Howells (dhowells@redhat.com)
  *
@@ -16,20 +16,10 @@
 #include 
 #include 
 #include 
-#include 
 #include 
-#include 
 #include 
 #include 
-#include 
-#include 
-
-struct big_key_buf {
-	unsigned int nr_pages;
-	void *virt;
-	struct scatterlist *sg;
-	struct page *pages[];
-};
+#include 
 
 /*
  * Layout of key payload words.
@@ -42,14 +32,6 @@ enum {
 };
 
 /*
- * Crypto operation with big_key data
- */
-enum big_key_op {
-	BIG_KEY_ENC,
-	BIG_KEY_DEC,
-};
-
-/*
  * If the data is under this limit, there's no point creating a shm file to
  * hold it as the permanently resident metadata for the shmem fs will be at
  * least as large as the data.
@@ -57,16 +39,6 @@ enum big_key_op {
 
 #define BIG_KEY_FILE_THRESHOLD (sizeof(struct inode) + sizeof(struct dentry))
 
 /*
- * Key size for big_key data encryption
- */
-#define ENC_KEY_SIZE 32
-
-/*
- * Authentication tag length
- */
-#define ENC_AUTHTAG_SIZE 16
-
-/*
  * big_key defined keys take an arbitrary string as the description and an
  * arbitrary blob of data as the payload
@@ -79,136 +51,20 @@ struct key_type key_type_big_key = {
 	.destroy		= big_key_destroy,
 	.describe		= big_key_describe,
 	.read			= big_key_read,
-	/* no ->update(); don't add it without changing big_key_crypt() nonce */
+	/* no ->update(); don't add it without changing chacha20poly1305's nonce */
 };
 
 /*
- * Crypto names for big_key data authenticated encryption
- */
-static const char big_key_alg_name[] = "gcm(aes)";
-#define BIG_KEY_IV_SIZE	GCM_AES_IV_SIZE
-
-/*
- * Crypto algorithms for big_key data authenticated encryption
- */
-static struct crypto_aead *big_key_aead;
-
-/*
- * Since changing the key affects the entire object, we need a mutex.
- */
-static DEFINE_MUTEX(big_key_aead_lock);
-
-/*
- * Encrypt/decrypt big_key data
- */
-static int big_key_crypt(enum big_key_op op, struct big_key_buf *buf, size_t datalen, u8 *key)
-{
-	int ret;
-	struct aead_request *aead_req;
-	/* We always use a zero nonce. The reason we can get away with this is
-	 * because we're using a different randomly generated key for every
-	 * different encryption. Notably, too, key_type_big_key doesn't define
-	 * an .update function, so there's no chance we'll wind up reusing the
-	 * key to encrypt updated data. Simply put: one key, one encryption.
-	 */
-	u8 zero_nonce[BIG_KEY_IV_SIZE];
-
-	aead_req = aead_request_alloc(big_key_aead, GFP_KERNEL);
-	if (!aead_req)
-		return -ENOMEM;
-
-	memset(zero_nonce, 0, sizeof(zero_nonce));
-	aead_request_set_crypt(aead_req, buf->sg, buf->sg, datalen, zero_nonce);
-	aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
-	aead_request_set_ad(aead_req, 0);
-
-	mutex_lock(&big_key_aead_lock);
-	if (crypto_aead_setkey(big_key_aead, key, ENC_KEY_SIZE)) {
-		ret = -EAGAIN;
-		goto error;
-	}
-	if (op == BIG_KEY_ENC)
-		ret = crypto_aead_encrypt(aead_req);
-	else
-		ret = crypto_aead_decrypt(aead_req);
-error:
-	mutex_unlock(&big_key_aead_lock);
-	aead_request_free(aead_req);
-	return ret;
-}
-
-/*
- * Free up the buffer.
- */
-static void big_key_free_buffer(struct big_key_buf *buf)
-{
-	unsigned int i;
-
-	if (buf->virt) {
-		memset(buf->virt, 0, buf->nr_pages * PAGE_SIZE);
-		vunmap(buf->virt);
-	}
-
-	for (i = 0; i < buf->nr_pages; i++)
-		if (buf->pages[i])
-			__free_page(buf->pages[i]);
-
-	kfree(buf);
-}
-
-/*
- * Allocate a buffer consisting of a set of pages with a virtual mapping
- * applied over them.
- */
-static void *big_key_alloc_buffer(size_t len)
-{
-	struct big_key_buf *buf;
-	unsigned int npg = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	unsigned int i, l;
-
-	buf = kzalloc(sizeof(struct big_key_buf) +
-		      sizeof(struct page) * npg +
-		      sizeof(struct scatterlist) * npg,
-		      GFP_KERNEL);
-	if (!buf)
-		return NULL;
-
-	buf->nr_pages = npg;
-	buf->sg = (void *)(buf->pages + npg);
-	sg_init_table(buf->sg, npg);
-
-	for (i = 0; i < buf->nr_pages; i++) {
-		buf->pages[i] = alloc_page(GFP_KERNEL);
-		if (!buf->pages[i])
-			goto nomem;
-
-		l = min_t(size_t, len, PAGE_SIZE);
-		sg_set_page(&buf->sg[i], buf->pages[i], l, 0);
-		len -= l;
-	}
-
-	buf->virt = vmap(buf->pages, buf->nr_pages, VM_MAP, PAGE_KERNEL);
-	if (!buf->virt)
-		goto nomem;
-
-	return buf;
-
-nomem:
-	big_key_free_buffer(buf);
-	return NULL;
-}
-
-/*
  * Preparse a big key
  */
 int big_key_preparse(struct key_preparsed_payload *prep)
 {
-	struct big_key_buf *buf;
 	struct path *path = (struct path *)&prep->payload.data[big_key_path];
 	struct file *file;
-	u8 *enckey;
+	u8 *buf, *enckey;
 	ssize_t written;
-	size_t datalen = prep->datalen, enclen = datalen + ENC_AUTHTAG_SIZE;
+	size_t datalen = prep->datalen;
+	size_t enclen = datalen + CHACHA20POLY1305_AUTHTAG_SIZE;
 	int ret;
 
 	if (datalen <= 0 || datalen > 1024 * 1024 || !prep->data)
@@ -224,28 +80,28 @@ int big_key_preparse(struct key_preparsed_payload *prep)
 		 * to be swapped out if needed.
 		 *
 		 * File content is stored encrypted with randomly generated key.
+		 * Since the key is random for each file, we can set the nonce
+		 * to zero, provided we never define a ->update() call.
 		 */
 		loff_t pos = 0;
 
-		buf = big_key_alloc_buffer(enclen);
+		buf = kvmalloc(enclen, GFP_KERNEL);
 		if (!buf)
 			return -ENOMEM;
-		memcpy(buf->virt, prep->data, datalen);
 
 		/* generate random key */
-		enckey = kmalloc(ENC_KEY_SIZE, GFP_KERNEL);
+		enckey = kmalloc(CHACHA20POLY1305_KEY_SIZE, GFP_KERNEL);
 		if (!enckey) {
 			ret = -ENOMEM;
 			goto error;
 		}
-		ret = get_random_bytes_wait(enckey, ENC_KEY_SIZE);
+		ret = get_random_bytes_wait(enckey, CHACHA20POLY1305_KEY_SIZE);
 		if (unlikely(ret))
 			goto err_enckey;
 
-		/* encrypt aligned data */
-		ret = big_key_crypt(BIG_KEY_ENC, buf, datalen, enckey);
-		if (ret)
-			goto err_enckey;
+		/* encrypt data */
+		chacha20poly1305_encrypt(buf, prep->data, datalen, NULL, 0,
+					 0, enckey);
 
 		/* save aligned data to file */
 		file = shmem_kernel_file_setup("", enclen, 0);
@@ -254,7 +110,7 @@ int big_key_preparse(struct key_preparsed_payload *prep)
 			goto err_enckey;
 		}
 
-		written = kernel_write(file, buf->virt, enclen, &pos);
+		written = kernel_write(file, buf, enclen, &pos);
 		if (written != enclen) {
 			ret = written;
 			if (written >= 0)
@@ -269,7 +125,7 @@ int big_key_preparse(struct key_preparsed_payload *prep)
 		*path = file->f_path;
 		path_get(path);
 		fput(file);
-		big_key_free_buffer(buf);
+		kvfree(buf);
 	} else {
 		/* Just store the data in a buffer */
 		void *data = kmalloc(datalen, GFP_KERNEL);
@@ -287,7 +143,7 @@ int big_key_preparse(struct key_preparsed_payload *prep)
 err_enckey:
 	kzfree(enckey);
 error:
-	big_key_free_buffer(buf);
+	kvfree(buf);
 	return ret;
 }
 
@@ -365,14 +221,13 @@ long big_key_read(const struct key *key, char __user *buffer, size_t buflen)
 		return datalen;
 
 	if (datalen > BIG_KEY_FILE_THRESHOLD) {
-		struct big_key_buf *buf;
 		struct path *path = (struct path *)&key->payload.data[big_key_path];
 		struct file *file;
-		u8 *enckey = (u8 *)key->payload.data[big_key_data];
-		size_t enclen = datalen + ENC_AUTHTAG_SIZE;
+		u8 *buf, *enckey = (u8 *)key->payload.data[big_key_data];
+		size_t enclen = datalen + CHACHA20POLY1305_AUTHTAG_SIZE;
 		loff_t pos = 0;
 
-		buf = big_key_alloc_buffer(enclen);
+		buf = kvmalloc(enclen, GFP_KERNEL);
 		if (!buf)
 			return -ENOMEM;
 
@@ -383,26 +238,27 @@ long big_key_read(const struct key *key, char __user *buffer, size_t buflen)
 		}
 
 		/* read file to kernel and decrypt */
-		ret = kernel_read(file, buf->virt, enclen, &pos);
+		ret = kernel_read(file, buf, enclen, &pos);
 		if (ret >= 0 && ret != enclen) {
 			ret = -EIO;
 			goto err_fput;
 		}
 
-		ret = big_key_crypt(BIG_KEY_DEC, buf, enclen, enckey);
-		if (ret)
+		ret = chacha20poly1305_decrypt(buf, buf, enclen, NULL, 0, 0,
+					       enckey) ? 0 : -EINVAL;
+		if (unlikely(ret))
 			goto err_fput;
 
 		ret = datalen;
 
 		/* copy decrypted data to user */
-		if (copy_to_user(buffer, buf->virt, datalen) != 0)
+		if (copy_to_user(buffer, buf, datalen) != 0)
 			ret = -EFAULT;
 
 err_fput:
 		fput(file);
 error:
-		big_key_free_buffer(buf);
+		kvfree(buf);
 	} else {
 		ret = datalen;
 		if (copy_to_user(buffer, key->payload.data[big_key_data],
@@ -418,39 +274,7 @@ long big_key_read(const struct key *key, char __user *buffer, size_t buflen)
  */
 static int __init big_key_init(void)
 {
-	int ret;
-
-	/* init block cipher */
-	big_key_aead = crypto_alloc_aead(big_key_alg_name, 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(big_key_aead)) {
-		ret = PTR_ERR(big_key_aead);
-		pr_err("Can't alloc crypto: %d\n", ret);
-		return ret;
-	}
-
-	if (unlikely(crypto_aead_ivsize(big_key_aead) != BIG_KEY_IV_SIZE)) {
-		WARN(1, "big key algorithm changed?");
-		ret = -EINVAL;
-		goto free_aead;
-	}
-
-	ret = crypto_aead_setauthsize(big_key_aead, ENC_AUTHTAG_SIZE);
-	if (ret < 0) {
-		pr_err("Can't set crypto auth tag len: %d\n", ret);
-		goto free_aead;
-	}
-
-	ret = register_key_type(&key_type_big_key);
-	if (ret < 0) {
-		pr_err("Can't register type: %d\n", ret);
-		goto free_aead;
-	}
-
-	return 0;
-
-free_aead:
-	crypto_free_aead(big_key_aead);
-	return ret;
+	return register_key_type(&key_type_big_key);
 }
 
 late_initcall(big_key_init);
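To summarize the new flow outside the context of the diff, the pattern big_key now follows with Zinc's one-shot ChaCha20Poly1305 interface is roughly the sketch below: one freshly generated random key per object, a fixed zero nonce, no AEAD tfm object, and no mutex. This is illustrative only and not part of the patch; the function names are made up, the header path is an assumption, and the call signatures follow those visible in the diff above.

	#include <linux/mm.h>			/* kvmalloc/kvfree */
	#include <linux/random.h>		/* get_random_bytes_wait */
	#include <linux/types.h>
	#include <zinc/chacha20poly1305.h>	/* assumed header name */

	/* Seal @len bytes of @data into a freshly kvmalloc'd buffer of
	 * @len + CHACHA20POLY1305_AUTHTAG_SIZE bytes, generating the random
	 * @key on the way. The key is used exactly once, so a zero nonce is
	 * safe (and big_key never defines ->update(), so it stays that way).
	 */
	static int example_seal(u8 *key, u8 **out, const u8 *data, size_t len)
	{
		u8 *buf;
		int ret;

		buf = kvmalloc(len + CHACHA20POLY1305_AUTHTAG_SIZE, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		ret = get_random_bytes_wait(key, CHACHA20POLY1305_KEY_SIZE);
		if (unlikely(ret)) {
			kvfree(buf);
			return ret;
		}

		/* No associated data, zero nonce: one key, one encryption. */
		chacha20poly1305_encrypt(buf, data, len, NULL, 0, 0, key);
		*out = buf;
		return 0;
	}

	/* Open @enclen bytes in place; returns 0, or -EINVAL on a bad tag,
	 * mirroring how big_key_read() maps the bool result to an errno.
	 */
	static int example_open(u8 *buf, size_t enclen, const u8 *key)
	{
		return chacha20poly1305_decrypt(buf, buf, enclen, NULL, 0, 0,
						key) ? 0 : -EINVAL;
	}

Because the one-shot calls take plain pointers, the caller can hand over any kernel buffer directly, which is what lets the patch drop the scatterlist-backed page allocator that the crypto API path required.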