[v2,0/3] crypto: aria: implement aria-avx2 and aria-avx512

Message ID 20221105082021.17997-1-ap420073@gmail.com

Message

Taehee Yoo Nov. 5, 2022, 8:20 a.m. UTC
This patchset implements aria-avx2 and aria-avx512.
There are some differences between aria-avx, aria-avx2, and aria-avx512,
but none of them touch the core logic (the s-box and diffusion layers).

ARIA-AVX2
It supports 32-way parallel processing using 256-bit registers.
Like ARIA-AVX, it supports both an AES-NI based and a GFNI based s-box
layer algorithm.
These algorithms are the same as in ARIA-AVX, except that AES-NI has no
256-bit form, so each AES-NI instruction is issued twice, once per
128-bit half.
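
To illustrate the "issued twice" point, here is a rough userspace
intrinsics sketch (not the kernel assembly; the helper name is made up):
each 256-bit register is split into two 128-bit halves, the AES-NI
instruction runs on each half, and the halves are merged back.

#include <immintrin.h>

/*
 * Sketch only: AESENCLAST over 32 bytes.  AES-NI has no 256-bit form, so
 * the ymm value is handled as two xmm halves and the instruction is
 * issued twice; GFNI, by contrast, can operate on the full register.
 */
static inline __m256i aesenclast_256(__m256i x, __m128i key)
{
	__m128i lo = _mm256_castsi256_si128(x);
	__m128i hi = _mm256_extracti128_si256(x, 1);

	lo = _mm_aesenclast_si128(lo, key);
	hi = _mm_aesenclast_si128(hi, key);

	return _mm256_inserti128_si256(_mm256_castsi128_si256(lo), hi, 1);
}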

ARIA-AVX512
It supports 64-way parallel processing using 512-bit registers.
It supports only the GFNI based s-box layer algorithm.
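
GFNI instructions (VGF2P8AFFINEQB / VGF2P8AFFINEINVQB) do have full
512-bit forms, which is what makes a GFNI-only 64-way s-box layer
possible. A rough intrinsics sketch of the kind of primitive involved
(the matrix and the 0x63 constant are placeholders, not the values the
kernel assembly actually uses):

#include <immintrin.h>

/*
 * Sketch only: one GF(2^8) inverse-plus-affine step over 64 bytes at
 * once.  'matrix' is an 8x8 bit matrix replicated per 64-bit lane; the
 * real ARIA code picks matrices and constants so the result matches its
 * s-boxes.  0x63 is just an example (the AES affine constant).
 */
static inline __m512i sbox_layer_64way_step(__m512i x, __m512i matrix)
{
	return _mm512_gf2p8affineinv_epi64_epi8(x, matrix, 0x63);
}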

Benchmarks on an i3-12100
command: modprobe tcrypt mode=610 num_mb=8192

ARIA-AVX512 (128-bit and 256-bit)
    testing speed of multibuffer ecb(aria) (ecb-aria-avx512) encryption
tcrypt: 1 operation in 1504 cycles (1024 bytes)
tcrypt: 1 operation in 4595 cycles (4096 bytes)
tcrypt: 1 operation in 1763 cycles (1024 bytes)
tcrypt: 1 operation in 5540 cycles (4096 bytes)
    testing speed of multibuffer ecb(aria) (ecb-aria-avx512) decryption
tcrypt: 1 operation in 1502 cycles (1024 bytes)
tcrypt: 1 operation in 4615 cycles (4096 bytes)
tcrypt: 1 operation in 1759 cycles (1024 bytes)
tcrypt: 1 operation in 5554 cycles (4096 bytes)

ARIA-AVX2 with GFNI (128-bit and 256-bit)
    testing speed of multibuffer ecb(aria) (ecb-aria-avx2) encryption
tcrypt: 1 operation in 2003 cycles (1024 bytes)
tcrypt: 1 operation in 5867 cycles (4096 bytes)
tcrypt: 1 operation in 2358 cycles (1024 bytes)
tcrypt: 1 operation in 7295 cycles (4096 bytes)
    testing speed of multibuffer ecb(aria) (ecb-aria-avx2) decryption
tcrypt: 1 operation in 2004 cycles (1024 bytes)
tcrypt: 1 operation in 5956 cycles (4096 bytes)
tcrypt: 1 operation in 2409 cycles (1024 bytes)
tcrypt: 1 operation in 7564 cycles (4096 bytes)

ARIA-AVX with GFNI (128-bit and 256-bit)
    testing speed of multibuffer ecb(aria) (ecb-aria-avx) encryption
tcrypt: 1 operation in 2761 cycles (1024 bytes)
tcrypt: 1 operation in 9390 cycles (4096 bytes)
tcrypt: 1 operation in 3401 cycles (1024 bytes)
tcrypt: 1 operation in 11876 cycles (4096 bytes)
    testing speed of multibuffer ecb(aria) (ecb-aria-avx) decryption
tcrypt: 1 operation in 2735 cycles (1024 bytes)
tcrypt: 1 operation in 9424 cycles (4096 bytes)
tcrypt: 1 operation in 3369 cycles (1024 bytes)
tcrypt: 1 operation in 11954 cycles (4096 bytes)

v2:
 - Add new "add keystream array into struct aria_ctx" patch.
 - Use a keystream array in aria_ctx instead of stack memory (see the
   sketch below).
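
For context, patch 1/3 moves the keystream scratch buffer out of the
glue code's stack frame and into the tfm context. A sketch of the idea
(the new member's name and size here are illustrative, not the exact
patch):

struct aria_ctx {
	u32 enc_key[ARIA_MAX_RD_KEYS][ARIA_RD_KEY_WORDS];
	u32 dec_key[ARIA_MAX_RD_KEYS][ARIA_RD_KEY_WORDS];
	int rounds;
	int key_length;
	/* New: keystream scratch space, sized for the widest (64-way)
	 * code path, so the glue code no longer needs a big stack buffer. */
	u8 keystream[ARIA_BLOCK_SIZE * 64];
};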

Taehee Yoo (3):
  crypto: aria: add keystream array into struct aria_ctx
  crypto: aria: implement aria-avx2
  crypto: aria: implement aria-avx512

 arch/x86/crypto/Kconfig                   |   38 +
 arch/x86/crypto/Makefile                  |    6 +
 arch/x86/crypto/aria-aesni-avx2-asm_64.S  | 1436 +++++++++++++++++++++
 arch/x86/crypto/aria-avx.h                |   41 +-
 arch/x86/crypto/aria-gfni-avx512-asm_64.S | 1023 +++++++++++++++
 arch/x86/crypto/aria_aesni_avx2_glue.c    |  236 ++++
 arch/x86/crypto/aria_aesni_avx_glue.c     |   30 +-
 arch/x86/crypto/aria_gfni_avx512_glue.c   |  232 ++++
 include/crypto/aria.h                     |   24 +
 9 files changed, 3051 insertions(+), 15 deletions(-)
 create mode 100644 arch/x86/crypto/aria-aesni-avx2-asm_64.S
 create mode 100644 arch/x86/crypto/aria-gfni-avx512-asm_64.S
 create mode 100644 arch/x86/crypto/aria_aesni_avx2_glue.c
 create mode 100644 arch/x86/crypto/aria_gfni_avx512_glue.c

Comments

Dave Hansen Nov. 5, 2022, 5:31 p.m. UTC | #1
On 11/5/22 09:20, Elliott, Robert (Servers) wrote:
> --- a/arch/x86/crypto/aesni-intel_glue.c
> +++ b/arch/x86/crypto/aesni-intel_glue.c
> @@ -288,6 +288,10 @@ static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
>         struct crypto_aes_ctx *ctx = aes_ctx(raw_ctx);
>         int err;
> 
> +       BUILD_BUG_ON(offsetof(struct crypto_aes_ctx, key_enc) != 0);
> +       BUILD_BUG_ON(offsetof(struct crypto_aes_ctx, key_dec) != 240);
> +       BUILD_BUG_ON(offsetof(struct crypto_aes_ctx, key_length) != 480);

We have a nice fancy way of doing these.  See things like
CPU_ENTRY_AREA_entry_stack or TSS_sp0.  It's all put together from
arch/x86/kernel/asm-offsets.c and gets plopped in
include/generated/asm-offsets.h.

This is vastly preferred to hard-coded magic number offsets, even if
they do have a BUILD_BUG_ON() somewhere.
Taehee Yoo Nov. 6, 2022, 7:07 a.m. UTC | #2
Hi Elliott and Dave,
Thanks a lot for the reviews!

On 11/6/22 02:31, Dave Hansen wrote:
 > On 11/5/22 09:20, Elliott, Robert (Servers) wrote:
 >> --- a/arch/x86/crypto/aesni-intel_glue.c
 >> +++ b/arch/x86/crypto/aesni-intel_glue.c
 >> @@ -288,6 +288,10 @@ static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
 >>          struct crypto_aes_ctx *ctx = aes_ctx(raw_ctx);
 >>          int err;
 >>
 >> +       BUILD_BUG_ON(offsetof(struct crypto_aes_ctx, key_enc) != 0);
 >> +       BUILD_BUG_ON(offsetof(struct crypto_aes_ctx, key_dec) != 240);
 >> +       BUILD_BUG_ON(offsetof(struct crypto_aes_ctx, key_length) != 480);
 >
 > We have a nice fancy way of doing these.  See things like
 > CPU_ENTRY_AREA_entry_stack or TSS_sp0.  It's all put together from
 > arch/x86/kernel/asm-offsets.c and gets plopped in
 > include/generated/asm-offsets.h.
 >
 > This is vastly preferred to hard-coded magic number offsets, even if
 > they do have a BUILD_BUG_ON() somewhere.

I will define ARIA_CTX_xxx constants via asm-offsets.c.
Then the assembly code can use the correct offsets of enc_key, dec_key,
and rounds in struct aria_ctx.
Since the generated offsets are guaranteed to be correct, the
BUILD_BUG_ON() checks become unnecessary.
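For example, something like this sketch (names are tentative; OFFSET()
and BLANK() come from <linux/kbuild.h>, and the generated constants land
in include/generated/asm-offsets.h for the .S files to pick up via
<asm/asm-offsets.h>):

/* arch/x86/kernel/asm-offsets.c (sketch) */
#include <crypto/aria.h>

static void __used common(void)
{
	/* ... existing entries ... */

	BLANK();
	OFFSET(ARIA_CTX_enc_key, aria_ctx, enc_key);
	OFFSET(ARIA_CTX_dec_key, aria_ctx, dec_key);
	OFFSET(ARIA_CTX_rounds, aria_ctx, rounds);
}

The assembly can then use ARIA_CTX_enc_key(%rdi) and friends instead of
hard-coded offsets.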

I will send the v3 patch.

Thanks a lot!
Taehee Yoo