From patchwork Sat Jul 28 18:53:59 2018
X-Patchwork-Id: 143107
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org,
    will.deacon@arm.com, catalin.marinas@arm.com, vakul.garg@nxp.com,
    jerome.forissier@linaro.org, jens.wiklander@linaro.org,
    Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH 1/2] crypto/arm64: aes-ce-gcm - operate on two input blocks
 at a time
Date: Sat, 28 Jul 2018 20:53:59 +0200
Message-Id: <20180728185400.8237-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20180728185400.8237-1-ard.biesheuvel@linaro.org>
References: <20180728185400.8237-1-ard.biesheuvel@linaro.org>
List-ID: <linux-crypto.vger.kernel.org>

Update the core AES/GCM transform and the associated plumbing to operate
on two AES/GHASH blocks at a time. By itself, this is not expected to
result in a noticeable speedup, but it paves the way for reimplementing
the GHASH component using 2-way aggregation.
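As a back-of-the-envelope illustration of the new plumbing (a standalone
sketch, not code from this patch; the helper name is hypothetical): the
glue code now rounds every scatterwalk step down to a whole number of
two-block bundles before entering the NEON core, leaving any remainder
for the existing tail path.

  #include <stddef.h>

  #define AES_BLOCK_SIZE 16

  /* Mirrors walk.nbytes / (2 * AES_BLOCK_SIZE) * 2 in gcm_encrypt() */
  static size_t gcm_bulk_blocks(size_t nbytes)
  {
          /* whole 2-block bundles only; the tail is handled separately */
          return nbytes / (2 * AES_BLOCK_SIZE) * 2;
  }

For example, gcm_bulk_blocks(100) yields 6 blocks (96 bytes) for the
bulk path and leaves 4 bytes to the tail handling.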
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/ghash-ce-core.S | 129 +++++++++++++++-----
 arch/arm64/crypto/ghash-ce-glue.c |  84 +++++++++----
 2 files changed, 155 insertions(+), 58 deletions(-)

--
2.18.0

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index dcffb9e77589..437a2fb0f7f9 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -286,9 +286,10 @@ ENTRY(pmull_ghash_update_p8)
 	__pmull_ghash	p8
 ENDPROC(pmull_ghash_update_p8)

-	KS		.req	v8
-	CTR		.req	v9
-	INP		.req	v10
+	KS0		.req	v8
+	KS1		.req	v9
+	INP0		.req	v10
+	INP1		.req	v11

 	.macro		load_round_keys, rounds, rk
 	cmp		\rounds, #12
@@ -350,90 +351,152 @@ CPU_LE(	rev	x28, x28	)
 	eor		SHASH2.16b, SHASH2.16b, SHASH.16b

 	.if		\enc == 1
-	ld1		{KS.16b}, [x27]
+	ld1		{KS0.16b-KS1.16b}, [x27]
 	.endif

-1:	ld1		{CTR.8b}, [x24]			// load upper counter
-	ld1		{INP.16b}, [x22], #16
+1:	ld1		{INP0.16b-INP1.16b}, [x22], #32
+
 	rev		x9, x28
-	add		x28, x28, #1
-	sub		w19, w19, #1
-	ins		CTR.d[1], x9			// set lower counter
+	add		x10, x28, #1
+	add		x28, x28, #2

 	.if		\enc == 1
-	eor		INP.16b, INP.16b, KS.16b	// encrypt input
-	st1		{INP.16b}, [x21], #16
+	eor		INP0.16b, INP0.16b, KS0.16b	// encrypt input
+	eor		INP1.16b, INP1.16b, KS1.16b
 	.endif

-	rev64		T1.16b, INP.16b
+	ld1		{KS0.8b}, [x24]			// load upper counter
+	rev		x10, x10
+	sub		w19, w19, #2
+	mov		KS1.8b, KS0.8b
+	ins		KS0.d[1], x9			// set lower counter
+	ins		KS1.d[1], x10
+
+	rev64		T1.16b, INP0.16b
 	cmp		w26, #12
 	b.ge		4f				// AES-192/256?

-2:	enc_round	CTR, v21
+2:	enc_round	KS0, v21
+
+	ext		T2.16b, XL.16b, XL.16b, #8
+	ext		IN1.16b, T1.16b, T1.16b, #8
+
+	enc_round	KS1, v21
+
+	eor		T1.16b, T1.16b, T2.16b
+	eor		XL.16b, XL.16b, IN1.16b
+
+	enc_round	KS0, v22
+
+	pmull2		XH.1q, SHASH.2d, XL.2d		// a1 * b1
+	eor		T1.16b, T1.16b, XL.16b
+
+	enc_round	KS1, v22
+
+	pmull		XL.1q, SHASH.1d, XL.1d		// a0 * b0
+	pmull		XM.1q, SHASH2.1d, T1.1d		// (a1 + a0)(b1 + b0)
+
+	enc_round	KS0, v23
+
+	ext		T1.16b, XL.16b, XH.16b, #8
+	eor		T2.16b, XL.16b, XH.16b
+	eor		XM.16b, XM.16b, T1.16b
+
+	enc_round	KS1, v23
+
+	eor		XM.16b, XM.16b, T2.16b
+	pmull		T2.1q, XL.1d, MASK.1d
+
+	enc_round	KS0, v24
+
+	mov		XH.d[0], XM.d[1]
+	mov		XM.d[1], XL.d[0]
+
+	enc_round	KS1, v24
+
+	eor		XL.16b, XM.16b, T2.16b
+
+	enc_round	KS0, v25
+
+	ext		T2.16b, XL.16b, XL.16b, #8
+
+	enc_round	KS1, v25
+
+	pmull		XL.1q, XL.1d, MASK.1d
+	eor		T2.16b, T2.16b, XH.16b
+
+	enc_round	KS0, v26
+
+	eor		XL.16b, XL.16b, T2.16b
+	rev64		T1.16b, INP1.16b
+
+	enc_round	KS1, v26

 	ext		T2.16b, XL.16b, XL.16b, #8
 	ext		IN1.16b, T1.16b, T1.16b, #8

-	enc_round	CTR, v22
+	enc_round	KS0, v27

 	eor		T1.16b, T1.16b, T2.16b
 	eor		XL.16b, XL.16b, IN1.16b

-	enc_round	CTR, v23
+	enc_round	KS1, v27

 	pmull2		XH.1q, SHASH.2d, XL.2d		// a1 * b1
 	eor		T1.16b, T1.16b, XL.16b

-	enc_round	CTR, v24
+	enc_round	KS0, v28

 	pmull		XL.1q, SHASH.1d, XL.1d		// a0 * b0
 	pmull		XM.1q, SHASH2.1d, T1.1d		// (a1 + a0)(b1 + b0)

-	enc_round	CTR, v25
+	enc_round	KS1, v28

 	ext		T1.16b, XL.16b, XH.16b, #8
 	eor		T2.16b, XL.16b, XH.16b
 	eor		XM.16b, XM.16b, T1.16b

-	enc_round	CTR, v26
+	enc_round	KS0, v29

 	eor		XM.16b, XM.16b, T2.16b
 	pmull		T2.1q, XL.1d, MASK.1d

-	enc_round	CTR, v27
+	enc_round	KS1, v29

 	mov		XH.d[0], XM.d[1]
 	mov		XM.d[1], XL.d[0]

-	enc_round	CTR, v28
+	aese		KS0.16b, v30.16b

 	eor		XL.16b, XM.16b, T2.16b

-	enc_round	CTR, v29
+	aese		KS1.16b, v30.16b

 	ext		T2.16b, XL.16b, XL.16b, #8

-	aese		CTR.16b, v30.16b
+	eor		KS0.16b, KS0.16b, v31.16b

 	pmull		XL.1q, XL.1d, MASK.1d
 	eor		T2.16b, T2.16b, XH.16b

-	eor		KS.16b, CTR.16b, v31.16b
+	eor		KS1.16b, KS1.16b, v31.16b

 	eor		XL.16b, XL.16b, T2.16b

 	.if		\enc == 0
-	eor		INP.16b, INP.16b, KS.16b
-	st1		{INP.16b}, [x21], #16
+	eor		INP0.16b, INP0.16b, KS0.16b
+	eor		INP1.16b, INP1.16b, KS1.16b
 	.endif

+	st1		{INP0.16b-INP1.16b}, [x21], #32
+
 	cbz		w19, 3f

 	if_will_cond_yield_neon
 	st1		{XL.2d}, [x20]
 	.if		\enc == 1
-	st1		{KS.16b}, [x27]
+	st1		{KS0.16b-KS1.16b}, [x27]
 	.endif
 	do_cond_yield_neon
 	b		0b
@@ -443,7 +506,7 @@ CPU_LE(	rev	x28, x28	)

 3:	st1		{XL.2d}, [x20]
 	.if		\enc == 1
-	st1		{KS.16b}, [x27]
+	st1		{KS0.16b-KS1.16b}, [x27]
 	.endif

 CPU_LE(	rev	x28, x28	)
@@ -453,10 +516,14 @@ CPU_LE(	rev	x28, x28	)
 	ret

 4:	b.eq		5f				// AES-192?
-	enc_round	CTR, v17
-	enc_round	CTR, v18
-5:	enc_round	CTR, v19
-	enc_round	CTR, v20
+	enc_round	KS0, v17
+	enc_round	KS1, v17
+	enc_round	KS0, v18
+	enc_round	KS1, v18
+5:	enc_round	KS0, v19
+	enc_round	KS1, v19
+	enc_round	KS0, v20
+	enc_round	KS1, v20
 	b		2b
 	.endm

diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index 8a10f1d7199a..371f8368c196 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -349,7 +349,7 @@ static int gcm_encrypt(struct aead_request *req)
 	struct gcm_aes_ctx *ctx = crypto_aead_ctx(aead);
 	struct skcipher_walk walk;
 	u8 iv[AES_BLOCK_SIZE];
-	u8 ks[AES_BLOCK_SIZE];
+	u8 ks[2 * AES_BLOCK_SIZE];
 	u8 tag[AES_BLOCK_SIZE];
 	u64 dg[2] = {};
 	int err;
@@ -369,12 +369,15 @@ static int gcm_encrypt(struct aead_request *req)
 		pmull_gcm_encrypt_block(ks, iv, NULL,
 					num_rounds(&ctx->aes_key));
 		put_unaligned_be32(3, iv + GCM_IV_SIZE);
+		pmull_gcm_encrypt_block(ks + AES_BLOCK_SIZE, iv, NULL,
+					num_rounds(&ctx->aes_key));
+		put_unaligned_be32(4, iv + GCM_IV_SIZE);
 		kernel_neon_end();

 		err = skcipher_walk_aead_encrypt(&walk, req, false);

-		while (walk.nbytes >= AES_BLOCK_SIZE) {
-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
+		while (walk.nbytes >= 2 * AES_BLOCK_SIZE) {
+			int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;

 			kernel_neon_begin();
 			pmull_gcm_encrypt(blocks, dg, walk.dst.virt.addr,
@@ -384,7 +387,7 @@ static int gcm_encrypt(struct aead_request *req)
 			kernel_neon_end();

 			err = skcipher_walk_done(&walk,
-					walk.nbytes % AES_BLOCK_SIZE);
+					walk.nbytes % (2 * AES_BLOCK_SIZE));
 		}
 	} else {
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv,
@@ -424,13 +427,21 @@ static int gcm_encrypt(struct aead_request *req)
 	/* handle the tail */
 	if (walk.nbytes) {
 		u8 buf[GHASH_BLOCK_SIZE];
+		unsigned int nbytes = walk.nbytes;
+		u8 *dst = walk.dst.virt.addr;
+		u8 *head = NULL;

-		crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr, ks,
-			       walk.nbytes);
+		crypto_xor_cpy(dst, walk.src.virt.addr, ks, nbytes);

-		memcpy(buf, walk.dst.virt.addr, walk.nbytes);
-		memset(buf + walk.nbytes, 0, GHASH_BLOCK_SIZE - walk.nbytes);
-		ghash_do_update(1, dg, buf, &ctx->ghash_key, NULL);
+		if (walk.nbytes > GHASH_BLOCK_SIZE) {
+			head = dst;
+			dst += GHASH_BLOCK_SIZE;
+			nbytes %= GHASH_BLOCK_SIZE;
+		}
+
+		memcpy(buf, dst, nbytes);
+		memset(buf + nbytes, 0, GHASH_BLOCK_SIZE - nbytes);
+		ghash_do_update(!!nbytes, dg, buf, &ctx->ghash_key, head);

 		err = skcipher_walk_done(&walk, 0);
 	}
@@ -453,10 +464,11 @@ static int gcm_decrypt(struct aead_request *req)
 	struct gcm_aes_ctx *ctx = crypto_aead_ctx(aead);
 	unsigned int authsize = crypto_aead_authsize(aead);
 	struct skcipher_walk walk;
-	u8 iv[AES_BLOCK_SIZE];
+	u8 iv[2 * AES_BLOCK_SIZE];
 	u8 tag[AES_BLOCK_SIZE];
-	u8 buf[GHASH_BLOCK_SIZE];
+	u8 buf[2 * GHASH_BLOCK_SIZE];
 	u64 dg[2] = {};
+	int nrounds = num_rounds(&ctx->aes_key);
 	int err;

 	if (req->assoclen)
@@ -467,31 +479,40 @@ static int gcm_decrypt(struct aead_request *req)
 	if (likely(may_use_simd())) {
 		kernel_neon_begin();
-
-		pmull_gcm_encrypt_block(tag, iv, ctx->aes_key.key_enc,
-					num_rounds(&ctx->aes_key));
+		pmull_gcm_encrypt_block(tag, iv, ctx->aes_key.key_enc, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 		kernel_neon_end();

 		err = skcipher_walk_aead_decrypt(&walk, req, false);

-		while (walk.nbytes >= AES_BLOCK_SIZE) {
-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
+		while (walk.nbytes >= 2 * AES_BLOCK_SIZE) {
+			int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;

 			kernel_neon_begin();
 			pmull_gcm_decrypt(blocks, dg, walk.dst.virt.addr,
 					  walk.src.virt.addr, &ctx->ghash_key,
-					  iv, ctx->aes_key.key_enc,
-					  num_rounds(&ctx->aes_key));
+					  iv, ctx->aes_key.key_enc, nrounds);
 			kernel_neon_end();

 			err = skcipher_walk_done(&walk,
-					walk.nbytes % AES_BLOCK_SIZE);
+					walk.nbytes % (2 * AES_BLOCK_SIZE));
+		}
+
+		if (walk.nbytes > AES_BLOCK_SIZE) {
+			u32 ctr = get_unaligned_be32(iv + GCM_IV_SIZE);
+
+			memcpy(iv + AES_BLOCK_SIZE, iv, GCM_IV_SIZE);
+			put_unaligned_be32(ctr + 1,
+					   iv + AES_BLOCK_SIZE + GCM_IV_SIZE);
 		}

 		if (walk.nbytes) {
 			kernel_neon_begin();
 			pmull_gcm_encrypt_block(iv, iv, ctx->aes_key.key_enc,
-						num_rounds(&ctx->aes_key));
+						nrounds);
+
+			if (walk.nbytes > AES_BLOCK_SIZE)
+				pmull_gcm_encrypt_block(iv + AES_BLOCK_SIZE,
+							iv + AES_BLOCK_SIZE,
+							NULL, nrounds);
 			kernel_neon_end();
 		}

@@ -512,8 +533,7 @@ static int gcm_decrypt(struct aead_request *req)

 		do {
 			__aes_arm64_encrypt(ctx->aes_key.key_enc,
-					    buf, iv,
-					    num_rounds(&ctx->aes_key));
+					    buf, iv, nrounds);
 			crypto_xor_cpy(dst, src, buf, AES_BLOCK_SIZE);
 			crypto_inc(iv, AES_BLOCK_SIZE);

@@ -526,14 +546,24 @@
 		}
 		if (walk.nbytes)
 			__aes_arm64_encrypt(ctx->aes_key.key_enc, iv, iv,
-					    num_rounds(&ctx->aes_key));
+					    nrounds);
 	}

 	/* handle the tail */
 	if (walk.nbytes) {
-		memcpy(buf, walk.src.virt.addr, walk.nbytes);
-		memset(buf + walk.nbytes, 0, GHASH_BLOCK_SIZE - walk.nbytes);
-		ghash_do_update(1, dg, buf, &ctx->ghash_key, NULL);
+		const u8 *src = walk.src.virt.addr;
+		const u8 *head = NULL;
+		unsigned int nbytes = walk.nbytes;
+
+		if (walk.nbytes > GHASH_BLOCK_SIZE) {
+			head = src;
+			src += GHASH_BLOCK_SIZE;
+			nbytes %= GHASH_BLOCK_SIZE;
+		}
+
+		memcpy(buf, src, nbytes);
+		memset(buf + nbytes, 0, GHASH_BLOCK_SIZE - nbytes);
+		ghash_do_update(!!nbytes, dg, buf, &ctx->ghash_key, head);

 		crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr, iv,
 			       walk.nbytes);
@@ -558,7 +588,7 @@ static int gcm_decrypt(struct aead_request *req)

 static struct aead_alg gcm_aes_alg = {
 	.ivsize			= GCM_IV_SIZE,
-	.chunksize		= AES_BLOCK_SIZE,
+	.chunksize		= 2 * AES_BLOCK_SIZE,
 	.maxauthsize		= AES_BLOCK_SIZE,
 	.setkey			= gcm_setkey,
 	.setauthsize		= gcm_setauthsize,

From patchwork Sat Jul 28 18:54:00 2018
X-Patchwork-Id: 143108
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org,
    will.deacon@arm.com, catalin.marinas@arm.com, vakul.garg@nxp.com,
    jerome.forissier@linaro.org, jens.wiklander@linaro.org,
    Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH 2/2] crypto/arm64: aes-ce-gcm - implement 2-way aggregation
Date: Sat, 28 Jul 2018 20:54:00 +0200
Message-Id: <20180728185400.8237-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20180728185400.8237-1-ard.biesheuvel@linaro.org>
References: <20180728185400.8237-1-ard.biesheuvel@linaro.org>
List-ID: <linux-crypto.vger.kernel.org>

Implement a faster version of the GHASH transform which amortizes the
reduction modulo the characteristic polynomial across two input blocks
at a time. This is based on the Intel white paper "Carry-Less
Multiplication Instruction and its Usage for Computing the GCM Mode".

On a Cortex-A53, gcm(aes) performance increases by 24%, from 3.0 cycles
per byte to 2.4 cpb for large input sizes.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
Raw numbers after the patch

 arch/arm64/crypto/ghash-ce-core.S | 87 +++++++-------------
 arch/arm64/crypto/ghash-ce-glue.c | 33 ++++++--
 2 files changed, 54 insertions(+), 66 deletions(-)

--
2.18.0

tcrypt performance numbers for a Cortex-A53 @ 1 GHz (CONFIG_PREEMPT disabled)

BASELINE:
=========
test 0 (128 bit key, 16 byte blocks): 445165 operations in 1 seconds ( 7122640 bytes)
test 1 (128 bit key, 64 byte blocks): 437076 operations in 1 seconds ( 27972864 bytes)
test 2 (128 bit key, 256 byte blocks): 354203 operations in 1 seconds ( 90675968 bytes)
test 3 (128 bit key, 512 byte blocks): 284031 operations in 1 seconds (145423872 bytes)
test 4 (128 bit key, 1024 byte blocks): 203473 operations in 1 seconds (208356352 bytes)
test 5 (128 bit key, 2048 byte blocks): 129855 operations in 1 seconds (265943040 bytes)
test 6 (128 bit key, 4096 byte blocks): 75686 operations in 1 seconds (310009856 bytes)
test 7 (128 bit key, 8192 byte blocks): 40167 operations in 1 seconds (329048064 bytes)
test 8 (192 bit key, 16 byte blocks): 441610 operations in 1 seconds ( 7065760 bytes)
test 9 (192 bit key, 64 byte blocks): 429364 operations in 1 seconds ( 27479296 bytes)
test 10 (192 bit key, 256 byte blocks): 343303 operations in 1 seconds ( 87885568 bytes)
test 11 (192 bit key, 512 byte blocks): 272029 operations in 1 seconds (139278848 bytes)
test 12 (192 bit key, 1024 byte blocks): 192399 operations in 1 seconds (197016576 bytes)
test 13 (192 bit key, 2048 byte blocks): 121298 operations in 1 seconds (248418304 bytes)
test 14 (192 bit key, 4096 byte blocks): 69994 operations in 1 seconds (286695424 bytes)
test 15 (192 bit key, 8192 byte blocks): 37045 operations in 1 seconds (303472640 bytes)
test 16 (256 bit key, 16 byte blocks): 438244 operations in 1 seconds ( 7011904 bytes)
test 17 (256 bit key, 64 byte blocks): 423345 operations in 1 seconds ( 27094080 bytes)
test 18 (256 bit key, 256 byte blocks): 336844 operations in 1 seconds ( 86232064 bytes)
test 19 (256 bit key, 512 byte blocks): 265711 operations in 1 seconds (136044032 bytes)
test 20 (256 bit key, 1024 byte blocks): 186853 operations in 1 seconds (191337472 bytes)
test 21 (256 bit key, 2048 byte blocks): 117301 operations in 1 seconds (240232448 bytes)
test 22 (256 bit key, 4096 byte blocks): 67513 operations in 1 seconds (276533248 bytes)
test 23 (256 bit key, 8192 byte blocks): 35629 operations in 1 seconds (291872768 bytes)

THIS PATCH:
===========
test 0 (128 bit key, 16 byte blocks): 441257 operations in 1 seconds ( 7060112 bytes)
test 1 (128 bit key, 64 byte blocks): 436595 operations in 1 seconds ( 27942080 bytes)
test 2 (128 bit key, 256 byte blocks): 369839 operations in 1 seconds ( 94678784 bytes)
test 3 (128 bit key, 512 byte blocks): 308239 operations in 1 seconds (157818368 bytes)
test 4 (128 bit key, 1024 byte blocks): 231004 operations in 1 seconds (236548096 bytes)
test 5 (128 bit key, 2048 byte blocks): 153930 operations in 1 seconds (315248640 bytes)
test 6 (128 bit key, 4096 byte blocks): 92739 operations in 1 seconds (379858944 bytes)
test 7 (128 bit key, 8192 byte blocks): 49934 operations in 1 seconds (409059328 bytes)
test 8 (192 bit key, 16 byte blocks): 437427 operations in 1 seconds ( 6998832 bytes)
test 9 (192 bit key, 64 byte blocks): 429462 operations in 1 seconds ( 27485568 bytes)
test 10 (192 bit key, 256 byte blocks): 358183 operations in 1 seconds ( 91694848 bytes)
test 11 (192 bit key, 512 byte blocks): 294539 operations in 1 seconds (150803968 bytes)
test 12 (192 bit key, 1024 byte blocks): 217082 operations in 1 seconds (222291968 bytes)
test 13 (192 bit key, 2048 byte blocks): 140672 operations in 1 seconds (288096256 bytes)
test 14 (192 bit key, 4096 byte blocks): 84369 operations in 1 seconds (345575424 bytes)
test 15 (192 bit key, 8192 byte blocks): 45280 operations in 1 seconds (370933760 bytes)
test 16 (256 bit key, 16 byte blocks): 434127 operations in 1 seconds ( 6946032 bytes)
test 17 (256 bit key, 64 byte blocks): 423837 operations in 1 seconds ( 27125568 bytes)
test 18 (256 bit key, 256 byte blocks): 351244 operations in 1 seconds ( 89918464 bytes)
test 19 (256 bit key, 512 byte blocks): 286884 operations in 1 seconds (146884608 bytes)
test 20 (256 bit key, 1024 byte blocks): 209954 operations in 1 seconds (214992896 bytes)
test 21 (256 bit key, 2048 byte blocks): 136553 operations in 1 seconds (279660544 bytes)
test 22 (256 bit key, 4096 byte blocks): 80749 operations in 1 seconds (330747904 bytes)
test 23 (256 bit key, 8192 byte blocks): 43118 operations in 1 seconds (353222656 bytes)

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index 437a2fb0f7f9..c144b526abe6 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -290,6 +290,11 @@ ENDPROC(pmull_ghash_update_p8)
 	KS1		.req	v9
 	INP0		.req	v10
 	INP1		.req	v11
+	HH		.req	v12
+	Hhl		.req	v13
+	XLn		.req	v14
+	XMn		.req	v15
+	XHn		.req	v16

 	.macro		load_round_keys, rounds, rk
 	cmp		\rounds, #12
@@ -342,13 +347,13 @@ CPU_LE(	rev	x28, x28	)
 0:	mov		x0, x25
 	load_round_keys	w26, x0

-	ld1		{SHASH.2d}, [x23]
+	add		x1, x23, #32
+	ld1		{HH.2d-Hhl.2d}, [x23]
+	ld1		{SHASH.2d}, [x1]
 	ld1		{XL.2d}, [x20]

 	movi		MASK.16b, #0xe1
-	ext		SHASH2.16b, SHASH.16b, SHASH.16b, #8
 	shl		MASK.2d, MASK.2d, #57
-	eor		SHASH2.16b, SHASH2.16b, SHASH.16b

 	.if		\enc == 1
 	ld1		{KS0.16b-KS1.16b}, [x27]
@@ -372,116 +377,82 @@ CPU_LE(	rev	x28, x28	)
 	ins		KS0.d[1], x9			// set lower counter
 	ins		KS1.d[1], x10

-	rev64		T1.16b, INP0.16b
+	rev64		T1.16b, INP1.16b
 	cmp		w26, #12
 	b.ge		4f				// AES-192/256?
 2:	enc_round	KS0, v21
-
-	ext		T2.16b, XL.16b, XL.16b, #8
 	ext		IN1.16b, T1.16b, T1.16b, #8

 	enc_round	KS1, v21
-
-	eor		T1.16b, T1.16b, T2.16b
-	eor		XL.16b, XL.16b, IN1.16b
+	pmull2		XHn.1q, SHASH.2d, IN1.2d	// a1 * b1

 	enc_round	KS0, v22
-
-	pmull2		XH.1q, SHASH.2d, XL.2d		// a1 * b1
-	eor		T1.16b, T1.16b, XL.16b
+	eor		T1.16b, T1.16b, IN1.16b

 	enc_round	KS1, v22
-
-	pmull		XL.1q, SHASH.1d, XL.1d		// a0 * b0
-	pmull		XM.1q, SHASH2.1d, T1.1d		// (a1 + a0)(b1 + b0)
+	pmull		XLn.1q, SHASH.1d, IN1.1d	// a0 * b0

 	enc_round	KS0, v23
-
-	ext		T1.16b, XL.16b, XH.16b, #8
-	eor		T2.16b, XL.16b, XH.16b
-	eor		XM.16b, XM.16b, T1.16b
+	pmull		XMn.1q, Hhl.1d, T1.1d		// (a1 + a0)(b1 + b0)

 	enc_round	KS1, v23
-
-	eor		XM.16b, XM.16b, T2.16b
-	pmull		T2.1q, XL.1d, MASK.1d
+	rev64		T1.16b, INP0.16b
+	ext		T2.16b, XL.16b, XL.16b, #8

 	enc_round	KS0, v24
-
-	mov		XH.d[0], XM.d[1]
-	mov		XM.d[1], XL.d[0]
+	ext		IN1.16b, T1.16b, T1.16b, #8
+	eor		T1.16b, T1.16b, T2.16b

 	enc_round	KS1, v24
-
-	eor		XL.16b, XM.16b, T2.16b
+	eor		XL.16b, XL.16b, IN1.16b

 	enc_round	KS0, v25
-
-	ext		T2.16b, XL.16b, XL.16b, #8
+	pmull2		XH.1q, HH.2d, XL.2d		// a1 * b1

 	enc_round	KS1, v25
-
-	pmull		XL.1q, XL.1d, MASK.1d
-	eor		T2.16b, T2.16b, XH.16b
+	eor		T1.16b, T1.16b, XL.16b

 	enc_round	KS0, v26
-
-	eor		XL.16b, XL.16b, T2.16b
-	rev64		T1.16b, INP1.16b
+	pmull		XL.1q, HH.1d, XL.1d		// a0 * b0

 	enc_round	KS1, v26
-
-	ext		T2.16b, XL.16b, XL.16b, #8
-	ext		IN1.16b, T1.16b, T1.16b, #8
+	pmull2		XM.1q, Hhl.2d, T1.2d		// (a1 + a0)(b1 + b0)

 	enc_round	KS0, v27
-
-	eor		T1.16b, T1.16b, T2.16b
-	eor		XL.16b, XL.16b, IN1.16b
+	eor		XH.16b, XH.16b, XHn.16b
+	eor		XM.16b, XM.16b, XMn.16b

 	enc_round	KS1, v27
-
-	pmull2		XH.1q, SHASH.2d, XL.2d		// a1 * b1
-	eor		T1.16b, T1.16b, XL.16b
+	eor		XL.16b, XL.16b, XLn.16b
+	ext		T1.16b, XL.16b, XH.16b, #8

 	enc_round	KS0, v28
-
-	pmull		XL.1q, SHASH.1d, XL.1d		// a0 * b0
-	pmull		XM.1q, SHASH2.1d, T1.1d		// (a1 + a0)(b1 + b0)
-
-	enc_round	KS1, v28
-
-	ext		T1.16b, XL.16b, XH.16b, #8
 	eor		T2.16b, XL.16b, XH.16b
 	eor		XM.16b, XM.16b, T1.16b

-	enc_round	KS0, v29
-
+	enc_round	KS1, v28
 	eor		XM.16b, XM.16b, T2.16b
+
+	enc_round	KS0, v29
 	pmull		T2.1q, XL.1d, MASK.1d

 	enc_round	KS1, v29
-
 	mov		XH.d[0], XM.d[1]
 	mov		XM.d[1], XL.d[0]

 	aese		KS0.16b, v30.16b
-
 	eor		XL.16b, XM.16b, T2.16b

 	aese		KS1.16b, v30.16b
-
 	ext		T2.16b, XL.16b, XL.16b, #8

 	eor		KS0.16b, KS0.16b, v31.16b
-
 	pmull		XL.1q, XL.1d, MASK.1d
 	eor		T2.16b, T2.16b, XH.16b

 	eor		KS1.16b, KS1.16b, v31.16b
-
 	eor		XL.16b, XL.16b, T2.16b

 	.if		\enc == 0

diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index 371f8368c196..65a0b8239620 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -46,6 +46,8 @@ struct ghash_desc_ctx {

 struct gcm_aes_ctx {
 	struct crypto_aes_ctx	aes_key;
+	u64			h2[2];
+	u64			hhl[2];
 	struct ghash_key	ghash_key;
 };

@@ -62,12 +64,11 @@ static void (*pmull_ghash_update)(int blocks, u64 dg[], const char *src,
 				  const char *head);

 asmlinkage void pmull_gcm_encrypt(int blocks, u64 dg[], u8 dst[],
-				  const u8 src[], struct ghash_key const *k,
-				  u8 ctr[], u32 const rk[], int rounds,
-				  u8 ks[]);
+				  const u8 src[], u64 const *k, u8 ctr[],
+				  u32 const rk[], int rounds, u8 ks[]);

 asmlinkage void pmull_gcm_decrypt(int blocks, u64 dg[], u8 dst[],
-				  const u8 src[], struct ghash_key const *k,
+				  const u8 src[], u64 const *k,
 				  u8 ctr[], u32 const rk[], int rounds);

 asmlinkage void pmull_gcm_encrypt_block(u8 dst[], u8 const src[],
@@ -233,7 +234,8 @@ static int gcm_setkey(struct crypto_aead *tfm, const u8 *inkey,
 		      unsigned int keylen)
 {
 	struct gcm_aes_ctx *ctx = crypto_aead_ctx(tfm);
-	u8 key[GHASH_BLOCK_SIZE];
+	be128 h1, h2;
+	u8 *key = (u8 *)&h1;
 	int ret;

 	ret = crypto_aes_expand_key(&ctx->aes_key, inkey, keylen);
@@ -245,7 +247,22 @@ static int gcm_setkey(struct crypto_aead *tfm, const u8 *inkey,
 	__aes_arm64_encrypt(ctx->aes_key.key_enc, key, (u8[AES_BLOCK_SIZE]){},
 			    num_rounds(&ctx->aes_key));

-	return __ghash_setkey(&ctx->ghash_key, key, sizeof(key));
+	__ghash_setkey(&ctx->ghash_key, key, sizeof(be128));
+
+	/* calculate H^2 and Hhl (used for 2-way aggregation) */
+	h2 = h1;
+	gf128mul_lle(&h2, &h1);
+
+	ctx->h2[0] = (be64_to_cpu(h2.b) << 1) | (be64_to_cpu(h2.a) >> 63);
+	ctx->h2[1] = (be64_to_cpu(h2.a) << 1) | (be64_to_cpu(h2.b) >> 63);
+
+	if (be64_to_cpu(h2.a) >> 63)
+		ctx->h2[1] ^= 0xc200000000000000UL;
+
+	ctx->hhl[0] = ctx->ghash_key.a ^ ctx->ghash_key.b;
+	ctx->hhl[1] = ctx->h2[0] ^ ctx->h2[1];
+
+	return 0;
 }

 static int gcm_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
@@ -381,7 +398,7 @@ static int gcm_encrypt(struct aead_request *req)

 			kernel_neon_begin();
 			pmull_gcm_encrypt(blocks, dg, walk.dst.virt.addr,
-					  walk.src.virt.addr, &ctx->ghash_key,
+					  walk.src.virt.addr, ctx->h2,
 					  iv, ctx->aes_key.key_enc,
 					  num_rounds(&ctx->aes_key), ks);
 			kernel_neon_end();
@@ -490,7 +507,7 @@ static int gcm_decrypt(struct aead_request *req)

 			kernel_neon_begin();
 			pmull_gcm_decrypt(blocks, dg, walk.dst.virt.addr,
-					  walk.src.virt.addr, &ctx->ghash_key,
+					  walk.src.virt.addr, ctx->h2,
 					  iv, ctx->aes_key.key_enc, nrounds);
 			kernel_neon_end();
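For reference, the identity behind the 2-way aggregation (a minimal
standalone model, not the kernel's API; gf128_mul is an assumed helper
for carry-less multiplication modulo the GHASH polynomial):

  #include <stdint.h>

  typedef struct { uint64_t hi, lo; } gf128;     /* one GHASH block */

  /* assumed helper: multiply mod x^128 + x^7 + x^2 + x + 1 */
  extern gf128 gf128_mul(gf128 x, gf128 y);

  /* addition in GF(2^128) is plain XOR */
  static gf128 gf128_add(gf128 x, gf128 y)
  {
          return (gf128){ x.hi ^ y.hi, x.lo ^ y.lo };
  }

  /*
   * One reduction per two ciphertext blocks C0, C1:
   *     X' = (X + C0) * H^2 + C1 * H
   * which equals the sequential ((X + C0) * H + C1) * H, so the
   * digest is unchanged. H2 = H * H is precomputed at setkey time
   * (ctx->h2 in the patch above).
   */
  static gf128 ghash_2way(gf128 X, gf128 C0, gf128 C1,
                          gf128 H, gf128 H2)
  {
          return gf128_add(gf128_mul(gf128_add(X, C0), H2),
                           gf128_mul(C1, H));
  }

The Hhl value precomputed in gcm_setkey() packs the folded halves of H
and H^2 consumed by the Karatsuba middle products, the
(a1 + a0)(b1 + b0) terms, in the assembly above.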