From patchwork Mon Jul 30 21:06:42 2018
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 143172
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, vakul.garg@nxp.com, Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v2 3/3] crypto: arm64/aes-ce-gcm - don't reload key schedule if avoidable
Date: Mon, 30 Jul 2018 23:06:42 +0200
Message-Id: <20180730210642.25180-4-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180730210642.25180-1-ard.biesheuvel@linaro.org>
References: <20180730210642.25180-1-ard.biesheuvel@linaro.org>

Squeeze out another 5% of performance by minimizing the number of
invocations of kernel_neon_begin()/kernel_neon_end() on the common
path, which also allows some reloads of the key schedule to be
optimized away.

The resulting code runs at 2.3 cycles per byte on a Cortex-A53.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
Raw numbers after the patch.
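For illustration, here is a minimal standalone C sketch (not the kernel
code itself) of the calling pattern the patch adopts: the round-key
pointer handed to the NEON routine doubles as a reload flag. NULL means
"the key schedule is still live in the vector registers from the
current kernel_neon_begin() section, skip the load" (this is what the
cbnz x6, 4f added to ghash-ce-core.S implements); a non-NULL pointer
forces load_round_keys after the NEON section has been re-entered. All
names below (neon_begin, neon_gcm_chunk, ...) are hypothetical
stand-ins, not kernel APIs.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static int key_loads;			/* counts simulated key schedule loads */

static void neon_begin(void) { }	/* stand-in for kernel_neon_begin() */
static void neon_end(void) { }		/* stand-in for kernel_neon_end() */

/* Stand-in for pmull_gcm_encrypt()/pmull_gcm_decrypt(): reloads the
 * key schedule only when rk != NULL (the cbnz branch in the asm). */
static void neon_gcm_chunk(size_t len, const uint32_t *rk)
{
	if (rk)
		key_loads++;
	(void)len;			/* real code would process len bytes */
}

static void process(size_t total, size_t chunk, const uint32_t *key_enc)
{
	const uint32_t *rk = NULL;	/* first pass: keys already loaded */

	neon_begin();			/* keys get loaded once here, along
					 * with the initial counter blocks */
	key_loads++;

	do {
		size_t n = total < chunk ? total : chunk;

		if (rk)			/* NEON section was closed: re-enter */
			neon_begin();

		neon_gcm_chunk(n, rk);
		neon_end();		/* yield the NEON unit between chunks */

		total -= n;
		rk = key_enc;		/* any later pass must reload */
	} while (total);
}

int main(void)
{
	static const uint32_t key_enc[60];	/* dummy expanded AES key */

	process(8192, 4096, key_enc);
	printf("key schedule loads: %d\n", key_loads);	/* 2, not 3 */
	return 0;
}

On the common path (a request that fits in one scatterlist step) the
loop body runs once with rk == NULL, so both the extra begin/end pair
and the reload are avoided. As a sanity check on the 2.3 cycles per
byte claim: the 8192 byte encryption figure below, 430325760 bytes in
one second, works out to 10^9 / 430325760 ~= 2.3 cycles per byte if the
test Cortex-A53 ran at about 1 GHz (the clock rate is an assumption, it
is not stated in the patch).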
 arch/arm64/crypto/ghash-ce-core.S |  9 ++-
 arch/arm64/crypto/ghash-ce-glue.c | 81 +++++++++++---------
 2 files changed, 49 insertions(+), 41 deletions(-)

-- 
2.18.0

testing speed of gcm(aes) (gcm-aes-ce) encryption
test 0 (128 bit key, 16 byte blocks): 365343 operations in 1 seconds ( 5845488 bytes)
test 1 (128 bit key, 64 byte blocks): 504620 operations in 1 seconds ( 32295680 bytes)
test 2 (128 bit key, 256 byte blocks): 418881 operations in 1 seconds (107233536 bytes)
test 3 (128 bit key, 512 byte blocks): 343166 operations in 1 seconds (175700992 bytes)
test 4 (128 bit key, 1024 byte blocks): 252229 operations in 1 seconds (258282496 bytes)
test 5 (128 bit key, 2048 byte blocks): 164862 operations in 1 seconds (337637376 bytes)
test 6 (128 bit key, 4096 byte blocks): 98274 operations in 1 seconds (402530304 bytes)
test 7 (128 bit key, 8192 byte blocks): 52530 operations in 1 seconds (430325760 bytes)
test 8 (192 bit key, 16 byte blocks): 343221 operations in 1 seconds ( 5491536 bytes)
test 9 (192 bit key, 64 byte blocks): 495929 operations in 1 seconds ( 31739456 bytes)
test 10 (192 bit key, 256 byte blocks): 404755 operations in 1 seconds (103617280 bytes)
test 11 (192 bit key, 512 byte blocks): 326728 operations in 1 seconds (167284736 bytes)
test 12 (192 bit key, 1024 byte blocks): 235987 operations in 1 seconds (241650688 bytes)
test 13 (192 bit key, 2048 byte blocks): 151724 operations in 1 seconds (310730752 bytes)
test 14 (192 bit key, 4096 byte blocks): 89285 operations in 1 seconds (365711360 bytes)
test 15 (192 bit key, 8192 byte blocks): 47432 operations in 1 seconds (388562944 bytes)
test 16 (256 bit key, 16 byte blocks): 323574 operations in 1 seconds ( 5177184 bytes)
test 17 (256 bit key, 64 byte blocks): 489854 operations in 1 seconds ( 31350656 bytes)
test 18 (256 bit key, 256 byte blocks): 396979 operations in 1 seconds (101626624 bytes)
test 19 (256 bit key, 512 byte blocks): 317923 operations in 1 seconds (162776576 bytes)
test 20 (256 bit key, 1024 byte blocks): 211440 operations in 1 seconds (216514560 bytes)
test 21 (256 bit key, 2048 byte blocks): 145407 operations in 1 seconds (297793536 bytes)
test 22 (256 bit key, 4096 byte blocks): 85050 operations in 1 seconds (348364800 bytes)
test 23 (256 bit key, 8192 byte blocks): 45068 operations in 1 seconds (369197056 bytes)

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index f7281e7a592f..913e49932ae6 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -1,7 +1,7 @@
 /*
  * Accelerated GHASH implementation with ARMv8 PMULL instructions.
  *
- * Copyright (C) 2014 - 2017 Linaro Ltd.
+ * Copyright (C) 2014 - 2018 Linaro Ltd.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License version 2 as published
@@ -332,8 +332,6 @@ ENDPROC(pmull_ghash_update_p8)
 	ld1		{XL.2d}, [x1]
 	ldr		x8, [x5, #8]			// load lower counter
 
-	load_round_keys	w7, x6
-
 	movi		MASK.16b, #0xe1
 	trn1		SHASH2.2d, SHASH.2d, HH.2d
 	trn2		T1.2d, SHASH.2d, HH.2d
@@ -346,6 +344,8 @@ CPU_LE(	rev		x8, x8		)
 	ld1		{KS0.16b-KS1.16b}, [x10]
 	.endif
 
+	cbnz		x6, 4f
+
 0:	ld1		{INP0.16b-INP1.16b}, [x3], #32
 
 	rev		x9, x8
@@ -471,6 +471,9 @@ CPU_LE(	rev		x8, x8		)
 	enc_round	KS0, v20
 	enc_round	KS1, v20
 	b		1b
+
+4:	load_round_keys	w7, x6
+	b		0b
 	.endm
 
 /*
diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index c41ac62c90e9..88e3d93fa7c7 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -1,7 +1,7 @@
 /*
  * Accelerated GHASH implementation with ARMv8 PMULL instructions.
  *
- * Copyright (C) 2014 - 2017 Linaro Ltd.
+ * Copyright (C) 2014 - 2018 Linaro Ltd.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License version 2 as published
@@ -374,37 +374,39 @@ static int gcm_encrypt(struct aead_request *req)
 	memcpy(iv, req->iv, GCM_IV_SIZE);
 	put_unaligned_be32(1, iv + GCM_IV_SIZE);
 
-	if (likely(may_use_simd())) {
-		kernel_neon_begin();
+	err = skcipher_walk_aead_encrypt(&walk, req, false);
 
+	if (likely(may_use_simd() && walk.total >= 2 * AES_BLOCK_SIZE)) {
+		u32 const *rk = NULL;
+
+		kernel_neon_begin();
 		pmull_gcm_encrypt_block(tag, iv, ctx->aes_key.key_enc, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 		pmull_gcm_encrypt_block(ks, iv, NULL, nrounds);
 		put_unaligned_be32(3, iv + GCM_IV_SIZE);
 		pmull_gcm_encrypt_block(ks + AES_BLOCK_SIZE, iv, NULL, nrounds);
 		put_unaligned_be32(4, iv + GCM_IV_SIZE);
-		kernel_neon_end();
-
-		err = skcipher_walk_aead_encrypt(&walk, req, false);
 
-		while (walk.nbytes >= 2 * AES_BLOCK_SIZE) {
+		do {
 			int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;
 
-			kernel_neon_begin();
+			if (rk)
+				kernel_neon_begin();
+
 			pmull_gcm_encrypt(blocks, dg, walk.dst.virt.addr,
 					  walk.src.virt.addr, ctx->h2, iv,
-					  ctx->aes_key.key_enc, nrounds, ks);
+					  rk, nrounds, ks);
 			kernel_neon_end();
 
 			err = skcipher_walk_done(&walk,
 					walk.nbytes % (2 * AES_BLOCK_SIZE));
-		}
+
+			rk = ctx->aes_key.key_enc;
+		} while (walk.nbytes >= 2 * AES_BLOCK_SIZE);
 	} else {
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 
-		err = skcipher_walk_aead_encrypt(&walk, req, false);
-
 		while (walk.nbytes >= AES_BLOCK_SIZE) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
 			u8 *dst = walk.dst.virt.addr;
@@ -486,50 +488,53 @@ static int gcm_decrypt(struct aead_request *req)
 	memcpy(iv, req->iv, GCM_IV_SIZE);
 	put_unaligned_be32(1, iv + GCM_IV_SIZE);
 
-	if (likely(may_use_simd())) {
+	err = skcipher_walk_aead_decrypt(&walk, req, false);
+
+	if (likely(may_use_simd() && walk.total >= 2 * AES_BLOCK_SIZE)) {
+		u32 const *rk = NULL;
+
 		kernel_neon_begin();
 		pmull_gcm_encrypt_block(tag, iv, ctx->aes_key.key_enc, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
-		kernel_neon_end();
 
-		err = skcipher_walk_aead_decrypt(&walk, req, false);
-
-		while (walk.nbytes >= 2 * AES_BLOCK_SIZE) {
+		do {
 			int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;
+			int rem = walk.total - blocks * AES_BLOCK_SIZE;
+
+			if (rk)
+				kernel_neon_begin();
 
-			kernel_neon_begin();
 			pmull_gcm_decrypt(blocks, dg, walk.dst.virt.addr,
 					  walk.src.virt.addr, ctx->h2, iv,
-					  ctx->aes_key.key_enc, nrounds);
-			kernel_neon_end();
+					  rk, nrounds);
 
-			err = skcipher_walk_done(&walk,
-					walk.nbytes % (2 * AES_BLOCK_SIZE));
-		}
+			/* check if this is the final iteration of the loop */
+			if (rem < (2 * AES_BLOCK_SIZE)) {
+				u8 *iv2 = iv + AES_BLOCK_SIZE;
 
-		if (walk.nbytes) {
-			u8 *iv2 = iv + AES_BLOCK_SIZE;
+				if (rem > AES_BLOCK_SIZE) {
+					memcpy(iv2, iv, AES_BLOCK_SIZE);
+					crypto_inc(iv2, AES_BLOCK_SIZE);
+				}
 
-			if (walk.nbytes > AES_BLOCK_SIZE) {
-				memcpy(iv2, iv, AES_BLOCK_SIZE);
-				crypto_inc(iv2, AES_BLOCK_SIZE);
-			}
+				pmull_gcm_encrypt_block(iv, iv, NULL, nrounds);
 
-			kernel_neon_begin();
-			pmull_gcm_encrypt_block(iv, iv, ctx->aes_key.key_enc,
-						nrounds);
+				if (rem > AES_BLOCK_SIZE)
+					pmull_gcm_encrypt_block(iv2, iv2, NULL,
+								nrounds);
+			}
 
-			if (walk.nbytes > AES_BLOCK_SIZE)
-				pmull_gcm_encrypt_block(iv2, iv2, NULL,
-							nrounds);
 			kernel_neon_end();
-		}
+
+			err = skcipher_walk_done(&walk,
+					walk.nbytes % (2 * AES_BLOCK_SIZE));
+
+			rk = ctx->aes_key.key_enc;
+		} while (walk.nbytes >= 2 * AES_BLOCK_SIZE);
 	} else {
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 
-		err = skcipher_walk_aead_decrypt(&walk, req, false);
-
 		while (walk.nbytes >= AES_BLOCK_SIZE) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
 			u8 *dst = walk.dst.virt.addr;