From patchwork Sat Mar 10 15:21:45 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 131290
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org,
    Ard Biesheuvel, Dave Martin, Russell King - ARM Linux,
    Sebastian Andrzej Siewior, Mark Rutland, linux-rt-users@vger.kernel.org,
    Peter Zijlstra, Catalin Marinas, Will Deacon, Steven Rostedt,
    Thomas Gleixner
Subject: [PATCH v5 00/23] crypto: arm64 - play nice with CONFIG_PREEMPT
Date: Sat, 10 Mar 2018 15:21:45 +0000
Message-Id: <20180310152208.10369-1-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.15.1

As reported by Sebastian, the way the arm64 NEON crypto code currently
keeps kernel mode NEON enabled across calls into skcipher_walk_xxx() is
causing problems with RT builds: the skcipher walk API may allocate and
free the temporary buffers it uses to present the input and output
arrays to the crypto algorithm in blocksize-sized chunks (where
blocksize is the natural blocksize of the crypto algorithm), and doing
so with NEON enabled means we're allocating and freeing memory with
preemption disabled.
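To make the pattern concrete, here is a condensed sketch in the style
of the existing arm64 AES glue code. It is not the literal driver
source: the error handling is simplified, the interleave/'first'
bookkeeping is omitted, and the aes_ecb_encrypt() prototype shown is a
reduced stand-in for the real asm interface.

	#include <crypto/aes.h>
	#include <crypto/internal/skcipher.h>
	#include <asm/neon.h>

	/* Reduced stand-in for the real asm prototype. */
	asmlinkage void aes_ecb_encrypt(u8 out[], u8 const in[],
					u8 const rk[], int rounds,
					int blocks);

	static int ecb_encrypt(struct skcipher_request *req)
	{
		struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
		struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
		int err, rounds = 6 + ctx->key_length / 4;
		struct skcipher_walk walk;
		unsigned int blocks;

		err = skcipher_walk_virt(&walk, req, true); /* atomic walk */

		kernel_neon_begin();	/* preemption disabled from here ... */
		while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
			aes_ecb_encrypt(walk.dst.virt.addr,
					walk.src.virt.addr,
					(u8 *)ctx->key_enc, rounds, blocks);
			/*
			 * The walk may allocate/free temporary buffers
			 * here, i.e., with preemption still disabled.
			 */
			err = skcipher_walk_done(&walk,
						 walk.nbytes % AES_BLOCK_SIZE);
		}
		kernel_neon_end();	/* ... until here */

		return err;
	}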
This was deliberate: when this code was introduced, each
kernel_neon_begin() and kernel_neon_end() call incurred a fixed penalty
of storing and loading, respectively, the contents of all NEON
registers to/from memory, so doing it less often had an obvious
performance benefit. In the meantime, however, the core kernel mode
NEON code has been refactored: kernel_neon_begin() now only incurs this
penalty the first time it is called after entering the kernel, and the
NEON register restore is deferred until the return to userland. This
means pulling those calls into the loops that iterate over the
input/output of the crypto algorithm is no longer a big deal (although
there are some places in the code where we relied on the NEON registers
retaining their values between calls).

So let's clean this up for arm64: update the NEON based skcipher
drivers so they no longer keep the NEON enabled when calling into the
skcipher walk API, as sketched below.

As pointed out by Peter, this only solves part of the problem, so let's
tackle it more thoroughly and update the algorithms to test the
NEED_RESCHED flag after each fixed-size chunk of input has been
processed. Given that this issue was flagged by the RT people, I would
appreciate it if they could confirm whether they are happy with this
approach.
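For the skcipher drivers, the change boils down to pulling the
begin/end pair into the walk loop. Again a simplified sketch of the
routine shown above (the _preemptible suffix is just for illustration),
not the literal patch:

	static int ecb_encrypt_preemptible(struct skcipher_request *req)
	{
		struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
		struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
		int err, rounds = 6 + ctx->key_length / 4;
		struct skcipher_walk walk;
		unsigned int blocks;

		/* the walk may now sleep: NEON is not held around it */
		err = skcipher_walk_virt(&walk, req, false);

		while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
			kernel_neon_begin();	/* cheap since the rework */
			aes_ecb_encrypt(walk.dst.virt.addr,
					walk.src.virt.addr,
					(u8 *)ctx->key_enc, rounds, blocks);
			kernel_neon_end();	/* preemption possible here */
			err = skcipher_walk_done(&walk,
						 walk.nbytes % AES_BLOCK_SIZE);
		}
		return err;
	}

For the C glue code of the NEON hashes, the same idea takes the form of
bounding how much input is processed per kernel mode NEON section.
sha256_block_neon() is the existing asm routine; sha256_do_update() and
MAX_BLOCKS_PER_CALL are hypothetical names chosen for illustration, the
actual patches pick their own per-algorithm limits:

	#include <crypto/sha.h>
	#include <asm/neon.h>

	asmlinkage void sha256_block_neon(u32 *digest, const void *data,
					  unsigned int num_blks);

	#define MAX_BLOCKS_PER_CALL	128U	/* hypothetical bound */

	static void sha256_do_update(u32 *state, const u8 *data,
				     unsigned int blocks)
	{
		while (blocks) {
			unsigned int chunk = min(blocks,
						 MAX_BLOCKS_PER_CALL);

			kernel_neon_begin();
			sha256_block_neon(state, data, chunk);
			kernel_neon_end();	/* reschedule point */

			data   += chunk * SHA256_BLOCK_SIZE;
			blocks -= chunk;
		}
	}

The pure asm routines (the *-core.S files), which cannot simply return
to C between blocks, get the equivalent behaviour from the frame
push/pop and conditional NEON yield macros added to
arch/arm64/include/asm/assembler.h by patches #10 and #11.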
Changes since v4:
- rebase onto v4.16-rc3
- apply the same treatment to the new SHA-512, SHA-3 and SM3 code that
  landed in v4.16-rc1

Changes since v3:
- incorporate Dave's feedback on the asm macros to push/pop frames and
  to yield the NEON conditionally
- make frame_push/pop easier to use, by recording the arguments to
  frame_push, removing the need to specify them again when calling
  frame_pop
- emit local symbol .Lframe_local_offset to allow code using the frame
  push/pop macros to index the stack more easily
- use the magic \@ macro invocation counter provided by GAS to generate
  unique labels in the NEON yield macros, rather than relying on chance

Changes since v2:
- drop the logic to yield only after a certain number of blocks: as it
  turns out, the throughput of the algorithms that are most likely to
  be affected by the overhead (GHASH and AES-CE) only drops by ~1% (on
  Cortex-A57), and if that is unacceptable, you are probably not using
  CONFIG_PREEMPT in the first place
- add yield support to the AES-CCM driver
- clean up macros based on feedback from Dave
- given that I had to add stack frame logic to many of these functions,
  factor it out and wrap it in a couple of macros
- merge the changes to the core asm driver and glue code of the
  GHASH/GCM driver; the latter was not correct without the former

Changes since v1:
- add CRC-T10DIF test vector (#1)
- stop using GFP_ATOMIC in scatterwalk API calls, now that they are
  executed with preemption enabled (#2 - #6)
- do some preparatory refactoring on the AES block mode code (#7 - #9)
- add yield patches (#10 - #18)
- add test patch (#19) - DO NOT MERGE

Cc: Dave Martin
Cc: Russell King - ARM Linux
Cc: Sebastian Andrzej Siewior
Cc: Mark Rutland
Cc: linux-rt-users@vger.kernel.org
Cc: Peter Zijlstra
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Steven Rostedt
Cc: Thomas Gleixner

Ard Biesheuvel (23):
  crypto: testmgr - add a new test case for CRC-T10DIF
  crypto: arm64/aes-ce-ccm - move kernel mode neon en/disable into loop
  crypto: arm64/aes-blk - move kernel mode neon en/disable into loop
  crypto: arm64/aes-bs - move kernel mode neon en/disable into loop
  crypto: arm64/chacha20 - move kernel mode neon en/disable into loop
  crypto: arm64/aes-blk - remove configurable interleave
  crypto: arm64/aes-blk - add 4 way interleave to CBC encrypt path
  crypto: arm64/aes-blk - add 4 way interleave to CBC-MAC encrypt path
  crypto: arm64/sha256-neon - play nice with CONFIG_PREEMPT kernels
  arm64: assembler: add utility macros to push/pop stack frames
  arm64: assembler: add macros to conditionally yield the NEON under
    PREEMPT
  crypto: arm64/sha1-ce - yield NEON after every block of input
  crypto: arm64/sha2-ce - yield NEON after every block of input
  crypto: arm64/aes-ccm - yield NEON after every block of input
  crypto: arm64/aes-blk - yield NEON after every block of input
  crypto: arm64/aes-bs - yield NEON after every block of input
  crypto: arm64/aes-ghash - yield NEON after every block of input
  crypto: arm64/crc32-ce - yield NEON after every block of input
  crypto: arm64/crct10dif-ce - yield NEON after every block of input
  crypto: arm64/sha3-ce - yield NEON after every block of input
  crypto: arm64/sha512-ce - yield NEON after every block of input
  crypto: arm64/sm3-ce - yield NEON after every block of input
  DO NOT MERGE

 arch/arm64/crypto/Makefile             |   3 -
 arch/arm64/crypto/aes-ce-ccm-core.S    | 150 ++++--
 arch/arm64/crypto/aes-ce-ccm-glue.c    |  47 +-
 arch/arm64/crypto/aes-ce.S             |  15 +-
 arch/arm64/crypto/aes-glue.c           |  95 ++--
 arch/arm64/crypto/aes-modes.S          | 562 +++++++++----------
 arch/arm64/crypto/aes-neonbs-core.S    | 305 ++++++-----
 arch/arm64/crypto/aes-neonbs-glue.c    |  48 +-
 arch/arm64/crypto/chacha20-neon-glue.c |  12 +-
 arch/arm64/crypto/crc32-ce-core.S      |  40 +-
 arch/arm64/crypto/crct10dif-ce-core.S  |  32 +-
 arch/arm64/crypto/ghash-ce-core.S      | 113 ++--
 arch/arm64/crypto/ghash-ce-glue.c      |  28 +-
 arch/arm64/crypto/sha1-ce-core.S       |  42 +-
 arch/arm64/crypto/sha2-ce-core.S       |  37 +-
 arch/arm64/crypto/sha256-glue.c        |  36 +-
 arch/arm64/crypto/sha3-ce-core.S       |  77 ++-
 arch/arm64/crypto/sha512-ce-core.S     |  27 +-
 arch/arm64/crypto/sm3-ce-core.S        |  30 +-
 arch/arm64/include/asm/assembler.h     | 167 ++++++
 arch/arm64/kernel/asm-offsets.c        |   2 +
 crypto/testmgr.h                       | 259 +++++++++
 22 files changed, 1392 insertions(+), 735 deletions(-)

-- 
2.15.1