From patchwork Sat Jun 10 16:22:46 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 103555
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
    linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com,
    will.deacon@arm.com, dave.martin@arm.com
Cc: Ard Biesheuvel
Subject: [PATCH 00/12] arm64: crypto: prepare for new kernel mode NEON policy
Date: Sat, 10 Jun 2017 16:22:46 +0000
Message-Id: <1497111778-4210-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: linux-crypto@vger.kernel.org

TL;DR: preparatory work for expected changes in arm64's handling of
kernel mode SIMD

@Herbert: the arm64 maintainers may want to take this through the arm64
tree, and if not, we need their acks on patch #1. Thanks.

Currently, arm64 allows kernel mode NEON (KMN) in process, softirq or
hardirq context. In the process case, we preserve/restore the NEON
context lazily, but in the softirq/hardirq cases, we eagerly stash a
slice of the NEON register file, and immediately restore it when
kernel_neon_end() is called.
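(For readers not familiar with the API: a KMN user simply brackets its
NEON code with kernel_neon_begin()/kernel_neon_end(), roughly as in the
simplified sketch below. The glue routine and the NEON asm helper are
made-up names for illustration only, not code from this series.)

  #include <linux/types.h>
  #include <asm/neon.h>           /* kernel_neon_begin()/kernel_neon_end() */

  /* hypothetical NEON assembly transform */
  asmlinkage void my_cipher_transform_neon(u8 dst[], u8 const src[],
                                           int blocks);

  static void my_cipher_do_blocks(u8 dst[], u8 const src[], int blocks)
  {
          kernel_neon_begin();    /* NEON register state preserved for us */
          my_cipher_transform_neon(dst, src, blocks);
          kernel_neon_end();      /* ... and restored again here          */
  }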
Given the above, arm64 does not actually use the generic may_use_simd()
API at all*, even though it was added to allow async wrappers of
synchronous SIMD routines to be implemented in a generic manner. (On
x86, kernel mode SIMD may be used in process context or while serving
an interrupt taken from user space; on ARM, SIMD may only be used in
process context.)

When adding support for the SVE architecture extension, which shares
part of its register file with the NEON/SIMD and crypto extensions, the
eager preserve/restore in interrupt context becomes a problem: it would
either have to preserve and restore the entire SVE state (which may be
up to 8 KB in size), or it must not be allowed to interrupt the lazy
preserve, which does need to deal with the large SVE state anyway.
Otherwise, such an interruption would corrupt the NEON state that the
lazy preserve sees after the interruption.

Given that
a) KMN is never actually used in hardirq context,
b) KMN is only used in softirq context by mac80211 code running on
   behalf of WiFi devices that don't perform the crypto in hardware, and
c) KMN in softirq context is statistically unlikely to interrupt the
   kernel while it is doing kernel mode NEON in process context,
the unconditional eager preserve/restore typically executes when no KMN
in process context is in progress, and we can simplify things
substantially by disallowing nested KMN altogether: disallow KMN in
hardirq context, and allow KMN in softirq context only if no KMN in
process context is already in progress.

The no-nesting rule leaves only the outer, SVE-aware lazy
preserve/restore, which needs to execute with bottom halves disabled,
but other than that, no intrusive changes should be needed to deal with
the SVE payloads.

Given that the no-nesting rule implies that SIMD is no longer
guaranteed to be usable in every context, the KMN users need to be made
aware of this. This series updates the current KMN users in the arm64
tree to take may_use_simd() into account. Since at this time SIMD is
still usable in every context, an implementation of may_use_simd() is
added that simply returns true (#1). It will be updated in the future
when the no-nesting modifications are made.

* may_use_simd() is only used as a hint in the SHA256 NEON code, since
  on some microarchitectures it is only marginally faster, and the eager
  preserve and restore could actually make it slower.
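To give an idea of the shape of the changes (a sketch, not the literal
patch contents): #1 adds an arm64 may_use_simd() that unconditionally
returns true for now, and the per-driver patches follow a pattern along
the lines of the second snippet, where the scalar fallback routine is a
hypothetical name standing in for whatever generic/scalar code the
driver falls back to.

  /* sketch of the interim arm64 stub added in #1 */
  #include <linux/compiler.h>
  #include <linux/types.h>

  static __must_check inline bool may_use_simd(void)
  {
          return true;    /* tightened once the no-nesting rule lands */
  }

  /* typical driver-side pattern; the my_cipher_* helpers are made up */
  static int my_cipher_crypt(u8 dst[], u8 const src[], int blocks)
  {
          if (!may_use_simd())
                  /* NEON not usable here: use the scalar fallback */
                  return my_cipher_crypt_scalar(dst, src, blocks);

          kernel_neon_begin();
          my_cipher_transform_neon(dst, src, blocks);
          kernel_neon_end();
          return 0;
  }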
Ard Biesheuvel (12):
  arm64: neon: replace generic definition of may_use_simd()
  crypto: arm64/ghash-ce - add non-SIMD scalar fallback
  crypto: arm64/crct10dif - add non-SIMD generic fallback
  crypto: arm64/crc32 - add non-SIMD scalar fallback
  crypto: arm64/sha1-ce - add non-SIMD generic fallback
  crypto: arm64/sha2-ce - add non-SIMD scalar fallback
  crypto: arm64/aes-ce-cipher - match round key endianness with generic code
  crypto: arm64/aes-ce-cipher: add non-SIMD generic fallback
  crypto: arm64/aes-ce-ccm: add non-SIMD generic fallback
  crypto: arm64/aes-blk - add a non-SIMD fallback for synchronous CTR
  crypto: arm64/chacha20 - take may_use_simd() into account
  crypto: arm64/aes-bs - implement non-SIMD fallback for AES-CTR

 arch/arm64/crypto/Kconfig              |  22 ++-
 arch/arm64/crypto/aes-ce-ccm-core.S    |  30 ++--
 arch/arm64/crypto/aes-ce-ccm-glue.c    | 152 +++++++++++++++-----
 arch/arm64/crypto/aes-ce-cipher.c      |  55 ++++---
 arch/arm64/crypto/aes-ce.S             |  12 +-
 arch/arm64/crypto/aes-ctr-fallback.h   |  55 +++++++
 arch/arm64/crypto/aes-glue.c           |  17 ++-
 arch/arm64/crypto/aes-neonbs-glue.c    |  48 ++++++-
 arch/arm64/crypto/chacha20-neon-glue.c |   5 +-
 arch/arm64/crypto/crc32-ce-glue.c      |  11 +-
 arch/arm64/crypto/crct10dif-ce-glue.c  |  13 +-
 arch/arm64/crypto/ghash-ce-glue.c      |  49 +++++--
 arch/arm64/crypto/sha1-ce-glue.c       |  18 ++-
 arch/arm64/crypto/sha2-ce-glue.c       |  30 +++-
 arch/arm64/crypto/sha256-glue.c        |   1 +
 arch/arm64/include/asm/Kbuild          |   1 -
 arch/arm64/include/asm/simd.h          |  24 ++++
 17 files changed, 420 insertions(+), 123 deletions(-)
 create mode 100644 arch/arm64/crypto/aes-ctr-fallback.h
 create mode 100644 arch/arm64/include/asm/simd.h

-- 
2.7.4