From patchwork Mon Sep 21 07:58:17 2020
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 252977
From: Nicolai Stange
To: "Theodore Y. Ts'o"
Cc: linux-crypto@vger.kernel.org, LKML, Arnd Bergmann, Greg Kroah-Hartman,
    "Eric W. Biederman", "Alexander E. Patrakov", "Ahmed S. Darwish",
    Willy Tarreau, Matthew Garrett, Vito Caputo, Andreas Dilger, Jan Kara,
    Ray Strode, William Jon McCann, zhangjs, Andy Lutomirski, Florian Weimer,
    Lennart Poettering, Peter Matthias, Marcelo Henrique Cerri,
    Roman Drahtmueller, Neil Horman, Randy Dunlap, Julia Lawall,
    Dan Carpenter, Andy Lavr, Eric Biggers, "Jason A. Donenfeld",
    Stephan Müller, Torsten Duwe, Petr Tesarik, Nicolai Stange
Subject: [RFC PATCH 01/41] random: remove dead code in credit_entropy_bits()
Date: Mon, 21 Sep 2020 09:58:17 +0200
Message-Id: <20200921075857.4424-2-nstange@suse.de>
In-Reply-To: <20200921075857.4424-1-nstange@suse.de>
References: <20200921075857.4424-1-nstange@suse.de>
List-ID: X-Mailing-List: linux-crypto@vger.kernel.org

Since commit 90ea1c6436d2 ("random: remove the blocking pool"), the local
variable has_initialized in credit_entropy_bits() is never set anymore and
the corresponding if-clause has become dead code. Remove it, as well as the
has_initialized variable itself, from credit_entropy_bits().

Signed-off-by: Nicolai Stange

---
 drivers/char/random.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index d20ba1b104ca..0580968fd28c 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -660,7 +660,7 @@ static void process_random_ready_list(void)
  */
 static void credit_entropy_bits(struct entropy_store *r, int nbits)
 {
-	int entropy_count, orig, has_initialized = 0;
+	int entropy_count, orig;
 	const int pool_size = r->poolinfo->poolfracbits;
 	int nfrac = nbits << ENTROPY_SHIFT;
@@ -717,11 +717,6 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
 		goto retry;
 
-	if (has_initialized) {
-		r->initialized = 1;
-		kill_fasync(&fasync, SIGIO, POLL_IN);
-	}
-
 	trace_credit_entropy_bits(r->name, nbits,
 				  entropy_count >> ENTROPY_SHIFT, _RET_IP_);

From patchwork Mon Sep 21 07:58:18 2020
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 252958
From: Nicolai Stange
To: "Theodore Y. Ts'o"
Cc: linux-crypto@vger.kernel.org, LKML
Subject: [RFC PATCH 02/41] random: remove dead code for nbits < 0 in credit_entropy_bits()
Date: Mon, 21 Sep 2020 09:58:18 +0200
Message-Id: <20200921075857.4424-3-nstange@suse.de>
In-Reply-To: <20200921075857.4424-1-nstange@suse.de>
References: <20200921075857.4424-1-nstange@suse.de>

The nbits argument to credit_entropy_bits() is never negative, and the
branch handling negative values is dead code. Remove it.

The code handling the regular nbits > 0 case used to live in the
corresponding else branch, but has now been lifted up to function scope.
Move the declaration of 'pnfrac' to the function prologue in order to adhere
to C99 rules. Likewise, move the declaration of 's' into the loop body, the
only scope it is referenced from.

Signed-off-by: Nicolai Stange

---
 drivers/char/random.c | 69 ++++++++++++++++++++-----------------------
 1 file changed, 32 insertions(+), 37 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 0580968fd28c..c4b7bdbd460e 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -654,7 +654,7 @@ static void process_random_ready_list(void)
 }
 
 /*
- * Credit (or debit) the entropy store with n bits of entropy.
+ * Credit the entropy store with n bits of entropy.
  * Use credit_entropy_bits_safe() if the value comes from userspace
  * or otherwise should be checked for extreme values.
  */
@@ -663,50 +663,45 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 	int entropy_count, orig;
 	const int pool_size = r->poolinfo->poolfracbits;
 	int nfrac = nbits << ENTROPY_SHIFT;
+	int pnfrac;
 
 	if (!nbits)
 		return;
 
 retry:
 	entropy_count = orig = READ_ONCE(r->entropy_count);
-	if (nfrac < 0) {
-		/* Debit */
-		entropy_count += nfrac;
-	} else {
-		/*
-		 * Credit: we have to account for the possibility of
-		 * overwriting already present entropy. Even in the
-		 * ideal case of pure Shannon entropy, new contributions
-		 * approach the full value asymptotically:
-		 *
-		 * entropy <- entropy + (pool_size - entropy) *
-		 *	(1 - exp(-add_entropy/pool_size))
-		 *
-		 * For add_entropy <= pool_size/2 then
-		 * (1 - exp(-add_entropy/pool_size)) >=
-		 *     (add_entropy/pool_size)*0.7869...
-		 * so we can approximate the exponential with
-		 * 3/4*add_entropy/pool_size and still be on the
-		 * safe side by adding at most pool_size/2 at a time.
-		 *
-		 * The use of pool_size-2 in the while statement is to
-		 * prevent rounding artifacts from making the loop
-		 * arbitrarily long; this limits the loop to log2(pool_size)*2
-		 * turns no matter how large nbits is.
-		 */
-		int pnfrac = nfrac;
-		const int s = r->poolinfo->poolbitshift + ENTROPY_SHIFT + 2;
+	/*
+	 * Credit: we have to account for the possibility of
+	 * overwriting already present entropy. Even in the
+	 * ideal case of pure Shannon entropy, new contributions
+	 * approach the full value asymptotically:
+	 *
+	 * entropy <- entropy + (pool_size - entropy) *
+	 *	(1 - exp(-add_entropy/pool_size))
+	 *
+	 * For add_entropy <= pool_size/2 then
+	 * (1 - exp(-add_entropy/pool_size)) >=
+	 *     (add_entropy/pool_size)*0.7869...
+	 * so we can approximate the exponential with
+	 * 3/4*add_entropy/pool_size and still be on the
+	 * safe side by adding at most pool_size/2 at a time.
+	 *
+	 * The use of pool_size-2 in the while statement is to
+	 * prevent rounding artifacts from making the loop
+	 * arbitrarily long; this limits the loop to log2(pool_size)*2
+	 * turns no matter how large nbits is.
+	 */
+	pnfrac = nfrac;
+	do {
+		/* The +2 corresponds to the /4 in the denominator */
+		const int s = r->poolinfo->poolbitshift + ENTROPY_SHIFT + 2;
+		unsigned int anfrac = min(pnfrac, pool_size/2);
+		unsigned int add =
+			((pool_size - entropy_count)*anfrac*3) >> s;
 
-		do {
-			unsigned int anfrac = min(pnfrac, pool_size/2);
-			unsigned int add =
-				((pool_size - entropy_count)*anfrac*3) >> s;
-
-			entropy_count += add;
-			pnfrac -= anfrac;
-		} while (unlikely(entropy_count < pool_size-2 && pnfrac));
-	}
+		entropy_count += add;
+		pnfrac -= anfrac;
+	} while (unlikely(entropy_count < pool_size-2 && pnfrac));
 
 	if (WARN_ON(entropy_count < 0)) {
 		pr_warn("negative entropy/overflow: pool %s count %d\n",
From patchwork Mon Sep 21 07:58:21 2020
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 252975
From: Nicolai Stange
To: "Theodore Y. Ts'o"
Cc: linux-crypto@vger.kernel.org, LKML
Subject: [RFC PATCH 05/41] random: don't reset entropy to zero on overflow
Date: Mon, 21 Sep 2020 09:58:21 +0200
Message-Id: <20200921075857.4424-6-nstange@suse.de>
In-Reply-To: <20200921075857.4424-1-nstange@suse.de>
References: <20200921075857.4424-1-nstange@suse.de>

credit_entropy_bits() adds one or more positive values to the signed
entropy_count and checks whether the result is negative afterwards. Note
that because the initial value of entropy_count is positive, a negative
result can happen only on overflow.

If the final entropy_count is found to have overflown, a WARN() is emitted
and the entropy_store's entropy count is reset to zero. Even though this
case should never happen, it is better to retain the previously available
entropy, as this will facilitate a future change factoring out the
approximation of the exponential.

Make credit_entropy_bits() reset entropy_count to the original value rather
than zero on overflow.

Signed-off-by: Nicolai Stange

---
 drivers/char/random.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 35e381be20fe..6adac462aa0d 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -706,7 +706,7 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 	if (WARN_ON(entropy_count < 0)) {
 		pr_warn("negative entropy/overflow: pool %s count %d\n",
 			r->name, entropy_count);
-		entropy_count = 0;
+		entropy_count = orig;
 	} else if (entropy_count > pool_size)
 		entropy_count = pool_size;
 	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
From patchwork Mon Oct 19 19:34:51 2020
X-Patchwork-Submitter: Stephan Mueller
X-Patchwork-Id: 285412
From: Stephan Müller
To: Torsten Duwe
Cc: Willy Tarreau, "Theodore Y. Ts'o", linux-crypto@vger.kernel.org,
    Nicolai Stange, LKML
Subject: [PATCH v36 07/13] LRNG - add SP800-90A DRBG extension
Date: Mon, 19 Oct 2020 21:34:51 +0200
Message-ID: <7861221.NyiUUSuA9g@positron.chronox.de>
In-Reply-To: <3073852.aeNJFYEL58@positron.chronox.de>
References: <20200921075857.4424-1-nstange@suse.de> <20201016172619.GA18410@lst.de> <3073852.aeNJFYEL58@positron.chronox.de>

Using the LRNG's switchable DRNG support, implement the SP800-90A DRBG
extension. The DRBG uses the kernel crypto API DRBG implementation. In
addition, it uses the kernel crypto API SHASH support to provide the
hashing operation.

The DRBG supports the choice of either a CTR DRBG using AES-256, an HMAC
DRBG with SHA-512 core, or a Hash DRBG with SHA-512 core. The core can be
selected with the module parameter lrng_drbg_type. The default is the CTR
DRBG.

When compiling the DRBG extension statically, the DRBG is loaded at the
late_initcall stage, which implies that with the start of user space, the
user space interfaces of getrandom(2), /dev/random and /dev/urandom provide
random data produced by an SP800-90A DRBG.

Reviewed-by: Roman Drahtmueller
Tested-by: Roman Drahtmüller
Tested-by: Marcelo Henrique Cerri
Tested-by: Neil Horman
Signed-off-by: Stephan Mueller

---
 drivers/char/lrng/Kconfig     |  10 ++
 drivers/char/lrng/Makefile    |   1 +
 drivers/char/lrng/lrng_drbg.c | 197 ++++++++++++++++++++++++++++++++++
 3 files changed, 208 insertions(+)
 create mode 100644 drivers/char/lrng/lrng_drbg.c

diff --git a/drivers/char/lrng/Kconfig b/drivers/char/lrng/Kconfig
index daa2057248ac..a3c4cd153f35 100644
--- a/drivers/char/lrng/Kconfig
+++ b/drivers/char/lrng/Kconfig
@@ -81,6 +81,16 @@ if LRNG_DRNG_SWITCH
 config LRNG_KCAPI_HASH
 	bool
 
+config LRNG_DRBG
+	tristate "SP800-90A support for the LRNG"
+	depends on CRYPTO
+	select CRYPTO_DRBG_MENU
+	select CRYPTO_SHA512
+	select LRNG_KCAPI_HASH
+	help
+	  Enable the SP800-90A DRBG support for the LRNG. Once the
+	  module is loaded, output from /dev/random, /dev/urandom,
+	  getrandom(2), or get_random_bytes_full is provided by a DRBG.
+
 endif # LRNG_DRNG_SWITCH
 
 endif # LRNG
diff --git a/drivers/char/lrng/Makefile b/drivers/char/lrng/Makefile
index 40f8826edeeb..6ebd252db12f 100644
--- a/drivers/char/lrng/Makefile
+++ b/drivers/char/lrng/Makefile
@@ -12,3 +12,4 @@ obj-$(CONFIG_NUMA)		+= lrng_numa.o
 obj-$(CONFIG_SYSCTL)		+= lrng_proc.o
 obj-$(CONFIG_LRNG_DRNG_SWITCH)	+= lrng_switch.o
 obj-$(CONFIG_LRNG_KCAPI_HASH)	+= lrng_kcapi_hash.o
+obj-$(CONFIG_LRNG_DRBG)		+= lrng_drbg.o
diff --git a/drivers/char/lrng/lrng_drbg.c b/drivers/char/lrng/lrng_drbg.c
new file mode 100644
index 000000000000..c428d41af64d
--- /dev/null
+++ b/drivers/char/lrng/lrng_drbg.c
@@ -0,0 +1,197 @@
+// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+/*
+ * Backend for the LRNG providing the cryptographic primitives using the
+ * kernel crypto API and its DRBG.
+ *
+ * Copyright (C) 2016 - 2020, Stephan Mueller
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include
+#include
+#include
+#include
+
+#include "lrng_kcapi_hash.h"
+
+/*
+ * Define a DRBG plus a hash / MAC used to extract data from the entropy pool.
+ * For LRNG_HASH_NAME you can use a hash or a MAC (HMAC or CMAC) of your choice
+ * (Note, you should use the suggested selections below -- using SHA-1 or MD5
+ * is not wise). The idea is that the used cipher primitive can be selected to
+ * be the same as used for the DRBG. I.e. the LRNG only uses one cipher
+ * primitive using the same cipher implementation with the options offered in
+ * the following. This means, if the CTR DRBG is selected and AES-NI is present,
+ * both the CTR DRBG and the selected cmac(aes) use AES-NI.
+ *
+ * The security strengths of the DRBGs are all 256 bits according to
+ * SP800-57 section 5.6.1.
+ *
+ * This definition is allowed to be changed.
+ */
+#ifdef CONFIG_CRYPTO_DRBG_CTR
+static unsigned int lrng_drbg_type = 0;
+#elif defined CONFIG_CRYPTO_DRBG_HMAC
+static unsigned int lrng_drbg_type = 1;
+#elif defined CONFIG_CRYPTO_DRBG_HASH
+static unsigned int lrng_drbg_type = 2;
+#else
+#error "Unknown DRBG in use"
+#endif
+
+/* The parameter must be r/o in sysfs as otherwise races appear. */
+module_param(lrng_drbg_type, uint, 0444);
+MODULE_PARM_DESC(lrng_drbg_type, "DRBG type used for LRNG (0->CTR_DRBG, 1->HMAC_DRBG, 2->Hash_DRBG)");
+
+struct lrng_drbg {
+	const char *hash_name;
+	const char *drbg_core;
+};
+
+static const struct lrng_drbg lrng_drbg_types[] = {
+	{	/* CTR_DRBG with AES-256 using derivation function */
+		.hash_name = "sha512",
+		.drbg_core = "drbg_nopr_ctr_aes256",
+	}, {	/* HMAC_DRBG with SHA-512 */
+		.hash_name = "sha512",
+		.drbg_core = "drbg_nopr_hmac_sha512",
+	}, {	/* Hash_DRBG with SHA-512 using derivation function */
+		.hash_name = "sha512",
+		.drbg_core = "drbg_nopr_sha512"
+	}
+};
+
+static int lrng_drbg_drng_seed_helper(void *drng, const u8 *inbuf, u32 inbuflen)
+{
+	struct drbg_state *drbg = (struct drbg_state *)drng;
+	LIST_HEAD(seedlist);
+	struct drbg_string data;
+	int ret;
+
+	drbg_string_fill(&data, inbuf, inbuflen);
+	list_add_tail(&data.list, &seedlist);
+	ret = drbg->d_ops->update(drbg, &seedlist, drbg->seeded);
+
+	if (ret >= 0)
+		drbg->seeded = true;
+
+	return ret;
+}
+
+static int lrng_drbg_drng_generate_helper(void *drng, u8 *outbuf, u32 outbuflen)
+{
+	struct drbg_state *drbg = (struct drbg_state *)drng;
+
+	return drbg->d_ops->generate(drbg, outbuf, outbuflen, NULL);
+}
+
+static void *lrng_drbg_drng_alloc(u32 sec_strength)
+{
+	struct drbg_state *drbg;
+	int coreref = -1;
+	bool pr = false;
+	int ret;
+
+	drbg_convert_tfm_core(lrng_drbg_types[lrng_drbg_type].drbg_core,
+			      &coreref, &pr);
+	if (coreref < 0)
+		return ERR_PTR(-EFAULT);
+
+	drbg = kzalloc(sizeof(struct drbg_state), GFP_KERNEL);
+	if (!drbg)
+		return ERR_PTR(-ENOMEM);
+
+	drbg->core = &drbg_cores[coreref];
+	drbg->seeded = false;
+	ret = drbg_alloc_state(drbg);
+	if (ret)
+		goto err;
+
+	if (sec_strength > drbg_sec_strength(drbg->core->flags)) {
+		pr_err("Security strength of DRBG (%u bits) lower than requested by LRNG (%u bits)\n",
+		       drbg_sec_strength(drbg->core->flags) * 8,
+		       sec_strength * 8);
+		goto dealloc;
+	}
+
+	if (sec_strength <
+	    drbg_sec_strength(drbg->core->flags))
+		pr_warn("Security strength of DRBG (%u bits) higher than requested by LRNG (%u bits)\n",
+			drbg_sec_strength(drbg->core->flags) * 8,
+			sec_strength * 8);
+
+	pr_info("DRBG with %s core allocated\n", drbg->core->backend_cra_name);
+
+	return drbg;
+
+dealloc:
+	if (drbg->d_ops)
+		drbg->d_ops->crypto_fini(drbg);
+	drbg_dealloc_state(drbg);
+err:
+	kfree(drbg);
+	return ERR_PTR(-EINVAL);
+}
+
+static void lrng_drbg_drng_dealloc(void *drng)
+{
+	struct drbg_state *drbg = (struct drbg_state *)drng;
+
+	if (drbg && drbg->d_ops)
+		drbg->d_ops->crypto_fini(drbg);
+	drbg_dealloc_state(drbg);
+	kfree_sensitive(drbg);
+	pr_info("DRBG deallocated\n");
+}
+
+static void *lrng_drbg_hash_alloc(void)
+{
+	return lrng_kcapi_hash_alloc(lrng_drbg_types[lrng_drbg_type].hash_name);
+}
+
+static const char *lrng_drbg_name(void)
+{
+	return lrng_drbg_types[lrng_drbg_type].drbg_core;
+}
+
+static const char *lrng_hash_name(void)
+{
+	return lrng_drbg_types[lrng_drbg_type].hash_name;
+}
+
+static const struct lrng_crypto_cb lrng_drbg_crypto_cb = {
+	.lrng_drng_name			= lrng_drbg_name,
+	.lrng_hash_name			= lrng_hash_name,
+	.lrng_drng_alloc		= lrng_drbg_drng_alloc,
+	.lrng_drng_dealloc		= lrng_drbg_drng_dealloc,
+	.lrng_drng_seed_helper		= lrng_drbg_drng_seed_helper,
+	.lrng_drng_generate_helper	= lrng_drbg_drng_generate_helper,
+	.lrng_hash_alloc		= lrng_drbg_hash_alloc,
+	.lrng_hash_dealloc		= lrng_kcapi_hash_dealloc,
+	.lrng_hash_digestsize		= lrng_kcapi_hash_digestsize,
+	.lrng_hash_init			= lrng_kcapi_hash_init,
+	.lrng_hash_update		= lrng_kcapi_hash_update,
+	.lrng_hash_final		= lrng_kcapi_hash_final,
+};
+
+static int __init lrng_drbg_init(void)
+{
+	if (lrng_drbg_type >= ARRAY_SIZE(lrng_drbg_types)) {
+		pr_err("lrng_drbg_type parameter too large (given %u - max: %lu)",
+		       lrng_drbg_type,
+		       (unsigned long)ARRAY_SIZE(lrng_drbg_types) - 1);
+		return -EAGAIN;
+	}
+	return lrng_set_drng_cb(&lrng_drbg_crypto_cb);
+}
+
+static void __exit lrng_drbg_exit(void)
+{
+	lrng_set_drng_cb(NULL);
+}
+
+late_initcall(lrng_drbg_init);
+module_exit(lrng_drbg_exit);
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("Stephan Mueller");
+MODULE_DESCRIPTION("Linux Random Number Generator - SP800-90A DRBG backend");
From patchwork Mon Sep 21 07:58:24 2020
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 252976
From: Nicolai Stange
To: "Theodore Y. Ts'o"
Cc: linux-crypto@vger.kernel.org, LKML
Subject: [RFC PATCH 08/41] random: introduce __credit_entropy_bits_fast() for hot paths
Date: Mon, 21 Sep 2020 09:58:24 +0200
Message-Id: <20200921075857.4424-9-nstange@suse.de>
In-Reply-To: <20200921075857.4424-1-nstange@suse.de>
References: <20200921075857.4424-1-nstange@suse.de>

When transferring entropy from the fast_pool into the global input_pool
from add_interrupt_randomness(), there are at least two atomic operations
involved: one when taking the input_pool's spinlock for the actual mixing
and another one in the cmpxchg loop in credit_entropy_bits() for updating
the pool's ->entropy_count. Because cmpxchg is potentially costly, it would
be nice if it could be avoided.

As said, the input_pool's spinlock is taken anyway, and I see no reason why
its scope should not be extended to protect ->entropy_count as well.
Performance considerations set aside, this will also facilitate future
changes introducing additional fields to input_pool which will also have to
get updated atomically from the consumer/producer sides.

The actual move to extend the spinlock's scope to cover ->entropy_count
will be the subject of a future patch. Prepare for that by putting a limit
on the work to be done with the lock being held. In order to avoid
releasing and regrabbing the lock from hot producer paths, they'll keep it
when executing those calculations in pool_entropy_delta().
The loop found in the latter has a theoretical upper bound of
2 * log2(pool_size) == 24 iterations. However, as all entropy increments
awarded from the interrupt path are less than pool_size/2 in magnitude, it
is safe to enforce a guaranteed limit of one on the iteration count by
setting pool_entropy_delta()'s 'fast' parameter.

Introduce __credit_entropy_bits_fast() doing exactly that. Currently it
resembles the behaviour of credit_entropy_bits(), except that
- pool_entropy_delta() gets called with 'fast' set to true, and
- __credit_entropy_bits_fast() returns a bool indicating whether the
  caller should reseed the primary_crng.

Note that unlike the case with credit_entropy_bits(), the reseeding won't
be possible from within __credit_entropy_bits_fast() anymore once it
actually gets invoked with the pool lock being held in the future.

There is no functional change.

Signed-off-by: Nicolai Stange

---
 drivers/char/random.c | 49 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 46 insertions(+), 3 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 08caa7a691a5..d9e4dd27d45d 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -714,6 +714,39 @@ static unsigned int pool_entropy_delta(struct entropy_store *r,
 	return entropy_count - base_entropy_count;
 }
 
+/*
+ * Credit the entropy store with n bits of entropy.
+ * To be used from hot paths when it is either known that nbits is
+ * smaller than one half of the pool size or losing anything beyond that
+ * doesn't matter.
+ */
+static bool __credit_entropy_bits_fast(struct entropy_store *r, int nbits)
+{
+	int entropy_count, orig;
+
+	if (!nbits)
+		return false;
+
+retry:
+	orig = READ_ONCE(r->entropy_count);
+	entropy_count = orig + pool_entropy_delta(r, orig,
+						  nbits << ENTROPY_SHIFT,
+						  true);
+	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
+		goto retry;
+
+	trace_credit_entropy_bits(r->name, nbits,
+				  entropy_count >> ENTROPY_SHIFT, _RET_IP_);
+
+	if (unlikely(r == &input_pool && crng_init < 2)) {
+		const int entropy_bits = entropy_count >> ENTROPY_SHIFT;
+
+		return (entropy_bits >= 128);
+	}
+
+	return false;
+}
+
 /*
  * Credit the entropy store with n bits of entropy.
  * Use credit_entropy_bits_safe() if the value comes from userspace
@@ -1169,6 +1202,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
 		unsigned num;
 	} sample;
 	long delta, delta2, delta3;
+	bool reseed;
 
 	sample.jiffies = jiffies;
 	sample.cycles = random_get_entropy();
@@ -1206,7 +1240,9 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
 	 * Round down by 1 bit on general principles,
 	 * and limit entropy estimate to 12 bits.
 	 */
-	credit_entropy_bits(r, min_t(int, fls(delta>>1), 11));
+	reseed = __credit_entropy_bits_fast(r, min_t(int, fls(delta>>1), 11));
+	if (reseed)
+		crng_reseed(&primary_crng, r);
 }
 
 void add_input_randomness(unsigned int type, unsigned int code,
@@ -1274,6 +1310,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
 	__u64 ip;
 	unsigned long seed;
 	int credit = 0;
+	bool reseed;
 
 	if (cycles == 0)
 		cycles = get_reg(fast_pool, regs);
@@ -1326,7 +1363,9 @@ void add_interrupt_randomness(int irq, int irq_flags)
 	fast_pool->count = 0;
 
 	/* award one bit for the contents of the fast pool */
-	credit_entropy_bits(r, credit + 1);
+	reseed = __credit_entropy_bits_fast(r, credit + 1);
+	if (reseed)
+		crng_reseed(&primary_crng, r);
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);
 
@@ -1599,7 +1638,11 @@ EXPORT_SYMBOL(get_random_bytes);
  */
 static void entropy_timer(struct timer_list *t)
 {
-	credit_entropy_bits(&input_pool, 1);
+	bool reseed;
+
+	reseed = __credit_entropy_bits_fast(&input_pool, 1);
+	if (reseed)
+		crng_reseed(&primary_crng, &input_pool);
 }
 
 /*
by vger.kernel.org via listexpand id S1726590AbgIUIBq (ORCPT ); Mon, 21 Sep 2020 04:01:46 -0400 Received: from mx2.suse.de ([195.135.220.15]:56892 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726211AbgIUH7V (ORCPT ); Mon, 21 Sep 2020 03:59:21 -0400 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 4774AB506; Mon, 21 Sep 2020 07:59:54 +0000 (UTC) From: Nicolai Stange To: "Theodore Y. Ts'o" Cc: linux-crypto@vger.kernel.org, LKML , Arnd Bergmann , Greg Kroah-Hartman , "Eric W. Biederman" , "Alexander E. Patrakov" , "Ahmed S. Darwish" , Willy Tarreau , Matthew Garrett , Vito Caputo , Andreas Dilger , Jan Kara , Ray Strode , William Jon McCann , zhangjs , Andy Lutomirski , Florian Weimer , Lennart Poettering , Peter Matthias , Marcelo Henrique Cerri , Roman Drahtmueller , Neil Horman , Randy Dunlap , Julia Lawall , Dan Carpenter , Andy Lavr , Eric Biggers , "Jason A. Donenfeld" , =?utf-8?q?Stephan_M=C3=BCller?= , Torsten Duwe , Petr Tesarik , Nicolai Stange Subject: [RFC PATCH 09/41] random: protect ->entropy_count with the pool spinlock Date: Mon, 21 Sep 2020 09:58:25 +0200 Message-Id: <20200921075857.4424-10-nstange@suse.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200921075857.4424-1-nstange@suse.de> References: <20200921075857.4424-1-nstange@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Currently, all updates to ->entropy_count are synchronized by means of cmpxchg-retry loops found in credit_entropy_bits(), __credit_entropy_bits_fast() and account() respectively. However, all but one __credit_entropy_bits_fast() call sites grap the pool ->lock already and it would be nice if the potentially costly cmpxchg could be avoided in these performance critical paths. 
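The cmpxchg-retry pattern this patch sets out to replace can be sketched as a small userspace model (a hedged illustration, not the actual drivers/char/random.c code; the `struct pool` type and the trivial capacity-cap stand-in for pool_entropy_delta() are assumptions for the sake of the example):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy stand-in for struct entropy_store: only the counter matters here. */
struct pool {
	_Atomic int entropy_count;
	int pool_size;
};

/* Simplified stand-in for the kernel's pool_entropy_delta(): just cap
 * the new credit at the remaining room in the pool. */
static int pool_entropy_delta(const struct pool *p, int orig, int nfrac)
{
	int room = p->pool_size - orig;

	return nfrac < room ? nfrac : room;
}

/*
 * The lock-free pattern being replaced: sample ->entropy_count, compute
 * the new value outside any lock, and retry from scratch whenever some
 * other CPU changed the counter in between.
 */
static int credit_entropy_cmpxchg(struct pool *p, int nfrac)
{
	int orig, new_count;

	do {
		orig = atomic_load(&p->entropy_count);
		new_count = orig + pool_entropy_delta(p, orig, nfrac);
	} while (!atomic_compare_exchange_weak(&p->entropy_count,
					       &orig, new_count));
	return new_count;
}
```

Under the patch, the final update becomes a plain comparison + store with r->lock held, so the potentially expensive delta calculation can still run unlocked while the store itself no longer needs an atomic read-modify-write.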
In addition to that, future patches will introduce new fields to struct entropy_store which will require some kind of synchronization with ->entropy_count updates from said producer paths as well. Protect ->entropy_count with the pool ->lock. - Make callers of __credit_entropy_bits_fast() invoke it with the pool ->lock held. Extend existing critical sections where possible. Drop the cmpxchg-retry loop in __credit_entropy_bits_fast() in favor of a plain assignment. - Retain the retry loop in credit_entropy_bits(): the potentially expensive pool_entropy_delta() should not be called under the lock in order not to unnecessarily block contenders. In order to continue to synchronize with __credit_entropy_bits_fast() and account(), the cmpxchg gets replaced by a plain comparison + store with the ->lock being held. - Make account() grab the ->lock and drop the cmpxchg-retry loop in favor of a plain assignment. Signed-off-by: Nicolai Stange --- drivers/char/random.c | 44 +++++++++++++++++++++++++++++-------------- 1 file changed, 30 insertions(+), 14 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index d9e4dd27d45d..9f87332b158f 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -718,7 +718,7 @@ static unsigned int pool_entropy_delta(struct entropy_store *r, * Credit the entropy store with n bits of entropy. * To be used from hot paths when it is either known that nbits is * smaller than one half of the pool size or losing anything beyond that - doesn't matter. + doesn't matter. Must be called with r->lock being held.
*/ static bool __credit_entropy_bits_fast(struct entropy_store *r, int nbits) { @@ -727,13 +727,11 @@ static bool __credit_entropy_bits_fast(struct entropy_store *r, int nbits) if (!nbits) return false; -retry: - orig = READ_ONCE(r->entropy_count); + orig = r->entropy_count; entropy_count = orig + pool_entropy_delta(r, orig, nbits << ENTROPY_SHIFT, true); - if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) - goto retry; + WRITE_ONCE(r->entropy_count, entropy_count); trace_credit_entropy_bits(r->name, nbits, entropy_count >> ENTROPY_SHIFT, _RET_IP_); @@ -755,17 +753,28 @@ static bool __credit_entropy_bits_fast(struct entropy_store *r, int nbits) static void credit_entropy_bits(struct entropy_store *r, int nbits) { int entropy_count, orig; + unsigned long flags; if (!nbits) return; retry: + /* + * Don't run the potentially expensive pool_entropy_delta() + * calculations under the spinlock. Instead retry until + * ->entropy_count becomes stable. + */ orig = READ_ONCE(r->entropy_count); entropy_count = orig + pool_entropy_delta(r, orig, nbits << ENTROPY_SHIFT, false); - if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) + spin_lock_irqsave(&r->lock, flags); + if (r->entropy_count != orig) { + spin_unlock_irqrestore(&r->lock, flags); goto retry; + } + WRITE_ONCE(r->entropy_count, entropy_count); + spin_unlock_irqrestore(&r->lock, flags); trace_credit_entropy_bits(r->name, nbits, entropy_count >> ENTROPY_SHIFT, _RET_IP_); @@ -1203,12 +1212,11 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num) } sample; long delta, delta2, delta3; bool reseed; + unsigned long flags; sample.jiffies = jiffies; sample.cycles = random_get_entropy(); sample.num = num; - r = &input_pool; - mix_pool_bytes(r, &sample, sizeof(sample)); /* * Calculate number of bits of randomness we probably added. 
@@ -1235,12 +1243,16 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num) if (delta > delta3) delta = delta3; + r = &input_pool; + spin_lock_irqsave(&r->lock, flags); + __mix_pool_bytes(r, &sample, sizeof(sample)); /* * delta is now minimum absolute delta. * Round down by 1 bit on general principles, * and limit entropy estimate to 12 bits. */ reseed = __credit_entropy_bits_fast(r, min_t(int, fls(delta>>1), 11)); + spin_unlock_irqrestore(&r->lock, flags); if (reseed) crng_reseed(&primary_crng, r); } @@ -1358,12 +1370,12 @@ void add_interrupt_randomness(int irq, int irq_flags) __mix_pool_bytes(r, &seed, sizeof(seed)); credit = 1; } - spin_unlock(&r->lock); fast_pool->count = 0; /* award one bit for the contents of the fast pool */ reseed = __credit_entropy_bits_fast(r, credit + 1); + spin_unlock(&r->lock); if (reseed) crng_reseed(&primary_crng, r); } @@ -1393,14 +1405,15 @@ EXPORT_SYMBOL_GPL(add_disk_randomness); */ static size_t account(struct entropy_store *r, size_t nbytes, int min) { - int entropy_count, orig, have_bytes; + int entropy_count, have_bytes; size_t ibytes, nfrac; + unsigned long flags; BUG_ON(r->entropy_count > r->poolinfo->poolfracbits); + spin_lock_irqsave(&r->lock, flags); /* Can we pull enough? 
*/ -retry: - entropy_count = orig = READ_ONCE(r->entropy_count); + entropy_count = r->entropy_count; ibytes = nbytes; /* never pull more than available */ have_bytes = entropy_count >> (ENTROPY_SHIFT + 3); @@ -1420,8 +1433,8 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min) else entropy_count = 0; - if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) - goto retry; + WRITE_ONCE(r->entropy_count, entropy_count); + spin_unlock_irqrestore(&r->lock, flags); trace_debit_entropy(r->name, 8 * ibytes); if (ibytes && ENTROPY_BITS(r) < random_write_wakeup_bits) { @@ -1639,8 +1652,11 @@ EXPORT_SYMBOL(get_random_bytes); static void entropy_timer(struct timer_list *t) { bool reseed; + unsigned long flags; + spin_lock_irqsave(&input_pool.lock, flags); reseed = __credit_entropy_bits_fast(&input_pool, 1); + spin_unlock_irqrestore(&input_pool.lock, flags); if (reseed) crng_reseed(&primary_crng, &input_pool); }

From patchwork Mon Sep 21 07:58:27 2020
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 252968
From: Nicolai Stange
To: "Theodore Y. Ts'o"
Cc: linux-crypto@vger.kernel.org, LKML
Subject: [RFC PATCH 11/41] random: convert add_timer_randomness() to queued_entropy API
Date: Mon, 21 Sep 2020 09:58:27 +0200
Message-Id: <20200921075857.4424-12-nstange@suse.de>
In-Reply-To: <20200921075857.4424-1-nstange@suse.de>
List-ID: linux-crypto@vger.kernel.org

In an effort to drop __credit_entropy_bits_fast() in favor of the new __queue_entropy()/__dispatch_queued_entropy_fast() API, convert add_timer_randomness() from the former to the latter. There is no change in functionality at this point, because __credit_entropy_bits_fast() has already been reimplemented on top of the new API before.
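The ordering the conversion establishes — queue the pending credit first, mix the bytes, then dispatch — can be modelled with a minimal sketch (hypothetical simplified types; the real kernel API additionally tracks a pool entropy watermark and requires r->lock to be held for the __-prefixed variants):

```c
#include <stdbool.h>

/* Miniature models of the entropy pool and the queued-entropy cookie. */
struct pool { int entropy_count; int pool_size; };
struct queued_entropy { int nfrac; };

/* Phase 1: remember the pending credit without applying it yet. */
static void queue_entropy(struct pool *p, struct queued_entropy *q, int nfrac)
{
	(void)p;		/* the real API also snapshots pool state */
	q->nfrac += nfrac;
}

/* Phase 2: after the bytes have been mixed in, fold the queued credit
 * into the pool.  Returns true when the pool became full, i.e. when a
 * crng reseed would be worthwhile (mirroring the bool result of
 * __dispatch_queued_entropy_fast() in the patch). */
static bool dispatch_queued_entropy(struct pool *p, struct queued_entropy *q)
{
	int room = p->pool_size - p->entropy_count;

	p->entropy_count += q->nfrac < room ? q->nfrac : room;
	q->nfrac = 0;		/* the queue is consumed by the dispatch */
	return p->entropy_count == p->pool_size;
}
```

Splitting credit into a queue and a dispatch step lets the mix operation sit between the two, which is what allows later patches to account for pool extractions that race with the mixing.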
Signed-off-by: Nicolai Stange --- drivers/char/random.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index b91d1fc08ac5..e8c86abde901 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1400,6 +1400,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num) long delta, delta2, delta3; bool reseed; unsigned long flags; + struct queued_entropy q = { 0 }; sample.jiffies = jiffies; sample.cycles = random_get_entropy(); @@ -1432,13 +1433,14 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num) r = &input_pool; spin_lock_irqsave(&r->lock, flags); - __mix_pool_bytes(r, &sample, sizeof(sample)); /* * delta is now minimum absolute delta. * Round down by 1 bit on general principles, * and limit entropy estimate to 12 bits. */ - reseed = __credit_entropy_bits_fast(r, min_t(int, fls(delta>>1), 11)); + __queue_entropy(r, &q, min_t(int, fls(delta>>1), 11) << ENTROPY_SHIFT); + __mix_pool_bytes(r, &sample, sizeof(sample)); + reseed = __dispatch_queued_entropy_fast(r, &q); spin_unlock_irqrestore(&r->lock, flags); if (reseed) crng_reseed(&primary_crng, r);

From patchwork Mon Sep 21 07:58:29 2020
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 252969
From: Nicolai Stange
To: "Theodore Y. Ts'o"
Cc: linux-crypto@vger.kernel.org, LKML
Subject: [RFC PATCH 13/41] random: convert try_to_generate_entropy() to queued_entropy API
Date: Mon, 21 Sep 2020 09:58:29 +0200
Message-Id: <20200921075857.4424-14-nstange@suse.de>
In-Reply-To: <20200921075857.4424-1-nstange@suse.de>
List-ID: linux-crypto@vger.kernel.org

In an effort to drop __credit_entropy_bits_fast() in favor of the new __queue_entropy()/__dispatch_queued_entropy_fast() API, convert try_to_generate_entropy() from the former to the latter. Replace the call to __credit_entropy_bits_fast() from the timer callback, entropy_timer(), by a queue_entropy() operation.
Dispatch it from the loop in try_to_generate_entropy() by invoking __dispatch_queued_entropy_fast() after the timestamp has been mixed into the input_pool. In order to provide the timer callback and try_to_generate_entropy() with access to a common struct queued_entropy instance, move the currently anonymous struct definition from the local 'stack' variable declaration in try_to_generate_entropy() to file scope and assign it a name, "struct try_to_generate_entropy_stack". Make entropy_timer() obtain a pointer to the corresponding instance by means of container_of() on the ->timer member contained therein. Amend struct try_to_generate_entropy_stack by a new member ->q of type struct queued_entropy. Note that the described scheme alters behaviour a bit: first of all, new entropy credit now gets only dispatched to the pool after the actual mixing has completed rather than in an unsynchronized manner directly from the timer callback. As the mixing loop try_to_generate_entropy() is expected to run at higher frequency than the timer, this is unlikely to make any difference in practice. Furthermore, the pool entropy watermark as tracked over the period from queuing the entropy in the timer callback and to its subsequent dispatch from try_to_generate_entropy() is now taken into account when calculating the actual credit at dispatch. In consequence, the amount of new entropy dispatched to the pool will potentially be lowered if said period happens to overlap with the pool extraction from an initial crng_reseed() on the primary_crng. However, as getting the primary_crng seeded is the whole point of the try_to_generate_entropy() exercise, this won't matter. Note that instead of calling queue_entropy() from the timer callback, an alternative would have been to maintain an invocation counter and queue that up from try_to_generate_entropy() right before the mix operation. 
This would have reduced the described effect of the pool's entropy watermark and in fact matched the intended queue_entropy() API usage better. However, in this particular case of try_to_generate_entropy(), jitter is desired and invoking queue_entropy() with its buffer locking etc. from the timer callback could potentially contribute to that. Signed-off-by: Nicolai Stange --- drivers/char/random.c | 42 +++++++++++++++++++++++++++++------------- 1 file changed, 29 insertions(+), 13 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index bd3774c6be4b..dfbe49fdbcf1 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1911,6 +1911,12 @@ void get_random_bytes(void *buf, int nbytes) EXPORT_SYMBOL(get_random_bytes); +struct try_to_generate_entropy_stack { + unsigned long now; + struct timer_list timer; + struct queued_entropy q; +} stack; + /* * Each time the timer fires, we expect that we got an unpredictable * jump in the cycle counter. Even if the timer is running on another @@ -1926,14 +1932,10 @@ EXPORT_SYMBOL(get_random_bytes); */ static void entropy_timer(struct timer_list *t) { - bool reseed; - unsigned long flags; + struct try_to_generate_entropy_stack *stack; - spin_lock_irqsave(&input_pool.lock, flags); - reseed = __credit_entropy_bits_fast(&input_pool, 1); - spin_unlock_irqrestore(&input_pool.lock, flags); - if (reseed) - crng_reseed(&primary_crng, &input_pool); + stack = container_of(t, struct try_to_generate_entropy_stack, timer); + queue_entropy(&input_pool, &stack->q, 1 << ENTROPY_SHIFT); } /* @@ -1942,10 +1944,9 @@ static void entropy_timer(struct timer_list *t) */ static void try_to_generate_entropy(void) { - struct { - unsigned long now; - struct timer_list timer; - } stack; + struct try_to_generate_entropy_stack stack = { 0 }; + unsigned long flags; + bool reseed; stack.now = random_get_entropy(); @@ -1957,14 +1958,29 @@ static void try_to_generate_entropy(void) while (!crng_ready()) { if 
(!timer_pending(&stack.timer)) mod_timer(&stack.timer, jiffies+1); - mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now)); + spin_lock_irqsave(&input_pool.lock, flags); + __mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now)); + reseed = __dispatch_queued_entropy_fast(&input_pool, &stack.q); + spin_unlock_irqrestore(&input_pool.lock, flags); + + if (reseed) + crng_reseed(&primary_crng, &input_pool); + schedule(); stack.now = random_get_entropy(); } del_timer_sync(&stack.timer); destroy_timer_on_stack(&stack.timer); - mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now)); + spin_lock_irqsave(&input_pool.lock, flags); + __mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now)); + /* + * Must be called here once more in order to complete a + * previously unmatched queue_entropy() from entropy_timer(), + * if any. + */ + __dispatch_queued_entropy_fast(&input_pool, &stack.q); + spin_unlock_irqrestore(&input_pool.lock, flags); }

From patchwork Mon Sep 21 07:58:31 2020
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 252970
From: Nicolai Stange
To: "Theodore Y. Ts'o"
Cc: linux-crypto@vger.kernel.org, LKML
Subject: [RFC PATCH 15/41] random: convert add_hwgenerator_randomness() to queued_entropy API
Date: Mon, 21 Sep 2020 09:58:31 +0200
Message-Id: <20200921075857.4424-16-nstange@suse.de>
In-Reply-To: <20200921075857.4424-1-nstange@suse.de>
List-ID: linux-crypto@vger.kernel.org

In an effort to drop credit_entropy_bits() in favor of the new queue_entropy()/dispatch_queued_entropy() API, convert add_hwgenerator_randomness() from the former to the latter. As a side effect, the pool entropy watermark as tracked over the duration of the mix_pool_bytes() operation is now correctly taken into account when calculating the amount of new entropy to dispatch to the pool based on the latter's fill level.
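The `entropy << ENTROPY_SHIFT` conversions visible throughout these diffs reflect the driver's fixed-point entropy accounting: counts are kept internally in "fractional bits" of 1/8th bit granularity (ENTROPY_SHIFT is 3 in this era's drivers/char/random.c). A small illustration of the two conversions:

```c
#define ENTROPY_SHIFT 3	/* fractional-bit accounting: 1/8th-bit units */

/* Whole bits -> fractional bits, as done at queue_entropy() call sites. */
static int bits_to_frac(int nbits)
{
	return nbits << ENTROPY_SHIFT;
}

/* Fractional bits -> whole bits, as done when reporting the pool's
 * entropy estimate (cf. the ENTROPY_BITS() uses in the diffs above);
 * any sub-bit remainder is truncated. */
static int frac_to_bits(int nfrac)
{
	return nfrac >> ENTROPY_SHIFT;
}
```

The sub-bit resolution lets the pool accumulate many tiny credits (for example from interrupt timing) without rounding each one down to zero bits.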
Signed-off-by: Nicolai Stange --- drivers/char/random.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 60ce185d7b2d..78e65367ea86 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -2640,6 +2640,7 @@ void add_hwgenerator_randomness(const char *buffer, size_t count, size_t entropy) { struct entropy_store *poolp = &input_pool; + struct queued_entropy q = { 0 }; if (unlikely(crng_init == 0)) { crng_fast_load(buffer, count); @@ -2652,8 +2653,9 @@ void add_hwgenerator_randomness(const char *buffer, size_t count, */ wait_event_interruptible(random_write_wait, kthread_should_stop() || ENTROPY_BITS(&input_pool) <= random_write_wakeup_bits); + queue_entropy(poolp, &q, entropy << ENTROPY_SHIFT); mix_pool_bytes(poolp, buffer, count); - credit_entropy_bits(poolp, entropy); + dispatch_queued_entropy(poolp, &q); } EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);

From patchwork Mon Sep 21 07:58:33 2020
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 252971
From: Nicolai Stange
To: "Theodore Y. Ts'o"
Cc: linux-crypto@vger.kernel.org, LKML
Subject: [RFC PATCH 17/41] random: drop credit_entropy_bits() and credit_entropy_bits_safe()
Date: Mon, 21 Sep 2020 09:58:33 +0200
Message-Id: <20200921075857.4424-18-nstange@suse.de>
In-Reply-To: <20200921075857.4424-1-nstange@suse.de>
List-ID: linux-crypto@vger.kernel.org

All former call sites of credit_entropy_bits() and credit_entropy_bits_safe() respectively have been converted to the new dispatch_queued_entropy() API. Drop the now unused functions. Signed-off-by: Nicolai Stange --- drivers/char/random.c | 29 +---------------------------- 1 file changed, 1 insertion(+), 28 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 03eadefabbca..a49805d0d23c 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -533,7 +533,7 @@ static __u32 const twist_table[8] = { /* * This function adds bytes into the entropy "pool".
It does not * update the entropy estimate. The caller should call - * credit_entropy_bits if this is appropriate. + * queue_entropy()+dispatch_queued_entropy() if this is appropriate. * * The pool is stirred with a primitive polynomial of the appropriate * degree, and then twisted. We twist by three bits at a time because @@ -988,33 +988,6 @@ static void discard_queued_entropy(struct entropy_store *r, spin_unlock_irqrestore(&r->lock, flags); } -/* - * Credit the entropy store with n bits of entropy. - * Use credit_entropy_bits_safe() if the value comes from userspace - * or otherwise should be checked for extreme values. - */ -static void credit_entropy_bits(struct entropy_store *r, int nbits) -{ - struct queued_entropy q = { 0 }; - - queue_entropy(r, &q, nbits << ENTROPY_SHIFT); - dispatch_queued_entropy(r, &q); -} - -static int credit_entropy_bits_safe(struct entropy_store *r, int nbits) -{ - const int nbits_max = r->poolinfo->poolwords * 32; - - if (nbits < 0) - return -EINVAL; - - /* Cap the value to avoid overflows */ - nbits = min(nbits, nbits_max); - - credit_entropy_bits(r, nbits); - return 0; -} - /********************************************************************* * * CRNG using CHACHA20

From patchwork Mon Sep 21 07:58:35 2020
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 252972
From: Nicolai Stange
To: "Theodore Y. Ts'o"
Cc: linux-crypto@vger.kernel.org, LKML
Subject: [RFC PATCH 19/41] random: reintroduce arch_has_random() + arch_has_random_seed()
Date: Mon, 21 Sep 2020 09:58:35 +0200
Message-Id: <20200921075857.4424-20-nstange@suse.de>
In-Reply-To: <20200921075857.4424-1-nstange@suse.de>
List-ID: linux-crypto@vger.kernel.org

A future patch will introduce support for making up for a certain amount of lacking entropy in crng_reseed() by means of arch_get_random_long() or arch_get_random_seed_long() respectively.
However, before even the tiniest bit of precious entropy is withdrawn from the input_pool, it should be checked whether the current arch even has support for these. Reintroduce arch_has_random() + arch_has_random_seed() and implement them for arm64, powerpc, s390 and x86 as appropriate (yeah, I know this should go in separate commits, but this is part of an RFC series). Note that this more or less reverts commits 647f50d5d9d9 ("linux/random.h: Remove arch_has_random, arch_has_random_seed") cbac004995a0 ("powerpc: Remove arch_has_random, arch_has_random_seed") 5e054c820f59 ("s390: Remove arch_has_random, arch_has_random_seed") 5f2ed7f5b99b ("x86: Remove arch_has_random, arch_has_random_seed") Signed-off-by: Nicolai Stange --- arch/arm64/include/asm/archrandom.h | 25 ++++++++++++++++++------- arch/powerpc/include/asm/archrandom.h | 12 +++++++++++- arch/s390/include/asm/archrandom.h | 14 ++++++++++++-- arch/x86/include/asm/archrandom.h | 18 ++++++++++++++---- include/linux/random.h | 8 ++++++++ 5 files changed, 63 insertions(+), 14 deletions(-) diff --git a/arch/arm64/include/asm/archrandom.h b/arch/arm64/include/asm/archrandom.h index 44209f6146aa..055d18713db7 100644 --- a/arch/arm64/include/asm/archrandom.h +++ b/arch/arm64/include/asm/archrandom.h @@ -26,17 +26,13 @@ static inline bool __arm64_rndr(unsigned long *v) return ok; } -static inline bool __must_check arch_get_random_long(unsigned long *v) -{ - return false; -} -static inline bool __must_check arch_get_random_int(unsigned int *v) +static inline bool arch_has_random(void) { return false; } -static inline bool __must_check arch_get_random_seed_long(unsigned long *v) +static inline bool arch_has_random_seed(void) { /* * Only support the generic interface after we have detected @@ -44,7 +40,22 @@ static inline bool __must_check arch_get_random_seed_long(unsigned long *v) * cpufeature code and with potential scheduling between CPUs * with and without the feature.
*/ - if (!cpus_have_const_cap(ARM64_HAS_RNG)) + return cpus_have_const_cap(ARM64_HAS_RNG); +} + +static inline bool __must_check arch_get_random_long(unsigned long *v) +{ + return false; +} + +static inline bool __must_check arch_get_random_int(unsigned int *v) +{ + return false; +} + +static inline bool __must_check arch_get_random_seed_long(unsigned long *v) +{ + if (!arch_has_random_seed()) return false; return __arm64_rndr(v); diff --git a/arch/powerpc/include/asm/archrandom.h b/arch/powerpc/include/asm/archrandom.h index 9a53e29680f4..47c2d74e7244 100644 --- a/arch/powerpc/include/asm/archrandom.h +++ b/arch/powerpc/include/asm/archrandom.h @@ -6,6 +6,16 @@ #include +static inline bool arch_has_random(void) +{ + return false; +} + +static inline bool arch_has_random_seed(void) +{ + return ppc_md.get_random_seed; +} + static inline bool __must_check arch_get_random_long(unsigned long *v) { return false; @@ -18,7 +28,7 @@ static inline bool __must_check arch_get_random_int(unsigned int *v) static inline bool __must_check arch_get_random_seed_long(unsigned long *v) { - if (ppc_md.get_random_seed) + if (arch_has_random_seed()) return ppc_md.get_random_seed(v); return false; diff --git a/arch/s390/include/asm/archrandom.h b/arch/s390/include/asm/archrandom.h index de61ce562052..18973845634c 100644 --- a/arch/s390/include/asm/archrandom.h +++ b/arch/s390/include/asm/archrandom.h @@ -21,6 +21,16 @@ extern atomic64_t s390_arch_random_counter; bool s390_arch_random_generate(u8 *buf, unsigned int nbytes); +static inline bool arch_has_random(void) +{ + return false; +} + +static inline bool arch_has_random_seed(void) +{ + return static_branch_likely(&s390_arch_random_available); +} + static inline bool __must_check arch_get_random_long(unsigned long *v) { return false; @@ -33,7 +43,7 @@ static inline bool __must_check arch_get_random_int(unsigned int *v) static inline bool __must_check arch_get_random_seed_long(unsigned long *v) { - if 
(static_branch_likely(&s390_arch_random_available)) { + if (arch_has_random_seed()) { return s390_arch_random_generate((u8 *)v, sizeof(*v)); } return false; @@ -41,7 +51,7 @@ static inline bool __must_check arch_get_random_seed_long(unsigned long *v) static inline bool __must_check arch_get_random_seed_int(unsigned int *v) { - if (static_branch_likely(&s390_arch_random_available)) { + if (arch_has_random_seed()) { return s390_arch_random_generate((u8 *)v, sizeof(*v)); } return false; diff --git a/arch/x86/include/asm/archrandom.h b/arch/x86/include/asm/archrandom.h index ebc248e49549..030f46c9e310 100644 --- a/arch/x86/include/asm/archrandom.h +++ b/arch/x86/include/asm/archrandom.h @@ -70,24 +70,34 @@ static inline bool __must_check rdseed_int(unsigned int *v) */ #ifdef CONFIG_ARCH_RANDOM +static inline bool arch_has_random(void) +{ + return static_cpu_has(X86_FEATURE_RDRAND); +} + +static inline bool arch_has_random_seed(void) +{ + return static_cpu_has(X86_FEATURE_RDSEED); +} + static inline bool __must_check arch_get_random_long(unsigned long *v) { - return static_cpu_has(X86_FEATURE_RDRAND) ? rdrand_long(v) : false; + return arch_has_random() ? rdrand_long(v) : false; } static inline bool __must_check arch_get_random_int(unsigned int *v) { - return static_cpu_has(X86_FEATURE_RDRAND) ? rdrand_int(v) : false; + return arch_has_random() ? rdrand_int(v) : false; } static inline bool __must_check arch_get_random_seed_long(unsigned long *v) { - return static_cpu_has(X86_FEATURE_RDSEED) ? rdseed_long(v) : false; + return arch_has_random_seed() ? rdseed_long(v) : false; } static inline bool __must_check arch_get_random_seed_int(unsigned int *v) { - return static_cpu_has(X86_FEATURE_RDSEED) ? rdseed_int(v) : false; + return arch_has_random_seed() ? 
rdseed_int(v) : false; } extern void x86_init_rdrand(struct cpuinfo_x86 *c); diff --git a/include/linux/random.h b/include/linux/random.h index f45b8be3e3c4..d4653422a0c7 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -120,6 +120,14 @@ unsigned long randomize_page(unsigned long start, unsigned long range); #ifdef CONFIG_ARCH_RANDOM # include #else +static inline bool arch_has_random(void) +{ + return false; +} +static inline bool arch_has_random_seed(void) +{ + return false; +} static inline bool __must_check arch_get_random_long(unsigned long *v) { return false; From patchwork Mon Sep 21 07:58:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicolai Stange X-Patchwork-Id: 252974 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D85CBC43465 for ; Mon, 21 Sep 2020 07:59:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A675320874 for ; Mon, 21 Sep 2020 07:59:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726417AbgIUH7u (ORCPT ); Mon, 21 Sep 2020 03:59:50 -0400 Received: from mx2.suse.de ([195.135.220.15]:56892 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726537AbgIUH71 (ORCPT ); Mon, 21 Sep 2020 03:59:27 -0400 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 59C6DB51C; Mon, 21 Sep 2020 08:00:01 +0000 
(UTC) From: Nicolai Stange To: "Theodore Y. Ts'o" Cc: linux-crypto@vger.kernel.org, LKML , Arnd Bergmann , Greg Kroah-Hartman , "Eric W. Biederman" , "Alexander E. Patrakov" , "Ahmed S. Darwish" , Willy Tarreau , Matthew Garrett , Vito Caputo , Andreas Dilger , Jan Kara , Ray Strode , William Jon McCann , zhangjs , Andy Lutomirski , Florian Weimer , Lennart Poettering , Peter Matthias , Marcelo Henrique Cerri , Roman Drahtmueller , Neil Horman , Randy Dunlap , Julia Lawall , Dan Carpenter , Andy Lavr , Eric Biggers , "Jason A. Donenfeld" , =?utf-8?q?Stephan_M=C3=BCller?= , Torsten Duwe , Petr Tesarik , Nicolai Stange Subject: [RFC PATCH 21/41] random: don't invoke arch_get_random_long() from add_interrupt_randomness() Date: Mon, 21 Sep 2020 09:58:37 +0200 Message-Id: <20200921075857.4424-22-nstange@suse.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200921075857.4424-1-nstange@suse.de> References: <20200921075857.4424-1-nstange@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org x86's RDSEED/RDRAND insns have reportedly been slowed down significantly on certain CPU families due to the ucode update required to mitigate against the "Special Register Buffer Data Sampling" vulnerability (CVE-2020-0543) and should not get invoked from the interrupt path anymore. Currently, add_interrupt_randomness() obtains an arch_get_random_long() sample for each bit of entropy awarded to the "interrupt source", mixes it into the input_pool and awards that sample another bit of entropy. This lock step between the interrupt source and arch_get_random_long() ensures that the latter cannot dominate the former. There are some more entropy sources all mixing into input_pool with a non-zero entropy attribution: - try_to_generate_entropy() at boot time - add_input_randomness() and add_disk_randomness() - add_hwgenerator_randomness(). 
I don't see what's so special about the interrupt randomness source that entropy awarded to the architectural RNG should be limited to its output rate only rather than to the joint rate from all these entropy sources as a whole. Follow this approach. Don't mix arch_get_random_long() entropy from add_interrupt_randomness() into the input_pool. Instead, make crng_reseed() invoke the architectural RNG to make up for any lack of entropy up to one half of the minimum seed size. That is, if the input_pool contains less than 128 bits of entropy, the architectural RNG will be invoked and attributed an entropy value equal to the difference, but never more than 64 bits. Note that - the architectural RNG won't be able to dominate the other randomness sources taken together in this scheme and - in case the input_pool contains more entropy than required for the minimum seed level, it won't be attributed any entropy at all. That is, the architectural RNG is effectively turned into an emergency reserve in a sense. A potentially adverse effect of this change is that entropy might get depleted at a higher rate than before from the interrupt source, namely if the input_pool contains more than half of the minimum seed size of entropy at reseed. However, the entropy sources feeding into input_pool are assumed to provide entropy at a steady rate when averaged over the time scale of a reseed interval, which is several minutes in length. Thus, as the primary_crng reseeds are the only consumers of input_pool entropy nowadays, the input_pool's fill level can be assumed to be relatively constant at the time of reseeds and an equilibrium between the rates at which the input_pool receives and releases entropy will be reached. OTOH, remember that the rate at which the pool entropy increases is exponentially damped as the pool fills up. 
In case the interrupt source is a major contributor to the pool, not having to account anymore for the architectural RNG's noise formerly mixed in in lockstep will leave considerably more pool capacity to the interrupt noise, which is a welcome side effect. So, make min_crng_reseed_pool_entropy() return only half of the minimum seed size required in case an architectural RNG will likely be able to provide the other half, as indicated by arch_has_random() || arch_has_random_seed(). This will effectively - make dispatch_queued_entropy() attempt an initial seed of the primary_crng as soon as the amount of entropy available from the input_pool has first exceeded that threshold and also - make crng_reseed() lower the minimum amount of entropy to be extracted from the input_pool by means of extract_entropy() to one half of the minimum seed size. Introduce a new boolean variable "arch_randomness_required" to crng_reseed() for tracking whether or not the seed must be amended by additional output from the architectural RNG. Initialize it to false, make crng_reseed() set it in case its extract_entropy() invocation could obtain only less than the minimum seed size from input_pool. crng_reseed() already attempts to xor output from the architectural RNG over the full length of the crng state, i.e. over the full length of the latter's 256 bit ChaCha20 key. Currently, failure in doing so is not considered fatal. Make it fatal if arch_randomness_required has been set. Note that assuming one bit of entropy per bit obtained from the architectural RNG, it would actually suffice to successfully obtain (16 - num + sizeof(u32) - 1) / sizeof(u32) u32's from arch_get_random_long()/arch_get_random_seed_long(), where 16 is the minimum seed size in bytes and num is the number of bytes which have been previously obtained from the input_pool.
However, this assumption might be overly optimistic and the total number of arch_get_random_long() invocations per 64 bits of entropy attributed to it has already been lowered from >= 64 to eight by this patch. Moreover, the arch_get_random_long() loop in crng_reseed() would need to get reorganized in order to make sure that there will actually be a sufficient number of successful invocations when writing to the target buffer area following the bytes obtained from the input_pool. Thus, in case failing arch_get_random_long()s in combination with arch_randomness_required set became a problem in the future, it would be better to improve the error path and simply return the unused entropy extracted from the input_pool back. Signed-off-by: Nicolai Stange --- drivers/char/random.c | 49 +++++++++++++++++++++++++------------------ 1 file changed, 29 insertions(+), 20 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 424de1565927..7712b4464ef5 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1190,6 +1190,14 @@ static int crng_slow_load(const char *cp, size_t len) */ static int min_crng_reseed_pool_entropy(void) { + /* + * If there's an architecture provided RNG, use it for + * up to one half of the minimum entropy needed for + * reseeding. That way it won't dominate the entropy + * collected by other means at input_pool. 
+ */ + if (arch_has_random() || arch_has_random_seed()) + return 8; return 16; } @@ -1197,6 +1205,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) { unsigned long flags; int i, num; + bool arch_randomness_required = false; union { __u8 block[CHACHA_BLOCK_SIZE]; __u32 key[8]; @@ -1205,8 +1214,16 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) if (r) { num = extract_entropy(r, &buf, 32, min_crng_reseed_pool_entropy()); - if (num == 0) + if (num == 0) { return; + } else if (num < 16) { + /* + * The input_pool did not provide sufficient + * entropy for reseeding and the architecture + * provided RNG will have to make up for it. + */ + arch_randomness_required = true; + } } else { _extract_crng(&primary_crng, buf.block); _crng_backtrack_protect(&primary_crng, buf.block, @@ -1216,8 +1233,17 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) for (i = 0; i < 8; i++) { unsigned long rv; if (!arch_get_random_seed_long(&rv) && - !arch_get_random_long(&rv)) + !arch_get_random_long(&rv)) { + if (arch_randomness_required) { + /* + * The input_pool failed to provide + * sufficient entropy and the arch RNG + * could not make up for that either. + */ + return; + } rv = random_get_entropy(); + } buf.key[i] ^= rv; } @@ -1522,8 +1548,6 @@ void add_interrupt_randomness(int irq, int irq_flags) cycles_t cycles = random_get_entropy(); __u32 c_high, j_high; __u64 ip; - unsigned long seed; - int credit = 0; bool reseed; struct queued_entropy q = { 0 }; @@ -1560,26 +1584,11 @@ void add_interrupt_randomness(int irq, int irq_flags) if (!spin_trylock(&r->lock)) return; - /* - * If we have architectural seed generator, produce a seed and - * add it to the pool further below. For the sake of paranoia - * don't let the architectural seed generator dominate the - * input from the interrupt noise. 
- */ - credit = !!arch_get_random_long(&seed); - fast_pool->last = now; fast_pool->count = 0; /* award one bit for the contents of the fast pool */ - __queue_entropy(r, &q, (credit + 1) << ENTROPY_SHIFT); + __queue_entropy(r, &q, 1 << ENTROPY_SHIFT); __mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool)); - if (credit) { - /* - * A seed has been obtained from - * arch_get_random_seed_long() above, mix it in. - */ - __mix_pool_bytes(r, &seed, sizeof(seed)); - } reseed = __dispatch_queued_entropy_fast(r, &q); spin_unlock(&r->lock); if (reseed) From patchwork Mon Sep 21 07:58:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicolai Stange X-Patchwork-Id: 252973 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9301BC43464 for ; Mon, 21 Sep 2020 08:00:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6BA8E214F1 for ; Mon, 21 Sep 2020 08:00:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726628AbgIUH7u (ORCPT ); Mon, 21 Sep 2020 03:59:50 -0400 Received: from mx2.suse.de ([195.135.220.15]:56802 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726420AbgIUH71 (ORCPT ); Mon, 21 Sep 2020 03:59:27 -0400 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 80FADB51D; Mon, 21 Sep 2020 08:00:02 +0000 (UTC) From: Nicolai 
Stange To: "Theodore Y. Ts'o" Cc: linux-crypto@vger.kernel.org, LKML , Arnd Bergmann , Greg Kroah-Hartman , "Eric W. Biederman" , "Alexander E. Patrakov" , "Ahmed S. Darwish" , Willy Tarreau , Matthew Garrett , Vito Caputo , Andreas Dilger , Jan Kara , Ray Strode , William Jon McCann , zhangjs , Andy Lutomirski , Florian Weimer , Lennart Poettering , Peter Matthias , Marcelo Henrique Cerri , Roman Drahtmueller , Neil Horman , Randy Dunlap , Julia Lawall , Dan Carpenter , Andy Lavr , Eric Biggers , "Jason A. Donenfeld" , =?utf-8?q?Stephan_M=C3=BCller?= , Torsten Duwe , Petr Tesarik , Nicolai Stange Subject: [RFC PATCH 23/41] random: don't award entropy to non-SP800-90B arch RNGs in FIPS mode Date: Mon, 21 Sep 2020 09:58:39 +0200 Message-Id: <20200921075857.4424-24-nstange@suse.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200921075857.4424-1-nstange@suse.de> References: <20200921075857.4424-1-nstange@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org It is required by SP800-90C that only SP800-90B compliant entropy sources may be used for seeding DRBGs. Don't award any entropy to arch_get_random_long() if fips_enabled is true. Don't award any entropy to arch_get_random_seed_long() if fips_enabled && !arch_has_sp800_90b_random_seed(). This is achieved by making min_crng_reseed_pool_entropy() return the full minimum seed size if fips_enabled && !arch_has_sp800_90b_random_seed() is true. This prevents crng_reseed() from attempting to make up for any lack of entropy in the input_pool by reading from the architectural RNG. Make crng_reseed() bail out in FIPS mode if the input_pool provides insufficient entropy and any of the arch_get_random_seed_long() invocations fails: there's no statement regarding SP800-90B compliance of arch_get_random_long() and so it can't be used as a backup.
Signed-off-by: Nicolai Stange --- drivers/char/random.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 7712b4464ef5..aaddee4e4ab1 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1195,9 +1195,13 @@ static int min_crng_reseed_pool_entropy(void) * up to one half of the minimum entropy needed for * reseeding. That way it won't dominate the entropy * collected by other means at input_pool. + * If in FIPS mode, restrict this to SP900-90B compliant + * architectural RNGs. */ - if (arch_has_random() || arch_has_random_seed()) + if (arch_has_sp800_90b_random_seed() || + (!fips_enabled && (arch_has_random() || arch_has_random_seed()))) { return 8; + } return 16; } @@ -1233,7 +1237,8 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) for (i = 0; i < 8; i++) { unsigned long rv; if (!arch_get_random_seed_long(&rv) && - !arch_get_random_long(&rv)) { + ((arch_randomness_required && fips_enabled) || + !arch_get_random_long(&rv))) { if (arch_randomness_required) { /* * The input_pool failed to provide From patchwork Mon Sep 21 07:58:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Nicolai Stange X-Patchwork-Id: 252966 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18076C43466 for ; Mon, 21 Sep 2020 08:00:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D393C20BED for ; Mon, 21 Sep 2020 
08:00:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726467AbgIUIAx (ORCPT ); Mon, 21 Sep 2020 04:00:53 -0400 Received: from mx2.suse.de ([195.135.220.15]:57440 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726579AbgIUH7k (ORCPT ); Mon, 21 Sep 2020 03:59:40 -0400 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id C5D09B523; Mon, 21 Sep 2020 08:00:04 +0000 (UTC) From: Nicolai Stange To: "Theodore Y. Ts'o" Cc: linux-crypto@vger.kernel.org, LKML , Arnd Bergmann , Greg Kroah-Hartman , "Eric W. Biederman" , "Alexander E. Patrakov" , "Ahmed S. Darwish" , Willy Tarreau , Matthew Garrett , Vito Caputo , Andreas Dilger , Jan Kara , Ray Strode , William Jon McCann , zhangjs , Andy Lutomirski , Florian Weimer , Lennart Poettering , Peter Matthias , Marcelo Henrique Cerri , Roman Drahtmueller , Neil Horman , Randy Dunlap , Julia Lawall , Dan Carpenter , Andy Lavr , Eric Biggers , "Jason A. Donenfeld" , =?utf-8?q?Stephan_M=C3=BCller?= , Torsten Duwe , Petr Tesarik , Nicolai Stange Subject: [RFC PATCH 27/41] random: increase per-IRQ event entropy estimate if in FIPS mode Date: Mon, 21 Sep 2020 09:58:43 +0200 Message-Id: <20200921075857.4424-28-nstange@suse.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200921075857.4424-1-nstange@suse.de> References: <20200921075857.4424-1-nstange@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org NIST SP800-90C prohibits the use of multiple correlated entropy sources. However, add_interrupt_randomness(), add_disk_randomness() and add_input_randomness() are clearly not independent and an upcoming patch will make the latter two to stop contributing any entropy to the global balance if fips_enabled is on. With the current parameter settings, it can be assumed that add_disk_randomness() resp. 
add_input_randomness() are the dominating contributors to the overall entropy reserve for some common workloads: both more or less estimate the entropy per event to equal the width of the minimum out of the first, second and third jiffies deltas to the previous occurrence. add_interrupt_randomness() on the other hand attributes only a single bit of entropy to a full batch of 64 IRQ events (or once a second if that completes earlier). Thus, the upcoming exclusion of two potent entropy sources should somehow be compensated for. Stephan Müller worked around this very problem in his "LRNG" proposal ([1]) by increasing the entropy estimate per IRQ event. Namely, in case a get_cycles() with instruction granularity is available, he estimated one bit of entropy per IRQ event and (IIRC) 1/10 bits otherwise. I haven't tested this claim myself, in particular not on smaller devices. But for the sake of moving the development of this RFC series forward, I'll take it as granted and hereby postulate that the lower eight bits of the differences between get_cycles() from two successive IRQ events on the same CPU carry - one bit of min-entropy in case a get_cycles() with instruction granularity is available and - 1/8 bit of min-entropy in case get_cycles() is still non-trivial, but has a lower resolution. In particular this is assumed to be true for highly periodic interrupts like those issued for e.g. USB microframes and on all supported architectures. In the former case, the underlying source of randomness is believed to follow the same principles as for the Jitter RNGs resp. try_to_generate_entropy(): differences in RAM vs. CPU clockings and unpredictability of cache states to a certain extent. Notes: - NIST SP800-90B requires a means to access raw samples for validation purposes. Implementation of such an interface is deliberately not part of this RFC series here, but would necessarily be subject of future work. So there would be a means to at least validate these assumptions.
- The choice of 1/8 over the 1/10 from the LRNG patchset has been made because it's a power of two and I suppose that the estimate of 1/10 had been quite arbitrary anyway. Replacement of the 1/8 by smaller powers of two down to 1/64 will be supported throughout this patch series. Some health tests as required by NIST SP800-90B will be implemented later in this series. In order to allow for dynamically decreasing the assessed entropy on a per-CPU basis upon health test failures, make it an attribute of the per-CPU struct fast_pool. That is, introduce a new integer field ->event_entropy_shift to struct fast_pool. The estimated entropy per IRQ sample will be calculated as 2^-event_entropy_shift. Initialize it statically with -1 to indicate that runtime initialization hasn't happened yet. Introduce fast_pool_init_accounting() which gets called unconditionally from add_interrupt_randomness() for doing the necessary runtime initializations once, i.e. if ->event_entropy_shift is still found to be negative. Implement it with the help of the new min_irq_event_entropy_shift(), which will return the initial ->event_entropy_shift value as determined according to the rules from above. That is, depending on have_highres_cycle_ctr, the result is either zero or three. Note that have_highres_cycle_ctr will only get properly initialized from rand_initialize() if fips_enabled is set, but ->event_entropy_shift will also only ever get accessed in this case. Finally, for the case that fips_enabled is set, make add_interrupt_randomness() estimate the amount of entropy transferred from the fast_pool into the global input_pool as fast_pool_entropy(->count, ->event_entropy_shift), rather than only one single bit. Remember that fast_pool_entropy() calculates the amount of entropy contained in a fast_pool, based on the total number of events mixed into it and the estimated entropy per event.
[1] https://lkml.kernel.org/r/5695397.lOV4Wx5bFT@positron.chronox.de Suggested-by: Stephan Müller Signed-off-by: Nicolai Stange --- drivers/char/random.c | 50 ++++++++++++++++++++++++++++++++++++++----- 1 file changed, 45 insertions(+), 5 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index ac36c56dd135..8f79e90f2429 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -615,6 +615,7 @@ struct fast_pool { unsigned long last; unsigned short reg_idx; unsigned char count; + int event_entropy_shift; }; /* @@ -1509,7 +1510,9 @@ void add_input_randomness(unsigned int type, unsigned int code, } EXPORT_SYMBOL_GPL(add_input_randomness); -static DEFINE_PER_CPU(struct fast_pool, irq_randomness); +static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = { + .event_entropy_shift = -1, +}; #ifdef ADD_INTERRUPT_BENCH static unsigned long avg_cycles, avg_deviation; @@ -1599,6 +1602,32 @@ static unsigned int fast_pool_entropy(unsigned int num_events, return result >> event_entropy_shift; } +static inline int min_irq_event_entropy_shift(void) +{ + if (static_branch_likely(&have_highres_cycle_ctr)) { + /* + * If a cycle counter with a good enough resolution is + * available, estimate the entropy per IRQ event to + * be no more than 2^-0 == 1 bit. + */ + return 0; + } + + /* + * Otherwise return an estimate upper bound of + * 2^-3 == 1/8 bit per event. 
+ */ + return 3; +} + +static inline void fast_pool_init_accounting(struct fast_pool *f) +{ + if (likely(f->event_entropy_shift >= 0)) + return; + + f->event_entropy_shift = min_irq_event_entropy_shift(); +} + void add_interrupt_randomness(int irq, int irq_flags) { struct entropy_store *r; @@ -1610,6 +1639,7 @@ void add_interrupt_randomness(int irq, int irq_flags) __u64 ip; bool reseed; struct queued_entropy q = { 0 }; + unsigned int nfrac; if (cycles == 0) cycles = get_reg(fast_pool, regs); @@ -1644,13 +1674,23 @@ void add_interrupt_randomness(int irq, int irq_flags) if (!spin_trylock(&r->lock)) return; - fast_pool->last = now; - fast_pool->count = 0; - /* award one bit for the contents of the fast pool */ - __queue_entropy(r, &q, 1 << ENTROPY_SHIFT); + fast_pool_init_accounting(fast_pool); + + if (!fips_enabled) { + /* award one bit for the contents of the fast pool */ + nfrac = 1 << ENTROPY_SHIFT; + } else { + nfrac = fast_pool_entropy(fast_pool->count, + fast_pool->event_entropy_shift); + } + __queue_entropy(r, &q, nfrac); __mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool)); reseed = __dispatch_queued_entropy_fast(r, &q); spin_unlock(&r->lock); + + fast_pool->last = now; + fast_pool->count = 0; + if (reseed) crng_reseed(&primary_crng, r); } From patchwork Mon Sep 21 07:58:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicolai Stange X-Patchwork-Id: 252963 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EFF82C43465 for ; Mon, 21 Sep 2020 
08:01:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C92D620EDD for ; Mon, 21 Sep 2020 08:01:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726788AbgIUIBH (ORCPT ); Mon, 21 Sep 2020 04:01:07 -0400 Received: from mx2.suse.de ([195.135.220.15]:58044 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726587AbgIUH7j (ORCPT ); Mon, 21 Sep 2020 03:59:39 -0400 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 1D565B52C; Mon, 21 Sep 2020 08:00:07 +0000 (UTC) From: Nicolai Stange To: "Theodore Y. Ts'o" Cc: linux-crypto@vger.kernel.org, LKML , Arnd Bergmann , Greg Kroah-Hartman , "Eric W. Biederman" , "Alexander E. Patrakov" , "Ahmed S. Darwish" , Willy Tarreau , Matthew Garrett , Vito Caputo , Andreas Dilger , Jan Kara , Ray Strode , William Jon McCann , zhangjs , Andy Lutomirski , Florian Weimer , Lennart Poettering , Peter Matthias , Marcelo Henrique Cerri , Roman Drahtmueller , Neil Horman , Randy Dunlap , Julia Lawall , Dan Carpenter , Andy Lavr , Eric Biggers , "Jason A. Donenfeld" , =?utf-8?q?Stephan_M=C3=BCller?= , Torsten Duwe , Petr Tesarik , Nicolai Stange Subject: [RFC PATCH 31/41] random: introduce struct health_test + health_test_reset() placeholders Date: Mon, 21 Sep 2020 09:58:47 +0200 Message-Id: <20200921075857.4424-32-nstange@suse.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200921075857.4424-1-nstange@suse.de> References: <20200921075857.4424-1-nstange@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The to be implemented health tests will maintain some per-CPU state as they successively process the IRQ samples fed into the resp. fast_pool from add_interrupt_randomness(). 
In order not to clutter future patches with trivialities, introduce an empty struct health_test supposed to keep said state in the future. Add a member of this new type to struct fast_pool. Introduce a health_test_reset() stub, which is supposed to (re)initialize instances of struct health_test. Invoke it from fast_pool_init_accounting() to make sure that a fast_pool's contained health_test instance gets initialized once before its first usage. Make add_interrupt_randomness() call fast_pool_init_accounting() earlier: health test functionality will get invoked before the latter's old location and it must have been initialized by that time. Signed-off-by: Nicolai Stange --- drivers/char/random.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 37746df53acf..0f56c873a501 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -879,6 +879,11 @@ static void discard_queued_entropy(struct entropy_store *r, spin_unlock_irqrestore(&r->lock, flags); } +struct health_test {}; + +static void health_test_reset(struct health_test *h) +{} + struct fast_pool { __u32 pool[4]; unsigned long last; @@ -886,6 +891,7 @@ struct fast_pool { unsigned char count; int event_entropy_shift; struct queued_entropy q; + struct health_test health; }; /* @@ -1644,6 +1650,7 @@ static inline void fast_pool_init_accounting(struct fast_pool *f) return; f->event_entropy_shift = min_irq_event_entropy_shift(); + health_test_reset(&f->health); } void add_interrupt_randomness(int irq, int irq_flags) @@ -1674,6 +1681,8 @@ void add_interrupt_randomness(int irq, int irq_flags) add_interrupt_bench(cycles); this_cpu_add(net_rand_state.s1, fast_pool->pool[cycles & 3]); + fast_pool_init_accounting(fast_pool); + if (unlikely(crng_init == 0)) { if ((fast_pool->count >= 64) && crng_fast_load((char *) fast_pool->pool, @@ -1692,8 +1701,6 @@ void add_interrupt_randomness(int irq, int irq_flags) if (!spin_trylock(&r->lock))
return; - fast_pool_init_accounting(fast_pool); - if (!fips_enabled) { /* award one bit for the contents of the fast pool */ nfrac = 1 << ENTROPY_SHIFT; From patchwork Mon Sep 21 07:58:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nicolai Stange X-Patchwork-Id: 252962 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DF4CEC43468 for ; Mon, 21 Sep 2020 08:01:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B910720BED for ; Mon, 21 Sep 2020 08:01:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726336AbgIUIBH (ORCPT ); Mon, 21 Sep 2020 04:01:07 -0400 Received: from mx2.suse.de ([195.135.220.15]:58098 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726577AbgIUH7j (ORCPT ); Mon, 21 Sep 2020 03:59:39 -0400 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 43BD2B52E; Mon, 21 Sep 2020 08:00:08 +0000 (UTC) From: Nicolai Stange To: "Theodore Y. Ts'o" Cc: linux-crypto@vger.kernel.org, LKML , Arnd Bergmann , Greg Kroah-Hartman , "Eric W. Biederman" , "Alexander E. Patrakov" , "Ahmed S. 
Subject: [RFC PATCH 33/41] random: make health_test_process() maintain the get_cycles() delta
Date: Mon, 21 Sep 2020 09:58:49 +0200
Message-Id: <20200921075857.4424-34-nstange@suse.de>

The min-entropy estimate has been made for the lower eight bits of the deltas between cycle counter values from successive IRQ events and thus, the upcoming health tests should actually be run on these deltas.

Introduce a new field ->previous_sample to struct health_test for storing the previous get_cycles() value. Make health_test_process() maintain it and also calculate the delta between the current and the previous value at this point already, in preparation for passing it to the upcoming health tests.

Note that ->previous_sample is deliberately not touched from health_test_reset() in order to maintain a steady flow of correctly calculated deltas across health test resets.
Signed-off-by: Nicolai Stange
---
 drivers/char/random.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index cb6441b96b8e..33f9b7b59f92 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -879,7 +879,9 @@ static void discard_queued_entropy(struct entropy_store *r,
 	spin_unlock_irqrestore(&r->lock, flags);
 }
 
-struct health_test {};
+struct health_test {
+	u8 previous_sample;
+};
 
 enum health_result {
 	health_none,
@@ -895,6 +897,16 @@ static enum health_result
 health_test_process(struct health_test *h, unsigned int event_entropy_shift,
 		    u8 sample)
 {
+	u8 sample_delta;
+
+	/*
+	 * The min-entropy estimate has been made for the lower eight
+	 * bits of the deltas between cycle counter values from
+	 * successive IRQ events.
+	 */
+	sample_delta = sample - h->previous_sample;
+	h->previous_sample = sample;
+
 	return health_none;
 }

From patchwork Mon Sep 21 07:58:51 2020
Subject: [RFC PATCH 35/41] random: improve the APT's statistical power
Date: Mon, 21 Sep 2020 09:58:51 +0200
Message-Id: <20200921075857.4424-36-nstange@suse.de>

The Adaptive Proportion Test as specified by NIST SP800-90B counts how often the first sample value in a sequence of n samples occurs among the remaining n - 1 ones and will report failure if the result is unexpectedly large. The intention is to capture cases where a noise source's actual min-entropy falls below the one estimated during the validation process. Note that, assuming i.i.d., a decrease in per-IRQ min-entropy corresponds to an increase in the maximum probability among all possible sample values, per the definition of min-entropy.
For example, consider the maximum supported per-IRQ min-entropy estimate of H=1, which corresponds to a maximum probability of p = 2^-H = 50% among all possible sample values. Now, if the actual entropy degraded to H/2, it would mean that some sample value's likelihood had increased to ~70%.

The ability of the APT to detect this degradation is limited by the way it's currently implemented: a prerequisite for successfully reporting a sequence of n samples as bad is to find the offending sample value at the leading position. Thus, the power of the APT is always limited by the probability of the offending sample value, i.e. 70% in this example, no matter how large the total number n of examined samples is.

This can be improved upon by taking advantage of the fact that only values of H <= 1 are currently supported for the per-IRQ entropy estimate. It follows that the maximum probability among all sample values would increase to > 1/2 in case the actual min-entropy happened to fall below the assumed value. If we were to examine a sequence of n1 samples, the expected number of occurrences of the offending sample value would be > 1/2 * n1 (again assuming i.i.d.). For example, for an actual entropy of H/2, with H=1 as above, the probability to find 4 or more samples of the same value among a sequence of n1 = 7 events would be ~88%, which is an improvement over the 70% from above.

So partition the total number of samples n = 128/H to examine from the APT into two parts, n1 and n2, such that n = n1 + n2 with n1 odd. Rather than simply picking the first sample value to subsequently search for in the remaining n-1 events, make the APT run a "presearch" on the first n1 samples in order to find the value occurring more than n1 / 2 times, if there is such a value. Make the APT then continue as usual: let it search the remaining n2 samples for the found candidate value, count the number of occurrences and report failure if a certain threshold is reached.
Of course, new thresholds should be installed in order to gain optimal statistical power from the second phase while still maintaining a false positive rate of 2^-16 as before. An exhaustive search among all possibilities for the different choices of n1 and supported per-IRQ min-entropies revealed that n1 = 7 is optimal for n = 128 (H = 1) and close to the resp. optimum for larger n, i.e. smaller H. With this choice, the new presearch scheme yields new thresholds ("c") and probabilities to detect an entropy degradation to H/2 ("power") as tabulated below:

    H     n     c     power
    ------------------------
    1     128   83    64.7%
    1/2   256   205   79.1%
    1/4   512   458   81.6%
    1/8   1024  968   84.0%
    1/16  2048  1991  84.9%
    1/32  4096  4038  86.9%
    1/64  8192  8134  86.4%

Compare this to the former numbers for the original implementation:

    H     n     c     power
    ------------------------
    1     128   87    52.5%
    1/2   256   210   67.5%
    1/4   512   463   76.7%
    1/8   1024  973   82.8%
    1/16  2048  1997  82.6%
    1/32  4096  4044  85.8%
    1/64  8192  8140  85.8%

So for smaller values of H, i.e. for H <= 1/8, the improvement isn't really impressive, but that was to be expected. OTOH, for the larger Hs, that is for the per-IRQ entropies estimated for systems with a high resolution get_cycles(), there is a clear advantage over the old scheme.

Implement the described presearch for finding the sample value occurring more than half of the time among the first n1 = 7 events in a sequence of n = 128/H samples to examine, if there is such a value. Rather than maintaining individual per-CPU counters for the 2^8 possible sample values each, count the numbers of ones at the eight resp. bit positions. Note that if some sample value has indeed been observed more than half of the time, it will dominate all these bit counters and its value can be unambiguously restored from them, which is all that is needed.
For better reviewability, represent the eight bit counters as an array of eight u8's in struct health_test and implement the bit counting as well as the final candidate extraction in the most naive way. A follow-up patch will squeeze the counters into a single u32 and also optimize the bit counting and candidate extraction performance-wise.

Implement the new health_apt_presearch_update() for updating the presearch bit counters. Call it from health_test_apt() on the first n1 = 7 samples.

Implement the new health_apt_presearch_finalize() for restoring the candidate from the presearch bit counters. Call it from health_test_apt() once the n1'th event in a sequence has been processed and the presearch phase is to be concluded.

Make health_test_apt() search for the candidate value as determined by the presearch phase among the sequence's remaining n2 = n - n1 samples. Adapt the failure thresholds to the now slightly smaller n2 values.

Signed-off-by: Nicolai Stange
---
 drivers/char/random.c | 58 +++++++++++++++++++++++++++++++++++++------
 1 file changed, 50 insertions(+), 8 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 131302cbc495..75a103f24fea 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -881,8 +881,13 @@ static void discard_queued_entropy(struct entropy_store *r,
 
 struct health_test {
 	unsigned short apt_event_count;
-	unsigned short apt_candidate_count;
-	u8 apt_candidate;
+	union {
+		u8 apt_presearch_bit_counters[8];
+		struct {
+			unsigned short apt_candidate_count;
+			u8 apt_candidate;
+		};
+	};
 
 	u8 previous_sample;
 };
@@ -895,9 +900,44 @@ enum health_result {
 };
 
 /* Adaptive Proportion Test */
+#define HEALTH_APT_PRESEARCH_EVENT_COUNT 7
+
+static void health_apt_presearch_update(struct health_test *h, u8 sample_delta)
+{
+	int i;
+
+	for (i = 0; i < 8; ++i) {
+		h->apt_presearch_bit_counters[i] += sample_delta & 0x1;
+		sample_delta >>= 1;
+	}
+}
+
+static void health_apt_presearch_finalize(struct health_test *h)
+{
+	int i;
+
+	/*
+	 * If some event octet occurred more than half of the time,
+	 * i.e. more than HEALTH_APT_PRESEARCH_EVENT_COUNT / 2 times,
+	 * then its value can be restored unambiguously from the eight
+	 * ->apt_presearch_bit_counters each holding the count of 1s
+	 * encountered at the corresponding bit positions.
+	 */
+	h->apt_candidate = 0;
+	for (i = 0; i < 8; ++i) {
+		if (h->apt_presearch_bit_counters[i] >=
+		    (HEALTH_APT_PRESEARCH_EVENT_COUNT + 1) / 2) {
+			h->apt_candidate |= 1 << i;
+		}
+	}
+	h->apt_candidate_count = 0;
+};
+
 static void health_apt_reset(struct health_test *h)
 {
 	h->apt_event_count = 0;
+	memset(h->apt_presearch_bit_counters, 0,
+	       sizeof(h->apt_presearch_bit_counters));
 }
 
 static enum health_result
@@ -911,16 +951,18 @@ health_test_apt(struct health_test *h, unsigned int event_entropy_shift,
 	 * values of event_entropy_shift each, should have probability
 	 * <= 2^-16.
 	 */
-	static const unsigned int c[] = {87, 210, 463, 973, 1997, 4044, 8140};
+	static const unsigned int c[] = {83, 205, 458, 968, 1991, 4038, 8134};
+
+	BUILD_BUG_ON(HEALTH_APT_PRESEARCH_EVENT_COUNT != 7);
 
-	if (!h->apt_event_count) {
-		h->apt_event_count = 1;
-		h->apt_candidate = sample_delta;
-		h->apt_candidate_count = 0;
+	++h->apt_event_count;
+	if (unlikely(h->apt_event_count <= HEALTH_APT_PRESEARCH_EVENT_COUNT)) {
+		health_apt_presearch_update(h, sample_delta);
+		if (h->apt_event_count == HEALTH_APT_PRESEARCH_EVENT_COUNT)
+			health_apt_presearch_finalize(h);
 		return health_queue;
 	}
 
-	++h->apt_event_count;
 	if (unlikely(h->apt_candidate == sample_delta &&
 		     ++h->apt_candidate_count == c[event_entropy_shift])) {
 		health_apt_reset(h);

From patchwork Mon Sep 21 07:58:53 2020
Subject: [RFC PATCH 37/41] random: implement the "Repetition Count" NIST SP800-90B health test
Date: Mon, 21 Sep 2020 09:58:53 +0200
Message-Id: <20200921075857.4424-38-nstange@suse.de>

The "Repetition Count Test" (RCT) as specified by NIST SP800-90B simply counts the number of times the same sample value has been observed and reports failure if a highly unlikely threshold is exceeded. The exact values of the latter depend on the estimated per-IRQ min-entropy H as well as on the upper bounds set on the probability of false positives. For the latter, a maximum value of 2^-20 is recommended and with this value the threshold can be calculated as 1 + ceil(20 / H).

It should be noted that the RCT has very poor statistical power and is only intended to detect catastrophic noise source failures, like the get_cycles() in add_interrupt_randomness() always returning the same constant.

Add the fields needed for maintaining the RCT state to struct health_test: ->rct_previous_delta for storing the previous sample value and ->rct_count for keeping track of how many times this value has been observed in a row so far.

Implement the RCT and wrap it in a new function, health_test_rct().

Make the health test entry point, health_test_process(), call it early before invoking the APT and forward failure reports to the caller. All other return codes from the RCT are ignored, because
- as said, the statistical power is weak and a positive outcome wouldn't tell anything and
- it's not desirable to make the caller, i.e. add_interrupt_randomness(), queue any further entropy once the concurrently running APT has signaled a successful completion.
Signed-off-by: Nicolai Stange
---
 drivers/char/random.c | 55 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2c744d2a9b26..54ee082ca4a8 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -890,6 +890,9 @@ struct health_test {
 	};
 
 	u8 previous_sample;
+
+	u8 rct_previous_delta;
+	unsigned short rct_count;
 };
 
 enum health_result {
@@ -899,6 +902,43 @@ enum health_result {
 	health_discard,
 };
 
+/* Repetition count test. */
+static enum health_result
+health_test_rct(struct health_test *h, unsigned int event_entropy_shift,
+		u8 sample_delta)
+{
+	unsigned int c;
+
+	if (likely(sample_delta != h->rct_previous_delta)) {
+		h->rct_previous_delta = sample_delta;
+		h->rct_count = 0;
+		return health_dispatch;
+	}
+
+	h->rct_count++;
+	if (!h->rct_count) {
+		/* Overflow. */
+		h->rct_count = -1;
+	}
+
+	/*
+	 * With a min-entropy of H = 2^-event_entropy_shift bits per
+	 * event, the maximum probability of seeing any particular
+	 * sample value (i.e. delta) is bounded by 2^-H. Thus, the
+	 * probability to observe the same events C times in a row is
+	 * less than 2^-((C - 1) * H). Limit the false positive rate
+	 * of the repetition count test to 2^-20, which yields a
+	 * cut-off value of C = 1 + 20/H. Note that the actual number
+	 * of repetitions equals ->rct_count + 1, so this offset by
+	 * one must be accounted for in the comparison below.
+	 */
+	c = 20 << event_entropy_shift;
+	if (h->rct_count >= c)
+		return health_discard;
+
+	return health_queue;
+}
+
 /* Adaptive Proportion Test */
 #define HEALTH_APT_PRESEARCH_EVENT_COUNT 7
 
@@ -1027,6 +1067,7 @@ health_test_process(struct health_test *h, unsigned int event_entropy_shift,
 		    u8 sample)
 {
 	u8 sample_delta;
+	enum health_result rct;
 
 	/*
 	 * The min-entropy estimate has been made for the lower eight
@@ -1036,6 +1077,20 @@ health_test_process(struct health_test *h, unsigned int event_entropy_shift,
 	sample_delta = sample - h->previous_sample;
 	h->previous_sample = sample;
 
+	rct = health_test_rct(h, event_entropy_shift, sample_delta);
+	if (rct == health_discard) {
+		/*
+		 * Something is really off, get_cycles() has become
+		 * (or always been) a constant.
+		 */
+		return health_discard;
+	}
+
+	/*
+	 * Otherwise return whatever the APT returns. In particular,
+	 * don't care about whether the RCT needs to consume more
+	 * samples to complete.
+	 */
 	return health_test_apt(h, event_entropy_shift, sample_delta);
 }

From patchwork Mon Sep 21 07:58:55 2020
Subject: [RFC PATCH 39/41] random: make the startup tests include multiple APT invocations
Date: Mon, 21 Sep 2020 09:58:55 +0200
Message-Id: <20200921075857.4424-40-nstange@suse.de>

Given a per-IRQ min-entropy estimate of H, the Adaptive Proportion Test (APT) will need to consume at most n = 128/H samples before reaching a conclusion. The supported values for H are 1, 1/2, 1/4, 1/8, ..., 1/64, but only 1 and 1/8 are currently being actively used on systems with and without a high resolution get_cycles() respectively. The corresponding numbers of samples consumed by one APT execution are 128, 256, 512, 1024, 2048, 4096 and 8192.
Currently, the ->warmup parameter used for controlling the startup is hardcoded to be initialized to 1024 and the health test logic won't permit the caller, i.e. add_interrupt_randomness(), to dispatch any entropy to the global balance until that many events have been processed *and* the first APT has completed, whichever comes later. It would take roughly eight successful APT invocations for H=1 until the startup sequence has completed, but for all H <= 1/8, the ->warmup logic is effectively useless because the first APT would always need to process >= 1024 samples anyway.

The probabilities of one single APT invocation successfully detecting a degradation of the per-IRQ min-entropy to H/2 ("power") are as follows for the different supported H estimates:

    H     n     power
    ------------------
    1     128   64.7%
    1/2   256   79.1%
    1/4   512   81.6%
    1/8   1024  84.0%
    1/16  2048  84.9%
    1/32  4096  86.9%
    1/64  8192  86.4%

Thus, for H=1, the probability that at least one out of those eight APT invocations will detect a degradation to H/2 is 1 - (1 - 64.7%)^8 = 99.98%, which is quite good. OTOH, the 84.0% achievable with the single APT invocation for H = 1/8 is only semi-satisfactory.

Note that as it currently stands, the only point in time where the health tests can still intervene and keep back low quality noise from the primary_crng is before the initial seed has happened. Afterwards, failing continuous health tests would only potentially delay those best effort reseeds (which is questionable behaviour in itself, as the crng state's entropy is never reduced in the course of reseeding). A future patch will enable dynamically switching from the initial H=1 or 1/8 resp. to lower per-IRQ entropy values upon health test failures in order to keep those systems going where these more or less arbitrary per-IRQ entropy estimates turn out to be simply wrong.

From a paranoia POV, it is certainly a good idea to run the APT several times in a row during startup in order to achieve a good statistical power.
Extending the warmup to cover the larger of the 1024 events required by NIST SP800-90B and four full APT lengths will result in a combined probability of detecting an entropy degradation to H/2 of >= 99.8% across all supported values of H. The obvious downside is that the number of IRQ events required for the initial seed will be quadrupled, at least for H <= 1/8.

Follow this approach. Amend health_test_reset()'s signature with an additional parameter, event_entropy_shift, and make it set ->warmup to the larger of 1024 and 4 * 128 / (2^-event_entropy_shift). Adjust all call sites accordingly.

Signed-off-by: Nicolai Stange
---
 drivers/char/random.c | 23 +++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index bd8c24e433d0..86dd87588b1b 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1058,14 +1058,21 @@ health_test_apt(struct health_test *h, unsigned int event_entropy_shift,
 	return health_queue;
 }
 
-static void health_test_reset(struct health_test *h)
+static void health_test_reset(struct health_test *h,
+			      unsigned int event_entropy_shift)
 {
 	/*
-	 * Don't dispatch until at least 1024 events have been
-	 * processed by the continuous health tests as required by
-	 * NIST SP800-90B for the startup tests.
+	 * Let H = 2^-event_entropy_shift equal the estimated per-IRQ
+	 * min-entropy. One APT will consume at most 128 / H samples
+	 * until completion. Run the startup tests for the larger of
+	 * 1024 events as required by NIST or four times the APT
+	 * length. In either case, the combined probability of the
+	 * resulting number of successive APTs to detect a degradation
+	 * of H to H/2 will be >= 99.8%, for any supported value of
+	 * event_entropy_shift.
 	 */
-	h->warmup = 1024;
+	h->warmup = 4 * (128 << event_entropy_shift);
+	h->warmup = max_t(unsigned int, h->warmup, 1024);
 
 	health_apt_reset(h);
 }
 
@@ -1092,7 +1099,7 @@ health_test_process(struct health_test *h, unsigned int event_entropy_shift,
 		 * (or always been) a constant.
 		 */
 		if (h->warmup)
-			health_test_reset(h);
+			health_test_reset(h, event_entropy_shift);
 		return health_discard;
 	}
 
@@ -1104,7 +1111,7 @@ health_test_process(struct health_test *h, unsigned int event_entropy_shift,
 	apt = health_test_apt(h, event_entropy_shift, sample_delta);
 	if (unlikely(h->warmup) && --h->warmup) {
 		if (apt == health_discard)
-			health_test_reset(h);
+			health_test_reset(h, event_entropy_shift);
 		/*
 		 * Don't allow the caller to dispatch until warmup
 		 * has completed.
@@ -1883,7 +1890,7 @@ static inline void fast_pool_init_accounting(struct fast_pool *f)
 		return;
 
 	f->event_entropy_shift = min_irq_event_entropy_shift();
-	health_test_reset(&f->health);
+	health_test_reset(&f->health, f->event_entropy_shift);
 }
 
 void add_interrupt_randomness(int irq, int irq_flags)

From patchwork Mon Sep 21 07:58:56 2020
Subject: [RFC PATCH 40/41] random: trigger startup health test on any failure of the health tests
Date: Mon, 21 Sep 2020 09:58:56 +0200
Message-Id: <20200921075857.4424-41-nstange@suse.de>

The startup health tests to be executed at boot as required by NIST SP800-90B consist of running the continuous health tests, i.e. the Adaptive Proportion Test (APT) and the Repetition Count Test (RCT), until a certain amount of noise samples have been examined. In case of a test failure during this period, the startup tests would get restarted by means of reinitializing the fast_pool's ->warmup member with the original number of total samples to examine during startup.
A future patch will enable dynamically switching from the initial H=1 or 1/8 per-IRQ min-entropy estimates to lower values upon health test failures in order to keep those systems going where these more or less arbitrary per-IRQ entropy estimates turn out to simply be wrong. It is certainly desirable to restart the startup health tests upon such a switch.

In order to keep the upcoming code comprehensible, move the startup test restart logic from health_test_process() into add_interrupt_randomness(). For simplicity, make add_interrupt_randomness() trigger a startup test on each health test failure.

Note that there's a change in behaviour: up to now, only the boot-time startup tests would have restarted themselves upon failure, whereas now even a failure of the continuous health tests can potentially trigger a startup test long after boot. Note that as it currently stands, rerunning the full startup tests after the crng has received its initial seed only has the effect of inhibiting entropy dispatch for a while and thus, of potentially delaying those best effort crng reseeds during runtime. As reseeds never reduce a crng state's entropy, this behaviour is admittedly questionable. However, further patches introducing forced reseeds might perhaps become necessary in the future, c.f. the specification of "reseed_interval" in NIST SP800-90A. Thus, it's better to keep the startup health test restart logic consistent for now.

Signed-off-by: Nicolai Stange
---
 drivers/char/random.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 86dd87588b1b..bb79dcb96882 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1098,8 +1098,6 @@ health_test_process(struct health_test *h, unsigned int event_entropy_shift,
 		 * Something is really off, get_cycles() has become
 		 * (or always been) a constant.
 		 */
-		if (h->warmup)
-			health_test_reset(h, event_entropy_shift);
 		return health_discard;
 	}
 
@@ -1110,8 +1108,6 @@ health_test_process(struct health_test *h, unsigned int event_entropy_shift,
 	 */
 	apt = health_test_apt(h, event_entropy_shift, sample_delta);
 	if (unlikely(h->warmup) && --h->warmup) {
-		if (apt == health_discard)
-			health_test_reset(h, event_entropy_shift);
 		/*
 		 * Don't allow the caller to dispatch until warmup
 		 * has completed.
@@ -1928,6 +1924,14 @@ void add_interrupt_randomness(int irq, int irq_flags)
 			health_test_process(&fast_pool->health,
 					    fast_pool->event_entropy_shift,
 					    cycles);
+		if (unlikely(health_result == health_discard)) {
+			/*
+			 * Oops, something's odd. Restart the startup
+			 * tests.
+			 */
+			health_test_reset(&fast_pool->health,
+					  fast_pool->event_entropy_shift);
+		}
 	}
 
 	if (unlikely(crng_init == 0)) {

From patchwork Mon Sep 21 07:58:57 2020
From: Nicolai Stange
Subject: [RFC PATCH 41/41] random: lower per-IRQ entropy estimate upon health test failure
Date: Mon, 21 Sep 2020 09:58:57 +0200
Message-Id: <20200921075857.4424-42-nstange@suse.de>
In-Reply-To: <20200921075857.4424-1-nstange@suse.de>
List-ID: X-Mailing-List: linux-crypto@vger.kernel.org

Currently, if fips_enabled is set, a per-IRQ min-entropy estimate of either 1 bit or 1/8 bit is assumed, depending on whether a high-resolution get_cycles() is available or not. The statistical NIST SP800-90B startup health tests are run on a certain number of noise samples and are intended to reject in case this hypothesis turns out to be wrong, i.e. if the actual min-entropy is smaller. As long as the startup tests haven't finished, entropy dispatch, and thus the initial crng seeding, is inhibited. On test failure, the startup tests restart themselves from the beginning.
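As an aside, the dispatch gating just described can be illustrated with a small standalone model. This is only a sketch: the warmup length is an illustrative placeholder and the functions are heavily simplified stand-ins for the kernel's actual health_test_process() machinery; only the result names and the restart-on-failure behaviour mirror the patch series.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified, illustrative model of the startup health test gating:
 * entropy dispatch stays inhibited until a fixed number of samples
 * has passed the tests; any failure restarts the warmup from scratch.
 */
enum health_result { health_none, health_dispatch, health_discard };

struct health_test { unsigned int warmup; };

#define WARMUP_SAMPLES	1024	/* illustrative, not the kernel's value */

static void health_test_reset(struct health_test *h)
{
	h->warmup = WARMUP_SAMPLES;
}

/* Feed one sample's test verdict; tell the caller what to do. */
static enum health_result health_process(struct health_test *h,
					 bool sample_ok)
{
	if (!sample_ok) {
		/* Failure: restart the startup tests from the beginning. */
		health_test_reset(h);
		return health_discard;
	}

	if (h->warmup && --h->warmup)
		return health_none;	/* still warming up: inhibit dispatch */

	return health_dispatch;
}
```

Once the warmup counter has drained without a failure, every further passing sample is dispatchable; a single failure at any point voids the progress made so far.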
It follows that in case a system's actual per-IRQ min-entropy is smaller than the more or less arbitrarily assessed 1 bit or 1/8 bit resp., there is a good chance that the initial crng seeding will never complete. AFAICT, such a situation could potentially prevent certain userspace daemons like OpenSSH from loading. In order to still be able to make progress, make add_interrupt_randomness() lower the per-IRQ min-entropy estimate by one half upon each health test failure, but only until the minimum supported value of 1/64 bit has been reached.

Note that health test failures already cause a restart of the startup health tests, and thus a certain number of additional noise samples resp. IRQ events will have to be examined by the health tests before the initial crng seeding can take place. The number of fresh events required is inversely proportional to the estimated per-IRQ min-entropy H: for the Adaptive Proportion Test (APT) it equals ~128 / H. It follows that this patch won't be of much help for embedded systems or VMs with poor IRQ rates at boot time, at least not without manual intervention. But there aren't many options left when fips_enabled is set.

With respect to NIST SP800-90B conformance, this patch enters kind of a gray area: NIST SP800-90B has no notion of such a dynamically adjusted min-entropy estimate. Instead, it is assumed that some fixed value has been estimated based on general principles and subsequently validated in the course of the certification process. However, I would argue that if a system had successfully passed certification for 1 bit or 1/8 bit resp. of estimated min-entropy per sample, it would automatically be approved for all smaller values as well. Had we started out with such a lower value passing the health tests from the beginning, the latter would never have complained in the first place and the system would have come up just fine.
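The halve-until-1/64-bit policy, together with the recovery mechanism this patch introduces, can be sketched as a self-contained model. Struct and function names below are invented for the illustration; only the shift bound of 6 (i.e. 1/64 bit) and the five-pass recovery threshold come from the patch, and unlike the real code this model omits the startup test restarts that accompany each adjustment.

```c
#include <assert.h>

/*
 * Illustrative model of the per-IRQ min-entropy adjustment policy:
 * the estimate is H = 1/2^event_entropy_shift bits per event.  A
 * health test failure halves the estimate (saturating at 1/64 bit);
 * after enough consecutive passes at a degraded level, the next
 * better estimate is given another shot.
 */
#define MAX_EVENT_ENTROPY_SHIFT	6	/* 1/64 bit, smallest supported */
#define GOOD_TESTS_NEEDED	5	/* passes needed before recovery */

struct fast_pool_model {
	unsigned int event_entropy_shift;
	unsigned int good_tests;
};

static void model_health_failure(struct fast_pool_model *f)
{
	/* Halve the estimate, i.e. increment the shift, capped at 6. */
	if (f->event_entropy_shift < MAX_EVENT_ENTROPY_SHIFT)
		f->event_entropy_shift++;
	f->good_tests = 0;
}

static void model_health_pass(struct fast_pool_model *f,
			      unsigned int min_shift)
{
	f->good_tests++;
	/* Degraded but stable for a while: double the estimate again. */
	if (f->event_entropy_shift > min_shift &&
	    f->good_tests >= GOOD_TESTS_NEEDED) {
		f->event_entropy_shift--;
		f->good_tests = 0;
	}
}
```

The five-pass threshold is the compromise discussed below: a smaller value would recover faster but risks oscillating between a too-large estimate and the next smaller one, losing entropy accounting accuracy on every round trip.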
Finally, note that all statistical tests have a non-zero probability of false positives, and so do the NIST SP800-90B health tests. In order not to keep the estimated per-IRQ entropy at a smaller level than necessary forever after spurious health test failures, make add_interrupt_randomness() attempt to double it again after a certain number of successful health test passes at the degraded entropy level have been completed. This threshold should not be too small in order to avoid excessive entropy accounting loss due to continuously alternating between a too large per-IRQ entropy estimate and the next smaller value. For now, choose a value of five as a compromise between quick recovery and limiting said accounting loss.

So, introduce a new member ->good_tests to struct fast_pool for keeping track of the number of successful health test passes. Make add_interrupt_randomness() increment it upon successful health test completion and reset it to zero on failures. Make add_interrupt_randomness() double the current min-entropy estimate and restart the startup health tests in case ->good_tests is > 4 and the entropy had previously been lowered.

Signed-off-by: Nicolai Stange
---
 drivers/char/random.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index bb79dcb96882..24c09ba9d7d0 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1126,6 +1126,7 @@ struct fast_pool {
	bool dispatch_needed : 1;
	bool discard_needed : 1;
	int event_entropy_shift;
+	unsigned int good_tests;
	struct queued_entropy q;
	struct health_test health;
 };
@@ -1926,9 +1927,13 @@ void add_interrupt_randomness(int irq, int irq_flags)
			    cycles);
	if (unlikely(health_result == health_discard)) {
		/*
-		 * Oops, something's odd. Restart the startup
-		 * tests.
+		 * Oops, something's odd. Lower the entropy
+		 * estimate and restart the startup tests.
		 */
+		fast_pool->event_entropy_shift =
+			min_t(unsigned int,
+			      fast_pool->event_entropy_shift + 1, 6);
+		fast_pool->good_tests = 0;
		health_test_reset(&fast_pool->health,
				  fast_pool->event_entropy_shift);
	}
@@ -1951,6 +1956,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
		 * entropy discard request?
		 */
		fast_pool->dispatch_needed = !fast_pool->discard_needed;
+		fast_pool->good_tests++;
		break;

	case health_discard:
@@ -2005,6 +2011,21 @@ void add_interrupt_randomness(int irq, int irq_flags)
	if (fast_pool->dispatch_needed || health_result == health_none) {
		reseed = __dispatch_queued_entropy_fast(r, q);
		fast_pool->dispatch_needed = false;
+
+		/*
+		 * In case the estimated per-IRQ min-entropy had to be
+		 * lowered due to health test failure, but the lower
+		 * value has proven to withstand the tests for some
+		 * time now, try to give the next better value another
+		 * shot.
+		 */
+		if (unlikely((fast_pool->event_entropy_shift >
+			      min_irq_event_entropy_shift())) &&
+		    fast_pool->good_tests > 4) {
+			fast_pool->event_entropy_shift--;
+			health_test_reset(&fast_pool->health,
+					  fast_pool->event_entropy_shift);
+		}
	} else if (fast_pool->discard_needed) {
		int dummy;