
[v5] arm64: Implement archrandom.h for ARMv8.5-RNG

Message ID 20191108135751.3218-1-rth@twiddle.net
State New
Series [v5] arm64: Implement archrandom.h for ARMv8.5-RNG

Commit Message

Richard Henderson Nov. 8, 2019, 1:57 p.m. UTC
From: Richard Henderson <richard.henderson@linaro.org>


Expose the ID_AA64ISAR0.RNDR field to userspace, as the
RNG system registers are always available at EL0.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

---
v2: Use __mrs_s and fix missing cc clobber (Mark),
    Log rng failures with pr_warn (Mark),
    Use __must_check; put RNDR in arch_get_random_long and RNDRRS
    in arch_get_random_seed_long (Ard),
    Use ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE, and check this_cpu_has_cap
    when reading random data.  Move everything out of line, now that
    there are 5 other function calls involved, and to unify the rate
    limiting on the pr_warn.
v3: Keep arch_get_random{,_seed}_long in sync.
v4: Use __cpus_have_const_cap before falling back to this_cpu_has_cap.
v5: Improve commentary; fix some checkpatch warnings.
---
 Documentation/arm64/cpu-feature-registers.rst |  2 +
 arch/arm64/include/asm/archrandom.h           | 35 ++++++++
 arch/arm64/include/asm/cpucaps.h              |  3 +-
 arch/arm64/include/asm/sysreg.h               |  4 +
 arch/arm64/kernel/cpufeature.c                | 13 +++
 arch/arm64/kernel/random.c                    | 82 +++++++++++++++++++
 arch/arm64/Kconfig                            | 12 +++
 arch/arm64/kernel/Makefile                    |  1 +
 drivers/char/Kconfig                          |  4 +-
 9 files changed, 153 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/include/asm/archrandom.h
 create mode 100644 arch/arm64/kernel/random.c

-- 
2.17.1

Comments

Mark Rutland Nov. 8, 2019, 2:30 p.m. UTC | #1
On Fri, Nov 08, 2019 at 02:57:51PM +0100, Richard Henderson wrote:
> From: Richard Henderson <richard.henderson@linaro.org>
>
> Expose the ID_AA64ISAR0.RNDR field to userspace, as the
> RNG system registers are always available at EL0.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> v2: Use __mrs_s and fix missing cc clobber (Mark),
>     Log rng failures with pr_warn (Mark),

When I suggested this, I meant in the probe path.

Since it can legitimately fail at runtime, I don't think it's worth
logging there. Maybe it's worth recording stats, but the generic wrapper
could do that.

[...]

> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 80f459ad0190..456d5c461cbf 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -119,6 +119,7 @@ static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
>   * sync with the documentation of the CPU feature register ABI.
>   */
>  static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RNDR_SHIFT, 4, 0),

If we're going to expose this to userspace, it must be a system feature.
If all the boot CPUs have the feature, we'll advertise it to userspace,
and therefore must mandate it for late-onlined CPUs.

>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
> @@ -1565,6 +1566,18 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.sign = FTR_UNSIGNED,
>  		.min_field_value = 1,
>  	},
> +#endif
> +#ifdef CONFIG_ARCH_RANDOM
> +	{
> +		.desc = "Random Number Generator",
> +		.capability = ARM64_HAS_RNG,
> +		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,

As above, if we're advertising this to userspace and/or VMs, this must
be a system-wide feature, and cannot be a weak local feature.

> +		.matches = has_cpuid_feature,
> +		.sys_reg = SYS_ID_AA64ISAR0_EL1,
> +		.field_pos = ID_AA64ISAR0_RNDR_SHIFT,
> +		.sign = FTR_UNSIGNED,
> +		.min_field_value = 1,
> +	},
>  #endif
>  	{},
>  };

> diff --git a/arch/arm64/kernel/random.c b/arch/arm64/kernel/random.c
> new file mode 100644
> index 000000000000..e7ff29dd637c
> --- /dev/null
> +++ b/arch/arm64/kernel/random.c
> @@ -0,0 +1,82 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Random number generation using ARMv8.5-RNG.
> + */
> +
> +#include <linux/random.h>
> +#include <linux/ratelimit.h>
> +#include <linux/printk.h>
> +#include <linux/preempt.h>
> +#include <asm/cpufeature.h>
> +
> +static inline bool has_random(void)
> +{
> +	/*
> +	 * We "have" RNG if either
> +	 * (1) every cpu in the system has RNG, or
> +	 * (2) in a non-preemptible context, current cpu has RNG.
> +	 *
> +	 * Case 1 is the expected case when RNG is deployed, but
> +	 * case 2 is present as a backup.  Case 2 has two effects:
> +	 * (A) rand_initialize() is able to use the instructions
> +	 * when present in the boot cpu, which happens before
> +	 * secondary cpus are enabled and before features are
> +	 * resolved for the full system.
> +	 * (B) add_interrupt_randomness() is able to use the
> +	 * instructions when present on the current cpu, in case
> +	 * some big/little system only has RNG on big cpus.
> +	 *
> +	 * We can use __cpus_have_const_cap because we then fall
> +	 * back to checking the current cpu.
> +	 */
> +	return __cpus_have_const_cap(ARM64_HAS_RNG) ||
> +	       (!preemptible() && this_cpu_has_cap(ARM64_HAS_RNG));
> +}

We don't bother with special-casing local mismatch handling like this
for other features. I'd rather that:

* On the boot CPU, prior to detecting secondaries, we can seed the usual
  pool with the RNG if the boot CPU has it.

* Once secondaries are up, if the feature is present system-wide, we can
  make use of the feature as a system-wide feature. If not, we don't use
  the RNG.


[...]

> +bool arch_get_random_long(unsigned long *v)
> +{
> +	bool ok;
> +
> +	if (!has_random())
> +		return false;
> +
> +	/*
> +	 * Reads of RNDR set PSTATE.NZCV to 0b0000 on success,
> +	 * and set PSTATE.NZCV to 0b0100 otherwise.
> +	 */
> +	asm volatile(
> +		__mrs_s("%0", SYS_RNDR_EL0) "\n"
> +	"	cset %w1, ne\n"
> +	: "=r"(*v), "=r"(ok)

Nit: place a space between the constraint and the bracketed variable, as
we do elsewhere.

> +	:
> +	: "cc");
> +
> +	if (unlikely(!ok))
> +		pr_warn_ratelimited("cpu%d: sys_rndr failed\n",
> +				    read_cpuid_id());
> +	return ok;
> +}

... so this can be:

bool arch_get_random_long(unsigned long *v)
{
	bool ok;

	if (!cpus_have_const_cap(ARM64_HAS_RNG))
		return false;

	/*
	 * Reads of RNDR set PSTATE.NZCV to 0b0000 on success,
	 * and set PSTATE.NZCV to 0b0100 otherwise.
	 */
	asm volatile(
		__mrs_s("%0", SYS_RNDR_EL0) "\n"
	"	cset %w1, ne\n"
	: "=r" (*v), "=r" (ok)
	:
	: "cc");

	return ok;
}

...with similar for arch_get_random_seed_long().

[...]

>  config RANDOM_TRUST_CPU
>  	bool "Trust the CPU manufacturer to initialize Linux's CRNG"
> -	depends on X86 || S390 || PPC
> +	depends on X86 || S390 || PPC || ARM64

Can't that depend on ARCH_RANDOM instead?

Thanks,
Mark.
Richard Henderson Nov. 9, 2019, 9:04 a.m. UTC | #2
On 11/8/19 3:30 PM, Mark Rutland wrote:
> On Fri, Nov 08, 2019 at 02:57:51PM +0100, Richard Henderson wrote:
>> From: Richard Henderson <richard.henderson@linaro.org>
>>
>> Expose the ID_AA64ISAR0.RNDR field to userspace, as the
>> RNG system registers are always available at EL0.
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>> v2: Use __mrs_s and fix missing cc clobber (Mark),
>>     Log rng failures with pr_warn (Mark),
>
> When I suggested this, I meant in the probe path.
>
> Since it can legitimately fail at runtime, I don't think it's worth
> logging there. Maybe it's worth recording stats, but the generic wrapper
> could do that.

Ah, ok, dropped.

>> +#ifdef CONFIG_ARCH_RANDOM
>> +	{
>> +		.desc = "Random Number Generator",
>> +		.capability = ARM64_HAS_RNG,
>> +		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
>
> As above, if we're advertising this to userspace and/or VMs, this must
> be a system-wide feature, and cannot be a weak local feature.

Could you draw me the link between struct arm64_cpu_capabilities, as seen here,
and struct arm64_ftr_bits, which exposes the system registers to userspace/vms?

AFAICS, ARM64_HAS_RNG is private to the kernel; there is no ELF HWCAP value
exposed to userspace by this.

The adjustment of ID_AA64ISAR0.RNDR is FTR_LOWER_SAFE, which means the minimum
value of all online cpus.  (Which seems to generate a pr_warn in
check_update_ftr_reg for hot-plug secondaries that do not match.)


> We don't bother with special-casing local mismatch handling like this
> for other features. I'd rather that:
>
> * On the boot CPU, prior to detecting secondaries, we can seed the usual
>   pool with the RNG if the boot CPU has it.
>
> * Once secondaries are up, if the feature is present system-wide, we can
>   make use of the feature as a system-wide feature. If not, we don't use
>   the RNG.

Unless I'm mis-reading things, there is not a setting for ARM64_CPUCAP_* that
allows exactly this.  If I use ARM64_CPUCAP_SYSTEM_FEATURE, then the feature is
not detected early enough for the boot cpu.

I can change this to ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE.  That way it is
system-wide, and also detected early enough to be used for rand_initialize().
However, it has the side effect that secondaries are not allowed to omit RNG if
the boot cpu has RNG.

Is there some setting that I've missed?  Is it ok to kick the problem down the
road until someone actually builds mis-matched hardware?


> ... so this can be:
>
> bool arch_get_random_long(unsigned long *v)
> {
> 	bool ok;
>
> 	if (!cpus_have_const_cap(ARM64_HAS_RNG))
> 		return false;
>
> 	/*
> 	 * Reads of RNDR set PSTATE.NZCV to 0b0000 on success,
> 	 * and set PSTATE.NZCV to 0b0100 otherwise.
> 	 */
> 	asm volatile(
> 		__mrs_s("%0", SYS_RNDR_EL0) "\n"
> 	"	cset %w1, ne\n"
> 	: "=r" (*v), "=r" (ok)
> 	:
> 	: "cc");
>
> 	return ok;
> }
>
> ...with similar for arch_get_random_seed_long().

Done.

>>  config RANDOM_TRUST_CPU
>>  	bool "Trust the CPU manufacturer to initialize Linux's CRNG"
>> -	depends on X86 || S390 || PPC
>> +	depends on X86 || S390 || PPC || ARM64
>
> Can't that depend on ARCH_RANDOM instead?

Yes, it can.


r~
Mark Rutland Nov. 12, 2019, 9:52 a.m. UTC | #3
On Sat, Nov 09, 2019 at 10:04:28AM +0100, Richard Henderson wrote:
> On 11/8/19 3:30 PM, Mark Rutland wrote:
> > On Fri, Nov 08, 2019 at 02:57:51PM +0100, Richard Henderson wrote:
> >> From: Richard Henderson <richard.henderson@linaro.org>
> >>
> >> Expose the ID_AA64ISAR0.RNDR field to userspace, as the
> >> RNG system registers are always available at EL0.
> >>
> >> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> >> ---
> >> v2: Use __mrs_s and fix missing cc clobber (Mark),
> >>     Log rng failures with pr_warn (Mark),
> >
> > When I suggested this, I meant in the probe path.
> >
> > Since it can legitimately fail at runtime, I don't think it's worth
> > logging there. Maybe it's worth recording stats, but the generic wrapper
> > could do that.
>
> Ah, ok, dropped.
>
> >> +#ifdef CONFIG_ARCH_RANDOM
> >> +	{
> >> +		.desc = "Random Number Generator",
> >> +		.capability = ARM64_HAS_RNG,
> >> +		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
> >
> > As above, if we're advertising this to userspace and/or VMs, this must
> > be a system-wide feature, and cannot be a weak local feature.
>
> Could you draw me the link between struct arm64_cpu_capabilities, as seen here,
> and struct arm64_ftr_bits, which exposes the system registers to userspace/vms?
>
> AFAICS, ARM64_HAS_RNG is private to the kernel; there is no ELF HWCAP value
> exposed to userspace by this.

The cap is kernel-private, but in arm64_ftr_bits the field was marked
FTR_VISIBLE, which means the field is exposed to userspace and VMs via
ID register emulation.

> The adjustment of ID_AA64ISAR0.RNDR is FTR_LOWER_SAFE, which means the minimum
> value of all online cpus.  (Which seems to generate a pr_warn in
> check_update_ftr_reg for hot-plug secondaries that do not match.)

You're right that we'll warn (due to the STRICT mask), but I think that
given we're fairly certain we'll see mismatched systems, we should
handle that now rather than punt it a few months down the line.

> > We don't bother with special-casing local mismatch handling like this
> > for other features. I'd rather that:
> >
> > * On the boot CPU, prior to detecting secondaries, we can seed the usual
> >   pool with the RNG if the boot CPU has it.
> >
> > * Once secondaries are up, if the feature is present system-wide, we can
> >   make use of the feature as a system-wide feature. If not, we don't use
> >   the RNG.
>
> Unless I'm mis-reading things, there is not a setting for ARM64_CPUCAP_* that
> allows exactly this.  If I use ARM64_CPUCAP_SYSTEM_FEATURE, then the feature is
> not detected early enough for the boot cpu.

Early in the boot process you can use this_cpu_has_cap(). My suggestion
was to have an explicit point (e.g. somewhere in setup_arch(), or an
initcall), where we check that and seed entropy on the boot CPU if
possible.

> I can change this to ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE.  That way it is
> system-wide, and also detected early enough to be used for rand_initialize().
> However, it has the side effect that secondaries are not allowed to omit RNG if
> the boot cpu has RNG.

Can we refactor things so that early on (at rand_initialize() time), we
call a different arch helper, e.g. a new arch_get_early_random*()?

Then that could do the this_cpu_has_cap() check to initialize things,
and at runtime we can rely on the system-wide cap.

> Is there some setting that I've missed?  Is it ok to kick the problem down the
> road until someone actually builds mis-matched hardware?

As above, I think that we can be fairly certain we're going to encounter
such systems, and it's going to be more painful to retrofit support
later (e.g. as we'll have to backport that), so I'd rather we handle
that up-front.

Thanks,
Mark.

Patch

diff --git a/Documentation/arm64/cpu-feature-registers.rst b/Documentation/arm64/cpu-feature-registers.rst
index 2955287e9acc..78d6f5c6e824 100644
--- a/Documentation/arm64/cpu-feature-registers.rst
+++ b/Documentation/arm64/cpu-feature-registers.rst
@@ -117,6 +117,8 @@  infrastructure:
      +------------------------------+---------+---------+
      | Name                         |  bits   | visible |
      +------------------------------+---------+---------+
+     | RNDR                         | [63-60] |    y    |
+     +------------------------------+---------+---------+
      | TS                           | [55-52] |    y    |
      +------------------------------+---------+---------+
      | FHM                          | [51-48] |    y    |
diff --git a/arch/arm64/include/asm/archrandom.h b/arch/arm64/include/asm/archrandom.h
new file mode 100644
index 000000000000..e796a6de7421
--- /dev/null
+++ b/arch/arm64/include/asm/archrandom.h
@@ -0,0 +1,35 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARCHRANDOM_H
+#define _ASM_ARCHRANDOM_H
+
+#ifdef CONFIG_ARCH_RANDOM
+
+bool __must_check arch_get_random_long(unsigned long *v);
+bool __must_check arch_get_random_seed_long(unsigned long *v);
+
+static inline bool __must_check arch_get_random_int(unsigned int *v)
+{
+	unsigned long val;
+
+	if (arch_get_random_long(&val)) {
+		*v = val;
+		return true;
+	}
+
+	return false;
+}
+
+static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
+{
+	unsigned long val;
+
+	if (arch_get_random_seed_long(&val)) {
+		*v = val;
+		return true;
+	}
+
+	return false;
+}
+
+#endif /* CONFIG_ARCH_RANDOM */
+#endif /* _ASM_ARCHRANDOM_H */
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index ac1dbca3d0cd..1dd7644bc59a 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -54,7 +54,8 @@ 
 #define ARM64_WORKAROUND_1463225		44
 #define ARM64_WORKAROUND_CAVIUM_TX2_219_TVM	45
 #define ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM	46
+#define ARM64_HAS_RNG				47
 
-#define ARM64_NCAPS				47
+#define ARM64_NCAPS				48
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 6e919fafb43d..5e718f279469 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -365,6 +365,9 @@ 
 #define SYS_CTR_EL0			sys_reg(3, 3, 0, 0, 1)
 #define SYS_DCZID_EL0			sys_reg(3, 3, 0, 0, 7)
 
+#define SYS_RNDR_EL0			sys_reg(3, 3, 2, 4, 0)
+#define SYS_RNDRRS_EL0			sys_reg(3, 3, 2, 4, 1)
+
 #define SYS_PMCR_EL0			sys_reg(3, 3, 9, 12, 0)
 #define SYS_PMCNTENSET_EL0		sys_reg(3, 3, 9, 12, 1)
 #define SYS_PMCNTENCLR_EL0		sys_reg(3, 3, 9, 12, 2)
@@ -539,6 +542,7 @@ 
 			 ENDIAN_SET_EL1 | SCTLR_EL1_UCI  | SCTLR_EL1_RES1)
 
 /* id_aa64isar0 */
+#define ID_AA64ISAR0_RNDR_SHIFT		60
 #define ID_AA64ISAR0_TS_SHIFT		52
 #define ID_AA64ISAR0_FHM_SHIFT		48
 #define ID_AA64ISAR0_DP_SHIFT		44
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 80f459ad0190..456d5c461cbf 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -119,6 +119,7 @@  static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
  * sync with the documentation of the CPU feature register ABI.
  */
 static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RNDR_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
@@ -1565,6 +1566,18 @@  static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.min_field_value = 1,
 	},
+#endif
+#ifdef CONFIG_ARCH_RANDOM
+	{
+		.desc = "Random Number Generator",
+		.capability = ARM64_HAS_RNG,
+		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64ISAR0_EL1,
+		.field_pos = ID_AA64ISAR0_RNDR_SHIFT,
+		.sign = FTR_UNSIGNED,
+		.min_field_value = 1,
+	},
 #endif
 	{},
 };
diff --git a/arch/arm64/kernel/random.c b/arch/arm64/kernel/random.c
new file mode 100644
index 000000000000..e7ff29dd637c
--- /dev/null
+++ b/arch/arm64/kernel/random.c
@@ -0,0 +1,82 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Random number generation using ARMv8.5-RNG.
+ */
+
+#include <linux/random.h>
+#include <linux/ratelimit.h>
+#include <linux/printk.h>
+#include <linux/preempt.h>
+#include <asm/cpufeature.h>
+
+static inline bool has_random(void)
+{
+	/*
+	 * We "have" RNG if either
+	 * (1) every cpu in the system has RNG, or
+	 * (2) in a non-preemptible context, current cpu has RNG.
+	 *
+	 * Case 1 is the expected case when RNG is deployed, but
+	 * case 2 is present as a backup.  Case 2 has two effects:
+	 * (A) rand_initialize() is able to use the instructions
+	 * when present in the boot cpu, which happens before
+	 * secondary cpus are enabled and before features are
+	 * resolved for the full system.
+	 * (B) add_interrupt_randomness() is able to use the
+	 * instructions when present on the current cpu, in case
+	 * some big/little system only has RNG on big cpus.
+	 *
+	 * We can use __cpus_have_const_cap because we then fall
+	 * back to checking the current cpu.
+	 */
+	return __cpus_have_const_cap(ARM64_HAS_RNG) ||
+	       (!preemptible() && this_cpu_has_cap(ARM64_HAS_RNG));
+}
+
+bool arch_get_random_long(unsigned long *v)
+{
+	bool ok;
+
+	if (!has_random())
+		return false;
+
+	/*
+	 * Reads of RNDR set PSTATE.NZCV to 0b0000 on success,
+	 * and set PSTATE.NZCV to 0b0100 otherwise.
+	 */
+	asm volatile(
+		__mrs_s("%0", SYS_RNDR_EL0) "\n"
+	"	cset %w1, ne\n"
+	: "=r"(*v), "=r"(ok)
+	:
+	: "cc");
+
+	if (unlikely(!ok))
+		pr_warn_ratelimited("cpu%d: sys_rndr failed\n",
+				    read_cpuid_id());
+	return ok;
+}
+
+bool arch_get_random_seed_long(unsigned long *v)
+{
+	bool ok;
+
+	if (!has_random())
+		return false;
+
+	/*
+	 * Reads of RNDRRS set PSTATE.NZCV to 0b0000 on success,
+	 * and set PSTATE.NZCV to 0b0100 otherwise.
+	 */
+	asm volatile(
+		__mrs_s("%0", SYS_RNDRRS_EL0) "\n"
+	"	cset %w1, ne\n"
+	: "=r"(*v), "=r"(ok)
+	:
+	: "cc");
+
+	if (unlikely(!ok))
+		pr_warn_ratelimited("cpu%d: sys_rndrrs failed\n",
+				    read_cpuid_id());
+	return ok;
+}
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3f047afb982c..5bc88601f07b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1438,6 +1438,18 @@  config ARM64_PTR_AUTH
 
 endmenu
 
+menu "ARMv8.5 architectural features"
+
+config ARCH_RANDOM
+	bool "Enable support for random number generation"
+	default y
+	help
+	  Random number generation (part of the ARMv8.5 Extensions)
+	  provides a high bandwidth, cryptographically secure
+	  hardware random number generator.
+
+endmenu
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 478491f07b4f..a47c2b984da7 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -63,6 +63,7 @@  obj-$(CONFIG_CRASH_CORE)		+= crash_core.o
 obj-$(CONFIG_ARM_SDE_INTERFACE)		+= sdei.o
 obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
 obj-$(CONFIG_ARM64_PTR_AUTH)		+= pointer_auth.o
+obj-$(CONFIG_ARCH_RANDOM)		+= random.o
 
 obj-y					+= vdso/ probes/
 obj-$(CONFIG_COMPAT_VDSO)		+= vdso32/
diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
index df0fc997dc3e..f26a0a8cc0d0 100644
--- a/drivers/char/Kconfig
+++ b/drivers/char/Kconfig
@@ -539,7 +539,7 @@  endmenu
 
 config RANDOM_TRUST_CPU
 	bool "Trust the CPU manufacturer to initialize Linux's CRNG"
-	depends on X86 || S390 || PPC
+	depends on X86 || S390 || PPC || ARM64
 	default n
 	help
 	Assume that CPU manufacturer (e.g., Intel or AMD for RDSEED or
@@ -559,4 +559,4 @@  config RANDOM_TRUST_BOOTLOADER
 	device randomness. Say Y here to assume the entropy provided by the
 	booloader is trustworthy so it will be added to the kernel's entropy
 	pool. Otherwise, say N here so it will be regarded as device input that
-	only mixes the entropy pool.
\ No newline at end of file
+	only mixes the entropy pool.