[v6,13/17] x86: use fallback for random_get_entropy() instead of zero

Message ID 20220423212623.1957011-14-Jason@zx2c4.com
State Superseded
Series archs/random: fallback to best raw ktime when no cycle counter

Commit Message

Jason A. Donenfeld April 23, 2022, 9:26 p.m. UTC
In the event that random_get_entropy() can't access a cycle counter or
similar, falling back to returning 0 is really not the best we can do.
Instead, at least calling random_get_entropy_fallback() would be
preferable, because that always needs to return _something_, even
falling back to jiffies eventually. It's not as though
random_get_entropy_fallback() is super high precision or guaranteed to
be entropic, but basically anything that's not zero all the time is
better than returning zero all the time.
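
For illustration, a minimal sketch of the kind of fallback meant here
(hypothetical shape only; the actual helper is introduced earlier in this
series and may read a better raw clock before resorting to jiffies):

    /*
     * Hypothetical sketch, not the helper from this series: the point
     * is that even a coarse, always-available counter such as jiffies
     * is better than returning a constant zero.
     */
    static inline unsigned long random_get_entropy_sketch(void)
    {
            return (unsigned long)jiffies;  /* last-resort tick counter */
    }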

If CONFIG_X86_TSC=n, then it's possible that we're running on a 486 with
no RDTSC, so we only need the fallback code for that case. While we're
at it, fix up both the new function and the get_cycles() function from
which it's derived to use cpu_feature_enabled() rather than boot_cpu_has().

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 arch/x86/include/asm/timex.h | 10 ++++++++++
 arch/x86/include/asm/tsc.h   |  4 ++--
 2 files changed, 12 insertions(+), 2 deletions(-)

Comments

Thomas Gleixner April 25, 2022, 12:35 p.m. UTC | #1
On Sat, Apr 23 2022 at 23:26, Jason A. Donenfeld wrote:

Please follow the guidelines of the tip maintainers when touching x86
code. https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#patch-subject

> In the event that random_get_entropy() can't access a cycle counter or
> similar, falling back to returning 0 is really not the best we can do.
> Instead, at least calling random_get_entropy_fallback() would be
> preferable, because that always needs to return _something_, even
> falling back to jiffies eventually. It's not as though
> random_get_entropy_fallback() is super high precision or guaranteed to
> be entropic, but basically anything that's not zero all the time is
> better than returning zero all the time.
>
> If CONFIG_X86_TSC=n, then it's possible that we're running on a 486
> with no RDTSC, so we only need the fallback code for that case.

There are also 586 CPUs which lack TSC.

> While we're at it, fix up both the new function and the get_cycles()
> function from which it's derived to use cpu_feature_enabled() rather
> than boot_cpu_has().

Lots of 'we' here. We are not doing anything.

https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#changelog

> +static inline unsigned long random_get_entropy(void)
> +{
> +#ifndef CONFIG_X86_TSC
> +	if (!cpu_feature_enabled(X86_FEATURE_TSC))
> +		return random_get_entropy_fallback();
> +#endif

Please get rid of this ifdeffery. While you are right that anything
with CONFIG_X86_TSC=y should have a TSC, there is virt ...

cpu_feature_enabled() is runtime patched and only evaluated before
alternative patching, so the win of this ifdef is marginal, if even
noticeable.

We surely can think about making TSC mandatory, but not selectively in a
particular context.

Thanks,

        tglx
Jason A. Donenfeld April 25, 2022, 1:41 p.m. UTC | #2
Hi Thomas,

On Mon, Apr 25, 2022 at 02:35:39PM +0200, Thomas Gleixner wrote:
> On Sat, Apr 23 2022 at 23:26, Jason A. Donenfeld wrote:
> 
> Please follow the guidelines of the tip maintainers when touching x86
> code. https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#patch-subject
> https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#changelog

Will fix up the message and topic.

> > In the event that random_get_entropy() can't access a cycle counter or
> > similar, falling back to returning 0 is really not the best we can do.
> > Instead, at least calling random_get_entropy_fallback() would be
> > preferable, because that always needs to return _something_, even
> > falling back to jiffies eventually. It's not as though
> > random_get_entropy_fallback() is super high precision or guaranteed to
> > be entropic, but basically anything that's not zero all the time is
> > better than returning zero all the time.
> >
> > If CONFIG_X86_TSC=n, then it's possible that we're running on a 486
> > with no RDTSC, so we only need the fallback code for that case.
> 
> There are also 586 CPUs which lack TSC.

Will note this in the rewritten commit message.

> > +static inline unsigned long random_get_entropy(void)
> > +{
> > +#ifndef CONFIG_X86_TSC
> > +	if (!cpu_feature_enabled(X86_FEATURE_TSC))
> > +		return random_get_entropy_fallback();
> > +#endif
> 
> Please get rid of this ifdeffery. While you are right that anything
> with CONFIG_X86_TSC=y should have a TSC, there is virt ...
> 
> cpu_feature_enabled() is runtime patched and only evaluated before
> alternative patching, so the win of this ifdef is marginal, if even
> noticeable.
> 
> We surely can think about making TSC mandatory, but not selectively in a
> particular context.

This would be a regression of sorts from the current code, which reads:

    static inline cycles_t get_cycles(void)
    {
    #ifndef CONFIG_X86_TSC
            if (!boot_cpu_has(X86_FEATURE_TSC))
                    return 0;
    #endif
            return rdtsc();
    }

So my ifdef is just copying that one. I can make it into an `if
(IS_ENABLED(CONFIG_X86_TSC))` thing, if you'd prefer that. But on
systems where CONFIG_X86_TSC=n, including the extra code there seems
kind of undesirable. Consider the current interrupt handler code in
random.c on x86, whose first lines (usually) compile to:

.text:0000000000001A70 add_interrupt_randomness proc near
.text:0000000000001A70                 movsxd  rcx, edi
.text:0000000000001A73                 rdtsc
.text:0000000000001A75                 shl     rdx, 20h
.text:0000000000001A79                 mov     rdi, [rsp+0]
.text:0000000000001A7D                 or      rax, rdx
.text:0000000000001A80                 mov     rdx, offset irq_randomness
.text:0000000000001A87                 mov     rsi, gs:__irq_regs
.text:0000000000001A8F                 add     rdx, gs:this_cpu_off

I'm not sure what advantage we'd get by changing that from the ifdefs to
unconditionally including that if statement. Your argument about virt
can't be valid, since the current get_cycles() code uses the ifdefs. And
if you're arguing from the basis that CONFIG_X86_TSC=y is not reliable,
then why does CONFIG_X86_TSC even exist in the first place?

Rather, the stronger argument you could make would be that moving from
boot_cpu_has() to cpu_feature_enabled() (Borislav's suggestion) means
that there's now a static branch here, so the actual cost to the
assembly is a few meaningless nops, which you don't think would play a
part in quantizing when rdtsc is called. If this is actually what you
have in mind, and you find that ifdef ugly enough that this would be
worth it, I can understand, and I'll make that change for v7. But I
don't understand you to currently be making that argument, so I'm not
quite sure what your position is.

Could you clarify what you mean here and what your expectations are? And
how does that point of view tie into the fact that get_cycles()
currently uses the ifdef?

Thanks,
Jason
Jason A. Donenfeld April 25, 2022, 1:51 p.m. UTC | #3
On Mon, Apr 25, 2022 at 03:41:00PM +0200, Jason A. Donenfeld wrote:
> .text:0000000000001A70 add_interrupt_randomness proc near
> .text:0000000000001A70                 movsxd  rcx, edi
> .text:0000000000001A73                 rdtsc
> .text:0000000000001A75                 shl     rdx, 20h
> .text:0000000000001A79                 mov     rdi, [rsp+0]
> .text:0000000000001A7D                 or      rax, rdx
> .text:0000000000001A80                 mov     rdx, offset irq_randomness
> .text:0000000000001A87                 mov     rsi, gs:__irq_regs
> .text:0000000000001A8F                 add     rdx, gs:this_cpu_off

For context, here's what your suggested change looks like:

.text:0000000000001AF0 add_interrupt_randomness proc near
.text:0000000000001AF0                 push    rbx
.text:0000000000001AF1                 movsxd  rbx, edi
.text:0000000000001AF4                 jmp     loc_25AA
.text:0000000000001AF9 loc_1AF9:                               ; CODE XREF: add_interrupt_randomness+AC1↓j
.text:0000000000001AF9                 rdtsc
.text:0000000000001AFB                 shl     rdx, 20h
.text:0000000000001AFF                 or      rax, rdx
.text:0000000000001B02
.text:0000000000001B02 loc_1B02:                               ; CODE XREF: add_interrupt_randomness+12C↓j
.text:0000000000001B02                 mov     rdx, offset irq_randomness
.text:0000000000001B09                 mov     rcx, gs:__irq_regs
.text:0000000000001B11                 add     rdx, gs:this_cpu_off
[...]
.altinstr_aux:00000000000025AA loc_25AA:                               ; CODE XREF: add_interrupt_randomness+4↑j
.altinstr_aux:00000000000025AA                 test    byte ptr cs:boot_cpu_data+20h, 10h
.altinstr_aux:00000000000025B1                 jnz     loc_1AF9
.altinstr_aux:00000000000025B7                 jmp     loc_1C17

So an extra push, and then that jmp gets nop'd out (I hope). Plus the
altinstr area. I guess that's not awful, but I fail to see what we
actually gain here over the existing code, which doesn't bother messing
with this at runtime when the kernel builder opts out. The ifdef has
always been there; what's the reason for removing it now in 2022?
Virtualization cannot be it.

Jason
Thomas Gleixner April 25, 2022, 5:03 p.m. UTC | #4
On Mon, Apr 25 2022 at 15:41, Jason A. Donenfeld wrote:
> On Mon, Apr 25, 2022 at 02:35:39PM +0200, Thomas Gleixner wrote:
>> > +static inline unsigned long random_get_entropy(void)
>> > +{
>> > +#ifndef CONFIG_X86_TSC
>> > +	if (!cpu_feature_enabled(X86_FEATURE_TSC))
>> > +		return random_get_entropy_fallback();
>> > +#endif
>> 
>> Please get rid of this ifdeffery. While you are right that anything
>> with CONFIG_X86_TSC=y should have a TSC, there is virt ...
>> 
>> cpu_feature_enabled() is runtime patched and only evaluated before
>> alternative patching, so the win of this ifdef is marginal, if even
>> noticeable.
>> 
>> We surely can think about making TSC mandatory, but not selectively in a
>> particular context.
>
> This would be a regression of sorts from the current code, which reads:
>
>     static inline cycles_t get_cycles(void)
>     {
>     #ifndef CONFIG_X86_TSC
>             if (!boot_cpu_has(X86_FEATURE_TSC))
>                     return 0;
>     #endif
>             return rdtsc();
>     }

Bah. Indeed. Misread the patch, but yes, while you are at it making that
cpu_feature_enabled(), change the config thing to IS_ENABLED() too.

Thanks,

        tglx
Jason A. Donenfeld April 25, 2022, 5:24 p.m. UTC | #5
Hey Thomas,

On Mon, Apr 25, 2022 at 07:03:54PM +0200, Thomas Gleixner wrote:
> Bah. Indeed. Misread the patch, but yes, while you are at it making that
> cpu_feature_enabled(), change the config thing to IS_ENABLED() too.

Alright, will do for v7. I also just confirmed the !IS_ENABLED() way
produces the same code as the #ifndef.
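
For reference, that variant would presumably take this shape (a sketch
with the same semantics as the #ifndef version in the patch below):

    static inline unsigned long random_get_entropy(void)
    {
            if (!IS_ENABLED(CONFIG_X86_TSC) &&
                !cpu_feature_enabled(X86_FEATURE_TSC))
                    return random_get_entropy_fallback();
            return rdtsc();
    }

Since IS_ENABLED(CONFIG_X86_TSC) is a compile-time constant, the whole
condition folds away when CONFIG_X86_TSC=y and only the bare rdtsc()
remains, which is why the generated code comes out identical.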

Jason

Patch

diff --git a/arch/x86/include/asm/timex.h b/arch/x86/include/asm/timex.h
index a4a8b1b16c0c..2b841c39d876 100644
--- a/arch/x86/include/asm/timex.h
+++ b/arch/x86/include/asm/timex.h
@@ -5,6 +5,16 @@ 
 #include <asm/processor.h>
 #include <asm/tsc.h>
 
+static inline unsigned long random_get_entropy(void)
+{
+#ifndef CONFIG_X86_TSC
+	if (!cpu_feature_enabled(X86_FEATURE_TSC))
+		return random_get_entropy_fallback();
+#endif
+	return rdtsc();
+}
+#define random_get_entropy random_get_entropy
+
 /* Assume we use the PIT time source for the clock tick */
 #define CLOCK_TICK_RATE		PIT_TICK_RATE
 
diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
index 01a300a9700b..192ea48ec80b 100644
--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -21,12 +21,12 @@  extern void disable_TSC(void);
 static inline cycles_t get_cycles(void)
 {
 #ifndef CONFIG_X86_TSC
-	if (!boot_cpu_has(X86_FEATURE_TSC))
+	if (!cpu_feature_enabled(X86_FEATURE_TSC))
 		return 0;
 #endif
-
 	return rdtsc();
 }
+#define get_cycles get_cycles
 
 extern struct system_counterval_t convert_art_to_tsc(u64 art);
 extern struct system_counterval_t convert_art_ns_to_tsc(u64 art_ns);