
[v2,2/2] perf/x86/amd: Stop calling amd_pmu_cpu_reset() from amd_pmu_cpu_dead()

Message ID 20231026170330.4657-3-mario.limonciello@amd.com
State New
Series Fixes for s3 with parallel bootup

Commit Message

Mario Limonciello Oct. 26, 2023, 5:03 p.m. UTC
During suspend testing on a workstation CPU a preemption BUG was reported.

```
BUG: using smp_processor_id() in preemptible [00000000] code: rtcwake/2960
caller is amd_pmu_lbr_reset+0x19/0xc0
CPU: 104 PID: 2960 Comm: rtcwake Not tainted 6.6.0-rc6-00002-g3e2c7f3ac51f
Call Trace:
 <TASK>
 dump_stack_lvl+0x44/0x60
 check_preemption_disabled+0xce/0xf0
 ? __pfx_x86_pmu_dead_cpu+0x10/0x10
 amd_pmu_lbr_reset+0x19/0xc0
 ? __pfx_x86_pmu_dead_cpu+0x10/0x10
 amd_pmu_cpu_reset.constprop.0+0x51/0x60
 amd_pmu_cpu_dead+0x3e/0x90
 x86_pmu_dead_cpu+0x13/0x20
 cpuhp_invoke_callback+0x169/0x4b0
 ? __pfx_virtnet_cpu_dead+0x10/0x10
 __cpuhp_invoke_callback_range+0x76/0xe0
 _cpu_down+0x112/0x270
 freeze_secondary_cpus+0x8e/0x280
 suspend_devices_and_enter+0x342/0x900
 pm_suspend+0x2fd/0x690
 state_store+0x71/0xd0
 kernfs_fop_write_iter+0x128/0x1c0
 vfs_write+0x2db/0x400
 ksys_write+0x5f/0xe0
 do_syscall_64+0x59/0x90
 ? srso_alias_return_thunk+0x5/0x7f
 ? count_memcg_events.constprop.0+0x1a/0x30
 ? srso_alias_return_thunk+0x5/0x7f
 ? handle_mm_fault+0x1e9/0x340
 ? srso_alias_return_thunk+0x5/0x7f
 ? preempt_count_add+0x4d/0xa0
 ? srso_alias_return_thunk+0x5/0x7f
 ? up_read+0x38/0x70
 ? srso_alias_return_thunk+0x5/0x7f
 ? do_user_addr_fault+0x343/0x6b0
 ? srso_alias_return_thunk+0x5/0x7f
 ? exc_page_fault+0x74/0x170
 entry_SYSCALL_64_after_hwframe+0x6e/0xd8
RIP: 0033:0x7f32f8d14a77
Code: 10 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa
64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff
77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24
RSP: 002b:00007ffdc648de18 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f32f8d14a77
RDX: 0000000000000004 RSI: 000055b2fc2a5670 RDI: 0000000000000004
RBP: 000055b2fc2a5670 R08: 0000000000000000 R09: 000055b2fc2a5670
R10: 00007f32f8e1a2f0 R11: 0000000000000246 R12: 0000000000000004
R13: 000055b2fc2a2480 R14: 00007f32f8e16600 R15: 00007f32f8e15a00
 </TASK>
```

This bug shows that there is a mistake with the flow used for offlining
a CPU.  Calling amd_pmu_cpu_reset() from the dead callback is problematic
because this doesn't run on the actual CPU being offlined.  The intent of
the function is to reset MSRs local to that CPU.

Move the call into the dying callback which is actually run on the local
CPU.
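
For illustration, a simplified sketch of the path the trace points at: amd_pmu_cpu_reset() ends up in amd_pmu_lbr_reset(), which acts on whatever CPU it happens to run on. This is paraphrased from arch/x86/events/amd/lbr.c; the MSR names and struct fields below are from memory and are only meant to show the per-CPU nature of the work, not to reproduce the function verbatim.

```
/* Illustrative sketch, not verbatim kernel code (kernel context assumed). */
static void amd_pmu_lbr_reset_sketch(void)
{
	/*
	 * Per-CPU access resolves the current CPU id; with
	 * CONFIG_DEBUG_PREEMPT this is what fires the
	 * check_preemption_disabled() splat when the dead callback
	 * runs preemptible on the control CPU.
	 */
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	int i;

	if (!x86_pmu.lbr_nr)
		return;

	/*
	 * These MSR writes hit whichever CPU this code runs on, which
	 * in the dead callback is not the CPU being offlined.
	 */
	for (i = 0; i < x86_pmu.lbr_nr; i++) {
		wrmsrl(MSR_AMD_SAMP_BR_FROM + i * 2, 0);
		wrmsrl(MSR_AMD_SAMP_BR_FROM + i * 2 + 1, 0);
	}

	cpuc->last_task_ctx = NULL;
	cpuc->last_log_id = 0;
	wrmsrl(MSR_AMD64_LBR_SELECT, 0);
}
```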

Cc: stable@vger.kernel.org # 6.1+
Fixes: ca5b7c0d9621 ("perf/x86/amd/lbr: Add LbrExtV2 branch record support")
Suggested-by: Sandipan Das <sandipan.das@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v1->v2:
 * Add more of the trace
 * Explain root cause better
 * Adjust solution to fix real root cause
---
 arch/x86/events/amd/core.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

Comments

Thomas Gleixner Oct. 27, 2023, 9:47 p.m. UTC | #1
On Thu, Oct 26 2023 at 12:03, Mario Limonciello wrote:

> During suspend testing on a workstation CPU a preemption BUG was
> reported.

How is this related to a workstation CPU? Laptop CPUs and server CPUs
are magically not affected, right?

Also how is this related to suspend?

This clearly affects any CPU down operation whether in the context of
suspend or initiated via sysfs, no?

Just because you observed it during suspend testing does not magically
make it a suspend-related problem...

> BUG: using smp_processor_id() in preemptible [00000000] code: rtcwake/2960
> caller is amd_pmu_lbr_reset+0x19/0xc0
> CPU: 104 PID: 2960 Comm: rtcwake Not tainted 6.6.0-rc6-00002-g3e2c7f3ac51f
> Call Trace:
>  <TASK>
>  dump_stack_lvl+0x44/0x60
>  check_preemption_disabled+0xce/0xf0
>  ? __pfx_x86_pmu_dead_cpu+0x10/0x10
>  amd_pmu_lbr_reset+0x19/0xc0
>  ? __pfx_x86_pmu_dead_cpu+0x10/0x10
>  amd_pmu_cpu_reset.constprop.0+0x51/0x60
>  amd_pmu_cpu_dead+0x3e/0x90
>  x86_pmu_dead_cpu+0x13/0x20
>  cpuhp_invoke_callback+0x169/0x4b0
>  ? __pfx_virtnet_cpu_dead+0x10/0x10
>  __cpuhp_invoke_callback_range+0x76/0xe0
>  _cpu_down+0x112/0x270
>  freeze_secondary_cpus+0x8e/0x280
>  suspend_devices_and_enter+0x342/0x900
>  pm_suspend+0x2fd/0x690
>  state_store+0x71/0xd0
>  kernfs_fop_write_iter+0x128/0x1c0
>  vfs_write+0x2db/0x400
>  ksys_write+0x5f/0xe0
>  do_syscall_64+0x59/0x90
>  ? srso_alias_return_thunk+0x5/0x7f
>  ? count_memcg_events.constprop.0+0x1a/0x30
>  ? srso_alias_return_thunk+0x5/0x7f
>  ? handle_mm_fault+0x1e9/0x340
>  ? srso_alias_return_thunk+0x5/0x7f
>  ? preempt_count_add+0x4d/0xa0
>  ? srso_alias_return_thunk+0x5/0x7f
>  ? up_read+0x38/0x70
>  ? srso_alias_return_thunk+0x5/0x7f
>  ? do_user_addr_fault+0x343/0x6b0
>  ? srso_alias_return_thunk+0x5/0x7f
>  ? exc_page_fault+0x74/0x170
>  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
> RIP: 0033:0x7f32f8d14a77
> Code: 10 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa
> 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff
> 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24
> RSP: 002b:00007ffdc648de18 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
> RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f32f8d14a77
> RDX: 0000000000000004 RSI: 000055b2fc2a5670 RDI: 0000000000000004
> RBP: 000055b2fc2a5670 R08: 0000000000000000 R09: 000055b2fc2a5670
> R10: 00007f32f8e1a2f0 R11: 0000000000000246 R12: 0000000000000004
> R13: 000055b2fc2a2480 R14: 00007f32f8e16600 R15: 00007f32f8e15a00
>  </TASK>

How much of that backtrace is actually substantial information?

At max 5 lines out of ~50. See:

  https://www.kernel.org/doc/html/latest/process/submitting-patches.html#backtraces
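
Trimmed to the substantive frames per that guideline, the report above
boils down to roughly:

```
BUG: using smp_processor_id() in preemptible [00000000] code: rtcwake/2960
caller is amd_pmu_lbr_reset+0x19/0xc0
 amd_pmu_cpu_reset.constprop.0+0x51/0x60
 amd_pmu_cpu_dead+0x3e/0x90
 x86_pmu_dead_cpu+0x13/0x20
 cpuhp_invoke_callback+0x169/0x4b0
```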

> This bug shows that there is a mistake with the flow used for offlining

This bug shows nothing but a calltrace. Please explain the context and
the failure in coherent sentences. The backtrace is just for
illustration.

> a CPU.  Calling amd_pmu_cpu_reset() from the dead callback is
> problematic

It's not problematic. It's simply wrong.

> because this doesn't run on the actual CPU being offlined.  The intent of
> the function is to reset MSRs local to that CPU.
>
> Move the call into the dying callback which is actually run on the local
> CPU.

...

> +static void amd_pmu_cpu_dying(int cpu)
> +{
> +	amd_pmu_cpu_reset(cpu);
> +}

You clearly can spare that wrapper which wraps a function with the signature

     void fn(int)

into a function with the signature

     void fn(int)

by just assigning amd_pmu_cpu_reset() to the cpu_dying callback, no?
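
A minimal sketch of that simplification, assuming amd_pmu_cpu_reset()
keeps its current void (int cpu) signature and is already visible at the
point in core.c where the amd_pmu structure is defined:

```
	/* Sketch only: drop the wrapper and wire the reset in directly. */
	.cpu_prepare		= amd_pmu_cpu_prepare,
	.cpu_starting		= amd_pmu_cpu_starting,
	.cpu_dying		= amd_pmu_cpu_reset,
	.cpu_dead		= amd_pmu_cpu_dead,
```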

Thanks,

        tglx

Patch

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index e24976593a29..4ec6d3ece07d 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -598,13 +598,17 @@  static void amd_pmu_cpu_starting(int cpu)
 	cpuc->amd_nb->refcnt++;
 }
 
+static void amd_pmu_cpu_dying(int cpu)
+{
+	amd_pmu_cpu_reset(cpu);
+}
+
 static void amd_pmu_cpu_dead(int cpu)
 {
 	struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu);
 
 	kfree(cpuhw->lbr_sel);
 	cpuhw->lbr_sel = NULL;
-	amd_pmu_cpu_reset(cpu);
 
 	if (!x86_pmu.amd_nb_constraints)
 		return;
@@ -1270,6 +1274,7 @@  static __initconst const struct x86_pmu amd_pmu = {
 
 	.cpu_prepare		= amd_pmu_cpu_prepare,
 	.cpu_starting		= amd_pmu_cpu_starting,
+	.cpu_dying		= amd_pmu_cpu_dying,
 	.cpu_dead		= amd_pmu_cpu_dead,
 
 	.amd_nb_constraints	= 1,
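
For context, a rough sketch of how the x86 PMU core wires these callbacks into CPU hotplug, paraphrased from arch/x86/events/core.c (state names and call sites are from memory and may differ slightly between kernel versions). The STARTING/DYING state runs on the CPU going down with interrupts disabled, while the PREPARE/DEAD state runs afterwards on a control CPU in preemptible context, which is why the MSR reset belongs in cpu_dying while the kfree() can stay in cpu_dead.

```
/* Paraphrased sketch, not verbatim kernel code. */
static int x86_pmu_dying_cpu(unsigned int cpu)
{
	/* Invoked on the outgoing CPU itself, IRQs off: safe for
	 * amd_pmu_cpu_reset() to touch CPU-local MSRs. */
	if (x86_pmu.cpu_dying)
		x86_pmu.cpu_dying(cpu);
	return 0;
}

static int x86_pmu_dead_cpu(unsigned int cpu)
{
	/* Invoked later on a control CPU, preemptible: fine for
	 * freeing per-CPU bookkeeping, wrong for local MSR access. */
	if (x86_pmu.cpu_dead)
		x86_pmu.cpu_dead(cpu);
	return 0;
}

/* Registration, roughly as done in init_hw_perf_events(); the function
 * name here is hypothetical, only the states and callbacks matter. */
static int __init sketch_register_pmu_hotplug(void)
{
	cpuhp_setup_state(CPUHP_PERF_X86_PREPARE, "perf/x86:prepare",
			  x86_pmu_prepare_cpu, x86_pmu_dead_cpu);
	cpuhp_setup_state(CPUHP_AP_PERF_X86_STARTING, "perf/x86:starting",
			  x86_pmu_starting_cpu, x86_pmu_dying_cpu);
	return 0;
}
```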