[v3,1/2] x86/AMD: Fix cpu_llc_id for AMD Fam17h systems

Message ID 20161107155626.rjapdlgiredm7uvh@pd.tnic
State New

Commit Message

Borislav Petkov Nov. 7, 2016, 3:56 p.m. UTC
On Mon, Nov 07, 2016 at 03:07:46PM +0100, Ingo Molnar wrote:
>  - cache domains might be seriously mixed up, resulting in a serious drop in
>    performance.
>
>  - or domains might be partitioned 'wrong' but not catastrophically
>    wrong, resulting in a minor performance drop (if at all)

Something between the two.

Here's some debugging output from set_cpu_sibling_map():

[    0.202033] smpboot: set_cpu_sibling_map: cpu: 0, has_smt: 0, has_mp: 1
[    0.202043] smpboot: set_cpu_sibling_map: first loop, llc(this): 65528, o: 0, llc(o): 65528
[    0.202058] smpboot: set_cpu_sibling_map: first loop, link mask smt

so we link it into the SMT mask even if has_smt is off.

[    0.202067] smpboot: set_cpu_sibling_map: first loop, link mask llc
[    0.202077] smpboot: set_cpu_sibling_map: second loop, llc(this): 65528, o: 0, llc(o): 65528
[    0.202091] smpboot: set_cpu_sibling_map: second loop, link mask die

I've attached the debug diff.

And since llc(o), i.e. the cpu_llc_id of the *other* CPU in the
loops in set_cpu_sibling_map(), underflows, we're generating the funniest
thread_siblings masks and then when I run 8 threads of nbench, they get
spread around the LLC domains in a very strange pattern which doesn't
give the normal scheduling spread one would expect for performance.
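
(As an aside: 65528 is 0xFFF8, i.e. a small negative intermediate value
wrapped into the u16 cpu_llc_id. Below is a minimal userspace sketch of
that wrap-around and of why the llc match then links such CPUs anyway;
the "- 8" delta is purely illustrative, and the check is paraphrased
from memory of the v4.9-era match_llc() in smpboot.c.)

#include <stdint.h>
#include <stdio.h>

#define BAD_APICID 0xFFFFu	/* the only llc id the match rejects */

int main(void)
{
	/*
	 * cpu_llc_id is a u16: a negative intermediate result wraps
	 * around instead of staying negative, so 0 - 8 becomes 65528.
	 */
	uint16_t llc_this = (uint16_t)(0 - 8);	/* 0xFFF8 == 65528 */
	uint16_t llc_o    = (uint16_t)(0 - 8);	/* same bogus value on CPU o */

	printf("llc(this): %u, llc(o): %u\n",
	       (unsigned)llc_this, (unsigned)llc_o);

	/*
	 * match_llc()-style check: 0xFFF8 is not BAD_APICID, so two
	 * CPUs carrying the same bogus value "match" and get linked
	 * into one llc_shared_mask.
	 */
	if (llc_this != BAD_APICID && llc_this == llc_o)
		printf("wrongly declared LLC siblings\n");

	return 0;
}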

And this is just one workload - I can't imagine what else might be
influenced by this funkiness.

Oh and other things like EDAC use cpu_llc_id so they will be b0rked too.
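
(For reference, the usual route from EDAC and the MCE decoding code to
this value on kernels of that era is amd_get_nb_id(), which is nothing
more than a read of the per-CPU variable; quoted from memory, so treat
the exact shape as approximate:

u16 amd_get_nb_id(int cpu)
{
	return per_cpu(cpu_llc_id, cpu);
}
EXPORT_SYMBOL_GPL(amd_get_nb_id);

meaning a bogus cpu_llc_id feeds straight into their node decoding.)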

So we absolutely need to fix that cpu_llc_id thing.
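
(The fix in [v3,1/2] above moves the Fam17h derivation over to the APIC
ID, since the LLC, i.e. the L3, sits at the core complex (CCX) level
there. A sketch of the resulting amd_get_topology() logic, reconstructed
from memory, so details may differ from the patch as applied:

	/* Multiple LLCs can only exist if an L3 cache is present. */
	if (cpuid_edx(0x80000006)) {
		if (c->x86 == 0x17) {
			/*
			 * LLC is at the core complex level.
			 * Core complex ID is ApicId[3].
			 */
			per_cpu(cpu_llc_id, cpu) = c->apicid >> 3;
		} else {
			/* LLC is at the node level. */
			per_cpu(cpu_llc_id, cpu) = node_id;
		}
	}

so each core complex, up to 8 threads, gets its own llc id instead of
the underflowed one.)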

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
--

Comments

Ingo Molnar Nov. 8, 2016, 6:31 a.m. UTC | #1
* Borislav Petkov <bp@suse.de> wrote:

> On Mon, Nov 07, 2016 at 03:07:46PM +0100, Ingo Molnar wrote:
> >  - cache domains might be seriously mixed up, resulting in a serious drop in
> >    performance.
> >
> >  - or domains might be partitioned 'wrong' but not catastrophically
> >    wrong, resulting in a minor performance drop (if at all)
> 
> Something between the two.
> 
> Here's some debugging output from set_cpu_sibling_map():
> 
> [    0.202033] smpboot: set_cpu_sibling_map: cpu: 0, has_smt: 0, has_mp: 1
> [    0.202043] smpboot: set_cpu_sibling_map: first loop, llc(this): 65528, o: 0, llc(o): 65528
> [    0.202058] smpboot: set_cpu_sibling_map: first loop, link mask smt
> 
> so we link it into the SMT mask even if has_smt is off.
> 
> [    0.202067] smpboot: set_cpu_sibling_map: first loop, link mask llc
> [    0.202077] smpboot: set_cpu_sibling_map: second loop, llc(this): 65528, o: 0, llc(o): 65528
> [    0.202091] smpboot: set_cpu_sibling_map: second loop, link mask die
> 
> I've attached the debug diff.
> 
> And since llc(o), i.e. the cpu_llc_id of the *other* CPU in the
> loops in set_cpu_sibling_map(), underflows, we're generating the funniest
> thread_siblings masks and then when I run 8 threads of nbench, they get
> spread around the LLC domains in a very strange pattern which doesn't
> give the normal scheduling spread one would expect for performance.
> 
> And this is just one workload - I can't imagine what else might be
> influenced by this funkiness.
> 
> Oh and other things like EDAC use cpu_llc_id so they will be b0rked too.

So the point I tried to make is that to people doing -stable backporting decisions 
this description you just gave is much more valuable than the previous changelog.

> So we absolutely need to fix that cpu_llc_id thing.

Absolutely!

Thanks,

	Ingo

Ingo Molnar Nov. 8, 2016, 10:29 a.m. UTC | #2
* Borislav Petkov <bp@suse.de> wrote:

> On Tue, Nov 08, 2016 at 07:31:45AM +0100, Ingo Molnar wrote:
> > So the point I tried to make is that to people doing -stable
> > backporting decisions this description you just gave is much more
> > valuable than the previous changelog.
> 
> Ok, how's that below? I've integrated the gist of it in the commit message:

Looks good to me! Please also update the second patch with meta information and I 
can apply them.

Thanks,

	Ingo

Borislav Petkov Nov. 8, 2016, 11:01 a.m. UTC | #3
On Tue, Nov 08, 2016 at 11:29:09AM +0100, Ingo Molnar wrote:
> Looks good to me! Please also update the second patch with meta
> information and I can apply them.

What exactly do you want to have there? I think the commit message is
pretty explanatory.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
--

Ingo Molnar Nov. 8, 2016, 2:54 p.m. UTC | #4
* Borislav Petkov <bp@suse.de> wrote:

> On Tue, Nov 08, 2016 at 11:29:09AM +0100, Ingo Molnar wrote:
> > Looks good to me! Please also update the second patch with meta
> > information and I can apply them.
> 
> What exactly do you want to have there? I think the commit message is
> pretty explanatory.

This one you gave:

  > No effect on current hw - just a cleanup.

Nothing in the existing changelog (including the title) explained that detail.

Thanks,

	Ingo

Patch

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 601d2b331350..5974098d8266 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -506,6 +506,9 @@ void set_cpu_sibling_map(int cpu)
 	struct cpuinfo_x86 *o;
 	int i, threads;
 
+	pr_info("%s: cpu: %d, has_smt: %d, has_mp: %d\n",
+		__func__, cpu, has_smt, has_mp);
+
 	cpumask_set_cpu(cpu, cpu_sibling_setup_mask);
 
 	if (!has_mp) {
@@ -519,11 +522,19 @@ void set_cpu_sibling_map(int cpu)
 	for_each_cpu(i, cpu_sibling_setup_mask) {
 		o = &cpu_data(i);
 
-		if ((i == cpu) || (has_smt && match_smt(c, o)))
+		pr_info("%s: first loop, llc(this): %d, o: %d, llc(o): %d\n",
+			__func__, per_cpu(cpu_llc_id, cpu),
+			o->cpu_index, per_cpu(cpu_llc_id, o->cpu_index));
+
+		if ((i == cpu) || (has_smt && match_smt(c, o))) {
+			pr_info("%s: first loop, link mask smt\n", __func__);
 			link_mask(topology_sibling_cpumask, cpu, i);
+		}
 
-		if ((i == cpu) || (has_mp && match_llc(c, o)))
+		if ((i == cpu) || (has_mp && match_llc(c, o))) {
+			pr_info("%s: first loop, link mask llc\n", __func__);
 			link_mask(cpu_llc_shared_mask, cpu, i);
+		}
 
 	}
 
@@ -534,7 +545,12 @@ void set_cpu_sibling_map(int cpu)
 	for_each_cpu(i, cpu_sibling_setup_mask) {
 		o = &cpu_data(i);
 
+		pr_info("%s: second loop, llc(this): %d, o: %d, llc(o): %d\n",
+			__func__, per_cpu(cpu_llc_id, cpu),
+			o->cpu_index, per_cpu(cpu_llc_id, o->cpu_index));
+
 		if ((i == cpu) || (has_mp && match_die(c, o))) {
+			pr_info("%s: second loop, link mask die\n", __func__);
 			link_mask(topology_core_cpumask, cpu, i);
 
 			/*