From patchwork Mon Nov 7 15:56:26 2016
X-Patchwork-Submitter: Borislav Petkov
X-Patchwork-Id: 81117
Date: Mon, 7 Nov 2016 16:56:26 +0100
From: Borislav Petkov
To: Ingo Molnar
Cc: x86@kernel.org, Yazen Ghannam, linux-kernel@vger.kernel.org,
 Peter Zijlstra
Subject: Re: [PATCH v3 1/2] x86/AMD: Fix cpu_llc_id for AMD Fam17h systems
Message-ID: <20161107155626.rjapdlgiredm7uvh@pd.tnic>
References: <1478019063-2632-1-git-send-email-Yazen.Ghannam@amd.com>
 <20161102201321.slgzk2x2ya4jzfax@pd.tnic>
 <20161107073121.GB26938@gmail.com>
 <20161107092031.alxfkr6rpctodbdk@pd.tnic>
 <20161107140746.GA20626@gmail.com>
In-Reply-To: <20161107140746.GA20626@gmail.com>
User-Agent: NeoMutt/20161014 (1.7.1)

On Mon, Nov 07, 2016 at 03:07:46PM +0100, Ingo Molnar wrote:
> - cache domains might be seriously mixed up, resulting in a serious drop in
>   performance.
>
> - or domains might be partitioned 'wrong' but not catastrophically
>   wrong, resulting in a minor performance drop (if at all)

Something between the two. Here's some debugging output from
set_cpu_sibling_map():

[    0.202033] smpboot: set_cpu_sibling_map: cpu: 0, has_smt: 0, has_mp: 1
[    0.202043] smpboot: set_cpu_sibling_map: first loop, llc(this): 65528, o: 0, llc(o): 65528
[    0.202058] smpboot: set_cpu_sibling_map: first loop, link mask smt

so we link it into the SMT mask (the i == cpu self-link) even if has_smt
is off.

[    0.202067] smpboot: set_cpu_sibling_map: first loop, link mask llc
[    0.202077] smpboot: set_cpu_sibling_map: second loop, llc(this): 65528, o: 0, llc(o): 65528
[    0.202091] smpboot: set_cpu_sibling_map: second loop, link mask die
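
Side note on where that 65528 comes from: cpu_llc_id is a 16-bit per-CPU
variable, so an id computation that goes below zero wraps around instead
of going negative. A minimal userspace sketch of just that wraparound
(the 0 - 8 arithmetic below is made up for illustration, it is not the
actual Fam17h derivation):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* made-up numbers; only the 16-bit wraparound matters here */
        int id = 0 - 8;                      /* some id computation gone negative */
        uint16_t cpu_llc_id = (uint16_t)id;  /* stored as u16, like the kernel's per-CPU cpu_llc_id */

        printf("%u\n", cpu_llc_id);          /* prints 65528, i.e. 0xfff8 */
        return 0;
}

(u16)-8 == 0xfff8 == 65528, which is exactly the llc value printed in the
log above.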
I've attached the debug diff. And since those llc(o) values, i.e. the
cpu_llc_id of the *other* CPU in the loops in set_cpu_sibling_map(),
underflow, we're generating the funniest thread_siblings masks, and then
when I run 8 threads of nbench, they get spread around the LLC domains in
a very strange pattern which doesn't give you the normal scheduling
spread one would expect for performance.

And this is just one workload - I can't imagine what else might be
influenced by this funkiness. Oh, and other things like EDAC use
cpu_llc_id, so they will be b0rked too.

So we absolutely need to fix that cpu_llc_id thing.

--
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 601d2b331350..5974098d8266 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -506,6 +506,9 @@ void set_cpu_sibling_map(int cpu)
 	struct cpuinfo_x86 *o;
 	int i, threads;
 
+	pr_info("%s: cpu: %d, has_smt: %d, has_mp: %d\n",
+		__func__, cpu, has_smt, has_mp);
+
 	cpumask_set_cpu(cpu, cpu_sibling_setup_mask);
 
 	if (!has_mp) {
@@ -519,11 +522,19 @@ void set_cpu_sibling_map(int cpu)
 	for_each_cpu(i, cpu_sibling_setup_mask) {
 		o = &cpu_data(i);
 
-		if ((i == cpu) || (has_smt && match_smt(c, o)))
+		pr_info("%s: first loop, llc(this): %d, o: %d, llc(o): %d\n",
+			__func__, per_cpu(cpu_llc_id, cpu),
+			o->cpu_index, per_cpu(cpu_llc_id, o->cpu_index));
+
+		if ((i == cpu) || (has_smt && match_smt(c, o))) {
+			pr_info("%s: first loop, link mask smt\n", __func__);
 			link_mask(topology_sibling_cpumask, cpu, i);
+		}
 
-		if ((i == cpu) || (has_mp && match_llc(c, o)))
+		if ((i == cpu) || (has_mp && match_llc(c, o))) {
+			pr_info("%s: first loop, link mask llc\n", __func__);
 			link_mask(cpu_llc_shared_mask, cpu, i);
+		}
 	}
@@ -534,7 +545,12 @@
 	for_each_cpu(i, cpu_sibling_setup_mask) {
 		o = &cpu_data(i);
 
+		pr_info("%s: second loop, llc(this): %d, o: %d, llc(o): %d\n",
+			__func__, per_cpu(cpu_llc_id, cpu),
+			o->cpu_index, per_cpu(cpu_llc_id, o->cpu_index));
+
 		if ((i == cpu) || (has_mp && match_die(c, o))) {
+			pr_info("%s: second loop, link mask die\n", __func__);
 			link_mask(topology_core_cpumask, cpu, i);
 
 			/*
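
For illustration, here's a small userspace toy model of what that first
loop above does when every CPU reports the same bogus cpu_llc_id -
hypothetical 8-CPU/two-CCX layout, match_llc() boiled down to a plain id
comparison, so a sketch rather than the kernel code:

#include <stdio.h>
#include <stdint.h>

#define NR_CPUS 8

static uint16_t cpu_llc_id[NR_CPUS];
static uint64_t llc_shared_mask[NR_CPUS];       /* bit i set => shares LLC with CPU i */

static void link_llc(int cpu1, int cpu2)
{
        llc_shared_mask[cpu1] |= 1ULL << cpu2;
        llc_shared_mask[cpu2] |= 1ULL << cpu1;
}

int main(void)
{
        int cpu, i;

        /*
         * Hypothetical 8-CPU, two-CCX part: the correct ids would be
         * 0,0,0,0,1,1,1,1.  Instead every CPU reports the underflowed
         * value from the dmesg output above.
         */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                cpu_llc_id[cpu] = (uint16_t)-8;         /* 65528 */

        /* match_llc() reduced to a plain id comparison */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
                for (i = 0; i <= cpu; i++)
                        if (i == cpu || cpu_llc_id[i] == cpu_llc_id[cpu])
                                link_llc(cpu, i);

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu %d llc_shared_mask: 0x%02llx\n",
                       cpu, (unsigned long long)llc_shared_mask[cpu]);

        return 0;
}

In this toy setup every CPU ends up with llc_shared_mask 0xff, i.e. one
big LLC domain instead of two 4-CPU ones - the kind of mask mangling the
scheduler then has to live with.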