From patchwork Wed Jun 6 16:38:46 2018
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 137833
Delivered-To: patch@linaro.org
From: Jeremy Linton
To: Sudeep.Holla@arm.com
Cc: Will.Deacon@arm.com, Catalin.Marinas@arm.com, Robin.Murphy@arm.com,
    Morten.Rasmussen@arm.com, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, geert@linux-m68k.org,
    linux-acpi@vger.kernel.org, ard.biesheuvel@linaro.org, Jeremy Linton
Subject: [PATCH v2] arm64: topology: Avoid checking numa mask for scheduler MC selection
Date: Wed, 6 Jun 2018 11:38:46 -0500
Message-Id: <20180606163846.495725-1-jeremy.linton@arm.com>

The NUMA mask subset check can lead to a system hang or crash during
CPU hotplug and system suspend if NUMA is disabled. This is mostly
observed on HMP systems, where the CPU compute capacities differ and
the CPUs therefore end up in different scheduler domains. Since
cpumask_of_node is returned instead of core_sibling, the scheduler
gets confused by inconsistent cpumasks (e.g. one CPU in two different
sched domains at the same time) on CPU hotplug.

Let's disable the NUMA sibling check for the time being, as
NUMA-in-package machines have LLCs that assure the scheduler topology
isn't "borken". The NUMA check exists to assure that if an LLC within
a socket crosses NUMA nodes/chiplets, the scheduler domains remain
consistent. This code will likely have to be re-enabled in the near
future once the NUMA mask story is sorted out; at the moment it isn't
necessary, because on NUMA-in-package machines the LLCs are contained
within the NUMA domains.

Further, as a defensive mechanism during hotplug, let's ensure that
the LLC sibling masks are updated symmetrically.
Reported-by: Geert Uytterhoeven
Reviewed-by: Sudeep Holla
Signed-off-by: Jeremy Linton
---
 arch/arm64/kernel/topology.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 7415c166281f..f845a8617812 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -215,13 +215,8 @@ EXPORT_SYMBOL_GPL(cpu_topology);
 
 const struct cpumask *cpu_coregroup_mask(int cpu)
 {
-	const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));
+	const cpumask_t *core_mask = &cpu_topology[cpu].core_sibling;
 
-	/* Find the smaller of NUMA, core or LLC siblings */
-	if (cpumask_subset(&cpu_topology[cpu].core_sibling, core_mask)) {
-		/* not numa in package, lets use the package siblings */
-		core_mask = &cpu_topology[cpu].core_sibling;
-	}
 	if (cpu_topology[cpu].llc_id != -1) {
 		if (cpumask_subset(&cpu_topology[cpu].llc_siblings, core_mask))
 			core_mask = &cpu_topology[cpu].llc_siblings;
@@ -239,8 +234,10 @@ static void update_siblings_masks(unsigned int cpuid)
 	for_each_possible_cpu(cpu) {
 		cpu_topo = &cpu_topology[cpu];
 
-		if (cpuid_topo->llc_id == cpu_topo->llc_id)
+		if (cpuid_topo->llc_id == cpu_topo->llc_id) {
 			cpumask_set_cpu(cpu, &cpuid_topo->llc_siblings);
+			cpumask_set_cpu(cpuid, &cpu_topo->llc_siblings);
+		}
 
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;
-- 
2.14.3
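For readers tracing the second hunk, here is a hedged sketch of why the
incoming CPU also sets its own bit in each already-online sibling's
llc_siblings mask. This is standalone userspace C, not the kernel code:
the 4-CPU topology, the 8-bit mask standing in for cpumask_t, and the
remove_siblings_masks() helper are invented purely for illustration.
Without the symmetric set, a CPU that is hot-removed and re-added
rebuilds only its own mask, leaving its siblings' masks stale.

#include <stdio.h>
#include <stdint.h>

#define NR_CPUS 4

struct cpu_topo {
	int llc_id;
	uint8_t llc_siblings;	/* bit N set => shares an LLC with CPU N */
};

static struct cpu_topo cpu_topology[NR_CPUS];

/* Mirrors the patched update_siblings_masks(): set both directions. */
static void update_siblings_masks(unsigned int cpuid)
{
	for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (cpu_topology[cpuid].llc_id != cpu_topology[cpu].llc_id)
			continue;
		cpu_topology[cpuid].llc_siblings |= 1u << cpu;
		/* the symmetric set the patch adds: */
		cpu_topology[cpu].llc_siblings |= 1u << cpuid;
	}
}

/* Invented stand-in for hot-remove: drop @cpuid from every mask. */
static void remove_siblings_masks(unsigned int cpuid)
{
	for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
		cpu_topology[cpu].llc_siblings &= ~(1u << cpuid);
}

int main(void)
{
	for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
		cpu_topology[cpu].llc_id = cpu / 2;	/* 0-1 and 2-3 share an LLC */
	for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
		update_siblings_masks(cpu);		/* boot-time onlining */

	remove_siblings_masks(1);	/* hot-remove CPU 1 ... */
	update_siblings_masks(1);	/* ... and bring it back */

	/*
	 * Only the symmetric set restores CPU 1 in CPU 0's mask here;
	 * CPU 0's own update ran long before the re-add.
	 */
	printf("cpu0 llc_siblings=0x%x (expect 0x3)\n",
	       cpu_topology[0].llc_siblings);
	return 0;
}

Dropping the marked symmetric line and rerunning prints 0x1 instead of
0x3, which is exactly the stale-sibling state the defensive hunk guards
against.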