From patchwork Wed Jan 6 08:30:25 2021
X-Patchwork-Submitter: "Song Bao Hua (Barry Song)"
X-Patchwork-Id: 357549
From: Barry Song
Subject: [RFC PATCH v3 1/2] topology: Represent clusters of CPUs within a die.
Date: Wed, 6 Jan 2021 21:30:25 +1300
Message-ID: <20210106083026.40444-2-song.bao.hua@hisilicon.com>
In-Reply-To: <20210106083026.40444-1-song.bao.hua@hisilicon.com>
References: <20210106083026.40444-1-song.bao.hua@hisilicon.com>
X-Mailing-List: linux-acpi@vger.kernel.org

From: Jonathan Cameron

Both ACPI and DT provide the ability to describe additional layers of
topology between that of individual cores and higher level constructs
such as the level at which the last level cache is shared. In ACPI this
can be represented in PPTT as a Processor Hierarchy Node Structure [1]
that is the parent of the CPU cores and in turn has a parent Processor
Hierarchy Node Structure representing a higher level of topology.

For example, Kunpeng 920 has 6 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share L3 cache data, but each cluster
has its own local L3 tag. The CPUs within a cluster also share some
internal system bus.

   +----------------------------------+        +--------+
   | +------+ +------+                |        |        |
   | | CPU0 | | CPU1 |   +--------+   |        |        |
   | +------+ +------+   |   L3   |   |        |        |
   | +------+ +------+   |  tag   |   |        |        |
   | | CPU2 | | CPU3 |   +--------+   |        |        |
   | +------+ +------+     cluster 0  |        |        |
   +----------------------------------+        |   L3   |
   +----------------------------------+        |  data  |
   | +------+ +------+                |        |        |
   | |      | |      |   +--------+   |        |        |
   | +------+ +------+   |   L3   |   |        |        |
   | +------+ +------+   |  tag   |   |        |        |
   | |      | |      |   +--------+   |        |        |
   | +------+ +------+     cluster 1  |        |        |
   +----------------------------------+        +--------+

   (clusters 2-5, not drawn, look the same: four CPUs and a local L3 tag
    per cluster; every cluster's L3 tag attaches to the one shared L3
    data block, but the tag itself is local to the cluster)

That means the cost to transfer ownership of a cacheline between CPUs
within a cluster is lower than between CPUs in different clusters on the
same die. Hence, it can make sense to tell the scheduler to use the cache
affinity of the cluster to make better decisions on thread migration.

This patch simply exposes this information to userspace libraries like
hwloc by providing cluster_cpus and related sysfs attributes. A PoC of
hwloc support is at [2].

Note this patch only handles the ACPI case.
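As a rough illustration of how a userspace consumer such as hwloc might
read the new attributes, the minimal sketch below prints cluster_id and
cluster_cpus_list for CPU 0. It is not part of this patch; it simply
assumes the series is applied and that the firmware actually describes a
cluster level, and the file names match the attributes added to
drivers/base/topology.c further down.

/* Minimal userspace sketch: dump the new cluster topology attributes. */
#include <stdio.h>

static void dump(const char *path)
{
	char buf[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		printf("%s: not present\n", path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", path, buf);
	fclose(f);
}

int main(void)
{
	dump("/sys/devices/system/cpu/cpu0/topology/cluster_id");
	dump("/sys/devices/system/cpu/cpu0/topology/cluster_cpus_list");
	return 0;
}

On the Kunpeng 920 layout described above, cluster_cpus_list for cpu0
would be expected to read "0-3".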
Special consideration is needed for SMT processors, where it is
necessary to move 2 levels up the hierarchy from the leaf nodes (thus
skipping the processor core level).

Currently the ID provided is the offset of the Processor Hierarchy Node
Structure within PPTT. Whilst this is unique it is not terribly elegant,
so alternative suggestions are welcome.

Note that arm64 / ACPI does not provide any means of identifying a die
level in the topology, but that may be unrelated to the cluster level.

[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
    structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster

Signed-off-by: Jonathan Cameron
Signed-off-by: Barry Song
---
-v3:
 * no real change (v2 comments not addressed);
 * rebased against 5.11-rc2;

 Documentation/admin-guide/cputopology.rst | 26 +++++++++++---
 arch/arm64/kernel/topology.c              |  2 ++
 drivers/acpi/pptt.c                       | 60 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 14 ++++++++
 drivers/base/topology.c                   | 10 ++++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/topology.h                  |  6 ++++
 8 files changed, 124 insertions(+), 4 deletions(-)

-- 
2.7.4

diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafc..f9d3745 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
 	identifier (rather than the kernel's).  The actual value is
 	architecture and platform dependent.
 
+cluster_id:
+
+	the Cluster ID of cpuX. Typically it is the hardware platform's
+	identifier (rather than the kernel's). The actual value is
+	architecture and platform dependent.
+
 book_id:
 
 	the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
 	human-readable list of CPUs sharing the same physical_package_id.
 	(deprecated name: "core_siblings_list")
 
+cluster_cpus:
+
+	internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+	human-readable list of CPUs within the same cluster.
+
 die_cpus:
 
 	internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
 
 	#define topology_physical_package_id(cpu)
 	#define topology_die_id(cpu)
+	#define topology_cluster_id(cpu)
 	#define topology_core_id(cpu)
 	#define topology_book_id(cpu)
 	#define topology_drawer_id(cpu)
 	#define topology_sibling_cpumask(cpu)
 	#define topology_core_cpumask(cpu)
+	#define topology_cluster_cpumask(cpu)
 	#define topology_die_cpumask(cpu)
 	#define topology_book_cpumask(cpu)
 	#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
 
 1) topology_physical_package_id: -1
 2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU
 
 For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
 default definitions for topology_book_id() and topology_book_cpumask().
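To make the fallback semantics documented above concrete, here is a
hypothetical kernel-style helper (not part of this series): on an
architecture that never defines topology_cluster_cpumask(), the default
mask contains only the CPU itself, so the check degenerates to "the two
CPUs are the same CPU".

/*
 * Hypothetical example only: thanks to the defaults in
 * include/linux/topology.h, generic code can use the new accessor
 * without any #ifdef guards.
 */
#include <linux/cpumask.h>
#include <linux/topology.h>

static bool cpus_share_cluster(int cpua, int cpub)
{
	/* Falls back to cpumask_of(cpua) when no cluster level is defined. */
	return cpumask_test_cpu(cpub, topology_cluster_cpumask(cpua));
}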
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index f6faa69..fe076b3 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
 			cpu_topology[cpu].thread_id = -1;
 			cpu_topology[cpu].core_id = topology_id;
 		}
+		topology_id = find_acpi_cpu_topology_cluster(cpu);
+		cpu_topology[cpu].cluster_id = topology_id;
 		topology_id = find_acpi_cpu_topology_package(cpu);
 		cpu_topology[cpu].package_id = topology_id;
 
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..8646a93 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,66 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
 }
 
 /**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present, is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	struct acpi_table_header *table;
+	acpi_status status;
+	struct acpi_pptt_processor *cpu_node, *cluster_node;
+	int retval;
+	int is_thread;
+
+	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+	if (ACPI_FAILURE(status)) {
+		acpi_pptt_warn_missing();
+		return -ENOENT;
+	}
+	cpu_node = acpi_find_processor_node(table, cpu);
+	if (cpu_node == NULL || !cpu_node->parent) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+
+	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+	cluster_node = fetch_pptt_node(table, cpu_node->parent);
+	if (cluster_node == NULL) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+	if (is_thread) {
+		if (!cluster_node->parent) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+		cluster_node = fetch_pptt_node(table, cluster_node->parent);
+		if (cluster_node == NULL) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+	}
+	retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+	acpi_put_table(table);
+
+	return retval;
+}
+
+/**
  * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
  * @cpu: Kernel logical CPU number
  *
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index de8587c..3079232 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return core_mask;
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].cluster_sibling;
+}
+
 void update_siblings_masks(unsigned int cpuid)
 {
 	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;
 
+		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+		}
+
 		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
 		cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
 
@@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu)
 	cpumask_clear(&cpu_topo->llc_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
 
+	cpumask_clear(&cpu_topo->cluster_sibling);
+	cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
 	cpumask_clear(&cpu_topo->core_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
 	cpumask_clear(&cpu_topo->thread_sibling);
@@ -571,6 +584,7 @@ void remove_cpu_topology(unsigned int cpu)
 		cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
 	for_each_cpu(sibling, topology_sibling_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
 	for_each_cpu(sibling, topology_llc_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
 
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fc..7157ac0 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@ static DEVICE_ATTR_RO(physical_package_id);
 define_id_show_func(die_id);
 static DEVICE_ATTR_RO(die_id);
 
+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
 define_id_show_func(core_id);
 static DEVICE_ATTR_RO(core_id);
 
@@ -61,6 +64,10 @@ define_siblings_show_func(core_siblings, core_cpumask);
 static DEVICE_ATTR_RO(core_siblings);
 static DEVICE_ATTR_RO(core_siblings_list);
 
+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
 define_siblings_show_func(die_cpus, die_cpumask);
 static DEVICE_ATTR_RO(die_cpus);
 static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@ static DEVICE_ATTR_RO(drawer_siblings_list);
 static struct attribute *default_attrs[] = {
 	&dev_attr_physical_package_id.attr,
 	&dev_attr_die_id.attr,
+	&dev_attr_cluster_id.attr,
 	&dev_attr_core_id.attr,
 	&dev_attr_thread_siblings.attr,
 	&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@ static struct attribute *default_attrs[] = {
 	&dev_attr_core_cpus_list.attr,
 	&dev_attr_core_siblings.attr,
 	&dev_attr_core_siblings_list.attr,
+	&dev_attr_cluster_cpus.attr,
+	&dev_attr_cluster_cpus_list.attr,
 	&dev_attr_die_cpus.attr,
 	&dev_attr_die_cpus_list.attr,
 	&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 2630c2e..8f603cd 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1325,6 +1325,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
 #ifdef CONFIG_ACPI_PPTT
 int acpi_pptt_cpu_is_thread(unsigned int cpu);
 int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
 int find_acpi_cpu_topology_package(unsigned int cpu);
 int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
 int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1337,6 +1338,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
 {
 	return -EINVAL;
 }
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	return -EINVAL;
+}
 static inline int find_acpi_cpu_topology_package(unsigned int cpu)
 {
 	return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 0f6cd6b..987c7ea 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
 struct cpu_topology {
 	int thread_id;
 	int core_id;
+	int cluster_id;
 	int package_id;
 	int llc_id;
 	cpumask_t thread_sibling;
 	cpumask_t core_sibling;
+	cpumask_t cluster_sibling;
 	cpumask_t llc_sibling;
 };
 
@@ -60,13 +62,16 @@ struct cpu_topology {
 extern struct cpu_topology cpu_topology[NR_CPUS];
 
 #define topology_physical_package_id(cpu)	(cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu)	(cpu_topology[cpu].cluster_id)
 #define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
 #define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
 #define topology_sibling_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu)	(&cpu_topology[cpu].cluster_sibling)
 #define topology_llc_cpumask(cpu)	(&cpu_topology[cpu].llc_sibling)
 void init_cpu_topology(void);
 void store_cpu_topology(unsigned int cpuid);
 const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
 void update_siblings_masks(unsigned int cpu);
 void remove_cpu_topology(unsigned int cpuid);
 void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index ad03df1..bf2cc3c 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -185,6 +185,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_die_id
 #define topology_die_id(cpu)			((void)(cpu), -1)
 #endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu)		((void)(cpu), -1)
+#endif
 #ifndef topology_core_id
 #define topology_core_id(cpu)			((void)(cpu), 0)
 #endif
@@ -194,6 +197,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_core_cpumask
 #define topology_core_cpumask(cpu)		cpumask_of(cpu)
 #endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu)		cpumask_of(cpu)
+#endif
 #ifndef topology_die_cpumask
 #define topology_die_cpumask(cpu)		cpumask_of(cpu)
 #endif

From patchwork Wed Jan 6 08:30:26 2021
X-Patchwork-Submitter: "Song Bao Hua (Barry Song)"
X-Patchwork-Id: 357596
From: Barry Song
Subject: [RFC PATCH v3 2/2] scheduler: add scheduler level for clusters
Date: Wed, 6 Jan 2021 21:30:26 +1300
Message-ID: <20210106083026.40444-3-song.bao.hua@hisilicon.com>
In-Reply-To: <20210106083026.40444-1-song.bao.hua@hisilicon.com>
References: <20210106083026.40444-1-song.bao.hua@hisilicon.com>
X-Mailing-List: linux-acpi@vger.kernel.org

ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share L3 cache data, but each cluster
has its own local L3 tag. The CPUs within a cluster also share some
internal system bus. This means the cache-coherence overhead inside one
cluster is much lower than the overhead across clusters.

This patch adds a sched_domain for clusters. On Kunpeng 920, without this
patch, domain0 of cpu0 would be MC, spanning cpu0~cpu23; with this patch,
MC becomes domain1 and a new domain0 "CLS" covers cpu0-cpu3.

This affects load balancing. For example, without this patch, when cpu0
becomes idle it will pull a task from cpu1-cpu15. With this patch, cpu0
will try to pull a task from cpu1-cpu3 first, which has a much lower task
migration cost.

On the other hand, while doing WAKE_AFFINE, this patch will try to find
an idle CPU in the target's cluster before scanning the whole LLC domain,
so it proactively prefers a CPU with better cache affinity to the target.

Though it is named "cluster", architectures or machines can define the
exact meaning of "cluster" as they see fit, as long as some CPUs share
some resources at a level below the LLC, so the implementation is
applicable to all architectures. Different CPUs might share different
resources: L1, L2, cache tags, internal busses, etc. Since it is hard to
know where the scan should start, this patch adds an
SD_SHARE_CLS_RESOURCES flag rather than directly leveraging the existing
SD_SHARE_PKG_RESOURCES flag. Architectures or machines can decide what a
cluster is and which sched_domain should get SD_SHARE_CLS_RESOURCES.
select_idle_cpu() will scan from the first sched_domain with
SD_SHARE_CLS_RESOURCES. (A toy sketch illustrating this scan order
follows the select_idle_cpu() hunk below.)

The below is a hackbench result: we ran the command below with -g varying
from 1 to 10; for each g we ran the command 10 times and took the average
time:

$ numactl -N 0 hackbench -p -T -l 20000 -g $1

hackbench reports the time needed to complete a certain number of message
transmissions between a certain number of tasks, for example:

$ numactl -N 0 hackbench -p -T -l 20000 -g 10
Running in threaded mode with 10 groups using 40 file descriptors each
(== 400 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 8.874

The below is the result of hackbench w/ and w/o the patch:

g          1       2       3       4       5       6       7       8       9      10
w/o   1.4777  2.0112  3.1919  4.2654  5.3246  6.4019  7.5939  8.7073  9.7526 10.8987
w/    1.4793  1.9344  2.9080  3.9267  4.8339  5.7186  6.6923  7.5088  8.3715  9.2173
                      +8.9%   +7.9%   +9.3%  +10.7%  +11.8%  +13.8%  +14.2%  +15.5%

Tracing the kernel while g=10 shows that select_idle_cpu() has a large
chance of picking a CPU in the same cluster as the target, though it
sometimes picks a CPU outside the cluster:

target    cpu
  19   -> 17
  13   -> 15
  23   -> 20
  23   -> 20
  19   -> 17
  13   -> 15
  16   -> 17
  19   -> 17
   7   ->  5
  10   -> 11
  23   -> 20
 *23   ->  4
  ...

Signed-off-by: Barry Song
---
-v3:
 - rebased against 5.11-rc2
 - addressed the comments of Valentin Schneider, Peter Zijlstra,
   Vincent Guittot, Mel Gorman and others:
   * moved the scheduler changes from arm64 to the common place for all
     architectures
   * added the SD_SHARE_CLS_RESOURCES sd_flag specifying the sched_domain
     from which select_idle_cpu() should begin to scan
   * removed the redundant select_idle_cluster() function since all code
     is in select_idle_cpu() now;
     this also avoids scanning the cluster's CPUs twice, as the v2 code did
   * redid the hackbench within one NUMA node after the above changes

 arch/arm64/Kconfig             |  7 +++++++
 include/linux/sched/sd_flags.h |  9 +++++++++
 include/linux/sched/topology.h |  7 +++++++
 include/linux/topology.h       |  7 +++++++
 kernel/sched/fair.c            | 27 +++++++++++++++++++++------
 kernel/sched/topology.c        |  6 ++++++
 6 files changed, 57 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 05e1735..546cd61 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -973,6 +973,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (sharing an
+	  internal bus or sharing LLC cache tags). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index 34b21e9..fc3c894 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -100,6 +100,15 @@ SD_FLAG(SD_ASYM_CPUCAPACITY, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
 SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 
 /*
+ * Domain members share CPU cluster resources (i.e. llc cache tags)
+ *
+ * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
+ *               the cluster resources (such as llc tags and internal bus).
+ * NEEDS_GROUPS: Caches are shared between groups.
+ */
+SD_FLAG(SD_SHARE_CLS_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
+
+/*
  * Domain members share CPU package resources (i.e. caches)
  *
  * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8f0f778..846fcac 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline int cpu_cluster_flags(void)
+{
+	return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES;
+}
+#endif
+
 #ifdef CONFIG_SCHED_MC
 static inline int cpu_core_flags(void)
 {
diff --git a/include/linux/topology.h b/include/linux/topology.h
index bf2cc3c..81be614 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 04a3ce2..c14fae6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6145,6 +6145,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	struct sched_domain *this_sd;
+	struct sched_domain *prev_ssd = NULL, *ssd;
 	u64 avg_cost, avg_idle;
 	u64 time;
 	int this = smp_processor_id();
@@ -6174,15 +6175,29 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	time = cpu_clock(this);
 
-	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
-
-	for_each_cpu_wrap(cpu, cpus, target) {
-		if (!--nr)
-			return -1;
-		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
+	/*
+	 * We first scan those child domains that declare they are sharing
+	 * cluster resources such as llc tags and internal busses; then scan
+	 * the whole llc.
+	 */
+	for_each_domain(target, ssd) {
+		if ((ssd->flags & SD_SHARE_CLS_RESOURCES) || (ssd == sd)) {
+			cpumask_and(cpus, sched_domain_span(ssd), p->cpus_ptr);
+			if (prev_ssd)
+				cpumask_andnot(cpus, cpus, sched_domain_span(prev_ssd));
+			for_each_cpu_wrap(cpu, cpus, target) {
+				if (!--nr)
+					return -1;
+				if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
+					goto done;
+			}
+			prev_ssd = ssd;
+		}
+		if (ssd == sd)
 			break;
 	}
 
+done:
 	time = cpu_clock(this) - time;
 
 	update_avg(&this_sd->avg_scan_cost, time);
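Below is a toy userspace model of the scan order the select_idle_cpu()
change above aims for; it is purely illustrative and not taken from the
patch. It probes the target's cluster first, then falls back to the rest
of the LLC domain, without visiting any CPU twice. The 24-CPU,
4-CPUs-per-cluster layout mirrors one Kunpeng 920 NUMA node.

/*
 * Toy model (plain userspace C, not kernel code) of the two-phase scan:
 * the cluster of the target first, then the rest of the LLC domain.
 */
#include <stdio.h>

#define NR_CPUS 24

static int idle[NR_CPUS];	/* 1 = CPU is idle */

static int cluster_of(int cpu)
{
	return cpu / 4;		/* 4 CPUs per cluster */
}

static int pick_idle(int target)
{
	int cpu;

	/* Phase 1: CPUs sharing the target's cluster (SD_SHARE_CLS_RESOURCES). */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (cluster_of(cpu) == cluster_of(target) && idle[cpu])
			return cpu;

	/* Phase 2: remaining CPUs of the LLC domain, skipping the cluster. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (cluster_of(cpu) != cluster_of(target) && idle[cpu])
			return cpu;

	return -1;
}

int main(void)
{
	idle[2] = idle[17] = 1;

	/* Same cluster wins: target 0 picks CPU 2, not CPU 17. */
	printf("target 0 -> %d\n", pick_idle(0));
	/* No idle CPU in cluster 2 (CPUs 8-11): falls back to CPU 2. */
	printf("target 8 -> %d\n", pick_idle(8));
	return 0;
}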
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 5d3675c..79030c9 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1361,6 +1361,7 @@ int __read_mostly node_reclaim_distance = RECLAIM_DISTANCE;
  */
 #define TOPOLOGY_SD_FLAGS		\
 	(SD_SHARE_CPUCAPACITY	|	\
+	 SD_SHARE_CLS_RESOURCES	|	\
 	 SD_SHARE_PKG_RESOURCES |	\
 	 SD_NUMA		|	\
 	 SD_ASYM_PACKING)
@@ -1480,6 +1481,11 @@ static struct sched_domain_topology_level default_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
+
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
 #endif
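To check whether the new CLS level actually shows up on a running system,
one rough option (my own suggestion, not something this series provides)
is to walk the sched_domain names exposed by CONFIG_SCHED_DEBUG. On
kernels around the v5.11 target of this series they appear under
/proc/sys/kernel/sched_domain/; later kernels moved the same information
under /sys/kernel/debug/sched/domains/. A minimal sketch:

/*
 * Minimal sketch, not part of the patch: print the sched_domain names for
 * CPU 0.  With this series plus CONFIG_SCHED_CLUSTER and CONFIG_SCHED_DEBUG
 * enabled, "CLS" should appear below "MC".
 */
#include <stdio.h>

int main(void)
{
	char path[128], name[64];
	int level;

	for (level = 0; level < 8; level++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/proc/sys/kernel/sched_domain/cpu0/domain%d/name", level);
		f = fopen(path, "r");
		if (!f)
			break;	/* no more levels */
		if (fgets(name, sizeof(name), f))
			printf("domain%d: %s", level, name);
		fclose(f);
	}
	return 0;
}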