From patchwork Mon Nov 14 12:36:46 2011
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 5095
From: Vincent Guittot <vincent.guittot@linaro.org>
To: linaro-dev@lists.linaro.org
Cc: patches@linaro.org, Vincent Guittot
Subject: [RFC PATCH v2 01/09] ARM: cpu topology: Add update_cpu_topology function
Date: Mon, 14 Nov 2011 13:36:46 +0100
Message-Id: <1321274206-2171-1-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.7.4.1

The arch_update_cpu_topology function is called by the scheduler before it
builds its sched_domain hierarchy. Prepare for updating the cpu topology
masks in this function as well, in addition to setting them in
store_cpu_topology, which is executed only once per cpu.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 arch/arm/kernel/topology.c |  108 +++++++++++++++++++++++++++++++++++---------
 1 files changed, 87 insertions(+), 21 deletions(-)

diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index 8200dea..90352cb 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -43,12 +43,76 @@
 
 struct cputopo_arm cpu_topology[NR_CPUS];
 
+/*
+ * cpu topology mask management
+ */
+
+unsigned int advanced_topology = 0;
+
+/*
+ * default topology function
+ */
+
 const struct cpumask *cpu_coregroup_mask(int cpu)
 {
 	return &cpu_topology[cpu].core_sibling;
 }
 
 /*
+ * clear cpu topology masks
+ */
+static void clear_cpu_topology_mask(void)
+{
+	unsigned int cpuid;
+	for_each_possible_cpu(cpuid) {
+		struct cputopo_arm *cpuid_topo = &(cpu_topology[cpuid]);
+		cpumask_clear(&cpuid_topo->core_sibling);
+		cpumask_clear(&cpuid_topo->thread_sibling);
+	}
+	smp_wmb();
+}
+
+/*
+ * default_cpu_topology_mask set the core and thread mask as described in the
+ * ARM ARM
+ */
+static inline void default_cpu_topology_mask(unsigned int cpuid)
+{
+	struct cputopo_arm *cpuid_topo = &cpu_topology[cpuid];
+	unsigned int cpu;
+
+	for_each_possible_cpu(cpu) {
+		struct cputopo_arm *cpu_topo = &cpu_topology[cpu];
+
+		if (cpuid_topo->socket_id == cpu_topo->socket_id) {
+			cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+			if (cpu != cpuid)
+				cpumask_set_cpu(cpu,
+					&cpuid_topo->core_sibling);
+
+			if (cpuid_topo->core_id == cpu_topo->core_id) {
+				cpumask_set_cpu(cpuid,
+					&cpu_topo->thread_sibling);
+				if (cpu != cpuid)
+					cpumask_set_cpu(cpu,
+						&cpuid_topo->thread_sibling);
+			}
+		}
+	}
+	smp_wmb();
+}
+
+static void normal_cpu_topology_mask(void)
+{
+	unsigned int cpuid;
+
+	for_each_possible_cpu(cpuid) {
+		default_cpu_topology_mask(cpuid);
+	}
+	smp_wmb();
+}
+
+/*
  * store_cpu_topology is called at boot when only one cpu is running
  * and with the mutex cpu_hotplug.lock locked, when several cpus have booted,
  * which prevents simultaneous write access to cpu_topology array
@@ -57,7 +121,6 @@ void store_cpu_topology(unsigned int cpuid)
 {
 	struct cputopo_arm *cpuid_topo = &cpu_topology[cpuid];
 	unsigned int mpidr;
-	unsigned int cpu;
 
 	/* If the cpu topology has been already set, just return */
 	if (cpuid_topo->core_id != -1)
@@ -99,26 +162,11 @@ void store_cpu_topology(unsigned int cpuid)
 		cpuid_topo->socket_id = -1;
 	}
 
-	/* update core and thread sibling masks */
-	for_each_possible_cpu(cpu) {
-		struct cputopo_arm *cpu_topo = &cpu_topology[cpu];
-
-		if (cpuid_topo->socket_id == cpu_topo->socket_id) {
-			cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
-			if (cpu != cpuid)
-				cpumask_set_cpu(cpu,
-					&cpuid_topo->core_sibling);
-
-			if (cpuid_topo->core_id == cpu_topo->core_id) {
-				cpumask_set_cpu(cpuid,
-					&cpu_topo->thread_sibling);
-				if (cpu != cpuid)
-					cpumask_set_cpu(cpu,
-						&cpuid_topo->thread_sibling);
-			}
-		}
-	}
-	smp_wmb();
+	/*
+	 * The core and thread sibling masks can also be updated during the
+	 * call of arch_update_cpu_topology
+	 */
+	default_cpu_topology_mask(cpuid);
 
 	printk(KERN_INFO "CPU%u: thread %d, cpu %d, socket %d, mpidr %x\n",
 		cpuid, cpu_topology[cpuid].thread_id,
@@ -127,6 +175,24 @@ void store_cpu_topology(unsigned int cpuid)
 }
 
 /*
+ * arch_update_cpu_topology is called by the scheduler before building
+ * a new sched_domain hierarchy.
+ */
+int arch_update_cpu_topology(void)
+{
+	if (!advanced_topology)
+		return 0;
+
+	/* clear core mask */
+	clear_cpu_topology_mask();
+
+	/* update core and thread sibling masks */
+	normal_cpu_topology_mask();
+
+	return 1;
+}
+
+/*
  * init_cpu_topology is called at boot when only one cpu is running
  * which prevent simultaneous write access to cpu_topology array
  */
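For readers following the mask construction above: the rule implemented by
default_cpu_topology_mask() is that two cpus become core siblings when their
socket_id matches, and additionally thread siblings when their core_id matches
as well. Below is a minimal stand-alone sketch of that pairing rule, using a
hypothetical struct topo and plain unsigned long bitmasks in place of the
kernel's struct cputopo_arm and cpumask_t; it is an illustration, not code
from this patch.

#include <stdio.h>

#define NR_CPUS 4

/* illustrative stand-in for struct cputopo_arm */
struct topo {
	int socket_id;
	int core_id;
	unsigned long core_sibling;	/* one bit per cpu */
	unsigned long thread_sibling;	/* one bit per cpu */
};

/* same pairing rule as default_cpu_topology_mask() in the patch */
static void build_masks(struct topo *t, int nr)
{
	for (int cpuid = 0; cpuid < nr; cpuid++) {
		for (int cpu = 0; cpu < nr; cpu++) {
			if (t[cpuid].socket_id != t[cpu].socket_id)
				continue;
			/* same socket: the two cpus are core siblings */
			t[cpu].core_sibling |= 1UL << cpuid;
			t[cpuid].core_sibling |= 1UL << cpu;
			if (t[cpuid].core_id == t[cpu].core_id) {
				/* same core too: they are thread siblings */
				t[cpu].thread_sibling |= 1UL << cpuid;
				t[cpuid].thread_sibling |= 1UL << cpu;
			}
		}
	}
}

int main(void)
{
	/* hypothetical 1-socket, 2-core, 2-threads-per-core system */
	struct topo t[NR_CPUS] = {
		{ .socket_id = 0, .core_id = 0 },
		{ .socket_id = 0, .core_id = 0 },
		{ .socket_id = 0, .core_id = 1 },
		{ .socket_id = 0, .core_id = 1 },
	};
	build_masks(t, NR_CPUS);

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("CPU%d: core_sibling=0x%lx thread_sibling=0x%lx\n",
		       cpu, t[cpu].core_sibling, t[cpu].thread_sibling);
	return 0;
}

With these sample ids every cpu ends up with core_sibling 0xf, while cpus 0/1
and 2/3 pair up as thread siblings (masks 0x3 and 0xc), which is the shape
that cpu_coregroup_mask() and the scheduler then consume.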
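Note that with advanced_topology left at its default value of 0,
arch_update_cpu_topology() returns 0 and the masks written at boot by
store_cpu_topology() stay in place; the clear-and-rebuild path only runs once
something sets the flag. A hypothetical sketch of platform code arming that
path follows; the function name my_platform_enable_topology_updates() and the
choice of rebuild_sched_domains() as the trigger are assumptions for
illustration, not something introduced by this patch.

#include <linux/cpuset.h>	/* rebuild_sched_domains() */

extern unsigned int advanced_topology;	/* switch added by this patch */

/* hypothetical platform hook, for illustration only */
static void my_platform_enable_topology_updates(void)
{
	/* make arch_update_cpu_topology() clear and recompute the masks */
	advanced_topology = 1;

	/*
	 * Ask the scheduler to rebuild its sched_domains; it calls
	 * arch_update_cpu_topology() on the way.
	 */
	rebuild_sched_domains();
}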