From patchwork Tue Jun 12 12:02:02 2012
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 9221
From: Vincent Guittot <vincent.guittot@linaro.org>
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linaro-dev@lists.linaro.org, devicetree-discuss@lists.ozlabs.org
Cc: linux@arm.linux.org.uk, peterz@infradead.org, grant.likely@secretlab.ca,
	rob.herring@calxeda.com, Vincent Guittot <vincent.guittot@linaro.org>,
	Lorenzo Pieralisi
Subject: [RFC 2/4] ARM: topology: factorize the update of sibling masks
Date: Tue, 12 Jun 2012 14:02:02 +0200
Message-Id: <1339502524-10265-3-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1339502524-10265-1-git-send-email-vincent.guittot@linaro.org>
References: <1339502524-10265-1-git-send-email-vincent.guittot@linaro.org>

Factorize the update of the core and thread sibling masks out of
store_cpu_topology() into a new update_siblings_masks() function.

This factorization has also been proposed in another patch that has not
been merged yet:

http://lists.infradead.org/pipermail/linux-arm-kernel/2012-January/080873.html

so this patch could be dropped depending on the state of the other one.

Signed-off-by: Lorenzo Pieralisi
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 arch/arm/kernel/topology.c |   47 ++++++++++++++++++++++++--------------------
 1 file changed, 26 insertions(+), 21 deletions(-)

diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index 00301a7..2f85a64 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -80,6 +80,31 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return &cpu_topology[cpu].core_sibling;
 }
 
+void update_siblings_masks(unsigned int cpuid)
+{
+	struct cputopo_arm *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
+	int cpu;
+	/* update core and thread sibling masks */
+	for_each_possible_cpu(cpu) {
+		cpu_topo = &cpu_topology[cpu];
+
+		if (cpuid_topo->socket_id == cpu_topo->socket_id) {
+			cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+			if (cpu != cpuid)
+				cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
+
+			if (cpuid_topo->core_id == cpu_topo->core_id) {
+				cpumask_set_cpu(cpuid,
+					&cpu_topo->thread_sibling);
+				if (cpu != cpuid)
+					cpumask_set_cpu(cpu,
+						&cpuid_topo->thread_sibling);
+			}
+		}
+	}
+	smp_wmb();
+}
+
 /*
  * store_cpu_topology is called at boot when only one cpu is running
  * and with the mutex cpu_hotplug.lock locked, when several cpus have booted,
@@ -89,7 +114,6 @@ void store_cpu_topology(unsigned int cpuid)
 {
 	struct cputopo_arm *cpuid_topo = &cpu_topology[cpuid];
 	unsigned int mpidr;
-	unsigned int cpu;
 
 	/* If the cpu topology has been already set, just return */
 	if (cpuid_topo->core_id != -1)
@@ -131,26 +155,7 @@ void store_cpu_topology(unsigned int cpuid)
 		cpuid_topo->socket_id = -1;
 	}
 
-	/* update core and thread sibling masks */
-	for_each_possible_cpu(cpu) {
-		struct cputopo_arm *cpu_topo = &cpu_topology[cpu];
-
-		if (cpuid_topo->socket_id == cpu_topo->socket_id) {
-			cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
-			if (cpu != cpuid)
-				cpumask_set_cpu(cpu,
-					&cpuid_topo->core_sibling);
-
-			if (cpuid_topo->core_id == cpu_topo->core_id) {
-				cpumask_set_cpu(cpuid,
-					&cpu_topo->thread_sibling);
-				if (cpu != cpuid)
-					cpumask_set_cpu(cpu,
-						&cpuid_topo->thread_sibling);
-			}
-		}
-	}
-	smp_wmb();
+	update_siblings_masks(cpuid);
 
 	printk(KERN_INFO "CPU%u: thread %d, cpu %d, socket %d, mpidr %x\n",
 		cpuid, cpu_topology[cpuid].thread_id,
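For readers tracing the refactoring, below is a minimal, standalone
user-space sketch of the pairing rules that update_siblings_masks()
factors out: CPUs sharing a socket_id become core siblings, and CPUs
sharing both socket_id and core_id become thread siblings. This is not
part of the patch: NR_CPUS, struct cputopo_model, the _model suffixed
function and the main() driver are illustrative stand-ins, and plain
unsigned bitmasks replace the kernel's cpumask type, so cpumask_set_cpu()
becomes a bitwise OR and smp_wmb() has no counterpart here.

/*
 * Illustrative user-space model of the sibling-mask update -- NOT
 * kernel code. A toy 4-CPU topology stands in for real hardware.
 */
#include <stdio.h>

#define NR_CPUS 4

struct cputopo_model {
	int thread_id;
	int core_id;
	int socket_id;
	unsigned int thread_sibling;	/* bitmask standing in for a cpumask */
	unsigned int core_sibling;
};

static struct cputopo_model cpu_topology[NR_CPUS];

/* Same pairing logic as the kernel helper, on bitmasks instead of cpumasks. */
static void update_siblings_masks_model(unsigned int cpuid)
{
	struct cputopo_model *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
	unsigned int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		cpu_topo = &cpu_topology[cpu];

		if (cpuid_topo->socket_id != cpu_topo->socket_id)
			continue;

		/* same socket: mark each other as core siblings */
		cpu_topo->core_sibling |= 1u << cpuid;
		if (cpu != cpuid)
			cpuid_topo->core_sibling |= 1u << cpu;

		/* same core too: also mark each other as thread siblings */
		if (cpuid_topo->core_id == cpu_topo->core_id) {
			cpu_topo->thread_sibling |= 1u << cpuid;
			if (cpu != cpuid)
				cpuid_topo->thread_sibling |= 1u << cpu;
		}
	}
}

int main(void)
{
	/* Toy topology: one socket, two cores, two threads per core. */
	int core_of[NR_CPUS] = { 0, 0, 1, 1 };
	unsigned int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		cpu_topology[cpu].thread_id = cpu & 1;
		cpu_topology[cpu].core_id = core_of[cpu];
		cpu_topology[cpu].socket_id = 0;
	}

	/* Register each CPU in turn, as store_cpu_topology() would at boot. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		update_siblings_masks_model(cpu);

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("CPU%u: core_sibling=0x%x thread_sibling=0x%x\n",
		       cpu, cpu_topology[cpu].core_sibling,
		       cpu_topology[cpu].thread_sibling);
	return 0;
}

Compiled and run, the sketch prints core_sibling=0x0f for every CPU and
thread_sibling=0x03 for CPUs 0-1 versus 0x0c for CPUs 2-3, i.e. the same
sets the kernel helper would record in its core and thread cpumasks for
this topology.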