From patchwork Fri May  5 11:57:20 2017
From: Arnd Bergmann
X-Patchwork-Id: 98622
To: gregkh@linuxfoundation.org
Cc: stable@vger.kernel.org, Rusty Russell, Arnd Bergmann
Subject: [PATCH 4/9] [3.18-stable] cpumask_set_cpu_local_first => cpumask_local_spread, lament
Date: Fri, 5 May 2017 13:57:20 +0200
Message-Id: <20170505115725.1424772-5-arnd@arndb.de>
In-Reply-To: <20170505115725.1424772-1-arnd@arndb.de>
References: <20170505115725.1424772-1-arnd@arndb.de>
X-Mailing-List: stable@vger.kernel.org

From: Rusty Russell

Commit f36963c9d3f6f415732710da3acdd8608a9fa0e5 upstream.

da91309e0a7e (cpumask: Utility function to set n'th cpu...) created a
genuinely weird function.  I never saw it before, it went through DaveM.
(He only does this to make us other maintainers feel better about our
own mistakes.)

cpumask_set_cpu_local_first's purpose is to say "I need to spread things
across N online cpus, choose the ones on this numa node first"; you call
it in a loop.

It can fail.  One of the two callers ignores this; the other aborts and
fails the device open.

It can fail in two ways: allocating the off-stack cpumask, or through a
convoluted codepath which AFAICT can only occur if cpu_online_mask
changes.  Which shouldn't happen, because if cpu_online_mask can change
while you call this, it could return a now-offline cpu anyway.
It contains a nonsensical test "!cpumask_of_node(numa_node)".  This was
drawn to my attention by Geert, who said this causes a warning on Sparc.

It sets a single bit in a cpumask instead of returning a cpu number,
because that's what the callers want.

It could be made more efficient by passing the previous cpu rather than
an index, but that would be more invasive to the callers.

[backporting for 3.18: only two callers exist, otherwise no change.
 The same warning shows up for "!cpumask_of_node()", and I thought about
 just addressing the warning, but using the whole fix seemed better in
 the end as one of the two callers also lacks the error handling]

Fixes: da91309e0a7e8966d916a74cce42ed170fde06bf
Signed-off-by: Rusty Russell
(then rebased)
Tested-by: Amir Vadai
Acked-by: Amir Vadai
Acked-by: David S. Miller
Signed-off-by: Arnd Bergmann
---
 drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 10 ++--
 drivers/net/ethernet/mellanox/mlx4/en_tx.c     |  6 +--
 include/linux/cpumask.h                        |  6 +--
 lib/cpumask.c                                  | 74 +++++++++-----------------
 4 files changed, 34 insertions(+), 62 deletions(-)

-- 
2.9.0

diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 68fef1151dde..f31814293d3c 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -1500,17 +1500,13 @@ static int mlx4_en_init_affinity_hint(struct mlx4_en_priv *priv, int ring_idx)
 {
 	struct mlx4_en_rx_ring *ring = priv->rx_ring[ring_idx];
 	int numa_node = priv->mdev->dev->numa_node;
-	int ret = 0;
 
 	if (!zalloc_cpumask_var(&ring->affinity_mask, GFP_KERNEL))
 		return -ENOMEM;
 
-	ret = cpumask_set_cpu_local_first(ring_idx, numa_node,
-					  ring->affinity_mask);
-	if (ret)
-		free_cpumask_var(ring->affinity_mask);
-
-	return ret;
+	cpumask_set_cpu(cpumask_local_spread(ring_idx, numa_node),
+			ring->affinity_mask);
+	return 0;
 }
 
 static void mlx4_en_free_affinity_hint(struct mlx4_en_priv *priv, int ring_idx)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
index fdc592ac2529..9fc1dd7c5a0a 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
@@ -139,9 +139,9 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
 	ring->queue_index = queue_index;
 
 	if (queue_index < priv->num_tx_rings_p_up)
-		cpumask_set_cpu_local_first(queue_index,
-					    priv->mdev->dev->numa_node,
-					    &ring->affinity_mask);
+		cpumask_set_cpu(cpumask_local_spread(queue_index,
+						     priv->mdev->dev->numa_node),
+				&ring->affinity_mask);
 
 	*pring = ring;
 	return 0;
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 0a9a6da21e74..bb1e42ce626e 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -142,10 +142,8 @@ static inline unsigned int cpumask_any_but(const struct cpumask *mask,
 	return 1;
 }
 
-static inline int cpumask_set_cpu_local_first(int i, int numa_node,
-					      cpumask_t *dstp)
+static inline unsigned int cpumask_local_spread(unsigned int i, int node)
 {
-	set_bit(0, cpumask_bits(dstp));
 	return 0;
 }
 
@@ -199,7 +197,7 @@ static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
 
 int cpumask_next_and(int n, const struct cpumask *, const struct cpumask *);
 int cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
-int cpumask_set_cpu_local_first(int i, int numa_node, cpumask_t *dstp);
+unsigned int cpumask_local_spread(unsigned int i, int node);
 
 /**
  * for_each_cpu - iterate over every cpu in a mask
diff --git a/lib/cpumask.c b/lib/cpumask.c
index b6513a9f2892..c0bd0df01e3d 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -166,64 +166,42 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
 #endif
 
 /**
- * cpumask_set_cpu_local_first - set i'th cpu with local numa cpu's first
- *
+ * cpumask_local_spread - select the i'th cpu with local numa cpu's first
  * @i: index number
- * @numa_node: local numa_node
- * @dstp: cpumask with the relevant cpu bit set according to the policy
+ * @node: local numa_node
  *
- * This function sets the cpumask according to a numa aware policy.
- * cpumask could be used as an affinity hint for the IRQ related to a
- * queue. When the policy is to spread queues across cores - local cores
- * first.
+ * This function selects an online CPU according to a numa aware policy;
+ * local cpus are returned first, followed by non-local ones, then it
+ * wraps around.
  *
- * Returns 0 on success, -ENOMEM for no memory, and -EAGAIN when failed to set
- * the cpu bit and need to re-call the function.
+ * It's not very efficient, but useful for setup.
  */
-int cpumask_set_cpu_local_first(int i, int numa_node, cpumask_t *dstp)
+unsigned int cpumask_local_spread(unsigned int i, int node)
 {
-	cpumask_var_t mask;
 	int cpu;
-	int ret = 0;
-
-	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
-		return -ENOMEM;
 
+	/* Wrap: we always want a cpu. */
 	i %= num_online_cpus();
 
-	if (numa_node == -1 || !cpumask_of_node(numa_node)) {
-		/* Use all online cpu's for non numa aware system */
-		cpumask_copy(mask, cpu_online_mask);
+	if (node == -1) {
+		for_each_cpu(cpu, cpu_online_mask)
+			if (i-- == 0)
+				return cpu;
 	} else {
-		int n;
-
-		cpumask_and(mask,
-			    cpumask_of_node(numa_node), cpu_online_mask);
-
-		n = cpumask_weight(mask);
-		if (i >= n) {
-			i -= n;
-
-			/* If index > number of local cpu's, mask out local
-			 * cpu's
-			 */
-			cpumask_andnot(mask, cpu_online_mask, mask);
+		/* NUMA first. */
+		for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
+			if (i-- == 0)
+				return cpu;
+
+		for_each_cpu(cpu, cpu_online_mask) {
+			/* Skip NUMA nodes, done above. */
+			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
+				continue;
+
+			if (i-- == 0)
+				return cpu;
 		}
 	}
-
-	for_each_cpu(cpu, mask) {
-		if (--i < 0)
-			goto out;
-	}
-
-	ret = -EAGAIN;
-
-out:
-	free_cpumask_var(mask);
-
-	if (!ret)
-		cpumask_set_cpu(cpu, dstp);
-
-	return ret;
+	BUG();
 }
-EXPORT_SYMBOL(cpumask_set_cpu_local_first);
+EXPORT_SYMBOL(cpumask_local_spread);
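[editor's note: for readers outside the kernel tree, the selection policy
of the new helper can be sketched as self-contained userspace C.  The
function name spread_sketch and the boolean arrays "online" and "local"
are hypothetical stand-ins for cpu_online_mask and cpumask_of_node();
only the iteration order mirrors the patched lib/cpumask.c above.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Userspace sketch of the cpumask_local_spread() policy, for
 * illustration only.  "online" marks online cpus, "local" marks the
 * cpus of the requested NUMA node; node < 0 means no NUMA awareness.
 * Assumes at least one cpu is online, as the kernel does.
 */
static unsigned int spread_sketch(unsigned int i, int node,
				  const bool *online, const bool *local,
				  size_t ncpus)
{
	size_t cpu, n_online = 0;

	for (cpu = 0; cpu < ncpus; cpu++)
		n_online += online[cpu];

	/* Wrap: we always want a cpu. */
	i %= n_online;

	if (node < 0) {
		for (cpu = 0; cpu < ncpus; cpu++)
			if (online[cpu] && i-- == 0)
				return cpu;
	} else {
		/* Local (and online) cpus first. */
		for (cpu = 0; cpu < ncpus; cpu++)
			if (online[cpu] && local[cpu] && i-- == 0)
				return cpu;

		/* Then the remaining online cpus, skipping locals. */
		for (cpu = 0; cpu < ncpus; cpu++)
			if (online[cpu] && !local[cpu] && i-- == 0)
				return cpu;
	}
	/* Unreachable after the wrap; the kernel version calls BUG(). */
	assert(0);
	return 0;
}
```

With four online cpus of which cpus 2 and 3 are local, indices 0..3 map
to 2, 3, 0, 1, and index 4 wraps back to cpu 2 -- the same order the
rewritten lib/cpumask.c walks cpu_online_mask.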