[21/30] net/mlx4: Use effective interrupt affinity

Message ID 20201210194044.672935978@linutronix.de
State New
Series
  • [01/30] genirq: Move irq_has_action() into core code

Commit Message

Thomas Gleixner Dec. 10, 2020, 7:25 p.m.
Using the interrupt affinity mask for checking locality is not really
working well on architectures which support effective affinity masks.

The affinity mask is either the system wide default or set by user space,
but the architecture can or even must reduce the mask to the effective set,
which means that checking the affinity mask itself does not really tell
about the actual target CPUs.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tariq Toukan <tariqt@nvidia.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org
Cc: linux-rdma@vger.kernel.org
---
 drivers/net/ethernet/mellanox/mlx4/en_cq.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Tariq Toukan Dec. 13, 2020, 11:31 a.m. | #1
On 12/10/2020 9:25 PM, Thomas Gleixner wrote:
> Using the interrupt affinity mask for checking locality is not really
> working well on architectures which support effective affinity masks.
> 
> The affinity mask is either the system wide default or set by user space,
> but the architecture can or even must reduce the mask to the effective set,
> which means that checking the affinity mask itself does not really tell
> about the actual target CPUs.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tariq Toukan <tariqt@nvidia.com>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: netdev@vger.kernel.org
> Cc: linux-rdma@vger.kernel.org
> ---
>   drivers/net/ethernet/mellanox/mlx4/en_cq.c |    2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
> @@ -117,7 +117,7 @@ int mlx4_en_activate_cq(struct mlx4_en_p
>   			assigned_eq = true;
>   		}
>   		irq = mlx4_eq_get_irq(mdev->dev, cq->vector);
> -		cq->aff_mask = irq_get_affinity_mask(irq);
> +		cq->aff_mask = irq_get_effective_affinity_mask(irq);
>   	} else {
>   		/* For TX we use the same irq per
>   		ring we assigned for the RX    */
> 

Reviewed-by: Tariq Toukan <tariqt@nvidia.com>


Thanks.

Patch

--- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
@@ -117,7 +117,7 @@  int mlx4_en_activate_cq(struct mlx4_en_p
 			assigned_eq = true;
 		}
 		irq = mlx4_eq_get_irq(mdev->dev, cq->vector);
-		cq->aff_mask = irq_get_affinity_mask(irq);
+		cq->aff_mask = irq_get_effective_affinity_mask(irq);
 	} else {
 		/* For TX we use the same irq per
 		ring we assigned for the RX    */