From patchwork Thu Jan 14 10:31:39 2021
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 363395
From: Leon Romanovsky
To: Bjorn Helgaas, Saeed Mahameed
Cc: Leon Romanovsky, Jason Gunthorpe, Jakub Kicinski,
    linux-pci@vger.kernel.org, linux-rdma@vger.kernel.org,
    netdev@vger.kernel.org, Don Dutile, Alex Williamson,
    Alexander Duyck
Subject: [PATCH mlx5-next v2 4/5] net/mlx5: Dynamically assign MSI-X vectors
 count
Date: Thu, 14 Jan 2021 12:31:39 +0200
Message-Id: <20210114103140.866141-5-leon@kernel.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210114103140.866141-1-leon@kernel.org>
References: <20210114103140.866141-1-leon@kernel.org>

From: Leon Romanovsky

The number of MSI-X vectors is a PCI property visible through lspci; the
field is read-only and configured by the device. Statically assigning the
same number of MSI-X vectors to every newly created VF prevents the VF
from being utilized efficiently, because at creation time the device
cannot know the future load and configuration that VF will serve.

To overcome this inefficient spread of MSI-X vectors, allow the kernel to
instruct the device with the number of vectors each VF actually needs.

This change immediately increases the number of MSI-X vectors on a system
with @ VFs from 12 vectors per VF to 32 vectors per VF.

Before this patch:
[root@server ~]# lspci -vs 0000:08:00.2
08:00.2 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
....
        Capabilities: [9c] MSI-X: Enable- Count=12 Masked-

After this patch:
[root@server ~]# lspci -vs 0000:08:00.2
08:00.2 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
....
        Capabilities: [9c] MSI-X: Enable- Count=32 Masked-
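To make the default spread concrete: the helper added below in pci_irq.c,
mlx5_get_default_msix_vec_count(), divides the device-wide dynamic vector
pool across the enabled VFs and clamps the result to the per-VF table
limits. A minimal standalone sketch, where the pool size and limits are
assumed for illustration only (the driver reads the real values from the
HCA capabilities):

        #include <stdio.h>

        /* Mirrors max(min(num_vf_msix / num_vfs, max_msix), min_msix)
         * from the patch; all numbers here are assumptions.
         */
        static int default_msix_vec_count(int num_vf_msix, int num_vfs,
                                          int min_msix, int max_msix)
        {
                int count = num_vf_msix / num_vfs;

                if (count > max_msix)
                        count = max_msix; /* respect per-VF table maximum */
                if (count < min_msix)
                        count = min_msix; /* never below per-VF minimum */
                return count;
        }

        int main(void)
        {
                /* assumed: 2048-vector pool, 64 VFs, per-VF limits [12, 256] */
                printf("%d MSI-X vectors per VF\n",
                       default_msix_vec_count(2048, 64, 12, 256)); /* 32 */
                return 0;
        }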
Signed-off-by: Leon Romanovsky
---
 .../net/ethernet/mellanox/mlx5/core/main.c    |  4 ++
 .../ethernet/mellanox/mlx5/core/mlx5_core.h   |  5 ++
 .../net/ethernet/mellanox/mlx5/core/pci_irq.c | 62 +++++++++++++++++++
 .../net/ethernet/mellanox/mlx5/core/sriov.c   | 14 ++++-
 4 files changed, 83 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index c08315b51fd3..8269cfbfc69d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -567,6 +567,10 @@ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx)
 	if (MLX5_CAP_GEN_MAX(dev, mkey_by_name))
 		MLX5_SET(cmd_hca_cap, set_hca_cap, mkey_by_name, 1);
 
+	if (MLX5_CAP_GEN_MAX(dev, num_total_dynamic_vf_msix))
+		MLX5_SET(cmd_hca_cap, set_hca_cap, num_total_dynamic_vf_msix,
+			 MLX5_CAP_GEN_MAX(dev, num_total_dynamic_vf_msix));
+
 	return set_caps(dev, set_ctx, MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE);
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index 0a0302ce7144..5babb4434a87 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -172,6 +172,11 @@ int mlx5_irq_attach_nb(struct mlx5_irq_table *irq_table, int vecidx,
 		       struct notifier_block *nb);
 int mlx5_irq_detach_nb(struct mlx5_irq_table *irq_table, int vecidx,
 		       struct notifier_block *nb);
+
+int mlx5_set_msix_vec_count(struct mlx5_core_dev *dev, int function_id,
+			    int msix_vec_count);
+int mlx5_get_default_msix_vec_count(struct mlx5_core_dev *dev, int num_vfs);
+
 struct cpumask *
 mlx5_irq_get_affinity_mask(struct mlx5_irq_table *irq_table, int vecidx);
 struct cpu_rmap *mlx5_irq_get_rmap(struct mlx5_irq_table *table);

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
index 6fd974920394..135078e8dd55 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -55,6 +55,68 @@ static struct mlx5_irq *mlx5_irq_get(struct mlx5_core_dev *dev, int vecidx)
 	return &irq_table->irq[vecidx];
 }
 
+int mlx5_get_default_msix_vec_count(struct mlx5_core_dev *dev, int num_vfs)
+{
+	int num_vf_msix, min_msix, max_msix;
+
+	num_vf_msix = MLX5_CAP_GEN(dev, num_total_dynamic_vf_msix);
+	if (!num_vf_msix)
+		return 0;
+
+	min_msix = MLX5_CAP_GEN(dev, min_dynamic_vf_msix_table_size);
+	max_msix = MLX5_CAP_GEN(dev, max_dynamic_vf_msix_table_size);
+
+	return max(min(num_vf_msix / num_vfs, max_msix), min_msix);
+}
+
+/**
+ * mlx5_set_msix_vec_count() - Set dynamically allocated MSI-X on the VF
+ * @dev: PF to work on
+ * @function_id: internal PCI VF function id
+ * @msix_vec_count: number of MSI-X vectors to set
+ */
+int mlx5_set_msix_vec_count(struct mlx5_core_dev *dev, int function_id,
+			    int msix_vec_count)
+{
+	int sz = MLX5_ST_SZ_BYTES(set_hca_cap_in);
+	int num_vf_msix, min_msix, max_msix;
+	void *hca_cap, *cap;
+	int ret;
+
+	num_vf_msix = MLX5_CAP_GEN(dev, num_total_dynamic_vf_msix);
+	if (!num_vf_msix)
+		return 0;
+
+	if (!MLX5_CAP_GEN(dev, vport_group_manager) || !mlx5_core_is_pf(dev))
+		return -EOPNOTSUPP;
+
+	min_msix = MLX5_CAP_GEN(dev, min_dynamic_vf_msix_table_size);
+	max_msix = MLX5_CAP_GEN(dev, max_dynamic_vf_msix_table_size);
+
+	if (msix_vec_count < min_msix)
+		return -EINVAL;
+
+	if (msix_vec_count > max_msix)
+		return -EOVERFLOW;
+
+	hca_cap = kzalloc(sz, GFP_KERNEL);
+	if (!hca_cap)
+		return -ENOMEM;
+
+	cap = MLX5_ADDR_OF(set_hca_cap_in, hca_cap, capability);
+	MLX5_SET(cmd_hca_cap, cap, dynamic_msix_table_size, msix_vec_count);
+
+	MLX5_SET(set_hca_cap_in, hca_cap, opcode, MLX5_CMD_OP_SET_HCA_CAP);
+	MLX5_SET(set_hca_cap_in, hca_cap, other_function, 1);
+	MLX5_SET(set_hca_cap_in, hca_cap, function_id, function_id);
+
+	MLX5_SET(set_hca_cap_in, hca_cap, op_mod,
+		 MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE << 1);
+	ret = mlx5_cmd_exec_in(dev, set_hca_cap, hca_cap);
+	kfree(hca_cap);
+	return ret;
+}
+
 int mlx5_irq_attach_nb(struct mlx5_irq_table *irq_table, int vecidx,
 		       struct notifier_block *nb)
 {
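To make the return conventions of mlx5_set_msix_vec_count() concrete:
0 means either success or "capability absent" (the function silently does
nothing when num_total_dynamic_vf_msix reads zero), while out-of-range
requests are distinguished as -EINVAL (too small) and -EOVERFLOW (too
large). A hypothetical caller, not part of this patch:

        /* Hypothetical helper, for illustration only: shows how a
         * caller would interpret mlx5_set_msix_vec_count() errors.
         */
        static void demo_set_vf_msix(struct mlx5_core_dev *dev,
                                     int function_id, int count)
        {
                int err = mlx5_set_msix_vec_count(dev, function_id, count);

                if (err == -EINVAL)       /* below min_dynamic_vf_msix_table_size */
                        mlx5_core_warn(dev, "MSI-X count %d too small\n", count);
                else if (err == -EOVERFLOW) /* above max_dynamic_vf_msix_table_size */
                        mlx5_core_warn(dev, "MSI-X count %d too large\n", count);
                else if (err == -EOPNOTSUPP) /* not a PF, or no vport_group_manager */
                        mlx5_core_warn(dev, "dynamic MSI-X not supported\n");
                /* err == 0 covers both success and "capability not present" */
        }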
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
index 3094d20297a9..c59efb1e7a26 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
@@ -71,8 +71,7 @@ static int sriov_restore_guids(struct mlx5_core_dev *dev, int vf)
 static int mlx5_device_enable_sriov(struct mlx5_core_dev *dev, int num_vfs)
 {
 	struct mlx5_core_sriov *sriov = &dev->priv.sriov;
-	int err;
-	int vf;
+	int err, vf, num_msix_count;
 
 	if (!MLX5_ESWITCH_MANAGER(dev))
 		goto enable_vfs_hca;
@@ -85,12 +84,23 @@ static int mlx5_device_enable_sriov(struct mlx5_core_dev *dev, int num_vfs)
 	}
 
 enable_vfs_hca:
+	num_msix_count = mlx5_get_default_msix_vec_count(dev, num_vfs);
 	for (vf = 0; vf < num_vfs; vf++) {
 		err = mlx5_core_enable_hca(dev, vf + 1);
 		if (err) {
 			mlx5_core_warn(dev, "failed to enable VF %d (%d)\n", vf, err);
 			continue;
 		}
+
+		err = mlx5_set_msix_vec_count(dev, vf + 1, num_msix_count);
+		if (err) {
+			mlx5_core_warn(dev,
+				       "failed to set MSI-X vector count for VF %d, err %d\n",
+				       vf, err);
+			continue;
+		}
+
 		sriov->vfs_ctx[vf].enabled = 1;
 		if (MLX5_CAP_GEN(dev, port_type) == MLX5_CAP_PORT_TYPE_IB) {
 			err = sriov_restore_guids(dev, vf);

From patchwork Thu Jan 14 10:31:40 2021
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 363394
From: Leon Romanovsky
To: Bjorn Helgaas, Saeed Mahameed
Cc: Leon Romanovsky, Jason Gunthorpe, Jakub Kicinski,
    linux-pci@vger.kernel.org, linux-rdma@vger.kernel.org,
    netdev@vger.kernel.org, Don Dutile, Alex Williamson,
    Alexander Duyck
Subject: [PATCH mlx5-next v2 5/5] net/mlx5: Allow users to configure number
 of MSI-X vectors
Date: Thu, 14 Jan 2021 12:31:40 +0200
Message-Id: <20210114103140.866141-6-leon@kernel.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210114103140.866141-1-leon@kernel.org>
References: <20210114103140.866141-1-leon@kernel.org>

From: Leon Romanovsky

Implement the ability to configure the number of MSI-X vectors on
SR-IOV VFs.
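The callback wired up below is reached from the PCI core sysfs interface
added earlier in this series (patches 1-3, not shown here). Assuming that
interface exposes a per-VF sriov_vf_msix_count attribute (the attribute
name and path are assumptions based on the rest of the series, not on
this patch), a user-space sketch of requesting 32 vectors for the VF from
the lspci example above would be:

        #include <stdio.h>

        int main(void)
        {
                /* Path and attribute name are assumed; the PCI core may
                 * also require the VF to be unbound from its driver
                 * before accepting the write.
                 */
                const char *attr =
                        "/sys/bus/pci/devices/0000:08:00.2/sriov_vf_msix_count";
                FILE *f = fopen(attr, "w");

                if (!f) {
                        perror("fopen");
                        return 1;
                }
                if (fprintf(f, "32\n") < 0)
                        perror("fprintf");
                fclose(f);
                return 0;
        }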
Signed-off-by: Leon Romanovsky
---
 .../net/ethernet/mellanox/mlx5/core/main.c      |  1 +
 .../ethernet/mellanox/mlx5/core/mlx5_core.h     |  1 +
 .../net/ethernet/mellanox/mlx5/core/pci_irq.c   |  4 +-
 .../net/ethernet/mellanox/mlx5/core/sriov.c     | 38 +++++++++++++++++++
 4 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 8269cfbfc69d..334b3b5077c5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -1647,6 +1647,7 @@ static struct pci_driver mlx5_core_driver = {
 	.shutdown	= shutdown,
 	.err_handler	= &mlx5_err_handler,
 	.sriov_configure   = mlx5_core_sriov_configure,
+	.sriov_set_msix_vec_count = mlx5_core_sriov_set_msix_vec_count,
 };
 
 static void mlx5_core_verify_params(void)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index 5babb4434a87..8a2523d2d43a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -138,6 +138,7 @@ void mlx5_sriov_cleanup(struct mlx5_core_dev *dev);
 int mlx5_sriov_attach(struct mlx5_core_dev *dev);
 void mlx5_sriov_detach(struct mlx5_core_dev *dev);
 int mlx5_core_sriov_configure(struct pci_dev *dev, int num_vfs);
+int mlx5_core_sriov_set_msix_vec_count(struct pci_dev *vf, int msix_vec_count);
 int mlx5_core_enable_hca(struct mlx5_core_dev *dev, u16 func_id);
 int mlx5_core_disable_hca(struct mlx5_core_dev *dev, u16 func_id);
 int mlx5_create_scheduling_element_cmd(struct mlx5_core_dev *dev, u8 hierarchy,

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
index 135078e8dd55..65a761346385 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -59,7 +59,7 @@ int mlx5_get_default_msix_vec_count(struct mlx5_core_dev *dev, int num_vfs)
 {
 	int num_vf_msix, min_msix, max_msix;
 
-	num_vf_msix = MLX5_CAP_GEN(dev, num_total_dynamic_vf_msix);
+	num_vf_msix = MLX5_CAP_GEN_MAX(dev, num_total_dynamic_vf_msix);
 	if (!num_vf_msix)
 		return 0;
 
@@ -83,7 +83,7 @@ int mlx5_set_msix_vec_count(struct mlx5_core_dev *dev, int function_id,
 	void *hca_cap, *cap;
 	int ret;
 
-	num_vf_msix = MLX5_CAP_GEN(dev, num_total_dynamic_vf_msix);
+	num_vf_msix = MLX5_CAP_GEN_MAX(dev, num_total_dynamic_vf_msix);
 	if (!num_vf_msix)
 		return 0;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
index c59efb1e7a26..adc7c8945b9d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
@@ -145,6 +145,7 @@ mlx5_device_disable_sriov(struct mlx5_core_dev *dev, int num_vfs, bool clear_vf)
 static int mlx5_sriov_enable(struct pci_dev *pdev, int num_vfs)
 {
 	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
+	u32 num_vf_msix;
 	int err;
 
 	err = mlx5_device_enable_sriov(dev, num_vfs);
@@ -153,6 +154,8 @@ static int mlx5_sriov_enable(struct pci_dev *pdev, int num_vfs)
 		return err;
 	}
 
+	num_vf_msix = MLX5_CAP_GEN_MAX(dev, num_total_dynamic_vf_msix);
+	pci_sriov_set_vf_total_msix(pdev, num_vf_msix);
 	err = pci_enable_sriov(pdev, num_vfs);
 	if (err) {
 		mlx5_core_warn(dev, "pci_enable_sriov failed : %d\n", err);
@@ -188,6 +191,41 @@ int mlx5_core_sriov_configure(struct pci_dev *pdev, int num_vfs)
 	return err ? err : num_vfs;
 }
 
+int mlx5_core_sriov_set_msix_vec_count(struct pci_dev *vf, int msix_vec_count)
+{
+	struct pci_dev *pf = pci_physfn(vf);
+	struct mlx5_core_sriov *sriov;
+	struct mlx5_core_dev *dev;
+	int num_vf_msix, id;
+
+	dev = pci_get_drvdata(pf);
+	num_vf_msix = MLX5_CAP_GEN_MAX(dev, num_total_dynamic_vf_msix);
+	if (!num_vf_msix)
+		return -EOPNOTSUPP;
+
+	if (!msix_vec_count)
+		msix_vec_count =
+			mlx5_get_default_msix_vec_count(dev, pci_num_vf(pf));
+
+	sriov = &dev->priv.sriov;
+
+	/* Reverse translation of the PCI VF function number (as seen in
+	 * the virtfn symlink name) to the internal function_id.
+	 */
+	for (id = 0; id < pci_num_vf(pf); id++) {
+		if (!sriov->vfs_ctx[id].enabled)
+			continue;
+
+		if (vf->devfn == pci_iov_virtfn_devfn(pf, id))
+			break;
+	}
+
+	if (id == pci_num_vf(pf) || !sriov->vfs_ctx[id].enabled)
+		return -EINVAL;
+
+	return mlx5_set_msix_vec_count(dev, id + 1, msix_vec_count);
+}
+
 int mlx5_sriov_attach(struct mlx5_core_dev *dev)
 {
 	if (!mlx5_core_is_pf(dev) || !pci_num_vf(dev->pdev))
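The reverse-translation loop in mlx5_core_sriov_set_msix_vec_count()
above is worth a worked example: the PCI core hands the driver a VF
pci_dev, but the firmware command wants a 1-based function_id relative to
the PF. A standalone sketch, with pci_iov_virtfn_devfn() stubbed by an
assumed linear devfn layout (real offsets and strides come from the PF's
SR-IOV capability registers):

        #include <stdio.h>

        #define VF_OFFSET 2   /* assumed: first VF two devfns after the PF */
        #define VF_STRIDE 1   /* assumed: consecutive VF devfns */

        /* Stand-in for pci_iov_virtfn_devfn(); the kernel computes this
         * from the PF's SR-IOV capability.
         */
        static int stub_virtfn_devfn(int pf_devfn, int id)
        {
                return pf_devfn + VF_OFFSET + id * VF_STRIDE;
        }

        int main(void)
        {
                int pf_devfn = 0, vf_devfn = 4, num_vfs = 8, id;

                for (id = 0; id < num_vfs; id++)
                        if (stub_virtfn_devfn(pf_devfn, id) == vf_devfn)
                                break;

                if (id == num_vfs)
                        return 1; /* no matching VF: driver returns -EINVAL */

                /* Firmware function ids are 1-based relative to the PF,
                 * which is why the driver passes "id + 1" to
                 * mlx5_set_msix_vec_count().
                 */
                printf("VF index %d -> function_id %d\n", id, id + 1);
                return 0;
        }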