From patchwork Thu May 28 07:25:33 2020
X-Patchwork-Submitter: Jeff Kirsher
X-Patchwork-Id: 218348
From: Jeff Kirsher
To: davem@davemloft.net
Cc: Marta Plantykow, netdev@vger.kernel.org, nhorman@redhat.com,
 sassmann@redhat.com, Andrew Bowers, Jeff Kirsher
Subject: [net-next 10/15] ice: Change number of XDP TxQ to 0 when destroying rings
Date: Thu, 28 May 2020 00:25:33 -0700
Message-Id: <20200528072538.1621790-11-jeffrey.t.kirsher@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528072538.1621790-1-jeffrey.t.kirsher@intel.com>
References: <20200528072538.1621790-1-jeffrey.t.kirsher@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Marta Plantykow

When XDP Tx rings are destroyed, the number of XDP Tx queues is not
updated. Set this number to 0.

Signed-off-by: Marta Plantykow
Tested-by: Andrew Bowers
Signed-off-by: Jeff Kirsher
---
 drivers/net/ethernet/intel/ice/ice_main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 247e7b186b3c..081fec3131cd 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1899,6 +1899,9 @@ int ice_destroy_xdp_rings(struct ice_vsi *vsi)
 	for (i = 0; i < vsi->tc_cfg.numtc; i++)
 		max_txqs[i] = vsi->num_txq;
 
+	/* change number of XDP Tx queues to 0 */
+	vsi->num_xdp_txq = 0;
+
 	return ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
 			       max_txqs);
 }
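
[Editor's note] For readers skimming the change, below is a minimal,
self-contained sketch of the bookkeeping bug being fixed. It is not the
driver's real code: struct vsi_model, destroy_xdp_rings(), and the queue
arithmetic in main() are simplified stand-ins for struct ice_vsi,
ice_destroy_xdp_rings(), and the driver's Tx-queue accounting.

/* Minimal sketch only -- simplified stand-ins, not the ice driver's
 * real types. vsi_model approximates the two counters of interest in
 * struct ice_vsi; destroy_xdp_rings() approximates
 * ice_destroy_xdp_rings().
 */
#include <stdio.h>

struct vsi_model {
	int num_txq;     /* regular Tx queues */
	int num_xdp_txq; /* XDP Tx queues */
};

static void destroy_xdp_rings(struct vsi_model *vsi)
{
	/* ... in the real driver, XDP ring memory is freed here ... */

	/* the fix: record that no XDP Tx queues remain */
	vsi->num_xdp_txq = 0;
}

int main(void)
{
	struct vsi_model vsi = { .num_txq = 8, .num_xdp_txq = 8 };

	destroy_xdp_rings(&vsi);

	/* Without the fix this would report 16 queues, 8 of which no
	 * longer exist; with it, the count matches reality. */
	printf("Tx queues still accounted for: %d\n",
	       vsi.num_txq + vsi.num_xdp_txq);
	return 0;
}

In the actual patch the zeroing happens just before ice_cfg_vsi_lan()
reprograms the VSI's Tx configuration, presumably so that any later
queue accounting no longer counts the destroyed XDP rings.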