From patchwork Sat Aug 14 14:08:08 2021
X-Patchwork-Submitter: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
X-Patchwork-Id: 497338
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, davem@davemloft.net,
	anthony.l.nguyen@intel.com, kuba@kernel.org, bjorn@kernel.org,
	magnus.karlsson@intel.com, jesse.brandeburg@intel.com,
	alexandr.lobakin@intel.com, joamaki@gmail.com, toke@redhat.com,
	brett.creeley@intel.com, Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Subject: [PATCH v5 intel-next 5/9] ice: do not create xdp_frame on XDP_TX
Date: Sat, 14 Aug 2021 16:08:08 +0200
Message-Id: <20210814140812.46632-6-maciej.fijalkowski@intel.com>
In-Reply-To: <20210814140812.46632-1-maciej.fijalkowski@intel.com>
References: <20210814140812.46632-1-maciej.fijalkowski@intel.com>
X-Mailing-List: netdev@vger.kernel.org

xdp_frame is not needed for the XDP_TX data path in the ice driver. On
this path, cleaning of the sent descriptor never happens outside of the
driver, so the information about the underlying memory model that
xdp_frame carries would go unused. Therefore, this conversion can simply
be dropped, which relieves the CPU a bit.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 851a6e68aedf..f2e6a37112d1 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -552,7 +552,7 @@ ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 		return ICE_XDP_PASS;
 	case XDP_TX:
 		xdp_ring = rx_ring->vsi->xdp_rings[smp_processor_id()];
-		result = ice_xmit_xdp_buff(xdp, xdp_ring);
+		result = ice_xmit_xdp_ring(xdp->data, xdp->data_end - xdp->data, xdp_ring);
 		if (result == ICE_XDP_CONSUMED)
 			goto out_failure;
 		return result;
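
For context, the call replaced in the hunk above went through a small
helper that first turned the xdp_buff into an xdp_frame before handing
the buffer to the Tx ring. A rough sketch of that shape, for
illustration only (function and struct names follow the driver around
this series and are not part of the diff above):

static int ice_xmit_xdp_buff(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring)
{
	/* Convert the xdp_buff into an xdp_frame; this records the memory
	 * model metadata needed when the frame is freed outside the driver.
	 */
	struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);

	if (unlikely(!xdpf))
		return ICE_XDP_CONSUMED;

	return ice_xmit_xdp_ring(xdpf->data, xdpf->len, xdp_ring);
}

Since descriptor cleanup for XDP_TX stays entirely inside the ice
driver, nothing ever consumes the metadata captured by that conversion,
so calling ice_xmit_xdp_ring() directly with xdp->data and the frame
length (xdp->data_end - xdp->data) skips the per-packet conversion work.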